id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
272,585,112 | kubernetes | Support liveness and readiness probe with HTTP check that checks response text |
**Is this a BUG REPORT or FEATURE REQUEST?**:
/kind feature
/sig node
*Problem:*
This is equivalent to the HAProxy `http-check expect string READY` (https://www.haproxy.com/documentation/aloha/7-0/haproxy/healthchecks/)
Based on http://kubernetes.io/docs/user-guide/liveness/, it appears that the healthz checks only check the HTTP response code.
I'm working with a health check that is a company standard and that returns status strings.
*Proposed Solution:*
Some capability to check the text of the response to see if it contains a particular string.
This check would probably need to only check the first n characters to prevent issues with memory.
HAProxy also supports an `rstring` that is a regex but my use case does not require it.
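A minimal sketch of the proposed check semantics, in Python for illustration only (the real probe would live inside kubelet; the function name and the 4096-byte default cap are invented):

```python
import io

def response_text_healthy(body_stream, expect, max_bytes=4096):
    # Read at most max_bytes of the response body and report healthy
    # only if the expected string occurs in that prefix. Capping the
    # read is the memory-safety measure described above.
    head = body_stream.read(max_bytes)
    if isinstance(head, bytes):
        head = head.decode("utf-8", errors="replace")
    return expect in head

# A response whose first bytes contain the expected status string passes:
print(response_text_healthy(io.BytesIO(b"status: READY\n"), "READY"))  # True
```

A match past the byte cap would intentionally not count, which is the trade-off the cap implies.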
*Page to Update:*
http://kubernetes.io/docs/user-guide/liveness/
| priority/backlog,sig/network,sig/node,kind/feature,lifecycle/frozen,triage/accepted | medium | Critical |
272,603,103 | pytorch | making .cuda() fall back to an identity function when gpu is not available | It would be really convenient if `.cuda()` could automatically fall back to doing nothing when there's no GPU available on the machine. It was suggested by @khanhptnk
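A hedged, framework-free sketch of the requested behavior (`cuda_or_identity` and its flag parameter are invented names; real code would consult `torch.cuda.is_available()` instead of taking a flag):

```python
def cuda_or_identity(tensor, cuda_available):
    # In real code the flag would come from torch.cuda.is_available();
    # it is a parameter here so the sketch stays framework-free.
    if cuda_available and hasattr(tensor, "cuda"):
        return tensor.cuda()
    return tensor

# With no GPU, the input comes back unchanged (identity behavior):
x = [1.0, 2.0, 3.0]  # stand-in for a tensor
assert cuda_or_identity(x, cuda_available=False) is x
```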
cc @ngimel | module: cuda,triaged | medium | Major |
272,603,141 | go | math/big: mulAddVWW too slow | ### What version of Go are you using?
go1.9.2 darwin/amd64
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using?
darwin/amd64
### What did you do?
The following Go program multiplies big.Int with one Word constant:
package main
import (
"fmt"
"math/big"
)
func main() {
a := big.NewInt(1)
b := big.NewInt(1)
a.Lsh(a, 8096)
for i := uint64(1); i <= 10000000; i++ {
a.Mul(a, b)
}
fmt.Println(a.BitLen())
}
### What did you expect to see?
The following C program uses [GMP](https://gmplib.org/) 6.1.2:
#include <stdio.h>
#include <stdint.h>
#include "gmp.h"
int main(int argc, char *argv[]) {
mpz_t a, b;
mpz_init_set_ui(a, 1);
mpz_init_set_ui(b, 1);
mpz_mul_2exp(a, a, 8096);
for (uint64_t i = 1; i <= 10000000ULL; i++) {
mpz_mul(a, a, b);
}
printf("%lu\n", mpz_sizeinbase(a, 2));
return 0;
}
and terminates in around 1s.
### What did you see instead?
The Go program completes on my machine in 1.87s, i.e. almost twice as slow.
`mulAddVWW` is a building block for multiplication (by one word, or more), so better performance would benefit many applications. | Performance,NeedsInvestigation | medium | Major |
272,615,283 | react | Formalize top-level ES exports | Currently we only ship CommonJS versions of all packages. However we might want to ship them as ESM in the future (https://github.com/facebook/react/issues/10021).
We can't quite easily do this because we haven't really decided on what top-level ES exports would look like from each package. For example, does `react` have a bunch of named exports, but also a default export called `React`? Should we encourage people to `import *` for better tree shaking? What about `react-test-renderer/shallow` that currently exports a class (and thus would start failing in Node were it converted to be a default export)? | Component: Build Infrastructure,Type: Discussion,Type: Breaking Change,React Core Team | high | Critical |
272,768,970 | tensorflow | Support Windows builds through clang | Right now, the tested configurations for Linux, Mac and Windows use gcc, clang and MSVC, as seen at
https://www.tensorflow.org/install/install_sources#tested_source_configurations
Linux can be made to compile with clang, too, if you use the right magic trick (`--config=cuda_clang`). All that's left is Windows, which is probably the trickiest of the three.
Bonus points for making it possible to cross-build for Windows under Linux. The main problem there might be with CUDA and getting its SDK installed on a Linux system (just a wild guess).
(Forked from an existing discussion under https://github.com/tensorflow/tensorflow/issues/12052) | stat:contribution welcome,type:build/install | medium | Major |
272,814,403 | go | runtime: goroutines linger in GC assist queue | ### What version of Go are you using (`go version`)?
```
go version devel +1ac8846984 Fri Nov 3 14:06:21 2017 +0000 linux/amd64
```
### Does this issue reproduce with the latest release?
This report is about the behavior on tip. I've also seen this recently with Go 1.9 and Go 1.8.
### What operating system and processor architecture are you using (`go env`)?
linux/amd64
### What did you do?
I have a server that makes HTTPS requests for large JSON blobs consisting mainly of base64 encoded protobuf messages. It decodes and processes the messages, retaining them in memory for a short period. I collected and viewed an execution trace.
What I found in the execution trace didn't match the way I think the GC is expected to work.
This server isn't very latency-sensitive, but since the GC is preventing mutators from doing work, the program's throughput is stifled.
### What did you expect to see?
I expected to see the background GC worker running on 1/4th of the threads (9 of 36 here), regularly releasing assist credit to the goroutines that have become blocked on assist. I expected to see any goroutines that blocked on assist credit to be satisfied in significantly less than 1ms.
### What did you see instead?
I see goroutines sitting in the GCWaiting state for several milliseconds, and see less than a quarter of Ps running the background mark worker. User goroutines that allocate memory during those several milliseconds are transitioned to the GCWaiting state.
This doesn't happen during every GC cycle for this program—maybe during one out of five cycles, and to varying degrees. This is the "best" example I've seen out of maybe 40 GC cycles (three screens of execution trace UI).
I'd expect the background workers to flush their credit at least dozens of times each millisecond, since `gcCreditSlack = 2000` seems pretty small. I'd expect there to be enough work for many background mark workers, since the GC splits large jobs into oblets of 128 kB. I'd expect the GC-aware parts of the scheduler to allow and encourage 1/4th of the Ps to run background mark workers. If background mark workers intentionally go idle when they aren't able to find work, I'd expect them to be reactivated when `gcFlushBgCredit` finds that there's unsatisfied demand for scan credit.
---
Here's an overview of my program's execution trace. We'll zoom in on the GC cycle at 215–229 ms in a moment:
<img width="1467" alt="screen shot 2017-11-06 at 10 59 48 pm" src="https://user-images.githubusercontent.com/230685/32642109-d83a1246-c586-11e7-8087-9272b3a5e730.png">
GC workers start at 216ms, and 9/36 Ps run the gcBgMarkWorker for 6ms. All Ps are busy for another 4ms (it's hard to see whether 1/4th of them are running the background worker).
Around 225ms, the number of goroutines in the GCWaiting state climbs to the 80s, where it remains until 228ms. Only Ps 9 and 12 run the background mark worker. About a dozen Ps run user goroutines. A couple of those goroutines attempt to allocate and so also enter the GCWaiting state.
<img width="1616" alt="screen shot 2017-11-06 at 10 35 06 pm" src="https://user-images.githubusercontent.com/230685/32642105-d798163a-c586-11e7-9f30-a17ad01e1d8c.png">
When the background mark worker on P 9 stops, its assist credit allows a bunch of the blocked goroutines to run. Those goroutines soon return to the GCWaiting state.
<img width="1616" alt="screen shot 2017-11-06 at 10 40 05 pm" src="https://user-images.githubusercontent.com/230685/32642107-d7d125f6-c586-11e7-920e-0a151e6d48fb.png">
<img width="1615" alt="screen shot 2017-11-06 at 10 41 15 pm" src="https://user-images.githubusercontent.com/230685/32642108-d7ec890e-c586-11e7-8ed7-2b2728f208e7.png">
@aclements do you understand what's going on here? | Performance,NeedsInvestigation,compiler/runtime | low | Minor |
272,815,599 | pytorch | Multiprocessing with torch.solve hangs | Applying torch.gesv with multiprocessing, after a call to torch.potrf, causes hanging. Reported from the forums: https://discuss.pytorch.org/t/multiprocessing-subprocess-is-not-running-at-blas-or-lapack-command/9727/8
Repro below. Increase `n_data` until it happens (for me it happened at 1001).
```
import torch
from torch.autograd import Variable
import torch.optim
def reproduce():
n_data = 100
ndim = 100
x_input = Variable(torch.rand(n_data, ndim) * 2 - 1)
x_input.data[0, :] = 0
output = Variable(torch.randn(n_data, 1))
n_init = 1
b = Variable(torch.randn(ndim, n_data))
A = Variable(torch.randn(ndim, ndim))
chol = torch.potrf(A.mm(A.t()) + Variable(torch.eye(ndim)))
pool = torch.multiprocessing.Pool(n_init)
res = pool.apply_async(torch.gesv, args=(b, A))
return res.get()
if __name__ == '__main__':
print(reproduce())
```
cc @jianyuh @nikitaved @pearu @mruberry @heitorschueroff @walterddr @IvanYashchuk | module: multiprocessing,triaged,module: linear algebra | low | Major |
272,822,214 | go | net/url: username in authority is not strictly in RFC3986 | ### What version of Go are you using (`go version`)?
go version devel +5a5223297a Wed Nov 1 11:43:41 2017 -0700 windows/amd64
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
Windows 7 64bit
### What did you do?
```go
package main
import (
"fmt"
"log"
"net/url"
)
func main() {
u, err := url.Parse("http://[email protected]:[email protected]/")
if err != nil {
log.Fatal(err)
}
fmt.Printf("%#v\n", u)
fmt.Printf("%#v\n", u.User)
}
```
https://play.golang.org/p/HFm27EmRPU
### What did you expect to see?
error should be returned
### What did you see instead?
no errors.
https://www.blackhat.com/docs/us-17/thursday/us-17-Tsai-A-New-Era-Of-SSRF-Exploiting-URL-Parser-In-Trending-Programming-Languages.pdf
These slides mention several URL parsers, which seem to behave differently from cURL. RFC 3986 says the username is built from `unreserved / pct-encoded / sub-delims`.
```
userinfo = *( unreserved / pct-encoded / sub-delims / ":" )
pct-encoded = "%" HEXDIG HEXDIG
sub-delims = "!" / "$" / "&" / "'" / "(" / ")"
/ "*" / "+" / "," / ";" / "="
```
https://tools.ietf.org/html/rfc3986
And whatwg-url says
>If the @ flag is set, prepend "%40" to buffer.
https://url.spec.whatwg.org/#authority-state
Go's implementation finds the `@` in the authority using `strings.LastIndex`.
https://github.com/golang/go/blob/5d0cab036712539d50435904ded466bf6b7b0884/src/net/url/url.go#L535
If the implementation should strictly follow RFC 3986 and whatwg-url, multiple `@` characters should be treated as an error, I think.
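For illustration, a strict userinfo check following the RFC 3986 grammar quoted above can be sketched in a few lines of Python (the actual fix would live in Go's net/url; the helper name is invented):

```python
import re

# userinfo = *( unreserved / pct-encoded / sub-delims / ":" )  per RFC 3986
_USERINFO = re.compile(
    r"^(?:[A-Za-z0-9\-._~!$&'()*+,;=:]|%[0-9A-Fa-f]{2})*$"
)

def userinfo_is_valid(userinfo):
    # A literal '@' is not in the grammar, so the userinfo that
    # strings.LastIndex produces for the URL above is rejected.
    return _USERINFO.match(userinfo) is not None

print(userinfo_is_valid("user:pass"))          # True
print(userinfo_is_valid("[email protected]:80"))  # False: '@' not allowed
```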
related issue https://github.com/golang/go/issues/3439 | NeedsInvestigation | low | Critical |
272,859,768 | flutter | Improve FittedBox docs to suggest forcing the parent to a particular size | I know FittedBox will create RenderFittedBox,but I can not understand ''performLayout'' function that compute the size:
```
@override
void performLayout() {
if (child != null) {
child.layout(const BoxConstraints(), parentUsesSize: true);
size = constraints.constrainSizeAndAttemptToPreserveAspectRatio(child.size); //keep the same aspect ratio with child?
_clearPaintData();
} else {
size = constraints.smallest;
}
}
```
The `constrainSizeAndAttemptToPreserveAspectRatio` function keeps the size at the same aspect ratio as the child's size, so after that it simply scales the child's size up/down to fit the FittedBox's size.
For that reason, the `fit` parameter has no effect.
If the code is updated as follows, everything works correctly.
```
@override
void performLayout() {
if (child != null) {
child.layout(const BoxConstraints(), parentUsesSize: true);
size = constraints.biggest;
_clearPaintData();
} else {
size = constraints.smallest;
}
}
```
My demo code:
```
Widget build(BuildContext context) {
return new Container(
color: Colors.white,
alignment: Alignment.center,
child: new Container(
width: 200.0,
height: 100.0,
color: Colors.black,
alignment: Alignment.topLeft,
child: new FittedBox(
fit: BoxFit.fitWidth,
alignment: Alignment.topLeft,
child: new Container(
color: Colors.red,
width: 300.0,
height: 240.0,
alignment: Alignment.center,
child: new Text('AAA'),
),
)
)
);
``` | framework,d: api docs,a: quality,P2,team-framework,triaged-framework | low | Major |
272,876,892 | flutter | Display Progress Indicator for Flutter Upgrade | When doing `flutter upgrade`, the only messages we get are `Downloading X tools...`. It happened many times that I wonder whether the download has stalled or if there's still progress going on.
Can we have a progress indicator for when the tools are being downloaded? | c: new feature,tool,P3,team-tool,triaged-tool | low | Minor |
272,956,749 | youtube-dl | ESPN broken | [debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['http://www.espn.com/watch/player?id=3211683', '--ap-mso', 'DTV', '--ap-username', 'PRIVATE', '--ap-password', 'PRIVATE', '--hls-prefer-native', '--verbose']
[debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252
[debug] youtube-dl version 2017.11.06
[debug] Python version 3.4.4 - Windows-2012ServerR2-6.3.9600
[debug] exe versions: ffmpeg N-71320-gc4b2017, ffprobe N-80912-gce466d0
[debug] Proxy map: {}
[ESPN] 3211683: Downloading JSON metadata
ERROR: Unable to download JSON metadata: HTTP Error 400: Bad Request (caused by HTTPError()); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp1ogvhjus\build\youtube_dl\extractor\common.py", line 506, in _request_webpage
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp1ogvhjus\build\youtube_dl\YoutubeDL.py", line 2195, in urlopen
File "C:\Python\Python34\lib\urllib\request.py", line 470, in open
File "C:\Python\Python34\lib\urllib\request.py", line 580, in http_response
File "C:\Python\Python34\lib\urllib\request.py", line 508, in error
File "C:\Python\Python34\lib\urllib\request.py", line 442, in _call_chain
File "C:\Python\Python34\lib\urllib\request.py", line 588, in http_error_default | tv-provider-account-needed | low | Critical |
272,990,468 | TypeScript | JSDoc refactor doesn't add type parameters |
**TypeScript Version:** typescript@next
**Code**
```ts
/**
* @template T
* @param {function(T): boolean} g
*/
function f(/*1*/g) {
}
```
Run the inline-jsdoc-types refactor at `/*1*/`.
**Expected behavior:**
`f<T>(g: (arg0: T) => boolean)`
**Actual behavior:**
`f(g: (arg0: any) => boolean)`
| Bug,Domain: Quick Fixes | low | Critical |
272,993,984 | pytorch | Proposal: combine requires_grad and retain_grad() | Currently, requires_grad means two things:
1) That we should compute gradients for this variable and functions of this variable
2) On a "leaf" variable, it means we should store the gradient to the "grad" attribute
The `retain_grad()` function is used to signify that we should store the gradient on non-"leaf" variables to the "grad" attribute.
We should change `requires_grad` so that it signifies that we should store the "grad" attribute on all variables (leaf and non-leaf). We should add a new read-only attribute, `compute_grad`, which serves the first purpose (whether or not we should compute gradients).
`retain_grad()` can be deprecated and simply set requires_grad=True for backwards compatibility.
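A toy, pure-Python model of the proposed split (not PyTorch's actual classes; everything except the attribute names `requires_grad` and `compute_grad` is invented for illustration):

```python
class Variable:
    # Under the proposal: requires_grad means "store .grad here", and a
    # derived, read-only compute_grad means "gradients flow through here".
    def __init__(self, requires_grad=False, parents=()):
        self.requires_grad = requires_grad
        self.parents = parents
        self.grad = None

    @property
    def compute_grad(self):
        # Gradients are computed if this variable, or anything it was
        # derived from, asks for them.
        return self.requires_grad or any(p.compute_grad for p in self.parents)

leaf = Variable(requires_grad=True)
mid = Variable(parents=(leaf,))  # non-leaf, requires_grad=False
assert mid.compute_grad          # gradients still flow through it
assert not mid.requires_grad     # but .grad would not be stored on it
```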
cc @ezyang @SsnL @albanD @zou3519 @gqchen | module: autograd,triaged | low | Major |
273,089,291 | go | cmd/internal/src: do we need to carry around absFilename (and symFilename) ? | pos.go defines a PosBase which carries an andFilename. absFilenames are produced from the regular filename when the PosBase is created and then don't change. It seems that we could just as well produce the absFilename only at the end, when we need it (to flow into the compiler's generated binary). Investigate.
Reminder issue.
(See also #22660.) | NeedsInvestigation,compiler/runtime | low | Minor |
273,100,664 | kubernetes | Deprecate kubelet "enable-controller-attach-detach" option |
**Is this a BUG REPORT or FEATURE REQUEST?**:
/kind feature
@kubernetes/sig-storage-feature-requests
**What happened**:
Are there any use cases where we still need to support kubelet doing the attach/detach? If not, let's deprecate this field and eventually remove it.
| sig/storage,kind/feature,lifecycle/frozen | low | Critical |
273,101,763 | flutter | Add a flutter project validation mechanism | Sister bug to #12573
Currently, once a project is created, flutter loses the ability to sanitize the project build configuration (e.g. gradle and the various Xcode project configurations). If the user alters those configurations too much, it becomes very difficult to go back to a working state.
We currently only have indirect signals when the build output based on the build configurations are bad but it's too indirect. We should directly assert against the build configurations things such as:
Is the target API level too low, is generated.xcconfig transitively included, are schemes/build configs named a particular way, are you trying to build for 32-bit iOS etc.
----
From #14974:
> It'd be good if we gave users a way to check "is my project in a healthy state?". Things this might check:
- [ ] Are the Gradle files up to date?
- [ ] Are the Xcode project files up to date?
- [ ] Are there any errors in my pubspec.yaml?
- For example, the user should be depending on `flutter_test: sdk: flutter`, not `test` directly (or at least not without depending on `flutter_test`) | tool,t: flutter doctor,a: first hour,a: build,P2,team-tool,triaged-tool | low | Critical |
273,141,171 | vscode | SCM - Support keyboard shortcuts for inline change review commands | Steps to Reproduce:
1. Set this in keybindings
```json
{
"key": "alt+cmd+z",
"command": "git.revertChange"
}
```
2.

Reproduces without extensions: Yes | help wanted,feature-request,scm | low | Major |
273,146,606 | godot | Transform lock is not saved for scenes instanced in editor | **Operating system or device, Godot version, GPU Model and driver (if graphics related):**
35e7f992993a69a1e25cc12109ef7cf885e18260
**Issue description:**
When setting a transform lock on a subscene and saving the scene, the lock should be saved as well. Unfortunately, that's not the case:

Saving the lock works well for 'normal' (not subscene) nodes.
**Steps to reproduce:**
1. Create scene with node2d as root. Save it as `Scene2D`
2. Create another scene, add some node, instance `Scene2D` in editor as child of that node.
3. Set transform lock on `Scene2D`
4. Save the scene
5. Close and reopen the scene
**Link to minimal example project:**
| enhancement,topic:editor,confirmed | low | Minor |
273,156,241 | vscode | [Extensions] Bind package.config's activation onLanguage to configuration | - VSCode Version: 1.18.0 dcee22
- OS Version: macOS 10.13.1
Steps to Reproduce:
1. Have an extension that has `"activationEvents": [ "onLanguage:markdown" ]` but allow users to [override the languages it works for via configuration](https://github.com/TravisTheTechie/vscode-write-good/blob/master/package.json#L22), e.g. adding text
2. The user now must load a markdown file before it will activate for text files
It would be awesome if I could bind activation to whatever languages the user has configured instead of having to bind it to [every event](https://code.visualstudio.com/docs/extensionAPI/activation-events#_activationevents). Is this possible in some way that I can't find?
Reproduces without extensions: No, it's extension related | feature-request,api | low | Minor |
273,215,914 | vscode | [icons] allow name specific root folder icons | - VSCode Version: 1.18.1
- OS Version: Windows 10
I use a multi-root workspace, and lost the specific icons defined for some folders:

I use my own icon theme and specify
```
"folderNames": {
"db": "_fd_red",
"server": "_fd_green"
}
```
Now, with multi-workspace, they are no longer used.
There is a setting `rootFolder` but no `rootFolderNames`.
I would like a setting that takes into account the **name of the folder** or the **name defined in .code-workspace** to get something like this:

| feature-request,themes,workbench-multiroot | low | Major |
273,220,551 | vscode | [folding] Hover can show region description on `#endregion` | It would be great if the description of `region` pops up when mouse hovering on `#endregion`. To clarify, in `#region Great Stuff`, the description text is `Great Stuff`. | feature-request,editor-folding | low | Major |
273,222,095 | angular | Possibility to add multiple updateOn events | [x] Feature request
It would be nice to add support for multiple `updateOn` events, e.g. `updateOn: "blur submit"`.
There are scenarios where a single "blur" is not enough.
For example:
1. we have simple form with 2 fields and one submit button
2. user enters value for the first field and navigates to the second one
3. user enters value for the second field and submit form by pressing enter key
In my example, the form will be submitted with an empty value in the second field.
| feature,help wanted,freq2: medium,area: forms,feature: under consideration | medium | Major |
273,263,295 | TypeScript | SourceFile.ambientModuleNames is undefined after transform() |
**TypeScript Version:** 2.4.2 (stuck to this TypeScript version because of Angular 5.0.1)
**Code**
```ts
import * as ts from 'typescript';
// transform typescript AST prior to compilation
const transformationResult: ts.TransformationResult<ts.SourceFile> = ts.transform(
tsProgram.getSourceFiles(),
[ /* customTransformers */ ],
options
);
// Wraps a CompilerHost reading SourceFile's from the previous TransformationResult
function wrapCompilerHost({transformation, options}): ts.CompilerHost {
const wrapped = ts.createCompilerHost(options);
return {
...wrapped,
getSourceFile: (fileName, version) => {
const inTransformation = transformation.transformed.find((file) => file.fileName === fileName);
if (inTransformation) {
return inTransformation;
} else {
return wrapped.getSourceFile(fileName, version);
}
}
}
}
// then, do another compilation
const wrappingCompilerHost = wrapCompilerHost(transformationResult, options);
ts.createProgram(/*..*/, wrappingCompilerHost).emit(/* .. */);
```
**Expected behavior:**
Should "ambientModuleNames" not be copied/applied to the transformed `SourceFile`?
**Actual behavior:**
After `ts.transform(..)`, the "ambientModuleNames" of `SourceFile` is undefined, resulting in the following error:
```
"TypeError: Cannot read property 'length' of undefined
at resolveModuleNamesReusingOldState (~ng-packagr/node_modules/typescript/lib/typescript.js:69409:80)
at processImportedModules (~ng-packagr/node_modules/typescript/lib/typescript.js:70364:35)
at findSourceFile (~ng-packagr/node_modules/typescript/lib/typescript.js:70274:17)
at processImportedModules (~ng-packagr/node_modules/typescript/lib/typescript.js:70393:25)
at findSourceFile (~ng-packagr/node_modules/typescript/lib/typescript.js:70274:17)
at args (~ng-packagr/node_modules/typescript/lib/typescript.js:70200:85)
at getSourceFileFromReferenceWorker (~ng-packagr/node_modules/typescript/lib/typescript.js:70173:34)
at processSourceFile (~ng-packagr/node_modules/typescript/lib/typescript.js:70200:13)
at processRootFile (~ng-packagr/node_modules/typescript/lib/typescript.js:70055:13)
at ~ng-packagr/node_modules/typescript/lib/typescript.js:69312:60"
```
The original `SourceFile` has `ambientModuleNames`; the transformed source file doesn't:

| Bug,Help Wanted,API | low | Critical |
273,264,120 | neovim | SIGINT forwarding for RPC / ctrl-c interrupt rpcrequest() | It would be useful to have either:
1. SIGINT forwarding
2. Subscribe to SIGINT on RPC
Each have trade-offs, perhaps being strongly down to a language-by-language interface.
This would allow for the interruption of long-running commands. Much like the old-style python interface can do. | enhancement,ux,channels-rpc,remote-plugin | low | Major |
273,292,077 | godot | RichTextLabel's bbcode_text property is not tag-stack-aware. | **Operating system or device, Godot version, GPU Model and driver (if graphics related):**
Windows 10
Godot master
**Issue description:**
If you manually add content to `RichTextLabel`'s internal tag stack using `append_bbcode` and the `push_*`/`pop` functions, I would expect to be able to print `bbcode_text` and see the corresponding opening and closing tags in the output (at least as much as possible; I know you can't do that perfectly with `push_meta(Variant)`, but for most tags you could). End users don't even see the internal tag stack, so when they use `append_bbcode`, they THINK they are in fact editing `bbcode_text`... except that they aren't... <_< We should try to maintain that illusion.
**Steps to reproduce:**
Start a new scene with RichTextLabel as the root. Toggle on the bbcode bool. Add a script:
```
# RichTextLabel.gd
func _ready():
    add_text("hello")
    print(bbcode_text) # prints nothing to the screen
```
I would have expected to see "hello" printed as a result of this.
Also, fixing this Issue would probably also fix #12881. | bug,confirmed,topic:gui | low | Minor |
273,329,919 | opencv | Kalman filter OpenCL version | Hi everyone
I want to contribute to OpenCV but I want to make sure that I'm in the right way
Recently, I'm working on prediction and updating stage matrix parallel processing with OpenCL kernel
My Question is why there is no Kalman filter with OpenCL version?
Is it because it's not that fast for small matrix size? ex. matrix size under 100 * 100
Thx everyone | priority: low,category: ocl | low | Minor |
273,347,622 | flutter | ColorFilter doesn't handle transparency of images | **Scenario**
Using ColorFilter to increase the brightness of an image when receiving focus from the user. For example a button.
**Issue**
1. Color is a required field.
2. Transparent areas are filled with the color.
**Desired**
ColorFilter should ignore transparent bits.
| framework,d: api docs,customer: crowd,a: images,c: proposal,P2,a: gamedev,team-framework,triaged-framework | low | Critical |
273,450,540 | flutter | Widget got rebuilt due to outdated dependencies | Suppose widget `W` depends on `InheritedWidget` `D` based some `condition` be true. After a asynchronous operation, `condition` changed from `true` to `false`. From now on, `W` doesn't depend on `D` actually, but `W` still got rebuilt due to changes in `D`. `condition` can be internal state of `W` or information from other `InheritedWidget`.
I post a test [gist](https://gist.github.com/kezhuw/3ff4cf46d2d60565960caf2257cd1242) to show this. | framework,c: performance,d: api docs,P3,team-framework,triaged-framework | low | Major |
273,460,558 | pytorch | Exposing CuDNN benchmark strategy selection | It would be very convenient to have an option to set the convolution algorithm preference to prefer memory over speed when enabling the cuDNN benchmark. For a lot of applications memory efficient convolutions are preferred to the fastest approach. So having a way to set this would be welcome.
`torch.backends.cudnn.benchmark = True`
The [cudnnConvolutionFwdPreference](https://docs.rs/cudnn-sys/0.0.3/cudnn_sys/enum.cudnnConvolutionFwdPreference_t.html) during the benchmark knows several settings:
* CUDNN_CONVOLUTION_FWD_NO_WORKSPACE
* CUDNN_CONVOLUTION_FWD_PREFER_FASTEST
* CUDNN_CONVOLUTION_FWD_SPECIFY_WORKSPACE_LIMIT
Yet, in the current [implementation](https://github.com/pytorch/pytorch/blob/8fbe003d4ed946804d67a6d3bcd84eb6c3df9a4a/torch/csrc/cudnn/Conv.cpp) this appears to be fixed to: CUDNN_CONVOLUTION_FWD_PREFER_FASTEST
This is of course similar for:
* cudnnConvolutionBwdDataPreference_t
* cudnnConvolutionBwdFilterPreference_t
* cudnnConvolutionFwdPreference_t
My initial proposal would be to include a property to the `torch.backends.cudnn` which can be set to pick a different preference, i.e.:
`torch.backends.cudnn.benchmark_memory_limit = None` (FASTEST / default value)
`torch.backends.cudnn.benchmark_memory_limit = 0` (NO_WORKSPACE)
`torch.backends.cudnn.benchmark_memory_limit = 12` (SPECIFY_WORKSPACE_LIMIT)
The numbers could represent GPU memory in GB/MB or even bytes.
cc @ezyang @gchanan @zou3519 @bdhirsh @heitorschueroff @csarofeen @ptrblck | module: cudnn,module: bootcamp,feature,triaged | low | Major |
273,568,134 | godot | Change `make_function` API to work well with different script languages | **Issue description:**
Currently, the API to add a function to scripts, which is used by the connections dialog, is `ScriptLanguage::make_function(class, name, args)`.
This method is expected to return a string with the function's declaration and body. The returned string is appended to the end of the script's source. This may be fine for languages like GDScript, but it won't work for other languages.
This API should be changed to receive `Ref<Script>` and either be expected to modify it or to return its full source code with the newly added function.
| enhancement,topic:core,topic:editor,confirmed | low | Major |
273,571,451 | go | runtime: fractional worker preemption causes performance degradation for single cpu | What version of Go are you using (go version)?
```
go version devel +ef0e2af Mon Nov 6 15:55:31 2017 +0000 linux/amd64
```
What operating system and processor architecture are you using (go env)?
```
GOARCH="amd64"
GOBIN=""
GOCACHE="/nfs/site/home/itocar/.cache/go-build"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/localdisk/itocar/gopath/"
GORACE=""
GOROOT="/localdisk/itocar/golang"
GOTMPDIR=""
GOTOOLDIR="/localdisk/itocar/golang/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build447443283=/tmp/go-build -gno-record-gcc-switches"
```
/proc/cpuinfo:
model name : Intel(R) Xeon(R) CPU E5-2609 v3 @ 1.90GHz
What did you do?
I've compared performance of 1.9 vs trunk on builtin benchmarks with -cpu=1 and found that some of them have regressed. Changes in performance were bisected to 28e1a8e47aa089e781aa15bdd16e15265a5180bd which caused:
cmd/compile/internal/ssa/Fuse/10000 33.2ms ± 2% 38.1ms ± 2% +14.62%
text/template/parse/ParseLarge 43.2ms ± 1% 49.1ms ± 2% +13.60% (p=0.000 n=8+8)
This is reproducible only with go test -cpu=1 -bench=......
I doubt that the -cpu=1 case is important, but the CL message didn't mention any performance impact, so I'm opening this.
| Performance,NeedsInvestigation,compiler/runtime | low | Critical |
273,589,055 | go | database/sql/v2: what would a Go 2 version of database/sql look like? | We're wondering what the story for database/sql might be for Go 2. We agree it clearly needs to stay maintained (with ongoing maintenance coming from, but definitely not limited to, Google).
Whether the package lives in std or x/foo is out of scope for this bug. (We'll wait until package management is figured out more first.)
But given what we know now, what would a fresh database/sql look like for Go 2?
Experience reports, docs, API proposals welcome.
@kardianos, could you write up your wishlist?
| v2,NeedsInvestigation | high | Critical |
273,591,028 | TypeScript | JSDoc Object literal not parsed ignoring leading star, leading to parse error | **Code**
`allowJs` and `checkJs` are on.
```ts
/** @typedef {{
* type: string,
* header: {text: string},
* longestChain: {duration: number, length: number, transferSize: number},
* chains: !Object<string, !CriticalRequestChainRenderer.CRCNode>
* }}
*/
CriticalRequestChainRenderer.CRCDetailsJSON; // eslint-disable-line no-unused-expressions
```
**Expected behavior:**
A strongly typed `CriticalRequestChainRenderer.CRCDetailsJSON`
**Actual behavior:**
```
front_end/audits2/lighthouse/renderer/details-renderer.js(294,2): error TS1131: Property or signature expected.
```
(An error on the first leading `*` in the comment-embedded object literal.)
| Bug,Domain: JSDoc,Domain: JavaScript | low | Critical |
273,592,082 | TypeScript | JSDoc function type within generic not parsed correctly | **TypeScript Version:** 2.7.0-dev.201xxxxx
**Code**
```ts
/** @type {!Set<function()>} */
this._listeners = new Set();
```
**Expected behavior:**
No error, strongly typed `this._listeners`
**Actual behavior:**
```
front_end/bindings/BlackboxManager.js(22,21): error TS1005: '>' expected.
```
(A parse error on `function` claiming a closing `>` is expected)
| Bug,Domain: JSDoc,Domain: JavaScript | low | Critical |
273,595,391 | TypeScript | JSDoc function type not parsed correctly when nested | **TypeScript Version:** 2.7.0-dev.201xxxxx
**Code**
```ts
/**
* @param {!Runtime.Extension} extension
* @param {?function(function(new:Object)):boolean} predicate
* @return {boolean}
*/
_checkExtensionApplicability(extension, predicate) {
return false;
}
```
**Expected behavior:**
Strong types for `extension` and `predicate`.
**Actual behavior:**
```
front_end/Runtime.js(398,24): error TS1138: Parameter declaration expected.
```
(An error on the inner `function`, rather than a correctly constructed type)
| Bug,Domain: JSDoc,Domain: JavaScript | low | Critical |
273,597,602 | TypeScript | JSDoc allow missing parameter names | **TypeScript Version:** 2.7.0-dev.201xxxxx
**Code**
```ts
/**
* @override
* @param {function(string)}
*/
setNotify(notify) {
this._notify = notify;
}
```
**Expected behavior:**
strongly typed `notify`
**Actual behavior:**
```
front_end/audits2_worker/Audits2Service.js(33,31): error TS1003: Identifier expected.
```
(Error on the end of the `@param` line)
Found a few instances of this in the Chrome DevTools codebase, which means this validates in the Closure Compiler. If the param names are missing, they should just be matched by position.
| Bug,Domain: JSDoc,Domain: JavaScript | low | Critical |
273,599,667 | TypeScript | JSDoc Allow lone rest type | **Code**
```ts
/**
* @param {function(new:T, ...)} flavorType
* @param {?T} flavorValue
* @template T
*/
setFlavor(flavorType, flavorValue) {
}
```
**Expected behavior:**
Strong types for both arguments, no errors
**Actual behavior:**
```
front_end/ui/Context.js(14,33): error TS1110: Type expected.
```
(An error on the `...` in the first parameter type)
We'd usually consider the "right" thing to do here to be `...*`, but Closure is apparently fine with just `...`, so we should also respect that. A few files in the Chrome DevTools codebase make use of this pattern. | Bug,Domain: JavaScript | low | Critical |
273,614,658 | rust | Corrupted variable view in the debugger on Windows | I am not sure if it belongs to this project. Please advise if it should be moved somewhere else.
I took the code sample below from the Hyper project and compiled it using the stable MSVC toolchain on Windows (stable-x86_64-pc-windows-msvc, rustc 1.21.0 (3b72af97e 2017-10-09)). I then launched it under a debugger and played with breakpoints and variable inspections. The debugger shows a correct view of local variables like `adr` and `strin` inside the `call` function.

However, the inspection of the `req` variable is corrupted:

I am observing the same behavior with both Visual Studio Code + cppvsdbg and the Visual Studio 2017 debugger.
I am not sure whether the issue is caused by incomplete debug information or by incorrect visualization settings for the debugger. I understand that debugger visualization extensions (natvis) are now embedded into PDB files (?). I have configured the VS Code debugger visualization extensions, though I am not sure it was done correctly. Could you please advise what I should try or check to resolve the issue?
Debugger launch config in VS Code:
```json
{
"name": "(Windows) Launch",
"type": "cppvsdbg",
"request": "launch",
"program": "${workspaceFolder}/target/debug/hello_cargo.exe",
"args": [],
"stopAtEntry": false,
"cwd": "${workspaceFolder}",
"environment": [],
"externalConsole": true
}
```
Code sample used:
```rust
extern crate hyper;
extern crate futures;
use futures::future::Future;
use hyper::header::ContentLength;
use hyper::server::{Http, Request, Response, Service};
struct HelloWorld;
const PHRASE: &'static str = "Hello, World!";
impl HelloWorld {
fn callee(&self, str: String) {
}
}
impl Service for HelloWorld {
// boilerplate hooking up hyper's server types
type Request = Request;
type Response = Response;
type Error = hyper::Error;
// The future representing the eventual Response your call will
// resolve to. This can change to whatever Future you need.
type Future = Box<Future<Item=Self::Response, Error=Self::Error>>;
fn call(&self, req: Request) -> Self::Future {
let strin = String::from("aaaaaaa");
let adr = req.remote_addr().unwrap();
println!("{}", adr);
println!("{}", strin);
self.callee(strin.clone());
self.callee(strin);
// We're currently ignoring the Request
// And returning an 'ok' Future, which means it's ready
// immediately, and build a Response with the 'PHRASE' body.
Box::new(futures::future::ok(
Response::new()
.with_header(ContentLength(PHRASE.len() as u64))
.with_body(PHRASE)
))
}
}
fn main() {
let addr = "127.0.0.1:3000".parse().unwrap();
let server = Http::new().bind(&addr, || Ok(HelloWorld)).unwrap();
server.run().unwrap();
}
```
but I observed the same issue happening with the VS Code + cppvsdbg extension and the Visual Studio 2017 debugger. I guess it is related to incomplete PDB information | O-windows,A-debuginfo,T-compiler,C-bug | low | Critical |
273,631,396 | TypeScript | JSDoc trying to parse tag names where tag names aren't possible | **TypeScript Version:** 2.7.0-dev.201xxxxx
**Code**
```ts
/**
* When concatenated with a string, this automatically concatenates the user's mention instead of the Member object.
* @returns {string}
* @example
* // Logs: Hello from <@123456789>!
* console.log(`Hello from ${member}!`);
*/
toString() {
return `<@${this.nickname ? '!' : ''}${this.user.id}>`;
}
```
**Expected behavior:**
No error
**Actual behavior:**
```
node_modules/discord.js/src/structures/GuildMember.js(511,28): error TS1003: Identifier expected.
```
Error on `@123456789` - we probably shouldn't be looking for tags in the middle of a line. | Suggestion,In Discussion,Domain: JSDoc,Domain: JavaScript | low | Critical |
273,633,067 | opencv | OpenCV 3.1's SVD is 3600 times slower than OpenCV 2.2's SVD | ##### Detailed description
As has already been reported previously, the SVD in version 2.3 and later is slower than the SVD in version 2.2.
https://github.com/opencv/opencv/issues/4313
https://github.com/opencv/opencv/issues/7563
https://github.com/opencv/opencv/issues/7917
Below, I report the benchmark results.
OpenCV 2.2 used LAPACK, so its SVD was fast, but OpenCV 2.3 and later use their own implementation, whose SVD is slow. OpenCV's SVD is implemented in `JacobiSVDImpl_` in `lapack.cpp`. I looked at this code, but I could not fully understand it. My guess is that this function becomes slow when there are zero singular values. To check this hypothesis, two benchmark tests are run below: one with the matrix filled with the value one, and the other with the matrix filled with random numbers.
##### System information (version)
- OpenCV => 3.1 or 2.2
- Operating System / Platform => Windows 64 Bit
- Compiler => Visual Studio 2015
##### Steps to reproduce
```cpp
const int ROW = 2000;
const int COL = 2000;
Mat mat = Mat::ones(ROW, COL, CV_64F);
int64 start = getTickCount();
cout << "start = " << start << endl;
SVD svd(mat);
int64 end = getTickCount();
cout << "end = " << end << endl;
int calctime = (int)((end - start) * 1000 / getTickFrequency());
cout << "duration = " << calctime << " [msec]" << endl << endl;
cout << "W = " << endl << svd.w.rowRange(Range(0, 9)) << endl;
```
For random matrix, following code is used for `mat`.
```cpp
RNG gen(getTickCount());
Mat mat(ROW, COL, CV_64F);
gen.fill(mat, RNG::UNIFORM, 0.0, 1.0);
```
##### Benchmark result: random
Library | Millisecond
--- | ---
MATLAB | 3737
Python numpy | 3878
C/C++ OpenCV 2.2 | 27523
Python OpenCV 3.3 | 147870
C/C++ OpenCV 3.1 | 187424
C/C++ NRC 3rd edit. | 437539
C/C++ Eigen | 940722
##### Benchmark result: ones
Library | Millisecond
--- | ---
C/C++ Eigen | 170
C/C++ OpenCV 2.2 | 2501
MATLAB | 18784
Python numpy | 21017
C/C++ NRC 3rd edit. | 542057
Python OpenCV 3.3 | 9003935
C/C++ OpenCV 3.1 | 9004074
| feature,category: core | low | Major |
273,655,278 | go | x/build: add a CGO_ENABLED=0 Windows Builder? | In https://github.com/golang/go/issues/22680#issuecomment-344122132 , @alexbrainman notes we don't have a CGO_ENABLED=0 Windows builder.
We probably should add one.
CLs welcome.
| OS-Windows,Builders,new-builder | low | Minor |
273,659,388 | go | cmd/cgo: pointer-passing rules may not be strong enough | In #12416, we (mostly @ianlancetaylor, I think) defined some restrictions on what C functions can do with Go pointers.
The current restrictions say, “The C code … must not store any Go pointers in Go memory, even temporarily.”
They do not prohibit the C code from storing non-Go pointers in Go memory, but I've been thinking about the pointer check a lot lately, and https://github.com/golang/go/issues/20427#issuecomment-343255844 reminded me that C compilers can generate some pretty gnarly code.
In particular, C11 has a proper threading model, and a C11 compiler is allowed to assume that non-atomic writes do not race. It can rewrite them as, say, a store followed by an add, or a store of a spilled temporary followed by a load of that temporary and a store of the actual pointer. (Fortunately, I do not yet have an example of a C compiler actually generating such code.)
So now I'm not sure that the cgo pointer check is really strong enough after all: writes of non-Go pointers to Go pointer slots from C can, at least theoretically, race with the garbage collector tracing those slots, and cause it to observe invalid pointers.
To address that problem, I think we need to do one of:
* Strengthen the existing sentence: “The C code … must not store **any pointers** in Go memory, even temporarily,” or
* Add another clause: “The C code may store non-Go pointers in Go memory, but must ensure that each such store occurs atomically.”
(CC @dr2chase, @aclements, @RLH)
| Documentation,NeedsFix,compiler/runtime | low | Minor |
273,720,010 | TypeScript | Function should be assignable to (...args: any[]) => any | Obviously all functions have type `Function`. And likewise all functions can be given the type
```ts
type AnyFunc = (...args: any[]) => any;
```
AFAIK, there is nothing one can do with something of type `Function` that cannot be done with something of type `AnyFunc`, and vice versa. However, they are not equivalently assignable:
```ts
(f: AnyFunc): Function => f;
(f: Function): AnyFunc => f;
// ^
// Type 'Function' is not assignable to type 'AnyFunc'.
// Type 'Function' provides no match for the signature '(...args: any[]): any'.
```
If I edit `lib.d.ts` to add a callable signature to `Function`:
```ts
interface Function {
(...argArray: any[]): any;
//...
}
```
it seems to "fix" this issue; is there any reason not to make this change for real? | Suggestion,In Discussion | medium | Critical |
273,764,106 | youtube-dl | Export failed URL in a batch file | - [x] I've **verified** and **I assure** that I'm running youtube-dl **2017.11.06**
### What is the purpose of your *issue*?
- [ ] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [x] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
I frequently download multiple files via a batch file. If some URLs fail, I have no idea which of them worked and which did not.
I'd appreciate an option to export the failed URLs to a new file to stay organized.
Any plans to do so? | request | low | Critical |
273,875,511 | go | x/mobile: stack traces are wrong on iOS and Android | ### What version of Go are you using (`go version`)?
```
go version devel +4f178d157d Fri Oct 20 13:59:37 2017 +0000 darwin/amd64
```
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
```
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/steeve/go"
GORACE=""
GOROOT="/usr/local/Cellar/go/1.9.1/libexec"
GOTOOLDIR="/usr/local/Cellar/go/1.9.1/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/bs/51dlb_nn5k35xq9qfsxv9wc00000gr/T/go-build785508759=/tmp/go-build -gno-record-gcc-switches -fno-common"
CXX="clang++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
```
### What did you do?
Bind an iOS framework using `gomobile bind` and create a `panic`.
### What did you expect to see?
The real stack trace.
### What did you see instead?
As commented in https://github.com/golang/go/issues/20392#issuecomment-344102319:
```
# Crashlytics - plaintext stacktrace downloaded by Steeve Morin at Tue, 14 Nov 2017 00:11:12 GMT
# Platform: ios
# Application: testapp-swift
# Version: 1.0 (1)
# Issue #: 2
# Issue ID: 591c99d0be077a4dccdaf367
# Session ID: e591a6fc5b8641d8b4571ce7df6c11f4_4c0f40a1c8d011e7a1bd56847afe9799_0_v2
# Date: 2017-11-14T00:10:00Z
# OS Version: 11.1.1 (15B150)
# Device: iPhone X
# RAM Free: 7.2%
# Disk Free: 69.1%
#0. Crashed: com.apple.main-thread
0 testapp-swift 0x1048d5798 runtime.raiseproc + 24
1 testapp-swift 0x1048c1fdc runtime.dieFromSignal + 60
2 testapp-swift 0x1048c1fdc runtime.dieFromSignal + 60
3 testapp-swift 0x1048c1fdc runtime.dieFromSignal + 60
4 testapp-swift 0x1048c1fdc runtime.dieFromSignal + 60
5 testapp-swift 0x1048c1fdc runtime.dieFromSignal + 60
6 testapp-swift 0x1048c1fdc runtime.dieFromSignal + 60
7 testapp-swift 0x1048c1fdc runtime.dieFromSignal + 60
8 testapp-swift 0x1048c1fdc runtime.dieFromSignal + 60
9 testapp-swift 0x1048c1fdc runtime.dieFromSignal + 60
10 testapp-swift 0x1048c1fdc runtime.dieFromSignal + 60
--
#0. Crashed: com.apple.main-thread
0 testapp-swift 0x1048d5798 runtime.raiseproc + 24
1 testapp-swift 0x1048c1fdc runtime.dieFromSignal + 60
2 testapp-swift 0x1048c1fdc runtime.dieFromSignal + 60
3 testapp-swift 0x1048c1fdc runtime.dieFromSignal + 60
4 testapp-swift 0x1048c1fdc runtime.dieFromSignal + 60
5 testapp-swift 0x1048c1fdc runtime.dieFromSignal + 60
6 testapp-swift 0x1048c1fdc runtime.dieFromSignal + 60
7 testapp-swift 0x1048c1fdc runtime.dieFromSignal + 60
8 testapp-swift 0x1048c1fdc runtime.dieFromSignal + 60
9 testapp-swift 0x1048c1fdc runtime.dieFromSignal + 60
10 testapp-swift 0x1048c1fdc runtime.dieFromSignal + 60
#1. Thread
0 testapp-swift 0x1048d5b1c runtime.mach_semaphore_wait + 12
1 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
2 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
3 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
4 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
5 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
6 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
7 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
8 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
9 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
10 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
#2. Thread
0 libsystem_kernel.dylib 0x183c65dbc __workq_kernreturn + 8
1 libsystem_pthread.dylib 0x183d76fa0 _pthread_wqthread + 884
2 libsystem_pthread.dylib 0x183d76c20 start_wqthread + 4
#3. Thread
0 libsystem_pthread.dylib 0x183d76c1c start_wqthread + 122
#4. Thread
0 libsystem_kernel.dylib 0x183c65dbc __workq_kernreturn + 8
1 libsystem_pthread.dylib 0x183d77134 _pthread_wqthread + 1288
2 libsystem_pthread.dylib 0x183d76c20 start_wqthread + 4
#5. Thread
0 libsystem_kernel.dylib 0x183c65dbc __workq_kernreturn + 8
1 libsystem_pthread.dylib 0x183d77134 _pthread_wqthread + 1288
2 libsystem_pthread.dylib 0x183d76c20 start_wqthread + 4
#6. com.apple.uikit.eventfetch-thread
0 libsystem_kernel.dylib 0x183c44bc4 mach_msg_trap + 8
1 libsystem_kernel.dylib 0x183c44a3c mach_msg + 72
2 CoreFoundation 0x1840f5c74 __CFRunLoopServiceMachPort + 196
3 CoreFoundation 0x1840f3840 __CFRunLoopRun + 1424
4 CoreFoundation 0x184013fb8 CFRunLoopRunSpecific + 436
5 Foundation 0x184a3d6e4 -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 304
6 Foundation 0x184a5cafc -[NSRunLoop(NSRunLoop) runUntilDate:] + 96
7 UIKit 0x18e1472f4 -[UIEventFetcher threadMain] + 136
8 Foundation 0x184b3e860 __NSThread__start__ + 996
9 libsystem_pthread.dylib 0x183d7831c _pthread_body + 308
10 libsystem_pthread.dylib 0x183d781e8 _pthread_body + 310
11 libsystem_pthread.dylib 0x183d76c28 thread_start + 4
#7. Thread
0 libsystem_kernel.dylib 0x183c65dbc __workq_kernreturn + 8
1 libsystem_pthread.dylib 0x183d76fa0 _pthread_wqthread + 884
2 libsystem_pthread.dylib 0x183d76c20 start_wqthread + 4
#8. Thread
0 testapp-swift 0x1048d5b44 runtime.mach_semaphore_timedwait + 20
1 testapp-swift 0x1048ac5e0 runtime.semasleep1 + 192
2 testapp-swift 0x1048ac5e0 runtime.semasleep1 + 192
3 testapp-swift 0x1048ac5e0 runtime.semasleep1 + 192
4 testapp-swift 0x1048ac5e0 runtime.semasleep1 + 192
5 testapp-swift 0x1048ac5e0 runtime.semasleep1 + 192
6 testapp-swift 0x1048ac5e0 runtime.semasleep1 + 192
7 testapp-swift 0x1048ac5e0 runtime.semasleep1 + 192
8 testapp-swift 0x1048ac5e0 runtime.semasleep1 + 192
9 testapp-swift 0x1048ac5e0 runtime.semasleep1 + 192
10 testapp-swift 0x1048ac5e0 runtime.semasleep1 + 192
#9. Thread
0 testapp-swift 0x1048d5b1c runtime.mach_semaphore_wait + 12
1 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
2 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
3 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
4 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
5 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
6 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
7 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
8 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
9 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
10 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
#10. Thread
0 testapp-swift 0x1048d5b1c runtime.mach_semaphore_wait + 12
1 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
2 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
3 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
4 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
5 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
6 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
7 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
8 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
9 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
10 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
#11. Thread
0 testapp-swift 0x1048d5b1c runtime.mach_semaphore_wait + 12
1 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
2 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
3 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
4 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
5 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
6 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
7 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
8 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
9 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
10 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
#12. Thread
0 testapp-swift 0x1048d5b1c runtime.mach_semaphore_wait + 12
1 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
2 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
3 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
4 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
5 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
6 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
7 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
8 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
9 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
10 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
#13. Thread
0 testapp-swift 0x1048d5b1c runtime.mach_semaphore_wait + 12
1 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
2 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
3 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
4 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
5 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
6 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
7 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
8 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
9 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
10 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
#14. Thread
0 testapp-swift 0x1048d5b1c runtime.mach_semaphore_wait + 12
1 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
2 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
3 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
4 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
5 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
6 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
7 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
8 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
9 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
10 testapp-swift 0x1048ac55c runtime.semasleep1 + 60
#15. com.twitter.crashlytics.ios.MachExceptionServer
0 libsystem_kernel.dylib 0x183c44bc4 mach_msg_trap + 8
1 libsystem_kernel.dylib 0x183c44a3c mach_msg + 72
2 testapp-swift 0x104918aa4 CLSMachExceptionServer + 100
3 libsystem_pthread.dylib 0x183d7831c _pthread_body + 308
4 libsystem_pthread.dylib 0x183d781e8 _pthread_body + 310
5 libsystem_pthread.dylib 0x183d76c28 thread_start + 4
#16. Thread
0 libsystem_kernel.dylib 0x183c65dbc __workq_kernreturn + 8
1 libsystem_pthread.dylib 0x183d77134 _pthread_wqthread + 1288
2 libsystem_pthread.dylib 0x183d76c20 start_wqthread + 4
#17. com.apple.NSURLConnectionLoader
0 libsystem_kernel.dylib 0x183c44bc4 mach_msg_trap + 8
1 libsystem_kernel.dylib 0x183c44a3c mach_msg + 72
2 CoreFoundation 0x1840f5c74 __CFRunLoopServiceMachPort + 196
3 CoreFoundation 0x1840f3840 __CFRunLoopRun + 1424
4 CoreFoundation 0x184013fb8 CFRunLoopRunSpecific + 436
5 CFNetwork 0x18477e264 +[NSURLConnection(Loader) _resourceLoadLoop:] + 404
6 Foundation 0x184b3e860 __NSThread__start__ + 996
7 libsystem_pthread.dylib 0x183d7831c _pthread_body + 308
8 libsystem_pthread.dylib 0x183d781e8 _pthread_body + 310
9 libsystem_pthread.dylib 0x183d76c28 thread_start + 4
```
| mobile | medium | Critical |
273,884,520 | pytorch | Deprecate inplace argument in torch.nn.functional | https://github.com/pytorch/pytorch/pull/3683 splits off in-place functions to use the underscore suffix, like the functions on the `Tensor`.
We should consider deprecating the `inplace` argument completely and requiring people to call one of the two variants explicitly, e.g.:
```
F.relu(input) # never in-place
F.relu_(input) # always in-place
```
cc @albanD @mruberry @ezyang @SsnL @gchanan | module: bc-breaking,feature,module: nn,triaged,module: deprecation | low | Minor |
273,889,373 | go | x/net/http2: support memory re-use for MetaHeadersFrames | As pointed out in https://github.com/grpc/grpc-go/issues/1587#issuecomment-340015543 (and in https://github.com/cockroachdb/cockroach/issues/17370), re-using `MetaHeadersFrame` memory similarly to `DataFrame`s has the potential to increase performance significantly (~50% throughput increase), even where the header frame data is quite small (10s of bytes). The http2 library should ideally support this.
| Performance,NeedsInvestigation | low | Major |
273,905,010 | kubernetes | Use `cmp.Equal` instead of DeepEqual? | https://github.com/google/go-cmp/
I wonder if this can replace our semantic DeepEqual?
@lavalamp
@kubernetes/sig-api-machinery-bugs | kind/cleanup,sig/api-machinery,help wanted,lifecycle/frozen | low | Critical |
273,913,944 | kubernetes | fluentd-gcp crashing because of `JournalError: Bad message` | This happened in a 1.6 test: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-serial-release-1-6/929
fluentd crashed/restarted ~40 times during the test because it could not handle the bad message in the journal.
```
2017-11-14 15:40:52 +0000 [error]: unexpected error error_class=Systemd::JournalError error=#<Systemd::JournalError: Bad message>
2017-11-14 15:40:52 +0000 [error]: /var/lib/gems/2.1.0/gems/systemd-journal-1.2.3/lib/systemd/journal.rb:284:in `enumerate_helper'
2017-11-14 15:40:52 +0000 [error]: /var/lib/gems/2.1.0/gems/systemd-journal-1.2.3/lib/systemd/journal.rb:106:in `current_entry'
2017-11-14 15:40:52 +0000 [error]: /var/lib/gems/2.1.0/gems/fluent-plugin-systemd-0.0.8/lib/fluent/plugin/in_systemd.rb:88:in `watch'
2017-11-14 15:40:52 +0000 [error]: /var/lib/gems/2.1.0/gems/fluent-plugin-systemd-0.0.8/lib/fluent/plugin/in_systemd.rb:70:in `run'
2017-11-14 15:40:52 +0000 [info]: shutting down fluentd
```
This issue is fixed in fluentd 0.14.x according to https://github.com/reevoo/fluent-plugin-systemd/issues/16
Not sure if the latest fluentd-gcp image contains the fix or not.
/cc @crassirostris | kind/bug,sig/instrumentation,lifecycle/frozen | medium | Critical |
273,950,335 | vscode | Autoclosing pairs should be configurable | I've mentioned this in #15899, but this should really be a new issue. Just like Atom, Code should have an option to configure autocomplete characters. Currently, only brackets are auto-completed (and wrapped around selections), but it would be really useful if this also worked with quotes and backticks (especially in Markdown documents).
Here's the relevant settings page in Atom:

| feature-request,editor-autoclosing | high | Critical |
273,979,102 | react-native | KeyboardAvoidingView has no effect on multiline TextInput | KeyboardAvoidingView only works with single-line TextInputs. When the `multiline` prop is set, KeyboardAvoidingView does not shift the TextInput at all.
### Is this a bug report?
Yes
### Have you read the [Contributing Guidelines](https://facebook.github.io/react-native/docs/contributing.html)?
Yes
### Environment
Environment:
OS: macOS Sierra 10.12.6
Node: 7.0.0
npm: 3.10.8
Watchman: 4.7.0
Xcode: 9.1
Packages: (wanted => installed)
react-native: 0.49.3
react: 16.0.0-beta.5
Target Platform: iOS (10.3)
### Steps to Reproduce
1. Use a `<TextInput>` component with `multiline` prop set.
2. Wrap this in a ScrollView
3. Wrap that in a KeyboardAvoidingView.
### Expected Behavior
Multiline TextInput should scroll above the soft keyboard.
### Actual Behavior
Soft keyboard covers multiline TextInput.
### Reproducible Demo
```
import React, { Component } from 'react';
import { Text, TextInput, View, ScrollView, KeyboardAvoidingView, Keyboard} from 'react-native';
...
render() {
return (
<KeyboardAvoidingView style={{flex:1}} behavior="padding" keyboardVerticalOffset={64}>
<ScrollView keyboardShouldPersistTaps={'handled'}>
<View style={{padding: 12}}>
{/* various content to fill the page */}
<Text style={{fontSize: 20, padding: 40}}>MESSAGE 1</Text>
<Text style={{fontSize: 20, padding: 40}}>MESSAGE 2</Text>
<Text style={{fontSize: 20, padding: 40}}>MESSAGE 3</Text>
<Text style={{fontSize: 20, padding: 40}}>MESSAGE 4</Text>
<Text style={{fontSize: 20, padding: 40}}>MESSAGE 5</Text>
<Text style={{fontSize: 20, padding: 40}}>MESSAGE 6</Text>
<Text style={{fontSize: 20, padding: 40}}>MESSAGE 7</Text>
<Text style={{fontSize: 20, padding: 40}}>MESSAGE 8</Text>
<Text style={{fontSize: 20, padding: 40}}>MESSAGE 9</Text>
<Text style={{fontSize: 20, padding: 40}}>MESSAGE 10</Text>
</View>
<TextInput
style={{padding: 4}}
multiline={true}
placeholder={'Type something here...'}
onChangeText={this.updateMessage}
value={this.state.message}
/>
</ScrollView>
</KeyboardAvoidingView>
);
}
``` | Platform: iOS,Ran Commands,Component: TextInput,Bug | high | Critical |
273,997,687 | rust | [impl Trait] Should we allow `impl Trait` after `->` in `fn` types or parentheses sugar? | [RFC 1951] disallowed uses of impl Trait within Fn trait sugar or higher-ranked bounds. For example, the following is disallowed:
[RFC 1951]: https://github.com/rust-lang/rfcs/blob/master/text/1951-expand-impl-trait.md#expansion-to-arguments
```
fn foo(f: impl Fn(impl SomeTrait) -> impl OtherTrait)
fn bar() -> (impl Fn(impl SomeTrait) -> impl OtherTrait)
```
This tracking issue exists to discuss -- if we were to allow them -- what semantics they ought to have. Some known concerns around the syntax are:
- Should the `()` switch from existential to universal quantification and back?
- I think the general feeling here is now "no", basically because "too complex".
- If HRTB were introduced, where would we (e.g.) want `impl OtherTrait` to be bound?
For consistency, we disallow `fn(impl SomeTrait) -> impl OtherTrait` and `dyn Fn(impl SomeTrait) -> impl OtherTrait` as well. When considering the questions, one should also consider what the meaning would be in those contexts. | C-enhancement,T-lang,A-impl-trait | medium | Major |
273,998,221 | TypeScript | Allow type annotation on catch clause variable | **TypeScript Version:** 2.6.1
```ts
const rejected = Promise.reject(new Error());
async function tryCatch() {
try {
await rejected;
} catch (err: Error) { // TS1196: Catch clause variable cannot have a type annotation
// Typo, but `err` is `any`, so it results in runtime error
console.log(err.mesage.length);
}
}
function promiseCatch() {
rejected.catch((err: Error) => { // OK
// Compiler error; Yay!
console.log(err.mesage.length);
});
}
```
This was discussed in #8677 and #10000. It was closed as "fixed" in #9999, but as far as I can tell neither of the issues was actually resolved. In either event, I'd like to make a case for allowing type annotations in catch clauses.
Especially with the introduction of downlevel async functions, I'd suggest that disallowing catch clause type annotations leads to less safe code. In the example, the two methods of handling the promise are functionally equivalent, but one allows you to type the error, and the other doesn't. Without writing _extra_ code for the `try`/`catch` version (`if (err instanceof Error) {` or `const e: Error = err` or something), you'll get a runtime error that you wouldn't get with the pure `Promise` version.
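For illustration, the extra narrowing code mentioned above might look like this (the helper name is mine, not from the issue):

```typescript
// A hypothetical helper showing the manual narrowing the proposal
// wants to make unnecessary.
async function messageLength(p: Promise<unknown>): Promise<number> {
  try {
    await p;
    return 0;
  } catch (err) {
    // Extra code: narrow the untyped catch variable by hand.
    if (err instanceof Error) {
      return err.message.length; // typed as Error here; typos are caught
    }
    throw err;
  }
}
```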
The primary rationale for not allowing this is that any object can be thrown, so it's not guaranteed to be correct. However, most of the benefit of TypeScript comes from making assertions about your and other people's code that can't be strictly guaranteed (especially when importing JavaScript). And unless one would argue that the `Promise` catch function _also_ shouldn't allow a type annotation on the error parameter, this argument seems to make very little practical sense.
I believe one of the other arguments against is that it might be confusing, as it looks like the typed exception handling you might see in other languages (e.g., Java), and folks may think the catch will **only** catch errors of the annotated type. I don't personally believe that's a legitimate issue, but if it really is I'd propose at least allowing a `catch (err as Error) {` syntax or similar as a way of emphasizing that it's a type _assertion_.
If nothing else at all, it seems that there should be a way to trigger a warning (similar to an implicit any warning) when using an untyped `err` directly within a catch block. | Suggestion,Awaiting More Feedback,Add a Flag | high | Critical |
274,030,806 | opencv | OpenCV build failed on android with -isystem /usr/include defined |
##### System information (version)
- OpenCV => 3.3.1
- Operating System Host => Linux
- Operating System Target => Android
- Compiler => Clang 5.0
- CMake => 3.9
##### Detailed description
I am building OpenCV with Clang using the latest version of CMake. When I build, I get many errors because the path /usr/include is defined (via -isystem) during the build. I found other repositories with this issue, and the latest CMake version is supposed to fix it, but the build keeps failing.
The issue I am facing is this one: https://github.com/android-ndk/ndk/issues/467
Maybe someone has a suggestion, because I have tried many things and none of them fixed the problem.
By the way, the same configuration builds fine for Android when the host machine is Darwin.
| category: build/install,incomplete | low | Critical |
274,044,796 | go | proposal: spec: add kind-specific nil predeclared identifier constants | A common error for people new to Go is misunderstanding that an interface that is not `nil` can contain a `nil` pointer. This is one of the most commonly cited entries in the Go FAQ: https://golang.org/doc/faq#nil_error. A quick search shows at least 24 threads on golang-nuts discussing this, even though it is already in the FAQ.
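A minimal demonstration of the pitfall (my example, following the FAQ entry):

```go
package main

import "fmt"

type myError struct{}

func (*myError) Error() string { return "boom" }

// mayFail returns its result through the error interface. When it returns
// a nil *myError, the interface value itself is NOT nil: it carries the
// concrete type *myError wrapping a nil pointer.
func mayFail(fail bool) error {
	var p *myError
	if fail {
		p = &myError{}
	}
	return p
}

func main() {
	err := mayFail(false)
	fmt.Println(err == nil) // false — the surprise this proposal targets
}
```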
It is not new to observe that one of the causes of this common mistake is that `nil` is overloaded. Since changing that would not be backwards compatible, I propose the following changes.
1) We add six new predefined identifiers: `nilptr`, `nilinterface`, `nilslice`, `nilchan`, `nilfunc`, `nilmap`. These new identifiers are untyped constants that only resolve to a certain kind, much as `1.1` is an untyped constant that only resolves to a floating-point or complex type. `nilptr` may be used as the zero value for any pointer type, `nilinterface` may be used as the zero value for any interface type, and so forth. An attempt to use, for example, `nilptr` in an assignment or comparison with a variable or value that is not a pointer type is invalid.
2) We add a vet check that warns about any comparison of a value of any type with plain `nil`. We encourage people to change `x == nil` to `x == nilinterface` or `x == nilptr` or whatever is appropriate. Since initially this vet check will trigger all the time, we can not turn it on when running `go test`. It could be on by default when running `go vet`.
3) At this point people who run `go vet` will no longer make the common mistake. If `v` is a value of interface type, then writing `v == nilptr` will be a compilation error. Writing `v == nilinterface` will not cause people to erroneously think that this is testing whether `v` contains a `nil` pointer.
4) In some later release, we turn on the vet check when running `go test`.
5) If we are ever willing to make a backward incompatible change, we can make `v == nil` a compilation error rather than simply being a vet error.
Something to consider is that one could imagine permitting `v == nilptr` when v has an interface type, and having this be `true` if `v` is not `nilinterface`, if `v` holds a value of pointer type, and if the pointer value is `nilptr`. I don't know that this is a good idea, and I'm not proposing it. I'm only proposing the above. | LanguageChange,Proposal,LanguageChangeReview | high | Critical |
274,111,075 | rust | incr.comp.: x.py --incremental can cause too big cache directories | In particular, sometimes there are two cache subdirectories for a given crate in `stage0-incremental`, e.g. two `rustc_trans-xxxxxxx` directories. They have different crate disambiguators (otherwise there'd only be one directory). The disambiguator is set by the build system, either `cargo` or `x.py`. It's not clear yet whether this is a problem in `rustc` (somehow handling disambiguators or directory names wrongly) or `cargo` or `x.py`.
Note that it can be normal to have multiple directories for crates with the same name but different disambiguators, e.g. if there are two different versions of the same crate in the crate graph or if there is a library and an executable with the same name (as is the case with "rustc"). | C-enhancement,T-compiler,A-incr-comp | low | Minor |
274,205,275 | go | archive/tar: re-add sparse file support | Hi @dsnet. Thanks for all the Go 1.10 archive/tar work. It's really an amazing amount of cleanup, and it's very well done.
The one change I'm uncomfortable with from an API point of view is the sparse hole support.
First, I worry that it's too complex to use. I get lost trying to read Example_sparseAutomatic - 99% of it seems to have nothing to do with sparse files - and I have a hard time believing that we expect clients to write all this code. Despite the name, nothing about the example strikes me as “automatic.”
Second, I worry that much of the functionality here does not belong in archive/tar. Tar files are not the only time that a client might care about where the holes are in a file or about creating a new file with holes, and yet somehow this functionality is expressed in terms of tar.Header and a new tar.SparseHole structure instead of tar-independent operations. Tar should especially not be importing and using such subtle bits of syscall as it is in sparse_windows.go.
It's too late to redesign this for Go 1.10, so I suggest we pull out this new API and revisit for Go 1.11.
For Go 1.11, I would suggest investigating (1) what an appropriate API in package os would be, and (2) how to make archive/tar take advantage of that more automatically.
For example, perhaps it would make sense for package os to add
```go
// Regions returns the boundaries of data and hole regions in the file.
// The result slice can be read as pairs of offsets indicating the location
// of initialized data in the file or, ignoring the first and last element,
// as pairs of offsets indicating the location of a hole in the file.
// The first element of the result is always 0, and the last element is
// always the size of the file.
// For example, if f is a 4-kilobyte file with data written only to the
// first and last kilobyte (and therefore a 2-kilobyte hole in the middle),
// Regions would return [0, 1024, 3072, 4096].
//
// On operating systems that do not support files with holes or do
// not support querying the location of holes in files,
// Regions returns [0, size].
//
// Regions may temporarily change the file offset, so it should not
// be executed in parallel with Read or Write operations.
func (f *File) Regions() ([]int64, error)
```
That would avoid archive/tar's current DetectSparseHoles and SparseEntry, and tar.Header would only need to add a new field `Regions []int64`. (Regions is not a great name; better names are welcome.) Note that using a simple slice of offsets avoids the need for a special invertSparseEntries function entirely: you just change whether you read pairs starting at offset 0 or 1.
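To make the last point concrete — reading the same slice as data runs or as holes just by shifting the starting index — a small sketch (the helper name is mine, not proposed API):

```go
package main

import "fmt"

// pairs reads consecutive (start, end) pairs out of a Regions-style slice.
// start=0 yields the data runs; start=1 yields the holes (the first and
// last element are implicitly ignored for holes).
func pairs(r []int64, start int) [][2]int64 {
	var out [][2]int64
	for i := start; i+1 < len(r); i += 2 {
		out = append(out, [2]int64{r[i], r[i+1]})
	}
	return out
}

func main() {
	r := []int64{0, 1024, 3072, 4096} // the 4 KB example from above
	fmt.Println("data: ", pairs(r, 0)) // [[0 1024] [3072 4096]]
	fmt.Println("holes:", pairs(r, 1)) // [[1024 3072]]
}
```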
As for "punching holes", it suffices on Unix (as you know) to simply truncate the file (which Create does anyway) and then not write to the holes. On Windows it appears to be necessary to set the file type to sparse, but I don't see why the rest of sparsePunchWindows is needed. It seems crazy to me that it could possibly be necessary to pre-declare every hole location in a fresh file. The FSCTL_SET_ZERO_DATA looks like it is for making a hole in an existing file, not a new file. It seems like it should suffice to truncate the target file, mark it as sparse, set the file size, and then write the data. What's left should be automatically inferred as holes. If we were to add a new method SetSparse(bool) to os.File, then I would expect it to work on all systems to do something like:
```
f = Create(file)
f.SetSparse(true) // no-op on non-Windows systems, FSCTL_SET_SPARSE (only) on Windows
for each data chunk {
    f.WriteAt(data, offset)
}
f.Truncate(targetSize) // in case of final hole, or write last byte of file
```
Finally, it seems like handling this should _not_ be the responsibility of every client of archive/tar. It seems like it would be better for this to just work automatically.
On the tar.Reader side, WriteTo already takes care of not writing to holes. It could also call SetSparse and use Truncate if present as an alternative to writing the last byte of the file.
On the tar.Writer side, I think ReadFrom could also take care of this. It would require making WriteHeader compute the header to be written to the file but delay the actual writing until the Write or ReadFrom call. (And that in turn might make Flush worth keeping around not-deprecated.) Then when ReadFrom is called to read from a file with holes, it could find the holes and add that information to the header before writing out the header. Both of those combined would make this actually automatic.
At the very least, it seems clear that the current API steps beyond what tar should be responsible for. I can easily see developers who need to deal with sparse files but have no need for tar files constructing fake tar headers just to use DetectSparseHoles and PunchSparseHoles. That's a strong signal that this functionality does not belong in tar as the primary implementation. (A weaker but still important signal is that to date the tar.Header fields and methods have not mentioned os.File explicitly, and it should probably stay that way.)
Let's remove this from Go 1.10 and revisit in Go 1.11. Concretely, let's remove tar.SparseEntry, tar.Header.SparseHoles, tar.Header.DetectSparseHoles, tar.Header.PunchSparseHoles, and the deprecation notice for tar.Writer.Flush.
Thanks again.
Russ | NeedsFix,FeatureRequest | low | Critical |
274,279,505 | react | React-test-renderer: support for portal | **Do you want to request a *feature* or report a *bug*?**
Report a bug
**What is the current behavior?**
This test
```javascript
import React from 'react';
import { createPortal } from 'react-dom';
import renderer from 'react-test-renderer';
const Drop = () => (
createPortal(
<div>hello</div>,
this.dropContainer
)
);
test('Drop renders', () => {
const component = renderer.create(
<div>
<input />
<Drop />
</div>
);
const tree = component.toJSON();
expect(tree).toMatchSnapshot();
});
```
fails with
> Invariant Violation: Drop(...): Nothing was returned from render. This usually means a return statement is missing. Or, to render nothing, return null.
This test passes if I wrap createPortal in a container.
```javascript
<div>
{createPortal(
<div>hello</div>,
this.dropContainer
)}
</div>
```
**What is the expected behavior?**
The code without the parent container works fine in the browser, so it seems I'm adding the parent `div` just to make the test pass. I believe `react-test-renderer` should support a portal returned directly from render?
**Which versions of React, and which browser / OS are affected by this issue? Did this work in previous versions of React?**
Latest
| Type: Feature Request,Component: Test Renderer,React Core Team | high | Critical |
274,304,249 | youtube-dl | [facebook] Support Instagram embedded videos | Example:
```
$ youtube-dl -v "https://www.facebook.com/BlueManGroupFans/posts/1747439181956701"
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-v', 'https://www.facebook.com/BlueManGroupFans/posts/1747439181956701']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2017.11.15
[debug] Python version 3.4.2 - Linux-3.16.0-4-amd64-x86_64-with-debian-8.9
[debug] exe versions: ffmpeg 3.2.5-1, ffprobe 3.2.5-1, rtmpdump 2.4
[debug] Proxy map: {}
[facebook] 1747439181956701: Downloading webpage
ERROR: Unable to extract video ids; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
  File "/home/ant/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 784, in extract_info
    ie_result = ie.extract(url)
  File "/home/ant/bin/youtube-dl/youtube_dl/extractor/common.py", line 437, in extract
    ie_result = self._real_extract(url)
  File "/home/ant/bin/youtube-dl/youtube_dl/extractor/facebook.py", line 415, in _real_extract
    webpage, 'video ids', group='ids'),
  File "/home/ant/bin/youtube-dl/youtube_dl/extractor/common.py", line 800, in _search_regex
    raise RegexNotFoundError('Unable to extract %s' % _name)
youtube_dl.utils.RegexNotFoundError: Unable to extract video ids; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
```
Thank you in advance. :) | request | low | Critical |
274,309,362 | react | [Umbrella] New algorithm for resuming interrupted work | *Resuming* is the ability to re-use fibers after they are interrupted by a higher-priority update. Take the following scenario: A component is updated at a normal, async priority. Before the update is finished processing, a higher-priority update is scheduled (let's say it's synchronous, though it could also be a higher-priority async update). The sync update *interrupts* the async update, leaving it unfinished. After the sync update finishes, we go back to processing the interrupted, async update. It's possible, and even likely, that the interrupted work wasn't touched by the sync work and can be *resumed* without starting over completely.
This is an important optimization for several async features we have in mind, including error handling, blockers, pre-rendering, and hidden priority.
We used to have an implementation of resuming that mostly worked but had some bugs. A few months ago, I spent some time [identifying the bugs using fuzz testing](https://github.com/facebook/react/pull/9952) and fixing them by iterating on the existing algorithm. I eventually got a [version working that passed all the tests](https://github.com/facebook/react/pull/9695). But even this version didn't have all of the features we wanted, and the algorithm seemed inherently flawed. So we decided it would be best to scrap the existing algorithm and revisit resuming in the future.
We now believe we have a better idea of how resuming should work. I'm going to split the work into multiple PRs, and use this issue to keep track of our progress.
My apologies if some of my descriptions are hard to follow. It can be difficult to describe without resorting to jargon. I'll iterate on this issue as I work.
Always reconcile against current child set (#11564)
---------------------------------------------------
This is a small refactor that reflects what we already do without resuming: the set we reconcile against is always the current set. In the reverted resuming algorithm, the set we reconcile against was sometimes a work-in-progress set, and there are a few code paths that are left over from that implementation.
Stash interrupted children
--------------------------
When cloning a work-in-progress fiber from current, and there is already an existing work-in-progress that was interrupted, stash the interrupted work-in-progress children (and corresponding fields) in case we can reuse them later. In begin phase, add an additional check to see if incoming props/state match the interrupted props/state. If so, bail out and re-use the interrupted children. If not, the interrupted children are no longer useful, because we're about to re-render the parent and overwrite them. (Unmounted fibers actually can be re-used even if we re-render the parent; see next step.)
This gets us back to the same functionality we had in the old resuming algorithm. We can now resume interrupted children if we come back to it at the same priority at which it was originally rendered. The main limitation is that the work is lost if the parent is re-rendered at a higher priority.
**Need a way to distinguish between a work-in-progress fiber and the "previous current" fiber**
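A rough sketch (my pseudocode, not React source) of the bail-out check described in this step:

```javascript
// If an interrupted work-in-progress child set exists and the incoming
// props/state (and priority) match what it was rendered with, reuse it
// instead of re-rendering the parent's children.
function resumeIfPossible(workInProgress, renderPriority) {
  const stashed = workInProgress.stashedChildren; // interrupted work, if any
  if (
    stashed !== null &&
    stashed.pendingProps === workInProgress.pendingProps &&
    stashed.pendingState === workInProgress.pendingState &&
    stashed.priority === renderPriority
  ) {
    workInProgress.child = stashed.firstChild; // resume interrupted children
    return true;
  }
  return false; // fall through to a normal render, overwriting the stash
}
```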
Pool unmounted, interrupted children so they can resume even if parent re-renders at higher priority
------------------------------------------------------------------------------------
When a fiber is about to be re-rendered, and there are interrupted children that could not be reused, search through the interrupted children and find the ones that are unmounted (don't have an alternate). Stash the unmounted children in a separate set; they can be kept around indefinitely without being overwritten. This set acts like a pool of children. The next time the parent is re-rendered at the priority of the interrupted children, check the pool for matches before creating new fibers.
| Type: Umbrella,Component: Reconciler,React Core Team | low | Critical |
274,324,438 | rust | Spurious "broken pipe" error messages, when used in typical UNIX shell pipelines |
~~~~
$ cat yes.rs
fn main() { loop { println!("y"); } }
$ rustc yes.rs && ./yes | head -n1
y
thread 'main' panicked at 'failed printing to stdout: Broken pipe (os error 32)', src/libstd/io/stdio.rs:692:8
note: Run with `RUST_BACKTRACE=1` for a backtrace.
$ yes | head -n1
y
~~~~
This was originally filed [here](https://github.com/sfackler/cargo-tree/issues/25) but @sfackler determined the cause:
> This is due to `println!` panicking on errors: https://github.com/rust-lang/rust/blob/f1ea23e2cc72cafad1dc25a06c09ec2de8e323eb/src/libstd/io/stdio.rs#L671.
>
> C-based programs typically just get killed off with a SIGPIPE, but Rust ignores that signal.
Note that to see the backtrace, the data being piped has to be large enough to overflow the kernel pipe buffer.
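For completeness, a user-side workaround (my sketch, not a fix for the default behavior) is to check the write result instead of using `println!`, treating `BrokenPipe` as a quiet exit the way a C program dies on SIGPIPE:

```rust
use std::io::{self, Write};

// Writes up to `max_lines` lines of "y", stopping quietly if the reader
// closes the pipe (EPIPE) instead of panicking like `println!` does.
fn pump<W: Write>(mut out: W, max_lines: u64) -> io::Result<u64> {
    for n in 0..max_lines {
        if let Err(e) = writeln!(out, "y") {
            if e.kind() == io::ErrorKind::BrokenPipe {
                return Ok(n); // downstream (e.g. `head -n1`) went away
            }
            return Err(e);
        }
    }
    Ok(max_lines)
}

fn main() {
    // A real `yes` clone would loop forever; 3 keeps the demo short.
    let _ = pump(io::stdout().lock(), 3);
}
```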
| O-linux,O-macos,T-libs-api,C-bug | medium | Critical |
274,328,434 | rust | Making a function generic (but not using the parameter at all) causes ~12% slowdown | I found a situation where making a function that contains the majority of the hot code a generic function produces about a 12% performance regression in benchmarks, even though the generic parameter is not actually used and the type parameter in question is only ever used with one type.
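Abstractly, the pattern is just this (a sketch of the shape, not the crate's actual code):

```rust
// Concrete version: encodes each u32 as little-endian bytes.
fn encode_concrete(nums: &[u32], out: &mut Vec<u8>) {
    for &n in nums {
        out.extend_from_slice(&n.to_le_bytes());
    }
}

// Generic version: byte-for-byte the same body; `T` is never used.
// This is the kind of change that shifted inlining/unrolling decisions.
fn encode_generic<T>(nums: &[u32], out: &mut Vec<u8>) {
    for &n in nums {
        out.extend_from_slice(&n.to_le_bytes());
    }
}

fn main() {
    let mut out = Vec::new();
    encode_generic::<()>(&[7], &mut out);
    assert_eq!(out, vec![7, 0, 0, 0]);
}
```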
The code in question is at https://bitbucket.org/marshallpierce/stream-vbyte-rust. The benchmark that exhibits the largest degradation is `encode_scalar_rand_1k`.
[95ba949](https://bitbucket.org/marshallpierce/stream-vbyte-rust/commits/95ba949e0580656e91b1a58c3c89421261b4843e) is the commit that introduces the generic and associated slowdown.
When comparing the output of `perf annotate -l` for that benchmark in the two cases, it looks like the [hot code](https://bitbucket.org/marshallpierce/stream-vbyte-rust/src/95ba949e0580656e91b1a58c3c89421261b4843e/src/scalar.rs?fileviewer=file-view-default#scalar.rs-102) is compiled significantly differently. `cmp.rs:846` (an implementation of `lt()`) shows up as the leading offender in `perf annotate` with about 12% of samples in the slow case, basically the same as the performance delta I'm seeing, whereas it's not present at all in the base case.
In the base case, it looks like inlining collapsed function calls up through `encode::encode`, a few layers up the call stack:
```
...
0.00 : 110ef: cmp $0x1,%rax
0.00 : 110f3: jbe 11150 <stream_vbyte::encode::encode+0x1f0>
: _ZN12stream_vbyte6encode17encode_num_scalarE():
0.00 : 110f5: cmp $0x2,%rdx
0.00 : 110f9: jb 111d3 <stream_vbyte::encode::encode+0x273>
0.00 : 110ff: movzbl -0x2b(%rbp),%ecx
0.00 : 11103: mov %cl,0x1(%r14,%r12,1)
: _ZN4core4iter5range8{{impl}}11next<usize>E():
0.00 : 11108: cmp $0x3,%rax
0.00 : 1110c: jb 11150 <stream_vbyte::encode::encode+0x1f0>
: _ZN12stream_vbyte6encode17encode_num_scalarE():
0.00 : 1110e: cmp $0x3,%rdx
0.00 : 11112: jb 111da <stream_vbyte::encode::encode+0x27a>
0.00 : 11118: movzbl -0x2a(%rbp),%ecx
0.00 : 1111c: mov %cl,0x2(%r14,%r12,1)
: _ZN4core4iter5range8{{impl}}11next<usize>E():
0.00 : 11121: cmp $0x4,%rax
0.00 : 11125: jb 11150 <stream_vbyte::encode::encode+0x1f0>
: _ZN12stream_vbyte6encode17encode_num_scalarE():
0.00 : 11127: cmp $0x4,%rdx
...
```
In the slower case, `encode_num_scalar` was inlined only into its direct caller, `do_encode_quads`:
```
: _ZN9byteorder8{{impl}}9write_u32E():
lib.rs:1726 2.11 : 1149a: mov %r10d,-0x2c(%rbp)
: _ZN12stream_vbyte6encode17encode_num_scalarE():
: output[i] = buf[i];
mod.rs:161 0.77 : 1149e: test %r13,%r13
0.00 : 114a1: je 1167d <stream_vbyte::scalar::do_encode_quads+0x41d>
0.16 : 114a7: movzbl -0x2c(%rbp),%ecx
1.62 : 114ab: mov %cl,(%r9,%rax,1)
: _ZN4core3cmp5impls8{{impl}}2ltE():
cmp.rs:846 2.90 : 114af: cmp $0x1,%r8
0.00 : 114b3: jbe 11510 <stream_vbyte::scalar::do_encode_quads+0x2b0>
: _ZN12stream_vbyte6encode17encode_num_scalarE():
0.41 : 114b5: cmp $0x2,%r13
0.00 : 114b9: jb 11648 <stream_vbyte::scalar::do_encode_quads+0x3e8>
0.08 : 114bf: movzbl -0x2b(%rbp),%ecx
mod.rs:161 0.55 : 114c3: mov %cl,0x1(%r9,%rax,1)
: _ZN4core4iter5range8{{impl}}11next<usize>E():
range.rs:218 1.82 : 114c8: cmp $0x3,%r8
0.00 : 114cc: jb 11510 <stream_vbyte::scalar::do_encode_quads+0x2b0>
: _ZN12stream_vbyte6encode17encode_num_scalarE():
0.16 : 114ce: cmp $0x3,%r13
0.00 : 114d2: jb 1166d <stream_vbyte::scalar::do_encode_quads+0x40d>
0.04 : 114d8: movzbl -0x2a(%rbp),%ecx
0.14 : 114dc: mov %cl,0x2(%r9,%rax,1)
: _ZN4core4iter5range8{{impl}}11next<usize>E():
1.24 : 114e1: cmp $0x4,%r8
0.00 : 114e5: jb 11510 <stream_vbyte::scalar::do_encode_quads+0x2b0>
: _ZN12stream_vbyte6encode17encode_num_scalarE():
0.06 : 114e7: cmp $0x4,%r13
0.00 : 114eb: jb 11674 <stream_vbyte::scalar::do_encode_quads+0x414>
0.00 : 114f1: movzbl -0x29(%rbp),%ecx
```
Note the non-zero percentages attached to various `cmp` instructions. Also, based on my casual "look at how many times `encode_num_scalar` appears to have been inlined" analysis, it looks like the slow case unrolled the loop 4x while the fast case did not. (Discussion on rust-internals led to https://github.com/rust-lang/rfcs/issues/2219.)
## Meta
`rustc --version --verbose`:
```
rustc 1.21.0 (3b72af97e 2017-10-09)
binary: rustc
commit-hash: 3b72af97e42989b2fe104d8edbaee123cdf7c58f
commit-date: 2017-10-09
host: x86_64-unknown-linux-gnu
release: 1.21.0
LLVM version: 4.0
```
| I-slow,C-enhancement,T-compiler,E-needs-mcve | low | Major |
274,345,518 | go | proposal: encoding/json: add access to the underlying data causing UnmarshalTypeError | Currently one has to maintain a copy of the data being decoded by `json.Decoder` to retrieve the underlying data yielding an `UnmarshalTypeError` when decoding a JSON stream.
Making the data in `decodeState.data` directly accessible in the returned error:
```go
type UnmarshalTypeError struct {
	…
	Data []byte
}
```
would make it easy to enable improved diagnostics for faulty inputs.
(Somewhat related to #8254 and #9693 which are also about error diagnostics in `encoding/json`.) | Proposal,Proposal-Hold | low | Critical |
274,402,457 | youtube-dl | BBC iPlayer broken with python2, working with python3 | Downloading from BBC iPlayer errors out when using python2 but is fine when using python3
See log for details.
Not working with python2
```
[foo@bar Downloads]$ ./youtube-dl http://www.bbc.co.uk/iplayer/episode/b09ffxyn --proxy xxx.xxx.xx:80 --verbose
[debug] System config: []
[debug] User config: [u'--no-part']
[debug] Custom config: []
[debug] Command-line args: [u'http://www.bbc.co.uk/iplayer/episode/b09ffxyn', u'--proxy', u'xxx.xxx.xx:80', u'--verbose']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2017.11.15
[debug] Python version 2.7.14 - Darwin-15.6.0-x86_64-i386-64bit
[debug] exe versions: ffmpeg 3.3.4, ffprobe 3.3.4, phantomjs 2.1.1, rtmpdump 2.4
[debug] Proxy map: {u'http': u'xxx.xxx.xx:80', u'https': u'xxx.xxx.xx:80'}
[bbc.co.uk] b09ffxyn: Downloading video page
[bbc.co.uk] b09ffxyn: Downloading playlist JSON
[bbc.co.uk] b09ffxss: Downloading media selection XML
[bbc.co.uk] b09ffxss: Downloading m3u8 information
[bbc.co.uk] b09ffxss: Downloading m3u8 information
[bbc.co.uk] b09ffxss: Downloading m3u8 information
[bbc.co.uk] b09ffxss: Downloading m3u8 information
WARNING: Failed to download m3u8 information: HTTP Error 403: Forbidden
[bbc.co.uk] b09ffxss: Downloading MPD manifest
[bbc.co.uk] b09ffxss: Downloading MPD manifest
[bbc.co.uk] b09ffxss: Downloading m3u8 information
[bbc.co.uk] b09ffxss: Downloading m3u8 information
WARNING: Failed to download m3u8 information: HTTP Error 403: Forbidden
[bbc.co.uk] b09ffxss: Downloading m3u8 information
[bbc.co.uk] b09ffxss: Downloading m3u8 information
WARNING: Failed to download m3u8 information: HTTP Error 403: Forbidden
[bbc.co.uk] b09ffxss: Downloading m3u8 information
[bbc.co.uk] b09ffxss: Downloading m3u8 information
ERROR: An extractor error has occurred. (caused by KeyError('_Request__r_host',)); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
File "./youtube-dl/youtube_dl/extractor/common.py", line 437, in extract
ie_result = self._real_extract(url)
File "./youtube-dl/youtube_dl/extractor/bbc.py", line 566, in _real_extract
programme_id, title, description, duration, formats, subtitles = self._download_playlist(group_id)
File "./youtube-dl/youtube_dl/extractor/bbc.py", line 462, in _download_playlist
formats, subtitles = self._download_media_selector(programme_id)
File "./youtube-dl/youtube_dl/extractor/bbc.py", line 327, in _download_media_selector
mediaselector_url % programme_id, programme_id)
File "./youtube-dl/youtube_dl/extractor/bbc.py", line 344, in _download_media_selector_url
return self._process_media_selector(media_selection, programme_id)
File "./youtube-dl/youtube_dl/extractor/bbc.py", line 391, in _process_media_selector
m3u8_id=format_id, fatal=False)
File "./youtube-dl/youtube_dl/extractor/common.py", line 1341, in _extract_m3u8_formats
fatal=fatal)
File "./youtube-dl/youtube_dl/extractor/common.py", line 535, in _download_webpage_handle
urlh = self._request_webpage(url_or_request, video_id, note, errnote, fatal, data=data, headers=headers, query=query)
File "./youtube-dl/youtube_dl/extractor/common.py", line 506, in _request_webpage
return self._downloader.urlopen(url_or_request)
File "./youtube-dl/youtube_dl/YoutubeDL.py", line 2196, in urlopen
return self._opener.open(req, timeout=self._socket_timeout)
File "/usr/local/Cellar/python/2.7.14/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 429, in open
response = self._open(req, data)
File "/usr/local/Cellar/python/2.7.14/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 447, in _open
'_open', req)
File "/usr/local/Cellar/python/2.7.14/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 407, in _call_chain
result = func(*args)
File "./youtube-dl/youtube_dl/utils.py", line 1086, in https_open
req, **kwargs)
File "/usr/local/Cellar/python/2.7.14/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 1195, in do_open
h.request(req.get_method(), req.get_selector(), req.data, headers)
File "/usr/local/Cellar/python/2.7.14/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 294, in get_selector
return self.__r_host
File "/usr/local/Cellar/python/2.7.14/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 253, in __getattr__
return self.__dict__[attr]
KeyError: '_Request__r_host'
Traceback (most recent call last):
File "./youtube-dl/youtube_dl/YoutubeDL.py", line 784, in extract_info
ie_result = ie.extract(url)
File "./youtube-dl/youtube_dl/extractor/common.py", line 450, in extract
raise ExtractorError('An extractor error has occurred.', cause=e)
ExtractorError: An extractor error has occurred. (caused by KeyError('_Request__r_host',)); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
```
Working with python3
```
[foo@bar Downloads]$ python3 ./youtube-dl http://www.bbc.co.uk/iplayer/episode/b09ffxyn --proxy xxx.xxx.xx:80 --verbose
[debug] System config: []
[debug] User config: ['--no-part']
[debug] Custom config: []
[debug] Command-line args: ['http://www.bbc.co.uk/iplayer/episode/b09ffxyn', '--proxy', 'xxx.xxx.xx:80', '--verbose']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2017.11.15
[debug] Python version 3.5.1 - Darwin-15.6.0-x86_64-i386-64bit
[debug] exe versions: ffmpeg 3.3.4, ffprobe 3.3.4, phantomjs 2.1.1, rtmpdump 2.4
[debug] Proxy map: {'http': 'xxx.xxx.xx:80', 'https': 'xxx.xxx.xx:80'}
[bbc.co.uk] b09ffxyn: Downloading video page
[bbc.co.uk] b09ffxyn: Downloading playlist JSON
[bbc.co.uk] b09ffxss: Downloading media selection XML
[bbc.co.uk] b09ffxss: Downloading m3u8 information
[bbc.co.uk] b09ffxss: Downloading m3u8 information
WARNING: Failed to download m3u8 information: HTTP Error 403: Forbidden
[bbc.co.uk] b09ffxss: Downloading MPD manifest
[bbc.co.uk] b09ffxss: Downloading MPD manifest
[bbc.co.uk] b09ffxss: Downloading m3u8 information
[bbc.co.uk] b09ffxss: Downloading m3u8 information
WARNING: Failed to download m3u8 information: HTTP Error 403: Forbidden
[bbc.co.uk] b09ffxss: Downloading m3u8 information
[bbc.co.uk] b09ffxss: Downloading m3u8 information
WARNING: Failed to download m3u8 information: HTTP Error 403: Forbidden
[bbc.co.uk] b09ffxss: Downloading m3u8 information
[bbc.co.uk] b09ffxss: Downloading m3u8 information
WARNING: Failed to download m3u8 information: HTTP Error 403: Forbidden
[bbc.co.uk] b09ffxss: Downloading MPD manifest
[bbc.co.uk] b09ffxss: Downloading MPD manifest
[bbc.co.uk] b09ffxss: Downloading m3u8 information
[bbc.co.uk] b09ffxss: Downloading m3u8 information
WARNING: Failed to download m3u8 information: HTTP Error 403: Forbidden
[bbc.co.uk] b09ffxss: Downloading m3u8 information
[bbc.co.uk] b09ffxss: Downloading m3u8 information
WARNING: Failed to download m3u8 information: HTTP Error 403: Forbidden
[bbc.co.uk] b09ffxss: Downloading m3u8 information
[bbc.co.uk] b09ffxss: Downloading m3u8 information
[debug] Default format spec: bestvideo+bestaudio/best
[debug] Invoking downloader on 'https://vod-dash-uk-live.bbcfmt.hs.llnwd.net/usp/auth/vod/piff_abr_full_hd/b6441d-b09ffxss/vf_b09ffxss_bbabecef-5811-4c75-a52b-b22066d95767.ism.hlsv2.ism/dash/'
[dashsegments] Total fragments: 440
[download] Destination: Detectorists, Series 3, Episode 2-b09ffxss.fstream-uk-iptv_streaming_concrete_combined_hd_mf_limelight_uk_dash_https-video=5070000.mp4
[download] 0.0% of ~630.53MiB at 90.40KiB/s ETA 20:04:24^C
ERROR: Interrupted by user
```
| geo-restricted | low | Critical |
274,519,273 | react | Reword "unknown property" warning to be less obnoxious | I thought before it might cause knee-jerk reactions, and it does in practice: https://twitter.com/freeformflo/status/928454078903894016
I think we should change the phrasing to a more neutral one. Potentially explaining *why* we prefer camel case. | Type: Enhancement,Component: DOM,React Core Team | low | Major |
274,571,568 | opencv | UMat memory leak on Intel GPUs when using multiple threads | ##### System information (version)
- OpenCV => 3.3.1
- Operating System / Platform => Windows 10 64 Bit
- Compiler => Visual Studio 2013
##### Detailed description
Using UMat with OpenCL on Intel video cards in multiple host threads causes a GPU memory leak.
##### Steps to reproduce
1. Create a function that loads an image into `cv::Mat`, copies it to `cv::UMat` using the `copyTo` method, runs some OpenCL-backed function such as CascadeClassifier's `detectMultiScale` or `cv::equalizeHist`, and returns.
2. Create a loop that creates a thread, runs the function from step 1 in that thread, and joins the thread.
3. Watch GPU memory increasing indefinitely.
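The reproduction loop from the steps above, as a compilable stdlib sketch (the OpenCV/OpenCL calls are stubbed out in a comment so the code builds without OpenCV; `run_leak_repro` and `run_opencl_work` are illustrative names):

```cpp
#include <thread>

// Stub for the OpenCL-backed work. In the real reproduction this body would
// load an image into a cv::Mat, copy it to a cv::UMat via copyTo(), and run
// e.g. cv::equalizeHist() -- those calls are left as comments so the sketch
// compiles without OpenCV:
//   cv::Mat img = cv::imread("frame.png");
//   cv::UMat uimg; img.copyTo(uimg);
//   cv::UMat out; cv::equalizeHist(uimg, out);
static void run_opencl_work() {}

// Step 2: every iteration spawns a fresh host thread, runs the work, and
// joins it. On the affected Intel drivers, GPU memory grows each iteration.
int run_leak_repro(int iterations) {
    int completed = 0;
    for (int i = 0; i < iterations; ++i) {
        std::thread worker(run_opencl_work);
        worker.join();
        ++completed;
    }
    return completed;
}
```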

##### Additional notes
1. If not using an Intel GPU (tested on various NVIDIA GPUs), there's no memory leak.
2. If using OpenCL from a fixed number of threads (for example, in a fixed-size thread pool), there's no memory leak.
3. Even if I release the UMat's memory using the `release()` method, the memory still leaks. | duplicate,RFC | low | Minor |
274,665,760 | angular | i18n: document how to lazy load locale data on bootstrap of application | <!--
PLEASE HELP US PROCESS GITHUB ISSUES FASTER BY PROVIDING THE FOLLOWING INFORMATION.
ISSUES MISSING IMPORTANT INFORMATION MAY BE CLOSED WITHOUT INVESTIGATION.
-->
## I'm submitting a...
<!-- Check one of the following options with "x" -->
<pre><code>
[ ] Regression (a behavior that used to work and stopped working in a new release)
[ ] Bug report <!-- Please search GitHub for a similar issue or PR before submitting -->
[ ] Feature request
[x ] Documentation issue or request
[ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
</code></pre>
## Current behavior
<!-- Describe how the issue manifests. -->
Currently the documentation only states that you should import the locale data you need for an app.
## Expected behavior
<!-- Describe what the desired behavior would be. -->
A typical enterprise app supports multiple cultures and may not know the culture until run time. An example of how to import the correct locale data at run time would be helpful.
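One pattern that seems to cover this (a sketch, not official guidance — it assumes `registerLocaleData` from `@angular/common` and a build setup that supports dynamic `import()`; the helper name and locale-file path are illustrative):

```ts
import { registerLocaleData } from '@angular/common';

// Illustrative sketch: resolve the locale at run time, then pull in only
// that locale's data chunk before bootstrapping the app.
export async function loadLocale(localeId: string): Promise<void> {
  const data = await import(`@angular/common/locales/${localeId}.js`);
  registerLocaleData(data.default, localeId);
}
```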
## Minimal reproduction of the problem with instructions
<!--
For bug reports please provide the *STEPS TO REPRODUCE* and if possible a *MINIMAL DEMO* of the problem via
https://plnkr.co or similar (you can use this template as a starting point: http://plnkr.co/edit/tpl:AvJOMERrnz94ekVua0u5).
-->
## What is the motivation / use case for changing the behavior?
<!-- Describe the motivation or the concrete use case. -->
This was discussed as part of #20193
## Environment
<pre><code>
Angular version: X.Y.Z
<!-- Check whether this is still an issue in the most recent Angular version -->
Browser:
- [ ] Chrome (desktop) version XX
- [ ] Chrome (Android) version XX
- [ ] Chrome (iOS) version XX
- [ ] Firefox version XX
- [ ] Safari (desktop) version XX
- [ ] Safari (iOS) version XX
- [ ] IE version XX
- [ ] Edge version XX
For Tooling issues:
- Node version: XX <!-- run `node --version` -->
- Platform: <!-- Mac, Linux, Windows -->
Others:
<!-- Anything else relevant? Operating system version, IDE, package manager, HTTP server, ... -->
</code></pre>
| feature,area: i18n,P3,feature: under consideration | medium | Critical |
274,683,852 | TypeScript | Support typedef inheritance with JSDoc | **TypeScript Version:** 2.7.0-dev.20171116
**Code**
```js
/**
* @typedef {Object} Base
* @property {String} baseProp
*/
/**
* @typedef {Base} Child
* @property {String} childProp
*/
/** @type {Child} */
const child = { baseProp: '' };
```
**Expected behavior:** Error. `child` is missing the `childProp` property.
**Actual behavior:** No error. The `Child` type is identical to the `Base` type and only requires a `baseProp` property. The `childProp` property definition is completely ignored.
Or maybe there's another way to do the equivalent of `interface Child extends Base` here? | Suggestion,In Discussion,Domain: JSDoc | high | Critical |
274,843,768 | rust | Trigger JIT debuggers with panic=unwind | Currently we only trigger JIT debuggers on Windows if we compile programs with panic=abort. | O-windows,C-enhancement,T-compiler,WG-debugging | low | Critical |
274,844,516 | rust | Do not unwind on Windows with panic=abort | Currently we use the `ud2` instruction to abort the process. This unwinds the stack which we want to avoid. We also need to ensure that the abort mechanism triggers JIT debuggers (see https://github.com/rust-lang/rust/issues/46056). | O-windows,C-enhancement,T-compiler | low | Critical |
274,862,534 | rust | incr.comp.: Make sure `cargo check` is compatible with incremental compilation. | So far incremental compilation did not bring any benefit for `cargo check` because incremental compilation only cached post-trans artifacts and the whole point of `cargo check` is to exit from compilation before the costly trans and LLVM parts.
With https://github.com/rust-lang/rust/pull/46004 this has changed and we'll keep adding more and more things to this pre-trans cache. As a consequence we should make sure that `cargo check` works in conjunction with `CARGO_INCREMENTAL=1`. It already might but we've never tested it and have no regression tests.
cc @nikomatsakis @rust-lang/cargo | C-enhancement,E-needs-test,T-compiler,A-incr-comp,T-cargo,WG-incr-comp | low | Major |
274,882,877 | vue | Quadratic memory usage | ### Version
2.5.4
### Reproduction link
[https://codepen.io/anon/pen/KyyxKB?editors=1010](https://codepen.io/anon/pen/KyyxKB?editors=1010)
### Steps to reproduce
1. Open browser devtools
2. Launch the reproduction with different MAX values (100, 200, 500, 1000, 1500, 2000, 4000)
3. Note that memory usage grows quadratically with the MAX variable, while the number of computed properties and their implied dependencies in the program grows only linearly with MAX.
### What is expected?
Expected memory consumption to grow linearly (i.e. proportionally) with the MAX variable
### What is actually happening?
All `data_X` observables have all `computed_X` computed values as their subscribers, so the total number of subscriptions grows quadratically.
---
It's quite hard to write code that would noticeably affect real users with this, but it still feels like a flaw in the reactive system's design.
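A back-of-the-envelope model of why the subscription count is quadratic (a toy simulation with illustrative names, not Vue's actual implementation):

```javascript
// Toy model of a dependency tracker: each "computed" that reads a piece of
// reactive data gets added to that data's subscriber set.
function countSubscriptions(max) {
  const dataSubscribers = Array.from({ length: max }, () => new Set());
  for (let c = 0; c < max; c++) {
    // In the reproduction every computed_X ends up subscribed to every
    // data_X, so each computed touches all `max` data observables.
    for (let d = 0; d < max; d++) {
      dataSubscribers[d].add(c);
    }
  }
  return dataSubscribers.reduce((sum, s) => sum + s.size, 0);
}

// Doubling MAX quadruples the number of subscriptions: O(MAX^2) memory.
console.log(countSubscriptions(100)); // 10000
console.log(countSubscriptions(200)); // 40000
```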
<!-- generated by vue-issues. DO NOT REMOVE --> | improvement | low | Critical |
274,897,162 | rust | Type analysis on branches is inconsistent depending on whether an explicit return is used | I would expect the following two snippets of code to behave the same:
```rust
extern crate futures;
use futures::future::{self, Future};
fn main() {
println!("{}", foo().wait().unwrap());
}
fn foo() -> Box<Future<Item=String, Error=String>> {
(|| {
if true {
Box::new(future::ok("foo".to_owned()))
} else {
bar()
}
})()
}
fn bar() -> Box<Future<Item=String, Error=String>> {
Box::new(future::ok("bar".to_owned()))
}
```
```rust
extern crate futures;
use futures::future::{self, Future};
fn main() {
println!("{}", foo().wait().unwrap());
}
fn foo() -> Box<Future<Item=String, Error=String>> {
(|| {
if true {
return Box::new(future::ok("foo".to_owned()));
} else {
bar()
}
})()
}
fn bar() -> Box<Future<Item=String, Error=String>> {
Box::new(future::ok("bar".to_owned()))
}
```
But in reality, the first snippet works fine, and the second fails because the typechecker cannot unify `std::boxed::Box<futures::FutureResult<std::string::String, _>>` with `std::boxed::Box<futures::Future<Item=std::string::String, Error=std::string::String>>`. Given that the typechecker *can* and *does* unify them in the first case, it seems it should be able to do the same in the second case. Whether I use an implicit or explicit return shouldn't change the type analysis being done. | C-enhancement,T-compiler,A-inference,S-has-mcve | low | Critical |
274,898,033 | TypeScript | Return type inference broken for types with static members in ts 2.5; later fixed | Angular's dependency injection looks something like this:
```
interface Type<T> extends Function {
new (...args: any[]): T;
}
function get<T>(token: Type<T>, notFoundValue?: T): T {
return null as any as T;
}
class Token0 {
a = 1;
}
class Token1 {
static a = 1;
}
let t0 = get(Token0); // t0 always inferred as Token0
let t1 = get(Token1); // TS 2.5.3: t1 inferred as `{}`
// TS 2.6.1 t1 inferred as `Token1`
```
As stated in those comments, starting in TS 2.5 we get the wrong type when a user asks the `Injector` for a Token with a static member. It's later fixed in TS 2.6.
Workaround is to give an explicit type argument to `get`, eg Angular users should repeat the type like `Injector.get<Token1>(Token1)`
There may be no action required here other than asking users to upgrade to TS 2.6 - but sadly Angular doesn't support it yet because we need to upgrade all the places we depend on first. Filing this issue mostly so we have a place to point to in the Angular changelog. | Bug | low | Critical |
274,906,815 | go | net: LookupHost IDN support | ### What version of Go are you using (`go version`)?
go version go1.9 linux/amd64
### What operating system and processor architecture are you using (`go env`)?
GOARCH="amd64"
GOBIN="/home/oleg/go/work"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/oleg/go/"
GORACE=""
GOROOT="/usr/local/go"
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build065227263=/tmp/go-build -gno-record-gcc-switches"
CXX="g++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
### What did you do?
I tried to resolve a hostname that contains special symbols, e.g. '/' or '=', but got the error 'no such host'. The host, dig and nslookup utilities support this. They also allow resolving hostnames longer than 255 characters, unlike net.LookupHost.
_**Example:**_
```
_, err := net.LookupHost("/.yandex.ru")
fmt.Println(err) // lookup /.yandex.ru: no such host
```
### What did you expect to see?
**_For example, host:_**
```
$ host /.yandex.ru
/.yandex.ru has address 213.180.204.242
/.yandex.ru mail is handled by 10 not-for-mail.yandex.net.
```
| NeedsInvestigation | low | Critical |
275,100,463 | TypeScript | Feature request: exclude outDir by default in tsconfig.json | **Scenario**: As a user, I want the TypeScript compiler to exclude, by default, the transpiled files emitted to `compilerOptions.outDir` when the `compilerOptions.declaration` property is set to `true`. I would expect `outDir` to be excluded by default, because:
- it would prevent me from importing definitions from `outDir` by error, which is easy to do with intellisense
- it would prevent me from having `TS5055` errors such as spotted in #16749 and #6046
- `tsc` CLI has already implemented this behaviour, see PR #8703
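Until such a default exists, the usual workaround is to exclude the output directory explicitly — a sketch assuming an `outDir` of `./dist`:

```json
{
  "compilerOptions": {
    "outDir": "./dist",
    "declaration": true
  },
  "exclude": ["node_modules", "dist"]
}
```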
| Suggestion,Awaiting More Feedback | low | Critical |
274,984,119 | angular | HttpClient does not set X-XSRF-Token on Http Post | <!--
PLEASE HELP US PROCESS GITHUB ISSUES FASTER BY PROVIDING THE FOLLOWING INFORMATION.
ISSUES MISSING IMPORTANT INFORMATION MAY BE CLOSED WITHOUT INVESTIGATION.
-->
## I'm submitting a...
<!-- Check one of the following options with "x" -->
<pre><code>
[ ] Regression (a behavior that used to work and stopped working in a new release)
[ X] Bug report <!-- Please search GitHub for a similar issue or PR before submitting -->
[ ] Feature request
[ ] Documentation issue or request
[ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
</code></pre>
## Current behavior
<!-- Describe how the issue manifests. -->
Updated to angular 5.0.2. Updated from the deprecated HttpModule (@angular/http) to HttpClientModule (@angular/common/http).
Updated Http.post to HttpClient.post and tested. X-XSRF-TOKEN was not present in the Http Header.
HttpClient (@angular/common/http) does not set X-XSRF-Token on Http Post, while Http (@angular/http) does.
## Expected behavior
<!-- Describe what the desired behavior would be. -->
HttpClient should set the X-XSRF-TOKEN on Http Post.
## Minimal reproduction of the problem with instructions
<!--
For bug reports please provide the *STEPS TO REPRODUCE* and if possible a *MINIMAL DEMO* of the problem via
https://plnkr.co or similar (you can use this template as a starting point: http://plnkr.co/edit/tpl:AvJOMERrnz94ekVua0u5).
-->
Verify that javascript XSRF-TOKEN cookie has been set.
Test HttpClient (@angular/http) and HttpClientModule (@angular/common/http) side by side with nearly identical Http post requests.
<pre><code>
//OLD
CreateOld(sample: Models.Sample): Observable<Models.Sample[]> {
let body = JSON.stringify(sample);
let headers = new Headers({ 'Content-Type': 'application/json' });
return this.http
.post(this.baseUrl + 'api/sample/Create', body, { headers: headers })
.map(response => { return response.json() as Models.Task[] });
}
//NEW
CreateNew(sample: Models.Sample): Observable<Models.Sample[]> {
let body = JSON.stringify(sample)
return this.httpClient
.post<Models.Task[]>(this.baseUrl + 'api/sample/Create', body, { headers: new HttpHeaders().set('Content-Type', 'application/json') });
}
</code></pre>
## What is the motivation / use case for changing the behavior?
<!-- Describe the motivation or the concrete use case. -->
## Environment
<pre><code>
Angular version: 5.0.2
<!-- Check whether this is still an issue in the most recent Angular version -->
Browser:
- [ X] Chrome (desktop) version 62.0.3202.94
- [ ] Chrome (Android) version XX
- [ ] Chrome (iOS) version XX
- [ ] Firefox version XX
- [ ] Safari (desktop) version XX
- [ ] Safari (iOS) version XX
- [ ] IE version XX
- [X ] Edge version 41.16299.15.0
For Tooling issues:
- Node version: XX <!-- run `node --version` -->
- Platform: <!-- Mac, Linux, Windows -->
Others:
<!-- Anything else relevant? Operating system version, IDE, package manager, HTTP server, ... -->
</code></pre>
| type: bug/fix,freq2: medium,area: common/http,type: confusing,P3 | high | Critical |
275,104,285 | TypeScript | Feature request: allow user to merge extended arrays in tsconfig files | **Scenario**: As a user, I would like to optionally merge extended arrays in `tsconfig` files. To do so, I would add a nested dot array `["..."]`, reminiscent of the spread operator, to the property I want to merge. Here is an example:
<details>
<summary><code>tsconfig-base.json</code></summary>
<pre>
{
"exclude": ["**/__specs__/*"]
}
</pre>
</details>
<details>
<summary><code>tsconfig-custom.json</code></summary>
<pre>
{
"extends": "./tsconfig-base.json",
"exclude": [["...tsconfig-base"], "lib"] // resolved to ["**/__specs__/*"; "lib"]
}
</pre>
</details>
<br/>
**Alternative**: using a config `{}` object
<details>
<summary><code>tsconfig-custom.json</code></summary>
<pre>
{
"extends": "./tsconfig-base.json",
"exclude": [{ "extends": "tsconfig-base" }, "lib"] // resolved to ["**/__specs__/*"; "lib"]
}
</pre>
</details>
| Suggestion,Awaiting More Feedback | high | Critical |
274,987,377 | TypeScript | [Suggestion] Allow @ts-ignore at the end of the same line | I added a comment in https://github.com/Microsoft/TypeScript/pull/18457#issuecomment-339097980 but I think that nobody saw it.
It would be nice if I could put the `@ts-ignore` at the same line, to leave the code more readable
```ts
let v1: string = 1; // @ts-ignore
let v2: string = 2; // @ts-ignore
let v3: string = 3; // @ts-ignore
let v4: string = 4; // @ts-ignore
let v5: string = 5; // @ts-ignore
let v6: string = 6; // @ts-ignore
```
instead of:
```ts
// @ts-ignore
let v1: string = 1;
// @ts-ignore
let v2: string = 2;
// @ts-ignore
let v3: string = 3;
// @ts-ignore
let v4: string = 4;
// @ts-ignore
let v5: string = 5;
// @ts-ignore
let v6: string = 6;
``` | Suggestion,In Discussion | high | Critical |
275,029,270 | opencv | DNN/OpenCL Crashes on some Macs |
##### System information (version)
- OpenCV => 3.3.1
- Operating System / Platform => Mac OS 10.12.6
##### Detailed description
Using the DNN framework with OpenCL crashes on some Macs but not others. I've included the crash log below.
##### Crash log
OS Version: Mac OS X 10.12.6 (16G29)
Report Version: 12
Anonymous UUID: 60CB8BA8-3FBF-43E5-12ED-0FE4183AFF6B
Time Awake Since Boot: 580000 seconds
System Integrity Protection: disabled
Crashed Thread: 0 Dispatch queue: opencl_runtime
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000048
Exception Note: EXC_CORPSE_NOTIFY
Termination Signal: Segmentation fault: 11
Termination Reason: Namespace SIGNAL, Code 0xb
Terminating Process: exc handler [0]
VM Regions Near 0x48:
-->
__TEXT 00000001039d2000-00000001039d4000 [ 8K] r-x/rwx SM=COW /usr/local/Cellar/python/2.7.14/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python
Thread 0 Crashed:: Dispatch queue: opencl_runtime
0 com.nvidia.web.GeForceGLDriverWeb 0x000000011197174c 0x111400000 + 5707596
1 com.nvidia.web.GeForceGLDriverWeb 0x000000011196ed5c 0x111400000 + 5696860
2 com.nvidia.web.GeForceGLDriverWeb 0x0000000111972194 0x111400000 + 5710228
3 com.nvidia.web.GeForceGLDriverWeb 0x000000011173785d 0x111400000 + 3373149
4 com.nvidia.web.GeForceGLDriverWeb 0x0000000111736837 0x111400000 + 3369015
5 com.nvidia.web.GeForceGLDriverWeb 0x000000011172f8c0 0x111400000 + 3340480
6 com.nvidia.web.GeForceGLDriverWeb 0x000000011172fc24 0x111400000 + 3341348
7 com.nvidia.web.GeForceGLDriverWeb 0x0000000111791a82 0x111400000 + 3742338
8 com.nvidia.web.GeForceGLDriverWeb 0x000000011176dd7c 0x111400000 + 3595644
9 com.nvidia.web.GeForceGLDriverWeb 0x000000011176cf1c 0x111400000 + 3591964
10 com.nvidia.web.GeForceGLDriverWeb 0x000000011176d3c2 gldFinishQueue + 279
11 com.apple.opencl 0x00007fffbb42a818 0x7fffbb424000 + 26648
12 com.apple.opencl 0x00007fffbb446f4d 0x7fffbb424000 + 143181
13 libdispatch.dylib 0x00007fffcc7ac8fc _dispatch_client_callout + 8
14 libdispatch.dylib 0x00007fffcc7ad536 _dispatch_barrier_sync_f_invoke + 83
15 libdispatch.dylib 0x00007fffcc7b760d _dispatch_barrier_sync_f_slow + 540
16 com.apple.opencl 0x00007fffbb447015 clFinish + 90
17 libopencv_core.3.3.dylib 0x000000010856b77d cv::ocl::OpenCLAllocator::deallocate_(cv::UMatData*) const + 1161
18 libopencv_core.3.3.dylib 0x0000000108568bbf cv::ocl::OpenCLAllocator::deallocate(cv::UMatData*) const + 439
19 libopencv_core.3.3.dylib 0x00000001085d0a29 cv::UMat::~UMat() + 55
20 libopencv_dnn.3.3.dylib 0x0000000104c0d39c std::__1::__vector_base<cv::UMat, std::__1::allocator<cv::UMat> >::~__vector_base() + 40
21 libopencv_dnn.3.3.dylib 0x0000000104c0d26b cv::dnn::ConvolutionLayerImpl::~ConvolutionLayerImpl() + 31
22 libopencv_dnn.3.3.dylib 0x0000000104c0a74e cv::dnn::ConvolutionLayerImpl::~ConvolutionLayerImpl() + 14
23 libopencv_dnn.3.3.dylib 0x0000000104c127d6 cv::detail::PtrOwnerImpl<cv::dnn::ConvolutionLayerImpl, cv::DefaultDeleter<cv::dnn::ConvolutionLayerImpl> >::deleteSelf() + 24
24 libopencv_dnn.3.3.dylib 0x0000000104c0292e cv::Ptr<cv::dnn::experimental_dnn_v2::Layer>::~Ptr() + 38
25 libopencv_dnn.3.3.dylib 0x0000000104bfaaee cv::dnn::experimental_dnn_v2::LayerData::~LayerData() + 96
26 libopencv_dnn.3.3.dylib 0x0000000104bfa4b4 std::__1::__tree<std::__1::__value_type<int, cv::dnn::experimental_dnn_v2::LayerData>, std::__1::__map_value_compare<int, std::__1::__value_type<int, cv::dnn::experimental_dnn_v2::LayerData>, std::__1::less<int>, true>, std::__1::allocator<std::__1::__value_type<int, cv::dnn::experimental_dnn_v2::LayerData> > >::destroy(std::__1::__tree_node<std::__1::__value_type<int, cv::dnn::experimental_dnn_v2::LayerData>, void*>*) + 50
27 libopencv_dnn.3.3.dylib 0x0000000104bfa49f std::__1::__tree<std::__1::__value_type<int, cv::dnn::experimental_dnn_v2::LayerData>, std::__1::__map_value_compare<int, std::__1::__value_type<int, cv::dnn::experimental_dnn_v2::LayerData>, std::__1::less<int>, true>, std::__1::allocator<std::__1::__value_type<int, cv::dnn::experimental_dnn_v2::LayerData> > >::destroy(std::__1::__tree_node<std::__1::__value_type<int, cv::dnn::experimental_dnn_v2::LayerData>, void*>*) + 29
28 libopencv_dnn.3.3.dylib 0x0000000104bfa49f std::__1::__tree<std::__1::__value_type<int, cv::dnn::experimental_dnn_v2::LayerData>, std::__1::__map_value_compare<int, std::__1::__value_type<int, cv::dnn::experimental_dnn_v2::LayerData>, std::__1::less<int>, true>, std::__1::allocator<std::__1::__value_type<int, cv::dnn::experimental_dnn_v2::LayerData> > >::destroy(std::__1::__tree_node<std::__1::__value_type<int, cv::dnn::experimental_dnn_v2::LayerData>, void*>*) + 29
29 libopencv_dnn.3.3.dylib 0x0000000104bfa4ab std::__1::__tree<std::__1::__value_type<int, cv::dnn::experimental_dnn_v2::LayerData>, std::__1::__map_value_compare<int, std::__1::__value_type<int, cv::dnn::experimental_dnn_v2::LayerData>, std::__1::less<int>, true>, std::__1::allocator<std::__1::__value_type<int, cv::dnn::experimental_dnn_v2::LayerData> > >::destroy(std::__1::__tree_node<std::__1::__value_type<int, cv::dnn::experimental_dnn_v2::LayerData>, void*>*) + 41
30 libopencv_dnn.3.3.dylib 0x0000000104bfa49f std::__1::__tree<std::__1::__value_type<int, cv::dnn::experimental_dnn_v2::LayerData>, std::__1::__map_value_compare<int, std::__1::__value_type<int, cv::dnn::experimental_dnn_v2::LayerData>, std::__1::less<int>, true>, std::__1::allocator<std::__1::__value_type<int, cv::dnn::experimental_dnn_v2::LayerData> > >::destroy(std::__1::__tree_node<std::__1::__value_type<int, cv::dnn::experimental_dnn_v2::LayerData>, void*>*) + 29
31 libopencv_dnn.3.3.dylib 0x0000000104bfa49f std::__1::__tree<std::__1::__value_type<int, cv::dnn::experimental_dnn_v2::LayerData>, std::__1::__map_value_compare<int, std::__1::__value_type<int, cv::dnn::experimental_dnn_v2::LayerData>, std::__1::less<int>, true>, std::__1::allocator<std::__1::__value_type<int, cv::dnn::experimental_dnn_v2::LayerData> > >::destroy(std::__1::__tree_node<std::__1::__value_type<int, cv::dnn::experimental_dnn_v2::LayerData>, void*>*) + 29
32 libopencv_dnn.3.3.dylib 0x0000000104bfa4ab std::__1::__tree<std::__1::__value_type<int, cv::dnn::experimental_dnn_v2::LayerData>, std::__1::__map_value_compare<int, std::__1::__value_type<int, cv::dnn::experimental_dnn_v2::LayerData>, std::__1::less<int>, true>, std::__1::allocator<std::__1::__value_type<int, cv::dnn::experimental_dnn_v2::LayerData> > >::destroy(std::__1::__tree_node<std::__1::__value_type<int, cv::dnn::experimental_dnn_v2::LayerData>, void*>*) + 41
33 libopencv_dnn.3.3.dylib 0x0000000104c033f7 cv::dnn::experimental_dnn_v2::Net::Impl::~Impl() + 133
34 libopencv_dnn.3.3.dylib 0x0000000104c0335d cv::detail::PtrOwnerImpl<cv::dnn::experimental_dnn_v2::Net::Impl, cv::DefaultDeleter<cv::dnn::experimental_dnn_v2::Net::Impl> >::deleteSelf() + 27
35 libopencv_dnn.3.3.dylib 0x0000000104c88f42 cv::Ptr<cv::dnn::experimental_dnn_v2::Net::Impl>::~Ptr() + 38
36 cv2.so 0x0000000103de2df4 pyopencv_dnn_Net_dealloc(_object*) + 18
37 org.python.python 0x0000000103a0e251 insertdict_by_entry + 259
38 org.python.python 0x0000000103a0cbd3 insertdict + 51
39 org.python.python 0x0000000103a0c2b7 dict_set_item_by_hash_or_entry + 101
40 org.python.python 0x0000000103a10e63 _PyModule_Clear + 350
41 org.python.python 0x0000000103a6edf4 PyImport_Cleanup + 499
42 org.python.python 0x0000000103a795f5 Py_Finalize + 291
43 org.python.python 0x0000000103a8ae79 Py_Main + 3153
44 libdyld.dylib 0x00007fffcc7e2235 start + 1 | bug,platform: ios/osx,category: ocl,category: dnn | low | Critical |
275,085,115 | TypeScript | No warning when using unary +/- operators on functions and objects? | **TypeScript Version:** 2.6.1
**Code**
```ts
let func = () => "func";
+func; // no error
-func; // no error
let obj = { foo: "bar" };
+obj; // no error
-obj; // no error
```
**Expected behavior:**
Unary `+` or `-` is only applicable to a `number` or maybe a `string`. I would expect the lines listed as `// no error` to be errors.
**Actual behavior:**
Unary `+` and `-` never generate a warning no matter what crazy stuff I try to apply them to.
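For context, JavaScript's runtime coerces all of these operands with ToNumber, which may explain the checker's permissiveness — a quick sketch of the runtime results:

```javascript
const func = () => "func";
const obj = { foo: "bar" };

// Strings are parsed; unary - really does negate numeric strings.
console.log(+"3"); // 3
console.log(-"3"); // -3

// Functions and plain objects coerce to NaN -- almost certainly a bug in
// the calling code, which is the argument for flagging them.
console.log(+func); // NaN
console.log(-obj);  // NaN
console.log(+[1, 2]); // NaN (but +[] is 0 and +[7] is 7)
```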
---
I was made aware of this behavior by [a recent Stack Overflow question](https://stackoverflow.com/questions/47361859/typescript-error-on-implicit-type-conversion). I see from #7019 that unary `+` and `-` on `string`s is supported, and I guess I can understand that as a way to parse a `string` into a `number` (for `+`, anyway. Is `-` really used like this in practice?). But functions? objects? Seems more likely to be an error than intentional to me. Thoughts? | Suggestion,Revisit | medium | Critical |
275,100,463 | javascript | Explain UTF-8 BOM rule in readme | Most style decisions are explained in the readme, but I couldn't find the reasoning on why a BOM is considered bad. Where I've looked:
* [the patch](https://github.com/airbnb/javascript/commit/d9cb343b518349982f3b094f2ce48eda8347e6c7) that enabled it, found no commit message body.
* the current code of the rules file, https://github.com/airbnb/javascript/blob/8cf2c70a4164ba2dad9a79e7ac9021d32a406487/packages/eslint-config-airbnb-base/rules/style.js#L472-L474
* [the eslint rule page](https://eslint.org/docs/rules/unicode-bom) mentioned in the patch code comment, doesn't claim it's bad.
* searched the readme for "BOM", "unicode" and "byte order"
* searched issue tracker for "BOM", "unicode" and "byte order"
Could someone explain it, or add search keywords to make the explanation easier to find?
Update: Also, is there a recommendation on how to declare the file encoding instead? I searched the readme for "charset", "encod" and "character set", but found no matches. | question | low | Major |
275,104,285 | youtube-dl | [pbs] Support shows | ## Please follow the guide below
- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
- Use the *Preview* tab to see what your issue will actually look like
---
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2017.11.15*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x ] I've **verified** and **I assure** that I'm running youtube-dl **2017.11.15**
### Before submitting an *issue* make sure you have:
- [x ] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [ x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your *issue*?
- [ x] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue*
---
Youtube-dl claims to be unable to download a web page.
Note that I checked both the show page, and since it was a javascript page, I went into the episode listing page as well.
NB: Today is the first time I've tried to get anything from PBS. So far, I am 0 for 1.
```
keybounceMBP:PBS michael$ youtube-dl --get-filename http://www.pbs.org/show/nature/
WARNING: Unable to download webpage: HTTP Error 404: Not Found
WARNING: Unable to download webpage: HTTP Error 404: Not Found
ERROR: No video formats found; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
keybounceMBP:PBS michael$ youtube-dl -v --get-filename http://www.pbs.org/show/nature/
[debug] System config: []
[debug] User config: ['-k', '--hls-prefer-native', '--abort-on-unavailable-fragment', '-o', '%(series)s/s%(season_number)02d-e%(episode_number)02d-%(title)s.%(ext)s', '-f', '\nbest[ext=mp4][height>431][height<=576]/\nbestvideo[ext=mp4][height=480]+bestaudio[ext=m4a]/\nbest[ext=mp4][height>340][height<=431]/\nbestvideo[ext=mp4][height>360][height<=576]+bestaudio/\nbest[height>340][height<=576]/\nbestvideo[height>360][height<=576]+bestaudio/\nbestvideo[height=360]+bestaudio/\nbest[ext=mp4][height>=280][height<=360]/\nbest[height<=576]/\nworst', '--ap-mso', 'Dish', '--ap-username', 'PRIVATE', '--ap-password', 'PRIVATE', '--write-sub', '--write-auto-sub', '--sub-lang', 'en,enUS,en-us', '--sub-format', 'ass/srt/best', '--convert-subs', 'ass', '--embed-subs', '--mark-watched', '--download-archive', 'downloaded-videos.txt']
[debug] Custom config: []
[debug] Command-line args: ['-v', '--get-filename', 'http://www.pbs.org/show/nature/']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2017.11.15
[debug] Python version 3.6.3 - Darwin-13.4.0-x86_64-i386-64bit
[debug] exe versions: ffmpeg 3.3.4, ffprobe 3.3.4, rtmpdump 2.4
[debug] Proxy map: {}
WARNING: Unable to download webpage: HTTP Error 404: Not Found
WARNING: Unable to download webpage: HTTP Error 404: Not Found
ERROR: No video formats found; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
File "/Users/michael/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 784, in extract_info
ie_result = ie.extract(url)
File "/Users/michael/bin/youtube-dl/youtube_dl/extractor/common.py", line 437, in extract
ie_result = self._real_extract(url)
File "/Users/michael/bin/youtube-dl/youtube_dl/extractor/pbs.py", line 589, in _real_extract
self._sort_formats(formats)
File "/Users/michael/bin/youtube-dl/youtube_dl/extractor/common.py", line 1075, in _sort_formats
raise ExtractorError('No video formats found')
youtube_dl.utils.ExtractorError: No video formats found; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
keybounceMBP:PBS michael$ youtube-dl -v --get-filename http://www.pbs.org/show/nature/episodes/
[debug] System config: []
[debug] User config: ['-k', '--hls-prefer-native', '--abort-on-unavailable-fragment', '-o', '%(series)s/s%(season_number)02d-e%(episode_number)02d-%(title)s.%(ext)s', '-f', '\nbest[ext=mp4][height>431][height<=576]/\nbestvideo[ext=mp4][height=480]+bestaudio[ext=m4a]/\nbest[ext=mp4][height>340][height<=431]/\nbestvideo[ext=mp4][height>360][height<=576]+bestaudio/\nbest[height>340][height<=576]/\nbestvideo[height>360][height<=576]+bestaudio/\nbestvideo[height=360]+bestaudio/\nbest[ext=mp4][height>=280][height<=360]/\nbest[height<=576]/\nworst', '--ap-mso', 'Dish', '--ap-username', 'PRIVATE', '--ap-password', 'PRIVATE', '--write-sub', '--write-auto-sub', '--sub-lang', 'en,enUS,en-us', '--sub-format', 'ass/srt/best', '--convert-subs', 'ass', '--embed-subs', '--mark-watched', '--download-archive', 'downloaded-videos.txt']
[debug] Custom config: []
[debug] Command-line args: ['-v', '--get-filename', 'http://www.pbs.org/show/nature/episodes/']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2017.11.15
[debug] Python version 3.6.3 - Darwin-13.4.0-x86_64-i386-64bit
[debug] exe versions: ffmpeg 3.3.4, ffprobe 3.3.4, rtmpdump 2.4
[debug] Proxy map: {}
WARNING: Unable to download webpage: HTTP Error 404: Not Found
WARNING: Unable to download webpage: HTTP Error 404: Not Found
ERROR: No video formats found; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
File "/Users/michael/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 784, in extract_info
ie_result = ie.extract(url)
File "/Users/michael/bin/youtube-dl/youtube_dl/extractor/common.py", line 437, in extract
ie_result = self._real_extract(url)
File "/Users/michael/bin/youtube-dl/youtube_dl/extractor/pbs.py", line 589, in _real_extract
self._sort_formats(formats)
File "/Users/michael/bin/youtube-dl/youtube_dl/extractor/common.py", line 1075, in _sort_formats
raise ExtractorError('No video formats found')
youtube_dl.utils.ExtractorError: No video formats found; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
keybounceMBP:PBS michael$
```
| site-support-request | low | Critical |
275,105,597 | go | time: Time.Format does not provide space padding for numbers other than days | Please answer these questions before submitting your issue. Thanks!
### What version of Go are you using (`go version`)?
`go version go1.9.2 darwin/amd64`
### Does this issue reproduce with the latest release?
I'm not sure I'm running the latest release of Go, but I just ran `brew upgrade go` on my macOS laptop and it says `go 1.9.2 already installed`.
### What operating system and processor architecture are you using (`go env`)?
```
GOARCH="amd64"
GOOS="darwin"
```
### What did you do?
I was trying to write a library that gives programmers who want a Python-like `strftime` method on `time.Time` a nice `Strftime` function. While writing test code to check that paddings work, I found that there is no space (literal ` `) padding for 'month' and 'hour'; only 'day' has it.
To explain what happens, here's Go playground's link: https://play.golang.org/p/cAgWgVD5Tu
As you can see, the underscore (`_`) layout string doesn't work for month, hour, minute, and second.
### What did you expect to see?
In the above playground, I expected ` 9 4` for month and day.
### What did you see instead?
But I've got `_9 4` instead. Same for month, hour, minute and second.
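For reference, the behavior the reporter expects from a `_`-style layout verb is plain space padding to width 2, illustrated here in Python only because that is the `strftime`-style formatting the library is emulating (the helper name is mine):

```python
def space_pad2(n):
    """Space-pad an integer to width 2, as Go's `_2` day layout does."""
    return f"{n:2d}"
```

So a month of September would render as `" 9"` rather than `"_9"`, matching what the playground example expected.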
I would like to patch the `time` package, and I'd like to know whether this issue has already been raised. Thanks.
| help wanted,NeedsInvestigation,FeatureRequest | low | Major |
275,126,014 | rust | rustc "unexpected panic"; no /proc/self/exe inside a debootstrap chroot | I installed rustc in a Debian chroot, as per https://wiki.debian.org/Debootstrap
```
sudo debootstrap testing /debian-testing-chroot https://deb.debian.org/debian/
```
Inside that chroot, rustc crashes.
```
# rustc hello.rs
Can't read /proc/cpuinfo: No such file or directory
error: internal compiler error: unexpected panic
note: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/blob/master/CONTRIBUTING.md#bug-reports
note: rustc 1.21.0 running on x86_64-unknown-linux-gnu
thread 'rustc' panicked at 'failed to get current_exe: no /proc/self/exe available. Is /proc mounted?', src/librustc/session/filesearch.rs:169:22
note: Run with `RUST_BACKTRACE=1` for a backtrace.
# ls /proc
#
```
Sure, I can tinker with my chroot configuration to flesh out /proc, but as rustc's output said, "the compiler unexpectedly panicked. this is a bug". | O-linux,I-ICE,T-compiler,C-bug | low | Critical |
275,130,688 | rust | Allow showing the expanded macros of only a given module / source file | When working with many macros, looking at the expanded macro output of the whole crate doesn't scale.
There should be an option to only show the expanded macros from a given module / source file.
E.g. in one case I have 81733 lines in the expanded macro output from `cargo rustc -- -Z unstable-options --pretty=expanded`... | C-feature-request | low | Major |
275,153,823 | go | crypto/rsa: linux/arm64 Go 1.9 performance is +10X slower than OpenSSL | Please answer these questions before submitting your issue. Thanks!
### What version of Go are you using (`go version`)?
go version go1.9.2 linux/arm64
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
GOARCH="arm64"
GOBIN=""
GOEXE=""
GOHOSTARCH="arm64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH=""
GORACE=""
GOROOT="/usr/lib/go-1.6"
GOTOOLDIR="/usr/lib/go-1.6/pkg/tool/linux_arm64"
GO15VENDOREXPERIMENT="1"
CC="gcc"
GOGCCFLAGS="-fPIC -pthread -fmessage-length=0"
CXX="g++"
CGO_ENABLED="1"
### What did you do?
go test crypto/rsa -bench .
### What did you expect to see?
Performance can be on par with OpenSSL (https://blog.cloudflare.com/content/images/2017/11/pub_key_1_core-2.png)
### What did you see instead?
+10X slower than OpenSSL (https://blog.cloudflare.com/content/images/2017/11/go_pub_key_1_core.png)
| Performance,help wanted,NeedsFix | low | Major |
275,201,553 | node | domain migration to async_hooks | The PR https://github.com/nodejs/node/pull/16222, contains the first step by @vdeturckheim to remove `domain` from the deep layers of nodecore and implement it on top of `async_hooks`. This PR just landed, thus I'm opening this issue to discuss the next steps.
- [Deprecate and remove `.domain` support from `MakeCallback`](https://github.com/nodejs/node/blob/97ba69f91543f89d389a4f3fef57c5c6c734df34/src/node.cc#L855L882). Implementers should use `async_context`. What will our deprecation and removal strategy for this be? _Note: we would like to ultimately remove this as it adds quite a bit of extra complexity._
- [x] Deprecate `.domain` support from `MakeCallback` (PR: https://github.com/nodejs/node/pull/17417)
- [ ] Remove `.domain` support from `MakeCallback`
- [x] [Remove special domain code in exception handling](https://github.com/nodejs/node/blob/97ba69f91543f89d389a4f3fef57c5c6c734df34/src/node.cc#L784L851). ~~It is unclear what the migration strategy for this is.~~ (PR: https://github.com/nodejs/node/pull/17159)
- [x] [Remove domain integration in events](https://github.com/nodejs/node/blob/283b949404745314801462f48e379397545bdde2/lib/events.js#L152L168). (PR: https://github.com/nodejs/node/pull/17403, PR: https://github.com/nodejs/node/pull/17588)
- Check for leftovers. Examples:
- [x] [timers](https://github.com/nodejs/node/blob/4503da8a3a3b0b71d950a63de729ce495965f6ea/lib/timers.js#L266L275) (PR: https://github.com/nodejs/node/pull/17880)
- [x] [env declarations](https://github.com/nodejs/node/blob/4503da8a3a3b0b71d950a63de729ce495965f6ea/src/env.h#L274) (PR: https://github.com/nodejs/node/pull/18291)
- [x] Track context with `async_hooks` (PR: https://github.com/nodejs/node/pull/16222)
/cc @addaleax @vdeturckheim | domain,async_hooks | medium | Critical |
275,285,052 | pytorch | Add SGDR, SGDW, AdamW and AdamWR | Recently, there are two papers from [Ilya Loshchilov](http://ml.informatik.uni-freiburg.de/people/loshchilov/index.html) and [Frank Hutter](http://www2.informatik.uni-freiburg.de/~hutter/).
[SGDR: Stochastic Gradient Descent with Warm Restarts](https://arxiv.org/abs/1608.03983) introduces a learning rate schedule that cosine-anneals the rate within each training period and then restarts it. With only a few hyperparameters it improves state-of-the-art results considerably. It has been added to TensorFlow as [tf.train.cosine_decay](https://www.tensorflow.org/versions/master/api_docs/python/tf/train/cosine_decay)
Their recent work, [Fixing Weight Decay Regularization in Adam](https://arxiv.org/abs/1711.05101), fixes a widespread misunderstanding about using Adam together with weight decay. It has been publicly endorsed by the author of [Adam](https://twitter.com/dpkingma/status/930849593767608320).
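For reference, the cosine-annealing schedule from the SGDR paper is small enough to sketch directly; this is a minimal stand-alone sketch of the paper's formula, not a proposed pytorch API (the function and parameter names are mine):

```python
import math

def sgdr_lr(eta_min, eta_max, t_cur, t_i):
    """SGDR learning rate within one restart period (the paper's
    cosine-annealing formula).

    eta_min, eta_max: learning-rate range; t_cur: epochs since the last
    (re)start; t_i: length of the current period. On a warm restart,
    t_cur resets to 0 and t_i is typically multiplied by a constant factor.
    """
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * t_cur / t_i))
```

The rate starts at `eta_max`, decays along a half cosine to `eta_min` at the end of the period, then jumps back up at the restart.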
I think pytorch should add these features as well.
cc @vincentqb | module: optimizer,triaged | medium | Major |
275,323,899 | opencv | How to load and save a tiff image with CMYK channel, and keep it unchanged | I read a TIFF image and then save it as PNG.
However, the colors look different.
I want to know how to load and save a TIFF image with CMYK channels and keep the colors unchanged.
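The color shift is expected: PNG has no CMYK mode, so when a 4-channel CMYK TIFF is written as PNG the samples get reinterpreted unless they are converted first. One naive (non-ICC, hence approximate) per-pixel conversion to RGB, sketched here as an assumption about what is wanted, looks like:

```python
def cmyk_to_rgb(c, m, y, k):
    """Naive CMYK -> RGB conversion for 8-bit samples, ignoring ICC profiles."""
    c, m, y, k = (v / 255.0 for v in (c, m, y, k))
    r = round(255 * (1 - c) * (1 - k))
    g = round(255 * (1 - m) * (1 - k))
    b = round(255 * (1 - y) * (1 - k))
    return r, g, b
```

A profile-aware conversion (via an ICC transform) would be more faithful; and to keep the data truly unchanged, the file has to be written back as TIFF, since PNG cannot store CMYK.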
```
p = "image.tif"
origin = cv2.imread(p, cv2.IMREAD_UNCHANGED)
cv2.imwrite("test.png", origin)
``` | category: imgcodecs,incomplete | low | Major |
275,346,031 | pytorch | Make pytest stop printing docstrings in its default diagnostic output | Sample:
```
_________________________________ TestNN.test_ConvTranspose3d_dilated_cuda ________________________[67/1868]
self = <test_nn.TestNN testMethod=test_ConvTranspose3d_dilated_cuda>
test = <test_nn.NewModuleTest object at 0x7f0b865920b8>
> setattr(TestNN, cuda_test_name, lambda self, test=test: test.test_cuda(self))
test_nn.py:3636:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
common_nn.py:850: in test_cuda
gpu_gradInput = test_case._backward(gpu_module, gpu_input, gpu_output, gpu_gradOutput)
test_nn.py:237: in _backward
output.backward(grad_output, retain_graph=True)
../torch/autograd/variable.py:168: in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
variables = (Variable containing:
(0 ,0 ,0 ,.,.) =
-5.6280e-02 -1.3005e-01 3.2031e-01 -4.1098e-03 2.5253e-01 1.3095e-01
9.9...4e-01 1.4101e-01 3.8256e-01 -1.2257e-01 3.1663e-03 -4.4487e-01
[torch.cuda.FloatTensor of size 1x3x6x9x6 (GPU 0)]
,)
grad_variables = (Variable containing:
(0 ,0 ,0 ,.,.) =
0 1 0 0 1 1
0 1 1 0 1 0
1 0 1 0 1 0
... 1 0 1 1
0 0 0 0 1 1
0 1 1 1 1 0
[torch.cuda.FloatTensor of size 1x3x6x9x6 (GPU 0)]
,)
retain_graph = True, create_graph = False, retain_variables = None
def backward(variables, grad_variables=None, retain_graph=None, create_graph=None, retain_variables=None
):
"""Computes the sum of gradients of given variables w.r.t. graph leaves.
The graph is differentiated using the chain rule. If any of ``variables``
are non-scalar (i.e. their data has more than one element) and require
gradient, the function additionally requires specifying ``grad_variables``.
```
I guess it is trying to be helpfully chatty, but I don't need to see a big pile of docstring...
cc @mruberry | module: tests,triaged,enhancement | low | Major |
275,350,810 | go | encoding/json: include field name in unmarshal error messages when extracting time.Time | Split off from #6716.
### What version of Go are you using (`go version`)?
go1.9.2 linux/amd64
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
GOARCH="amd64"
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
### What did you do?
run https://play.golang.org/p/YnlDi-3DMP
### What did you expect to see?
Field name with error message
### What did you see instead?
Error without field name
| NeedsInvestigation | low | Critical |
275,370,559 | neovim | 'sessionoptions': restore quickfix window |
- `nvim --version`: v0.2.2
- Operating system/version: archlinux64
- Terminal name/version: termite
- `$TERM`: xterm-termite
### Steps to reproduce using `nvim -u NORC`
```
nvim -u NORC
:vsplit
C-w l
:copen
:mksession session.vim
:qa
nvim -u NORC -s session.vim
```
### Actual behaviour
The quickfix window (bottom right) is gone
### Expected behaviour
The quickfix window is restored. For my workflow there is some rationale where I place that window, so I would like to see it being restored just like other windows / buffers. | enhancement | low | Major |
275,388,492 | godot | Spatial/Node2D locks should not be saved through metadata | **Operating system or device, Godot version, GPU Model and driver (if graphics related):**
450bdda97a933ed9081efa20bddc71ee0b8286c9
**Issue description:**
According to the discussion in #12359, it seems locks should not be saved inside scene metadata anymore. Instead there should be a new bool property inside Node2D/Spatial that is not visible in the inspector. As far as we discussed on IRC, this could still be considered for 3.0 | enhancement,topic:core,usability | low | Major |
275,409,653 | kubernetes | Make kubectl Scale function for RS, Deployments, Job, etc watch-based | Offshoot of https://github.com/kubernetes/kubernetes/issues/56064#issuecomment-345738834
As part of the kubectl Scale() function, we can optionally use waitForReplicas to wait until the replica count reaches the desired value.
Currently, we're polling periodically to check whether that has happened (for RS, jobs, etc). We should move them to use Watch instead - like we're doing for RCs currently:
https://github.com/kubernetes/kubernetes/blob/98fb71e8ce2caa3062f3f0b0ed4ab470683f5578/pkg/kubectl/scale.go#L233-L246
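The intended shape of the change, sketched language-agnostically in Python rather than as kubectl's actual Go implementation (all names here are illustrative): instead of waking up on a poll timer, consume a stream of update events as a watch delivers them and return as soon as the desired count is observed.

```python
def wait_for_replicas(watch_events, desired):
    """Block on a stream of observed replica counts (as a watch would
    deliver them) and return once the desired count is reached.

    No poll timer: the loop only advances when the watch emits an event.
    """
    for observed in watch_events:  # each event carries the object's new state
        if observed == desired:
            return observed
    raise TimeoutError("watch closed before replicas reached desired count")
```

This reacts immediately to the final event instead of waiting out the remainder of a polling interval.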
/area kubectl
/kind cleanup
| kind/cleanup,area/kubectl,sig/autoscaling,sig/apps,lifecycle/frozen | medium | Major |
275,413,921 | godot | Cache calls to set custom clipping rectangles on a CanvasItem | For instance in this snipped of code:
void MyCustomItem::_notification(int p_what) {
case NOTIFICATION_DRAW: {
// Enable clipping
VisualServer::get_singleton()->canvas_item_set_clip(get_canvas_item(), true);
draw_background();
// Clip to margins
VisualServer::get_singleton()->canvas_item_set_custom_rect(get_canvas_item(), true, Rect2(
Point2(
margin_left,
margin_top
),
Size2(
width - margin_left - margin_right,
height - margin_top - margin_bottom
)
));
draw_stuff();
break;
}
}
I need `draw_background()` to paint something within the CanvasItem clipping area and `draw_stuff()` to draw everything else within some margin. Such thing would be possible if calls to `canvas_item_set_clip()` and `canvas_item_set_custom_rect()` were cached.
| enhancement,topic:core,topic:2d | low | Minor |
275,460,917 | flutter | single finger scale gesture | `onScaleUpdate` is being triggered for a `GestureDetector` even when the user is touching the screen with only a single finger. In this case, `details.scale` is `1.0`. I don't know the behavior of iOS or Android, but it seems like scale update should not be triggered for a single finger.
This is a problem because when a user is placing or removing two fingers on the screen, often one finger will remain for a brief time, which will reset the scale factor and cause ugly jank.
The workaround would be for the developer to listen for `onScaleStart` and `onScaleEnd` and based on those listen for `onScaleUpdate` only during the time scale is active.
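That workaround's gating logic can be sketched language-agnostically; this is illustrative Python (the Dart wiring in a `GestureDetector` would be analogous, and all names here are mine): updates are forwarded only between a scale start and end, and only while at least two pointers are down.

```python
class ScaleGate:
    """Forward scale updates only while a scale gesture is active and at
    least two pointers are down; drop single-finger updates (sketch)."""

    def __init__(self, on_update):
        self.on_update = on_update
        self.active = False
        self.pointers = 0

    def pointer_down(self):
        self.pointers += 1

    def pointer_up(self):
        self.pointers = max(0, self.pointers - 1)

    def scale_start(self):
        self.active = True

    def scale_end(self):
        self.active = False

    def scale_update(self, scale):
        # Ignore the spurious single-finger updates described above.
        if self.active and self.pointers >= 2:
            self.on_update(scale)
```

With this gating, the lone finger that lingers while the second one lifts no longer resets the scale factor.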
See also issue #13101 | framework,f: gestures,has reproducible steps,P2,found in release: 3.7,found in release: 3.9,team-framework,triaged-framework | low | Critical |
275,476,345 | go | net/http: set response headers on TimeoutHandler timeout | I was recently suprised to discover that [`http.TimeoutHandler`] returns a blob of HTML (`"<html><head><title>Timeout</title></head><body><h1>Timeout</h1></body></html>"`) as the default message body. It does not attempt to set the appropriate Content-Type header and if you write a response yourself there is no option to set the content type.
The documentation says:
> (If msg is empty, a suitable default message will be sent.)
It would be nice if the format of the default message could be mentioned in the documentation, the default could be removed, or the Content-Type could be set (or some combination of the above).
[`http.TimeoutHandler`]: https://golang.org/pkg/net/http/#TimeoutHandler
/cc @bradfitz | NeedsDecision,FeatureRequest | low | Major |
275,525,998 | neovim | UI: perf: avoid redraw for completed 'showcmd' | It was noticed that when 'showcmd' is enabled, even completed commands (or trivial commands such as `j` or ESC) cause a screen update, which is then cleared by an immediate redraw.
- RPC: `nvim_input('<esc>')`
- One of the updates is: `#("cursor_goto" #(9 21)) #("put" #("^") #("[")) #("cursor_goto" #(0 0)))`
Sometimes you can even see this in a slow terminal, by holding down `j`, for example.
Perhaps we can avoid the screen update entirely. | performance,ui | low | Major |
275,528,368 | puppeteer | expose timing information | Exposing timing information would greatly benefit performance scenarios.
We shoud:
1. add `DOMContentLoaded` and `NavigationStart` metrics to the `Page.metrics()` struct
2. expose timing information for requests/response objects
There's currently a suspicion that [1] is not aligned with [2] in the protocol. We, however, should pick a single time scale and use it throughout the API. | feature,chromium | low | Major |