id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---|
314,187,529 | go | path/filepath: TestEvalSymlinks tests with "/" prefix fail in Windows | ### What version of Go are you using (`go version`)?
go version go1.10.1 windows/amd64
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
set GOARCH=amd64
set GOHOSTARCH=amd64
set GOHOSTOS=windows
set GOOS=windows
### What did you do?
Built go using "all.bat"
### What did you expect to see?
while running tests
ok path/filepath x.xxxs
### What did you see instead?
--- FAIL: TestEvalSymlinks (0.04s)
path_test.go:798: EvalSymlinks("C:\\Users\\varga\\AppData\\Local\\Temp\\evalsymlink400713581\\test/linkabs") returns "C:\\", want "/"
FAIL
FAIL path/filepath 1.552s
I have a code fix for this: essentially just prepend the expected result with the volume name if we are on Windows and the path has a prefix of "/".
Does that sound reasonable? | help wanted,OS-Windows,NeedsInvestigation | low | Major |
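The fix the reporter sketches could look roughly like this (a minimal illustration, not the actual patch; `prependVolume` is a hypothetical helper):

```go
package main

import (
	"fmt"
	"strings"
)

// prependVolume adjusts an expected test result the way the reporter
// describes: if the wanted path has a "/" prefix, prepend the volume
// name of the evaluated path (e.g. "C:") on Windows.
func prependVolume(vol, want string) string {
	if strings.HasPrefix(want, "/") {
		return vol + want
	}
	return want
}

func main() {
	fmt.Println(prependVolume("C:", "/"))    // C:/
	fmt.Println(prependVolume("C:", "link")) // link
}
```

A real patch would derive `vol` with `filepath.VolumeName` and apply this only when `runtime.GOOS == "windows"`.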
314,187,984 | TypeScript | Describe object vs Object in the Handbook | https://www.typescriptlang.org/docs/handbook/basic-types.html should cover the built-in `object` type and have a brief discussion of it vs `Object`
See #20614 | Docs | low | Minor |
314,190,638 | vscode | Windows update failed: Access is denied | Log from %HOME%\AppData\Local\Temp\vscode-inno-updater.log
```
Apr 13 11:27:40.666 INFO Starting: C:\Program Files\Microsoft VS Code\Code.exe, false
Apr 13 11:27:40.670 INFO Checking for running Code.exe processes... (attempt 1)
Apr 13 11:27:40.670 INFO Code.exe is running, wait a bit
Apr 13 11:27:41.148 INFO Checking for running Code.exe processes... (attempt 2)
Apr 13 11:27:41.153 INFO Code.exe is running, wait a bit
Apr 13 11:27:41.654 INFO Checking for running Code.exe processes... (attempt 3)
Apr 13 11:27:41.666 INFO Code.exe is not running
Apr 13 11:27:41.666 INFO Starting update, silent = false
Apr 13 11:27:41.691 INFO do_update: "C:\\Program Files\\Microsoft VS Code\\Code.exe", _
Apr 13 11:27:41.692 INFO move_update: "C:\\Program Files\\Microsoft VS Code\\unins000.dat", _
Apr 13 11:27:41.696 INFO Delete: "Code.exe" (attempt 1)
Apr 13 11:27:41.744 INFO Delete: "Code.exe" (attempt 2)
Apr 13 11:27:41.945 INFO Delete: "Code.exe" (attempt 3)
Apr 13 11:27:42.396 INFO Delete: "Code.exe" (attempt 4)
Apr 13 11:27:43.197 INFO Delete: "Code.exe" (attempt 5)
Apr 13 11:27:44.448 INFO Delete: "Code.exe" (attempt 6)
Apr 13 11:27:46.249 INFO Delete: "Code.exe" (attempt 7)
Apr 13 11:27:48.700 INFO Delete: "Code.exe" (attempt 8)
Apr 13 11:27:51.901 INFO Delete: "Code.exe" (attempt 9)
Apr 13 11:27:55.952 INFO Delete: "Code.exe" (attempt 10)
Apr 13 11:28:00.953 INFO Delete: "Code.exe" (attempt 11)
Apr 13 11:28:00.953 ERRO Access is denied. (os error 5)
```
On Windows 10 64 bit | bug,install-update,windows | high | Critical |
314,191,034 | three.js | Request function Render To Depth | A function similar to renderer.render is needed, but with the ability to render the scene to depth.
Why is the standard approach not appropriate?
```
scene.overrideMaterial = overrideMaterial;
renderer.render (scene, camera, ...);
scene.overrideMaterial = null;
```
because with this method a skinned mesh is drawn in a static position, i.e. without animation.
Why plain use is not suitable either:
`renderer.render (scene, camera, ...);`
because some objects have their own CustomDepthMaterial, and because an object's main shader can be too heavy while it is not needed for the depth pass.
How did I do this? I used something between **renderer.render and WebGLShadowMap.renderObject**, because WebGLShadowMap.renderObject implements exactly what should be done in WebGLRenderer.
I just do not want to re-apply my edits after every update of the engine, so I ask you to add this; I think it will be useful to many, as what exists now is not enough. | Enhancement | low | Minor |
314,194,066 | terminal | Add an option to toggle the width of ambiguous-width characters | This bug-tracker is monitored by Windows Console development team and other technical types. **We like detail!**
If you have a feature request, please post to [the UserVoice](https://wpdev.uservoice.com/forums/266908).
> **Important: When reporting BSODs or security issues, DO NOT attach memory dumps, logs, or traces to Github issues**. Instead, send dumps/traces to [email protected], referencing this GitHub issue.
Please use this form and describe your issue, concisely but precisely, with as much detail as possible
* Your Windows build number: (Type `ver` at a Windows Command Prompt)
10.0.16299.371
* What you're doing and what's happening: (Copy & paste specific commands and their output, or include screen shots)
1. Set font to ”MS Gothic” or a Japanese monospace font
2. Type ”■square□square” or another text that contains ambiguous-width characters like ”■” and ”┌”
3. Ambiguous-width characters overlap the characters next to them
* What's wrong / what should be happening instead:
Some characters like ”■” and ”┌” have the same width as ASCII characters in Latin fonts but double width in Japanese (and possibly other languages') fonts. These characters are called ambiguous-width.
Other terminal emulators have an option to toggle the width of them, but Windows Console doesn't.
EDIT: also see #16779 | Issue-Feature,Product-Conhost,Help Wanted,Product-Conpty,Area-Server,Area-Settings,Product-Terminal | low | Critical |
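The requested option boils down to a width function with a toggle, something like this sketch (the `ambiguous` set here is a tiny illustrative subset; a real implementation would consult the full Unicode EastAsianWidth table):

```go
package main

import "fmt"

// ambiguous holds a few East Asian ambiguous-width runes for illustration.
var ambiguous = map[rune]bool{'■': true, '□': true, '┌': true}

// runeWidth returns the display width of r in cells. wideAmbiguous is
// the option the issue asks for: treat ambiguous-width characters as
// double width (as Japanese fonts render them).
func runeWidth(r rune, wideAmbiguous bool) int {
	if ambiguous[r] && wideAmbiguous {
		return 2
	}
	return 1
}

func main() {
	fmt.Println(runeWidth('■', true))  // 2
	fmt.Println(runeWidth('■', false)) // 1
	fmt.Println(runeWidth('a', true))  // 1
}
```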
314,258,581 | vue | v-once for component tag doesn't work in v-for | ### Version
2.5.16
### Reproduction link
[https://jsfiddle.net/hL0rrbs9/6/](https://jsfiddle.net/hL0rrbs9/6/)
### Steps to reproduce
Run code, and watch.
### What is expected?
"Yay Yay ;)" values should not change to "hell naw!" in 3 seconds.
### What is actually happening?
"Yay Yay ;)" values are changing in to "hell naw!" in 3 seconds.
---
<h2>
Wait 3 seconds. Behavior is not consistent. "component" tag in v-for should not change.
</h2>
<div id="app">
<component :is="comp" v-once></component>
<p v-once>{{comp}}</p>
<div v-for="n in items" :key="n.id">
<component :is="comp" v-once></component>
<p v-once>{{comp}}</p>
</div>
</div>
<script>
var z = new Vue({
el: "#app",
data: {
comp: "comp1",
items: [{id:1}, {id:2}]
},
components: {
"comp1": {
template: "<p style='background:green;color:white'>yay yay ;)</p>"
},
"comp2": {
template: "<p style='background:red;color:white'>hell naw!</p>"
}
}
});
setTimeout(function() {
z.comp = "comp2"
}, 3000);
</script>
<!-- generated by vue-issues. DO NOT REMOVE --> | bug,has PR | low | Minor |
314,276,107 | go | cmd/go: list command crashes on testdata packages under vendor | If the Go files in a `testdata` package import another package found in `vendor` directory, running `go list` command in the `testdata` package will result in an error:
```
unexpected directory layout:
import path: p
root: /Users/zplin/gocode/src
dir: /Users/zplin/gocode/src/go_examples/vendor/p
expand root: /Users/zplin/gocode/src
expand dir: /Users/zplin/gocode/src/go_examples/vendor/p
separator: /
```
However, if the `testdata` package doesn't import any package from `vendor`, the `go list` command works fine.
### What version of Go are you using (`go version`)?
go version go1.10.1 darwin/amd64
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
GOARCH="amd64"
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
### What did you do?
```
git clone [email protected]:linzhp/go_examples.git
cd go_examples/testdata
go list
```
### What did you expect to see?
`_/Users/zplin/gocode/src/buck_go_examples/testdata`
### What did you see instead?
```
unexpected directory layout:
import path: p
root: /Users/zplin/gocode/src
dir: /Users/zplin/gocode/src/go_examples/vendor/p
expand root: /Users/zplin/gocode/src
expand dir: /Users/zplin/gocode/src/go_examples/vendor/p
separator: /
```
| NeedsInvestigation | low | Critical |
314,278,022 | TypeScript | Support a @nonnull/@nonnullable JSDoc assertion comment | This could be used in place of the non-null assertion operator, and solve #23403.
```js
while (queue.length) {
(/** @nonnull */ queue.pop())();
}
```
Related is #23217, which tracks definite assignment assertions. | Suggestion,Committed,Domain: JSDoc,Domain: JavaScript | medium | Critical |
314,280,956 | vscode | Add wordwrap indicator | There's currently no visual indication when you use the `"editor.wordWrap": "on"` setting.
I'd propose optionally adding a wordwrap-indicator on each newline, something like this:

so it becomes easy to spot when a line has been wordwrapped. | feature-request,editor-wrapping | high | Critical |
314,317,401 | vue | Creating a component named map should warn the user (as with button) | ### Version
2.5.15
### Reproduction link
[https://jsfiddle.net/e2yxoomh/2/](https://jsfiddle.net/e2yxoomh/2/)
### Steps to reproduce
Create a component that includes the word "map" (case insensitive). For example:
- Map
- MapView
- mapper
### What is expected?
I expect these to work, or at least give me some sort of error message.
### What is actually happening?
Nothing happens. The components do not render. There is no error message.
<!-- generated by vue-issues. DO NOT REMOVE --> | contribution welcome,improvement,good first issue,has PR | medium | Critical |
314,364,165 | go | x/build/maintner: GerritMessage doesn't include inline comments | Consider [CL 97058](https://golang.org/cl/97058) as an example. It has reviews with inline comments. For example, here's one by Andrew:

[`GerritMessage`](https://godoc.org/golang.org/x/build/maintner#GerritMessage) structure has the [`Message string`](https://godoc.org/golang.org/x/build/maintner#GerritMessage.Message) field:
```Go
// Message is the raw message contents from Gerrit (a subset
// of the raw git commit message), starting with "Patch Set
// nnnn".
Message string
```
But it doesn't expose the inline comments that were a part of that message. Here's the corresponding `GerritMessage` from `maintner.Corpus`:
```Go
(*maintner.GerritMessage)({
Meta: (*maintner.GitCommit)({GitCommit 9298d2c50518c3444f01b95aae9579f9d3bdb30d}),
Version: (int32) 1,
Message: (string) (len=26) "Patch Set 1:\n\n(4 comments)",
Date: (time.Time) 2018-02-26 05:16:52 +0000 +0000,
Author: (*maintner.GitPerson)({
Str: (string) (len=61) "Andrew Bonventre <22285@62eb7196-b449-3ce5-99f1-c037f21e1705>"
})
}),
```
This is a feature request and a tracking issue for it. I imagine this should be in scope, since CL comment bodies are already included. /cc @bradfitz @andybons | Builders,NeedsDecision,FeatureRequest | low | Major |
314,375,674 | rust | No compiler error when attempting to change field of const struct | There is no compiler error, when trying to change the field of a ```const``` struct.
I tried this code:
```rust
struct StructA {
pub a: u32,
}
const A: StructA = StructA{a: 0};
fn main() {
A.a = 10;
println!("{}", A.a);
}
```
I expected to see this happen:
Compiler error, because I try to change the value of a ```const```.
Instead, this happened:
The statement is just ignored. ```A.a = 10;``` looks like it sets ```A.a``` to ```10```; since there is no compiler error, I expect that to happen.
Instead when printing the value of ```A.a```, it is still the initial value ```0```
## Meta
`rustc --version --verbose`:
```console
rustc 1.24.0-nightly (8e7a609e6 2018-01-04)
binary: rustc
commit-hash: 8e7a609e635b728eba65d471c985ab462dc4cfc7
commit-date: 2018-01-04
host: x86_64-apple-darwin
release: 1.24.0-nightly
LLVM version: 4.0
``` | C-enhancement,A-lints,T-compiler | low | Critical |
314,376,492 | go | x/build/maintner: occasional unexpected updates for specific issues/PRs | I created a [`NewNetworkMutationSource`](https://godoc.org/golang.org/x/build/maintner#NewNetworkMutationSource) and let it stream events for the last hour or so. Most of the events that came in was legitimate current activity.
However, I also noticed a few suspect events. They refer to issues/PRs that as far as I can tell have had no recent activity, so I can't explain why they came up (quite regularly). Filing this here so I can investigate later (unless someone gets to it first).
```
2018/04/14 19:21:11 (*maintpb.Mutation)(github_issue:<owner:"golang" repo:"crypto" number:32 pull_request:true > )
2018/04/14 19:23:18 (*maintpb.Mutation)(github_issue:<owner:"golang" repo:"tour" number:428 > )
2018/04/14 19:36:11 (*maintpb.Mutation)(github_issue:<owner:"golang" repo:"crypto" number:32 pull_request:true > )
2018/04/14 19:38:20 (*maintpb.Mutation)(github_issue:<owner:"golang" repo:"tour" number:428 > )
2018/04/14 19:51:12 (*maintpb.Mutation)(github_issue:<owner:"golang" repo:"crypto" number:32 pull_request:true > )
2018/04/14 19:53:22 (*maintpb.Mutation)(github_issue:<owner:"golang" repo:"tour" number:428 > )
2018/04/14 20:06:13 (*maintpb.Mutation)(github_issue:<owner:"golang" repo:"crypto" number:32 pull_request:true > )
2018/04/14 20:08:24 (*maintpb.Mutation)(github_issue:<owner:"golang" repo:"tour" number:428 > )
2018/04/14 20:21:13 (*maintpb.Mutation)(github_issue:<owner:"golang" repo:"crypto" number:32 pull_request:true > )
2018/04/14 20:23:26 (*maintpb.Mutation)(github_issue:<owner:"golang" repo:"tour" number:428 > )
2018/04/14 20:36:14 (*maintpb.Mutation)(github_issue:<owner:"golang" repo:"crypto" number:32 pull_request:true > )
2018/04/14 20:38:27 (*maintpb.Mutation)(github_issue:<owner:"golang" repo:"tour" number:428 > )
2018/04/14 20:51:15 (*maintpb.Mutation)(github_issue:<owner:"golang" repo:"crypto" number:32 pull_request:true > )
2018/04/14 20:53:29 (*maintpb.Mutation)(github_issue:<owner:"golang" repo:"tour" number:428 > )
2018/04/14 21:06:16 (*maintpb.Mutation)(github_issue:<owner:"golang" repo:"crypto" number:32 pull_request:true > )
2018/04/14 21:08:31 (*maintpb.Mutation)(github_issue:<owner:"golang" repo:"tour" number:428 > )
```
(I've removed all the normal events from the log, so it only includes the unexpected ones.)
I'm avoiding linking to those issues/PRs because that might create legitimate events there. But you can infer them from the log. The issue is number 428 at https://github.com/golang/tour/issues?page=2, and the PR is number 32 at https://github.com/golang/crypto/pulls?q=is%3Aclosed. Both have no recent activity, definitely not anything in the last hour (as far as I can tell).
/cc @bradfitz @andybons
For reference, this is the code I ran:
<details><br>
```Go
// Play with streaming mutations from maintner.NewNetworkMutationSource.
package main
import (
"context"
"fmt"
"log"
"os"
"os/signal"
"github.com/davecgh/go-spew/spew"
"golang.org/x/build/maintner"
"golang.org/x/build/maintner/godata"
"golang.org/x/build/maintner/maintpb"
)
func main() {
ctx, cancel := context.WithCancel(context.Background())
go func() {
sigint := make(chan os.Signal, 1)
signal.Notify(sigint, os.Interrupt)
<-sigint
cancel()
}()
err := run(ctx)
if err != nil {
log.Fatalln(err)
}
}
var s struct {
didInit bool
}
func run(ctx context.Context) error {
src := maintner.NewNetworkMutationSource("https://maintner.golang.org/logs", godata.Dir())
done := ctx.Done()
Outer:
for {
ch := src.GetMutations(ctx)
for {
select {
case <-done:
log.Printf("Context expired while loading data from log %T: %v", src, ctx.Err())
return nil
case e := <-ch:
if e.Err != nil {
log.Printf("Corpus GetMutations: %v", e.Err)
return e.Err
}
if e.End {
log.Printf("Reloaded data from log %T.", src)
s.didInit = true
continue Outer
}
processMutation(e.Mutation)
}
}
}
}
func processMutation(m *maintpb.Mutation) {
if !s.didInit {
fmt.Print(".")
return
}
spew.Dump(m)
}
```
</details> | Builders,NeedsInvestigation | low | Critical |
314,377,130 | rust | Putting `#![feature(…)]` in a module `.rs` file is silently ignored | Putting `#![feature(…)]` in a mod is silently ignored.
I tried this code (minimized version):
https://github.com/nelhage/feature-in-mod
I'm writing some toy Rust code that needs to perform pattern-matching over ASTs (that use `Box` to reference children), and wanted to use `box` patterns. On advice of https://doc.rust-lang.org/1.14.0/book/box-syntax-and-patterns.html I added `#![feature(box_patterns)]` to the top of my `.rs` file (which is a `mod` inside the larger executable).
I expected to see this happen: Either:
- The `feature()` directive would work and I would get access to the feature
- The `feature()` directive would yield an error
Instead, this happened:
The `feature()` directive had no observable effect at all; `cargo build` output did not change after adding the line, either to show a new error or to suppress the "box pattern syntax is experimental" error.
## Meta
`rustc --version --verbose`:
```
rustc 1.25.0 (84203cac6 2018-03-25)
binary: rustc
commit-hash: 84203cac67e65ca8640b8392348411098c856985
commit-date: 2018-03-25
host: x86_64-unknown-linux-gnu
release: 1.25.0
LLVM version: 6.0
```
| C-enhancement,A-diagnostics,T-compiler | low | Critical |
314,394,404 | react | Unexpected warning when hydrating with portal and SSR | **Do you want to request a *feature* or report a *bug*?**
*bug*
**What is the current behavior?**
Given the following (simplified) snippet:
```jsx
class HoverMenu extends React.Component {
render() {
if (typeof document === 'undefined') return null
const root = document.getElementById('root')
return ReactDOM.createPortal(<div>Hello World</div>, root)
}
}
class Para extends React.Component {
render() {
return (
<span>
Some Text
<HoverMenu />
</span>
)
}
}
```
where `div#root` is a valid `div` that exists, the following error is shown when hydrating after SSR:
`Warning: Expected server HTML to contain a matching <div> in <span>`
The warning goes away if I update the definition of `HoverMenu` to:
```jsx
class HoverMenu extends React.Component {
componentDidMount() {
this.setState({ isActive: true })
}
render() {
const { isActive} = this.state
if (!isActive) return null
const root = document.getElementById('root')
return ReactDOM.createPortal(<div>Hello World</div>, root)
}
}
```
I'd prefer not to do that because of the double rendering caused by `setState` in `componentDidMount`.
I don't quite understand what that error is telling me. No `<div />` is rendered server-side in either case. The error is particularly confusing, as the `HoverMenu` DOM `div` is not even rendered inside a DOM `span`. (I wonder if this is happening because `HoverMenu` is nested inside a React `span`.)
**What is the expected behavior?**
No error is thrown. Or, at least that the error message is clearer.
**Which versions of React, and which browser / OS are affected by this issue? Did this work in previous versions of React?**
Chrome 65
React 16.2
(SSR through Next 5.1)
| Type: Bug,Difficulty: medium | medium | Critical |
314,400,013 | flutter | RenderFilter to change the brightness/contrast of all child widgets. | ## feature request.
RenderFilter to change the brightness/contrast of all child widgets. | c: new feature,framework,P3,team-framework,triaged-framework | low | Major |
314,403,137 | vscode | Allow webviews to be shown in modal windows | It would be very helpful if extension writers could show a popup/modal window with various kinds of graphical options (e.g: create file from list, with a preview of various file types).
In fact there is already something like that (the built-in issue report window), but I'd prefer more flexibility by either allowing more control types in the dialog (like icon views, treeviews etc.) or make the entire content a webview.
Thanks | feature-request,api,webview | medium | Major |
314,407,008 | go | x/net/dns/dnsmessage: cannot parse mDNS SRV records | DNS message compression was disabled SRV Target fields in golang/go#10622 / https://golang.org/cl/100055 (as per [RFC 2782](https://tools.ietf.org/html/rfc2782#page-4)).
However, compression is explicitly allowed for the MDNS SRV target field ([RFC 6762 Sec 18.14](https://tools.ietf.org/html/rfc6762#section-18.14)):
> Unicast DNS does not allow name compression for the target host in an SRV record, [...] all Multicast DNS implementations are REQUIRED to decode compressed SRV records correctly.
Attempting to decode a Chromecast MDNS SRV record with `dnsmessage.Message.Unpack` now fails with the error:
> unpacking Additional: SRV record: Target: compressed name in SRV resource data
Compression support for DNS SRV target fields is necessary to support MDNS. Please consider:
* adding an option to allow compression/decompression, or
* reverting the earlier change.
Cc @mdempsky @iangudger | NeedsInvestigation | low | Critical |
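For context, the thing the parser is rejecting is easy to recognize in the wire format: per RFC 1035 §4.1.4 a compression pointer is a length byte with both top bits set. A minimal sketch:

```go
package main

import "fmt"

// isCompressionPointer reports whether the first byte of a DNS name
// begins a compression pointer: per RFC 1035 §4.1.4 the top two bits
// are both set (0xC0). mDNS SRV targets may legally use such pointers
// (RFC 6762 §18.14), which is what the parser currently rejects.
func isCompressionPointer(b byte) bool {
	return b&0xC0 == 0xC0
}

func main() {
	fmt.Println(isCompressionPointer(0xC0)) // true: a pointer
	fmt.Println(isCompressionPointer(0x3F)) // false: a plain label length
}
```

An option on the parser could simply accept such bytes in SRV resource data instead of returning an error.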
314,417,509 | pytorch | [caffe2] benchmark performance for different operators | The environment of server is the following:
- Framework: Caffe2
- OS: Centos-7.3.1611
- CUDA/cuDNN version: CUDA-8.0(cudnn-6.0)
- GPU: Tesla K80
- GCC version: gcc-4.8
- CMake version: cmake3
I want to test the performance of different convolution and fully-connected operators. But I got a rather surprising result, where different convolution operators show greatly different performance from each other.
I followed the method from the `caffe2/python/convnet_benchmarks.py` file: I built a model net that contains only one operator, such as the ConvOp operator, to measure its performance. The result is the following:
> ======begin** EigenConvOp(CPU)benchmark=====================
> Starting benchmark.
> Running warmup runs.
> Main runs.
> Main run finished. Microseconds per iter: 21989.7. Iters per second: 45.4759
> =======end EigenConvOp(CPU) benchmark=======================
>
>
> =======begin ConvOp(CPU) benchmark=========================
> Starting benchmark.
> Running warmup runs.
> Main runs.
> Main run finished. Microseconds per iter: 426.187. Iters per second: 2346.39
> =======end ConvOp(CPU) benchmark==========================
>
>
> =======begin MKLDNNConvOp(CPU) benchmark==================
> I0415 21:44:07.254658 23478 operator.cc:165] Engine MKLDNN is not available for operator Conv.
> Starting benchmark.
> Running warmup runs.
> Main runs.
> Main run finished. Microseconds per iter: 426.828. Iters per second: 2342.86
> =======begin MKLDNNConvOp(CPU) benchmark===================
>
The `EigenConvOp` is far too slow; it takes about 50 times longer than the others. I think the result is wrong, and I suspect a problem in the `workspace.BenchmarkNet` method.
When I looked at the `net_dag.cc:DAGNetBase::TEST_Benchmark` method, everything seemed fine.
Does anyone have an effective method to benchmark the performance of different operators?
| caffe2 | low | Major |
314,430,731 | rust | Writing Eq::eq produces an unhelpful diagnostic (`std::cmp::Eq` cannot be made into an object) | ```rust
fn hi() -> bool {
Eq::eq(&(), &())
}
```
```
Compiling playground v0.0.1 (file:///playground)
error[E0038]: the trait `std::cmp::Eq` cannot be made into an object
--> src/main.rs:3:5
|
3 | Eq::eq(&(), &())
| ^^^^^^ the trait `std::cmp::Eq` cannot be made into an object
|
= note: the trait cannot use `Self` as a type parameter in the supertraits or where-clauses
error: aborting due to previous error
```
I stared at this for a moment in pure disbelief... and then once I woke up a bit more, I remembered that `eq` is actually on the `PartialEq` trait (and so what I wrote is not interpreted as `<_ as Eq>::eq`, but rather as `<Eq>::eq`). | C-enhancement,A-diagnostics,A-trait-system,T-compiler,D-confusing | low | Critical |
314,433,171 | vue | keep-alive: include/exclude components by component key attribute | ### What problem does this feature solve?
The include and exclude props allow components to be conditionally cached only by component name. If we want to reuse components but force replacement using the `key` attribute, there is no way to control which components to keep-alive other than matching components by their name.
### What does the proposed API look like?
https://jsfiddle.net/9nk92wuy/
<!-- generated by vue-issues. DO NOT REMOVE --> | feature request,has PR | medium | Major |
314,452,575 | go | x/build/maintner: be confident that returned GerritCL.Status always has one of documented values | [`GerritCL.Status`](https://godoc.org/golang.org/x/build/maintner#GerritCL.Status) is documented as:
```Go
// Status will be "merged", "abandoned", "new", or "draft".
Status string
```
We want clients of `maintner` API to be confident in that statement, and not have to doubt it by adding their own "if cl.Status is something else" checks (which is annoying to have to do).
This issue is to verify/confirm that's the case. I believe it is, but @bradfitz asked me to file this in [CL 107296](https://go-review.googlesource.com/c/build/+/107296/4/maintner/gerrit.go#307):
> @bradfitz: Can you file a separate issue about that and I can investigate and ask the Gerrit team if needed? | Builders,NeedsInvestigation | low | Minor |
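This is the kind of defensive check clients currently feel compelled to write, and which confidence in the documented contract would make unnecessary (a sketch; `validStatus` is a hypothetical helper, not part of maintner):

```go
package main

import "fmt"

// validStatus reports whether s is one of the four documented
// GerritCL.Status values. The point of the issue is that API clients
// should never need this check in the first place.
func validStatus(s string) bool {
	switch s {
	case "merged", "abandoned", "new", "draft":
		return true
	}
	return false
}

func main() {
	fmt.Println(validStatus("merged"))  // true
	fmt.Println(validStatus("pending")) // false
}
```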
314,453,661 | go | cmd/compile: bounds check elimination for `if len(x) > 32 { ...; x = x[8:]; ... }` | ( From https://github.com/golang/go/issues/23354#issuecomment-365753223 )
In https://github.com/dgryski/go-metro/commit/1308eab584388b3f8f6050f027708891c4f4143a I got a major performance boost by changing the loop to remove the reassignments to ptr which, even though they were still within range, invalidated the bounds checks that were valid for ptr before the assignment.
The bounds-check elimination prover should handle this common case. | Performance,NeedsFix,compiler/runtime | low | Major |
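The shape the prover should handle can be illustrated like this (a minimal sketch of the pattern, not the go-metro code itself):

```go
package main

import "fmt"

// sum8 shows the pattern: the len(x) > 32 guard proves x[0..7] is in
// bounds both before and after the x = x[8:] reslice, but the compiler
// currently re-inserts bounds checks after the reassignment.
func sum8(x []byte) (s int) {
	for len(x) > 32 {
		for i := 0; i < 8; i++ {
			s += int(x[i]) // checks here should be eliminated
		}
		x = x[8:] // still at least 25 bytes long afterwards
	}
	return s
}

func main() {
	b := make([]byte, 40)
	for i := range b {
		b[i] = 1
	}
	// len 40 > 32: one iteration adds 8 ones, then len 32 ends the loop.
	fmt.Println(sum8(b)) // 8
}
```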
314,483,388 | go | sync: mutex profiling information is confusing (wrong?) for mutexes with >2 contenders | https://github.com/golang/go/blob/2b2348ab143368a35031a814a8d41eb5a437aa33/src/runtime/sema.go#L340
Please answer these questions before submitting your issue. Thanks!
### What version of Go are you using (`go version`)?
```
(lldb) plat sh go version
go version go1.10.1 darwin/amd64
```
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
```
(lldb) plat sh go env
GOARCH="amd64"
GOBIN=""
GOCACHE="off"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH=""
GORACE=""
GOROOT="/usr/local/Cellar/go/1.10.1/libexec"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.10.1/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build604482558=/tmp/go-build -gno-record-gcc-switches -fno-common"
```
### What did you do?
```
package main

import (
	"fmt"
	"os"
	"runtime"
	"runtime/pprof"
	"sync"
	"testing"
)

func TestMutex(t *testing.T) {
	runtime.SetMutexProfileFraction(1)
	fmt.Printf("Running with fraction = %v\n", runtime.SetMutexProfileFraction(1))
	ch := make(chan struct{}, 3)
	m := sync.Mutex{}
	go func1(ch, &m, 10)
	go func2(ch, &m, 10)
	go func3(ch, &m, 10)

	<-ch
	<-ch
	<-ch
	fmt.Println("Done waiting")
	profile := pprof.Lookup("mutex")
	profile.WriteTo(os.Stdout, 1)
}

func core(m *sync.Mutex, loops int) {
	m.Lock()

	for i := 0; i < loops*1000*1000*1000; i++ {
	}
	m.Unlock()
}
func func1(ch chan<- struct{}, m *sync.Mutex, loops int) {
	core(m, loops)
	fmt.Println("Done func1 loops=", loops)
	ch <- struct{}{}
}

func func2(ch chan<- struct{}, m *sync.Mutex, loops int) {
	core(m, loops)
	fmt.Println("Done func2 loops=", loops)
	ch <- struct{}{}
}

func func3(ch chan<- struct{}, m *sync.Mutex, loops int) {
	core(m, loops)
	fmt.Println("Done func3 loops=", loops)
	ch <- struct{}{}
}
```
### What did you expect to see?
That one func was contended for twice as long as the other, e.g.:
```
lldb) c
Process 90281 resuming
Running with fraction = 1
Done func3 loops= 10
Done func1 loops= 10
Done func2 loops= 10
Done waiting
--- mutex:
cycles/second=3096003024
sampling period=1
37586656890 1 @ 0x105ad75 0x10e7897 0x10e78e9 0x1053ac1
#	0x105ad74	sync.(*Mutex).Unlock+0x74	/usr/local/Cellar/go/1.10.1/libexec/src/sync/mutex.go:201
#	0x10e7896	testmutex.core+0x56	/Users/qjeremy/gocode/src/testmutex/testmutex_test.go:34
#	0x10e78e8	testmutex.func1+0x38	/Users/qjeremy/gocode/src/testmutex/testmutex_test.go:37

19168525985 1 @ 0x105ad75 0x10e7897 0x10e7ac9 0x1053ac1
#	0x105ad74	sync.(*Mutex).Unlock+0x74	/usr/local/Cellar/go/1.10.1/libexec/src/sync/mutex.go:201
#	0x10e7896	testmutex.core+0x56	/Users/qjeremy/gocode/src/testmutex/testmutex_test.go:34
#	0x10e7ac8	testmutex.func3+0x38	/Users/qjeremy/gocode/src/testmutex/testmutex_test.go:49
```
### What did you see instead?
```
Running with fraction = 1
Done func3 loops= 10
Done func1 loops= 10
Done func2 loops= 10
Done waiting
--- mutex:
cycles/second=3095995234
sampling period=1
20145330860 1 @ 0x105ad75 0x10e7897 0x10e7ac9 0x1053ac1
#	0x105ad74	sync.(*Mutex).Unlock+0x74	/usr/local/Cellar/go/1.10.1/libexec/src/sync/mutex.go:201
#	0x10e7896	testmutex.core+0x56	/Users/qjeremy/gocode/src/testmutex/testmutex_test.go:34
#	0x10e7ac8	testmutex.func3+0x38	/Users/qjeremy/gocode/src/testmutex/testmutex_test.go:49

19631603241 1 @ 0x105ad75 0x10e7897 0x10e78e9 0x1053ac1
#	0x105ad74	sync.(*Mutex).Unlock+0x74	/usr/local/Cellar/go/1.10.1/libexec/src/sync/mutex.go:201
#	0x10e7896	testmutex.core+0x56	/Users/qjeremy/gocode/src/testmutex/testmutex_test.go:34
#	0x10e78e8	testmutex.func1+0x38	/Users/qjeremy/gocode/src/testmutex/testmutex_test.go:37
```
### Analysis
I believe the cause is the linked line number; I am not sure why the next sudog (t) should be considered to have started waiting from the time the first sudog (s) is released. | NeedsInvestigation,compiler/runtime | low | Critical |
314,555,528 | go | x/mobile/cmd/gomobile: bind fails for cloud.google.com/go/trace | ### What version of Go are you using (`go version`)?
1.10
### Does this issue reproduce with the latest release?
Not quite sure. I tried re-installing gomobile but still getting:
```
[matti@babylon trace (master)] % gomobile version
gomobile version unknown: binary is out of date, re-install it
```
I followed this for (re)installing the gomobile cmd:
```
$ go get golang.org/x/mobile/cmd/gomobile
$ gomobile init
```
### What operating system and processor architecture are you using (`go env`)?
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/matti/Library/Caches/go-build"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/matti/go"
GORACE=""
GOROOT="/usr/local/Cellar/go/1.10/libexec"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.10/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/jw/jrh0fgfn42ndjty5zpbsv8p00000gn/T/go-build504316821=/tmp/go-build -gno-record-gcc-switches -fno-common"
### What did you do?
I was attempting to build (bind) an iOS framework from the Google Stackdriver Cloud Trace Go library, cloud.google.com/go/trace. Reproduce by:
```
go get -u cloud.google.com/go/trace
cd ~/go/src/cloud.google.com/go/trace
gomobile bind -target ios -o /tmp/Cloudtrace.framework
```
### What did you expect to see?
The /tmp/Cloudtrace.framework to be built.
### What did you see instead?
The build failed with:
```
gomobile: darwin-arm: go build -tags ios -buildmode=c-archive -o /var/folders/jw/jrh0fgfn42ndjty5zpbsv8p00000gn/T/gomobile-work-736257513/trace-arm.a gobind failed: exit status 2
# gobind
/var/folders/jw/jrh0fgfn42ndjty5zpbsv8p00000gn/T/gomobile-work-736257513/src/gobind/go_tracemain.go:53: cannot use (*proxytrace_SamplingPolicy)(_param_p0_ref) (type *proxytrace_SamplingPolicy) as type "cloud.google.com/go/trace".SamplingPolicy in assignment:
*proxytrace_SamplingPolicy does not implement "cloud.google.com/go/trace".SamplingPolicy (missing Sample method)
```
Building some other Google modules (e.g. storage) works fine. I checked the type limitations for gomobile bind, and nothing listed on this page https://godoc.org/golang.org/x/mobile/cmd/gobind seems to prevent the following from being built:
```
type SamplingPolicy interface {
// Sample returns a Decision.
// If Trace is false in the returned Decision, then the Decision should be
// the zero value.
Sample(p Parameters) Decision
}
// Parameters contains the values passed to a SamplingPolicy's Sample method.
type Parameters struct {
HasTraceHeader bool // whether the incoming request has a valid X-Cloud-Trace-Context header.
}
// Decision is the value returned by a call to a SamplingPolicy's Sample method.
type Decision struct {
Trace bool // Whether to trace the request.
Sample bool // Whether the trace is included in the random sample.
Policy string // Name of the sampling policy.
Weight float64 // Sample weight to be used in statistical calculations.
}
```
I am guessing `*proxytrace_SamplingPolicy` is something gomobile bind creates during binding; there is nothing like that in the trace sources. | mobile | low | Critical |
314,556,362 | pytorch | [Caffe2] CUDNN_STATUS_BAD_PARAM Error with the LRN layer while trying to run the code using CUDA. The training works fine on CPU | Original python traceback for operator 39 in network `fast_style_train` in exception above (most recent call last):

```
  File "styleCaffe.py", line 284, in <module>
  File "styleCaffe.py", line 250, in main
  File "styleCaffe.py", line 126, in styleNetModelDef
  File "styleCaffe.py", line 68, in customVGGY
  File "/usr/local/lib/python2.7/dist-packages/caffe2/python/brew.py", line 121, in scope_wrapper
  File "/usr/local/lib/python2.7/dist-packages/caffe2/python/helpers/normalization.py", line 43, in lrn

Traceback (most recent call last):
  File "styleCaffe.py", line 284, in <module>
    main()
  File "styleCaffe.py", line 260, in main
    workspace.CreateNet(train_model.net, overwrite=True)
  File "/usr/local/lib/python2.7/dist-packages/caffe2/python/workspace.py", line 166, in CreateNet
    StringifyProto(net), overwrite,
  File "/usr/local/lib/python2.7/dist-packages/caffe2/python/workspace.py", line 192, in CallWithExceptionIntercept
    return func(*args, **kwargs)
RuntimeError: [enforce fail at local_response_normalization_op_cudnn.cc:39] status == CUDNN_STATUS_SUCCESS. 3 vs 0. , Error at: /home/prasanna/Installations/caffe2/caffe2/operators/local_response_normalization_op_cudnn.cc:39: CUDNN_STATUS_BAD_PARAM
```
I am hitting this CUDNN_STATUS_BAD_PARAM error while creating the LRN layer. I have defined the layer in the following way:

```python
conv1_vggx = brew.conv(model, data, 'conv1_vggx', 3, 96, kernel=7, stride=2, weight_init=('GaussianFill', {'mean': 0.0, 'std': 1e-2}))
relu1_vggx = brew.relu(model, conv1_vggx, 'relu1_vggx')
inst1_vggx = brew.lrn(model, relu1_vggx, 'inst1_vggx', size=5)
pool1_vggx = brew.max_pool(model, inst1_vggx, 'pool1_vggx', kernel=3, stride=2)
```
`brew.lrn` is the layer in which

```cpp
CUDNN_ENFORCE(
    cudnnSetLRNDescriptor(norm_desc_, size_, alpha_, beta_, bias_));
```

is failing.
I looked at the parameters and they seem correct to me.
How can I work around this problem?
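One plausible cause (my assumption, not verified against the caffe2 source): `brew.lrn` here is called with only `size`, so `alpha`/`beta`/`bias` may reach cuDNN with out-of-range defaults. cuDNN rejects LRN descriptors outside its documented ranges, which a pure-Python check can mirror:

```python
# Range checks that cudnnSetLRNDescriptor enforces (assumption: the constant
# values below are taken from cudnn.h and may differ between cuDNN versions).
CUDNN_LRN_MIN_N, CUDNN_LRN_MAX_N = 1, 16   # allowed window size
CUDNN_LRN_MIN_K = 1e-5                      # minimum bias (k)
CUDNN_LRN_MIN_BETA = 0.01                   # minimum beta

def lrn_params_ok(size, beta, bias):
    """True if cuDNN would accept this (size, beta, bias) combination."""
    return (CUDNN_LRN_MIN_N <= size <= CUDNN_LRN_MAX_N
            and bias >= CUDNN_LRN_MIN_K
            and beta >= CUDNN_LRN_MIN_BETA)

# size=5 alone is fine; the failure would come from beta/bias falling
# through as 0.0, which is below cuDNN's minimums.
ok_explicit = lrn_params_ok(5, beta=0.75, bias=1.0)
ok_defaults = lrn_params_ok(5, beta=0.0, bias=0.0)
```

If this is the cause, passing the parameters explicitly, e.g. `brew.lrn(model, relu1_vggx, 'inst1_vggx', size=5, alpha=1e-4, beta=0.75, bias=1.0)`, should avoid the BAD_PARAM.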
| caffe2 | low | Critical |
314,597,734 | rust | Surprising type inference on method call without explicit turbo-fish | Please excuse the lack of specificity in the title; it reflects my lack of understanding of what is going on here.
Here's a distilled example of what I'm going for:
```rust
trait FooMut {
type Baz: 'static;
fn bar<'a, I>(self, iterator: &'a I) where for <'b> &'b I: IntoIterator<Item= &'b &'a Self::Baz>;
}
struct DelegatingFooMut<T> where T: FooMut {
delegate: T
}
impl<T> FooMut for DelegatingFooMut<T> where T: FooMut {
type Baz = DelegatingBaz<T::Baz>;
fn bar<'a, I>(self, collection: &'a I) where for <'b> &'b I: IntoIterator<Item= &'b &'a Self::Baz> {
let collection = collection.into_iter().map(|b| &b.delegate);
self.delegate.bar(&collection)
}
}
struct DelegatingBaz<T> {
delegate: T
}
```
([Play](https://play.rust-lang.org/?gist=3428bfa4fc81db0b73aaa66e76db7d5e&version=nightly))
This fails to compile (on stable and nightly) with:
```
error[E0271]: type mismatch resolving `for<'b> <&'b I as std::iter::IntoIterator>::Item == &'b &<T as FooMut>::Baz`
--> src/main.rs:17:23
|
17 | self.delegate.bar(&collection)
| ^^^ expected struct `DelegatingBaz`, found associated type
|
= note: expected type `&&'a DelegatingBaz<<T as FooMut>::Baz>`
found type `&&<T as FooMut>::Baz`
error[E0308]: mismatched types
--> src/main.rs:17:27
|
17 | self.delegate.bar(&collection)
| ^^^^^^^^^^^ expected type parameter, found struct `std::iter::Map`
|
= note: expected type `&I`
found type `&std::iter::Map<<&I as std::iter::IntoIterator>::IntoIter, [closure@src/main.rs:15:53: 15:68]>`
```
I played around a bit and found that changing the `bar` implementation to the following does compile:
```rust
fn bar<'a, I>(self, collection: &'a I) where for <'b> &'b I: IntoIterator<Item= &'b &'a Self::Baz> {
let collection: Vec<&<T as FooMut>::Baz> = collection.into_iter().map(|b| &b.delegate).collect();
self.delegate.bar::<Vec<&<T as FooMut>::Baz>>(&collection)
}
```
([Play](https://play.rust-lang.org/?gist=a2027c40f0fa2f0c07256d0e45328543&version=nightly))
But only with the turbo-fish on the call to `self.delegate.bar`; if I remove the turbo-fish it once again fails to compile:
```rust
fn bar<'a, I>(self, collection: &'a I) where for <'b> &'b I: IntoIterator<Item= &'b &'a Self::Baz> {
let collection: Vec<&<T as FooMut>::Baz> = collection.into_iter().map(|b| &b.delegate).collect();
self.delegate.bar(&collection)
}
```
([Play](https://play.rust-lang.org/?gist=d5bd0f6256af3c82dd4973be19269045&version=nightly))
This surprised me. From the error it seems like the compiler infers the type-parameter on the call to the delegate to be the same as the type parameter on the outer (delegating) method. I am not completely sure if this is a bug or intended behavior. If this isn't a bug I was hoping someone would perhaps be able to give me some insight into the what and why and a possible work-around.
While collecting into a `Vec` works as a workaround for now, I would really like to avoid having to allocate. I'm assuming that I might be able to make the original example work if I can work out a type for a turbo-fish there, but `std::iter::Map` seems to take the concrete type of its closure as a type parameter and I cannot figure out how to represent that in the turbo-fish type (if that's possible at all). | C-enhancement,T-lang,A-inference | low | Critical |
314,631,837 | vue | Have template compiler add source metadata to HTML tags | ### What problem does this feature solve?
I'm developing an all-in-one editor for webdevs that runs inside Chrome DevTools.
Just by replacing `npm start` with `[name-not-finalized] start`, users can have a fully featured text editor right inside Chrome DevTools shell, automatically pointing at their project directory.
It comes with DOM inspector, where you pick an element and it'll literally take you to `file:line:col` where that element was defined. So you don't have to look through files to figure out where that button below the header is coming from. Useful when a new dev joins a project or you're revisiting your work after a very long time.
▶ [Watch 30 sec demo w/ a React project](http://goo.gl/d64cgv)
___
And of course, it also works with Vue projects, just as awesome.
▶ [Watch 20 sec demo w/ a Vue project](https://drive.google.com/open?id=1rGeFiNLezxzaJqnhnDbjvD_OvLMyaMIh)
Above demos were shot using locally tinkered compilers (just not clean enough to be a PR)
The goal is to deliver the ultimate developer experience ever. Other features are in development as we speak, like CSS QuickEditing, built-in image editor and a SVG editor so devs don't have to fire up Illustrator or Photoshop to make minor tweaks to their stuff.
The only hurdle in me releasing the app is getting external players to participate in revolutionizing how we write web.
So to wrap up I have only two requests:
- Please have the template compilers add metadata to each tag (either as data attribute, or property on DOM node itself), that contains path to `*.vue` file (can be relative to project root), `line:col`/offset where the tag opens and last `line:col`/offset where the tag closes.
- Please star this issue: https://bugs.chromium.org/p/chromium/issues/detail?id=811036
### What does the proposed API look like?
Something like this:
```javascript
console.log(someElement.__vue__._debugSource)
// > { file: 'src/components/Header.vue', line: 12, col: 4, lineEnd: 16, colEnd: 8 }
// or
// > { file: 'src/components/Header.vue', start: 241, end: 352 }
```
<!-- generated by vue-issues. DO NOT REMOVE --> | feature request | low | Critical |
314,648,158 | You-Dont-Know-JS | Include some coverage of common tag functions for tagged template literals | When reading though [YDKJS: ES6 & Beyond; Chapter 2: Syntax](https://github.com/getify/You-Dont-Know-JS/blob/master/es6%20%26%20beyond/ch2.md) about [Tagged Template Literals](https://github.com/getify/You-Dont-Know-JS/blob/master/es6%20%26%20beyond/ch2.md#tagged-template-literals), I thought of the case, "what if a multi-line tagged template literal is inside of a function or other containing block with indentation?" That lead me to test and see that by default it doesn't compensate for white space indentation, but YDKJS doesn't cover this case nor similar ones to it. I don't think YDKJS needs to cover all similar cases, but I think it would be a good idea to at least point to common ideas or a library of common ones such as [common-tags](https://github.com/declandewet/common-tags) that I came across which provides a template literal tag function called [`stripIndent`](https://github.com/declandewet/common-tags#stripindent) that helps with cases like this.
Thanks for writing the book! It's thorough and very helpful for me learning new ES6 features. | for second edition | medium | Minor |
314,690,913 | opencv | VideoCapture: Accessing physical camera with MJPG produces artifacts | ##### System information (version)
- OpenCV => 3.2.0
- Operating System / Platform =>Win 10 64bit
- Compiler => Visual Studio 2013, 64bit
##### Detailed description
Accessing my (physical) webcam compressed by setting the FOURCC to MJPG works (I see a noticeable increase in FPS for 120fps-capable devices) but adds artifacts to the image on all tested devices. On one device even the FoV changed - but I could live with that if it were not for the artifacts.
On my machine, ``CV_DSHOW`` is selected by default and it is the only interface I could successfully set MJPG for so I explicitly defined it in the minimal-not-working-example below. If I leave it to be auto-determined, I see the same symptoms.
The artifacts are not shown if I don't set the resolution (then, the images are 640x480 and look intact).
**Logitech C920**
1) without MJPEG

2) With MJPEG

3) Zoomed on the artifacts: the last row is completely black
(no, it is not the window decoration)

**ELP 2.0 Megapixel USB Camera**
1) without MJPEG

2) with MJPEG

3) Zoomed on the artifacts: last row black and last column with blue stripes

##### Steps to reproduce
```
#include <string>
#include <iostream>
#include "opencv2/videoio.hpp"
#include "opencv2/core/mat.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
int main()
{
cv::VideoCapture cap;
cap.open(0, cv::CAP_DSHOW);
std::cout << "MJPG: " << cap.set(cv::CAP_PROP_FOURCC, cv::VideoWriter::fourcc('M', 'J', 'P', 'G'));
cap.set(cv::CAP_PROP_FRAME_WIDTH, 1280);
cap.set(cv::CAP_PROP_FRAME_HEIGHT, 720);
cv::Mat img;
while (cv::waitKey(2) != 'q') {
cap >> img;
cv::imshow("img", img);
}
return 0;
}
``` | category: videoio,platform: win32 | low | Minor |
314,695,939 | vscode | Feature: Settable "keyboard.chords.mode" | <h1 align=center><img alt="consequential inaccessibility ≠ incidental feature request" src="https://upload.wikimedia.org/wikipedia/commons/3/3b/Antu-preferences-desktop-accessibility-32.svg" height=48 /></h1>
<details><summary align=center><b><code>Explainer</code></b></summary>
---
**Note** — The following was added after reopening the issue… please read to understand my perspective when I opened this issue a while back.
> TL;DR;
>
> This issue for folks like myself comes with accessibility related burdens that extend far beyond feeling 😯 because you triggered the wrong thing… For every unexpected outcome, there is a long period of observing to look for what makes it happen, then you need to figure out where to look for possible causes, then you find that, then you look for ways to fix it, creating all sorts of noise and all sorts of visual and cognitive burdens.
>
> So you get the gist, I am disabled my way, it never got in my way, but your chords feature is an actual disability not of my way, it is in my way!
>
> And so yes all that just because of a shortcut, yes, that is what others don't get to see — which is a good thing, but not accepting that some suffer this is hardly a fair thing to do just because there is more who don't need bother… please ❤️
>
> DR;
>
> consequential inaccessibility ≠ incidental feature request
</details>
---
**Proposal**
I'd like to propose a solution that will allow everyone to decide the degree of complexity they are comfortable with, without the need for any complicated configuration and possible very little change to the existing system. The user simply decides which "keyboard.chords.mode" they want to use in the `settings.json`, which will augment how VS code behaves immediately after the first key of a possible chord is pressed.
<details><summary>This was in the original issue, it is not meant as a "feature request" merely my naïve way to try to "offer solutions" not "problems" but I realize now my mistake…</summary><p>
> Let's say the current unchanged behaviour is the "default" mode, we can also consider an alternative "off" mode, and maybe my favourite one "continuous" is the third mode, may be even extension-defined or something more complicated down the road. When "off", VS code does not even need to initialize any chord effects. When "default", VS code does what it normally does.
>
> For "continuous" a new mode all together is introduces to replace the current sparse key sequences with modifier(s) + key(s) combinations. This would instead require the first key to include one or more modifiers with one or more letter, except the modifiers should remain pressed before the following keys in the sequence. So for instance"<kbd>cmd</kbd>+<kbd>k</kbd> <kbd>cmd</kbd>+<kbd>w</kbd>" would not trigger if the <kbd>cmd</kbd> key was released midway, or if <kbd>cmd</kbd> is not yet released (allowing sequences that go beyond two keys), then as soon as the modifier is released, all keybindings exactly matching `"key": "cmd+k cmd+w"` would be the intended keys, excluding any other partial matches like ~~`"key": "cmd+k"`~~ or ~~`"key": "cmd+k cmd+w cmd+1"`~~... etc.
</details>
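To make the collapsed "continuous" idea concrete, here is a toy matcher with the semantics described there (entirely hypothetical; this is not how VS Code's keybinding engine works, and all names are made up):

```python
# Toy model of a "continuous" chord mode: every key in the sequence must be
# pressed while the modifier stays held, and the binding fires only when the
# modifier is released, at which point only an EXACT sequence match counts.

def fired_binding(events, bindings, modifier="cmd"):
    """events: list of (action, key) pairs, action in {'down', 'up'}.
    bindings: list of key sequences, e.g. [['k'], ['k', 'w']].
    Returns the sequence that fired, or None."""
    held = False
    seq = []
    for action, key in events:
        if key == modifier:
            if action == "down":
                held, seq = True, []
            else:
                # Releasing the modifier finalizes the sequence.
                if held and seq in bindings:
                    return seq
                held, seq = False, []
        elif action == "down" and held:
            seq.append(key)
    return None

# cmd held across k then w: the two-key chord wins over the cmd+k prefix.
chord = fired_binding(
    [("down", "cmd"), ("down", "k"), ("up", "k"),
     ("down", "w"), ("up", "w"), ("up", "cmd")],
    [["k"], ["k", "w"]],
)
# chord == ['k', 'w']
```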
But please come up with your own solution; I do not want chords, not the way they are at least, so on/off is better to me than trouble/off/useful — which was misrepresented above, sorry!
---
**Why**
From time to time commonly used keyboard shortcuts, specifically those that overlap with the first key in the chord of others, stop working. The reasons are always different, but ultimately, the underlaying cause or the side effect will be fixed by finding the offending keybinding which is guaranteed to either be a default or extension default, and then adding two or more new bindings to try to gracefully avoid the conflict.
In reality, the current chord implementation is extremely evolved to do amazing work coalescing **predefined** keybindings from many sources, but as far as the user is concerned, `keybindings.json` is an extremely complicated file to troubleshoot. If a conflict did not involve chords, it is not complicated, and most of the time using the Keyboards Shortcut editor is all you need to substitute any offenders, which result in 1 or 2 records in the json file at the most.
**Issue**
So let me close with my incomplete issue (which likely you would not like filed).
Currently my <kbd>cmd</kbd>+<kbd>k</kbd> no longer clears the terminal and my keybindings to fix that are not close to how things were before either a new VS code change or some extension I recently added (or something else).
- VSCode Version: Version 1.22.2 (1.22.2)
- OS Version: macOS 10.13.4 (17E199)
Steps to Reproduce:
1. Don't know
Current *Crude* Fix:
```json
// keybindings.json
{
"key": "cmd+k",
"command": "workbench.debug.panel.action.clearReplAction",
"when": "!editorFocus && inDebugMode" // but only if panel is open and showing please
},
{
"key": "cmd+k",
"command": "workbench.action.terminal.clear",
"when": "terminalFocus" // did xterm break this (I don't know)
},
{
"key": "cmd+k",
"command": "workbench.action.terminal.clear",
"when": "!editorFocus && !inDebugMode" // but only if in terminal please
}
```
Does this issue occur when all extensions are disabled?: Yes *but which one*
I don't think it is possible for users of different skillsets to figure out the trickle-down logic VS Code uses to determine when to expect the next chord key, even when they are 100% sure they bound <kbd>cmd</kbd>+<kbd>k</kbd> to clear the terminal when the focus is in the terminal. Even then, how many negative bindings would it take to ensure that the positive binding always triggers?
**Thanks** and really excited to see the next iteration of this revolutionary keyboard handling system. | feature-request,keybindings | low | Critical |
314,729,577 | TypeScript | Callback in mapped type implicitly has 'any' type | **TypeScript Version:** 2.9.0-dev.20180414
**Code**
```ts
declare function watch<T>(obj: T, propertyChangedCallbacks: { [K in keyof T]: (obj: T) => void }): void;
watch({ x: 0 }, {
x: obj => {},
});
```
**Expected behavior:**
`obj` is `{ x: number }`.
**Actual behavior:**
`src/a.ts(3,8): error TS7006: Parameter 'obj' implicitly has an 'any' type.` | Bug | low | Critical |
314,763,755 | go | x/build: make sure builders always set $HOME and $USER | The builders don't always set $HOME and $USER.
Fix that, and revert the os/user testing hacks from https://go-review.googlesource.com/c/go/+/107300
| Builders | low | Minor |
314,787,035 | rust | Provide natstepfilter and/or natjmc files for debugging with Visual Studio | Visual Studio has support for specifying functions to step over unconditionally, or to step over if debugging "just my code". See the documentation [here](https://docs.microsoft.com/en-us/visualstudio/debugger/just-my-code#BKMK_C___Just_My_Code).
I'm investigating if these files can be embedded in the pdb (like a natvis file with the `/NATVIS` linker flag) - but I think it would still be useful to simply include two files with the `msvc` rust compilers:
* `rust.natjmc` - This file could probably exclude everything in `std::*`.
* `rust.natstepfilter` - This file would have entries to exclude common helper functions or trait implementations that you typically never want to step through.
| O-windows,A-debuginfo,C-enhancement,T-compiler,O-windows-msvc | low | Critical |
314,814,811 | go | x/build/cmd/gerritbot: Gerrit edits are immediately overwritten by older GitHub commits | I tried to edit a commit message on Gerrit, but gopherbot fought me and reverted it:
https://go-review.googlesource.com/c/sys/+/107302
https://go-review.googlesource.com/c/sys/+/107302/2..3
I'd only expect it to be reverted if there was a new patchset on GitHub's side.
/cc @andybons | Builders,NeedsFix | medium | Major |
314,857,048 | TypeScript | Don't offer to change spelling to not-yet-defined variables | **TypeScript Version:** 2.8.1
**Search Terms:** suggest change spelling auto fix import
**Code**
```typescript
// This import is missing, but can be auto-suggested
import { /* SomeClass */ } from "./SomeClass";
const bindContainer = (container) => {
const someClass = container.get(SomeClass);
};
```
**Expected behavior:**
When I ask for auto-fixes on the `SomeClass`, it should only suggest to add it to the import declaration.
**Actual behavior:**
Two suggestions are given:
* `Change spelling to 'someClass'`
* `Add 'SomeClass' to existing import declaration from "./SomeClass"`
We can statically reject the rename suggestion here because this is in the initializer of the variable. | Suggestion,Awaiting More Feedback,Domain: Quick Fixes | low | Minor |
314,863,924 | electron | Change default web behaviors that don't make sense in the context of Electron | There are a number of behaviors that Electron inherits from Chromium which don't make sense for the vast majority of Electron apps. For instance:
- dragging a file into a window navigates to that file
- pinch-zooming in a window zooms the UI
- <kbd>Cmd+Click</kbd> opens links a new BrowserWindow
Electron apps that don't want these behaviors currently have to disable them explicitly, in an ad-hoc and error-prone way. Since these behaviors are undesirable for almost every Electron app, we should disable them by default.
This issue is a tracking issue for discussion about this general idea; separate issues will be created for specific changes that need to be made. | discussion | medium | Critical |
314,866,425 | flutter | Move Rasterizer::Snapshot to the IO thread. | Currently, snapshotting for the service protocol happens on the raster (formerly GPU) thread because we need access to IO GrContext resident textures. However, this could also be done on the IO thread itself. The only roadblock was the fact that the sole ownership of the layer tree was with the rasterizer which was only safe to access on the raster (formerly GPU) thread. However, this tree can be flattened and sent to the IO thread for snapshotting. Once this mechanism in place, the duplicate snapshotting logic in Scene::toImage can also be removed. | team,engine,P3,team-engine,triaged-engine | low | Minor |
314,867,550 | go | cmd/compile: compiler variable folding can break linker's -X option | I'm facing a problem when I'm trying to set a variable at build time using the LDFLAG -X. The issue seem to occur because the compiler optimizes the variable into a constant since it's not being used in the package.
Here's a very simplfied code snippet that shows the issue:
```
package main
import "fmt"
var foo = "A"
var bar = foo
func main() {
fmt.Println(bar)
}
```
```
% go build -ldflags "-X main.foo=B" test.go && ./test
A
```
If you change the `bar` variable initialization line to: `var bar = foo + ""` the output is as expected:
```
% go build -ldflags "-X main.foo=B" test.go && ./test
B
```
go version go1.10.1 linux/amd64 | NeedsInvestigation,compiler/runtime | low | Major |
314,890,944 | TypeScript | Allow Typescript to detect when imported html relative paths are incorrect |
**TypeScript Version:** 2.8.1
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:** html template import
**Code**
In global.d.ts:
```ts
declare module '*.html' {
const value: string;
export default value
}
```
Somewhere else in code:
```ts
import templateHtml from './SOME_PATH_THAT_DOESNT_EXIST.html'
```
**Expected behavior:**
I expect that Typescript will still attempt to parse the string in 'from' and tell me if it can't find any matching files, as it is a relative path.
**Actual behavior:**
No error or warning is emitted, so any typos in the path won't have a problem until running the app.
**Playground Link:** <!-- A link to a TypeScript Playground "Share" link which demonstrates this behavior -->
**Related Issues:** | Suggestion,Awaiting More Feedback | low | Critical |
314,913,050 | three.js | CubeTexture Support for the Threejs Editor | This is a feature request.
For a long time, threejs editor has been lacking the support of `CubeTexture`.
Not just for importing but exporting as well.
I'm assuming that many people would love this feature.
Most of the time, we have to go through threejs editor in order to create a scene with proper textures and materials and use it as an asset for the app development. But the lack of CubeTexture makes the app development slightly tricky.
I'm aware that it's not a straightforward feature and may require a CubeTexture UI Panel, changes in .toJson and .fromJson in the editor and also the changes in the ObjectLoader and some more local storage related adjustments.
However, I believe if this feature gets added, the editor will become more powerful then ever.
Let me know if this request qualifies as a feature request for an upcoming revision. | Enhancement,Editor | low | Minor |
314,921,947 | flutter | [Widget Request] BasicApp | Maybe something like this already exists and I've just missed it.
What would be very useful is a simple bare-bones "App" widget (no navigation, nothing) for creating a single-view app. Basically, something that just sets the text direction and locale, etc. (a stripped-down version of WidgetsApp). | c: new feature,framework,P3,team-framework,triaged-framework | low | Major |
314,973,256 | three.js | Texture panel for all textures in Threejs Editor | Hello Everyone,
Hope you all would find this feature request interesting.
At this point, the threejs editor supports textures, but there is no easy way to set various texture properties like anisotropy, filters, repeat, offset, etc.
I would propose a texture panel to have all the texture properties for each selected object and its texture type.
This will ease up the process of scene and asset creation. Without this feature, the material and texture assignment panels look somewhat incomplete.
I'm sure that this feature will be highly valuable for so many different people and will make the editor more powerful than ever.
Let me know if this feature makes any sense. Will be glad to have a positive response. | Enhancement,Editor | low | Minor |
314,991,985 | godot | Floating point values in the inspector are forcibly rounded to 3 decimal places | **Godot version:**
3.0.2
**OS/device including version:**
Arch Linux
**Issue description:**
<!-- What happened, and what was expected. -->
Fractional numbers, when entered in the inspector of the editor, e.g. for the `Height` of a `CapsuleShape`, are rounded to two decimal places. For example, 2.022 will be rounded to 2.02 and 5.555 will be rounded to 5.56.
The issue could be mitigated by assuming different units, e.g. assuming 1 unit = 1 cm instead of 1 meter. 5.555 would become 55.55 and that would be fine (two decimal places now). However, this again does not work for more fractional places, e.g. 1/3. Additionally, the editor seems to be optimized for something like 1 unit = 1 meter or at least 1 ft because the zoom feature (mouse wheel) does not allow zooming out far enough to make it work with cm or inches.
My suggestion is to not round the values at all and leave them at 32 bit IEEE 754 representation. I don't see any benefit from forcing to round the values. Alternatively, allow setting the decimal places via the preferences menu.
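For reference, the precision at stake is easy to quantify (this is generic IEEE 754 behavior, nothing Godot-specific):

```python
# A value stored as 32-bit IEEE 754 carries roughly 7 significant decimal
# digits, so rounding the *displayed* value to 2-3 decimals genuinely loses
# information that the underlying float can represent.
import struct

def as_float32(x):
    """Round-trip a Python float through a 32-bit IEEE 754 encoding."""
    return struct.unpack('f', struct.pack('f', x))[0]

stored = as_float32(5.555)
# `stored` keeps far more precision than a "5.56" display suggests
```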
**Steps to reproduce:**
1. Create a `CollisionShape` node and set it to `CapsuleShape`
2. Enter 5.555 as height value and accept
3. Observe the value changing to 5.56
| discussion,topic:editor,confirmed | medium | Critical |
314,992,309 | every-programmer-should-know | What about Network ? | Needs some ❤️,good first issue | low | Minor |
|
315,060,176 | pytorch | [feature request] Stochastic Variance Reduced Gradient (SVRG) optimizer | I'm currently working on neural networks for my Master's thesis and I stumbled upon the optimizer described in this paper:
https://papers.nips.cc/paper/4937-accelerating-stochastic-gradient-descent-using-predictive-variance-reduction.pdf
and I managed to make an implementation for myself and I thought I could contribute it if it is accepted by the community.
There is one thing about this optimizer that makes it a bit tougher than others: it uses two sets of parameters (which means two models in my implementation): the main ones, and a snapshot of the main parameters taken every few epochs that is used to reduce the variance of the optimization.
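For reference, the two-model structure can be sketched in a few lines of plain Python (an illustration of the update rule from the paper, not a proposed torch.optim API; all names below are mine):

```python
# Pure-Python SVRG sketch on f(w) = (1/n) * sum_i 0.5 * (w - x_i)^2,
# whose per-sample gradient is simply grad_i(w) = w - x_i.
import random

def svrg(xs, lr=0.1, epochs=30, inner_steps=20, seed=0):
    rng = random.Random(seed)
    n = len(xs)
    w = 0.0
    for _ in range(epochs):
        snapshot = w                                    # the periodic "second model"
        full_grad = sum(snapshot - x for x in xs) / n   # full gradient at the snapshot
        for _ in range(inner_steps):
            i = rng.randrange(n)
            # stochastic gradient, corrected by the snapshot's gradient:
            g = (w - xs[i]) - (snapshot - xs[i]) + full_grad
            w -= lr * g
    return w

w = svrg([1.0, 2.0, 3.0, 6.0])
# w converges to the minimizer, i.e. the mean of the data (3.0 here)
```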
cc @vincentqb | feature,module: optimizer,triaged,needs research | low | Minor |
315,084,879 | pytorch | [caffe2] GANs | Is it possible to develop a Generative Adversarial Network with caffe2?
I'm struggling a bit with how to pass the gradient of the discriminator back to the generator.
Is there any minimal example?
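Not a caffe2-specific answer, but the essential wiring is small enough to show with a hand-differentiated toy (all numbers and names below are made up for illustration):

```python
# The generator's parameter gradient is obtained by chaining through the
# discriminator, whose own parameters stay frozen during the generator step.
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

w, b = 1.5, -0.5          # discriminator D(x) = sigmoid(w*x + b), frozen here
a = 0.2                   # generator G(z) = a*z
z = 1.0                   # one latent sample

d = sigmoid(w * (a * z) + b)
loss = -math.log(d)       # non-saturating generator loss -log D(G(z))

# chain rule: dloss/da = dloss/dD * dD/dx * dG(z)/da = -(1 - d) * w * z
grad_a = -(1.0 - d) * w * z
a_new = a - 0.1 * grad_a  # update ONLY the generator parameter

loss_new = -math.log(sigmoid(w * (a_new * z) + b))
# loss_new < loss: the generator improved against the fixed discriminator
```

In framework terms the idea is the same: build the generator and discriminator in one graph so the gradient of the generator's loss flows back through the (temporarily fixed) discriminator into the generator's parameters.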
Thanks | caffe2 | low | Major |
315,110,544 | pytorch | Autogenerate code example / tutorial outputs in documentation | It would be nice to have an automated way of doing the following:
- Generates output of code examples and includes it in the docs
- Error or warn if any of the code examples don't run (or crash) so it's easy to identify what to fix
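As a minimal sketch of both points using only the standard library (an illustration, not a claim about how PyTorch's Sphinx build should be wired):

```python
# doctest runs the examples embedded in docstrings, compares their real
# output against the documented output, and counts mismatches, so a doc
# build can error out whenever any example breaks.
import doctest

def add(a, b):
    """Add two numbers.

    >>> add(2, 3)
    5
    """
    return a + b

finder = doctest.DocTestFinder()
runner = doctest.DocTestRunner(verbose=False)
for test in finder.find(add, "add", globs={"add": add}):
    runner.run(test)

# runner.tries counts executed examples; runner.failures counts mismatches.
```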
Something like [nbsphinx](https://nbsphinx.readthedocs.io/en/0.3.2/) does both of the above. | todo,module: docs,good first issue,triaged,module: doc infra | medium | Critical |
315,116,368 | vscode | Search and replace across files corrupts files with CR line endings |
Issue Type: <b>Bug</b>
Execute these commands:
```cmd
git clone https://github.com/AArnott/pinvoke.git
cd pinvoke
git checkout b1239bc075f87f202a410973d7faa468a5e6b9cf
code .
```
In the Search panel, search for "LICENSE.txt" and replace with "LICENSE" (no other options such as regex are selected).
In the search and replace results (before commiting them), notice how the second line of most source files has a small change applied. This is good.
But notice how 4 files have an erroneous change applied to the first line:
* src/User32/User32+DLGITEMTEMPLATE.cs
* src/User32/User32+DLGTEMPLATE.cs
* src/User32/User32+MSG.cs
* src/User32/User32+PeekMessageRemoveFlags.cs
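A plausible mechanism for the corruption (my assumption; I have not confirmed it against VS Code's source) is that replacement offsets are computed against LF-delimited lines, which goes wrong for lines that end in a bare CR:

```python
# Classic-Mac line endings: every line ends with a bare CR ("\r").
text = "// first line\r// second line\r"

naive = text.split("\n")     # LF-only splitting sees a single "line"
proper = text.splitlines()   # honors \r, \n and \r\n

# naive  == ['// first line\r// second line\r']
# proper == ['// first line', '// second line']
```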
VS Code version: Code 1.22.1 (950b8b0d37a9b7061b6f0d291837ccc4015f5ecd, 2018-04-06T02:26:57.615Z)
OS version: Windows_NT x64 10.0.16299
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz (8 x 2112)|
|Memory (System)|15.93GB (2.72GB free)|
|Process Argv|C:\Program Files\Microsoft VS Code\Code.exe .|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (16)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-markdownlint|Dav|0.14.1
githistory|don|0.4.0
xml|Dot|1.9.2
gitlens|eam|8.2.1
EditorConfig|Edi|0.12.1
git-project-manager|fel|1.5.1
docker-linter|hen|0.5.0
docomment|k--|0.0.18
azure-account|ms-|0.4.0
cpptools|ms-|0.16.1
csharp|ms-|1.14.0
PowerShell|ms-|1.6.0
team|ms-|1.133.0
vscode-docker|Pet|0.0.26
java|red|0.23.0
vsc-docker|Zim|0.34.0
</details>
<!-- generated by issue reporter --> | bug,search,confirmed | low | Critical |
315,148,165 | angular | Generated output in ngsw.json is wrong | ## I'm submitting a...
<pre><code>
[x] Bug report
[ ] Feature request
</code></pre>
## Current behavior
When running the build script with the inline argument `--base-href /current/`, and using absolute paths for the files in `ngsw.json`, it produces the path `/current/index.html` in the built `ngsw.json`. Using relative paths in `ngsw.json` does not help.
## Expected behavior
Either use relative paths in `ngsw.json`, or generate the output according to the `--base-href` value.
## Environment
<pre><code>
Angular version: 5.0.0
@angular/service-worker: 5.2.10
Browser:
- [x] Chrome (desktop) version 65
For Tooling issues:
- Node version: 8.9.4
- Platform: Windows
</code></pre>
| type: bug/fix,freq1: low,area: service-worker,state: needs more investigation,P4 | low | Critical |
315,157,569 | TypeScript | In JS, type annotations should not block errors from the rest of the program | In a JavaScript file that is checked by Flow, we may see code like this (from create-react-app/packages/react-error-overlay/src/utils/parseCompileError.js)
```js
export type ErrorLocation = {|
fileName: string,
lineNumber: number,
colNumber?: number,
|}
function parseCompileError(message: string): ?ErrorLocation {
const lines: Array<string> = message.split('\n');
for (let i = 0; i < lines.length; i++) {
// ...................
return fileName && lineNumber ? { fileName, lineNumber, colNumber } : null;
}
```
This behaves badly when the TypeScript compiler compiles it with checkJs on:
**Expected behavior:**
1. Errors on the type declaration and all the type annotations.
2. However, ErrorLocation should be declared as a type alias, and message, lines, etc should all have their declared types.
3. If they are used incorrectly, they should have errors.
**Actual behavior:**
1. Errors on the type declaration and all the type annotations.
In the language service:
2. message, lines, etc have the correct types, **but ErrorLocation is type any**.
3. If they are used incorrectly, they have errors.
In batch compilation:
2. No errors show up except those from (1), even if there are lots of other javascript files without type annotations.
| Suggestion,Awaiting More Feedback,Domain: JavaScript | low | Critical |
315,215,285 | kubernetes | Guidelines for node-level plugin auth | **Overview**
I get a lot of questions about how to set up mutual authentication for node-level cluster plugins. Examples include:
- Local volume provisioning
- Device plugins (modify node object)
- Device metrics (scraping)
- CRI Streaming
- _Are there others I'm missing?_
For the general case, our recommendation has been to use a service mesh (such as Istio), or NetworkPolicy (although this isn't really a full solution). But as both of those are optional features on Kubernetes, I don't think they are sufficient for core functionality.
We need some general guidelines for plugins, and may need to build some additional capabilities or libraries to support them.
The architecture we need to target includes:
- A central control component, running as a deployment in the cluster. Let's call this the "CCC".
- A DaemonSet that handles node operations, and communicates back to the controller. Let's call this the "DSP" (DaemonSet Pod).
- K8s API server
Things we might need to solve:
1. CCC -> apiserver (solved)
a. [x] server (apiserver) authn -- cluster CA
b. [x] client (CCC) authn -- Use service account tokens
c. [x] authz -- RBAC
2. CCC -> DSP
a. [ ] server (DSP) authn
b. [x] client (CCC) authn -- Service account tokens + TokenReview
c. [x] authz -- RBAC + SubjectAccessReview
3. DSP -> CCC
a. [ ] server (CCC) authn
b. [ ] client (DSP) authn
c. [ ] authz
4. DSP -> apiserver
a. [x] server (apiserver) authn -- cluster CA
b. [ ] client (DSP) authn
c. [ ] authz
Optionally, we could only solve one of (2) and (3), and just require communications to happen in a single direction (i.e. pull or push). We also might be able to avoid (4) by delegating to the CCC (i.e. require all DSP requests to the apiserver to go through the CCC).
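For (2b) concretely, the server side can validate the client's service account token by POSTing a TokenReview to the apiserver — a sketch, with the token value as a placeholder:

```yaml
apiVersion: authentication.k8s.io/v1
kind: TokenReview
spec:
  token: "<bearer token presented by the client>"
```

The response's `status.authenticated` and `status.user` fields can then feed a SubjectAccessReview to cover (2c).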
/kind technical-debt
/sig auth
/cc @dashpole @vishh @davidz627 @Random-Liu | kind/cleanup,kind/feature,sig/auth,priority/important-longterm,lifecycle/frozen | low | Major |
315,269,253 | go | testing: show rusage statistics for benchmarks | When optimizing CPU usage, it would be useful to get CPU usage information from Go benchmarks.
Could we add, where supported, [getrusage](http://man7.org/linux/man-pages/man2/getrusage.2.html) calls before and after running each benchmark, in addition to wall-time?
I have no opinion about the API for requesting those statistics. Perhaps adding a `-test.stats` or `-test.resources` flag would work?
For example: `-test.stats=utime,stime,maxrss`.
This doesn't have to be rusage specific. For instance, it could be extended with statistics from the [perf_event](http://man7.org/linux/man-pages/man2/perf_event_open.2.html) API at some point.
I could send a CL if this is a welcome change. | NeedsInvestigation | low | Minor |
315,348,817 | rust | Detect recursive instantiation of generic functions | #### We currently have a quite helpful diagnostic for unconditionally recursive functions:
```rust
pub fn recur() {
recur();
}
```
```
warning: function cannot return without recurring
--> src/main.rs:1:1
|
1 | pub fn recur() {
| ^^^^^^^^^^^^^^ cannot return without recurring
2 | recur();
| ------- recursive call site
|
= note: #[warn(unconditional_recursion)] on by default
= help: a `loop` may express intention better if this is on purpose
```
#### And infinitely sized recursive types:
```rust
pub struct S {
s: S,
}
```
```
error[E0072]: recursive type `S` has infinite size
--> src/main.rs:1:1
|
1 | pub struct S {
| ^^^^^^^^^^^^ recursive type has infinite size
2 | s: S,
| ---- recursive without indirection
|
= help: insert indirection (e.g., a `Box`, `Rc`, or `&`) at some point to make `S` representable
```
#### But no good error for infinite instantiation of generic functions.
The following is minimized from @LPGhatguy's syntax tree library ([original playground](https://play.rust-lang.org/?gist=c21ebe815e8cc00680bfd85d51ce8d0d)).
```rust
trait Serializer {
type Associated;
fn leaf() -> Self::Associated { unimplemented!() }
}
pub enum Expression {
Leaf,
Node(Box<Expression>),
}
fn print<S: Serializer>(e: Expression) {
match e {
Expression::Leaf => drop(S::leaf()),
Expression::Node(e) => print::<Wrapper<S>>(*e),
}
}
use std::marker::PhantomData as Wrapper;
impl<S: Serializer> Serializer for Wrapper<S> {
type Associated = S::Associated;
}
enum Json {}
impl Serializer for Json {
type Associated = ();
}
fn main() {
print::<Json>(Expression::Leaf);
}
```
Here the instantiation of `print::<Json>` requires instantiating `print::<Wrapper<Json>>` which calls `print::<Wrapper<Wrapper<Json>>` which calls `print::<Wrapper<Wrapper<Wrapper<Json>>>`... (The use case in this example is noteworthy because it is conceptually sensible; the trouble happens only when combined with Rust's approach of monomorphizing generic functions. [Analogous code in Swift](https://iswift.org/playground?XO5tjo&v=4) where generics are not monomorphized does not hit the same overflow.)
As of rustc 1.27.0-nightly we get an unhelpful message with a recommendation that can't work. It would be better to detect this pattern of a generic function generating a tower of recursive instantiations.
```
error[E0275]: overflow evaluating the requirement `<Json as Serializer>::Associated`
|
= help: consider adding a `#![recursion_limit="128"]` attribute to your crate
error: aborting due to previous error
``` | A-type-system,C-enhancement,A-diagnostics,T-compiler,D-terse,T-types | low | Critical |
315,391,388 | go | x/sys/linux/perf: add package for Linux perf tracing | Linux ships with a robust suite of performance counters and samplers accessed via the [perf_event_open](http://man7.org/linux/man-pages/man2/perf_event_open.2.html) system call.
Recent [changes](https://go-review.googlesource.com/c/sys/+/105756) to x/sys/unix make it possible to call perf_event_open from Go, but using the perf events system correctly is notoriously hard.
Having recently implemented a wrapper for perf events in Go, I would like to generalize and contribute it upstream.
As regards the code location: x/sys/windows seems to have specialized subpackages, so maybe similar structure under x/sys/unix would work. | Proposal,Proposal-Accepted | medium | Major |
315,416,030 | puppeteer | Inconsistent text rendering in headless mode | **EDIT:** The fix is to add `--font-render-hinting=none` to the launch args. E.g.
```
var browser = await puppeteer.launch({
headless: true,
args: ['--font-render-hinting=none']
});
```
**Original Comment:**
Font spacing seems to be inconsistent between headless and non-headless mode.
This is likely a Chromium bug for Puppeteer versions 1.2.0 and above.
### Steps to reproduce
**Tell us about your environment:**
* Puppeteer version: 1.2
* Platform / OS version: Linux
* URLs (if applicable):
* Node.js version: 8.6.0
**What steps will reproduce the problem?**
1. screenshot.js
```
'use strict';
const puppeteer = require('puppeteer');
(async() => {
const browser = await puppeteer.launch({ headless: true }); // toggle to false
const page = await browser.newPage();
await page.goto('file:///tmp/test.html');
await page.waitFor(5000);
await page.screenshot({path: '/tmp/screenshot.png'});
await browser.close();
})();
```
2. test.html
```
<html>
<head>
<link rel="stylesheet"
href="https://fonts.googleapis.com/css?family=Lato">
<style>
body {
font-family: 'Lato', serif;
font-size: 15px;
}
div.a {
line-height: 0.5;
}
</style>
</head>
<body>
<div class="a">
<div>aaaaa..............................................................................|111</div><br>
<div>qwertyasdfzxcvyuiohjklbnm................................|222</div><br>
<div>longlonglonglonglonglonglonglongshorty......|333</div>
</div>
</body>
</html>
```
3. `node /tmp/screenshot.js`, then repeat with `headless: false`
**What is the expected result?**
Text is correctly aligned and looks the same as opening the HTML in browser. `headless: false`

**What happens instead?**
Text is misaligned with `headless: true`
 | feature,upstream,chromium | high | Critical |
315,439,177 | ant-design | Slider lacks an API to disable selection of a specific mark | - [ ] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate.
### What problem does this feature solve?
When `included=false`, the marks are treated as independent points; in that case, one particular point in `marks` needs to be set to a disabled state. However, the existing API only supports `disabled` on the whole slider, which cannot cover this use case.
### What does the proposed API look like?
No concrete proposal yet.
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | help wanted,Inactive | low | Major |
315,450,582 | vscode | editor.action.sortLinesAscending has confusing sort order for symbols | When sorting lines using `editor.action.sortLinesAscending` ("Sort Lines Ascending" via the Command Palette) the lines are sorted in a very strange order when the comparison involves a symbol character.
I would expect that `.` (period) would be sorted before `_` (underscore) because in ASCII, period is 46 and underscore is 95.
However, vscode uses `localeCompare` as its sorting method (see [sortLinesCommand.ts:76]( https://github.com/Microsoft/vscode/blob/54e7055c12f4e9a80f44c67758a12cf248d5f374/src/vs/editor/contrib/linesOperations/sortLinesCommand.ts#L76)), which results in underscore being sorted before period. See the repro steps below for an example.
I saw a previous issue regarding this (#15516) but it was closed because the ASCII ordering was in fact correct. In the case I've outlined below, the ordering is not correct.
I'm not sure what the correct solution would be but I think that for ASCII symbols, ASCII ordering should be obeyed. I do realise that "Sort Lines Ascending" is an ambiguous term - ascending according to what criteria? - so perhaps the command could be renamed to something more specific, or you could provide different default sorting options.
<!-- Use Help > Report Issue to prefill these. -->
- VSCode Version: 1.22.1 Commit 950b8b0d37a
- OS Version: Windows 10 Enterprise Version 1709 OS Build 16299.371
Steps to Reproduce:
1. Copy the following into a new file:
```
a_b.txt
a_b_c.txt
```
2. Highlight the file contents.
3. Open the Command Palette and select "Sort Lines Ascending".
**Expected:**
The file is sorted in the following order:
```
a_b.txt
a_b_c.txt
```
**Actual:**
The file is sorted in the following order:
```
a_b_c.txt
a_b.txt
```
<!-- Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
| feature-request,editor-sorting | medium | Critical |
315,453,246 | ant-design | The `Row` component will overflow when setting gutter | - [ ] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate.
### Version
3.4.1
### Environment
os: mac os 12, browser: chrome 65
### Reproduction link
[https://zhuchuji.github.io/antd-issues/](https://zhuchuji.github.io/antd-issues/)
### Steps to reproduce
1. Layout with `Row` and `Col`
2. Set `gutter` to the `Row`
### What is expected?
The `Row` should not overflow
### What is actually happening?
The `Row` overflows and produces the scrollbar
---
Antd sets `margin-left` and `margin-right` to a negative value on `Row` to compensate for the `padding-left` of the first `Col` child and the `padding-right` of the last `Col` child, which makes the `Row` box larger than its parent because it is set to `box-sizing: border-box`. Users have to provide a container with matching padding to keep the `Row` from overflowing, which is a big problem for layout, especially in responsive designs, because they have to keep gutter values in variables and mirror them in the container's padding. It is very inconvenient. If the grid system removed the first column's `padding-left` and the last column's `padding-right`, the `Row` would not need a negative margin at all. Perhaps this could be fixed by adding an inner container to `Row`, so that you don't have to detect which column is the first child and which is the last.
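For reference, the mechanism described above amounts to roughly the following generated CSS (values shown for `gutter={16}`; selectors abbreviated):

```css
/* Row: negative margins to cancel the outer columns' padding */
.ant-row { margin-left: -8px; margin-right: -8px; }

/* Every Col: half the gutter on each side */
.ant-col { padding-left: 8px; padding-right: 8px; }
```

Inside a padding-less parent, the Row's border box is therefore 16px wider than the parent, which produces the scrollbar.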
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | Inactive | high | Critical |
315,462,393 | rust | proc_macro::TokenStream: provide AST node kind hint | ```rust
/// Enum representing AST nodes a #[proc_macro_attribute] may be applied to
// Bikeshedding welcome
pub enum SyntaxNodeKind {
// when crates as macro inputs aren't pretty-printed as modules (#41430)
Crate,
Item, // could be module, function, impl, etc. `syn` can figure the rest out
Statement,
Expression,
ExternItem, // since item kinds are restricted in `extern {}`
}
impl TokenStream {
/// If this token stream represents a valid syntax tree node, return its kind.
/// Returns `None` for raw tokens
// Alternately it could simply panic when not available because that would only happen in
// `#[proc_macro]` which should expect only raw tokens anyway
pub fn syntax_node_kind(&self) -> Option<SyntaxNodeKind> {}
}
```
This would be exclusively for `#[proc_macro_attribute]`s which parse their input as AST nodes:
* attributes that only accept one kind could assert equality and error immediately instead of attempting to parse their expected kind (allowing them to emit a concise error message instead of "expected [token] got [token]")
* attributes that accept multiple kinds won't have to guess at what node kind they should attempt to parse
cc @alexcrichton @dtolnay @petrochenkov
| C-feature-request,A-macros-2.0 | low | Critical |
315,565,723 | go | cmd/compile: avoiding zeroing new allocations when possible | Consider:
```go
func f() *int {
x := new(int)
*x = 1
return x
}
```
The first line gets translated into `x = newobject(type-of-int64)`, which calls `mallocgc` with a "needszero" argument of true. But it doesn't need zeroing: it has no pointers, and data gets written to the whole thing.
Same holds for:
```go
func f() *[2]int {
x := new([2]int)
x[0] = 1
x[1] = 2
return x
}
```
and more interestingly:
```go
func f() *[1024]int {
x := new([1024]int)
for i := range x {
x[i] = i
}
return x
}
```
We could detect such scenarios in the SSA backend and replace the call to `newobject` to a call to a (newly created) `newobjectNoClr`, which is identical to `newobject` except that it passes `false` to `mallocgc` for `needszero`.
Aside: The SSA backend already understands `newobject` a little. It removes the pointless zero assignment from:
```go
func f() *[2]int {
x := new([2]int)
x[0] = 0 // removed
return x
}
```
although not from:
```go
func f() *[2]int {
x := new([2]int)
x[0] = 1
x[1] = 0 // not removed, but could be
return x
}
```
Converting to `newobjectNoClr` would probably require a new SSA pass, in which we put values in store order, detect calls to `newobject`, and then check whether subsequent stores obviate the need for zeroing. And also at the same time eliminate unnecessary zeroing that the existing rewrite rules don't cover.
This new SSA pass might also someday grow to understand and rewrite e.g. calls to `memmove` and `memequal` with small constant sizes.
It is not obvious to me that this pass would pull its weight, compilation-time-wise. Needs experimentation. Filing an issue so that I don't forget about it. :)
| Performance,compiler/runtime | low | Major |
315,567,620 | go | runtime: sparse zeroing in mallocgc | This is a performance idea; it needs experimentation to see whether it is worth it.
mallocgc accepts a flag to not zero the new allocation. It is used in a few places in the runtime where we know already that we'll entirely overwrite the new memory; #24926 contemplates having the compiler use it too.
mallocgc must however always zero the new allocation if it contains pointers; runtime uses check for pointers before asking for raw memory. However, we could change the meaning of the "don't zero" flag to mean "I'm going to overwrite all the memory". mallocgc could then decide to only zero the pointers in the new memory, instead of zeroing everything. The decision to only zero pointers might be helpful if pointers are sparse in the type. Deciding whether pointers are sparse in the type is probably something we would do at compile time and set a flag in the type.
| Performance,compiler/runtime | low | Major |
315,573,159 | go | cmd/compile: sometimes issue rematerializable values early | ```go
package p
var (
s *int
b bool
)
func f() {
var q *int
if b {
q = new(int)
} else {
q = new(int)
}
s = q
}
```
This code is a bit silly, but it's the smallest reproduction I have handy. :)
When compiled, this code tests b, and on each branch it contains an LEAQ of `type.int`. The LEAQ should be hoisted above the branch, since it is needed immediately in each case, and there are registers to spare.
The CSE pass does its work, and there is a single LEAQ instruction going into regalloc. However, early in regalloc, we decide to issue it every place we need it:
```go
if s.values[v.ID].rematerializeable {
// Value is rematerializeable, don't issue it here.
// It will get issued just before each use (see
// allocValueToReg).
for _, a := range v.Args {
a.Uses--
}
s.advanceUses(v)
continue
}
```
This is usually the right decision--disabling this bit of code causes an overall regression. But as the initial code shows, there are instances in which it'd be better to issue the rematerializable value right away. I'm not sure exactly what the right set of conditions is, though.
cc @cherrymui @randall77
| Performance,compiler/runtime | low | Minor |
315,584,902 | go | x/build/cmd/gerritbot: don't push to Gerrit if checklist hasn't been deleted | In https://github.com/golang/go/pull/24927 the checklist was kept in the commit message and gopherbot pushed it to Gerrit.
Gopherbot could instead just look for "Please ensure you adhere to every item in this list" and, if found, refuse to push it to Gerrit, saying something on the PR instead.
/cc @andybons | Builders | low | Minor |
315,621,695 | vscode | Option to configure editor scrollbar to be opaque | Currently, if editor.renderLineHighlight is set to "line" or "all", the highlight extends into the scrollbar. It would be helpful if it stopped just left of the scrollbar. Here is my rationale:
- The line position isn't the same as the scrollbar position, except by coincidence. They are independent, however the highlighting suggests a linkage of some kind.
- With certain color schemes it can be confusing at a quick glance to determine where on the scrollbar I actually am.
Example of intrusion of line highlight into the scrollbar:

Notice how similar the two highlighted areas look on the scrollbar? On closer inspection I can tell that I am actually at the lower point on the scrollbar, because of the brown horizontal line.
As a workaround, I could easily change the color of the highlight to prevent confusion. But, I'm having trouble understanding why the highlight extends into the scrollbar in the first place.
Thanks for your consideration!
VS Code 1.22.1 | feature-request,editor-scrollbar | medium | Major |
315,669,787 | flutter | Rename LICENSE to NOTICES and update code accordingly | We currently have a LICENSES file in the engine repo. Based on feedback from our legal team, we should (to make things clearer) rename that to NOTICES. This will involve updating the license script documentation to write to NOTICES instead of LICENSE.
We should also update the flutter tool to read from the NOTICES file of each package, if there is one, and only if there isn't fall back on the LICENSE file.
The file that we put into the application package should also be renamed NOTICES. The framework should be updated accordingly. | tool,framework,engine,P2,team-engine,triaged-engine | low | Minor |
315,731,013 | flutter | Update Tonic to better indicate error conditions. | Currently, Tonic has sub-optimal mechanisms of indicating errors in some of its functions. The convention followed (somewhat inconsistently) is [to log to standard error and then exit the process](https://github.com/fuchsia-mirror/topaz/blob/efc5ace130b83ed77accc67788ce28998d0b10fc/lib/tonic/file_loader/file_loader.cc#L176). This often causes unexpected failures that are hard to diagnose. For example, on iOS, logging to standard error does not redirect the logs to tooling that may potentially be listening for errors. Syslog must be used instead on that platform. Also, while calling `exit` on iOS does wind up calling `abort` (which the tooling identifies as process death), the same call is also used on the iOS simulator which does exit the process with the specified code.
We should rework error handling and log propagation in Tonic. | team,engine,P2,team-engine,triaged-engine | low | Critical |
315,818,169 | go | x/build/cmd/gerritbot: perform gofmt checks before pushing to Gerrit | In https://golang.org/cl/99337 a file using CRLF instead of LF line endings was added. When using git-codereview to send CLs, there is a gofmt check which prevents this. But it seems there is no corresponding check when sending CLs via Github PRs.
It would be nice to run the gofmt check also for PRs submitted via Github.
/cc @bradfitz @andybons | help wanted,Builders,NeedsFix | low | Minor |
315,824,740 | react-native | [SectionList][inverted] SectionList headers are sticky-at-the-top-footers if the list is inverted | - [x] I have reviewed the [documentation](https://facebook.github.io/react-native)
- [x] I have searched [existing issues](https://github.com/facebook/react-native/issues)
- [x] I am using the [latest React Native version](https://github.com/facebook/react-native/releases)
## Environment
Environment:
OS: macOS High Sierra 10.13.1
Node: 8.6.0
Yarn: 1.5.1
npm: 4.6.1
Watchman: 4.9.0
Xcode: Xcode 9.0.1 Build version 9A1004
Android Studio: Not Found
Packages: (wanted => installed)
react: 16.3.1 => 16.3.1
react-native: 0.55.3 => 0.55.3
## Steps to Reproduce
Clone [this repository](https://github.com/terrysahaidak/ReactNative-SeactionList-Bug-Example) and run it via `react-native run-ios`.
Observe the section separators are in wrong (randomly?) places.
## Expected Behavior
The section list headers should be at the top of the section, instead of ending up at the bottom the way section footers do in an inverted list (and those aren't sticky).
#### Expected gif:

## Actual Behavior
The section headers are footers instead of headers. They are sticky because of https://github.com/facebook/react-native/pull/17762 but still footers, not headers.
Sample code:
```jsx
export default class App extends React.Component {
render() {
return (
<View style={s.container}>
<SectionList
inverted
sections={mock.sections}
maxToRenderPerBatch={10}
initialNumToRender={10}
style={{ flex: 1 }}
keyExtractor={(item) => item.messageId}
renderSectionHeader={sectionProps => (
<SectionHeader
{...sectionProps}
/>
)}
renderItem={({ item }) => (
<ListItem
item={item}
/>
)}
ItemSeparatorComponent={ItemSeparator}
        />
</View>
);
}
}
```
#### Problem Gif:

| Resolution: PR Submitted,Component: SectionList,Bug | high | Critical |
315,887,577 | pytorch | [Feature request] LayerNormLSTMCell and LayerNormLSTM | It could be convenient to have `LayerNormLSTMCell` and `LayerNormLSTM` implemented in `torch.nn`. | triaged,enhancement | low | Minor |
316,002,759 | kubernetes | Allow use of $ref in CRD validation schema | (combined with https://github.com/kubernetes/kubernetes/issues/76965)
### `$ref` scenario 1:
In Go, we can define a type and reuse it in several other types. When generating a CRD from these Go types, we want to use a reference for that type in the validation schema for better readability.
```
package v1beta1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
type DataType struct {
Value int `json:"value,omitempty"`
}
type MyKindSpec struct {
Data []DataType `json:"data,omitempty"`
}
type MyKindStatus struct {
Data []DataType `json:"data,omitempty"`
}
// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// MyKind
// +k8s:openapi-gen=true
// +resource:path=mykinds
type MyKind struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec MyKindSpec `json:"spec,omitempty"`
Status MyKindStatus `json:"status,omitempty"`
}
```
**What you expected to happen**:
In the CRD for this type, we expect to see the validation as
```
spec:
properties:
data:
items:
$ref: '#/definitions/<...>.DataType'
```
### `$ref` scenario 2:
Original type information is lost when CRD authors [expand their openapi definitions](https://github.com/ant31/crd-validation/blob/eab5f7e90b2c/pkg/convert_types.go#L81-L89). `openapi-gen`[-based operator code is similarly affected](https://github.com/coreos/prometheus-operator/blob/master/example/prometheus-operator-crd/alertmanager.crd.yaml). CRDs forbid `$ref` (implicit GVK link) and `x-*` properties (GVK hints).
_This information enables early, developer-friendly policy checks and useful manifest transforms_. Our CICD automation extends the base k8s schema with custom policy and relies on the schema's `$ref`-erential knowledge to inspect and operate on target resource types, anywhere they appear, eg. deeply nested **or embedded within custom resources**. Prior to validation, a simple callback might first:
**ObjectMeta**
- Require teams set `app.kubernetes.io/*` labels.
- Require teams set org-specific cost/usage/alert aggregation labels.
- Back and forward compat handling, eg. `app`, `app.kubernetes.io/name`, `team`, ...
**Container**
- Inject dynamic ENV and volumes.
- Resolve string properties with templates.
- Require several `resources.limits.*` are set.
**PodSpec** ... **PodSecurityContext** ... **Volume** ...
**Deployment** ... **CronJobSpec** ... **JobSpec** ...
**ServiceSpec** ... **SecurityContext** ... _**(many more)**_ ...
`$ref` expansion makes CRDs opaque to our automation. A pre-processing step is required to repair the type information and restore visibility into CRDs. Options considered are:
- Import all golang-based `openapi-gen`'ed CRDs _directly_ (wherever they write `openapi_generated.go`) and call each `GetOpenAPIDefinitions` function. Merge all results.
- Run `openapi-gen` on k8s and CRD sources ourselves. Avoids multiple `GetOpenAPIDefinitions` calls and potential linking problems.
- Use `kube-apiserver`. Run `hack/update-openapi-spec.sh` with extra CRDs "builtin" and enabled so `GET /openapi/v2` simply returns what we need.
I'd love `$ref` support, but I understand why that could be difficult or undesirable. Alas, if CRDs permitted a known property, eg. `x-kubernetes-group-version-kind` (or similar), CRD libraries can set it accordingly for _each unique `$ref` expansion_ (no need to nest GVK hints), allowing consumers (or even k8s itself) to restore this relationship if and when useful.
Are there options we can move on? | sig/api-machinery,kind/feature,priority/important-longterm,area/custom-resources,lifecycle/frozen | high | Critical |
316,044,600 | flutter | Would like a tool to do symbolication (ideally, would like "flutter logs"/"flutter run" to do it automatically) | Related to https://github.com/flutter/flutter/issues/1016
Would be really nice to have a tool to symbolicate, e.g. follow the steps of:
https://github.com/flutter/engine/wiki/Symbolicating-production-crash-stacks
Including from custom engine builds. We hit an issue recently with customer:gold where they are building their own engine with cherry-picks, thus our normal symbolication instructions break down. Would be nice if we could point them to a script to symbolicate. | c: new feature,tool,engine,P2,team-engine,triaged-engine | low | Critical |
316,048,492 | go | x/build: add Gerrit plugin to give us REST endpoint for last-modified time of anything globally | Gerrit doesn't have pubsub, so we instead have a dummy Google account that subscribes to all possible email spam from Gerrit. That dummy Google account has an email address with a domain name whose MX record goes to an SMTP server we run (see https://pubsubhelper.golang.org/).
That lets us get realtime updates from Gerrit so gopherbot can react in realtime to changes in Gerrit.
But it only works for actions for which Gerrit generates an email.
Some actions on Gerrit, notably modifying hashtags on a CL, do NOT generate an email. This is good, because such email would be spammy. It is bad, though, in that gopherbot cannot react quickly.
It would be nice if maintner (https://maintner.golang.org/) could have a cheap way to poll Gerrit for changes. Currently no methods are cheap enough, which is why we do the SMTP server thing.
We resort to polling every 5 minutes or so (where polling == `git ls-remote`).
In particular, we want to know when the "meta" refs change (`refs/changes/98/103398/meta`).
I know of no Gerrit REST API to get that information cheaply. (and definitely no git API)
I don't mind whether this works at the project level or the server level. Project might be better, as we want to monitor all of go.googlesource.com/* but only part of code.googlesource.com.
| Builders | low | Minor |
316,070,401 | pytorch | [caffe2] IfOp | Does anybody know how to use the IfOp? Are there any examples?
| caffe2 | low | Minor |
316,076,089 | pytorch | [caffe2] Training and inference | Hi,
I have a train net and a validation net declared in the same workspace. It seems that the weights are shared as both nets are in the same workspace. Now, my question is: when applying:
CAFFE_ENFORCE(workspace.RunNet(trainNet.predict().net().name()));
I'm expecting an update of my weights as I added some training operators.
But when I'm running:
CAFFE_ENFORCE(workspace.RunNet(validationNet.predict().net().name()));
Is this going to do a backprop? How to prevent the net from doing a weight update on the validation pass?
| caffe2 | low | Minor |
316,110,105 | neovim | cannot change colors of existing :terminal with 'termguicolors' | <!-- Before reporting: search existing issues and check the FAQ. -->
- `nvim --version`:
```
NVIM v0.2.2
Build type: Release
LuaJIT 2.0.5
Compilation: /usr/local/Homebrew/Library/Homebrew/shims/super/clang -Wconversion -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=1 -DNVIM_MSGPACK_HAS_FLOAT32 -DNVIM_UNIBI_HAS_VAR_FROM -DNDEBUG -DMIN_LOG_LEVEL=3 -Wall -Wextra -pedantic -Wno-unused-parameter -Wstrict-prototypes -std=gnu99 -Wimplicit-fallthrough -Wvla -fstack-protector-strong -fdiagnostics-color=auto -DINCLUDE_GENERATED_DECLARATIONS -I/tmp/neovim-20180209-92407-udkzoo/neovim-0.2.2/build/config -I/tmp/neovim-20180209-92407-udkzoo/neovim-0.2.2/src -I/usr/local/include -I/usr/local/include -I/usr/local/include -I/usr/local/include -I/usr/local/include -I/usr/local/include -I/usr/local/opt/gettext/include -I/usr/include -I/tmp/neovim-20180209-92407-udkzoo/neovim-0.2.2/build/src/nvim/auto -I/tmp/neovim-20180209-92407-udkzoo/neovim-0.2.2/build/include
Compiled by [email protected]
Features: +acl +iconv +jemalloc +tui
See ":help feature-compile"
system vimrc file: "$VIM/sysinit.vim"
fall-back for $VIM: "/usr/local/Cellar/neovim/0.2.2_1/share/nvim"
Run :checkhealth for more info
```
- Vim (version: ) behaves differently? `N/A`
- Operating system/version: macOS 10.13.4
- Terminal name/version: kitty
- `$TERM`: xterm-kitty
### Steps to reproduce using `nvim -u NORC`
```
nvim -u NORC --cmd "set termguicolors"
:terminal
printf '\x1b[31mRED\x1b[0m\n'
<C-\><C-n>:let g:terminal_color_1 = '#654321'
printf '\x1b[31mRED\x1b[0m\n'
```
### Actual and expected behaviour
Terminal colors don't change during terminal lifetime.
### Desired
Per [https://neovim.io/doc/user/nvim_terminal_emulator.html](https://neovim.io/doc/user/nvim_terminal_emulator.html), `g:terminal_color_#` is read only at terminal startup. As expected, attempting to change these variables doesn't cause any color change. Given the discussion at [#4696](https://github.com/neovim/neovim/issues/4696), changes to colors of the underlying terminal emulator also don't propagate into the neovim :terminal.
This makes drastic colorscheme changes impossible. I can use :colorscheme to change from a light-background to a dark-background neovim, but I have no way to update terminal colors accordingly, which I usually do between day and night. This can make the terminal either extremely unpleasant to use in one of the two cases, or slightly unpleasant to use all of the time.
Is there any way both to use termguicolors in most of neovim while also allowing the terminal to either reflect emulator colors or to have adjustable colors?
| enhancement,terminal | medium | Critical |
316,113,437 | go | path/filepath: WalkFunc is called with a "file does not exist" error if a file is deleted between readdirnames and lstat (inside walk). | Please answer these questions before submitting your issue. Thanks!
### What version of Go are you using (`go version`)?
% go version
go version go1.10 linux/amd64
### Does this issue reproduce with the latest release?
Yes.
### What operating system and processor architecture are you using (`go env`)?
% go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/amistry/.cache/go-build"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/amistry/go-src"
GORACE=""
GOROOT="/home/amistry/go"
GOTMPDIR=""
GOTOOLDIR="/home/amistry/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build740843021=/tmp/go-build -gno-record-gcc-switches"
### What did you do?
Call filepath.Walk on a directory where files are being constantly deleted (and added).
### What did you expect to see?
No errors.
### What did you see instead?
"file does not exist" errors. If a file doesn't exist, why are you telling me about it?
I guess this is more of a question of semantics. If a file is deleted between the directory listing and lstat, should the caller be told, or should this case be hidden? If a file is deleted before the readdirnames, then there's no error. If a file is added between readdirnames and lstat, it is hidden from the caller. I'd argue that hiding the error is more consistent and expected.
| NeedsInvestigation | low | Critical |
316,118,898 | pytorch | [Caffe2] TensorProtosDBInput AttributeError | When I run the `lmdb_create_example.py --output_file ~/git/caffe2/build/caffe2/python/examples/test.lmdb`
I got the below error:
>>> Write database...
Inserted 0 rows
Inserted 16 rows
Inserted 32 rows
Inserted 48 rows
Inserted 64 rows
Inserted 80 rows
Inserted 96 rows
Inserted 112 rows
Checksum/write: 1744642
>>> Read database...
Traceback (most recent call last):
File "/home/bill/git/caffe2/build/caffe2/python/examples/lmdb_create_example.py", line 107, in <module>
main()
File "/home/bill/git/caffe2/build/caffe2/python/examples/lmdb_create_example.py", line 103, in main
read_db_with_caffe2(args.output_file, checksum)
File "/home/bill/git/caffe2/build/caffe2/python/examples/lmdb_create_example.py", line 71, in read_db_with_caffe2
db=db_file, db_type="lmdb")
File "/home/bill/git/caffe2/build/caffe2/python/model_helper.py", line 436, in TensorProtosDBInput
return helpers.db_input.db_input(
AttributeError: 'module' object has no attribute 'db_input'
However, I checked and /git/caffe2/build/caffe2/python/helpers/db_input.db_input does exist.
I'm new to Caffe2.
Does anyone know how to fix this?
- PyTorch or Caffe2: Caffe2
- OS: Ubuntu 16.04
- PyTorch version:
- How you installed PyTorch (conda, pip, source): source
- Python version: 2.7
- CUDA/cuDNN version: 9.0
- GPU models and configuration: NVRM version: NVIDIA UNIX x86_64 Kernel Module 390.30 Wed Jan 31 22:08:49 PST 2018
- GCC version (if compiling from source): 5.4.0 20160609
- CMake version: 3.5.1
- Build command you used (if compiling from source):
- Versions of any other relevant libraries:
In addition, including the following information will also be very helpful for us to diagnose the problem:
- A script to reproduce the bug. Please try to provide as minimal of a test case as possible.
- Error messages and/or stack traces of the bug
- Context around what you are trying to do
| caffe2 | low | Critical |
316,150,741 | go | gccgo: objcopy needs to be required by configure | gcc 7.3.0
"objcopy" is actually a required executable for building gccgo, as I understand it.
Therefore, it would be nice to flag it as such, in libgo/configure, rather than have the build bomb out later, mysteriously.
Oddly, there is already a line related to it, in configure.ac:
AC_CHECK_TOOL(OBJCOPY, objcopy, missing-objcopy)
but that does not seem to trigger a clear error to the user.
I'm no autoconf expert, but I'm guessing modifying the above line as follows might do the trick:
AC_CHECK_TOOL(OBJCOPY, objcopy)
if test "x$OBJCOPY" = "x"; then
AC_MSG_ERROR([objcopy from GNU binutils required for gccgo])
fi
| NeedsFix | low | Critical |
316,194,024 | three.js | Spatial Index and Occlusion culling introducing into core | # Spatial Index
Spatial index is necessary when dealing with large scenes, such large scenes are very common in games for example.
### Motivation
If you want to raycast into the scene, you are currently stuck with a linear search whose cost is dominated by the number of objects and polygons. A spatial index would enable a lot of internal optimizations, such as faster occlusion culling and sorting.
_Direct application in the renderer:_
# Occlusion culling
Good occlusion culling is required for good performance. Point above would help here. There are a lot of techniques that can be utilized further here.
born as a result of this discussion: #13807 | Suggestion | medium | Major |
316,194,883 | three.js | Optimizations to Animation Engine | # Optimizations to animation engine
Currently animation engine chokes on some 500 bones being animated simultaneously, resulting in a very high CPU usage.
### Motivation
It is not uncommon to see 3-5 characters at the same time with 500+ bones each in modern games. With the current CPU demand such fidelity is not achievable; instead you have to compromise at about 15 bones per character in order to achieve decent performance.
born as a result of this discussion: #13807 | Suggestion | medium | Major |
316,195,195 | three.js | Support of Compressed Textures in core | # Compressed Textures
Compressed textures as a first-class citizen, along with tools for on-line compression
### Motivation
Compressed textures offer a great amount of extra detail while requiring only a little space. For applications with large textures and/or a large number of textures, this draws the line between an interactive frame rate and a slide-show. This point becomes even more relevant for lower-end GPUs, as they tend to have less RAM: being able to draw 2024 compressed textures instead of 512 uncompressed ones is extremely important, as they potentially take up the same amount of GPU RAM. Compressed textures take less time to load and put less stress on the browser, since decompression is not done by default (unlike PNG).
born as a result of this discussion: #13807 | Suggestion | medium | Critical |
316,206,053 | opencv | ORB detector crashed | ##### System information (version)
- OpenCV => 3.4.1 dev
- Operating System / Platform => Windows 64 Bit/32bit (check both)
- Compiler => Visual Studio 2017
##### Detailed description
// C++ code example
https://docs.opencv.org/3.4.1/dc/d16/tutorial_akaze_tracking.html
Can't use the ORB detector in the demo or in my own application.
In all cases it crashes in the file "orb.cpp", function: static void computeKeyPoints()
909: std::swap(allKeypoints, newAllKeypoints);
910: } <-- here, after exiting the loop
The debugged application stopped with this error message:
Windows has triggered a breakpoint in stack.exe.
This may be due to a corruption of the heap, which indicates a bug in stack.exe or any of the DLLs it has loaded.
Stopped here: file xmemory0
#endif /* defined(_M_IX86) || defined(_M_X64) */
--> ::operator delete(_Ptr, _Bytes);
}
It is also impossible to use the AKAZE detector.
The detector works if I call detectAndCompute(), but when the application closes I get an error message from:
/* verify block type */
_ASSERTE(_BLOCK_TYPE_IS_VALID(pHead->nBlockUse));
If I do not call detectAndCompute(), everything is OK. | bug,category: features2d,incomplete,needs reproducer | low | Critical |
316,217,259 | opencv | imshow upscale displayed image | Using the latest OpenCV on Windows 10, any image I show or any options I use for creating the window (autosize, or normal with specified size) the resulted window displays a scaled version of the image. The scale seems to correspond to the general scale set in Windows display settings. Is there a way to disable that scale, at least when manually specifying the windows size? It is very important to preserve the original resolution while developing computer vision algorithms. | feature,category: highgui-gui,platform: win32 | low | Minor |
316,277,384 | rust | PathBuf set_file_name and with_file_name need docs for input abspaths | Looking at:
https://doc.rust-lang.org/std/path/struct.PathBuf.html#method.with_file_name
and
https://doc.rust-lang.org/std/path/struct.PathBuf.html#method.set_file_name
I had thought, based on what was written there, that the code would extract the filename alone from the input, dropping the directory prefix.
But apparently what will actually happen is that it drops all the content from `self` and just turns it into a copy of the input.
(And if you give it a relative path with multiple components, then it effectively pushes all of the components onto `self`)
I don't actually *mind* the current behavior, and it is easy enough to work around myself. (Basically, I think `set_file_name` and `with_file_name` are somewhat misleadingly misnamed...) But we should document it, perhaps with concrete examples.
Here is an example of what I am getting at ([playpen](https://play.rust-lang.org/?gist=fc97d1c694aa12eb8e3586e0ca2bca31&version=stable)):
```rust
fn main() {
use std::path::Path;
println!("EXAMPLE 1");
let path1 = Path::new("/dir1/file1.txt");
let path2 = Path::new("/dir2/file2.txt");
println!("path1: {}", path1.display());
println!("path2: {}", path2.display());
println!("path1.with_file_name(path2): {}", path1.with_file_name(path2).display());
println!("Felix expected /dir1/file2.txt");
println!("");
println!("EXAMPLE 2");
let path1 = Path::new("dir1/file1.txt");
let path2 = Path::new("dir2/file2.txt");
println!("path1: {}", path1.display());
println!("path2: {}", path2.display());
println!("path1.with_file_name(path2): {}", path1.with_file_name(path2).display());
println!("Felix expected dir1/file2.txt");
}
``` | C-enhancement,T-libs-api,A-io | low | Minor |
316,279,837 | vscode | Licensing unnecessarily prohibits usage in cloud services | I've been using a cloud service provider for remote work. In their service they provide Atom as a text editor, but I've been using Code for about a year now and am loathe to switch over my development environment.
They've bumped on something in the licensing. They cannot:
> share, publish, or lend the software, or provide it as a hosted solution for others to use, or transfer the software or this agreement to any third party.
After looking around at all the confusion over Code's license I can understand the reasons Microsoft has the dual licensing system. (I don't agree, but I understand.) I even built a copy of Code from source to see if that would be usable (verdict is still out there). But simpler would be for Microsoft to strike this line from the software license. Leave in the non-transferable bit and cut out the rest. This would allow cloud services to provide Code as the de-facto editor, which can only benefit you in the long run by increasing user count and brand awareness.
The thing is, I'm not sure why this is even in here. In my opinion it fundamentally goes against the branding that Microsoft has given code, which is that it's free for everyone and usable everywhere. It feels like a hangover from the commercial version of the VS IDE, or from Windows itself, some boilerplate that gets tacked on automatically.
The alternative is to have people build Code from source, which removes all MS branding and telemetry. I'd think this less desirable for you.
My other thought is that they could provide a pop-up that offers to install it for the user, something akin to how Code is now being distributed with Anaconda. Anaconda's use of the official installer seems to bypass this clause in the license. However, this is very much *not* the solution I want, because the last thing the world needs are more pop-ups. :)
Thanks for listening. | under-discussion,license | low | Minor |
316,281,106 | godot | Using the `viewport` stretch mode and resizing the window (or using "shrink" value other than 1) results in black screen if MSAA is forced on in NVIDIA or AMD settings | **Godot version:** Git https://github.com/godotengine/godot/commit/7d6f210ccb5de9ef414f94ad42f9f3dea14c0493
**OS/device including version:** Fedora 27, NVIDIA 390.48
**Issue description:** When the window scaling mode is set to `2d` or `viewport`, setting the `display/window/stretch/shrink` project setting to a value other than 1 will cause the running project's rendering to break (it will appear to be frozen at the splash screen). If the window is resized by the user, it will turn into a black screen. No errors can be seen in the Debugger dock of the editor.
This also occurs when using the `viewport` stretch mode after resizing the window, even if `display/window/stretch/shrink` is set to 1.
**Steps to reproduce:** After making sure the window scaling mode is set to `2d` or `viewport`, set `display/window/stretch/shrink` to a value other than 1 in the Project Settings then run the project.
**Minimal reproduction project:** [shrink_bug.zip](https://github.com/godotengine/godot/files/1932373/shrink_bug.zip)
___
**Update:** I can still reproduce this as of https://github.com/godotengine/godot/commit/6110bdee138febf3b04b47bc15b834bda7b99d52 (Fedora 30, NVIDIA 418.74). When the bug occurs, this appears in the console when Godot is started with `--verbose`:
```
ERROR: _gl_debug_print: GL ERROR: Source: OpenGL Type: Error ID: 1282 Severity: High Message: GL_INVALID_OPERATION error generated. Source and destination dimensions must be identical with the current filtering modes.
At: drivers/gles3/rasterizer_gles3.cpp:123.
``` | bug,topic:rendering,confirmed,topic:thirdparty | medium | Critical |
316,339,404 | TypeScript | Exhaustiveness checking against an enum only works when the enum has >1 member. | <!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. -->
**TypeScript Version:** [email protected]
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:** discriminated, exhaustiveness, type guard, narrowing
**Code**
```ts
// Legal action types for ValidAction
enum ActionTypes {
INCREMENT = 'INCREMENT',
// DECREMENT = 'DECREMENT',
}
interface IIncrement {
payload: {};
type: ActionTypes.INCREMENT;
}
// interface IDecrement {
// payload: {};
// type: ActionTypes.DECREMENT;
// }
// Any string not present in T
type AnyStringExcept<T extends string> = { [P in T]: never; };
// ValidAction is an interface with a type in ActionTypes
type ValidAction = IIncrement;
// type ValidAction = IIncrement | IDecrement;
// UnhandledAction in an interface with a type that is not within ActionTypes
type UnhandledAction = { type: AnyStringExcept<ActionTypes>; };
// The set of all actions
type PossibleAction = ValidAction | UnhandledAction;
// Discriminates to ValidAction
function isUnhandled(x: PossibleAction): x is UnhandledAction {
return !(x.type in ActionTypes);
}
type CounterState = number;
const initialState: CounterState = 0;
function receiveAction(state = initialState, action: PossibleAction) {
// typeof action === PossibleAction
if (isUnhandled(action)) {
// typeof action === UnhandledAction
return state;
}
// typeof action === ValidAction
switch (action.type) {
case ActionTypes.INCREMENT:
// typeof action === IIncrement
return state + 1;
// case ActionTypes.DECREMENT:
// return state - 1;
}
// typeof action === IIncrement
// Since INCREMENT is handled above, this should be impossible,
// However the compiler will say that assertNever cannot receive an argument of type IIncrement
return assertNever(action);
}
function assertNever(x: UnhandledAction): never {
throw new Error(`Unhandled action type: ${x.type}`);
}
```
**Expected behavior:** No error would be thrown, as the switch statement is exhaustive. If the ActionTypes.DECREMENT parts are uncommented (resulting in two possible values for ActionTypes) there is no error. An error only occurs when ActionTypes takes on a single value. The error occurs even if the `never` assertion happens in the default statement, which is obviously unreachable from IIncrement.
**Actual behavior:** An error is thrown despite the only possible value being explicitly handled. If ActionTypes.DECREMENT is uncommented the expected behavior is present.
**Playground Link:** (fixed the links)
[Error](https://www.typescriptlang.org/play/index.html#src=%2F%2F%20Legal%20action%20types%20for%20ValidAction%0D%0Aenum%20ActionTypes%20%7B%0D%0A%20%20INCREMENT%20%3D%20'INCREMENT'%2C%0D%0A%2F%2F%20%20%20DECREMENT%20%3D%20'DECREMENT'%2C%0D%0A%7D%0D%0A%0D%0Ainterface%20IIncrement%20%7B%0D%0A%20%20payload%3A%20%7B%7D%3B%0D%0A%20%20type%3A%20ActionTypes.INCREMENT%3B%0D%0A%7D%0D%0A%0D%0A%2F%2F%20interface%20IDecrement%20%7B%0D%0A%2F%2F%20%20%20payload%3A%20%7B%7D%3B%0D%0A%2F%2F%20%20%20type%3A%20ActionTypes.DECREMENT%3B%0D%0A%2F%2F%20%7D%0D%0A%0D%0A%2F%2F%20Any%20string%20not%20present%20in%20T%0D%0Atype%20AnyStringExcept%3CT%20extends%20string%3E%20%3D%20%7B%20%5BP%20in%20T%5D%3A%20never%3B%20%7D%3B%0D%0A%0D%0A%2F%2F%20ValidAction%20is%20an%20interface%20with%20a%20type%20in%20ActionTypes%0D%0Atype%20ValidAction%20%3D%20IIncrement%3B%0D%0A%2F%2F%20type%20ValidAction%20%3D%20IIncrement%20%7C%20IDecrement%3B%0D%0A%0D%0A%2F%2F%20UnhandledAction%20in%20an%20interface%20with%20a%20type%20that%20is%20not%20within%20ActionTypes%0D%0Atype%20UnhandledAction%20%3D%20%7B%20type%3A%20AnyStringExcept%3CActionTypes%3E%3B%20%7D%3B%0D%0A%0D%0A%2F%2F%20The%20set%20of%20all%20actions%0D%0Atype%20PossibleAction%20%3D%20ValidAction%20%7C%20UnhandledAction%3B%0D%0A%0D%0A%2F%2F%20Discriminates%20to%20ValidAction%0D%0Afunction%20isUnhandled(x%3A%20PossibleAction)%3A%20x%20is%20UnhandledAction%20%7B%0D%0A%20%20%20%20return%20!(x.type%20in%20ActionTypes)%3B%0D%0A%7D%0D%0A%0D%0Atype%20CounterState%20%3D%20number%3B%0D%0Aconst%20initialState%3A%20CounterState%20%3D%200%3B%0D%0A%0D%0Afunction%20receiveAction(state%20%3D%20initialState%2C%20action%3A%20PossibleAction)%20%7B%0D%0A%20%20%20%20%2F%2F%20typeof%20action%20%3D%3D%3D%20PossibleAction%0D%0A%20%20%20%20if%20(isUnhandled(action))%20%7B%0D%0A%20%20%20%20%20%20%20%20%2F%2F%20typeof%20action%20%3D%3D%3D%20UnhandledAction%0D%0A%20%20%20%20%20%20%20%20return%20state%3B%0D%0A%20%20%20%20%7D%0D%0A%0D%0A%20%20%20%20%2F%2F%20typeof%20action%20%3D%3
D%3D%20ValidAction%0D%0A%20%20%20%20switch%20(action.type)%20%7B%0D%0A%20%20%20%20%20%20%20%20case%20ActionTypes.INCREMENT%3A%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%2F%2F%20typeof%20action%20%3D%3D%3D%20IIncrement%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20return%20state%20%2B%201%3B%0D%0A%20%20%20%20%20%20%20%20%2F%2F%20case%20ActionTypes.DECREMENT%3A%0D%0A%20%20%20%20%20%20%20%20%2F%2F%20%20%20%20%20return%20state%20-%201%3B%0D%0A%20%20%20%20%7D%0D%0A%0D%0A%20%20%20%20%2F%2F%20typeof%20action%20%3D%3D%3D%20IIncrement%0D%0A%20%20%20%20%2F%2F%20Since%20INCREMENT%20is%20handled%20above%2C%20this%20should%20be%20impossible%2C%0D%0A%20%20%20%20%2F%2F%20However%20the%20compiler%20will%20say%20that%20assertNever%20cannot%20receive%20an%20argument%20of%20type%20IIncrement%0D%0A%20%20%20%20return%20assertNever(action)%3B%0D%0A%7D%0D%0A%0D%0Afunction%20assertNever(x%3A%20UnhandledAction)%3A%20never%20%7B%0D%0A%20%20%20%20throw%20new%20Error(%60Unhandled%20action%20type%3A%20%24%7Bx.type%7D%60)%3B%0D%0A%7D)
[Working](https://www.typescriptlang.org/play/index.html#src=%2F%2F%20Legal%20action%20types%20for%20ValidAction%0D%0Aenum%20ActionTypes%20%7B%0D%0A%20%20INCREMENT%20%3D%20'INCREMENT'%2C%0D%0A%20%20DECREMENT%20%3D%20'DECREMENT'%2C%0D%0A%7D%0D%0A%0D%0Ainterface%20IIncrement%20%7B%0D%0A%20%20payload%3A%20%7B%7D%3B%0D%0A%20%20type%3A%20ActionTypes.INCREMENT%3B%0D%0A%7D%0D%0A%0D%0Ainterface%20IDecrement%20%7B%0D%0A%20%20payload%3A%20%7B%7D%3B%0D%0A%20%20type%3A%20ActionTypes.DECREMENT%3B%0D%0A%7D%0D%0A%0D%0A%2F%2F%20Any%20string%20not%20present%20in%20T%0D%0Atype%20AnyStringExcept%3CT%20extends%20string%3E%20%3D%20%7B%20%5BP%20in%20T%5D%3A%20never%3B%20%7D%3B%0D%0A%0D%0A%2F%2F%20ValidAction%20is%20an%20interface%20with%20a%20type%20in%20ActionTypes%0D%0A%2F%2F%20type%20ValidAction%20%3D%20IIncrement%3B%0D%0Atype%20ValidAction%20%3D%20IIncrement%20%7C%20IDecrement%3B%0D%0A%0D%0A%2F%2F%20UnhandledAction%20in%20an%20interface%20with%20a%20type%20that%20is%20not%20within%20ActionTypes%0D%0Atype%20UnhandledAction%20%3D%20%7B%20type%3A%20AnyStringExcept%3CActionTypes%3E%3B%20%7D%3B%0D%0A%0D%0A%2F%2F%20The%20set%20of%20all%20actions%0D%0Atype%20PossibleAction%20%3D%20ValidAction%20%7C%20UnhandledAction%3B%0D%0A%0D%0A%2F%2F%20Discriminates%20to%20ValidAction%0D%0Afunction%20isUnhandled(x%3A%20PossibleAction)%3A%20x%20is%20UnhandledAction%20%7B%0D%0A%20%20%20%20return%20!(x.type%20in%20ActionTypes)%3B%0D%0A%7D%0D%0A%0D%0Atype%20CounterState%20%3D%20number%3B%0D%0Aconst%20initialState%3A%20CounterState%20%3D%200%3B%0D%0A%0D%0Afunction%20receiveAction(state%20%3D%20initialState%2C%20action%3A%20PossibleAction)%20%7B%0D%0A%20%20%20%20%2F%2F%20typeof%20action%20%3D%3D%3D%20PossibleAction%0D%0A%20%20%20%20if%20(isUnhandled(action))%20%7B%0D%0A%20%20%20%20%20%20%20%20%2F%2F%20typeof%20action%20%3D%3D%3D%20UnhandledAction%0D%0A%20%20%20%20%20%20%20%20return%20state%3B%0D%0A%20%20%20%20%7D%0D%0A%0D%0A%20%20%20%20%2F%2F%20typeof%20action%20%3D%3D%3D%20ValidAction%0D%0A%20%20%20%20switch%
20(action.type)%20%7B%0D%0A%20%20%20%20%20%20%20%20case%20ActionTypes.INCREMENT%3A%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%2F%2F%20typeof%20action%20%3D%3D%3D%20IIncrement%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20return%20state%20%2B%201%3B%0D%0A%20%20%20%20%20%20%20%20case%20ActionTypes.DECREMENT%3A%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20return%20state%20-%201%3B%0D%0A%20%20%20%20%7D%0D%0A%0D%0A%20%20%20%20%2F%2F%20typeof%20action%20%3D%3D%3D%20IIncrement%0D%0A%20%20%20%20%2F%2F%20Since%20INCREMENT%20is%20handled%20above%2C%20this%20should%20be%20impossible%2C%0D%0A%20%20%20%20%2F%2F%20However%20the%20compiler%20will%20say%20that%20assertNever%20cannot%20receive%20an%20argument%20of%20type%20IIncrement%0D%0A%20%20%20%20return%20assertNever(action)%3B%0D%0A%7D%0D%0A%0D%0Afunction%20assertNever(x%3A%20UnhandledAction)%3A%20never%20%7B%0D%0A%20%20%20%20throw%20new%20Error(%60Unhandled%20action%20type%3A%20%24%7Bx.type%7D%60)%3B%0D%0A%7D)
**Related Issues:**
https://github.com/Microsoft/TypeScript/issues/19904
https://github.com/Microsoft/TypeScript/issues/14210
https://github.com/Microsoft/TypeScript/issues/18056 | Bug | low | Critical |
316,346,122 | flutter | SceneBuilder requires having only one root layer | If you push two layers onto a SceneBuilder, the second is silently dropped on the floor.
The first time you push a layer onto a scene builder, DefaultLayerBuilder::PushLayer sees it doesn't have a root layer yet, and uses the new layer as its root layer.
Then you pop that layer, and we set the current layer to the parent of the current layer, which is null, since the current layer is the root layer and the root layer has no parent.
When you then push the second layer onto the scene builder, DefaultLayerBuilder::PushLayer sees it has a root layer, sees it doesn't have a current layer, and exits without doing anything.
IMHO we have two choices:
1. Document and (in debug builds) assert that you only push a single layer onto the SceneBuilder. (Similarly, assert that when you pop, there is a layer to pop.)
2. Make SceneBuilder support having multiple root layers (e.g. by actually having SceneBuilder always create itself its own root layer, and push and pop on that).
This manifests itself currently when you use SceneBuilder e.g. to create an image with toImage. If you happen to push two child layers (which is easy to do when using the framework), you will silently only get the first in the output, even though no exceptions are thrown to suggest anything went wrong.
cc @chinmaygarde @gspencergoog | engine,c: rendering,P2,team-engine,triaged-engine | low | Critical |
316,366,204 | TypeScript | Option to disable sorting when running organize imports | _From @ssi-hu-tasi-norbert on April 20, 2018 9:9_
<!-- Do you have a question? Please ask it on https://stackoverflow.com/questions/tagged/vscode. -->
<!-- Use Help > Report Issue to prefill these. -->
- VSCode Version: 1.22.2
- OS Version: win 10
Sorting should be optional. I have imports from 3rd parties above and app imports below, and I don't want to mix them. With a separator (for me it is a comment, // app), these two sections can be sorted separately.
Anyway, this is a great feature.
Thanks,
Norbert
_Copied from original issue: Microsoft/vscode#48263_ | Suggestion,Awaiting More Feedback,Domain: Organize Imports | low | Major |
316,382,523 | kubernetes | Volume metrics causing node to become not ready | <!-- This form is for bug reports and feature requests ONLY!
If you're looking for help check [Stack Overflow](https://stackoverflow.com/questions/tagged/kubernetes) and the [troubleshooting guide](https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/).
If this may be security issue, please disclose it privately via https://kubernetes.io/security/.
-->
**Is this a BUG REPORT or FEATURE REQUEST?**:
@kubernetes/sig-storage-bugs
**What happened**:
In this [discussion](https://github.com/kubernetes/kubernetes/issues/36499#issuecomment-383192634), it was reported that running df on nodes with 100 pods is causing the node to become not ready, and switching to SSDs for the boot disk helped alleviate this issue.
**What you expected to happen**:
Collecting metrics should not cause the node to become unstable | kind/bug,sig/storage,lifecycle/frozen | low | Critical |
316,398,544 | flutter | Support splitting a Flutter app's assets into multiple packages | For Android Instant Apps, and probably for Fuchsia and probably for hot updates, we will want to split an apps' assets into blobs of data each of which can be shipped separately and loaded independently. It would probably make sense to do it along `package:` boundaries like we do for Dart code.
cc @zanderso @tvolkert @jason-simmons @chinmaygarde @cbracken | c: new feature,tool,framework,engine,c: proposal,P2,team-engine,triaged-engine | low | Minor |
316,400,572 | flutter | Flutter needs a way to manage image assets by locale | Both iOS and Android asset systems have ways to manage images per-locale. Maybe those are already sufficient?
Or maybe we need some combined method as part of the `pubspec.yaml` flutter assets tag system?
(Filing based on a comment from @Hixie in an email.) | c: new feature,tool,framework,a: assets,P2,team-framework,triaged-framework | low | Minor |
316,410,020 | go | bytes, strings: optimize Contains with fast-path for sub-slices | Consider the following:
```go
func Benchmark(b *testing.B) {
buf := make([]byte, 1<<20)
rand.Read(buf)
for n := 64; n <= len(buf); n <<= 1 {
b.Run(fmt.Sprintf("%d", n), func(b *testing.B) {
for i := 0; i < b.N; i++ {
bytes.Contains(buf, buf[n-64:n])
}
})
}
}
```
On my machine, this prints:
```
Benchmark/64-8 100000000 11.8 ns/op
Benchmark/128-8 100000000 22.3 ns/op
Benchmark/256-8 50000000 29.1 ns/op
Benchmark/512-8 30000000 56.9 ns/op
Benchmark/1024-8 20000000 101 ns/op
Benchmark/2048-8 10000000 195 ns/op
Benchmark/4096-8 5000000 393 ns/op
Benchmark/8192-8 2000000 976 ns/op
Benchmark/16384-8 1000000 1749 ns/op
Benchmark/32768-8 300000 3968 ns/op
Benchmark/65536-8 200000 8312 ns/op
Benchmark/131072-8 100000 18300 ns/op
Benchmark/262144-8 50000 35918 ns/op
Benchmark/524288-8 20000 75942 ns/op
Benchmark/1048576-8 10000 156854 ns/op
```
In this situation, the substring is sliced out of the parent slice. It should be possible to know that the parent contains the substring in O(1) with something similar to:
```go
func Contains(b, subslice []byte) bool {
	bh := (*reflect.SliceHeader)(unsafe.Pointer(&b))
	sh := (*reflect.SliceHeader)(unsafe.Pointer(&subslice))
	if bh.Data <= sh.Data && sh.Data+uintptr(len(subslice)) <= bh.Data+uintptr(len(b)) {
		return true
	}
	return Index(b, subslice) != -1
}
```
(the above code is not correct as there are special considerations when manipulating `unsafe.Pointer`, but the general approach is the same) | Performance,NeedsDecision | low | Minor |
316,437,269 | flutter | All-purpose Media widget | It's a rather minor request, and totally up for discussion, however I feel that Flutter would benefit from having a generic ``Media`` widget, alongside ``Image`` and ``VideoPlayerController``.
Back in the day, when glittering GIFs ruled the world, if a website hosted images it hosted images, and when it hosted videos it hosted videos. Nowadays the lines are blurring, and Webm is slowly but surely pushing GIF out of the picture. Thus, websites that used to host files like PNG, JPG and GIF (all possible to display through the ``Image`` widget) also host MP4 and WEBM files now (both possible to display through ``VideoPlayerController``), and sometimes even MP3 and other media formats.
Thus, if I wanted to grab, say, the 50 newest files from Imgur, half of them or so would be Webm. And that makes it impossible to display them all in a grid of ``Image`` widgets. Of course I could go through each and every one of them, check the file extension, compare it to my lists of image, sound and video extensions, and create a widget based on that, so a PNG file will be displayed in ``Image``, and Webm will be displayed in ``VideoPlayerController``.
Question is: should that be the case? Shouldn't an all-purpose ``Media`` widget take care of that?
Another possibility would be an inline Chrome window, but it doesn't give full control over how is the media being displayed, and I don't think that a grid of 200 or so Chrome windows is a particularly good idea.
Thoughts? Opinions? Already existing solutions? | c: new feature,framework,a: video,P3,team-framework,triaged-framework | low | Major |
316,454,081 | rust | Module declarations when using include! are relative to the included file | I'm not sure if this is actually a bug or intended behavior, but when using the `include!` macro to include a file which contains module declarations, i.e. `mod foo`, it searches for the module relative to the included file's path and not relative to the file which uses the `include!`.
Here's some example code to demonstrate this:
mod.rs:
```
include!(concat!(env!("OUT_DIR"), "/foo.in"));
```
foo.in:
```
mod bar;
```
Example compiler output:
```
error[E0583]: file not found for module `bar`
--> <OUT_DIR>/foo.in:1:5
|
1 | mod bar;
| ^^^^^^^^^
|
= help: name the file either bar.rs or bar/mod.rs inside the directory "<OUT_DIR>"
```
This makes it impossible to have module declarations from generated code in a build script, for example.
## Meta
`rustc --version --verbose`:
rustc 1.25.0 (84203cac6 2018-03-25)
binary: rustc
commit-hash: 84203cac67e65ca8640b8392348411098c856985
commit-date: 2018-03-25
host: x86_64-unknown-linux-gnu
release: 1.25.0
LLVM version: 6.0
| T-lang,C-feature-request | low | Critical |
316,463,688 | rust | Invalid collision with TryFrom implementation? | Sorry for the code dump. This is the smallest code I could make to reproduce the problem.
```rust
use std::marker::PhantomData;
use std::convert::TryFrom;
trait Integer {}
impl Integer for u8 {}
trait Adapter<I: Integer>: TryFrom<I> + Into<I> {}
enum Choice {
Foo,
Bar,
Baz
}
impl From<Choice> for u8 {
fn from(c: Choice) -> u8 {
match c {
Choice::Foo => 1,
Choice::Bar => 2,
Choice::Baz => 3,
}
}
}
impl TryFrom<u8> for Choice {
type Error = ();
fn try_from(i: u8) -> Result<Choice, ()> {
match i {
1 => Ok(Choice::Foo),
2 => Ok(Choice::Bar),
3 => Ok(Choice::Baz),
_ => Err(()),
}
}
}
impl Adapter<u8> for Choice {}
struct Pick<I: Integer, A: Adapter<I>> {
phantom: PhantomData<A>,
value: I,
}
impl<I: Integer, A: Adapter<I>> From<A> for Pick<I, A> {
fn from(a: A) -> Pick<I, A> {
Pick {
phantom: PhantomData,
value: a.into(),
}
}
}
impl<I: Integer, A: Adapter<I>> TryFrom<Pick<I, A>> for A {
type Error = A::Error;
fn try_from(p: Pick<I, A>) -> Result<A, Self::Error> {
A::try_from(p.value)
}
}
```
Attempting to compile this produces:
```
error[E0119]: conflicting implementations of trait `std::convert::TryFrom<Pick<_, _>>`:
--> src/main.rs:53:1
|
53 | impl<I: Integer, A: Adapter<I>> TryFrom<Pick<I, A>> for A {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: conflicting implementation in crate `core`:
- impl<T, U> std::convert::TryFrom<U> for T
where T: std::convert::From<U>;
error[E0210]: type parameter `A` must be used as the type parameter for some local type (e.g. `MyStruct<A>`)
--> src/main.rs:53:1
|
53 | impl<I: Integer, A: Adapter<I>> TryFrom<Pick<I, A>> for A {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type parameter `A` must be used as the type parameter for some local type
|
= note: only traits defined in the current crate can be implemented for a type parameter
```
I've spent several hours looking at this and I can't figure out why I'm getting these errors. The type parameter `A` is being used as the type parameter for a local type. Maybe the compiler can't tell because it is nested inside `TryFrom<>`?
But I'm also not sure why the first error occurs at all. The rule it conflicts with can basically be described as "types with infallible conversions implicitly implement fallible conversions." But in my case there is no infallible conversion. So I don't see where the conflict is arising from.
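For what it's worth, the conversion does compile if it's expressed as an inherent method on `Pick` instead of a `TryFrom` impl, which sidesteps the coherence check against core's blanket `impl<T, U: Into<T>> TryFrom<U> for T`. This is only a workaround sketch (it gives up the trait, so it may not fit your use case):

```rust
use std::convert::TryFrom;
use std::marker::PhantomData;

trait Integer {}
impl Integer for u8 {}

trait Adapter<I: Integer>: TryFrom<I> + Into<I> {}

#[derive(Debug, PartialEq)]
enum Choice { Foo, Bar, Baz }

impl From<Choice> for u8 {
    fn from(c: Choice) -> u8 {
        match c { Choice::Foo => 1, Choice::Bar => 2, Choice::Baz => 3 }
    }
}

impl TryFrom<u8> for Choice {
    type Error = ();
    fn try_from(i: u8) -> Result<Choice, ()> {
        match i {
            1 => Ok(Choice::Foo),
            2 => Ok(Choice::Bar),
            3 => Ok(Choice::Baz),
            _ => Err(()),
        }
    }
}

impl Adapter<u8> for Choice {}

struct Pick<I: Integer, A: Adapter<I>> {
    phantom: PhantomData<A>,
    value: I,
}

impl<I: Integer, A: Adapter<I>> From<A> for Pick<I, A> {
    fn from(a: A) -> Pick<I, A> {
        Pick { phantom: PhantomData, value: a.into() }
    }
}

// Workaround: an inherent method instead of a TryFrom impl, so there is
// no trait impl for the blanket impl in core to conflict with.
impl<I: Integer, A: Adapter<I>> Pick<I, A> {
    fn try_into_adapter(self) -> Result<A, A::Error> {
        A::try_from(self.value)
    }
}

fn main() {
    let p: Pick<u8, Choice> = Pick::from(Choice::Bar);
    assert_eq!(p.try_into_adapter().unwrap(), Choice::Bar);
}
```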
If there is really a bug and I'm not just overtired, this may impact #49305. | A-trait-system,T-lang,C-bug | high | Critical |
316,493,208 | rust | Warnings and error suggestions are wrong when originating from inside proc-macros | For example
```
warning: unused import: `Form`
--> example/src/main.rs:11:26
|
11 | #[derive(Debug, Default, Form)]
| ^^^^
|
= note: #[warn(unused_imports)] on by default
error[E0369]: binary operation `+` cannot be applied to type `&str`
--> example/src/main.rs:18:26
|
18 | #[derive(Debug, Default, Form)]
| ^^^^ `+` can't be used to concatenate two `&str` strings
help: `to_owned()` can be used to create an owned `String` from a string reference. String concatenation appends the string on the right to the string on the left and may require reallocation. This requires ownership of the string on the left
|
18 | #[derive(Debug, Default, Form.to_owned())]
|
```
when the expanded source contains
```rust
use self::gtk::{Widget, Button, ComboBoxText, Label};
// and
let _ = "a" + "b";
```
with ComboBoxText being unused. | C-enhancement,A-macros,T-compiler | low | Critical |
316,511,776 | rust | Tracking issue for f32 and f64 methods in libcore | https://github.com/rust-lang/rust/pull/49896 removes from libcore (and moves to libstd) three methods of `f32` and `f64` (that were only usable through the unstable trait `core::num::Float`) because they’re implemented by calling LLVM intrinsics, and it’s not clear whether those intrinsics are lowered on any platform to calls to C’s `libm` or something else that requires runtime support that we don’t want in libcore:
* `abs`: calls `llvm.fabs.f32` or `llvm.fabs.f64`
* `signum`: calls `llvm.copysign.f32` or `llvm.copysign.f64`
* `powi`: calls `llvm.powi.f32` or `llvm.powi.f64`
The first two seem like they’d be easy to implement in a small number of lower-level instructions (such as a couple lines with `if`, or even bit twiddling based on IEEE 754). `abs` in particular seems like a rather common operation, and it’s unfortunate not to have it in libcore.
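To illustrate that no runtime support would be needed, here is a sketch of `abs` and `signum` via IEEE 754 bit manipulation (this is not the actual libcore implementation, just a demonstration):

```rust
// Clear the sign bit (bit 63) to take the absolute value.
fn abs_f64(x: f64) -> f64 {
    f64::from_bits(x.to_bits() & !(1u64 << 63))
}

// Transplant the sign bit of `sign` onto the magnitude of `x`.
fn copysign_f64(x: f64, sign: f64) -> f64 {
    let sign_bit = sign.to_bits() & (1u64 << 63);
    f64::from_bits((x.to_bits() & !(1u64 << 63)) | sign_bit)
}

// Matches std's signum semantics: 1.0 / -1.0 by sign, NaN for NaN.
fn signum_f64(x: f64) -> f64 {
    if x.is_nan() { f64::NAN } else { copysign_f64(1.0, x) }
}

fn main() {
    assert_eq!(abs_f64(-3.5), 3.5);
    assert_eq!(signum_f64(-0.25), -1.0);
    assert_eq!(signum_f64(2.0), 1.0);
}
```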
The `compiler-builtins` crate has Rust implementations of `__powisf2` and `__powidf2`, but in LLVM code those are only mentioned in `lib/Target/WebAssembly/WebAssemblyRuntimeLibcallSignatures.cpp` so I haven’t found evidence that `llvm.powi.f32` and `llvm.powi.f64` call those functions.
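For reference, `powi` is exponentiation by squaring over an integer exponent; a sketch along the lines of the `compiler-builtins` implementations (illustrative, not the exact source):

```rust
// Integer power by squaring: O(log |exp|) multiplications, no libm.
fn powi(mut base: f64, exp: i32) -> f64 {
    let mut n = exp.unsigned_abs();
    let mut acc = 1.0f64;
    while n > 0 {
        if n & 1 == 1 {
            acc *= base;
        }
        base *= base;
        n >>= 1;
    }
    // Negative exponents are handled by inverting the positive power.
    if exp < 0 { 1.0 / acc } else { acc }
}

fn main() {
    assert_eq!(powi(2.0, 10), 1024.0);
    assert_eq!(powi(2.0, -2), 0.25);
    assert_eq!(powi(5.0, 0), 1.0);
}
```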
PR #27823 “Remove dependencies on libm functions from libcore” similarly moved a number of other `f32` and `f64` methods to libstd, but left these three behind specifically. (And unfortunately doesn’t discuss why.)
Maybe it’s fine to move them back in libcore? (As inherent methods, assuming #49896 lands.)
CC @alexcrichton | T-libs-api,C-tracking-issue,A-intrinsics,A-floating-point,Libs-Tracked | medium | Critical |
316,521,804 | youtube-dl | Add support for reelz.com | ## Please follow the guide below
- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
- Use the *Preview* tab to see what your issue will actually look like
---
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.04.16*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [ ] I've **verified** and **I assure** that I'm running youtube-dl **2018.04.16**
### Before submitting an *issue* make sure you have:
- [ ] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [ ] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [ ] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser
### What is the purpose of your *issue*?
- [ ] Bug report (encountered problems with youtube-dl)
- [x] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
### Description of your *issue*, suggested solution and other information
https://now.reelz.com/tve/tveshowepisode.aspx?ap=1&showid=235&eid=13183&clipid=81697&assetid=RMMF303
| site-support-request,tv-provider-account-needed | low | Critical |