id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
442,243,752 | rust | Tracking issue for existential lifetimes | This is a tracking issue for existential lifetimes.
**Description:**
Allow hiding a type (via `impl Trait`) that is invariant over some lifetime without explicitly mentioning the invariant lifetime.
Consider the following:
```rust
impl Trait<'b> for Cell<&'a u32> { }
fn foo(x: Cell<&'x u32>) -> impl Trait<'y> where 'x: 'y { x }
```
There is no reason this cannot be legal, although it is not permitted at present. We would want to translate the function signature internally into something like:
```rust
fn foo(x: Cell<&'x u32>) -> impl exists<'x: 'y> Trait<'y> where 'x: 'y { x }
```
It should be noted that there is no need for user-facing `exists<...>` syntax; only the HIR and `ty` representations probably need it. The concrete type corresponding to `impl exists<'x: 'y> Trait<'y>` can then be soundly checked by the compiler at the use site.
Note, we still need to be careful to ban situations like those mentioned by @matthewjasper in https://github.com/rust-lang/rust/pull/59402. By actually embedding the existential lifetime in the type rather than simply doing a check when resolving the opaque type, we should be able to resolve these issues, however. One can view this solution as a "compiler-internalised" version of [the `Captures` marker trait solution](https://github.com/rust-lang/rust/pull/56047), in some sense.
**Steps:**
- [ ] Decide on exact semantics of existential lifetimes. Perhaps @nikomatsakis can briefly write up his thoughts here.
- [ ] Implement the RFC (cc previous attempts https://github.com/rust-lang/rust/pull/57870 and https://github.com/rust-lang/rust/pull/59402, @nikomatsakis for mentoring instructions?)
- [ ] Stabilization PR ([see instructions on rustc-guide][stabilization-guide])
[stabilization-guide]: https://rust-lang.github.io/rustc-guide/stabilization_guide.html#stabilization-pr
[doc-guide]: https://rust-lang.github.io/rustc-guide/stabilization_guide.html#documentation-prs
CC @Centril @nikomatsakis @matthewjasper @cramertj | A-type-system,A-lifetimes,T-lang,A-impl-trait,T-types | low | Minor |
442,258,941 | vue-element-admin | Running npm install to install dependencies produces many warnings |
## Running `npm install`
Installing the dependencies produces many warnings.
#### Steps to reproduce
```
$ npm install
npm WARN deprecated [email protected]: This project has been renamed to 'tasksfile'. Install using 'npm install tasksfile' instead.
npm WARN deprecated [email protected]: This project has been renamed to @pawelgalazka/cli . Install using @pawelgalazka/cli instead
npm WARN deprecated [email protected]: Package no longer supported. Contact [email protected] for more info.
npm WARN deprecated [email protected]: This module has moved and is now available at @hapi/joi. Please update your dependencies as this version is no longer maintained an may contain bugs and security issues.
npm WARN deprecated [email protected]: This project has been renamed to @pawelgalazka/cli-args. Install using @pawelgalazka/cli-args instead
npm WARN deprecated [email protected]: This module has moved and is now available at @hapi/hoek. Please update your dependencies as this version is no longer maintained an may contain bugs and security issues.
npm WARN deprecated [email protected]: This module has moved and is now available at @hapi/topo. Please update your dependencies as this version is no longer maintained an may contain bugs and security issues.
npm WARN deprecated [email protected]: CircularJSON is in maintenance only, flatted is its successor.
npm WARN deprecated [email protected]: Please upgrade to kleur@3 or migrate to 'ansi-colors' if you prefer the old syntax. Visit <https://github.com/lukeed/kleur/releases/tag/v3.0.0\> for migration path(s).
npm WARN deprecated [email protected]: use String.prototype.padStart()
```
Does your project need to update the version numbers of these dependencies?
| feature,in plan | low | Critical |
442,262,099 | PowerToys | Generate/Verify checksums from file | Without an additional tool you can't quickly verify files against a checksum. You can only use the command line:
e.g. in CMD:
certutil -hashfile <filename> SHA256
Possible algorithms:
MD2, MD4, MD5, SHA1, SHA256, SHA384, SHA512
It would be great to have these in a new "Checksum" tab when right-clicking a file, listing all the checksums that certutil can calculate.
There should be a field where you can paste an existing checksum and compare it with the calculated ones. If one matches, the matching hash could be highlighted.
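For reference, the manual workflow this would replace looks roughly like the following CMD sketch (`<filename>` and `<expected-hash>` are placeholders):
```cmd
certutil -hashfile <filename> SHA256
:: then compare against the published checksum by eye, or filter the output for it:
certutil -hashfile <filename> SHA256 | findstr /i /c:"<expected-hash>"
```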
| Idea-New PowerToy,Status-In progress | high | Critical |
442,281,055 | pytorch | Support size to `torch.normal` | This would be more consistent with numpy.
```python
torch.normal(0.0, 4.0, size=5)
```
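For comparison, a sketch of the NumPy call this mirrors and the current workaround via the in-place `Tensor.normal_` (assuming the APIs available at the time of writing):
```python
import numpy as np
import torch

np_sample = np.random.normal(0.0, 4.0, size=5)            # NumPy accepts a size argument directly
torch_sample = torch.empty(5).normal_(mean=0.0, std=4.0)  # workaround: allocate, then fill in place
```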
cc @mruberry @rgommers @heitorschueroff | triaged,module: numpy,function request | low | Minor |
442,306,097 | PowerToys | Independent virtual desktops per monitor | Yet another virtual desktop request. Currently switching virtual desktops switches windows on all monitors. It would be great to make this independent per monitor.
E.g. I always have Outlook on monitor 1, but want to switch between multiple sets of apps on monitors 2 and 3 without losing what's on monitor 1.
Bonus points if the virtual desktops are all shared so I can cycle through them on each monitor, e.g. 5 virtual desktops, 3 monitors.
Start with:
1=1, 2=2, 3=3
Cycle 3 (to virtual 4):
1=1, 2=2, 3=4
Cycle 2 (to virtual 5):
1=1, 2=5, 3=3
Cycle 3 "down" (to virtual 2):
1=1, 2=5, 3=2
| Idea-New PowerToy,Product-Virtual Desktop | high | Critical |
442,329,477 | rust | io::Stdout should use block buffering when appropriate | I feel like a pretty common pitfall for beginning Rust programmers is to try writing a program that uses `println!` to print a lot of lines, compare its performance to a similar program written in Python, and be (rightly) baffled at the fact that Python is substantially faster. This occurred most recently here: https://www.reddit.com/r/rust/comments/bl7j7j/hey_rustaceans_got_an_easy_question_ask_here/emx3bhm/
The reason this happens is that `io::Stdout` unconditionally uses line buffering, regardless of whether it's being used interactively (e.g., printing to a console or a tty) or whether it's printing to a file. So if you print a lot of lines, you end up calling the `write` syscall for every line, which is quite expensive. In contrast, Python uses line buffering when printing interactively, and standard block buffering otherwise. You can see more details on this [here](https://github.com/python/cpython/blob/6daaf3f7de78eec2c80eaa8e94e4cca54f758a30/Modules/_io/_iomodule.c#L163-L173) and [here](https://docs.python.org/3/library/sys.html#sys.stdout).
In my opinion, Rust should adopt the same policy as Python. Indeed, there is even a FIXME item for this in the code:
https://github.com/rust-lang/rust/blob/ef01f29964df207f181bd5bcf236e41372a17273/src/libstd/io/stdio.rs#L401-L404
I think this would potentially solve a fairly large stumbling block that folks run into. The CLI working group [even calls it out as a performance footgun](https://rust-lang-nursery.github.io/cli-wg/tutorial/output.html#a-note-on-printing-performance). And also [here](https://github.com/rust-lang-nursery/cli-wg/issues/29) too. Additionally, [ripgrep rolls its own handling for this](https://docs.rs/grep-cli/0.1.2/grep_cli/fn.stdout.html).
I can't think of too many appreciable downsides to doing this. It is a change in behavior. For example, if you wrote a Rust program today that printed to `io::Stdout`, and the user redirected the output to a file, then the user could (for example) `tail` that output and see it updated as each line was printed. If we made `io::Stdout` use block buffering when printing to a file like this, then that behavior would change. (This is the reasoning for flags like `--line-buffered` on `grep`.)
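For context, the workaround available today is to wrap the locked handle in a `BufWriter`; a minimal sketch:
```rust
use std::io::{self, BufWriter, Write};

fn main() -> io::Result<()> {
    let stdout = io::stdout();
    // Block-buffered writer on top of the line-buffered handle: one write
    // syscall per buffer fill instead of one per line.
    let mut out = BufWriter::new(stdout.lock());
    for i in 0..1_000_000 {
        writeln!(out, "{}", i)?;
    }
    out.flush()
}
```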
cc @rust-lang/libs | C-enhancement,T-libs-api,A-io | medium | Major |
442,339,166 | PowerToys | Manage FileMgr view defaults by folder, file type, etc. and prevent re-set w/o permission. | null | Idea-New PowerToy | low | Minor |
442,339,279 | flutter | [in_app_purchase] buying methods should return Future | Currently, you have to subscribe to a `Stream` of purchases; however, in my view, `Stream`s are preferable when you are expected to receive many responses that update the screen accordingly and do not block the user experience. In this case, you are making a request and the user is supposed to continue using the app only when the response arrives.
IAPs are pretty simple and straightforward; I cannot see the need to separate the request and response, unless the developers had no other option, which I think is not the case, since other similar plugins do use `Future`s.
Perhaps the `Stream` should be kept for when users make a purchase within the store itself, as @adriancmurray said.
This discussion started here: #9591 | p: in_app_purchase,package,team-ecosystem,P3,triaged-ecosystem | low | Major |
442,351,807 | TypeScript | Feature: Analyze @throws tags | ### Description
Given a function has a `@throws` JSDoc tag, it would be helpful to apply some analysis and raise a compiler warning (or, if configured, an error) when the caller does not handle the error **and** it doesn't declare its own `@throws` tag.
### Examples
```typescript
/**
* @throws {SomeException}
*/
function myFunction() {
// some logic
// then for some reason we throw an exception
if (somethingWentWrong) {
throw new SomeException("something happened");
}
// some more logic
}
function potentialMess() {
myFunction(); // Compiler error: 'myFunction' may throw `SomeException`. ts(9876)
}
/**
* @throws {SomeException}
*/
function letItBubble() {
myFunction(); // No compiler errors
}
function aFunctionThatHandlesTheException() {
try {
myFunction(); // No compiler errors
}
catch (error) {
// do something with the error
}
}
```
### Questions
- Ideally, it would be even better if VSCode realizes that `myFunction` throws an error, and then `@throws` is not required for the analysis. Still, this approach would be useful when dealing with external code/interfaces.
- It is my understanding that JSDoc's `@throws` is allowed once -- if this is correct, then VSCode could ignore this and accept multiple tags anyway? Otherwise, maybe we can use a different tag, e.g. `@exception`?
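Regarding the last question: if multiple tags were accepted, a declaration might look like this sketch (the error types are illustrative):
```typescript
class NetworkError extends Error {}
class ValidationError extends Error {}

/**
 * @throws {NetworkError} when the request cannot reach the server
 * @throws {ValidationError} when the payload fails validation
 */
function submitForm(payload: { value: string }): void {
  if (!payload.value) {
    throw new ValidationError("empty payload");
  }
  // network call elided; it may throw NetworkError
}
```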
| Suggestion,Awaiting More Feedback | medium | Critical |
442,373,095 | angular | [@angular/animations] sequence non-intuitively/erroneously merges non-animated query() calls into single initial call |
# 🐞 bug report
### Affected Package
This is a problem with `@angular/animations`.
### Description
I'm trying to create a complex angular animation with the `sequence([])` function. What I am trying to achieve requires carefully setting various CSS properties on different elements in a well-defined series of events.
My efforts are being messed up because `sequence([])` seems to combine individual query() steps that lack an `animate()` function into ONE `query()` call. So a sequence that looks like this:
```
sequence([
query('.my-class', [
style({
display: 'none'
})
]),
// ...
// 1. Various other animated steps
// ...
query('.my-class', [
style({
display: 'block'
})
]),
// ...
// 2. Various other animated steps
// ...
])
```
... ends up acting like this:
```
sequence([
query('.my-class', [
style({
display: 'none',
display: 'block'
})
]),
// ...
// 1. Various other animated steps
// ...
// ...
// 2. Various other animated steps
// ...
])
```
## 🔬 Minimal Reproduction
The code I've been using to reach this conclusion can be seen in [this file](https://github.com/MagnusBrzenk/ng7-material-boilerplate/blob/010-animationsContinued/src/app/core/animations/route-change.animations.ts) within the '010-animationsContinued' branch.
The salient animation is triggered on any route change (e.g. clicking between 'data' and 'about' in the toolbar).
Note: this might be related to [#27577](https://github.com/angular/angular/issues/27577), but I find this undesirable merging to occur not just with the display property, but others as well (e.g. `position`)
| type: bug/fix,area: animations,freq1: low,P3 | low | Critical |
442,410,049 | rust | Clippy attributes have no effect on struct fields | Example:
```rust
use failure::Error;
struct MyTest {
a: &'static [u64],
}
fn main() -> Result<(), Error> {
let b = MyTest {
#[allow(clippy::unreadable_literal)]
a: &[1234567890],
};
Ok(())
}
```
The #[allow] directive must be placed on the variable assignment to have an effect - where it is now does not seem to cover the field value.
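For illustration, the placement that does silence the lint today is on the enclosing `let` statement; a minimal sketch:
```rust
struct MyTest {
    a: &'static [u64],
}

fn main() {
    // Placing the attribute on the whole statement covers the literal in the field value.
    #[allow(clippy::unreadable_literal)]
    let _b = MyTest {
        a: &[1234567890],
    };
}
```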
Searched a little bit, but wasn't sure if this goes with any existing issues (not sure if this would be #53012 since I don't think clippy attributes are procedural) | A-attributes,A-lints,T-compiler,C-bug | low | Critical |
442,411,893 | go | go/doc: Synopsis can return a sentence spanning multiple paragraphs and code blocks | The [`Synopsis`](https://godoc.org/go/doc#Synopsis) function defines the logic to determine the synopsis, often used on package documentation. The logic is:
> Synopsis returns a cleaned version of the first sentence in s. That sentence ends after the first period followed by space and not preceded by exactly one uppercase letter. The result string has no \n, \r, or \t characters and uses only single spaces between words. If s starts with any of the IllegalPrefixes, the result is the empty string.
Let "paragraph" mean a block of text separated from other text by a blank line (i.e., `"\n\n"`). The current synopsis logic means a sentence can span across multiple paragraphs. For example:
```Go
fmt.Println(doc.Synopsis(`This is a sentence that starts in the first paragraph
and it keeps going in the second paragraph
and ends in the third paragraph. This is the second sentence.`))
// Output: This is a sentence that starts in the first paragraph and it keeps going in the second paragraph and ends in the third paragraph.
```
_(Playground link: https://play.golang.org/p/hSAetYyxkwa)_
Perhaps we should consider changing the logic such that a sentence is not allowed to span multiple paragraphs.
From what I've observed, that is rarely used intentionally, but can happen unintentionally. For example, the current version of the `github.com/rs/cors` package has a very long synopsis that spans multiple paragraphs and includes code blocks:
```
$ go list -f '{{.Doc}}' github.com/rs/cors
Package cors is net/http handler to handle CORS related requests as defined by http://www.w3.org/TR/cors/ You can configure it by passing an option struct to cors.New: c := cors.New(cors.Options{ AllowedOrigins: []string{"foo.com"}, AllowedMethods: []string{http.MethodGet, http.MethodPost, http.MethodDelete}, AllowCredentials: true, }) Then insert the handler in the chain: handler = c.Handler(handler) See Options documentation for more options.
```
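For illustration only, a minimal sketch of the proposed restriction: truncate at the first blank line before applying the existing sentence logic (not a proposal for the exact implementation):
```go
package main

import (
	"fmt"
	"go/doc"
	"strings"
)

// firstParagraphSynopsis limits the synopsis to the first paragraph.
func firstParagraphSynopsis(s string) string {
	if i := strings.Index(s, "\n\n"); i >= 0 {
		s = s[:i]
	}
	return doc.Synopsis(s)
}

func main() {
	fmt.Println(firstParagraphSynopsis("This sentence starts in the first paragraph\n\nand keeps going here. Second sentence."))
	// Prints: This sentence starts in the first paragraph
}
```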
/cc @griesemer @julieqiu
**Edit:** Another one is `github.com/peterhellberg/link`:
```
$ go list -f '{{.Doc}}' github.com/peterhellberg/link
Package link parses Link headers used for pagination, as defined in RFC 5988 Installation Just go get the package: go get -u github.com/peterhellberg/link Usage A small usage example package main import ( "fmt" "net/http" "github.com/peterhellberg/link" ) func main() { for _, l := range link.Parse(`<https://example.com/?page=2>; rel="next"; foo="bar"`) { fmt.Printf("URI: %q, Rel: %q, Extra: %+v\n", l.URI, l.Rel, l.Extra) // URI: "https://example.com/?page=2", Rel: "next", Extra: map[foo:bar] } if resp, err := http.Get("https://api.github.com/search/code?q=Println+user:golang"); err == nil { for _, l := range link.ParseResponse(resp) { fmt.Printf("URI: %q, Rel: %q, Extra: %+v\n", l.URI, l.Rel, l.Extra) // URI: "https://api.github.com/search/code?q=Println+user%3Agolang&page=2", Rel: "next", Extra: map[] // URI: "https://api.github.com/search/code?q=Println+user%3Agolang&page=34", Rel: "last", Extra: map[] } } }
```
**Edit 2:** Another one is `github.com/astaxie/beego/[email protected]`:
```
$ go list -f '{{.Doc}}' github.com/astaxie/beego/orm
Package orm provide ORM for MySQL/PostgreSQL/sqlite Simple Usage package main import ( "fmt" "github.com/astaxie/beego/orm" _ "github.com/go-sql-driver/mysql" // import your used driver ) // Model Struct type User struct { Id int `orm:"auto"` Name string `orm:"size(100)"` } func init() { orm.RegisterDataBase("default", "mysql", "root:root@/my_db?charset=utf8", 30) } func main() { o := orm.NewOrm() user := User{Name: "slene"} // insert id, err := o.Insert(&user) // update user.Name = "astaxie" num, err := o.Update(&user) // read one u := User{Id: user.Id} err = o.Read(&u) // delete num, err = o.Delete(&u) } more docs: http://beego.me/docs/mvc/model/overview.md
``` | NeedsDecision | low | Minor |
442,414,875 | go | cmd/go: concurrent build and cache clean is unsafe | Migrated from discussion in https://github.com/golang/go/issues/31931.
If you start a build with `go build`, and run `go clean -cache` while it is building, the build can fail, because intermediate artifacts get deleted.
A sample failure looks like:
```
# os/exec
../os/exec/exec.go:24:2: can't open import: "bytes": open /Users/josh/Library/Caches/go-build/85/852a5f87d98090fa893dcf8b168751b8a141cb0f4087b4951cfd9fc5edebe252-d: no such file or directory
```
@jayconrod wrote:
> What should go clean -cache do with concurrent builds?
> * We could have a "clean" lock that would block `go build` and other commands until `go clean -cache` finishes.
> * `go clean -cache` could leave files created after it started. This might be hard to do reliably and portably though.
@bcmills wrote:
> I think that locking scheme is possible to implement using the existing `cmd/go/internal/lockedfile` API.
>
> `go build` would obtain a read-lock on the file (`os.O_CREATE|os.O_RDONLY`)
> `go clean` would obtain a write-lock on the file (`os.O_CREATE|os.O_WRONLY`)
>
> We don't currently rely on file-locking for the cache, but since this would only prevent a `build` / `clean` race perhaps it's not so bad.
I am not sure that @jayconrod's second option would work: I believe the problem is that the clean deletes files previously created by this build, which the build is still relying on.
@jayconrod's first option would work as long as `go clean -cache` is also blocked until `go build` is done. This makes it look a lot like a RW lock, as @bcmills notes. The only remaining danger is starvation of the clean. (That could actually happen in my particular, unusual case, since I am running many concurrent builds, but that's probably not worth designing around.) | NeedsInvestigation,GoCommand | low | Critical |
442,427,676 | flutter | [Web] Empty white website in Chrome on GitPod | I tried to make the Flutter for Web run in Gitpod.
Everything seems to be working in Firefox and Safari (I'm on OSX), but in Chrome I don't see anything in the opened website.
To reproduce start this Gitpod workspace in chrome (clicking the link):
https://gitpod.io/#https://github.com/svenefftinge/flutter_web | framework,platform-web,P3,team-web,triaged-web | medium | Major |
442,436,406 | terminal | Need a way to have per-machine profiles in settings | Settings roam, which is really cool! However, my machines don't all have the same configuration. For instance, my main dev machine has tooling installed on the `O:` drive. My default profile runs the VS command prompt batch file. Well, I just dropped terminal on a second machine that doesn't have the same hardware config, so this default profile is totally busted there. Thankfully I'm launching with `cmd /k o:\...foo.cmd`, so I just get a slightly annoying error from CMD.
However, we'll need some way of differentiating the config. As an idea, in other tools (e.g. emacs), it's not uncommon to key off of the machine name to vary configuration.
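As a purely hypothetical illustration of the machine-name idea (the `machine` key is not a supported setting, and the profile name and path are placeholders):
```jsonc
{
    "profiles": [
        {
            "name": "VS Dev Prompt",
            "commandline": "cmd.exe /k O:\\path\\to\\vsdevcmd.bat",
            // Hypothetical key: only surface this profile on the named machine.
            "machine": "DEV-DESKTOP"
        }
    ]
}
```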
=========
After a brief discussion in triage, we realize that there's going to need to be a thorough design of how we want per-machine settings to be done here.
The initial design of settings is "simple" in that all settings are applied in an unfiltered fashion, there's only one file, and we have no settings UI yet.
There are advantages to having no settings UI.
There are advantages to having only one settings file.
There are advantages to being able to layer multiple settings files.
And so on and so forth.
Therefore, it's a feature that someone needs to sit down and spec at a future date. | Issue-Feature,Area-Settings,Product-Terminal | low | Critical |
442,444,486 | terminal | Feature Request: /help commands output to collapsible pane | When using command line tools, I often have to repeatedly run `/help` to remind myself what I can do with the app.
If there were a way to send the text displayed by `/help` to a collapsible box on the screen, I could expand it to find a command without having to keep scrolling or re-running the command | Issue-Feature,Area-UserInterface,Area-Extensibility,Product-Terminal | low | Major |
442,459,776 | TypeScript | [feature request] destructured type assignment |
## Search Terms
typescript destructuring types
## Suggestion
We can currently do:
```ts
type Lane = import('../Lane').Lane
```
But it would be nifty to be able to do:
```ts
type {Lane} = import('../Lane')
```
And taking it further, grab more things at once:
```ts
type {Lane, LaneColor, LaneSize} = import('../Lane')
```
instead of having to write
```ts
type Lane = import('../Lane').Lane
type LaneColor = import('../Lane').LaneColor
type LaneSize = import('../Lane').LaneSize
```
## Use Cases
convenience
## Examples
see above
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Critical |
442,465,752 | flutter | [google_sign_in] better error handling | Currently the way that errors are handled in the ObjC google_sign_in package makes it hard to identify the core issue at hand.
If, for example, the URL scheme is not provided in the iOS app's Info.plist, an error is generated by GIDSignIn. When this error is caught and passed back to Flutter, the original error message is lost and a more generic message without much information is returned instead.
The error is caught on GoogleSignInPlugin.m, line 96:
```objc
  @try {
    [[GIDSignIn sharedInstance] signIn];
  } @catch (NSException *e) {
    result([FlutterError errorWithCode:@"google_sign_in" message:e.reason details:e.name]);
    [e raise];
  }
```
**The error currently logged in the flutter debug output**
*** First throw call stack:
```
(
0 CoreFoundation 0x00000001057d71bb __exceptionPreprocess + 331
1 libobjc.A.dylib 0x0000000104901735 objc_exception_throw + 48
2 CoreFoundation 0x00000001057d7015 +[NSException raise:format:] + 197
3 Runner 0x000000010066aef9 -[GIDSignIn signInWithOptions:] + 242
4 Runner 0x00000001006674ee -[GIDSignIn signIn] + 64
5 Runner 0x000000010057324b -[FLTGoogleSignInPlugin handleMethodCall:result:] + 2251
6 Flutter 0x0000000102d357ba __45-[FlutterMethodChannel setMethodCallHandler:]_block_invoke + 115
```
**The error currently being logged by the Xcode output**
```
Runner[18549:1040761] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: 'Your app is missing support for the following URL schemes: com.googleusercontent.apps000000000000-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
```
*** First throw call stack:
```
(
0 CoreFoundation 0x0000000113a501bb __exceptionPreprocess + 331
1 libobjc.A.dylib 0x0000000112b72735 objc_exception_throw + 48
2 CoreFoundation 0x0000000113a50015 +[NSException raise:format:] + 197
3 Runner 0x000000010e75cef9 -[GIDSignIn signInWithOptions:] + 242
4 Runner 0x000000010e7594ee -[GIDSignIn signIn] + 64
5 Runner 0x000000010e66524b -[FLTGoogleSignInPlugin handleMethodCall:result:] + 2251
6 Flutter 0x000000010fabd7ba __45-[FlutterMethodChannel setMethodCallHandler:]_block_invoke + 115
7 Flutter 0x000000010fada4ac _ZNK5shell21PlatformMessageRouter21HandlePlatformMessageEN3fml6RefPtrIN5blink15PlatformMessageEEE + 166
8 Flutter 0x000000010faddff0 _ZN5shell15PlatformViewIOS21HandlePlatformMessageEN3fml6RefPtrIN5blink15PlatformMessageEEE + 38
9 Flutter 0x000000010fb30ca7 _ZNSt3__110__function6__funcIZN5shell5Shell29OnEngineHandlePlatformMessageEN3fml6RefPtrIN5blink15PlatformMessageEEEE4$_27NS_9allocatorIS9_EEFvvEEclEv + 57
10 Flutter 0x000000010fae9e0e _ZN3fml15MessageLoopImpl15RunExpiredTasksEv + 522
11 Flutter 0x000000010faed18c _ZN3fml17MessageLoopDarwin11OnTimerFireEP16__CFRunLoopTimerPS0_ + 26
12 CoreFoundation 0x00000001139b5f34 __CFRUNLOOP_IS_CALLING_OUT_TO_A_TIMER_CALLBACK_FUNCTION__ + 20
13 CoreFoundation 0x00000001139b5b32 __CFRunLoopDoTimer + 1026
14 CoreFoundation 0x00000001139b539a __CFRunLoopDoTimers + 266
15 CoreFoundation 0x00000001139afa1c __CFRunLoopRun + 2252
16 CoreFoundation 0x00000001139aee11 CFRunLoopRunSpecific + 625
17 GraphicsServices 0x00000001180c11dd GSEventRunModal + 62
18 UIKitCore 0x000000011ba9681d UIApplicationMain + 140
19 Runner 0x000000010e441850 main + 112
20 libdyld.dylib 0x0000000114b7b575 start + 1
)
libc++abi.dylib: terminating with uncaught exception of type NSException
```
This may not be specific to this plugin but I'm sure it will help future users if the error handling provides a more clear description of the error.
| platform-ios,p: google_sign_in,package,a: error message,P2,team-ios,triaged-ios | low | Critical |
442,471,185 | rust | Types made public via associated types are not documented | Given following code:
```
mod raspberry {
pub struct Banana;
}
pub struct Peach;
pub trait Apple {
type Juice;
}
impl Apple for Peach {
type Juice = raspberry::Banana;
}
```
The generated documentation will not include documentation for `<Peach as Apple>::Juice`.
 | T-rustdoc,C-bug,A-reachable-priv | low | Minor |
442,483,810 | terminal | GSync/Freesync refresh rate / FPS drops when using Terminal | My main monitor is 144Hz. An easy way of seeing the current FPS is wiggling the mouse - the movement in 144 FPS is much smoother than 60 FPS, and this is **very** noticeable.
While using Terminal, the FPS constantly drops, and moves between low FPS and full 144 FPS. I can't tell if it drops to 60 FPS or a different amount, but it's way lower than 144 FPS. Wiggling the cursor while typing shows this problem well.
It seems like every interaction with the Terminal can cause the FPS to "flip" between low and high: Focusing on the window, typing, etc. Sometimes waiting ~3 seconds is enough for the FPS to switch back to high.
Graphics card is an `Nvidia GTX 1080 Ti` and monitor is an `Asus PG279Q`.
```
C:\WINDOWS\system32>ver
Microsoft Windows [Version 10.0.18362.86]
``` | Help Wanted,Area-Rendering,Issue-Bug,Product-Terminal,Priority-2 | high | Critical |
442,484,904 | pytorch | TensorIterator resizes output to a scalar if there are no inputs | ## 🐛 Bug
In master e47b210 -- May 9, 2019
When TensorIterator is built with only a single output (and no inputs) it resizes the output to a scalar (0-dim). The problem lies in `compute_shape`. If there are no inputs (and `resize_outputs_` is the default value of `true`) then `shape_` remains empty.
This bug is not user visible. There's no code currently that triggers it, but it makes it harder to write operators using TensorIterator.
https://github.com/pytorch/pytorch/blob/e47b21007511e3e427ffc25ac6fca339bd7953a6/aten/src/ATen/native/TensorIterator.cpp#L516-L552
Discovered by @syed-ahmed in https://github.com/pytorch/pytorch/pull/20292#discussion_r282686933 | module: internals,triaged | low | Critical |
442,488,239 | pytorch | torch.distributions.Binomial.sample() uses a massive amount of memory | ## 🐛 Bug
I'd like to use a random sampling process as part of my training (related to work in [this paper](https://arxiv.org/abs/1901.11365)). In my case this entails taking a binomial sample every epoch from the raw data, and using that sample to train. The dataset itself is small enough to fit into GPU memory (~8000 samples by 7000 features). Unfortunately, the binomial sample method contains [this](https://github.com/pytorch/pytorch/blob/master/torch/distributions/binomial.py#L97-L99):
```python
max_count = max(int(self.total_count.max()), 1)
shape = self._extended_shape(sample_shape) + (max_count,)
bernoullis = torch.bernoulli(self.probs.unsqueeze(-1).expand(shape))
```
Which, for my data, means that it tries to allocate a >600GB matrix to take a sample from a 240MB matrix. This is not ideal, to say the least. I guess no one has tried to sample a binomial distribution of this size before?
## To Reproduce
Steps to reproduce the behavior:
1. Create a binomial distribution with a big total count, e.g. `b = torch.distributions.binomial.Binomial(total_count=2000, probs=0.5)`
1. Try to take a big sample: `b.sample(sample_shape=torch.Size([10000, 10000]))`
1. Laugh when pytorch tells you to buy 745GB of RAM just so you can perform this operation
## Expected behavior
The numpy version, on the same machine:
```
%timeit b = np.random.binomial(2000, 0.5, size=(10000, 10000))
11.4 s ± 60.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
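Until the sampler is improved, a workaround sketch is to draw the counts in NumPy and convert, which avoids materializing the expanded Bernoulli tensor:
```python
import numpy as np
import torch

counts = np.random.binomial(2000, 0.5, size=(10000, 10000))
sample = torch.from_numpy(counts)  # int64 tensor, roughly 800 MB instead of hundreds of GB
```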
## Environment
- PyTorch Version (e.g., 1.0): 1.0.1.post2
- OS (e.g., Linux): Ubuntu16 via Docker container
- How you installed PyTorch (`conda`, `pip`, source): `conda install pytorch -c torch`
- Build command you used (if compiling from source): N/A
- Python version: 3.7
- CUDA/cuDNN version: 9.0
- GPU models and configuration: running on an NVIDIA Titan XP with 12GB of RAM
- Any other relevant information: N/A?
| module: distributions,module: memory usage,triaged | low | Critical |
442,502,210 | vscode | Cannot set embedded language indentation rules | Related: https://github.com/vuejs/vetur/issues/534
Vetur extension defines two languages: `vue` and `vue-html`.
I can only use `languages.setLanguageConfiguration` to set indentation rule of the outer language `vue` but not the embedded `vue-html`.
With this API call, here's what I see:
```ts
export function registerLanguageConfigurations() {
// languages.setLanguageConfiguration('vue', {
// indentationRules: {
// increaseIndentPattern: /<(?!\?|(?:area|base|br|col|frame|hr|html|img|input|link|meta|param)\b|[^>]*\/>)([-_\.A-Za-z0-9]+)(?=\s|>)\b[^>]*>(?!.*<\/\1>)|<!--(?!.*-->)|\{[^}"']*$/,
// decreaseIndentPattern: /^\s*(<\/(?!html)[-_\.A-Za-z0-9]+\b[^>]*>|-->|\})/
// },
// });
languages.setLanguageConfiguration('vue-html', {
indentationRules: {
increaseIndentPattern: /<(?!\?|(?:area|base|br|col|frame|hr|html|img|input|link|meta|param)\b|[^>]*\/>)([-_\.A-Za-z0-9]+)(?=\s|>)\b[^>]*>(?!.*<\/\1>)|<!--(?!.*-->)|\{[^}"']*$/,
decreaseIndentPattern: /^\s*(<\/(?!html)[-_\.A-Za-z0-9]+\b[^>]*>|-->|\})/
}
});
}
```

If I remove the commented-out call to set `indentationRules` of `vue` language, then it seems to work. However, I'm worried it'll affect other non `vue-html` regions such as css/javascript.
You can repro by:
- Checking out this branch of Vetur: https://github.com/vuejs/vetur/tree/embedded-indentation-rules
- Compile with
```bash
yarn
cd server && yarn && cd ..
yarn compile
```
- Run the `client` debug config to test it
In short:
- In `vue-html` embedded in `vue`, I would expect setting `vue-html`'s indentation rules would affect move line up/down
- I have to set both indentation rules for `vue` and `vue-html` to achieve this, which would have some side effect | feature-request,languages-basic | low | Critical |
442,527,001 | node | Inspector segmentation fault | * **Version**: 12.2.0
* **Platform**: Windows 10, Linux Ubuntu 18.0, Mac HighSierra
* **Subsystem**: inspector
Accessing `this` in static class field initialization context in some conditions crashes the process. This seems to be related both to Node.js inspector and Chrome devtool inspector.
Here is an automated repro:
```js
'use strict';
~class{a(){}};
debugger;
class A{static a = this};
if(process.argv.includes('test')) return;
const cp = require('child_process');
const proc1 = cp.spawn(process.execPath, ['--inspect-brk', __filename, 'test']);
const proc2 = cp.spawn(process.execPath, ['inspect', '-p', proc1.pid]);
proc1.on('exit', (code, signal) => {
if(code) console.log('Exit code: ' + code.toString(16).toUpperCase());
if(signal) console.log('Exit signal: ' + signal);
});
setTimeout(() => proc2.stdin.write('c\n'), 1000);
setTimeout(() => proc2.stdin.write('s\n'), 1100);
setTimeout(() => proc2.stdin.write('repl\n'), 1200);
setTimeout(() => proc2.stdin.write('this\n'), 1300);
```
Save as `main.js`, then run `node main.js`
Output:
```
Exit code: C0000005 // On Windows
Exit signal: SIGSEGV // On Linux and Mac
``` | confirmed-bug,inspector | low | Critical |
442,537,170 | PowerToys | Disable focus stealing | The old TweakUI had an option to disable focus stealing. I am a real power user: while launching several apps I am writing emails, notes, documents and so on. But when another app finishes starting up, it steals the focus. Other times this happens when an app shows a pop-up.
Please bring back an option to disable focus stealing. | Idea-New PowerToy | high | Critical |
442,546,218 | go | encoding/asn1: unmarshal of Context Specific into slice of RawValue |
### What version of Go are you using (`go version`)?
<pre>
go version go1.12.5 windows/amd64
</pre>
### Does this issue reproduce with the latest release?
My Go version is the latest version.
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
set GOARCH=amd64
set GOHOSTARCH=amd64
set GOHOSTOS=windows
set GOOS=windows
</pre></details>
### What did you do?
I tried to ASN.1-unmarshal the `UnauthenticatedAttributes` of a PKCS #7 `SignedData`. But `Unmarshal()` does not unmarshal them properly (only two of the three are unmarshalled) and returns the remainder as the "rest" return value. However, other ASN.1 editors/viewers and OpenSSL unmarshal them properly and show the result.
My code is available at https://play.golang.org/p/2W2xBUjbpqO
### What did you expect to see?
An ASN.1 editor shows the structure as in the following pictures:
https://imgur.com/vfiSt3Q
https://imgur.com/OhjLlsY
### What did you see instead?
They are not properly unmarshalled.
| NeedsInvestigation | low | Minor |
442,604,160 | PowerToys | Network Kill-Switch | As addition to the WLAN and Airplane toggle in the sidebar: A Kill-Switch for all Network-Connections for Internet, WAN and LAN (Ethernet, WLAN, Mobile/LTE/UMTS,...) | Idea-New PowerToy | low | Major |
442,612,690 | godot | Octant physics negatively impact GridMap performance | **Godot version:** 3.1.1 mono (64 bit)
**Issue description:**
When using the gridmap to render thousands of cells (100.000 cells, 100 different more complex meshes), the framerate drops significantly.
After some investigation, I found out that this is caused by the physics tick. We confirmed this hypothesis by disabling all collision layers.
Additionally, the more cells are filled, the slower it gets to change cells. When we increased the octant size, this had a positive effect on this issue.
Is it possible to disable the physics altogether? Or providing a way to access those physics objects by making octants available through the gridmap?
If not, is there a reason why it is implemented as such? Would there be a problem if we attempted to fix this issue (no C++ experts here though :) )?
**Steps to reproduce:**
Create a gridmap and fill 100.000 cells with around 20 different meshes -a little more complex than a cube-. This should drop the framerate significantly.
When disabling the collision layers, it should be back to normal.
The game will also take a lot longer when changing a cell on a gridmap with 100.000 filled cells than with a gridmap of 1.000.
Example of the meshes used:

| bug,discussion,confirmed,topic:physics,topic:3d,performance | medium | Major |
442,689,444 | pytorch | [feature request] Run examples from docs as tests | Quite a few times (e.g. https://github.com/pytorch/pytorch/issues/20301), examples from the docs have become obsolete and this went unnoticed.
Proposal: ability to mark some examples in docs as tests and run them on CI and check execution for errors, warnings etc. | module: docs,feature,triaged | low | Critical |
442,733,406 | pytorch | Define portable M_PI replacement, use it instead of non-standard M_PI in math.h | A common mistake is to use `M_PI` from `<math.h>` which is not portable on Windows. Standard workaround is to `#define _USE_MATH_DEFINES` before including the header. We should have a shim header that handles this for you, and then add a lint rule to forbid direct inclusion of `math.h`. Alternately, if only uses of `M_PI` are in cpp files, we can just add `-D_USE_MATH_DEFINES` to our Windows compiler flags (this approach is not permissible if they occur in `M_PI`
Recent occurrence: https://github.com/pytorch/pytorch/pull/19316 | module: internals,triaged | low | Minor |
442,751,205 | terminal | CMD does not show Unix LF when pasting | * Build 18890 (18894 on the way...)
* What I'm doing: Create a file with **Unix (LF) line endings**. For example,
```cmd
echo foo
echo bar
```
Paste it into a CMD window. The output is `fooecho bar`, but it should be on separate lines. Trying the same in mintty shows the correct output.
| Product-Conhost,Area-Interaction,Issue-Bug,Priority-2 | low | Major |
442,790,471 | rust | Tracking issue for RFC 2603, "Rust Symbol Mangling (v0)" | This is a tracking issue for the RFC "Rust Symbol Mangling (v0)" (rust-lang/rfcs#2603).
**Current status:**
Since #90128, you can control the mangling scheme with `-C symbol-mangling-version`, which can be:
* `legacy`: the older mangling version, still the default currently
* explicitly specifying this is unstable-only and also requires `-Z unstable-options`
(to allow for eventual removal after `v0` becomes the default)
* `v0`: the new RFC mangling version, as implemented by #57967
(Before #90128, this flag was the nightly-only `-Z symbol-mangling-version`)
To test the new mangling, set `RUSTFLAGS=-Csymbol-mangling-version=v0` (or change [`rustflags` in `.cargo/config.toml`](https://doc.rust-lang.org/cargo/reference/config.html#configuration-keys)). Please note that only symbols from crates built with that flag will use the new mangling, and that tool support (e.g. debuggers) will be limited initially, until everything is upstreamed. However, `RUST_BACKTRACE` and [`rustfilt`](https://crates.io/crates/rustfilt) should work out of the box with either mangling version.
**Steps:**
- [x] Implement the RFC (https://github.com/rust-lang/rust/pull/57967 + https://github.com/alexcrichton/rustc-demangle/pull/23)
- [x] Upstream C implementation of the demangler to:
- [x] `binutils`/`gdb` (GNU `libiberty`)
- [x] [[PATCH] Move rust_{is_mangled,demangle_sym} to a private libiberty header.
](https://gcc.gnu.org/pipermail/gcc-patches/2019-June/523011.html) committed as https://github.com/gcc-mirror/gcc/commit/979526c9ce7bb79315f0f91fde0668a5ad8536df
- [x] [[PATCH] Simplify and generalize rust-demangle's unescaping logic.
](https://gcc.gnu.org/pipermail/gcc-patches/2019-August/527835.html) committed as https://github.com/gcc-mirror/gcc/commit/42bf58bb137992b876be37f8b2e683c49bc2abed
- [x] [[PATCH] Remove some restrictions from rust-demangle.
](https://gcc.gnu.org/pipermail/gcc-patches/2019-September/530445.html) committed as https://github.com/gcc-mirror/gcc/commit/e1cb00db670e4eb277f8315ecc1da65a5477298d
- [x] [[PATCH] Refactor rust-demangle to be independent of C++ demangling.
](https://gcc.gnu.org/pipermail/gcc-patches/2019-November/533719.html) ([original submission](https://gcc.gnu.org/pipermail/gcc-patches/2019-October/532388.html)) committed as https://github.com/gcc-mirror/gcc/commit/32fc3719e06899d43e2298ad6d0028efe5ec3024
- [x] [[PATCH] Support the new ("v0") mangling scheme in rust-demangle.
](https://gcc.gnu.org/pipermail/gcc-patches/2020-November/558905.html) ([original submission](https://gcc.gnu.org/pipermail/gcc-patches/2020-March/542012.html)) committed as https://github.com/gcc-mirror/gcc/commit/84096498a7bd788599d4a7ca63543fc7c297645e
- [x] Linux `perf` (through `binutils 2.36` and/or `libiberty 11.0`, or later versions - may vary between distros)
- [x] `valgrind`
- [x] Implement demangling support in LLVM, including lldb, lld, llvm-objdump, llvm-nm, llvm-symbolizer, llvm-cxxfilt
- [x] Resolve issue around rustc generating invalid symbol names (https://github.com/rust-lang/rust/issues/83611)
- [ ] Adjust documentation ([see instructions on rustc-guide][doc-guide])
- [ ] Stabilization PR ([see instructions on rustc-guide][stabilization-guide])
[stabilization-guide]: https://rust-lang.github.io/rustc-guide/stabilization_guide.html#stabilization-pr
[doc-guide]: https://rust-lang.github.io/rustc-guide/stabilization_guide.html#documentation-prs
**Unresolved questions:**
- [x] [Punycode vs UTF-8](https://github.com/rust-lang/rfcs/blob/master/text/2603-symbol-name-mangling-v2.md#punycode-vs-utf-8), some prior discussion in https://github.com/rust-lang/rust/issues/7539
- [x] [Encoding parameter types for function symbols](https://github.com/rust-lang/rfcs/blob/master/text/2603-symbol-name-mangling-v2.md#encoding-parameter-types-for-function-symbols)
**Desired availability of tooling:**
Linux:
- Tools: binutils, gdb, lldb, perf, valgrind
| Distro | Has versions of all tools with support? |
| - | - |
| Debian (latest stable) | ? |
| Arch | ? |
| Ubuntu (latest release) | ? |
| Ubuntu (latest LTS) | ? |
| Fedora (latest release) | ? |
| Alpine (latest release) | ? |
Windows:
Windows does not have support for demangling either legacy or v0 Rust symbols and requires debuginfo to load the appropriate function name. As such, no special support is required.
macOS:
More investigation is needed to determine to what extent macOS system tools already support Rust v0 mangling. | T-compiler,B-unstable,B-RFC-implemented,C-tracking-issue,S-tracking-needs-to-bake | high | Critical |
442,793,725 | pytorch | Overhead performance regression over time umbrella issue. | This issue is meant to collect various performance-regression-over-time bug reports that aren't specific op regressions, that almost certainly overlap, but which we should track separately to make sure we cover all the cases.
To start:
https://github.com/pytorch/pytorch/issues/5388
https://github.com/pytorch/pytorch/issues/16717
https://github.com/pytorch/pytorch/issues/2560
cc @ezyang @gchanan @zou3519 @VitalyFedyunin @ngimel @mruberry | high priority,module: performance,module: internals,module: cuda,module: cpu,triaged,quansight-nack | low | Critical |
442,805,629 | rust | Request: Format change for doc test cli output | _Originally filed at https://github.com/rust-lang/cargo/issues/6927_
---
When running `cargo test` on a project with some doc tests I see the following output:
```
$ cargo test
Finished dev [unoptimized + debuginfo] target(s) in 0.06s
Running target/debug/deps/tq_parser-9edae091aa940c1c
running 1 test
test tokenizer::tests::string_literal::simple ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Doc-tests tq-parser
running 3 tests
test src/tokenizer.rs - tokenizer::Tokenizer<'text>::new (line 158) ... ok
test src/tokenizer.rs - tokenizer::Tokenizer (line 36) ... ok
test src/tokenizer.rs - tokenizer::Token (line 122) ... ok
test result: ok. 3 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
```
This ticket is focusing on this portion:
```
Doc-tests tq-parser
running 3 tests
test src/tokenizer.rs - tokenizer::Tokenizer<'text>::new (line 158) ... ok
test src/tokenizer.rs - tokenizer::Tokenizer (line 36) ... ok
test src/tokenizer.rs - tokenizer::Token (line 122) ... ok
```
I believe this output could be more useful, perhaps at the cost of being 'pretty', if the line numbers were joined to the source file path with a `:` to produce a clickable (in some environments) path.
For example, if the output were the following
```
Doc-tests tq-parser
running 3 tests
test src/tokenizer.rs:158 - tokenizer::Tokenizer<'text>::new ... ok
test src/tokenizer.rs:36 - tokenizer::Tokenizer ... ok
test src/tokenizer.rs:122 - tokenizer::Token ... ok
```
I could cmd-click on `src/tokenizer.rs:158` and be taken, in my editor, directly to the doc test instead of having to open the file then navigate to the line.
This is more useful in the case of errors.
If I change the `tokenizer::Tokenizer<'text>::new` test to fail I get
```
test src/tokenizer.rs - tokenizer::Tokenizer<'text>::new (line 158) ... FAILED
```
Which could be
```
test src/tokenizer.rs:158 - tokenizer::Tokenizer<'text>::new ... FAILED
``` | T-rustdoc,C-enhancement,A-libtest,A-doctests | low | Critical |
442,813,309 | PowerToys | Add dock to desktop to any window | Add the "Dock to Desktop" feature found in OneNote 2016 (keyboard shortcut Ctrl + Alt + D).
It essentially changes the window into a docked, taskbar-like pane, allowing you to maximize a second window on the rest of the screen. | Idea-Enhancement,Product-Window Manager | medium | Critical |
442,831,962 | go | x/playground: change vet support to happen in 1 HTTP request/response instead of 2 | Currently the playground makes 2 HTTP requests (/compile + /vet) instead of just 1.
(Noticed while working on #31944)
We need to do this first, otherwise the implementation of #31944 either gets nasty, or slow.
So let's just clean this up first.
Plan:
* keep old handler, to support older javascript clients
* add new "withvet=1" URL parameter to advertise to server that the client supports vet in one round trip
* return "vet" object in returned JSON
* have client skip /vet XHR if vet object key is returned
/cc @ysmolsky | NeedsFix | low | Major |
442,834,700 | godot | Local resource loses resource path |
**Godot version:**
66baa3b
**OS/device including version:**
Win 10
**Issue description:**
attaching a resource to a node and then setting it to local_to_scene will set the resource_path to null
**Steps to reproduce:**
1. Create a Node with an `export var` of type Resource, and a Resource.
2. Attach the Resource to the node.
3. Print `resource.get_path()`.
4. Set the Resource to `local_to_scene`.
5. Print `resource.get_path()` again.
**Minimal reproduction project:**
[TileTest.zip](https://github.com/godotengine/godot/files/3167637/TileTest.zip)
| discussion,topic:core | low | Critical |
442,841,130 | pytorch | Better documentation / molly-guards around use of multiprocessing with spawn in Jupyter/ipython notebooks | Apparently, it's really popular for users to use multiprocessing + spawn in ipython/Jupyter notebooks. For example, #17680; also, https://fb.workplace.com/groups/1405155842844877/permalink/2766399306720517/ (FB-only)
There is a loaded footgun that occurs when you combine these three ingredients: the spawn method will **run everything in your notebook top-level** (and you probably have live statements on your notebook top-level; that's why you're running in a notebook) before actually running the requested function in question. This is unexpected for users, who just expected to be directly dropped into the function in question, and leads to all sorts of fun (`mp.set_start_method` reports that it's already been called; deadlock when the child process starts trying to run the same thing again).
It would be *really good* to document this somewhere people are actually going to read it, and maybe add some runtime checks to detect if this situation is happening and help users do the right thing.
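For reference, a sketch of the commonly recommended pattern: keep the worker and the spawn call in an importable module (or a guarded script) rather than defining them at the notebook top level. Names here are illustrative:
```python
# run_workers.py -- importable module; a notebook would `import run_workers`
# and call run_workers.main() instead of defining the worker inline.
import torch.multiprocessing as mp

def train(rank):
    print(f"worker {rank} starting")

def main():
    mp.spawn(train, nprocs=2)

if __name__ == "__main__":
    # Under the spawn start method, children re-import the entry module's top
    # level, so the guard keeps them from calling mp.spawn again.
    main()
```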
cc @colesbury @chandlerzuo | module: docs,triaged,enhancement | low | Major |
442,841,968 | terminal | Job control and the Console API | The Console API is presently lacking any notion of the foreground process/process group; that is, there is no equivalent to the UNIX tcsetpgrp() call, and indeed the API documentation notes that in the presence of multiple processes there is no guarantee which process will receive input read from the console.
This is really quite a serious bug — it prevents the use of any kind of job control from the shell, because the shell has no way to cause a process to stop if it tries to read from the attached console. Yes, it's possible that a shell could create pseudo consoles for every subprocess and attempt to manage this itself, but that doesn't work if the subprocesses themselves spawn subprocesses.
Please take a look at https://ftp.gnu.org/old-gnu/Manuals/glibc/html_node/Access-to-the-Terminal.html#Access%20to%20the%20Terminal which describes the UNIX behaviour; you *really* want to make sure that it's possible to do similar things in the Windows API. In particular, it must be possible to send a console control message to a process if it tries to read from a console for which it is not the foreground process or in the foreground process group; the default behaviour for the message, if unhandled, should be to suspend the process. It must also be possible, optionally, depending on the console configuration, to cause output to behave the same way.
I'd suggest adding something like
```c
BOOL ConsoleSetForegroundProcessGroup(HANDLE hConsole, HANDLE hProcessGroup);
HANDLE ConsoleGetForegroundProcessGroup(HANDLE hConsole);
```
Then I'd probably add an extra console mode flag to control the suspension of processes on output, maybe `ENABLE_SUSPEND_ON_OUTPUT`; you could also add `ENABLE_SUSPEND_ON_INPUT` and leave that off by default to maintain the present behaviour.
Finally, the console control events `CTRL_SUSPEND_FOR_INPUT` and `CTRL_SUSPEND_FOR_OUTPUT` need adding. It would also be a good idea to add, while you're about it, `CTRL_SUSPEND`, the default behaviour of which should be to suspend the process unless it's handled. | Issue-Question,Product-Conhost,Area-Server | low | Critical |
442,873,663 | rust | `impl Trait` changes mutability requirements of closure | This code behaves in a way that is not obvious to me:
```rust
fn main() {
fn get_impl_trait() -> impl Default {}
let impl_trait = get_impl_trait();
let mut unit = ();
let modify_unit = || {
let _captured_impl_trait = impl_trait; // (1)
unit = (); // (2)
};
modify_unit();
}
```
This code compiles without any warnings in stable Rust. But notice, the `modify_unit` closure is not declared `mut`, but it modifies the `unit` variable inside at location `(2)`.
However, when you comment out the line marked `(1)`, suddenly Rust complains that the closure is not mutable:
[Link to the code on the playground.](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=7e64bdf75122c21de92d0d9195ca04b0)
```text
error[E0596]: cannot borrow `modify_unit` as mutable, as it is not declared as mutable
--> src/main.rs:13:5
|
7 | let modify_unit = || {
| ----------- help: consider changing this to be mutable: `mut modify_unit`
...
13 | modify_unit();
| ^^^^^^^^^^^ cannot borrow as mutable
```
What is the reason for this behavior? | C-enhancement,A-diagnostics,T-compiler | low | Critical |
442,887,453 | flutter | [in_app_purchase] BillingResponse should be accessible | The method `buyNonConsumable` of `GooglePlayConnection` calls `launchBillingFlow`, which returns `Future<BillingResponse>`. This response would be very useful to show the user why his/hers purchase was not successful. I've seen that the equivalent `launchBillingFlow` for iOS would be `addPayment` of `AppStoreConnection`, which returns `Future<void>`. I believe this is the reason why buying methods return nothing. Makes sense. Still, I see no reason why `launchBillingFlow` could not throw an `Exception` containing `BillingResponse` if the enum value is not `BillingResponse.ok`. | p: in_app_purchase,package,c: proposal,team-ecosystem,P3,triaged-ecosystem | low | Minor |
442,898,678 | rust | Request: Make `std::marker::Freeze` pub again | I had heard tell of `Freeze` but didn't really know what it was until today. [`swym`](https://github.com/mtak-/swym/blob/1dcf0a9d7d61be3bd3cbb6993d5d27bda2874e6d/src/tx.rs#L228), a hybrid transactional memory library, has an accidental reimplementation of `Freeze` using `optin_builtin_traits`. Unfortunately `optin_builtin_traits` is the only feature keeping `swym` on nightly.
The [ticket](https://github.com/rust-lang/rust/issues/12683) that removed `Freeze` doesn't have much of an explanation for why it was removed so I'm assuming it was a lack of motivating use cases.
### Use Case
[`swym::tcell::TCell::borrow`](https://docs.rs/swym/0.1.0-preview/swym/tcell/struct.TCell.html#method.borrow) returns snapshots of data - shallow `memcpy`s - that are guaranteed to not be torn, and be valid for the duration of the transaction. Those snapshots hold on to the lifetime of the `TCell` in order to act like a true reference, without blocking updates to the `TCell` from other threads. Other threads promise to not mutate the value that had its snapshot taken until the transaction has finished, but are permitted to move the value in memory.
This works great for a lot of types, but fails miserably when `UnsafeCell`s are directly stored in the type.
```rust
let x = TCell::new(Mutex::new("hello there".to_owned()));
// .. inside a transaction
let shallow_copy = x.borrow(tx, Default::default())?;
// locking a shallow copy of a lock... is not really a lock at all!
*shallow_copy.lock().unwrap() = "uh oh".to_owned();
```
Even if `Mutex` internally had a pointer to the "actual" mutex data structure, the above example would still be broken because the `String` is deallocated through the shallow copy. The `String` contained in the `TCell` would point to freed memory.
Note that having `TCell::borrow` require `Sync` would still allow the above broken example to compile.
### Freeze
If `swym::tcell::TCell::borrow` could require `Freeze` then this would not be an issue as the `Mutex` type is definitely not `Freeze`.
```rust
pub(crate) unsafe auto trait Freeze {}
impl<T: ?Sized> !Freeze for UnsafeCell<T> {}
unsafe impl<T: ?Sized> Freeze for PhantomData<T> {}
unsafe impl<T: ?Sized> Freeze for *const T {}
unsafe impl<T: ?Sized> Freeze for *mut T {}
unsafe impl<T: ?Sized> Freeze for &T {}
unsafe impl<T: ?Sized> Freeze for &mut T {}
```
Shallow immutability is all that is required for `TCell::borrow` to work. `Sync` is only necessary to make `TCell` `Sync`.
* `TCell<String>` - should be permitted.
* `TCell<Mutex<String>>` - should be forbidden.
* `TCell<Box<Mutex<String>>>` - should be permitted.
### Alternatives
- A manually implemented marker trait _could_ work, but is actually very dangerous in practice. In the below example, assume that the impl of `MyFreeze` was correct when it was written. Everytime the author of `MyType` updates their dependency on `other_crate` they must recheck that `OtherType` still has no direct interior mutability or risk unsoundness.
```rust
struct MyType { value: other_crate::OtherType }
unsafe impl MyFreeze for MyType {}
```
- Add a `T: Copy` bound on `TCell::<T>::borrow`. This would definitely work but leaves a lot of types out.
- Wait for [OIBIT](https://github.com/rust-lang/rust/issues/13231)s to stabilize (assuming they will be stabilized).
- Have `TCell` store a `Box<T>` internally, and only work with heap allocated data where interior mutability is of no concern. This would be pretty effective, and if the type is small enough and `Copy`, the `Box` could be elided. While not as good as stabilizing `Freeze`, I think this is the best alternative. | T-lang,C-feature-request,needs-rfc | high | Critical |
442,923,850 | terminal | Feature Request: Preview of open tabs in task bar | I would like to have each of my tabs show up in my task bar when I hover over the icon like Edge browser. | Issue-Feature,Help Wanted,Area-UserInterface,Product-Terminal | low | Major |
442,945,014 | terminal | Suggestion: "One-click & snap" connect to bluetooth/serial devices and network hosts using QR Code and Code 128 | IoT devices, network equipment, servers, ... can be accessed through serial over bluetooth, telnet, ssh, etc... but they can require quite a bit of information to configure the terminal properly to connect to them.
It is possible to put labels on them with their hostname or ip address, protocol to use, or for bluetooth or serial equipment, their bluetooth address, passkey, bits per second, data bits, parity, stop bits and flow control information...
Now, imagine if you could just print a QR Code or Code 128 (depending on what is easier to stick on the equipment) containing all that information, and you could just scan the label that is on the device to connect to it automatically.
For servers with a status screen but no keyboard attached, a simple `qrencode -t UTF8 "ssh://$(hostname)"` displayed on its status screen could make connecting to it much easier as well.
This could be a "Scan connection barcode" option in the [+][v] (new tab menu) of the Windows Terminal, using the Windows built-in barcode library to scan the sticker using the webcam and connecting to it in a single click and snap.
Typical scenario would be tech guy walking to a network switch, IoT endpoint, etc... and, using his laptop, connecting to it without having to setup the connection manually.
Future scenario would be when the Windows Terminal works on HoloLens 2, to be able to simply say "connect using terminal" to have a floating augmented-reality terminal window connecting to the device in front of the user.
| Issue-Feature,Help Wanted,Area-Extensibility,Product-Terminal | low | Minor |
442,961,929 | rust | Compiling is significantly slower with long return position impl types | I have the following (relatively complicated) parser code (with dependencies `combine 3.8.1` and `either_n 0.2.0`), which takes ~45s on my computer to compile for `cargo test`:
<details>
```rust
#[macro_use]
extern crate combine;
use combine::error::StringStreamError;
use combine::parser::{
char::{alpha_num, char, letter, spaces, string},
choice::{choice, optional},
combinator::attempt,
range::recognize,
repeat::{many, skip_many1},
Parser,
};
use either_n::{Either2, Either3, Either6};
use std::iter::{self, FromIterator};
pub fn parse_item(input: &str) -> Result<(&str, TokenStream<'_>), ()> {
(identifier_str(), item_after_name())
.parse(input)
.map_err(|_| ())
.and_then(|((name, rest), remaining)| match remaining {
"" => Ok((name, TokenStream(rest.collect()))),
_ => Err(()),
})
}
#[derive(Clone, Debug, Default, Eq, PartialEq)]
pub struct TokenStream<'a>(pub Vec<Token<'a>>);
impl<'a> FromIterator<Token<'a>> for TokenStream<'a> {
fn from_iter<I: IntoIterator<Item = Token<'a>>>(iter: I) -> Self {
TokenStream(Vec::from_iter(iter))
}
}
impl<'a> IntoIterator for TokenStream<'a> {
type Item = Token<'a>;
type IntoIter = <Vec<Token<'a>> as IntoIterator>::IntoIter;
fn into_iter(self) -> Self::IntoIter {
self.0.into_iter()
}
}
impl<'a> Extend<Token<'a>> for TokenStream<'a> {
fn extend<I: IntoIterator<Item = Token<'a>>>(&mut self, iter: I) {
self.0.extend(iter);
}
}
impl<'a, Iter> Extend<Iter> for TokenStream<'a>
where
Iter: IntoIterator<Item = Token<'a>>,
{
fn extend<I: IntoIterator<Item = Iter>>(&mut self, iter: I) {
self.0.extend(iter.into_iter().flatten())
}
}
#[derive(Clone, Debug, Eq, PartialEq)]
pub enum Token<'a> {
Text(&'a str),
Nested(TokenStream<'a>),
Type(TokenStream<'a>),
Primitive(Primitive<'a>),
Identifier(&'a str),
AssocType(&'a str),
Range(Range),
Where,
}
impl<'a> From<Primitive<'a>> for Token<'a> {
fn from(primitive: Primitive<'a>) -> Self {
Token::Primitive(primitive)
}
}
#[derive(Clone, Copy, Debug, Eq, PartialEq)]
pub enum Primitive<'a> {
Ref(&'a str),
SliceStart,
SliceEnd,
TupleStart,
TupleEnd,
Unit,
Named(&'a str),
}
#[derive(Clone, Copy, Debug, Eq, PartialEq)]
pub enum Range {
Range,
RangeFrom,
RangeFull,
RangeInclusive,
RangeTo,
RangeToInclusive,
}
// TODO: Replace this macro with named existential type when it's available.
// See https://github.com/rust-lang/rust/issues/34511
macro_rules! parser_str_to_iter_token {
($a:lifetime) => {
parser_str_to!($a, impl Iterator<Item = Token<$a>>)
};
}
macro_rules! parser_str_to {
($a:lifetime, $ty:ty) => {
impl Parser<Input = &$a str, Output = $ty>
}
}
fn item_after_name<'a>() -> parser_str_to_iter_token!('a) {
(
lex("("),
nested_type_like_list(),
lex(")"),
optional_tokens(chain2(lex("->"), single_type_like())),
optional_tokens(chain4(
wrap("where", Token::Where),
single_type_like(),
lex(":"),
sep1_by_lex(single_type_like, "+"),
)),
)
.map(|(left, params, right, ret, where_clause)| {
iter::empty()
.chain(left)
.chain(params)
.chain(right)
.chain(ret)
.chain(where_clause)
})
}
type BoxedTokenIter<'a> = Box<dyn Iterator<Item = Token<'a>> + 'a>;
// // Add an extra wrapper for this parser so that it can be invoked recursively.
// parser! {
// fn type_like['a]()(&'a str) -> BoxedTokenIter<'a> {
// type_like_inner()
// }
// }
//
// fn type_like_inner<'a>() -> parser_str_to!('a, BoxedTokenIter<'a>) {
// sep1_by_lex(single_type_like, "|").map(to_boxed_iter)
// }
fn type_like<'a>() -> parser_str_to_iter_token!('a) {
sep1_by_lex(single_type_like, "|")
}
// Add an extra wrapper for this parser so that we don't have too deep type name.
parser! {
fn single_type_like['a]()(&'a str) -> BoxedTokenIter<'a> {
single_type_like_inner()
}
}
fn single_type_like_inner<'a>() -> parser_str_to!('a, BoxedTokenIter<'a>) {
single_type_like_token().map(iter::once).map(to_boxed_iter)
}
fn to_boxed_iter<'a, T>(iter: impl Iterator<Item = T> + 'a) -> Box<dyn Iterator<Item = T> + 'a> {
Box::new(iter)
}
fn single_type_like_token<'a>() -> parser_str_to!('a, Token<'a>) {
to_type_token(choice((
attempt(ref_type()).map(Either6::One),
attempt(slice_type()).map(Either6::Two),
attempt(fn_type()).map(Either6::Three),
attempt(tuple_type()).map(Either6::Four),
attempt(range_type()).map(Either6::Five),
named_type().map(Either6::Six),
)))
}
fn ref_type<'a>() -> parser_str_to_iter_token!('a) {
chain3(
recognize((
char('&'),
optional(string("mut")),
optional(attempt((spaces(), lifetime()))),
))
.map(|s| iter::once(Token::Primitive(Primitive::Ref(s)))),
maybe_spaces(),
single_type_like(),
)
}
fn slice_type<'a>() -> parser_str_to_iter_token!('a) {
chain3(
wrap_start("[", Primitive::SliceStart),
type_like(),
wrap_end("]", Primitive::SliceEnd),
)
}
fn fn_type<'a>() -> parser_str_to_iter_token!('a) {
chain4(
text((char('('), spaces())),
nested_type_like_list(),
text((spaces(), char(')'), spaces(), string("->"), spaces())),
type_like(),
)
}
fn tuple_type<'a>() -> parser_str_to_iter_token!('a) {
choice((
attempt(wrap("()", Primitive::Unit)).map(Either2::One),
chain3(
wrap_start("(", Primitive::TupleStart),
nested_type_like_list(),
wrap_end(")", Primitive::TupleEnd),
)
.map(Either2::Two),
))
}
fn nested_type_like_list<'a>() -> parser_str_to_iter_token!('a) {
optional(
sep1_by_lex(type_like, ",")
.map(Iterator::collect)
.map(Token::Nested),
)
.map(IntoIterator::into_iter)
}
fn range_type<'a>() -> parser_str_to_iter_token!('a) {
(
optional(named_type()),
choice((attempt(lex_str("..=")), attempt(lex_str("..")))),
optional(named_type()),
)
.and_then(|(start, op, end)| {
let range = match (&start, op.trim(), &end) {
(None, "..", None) => Range::RangeFull,
(None, "..", Some(_)) => Range::RangeTo,
(None, "..=", Some(_)) => Range::RangeToInclusive,
(Some(_), "..", None) => Range::RangeFrom,
(Some(_), "..", Some(_)) => Range::Range,
(Some(_), "..=", Some(_)) => Range::RangeInclusive,
_ => return Err(StringStreamError::UnexpectedParse),
};
let start = start.into_iter().flatten();
let end = end.into_iter().flatten();
Ok(iter::empty()
.chain(start)
.chain(range_token(op, range))
.chain(end))
})
}
fn range_token(s: &str, range: Range) -> impl Iterator<Item = Token<'_>> {
let start = match &s[..s.len() - s.trim_start().len()] {
"" => None,
spaces => Some(Token::Text(spaces)),
};
let end = match &s[s.trim_end().len()..] {
"" => None,
spaces => Some(Token::Text(spaces)),
};
iter::empty()
.chain(start)
.chain(iter::once(Token::Range(range)))
.chain(end)
}
fn named_type<'a>() -> parser_str_to_iter_token!('a) {
chain2(
named_type_base().map(|ty| iter::once(Token::Type(ty.collect()))),
// Associated items
many::<TokenStream<'_>, _>(attempt(chain2(
lex("::"),
identifier_str().map(Token::AssocType).map(iter::once),
))),
)
}
fn named_type_base<'a>() -> parser_str_to_iter_token!('a) {
chain2(
// Name
identifier_str().map(|ident| {
iter::once(if is_primitive(ident) {
Token::Primitive(Primitive::Named(ident))
} else {
Token::Identifier(ident)
})
}),
// Optional parameters
optional_tokens(chain3(
lex("<"),
sep1_by_lex(type_param, ","),
text((spaces(), char('>'))),
)),
)
}
fn to_type_token<'a>(inner: parser_str_to_iter_token!('a)) -> parser_str_to!('a, Token<'a>) {
inner.map(|ty| {
let mut inner: Vec<_> = ty.collect();
match inner.as_ref() as &[_] {
[Token::Type(_)] => inner.remove(0),
_ => Token::Type(TokenStream(inner)),
}
})
}
#[rustfmt::skip]
fn is_primitive(ident: &str) -> bool {
match ident {
"bool" | "char" | "str" |
"i8" | "i16" | "i32" | "i64" | "i128" | "isize" |
"u8" | "u16" | "u32" | "u64" | "u128" | "usize" => true,
_ => false,
}
}
fn type_param<'a>() -> parser_str_to_iter_token!('a) {
choice((
attempt(lifetime_param()).map(Either3::One),
attempt(assoc_type_param()).map(Either3::Two),
type_like().map(Either3::Three),
))
}
fn lifetime_param<'a>() -> parser_str_to_iter_token!('a) {
text(lifetime())
}
fn assoc_type_param<'a>() -> parser_str_to_iter_token!('a) {
chain3(
identifier_str().map(Token::AssocType).map(iter::once),
lex("="),
type_like(),
)
}
fn optional_tokens<'a>(inner: parser_str_to_iter_token!('a)) -> parser_str_to_iter_token!('a) {
optional(attempt(inner))
.map(IntoIterator::into_iter)
.map(Iterator::flatten)
}
fn sep1_by_lex<'a, P, I>(
parser_fn: impl Fn() -> P,
sep: &'static str,
) -> parser_str_to_iter_token!('a)
where
P: Parser<Input = &'a str, Output = I>,
I: Iterator<Item = Token<'a>>,
{
chain2(
parser_fn(),
many::<TokenStream<'a>, _>(attempt(chain2(lex(sep), parser_fn()))),
)
}
fn lex<'a>(s: &'static str) -> parser_str_to_iter_token!('a) {
text(lex_str(s))
}
fn lex_str<'a>(s: &'static str) -> parser_str_to!('a, &'a str) {
recognize((spaces(), string(s), spaces()))
}
fn wrap_start<'a>(
inner: &'static str,
token: impl Into<Token<'a>>,
) -> parser_str_to_iter_token!('a) {
let token = token.into();
chain2(
string(inner).map(move |_| iter::once(token.clone())),
maybe_spaces(),
)
}
fn wrap_end<'a>(inner: &'static str, token: impl Into<Token<'a>>) -> parser_str_to_iter_token!('a) {
let token = token.into();
chain2(
maybe_spaces(),
string(inner).map(move |_| iter::once(token.clone())),
)
}
fn wrap<'a>(inner: &'static str, token: impl Into<Token<'a>>) -> parser_str_to_iter_token!('a) {
let token = token.into();
chain3(
maybe_spaces(),
string(inner).map(move |_| iter::once(token.clone())),
maybe_spaces(),
)
}
fn maybe_spaces<'a>() -> parser_str_to_iter_token!('a) {
recognize(spaces()).map(|s| match s {
"" => None.into_iter(),
s => Some(Token::Text(s)).into_iter(),
})
}
fn text<'a>(inner: impl Parser<Input = &'a str>) -> parser_str_to_iter_token!('a) {
text_token(inner).map(iter::once)
}
fn text_token<'a>(
inner: impl Parser<Input = &'a str>,
) -> impl Parser<Input = &'a str, Output = Token<'a>> {
recognize(inner).map(Token::Text)
}
fn lifetime<'a>() -> parser_str_to!('a, &'a str) {
recognize((char('\''), skip_many1(letter())))
}
fn identifier_str<'a>() -> parser_str_to!('a, &'a str) {
recognize(skip_many1(choice((alpha_num(), char('_')))))
}
macro_rules! impl_chain {
($name:ident: $($v:ident)+) => {
fn $name<'a>($(
$v: parser_str_to!('a, impl IntoIterator<Item = Token<'a>>),
)+) -> parser_str_to_iter_token!('a) {
($($v),+).map(|($($v),+)| {
iter::empty() $(.chain($v.into_iter()))+
})
}
}
}
impl_chain!(chain2: a b);
impl_chain!(chain3: a b c);
impl_chain!(chain4: a b c d);
#[cfg(test)]
mod tests {
use super::*;
use combine::Parser;
macro_rules! tokens {
($($t:tt)*) => {{
let mut result = vec![];
tokens_impl!(result $($t)*);
result
}};
}
macro_rules! tokens_impl {
($result:ident) => {};
($result:ident where $($t:tt)*) => {
$result.push(Token::Where);
tokens_impl!($result $($t)*);
};
($result:ident +$ident:ident $($t:tt)*) => {
$result.push(Token::AssocType(stringify!($ident)));
tokens_impl!($result $($t)*);
};
($result:ident $ident:ident $($t:tt)*) => {
$result.push(Token::Identifier(stringify!($ident)));
tokens_impl!($result $($t)*);
};
($result:ident $str:literal $($t:tt)*) => {
$result.push(Token::Text($str));
tokens_impl!($result $($t)*);
};
($result:ident &$r:literal $($t:tt)*) => {
$result.push(Token::Primitive(Primitive::Ref(concat!("&", $r))));
tokens_impl!($result $($t)*);
};
($result:ident @() $($t:tt)*) => {
$result.push(Token::Type(TokenStream(vec![
Token::Primitive(Primitive::Unit),
])));
tokens_impl!($result $($t)*);
};
($result:ident @( $($inner:tt)* ) $($t:tt)*) => {
$result.push(Token::Type(TokenStream(vec![
Token::Primitive(Primitive::TupleStart),
Token::Nested(TokenStream(tokens!($($inner)*))),
Token::Primitive(Primitive::TupleEnd),
])));
tokens_impl!($result $($t)*);
};
($result:ident @[ $($inner:tt)* ] $($t:tt)*) => {
let mut inner = vec![];
inner.push(Token::Primitive(Primitive::SliceStart));
tokens_impl!(inner $($inner)*);
inner.push(Token::Primitive(Primitive::SliceEnd));
$result.push(Token::Type(TokenStream(inner)));
tokens_impl!($result $($t)*);
};
($result:ident ~$range:ident $($t:tt)*) => {
$result.push(Token::Range(Range::$range));
tokens_impl!($result $($t)*);
};
($result:ident @$ident:ident $($t:tt)*) => {
$result.push(Token::Type(TokenStream(vec![
Token::Primitive(Primitive::Named(stringify!($ident))),
])));
tokens_impl!($result $($t)*);
};
($result:ident ^$ident:ident $($t:tt)*) => {
$result.push(Token::Type(TokenStream(vec![
Token::Identifier(stringify!($ident)),
])));
tokens_impl!($result $($t)*);
};
($result:ident ^[ $($inner:tt)* ] $($t:tt)*) => {
$result.push(Token::Type(TokenStream(tokens!($($inner)*))));
tokens_impl!($result $($t)*);
};
($result:ident { $($inner:tt)* } $($t:tt)*) => {
$result.push(Token::Nested(TokenStream(tokens!($($inner)*))));
tokens_impl!($result $($t)*);
};
}
macro_rules! test {
($parser:ident: [$($input:literal => [$($expected:tt)*],)*]) => {
#[test]
fn $parser() {
$(
let (tokens, remaining) = super::$parser().parse($input)
.expect("failed to parse");
assert_eq!(remaining, "", "unparsed content");
assert_eq!(tokens.collect::<Vec<_>>(), tokens!($($expected)*));
)*
}
};
}
test!(item_after_name: [
" ((T) -> ())" => [" (" { ^["(" { ^T } ") -> " @()] } ")"],
" ((&T) -> bool) -> (B, B) where B: Default + Extend<T>" => [
" (" { ^["(" { ^[&"" ^T] } ") -> " @bool] } ") " "-> " @( ^B ", " ^B )
" " where " " ^B ": " ^Default " + " ^[ Extend "<" ^T ">" ]
],
]);
test!(type_like: [
// Named
"Foo" => [^Foo],
"Option<Foo>" => [^[Option "<" ^Foo ">"]],
"Foo::Err" => [^[^Foo "::" +Err]],
// References
"&Foo" => [^[&"" ^Foo]],
"&'a Foo" => [^[&"'a" " " ^Foo]],
"&mut Foo" => [^[&"mut" " " ^Foo]],
"&mut 'a Foo" => [^[&"mut 'a" " " ^Foo]],
"&[Foo]" => [^[&"" @[^Foo]]],
// Tuple-like
"()" => [@()],
"(Foo, &Bar)" => [@(^Foo ", " ^[&"" ^Bar])],
// Range
"usize.. usize" => [^[@usize ~Range " " @usize]],
"usize..=usize" => [^[@usize ~RangeInclusive @usize]],
" .. usize" => [^[" " ~RangeTo " " @usize]],
" ..=usize" => [^[" " ~RangeToInclusive @usize]],
"usize.. " => [^[@usize ~RangeFrom " "]],
" .. " => [^[" " ~RangeFull " "]],
// Function
"() -> Foo" => [^["(" ") -> " ^Foo]],
"(Iterator<Item = T>) -> Result<(), T>" => [
^["(" { ^[Iterator "<" +Item " = " ^T ">"] } ") -> " ^[Result "<" @() ", " ^T ">"]]
],
"(Foo, &(Bar, &mut 'a [Baz])) -> T" => [
^["(" { ^Foo ", " ^[&"" @(^Bar ", " ^[&"mut 'a" " " @[^Baz]])] } ") -> " ^T]
],
// Union (pseudo-type)
"Foo | &Bar<T> | (Baz) -> bool" => [
^Foo " | " ^[&"" ^[Bar "<" ^T ">"]] " | " ^["(" { ^Baz } ") -> " @bool]
],
]);
}
```
</details>
However, if you replace the function `type_like` with the commented code above it (the `parser!` macro and `type_like_inner` function), it compiles significantly faster, taking only ~18s.
There might be something improvable here? | I-compiletime,T-compiler,A-impl-trait,E-needs-mcve | low | Critical |
442,975,072 | TypeScript | when preserveConstEnums = true, should allow using enum[key] |
**TypeScript Version:** 3.4.0-dev.201xxxxx
**Search Terms:**
**Code**
preserveConstEnums = true
```ts
const enum EnumCacheName
{
'toc_contents' = '.toc_contents.cache'
}
let { target } = yargs
.option('target', {
type: 'string',
})
.argv
;
let cache_file = path.join(ProjectConfig.cache_root, EnumCacheName[target as any]);
```
**Expected behavior:**
no error, because this code exists in the emitted .js:
```js
var EnumCacheName;
(function (EnumCacheName) {
EnumCacheName["toc_contents"] = ".toc_contents.cache";
})(EnumCacheName || (EnumCacheName = {}));
```
**Actual behavior:**
> Error: TS2476: A const enum member can only be accessed using a string literal.
**Playground Link:**
**Related Issues:**
| Suggestion,Awaiting More Feedback | low | Critical |
443,006,588 | terminal | Feature request: theme preview / install UI | You're probably already working on this, but it would be useful to have an issue to coordinate discussion.
@felixse's Fluent Terminal has a pretty lovely theme UI that gets it 90% right:

Some good things about it:
- Previews of everything, rather than just the theme name
- Users can click on the colors to edit (with a little photoshop-style editor)
- It can import `.itermcolors` directly, rather than asking users to run `colortool`
One thing I'd add: the ability to just type in the name of [the many well known color schemes](https://iterm2colorschemes.com/), preview it, and have it installed. This saves having to download a color scheme, convert it, and put it in a directory and import it.
| Issue-Feature,Area-UserInterface,Product-Terminal | low | Major |
443,015,147 | godot | Impractical behaviour when trying to enter a new line at the end of a collapsed section |
**Godot version:** v3.1 stable
**Issue description:** Unable to create a new line in the script file when the cursor is at the end of a collapsed function that is also on the last line of the script.
**Steps to reproduce:**
- write a function and collapse it
- make sure your function sits on the very last line in the editor
- move your cursor to the end of the function (between ":" and the dots  indicating a collapsed section
- try hitting enter to add a new line
*EXPECTED*
A new empty line of code is created after the function.
*ACTUAL BEHAVIOUR*
A new line inside the function block is created.

*My recommendation for solving this*
Allow the user to navigate the cursor to a position after the three dots 
but still on the same line of the script.
Another issue this would solve: when you hit the Backspace key to delete a line right after a collapsed section, the collapsed section becomes un-collapsed. I am not sure if this is desired behavior.
*Update*
I also noticed that when trying to highlight a collapsed section (which is also sitting at the end of a script) for copy & paste or a move, I was unable to highlight the collapsed code, so I would end up moving only the function's declaration or part of the code.
The same issue is present when I try to move a collapsed section by hitting Ctrl + direction. I would expect a collapsed section to move as a whole.
(I haven't found a related issue; feel free to correct me if I am wrong.)
| enhancement,topic:editor,usability | low | Major |
443,050,044 | PowerToys | Xmouse (focus follows mouse) | Please update xmouse!
I have a freeware tool that is serviceable, but it does not always play well with various pop-up UI elements, such as color palettes in tools like FileMaker, or the Windows system tray. | Idea-New PowerToy,Product-Mouse Utilities | medium | Critical |
443,050,136 | PowerToys | Send to X | Send to X was the reason I installed PowerToys wherever I went for as long as it worked. It would be lovely to have this back again. | Idea-New PowerToy,Product-File Explorer,Product-File Actions Menu | low | Major |
443,058,154 | pytorch | CosineAnnealingLR has unexpected behavior with large step | ## 🐛 Bug
CosineAnnealingLR has unexpected behavior with large step. We use token count as the step for language modeling and this is often in the 1e9 range.
## To Reproduce
1. Construct torch.optim.lr_scheduler.CosineAnnealingLR with max_step=1e9
2. scheduler.step(x) for large x
Repro in this Colab notebook:
https://colab.research.google.com/drive/11htMBtisAHq7fQViW0inUMPAiA2J93t3
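For convenience, a minimal offline sketch of the repro above (assuming the report's `max_step` corresponds to the scheduler's `T_max` parameter, and torch 1.1 semantics where `step()` accepts an explicit step value):
```python
import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import CosineAnnealingLR

# A dummy parameter/optimizer pair just so the scheduler has something to drive.
param = torch.nn.Parameter(torch.zeros(1))
optimizer = SGD([param], lr=0.1)
scheduler = CosineAnnealingLR(optimizer, T_max=int(1e9))

# Step with very large values (e.g. token counts) and inspect the scheduled LR.
for step in (0, 1, 10_000_000, 500_000_000, 1_000_000_000):
    scheduler.step(step)
    print(step, scheduler.get_lr())
```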
## Expected behavior
Behaves nicely with large numbers for step size
## Environment
- PyTorch Version (e.g., 1.0): 1.1
- OS (e.g., Linux): colab
- Python version: 3.x
- GPU models and configuration: None
| module: optimizer,triaged | low | Critical |
443,068,881 | angular | docs: please describe how to compile and load components in the browser |
# 📚 Docs or angular.io bug report
### Description
According to @alxhub, there is a way to compile Angular code and load it in a running Angular app, completely in the browser, i.e. JIT compilation plus dynamic component loading (without entry components).
(see https://github.com/angular/angular/issues/30222)
I want to be able to "directly interface with the JIT compiler", but I could not find effective documentation.
Thank you
| area: core,state: needs eng input,P4 | medium | Critical |
443,080,232 | opencv | HEIF format support in OpenCV | This is a feature request, not a bug.
##### Detailed description
https://github.com/strukturag/libheif
I'd like to access HEIF files directly without conversion. Is it possible to add a build flag by using libheif? | feature,priority: low,category: imgcodecs,GSoC,effort: few weeks | low | Critical |
443,095,800 | flutter | Is it possible to replace the keyboard position with another widget and keep the TextField position unchanged during the keyboard animation? | Flutter version
```
[✓] Flutter (Channel master, v1.5.9-pre.205, on Mac OS X 10.13.6 17G4015, locale en-CN)
```
Hi, I will try my best to describe the problem. Basically, I want to achieve what many messenger apps do when the emoji icon is tapped while the keyboard is present; all messenger apps, including Facebook Messenger, WeChat, etc., behave the same way. I have spent two days trying to do the same in Flutter, but failed.
First, hiding and showing the keyboard is easy via `SystemChannels.textInput.invokeMethod('TextInput.hide');` and `SystemChannels.textInput.invokeMethod('TextInput.show');`.
When the emoji button is tapped, I want the text-field row position to remain unchanged, the keyboard to hide, and the keyboard's original position to be replaced by the emoji panel. The current behavior, however, is that the keyboard show/hide actions change the UI layout during the animation. This does not happen in any of the native apps that I use. Something like the following:
## Showing Emoji Panel When Keyboard OnScreen
Original Screen:
```
-------Column-------
-- ListView
----------------------
-- TextFieldRow
----------------------
-- Keyboard
----------------------
```
After clicking Emoji Icon:
Phase 1 Screen:
```
-------Column-------
-- ListView
----------------------
-- TextFieldRow
----------------------
-- Emoji Panel
----------------------
-- Keyboard
----------------------
```
The ListView and emoji panel positions change because both the emoji panel and the keyboard are present at the same time. NOTE THAT their positions keep changing while the keyboard is moving down.
Phase 2 Screen:
```
-------Column-------
-- ListView
----------------------
-- TextFieldRow
----------------------
-- Emoji Panel
----------------------
```
ListView and Emoji Panel positions changed because Keyboard is finally gone.
## Hiding Emoji Panel and Bring Keyboard Back
Original Screen:
```
-------Column-------
-- ListView
----------------------
-- TextFieldRow
----------------------
-- Emoji Panel
----------------------
```
Clicking Emoji Icon (or Keyboard Icon)
Phase 1 Screen:
```
-------Column-------
-- ListView
----------------------
-- TextFieldRow
----------------------
```
ListView and TextFieldRow positions changed because emoji panel is gone.
Phase 2 Screen:
```
-------Column-------
-- ListView
----------------------
-- TextFieldRow
----------------------
-- Keyboard
----------------------
```
The ListView and emoji panel positions keep changing while the keyboard is moving up.
## What I expect
I expect the ListView and the TextFieldRow not to change their positions while the keyboard moves up or down and the emoji panel shows or hides. This is what all the native apps I have used so far do. I tried many approaches, including playing around with Stack, the resizeToAvoidBottomPadding property, etc., but nothing has worked so far.
(Btw, I can get the keyboard height with `MediaQuery.of(context).viewInsets.bottom`, so giving the emoji panel the same height as the keyboard is not a problem. Even if the emoji panel has a slightly different height, the position change should be only a few pixels instead of the current behavior.)
Thank you very much for your help.
| a: text input,c: new feature,framework,f: material design,P3,team-design,triaged-design | low | Critical |
443,103,208 | PowerToys | Context Sensitive Emoji Menu | It would be super nice if, when I type a sentence like "I will see you at home" and press `WIN+.` to open the emoji menu, one of the top results were already as if I had searched for the last word that I typed.
Bonus points if I could go back anywhere in the sentence and do the same thing, replacing words with emojis. Obviously, my most-used emojis should still be near the top as well (different smileys).
If you are wondering why someone would care so much about emoji? I live in Sweden and speak Swedish fluently but my native language is English. The Swedish language compared to the English language is quite poor in providing synonyms for words to account for different connotations and subtle nuances. So Swedes tend to use emoji's a lot to convey those emotions and nuances instead.
So even my business emails contain emojis 😁 | Idea-New PowerToy | low | Minor |
443,104,619 | PowerToys | Change the Z-index of notifications | Notifications in Windows seem to have the highest z-index. However, sometimes those notifications don't have a close button, just the arrow to perform an action. The only way to get rid of them is to select them and perform this awkward swipe, which for me always ends up clicking the notification and opening the action.
The problem is that those notifications are shown right where those background programs are running most of the time. So when I receive a notification and then want to close the program that is running, the menu to do so is hidden under the notification.
In short, those menus in the bottom right need a higher z-index, if that is at all tweakable. | Idea-New PowerToy | low | Minor |
443,107,527 | rust | improper_ctypes should not suggest reprs that lead to errors | Currently when the lint finds a repr(Rust) struct, enum, or union that is not FFI-safe, it generally suggests some repr attributes that could be applied to a struct/enum/union that would make it FFI-safe, which is nice. However, it bases these suggestions only on the kind of data type, without checking whether the attribute could actually be applied to the specific type.
For example, when compiling this code:
```rust
struct Rgb(u8, u8, u8);
extern "C" {
fn get_color(name: *const u8) -> Rgb;
}
```
The compiler suggests `repr(transparent)` alongside `repr(C)`, but applying this suggestion will cause an error because the struct has multiple non-ZST fields.
Arguably the compiler should first check whether the suggestion "makes sense", and not suggest `repr(transparent)` in cases such as this one. Although it probably *should* continue suggesting `repr(C)` even if that would then lead to another improper_ctypes warning about a *field* of the affected type (the user might want to repeatedly apply those suggestions to mark all necessary types as `repr(C)`). | C-enhancement,A-lints,A-FFI,T-compiler,L-improper_ctypes | low | Critical |
443,113,002 | rust | Box<dyn Error> does not impl Error | It seems reasonable to expect that `Box<dyn Error>` would implement `Error`, [but it does not](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=36fba6624a6ac8bd886b76a4d838d7e5).
Unfortunately, extending the `impl<T: Error> Error for Box<T>` to also apply for `T: ?Sized` fails because then the `impl<'a, E: Error + 'a> From<E> for Box<dyn Error + 'a>` starts overlapping with `impl<T> From<T> for T`. | T-libs-api,C-bug | medium | Critical |
443,113,721 | PowerToys | Configure windows borders size, scroll bars, ... in Windows | One of the main reasons that keeps me clinging to Windows 8.1 is the inability to set window border sizes in Windows 10.
Beside my personal (_and admittedly irrelevant_) idiosyncrasy, I'm strongly convinced that Windows 10 narrow borders can be a real issue for people with certain classes of motor and/or visual dysfunctions.
Please provide a PowerToy to make this aspect configurable.
Thanks!
| Idea-Enhancement,Product-Tweak UI Design | low | Major |
443,121,640 | pytorch | String in tensor | ## 🐛 Bug
`"whateverstring" in torch.tensor(1.)` returns True

This bug happens on torch 1.1.0; I'm running PyTorch with Python 3.6.5 on the Windows Subsystem for Linux.
But it does not happen on 1.0.1.post2 with Ubuntu 16.04 and Python 3.6.8:

| module: internals,triaged | low | Critical |
443,125,939 | go | x/net/http2: panic: interface conversion: http.http2Frame is *http.http2UnknownFrame, not *http.http2HeadersFrame |
<pre>
$ go version
go version go1.12.5 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/root/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build000458693=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
It is randomly occurred. Sometimes works fine without error, sometimes panics.
Maybe it associated with my Transport.
```golang
type Transport struct {
upstream http.RoundTripper
jar http.CookieJar
delay int
userAgent string
}
func (t *Transport) RoundTrip(r *http.Request) (*http.Response, error) {
resp, err := t.upstream.RoundTrip(r)
if err != nil {
return nil, err
}
if some_condition(resp) {
req, err = http.NewRequest("GET", u.String(), nil)
if err != nil {
return nil, err
}
client := http.Client{
Transport: t,
Jar: t.jar,
CheckRedirect: func(req *http.Request, via []*http.Request) error {
return http.ErrUseLastResponse
},
}
resp, err = client.Do(req)
if err != nil {
return nil, err
}
redirectUrl := resp.Header.Get("Location")
redirectLocation, err := url.Parse(redirectUrl)
if err != nil {
return nil, err
}
if redirectLocation.Host == "" {
redirectUrl = fmt.Sprintf("%s://%s%s",
resp.Request.URL.Scheme,
resp.Request.URL.Host,
redirectUrl)
}
req, err = http.NewRequest("GET", redirectUrl, nil)
if err != nil {
return nil, err
}
client = http.Client{
Transport: t,
Jar: t.jar,
}
resp, err = client.Do(req)
return resp, err
}
return resp, err
}
```
### What did you expect to see?
Works without panics.
### What did you see instead?
<pre>
panic: interface conversion: http.http2Frame is *http.http2UnknownFrame, not *http.http2HeadersFrame
	panic: err must be non-nil

goroutine 32 [running]:
panic(0x9e0580, 0xbb6070)
	/usr/local/go/src/runtime/panic.go:565 +0x2c5 fp=0xc000407b68 sp=0xc000407ad8 pc=0x42bca5
net/http.(*http2pipe).closeWithError(0xc0002d76a8, 0xc0002d76f8, 0x0, 0x0, 0x0)
	/usr/local/go/src/net/http/h2_bundle.go:3553 +0x1cd fp=0xc000407b90 sp=0xc000407b68 pc=0x67d70d
net/http.(*http2pipe).CloseWithError(...)
	/usr/local/go/src/net/http/h2_bundle.go:3540
net/http.(*http2clientConnReadLoop).cleanup(0xc000407fb8)
	/usr/local/go/src/net/http/h2_bundle.go:8084 +0x26c fp=0xc000407c90 sp=0xc000407b90 pc=0x694cac
runtime.call32(0x0, 0xae22d8, 0xc00005a0b0, 0x800000008)
	/usr/local/go/src/runtime/asm_amd64.s:519 +0x3b fp=0xc000407cc0 sp=0xc000407c90 pc=0x45605b
panic(0xa18c40, 0xc0005de240)
	/usr/local/go/src/runtime/panic.go:522 +0x1b5 fp=0xc000407d50 sp=0xc000407cc0 pc=0x42bb95
runtime.panicdottypeE(...)
	/usr/local/go/src/runtime/iface.go:248
runtime.panicdottypeI(0xbc7d40, 0xa6c9a0, 0xa20f80)
	/usr/local/go/src/runtime/iface.go:258 +0xf5 fp=0xc000407d78 sp=0xc000407d50 pc=0x409455
net/http.(*http2Framer).ReadFrame(0xc00058f420, 0xc0002830e0, 0x0, 0x0, 0x0)
	/usr/local/go/src/net/http/h2_bundle.go:1759 +0x65d fp=0xc000407e38 sp=0xc000407d78 pc=0x67559d
net/http.(*http2clientConnReadLoop).run(0xc000407fb8, 0xae22d8, 0xc0003b3fb8)
	/usr/local/go/src/net/http/h2_bundle.go:8102 +0x8f fp=0xc000407f70 sp=0xc000407e38 pc=0x694f7f
net/http.(*http2ClientConn).readLoop(0xc00006b080)
	/usr/local/go/src/net/http/h2_bundle.go:8030 +0x76 fp=0xc000407fd8 sp=0xc000407f70 pc=0x694836
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1337 +0x1 fp=0xc000407fe0 sp=0xc000407fd8 pc=0x457bf1
created by net/http.(*http2Transport).newClientConn
	/usr/local/go/src/net/http/h2_bundle.go:7093 +0x637
</pre> | NeedsInvestigation | low | Critical |
443,129,935 | terminal | coding standards issue, _Upper should be banned | the language standard reserves _Upper and __doubleUnderscore symbols for itself. windows coding standards recognize this as well, all _ prefixes on methods and other things with an upper case second letter should be renamed. | Product-Conhost,Help Wanted,Issue-Bug,Area-Build,Product-Terminal | low | Major |
443,137,782 | terminal | Moving the cursor on the screen breaks 2-cell characters | In the area of the window below and to the left of the cursor, two-cell (full-width) characters are rendered broken.
It is difficult to establish this condition in a console window. As an example, the recording below uses Vim's free cursor mode (":set virtualedit=all").

| Product-Conhost,Area-Rendering,Issue-Bug,Priority-2 | low | Critical |
443,143,922 | PowerToys | Default Printer Changer from Tray | Some years ago (over 17) I wrote a utility that allowed you to quickly change the default printer from the system tray. It was written in the time of the original power tools and was standard Win32 API. The code is at https://github.com/timlegge/printers if you or anyone else is interested in it.
Always found it to be a useful utility.
TIm | Idea-New PowerToy | low | Major |
443,151,754 | react-native | Building OSS-compatible React Native "Plugins" for Native Libraries | We need help building a generic implementation for native "plugins" as described here: https://github.com/react-native-community/discussions-and-proposals/issues/125
Let's start with iOS for now. | Help Wanted :octocat:,Type: Discussion | low | Minor |
443,165,904 | pytorch | Class based Sampler for Class Incremental/Continual Learning research | ## 🚀 Feature
A class based dataset sampler for class incremental and continual learning research.
## Motivation
For research in the class-incremental learning domain, we need datasets to be split up by class, so that after training on one subset of classes, other classes can be introduced and the model can be made to generalize to them as well, either architecturally or by using techniques like Elastic Weight Consolidation. PyTorch is strong with dynamic graphs, but it currently lacks a sampler that can return examples from specific classes.
## Pitch
I would like to add a sampler similar to others but which takes in the class labels as argument and returns examples from those classes. A random flag can be added to the arguments to randomize the order of samples in the subset. The implementation would be similar to that of SubsetRandomSampler in torch.utils.data but instead of indices, it would take in the class labels as argument.
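A minimal sketch of what such a sampler could look like; the class name and the `labels`/`class_labels` arguments are hypothetical and not part of any existing PyTorch API:
```python
import torch
from torch.utils.data import Sampler

class ClassSubsetRandomSampler(Sampler):
    """Yields indices of samples whose label is in `class_labels`, optionally shuffled."""

    def __init__(self, labels, class_labels, shuffle=True):
        # `labels` is the per-sample class label for the whole dataset.
        wanted = set(class_labels)
        self.indices = [i for i, y in enumerate(labels) if int(y) in wanted]
        self.shuffle = shuffle

    def __iter__(self):
        if self.shuffle:
            order = torch.randperm(len(self.indices)).tolist()
            return iter(self.indices[i] for i in order)
        return iter(self.indices)

    def __len__(self):
        return len(self.indices)
```
Such a sampler would then be passed to `DataLoader(dataset, sampler=...)`, with a wider set of class labels supplied as new tasks are introduced.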
## Alternatives
Please feel free to suggest any alternative implementations. :) | feature,module: dataloader,triaged | low | Major |
443,205,853 | go | net: clarify UDPConn Read behaviour documentation | The documentation for `UDPConn.Read` says that "Read implements the Conn Read method." However, UDP is not a stream protocol, and `Conn` is described as "a generic stream-oriented network connection".
It would be nice to be able to know what the characteristics of `Read` are with respect to packets — whether `Read` only reads a packet at a time or not — without having to read the source.
See https://groups.google.com/forum/#!topic/golang-nuts/f_s3fGRgAF8 | Documentation,help wanted,NeedsFix | low | Minor |
443,228,417 | node | build,cares,openssl,zlib: limit symbol exports | Refs: #4932 #6274
On Windows, Node exports a curated list of symbols from its dependencies that don't use dllimport/dllexport (c-ares, openssl, zlib)
On Unices, Node indiscriminately re-exports everything thanks to `-Wl,--whole-archive`.
There's a comment about it in node.gyp:
```python
# TODO(bnoordhuis) Make all platforms export the same list of symbols.
# Teach mkssldef.py to generate linker maps that UNIX linkers understand.
```
It would be good to implement that. It should help trim down the binary size. | build,openssl | low | Minor |
443,306,886 | vue-element-admin | Why is an Authorization parameter automatically added to the headers even during login? | ## Bug report (issue description)
When sending the /login request, why is an Authorization parameter automatically added to the Request Headers?
#### Steps to reproduce
1. Start the project, log in, and observe while debugging that the auth parameter Authorization appears in the request headers.
#### Screenshot or Gif | need repro :mag_right: | low | Critical |
443,326,250 | PowerToys | Marking and unlocking locked files from context menu | Sometimes an executable file is locked due to security reasons. On the first try of running such an executable nothing happens, which may be confusing. Maybe a little mark like this on the executable's icon would help to avoid it?

The user also needs to go to the file's properties and check Unblock to make it run:

An **Unblock** option in the right-click context menu could save the day.
| Idea-New PowerToy,Status-In progress | low | Major |
443,334,204 | PowerToys | To be able to attach two or more windows showing side by side for easier Alt-Tab Switching | Hi,
1) Today I had two windows side by side (VS Code and Outlook). Then I switched to Chrome with Alt-Tab. After I was finished with Chrome, I pressed Alt-Tab hoping that the split view I had created (VS Code & Outlook) would come back with one Alt-Tab, but I had to find both apps with Alt-Tab to bring them back into the required positions. If we could somehow attach or pin the two apps together with a shortcut, that would be great. This could be extended to four side-by-side attached windows, or custom-positioned windows attached together for Alt-Tab.
Alternatively, an extra key combined with Alt-Tab could smartly detect all the windows placed on screen and lock them together for easier Alt+Tab+(an appropriate key) switching.
2) While typing the above I had another idea: Alt-Tab could have two different modes, where a user can add to a personalized Alt-Tab section all the applications they frequently switch between, and switch between these modes with a keyboard shortcut.
With this, the user can switch to the non-frequent mode with a keyboard shortcut when they want to, and switch back to the frequent-application mode, so that they don't have to keep searching for the frequently needed apps just because some non-frequent apps came up first in the Alt-Tab switcher.
Both the non-frequent and frequent modes should be easy to see on the Alt-Tab overlay.
I don't know if (2) would be possible with PowerToys, but please implement the first one.
Thanks
PS: I don't know if all of the above is currently possible. | Idea-Enhancement,Product-Window Manager | low | Major |
443,353,774 | go | cmd/compile: use slicebytetostringtmp in concatenation | $ go version
go version devel +2e4edf4697 Sun May 12 07:14:09 2019 +0000 linux/amd64
Test:
```go
package main
var data = []byte("data")
var str = "str"
func main() {
if (string(data) + str)[1] == 'a' {
println('a')
}
}
```
This currently uses runtime.slicebytetostring followed by runtime.concatstring2. I think this could use runtime.slicebytetostringtmp for slice->string conversion.
This is extracted from a real program where the slice is large, but we need to append a small string to it. Avoiding the second alloc/copy would be useful. | Performance,NeedsFix,compiler/runtime | low | Minor |
443,383,047 | pytorch | pos_weight argument in torch.nn.BCELoss | ## 🚀 Feature requested
I'd like to have a `pos_weight` arguments in `torch.nn.BCELoss`
## Motivation
to have consistency with `torch.nn.BCEWithLogitsLoss` and to be able to address a multi-label task with a strong class imbalance
<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->
| module: nn,triaged,enhancement | low | Major |
443,453,368 | pytorch | Dataloader's memory usage keeps increasing during one single epoch. | (All code was tested on PyTorch 1.0.0 and PyTorch 1.0.1. The memory capacity of my machine is `256GB`.)
## Description and Reproduction
Hi,
I create a dataloader that loads features from local files by their file paths, but I find that this results in an OOM problem even though the code is simple.
The dataloader can be simplified as:
```python
import numpy as np
import torch
import torch.utils.data as data
import time
class MyDataSet(data.Dataset):
def __init__(self):
super(MyDataSet, self).__init__()
# Assume that the self.infoset here contains the description information about the dataset,
# such as a list of file names or paths.
        # Here it is a list of strings. I set it about 8Gb in memory.
# In my real project, this infoset is 40Gb in memory.
self.infoset = [str(i).zfill(1024) for i in range(len(self))]
def __getitem__(self, index):
info = self.infoset[index] # problem is here
items = {}
items['features'] = self.load_feature(info)
return items
def load_feature(self, info):
'''
Load feature from files
'''
feature = torch.Tensor(np.ones([8, 4, 2], dtype=np.float32))
return feature
def __len__(self):
return 8000000
dataset = MyDataSet()
dataloader = data.DataLoader(dataset, batch_size=1024, shuffle=True, num_workers=16, pin_memory=True)
while True:
for i, sample in enumerate(dataloader):
print(i, len(dataloader))
time.sleep(0.05) # slow down the process to see the mem-usage increasing during one epoch
```
During each epoch, the memory usage is about `13GB` at the very beginning, keeps increasing, and finally reaches about `46GB`, like this:

Although it decreases back to `13GB` at the beginning of the next epoch, this problem is serious for me because in my real project the `infoset` is about `40GB` due to the large number of samples, which finally leads to Out of Memory (OOM) at the end of the first epoch.
## Expected behavior
I have found that the problem is caused by the first line of `MyDataset.__getitem__()`: `info = self.infoset[index]`. In the following code, if I remove this line, then the memory usage is normal, which is also my expected behavior.
```python
class MyDataSet(data.Dataset):
def __init__(self):
super(MyDataSet, self).__init__()
# Assume that the self.infoset here contains the description information about the dataset.
        # Here it is a list of strings. I set it about 8Gb in memory.
# In my real project, this infoset may be 40Gb in memory.
self.infoset = [str(i).zfill(1024) for i in range(len(self))]
def __getitem__(self, index):
# info = self.infoset[index] # problem is here
info = 'fake info'
items = {}
items['features'] = self.load_feature(info)
return items
def load_feature(self, info):
'''
Load feature from files
'''
feature = torch.Tensor(np.ones([8, 4, 2], dtype=np.float32))
return feature
def __len__(self):
return 8000000
dataset = MyDataSet()
dataloader = data.DataLoader(dataset, batch_size=1024, shuffle=True, num_workers=16, pin_memory=True)
while True:
for i, sample in enumerate(dataloader):
print(i, len(dataloader))
time.sleep(0.05) # slow down the process to see the mem-usage increasing during one epoch
```
And the corresponding mem usage is stable at `13GB`:

## More test
In the following code, I don't even load features in `__getitem__()` but just read a string from `infoset`, and I still get the same problem:
```python
class MyDataSet(data.Dataset):
def __init__(self):
super(MyDataSet, self).__init__()
# Assume that the self.infoset here contains the description information about the dataset.
        # Here it is a list of strings. I set it about 8Gb in memory.
# In my real project, this infoset may be 40Gb in memory.
self.infoset = [str(i).zfill(1024) for i in range(len(self))]
def __getitem__(self, index):
info = self.infoset[index] # problem is here
items = {}
# items['features'] = self.load_feature(info)
return items
def load_feature(self, info):
'''
Load feature from files
'''
feature = torch.Tensor(np.ones([8, 4, 2], dtype=np.float32))
return feature
def __len__(self):
return 8000000
dataset = MyDataSet()
dataloader = data.DataLoader(dataset, batch_size=1024, shuffle=True, num_workers=16, pin_memory=True)
while True:
for i, sample in enumerate(dataloader):
print(i, len(dataloader))
time.sleep(0.05) # slow down the process to see the mem-usage increasing during one epoch
```
Mem usage:

Any suggestions or reasons about this problem?
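For what it's worth, a commonly suggested mitigation for growth of this kind (assuming it comes from copy-on-write of the many small Python objects touched by the forked worker processes, which this report does not confirm) is to keep the metadata in one contiguous numpy byte array instead of a list of Python strings, as sketched below:
```python
import numpy as np
import torch
import torch.utils.data as data

class MyDataSetNumpyInfo(data.Dataset):
    """Hypothetical variant of the dataset above with `infoset` kept as one numpy array."""

    def __init__(self, n=8000000):
        super(MyDataSetNumpyInfo, self).__init__()
        self.n = n
        # A single fixed-width byte buffer: indexing it in worker processes does not
        # touch per-object refcounts the way indexing a list of str objects does.
        self.infoset = np.array([str(i).zfill(1024) for i in range(n)], dtype='S1024')

    def __getitem__(self, index):
        info = self.infoset[index].decode('ascii')  # stands in for the original file-path lookup
        return {'features': torch.ones(8, 4, 2)}

    def __len__(self):
        return self.n
```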
Thanks. | module: dataloader,triaged | medium | Major |
443,457,418 | rust | Tracking issue for stabilizing `Error::type_id` | ## Updated Issue
This is a tracking issue for stabilizing the functionality of `Error::type_id` somehow. The subject of a historical [security advisory](https://groups.google.com/forum/#!topic/rustlang-security-announcements/aZabeCMUv70) the API was [recently changed](https://github.com/rust-lang/rust/pull/60902) to prevent memory unsafety issues on all channels including nightly. The functionality, however, is still unstable, so we should stabilize it at some point!
## Original issue.
Reported by @seanmonstar to the security mailing list recently, it was discovered that the recent stabilization of `Error::type_id` in Rust 1.34.0 is actually not memory safe. Described in a [recent security announcement](https://groups.google.com/forum/#!topic/rustlang-security-announcements/aZabeCMUv70) the stabilization of `Error::type_id` has been reverted for [stable](https://github.com/rust-lang/rust/pull/60785), [beta](https://github.com/rust-lang/rust/pull/60786), and [master](https://github.com/rust-lang/rust/pull/60787).
This leaves us, however, with the question of what to do about this API? `Error::type_id` has been present since the inception of the `Error` trait, all the way back to 1.0.0. It's unstable, however, and is pretty rare as well to have a manual implementation of the `type_id` function. Despite this we would ideally still like a path to stability which includes safety at some point.
This tracking issue is intended to serve as a location to discuss this issue and determine the best way forward to fully removing `Error::type_id` (so even nightly users are not affected by this memory safety issue) and having a stable mechanism for the functionality.
| T-libs-api,B-unstable,C-tracking-issue,A-error-handling,Libs-Tracked,PG-error-handling | medium | Critical |
443,495,985 | PowerToys | PowerToy suggestion: Update Dependency Walker for Windows 7/10 | Microsoft's API-sets aren't supported by the last release version of Dependency Walker, so we get dozens of errors.
http://www.dependencywalker.com/ | Idea-New PowerToy | low | Critical |
443,508,129 | create-react-app | Feature: Add line/links to error messages | ### Is this a bug report?
No
### How it is now
I have an error message like this:
```
Compiled with warnings.
./src/components/installation/Websocket.tsx
Line 77: Expected to return a value at the end of arrow function array-callback-return
Search for the keywords to learn more about each warning.
To ignore, add // eslint-disable-next-line to the line before.
```
I can cmd+click on `./src/components/installation/Websocket.tsx` and it'll take me to the right file ⭐️
What would be nice is that in the case where there is *one* error per file, it could instead be `./src/components/installation/Websocket.tsx:77` and clicking on that in the terminal would take directly there. The `:77` could be colored using ANSI to be the background color, so it's not visible.
Alternatively, and perhaps a nicer UI, is to use hyperlinks on the `Line 77` using something like [hyperlinker](https://www.npmjs.com/package/hyperlinker) which would have the href as `./src/components/installation/Websocket.tsx:77`. This works in quite a lot of terminals now.
<img width="905" alt="Screen Shot 2019-05-13 at 1 06 22 PM" src="https://user-images.githubusercontent.com/49038/57640254-f5fff700-757f-11e9-82a6-6c1b476e6e42.png">
| tag: enhancement | low | Critical |
443,521,561 | flutter | Flutter changelog should list version initially published on, where possible | Currently, [the Flutter changelog](https://github.com/flutter/flutter/wiki/Changelog) has several "Changes since v___" sections (including a long one for changes since 1.0). If a breaking change has been published on a version, it should be listed here with the exact version it was published on, so that developers upgrading from Flutter version x -> version y can know exactly which breaking changes they will have to deal with.
 | team,d: wiki,P2,team-release | low | Major |
443,521,970 | go | cmd/go: validate module proxy URLs received from go-get=1 queries | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
Go tip at [CL 170879](https://golang.org/cl/170879) or later.
### Does this issue reproduce with the latest release?
no
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE=SHOWS_CORRECTLY
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH=SHOWS_CORRECTLY
GOPROXY=""
GORACE=""
GOROOT="/usr/local/Cellar/go/1.12.2/libexec"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.12.2/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=SHOWS_CORRECTLY
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/j0/qxnj38ld075bj80p9hs2dqlw0000gn/T/go-build144819988=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
<!--
If possible, provide a recipe for reproducing the error.
A complete runnable program is good.
A link on play.golang.org is best.
-->
on travis, when running tests against master, modules are unable to be downloaded. i am unsure if this is because of travis or if it can be replicated elsewhere
### What did you expect to see?
successful download of modules
### What did you see instead?
```
go: github.com/golang-migrate/migrate/[email protected] requires
github.com/fsouza/[email protected] requires
cloud.google.com/[email protected] requires
golang.org/x/[email protected] requires
dmitri.shuralyov.com/app/[email protected]: Get https:///dmitri.shuralyov.com/app/changes/@v/v0.0.0-20180602232624-0a106ad413e3.info: http: no Host in request URL
The command "go mod download" failed and exited with 1 during .
```
specifically, the 3x slashes (/) in the url. removing the slash resolves correctly | help wanted,NeedsFix,GoCommand,modules | low | Critical |
443,534,939 | node | Calling disconnect causes processes spawned by cluster module to exit too early |
* **Version**: 12.2.0 but could also reproduce on 10.15.3
* **Platform**: Darwin 18.5.0 Darwin Kernel Version 18.5.0: Mon Mar 11 20:40:32 PDT 2019; root:xnu-4903.251.3~3/RELEASE_X86_64 x86_64
* **Subsystem**:
<!-- Please provide more details below this comment. -->
Consider the following script:
```
const cluster = require('cluster');
const childProcess = require('child_process');
const useCluster = false;
const isMaster = useCluster ? cluster.isMaster : !process.argv.includes('worker');
if (isMaster) {
if (useCluster) {
cluster.fork(__filename);
} else {
childProcess.fork(__filename, ['worker']);
}
} else {
setTimeout(() => console.log('hi from worker'), 1000);
process.disconnect();
}
```
When `useCluster` is `true` node exits immediately and nothing is printed to the console. When `useCluster` is `false` you see "hi from worker" logged after 1s and then node will exit which is the expected behavior. The documentation makes it sound like calling `process.disconnect` should only close the IPC channel between the master and worker process and should not cause either to exit early if there is still work to do. | help wanted,cluster | low | Critical |
443,536,991 | go | x/website: unify playground.js location | We currently have two copies of the playground's playground.js: one in x/tools (the original) and one in x/website (the new home, but currently unused?).
Unify.
/cc @ysmolsky @andybons @dmitshur @katiehockman @cnoellekb | Thinking,NeedsInvestigation | low | Minor |
443,537,573 | godot | HBoxContainer/VBoxContainer don't have initial size, but will if it's moved | **Godot version:**
3.1.1 (Mono)
**Issue description:**
When adding children to a HBoxContainer/VBoxContainer, the container itself does not immediately have a size. Calling `get_rect()` will return (0,0). However, if you move the container to a different position (e.g. by 1 pixel), then the size is calculated and subsequent `get_rect()` calls will have a value.
The Control lifecycle is not clear to me, nor when layout should have happened. Should we expect `get_rect()` to actually have a size immediately after adding children? When is layout triggered? Triggering it on a move seems like unintended behaviour in either case.
**Steps to reproduce:**
1. Create an HBoxContainer
2. Add a selection of children with sizes (I used ColorRect's with min sizes set).
3. Get the return value from `get_rect()`. It will be (0, 0, 0, 0).
4. Move the container to a new position, e.g. (1, 0). It must be a different position relative to its current one.
5. Call `get_rect()` again, and it will be populated with a value.
**Minimal reproduction project:**
[Testing-ControlSizes.zip](https://github.com/godotengine/godot/files/3174247/Testing-ControlSizes.zip) | usability,documentation,topic:gui | low | Minor |
443,539,913 | vue | <keep-alive> within <transition-group> blocks leave transitions | ### Version
2.6.10
### Reproduction link
[https://codepen.io/sathomas/pen/Jqoyqo](https://codepen.io/sathomas/pen/Jqoyqo)
### Steps to reproduce
Component structure:
<transition-group>
<keep-alive>
<component />
</keep-alive>
</transition-group>
Change dynamic component. Leave transition does not occur.
In repro example, click <kbd>Switch View</kbd>
- Note 1: In repro example if `<keep-alive>` is removed via checkbox, all transitions work as expected.
- Note 2: In repro example if `<transition-group>` is replaced with `<transition>`, all transitions work as expected.
### What is expected?
Initial component should transition out while new component transitions in.
### What is actually happening?
Initial component is removed immediately while new component transitions in.
---
In the actual use case, `v-show` is not a good option, as the dynamic components involved are quite complex (1000s of DOM elements) and leaving them in the DOM causes performance problems.
<!-- generated by vue-issues. DO NOT REMOVE --> | improvement,transition | medium | Major |
443,541,135 | go | x/mobile: Exported structs not retaining field values in Swift | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version go1.12.4 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
Haven't tested
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN="/Users/tristian/go/bin"
GOCACHE="/Users/tristian/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/tristian/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/Cellar/go/1.12.4/libexec"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.12.4/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/g1/v3hz1sm92nv7wbzpg3dx_h_h0000gp/T/go-build064001605=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
I'm using the latest `gomobile`, installed from this commit:
```
commit 32b2708ab17190067486adc3513cae8dc2a7e5a4 (HEAD -> master, origin/master, origin/HEAD)
Author: Mark Villacampa <[email protected]>
Date: Thu May 9 10:20:13 2019 +0000
```
In my go source, I added a struct type that is exposed as part of Gomobile:
```go
type Config struct {
OSName string
OSDevice string
}
// NewConfig()
func NewConfig() *Config {
return &Config{}
}
```
In the Swift code I use the following to create an instance and set some fields.
```swift
// "Mobile" is the prefix package name
// var config = MobileNewConfig() (tried this too)
let config = MobileNewConfig()
config?.osName = "some name"
config?.osDevice = "some string"
```
I build the framework like so:
```
GO111MODULE=off gomobile bind -a -v -target=ios/arm,ios/arm64,ios/amd64 example.com/mobile
```
<!--
If possible, provide a recipe for reproducing the error.
A complete runnable program is good.
A link on play.golang.org is best.
-->
### What did you expect to see?
I expected to see the `config` variable retain the assigned string values in the swift code.
### What did you see instead?
The `config` variable does not retain the values assigned from Swift; for example, take a look at this debugger capture:

| NeedsInvestigation,mobile | low | Critical |
443,545,307 | go | runtime: optionally (reliably) avoid netpoller | The [gVisor project](https://github.com/google/gvisor) implements a user-space kernel, and its implementation is performance-sensitive, which forces a manual avoidance of the netpoller by avoiding certain APIs.
It would be nice to automate and enforce this avoidance, either by exposing some API that could be used to assert in a test that the netpoller has never been used, or by exposing a build tag that would guarantee that the netpoller is inactive. As of this writing it seems concretely that we want to avoid ever incrementing [netpollWaiters](https://github.com/golang/go/blob/8d212c3ac3bacdf8d135e94d1e0a0c3cfba6e13a/src/runtime/netpoll_stub.go#L9).
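As a minimal sketch of the shape such a test-time assertion could take: note that `netpollWaiters` below is a local stand-in defined purely for illustration — the runtime does not export this counter or any such API today.
```go
// Hypothetical sketch only: the runtime does not export netpollWaiters, so a
// local stand-in counter is used here to show the shape of the assertion.
package netpollcheck

import (
	"sync/atomic"
	"testing"
)

// netpollWaiters stands in for the runtime-internal counter named in the issue.
var netpollWaiters uint32

// assertNetpollerUnused fails the test if anything has registered a waiter
// with the (stand-in) netpoller counter.
func assertNetpollerUnused(t *testing.T) {
	t.Helper()
	if n := atomic.LoadUint32(&netpollWaiters); n != 0 {
		t.Fatalf("netpoller was used: %d waiter(s) recorded", n)
	}
}
```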
cc @iangudger @nlacasse @prattmic @amscanne | NeedsInvestigation,compiler/runtime | low | Major |
443,565,904 | TypeScript | Permit functions that return a value to also serve as a type guard | ## Search Terms
Linear type, affine type, type guard
## Suggestion
It would be very helpful to allow a function to serve as a type guard, but also return an unrelated value.
## Use Cases
This can be used to express type changes as a result of mutating operations, covering some of the use cases of e.g. Rust's affine types. (See also #16148.)
## Examples
Consider this example, compiled with `--strictNullChecks`:
```ts
type NonEmptyArray<T> = {
pop(): T;
} & Array<T>;
function isNonEmpty<T>(array: Array<T>): array is NonEmptyArray<T>;
function isNonEmpty(array: Array<unknown>): boolean {
return array.length > 0;
}
let array: string[] = ['element'];
if (isNonEmpty(array)) { // Guard gives 'array' type NonEmptyArray<string>.
const elem1: string = array.pop(); // Works. This is correct.
const elem2: string = array.pop(); // Also works, but elem2 will be undefined at runtime!
}
```
We could make this correct if `pop()` could both return a value *and* behave as a type guard. This isn't great syntax, but nonetheless consider if this was supported:
```ts
type NonEmptyArray<T> = {
pop(): T && this is Array<T>;
} & Array<T>;
function isNonEmpty<T>(array: Array<T>): array is NonEmptyArray<T>;
function isNonEmpty(array: Array<unknown>): boolean {
return array.length > 0;
}
let array: string[] = ['element'];
if (isNonEmpty(array)) { // Guard gives 'array' type NonEmptyArray<string>.
const elem1: string = array.pop(); // Returns a string and gives 'array' type Array<string>.
const elem2: string = array.pop(); // Doesn't compile; pop() returns 'string | undefined'!
}
```
## Checklist
My suggestion meets these guidelines:
* [X] This wouldn't be a breaking change in existing TypeScript/JavaScript code
This can use a new, previously invalid syntax to avoid affecting any existing program.
* [X] This wouldn't change the runtime behavior of existing JavaScript code
* [X] This could be implemented without emitting different JS based on the types of the expressions
* [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
There's no change in the code that's emitted; this feature would exist purely at the level of the type system.
* [X] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
I believe that it would. It seems to be aligned well, in particular, with "Statically identify constructs that are likely to be errors." | Suggestion,Awaiting More Feedback | low | Critical |
443,611,424 | terminal | CONSOLE_INFORMATION::IsConsoleLocked and CONSOLE_INFORMATION::GetCSRecursionCount inappropriately groveling into opaque CRITICAL_SECTION object internals | > These two functions appear to be peering into the internal opaque CRITICAL_SECTION object. The CRITICAL_SECTION definition is opaque and shouldn't be used by apps (it can - and has - changed layout between OS versions so long as the size is the same).
>
> The CONSOLE_INFORMATION object should track an owning thread ID and critical section entry count itself if it needs to get this information and shouldn't try to assume that it knows about the critical section implementation details (particularly for OSS'd code we'd like to avoid encouraging that sort of thing as it inevitably leads to broken apps on future OS updates etc.).
>
> bool CONSOLE_INFORMATION::IsConsoleLocked() const
> {
> // The critical section structure's OwningThread field contains the ThreadId despite having the HANDLE type.
> // This requires us to hard cast the ID to compare.
> return _csConsoleLock.OwningThread == (HANDLE)GetCurrentThreadId();
> }
>
>
> ULONG CONSOLE_INFORMATION::GetCSRecursionCount()
> {
> return _csConsoleLock.RecursionCount;
> }
> | Product-Conhost,Area-Server,Issue-Bug | low | Critical |
443,640,191 | go | x/tools: automate major version upgrades for modules | This is a spin out of #31543 ("cmd/go: creating v2+ modules has lower success rate than it could").
### Background
[Semantic import versioning](https://research.swtch.com/vgo-import) places the major version in module paths and import paths for v2+ modules, such as:
* `module github.com/some/mod/v2` in the author's `go.mod`.
* `require github.com/some/mod/v2 v2.0.0` in the consumer's `go.mod`.
* `import "github.com/some/mod/v2/some/pkg"` in the consumer's `.go` files, and in the author's `.go` files when the module's packages import other packages within the same v2+ module.
This approach has value, but empirically it currently seems it can be a challenge to do correctly the first time (e.g., modules with v2 major version semver tags that are [missing](https://github.com/prometheus/prometheus/blob/v2.9.1/go.mod) the required `/v2` in their own `module` statements, or modules that accidentally do not update all import statements and [accidentally](https://github.com/santhosh-tekuri/jsonschema/blob/3187f5dd695e7a7fe4c2254be6fb4f0737fec928/httploader/httploader.go#L20) depend on the v1 version of themselves, etc.)
It also creates additional work for authors and consumers if a module's major version increments, such as from v1 to v2, or v3 to v4, etc.
### Suggestion
Tooling should be able to help here in a substantial way.
[github.com/marwan-at-work/mod](https://github.com/marwan-at-work/mod) is great for people who know about it and who are willing to trust it.
However, a tool from the broader community won't have the penetration and impact of something from golang.org, or at least it would likely take much longer to get a similar type of penetration following a typical trajectory for a community tool.
Therefore, the suggestion here is to create a golang.org/x utility that can edit `go.mod` and `.go` files to simplify the workflow for authors and consumers of v2+ modules. It might be possible to port `marwan-at-work/mod` itself, especially with the creation in #31761 of the x/mod repo that exposes APIs for module mechanics (such as an API for `go.mod` parsing).
Three sample use cases:
1. If someone is adopting modules for the first time as the author of a v2+ set of packages, ideally the utility would:
* set the `/vN` in the module path in the `module` statement in the `go.mod`
* update any import paths in the module's `.go` files if needed
2. If the author is later changing the major version for a v2+ module, ideally the utility would:
* set the `/vN` in the module path in the `module` statement in the `go.mod`
* update any import paths in the module's `.go` files if needed
3. If a consumer wants to use a particular major version of a v2+ module, ideally the utility would:
* set the `require` statement properly in the consumer's `go.mod`
* update any import paths in the consumer's `.go` files if needed
I think `marwan-at-work/mod` can currently do all of those things.
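As a rough illustration of the author-side half of use cases 1 and 2, the sketch below bumps the `module` path in a `go.mod` to `/v3`. It assumes the go.mod-parsing API expected to be exposed via the new x/mod repo — the `golang.org/x/mod/modfile` package path and method names here are assumptions, not a confirmed API — and it does not rewrite import paths in `.go` files.
```go
// Hypothetical sketch: rewrite the module path in go.mod to a new major
// version, assuming a modfile package resembling golang.org/x/mod/modfile.
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"regexp"

	"golang.org/x/mod/modfile"
)

// vSuffix matches an existing major-version suffix such as /v2 or /v10.
var vSuffix = regexp.MustCompile(`/v[0-9]+$`)

func main() {
	data, err := ioutil.ReadFile("go.mod")
	if err != nil {
		log.Fatal(err)
	}
	f, err := modfile.Parse("go.mod", data, nil)
	if err != nil {
		log.Fatal(err)
	}
	// Strip any existing /vN suffix, then append the new major version.
	newPath := vSuffix.ReplaceAllString(f.Module.Mod.Path, "") + "/v3"
	if err := f.AddModuleStmt(newPath); err != nil {
		log.Fatal(err)
	}
	out, err := f.Format()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}
```
The consumer-side case (3) would be analogous: edit `require` lines instead of the `module` statement, and rewrite import paths in the consumer's `.go` files.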
Perhaps one day similar functionality could live in cmd/go (e.g., the closed #27248), but it seems more practical to start with something in golang.org/x.
### Non-goals
The suggestion here is that this golang.org/x utility would **not** do anything with VCS tagging, nor do anything for creating `/vN` subdirectories for anyone following the ["Major Subdirectory"](https://research.swtch.com/vgo-module) approach. Also, it would probably be reasonable for a first version to error out if used by a consumer that has a `replace` for the module of interest (or, that could be handled more gracefully).
CC @bcmills @jayconrod @marwan-at-work | NeedsInvestigation,modules,Tools | medium | Critical |
443,642,381 | terminal | Investigate using ZWNJ at the beginning of COOKED_READ_DATA line | This is a follow on from #514.
We could potentially insert a ZWNJ at the beginning of a cooked read data line to prevent ligature joins from happening between the immutable text and the prompt line for applications that use the cooked read data services of the `conhost.exe`. This would make this work automagically for `cmd.exe` and a bunch of little utilities.
There are some concerns about sending that through ConPTY that I think are justified (as would be required to make this work in any fashion on the Windows Terminal UI).
Either way, this is going to need someone to fool around with it and see what works and what doesn't. | Issue-Feature,Product-Conhost,Area-Input,Area-CookedRead | low | Minor |
443,648,741 | flutter | The --track-widget-creation kernel transformer should return package:paths instead of absolute paths | Using absolute paths leads to issues as it is harder to detect which paths are package:flutter. This will also help resolve inconsistencies between paths on flutter_web and flutter where different schemes are used due to temporary directories used as part of the build process. | framework,f: inspector,P2,team-framework,triaged-framework | low | Minor |
443,662,095 | pytorch | Lint rule to prevent direct use of #pragma omp | @ilia-cher, I noticed, quite by accident, that you had expunged all occurrences of `#pragma omp` in favor of `at::parallel_for`. That's cool, should we add a lint rule to prevent people from reintroducing direct use of `#pragma omp`? There's still one occurrence of it in aten/src/THNN/generic/VolumetricConvolutionMM.c and I doubt most reviewers will know to reject reviews if they contain fresh occurrences of `#pragma omp` | module: lint,triaged | low | Minor |
443,670,510 | PowerToys | Add Aero-like or Fluent-like titlebar blur and transparency. | It would be cool and would make some apps (like **Photos** or **Calculator**) look more consistent if Win32 programs (like **File Explorer**) had Fluent titlebars. It would also make some programs, like the new **Chromium Edge**, look neat with blur and transparency on the titlebar! | Help Wanted,Idea-New PowerToy,Product-Tweak UI Design | medium | Critical |
443,710,417 | youtube-dl | Kare11 | ## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.05.11. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that site you are requesting is not dedicated to copyright infringement, see https://yt-dl.org/copyright-infringement. youtube-dl does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
- Search the bugtracker for similar site support requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2019.05.11**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
<!--
Provide all kinds of example URLs support for which should be included. Replace following example URLs by yours.
-->
- Single video: https://www.kare11.com/article/news/local/land-of-10000-stories/10-year-old-college-sophomore-has-bigger-plans-next-fall/89-23db04f1-4a5d-4ed8-9d45-70409cb09d20
- Single video: https://www.kare11.com/video/news/local/land-of-10000-stories/10-year-old-college-sophomore-has-bigger-plans-next-fall/89-a9476453-c73b-44d8-9106-575347f5e4a1
- Single video: https://www.kare11.com/article/news/89-23db04f1-4a5d-4ed8-9d45-70409cb09d20
## Description
<!--
Provide any additional information.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
Kare11 is a local news station/site in Minnesota.
I tried several URLs to download from. None of them got me the intended video, but one of them downloaded the video for a different news story, so I think it should be possible to figure something out.
Below is some output of me messing around trying to get it to work.
```
brad@kazuki:~/video/ > youtube-dl 'https://www.kare11.com/article/news/local/land-of-10000-stories/10-year-old-college-sophomore-has-bigger-plans-next-fall/89-23db04f1-4a5d-4ed8-9d45-70409cb09d20'
[generic] 89-23db04f1-4a5d-4ed8-9d45-70409cb09d20: Requesting header
WARNING: Falling back on generic information extractor.
[generic] 89-23db04f1-4a5d-4ed8-9d45-70409cb09d20: Downloading webpage
[generic] 89-23db04f1-4a5d-4ed8-9d45-70409cb09d20: Extracting information
[youtube] whteSyd-Wk4: Downloading webpage
[youtube] whteSyd-Wk4: Downloading video info webpage
WARNING: unable to extract channel id; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
[download] Destination: Land of 10,000 Stories - After dad's death, daughters start radio show to play his music-whteSyd-Wk4.f136.mp4
[download] 100% of 23.90MiB in 00:03
[download] Destination: Land of 10,000 Stories - After dad's death, daughters start radio show to play his music-whteSyd-Wk4.f140.m4a
[download] 100% of 4.23MiB in 00:00
[ffmpeg] Merging formats into "Land of 10,000 Stories - After dad's death, daughters start radio show to play his music-whteSyd-Wk4.mp4"
Deleting original file Land of 10,000 Stories - After dad's death, daughters start radio show to play his music-whteSyd-Wk4.f136.mp4 (pass -k to keep)
Deleting original file Land of 10,000 Stories - After dad's death, daughters start radio show to play his music-whteSyd-Wk4.f140.m4a (pass -k to keep)
brad@kazuki:~/video/ > rm Land\ of\ 10,000\ Stories\ -\ After\ dad\'s\ death,\ daughters\ start\ radio\ show\ to\ play\ his\ music-whteSyd-Wk4.mp4
brad@kazuki:~/video/ > youtube-dl 'https://www.kare11.com/video/news/local/land-of-10000-stories/10-year-old-college-sophomore-has-bigger-plans-next-fall/89-a9476453-c73b-44d8-9106-575347f5e4a1?jwsource=cl'
[generic] 89-a9476453-c73b-44d8-9106-575347f5e4a1?jwsource=cl: Requesting header
WARNING: Falling back on generic information extractor.
[generic] 89-a9476453-c73b-44d8-9106-575347f5e4a1?jwsource=cl: Downloading webpage
[generic] 89-a9476453-c73b-44d8-9106-575347f5e4a1?jwsource=cl: Extracting information
ERROR: Unsupported URL: https://www.kare11.com/video/news/local/land-of-10000-stories/10-year-old-college-sophomore-has-bigger-plans-next-fall/89-a9476453-c73b-44d8-9106-575347f5e4a1?jwsource=cl
brad@kazuki:~/video/ > youtube-dl 'https://www.kare11.com/video/news/local/land-of-10000-stories/10-year-old-college-sophomore-has-bigger-plans-next-fall/89-a9476453-c73b-44d8-9106-575347f5e4a1'
[generic] 89-a9476453-c73b-44d8-9106-575347f5e4a1: Requesting header
WARNING: Falling back on generic information extractor.
[generic] 89-a9476453-c73b-44d8-9106-575347f5e4a1: Downloading webpage
[generic] 89-a9476453-c73b-44d8-9106-575347f5e4a1: Extracting information
ERROR: Unsupported URL: https://www.kare11.com/video/news/local/land-of-10000-stories/10-year-old-college-sophomore-has-bigger-plans-next-fall/89-a9476453-c73b-44d8-9106-575347f5e4a1
brad@kazuki:~/video/ > youtube-dl 'https://media.kare11.com/embeds/video/a9476453-c73b-44d8-9106-575347f5e4a1/iframe'
[generic] iframe: Requesting header
WARNING: Could not send HEAD request to https://media.kare11.com/embeds/video/a9476453-c73b-44d8-9106-575347f5e4a1/iframe: HTTP Error 404: Not Found
[generic] iframe: Downloading webpage
ERROR: Unable to download webpage: HTTP Error 404: Not Found (caused by <HTTPError 404: 'Not Found'>); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
brad@kazuki:~/video/ > youtube-dl 'https://media.kare11.com/embeds/video/a9476453-c73b-44d8-9106-575347f5e4a1'
[generic] a9476453-c73b-44d8-9106-575347f5e4a1: Requesting header
WARNING: Could not send HEAD request to https://media.kare11.com/embeds/video/a9476453-c73b-44d8-9106-575347f5e4a1: HTTP Error 404: Not Found
[generic] a9476453-c73b-44d8-9106-575347f5e4a1: Downloading webpage
ERROR: Unable to download webpage: HTTP Error 404: Not Found (caused by <HTTPError 404: 'Not Found'>); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
```
| site-support-request | low | Critical |
443,789,461 | pytorch | [docs] Automatically detect docs missing in rst | I propose something like this to find automatically members missing from rst (could be put on some doc page as well). This could also be improved to make a list of public members missing docs altogether.
```python
import os
import torch

# List each module's public members missing from the rst sources; the bool records whether the member has a docstring at all.
rst_dir = 'docs/source'
rst = '\n'.join(open(os.path.join(rst_dir, rst_file)).read() for rst_file in os.listdir(rst_dir))
members = lambda obj: [(member, getattr(getattr(obj, member), '__doc__') is not None) for member in dir(obj) if not member.startswith('_') and member not in rst]
doctodo = members(torch) + members(torch.nn) + members(torch.nn.functional)
``` | module: docs,module: tests,triaged,enhancement | low | Minor |
443,806,779 | opencv | Possibly a bug fix of OpenCV 4.1.0 with Python3 support? | I **ALWAYS** get the following **ERROR** messages while compiling OpenCV 4.1.0 with Python3:
```bash
In file included from ....../opencv/modules/python/src2/cv2.cpp:1722:0:
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h: In function ‘PyObject* pyopencv_cv_dnn_dnn_Net_getFLOPS(PyObject*, PyObject*, PyObject*)’:
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:39763:5: error: ‘vector_MatShape’ was not declared in this scope
vector_MatShape netInputShapes;
^~~~~~~~~~~~~~~
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:39763:5: note: suggested alternative: ‘vector_Match’
vector_MatShape netInputShapes;
^~~~~~~~~~~~~~~
vector_Match
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:39768:43: error: ‘netInputShapes’ was not declared in this scope
pyopencv_to(pyobj_netInputShapes, netInputShapes, ArgInfo("netInputShapes", 0)) )
^~~~~~~~~~~~~~
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:39768:43: note: suggested alternative: ‘pyobj_netInputShapes’
pyopencv_to(pyobj_netInputShapes, netInputShapes, ArgInfo("netInputShapes", 0)) )
^~~~~~~~~~~~~~
pyobj_netInputShapes
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:39794:5: error: ‘vector_MatShape’ was not declared in this scope
vector_MatShape netInputShapes;
^~~~~~~~~~~~~~~
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:39794:5: note: suggested alternative: ‘vector_Match’
vector_MatShape netInputShapes;
^~~~~~~~~~~~~~~
vector_Match
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:39799:43: error: ‘netInputShapes’ was not declared in this scope
pyopencv_to(pyobj_netInputShapes, netInputShapes, ArgInfo("netInputShapes", 0)) )
^~~~~~~~~~~~~~
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:39799:43: note: suggested alternative: ‘pyobj_netInputShapes’
pyopencv_to(pyobj_netInputShapes, netInputShapes, ArgInfo("netInputShapes", 0)) )
^~~~~~~~~~~~~~
pyobj_netInputShapes
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h: In function ‘PyObject* pyopencv_cv_dnn_dnn_Net_getLayer(PyObject*, PyObject*, PyObject*)’:
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:39835:5: error: ‘LayerId’ was not declared in this scope
LayerId layerId;
^~~~~~~
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:39840:36: error: ‘layerId’ was not declared in this scope
pyopencv_to(pyobj_layerId, layerId, ArgInfo("layerId", 0)) )
^~~~~~~
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h: In function ‘PyObject* pyopencv_cv_dnn_dnn_Net_getLayersShapes(PyObject*, PyObject*, PyObject*)’:
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:39948:5: error: ‘vector_MatShape’ was not declared in this scope
vector_MatShape netInputShapes;
^~~~~~~~~~~~~~~
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:39948:5: note: suggested alternative: ‘vector_Match’
vector_MatShape netInputShapes;
^~~~~~~~~~~~~~~
vector_Match
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:39950:5: error: ‘vector_vector_MatShape’ was not declared in this scope
vector_vector_MatShape inLayersShapes;
^~~~~~~~~~~~~~~~~~~~~~
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:39950:5: note: suggested alternative: ‘vector_vector_DMatch’
vector_vector_MatShape inLayersShapes;
^~~~~~~~~~~~~~~~~~~~~~
vector_vector_DMatch
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:39951:28: error: expected ‘;’ before ‘outLayersShapes’
vector_vector_MatShape outLayersShapes;
^~~~~~~~~~~~~~~
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:39955:43: error: ‘netInputShapes’ was not declared in this scope
pyopencv_to(pyobj_netInputShapes, netInputShapes, ArgInfo("netInputShapes", 0)) )
^~~~~~~~~~~~~~
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:39955:43: note: suggested alternative: ‘pyobj_netInputShapes’
pyopencv_to(pyobj_netInputShapes, netInputShapes, ArgInfo("netInputShapes", 0)) )
^~~~~~~~~~~~~~
pyobj_netInputShapes
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:39957:69: error: ‘inLayersShapes’ was not declared in this scope
ERRWRAP2(_self_->getLayersShapes(netInputShapes, layersIds, inLayersShapes, outLayersShapes));
^
....../opencv/modules/python/src2/cv2.cpp:175:5: note: in definition of macro ‘ERRWRAP2’
expr; \
^~~~
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:39957:85: error: ‘outLayersShapes’ was not declared in this scope
ERRWRAP2(_self_->getLayersShapes(netInputShapes, layersIds, inLayersShapes, outLayersShapes));
^
....../opencv/modules/python/src2/cv2.cpp:175:5: note: in definition of macro ‘ERRWRAP2’
expr; \
^~~~
In file included from ....../opencv/modules/python/src2/cv2.cpp:1722:0:
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:39958:79: error: ‘inLayersShapes’ was not declared in this scope
return Py_BuildValue("(NNN)", pyopencv_from(layersIds), pyopencv_from(inLayersShapes), pyopencv_from(outLayersShapes));
^~~~~~~~~~~~~~
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:39958:110: error: ‘outLayersShapes’ was not declared in this scope
return Py_BuildValue("(NNN)", pyopencv_from(layersIds), pyopencv_from(inLayersShapes), pyopencv_from(outLayersShapes));
^~~~~~~~~~~~~~~
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:39967:5: error: ‘vector_vector_MatShape’ was not declared in this scope
vector_vector_MatShape inLayersShapes;
^~~~~~~~~~~~~~~~~~~~~~
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:39967:5: note: suggested alternative: ‘vector_vector_DMatch’
vector_vector_MatShape inLayersShapes;
^~~~~~~~~~~~~~~~~~~~~~
vector_vector_DMatch
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:39968:28: error: expected ‘;’ before ‘outLayersShapes’
vector_vector_MatShape outLayersShapes;
^~~~~~~~~~~~~~~
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:39974:68: error: ‘inLayersShapes’ was not declared in this scope
ERRWRAP2(_self_->getLayersShapes(netInputShape, layersIds, inLayersShapes, outLayersShapes));
^
....../opencv/modules/python/src2/cv2.cpp:175:5: note: in definition of macro ‘ERRWRAP2’
expr; \
^~~~
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:39974:84: error: ‘outLayersShapes’ was not declared in this scope
ERRWRAP2(_self_->getLayersShapes(netInputShape, layersIds, inLayersShapes, outLayersShapes));
^
....../opencv/modules/python/src2/cv2.cpp:175:5: note: in definition of macro ‘ERRWRAP2’
expr; \
^~~~
In file included from ....../opencv/modules/python/src2/cv2.cpp:1722:0:
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:39975:79: error: ‘inLayersShapes’ was not declared in this scope
return Py_BuildValue("(NNN)", pyopencv_from(layersIds), pyopencv_from(inLayersShapes), pyopencv_from(outLayersShapes));
^~~~~~~~~~~~~~
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:39975:110: error: ‘outLayersShapes’ was not declared in this scope
return Py_BuildValue("(NNN)", pyopencv_from(layersIds), pyopencv_from(inLayersShapes), pyopencv_from(outLayersShapes));
^~~~~~~~~~~~~~~
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h: In function ‘PyObject* pyopencv_cv_dnn_dnn_Net_getMemoryConsumption(PyObject*, PyObject*, PyObject*)’:
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:40010:5: error: ‘vector_MatShape’ was not declared in this scope
vector_MatShape netInputShapes;
^~~~~~~~~~~~~~~
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:40010:5: note: suggested alternative: ‘vector_Match’
vector_MatShape netInputShapes;
^~~~~~~~~~~~~~~
vector_Match
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:40016:43: error: ‘netInputShapes’ was not declared in this scope
pyopencv_to(pyobj_netInputShapes, netInputShapes, ArgInfo("netInputShapes", 0)) )
^~~~~~~~~~~~~~
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:40016:43: note: suggested alternative: ‘pyobj_netInputShapes’
pyopencv_to(pyobj_netInputShapes, netInputShapes, ArgInfo("netInputShapes", 0)) )
^~~~~~~~~~~~~~
pyobj_netInputShapes
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h: In function ‘PyObject* pyopencv_cv_dnn_dnn_Net_getParam(PyObject*, PyObject*, PyObject*)’:
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:40053:5: error: ‘LayerId’ was not declared in this scope
LayerId layer;
^~~~~~~
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:40059:34: error: ‘layer’ was not declared in this scope
pyopencv_to(pyobj_layer, layer, ArgInfo("layer", 0)) )
^~~~~
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h: In function ‘PyObject* pyopencv_cv_dnn_dnn_Net_setParam(PyObject*, PyObject*, PyObject*)’:
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:40261:5: error: ‘LayerId’ was not declared in this scope
LayerId layer;
^~~~~~~
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:40268:34: error: ‘layer’ was not declared in this scope
pyopencv_to(pyobj_layer, layer, ArgInfo("layer", 0)) &&
^~~~~
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:40279:5: error: ‘LayerId’ was not declared in this scope
LayerId layer;
^~~~~~~
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:40286:34: error: ‘layer’ was not declared in this scope
pyopencv_to(pyobj_layer, layer, ArgInfo("layer", 0)) &&
^~~~~
In file included from ....../opencv/modules/python/src2/cv2.cpp:1722:0:
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h: In function ‘PyObject* pyopencv_cv_dnn_dnn_AsyncMat_wait_for(PyObject*, PyObject*, PyObject*)’:
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:40447:5: error: ‘chrono_milliseconds’ was not declared in this scope
chrono_milliseconds timeout;
^~~~~~~~~~~~~~~~~~~
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:40447:5: note: suggested alternative: ‘_PyTime_AsMilliseconds’
chrono_milliseconds timeout;
^~~~~~~~~~~~~~~~~~~
_PyTime_AsMilliseconds
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:40448:5: error: ‘AsyncMatStatus’ was not declared in this scope
AsyncMatStatus retval;
^~~~~~~~~~~~~~
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:40452:36: error: ‘timeout’ was not declared in this scope
pyopencv_to(pyobj_timeout, timeout, ArgInfo("timeout", 0)) )
^~~~~~~
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:40452:36: note: suggested alternative: ‘timer_t’
pyopencv_to(pyobj_timeout, timeout, ArgInfo("timeout", 0)) )
^~~~~~~
timer_t
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:40454:18: error: ‘retval’ was not declared in this scope
ERRWRAP2(retval = _self_->wait_for(timeout));
^
....../opencv/modules/python/src2/cv2.cpp:175:5: note: in definition of macro ‘ERRWRAP2’
expr; \
^~~~
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:40454:18: note: suggested alternative: ‘read’
ERRWRAP2(retval = _self_->wait_for(timeout));
^
....../opencv/modules/python/src2/cv2.cpp:175:5: note: in definition of macro ‘ERRWRAP2’
expr; \
^~~~
In file included from ....../opencv/modules/python/src2/cv2.cpp:1722:0:
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:40455:30: error: ‘retval’ was not declared in this scope
return pyopencv_from(retval);
^~~~~~
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:40455:30: note: suggested alternative: ‘read’
return pyopencv_from(retval);
^~~~~~
read
....../opencv/modules/python/src2/cv2.cpp: In instantiation of ‘bool pyopencv_to(PyObject*, T&, const char*) [with T = std::vector<cv::Mat>; PyObject = _object]’:
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:39294:44: required from here
....../opencv/modules/python/src2/cv2.cpp:24:105: error: ‘to’ is not a member of ‘PyOpenCV_Converter<std::vector<cv::Mat>, void>’
bool pyopencv_to(PyObject* obj, T& p, const char* name = "<unknown>") { return PyOpenCV_Converter<T>::to(obj, p, name); }
~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~
....../opencv/modules/python/src2/cv2.cpp: In instantiation of ‘PyObject* pyopencv_from(const T&) [with T = std::future<cv::Mat>; PyObject = _object]’:
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:39746:36: required from here
....../opencv/modules/python/src2/cv2.cpp:27:75: error: ‘from’ is not a member of ‘PyOpenCV_Converter<std::future<cv::Mat>, void>’
PyObject* pyopencv_from(const T& src) { return PyOpenCV_Converter<T>::from(src); }
~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~
modules/python3/CMakeFiles/opencv_python3.dir/build.make:65: recipe for target 'modules/python3/CMakeFiles/opencv_python3.dir/__/src2/cv2.cpp.o' failed
make[2]: *** [modules/python3/CMakeFiles/opencv_python3.dir/__/src2/cv2.cpp.o] Error 1
make[2]: Leaving directory '....../opencv/build'
CMakeFiles/Makefile2:10518: recipe for target 'modules/python3/CMakeFiles/opencv_python3.dir/all' failed
make[1]: *** [modules/python3/CMakeFiles/opencv_python3.dir/all] Error 2
make[1]: Leaving directory '....../opencv/build'
Makefile:165: recipe for target 'all' failed
make: *** [all] Error 2
```
Therefore, I directly copied the following lines from the file
**opencv/modules/dnn/misc/python/pyopencv_dnn.hpp**
```bash
typedef dnn::DictValue LayerId;
typedef std::vector<dnn::MatShape> vector_MatShape;
typedef std::vector<std::vector<dnn::MatShape> > vector_vector_MatShape;
#ifdef CV_CXX11
typedef std::chrono::milliseconds chrono_milliseconds;
typedef std::future_status AsyncMatStatus;
#else
typedef size_t chrono_milliseconds;
typedef size_t AsyncMatStatus;
#endif
```
and put them at the top of the file
**pyopencv_generated_types.h**
Now, I still have the following **ERROR** messages:
```bash
....../opencv/modules/python/src2/cv2.cpp: In instantiation of ‘bool pyopencv_to(PyObject*, T&, const char*) [w
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:39304:44: required from here
....../opencv/modules/python/src2/cv2.cpp:24:105: error: ‘to’ is not a member of ‘PyOpenCV_Converter<std::vecto
bool pyopencv_to(PyObject* obj, T& p, const char* name = "<unknown>") { return PyOpenCV_Converter<T>::to(obj, p, name); }
~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~
....../opencv/modules/python/src2/cv2.cpp: In instantiation of ‘PyObject* pyopencv_from(const T&) [with T = std
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:39756:36: required from here
....../opencv/modules/python/src2/cv2.cpp:27:75: error: ‘from’ is not a member of ‘PyOpenCV_Converter<std::futu
PyObject* pyopencv_from(const T& src) { return PyOpenCV_Converter<T>::from(src); }
~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~
....../opencv/modules/python/src2/cv2.cpp: In instantiation of ‘bool pyopencv_to(PyObject*, T&, const char*) [w = _object]’:
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:39850:66: required from here
....../opencv/modules/python/src2/cv2.cpp:24:105: error: ‘to’ is not a member of ‘PyOpenCV_Converter<cv::dnn::d
bool pyopencv_to(PyObject* obj, T& p, const char* name = "<unknown>") { return PyOpenCV_Converter<T>::to(obj, p, name); }
~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~
....../opencv/modules/python/src2/cv2.cpp: In instantiation of ‘bool pyopencv_to(PyObject*, T&, const char*) [w, 1000> >; PyObject = _object]’:
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:40462:66: required from here
....../opencv/modules/python/src2/cv2.cpp:24:105: error: ‘to’ is not a member of ‘PyOpenCV_Converter<std::chron
....../opencv/modules/python/src2/cv2.cpp: In instantiation of ‘PyObject* pyopencv_from(const T&) [with T = std
....../opencv/build/modules/python_bindings_generator/pyopencv_generated_types.h:40465:36: required from here
....../opencv/modules/python/src2/cv2.cpp:27:75: error: ‘from’ is not a member of ‘PyOpenCV_Converter<std::futu
PyObject* pyopencv_from(const T& src) { return PyOpenCV_Converter<T>::from(src); }
~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~
modules/python3/CMakeFiles/opencv_python3.dir/build.make:65: recipe for target 'modules/python3/CMakeFiles/opencv_python3.dir/__/src2/cv2.cpp.o' failed
make[2]: *** [modules/python3/CMakeFiles/opencv_python3.dir/__/src2/cv2.cpp.o] Error 1
make[2]: Leaving directory '....../opencv/build'
CMakeFiles/Makefile2:10518: recipe for target 'modules/python3/CMakeFiles/opencv_python3.dir/all' failed
make[1]: *** [modules/python3/CMakeFiles/opencv_python3.dir/all] Error 2
make[1]: Leaving directory '....../opencv/build'
Makefile:165: recipe for target 'all' failed
make: *** [all] Error 2
```
Any suggestions?
| priority: low,category: build/install,incomplete | low | Critical |
443,833,859 | terminal | Add Optional Cursor Properties to Color Schemes | There are several fields assigned against profiles that feel like they would be better placed in schemes (or at least duplicated to schemes):
* `cursorColor`
* `cursorShape`
* `fontFace`
* `fontSize`
There are two options for these properties that could be utilised:
1. Remove them from the profile and add them to schemes, updating the current schemes with default values.
2. Leave them in the profile and have them act as an override when present, similar to what `background` is doing currently.
Additionally, it may be worth actually removing the default `background` value, to allow the theme background to actually work by default. | Issue-Feature,Help Wanted,Area-Settings,Product-Terminal | low | Major |
442,243,752 | rust | Private doc test flag | Doc tests seem to work in a similar way to integration tests, in that the tests are performed from the public-facing API. This is great for public-facing libraries; however, for internal development it would be nice to have doc tests that run in a just-as-easy fashion.
Is this something that's on the cards, or is there a workaround? | T-rustdoc,C-feature-request,A-doctests | low | Major |
443,846,628 | godot | font.get_string_size() should support '\n' multiline. | I think font.get_string_size() should treat the '\n' symbol as a line break.
For now everything is calculated as one line, and '\n' is ignored!
var size = get_string_size("hello\nworlds");
size.y could be get_height() * num_lines;
size.x could be max(width(line[i]), size.x) over all lines;
**Godot version:**
3.1.1
| enhancement,topic:core | low | Minor |
443,873,689 | youtube-dl | support for https://unacademy.com/ | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.05.11. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that site you are requesting is not dedicated to copyright infringement, see https://yt-dl.org/copyright-infringement. youtube-dl does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
- Search the bugtracker for similar site support requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2019.05.11**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [ ] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
<!--
Provide all kinds of example URLs support for which should be included. Replace following example URLs by yours.
-->
- Single video: https://unacademy.com/lesson/vector-space-lesson-1-in-hindi/RD9SAOJX/?source=Course
- Single video: https://unacademy.com/lesson/vector-space-lesson-2-in-hindi/7C823QYM
- Playlist: https://unacademy.com/course/linear-algebra-for-csir-net-math/D74R1NTF
## Description
<!--
Provide any additional information.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
Sir/Madam
Plz provide the youtube-dl support for the site https://unacademy.com/. I am unable to download playlist from this. Also, site provides download option to its android app. Support for this site is not helping me only but thousands of students in their learning.
Account details:
[email protected]
ram12345
| site-support-request | low | Critical |
443,888,107 | go | reflect: Value.Method does not do the same nil checks as the language | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
go version go1.12 linux/amd64
</pre>
### What did you do?
```
package main
import (
"fmt"
"reflect"
)
type S struct {
x int
}
func (S) M(int) {}
type T struct {
*S
}
func main() {
var v = T{}
t := reflect.TypeOf(v)
fmt.Println(t, "has", t.NumMethod(), "methods:")
for i := 0; i < t.NumMethod(); i++ {
fmt.Print(" method#", i, ": ", t.Method(i).Type, "\n")
}
fmt.Println(reflect.ValueOf(v).Method(0)) // ok
_ = v.M // panic
}
```
### What did you expect to see?
Run okay.
### What did you see instead?
panic.
**Update: [similarly for interface dynamic values](https://github.com/golang/go/issues/32021#issuecomment-612353206)** | NeedsInvestigation,compiler/runtime | low | Major |