id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
489,878,492 | PowerToys | Drag to edge of screen automatically switches virtual desktops | null | Idea-Enhancement,Product-Window Manager,Product-Virtual Desktop | low | Minor |
489,878,530 | PowerToys | When remoting in, collapse other monitors to new virtual desktops. Remove VD when connecting locally again | null | Idea-New PowerToy,Product-Window Manager,Product-Virtual Desktop | low | Minor |
489,878,899 | PowerToys | Visual updates / flash zones for Win+Arrow | After the first arrow is pressed, flash the zones while the Win key is held down so that users can see where to move the window. | Idea-Enhancement,FancyZones-Dragging&UI,Product-FancyZones,Area-User Interface | low | Minor |
489,879,283 | PowerToys | Merge FancyZones with MTND and include zone moves in the pop-up | In addition to moving a window to a new desktop, the MTND pop-up should allow a user to:
- Move to a different, existing desktop
- Move to a zone in an existing desktop
- Move to a new desktop with a specific zone layout | Idea-Enhancement,Product-FancyZones,Product-Virtual Desktop,FancyZones-VirtualDesktop | low | Minor |
489,879,752 | PowerToys | [Shortcut Guide] Shortcuts list [tracker] | total list from MSFT support: https://support.microsoft.com/en-us/help/12445/windows-keyboard-shortcuts
This list will be added into #890 with v2 of shortcut guide.
Combination | Effect
-- | --
Win + Esc | Close magnification
Win + Enter | Narrator (Below 19h1)
Win + Pause | System Window
Win + PrtScn | Capture screenshot
Win + Space | Switch Keyboard Layout
Win + Tab | Open Timeline
Win + ; | Emoji Keyboard Pop up
Win + . | Emoji Keyboard Pop up
Win + , | Peek at desktop
Win + +/- | Magnifier
---|---
Win + A | Action Center
Win + B | Focus system tray
Win + C | Cortana (doesn't work on 1909)
Win + D | Display & hide desktop
Win + E | File Explorer
Win + F | Open Feedback Hub
Win + G | Game bar
Win + H | Dictation bar
Win + I | Settings
Win + K | Quick connect
Win + L | Lock PC
Win + M | Minimize windows
Win + N | Notification dialog (Need to verify min version, may be Win11 and up)
Win + Q | Open Search Bar
Win + P | Open Display Mode Selection panel
Win + R | Run Dialog
Win + S | Open Search Bar
Win + T | Navigate between task bar items
Win + U | Ease of access center
Win + V | Open Clipboard History
Win + W | Open Windows Ink Workspace Panel
Win + X | Quick launch menu
Win + Z | Snap Flyout (Win11+)
---|---
Win + Shift + M | Restores windows minimized with Win + M
Win + Shift + S | Screenshot
Win + Shift + T | Navigate between task bar items reversed
Win + Shift + Left/Right Arrow | Move entire active window to different monitor
---|---
Win + Ctrl + arrow | Switch virtual desktop
Win + Ctrl + Enter | Narrator (19h1 and above)
Win + Ctrl+F4 | Close virtual desktop
Win + Ctrl + D | New virtual desktop
Win + Ctrl + N | Narrator settings
Win + Ctrl + Q | Quick Assist
---|---
Win + Alt+ D | Toggles calendar in taskbar
---|---
Win + Ctrl + Shift + B | Restart Video Driver
---|---
Win + Shift + Ctrl + Alt | emulates the Office key | Idea-Enhancement,Product-Shortcut Guide,Status-In progress,Tracker | high | Critical |
489,880,410 | pytorch | Backtrace prints many <unknown function> | Our backtrace code frequently prints out frames with `<unknown function>`, indicating that `backtrace_symbols` was unable to retrieve the symbol for the backtrace. For example:
```
frame #0: std::function<std::string ()>::operator()() const + 0x11 (0x7efd7464fc11 in /data/users/ezyang/pytorch-tmp/torch/lib/libc10.so)
frame #1: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x2a (0x7efd7464f82a in /data/users/ezyang/pytorch-tmp/torch/lib/libc10.so)
frame #2: torch::autograd::VariableType::checked_cast_variable(at::Tensor&, char const*, int) + 0x138 (0x7efd7a777188 in /data/users/ezyang/pytorch-tmp/torch/lib/libtorch.so)
frame #3: torch::autograd::VariableType::unpack(at::Tensor&, char const*, int) + 0x9 (0x7efd7a777459 in /data/users/ezyang/pytorch-tmp/torch/lib/libtorch.so)
frame #4: torch::autograd::VariableType::copy_(at::Tensor&, at::Tensor const&, bool) + 0xb0 (0x7efd7a778ff0 in /data/users/ezyang/pytorch-tmp/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x6a60c9 (0x7efd970910c9 in /data/users/ezyang/pytorch-tmp/torch/lib/libtorch_python.so)
frame #6: <unknown function> + 0x6a7ee9 (0x7efd97092ee9 in /data/users/ezyang/pytorch-tmp/torch/lib/libtorch_python.so)
frame #7: c10d::ProcessGroupGloo::runLoop(int) + 0x169 (0x7efd97086199 in /data/users/ezyang/pytorch-tmp/torch/lib/libtorch_python.so)
frame #8: <unknown function> + 0xb8678 (0x7efd96962678 in /home/ezyang/Dev/pytorch-tmp-env/lib/libstdc++.so.6)
frame #9: <unknown function> + 0x7dd5 (0x7efdabb6add5 in /lib64/libpthread.so.0)
frame #10: clone + 0x6d (0x7efdab893ead in /lib64/libc.so.6)
```
This occurs even when the library is built with debugging symbols. The reason this occurs is that the functions in question live in an *anonymous* namespace, and thus don't actually have exported symbols for `backtrace_symbols` to extract:
```
$ gdb /data/users/ezyang/pytorch-tmp/torch/lib/libtorch_python.so
(gdb) info symbol 0x6a7ee9
c10d::(anonymous namespace)::AsyncSparseAllreduceWork::run() + 41 in section .text
```
To work around this, you can load the library in question in gdb and then ask about the address there (I think we get the symbol name from DWARF in that case). But it would be nice if this just worked out of the box. | module: build,triaged,better-engineering | low | Critical |
489,881,617 | PowerToys | [FZ Editor] Add unit tests | # Summary of the new feature/enhancement
The new FZ editor does not have any tests. We need to fix that. | Idea-Enhancement,Area-Tests,FancyZones-Editor,Product-FancyZones | low | Minor |
489,889,994 | go | x/build: keep test result logs available after success | Currently, the x/build testing infrastructure (both trybots and post-submit builders) generally makes test logs available under two conditions:
- while the test is running, streamed
- after the test completed, if it failed
After tests complete successfully, the test logs immediately become inaccessible.
I don't know the exact rationale for this design, but my guess is that it was done to reduce computational and storage resource use; logs were likely deemed less useful if tests passed. /cc @bradfitz Is there more context on this?
### Feature Request
This feature request issue is to make them accessible on successful test completion for some amount of time (at least a month). Both on the [build dashboard](https://build.golang.org), and for trybot results.
The reasons for wanting this include:
- it's sometimes helpful to be able to know which tests ran and which tests were skipped, and to be able to confirm a specific test truly passed
- it's sometimes helpful to know which architectures were tested by trybots (and from that, to know which ones weren't), and having trybot result page with logs makes that much easier
- pretty much every CI system keeps test logs accessible regardless of whether tests succeeded or failed, so it has become a baseline expectation
I think this should be worth doing in order to improve the developer experience for people working on the Go project. Feedback from others is welcome.
/cc @golang/osp-team | Testing,Builders,NeedsFix,FeatureRequest,Community | low | Critical |
489,891,685 | flutter | [google_sign_in] document resource IDs for play store app submission | The playstore app submission process includes a pre-test routine that supports sign-in automation which allows developers to provide credentials for their apps for use in testing, thereby allowing a more comprehensive pre-submit test.
Unfortunately, it would appear that in addition to providing the username/password, developers must also provide the resource IDs associated with text-entry for the username and password, along with the 'sign-in' button.
https://support.google.com/googleplay/android-developer/answer/7002270#signin
For flutter apps that make use of the google sign-in plugin, it seems that this information is not directly available via the current plugin docs (I'm assuming the plugin leverages the GMS client component):
https://developers.google.com/android/reference/com/google/android/gms/auth/api/signin/GoogleSignInClient
If this information could be provided for use in testing/submission, it would be most helpful.
Thanks!
| d: api docs,p: google_sign_in,package,team-ecosystem,P3,triaged-ecosystem | low | Minor |
489,893,046 | terminal | Feature Request: link generation for files + other data types | #2094 Description of the new feature/enhancement
There are a number of scenarios where having a link detected with a meaningful click action would be helpful:
1. File (optional line number): click action could launch an editor.
2. Directory: click action can change to that directory
3. Git commit: click action can bring up a description of the commit in the appropriate web page or tool | Issue-Feature,Area-Extensibility,Product-Terminal | low | Major |
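A first cut at the detection step for scenarios 1 and 3 above could be a pair of regexes. This is only an illustrative sketch; the patterns are guesses, not what Terminal would actually ship (real detection would also need to handle directories, spaces in paths, and shell-specific quoting):

```python
import re

# Hypothetical patterns: a file path with an extension plus optional :line,
# and a 7-40 character lowercase hex string that looks like a git commit.
FILE_RE = re.compile(r'(?P<path>(?:[A-Za-z]:)?[\w./\\-]+\.\w+)(?::(?P<line>\d+))?')
COMMIT_RE = re.compile(r'(?P<sha>[0-9a-f]{7,40})')

def classify(token):
    """Return (kind, target, line) for a token under the cursor, or None."""
    m = FILE_RE.fullmatch(token)
    if m:
        return ("file", m.group("path"), m.group("line"))
    m = COMMIT_RE.fullmatch(token)
    if m:
        return ("commit", m.group("sha"), None)
    return None
```

The click action would then dispatch on the returned kind: launch an editor at `path:line`, or open the commit in the appropriate tool.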
489,900,134 | rust | Extend `span_label` to include fallback operation | There are occasions where `DiagnosticBuilder::span_label` can be called with a `DUMMY_SP`. The current behavior is that we don't show those labels at all in the output. In some cases that is the correct behavior. In others, we proactively look for `DUMMY_SP`s and emit a `note` instead. I assume that the former is the most common behavior right now, although I do not know which behavior is most desired in the general case.
We should add a new method that has the fallback, or make all `span_label`s take an enum argument selecting the fallback behavior.
After extending the API, we should also audit our codebase for the correct behavior throughout all diagnostics. | C-cleanup,A-diagnostics,T-compiler | low | Minor |
489,912,016 | terminal | [Audit] Eliminate raw pointers as begin/end markers in the CharRowCellReference class | When turning on audit mode for the output buffer project, I ran into the circumstance where `CharRowCellReference` is using raw pointers as its iterator definitions for begin and end, which makes static analysis very unhappy as it still results in pointer math.
This represents adjusting `CharRowCellReference` to perform iteration in a way that audit finds palatable. | Product-Conhost,Help Wanted,Area-Output,Product-Terminal,Issue-Task,Area-CodeHealth | low | Minor |
489,917,332 | terminal | [Audit] Use std::wstring_view for all global strings | I left this behind in the audit mode enable phase.
This is because changing the `DEFAULT_FONT_FACE` to a `std::wstring_view` cascades in a way that is troublesome.
For one, `std::wstring_view` is not guaranteed to be null terminated. Some of the consumers of this value throughout the codebase use the pointer to this memory location as a z-terminated string instead of a counted string. In the context of pointing to a globally allocated string, we can probably feel safe that it is null terminated and could ignore this. But I don't like ignoring things like this as they pop up later when refactoring occurs.
For two, changing the way the font face string works cascades through the entire `FontInfo` and friends classes in a way that touches many, many things. I don't really want to move all of that around without also addressing #602 at the same time because we've already identified that this particular value needs to change in those classes to satisfy that bug and circumstance.
For three, we haven't even started talking about the cascading effects of doing this to the other three strings.
So for now, I've punted and left this TODO behind. | Product-Conhost,Help Wanted,Area-Settings,Product-Terminal,Issue-Task,Area-CodeHealth | low | Critical |
489,919,305 | flutter | Explore alternate rendering backends for Windows | Currently we're using the OpenGL rendering configuration in the embedding API, via ANGLE. Longer term, we should investigate other possibilities:
- Skia has a Vulkan backend; we could potentially wire up Vulkan through the embedding API and use that when available on Windows.
- If Skia were to add some flavor of DirectX backend, we could use that. The specific version would impact how widely it could be used. | engine,platform-windows,a: desktop,P3,team-windows,triaged-windows | low | Minor |
489,919,501 | terminal | [Audit] Unsuppress OutputCellIterator warning and solve custom construction/destruction warning | I suppressed the warning on the `OutputCellIterator` as a part of turning on audit mode because I couldn't figure it out (in a reasonable amount of time for how many other things I was fixing and trying to get turned on).
This represents removing the warning suppression and figuring out what the actual issue is that is causing the warning to flag when the iterator is advanced. | Product-Conhost,Help Wanted,Area-Output,Product-Terminal,Issue-Task,Area-CodeHealth | low | Minor |
489,921,471 | terminal | [Audit] Remove suppression on CodepointWidthDetector | While turning on the audit mode, I ran into a conundrum with the `CodepointWidthDetector` that I didn't really want to dig into at the moment.
The warning is complaining that we have a global object with complex initialization. That is true. But there are some decisions to be made here that equaled "not-a-quick-fix".
The `CodepointWidthDetector` is a class designed to have multiple instances if need be.
However, for the most part, it is used in a single-instance manner inside `conhost` and `wt`.
The class `GlyphWidth.cpp` is mostly a wrapper around `CodepointWidthDetector` to turn it into a singleton that can be accessed by C-style methods in the global namespace.
The decisions here are really around...
Should `CodepointWidthDetector` just be a singleton itself and only hand out a copy of itself on `Instance()`? I know that's fine for `conhost`, but I haven't fully considered it for `wt` yet.
If we get rid of the C-style global namespace methods (by removing the entire `GlyphWidth` class), will that cause a kerfuffle throughout the `conhost` codebase (and maybe `wt` too) because there are a bunch of places that gratuitously grab the global state, since it was never encapsulated before?
Anything else I didn't have time to fully consider here? | Product-Conhost,Area-Output,Product-Terminal,Issue-Task,Area-CodeHealth | low | Minor |
489,926,626 | TypeScript | Handle CompileOnSave on projects that contain project references. | Came up as TODO from #32028 where we want to see what needs to happen for such projects.. | Bug | low | Minor |
489,955,969 | godot | Adding an existing script to a node which has the same case-insensitive name warns for case mismatch | Godot 3.2 master ccf294b92f02af6e60206e220dcf4a8474f73f0a
I have an existing script named `interaction.gd` and a node called `Interaction`. I used the right-click menu of the scene tree to add that script. The dialog popped up proposing the name `Interaction.gd`. I clicked on the folder icon to pick `interaction.gd` from the file system, then clicked `OK`.
Later I noticed the following warnings in the console:
```
WARNING: FileAccessWindows::_open: Case mismatch opening requested file 'Interaction.gd', stored as 'interaction.gd' in the filesystem. This file will not open when exported to other case-sensitive platforms.
At: drivers\windows\file_access_windows.cpp:105
WARNING: FileAccessWindows::_open: Case mismatch opening requested file 'Interaction.gd', stored as 'interaction.gd' in the filesystem. This file will not open when exported to other case-sensitive platforms.
At: drivers\windows\file_access_windows.cpp:105
Loading resource: res://dmc_terrain/Interaction.gd
WARNING: FileAccessWindows::_open: Case mismatch opening requested file 'Interaction.gd', stored as 'interaction.gd' in the filesystem. This file will not open when exported to other case-sensitive platforms.
At: drivers\windows\file_access_windows.cpp:105
WARNING: FileAccessWindows::_open: Case mismatch opening requested file 'Interaction.gd', stored as 'interaction.gd' in the filesystem. This file will not open when exported to other case-sensitive platforms.
At: drivers\windows\file_access_windows.cpp:105
```
Although the script loaded correctly, these warnings seem to hint at a problem with this sequence of actions. Case sensitivity should not matter at all, since the script already existed and I didn't rename it.
It also happened during file opening, so somehow Godot tried to open the script using the wrong name. | bug,platform:windows,topic:editor,confirmed | low | Major |
489,962,843 | go | net/http: document outgoing Host header origin in more places | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12.5 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOOS="darwin"
</pre></details>
### What did you do?
https://play.golang.org/p/O5Y6dEi_hqx
### What did you expect to see?
I expected setting a `Host:` header to actually send that host and set `req.Host`
```map[Accept-Encoding:[gzip] Host:[example.com] User-Agent:[My Test User Agent 1.1]]```
### What did you see instead?
I see that it just sets Host from the URL and uses `req.Host` if it exists.
```map[Accept-Encoding:[gzip] Host:[127.0.0.1:2] User-Agent:[My Test User Agent 1.1]]```
I see some closed issues, but I have a few more words about it. It is documented that we need to set `.Host` to send the Host header, and there are a few more headers that are calculated in `request.write`; for example, `User-Agent` is overwritten, but the same is not done for Host.
https://github.com/golang/go/blob/master/src/net/http/request.go#L553 on this comment we see that it says
```
// Find the target host. Prefer the Host: header, but if that
// is not given, use the host from the request URL.
```
but it does not even look at the Host header.
| Documentation,help wanted,NeedsFix | low | Minor |
489,970,527 | go | doc: release notes: minor mistake regarding FreeBSD port applicability | ### What version of Go are you using (`go version`)?
1.13
### Does this issue reproduce with the latest release?
Not applicable.
### What operating system and processor architecture are you using (`go env`)?
Not applicable.
### What did you do?
Read the [change log](https://golang.org/doc/go1.13#freebsd).
### What did you expect to see?
I expected there to be no mention of COMPAT_FREEBSDN (N = 4-11), since this feature is useful only to binaries built on unsupported versions of FreeBSD running on a newer version (such as a static binary built on FreeBSD 4, or a dynamic binary with misc/compat4x in a jail).
### What did you see instead?
A mention that Go is available for FreeBSD 11, 12, and 13, since all three versions will be buildable by the port in the ports framework (and probably have already been built by the time this issue gets fixed).
Since this is an easy issue to fix, I hope it's okay that I didn't supply all the necessary information. :) | Documentation,help wanted,NeedsFix | low | Minor |
489,976,745 | rust | Add support for splitting linker invocation to a second execution of `rustc` | This issue is intended to track support for splitting a `rustc` invocation that ends up invoking a system linker (e.g. `cdylib`, `proc-macro`, `bin`, `dylib`, and even `staticlib` in the sense that everything is assembled) into two different `rustc` invocations. There are a number of reasons to do this, including:
* This can improve pipelined compilation support. The [initial pass of pipelined compilation](https://github.com/rust-lang/rust/issues/58465) explicitly did not pipeline linkable compilations because the linking step needs to wait for codegen of all previous steps. By literally splitting it out, build systems could then synchronize with previous codegen steps and only execute the link step once everything is finished.
* This makes more artifacts cacheable with caching solutions like `sccache`. Anything involving the system linker cannot be cached by `sccache` because it pulls in too many system dependencies. The output of the first half of these linkable compilations, however, is effectively an `rlib` which can already be cached.
* This can provide build systems which desire more control over the linker step with, well, more control over the linker step. We could presumably extend the second half here with more options eventually. This is a somewhat amorphous reason to do this; the previous two are the most compelling ones so far.
This is a relatively major feature of rustc, and as such this may even require an RFC. This issue is intended to get the conversation around this feature started and see if we can drum up support and/or more use cases. To give a bit of an idea about what I'm thinking, though, a strawman for this might be:
1. Add two new flags to `rustc`, `--only-link` and `--do-not-link`.
2. Cargo, for example, would first compile the `bin` crate type by passing the `--do-not-link` flag, passing all the flags it normally does today.
3. Cargo, afterwards, would then execute `rustc` again, only this time passing the `--only-link` flag.
These two flags would indicate to `rustc` what's happening, notably:
* `--do-not-link` indicates that rustc should be creating a linkable artifact, such as one of the ones mentioned above. This means that rustc should *not* actually perform the link phase of compilation, but rather it's skipped entirely. In lieu of this a temporary artifact is emitted in the output directory, such as `*.rlink`. Maybe this artifact is a folder of files? Unsure. (maybe it's just an rlib!)
* The converse of `--do-not-link`, `--only-link`, is then passed to indicate that the compiler's normal phases should all be entirely skipped *except* for the link phase. Note that for performance this is crucial in that this does not rely on incremental compilation, nor does this rely on queries, or anything like that. Instead the compiler forcibly skips all this work and goes straight to linking. Anything the compiler needs as input for linking should either be in command line flags (which are reparsed and guaranteed to be the same as the `--do-not-link` invocation) or the input would be an output of the `--do-not-link` invocation. For example maybe the `--do-not-link` invocation emits a file that indicates where to find everything to link (or something like that).
The general gist is that `--do-not-link` says "prepare to emit the final crate type, like `bin`, but only do the crate-local stuff". This step can be pipelined, doesn't require upstream objects, and can be cached. This is also the longest step for most final compilations. The gist of `--only-link` is that it's execution time is 99% the linker. The compiler should do the absolute minimal amount of work to figure out how to invoke the linker, it then invokes the linker, and then exits. To reiterate again, this **will not rely on incremental compilation** because engaging all of the incremental infrastructure takes quite some time, and additionally the "inputs" to this phase are just object files, not source code.
In any case this is just a strawman, I think it'd be best to prototype this in rustc, learn some requirements, and then perhaps open an RFC asking for feedback on the implementation. This is a big enough change it'd want to get a good deal of buy-in! That being said I would believe (without data at this time, but have a strong hunch) that the improvements to both pipelining and the ability to use `sccache` would be quite significant and worthwhile pursuing. | A-linkage,I-compiletime,T-compiler,C-feature-request | high | Major |
490,035,810 | flutter | TweenSequence (and other animation building-blocks) should be flippable | For certain use cases (see https://github.com/flutter/packages/pull/30 for an example) you want a flippable TweenSequence. Example: you have two TweenSequences to drive the opacity of two widgets:
```dart
TweenSequenceItem<double>(
tween: Tween<double>(begin: 1.0, end: 0.0),
weight: 4 / 12,
),
TweenSequenceItem<double>(
tween: ConstantTween<double>(0.0),
weight: 8 / 12,
),
```
and:
```dart
TweenSequenceItem<double>(
tween: ConstantTween<double>(0.0),
weight: 4 / 12,
),
TweenSequenceItem<double>(
tween: Tween<double>(begin: 1.0, end: 0.0),
weight: 8 /12,
),
```
When the animation runs forward, you want the first widget to fade out quickly, and the second widget to fade in slowly after that. When the animation runs in reverse, you want the second widget to fade out quickly, and the first one to fade in slowly after that. Currently, you'd have to specify four different TweenSequences for that. Would be cool if you'd just have to define the two above and can obtain the flipped ones from it (similar to Curve.flipped).
This kind of flipping seems to be common with material design animations.
There may be other concepts that would also benefit from flipping. | c: new feature,framework,a: animation,P3,team-framework,triaged-framework | low | Major |
490,040,559 | flutter | Create a Flutter installer package that completes the initial setup for you | Hey there! The folks from GDG Baltimore have suggested creating an installer file that takes the user through the install process automatically to reduce the quirks for beginners. For example, adding Flutter to the system path can be a bit of a painful experience. An installer would be able to handle that automatically. | c: new feature,tool,a: first hour,P3,team-tool,triaged-tool | low | Minor |
490,047,406 | flutter | Create a new Flutter SDK package flutter_test_tools | I spoke with @Hixie and he agreed that we should refactor the new Flutter driver synchronization APIs (#39196, #38836) into a new Flutter SDK package flutter_test_tools so that they can be used anywhere dart:ui is available, even if the Flutter driver extension is not enabled. Filing this issue to track the creation of the package.
FYI @adazh @digiter | a: tests,c: new feature,tool,t: flutter driver,P2,team-tool,triaged-tool | low | Minor |
490,062,688 | terminal | [Audit] Fix suppressed return of std::wstring_view& from OutputCellView | During audit, I had to suppress a warning related to returning a reference to a wstring_view.
This is because of the weird relationship that exists between `TextBufferTextIterator` and `TextBufferCellIterator`. The Text iterator needs the `wstring_view` stored *somewhere* or it won't return the right thing with its weird `operator->`.
The fact that this is flagged and awkward indicates a bad design that should be revisited. There has to be a better way. | Product-Conhost,Area-Output,Product-Terminal,Issue-Task,Area-CodeHealth | low | Minor |
490,064,049 | terminal | [Audit] Remove suppression of hidden methods on TextBufferTextIterator | This is very related to #2681 and will probably be fixed at the same time or with a better design that could encompass both of these.
The issue is that the Text iterator inherits from the Cell one. Which is probably not right. It should probably encapsulate it or something.
The original idea when I was scrabbling those iterators together was to not waste time advancing the portions of the iterator that weren't related to the text if all the end consumer wanted was the text. But it doesn't even look like I achieved that and I introduced things that are turning into gross situations and warnings.
| Product-Conhost,Area-Output,Product-Terminal,Issue-Task,Area-CodeHealth | low | Minor |
490,064,746 | terminal | [Audit] default DxEngine constructor should not throw | To change this requires a change in initialization strategy of the engines that is probably overdue anyway. Instead of half-fixing this to solve the issue, I left a TODO. This represents that TODO. | Product-Conhost,Area-Rendering,Product-Terminal,Issue-Task,Area-CodeHealth | low | Minor |
490,068,127 | pytorch | Custom sampler for Seq2Seq models to avoid padding | ## π Feature
As proposed in this [PyTorch forum post](https://discuss.pytorch.org/t/using-batches-for-seq2seq-models/55205): Custom sampler for Sequence-to-Sequence models (but also for sequence classification, sequence tagging, autoencoders, etc.) that forms homogeneous batches, i.e., in the same batch all input sequences have the same length, and all target sequences have the same length. For example a batch might contain 32 pairs with all input sequences having length 12 and all target sequences having length 15.
## Motivation
Most Sequence-to-Sequence tutorials assume batch sizes of 1, since target sequences typically end with an end-of-sequence (EOS) token and are not padded. Thus, random batches of larger size would include target sequences of different lengths. Batches of size 1, of course, show poor performance w.r.t. training time compared to larger batches.
While padding of input sequences is common (at least for sequence classification), its effects on accuracy are not always clear. Optional packing using `PackedSequence` arguably mitigates this but also adds noticeable overhead to the training performance (anecdotal observation: up to 10% longer training).
## Pitch
A custom sampler organizes all data items, here pairs of sequences, into batches, so that each batch contains only pairs with equal input lengths and equal target lengths. For example, a batch might contain 32 pairs with all input sequences having length 12 and all target sequences having length 15. The benefits are:
* Obviously, larger batches yield better training times compared to batches of size 1
* No padding or packing needed (potentially avoiding all the side effects involved)
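A minimal sketch of such a sampler in plain Python (the class name and bucketing strategy are my own, not an existing PyTorch API): it buckets dataset indices by their exact (input_len, target_len) pair and emits one bucket slice per batch. Since `DataLoader` accepts any iterable of index lists as `batch_sampler`, no torch-specific base class is strictly required.

```python
import random
from collections import defaultdict

class HomogeneousBatchSampler:
    """Yield batches whose pairs all share one (input_len, target_len)."""

    def __init__(self, lengths, batch_size):
        # lengths: one (input_len, target_len) tuple per dataset item
        self.batch_size = batch_size
        self.buckets = defaultdict(list)
        for idx, key in enumerate(lengths):
            self.buckets[key].append(idx)

    def __iter__(self):
        batches = []
        for indices in self.buckets.values():
            random.shuffle(indices)              # shuffle within each bucket
            for i in range(0, len(indices), self.batch_size):
                batches.append(indices[i:i + self.batch_size])
        random.shuffle(batches)                  # mix lengths across the epoch
        return iter(batches)

    def __len__(self):
        # ceil(len(bucket) / batch_size), summed over buckets
        return sum((len(v) + self.batch_size - 1) // self.batch_size
                   for v in self.buckets.values())
```

Only the leftover items of each bucket produce short batches, which matches the observation below that the vast majority of batches come out full on a large dataset.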
## Alternatives
`torchtext` has its own `BucketIterator` that organizes batches to minimize padding (not completely avoiding it).
## Additional context
To see how the combination of input and target lengths is distributed, I've made a test with a real-world dataset for machine translation. I've set the batch size to 32. The figure below shows the distribution of the sizes of all batches. As one can see, the vast majority of batches is indeed full (i.e., 32 sequence pairs). This shouldn't really be surprising since:
* Batch sizes (e.g., 32 or 64) are essentially nothing given large datasets of millions of sequence pairs or more. Thus, the chance that enough pairs share the same input and target lengths is high.
* The input and target lengths are typically not independent. For example, an input of length 5 generally does not have a target length of 20, but one in a similar range.

cc @SsnL | module: dataloader,triaged | low | Major |
490,076,646 | pytorch | Int32 overflow in bincount indexing | ## π Bug
In `aten/src/ATen/native/cuda/SummaryOps.cu`, there is a snippet:
```
template<typename input_t, typename IndexType>
__device__ static IndexType getBin(input_t bVal, input_t minvalue, input_t maxvalue, int nbins) {
IndexType bin = (int)((bVal - minvalue) * nbins / (maxvalue - minvalue));
```
When the data type is int32, for a big enough value of bVal and a big number of nbins, the multiplication causes an int overflow that can result in a negative bin index being returned. This can cause memory corruption.
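The wraparound can be reproduced in pure Python by emulating 32-bit arithmetic. The concrete `maxvalue = 65535` below is an assumption (data spanning the full 0..65535 range); the sign flip in the product is what matters:

```python
def to_int32(x):
    """Wrap a Python int to a signed 32-bit value, as C/CUDA `int` math does."""
    x &= 0xFFFFFFFF
    return x - 0x100000000 if x >= 0x80000000 else x

b_val, minvalue, maxvalue, nbins = 36830, 0, 65535, 65536

# (bVal - minvalue) * nbins, evaluated in 32-bit as in getBin():
prod = to_int32((b_val - minvalue) * nbins)   # 2_413_690_880 wraps negative
bin_idx = int(prod / (maxvalue - minvalue))   # C division truncates toward zero

print(prod, bin_idx)   # -1881276416 -28706
```

Doing the multiplication in 64-bit (or casting before multiplying) keeps the product, and therefore the bin index, non-negative.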
## To Reproduce
Steps to reproduce the behavior:
1. Run bincount with minval=0, nbins=65536, and value 36830 inside. This will translate into bin index -28706.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
## Expected behavior
value 36830 should give bin 36830 | module: cuda,triaged,module: 64-bit | low | Critical |
490,083,102 | flutter | [feature request] Text field of variable size | As you can see below I want implement text field of variable size. Currently I intend to use stack and look for way to get hold of renderbox of text and find out it that is feasible.

Thx | a: text input,c: new feature,framework,P3,team-framework,triaged-framework | low | Major |
490,107,464 | bitcoin | wallet: CPU use proportional to wallet transaction count when idle | I recently noticed that the Bitcoin Core GUI (v0.18.1) was constantly using significant amounts of CPU when idle. I disconnected all peers and it continued to use the CPU.
It turns out that it is calling `WalletModel::pollBalanceChanged` 4 times per second for all of my wallets, which causes it to loop through all the transactions in all my wallets 4 times per second, and uses a lot of CPU.
Is this intentional? It seems like it may be unnecessary.
I tried changing the pollTimer to use 2.5 seconds instead of 250 ms, and CPU usage dropped significantly, so I'm pretty sure this is what's causing the CPU load:
```cpp
connect(pollTimer, &QTimer::timeout, this, &WalletModel::pollBalanceChanged);
pollTimer->start(MODEL_UPDATE_DELAY*10);
```
Most of the CPU is being used by `LookupBlockIndex` looking up the same few thousand block hashes over and over, here:
```
#3 std::unordered_map<...>::find (__x=..., this=<optimized out>) at /usr/include/c++/8/bits/unordered_map.h:920
#4 LookupBlockIndex (hash=...) at ./validation.h:436
#5 interfaces::(anonymous namespace)::LockImpl::getBlockHeight (this=<optimized out>, hash=...) at interfaces/chain.cpp:34
#6 interfaces::(anonymous namespace)::LockImpl::getBlockDepth (this=0x555556ebb4c0, hash=...) at interfaces/chain.cpp:43
#7 CMerkleTx::GetDepthInMainChain (this=this@entry=0x7fff7ce26940, locked_chain=...) at wallet/wallet.cpp:4479
#8 CWalletTx::IsTrusted (this=this@entry=0x7fff7ce26940, locked_chain=...) at wallet/wallet.cpp:2081
#9 CWallet::GetUnconfirmedBalance (this=0x7fff7dc19150) at /usr/include/c++/8/bits/unique_ptr.h:342
#10 interfaces::(anonymous namespace)::WalletImpl::getBalances (this=0x7fff7d4e66a0) at /usr/include/c++/8/bits/shared_ptr_base.h:1307
#11 interfaces::(anonymous namespace)::WalletImpl::tryGetBalances (this=0x7fff7d4e66a0, balances=..., num_blocks=@0x7fffffffd9cc: -1) at interfaces/wallet.cpp:388
#12 WalletModel::pollBalanceChanged (this=0x5555569caef0) at /usr/include/c++/8/bits/unique_ptr.h:342
```
| Wallet,Resource usage | low | Major |
490,109,547 | pytorch | torch.cuda.empty_cache() writes data to gpu0 | ## 🐛 Bug
I have 2 GPUs; when I clear data on gpu1, empty_cache() always writes ~500M of data to gpu0.
I observe this in torch 1.0.1.post2 and 1.1.0.
## To Reproduce
The following code will reproduce the behavior:
After torch.cuda.empty_cache(), ~567M gpu memory will be filled on gpu0.
```python
import torch
aa=torch.zeros((1000,1000)).cuda(1)
del aa
torch.cuda.empty_cache()
```
## Expected behavior
If torch.cuda.set_device(1) is used, then everything works as expected. The following code gives the desired behaviour: some GPU memory on gpu1 is released, while gpu0 remains empty.
```python
import torch
torch.cuda.set_device(1)
aa=torch.zeros((1000,1000)).cuda(1)
del aa
torch.cuda.empty_cache()
```
## Environment
PyTorch version: 1.1.0
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Ubuntu 16.04.4 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609
CMake version: version 3.5.1
Python version: 2.7
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: GeForce GTX 1080 Ti
GPU 1: GeForce GTX 1080 Ti
Nvidia driver version: 384.130
cuDNN version: Probably one of the following:
/usr/local/cuda-9.0/targets/x86_64-linux/lib/libcudnn.so.5.1.5_bak
/usr/local/cuda-9.0/targets/x86_64-linux/lib/libcudnn.so.7.3.0
Versions of relevant libraries:
[pip] pip2 19.1.1
[conda] None used
## Additional context
I observe the phenomenon on multiple machines with different hardware, and multiple torch version.
| module: cuda,triaged | low | Critical |
490,130,952 | godot | "Making Unique" resource collections in derived scenes replaces resource elements in base scenes. | **Godot version:**
Godot 3.1 stable mono
**OS/device including version:**
Windows 10
**Issue description:**
If you have a resource container and a resource element inside of that container, and if both of these resources are embedded resources inside of a scene file, then attempting to "Make Unique" the resource container in a derived scene will correctly create a deep duplicate of the resource container (that is, the derived scene will now have its own copies of the sub-resources, i.e. the resource container and the resource element).
However, if you look back at the base scene, you will see that the base resource element, which previously had a path residing in the base scene, will now be replaced with a resource path referring to the newly created embedded resource element in the derived scene.
This should not happen. It should instead retain its original copy of the resource element, that way the base scene and derived scene, upon making the resource container unique, will each have their own containers with each their own copies of the resource elements.
**Steps to reproduce:**
The demo below uses a resource collection from godot-next called a ResourceSet that is a thin layer around a Dictionary of Script -> Resource pairs. It makes a GUI that guarantees only one Resource of any kind will exist and allows you to constrain the collection's elements to a particular scripted type.
In the demo, the constraint type is "Behavior", another resource in godot-next. Its features are not relevant to this Issue. The demo introduces a "Bad" resource which derives Behavior and contains an integer "Num" exported property.
The ResourceSet is present on a CallbackDelegator node (another godot-next type) which serves as the root node of the `res://scenes/other.tscn` scene. There is also a derived `res://scenes/other1.tscn` scene.
If one opens the `other1.tscn` scene and expands all of the resource paths in the Inspector, they will see `other.tscn:1` for the ResourceSet and `other.tscn:2` for the Bad instance within it (if you don't see one, click the '+' icon to add it).
Then, if you click the "Make Unique" tool in the other1 scene's ResourceSet instance and save the scene, the resource paths will update to where the ResourceSet is now `other1.tscn:2` and the Bad instance inside of it is `other1.tscn:1`. So, the ID numbers have swapped, and both are now stored inside the other1 scene. The ID numbers swapping is irrelevant, but the "other.tscn" becoming "other1.tscn" is correct behavior.
If you then move over to `res://scenes/other.tscn` and examine the same resource paths in the Inspector, you will notice that the ResourceSet is still pointing to `other.tscn:2`, but the Bad instance has now changed to be `other1.tscn:1`. This is the problem as it should have kept itself as `other.tscn:1` so that the embedded, embedded resource is still referring to the original scene.
**Minimal reproduction project:**
[nested_resources_test.zip](https://github.com/godotengine/godot/files/3582253/nested_resources_test.zip) | enhancement,topic:core | low | Minor |
490,203,287 | go | cmd/compile: align formal parameters with actual argument when reporting incorrect number of arguments | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.13 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/klak/.cache/go-build"
GOENV="/home/klak/.config/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/home/klak"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build664638301=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
https://play.golang.org/p/Jgg2p681cOG
### What did you expect to see?
<pre>
./prog.go:7:5: not enough arguments in call to foo
have (number, number, number, number)
want (int,    int,    int,    int,    int, int)
</pre>
### What did you see instead?
<pre>
./prog.go:7:5: not enough arguments in call to foo
have (number, number, number, number)
want (int, int, int, int, int, int)
</pre>
| NeedsInvestigation,FeatureRequest | low | Critical |
490,261,582 | youtube-dl | support for qobuz ? | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.09.01. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that site you are requesting is not dedicated to copyright infringement, see https://yt-dl.org/copyright-infringement. youtube-dl does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
- Search the bugtracker for similar site support requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2019.09.01**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [i don't know] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
<!--
Provide all kinds of example URLs support for which should be included. Replace following example URLs by yours.
-->
- Playlist: https://open.qobuz.com/album/0060253753015
## Description
<!--
Provide any additional information.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
I can't use youtube-dl for Qobuz, and I don't know whether that is possible for some reason. I pay for my streaming subscription, but streaming burns through my phone data plan.
Regards | site-support-request | low | Critical
490,264,441 | flutter | Scaffold should follow keyboard during keyboard height change | ## Steps to Reproduce
Scaffold resizes its body when the keyboard is shown, but this resize happens without animation, so the general feel is very poor. In the example, tap on the white area and you will see how the green area moves instantly while leaving the white area behind. I would expect the green area to follow the keyboard.
```dart
import 'package:flutter/material.dart';
void main() {
runApp(
MyApp(),
);
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
primarySwatch: Colors.blue,
),
home: MyHomePage(),
);
}
}
class MyHomePage extends StatefulWidget {
MyHomePage({Key key}) : super(key: key);
@override
_MyHomePageState createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
@override
Widget build(BuildContext context) {
return Scaffold(
resizeToAvoidBottomInset: true,
body: Column(
children: <Widget>[
Expanded(
child: TextField(),
),
Container(
height: 100,
width: double.infinity,
color: Colors.green,
),
],
),
);
}
}
```
<!-- Finally, paste the output of running `flutter doctor -v` here. -->
```
[✓] Flutter (Channel beta, v1.8.3, on Mac OS X 10.14.6 18G87, locale en-US)
[✓] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
[✓] Xcode - develop for iOS and macOS (Xcode 10.3)
[✓] Android Studio (version 3.5)
[!] IntelliJ IDEA Ultimate Edition (version 2018.1.2)
    ✗ Flutter plugin not installed; this adds Flutter specific functionality.
    ✗ Dart plugin not installed; this adds Dart specific functionality.
[✓] Connected device (1 available)
```
| a: text input,framework,a: animation,f: material design,a: fidelity,c: proposal,P3,team-design,triaged-design | low | Minor |
490,296,691 | go | cmd/vet: prefix in vet output breaks error file format compatibility |
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.13 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN="/Users/mbp/go/bin"
GOCACHE="/Users/mbp/Library/Caches/go-build"
GOENV="/Users/mbp/Library/Application Support/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GONOPROXY="github.com/mirza-s"
GONOSUMDB="github.com/mirza-s"
GOOS="darwin"
GOPATH="/Users/mbp/go"
GOPRIVATE="github.com/mirza-s"
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/darwin_amd64"
GCCGO="gccgo"
AR="ar"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/nt/0hzfppd92jvbk99fhvnhy8ph0000gn/T/go-build729502798=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
Run
```
go vet cmd/main.go
```
### What did you expect to see?
```
cmd/main.go:187:62: undefined: a
```
### What did you see instead?
```
vet: cmd/main.go:187:62: undeclared name: a
```
This appears to be introduced in 38431f1044880b936e35034ded19a6a8bc9faa21 | NeedsInvestigation,Analysis | low | Critical |
490,323,770 | youtube-dl | LiveTV stream from rutube.ru issue | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.09.01. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in http://yt-dl.org/escape.
- Search the bugtracker for similar issues: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Read bugs section in FAQ: http://yt-dl.org/reporting
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a broken site support issue
- [x] I've verified that I'm running youtube-dl version **2019.09.01**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [x] I've searched the bugtracker for similar bug reports including closed ones
- [x] I've read bugs section in FAQ
## Verbose log
<!--
Provide the complete verbose output of youtube-dl that clearly demonstrates the problem.
Add the `-v` flag to your command line you run youtube-dl with (`youtube-dl -v <your command line>`), copy the WHOLE output and insert it below. It should look similar to this:
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] youtube-dl version 2019.09.01
[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
<more lines>
-->
```
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-v', 'https://rutube.ru/play/embed/11175917']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] youtube-dl version 2019.09.01
[debug] Python version 3.4.4 (CPython) - Windows-10-10.0.18362
[debug] exe versions: ffmpeg N-94481-g5ac28e9cc1, ffprobe N-94481-g5ac28e9cc1
[debug] Proxy map: {}
[rutube:embed] 11175917: Downloading options JSON
ERROR: No video formats found
Traceback (most recent call last):
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpbzyg5d3a\build\youtube_dl\YoutubeDL.py", line 796, in extract_info
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpbzyg5d3a\build\youtube_dl\extractor\common.py", line 530, in extract
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpbzyg5d3a\build\youtube_dl\extractor\rutube.py", line 184, in _real_extract
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpbzyg5d3a\build\youtube_dl\extractor\rutube.py", line 89, in _extract_formats
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpbzyg5d3a\build\youtube_dl\extractor\common.py", line 1327, in _sort_formats
youtube_dl.utils.ExtractorError: No video formats found
```
## Description
<!--
Provide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
LiveTV stream above plays fine in Google Chrome browser.
| site-update-request | low | Critical |
490,333,916 | pytorch | Model parallel with DDP gets `Socket Timeout` error when using NCCL, while GLOO works fine | ## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
When training models in a multi-machine multi-GPU setting on a SLURM cluster, if `dist.init_process_group` is called with the `NCCL` backend and my multi-gpu model is wrapped with `DistributedDataParallel` as in [the official tutorial](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html#combine-ddp-with-model-parallelism), a `Socket Timeout` runtime error is raised after the `timeout` (30s in my setting). However, if the group is initialized with the `GLOO` backend, everything works as expected.
Each of the processes holds a model replica, and I have double-checked that the GPUs assigned to each process do not overlap, i.e. process 0 gets GPU{0, 1}, process 1 gets GPU{2, 3}. The parameters of my model are separated into two sets, each residing on one GPU.
## To Reproduce
Steps to reproduce the behavior:
1. Launch several processes (8 and 16 in my experiments), call `dist.init_process_group` with `NCCL` backend;
1. Build a multi-gpu model, make sure GPUs assigned to each process do not overlap;
1. Wrap the model with `DistributedDataParallel` and do not assign `device_ids` or `output_device`. After the `timeout`, a `Runtime Error: Socket Timeout` is raised.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
My codebase basically looks like this:
```python
# fire tasks on SLURM cluster...
os.environ["MASTER_PORT"] = str(port)
os.environ["MASTER_ADDR"] = str(master_ip)
os.environ["WORLD_SIZE"] = str(n_tasks)
os.environ["RANK"] = str(proc_id)
dist.init_process_group(backend=dist.Backend.NCCL, timeout=timedelta(seconds=30))
# ...
class MyModel(nn.Module):
def __init__(self, ..., device0, device1):
# ...
self.part_1.to(device0)
self.part_2.to(device1)
# task0 get GPU{0, 1}, task1 get GPU(2, 3)...
d0 = torch.device(f"cuda:{rank * 2}")
d1 = torch.device(f"cuda:{rank * 2 + 1}")
model = MyModel(..., d0, d1)
# not all parameters are used in each iteration
ddp_model = DistributedDataParallel(model, find_unused_parameters=True)
# ...
```
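A hedged aside on the device mapping (my addition, not from the report): `rank * 2` only addresses valid GPU ordinals if `rank` is the node-local rank. A sketch of a mapping that stays within each node's GPUs, assuming an equal number of processes per node:

```python
def gpu_pair(global_rank, procs_per_node):
    """Map a process to a disjoint pair of node-local GPU ordinals."""
    local_rank = global_rank % procs_per_node
    return local_rank * 2, local_rank * 2 + 1

# e.g. 16 processes over 2 nodes with 8 GPUs each: every node uses
# ordinals 0..7, and no two processes on one node share a GPU.
```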
Invoking DDP did not raise any error; however, after the `timeout` (30s in my setting), I encountered the following error:
### torch-1.1:
```bash
Traceback (most recent call last):
File "../tools/train_val_classifier.py", line 332, in <module>
main()
File "../tools/train_val_classifier.py", line 103, in main
model, model_without_ddp = get_ddp_model(model, devices=(fp_device, q_device))
File ".../quant_prob/utils/distributed.py", line 120, in get_ddp_model
ddp_model = DistributedDataParallel(model, device_ids=devices, find_unused_parameters=True)
File "/envs/r0.3.0/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 286, in __init__
self.broadcast_bucket_size)
File "/envs/r0.3.0/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 410, in _dist_broadcast_coalesced
dist._dist_broadcast_coalesced(self.process_group, tensors, buffer_size, False)
RuntimeError: Socket Timeout
```
### torch-1.2
```bash
Traceback (most recent call last):
File "../tools/train_val_classifier.py", line 332, in <module>
main()
File "../tools/train_val_classifier.py", line 103, in main
model, model_without_ddp = get_ddp_model(model, devices=(fp_device, q_device), debug=CONF.debug)
File "/mnt/lustre/lirundong/Projects/quant-prob/quant_prob/utils/distributed.py", line 128, in get_ddp_model
ddp_model = DistributedDataParallel(model, device_ids=devices, find_unused_parameters=True)
File "/mnt/lustre/lirundong/Program/conda_env/torch-1.2-cuda-9.0/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 298, in __init__
self.broadcast_bucket_size)
File "/mnt/lustre/lirundong/Program/conda_env/torch-1.2-cuda-9.0/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 480, in _distributed_broadcast_coalesced
dist._broadcast_coalesced(self.process_group, tensors, buffer_size)
RuntimeError: Socket Timeout
```
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
If the group is initialized with the `GLOO` backend, training and evaluation work properly: for each machine, GPUs {0, 2, 4, 6} first do some computation, then GPUs {1, 3, 5, 7}. Backpropagation works like DDP on a single-gpu model.
## Environment
Please copy and paste the output from our
[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py)
(or fill out the checklist below manually).
You can get the script and run it with:
```
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```
- PyTorch Version (e.g., 1.0): both 1.1, 1.2
- OS (e.g., Linux): Linux
- How you installed PyTorch (`conda`, `pip`, source):
- torch-1.1: conda
- torch-1.2: compiled from source
- Build command you used (if compiling from source):
```bash
TORCH_CUDA_ARCH_LIST="6.0;6.1;7.0" CUDNN_LIB_DIR=$HOME/.local/cudnn-v7.6.2/lib64 CUDNN_INCLUDE_DIR=$HOME/.local/cudnn-v7.6.2/include python setup.py install
```
- Python version: 3.6.9
- CUDA/cuDNN version:
- torch-1.1: CUDA-9.0, cuDNN-7.5.1
- torch-1.2: CUDA-9.0, cuDNN-7.6.2
- GPU models and configuration: GTX-1080Ti, TitanXP
- Any other relevant information: on SLURM cluster
## Additional context
<!-- Add any other context about the problem here. -->
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera | oncall: distributed,triaged | medium | Critical |
490,361,190 | godot | Tilemap - crash if y-sort active and tiles at 512x512 or more | **Godot version:**
- 3.1.1 (64 bit) - 2D
_Remark: the 32-bit client seems to be able to reach a higher tile count before it simply stops responding_
**OS/device including version:**
- Windows 10
- RTX 2080
- I7 8700
**Issue description:**
If a tilemap is filled via GDScript, exceeds 512x512 tiles, and has y-sort active, it crashes with the following error message:
- ERROR: _get_socket_error: Socket error: 10054
- At: drivers/unix/net_socket_posix.cpp:190
Additional remark: The memory usage gets significantly higher with y-sort active.
**Steps to reproduce:**
The attached project already has the correct setup; if you remove y-sort, it runs without crashing.
But you can also just fill a tilemap via code with a double loop; the error only occurs if y-sort is active.
**Minimal reproduction project:**
[project.zip](https://github.com/godotengine/godot/files/3584355/project.zip) | bug,topic:core,crash | low | Critical |
490,374,355 | PowerToys | [FancyZones] Auto resizing of windows placed side-by-side | # Summary of the new feature/enhancement
Windows (10?) comes with a neat feature that allows windows arranged side-by-side to logically snap together, so that whenever I resize one of them, the other resizes accordingly. This is a built-in feature that can be enabled and disabled in the Multitasking settings of Windows 10 (see screenshot below), and it would be quite nice if PowerToys' FancyZones supported/respected this setting and enabled the same underlying snapping, where applicable, for windows placed side by side in zones.
# Proposed technical implementation details (optional)
n.a.
Screenshot of corresponding Windows 10 Settings:

| Idea-Enhancement,FancyZones-Dragging&UI,Product-FancyZones | medium | Critical |
490,393,460 | kubernetes | API client can't recover when tcp-reset happens | client-go verison is 9.0.0
1. get endpoints use CoreV1 from api server is ok, set timeout 5s
2. apply iptables rule `sudo iptables -A INPUT -s k8scluter -p tcp -j REJECT --reject-with tcp-reset`
3. wait about 15s
4. remove iptables rule
After that, client-go can't communicate with the API server and always gets `Get https://xxx/api/v1/namespaces/prod/endpoints/aaa?timeout=5s: context deadline exceeded` | kind/bug,area/client-libraries,sig/api-machinery,priority/important-longterm,lifecycle/frozen | low | Major
490,434,299 | PowerToys | Grouping Taskbar Icons | # Summary of the new feature/enhancement
I want to be able to group taskbar icons.
# Proposed technical implementation details (optional)
For example, I have Chrome, Firefox, Edge and Opera in my taskbar. I want to group them into a single icon, "Browsers". When I click it (or hover? could be an option), a menu appears with these 4 icons; then I can click and open my desired browser. This would clean up my taskbar, and I could group related icons as well.
| Idea-New PowerToy,Product-Tweak UI Design | medium | Major |
490,442,140 | pytorch | Detaching a distribution's `log_prob` to block gradients only w.r.t its parameters | ## 🚀 Feature
It would be quite useful to have the general ability to compute a 'detached' `log_prob` for any distribution, i.e. blocking all gradient computation w.r.t. that distribution's parameters (but not the value), _without_ resorting to the alternative of constructing an instance of that distribution with explicitly detached parameters.
cc: @fritzo (a late request following our conversation at icml :slightly_smiling_face: )
## Motivation
This would be particularly useful for a variety of stochastic variational inference approaches [1,2], which seek to avoid the _score-function gradient_ component of the estimators, by either ignoring it entirely [1] or by _doubly reparameterising_ the estimator [2] to include it in a form that avoids the larger variance of the score-function gradient estimator.
[1] [Sticking the Landing: Simple, Lower-Variance Gradient Estimators for Variational Inference](https://arxiv.org/abs/1703.09194)
[2] [Doubly Reparameterized Gradient Estimators for Monte Carlo Objectives](https://arxiv.org/abs/1810.04152)
## Pitch
The idea is to have an operation that allows a `log_prob` call to block gradient computation w.r.t the _parameters_, while gradients w.r.t the _value_ still get computed. Simply detaching the entire `log_prob` computation will not satisfy this requirement.
``` python
import torch
import torch.distributions as dist
params = torch.zeros(3,2), torch.ones(3,2) # mu, sigma
pz = dist.Normal(*params)
z = pz.rsample()
# hypothetical setup
lpz = pz.log_prob(z, detached_params=True) # or some similar form
loss = -lpz
```
## Alternatives
While one solution for this is to just create a new instance of the distribtuion in question with pre-detached parameters, for more complex models (as in those used with Pyro), this can get hairy quite quickly.
``` python
import torch
import torch.distributions as dist
params = torch.zeros(3,2), torch.ones(3,2) # mu, sigma
params_ = tuple(p.detach() for p in params)
pz = dist.Normal(*params)
pz_ = dist.Normal(*params_)
z = pz.rsample()
lpz = pz_.log_prob(z) # note: using pz_ to score
loss = -lpz
```
It also doesn't seem feasible to wrap the existing `log_prob` method, as there does not appear to be a way to access all the parameters of an arbitrary distribution with a view to detaching them when required.
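For intuition (illustrative math, not an existing API): the term such a flag would zero out is the score, the derivative of `log_prob` w.r.t. the parameters at a fixed value `z`. For a Normal it has a closed form, which a finite-difference check confirms:

```python
import math

def normal_log_prob(z, mu, sigma):
    """Closed-form log N(z; mu, sigma)."""
    return -0.5 * ((z - mu) / sigma) ** 2 - math.log(sigma) - 0.5 * math.log(2 * math.pi)

def score_mu(z, mu, sigma):
    # d/d(mu) log N(z; mu, sigma): the score term that [1] drops and
    # [2] doubly reparameterises; a detached log_prob would make it zero.
    return (z - mu) / sigma ** 2

# Finite-difference check that the closed form is the parameter gradient.
z, mu, sigma, eps = 0.7, 0.1, 1.3, 1e-6
fd = (normal_log_prob(z, mu + eps, sigma) - normal_log_prob(z, mu - eps, sigma)) / (2 * eps)
```

The gradient w.r.t. `z` would be left untouched, which is exactly the split the estimators in [1] and [2] rely on.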
cc @vincentqb @fritzo @neerajprad @alicanb @vishwakftw | module: distributions,triaged | medium | Major |
490,448,565 | pytorch | [Feature request] modified Cholesky decomposition | It would be useful to have a modified Cholesky decomposition implemented in PyTorch.
This allows one to perform Cholesky decomposition on singular covariance matrices.
A work-around is to add a small multiple of the identity matrix and run `torch.cholesky`, but that adds an extra hyper-parameter the user needs to tune.
One implementation is here:
http://faculty.washington.edu/mforbes/doc/pymmf/_build/_generated/api/mmf.math.linalg.cholesky.cholesky_.html
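For reference, here is a pure-Python sketch of the jitter work-around described above (illustrative only; not the requested modified Cholesky, and the `eps` schedule is exactly the hyper-parameter one would like to avoid):

```python
import math

def cholesky(a):
    """Plain Cholesky on a list-of-lists; raises ValueError if not positive definite."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = a[i][i] - s
                if d <= 0:
                    raise ValueError("matrix is not positive definite")
                L[i][j] = math.sqrt(d)
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

def cholesky_with_jitter(a, eps=1e-10, max_tries=20):
    """Retry with a growing multiple of the identity until factorization succeeds."""
    n = len(a)
    jitter = 0.0
    for _ in range(max_tries):
        try:
            return cholesky([[a[i][j] + (jitter if i == j else 0.0)
                              for j in range(n)] for i in range(n)])
        except ValueError:
            jitter = eps if jitter == 0.0 else jitter * 10
    raise ValueError("jitter failed to make the matrix positive definite")
```

A modified Cholesky (e.g. Gill-Murray style) would instead perturb only the diagonal entries that need it, with no user-chosen `eps`.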
cc @ezyang @gchanan @zou3519 @vishwakftw @SsnL @jianyuh | high priority,triaged,enhancement,module: linear algebra | low | Major |
490,474,853 | PowerToys | Alt+Drag or Windows+Drag to move windows | ### Summary of the new feature/enhancement
Moving windows using the mouse relies on hitting a small target, especially on high-resolution screens. Add an option to move entire windows by alt-dragging or windows-dragging them, anywhere on the window except perhaps the resize area.
### Proposed technical implementation details
Stefan Sundin has done great work on AltDrag: https://stefansundin.github.io/altdrag/ . With permission, I'd use this as an initial reference.
Alt+Drag or Windows+Drag could be implemented as part of FancyZones, activated with a toggle. | Help Wanted,Idea-New PowerToy,Product-Window Manager | high | Critical |
490,489,986 | flutter | Dart SDK libraries and Flutter for web | ### Background
An *unsupported library* either is missing an implementation in the platform's libraries.json file, or has an implementation where all members throw when called.
The only mechanism for conditionally including code is config-specific imports and exports. The mechanism for selecting between them is the presence/absence of an *unsupported library*.
### Specific Issues
- `dart:io` provides a reasonable set of functionality that we currently rely on in the framework. Specifically, we use the `HttpClient` to make network requests, `Platform` to detect the initial non-overridden platform, and `File` to transfer bytes across certain compute boundaries, like `Isolates` and through `PlatformChannels`. See also https://github.com/flutter/flutter/issues/32220, https://github.com/flutter/flutter/issues/36281, https://github.com/dart-lang/sdk/issues/35969
- The existing dart4web tooling refuses to compile applications to the web that have an *unsupported library* in the transitive closure, including `dart:isolate` and `dart:io`. Under this restriction, almost no current flutter applications will compile to the web unless all dependencies are rewritten to use config specific imports. See also https://github.com/dart-lang/build/pull/2420.
- The analyzer does not support having multiple analysis summaries. Thus either dart libraries that exist are displayed as missing, or dart libraries that are missing are displayed as existing. See also https://github.com/flutter/flutter/issues/35588
- The analyzer is not capable of analyzing code that uses config specific imports beyond the default import case. Because it does not analyze other paths, the user is responsible for verifying that the interfaces are used in a compatible way. For example, an API incompatibility that will not be detected until compile time:
a.dart
```
import 'b.dart'
    if (dart.library.io) 'c.dart';
void main() {
bar(1); // No error
}
```
b.dart
```
void bar(int x) { }
```
c.dart
```
import 'dart:io';
void bar(String x) { }
```
- The analyzer is not capable of analyzing code that uses config specific exports beyond the default export case. Config specific exports are required if a type from an *unsupported library* is a part of a public API. For example, to include `File` in an interface conditionally:
a.dart
```
export 'b.dart'
    if (dart.library.io) 'c.dart';
```
b.dart
```
void foo(Object file) {}
```
c.dart
```
import 'dart:io';
void foo(File file) {}
```
This relies on the fact that it is always safe to substitute the two APIs, but the caller will not get any information about which implementation is used, or when to use it.
- Pub is not capable of distinguishing between a library that conditionally supports Web/Flutter/Flutter for Web/VM. Specifically see https://github.com/flutter/flutter/issues/35588#issuecomment-524373451
### Library Delta
Due to the additional inclusion of web specific libraries that we can't currently remove, the delta between a Flutter for Mobile/Desktop and Flutter for Web is currently quite large. The complexity is entirely placed on our users to deal with:
Library | Web | Mobile/Desktop
-- | -- | --
ui | ✓ | ✓
async | ✓ | ✓
core | ✓ | ✓
collection | ✓ | ✓
typed_data | ✓ | ✓
developer | ✓ | ✓
convert | ✓ | ✓
math | ✓ | ✓
wasm | x | ✓
ffi | x | ✓
io | x | ✓
isolate | x | ✓
js | ✓ | x
js_util | ✓ | x
html | ✓ | x
web_audio | ✓ | x
web_gl | ✓ | x
indexed_db | ✓ | x
web_sql | ✓ | x
svg | ✓ | x
mirrors | x | x
The only way to write an app, plugin, or package that works across all platforms without heavy use of config-specific imports/exports is to _only_ touch libraries supported on all platforms. However, this represents a lowest-common-denominator strategy, which is against the core principles of Flutter.
### Comparison to method/message channels
Flutter already deals with platform specific features using platform channels. An asynchronous message is sent to the platform side, where platform code *may* respond (for example, a camera), or it might go into the void. The application author can choose to ignore the error or make a requirement, but Flutter itself does not make that choice.
The introduction of the web and web libraries has put us in a situation where we are deciding for the application author.
### Problems with `dart:html`
Due to the highly dynamic nature of the browser DOM, any usage of `dart:html` will transitively pull in almost all of the dart core libraries. These libraries are also somewhat intertwined with the implementation of dart2js, the production JavaScript compiler. This adds additional code size to all flutter applications, even though the only use case may be within the web implementation of dart:ui. The only alternative would be to use JavaScript interop, but this again pulls in `dart:html` transitively.
`dart:html` is also generated from a chrome IDL and may not be accurate for a given browser/version. It is somewhat equivalent to a code generated `dart:android` or `dart:ios` library.
### Rock and a Hard Place
It seems as if we either need dramatically better support for the config specific import or other conditional compilation feature, or we need to standardize the set of dart SDK libraries (even if some methods are unimplemented).
cc @Hixie @tvolkert @goderbauer @ferhatb | framework,dependency: dart,customer: crowd,platform-web,P2,team: skip-test,team-web,triaged-web | low | Critical |
490,501,860 | rust | Ambiguity between type ascription and absolute paths | I am not sure, but I think the following program should be valid:
```rust
type X = ();
fn main() {
let ():::X = ();
}
```
It currently fails with
```
error: expected identifier, found `:`
--> /home/matklad/tmp/main.rs:4:13
|
4 | let ():::X = ();
| ^ expected identifier
```
because we don't try to decompose `::` token.
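For reference, the workaround today (my own example, not from the report) is to put whitespace between the ascription colon and the path so the parser never sees a fused `::` token:

```rust
type X = ();

// `let ():::X = ();` currently fails to parse because the `::` token is
// not decomposed; separating the ascription colon from the path works:
fn ascribe() -> crate::X {
    let (): crate::X = ();
}

fn main() {
    ascribe();
}
```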
I am not sure what should be the behavior of
```rust
type X = ();
fn main() {
let Y:::X = ();
}
```
but I lean towards allowing it as well, looks like it requires only constant look-ahead to handle.
Anyway, `rg ':::' --type rust` shows that we don't have a test here.
@petrochenkov what are your thoughts on this? | A-parser,T-lang | low | Critical |
490,507,785 | kubernetes | High system load/CPU utilization with trivial liveness and readiness exec probes | **What happened**:
Using trivial probes significantly contributes to CPU/system load.
**What you expected to happen**:
Trivial probes should be performant
**How to reproduce it (as minimally and precisely as possible)**:
1. Create a cluster (minimal reproduction done on an m5.large ec2 instance, creating a single node cluster via kubeadm with a deployment specifying a 2 pod replica scheduled on the single node. Reproduction steps are available for more complex setups)
2. Measure system load/CPU utilization
3. Deploy the following deployment
```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
k8s-app: diagnostic
name: diagnostic
namespace: default
spec:
progressDeadlineSeconds: 600
replicas: 2
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: diagnostic
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
labels:
k8s-app: diagnostic
spec:
containers:
- name: diagnostic
image: ubuntu:18.04
command:
- /bin/bash
- -c
- touch /tmp/healthy; sleep 3600
imagePullPolicy: IfNotPresent
readinessProbe:
exec:
command:
- cat
- /tmp/healthy
initialDelaySeconds: 10
periodSeconds: 2
failureThreshold: 120
livenessProbe:
exec:
command:
- cat
- /tmp/healthy
initialDelaySeconds: 10
periodSeconds: 2
failureThreshold: 120
restartPolicy: Always
nodeSelector:
node-role.kubernetes.io/master: ""
tolerations:
- operator: "Exists"
```
4. Remeasure system load/CPU utilization
**Anything else we need to know?**:
Included is a screenshot. The beginning shows the node in "steady state" without the deployment running. This is at about 10% utilization. The first blip is deploying a similar deployment with no probes specified, still at 10%. The second blip is deploying a similar deployment but with just the livenessprobe specified, this is about 17%. The third blip is deploying the provided deployment with both liveness and readiness probes specified, this is about 34%.

Further testing shows that CPU utilization scales linearly with pod replica size.
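A commonly suggested mitigation (my addition, not from the report, and the probe semantics differ slightly) is to avoid spawning a process per check by replacing the `exec` probe with an `httpGet` or `tcpSocket` probe where the workload allows it, and/or lengthening `periodSeconds`:

```
readinessProbe:
  httpGet:            # handled in-process by the kubelet; no container exec per check
    path: /healthz    # assumes the app exposes a health endpoint
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 10   # less frequent checks also reduce load for exec probes
```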
**Environment**:
- Kubernetes version (use `kubectl version`): 1.15.3
- Cloud provider or hardware configuration: AWS (m5.large)
- OS (e.g: `cat /etc/os-release`): Ubuntu 18.04.3
- Kernel (e.g. `uname -a`): 5.0.0-25-generic
- Install tools: kubeadm 1.15.3
- Network plugin and version (if this is a network-related bug): VPC CNI v1.5.3 (reproduced with kuberouter as well)
- Others: Docker 18.09.2
| kind/bug,priority/backlog,sig/node,help wanted,triage/accepted | high | Critical |
490,516,209 | go | runtime: lock cycle between mheap_.lock and gcSweepBuf.spineLock | As the title says, there's a runtime lock cycle between `mheap_.lock` and `gcSweepBuf.spineLock`. The cycle is effectively of the following form:
On one thread:
1. `(*mheap).alloc_m` (which runs on the system stack) allocates a span and calls into `(*gcSweepBuf).push`.
1. `(*gcSweepBuf).push` acquires the spine lock.
Meanwhile on another thread:
1. `deductSweepCredit` (on a g's stack) calls into `sweepone`, then `(*mspan).sweep`.
1. `(*mspan).sweep` calls into `(*gcSweepBuf).push`.
1. `(*gcSweepBuf).push` acquires the spine lock.
1. `(*gcSweepBuf).push` calls into either `persistentalloc` or `unlock`.
1. In the prologue of either of these function, a stack growth is triggered which acquires `mheap_.lock`.
Note that `(*gcSweepBuf).push` would have the potential for self-deadlock in the `alloc_m` case, but because it runs on the system stack, stack growths won't happen.
This must be an extremely rare deadlock because `git` history indicates that it's been around since 2016 and we've never received a single bug report (AFAICT). With that being said, if we want any sort of automated lock cycle detection, we need to fix this.
It's unclear to me what the right thing to do here is. The "easy" thing would be make `(*gcSweepBuf).push` run on the system stack, that way it'll never trigger a stack growth, but this seems wrong. It feels better to instead only acquire `spineLock` after `mheap_.lock`, but this may not be possible. My concern is that the allocated span's `sweepgen` could end up skewed with respect to the `gcSweepBuf` it's in, but I haven't looked closely at the concurrency requirements of the relevant pieces.
CC @aclements | NeedsInvestigation,compiler/runtime | low | Critical |
490,525,859 | TypeScript | Consider supporting an array for package.json types | ## Search Terms
package.json types
## Suggestion
Allow the package.json `"types"` field to be a list of strings.
## Use Cases
I will acknowledge that the primary use case inspiring this suggestion is a bit flimsy (which I will address below), but allowing the `"types"` field to be a list can potentially solve a current issue with monorepo development, by giving the compiler fallbacks for either the development / tooling stage or the build stage.
## Examples
Take a monorepo project structure:
```
/repo/tsconfig.base.json
/repo/package.json
/repo/packages/App/package.json
/repo/packages/App/src/tsconfig.json
/repo/packages/App/src/index.ts
/repo/packages/LibA/package.json
/repo/packages/LibA/src/tsconfig.json
/repo/packages/LibA/src/index.ts
/repo/packages/LibB/package.json
/repo/packages/LibB/src/tsconfig.json
/repo/packages/LibB/src/index.ts
```
```json
// /repo/tsconfig.base.json
{
"compilerOptions": {
"baseUrl": "./",
"module": "esnext",
"moduleResolution": "node",
"paths": {
"@repo/*": ["node_modules/@repo/*/src"]
}
}
}
```
For each packages `src/tsconfig.json`:
```json
{
"extends": "../../../tsconfig.base.json"
}
```
"App" isn't really an issue because nothing imports it. But the libs could be used in multiple ways, and can also be published. For publishing, a lib might have a package.json file like:
```json
{
"name": "@repo/LibA",
"version": "0.0.1",
"main": "index.js",
// This is built
"types": "index.d.ts"
}
```
With this configuration, builds work fine. However, during development, we don't want to reference a compiled definition file for types. If we make changes to a lib, we want to see those changes without having to compile and refresh any caches or a TypeScript server.
To accomplish this, we can just use a clean script to clear out any of the build artifacts during development, but there's a slight problem: Without the definition file (e.g. `/repo/packages/LibA/index.d.ts`), imports aren't exactly what we're looking for by default. For example, without the definition file, if I try to auto import, either in VSCode or WebStorm, it looks like this:
```ts
// /repo/packages/App/src/index.ts
import {Foo} from '@repo/LibA/src';
let f: Foo // Auto import triggered
```
So now I have to go trim off the `/src` b/c TS is looking for types, and isn't finding them. I'm actually not sure why it adds the `/src`, even though we have path mappings that would resolve to a shorter path. What I do know is that we can fix this during development by changing the `"types"` field:
```json
{
"types": "src/index.ts"
}
```
According to descriptions of the module resolution algorithm and looking through the code that does this resolution, even for `DtsOnly`, TypeScript files are fine. Now auto imports will default to the expected behavior:
```ts
import {Foo} from '@repo/LibA';
let f: Foo
```
I think now it's probably obvious why I think a `"types"` array would help solve this. If types were a list, we could clean build files during development and still fallback to sources for types.
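Concretely, the hypothetical package.json might read (illustrative syntax only — today's compiler accepts a single string, and the fallback order here is my assumption):

```json
{
  "name": "@repo/LibA",
  "main": "index.js",
  "types": ["index.d.ts", "src/index.ts"]
}
```

Resolution would try each entry in order, falling back to the sources whenever the built declaration file has been cleaned away during development.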
There are a lot of potential issues with this, obviously. I think the most I can probably hope for is to draw attention to yet another issue with TypeScript and monorepos. I've read through most of [this long discussion](https://github.com/microsoft/TypeScript/issues/25376), and monorepo woes are obviously something the team is aware of.
Some issues I can think of:
- It might encourage authors to export types in ways that aren't encouraged by the TS team or the community.
- If multiple type files are present, it could incur a significant performance penalty from excessive parsing and declaration merging
- It's a non-targeted solution for a very targeted problem that few people probably contend with anyways.
However, by bringing it up, I figure that the experts here will understand the implications and potential solutions on a much deeper and more intuitive level. Perhaps there is some way of supporting this without adding anything gnarly or changing the `"types"` field. Potentially a `"typesFallback"` field could be added. I don't know what's worse, honestly.
Thank you for your time.
## Checklist
My suggestion meets these guidelines:
* [✓] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [✓] This wouldn't change the runtime behavior of existing JavaScript code
* [✓] This could be implemented without emitting different JS based on the types of the expressions
* [✓] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [ ] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Major |
490,526,284 | pytorch | [jit] `random` module support | This is used in MaskRCNN (specifically `random.choice`), we could implement the subset of `random` that's not covered in `torch` already
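For illustration (a sketch, not TorchScript): `random.choice` can be lowered onto an integer-sampling primitive, which torch already has in `torch.randint`. The same shape in pure Python:

```python
import random

def choice_via_randint(seq):
    # Lower random.choice onto an integer-sampling primitive (the role
    # torch.randint could play in TorchScript): draw an index, subscript.
    if len(seq) == 0:
        raise IndexError("cannot choose from an empty sequence")
    return seq[random.randrange(len(seq))]

picked = choice_via_randint([3, 1, 4])
```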
cc @suo | oncall: jit,triaged,jit-backlog | low | Major |
490,533,817 | PowerToys | Suggested PowerToy: Move windows partially outside of screen by hitting screen border with mouse | # Summary of the new feature/enhancement
This is a feature ("Auto Scroll") available by ThinkVantage Tools (from Lenovo), which can (unfortunately) only be installed on Lenovo/IBM machines, but I think there is no technical limitation of doing so (all you really need is a mouse hook). I'd like to have this part of PowerToys so I can use it on every machine I have. In my experience it is very unintrusive and sometimes really helpful.
When you have a focused movable window that is partially outside the screen (or the bounding box of all screens in case there is more than one) and you move your mouse (with no mouse button pressed) to the border of the screen and "continue moving", the window will move in. So if you try to move the mouse 10 more pixels against the border, the window will move 10 more pixels into the screen (but the mouse pointer remains at the same absolute position on the screen). If you continue trying to move the mouse, eventually the whole window will be visible and the feature automatically "turns off" (as the precondition of having a window partially outside the screen is no longer valid).
When moving the mouse into a corner of the screen, you can simulteneously move the window both horizontally and vertically.
# Proposed technical implementation details (optional)
You will probably need a mouse hook to be able to detect mouse movement that does not move the cursor. Plus some winapi calls like SetWindowPos or SetWindowPlacement to move the window around, and GetForegroundWindow and GetWindowRect GetWindowLong to find out which window has focus, where it is on screen, and whether it is movable. But I guess you at Microsoft know better what API is available anyway :) | Idea-New PowerToy,Product-Window Manager,Product-Mouse Utilities | low | Minor |
490,540,872 | flutter | default Xcode project requires a tripwire to defeat timestamp analysis | (Hypothetically the iOS project has the exact same issue)
The macOS Xcode project requires us to unconditionally touch a file after `flutter assemble` and list this file as an input in order to defeat the timestamp analysis.
While the set of input/output files is complete, the flutter build also depends on the current configuration (build mode, target file). Thus, within a configuration, the timestamp analysis will skip the script phase and it will be correct. Changing configurations would require a clean, since Xcode thinks that the script phase is still safe to skip.
This isn't an issue with the flutter assemble caching, as long as the flutter tool is invoked it will correctly determine whether the build can be skipped or not. The tripwire file allows us to delegate to this caching strategy instead. | tool,a: build,P3,team-tool,triaged-tool | low | Minor |
490,554,287 | rust | Tracking Issue for Cross Compiling Doctests | This is an issue to track the functionality added by [PR#60387](https://github.com/rust-lang/rust/pull/60387), and also by extension [this PR](https://github.com/rust-lang/cargo/pull/6892) in cargo.
PR#60387 adds three options to rustdoc:
+ `--enable-per-target-ignores` which adds a compiletest-like mechanism for ignoring platform on a substring basis (eg `ignore-foo`)
+ `--runtool` and `--runtool-arg` for specifying an external program and its arguments, to which the compiled doctest binary will be passed as the final argument
The companion PR adds a flag to cargo for enabling this feature as well as cross-compiling doctests and parsing runtool tokens from a `.cargo/config`.
Eventually another PR for `x.py` can either enable this unconditionally or add a flag for testing the extensive documentation in the standard library on other platforms.
This solves cargo issue [#6460](https://github.com/rust-lang/cargo/issues/6460) | T-rustdoc,A-cross,B-unstable,C-tracking-issue,A-doctests,requires-nightly | low | Major |
490,556,020 | PowerToys | Keyboard shortcut for switching between windows of the same program | Windows currently does not have a shortcut for switching between windows of the same program/app. On Mac it is <kbd>CMD</kbd> + <kbd>`</kbd>. I think it should be supported on Windows out of the box, but maybe adding it in PowerToys would be faster?
Right now the alternative is to install a third-party tool called [Easy Window Switcher](https://neosmart.net/EasySwitch/).
| Help Wanted,Idea-New PowerToy,Product-Window Manager | medium | Critical |
490,558,436 | PowerToys | Maximize/Full Screen window within a zone (virtual monitor) | Maximize window within the zone
Maximizing the window currently maximizes it to the whole desktop. It would be useful to have the ability to maximize within the zone.
| Idea-Enhancement,Product-Window Manager,Product-FancyZones | high | Critical |
490,580,544 | bitcoin | policy: allow RBF descendant carveout whenever conflicts exist, not just when number of conflicts == 1 | Comments left over from #16421.
From aj:
> Can't we actually cope with merging packages this way too though? If you've got a tx that has parents A, B, C; and that conflicts with tx's X (parent A) and Y (parent B), then beforehand you had:
> ```
> ..., A, X, [children X]
> ..., B, Y, [children Y]
> ```
> (maybe some tx's were descendants of both X and Y, but it's worse if there weren't any like that) and afterwards you have:
> ```
> ..., A, tx
> ..., B, tx
> ```
> You don't have C in the mempool because that would fail the "replacement-adds-unconfirmed" test later.
> So you know tx's ancestor checks pass, because they're actually worked out; you know A,B's ancestor checks pass because they don't change, tx's descendant check is trivial, and you know A,B and all their parent's descendant checks passed, because they did so when X and Y were there -- as far as sizes go, if they were all at their limit, then the max size for tx is the minimum of the descendant sizes of each tx that was replaced.
> So I think you could replace the `setConflicts.size() == 1` test with:
>```
> if (!setConflicts.empty()) {
> auto conflict = setIterConflicting.begin();
> assert(conflict != setIterConflicting.end());
> uint64_t bump = (*conflict)->GetSizeWithDescendants();
> while(++conflict != setIterConflicting.end()) {
> bump = std::min(bump, (*conflict)->GetSizeWithDescendants());
> }
> nLimitDescendants += 1;
> nLimitDescendantSize += bump;
> }
> ```
From sipa
> I haven't thought hard about the effect on potential DoS issues this policy change may have. | Brainstorming,TX fees and policy | low | Major |
490,585,135 | flutter | A Cupertino widget that has the option to disable user interaction should be disabled and turn gray when its route is no longer the current route | Most noticeable when an action sheet / share sheet / alert view pops up.
The behavior can be observed on iOS 12 so it's not an iOS 13 feature. But the new iOS modal does make it more noticeable when one drags the current route down to dismiss it.
### Gif (may take a while to load):

Maybe related: https://github.com/flutter/flutter/issues/2222 | framework,a: fidelity,f: cupertino,f: routes,c: proposal,P2,team-design,triaged-design | low | Minor |
490,600,010 | pytorch | (PyTorch1.1 and 1.2) RuntimeError: Can't detach views in-place. Use detach() instead | In my code, run `optimizer.zero_grad()`, there is an error here:
```
File "/home/backend/.local/lib/python3.6/site-packages/torch/optim/optimizer.py", line 164, in zero_grad
p.grad.detach_()
RuntimeError: Can't detach views in-place. Use detach() instead
```
Should I modify `detach_()` to `detach()` here?
cc @ezyang @gchanan @zou3519 @SsnL @albanD @vincentqb | module: autograd,module: optimizer,triaged,enhancement | low | Critical |
490,621,209 | flutter | PageController nextPage multiple calls | If PageController.nextPage() is called multiple times before the first call was due to complete it causes an unexpected animation and the final page in view is not what was expected.
For example, if nextPage is exposed to the user through a button and the user presses the button twice in short succession then I would expect the PageView to skip on two pages however, what we see is a stuttering of the animation and the view moves on only 1 page.
### Example

### Steps to Reproduce
```dart
import 'package:flutter/material.dart';

void main() => runApp(MyPageView());

class MyPageView extends StatefulWidget {
  MyPageView({Key key}) : super(key: key);

  _MyPageViewState createState() => _MyPageViewState();
}

class _MyPageViewState extends State<MyPageView> {
  PageController _pageController;

  @override
  void initState() {
    super.initState();
    _pageController = PageController();
  }

  @override
  void dispose() {
    _pageController.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: Scaffold(
        body: SafeArea(
          child: Column(
            mainAxisSize: MainAxisSize.max,
            children: [
              Container(
                width: 300,
                height: 300,
                child: PageView(
                  controller: _pageController,
                  children: [
                    Container(color: Colors.red),
                    Container(color: Colors.grey),
                    Container(color: Colors.blue),
                    Container(color: Colors.yellow),
                    Container(color: Colors.black),
                    Container(color: Colors.blue),
                    Container(color: Colors.white),
                    Container(color: Colors.purple),
                    Container(color: Colors.red),
                    Container(color: Colors.grey),
                    Container(color: Colors.blue),
                    Container(color: Colors.yellow),
                    Container(color: Colors.black),
                    Container(color: Colors.blue),
                    Container(color: Colors.white),
                    Container(color: Colors.purple),
                  ],
                ),
              ),
              RaisedButton(
                color: Colors.white,
                onPressed: () {
                  if (_pageController.hasClients) {
                    _pageController.nextPage(
                      duration: const Duration(milliseconds: 500),
                      curve: Curves.easeInOut,
                    );
                  }
                },
                child: Text('Next'),
              ),
              RaisedButton(
                color: Colors.white,
                onPressed: () {
                  if (_pageController.hasClients) {
                    _pageController.previousPage(
                      duration: const Duration(milliseconds: 500),
                      curve: Curves.easeInOut,
                    );
                  }
                },
                child: Text('Previous'),
              ),
            ],
          ),
        ),
      ),
    );
  }
}
```
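One possible client-side workaround (an untested sketch of mine, not from the report) is to track the intended page in the widget state rather than relying on the controller's in-flight position:

```dart
// Hypothetical additions to _MyPageViewState; `_pageCount` matches the
// 16 pages in the example above.
int _targetPage = 0;
static const int _pageCount = 16;

void _goToNextPage() {
  if (_targetPage < _pageCount - 1) {
    _targetPage++;
  }
  _pageController.animateToPage(
    _targetPage,
    duration: const Duration(milliseconds: 500),
    curve: Curves.easeInOut,
  );
}
```

With this, each button press advances the target by one page even if the previous animation has not finished.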
### flutter doctor -v
```
[✓] Flutter (Channel beta, v1.8.3, on Microsoft Windows [Version 10.0.18362.295], locale en-IE)
• Flutter version 1.8.3 at C:\android\flutter
• Framework revision e4ebcdf6f4 (6 weeks ago), 2019-07-27 11:48:24 -0700
• Engine revision 38ac5f30a7
• Dart version 2.5.0 (build 2.5.0-dev.1.0 0ca1582afd)
[✓] Android toolchain - develop for Android devices (Android SDK version 29.0.0)
• Android SDK at C:\Users\Ger\AppData\Local\Android\sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-29, build-tools 29.0.0
• Java binary at: C:\Program Files\Android\Android Studio\jre\bin\java
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
• All Android licenses accepted.
[✓] Android Studio (version 3.4)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin version 37.0.1
• Dart plugin version 183.6270
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
[✓] VS Code (version 1.37.1)
• VS Code at C:\Users\Ger\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.4.0
[✓] Connected device (1 available)
• Android SDK built for x86 • emulator-5556 • android-x86 • Android 5.1.1 (API 22) (emulator)
• No issues found!
```
| framework,a: animation,f: scrolling,has reproducible steps,P2,found in release: 2.2,found in release: 2.4,team-framework,triaged-framework | low | Minor |
490,629,441 | PowerToys | [FancyZones] swap window location between zones when drag and dropping windows on each other (or using keyboard) | PowerToys FancyZones: swap window location between zones when drag and dropping windows on each other.
For example, I'm using a 3-column zone layout. I want it to work so that when I drag a window from the right zone and drop it on the left zone, the two windows swap locations automatically.
I added a diagnostic file and recreated the issue/requests for this Feedback: https://aka.ms/AA6079y | Idea-Enhancement,FancyZones-Dragging&UI,Product-FancyZones | medium | Major |
490,636,745 | pytorch | PyTorch master cannot build with compute capability 3.0 under macOS 10.13.6 with Nvidia GT 750M | ## 🐛 Bug
Building PyTorch master HEAD under macOS 10.13.6 with a CC 3.0 Nvidia GT 750M, CUDA 10.0, and cuDNN 7.4.2 fails with `error: identifier "__ldg" is undefined`, because `__ldg` is only supported on CC 3.5 and above.
## To Reproduce
Steps to reproduce the behavior:
1. macOS 10.13.6 (17G8030) with Nvidia GT 750M (CC 3.0)
2. CUDA Web Driver 410.130
3. GPU Driver Version 387.10.10.10.40.130
4. CUDA 10.0
5. cuDNN 7.4.2
6. Xcode 9.4.1
7. Clone the master of pytorch and build with GPU enabled: `MACOSX_DEPLOYMENT_TARGET=10.13 CC=clang CXX=clang++ python setup.py install`
Build failed with following messages:
[ 51%] Building NVCC (Device) object caffe2/CMakeFiles/torch.dir/operators/torch_generated_abs_op.cu.o
/Users/sfeng/GH/pytorch_master/caffe2/operators/channelwise_conv3d_op_cudnn.cu(102): error: identifier "__ldg" is undefined
/Users/sfeng/GH/pytorch_master/caffe2/operators/channelwise_conv3d_op_cudnn.cu(123): error: identifier "__ldg" is undefined
/Users/sfeng/GH/pytorch_master/caffe2/operators/channelwise_conv3d_op_cudnn.cu(102): error: identifier "__ldg" is undefined
detected during instantiation of "void caffe2::DepthwiseConv3dGPUKernelNCHW(caffe2::DepthwiseArgs, const T *, const T *, T *, int) [with T=float]"
(380): here
/Users/sfeng/GH/pytorch_master/caffe2/operators/channelwise_conv3d_op_cudnn.cu(123): error: identifier "__ldg" is undefined
detected during instantiation of "void caffe2::DepthwiseConv3dGPUKernelNCHW(caffe2::DepthwiseArgs, const T *, const T *, T *, int) [with T=float]"
(380): here
/Users/sfeng/GH/pytorch_master/caffe2/operators/channelwise_conv3d_op_cudnn.cu(184): error: identifier "__ldg" is undefined
detected during instantiation of "void caffe2::DepthwiseConv3dBackpropFilterGPUKernelNCHW(caffe2::DepthwiseArgs, const T *, const T *, T *, int) [with T=float]"
(505): here
/Users/sfeng/GH/pytorch_master/caffe2/operators/channelwise_conv3d_op_cudnn.cu(303): error: identifier "__ldg" is undefined
detected during instantiation of "void caffe2::DepthwiseConv3dBackpropInputGPUKernelNCHW(caffe2::DepthwiseArgs, const T *, const T *, T *, int) [with T=float]"
(515): here
6 errors detected in the compilation of "/var/folders/wm/5_m2zvqj1tv_gv177yjkq_kr0000gn/T//tmpxft_00011df9_00000000-6_channelwise_conv3d_op_cudnn.cpp1.ii".
CMake Error at torch_generated_channelwise_conv3d_op_cudnn.cu.o.Release.cmake:281 (message):
Error generating file
/Users/sfeng/GH/pytorch_master/build/caffe2/CMakeFiles/torch.dir/operators/./torch_generated_channelwise_conv3d_op_cudnn.cu.o
make[2]: *** [caffe2/CMakeFiles/torch.dir/operators/torch_generated_channelwise_conv3d_op_cudnn.cu.o] Error 1
make[2]: *** Waiting for unfinished jobs....
/Users/sfeng/GH/pytorch_master/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h(149): warning: missing return statement at end of non-void function "Eigen::internal::ptrue(const Packet &) [with Packet=half2]"
/Users/sfeng/GH/pytorch_master/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h(149): warning: missing return statement at end of non-void function "Eigen::internal::ptrue(const Packet &) [with Packet=half2]"
make[1]: *** [caffe2/CMakeFiles/torch.dir/all] Error 2
make: *** [all] Error 2
Traceback (most recent call last):
File "setup.py", line 759, in <module>
build_deps()
File "setup.py", line 320, in build_deps
cmake=cmake)
File "/Users/sfeng/GH/pytorch_master/tools/build_pytorch_libs.py", line 59, in build_caffe2
cmake.build(my_env)
File "/Users/sfeng/GH/pytorch_master/tools/setup_helpers/cmake.py", line 333, in build
self.run(build_args, my_env)
File "/Users/sfeng/GH/pytorch_master/tools/setup_helpers/cmake.py", line 143, in run
check_call(command, cwd=self.build_dir, env=env)
File "/Users/sfeng/anaconda3/envs/pytorch/lib/python3.7/subprocess.py", line 347, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '8']' returned non-zero exit status 2.
## Expected behavior
Build succeeds and runs properly.
## Environment
Please copy and paste the output from our
[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py)
Collecting environment information...
/Users/sfeng/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/cuda/__init__.py:132: UserWarning:
Found GPU0 GeForce GT 750M which is of cuda capability 3.0.
PyTorch no longer supports this GPU because it is too old.
The minimum cuda capability that we support is 3.5.
warnings.warn(old_gpu_warn % (d, name, major, capability[1]))
PyTorch version: 1.3.0a0+a332583
Is debug build: No
CUDA used to build PyTorch: 10.0
OS: Mac OSX 10.13.6
GCC version: Could not collect
CMake version: version 3.14.0
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.0.130
GPU models and configuration: GeForce GT 750M
Nvidia driver version: 1.1.0
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy==1.17.1
[pip] pytorch-sphinx-theme==0.0.24
[pip] torch==1.3.0a0+a332583
[pip] torchvision==0.5.0a0+0bd7080
[conda] blas 1.0 mkl
[conda] mkl 2019.4 233
[conda] mkl-include 2019.4 233
[conda] mkl-service 2.0.2 py37h1de35cc_0
[conda] mkl_fft 1.0.14 py37h5e564d8_0
[conda] mkl_random 1.0.2 py37h27c97d8_0
[conda] pytorch-sphinx-theme 0.0.24 dev_0 <develop>
[conda] torch 1.3.0a0+a332583 pypi_0 pypi
(pytorch) SamuelFdeMBP:utils sfeng$
## Additional context
PyTorch version: 1.3.0a0+a332583 is my local successful build with the PR I will submit soon.
<!-- Add any other context about the problem here. -->
I know that official support for CC 3.0 stopped years ago, but up to 1.1.0 PyTorch still worked with CC 3.0 devices, until some CUDA launch error appeared, which was fixed by jcooky in his PR https://github.com/jcooky/pytorch/commit/ecdf4564d44835b3b2ffd18e286ad7e549231a14.
With jcooky's PR the macOS version of pytorch master can build and run properly with CC 3.5 devices.
But for CC 3.0 devices, PyTorch HEAD fails to build
| module: cuda,low priority,triaged,module: macos | low | Critical |
490,647,326 | PowerToys | [Shortcut Guide] When the taskbar autohides the guide doesn't show the numbers | <!--
**Important: When reporting BSODs or security issues, DO NOT attach memory dumps, logs, or traces to Github issues**.
Instead, send dumps/traces to [email protected], referencing this GitHub issue.
-->
# Environment
```
Windows build number: [run "ver" at a command prompt] Microsoft Windows [Version 10.0.18975.1000]
PowerToys version: 0.11.0
PowerToy module for which you are reporting the bug (if applicable): ShortCut Guide
```
# Steps to reproduce
<!-- A description of how to trigger this bug. -->
Enable automatic hiding of the taskbar in the taskbar settings ->

Press and hold the Windows key to show the guide; two scenarios can happen:
- If the taskbar is hidden, no numbers are shown
- If the taskbar is closing (because the ShortCut Guide is opening), sometimes a `0` can be seen but only a `0`
# Expected behavior
<!-- A description of what you're expecting, possibly containing screenshots or reference material. -->
I expected two things;
- It shows the taskbar with the numbers -> so it unhides the taskbar so it can show the numbers
- It keeps the taskbar hidden but shows the program icons with the corresponding numbers -> so a lite version of the taskbar, instead of showing the full one
# Actual behavior
<!-- What's actually happening? -->
One of the two earlier mentioned scenarios:
- If the taskbar is hidden, no numbers are shown
- If the taskbar is closing (because the ShortCut Guide is opening), sometimes a `0` can be seen but only a `0`
# Screenshots
<!-- If applicable, add screenshots to help explain your problem. -->
Scenario 1: Shows nothing:

Scenario 2: Or shows a `0`:

Just a quick note -> Even when a program holds the taskbar constantly open (the icon turns orange), if you press and hold the Windows key to open the ShortCut Guide, it will close/hide the taskbar and show the `0` as in Scenario 2
| Idea-Enhancement,Product-Shortcut Guide | low | Critical |
490,650,453 | pytorch | Conv2D 2x~20x slower than Tensorflow when channel count is small | ## 🐛 Bug
When I run my benchmark code with `channel_count = 64` on both TensorFlow and PyTorch, the PyTorch version runs ~2x slower than the TensorFlow version.
But when I set `channel_count = 256`, TensorFlow and PyTorch perform at similar speed.
Is it a 🐛?
Tested on CPU (Ryzen 3700X), Tesla K80 (up to 20x slower), Tesla T4 (~2x slower), RTX 2070 Super.
## To Reproduce
Run this benchmark code with Python 3
Or run it in Colab: https://colab.research.google.com/drive/13hvxoV-pe1E5fYLiF9BQhzZcS5910y9V
### Tensorflow version:
````python3
import tensorflow as tf
import time
channel_count = 64
# Model
print("* Create model")
p_input = tf.random.uniform(shape=(1, 3, 64, 64), minval=0.0, maxval=1.0, dtype=tf.float32)
p_target = tf.random.uniform(shape=(1, 3, 64, 64), minval=0.0, maxval=1.0, dtype=tf.float32)
v = p_input
v = tf.nn.relu(tf.keras.layers.Conv2D(channel_count, (3, 3), padding="same", data_format="channels_first").apply(v))
v = tf.nn.relu(tf.keras.layers.Conv2D(channel_count, (3, 3), padding="same", data_format="channels_first").apply(v))
v = tf.nn.relu(tf.keras.layers.Conv2D(channel_count, (3, 3), padding="same", data_format="channels_first").apply(v))
v = tf.nn.relu(tf.keras.layers.Conv2D(channel_count, (3, 3), padding="same", data_format="channels_first").apply(v))
v = tf.keras.layers.Conv2D(3, (3, 3), padding="same", data_format="channels_first").apply(v)
loss_fn = tf.reduce_mean(tf.abs(v - p_target))
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss_fn)
with tf.Session() as sess:
# Initialized, Load state
sess.run(tf.global_variables_initializer())
t = time.time()
for i_step in range(100000):
loss_value, _ = sess.run([loss_fn, optimizer])
if i_step % 5000 == 0:
print(time.time() - t)
t = time.time()
````
### Torch version:
````python3
import torch
import time
# Model
print("* Create model")
channel_count = 64
net = torch.nn.Sequential(
torch.nn.Conv2d(3, channel_count, (3, 3), padding=(1, 1)),
torch.nn.ReLU(),
torch.nn.Conv2d(channel_count, channel_count, (3, 3), padding=(1, 1)),
torch.nn.ReLU(),
torch.nn.Conv2d(channel_count, channel_count, (3, 3), padding=(1, 1)),
torch.nn.ReLU(),
torch.nn.Conv2d(channel_count, channel_count, (3, 3), padding=(1, 1)),
torch.nn.ReLU(),
torch.nn.Conv2d(channel_count, 3, (3, 3), padding=(1, 1)),
).train().cuda()
#net = torch.jit.trace_module(net, {"forward": torch.empty((1, 3, 64, 64)).uniform_(-1.0, 1.0).cuda()})
train_param_list = net.parameters()
opt = torch.optim.Adam(train_param_list, lr=1e-4, eps=1e-8, weight_decay=0.0)
t = time.time()
for i_step in range(100000):
opt.zero_grad()
input_data = torch.empty((1, 3, 64, 64)).uniform_(-1.0, 1.0).cuda()
target_data = torch.empty((1, 3, 64, 64)).uniform_(-1.0, 1.0).cuda()
pred = net(input_data)
loss = (target_data - pred).abs().mean()
loss.backward()
opt.step()
if i_step % 5000 == 0:
print(time.time() - t)
t = time.time()
````
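As a side note, a shape-agnostic way to normalize the comparison between the two scripts is seconds per step. Below is a pure-Python sketch (`seconds_per_step` and `step` are my names, with `step` a hypothetical stand-in for one training iteration — for CUDA, calling `torch.cuda.synchronize()` before each timestamp would make the numbers more trustworthy):

```python
import time

def seconds_per_step(step, n=5000):
    """Run `step` n times and return the average wall-clock seconds per call."""
    t0 = time.time()
    for _ in range(n):
        step()
    return (time.time() - t0) / n
```

This makes the per-step cost directly comparable across the two frameworks instead of eyeballing the 5000-step deltas printed by each loop.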
## Environment
````
PyTorch version: 1.2.0
Is debug build: No
CUDA used to build PyTorch: 10.1.243
OS: Arch Linux
GCC version: (GCC) 9.1.0
CMake version: version 3.15.2
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.1.243
GPU models and configuration: GPU 0: GeForce RTX 2070 SUPER
Nvidia driver version: 435.21
cuDNN version: /usr/lib/libcudnn.so.7.6.2
Versions of relevant libraries:
[pip3] numpy==1.17.1
[pip3] torch==1.2.0
[conda] Could not collect
Tensorflow: 1.14.0
````
| module: performance,module: cudnn,module: cuda,module: convolution,triaged | low | Critical |
490,653,832 | PowerToys | Default custom layouts for FancyZones | # Summary of the new feature/enhancement
I would love to see the following layouts for FancyZones
1. left zone 2/3 of the screen and right zone 1/3 of the screen
2. left zone 1/3 of the screen and right zone 2/3 of the screen
3. left zone 2/3 of the screen and right zone 1/3 of the screen, the right zone is split horizontally into two zones:

I am using layout number 3 (WQHD monitor) and it really helps me a lot in organizing my windows. | Idea-Enhancement,FancyZones-Layouts,Product-FancyZones | low | Major |
490,655,247 | rust | Compiler spuriously infers data needs to be borrowed as mutable | All credits are due to [this SO question](https://stackoverflow.com/questions/57828602/what-is-rusts-borrow-checker-really-complaining-about-here). The following code was tried:
```rust
fn selection_sort(collection: &mut Vec<&mut String>) {
for i in 0..collection.len() {
let mut least_element = i;
for j in (i + 1)..collection.len() {
if collection[j] < collection[least_element] {
least_element = j;
}
}
collection.swap(least_element, i);
}
}
```
I expect it to compile. Instead, `rustc` complains that the data needs to be borrowed as mutable, which is incorrect:
```bash
error[E0596]: cannot borrow data in a `&` reference as mutable
--> src/lib.rs:5:32
|
5 | if collection[j] < collection[least_element] {
| ^^^^^^^^^^^^^^^^^^^^^^^^^ cannot borrow as mutable
|
= help: trait `IndexMut` is required to modify indexed content
```
This happened on `rustc-1.37.0`, `beta-1.38.0` and `nightly-1.39.0`. | A-borrow-checker,T-compiler,C-bug | low | Critical |
490,657,355 | scrcpy | Key bindings for scroll | Key bindings for scrolling would be a great addition. At present, it is hard to drag or scroll using the touchpad on laptops. | feature request | low | Major |
490,661,497 | PowerToys | [FancyZones] Allow zones to extend outside screen area. | # Summary of the new feature/enhancement
Many apps have poor UI design with wide borders and other non-functional portions that needlessly waste screen area.
A Fancy Zones zone definition can help remedy this situation by permitting parts of a zone to be outside the viewable area.
<!--
A clear and concise description of what the problem is that the new feature would solve.
Describe why and how a user would use this new functionality (if applicable).
-->
An extreme example: I use a media player for which I only wish to see the play/pause button, which is in the top-right corner of the app's window area. Unfortunately, I can't resize or pan/scroll the app so only that button is shown. Instead, I simply move the window to the bottom-left corner of the screen with most of its area off-screen, and the play/pause button nicely located in the lower left corner of my screen.
# Proposed technical implementation details (optional)
Ensure at least some non-border window content is viewable.
The definition process should define the zone size with all corners visible, then permit sliding the zone to its desired location with some parts off-screen.
<!--
A clear and concise description of what you want to happen.
-->
Keep all existing zone editing capabilities, adding only the ability to drag a defined zone so parts of it are outside the screen area.
If it becomes possible to "lose" a zone by moving it entirely off-screen, include a fail-safe/recovery button to the Zone Editor to "Move All Zones to Within Screen Area". | Idea-Enhancement,FancyZones-Layouts,Product-FancyZones | low | Minor |
490,668,895 | flutter | calling child.paint() appears to work (but doesn't, and doesn't assert) | We need to catch `RenderObject` `paint` methods calling their children's `paint` methods directly instead of via the `PaintingContext.paintChild` somehow. If you just call `paint` directly, it _appears_ to work, but it doesn't: it leaves `_needsPaint` set to true, so that later the child won't repaint itself if it tries to call `markNeedsPaint`. | framework,a: error message,P2,team-framework,triaged-framework | low | Minor |
490,670,781 | PowerToys | Lock windows position and size | It would be great to be able to lock the position and size of selected windows.
This is something Windows can currently do only with the help of third-party programs, but it could be built in. | Idea-Enhancement,Product-Window Manager | low | Minor |
490,670,788 | rust | Tracking issue for future-incompatibility lint `soft_unstable` | ## What is this lint about
When some feature is unstable, but was allowed on stable due to some stability-checking hole or oversight, we may unstabilize it in a "soft" way to avoid breaking crates that depend on the crates using the feature.
This soft unstabilization can be done using this lint.
## Features that are currently emitting this lint
#### The `#[bench]` attribute
Tracking issue: https://github.com/rust-lang/rust/issues/50297
- #64066
- https://github.com/rust-lang/rust/pull/134273
#### RustcEncodable & RustcDecodable
Tracking issue: https://github.com/rust-lang/rust/issues/134301
- https://github.com/rust-lang/rust/pull/116016
- https://github.com/rust-lang/rust/pull/134272
## Features that were previously emitting this lint
#### Inner `#![test]` and `#![rustfmt::skip]`
Tracking issue: https://github.com/rust-lang/rust/issues/54726
- #79003
- #82399
- https://github.com/rust-lang/rust/pull/134276
## Before adding new features here, read this
We should really support a separate tracking issue for each case of a soft-unstable feature; having them all point here doesn't make a ton of sense. | A-lints,T-lang,T-compiler,C-future-incompatibility,C-tracking-issue | low | Major |
490,676,743 | material-ui | [Select][material-ui] Don't lock the body scroll and make it non-modal select | ## Current Behavior
When I open a Select component, the vertical scroll bar of the page disappears.
See here

I don't have a demo to reproduce sorry, but I suppose this is a known issue: https://github.com/mui-org/material-ui/issues/8710, https://github.com/mui-org/material-ui/pull/7239, so maybe someone can help.
The select component is located on an app bar on the screenshot.
I tried setting box-sizing: border-box on App bar, no help.
## Expected Behavior
The scrollbar should not disappear.
## Your Environment
<!--
Include as many relevant details about the environment with which you experienced the bug.
If you encounter issues with typescript please include version and tsconfig.
-->
| Tech | Version |
| ----------- | ------- |
| Material-UI | v3.1.1 |
| React | 16.5 |
| Browser | Chrome |
| new feature,component: menu,component: select,priority: important | high | Critical |
490,689,764 | kubernetes | Kustomize resources from remote url | Mirroring this from: https://github.com/kubernetes-sigs/kustomize/issues/970 now that kustomize is actually in kubectl. Feel free to redirect me if that's wrong. Also cc'ing @Liujingfang1, who was against this, so they can say why if they want.
**What would you like to be added**:
Right now `resources` in kustomize.config.k8s.io/v1beta1 only supports remote URL directories with a kustomization.yaml. It would be convenient if it allowed you to refer directly to a YAML file.
**Why is this needed**:
Often you want to use a project that's provided a Kubernetes YAML but hasn't moved to using kustomize yet, so they don't have a kustomization.yaml. For example, maybe I want to easily set up httpbin. It would be nice to pull in https://github.com/istio/istio/blob/master/samples/httpbin/httpbin.yaml rather than mirror it locally.
Another fun example is Knative, which is pretty new and still doesn't have a kustomization; rather, they release individual YAMLs that it would be great to pull directly.
https://github.com/knative/serving/releases/tag/v0.8.1
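To make the request concrete, the desired usage might look like the following `kustomization.yaml` (hypothetical syntax — today `resources` entries must be local files or remote directories containing their own `kustomization.yaml`; the raw URL below is my illustration, not an existing supported form):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # requested: allow pointing straight at a plain remote YAML file
  - https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
```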
Happy to work on this if it's just not resourced but wanted to see if there was a reason NOT to do this first. | kind/feature,sig/cli,lifecycle/frozen | medium | Major |
490,702,878 | godot | ExternalEditor Lost code when changing script name and pressing play | **Godot version:**
- 3.1.1 , 3.2 dev. 3.1.1-mono, 3.2 dev-mono.
**OS/device including version:**
- GNU/Linux Ubuntu and Kubuntu 18.04.3LTS, Ubuntu and Kubuntu 19.04.
**Issue description:**
- When the name of a script is changed and the play button is pressed, the code written in the script disappears and is replaced by the default template.
- EDIT: This does not happen if Godot restarts after changing the script's name, if you create the script from the node, or if you create the script from the text editor.
- External editors: Monodevelop, VisualStudio Code.
**Steps to reproduce:**
- Create a new script and write some code, save, and close the text editor. Then change the script's name and press the play button; open your script and you will notice that the code you wrote has been replaced by the default template.
| bug,topic:editor,confirmed,topic:dotnet | low | Major |
490,710,282 | rust | Rust compiler creating huge files on tmp directory | Was surprised to get the following no space left error ...
```
Compiling rustc_driver v0.0.0 (/home/santiago/src/oss/rust1/src/librustc_driver)
error: failed to write version script: No space left on device (os error 28)
```
This happens because rustc_driver is placing a lot of huge files on `/tmp`, and tmp on my machine is of type `tmpfs`, which is usually stored in RAM or swap and is by default half of the RAM, so 4 GB on my old machine.
The following are the files created ...
```
[santiago@archlinux rustcaSq42I]$ pwd
/tmp/rustcaSq42I
[santiago@archlinux rustcaSq42I]$ ls -lh
total 4.0G
-rw-r--r-- 1 santiago santiago 6.9M Aug 15 14:44 libcc-580cb3ec679a1ef0.rlib
-rw-r--r-- 1 santiago santiago 243K Aug 15 14:44 libcrc32fast-df4710f26ff3874f.rlib
-rw-r--r-- 1 santiago santiago 746 Aug 15 14:44 libeither-be66c1a9963b273c.rlib
-rw-r--r-- 1 santiago santiago 2.1M Aug 15 14:44 libenv_logger-39812d84bbfe2b11.rlib
-rw-r--r-- 1 santiago santiago 595K Aug 15 14:44 libflate2-31c9dbe4de46ec52.rlib
-rw-r--r-- 1 santiago santiago 414K Aug 15 14:44 libhumantime-d1a9d86e783663fa.rlib
-rw-r--r-- 1 santiago santiago 742 Aug 15 14:44 libitoa-cafbf9ae61da85d2.rlib
-rw-r--r-- 1 santiago santiago 202K Aug 15 14:44 liblog_settings-2620b484081ec8f2.rlib
-rw-r--r-- 1 santiago santiago 64K Aug 15 14:44 libminiz_sys-829f35f627be716a.rlib
-rw-r--r-- 1 santiago santiago 584K Aug 15 14:44 libpunycode-bc98388f5faa9996.rlib
-rw-r--r-- 1 santiago santiago 764 Aug 15 14:44 libquick_error-b52ae6e62fe18a00.rlib
-rw-r--r-- 1 santiago santiago 770 Aug 15 14:44 libremove_dir_all-a10b98c02f91914a.rlib
-rw-r--r-- 1 santiago santiago 666M Aug 15 14:44 librustc-300974c3ebad3d8b.rlib
-rw-r--r-- 1 santiago santiago 139M Aug 15 14:44 librustc_ast_borrowck-064564744d0b66e4.rlib
-rw-r--r-- 1 santiago santiago 158M Aug 15 14:44 librustc_codegen_ssa-0b961b0d1ebdaa34.rlib
-rw-r--r-- 1 santiago santiago 141M Aug 15 14:44 librustc_codegen_utils-3c4fec2d7a4aaa58.rlib
-rw-r--r-- 1 santiago santiago 215M Aug 15 14:44 librustc_incremental-413cebfc5fe72a28.rlib
-rw-r--r-- 1 santiago santiago 163M Aug 15 14:44 librustc_interface-4a187b71ebecda3f.rlib
-rw-r--r-- 1 santiago santiago 128M Aug 15 14:44 librustc_lint-f5d7618b6d033ce8.rlib
-rw-r--r-- 1 santiago santiago 315M Aug 15 14:44 librustc_metadata-a179352ab6d85b74.rlib
-rw-r--r-- 1 santiago santiago 822M Aug 15 14:44 librustc_mir-232a965ccb3e9077.rlib
-rw-r--r-- 1 santiago santiago 143M Aug 15 14:44 librustc_passes-c08b598d7f5694c2.rlib
-rw-r--r-- 1 santiago santiago 36M Aug 15 14:44 librustc_plugin-8f9749116c8b6ad6.rlib
-rw-r--r-- 1 santiago santiago 138M Aug 15 14:44 librustc_privacy-1eb27d780fbbcb81.rlib
-rw-r--r-- 1 santiago santiago 76M Aug 15 14:44 librustc_resolve-23cfc118665ed372.rlib
-rw-r--r-- 1 santiago santiago 138M Aug 15 14:44 librustc_save_analysis-970ae44e6e8aee42.rlib
-rw-r--r-- 1 santiago santiago 267M Aug 15 14:44 librustc_traits-1f50191ae0f8c9ba.rlib
-rw-r--r-- 1 santiago santiago 469M Aug 15 14:44 librustc_typeck-0246067c1ba94397.rlib
-rw-r--r-- 1 santiago santiago 298K Aug 15 14:44 libryu-17527a943b47993b.rlib
-rw-r--r-- 1 santiago santiago 3.6M Aug 15 14:44 libserde_json-f180caae537f064f.rlib
-rw-r--r-- 1 santiago santiago 71M Aug 15 14:44 libsyntax_ext-c3562645f3bbdcaa.rlib
-rw-r--r-- 1 santiago santiago 1006K Aug 15 14:44 libtempfile-7dcc5500a694e84f.rlib
-rw-r--r-- 1 santiago santiago 1.1M Aug 15 14:44 list
```
I think that the compilation process shouldn't place these kinds of files there.
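As a stopgap for anyone hitting this, here is a hedged sketch (assumption on my part: rustc, like most Unix tools, honors the standard `TMPDIR` environment variable) for checking the tmpfs budget and redirecting temporary files to a disk-backed directory before building:

```python
import os
import shutil
import tempfile

# How much room does the temp directory really have? On tmpfs this is
# typically half of RAM, which a single rustc_driver link can exceed.
tmp = tempfile.gettempdir()
total, used, free = shutil.disk_usage(tmp)
print(f"{tmp}: {free / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")

# Redirect temp files to a disk-backed path for this process and its children
# (e.g. run the build from a wrapper script after setting this).
os.environ["TMPDIR"] = os.path.expanduser("~/tmp")
os.makedirs(os.environ["TMPDIR"], exist_ok=True)
```

Equivalently, `TMPDIR=~/tmp python setup.py install` from the shell should have the same effect, under the same assumption.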
For more information, read the Zulip discussion https://rust-lang.zulipchat.com/#narrow/stream/182449-t-compiler.2Fhelp/topic/no.20space.20left.20on.20device.20(tmpfs).20when.20compiling.20rustc | A-driver,T-compiler,C-bug | low | Critical |
490,717,652 | youtube-dl | [vyborymos] Fix extraction | - Single video url: https://vybory.mos.ru/voting-stations/28102?channel=0
- Playlist: https://msk-cache-4-1-h.cdn.vybory.mos.ru/master.m3u8?sid=e252c26a-a563-11e8-812f-00259057913e
and some other URLs might be used with the 1st URL above:
https://msk-cache-4-1-h.cdn.vybory.mos.ru/aes128-key/26132077.key?sid=e252c26a-a563-11e8-812f-00259057913e&kid=short-token-1&exp=1567924828&dig=d0b62bbe94a7588714ed6eaed5cd459f
https://msk-cache-4-1-h.cdn.vybory.mos.ru/hls/e252c26a-a563-11e8-812f-00259057913e/1567924622.95-1567924637.94.ts?input=ege-production&kid=short-token-1&exp=1567924828&dig=dad56113ed8dfd4580462d1afd77f395
https://ls-pub.cdn.vybory.mos.ru/stat?station_id=28102&user_id=0&errorlevel=0&adapter_type=hlsjs&token=eyJhbGciOiJIUzI1NiJ9.eyJkYXRhIjoyODEwMn0.wb9eRieii7IFKxCbtqJlrd0sY8MG06uz24uXGhv6YG0&uuid=16ca91e0-d203-11e9-aa9c-c7f44acda033&mobile=0
R:\>youtube-dl.exe -v -F https://msk-cache-4-1-h.cdn.vybory.mos.ru/master.m3u8?sid=e252c26a-a563-11e8-812f-00259057913e
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-v', '-F', 'https://msk-cache-4-1-h.cdn.vybory.mos.ru/master.m3u8?sid=e252c26a-a563-11e8-812f-00259057913e']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] youtube-dl version 2019.09.01
[debug] Python version 3.4.4 (CPython) - Windows-7-6.1.7601-SP1
[debug] exe versions: ffmpeg 4.2, ffprobe 4.2
[debug] Proxy map: {}
[generic] master: Requesting header
WARNING: Could not send HEAD request to https://msk-cache-4-1-h.cdn.vybory.mos.ru/master.m3u8?sid=e252c26a-a563-11e8-812f-00259057913e: HTTP Error 400: Bad Request
[generic] master: Downloading webpage
ERROR: Unable to download webpage: HTTP Error 400: Bad Request (caused by HTTPError()); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpbzyg5d3a\build\youtube_dl\extractor\common.py", line 627, in _request_webpage
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpbzyg5d3a\build\youtube_dl\YoutubeDL.py", line 2229, in urlopen
File "C:\Python\Python34\lib\urllib\request.py", line 470, in open
File "C:\Python\Python34\lib\urllib\request.py", line 580, in http_response
File "C:\Python\Python34\lib\urllib\request.py", line 508, in error
File "C:\Python\Python34\lib\urllib\request.py", line 442, in _call_chain
File "C:\Python\Python34\lib\urllib\request.py", line 588, in http_error_default
Neither fresh ydl nor ffmpeg/ffprobe 4.2 can get a stream from those URLs... there might be some catch with IP checking or something like that. Please investigate and update ydl so it will be able to get the video stream from the needed election points! :)
--- Today, 2019.09.08, we have local elections here in Moscow, and we have webcams installed at every election point, so I really want to save some streams for history :) Thanks in advance!!!
-t
PS. A friend of mine has found that the code at https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/extractor/vyborymos.py can be used to grab a video stream from the URLs I've posted. **Please** fix vyborymos.py so it is compatible with the current vybory.mos.ru site! **Thanks in advance**! | site-support-request | low | Critical |
490,742,519 | create-react-app | react-scripts test not working with hidden directories (prefixed with `.`) | ### Describe the bug
When one of the directories in your project path is prefixed with a `.`, `react-scripts test` doesn't work. You can see it when running `npm test` and then choosing to run all. The glob expression used seems to be invalid:
```
No tests found, exiting with code 1
Run with `--passWithNoTests` to exit with code 0
In C:\z\.my-app
4 files checked.
testMatch: C:/z\.my-app/src/**/__tests__/**/*.{js,jsx,ts,tsx}, C:/z\.my-app/src/**/*.{spec,test}.{js,jsx,ts,tsx} - 0 matches
testPathIgnorePatterns: \\node_modules\\ - 4 matches
testRegex: - 0 matches
Pattern: - 0 matches
```
### Did you try recovering your dependencies?
Simple repro steps for a new project below.
### Which terms did you search for in User Guide?
n/a
### Environment
```
System:
OS: Windows 10
CPU: (8) x64 Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz
Binaries:
Node: 12.9.1 - C:\Program Files\nodejs\node.EXE
Yarn: 1.12.3 - ~\AppData\Roaming\npm\yarn.CMD
npm: 6.9.0 - C:\Program Files\nodejs\npm.CMD
Browsers:
Edge: 42.17134.1.0
Internet Explorer: 11.0.17134.1
npmPackages:
react: ^16.9.0 => 16.9.0
react-dom: ^16.9.0 => 16.9.0
react-scripts: 3.1.1 => 3.1.1
npmGlobalPackages:
create-react-app: Not Found
```
### Steps to reproduce
1. `npx create-react-app my-app`
2. `mv my-app .my-app`
3. `cd .my-app`
4. `npm test`
### Expected behavior
I would expect the 1 test to run.
### Actual behavior
However, no tests are found.
### Reproducible demo
Please copy and paste:
```
npx create-react-app my-app
mv my-app .my-app
cd .my-app
npm test
```
Select `a` to run all tests. | tag: underlying tools,issue: bug report | low | Critical |
490,759,355 | TypeScript | Type safety with `TemplateStringsArray` and tag functions |
## Search Terms
`TemplateStringsArray`, `type`, `safety`, `generic`, `tag`, `function`
## Suggestion
Hello, I'd like to know how could I achieve type safety with `TemplateStringsArray` and tag functions or make such scenarios possible.
## Use Cases
I would use `TemplateStringsArray` together with the
```ts
tagFunction`<type_safe_accessor>`
```
Currently, there's no way to do this, since
1. the definition of `TemplateStringsArray` is not generic
```ts
// source code:
// TypeScript/src/lib/es5.d.ts
// TypeScript/lib/lib.es5.d.ts
// local install:
// /usr/lib/code/extensions/node_modules/typescript/lib/lib.es5.d.ts
interface TemplateStringsArray extends ReadonlyArray<string> {
readonly raw: ReadonlyArray<string>;
}
```
AND
2. the usage of
```ts
tagFunction`<type_safe_accessor>`
```
gives the following error, completely preventing the type-safe usage:
```ts
Argument of type '{}' is not assignable to parameter of type 'keyof TypeAndValueObject | TemplateStringsArray<keyof TypeAndValueObject>'.
Type '{}' is missing the following properties from type 'TemplateStringsArray<keyof TypeAndValueObject>': raw, length, concat, join, and 19 more. ts(2345)
```
## Examples
### The working case
I have some type-safe `i18n` translations:
```ts
// Dictionary.ts
export interface Dictionary {
"Hello": string;
"Click to see more": string;
"Today is": (day: string) => string;
}
// en.ts
import { Dictionary } from "./Dictionary";
export const en: Dictionary = {
"Hello": "Hello",
"Click to see more": "Click to see more",
"Today is": (day: string) => `Today is ${day}`,
};
// lt.ts
import { Dictionary } from "./Dictionary";
export const lt: Dictionary = {
"Hello": "Sveiki",
"Click to see more": "Paspauskite, kad pamatytumΔte daugiau",
"Today is": (day: string) => `Ε iandien yra ${day}`,
};
```
```ts
// i18n.ts
import { Dictionary } from "./Dictionary";
import { en } from "./en";
import { lt } from "./lt";
export interface ITranslations {
en: Dictionary;
lt: Dictionary;
}
export const translations: ITranslations = {
en: en,
lt: lt,
};
// "en" | "lt"
export type ILang = keyof ITranslations;
```
I have a function to get the translations:
```ts
import { ILang, translations } from "./i18n";
import { Dictionary } from "./Dictionary";
const currentLang: ILang = "en";
const dictionary: Dictionary = translations[currentLang];
export const selectTranslation = <K extends keyof Dictionary>(key: K): Dictionary[K] => {
const translationText: Dictionary[K] = dictionary[key];
return translationText;
};
```
And I can safely use it with type-safe translations in the following fashion:
```ts
// someFile.ts
import { selectTranslation } from "../selectTranslation.ts";
/**
* Type-checks, auto-completes etc. the strings from the `Dictionary`
*/
const someText: string = selectTranslation("Hello");
```
### The problem
However, now I'd like to use the template string (template literal) syntax, paired with the tag function to achieve the same functionality, so I improve the `selectTranslation` function:
```ts
// selectTranslation.ts
import { ILang, translations } from "./i18n";
import { Dictionary } from "./Dictionary";
const currentLang: ILang = "en";
const dictionary: Dictionary = translations[currentLang];
export const selectTranslation = <K extends keyof Dictionary>(key: K | TemplateStringsArray): Dictionary[K] => {
let realKey: K;
if (Array.isArray(key)) {
realKey = key[0];
} else {
realKey = key; // error:
/**
* Type 'K | TemplateStringsArray<string>' is not assignable to type 'K'.
* Type 'TemplateStringsArray<string>' is not assignable to type 'K'. ts(2322)
*/
}
const translationText: Dictionary[K] = dictionary[realKey];
return translationText;
};
```
```ts
// anotherFile.ts
import { selectTranslation } from "../selectTranslation.ts";
/**
* Does NOT type-check and gives the previously mentioned error:
*
 * Argument of type '{}' is not assignable to parameter of type 'K | TemplateStringsArray<K>'.
* Type '{}' is missing the following properties from type 'TemplateStringsArray<K>': raw, length, concat, join, and 19 more. ts(2345)
*/
const someText: string = selectTranslation`Hello`; // error
```
## Possible solutions
I've tried 3 different solutions, neither of them giving me the results that I want:
### a) Change the `TemplateStringsArray` to be a generic like so
```ts
// source code:
// TypeScript/src/lib/es5.d.ts
// TypeScript/lib/lib.es5.d.ts
// local install:
// /usr/lib/code/extensions/node_modules/typescript/lib/lib.es5.d.ts
interface TemplateStringsArray<T = string> extends ReadonlyArray<T> {
readonly raw: ReadonlyArray<T>;
}
```
and used it like so
```diff
// selectTranslation.ts
-export const selectTranslation = <K extends keyof Dictionary>(key: K | TemplateStringsArray): Dictionary[K] => {
+export const selectTranslation = <K extends keyof Dictionary>(key: K | TemplateStringsArray<K>): Dictionary[K] => {
```
however, the same problems persisted - the casting from `key` to `realKey` failed
AND the tagged function usage still failed
```ts
// selectTranslation.ts
if (Array.isArray(key)) {
realKey = key[0];
} else {
realKey = key; // still errors
}
// anotherFile.ts
const someText: string = selectTranslation`Hello`; // still errors
```
### b) Instead of using `TemplateStringsArray`, just use `Array<K>`
```diff
// selectTranslation.ts
-export const selectTranslation = <K extends keyof Dictionary>(key: K | TemplateStringsArray): Dictionary[K] => {
+export const selectTranslation = <K extends keyof Dictionary>(key: K | Array<K>): Dictionary[K] => {
```
the first problem of casting from `key` to `realKey` disappeared,
BUT the second one still remained
```ts
// selectTranslation.ts
if (Array.isArray(key)) {
realKey = key[0];
} else {
realKey = key; // all cool now
}
// anotherFile.ts
const someText: string = selectTranslation`Hello`; // still errors!
```
### c) Just use `any`
```diff
// selectTranslation.ts
-export const selectTranslation = <K extends keyof Dictionary>(key: K | TemplateStringsArray): Dictionary[K] => {
+export const selectTranslation = <K extends keyof Dictionary>(key: K | any): Dictionary[K] => {
```
which then allows me to use the tag function, **BUT** there's NO type-safety, autocompletions etc., making it practically useless.
## TL;DR:
Neither of the solutions helped - I'm still unable to use the `selectTranslation` function as a tag function with type safety.
Is there any way to make it possible?
Reminder - we have 2 problems here:
1. (less important since it can be avoided by using `Array<K>`) `TemplateStringsArray` does not work like it should (or I'm using it wrong), even when used as a generic
2. (more important) tag functions cannot have type-safe parameters?
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
---
I'm happy to help if you have any further questions.
| Suggestion,In Discussion | high | Critical |
490,763,265 | kubernetes | Kubernetes API Authentication Proxy Request Headers | **What would you like to be added**:
See #66754.
Assuming the `--requestheader-group-headers` flag is set to `X-Remote-Group`, the API server should accept both of the following header forms to apply the groups A, B, C to the user:
```http
X-Remote-Group: A
X-Remote-Group: B
X-Remote-Group: C
```
```http
X-Remote-Group: A,B,C
```
Currently only the former is supported.
**Why is this needed**:
The HTTP spec ([RFC 2616](https://tools.ietf.org/html/rfc2616#section-4.2)) specifies how multiple headers of the same name can be combined into one.
> Multiple message-header fields with the same field-name MAY be
present in a message if and only if the entire field-value for that
header field is defined as a comma-separated list [i.e., #(values)].
It MUST be possible to combine the multiple header fields into one
"field-name: field-value" pair, without changing the semantics of the
message, by appending each subsequent field-value to the first, each
separated by a comma.
Popular tools, such as Python's WSGI, do not support sending multiple headers of the same name. This makes using a Python web server as an authenticating proxy troublesome.
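To illustrate the request (a sketch only, not the kube-apiserver implementation — the function name and header handling here are hypothetical), a server could accept both the repeated-header and comma-joined forms by splitting on commas, per the RFC list-header semantics quoted above:

```python
# Hypothetical sketch of the requested server-side behavior: collect
# groups from repeated X-Remote-Group headers and/or from a single
# comma-separated value. Illustrative only.
def parse_group_headers(headers):
    groups = []
    for name, value in headers:
        if name.lower() == "x-remote-group":
            groups.extend(g.strip() for g in value.split(",") if g.strip())
    return groups

# Repeated headers (works today) and a comma-joined header (requested)
# would yield the same result:
repeated = [("X-Remote-Group", "A"), ("X-Remote-Group", "B"), ("X-Remote-Group", "C")]
joined = [("X-Remote-Group", "A,B,C")]
assert parse_group_headers(repeated) == parse_group_headers(joined) == ["A", "B", "C"]
```

This is exactly where the comma-in-group-name ambiguity discussed below comes from: a naive split cannot distinguish a joined list from a single group containing a comma.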
The previous issue (#66754) was closed as there's a chance some groups (eg from LDAP) could contain commas.
Kubernetes could explicitly forbid group names from containing commas.
Alternatively groups containing commas could be allowed if the commas are properly escaped in a similar way to how [LDAP supports commas in group names](https://www.ibm.com/support/knowledgecenter/SSPREK_9.0.0/com.ibm.isam.doc/base_admin/concept/con_validcharldapusrandgrpname.html).
| priority/awaiting-more-evidence,kind/feature,sig/auth,lifecycle/frozen | medium | Major |
490,763,972 | rust | EXPERIMENT: Make `async fn`, `.await`, `async move? {` legal on 2015 | The parser currently will not parse `async fn`, `async move? $block`, and `<expr>.await` on Rust 2015.
Check whether:
- doing so would simplify the parser & the grammar.
- it is feasible with a crater run for `<expr>.await` and `async move? $block`
Drawbacks include:
- folks having fewer incentives to use Rust 2018. | C-enhancement,A-parser,P-low,I-needs-decision,T-lang,T-compiler,A-async-await,AsyncAwait-Triaged,A-edition-2015 | low | Minor |
490,768,037 | youtube-dl | --embed-thumbnail for Opus files | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.09.01. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Search the bugtracker for similar feature requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a feature request
- [x] I've verified that I'm running youtube-dl version **2019.09.01**
- [x] I've searched the bugtracker for similar feature requests including closed ones
## Description
<!--
Provide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
-->
Please add support for embedding cover art in opus files.
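For background on what embedding would involve: Ogg Opus files (like Ogg Vorbis and FLAC) carry cover art as a `METADATA_BLOCK_PICTURE` Vorbis comment — a base64-encoded FLAC PICTURE block. Below is a stdlib-only sketch of building that payload; a real implementation would hand the result to a tagging library such as mutagen or to ffmpeg, and the zeroed width/height fields here are a simplification:

```python
import base64
import struct

def metadata_block_picture(image_bytes, mime="image/jpeg", description=""):
    """Build the base64 METADATA_BLOCK_PICTURE value (a FLAC PICTURE
    block) that Ogg/Opus files carry in a Vorbis comment. Sketch of the
    wire format only; real taggers handle this for you."""
    desc = description.encode("utf-8")
    mime_b = mime.encode("ascii")
    block = struct.pack(">I", 3)                      # picture type 3 = front cover
    block += struct.pack(">I", len(mime_b)) + mime_b  # MIME type
    block += struct.pack(">I", len(desc)) + desc      # description
    block += struct.pack(">4I", 0, 0, 0, 0)           # width, height, depth, colors (unknown)
    block += struct.pack(">I", len(image_bytes)) + image_bytes
    return base64.b64encode(block).decode("ascii")

payload = metadata_block_picture(b"\xff\xd8fake-jpeg")
# Round-trip check: the picture type field decodes back to 3.
raw = base64.b64decode(payload)
assert struct.unpack(">I", raw[:4])[0] == 3
```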
| request | medium | Critical |
490,786,407 | go | syscall: installing Go in chroot chronically fails on TestCloneNEWUSERAndRemapRootDisableSetgroups if /proc/sys/kernel/unprivileged_userns_clone is not 1 | I'm installing the latest go in a chroot, and I'm getting a repeat of the error in #11261, but this time, the patch mentioned in #11261 (http://golang.org/cl/11269) doesn't stop the bug. I manually set that kernel variable (/proc/sys/kernel/unprivileged_userns_clone) to 0 rather than 1, and then the test succeeded, so that is a workaround, but the error still happened in the first place.
This workaround allowed Go to be installed successfully.
I am not going to repeat the data in #11261 since that already explained the background. | OS-Linux,NeedsFix,compiler/runtime | low | Critical |
490,786,407 | rust | Hard-linking incremental files can be expensive. | Doing some profiling to determine why incremental Cargo builds are slow, I noticed that hard-linking the incremental files takes a nontrivial amount of time. This issue is to ask those in-the-know if it may be possible to improve it.
On my system (macos, apfs, an extremely fast ssd), it takes between 500 to 750ms of a 7s incremental build (7% to 10%) just to hard-link the incremental files. The incremental build of cargo's lib generates about 1000 files, and it appears the incremental system links them 3 times (once from `incremental` to `working`, then `working` to `deps`, and then `deps` back to `incremental`), for a total of about 3,000 hard links.
I see two things here:
1. Can incremental produce fewer files?
2. Can incremental not hard-link the same files 3 times?
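The cost described above is easy to reproduce in miniature. The following sketch (file counts and names are illustrative, not rustc's actual layout) creates N small files and hard-links each one three times, mimicking the `incremental` → `working` → `deps` → `incremental` link chain:

```python
import os
import tempfile
import time

# Illustrative micro-benchmark: hard-link each of n_files small files
# links_per_file times, as the incremental system does per build.
def link_benchmark(n_files, links_per_file):
    with tempfile.TemporaryDirectory() as d:
        paths = []
        for i in range(n_files):
            p = os.path.join(d, f"unit-{i}.o")
            with open(p, "wb") as f:
                f.write(b"x")
            paths.append(p)
        start = time.perf_counter()
        for p in paths:
            for j in range(links_per_file):
                os.link(p, f"{p}.link{j}")
        elapsed = time.perf_counter() - start
        return elapsed, n_files * links_per_file

elapsed, total_links = link_benchmark(100, 3)  # small run for illustration
```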
| C-enhancement,I-compiletime,T-compiler,A-incr-comp,WG-incr-comp | low | Major |
490,805,109 | flutter | Allow flutter to save widget to image even if widget is offscreen | <!-- Thank you for using Flutter!
If you are looking for support, please check out our documentation
or consider asking a question on Stack Overflow:
* https://flutter.dev/
* https://api.flutter.dev/
* https://stackoverflow.com/questions/tagged/flutter?sort=frequent
If you have found a bug or if our documentation doesn't have an answer
to what you're looking for, then fill out the template below. Please read
our guide to filing a bug first: https://flutter.dev/docs/resources/bug-reports
-->
When you have an application, sometimes it's important to save widgets as images even if they're not on screen.
An example of where this is necessary: the app displays a customized widget as a thumbnail, but you wish to save a full-size/full-screen version of that same widget as an image.
## Steps to Reproduce
<!-- Please tell us exactly how to reproduce the problem you are running into. -->
1. ... Following documentation, use RepaintBoundary with a global key in the widget to save as image.
2. ... Wrap the previous widget in an OffStage widget.
3. ... Try to call RenderRepaintBoundary boundary =
_exportLayoutKey.currentContext.findRenderObject();
ui.Image image = await boundary.toImage();
## Logs
<!--
Include the full logs of the commands you are running between the lines
with the backticks below. If you are running any "flutter" commands,
please include the output of running them with "--verbose"; for example,
the output of running "flutter --verbose create foo".
-->
It will throw an error
`flutter: 'package:flutter/src/rendering/proxy_box.dart': Failed assertion: line 2882 pos 12: '!debugNeedsPaint': is not true.`
running `flutter doctor -v`
<!-- If possible, paste the output of running `flutter doctor -v` here. -->
```
[✓] Flutter (Channel dev, v1.9.7, on Mac OS X 10.14.6 18G95, locale en-US)
    • Flutter version 1.9.7 at /Users/tapizquent/Projects/tools/flutter
    • Framework revision 4984d1a33d (11 days ago), 2019-08-28 17:04:07 -0700
    • Engine revision f52c0b9270
    • Dart version 2.5.0
[✓] Android toolchain - develop for Android devices (Android SDK version 29.0.0)
    • Android SDK at /Users/tapizquent/Library/Android/sdk
    • Android NDK location not configured (optional; useful for native profiling support)
    • Platform android-29, build-tools 29.0.0
    • Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
    • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
    • All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 10.3)
    • Xcode at /Applications/Xcode.app/Contents/Developer
    • Xcode 10.3, Build version 10G8
    • CocoaPods version 1.7.4
[✓] Android Studio (version 3.4)
    • Android Studio at /Applications/Android Studio.app/Contents
    • Flutter plugin version 36.1.1
    • Dart plugin version 183.6270
    • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
[✓] VS Code (version 1.38.0)
    • VS Code at /Applications/Visual Studio Code.app/Contents
    • Flutter extension version 3.4.1
[✓] Connected device (1 available)
    • Tapizquent • e75fc710ec1404ac1c323311b0ba25049db7e7d4 • ios • iOS 12.4.1
• No issues found!
```
## **Update:**
Minimal project showcasing scenario:
```
import 'dart:convert';
import 'dart:typed_data';
import 'dart:ui' as ui;
import 'package:flutter/material.dart';
import 'package:flutter/rendering.dart';
void main() => runApp(MyApp());
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
primarySwatch: Colors.blue,
),
home: MyHomePage(title: 'Flutter Demo Home Page'),
);
}
}
class MyHomePage extends StatefulWidget {
MyHomePage({Key key, this.title}) : super(key: key);
final String title;
@override
_MyHomePageState createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
GlobalKey _globalKey = GlobalKey();
Future<void> _capturePng() async {
try {
print('inside');
RenderRepaintBoundary boundary =
_globalKey.currentContext.findRenderObject();
ui.Image image = await boundary.toImage(pixelRatio: 3.0);
ByteData byteData =
await image.toByteData(format: ui.ImageByteFormat.png);
var pngBytes = byteData.buffer.asUint8List();
var bs64 = base64Encode(pngBytes);
print(pngBytes);
print(bs64);
return pngBytes;
} catch (e) {
print(e);
}
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text('Widget To Image demo'),
),
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
Text(
'Tapping the button below should capture the placeholder image',
),
RepaintBoundary(
key: _globalKey,
child: Offstage(
child: Container(
width: MediaQuery.of(context).size.width * 0.8,
child: Placeholder(),
),
),
),
RaisedButton(
child: Text('capture Image'),
onPressed: _capturePng,
),
],
),
),
);
}
}
```
I have tried doing it like this, and also switching Offstage and RepaintBoundary:
```
RepaintBoundary(
key: _globalKey,
child: Offstage(
child: Container(
width: MediaQuery.of(context).size.width * 0.8,
child: Placeholder(),
),
),
),
``` | framework,a: images,c: proposal,P3,team-framework,triaged-framework | high | Critical |
490,810,890 | rust | Tracking issue for iter_order_by | Landed in #62205
# Public API
```rust
pub mod core {
pub mod iter {
mod traits {
mod iterator {
pub trait Iterator {
fn cmp_by<I, F>(mut self, other: I, mut cmp: F) -> Ordering
where
Self: Sized,
I: IntoIterator,
F: FnMut(Self::Item, I::Item) -> Ordering,
{
}
fn partial_cmp_by<I, F>(
mut self,
other: I,
mut partial_cmp: F,
) -> Option<Ordering>
where
Self: Sized,
I: IntoIterator,
F: FnMut(Self::Item, I::Item) -> Option<Ordering>,
{
}
fn eq_by<I, F>(mut self, other: I, mut eq: F) -> bool
where
Self: Sized,
I: IntoIterator,
F: FnMut(Self::Item, I::Item) -> bool,
{
}
}
}
}
}
}
```
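For readers unfamiliar with the semantics, here is a Python sketch of what `cmp_by` does — a lexicographic comparison of two iterables using a caller-supplied per-element comparison. This is illustrative only (Python has no `Ordering` type, so -1/0/1 stand in for Less/Equal/Greater), not the Rust implementation:

```python
# Python sketch of cmp_by: lexicographic comparison of two iterables
# with a custom per-element comparison returning -1/0/1.
from itertools import zip_longest

_SENTINEL = object()

def cmp_by(xs, ys, cmp):
    for x, y in zip_longest(xs, ys, fillvalue=_SENTINEL):
        if x is _SENTINEL:
            return -1          # xs ran out first: Less
        if y is _SENTINEL:
            return 1           # ys ran out first: Greater
        c = cmp(x, y)
        if c != 0:
            return c           # first unequal pair decides
    return 0                   # same length, all pairs equal

spaceship = lambda a, b: (a > b) - (a < b)

# Case-insensitive comparison of two strings:
assert cmp_by("abc", "ABC", lambda a, b: spaceship(a.lower(), b.lower())) == 0
assert cmp_by([1, 2], [1, 2, 3], spaceship) == -1
```

`eq_by` is the same idea with a boolean predicate, and `partial_cmp_by` allows the element comparison itself to report "incomparable".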
# Before stabilization:
- [ ] Stabilization PR
# Open questions
- [ ] Should we use `size_hint` as an optimization for equality checking? That means buggy iterators may return incorrect results for `eq_by`. We could optimize for just a handful of `TrustedLen` iterators instead. | T-libs-api,B-unstable,C-tracking-issue,A-iterators,Libs-Tracked | low | Critical |
490,821,549 | rust | Cleanup: Refactor `rustc_typeck/check/op.rs` | `rustc_typeck/check/op.rs` has some seriously over-complicated methods, e.g. `fn check_overloaded_binop`, that obscure the happy path and do way too much. The file and especially the aforementioned method should be refactored. | C-cleanup,A-type-system,T-compiler,T-types | low | Minor |
490,832,054 | realworld | Bug: Removing all tags in an Article requires an array with an empty string | The API doesn't seem to be removing all tags when an article's `tagList` is an empty array.
Sending an array with an empty string properly removes all the tags. | Status: Approved,v2 changelog | low | Critical |
490,849,658 | go | x/mobile: gomobile bind does not support vendor | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
go version go1.13 windows/amd64
</pre>
### Does this issue reproduce with the latest release?
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
set GO111MODULE=auto
set GOARCH=amd64
set GOBIN=
set GOCACHE=C:\Users\Wang\AppData\Local\go-build
set GOENV=C:\Users\Wang\AppData\Roaming\go\env
set GOEXE=.exe
set GOFLAGS=
set GOHOSTARCH=amd64
set GOHOSTOS=windows
set GONOPROXY=
set GONOSUMDB=
set GOOS=windows
set GOPATH=D:\Work\go\gopath
set GOPRIVATE=
set GOPROXY=https://proxy.golang.org,direct
set GOROOT=D:\Work\go
set GOSUMDB=sum.golang.org
set GOTMPDIR=
set GOTOOLDIR=D:\Work\go\pkg\tool\windows_amd64
set GCCGO=gccgo
set AR=ar
set CC=gcc
set CXX=g++
set CGO_ENABLED=1
set GOMOD=D:\Work\go\gopath\src\github.com\fatedier\frp\go.mod
set CGO_CFLAGS=-g -O2
set CGO_CPPFLAGS=
set CGO_CXXFLAGS=-g -O2
set CGO_FFLAGS=-g -O2
set CGO_LDFLAGS=-g -O2
set PKG_CONFIG=pkg-config
set GOGCCFLAGS=-m64 -mthreads -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=C:\Users\Wang\AppData\Local\Temp\go-build704107050=/tmp/go-build -gno-record-gcc-switches
</pre></details>
### What did you do?
<!--
If possible, provide a recipe for reproducing the error.
A complete runnable program is good.
A link on play.golang.org is best.
-->
- go get github.com/fatedier/frp
- vi {GOPATH}/github.com/fatedier/frp/cmd/frpc/sub/export.go
```
package sub
import (
"time"
"math/rand"
_ "github.com/fatedier/frp/assets/frpc/statik"
"github.com/fatedier/golib/crypto"
)
// I want to export this function only
func Start(filePath string) {
crypto.DefaultSalt = "frp"
rand.Seed(time.Now().UnixNano())
runClient(filePath)
}
```
open terminal
`gomobile bind -target android github.com/fatedier/frp/cmd/frpc/sub`
### What did you expect to see?
gomobile creates a .aar and a .jar file
### What did you see instead?
```
gomobile: D:\Work\go\gopath\bin\gobind.exe -lang=go,java -outdir=C:\Users\Wang\AppData\Local\Temp\gomobile-work-225577183 github.com/fatedier/frp/cmd/frpc/sub failed: exit status 1
type-checking package "github.com/fatedier/frp/cmd/frpc/sub" failed (D:\Work\go\gopath\src\github.com\fatedier\frp\cmd\frpc\sub\http.go:22:2: could not import github.com/spf13/cobra (type-checking package "github.com/spf13/cobra" failed (D:\Work\go\gopath\pkg\mod\github.com\spf13\[email protected]\bash_completions.go:11:2: could not import github.com/spf13/pflag (cannot find package "github.com/spf13/pflag" in any of:
D:\Work\go\src\github.com\spf13\pflag (from $GOROOT)
D:\Work\go\gopath\src\github.com\spf13\pflag (from $GOPATH)))))
```
| NeedsInvestigation,mobile | low | Critical |
490,904,004 | flutter | Stream.periodic usage causes test to hang | I've been trying to get to the bottom of [this issue](https://github.com/ReactiveX/rxdart/issues/322) and have honed in on `Stream.periodic`. In trying to diagnose, I've hit a (probably related) issue, per below.
## Steps to Reproduce
```dart
import 'dart:async';
import 'package:fake_async/fake_async.dart';
import 'package:flutter_test/flutter_test.dart';
void main() {
testWidgets('repro with testWidgets (hangs)', (tester) async {
final periodic = Stream<int>.periodic(const Duration(seconds: 30));
final subscription = periodic.listen(null);
await tester.pump(const Duration(seconds: 3));
await subscription.cancel();
print('done');
});
test('repro with quiver (works)', () async {
final fakeAsync = FakeAsync();
// Below I am mimicking the same calls that are made to FakeAsync when using testWidgets
fakeAsync.flushMicrotasks();
fakeAsync.flushMicrotasks();
fakeAsync.flushMicrotasks();
final periodic = Stream<int>.periodic(const Duration(seconds: 30));
final subscription = periodic.listen(null);
fakeAsync.elapse(const Duration(seconds: 3));
fakeAsync.flushMicrotasks();
await subscription.cancel();
print('done');
fakeAsync.flushMicrotasks();
fakeAsync.flushMicrotasks();
fakeAsync.flushMicrotasks();
});
// I have no idea why both quiver and fake_async implement a FakeAsync helper or why Flutter uses the former, but I thought I'd try this one as well.
test('repro with fake_async package (works)', () async {
fakeAsync((f) async {
final periodic = Stream<int>.periodic(const Duration(seconds: 30));
final subscription = periodic.listen(null);
f.elapse(const Duration(seconds: 3));
await subscription.cancel();
print('done');
});
});
}
```
Running the `repro with quiver (works)` or `repro with fake_async package (works)` tests both work fine. But running the `repro with testWidgets (hangs)` test will hang after test execution.
## Flutter Doctor
```
[✓] Flutter (Channel stable, v1.9.1+hotfix.4, on Microsoft Windows [Version 10.0.18362.418], locale en-AU)
    • Flutter version 1.9.1+hotfix.4 at C:\Users\Kent\Repository\flutter
    • Framework revision cc949a8e8b (3 weeks ago), 2019-09-27 15:04:59 -0700
    • Engine revision b863200c37
    • Dart version 2.5.0
[✓] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
    • Android SDK at C:\Users\Kent\AppData\Local\Android\Sdk
    • Android NDK location not configured (optional; useful for native profiling support)
    • Platform android-29, build-tools 29.0.2
    • Java binary at: C:\Program Files\Android\Android Studio\jre\bin\java
    • Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b03)
    • All Android licenses accepted.
[✓] Android Studio (version 3.5)
    • Android Studio at C:\Program Files\Android\Android Studio
    • Flutter plugin version 39.0.3
    • Dart plugin version 191.8423
    • Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b03)
[✓] VS Code (version 1.39.2)
    • VS Code at C:\Users\Kent\AppData\Local\Programs\Microsoft VS Code
    • Flutter extension version 3.5.1
[!] Connected device
! No devices available
! Doctor found issues in 1 category.
``` | a: tests,framework,dependency: dart,has reproducible steps,P3,found in release: 3.3,found in release: 3.6,team-framework,triaged-framework | low | Minor |
491,178,538 | flutter | Building a relatively simple page takes more than 20ms of the UI thread time | Mirror issue for https://b.corp.google.com/issues/121238693 -- conversation should occur there. | framework,customer: dream (g3),P2,team-framework,triaged-framework | low | Minor |
491,179,677 | rust | Add other variants to std::io::ErrorKind enum | I was trying to rewrite Python code in Rust and I need to match against what could be `NotADirectory` (`ENOTDIR`) and saw that `ErrorKind`'s variants do not include many `errno`s present in `libc` - to speak only of the Unix platform - including the one I need.
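For context, this is the Python idiom being ported — branching on `errno.ENOTDIR` when a path turns out to be a regular file rather than a directory. In current Rust this requires falling back to `io::Error::raw_os_error()` instead of matching an `ErrorKind` variant. Sketch below (Unix semantics assumed):

```python
import errno
import os
import tempfile

# Branch on ENOTDIR: the kind of error handling the report wants to
# express as a std::io::ErrorKind variant in Rust.
def is_not_a_directory_error(path):
    try:
        os.listdir(path)
        return False
    except OSError as e:
        return e.errno == errno.ENOTDIR

with tempfile.NamedTemporaryFile() as f:
    # Listing a regular file as if it were a directory raises ENOTDIR.
    assert is_not_a_directory_error(f.name)
```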
Is there a reason why those variants were not implemented? Could this be my first contribution to Rust? | T-libs-api,C-feature-request | low | Critical |
491,183,760 | flutter | How to modify default gap between leading and title in ExpansionTile widget ? | ## Use case
I'm currently using an ExpansionTile widget in my Flutter app.
But the gap between the leading and the title is too wide.
I think it's about 16 pixels. (Is there any special reason why the default gap is 16 pixels?)
Based on further observation, I think the 16 pixels come from the line below in **list_tile.dart**.
**// The horizontal gap between the titles and the leading/trailing widgets
static const double _horizontalTitleGap = 16.0;**
So far I haven't found a proper way to modify the _horizontalTitleGap value.
Is there a proper way to modify _horizontalTitleGap?
## Proposal
There are several ways that might work to modify _horizontalTitleGap value:
1. Introduce a new parameter in the ExpansionTile widget that can modify _horizontalTitleGap.
2. Wrap ExpansionTile widget with a new widget that can modify _horizontalTitleGap. | framework,f: material design,c: proposal,P3,team-design,triaged-design | low | Minor |
491,184,721 | kubernetes | panic encoding the same object from multiple goroutines concurrently | Seen in https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/pull/69821/pull-kubernetes-integration/1170241808647589890
Serialization using `WithVersionEncoder#Encode` (the default for output) mutates objects. More details in https://github.com/kubernetes/kubernetes/pull/117174
https://github.com/kubernetes/kubernetes/blob/58e6e903b71580e52925ce630a15e4df4d9f7f47/staging/src/k8s.io/apimachinery/pkg/runtime/helper.go#L239-L241
This happens to be triggered from a test, but other contexts that encoded the same object from multiple goroutines could hit the same issue
```
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x402f95]
goroutine 105737 [running]:
strings.Count(0x0, 0x2, 0x46fdf46, 0x1, 0x1)
/usr/local/go/src/strings/strings.go:84 +0xf2
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/schema.ParseGroupVersion(0x0, 0x2, 0x8744b2, 0x3dfe500, 0xc000231410, 0xc01f8d4b38, 0xaf88cc0, 0x434c200)
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/schema/group_version.go:222 +0x99
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/schema.FromAPIVersionAndKind(0x0, 0x2, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/schema/group_version.go:296 +0x5a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/apis/meta/v1.(*TypeMeta).GroupVersionKind(0xc04cfdfc00, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/meta.go:128 +0x5e
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime.WithVersionEncoder.Encode(0x7d611c0, 0xc04cf533c0, 0x7fc20274a3e8, 0xc00010fc00, 0x7d741a0, 0xc000214690, 0x7d71be0, 0xc04cfdfc00, 0x7d56f00, 0xc04cfba450, ...)
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/helper.go:231 +0x177
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime.Encode(0x7d61120, 0xc04cf8b260, 0x7d71be0, 0xc04cfdfc00, 0xc04cfdfc00, 0x1, 0x0, 0x0, 0x0)
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/codec.go:46 +0x74
k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Body(0xc04cfca600, 0x46ca1c0, 0xc04cfdfc00, 0xc04cfca600)
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/rest/request.go:400 +0x238
k8s.io/kubernetes/vendor/k8s.io/client-go/kubernetes/typed/core/v1.(*pods).Create(0xc04cfccd20, 0xc04cfdfc00, 0xc, 0x7e5d9a0, 0xc04cfccd20)
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/kubernetes/typed/core/v1/pod.go:120 +0xd4
```
This mutation can cause race conditions or concurrent-modification errors. | kind/bug,sig/api-machinery,priority/important-longterm,lifecycle/frozen | low | Critical |
491,213,407 | go | x/build/devapp: handle module dependencies more intelligently in Dockerfile | The x/build/devapp [Dockerfile](https://github.com/golang/build/blob/master/devapp/Dockerfile) contains [extra steps](https://github.com/golang/build/blob/a2cf19446e2ca6ac8153b261ab49f9be9394b5f4/devapp/Dockerfile#L20-L29) to optimize the speed of repeated docker builds when the `go.mod` or `go.sum` files have not changed. It would be nice to have a more automated way of doing this that doesn't require us to remember to add some of our dependencies manually inside our Dockerfile.
See [golang.org/cl/193878](https://golang.org/cl/193878).
| Builders,NeedsInvestigation | low | Major |
491,225,638 | TypeScript | docs: clarify declaration merging behavior | ## Search Terms
<!-- List of keywords you searched for before creating this issue. Write them down here so that others can find this suggestion more easily -->
declaration merging imported interface module declaration merging
related issue #11406
## Suggestion
*[This is a docs improvement request](https://www.typescriptlang.org/docs/handbook/declaration-merging.html)*
As I found via some helpful feedback on Gitter, in response to this [S.O. issue](https://stackoverflow.com/questions/57848134/trouble-updating-an-interface-using-declaration-merging), in order for declaration merging to work across multiple files, all subsequent declarations (after the first) need to specify the *original file* that the declared interface was created in.
For example, given a module `@rschedule/core` with the following files
```ts
// @rschedule/core/es2015/packages/core/core/date-adapter
export interface DateAdapterType {
base: DateAdapterBase;
}
```
```ts
// @rschedule/core/index.ts
export * from './es2015/packages/core/core/date-adapter'
```
this *does not work* (though I think it is reasonable to expect that it would):
```ts
// file_a.ts
declare module '@rschedule/core' {
interface DateAdapterType {
one: string;
}
}
```
```ts
// file_b.ts
import { DateAdapterType } from '@rschedule/core';
import './file_a';
function test(date: DateAdapterType) {
date.base; // should be type `DateAdapterBase` but instead errors that `.base` doesn't exist
date.one; // is type `string`
}
```
This fails because `@rschedule/core` is re-exporting the `DateAdapterType` interface. The following works
```ts
// file_a.ts
declare module '@rschedule/core/es2015/packages/core/core/date-adapter' {
interface DateAdapterType {
one: string;
}
}
```
```ts
// file_b.ts
import { DateAdapterType } from '@rschedule/core';
import './file_a';
function test(date: DateAdapterType) {
date.base; // is type `DateAdapterBase`
date.one; // is type `string`
}
```
This existing API for declaration merging is problematic because the *true* location of the `DateAdapterType` interface within `@rschedule/core` is private to the module and could theoretically change between releases. Longer term, I'd like to see this implementation issue within typescript fixed (#33327), but in the immediate term a docs update could save future folks some pain.
I'll note that I was only able to figure out what was going on via #11406 from 2016. As @MoonStorm sagely put it in a comment at that time, asking for this documentation to be clarified:
> Could you please update the docs with this example, in the module augmentation section as well as in what's new ? Nobody is going to augment rxjs's Observable straight in the external module's own folder.
## Use Cases
Developer understanding and happiness
## Examples
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Docs | low | Critical |
491,226,815 | godot | Editor thinks singleton is not in scope until code is reloaded | **Godot version:**
Godot v3.1.1
**OS/device including version:**
Debian 9 64 bits
**Issue description:**
When you add a new .gd script to the project's autoload list and begin using it, the editor flags your use of the singleton's name as an error because it thinks the name isn't in scope (even though, of course, it is always in scope).
**Steps to reproduce:**
Create a new script.
Add it to the project's autoload list.
Use the singleton's name anywhere in your project. Editor marks it as an error and tells you that the name of the singleton is not defined on this scope. Running the project works fine, though.
Close Godot, open it again and load your project. The editor now recognizes singletons are always in scope and doesn't warn you about the "error" anymore.
**Minimal reproduction project:**
N/A | bug,topic:editor | low | Critical |
491,228,410 | flutter | PointerEvent should have meta info (isShiftPressed, isCommandPressed etc.) | `PointerEvent` should have meta info `isControlPressed`, `isAltPressed` similar to `RawKeyEvent`. This will allow to handle more "types" of clicks. Example use-case: shift + click = add to selection | c: new feature,framework,f: gestures,P3,team-framework,triaged-framework | low | Minor |
491,252,567 | TypeScript | Signature help does not select the correct overload for a partially completed function | <!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Even if you think you've found a *bug*, please read the FAQ first, especially the Common "Bugs" That Aren't Bugs section!
Please help us by doing the following steps before logging an issue:
* Search: https://github.com/Microsoft/TypeScript/search?type=Issues
* Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ
Please fill in the *entire* template below.
-->
From https://github.com/microsoft/vscode/issues/80570
<!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. -->
**TypeScript Version:** 3.7.0-dev.20190907
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:**
- SignatureHelp
- parameter hints
**Code**
For the code
```ts
declare function addEventHandler(event: "onPlayerJoin", x: number): void;
declare function addEventHandler(event: "onPlayerQuit", y: string): void;
addEventHandler('onPlayerQuit')
```
1. Trigger signature help after the closing `'` in the call to `addEventHandler`
**Expected behavior:**
Signature help for the second overload is returned
**Actual behavior:**
Signature help for both overloads is returned. We end up showing the one for `onPlayerJoin` first
 | Suggestion,Experience Enhancement | low | Critical |
491,295,712 | flutter | Embedder should prevent engine initialization if the ICU data file is not provided | Currently, via the embedder API, if the Shell is initialized without an ICU data file, it is assumed that the embedder has initialized ICU before the first instantiation of the shell. The engine does not warn of this implicit fallback in non-verbose launch modes.
The model works this way because ICU setup has to be done once per process and the mobile and Fuchsia shells have access to Fuchsia directly (and may need ICU initialized before shell initialization **and** not have the shell blow away the embedder set configuration). This is not the case for custom embedders and embedders can only initialize ICU via the Flutter engine (ICU symbols are hidden).
If embedders forget to pass a valid ICU data file, the engine crashes during the first text layout with an obscure error. This is poor UX, and we should simply prevent engine launches without an embedder-provided ICU data file. | engine,a: quality,e: embedder,P3,team-engine,triaged-engine | low | Critical |