id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---|
476,549,992 | flutter | iOS: Platform View stops responding to GestureInput | Hello,
I use an MKMapView (Apple Maps) inside a PlatformView, and it works fine.
However, after presenting a SearchView (with showSearch) and dismissing it, the PlatformView responds to touch events only one more time.
After that, nothing happens when you try to move or zoom the map.
Every other Flutter widget on screen still works fine.
This affects both debug and release builds, and it occurs on the simulator (most of the time) and on real devices (all the time).
I also uploaded a demo project to Google Drive so you can reproduce the bug.
The bug occurs most of the time; if it does not, please close and restart the app and follow the steps again.
Video of bug:
[Link](https://drive.google.com/open?id=1IkhF7CTyfA3iROKyqp8OkvOcBb7f9M9U)
Demo project:
[Link](https://drive.google.com/open?id=15Q3mqJCrsx6ttVlmNL56YIzfxkuDCOFk)
## Steps to Reproduce
<!--
Please tell us exactly how to reproduce the problem you are running into.
Please attach a small application (ideally just one main.dart file) that
reproduces the problem. You could use https://gist.github.com/ for this.
If the problem is with your application's rendering, then please attach
a screenshot and explain what the problem is.
-->
1. Move the map (then wait until touch is finished / animation finished)
2. Press SearchButton
3. Press dismiss button (in the top left corner)
4. Move the Map (then wait until touch is finished / animation finished)
5. Try to move the map again → the map is no longer moving/interactive
If the map is still interactive, repeat steps 2 through 5.
If the bug still does not occur, try closing and reopening the app and following the steps again.
## Logs
<!--
Run your application with `flutter run --verbose` and attach all the
log output below between the lines with the backticks. If there is an
exception, please see if the error message includes enough information
to explain how to solve the issue.
-->
```
[+16700 ms] [DEVICE LOG] 2019-08-03 19:13:35.431071+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124b5b, Description: send gesture actions
[ +38 ms] [DEVICE LOG] 2019-08-03 19:13:35.468868+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124b5c, Description: send gesture actions
[ ] [DEVICE LOG] 2019-08-03 19:13:35.469011+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124b5d, Description: send gesture actions
[ +14 ms] [DEVICE LOG] 2019-08-03 19:13:35.485203+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124b5e, Description: send gesture actions
[ ] [DEVICE LOG] 2019-08-03 19:13:35.485516+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124b5f, Description: send gesture actions
[ +16 ms] [DEVICE LOG] 2019-08-03 19:13:35.501697+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124e00, Description: send gesture actions
[ ] [DEVICE LOG] 2019-08-03 19:13:35.502253+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124e01, Description: send gesture actions
[ ] [DEVICE LOG] 2019-08-03 19:13:35.502346+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124e02, Description: send gesture actions
[ +199 ms] [DEVICE LOG] 2019-08-03 19:13:35.699206+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124e03, Description: send gesture actions
[ +2 ms] [DEVICE LOG] 2019-08-03 19:13:35.699953+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124e04, Description: send gesture actions
[ +181 ms] [DEVICE LOG] 2019-08-03 19:13:35.884511+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124e05, Description: send gesture actions
[ ] [DEVICE LOG] 2019-08-03 19:13:35.885436+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124e06, Description: send gesture actions
[ +193 ms] [DEVICE LOG] 2019-08-03 19:13:36.078856+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124e07, Description: send gesture actions
[ +2 ms] [DEVICE LOG] 2019-08-03 19:13:36.079591+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124e08, Description: send gesture actions
[ +178 ms] [DEVICE LOG] 2019-08-03 19:13:36.259984+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124e09, Description: send gesture actions
[ +1 ms] [DEVICE LOG] 2019-08-03 19:13:36.260367+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124e0a, Description: send gesture actions
[ +189 ms] [DEVICE LOG] 2019-08-03 19:13:36.450115+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124e0b, Description: send gesture actions
[ +1 ms] [DEVICE LOG] 2019-08-03 19:13:36.451050+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124e0c, Description: send gesture actions
[ +186 ms] [DEVICE LOG] 2019-08-03 19:13:36.639542+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124e0d, Description: send gesture actions
[ +2 ms] [DEVICE LOG] 2019-08-03 19:13:36.640424+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124e0e, Description: send gesture actions
[ +182 ms] [DEVICE LOG] 2019-08-03 19:13:36.822902+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124e0f, Description: send gesture actions
[ ] [DEVICE LOG] 2019-08-03 19:13:36.823640+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124e50, Description: send gesture actions
[ +183 ms] [DEVICE LOG] 2019-08-03 19:13:37.008348+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124e51, Description: send gesture actions
[ ] [DEVICE LOG] 2019-08-03 19:13:37.009338+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124e52, Description: send gesture actions
[ +184 ms] [DEVICE LOG] 2019-08-03 19:13:37.193897+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124e53, Description: send gesture actions
[ +2 ms] [DEVICE LOG] 2019-08-03 19:13:37.194664+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124e54, Description: send gesture actions
[ +185 ms] [DEVICE LOG] 2019-08-03 19:13:37.379020+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124e55, Description: send gesture actions
[ ] [DEVICE LOG] 2019-08-03 19:13:37.379704+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124e56, Description: send gesture actions
[ +181 ms] [DEVICE LOG] 2019-08-03 19:13:37.563925+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124e57, Description: send gesture actions
[ +2 ms] [DEVICE LOG] 2019-08-03 19:13:37.564301+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124e58, Description: send gesture actions
[ +357 ms] [DEVICE LOG] 2019-08-03 19:13:37.922696+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124e59, Description: send gesture actions
[ ] [DEVICE LOG] 2019-08-03 19:13:37.922928+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124e5a, Description: send gesture actions
[ ] [DEVICE LOG] 2019-08-03 19:13:37.923770+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124e5b, Description: send gesture actions
[+2181 ms] [DEVICE LOG] 2019-08-03 19:13:40.107651+0200 localhost Runner[30814]: (AXRuntime) [com.apple.Accessibility:AXRuntimeCommon] This class 'FlutterSemanticsObject' is not a known serializable element and returning it as an
accessibility element may lead to crashes
[ +13 ms] [DEVICE LOG] 2019-08-03 19:13:40.120996+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124e5c, Description: send gesture actions
[ +92 ms] [DEVICE LOG] 2019-08-03 19:13:40.210532+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124e5d, Description: send gesture actions
[ +14 ms] [DEVICE LOG] 2019-08-03 19:13:40.227109+0200 localhost Runner[30814]: (Flutter) flutter: Test
[ ] flutter: Test
[ +390 ms] [DEVICE LOG] 2019-08-03 19:13:40.618156+0200 localhost Runner[30814]: (CallKit) [com.apple.calls.callkit:Default] Call host has no calls
[ +9 ms] [DEVICE LOG] 2019-08-03 19:13:40.627397+0200 localhost Runner[30814]: (CoreFoundation) Created Activity ID: 0x124e5e, Description: Updating Key-Value Observers Of Preferences
[ +4 ms] [DEVICE LOG] 2019-08-03 19:13:40.630439+0200 localhost Runner[30814]: (UIKitCore) [com.apple.UIKit:FeedbackActivation] activate generator with style: TurnOn; activationCount: 0 -> 1; styleActivationCount: 0 -> 1;
<_UIKeyboardFeedbackGenerator: 0x600007e00000>
[ +1 ms] [DEVICE LOG] 2019-08-03 19:13:40.630645+0200 localhost Runner[30814]: (UIKitCore) [com.apple.UIKit:Feedback] activate engine <_UIFeedbackSystemSoundEngine: 0x60000239a140>, clientCount: 0 -> 1
[ +1 ms] [DEVICE LOG] 2019-08-03 19:13:40.630729+0200 localhost Runner[30814]: (UIKitCore) [com.apple.UIKit:Feedback] activating engine <_UIFeedbackSystemSoundEngine: 0x60000239a140>
[ ] [DEVICE LOG] 2019-08-03 19:13:40.631095+0200 localhost Runner[30814]: (UIKitCore) [com.apple.UIKit:Feedback] engine <_UIFeedbackSystemSoundEngine: 0x60000239a140: state=3, numberOfClients=1, prewarmCount=0,
_internal_isSuspended=0> state changed: Inactive -> Activating
[ +1 ms] [DEVICE LOG] 2019-08-03 19:13:40.631289+0200 localhost Runner[30814]: (UIKitCore) [com.apple.UIKit:Feedback] engine <_UIFeedbackSystemSoundEngine: 0x60000239a140: state=4, numberOfClients=1, prewarmCount=0,
_internal_isSuspended=0> state changed: Activating -> Running
[ ] [DEVICE LOG] 2019-08-03 19:13:40.635018+0200 localhost Runner[30814]: (RunningBoardServices) Created Activity ID: 0x124e5f, Description: didChangeInheritances
[+1595 ms] [DEVICE LOG] 2019-08-03 19:13:42.233585+0200 localhost Runner[30814]: (AXRuntime) [com.apple.Accessibility:AXRuntimeCommon] This class 'FlutterSemanticsObject' is not a known serializable element and returning it as an
accessibility element may lead to crashes
[ +19 ms] [DEVICE LOG] 2019-08-03 19:13:42.252617+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124ef0, Description: send gesture actions
[ +62 ms] [DEVICE LOG] 2019-08-03 19:13:42.315455+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124ef1, Description: send gesture actions
[ +25 ms] [DEVICE LOG] 2019-08-03 19:13:42.336119+0200 localhost Runner[30814]: (UIKitCore) [com.apple.UIKit:FeedbackActivation] deactivate generator with style: TurnOn; activationCount: 1 -> 0; styleActivationCount: 1 -> 0;
<_UIKeyboardFeedbackGenerator: 0x600007e00000>
[ ] [DEVICE LOG] 2019-08-03 19:13:42.336203+0200 localhost Runner[30814]: (UIKitCore) [com.apple.UIKit:Feedback] deactivate engine <_UIFeedbackSystemSoundEngine: 0x60000239a140>, clientCount: 1 -> 0
[ ] [DEVICE LOG] 2019-08-03 19:13:42.336260+0200 localhost Runner[30814]: (UIKitCore) [com.apple.UIKit:Feedback] _internal_deactivateEngineIfPossible <_UIFeedbackSystemSoundEngine: 0x60000239a140>, clientCount: 0, suspended: 0
[ ] [DEVICE LOG] 2019-08-03 19:13:42.336313+0200 localhost Runner[30814]: (UIKitCore) [com.apple.UIKit:Feedback] _internal_deactivateEngineIfPossible <_UIFeedbackSystemSoundEngine: 0x60000239a140> tearedDown: 1
[ ] [DEVICE LOG] 2019-08-03 19:13:42.336436+0200 localhost Runner[30814]: (UIKitCore) [com.apple.UIKit:Feedback] engine <_UIFeedbackSystemSoundEngine: 0x60000239a140: state=0, numberOfClients=0, prewarmCount=0,
_internal_isSuspended=0> state changed: Running -> Inactive
[+4715 ms] [DEVICE LOG] 2019-08-03 19:13:47.059008+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124ef2, Description: send gesture actions
[ +16 ms] [DEVICE LOG] 2019-08-03 19:13:47.075408+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124ef3, Description: send gesture actions
[ ] [DEVICE LOG] 2019-08-03 19:13:47.075785+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124ef4, Description: send gesture actions
[ +197 ms] [DEVICE LOG] 2019-08-03 19:13:47.269943+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124ef5, Description: send gesture actions
[ +190 ms] [DEVICE LOG] 2019-08-03 19:13:47.458765+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124ef6, Description: send gesture actions
[ +176 ms] [DEVICE LOG] 2019-08-03 19:13:47.638649+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124ef7, Description: send gesture actions
[ +186 ms] [DEVICE LOG] 2019-08-03 19:13:47.825529+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124ef8, Description: send gesture actions
[ +347 ms] [DEVICE LOG] 2019-08-03 19:13:48.171989+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124ef9, Description: send gesture actions
[ ] [DEVICE LOG] 2019-08-03 19:13:48.172199+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124efa, Description: send gesture actions
[+2893 ms] [DEVICE LOG] 2019-08-03 19:13:51.067636+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124efb, Description: send gesture actions
[ +736 ms] [DEVICE LOG] 2019-08-03 19:13:51.804476+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124efc, Description: send gesture actions
[ +722 ms] [DEVICE LOG] 2019-08-03 19:13:52.526711+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124efd, Description: send gesture actions
[ +496 ms] [DEVICE LOG] 2019-08-03 19:13:53.022698+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124efe, Description: send gesture actions
[ +460 ms] [DEVICE LOG] 2019-08-03 19:13:53.483232+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x124eff, Description: send gesture actions
[+1404 ms] [DEVICE LOG] 2019-08-03 19:13:54.887590+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x125110, Description: send gesture actions
[ +367 ms] [DEVICE LOG] 2019-08-03 19:13:55.255056+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x125111, Description: send gesture actions
[ +337 ms] [DEVICE LOG] 2019-08-03 19:13:55.592560+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x125112, Description: send gesture actions
[ +229 ms] [DEVICE LOG] 2019-08-03 19:13:55.821816+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x125113, Description: send gesture actions
[ +265 ms] [DEVICE LOG] 2019-08-03 19:13:56.087905+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x125114, Description: send gesture actions
[ +214 ms] [DEVICE LOG] 2019-08-03 19:13:56.302801+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x125115, Description: send gesture actions
[ +231 ms] [DEVICE LOG] 2019-08-03 19:13:56.534156+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x125116, Description: send gesture actions
[ +244 ms] [DEVICE LOG] 2019-08-03 19:13:56.778616+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x125117, Description: send gesture actions
[ +247 ms] [DEVICE LOG] 2019-08-03 19:13:57.025819+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x125118, Description: send gesture actions
[ +201 ms] [DEVICE LOG] 2019-08-03 19:13:57.226882+0200 localhost Runner[30814]: (UIKitCore) Created Activity ID: 0x125119, Description: send gesture actions
```
<!--
Run `flutter analyze` and attach any output of that command below.
If there are any analysis errors, try resolving them before filing this issue.
-->
```
Maximilians-iMac:testproject maximilian$ flutter analyze
Analyzing testproject...
No issues found! (ran in 2.7s)
```
<!-- Finally, paste the output of running `flutter doctor -v` here. -->
```
[✓] Flutter (Channel dev, v1.8.3, on Mac OS X 10.14.5 18F132, locale de-DE)
• Flutter version 1.8.3 at /Users/maximilian/Documents/Flutter/flutter-sdk
• Framework revision e4ebcdf6f4 (7 days ago), 2019-07-27 11:48:24 -0700
• Engine revision 38ac5f30a7
• Dart version 2.5.0 (build 2.5.0-dev.1.0 0ca1582afd)
[✓] Android toolchain - develop for Android devices (Android SDK version 29.0.0)
• Android SDK at /Users/maximilian/Library/Android/sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-29, build-tools 29.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 11.0)
• Xcode at /Applications/Xcode-beta.app/Contents/Developer
• Xcode 11.0, Build version 11M382q
• CocoaPods version 1.6.1
[✓] Android Studio (version 3.4)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin version 38.1.1
• Dart plugin version 183.6270
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
[✓] VS Code (version 1.36.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.3.0
[✓] Connected device (1 available)
• iPhone Xʀ • D41B7C5A-A833-4394-BAF7-9AD58713D067 • ios • com.apple.CoreSimulator.SimRuntime.iOS-13-0 (simulator)
• No issues found!
```
| platform-ios,framework,f: gestures,a: platform-views,has reproducible steps,P2,found in release: 2.1,team-ios,triaged-ios | low | Critical |
476,553,855 | pytorch | Better documentation about PyTorch's dependencies | ## 🚀 Feature
<!-- A clear and concise description of the feature proposal -->
Rather than, or in addition to, having dependencies as submodules, it would be helpful to list them separately, indicating typical configurations and giving the user the option of telling CMake where the dependencies are installed.
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->
It can be frustrating to build from source when dependencies break the build; for example, issues with the automated installation of MKL-DNN and Caffe seem to break the default installation. Furthermore, each of the dependencies has a rather wide choice of configuration options that can affect performance.
## Pitch
<!-- A clear and concise description of what you want to happen. -->
An improved installation README.md document that lists the dependencies, where they are used, how they impact performance, and which ones are critical.
## Alternatives
<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->
Use of Spack for installation - https://spack.io/
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
| module: build,module: docs,triaged,module: third_party | low | Major |
476,561,834 | terminal | Support importing themes from other terminal apps | # Description of the new feature/enhancement
Fluent Terminal has the really neat ability to import iTerm themes, which allows me to use themes from [rainglow](https://rainglow.io/).
I would love it if the new Windows Terminal were able to do the same. That way, the community isn't waiting for all the various theme authors to add support for it (if ever!).
# Proposed technical implementation details (optional)
I'll leave the details up to you, but the basic idea would be to look at all the applications rainglow has themes for and perhaps implement Hyper and iTerm theme import functionality. | Issue-Feature,Area-Settings,Product-Terminal | low | Minor |
476,577,784 | rust | rust-lldb: cannot get backtrace on macOS | I have a breakpoint set on `rust_panic`, but when I hit it and then run `bt`, I get:
```
error: need to add support for DW_TAG_base_type '()' encoded with DW_ATE = 0x7, bit_size = 0
Traceback (most recent call last):
File "/Users/alex/Software/rust-devel/src/etc/lldb_rust_formatters.py", line 89, in print_val
is_tuple_like = False)
File "/Users/alex/Software/rust-devel/src/etc/lldb_rust_formatters.py", line 210, in print_struct_val
body = separator.join([render_child(idx) for idx in range(field_start_index, len(fields))])
File "/Users/alex/Software/rust-devel/src/etc/lldb_rust_formatters.py", line 203, in render_child
return this + print_val(field_val.get_wrapped_value(), internal_dict)
File "/Users/alex/Software/rust-devel/src/etc/lldb_rust_formatters.py", line 89, in print_val
is_tuple_like = False)
File "/Users/alex/Software/rust-devel/src/etc/lldb_rust_formatters.py", line 210, in print_struct_val
body = separator.join([render_child(idx) for idx in range(field_start_index, len(fields))])
File "/Users/alex/Software/rust-devel/src/etc/lldb_rust_formatters.py", line 203, in render_child
return this + print_val(field_val.get_wrapped_value(), internal_dict)
File "/Users/alex/Software/rust-devel/src/etc/lldb_rust_formatters.py", line 89, in print_val
is_tuple_like = False)
File "/Users/alex/Software/rust-devel/src/etc/lldb_rust_formatters.py", line 210, in print_struct_val
body = separator.join([render_child(idx) for idx in range(field_start_index, len(fields))])
File "/Users/alex/Software/rust-devel/src/etc/lldb_rust_formatters.py", line 203, in render_child
return this + print_val(field_val.get_wrapped_value(), internal_dict)
File "/Users/alex/Software/rust-devel/src/etc/lldb_rust_formatters.py", line 89, in print_val
is_tuple_like = False)
File "/Users/alex/Software/rust-devel/src/etc/lldb_rust_formatters.py", line 210, in print_struct_val
body = separator.join([render_child(idx) for idx in range(field_start_index, len(fields))])
File "/Users/alex/Software/rust-devel/src/etc/lldb_rust_formatters.py", line 203, in render_child
return this + print_val(field_val.get_wrapped_value(), internal_dict)
File "/Users/alex/Software/rust-devel/src/etc/lldb_rust_formatters.py", line 89, in print_val
is_tuple_like = False)
File "/Users/alex/Software/rust-devel/src/etc/lldb_rust_formatters.py", line 210, in print_struct_val
body = separator.join([render_child(idx) for idx in range(field_start_index, len(fields))])
File "/Users/alex/Software/rust-devel/src/etc/lldb_rust_formatters.py", line 203, in render_child
return this + print_val(field_val.get_wrapped_value(), internal_dict)
File "/Users/alex/Software/rust-devel/src/etc/lldb_rust_formatters.py", line 89, in print_val
is_tuple_like = False)
File "/Users/alex/Software/rust-devel/src/etc/lldb_rust_formatters.py", line 210, in print_struct_val
body = separator.join([render_child(idx) for idx in range(field_start_index, len(fields))])
File "/Users/alex/Software/rust-devel/src/etc/lldb_rust_formatters.py", line 203, in render_child
return this + print_val(field_val.get_wrapped_value(), internal_dict)
File "/Users/alex/Software/rust-devel/src/etc/lldb_rust_formatters.py", line 89, in print_val
is_tuple_like = False)
File "/Users/alex/Software/rust-devel/src/etc/lldb_rust_formatters.py", line 210, in print_struct_val
body = separator.join([render_child(idx) for idx in range(field_start_index, len(fields))])
File "/Users/alex/Software/rust-devel/src/etc/lldb_rust_formatters.py", line 203, in render_child
return this + print_val(field_val.get_wrapped_value(), internal_dict)
File "/Users/alex/Software/rust-devel/src/etc/lldb_rust_formatters.py", line 89, in print_val
is_tuple_like = False)
File "/Users/alex/Software/rust-devel/src/etc/lldb_rust_formatters.py", line 210, in print_struct_val
body = separator.join([render_child(idx) for idx in range(field_start_index, len(fields))])
File "/Users/alex/Software/rust-devel/src/etc/lldb_rust_formatters.py", line 203, in render_child
return this + print_val(field_val.get_wrapped_value(), internal_dict)
File "/Users/alex/Software/rust-devel/src/etc/lldb_rust_formatters.py", line 105, in print_val
return print_std_vec_val(val, internal_dict)
File "/Users/alex/Software/rust-devel/src/etc/lldb_rust_formatters.py", line 254, in print_std_vec_val
internal_dict)
File "/Users/alex/Software/rust-devel/src/etc/lldb_rust_formatters.py", line 289, in print_array_of_values
return ', '.join([render_element(i) for i in range(length)])
File "/Users/alex/Software/rust-devel/src/etc/lldb_rust_formatters.py", line 287, in render_element
return print_val(element_val, internal_dict)
File "/Users/alex/Software/rust-devel/src/etc/lldb_rust_formatters.py", line 89, in print_val
is_tuple_like = False)
File "/Users/alex/Software/rust-devel/src/etc/lldb_rust_formatters.py", line 210, in print_struct_val
body = separator.join([render_child(idx) for idx in range(field_start_index, len(fields))])
File "/Users/alex/Software/rust-devel/src/etc/lldb_rust_formatters.py", line 203, in render_child
return this + print_val(field_val.get_wrapped_value(), internal_dict)
TypeError: cannot concatenate 'str' and 'NoneType' objects
error: librustc_driver-f561ecb0e4be7b67.dylib DWARF DIE at 0x076fab13 (class closure) has a member variable 0x076fab1a (__0) whose type is a forward declaration, not a complete definition.
Try compiling the source file with -fstandalone-debug
Illegal instruction: 4
```
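The final `TypeError: cannot concatenate 'str' and 'NoneType' objects` at the bottom of the traceback is the classic Python 2 failure mode of a helper that falls through without an explicit return. A minimal sketch of that failure mode (hypothetical names; this is not the actual `lldb_rust_formatters.py` code):

```python
def print_val(val):
    # Renders values it knows about; for an unhandled type it falls
    # through the if-chain and implicitly returns None.
    if isinstance(val, int):
        return str(val)
    # missing fallback return -> None for anything else

def render_child(prefix, val):
    # Concatenation raises TypeError as soon as print_val returns None,
    # matching the last frame of the traceback above.
    return prefix + print_val(val)
```

A defensive fix in the formatter would be to substitute a placeholder, e.g. `prefix + (print_val(val) or '<unknown>')`, whenever rendering fails.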
| O-macos,A-debuginfo,T-compiler,C-bug | low | Critical |
476,580,656 | godot | C++ Code has lack of proper reference usage and move semantics | While I was debugging the glTF importer code, which was causing some problems, I found significant issues in the source code:
There is a widespread lack of references and const correctness in the code.
Large copies of data are made for read-only purposes throughout the glTF importer; in most cases you can simply replace the type with a `const &` version.
Here are some examples:
```cpp
Error EditorSceneImporterGLTF::_parse_buffers(GLTFState &state, const String &p_base_path) {
if (!state.json.has("buffers"))
return OK;
// Note: Unnecessary copy
// const Array &buffers = state.json["buffers"];
Array buffers = state.json["buffers"];
for (int i = 0; i < buffers.size(); i++) {
if (i == 0 && state.glb_data.size()) {
state.buffers.push_back(state.glb_data);
} else {
// Note: Unnecessary copy
// const Dictionary &buffer = buffers[i];
Dictionary buffer = buffers[i];
if (buffer.has("uri")) {
Vector<uint8_t> buffer_data;
String uri = buffer["uri"];
if (uri.findn("data:application/octet-stream;base64") == 0) {
//embedded data
buffer_data = _parse_base64_uri(uri);
} else {
uri = p_base_path.plus_file(uri).replace("\\", "/"); //fix for windows
buffer_data = FileAccess::get_file_as_array(uri);
ERR_FAIL_COND_V(buffer.size() == 0, ERR_PARSE_ERROR);
}
ERR_FAIL_COND_V(!buffer.has("byteLength"), ERR_PARSE_ERROR);
int byteLength = buffer["byteLength"];
ERR_FAIL_COND_V(byteLength < buffer_data.size(), ERR_PARSE_ERROR);
state.buffers.push_back(buffer_data);
}
}
}
print_verbose("glTF: Total buffers: " + itos(state.buffers.size()));
return OK;
}
```
This importer is also a good example of where move semantics would improve performance quite a bit.
These performance implications are significant, especially when it takes over a minute to import a simple scene with an armature and six meshes. | discussion,topic:core,topic:import | low | Critical |
476,583,115 | rust | rustdoc: Built-in macros are not documented in all the necessary locations | NOTE: Some cases below assume https://github.com/rust-lang/rust/pull/63056 has landed.
- :negative_squared_cross_mark: https://doc.rust-lang.org/nightly/core/default/index.html doesn't contain the derive macro `Default`.
:heavy_check_mark: Compare with the https://doc.rust-lang.org/nightly/std/fmt/index.html page which contains the derive macro `Debug` because it's introduced through a reexport rather than directly.
- :negative_squared_cross_mark: Some built-in macros not available through the libcore root are documented in the root instead (e.g. derives) - https://doc.rust-lang.org/nightly/core/index.html.
- :heavy_check_mark: libcore prelude is documented correctly https://doc.rust-lang.org/nightly/core/prelude/v1/index.html
- :negative_squared_cross_mark: https://doc.rust-lang.org/nightly/std/default/trait.Default.html doesn't contain the derive macro `Default`.
:negative_squared_cross_mark: Note that https://doc.rust-lang.org/nightly/std/fmt/index.html doesn't contain the derive macro `Debug` as well.
- :negative_squared_cross_mark: Derive and attribute macros in the libstd prelude https://doc.rust-lang.org/nightly/std/prelude/v1/index.html currently have to be marked with `#[doc(hidden)]` because otherwise they generate dead links, failing the `linkchecker` tests.
As a result, they do not appear on the page.
What is worse, due to `#[doc(hidden)]` some traits with names matching derives (e.g. `Default`) are also not documented at that location. | T-rustdoc,A-macros,C-bug | low | Critical |
476,593,263 | TypeScript | Support string types in dynamic import types. | <!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Please read the FAQ first, especially the "Common Feature Requests" section.
-->
## Search Terms
<!-- List of keywords you searched for before creating this issue. Write them down here so that others can find this suggestion more easily -->
dynamic import type
## Suggestion
<!-- A summary of what you'd like to see added or changed -->
I'd like to define a function that wraps a dynamic `require` or `import`. Specifically, I'm writing a library that uses Node's worker_threads module, and I want to support 'dependency injection' via dynamic import. You can't pass class instances or functions across execution contexts.
```typescript
import { isMainThread, Worker, workerData } from 'worker_threads';

export type LoggerImportPath = ???; // what should this type be?

export default function lib(logger: LoggerImportPath): Worker {
  return new Worker(__filename, { workerData: { logger } });
}

if (!isMainThread) {
  const module: { default: Logger } = await import(workerData.logger);
}
```
```ts
// no type error: ./path/to/logger exports a Logger.
lib(`${__filename}/path/to/logger`);
// this should be a type error: import('not-a-module') is not assignable to { default: Logger }.
lib('not-a-module');
```
I don't think you can currently define `LoggerImportPath` any more precisely than `string`, and `import(path: string)` returns `Promise<any>`.
```ts
// this returns { default: Logger }
await import('path');
// this returns any
const wrapper = path => import(path);
const module: any = await wrapper('path');
```
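As a point of comparison only (not part of the proposal), Python's dynamic import has the same limitation: a module loaded from a string path cannot be typed more precisely than a generic module object, which mirrors `import(path: string)` returning `Promise<any>`. Names below are illustrative:

```python
import importlib
from types import ModuleType

def load_by_path(module_path: str) -> ModuleType:
    # The string argument carries no static information about what the
    # target module exports, so the best available return type is the
    # generic ModuleType -- the analogue of Promise<any>.
    return importlib.import_module(module_path)

json_mod = load_by_path('json')
assert hasattr(json_mod, 'dumps')  # exports are only checkable at runtime
```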
## Use Cases
<!--
What do you want to use this for?
What shortcomings exist with current approaches?
-->
I haven't seen this pattern in any other libraries, but I think it's a good way to pass classes or functions across threads or execution contexts. You can't currently limit the kinds of strings that library users can pass when a module path is expected.
## Examples
<!-- Show how this would be used and what the behavior would be -->
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Critical |
476,593,276 | flutter | [google_maps_flutter] Compass doesn't show on map rotation until hot reload on Android | On Android, the compass does not show after map rotation.
When we click on a marker or trigger a hot reload, it starts working. | platform-android,customer: crowd,p: maps,package,has reproducible steps,P2,found in release: 2.0,found in release: 2.3,team-android,triaged-android | low | Critical |
476,599,679 | pytorch | SummaryWriter doesn't read comment if log_dir precised | ## 🐛 Bug
The user's comment is appended to the log_dir path name only if log_dir is not specified. Saving different event files in the same folder results in files 'log_dir/filename.0', 'log_dir/filename.1', ..., which TensorBoard reads as the same run, messing up the graphs, histograms, etc.
## To Reproduce
Steps to reproduce the behavior:
1. Create 2 different SummaryWriter instances with the same log_dir
```
from torch.utils.tensorboard import SummaryWriter
tb1 = SummaryWriter(log_dir = './dir', comment='Training')
tb2 = SummaryWriter(log_dir = './dir', comment='Validation')
```
2. Write scalars with the same tag and close the writers:
```
import numpy as np
for i in range(100):
tb1.add_scalar('Loss', np.cos(i), i)
tb2.add_scalar('Loss', np.sin(i),i)
tb1.close()
tb2.close()
```
3. Open TensorBoard and see the two files read as one, and the resulting graph bug (a line across the graph in this example)
## Expected behavior
The comment should be appended outside the `if not log_dir` block in:
https://github.com/pytorch/pytorch/blob/master/torch/utils/tensorboard/writer.py#L206-L212
```
class SummaryWriter(object):
def __init__(self, log_dir=None, comment='', purge_step=None, max_queue=10,
[...]
if not log_dir:
import socket
from datetime import datetime
current_time = datetime.now().strftime('%b%d_%H-%M-%S')
log_dir = os.path.join(
'runs', current_time + '_' + socket.gethostname() + comment)
self.log_dir = log_dir
```
Something like:
```
class SummaryWriter(object):
def __init__(self, log_dir=None, comment='', purge_step=None, max_queue=10,
[...]
if not log_dir:
import socket
from datetime import datetime
current_time = datetime.now().strftime('%b%d_%H-%M-%S')
log_dir = os.path.join(
'runs', current_time + '_' + socket.gethostname() )
self.log_dir = log_dir + '_' + comment
```
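To make the proposed change concrete, here is a minimal, self-contained sketch of the path logic (the helper name `build_log_dir` is illustrative, not part of the actual SummaryWriter API):

```python
import os

def build_log_dir(log_dir=None, comment=''):
    """Sketch of the proposed fix: always append the comment,
    not only when log_dir is omitted (name is illustrative)."""
    if not log_dir:
        from datetime import datetime
        # Mirrors SummaryWriter's default 'runs/<timestamp>' layout
        current_time = datetime.now().strftime('%b%d_%H-%M-%S')
        log_dir = os.path.join('runs', current_time)
    return log_dir + '_' + comment if comment else log_dir

# Two writers pointed at './dir' would now get distinct event dirs:
print(build_log_dir('./dir', 'Training'))    # ./dir_Training
print(build_log_dir('./dir', 'Validation'))  # ./dir_Validation
```

With this logic, the two writers in the reproduction above would log into `./dir_Training` and `./dir_Validation`, so TensorBoard shows two separate runs instead of merging them.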
## Additional context
First time reporting something, so your advice is welcome | triaged,module: tensorboard | low | Critical |
476,627,555 | opencv | Get unexpected results when using setMouseCallback on windows with CV_WINDOW_OPENGL | I get wrong coordinate points when clicking the mouse in a window created with CV_WINDOW_OPENGL, while the coordinate points obtained from a window without CV_WINDOW_OPENGL are correct.
```
std::vector<cv::Point> vecTemp;
void OnMouseAction(int event, int x, int y, int flags, void *para)
{
switch (event)
{
case CV_EVENT_LBUTTONDOWN:
vecTemp.push_back(cv::Point2d(x, y));
}
}
```
```
cv::Mat cmPerDstFront;
cv::namedWindow("FRONT", CV_WINDOW_OPENGL | CV_WINDOW_AUTOSIZE);
cv::imshow("FRONT", cmPerDstFront);
cv::waitKey(1);
while (1)
{
int key = cv::waitKey(10);
cv::setMouseCallback("FRONT", OnMouseAction);
if (key == 'q')
break;
}
vPstFront = vecTemp;
```
```
output vPstFront : [104, 308;105, 738]
```
```
cv::Mat cmPerDstFront;
cv::namedWindow("FRONT", CV_WINDOW_AUTOSIZE);
cv::imshow("FRONT", cmPerDstFront);
cv::waitKey(1);
while (1)
{
int key = cv::waitKey(10);
cv::setMouseCallback("FRONT", OnMouseAction);
if (key == 'q')
break;
}
vPstFront = vecTemp;
```
```
output vPstFront : [104, 313;105, 751]
``` | category: highgui-gui | low | Minor |
476,638,190 | flutter | [webview_flutter] Add Windows support | I want to write one piece of code that opens a web page on Android, iOS, and Windows, but this package (https://pub.dev/packages/webview_flutter) does not support the Windows platform. When will it be supported?
## Current status (last updated June 2024)
Blocked on https://github.com/flutter/flutter/issues/31713. There will not be any update on this issue until platform view support has been implemented.
Please [**do not comment to ask for updates or timelines**](https://github.com/flutter/flutter/blob/master/docs/contributing/issue_hygiene/README.md#do-not-add-me-too-or-same-or-is-there-an-update-comments-to-bugs). | c: new feature,platform-windows,p: webview,package,a: desktop,P2,team-windows,triaged-windows | low | Critical |
476,667,820 | rust | Inconsistent optimization | ```rust
use std::ops::*;
pub struct Number([u64; 4]);
impl Add for Number {
type Output = Self;
#[inline]
fn add(self, other: Self) -> Self::Output {
let mut accum: Self::Output = unsafe { core::mem::uninitialized() };
let mut carry = false;
for i in 0..self.0.len() {
let x = self.0[i] as u128 + other.0[i] as u128 + carry as u128;
carry = x > core::u64::MAX as u128;
accum.0[i] = x as u64;
}
accum
}
}
pub fn add_a(l: Number, r: Number) -> Number {
l + r
}
pub fn add_b(l: [u64; 4], r: [u64; 4]) -> [u64; 4] {
let mut accum: [u64; 4] = unsafe { core::mem::uninitialized() };
let mut carry = false;
for i in 0..4 {
let x = l[i] as u128 + r[i] as u128 + carry as u128;
carry = x > core::u64::MAX as u128;
accum[i] = x as u64;
}
accum
}
impl Sub for Number {
type Output = Self;
#[inline]
fn sub(self, other: Self) -> Self::Output {
let mut accum: Self::Output = unsafe { core::mem::uninitialized() };
let mut carry = false;
for i in 0..self.0.len() {
let x = self.0[i] as u128 - other.0[i] as u128 - carry as u128;
carry = x > core::u64::MAX as u128;
accum.0[i] = x as u64;
}
accum
}
}
pub fn sub_a(l: Number, r: Number) -> Number {
l - r
}
pub fn sub_b(l: [u64; 4], r: [u64; 4]) -> [u64; 4] {
let mut accum: [u64; 4] = unsafe { core::mem::uninitialized() };
let mut carry = false;
for i in 0..4 {
let x = l[i] as u128 - r[i] as u128 - carry as u128;
carry = x > core::u64::MAX as u128;
accum[i] = x as u64;
}
accum
}
```
https://godbolt.org/z/aqhD1B
The above link demonstrates four functions of the same algorithm. The generated assembly is widely divergent based on context. It would be great if rust could output the same code for all of them. Alternatively, someone could recommend an algorithm that produces consistent output. | A-LLVM,I-slow,C-enhancement,T-compiler,C-optimization | low | Minor |
476,669,188 | TypeScript | Max depth limit does not trigger. Gives up and resolves type to any | <!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Even if you think you've found a *bug*, please read the FAQ first, especially the Common "Bugs" That Aren't Bugs section!
Please help us by doing the following steps before logging an issue:
* Search: https://github.com/Microsoft/TypeScript/search?type=Issues
* Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ
Please fill in the *entire* template below.
-->
<!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. -->
**TypeScript Version:** 3.5.1
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:**
+ Type instantiation is excessively deep and possibly infinite.ts(2589)
+ Chaining methods
+ Fluent API
+ Generic class
+ Generic method
**Code**
Using `DataT`,
```ts
type AppendToArray<ArrT extends any[], T> = (
(
ArrT[number]|
T
)[]
);
interface Data {
arr : number[]
};
type AppendToFoo<C extends Data, N extends number> = (
Foo<{arr:AppendToArray<C["arr"], N>}>
);
class Foo<DataT extends Data> {
arr! : DataT["arr"];
//Using `DataT`
x<N extends number> (n:N) : AppendToFoo<DataT, N> {
return null!;
}
}
declare const foo0 : Foo<{arr:0[]}>;
/*
const foo12: Foo<{
arr: (0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12)[];
}>
*/
const foo12 = foo0
.x(1).x(2).x(3).x(4).x(5).x(6).x(7).x(8).x(9).x(10).x(11).x(12);
/*
Expected:
+ Similar to foo12
OR
+ Type instantiation is excessively deep and possibly infinite.ts(2589)
Actual:
const foo13: Foo<{
arr: any[];
}>
*/
const foo13 = foo12.x(13);
```
[Playground](http://www.typescriptlang.org/play/#code/C4TwDgpgBAgmkDsAmAVA9jATpghiAPFpilBAB7ATIDOUOCIA2gLoA0UKAfFALxQAUAKCgiBw0RKIpGCAK4BbAEYRMzAD7iJIlJqgBKFoL0BuQQEsElTADMcAY2gARHMBxQA3rpzYoALihySiqGAL6moJCw8FSoaABiaGj4AMKkFDG0zq7sAHJplDQBCsqY3HxCEglJ7t6YvnCIsUR4KYwARLVtbFA5nCGcRqaCdgA2ONS0VfhZOCTkBUiZLjjcnhK1AIR+UDPSHdhdphIA9McAqtQWAOZQAAa7t7pk+HnzGUVBpQIIvjl62w0YugprtcqtdBJMBBgLJMAgiiMRhsjqIQoI0UgIKNvNA7GgENRgFBrIkAAzbKY1bC+UksfqmY4AKmG+MJxMSAEYAEz+SleakCclqKAcqDCrliqAAZklABZJQBWSUANklAHZJQAOSUATklHKFItFwu5BmYpn6gkZxxZBKJJLQ3N47LQpN0ADoyPwOXpPfwub6vVLA-xZSGFSHlSG1SHNSGdSGDYmfX7TQzmQBRMiQOyUJC+QSCADUUAAymZ5GYxpgoMA0C7uYIAPIAJWLHHA0AshPowDMLjM+KgZlo5AcEzMADcICMQFBMRAwHRkFAwGgJ4pZ8OENYLGZKO7gNR-QrNQnCzBc7IcCMC3i7Q2pbzEvg1qJav56ExzeiBtbbWyDocjKfBAVyqbBsYQA)
Using `this`,
```ts
type AppendToArray<ArrT extends any[], T> = (
(
ArrT[number]|
T
)[]
);
interface Data {
arr : number[]
};
type AppendToFoo<C extends Data, N extends number> = (
Foo<{arr:AppendToArray<C["arr"], N>}>
);
class Foo<DataT extends Data> {
arr! : DataT["arr"];
//Using `this`
x<N extends number> (n:N) : AppendToFoo<this, N> {
return null!;
}
}
declare const foo0 : Foo<{arr:0[]}>;
/*
const foo9: Foo<{
arr: (0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9)[];
}>
*/
const foo9 = foo0
.x(1).x(2).x(3).x(4).x(5).x(6).x(7).x(8).x(9);
/*
Expected:
+ Similar to foo9
OR
+ Type instantiation is excessively deep and possibly infinite.ts(2589)
Actual:
const foo10: Foo<{
arr: any[];
}>
*/
const foo10 = foo9.x(10);
```
[Playground](http://www.typescriptlang.org/play/#code/C4TwDgpgBAgmkDsAmAVA9jATpghiAPFpilBAB7ATIDOUOCIA2gLoA0UKAfFALxQAUAKCgiBw0RKIpGCAK4BbAEYRMzAD7iJIlJqgBKFoL0BuQQEsElTADMcAY2gARHMBxQA3rpzYoALihySiqGAL6moJCw8FSoaABiaGj4AMKkFDG0zq7sAHJplDQBCsqY3HxCEglJ7t6YvnCIsUR4KYwARLVtbFA5nCGcRqaCdgA2ONS0VfhZOCTkBUiZLjjcnhK1AIR+UDPSHdhdphIA9McAqtQWAOZQAAbAABZm1Le6ZPh58xlFQaUCCL4cnptg0Yugpo9nrlVroJJgIMBZJgEEURiMNkdRCFBNikBBRt5oHY0AhqMAoNZEgAGbZTGrYXxUlj9UzHABUwxJZIpiQAnP46V4GQIaWooABGKBigBMUqgAGY5QAWOUAVjlADY5QB2OUADjlvIMzFM-UEbOOnNJ5MpaF5vB5aCpugAdGR+OK9G7+NKve75X7+ErA6rAxrA9rA3rA0bWRyAKJkSB2ShIXyCQQAaigAGUzPIzGNMFBgGhHbzBAB5ABKWY44GgFjJ9GAZhcZhJUGeaQcEzMADcICMQFA8RAwHRkFAwGg+4ph12ENYLGZKC7gNQfaq9UaMzAU7IcCN08TrY7xVSBYl8GtRLV-PQmCacQMLVbubaLw7bbzvReTEAA)
**Expected behavior:**
Both examples should resolve correctly or trigger, `Type instantiation is excessively deep and possibly infinite.ts(2589)`
See code for more details.
**Actual behavior:**
Both examples resolve to `any` without errors.
Both examples seem to have a different limit before this happens?
See code for more details.
**Playground Link:**
Using `DataT`, the limit seems to be `12`.
[Playground](http://www.typescriptlang.org/play/#code/C4TwDgpgBAgmkDsAmAVA9jATpghiAPFpilBAB7ATIDOUOCIA2gLoA0UKAfFALxQAUAKCgiBw0RKIpGCAK4BbAEYRMzAD7iJIlJqgBKFoL0BuQQEsElTADMcAY2gARHMBxQA3rpzYoALihySiqGAL6moJCw8FSoaABiaGj4AMKkFDG0zq7sAHJplDQBCsqY3HxCEglJ7t6YvnCIsUR4KYwARLVtbFA5nCGcRqaCdgA2ONS0VfhZOCTkBUiZLjjcnhK1AIR+UDPSHdhdphIA9McAqtQWAOZQAAa7t7pk+HnzGUVBpQIIvjl62w0YugprtcqtdBJMBBgLJMAgiiMRhsjqIQoI0UgIKNvNA7GgENRgFBrIkAAzbKY1bC+UksfqmY4AKmG+MJxMSAEYAEz+SleakCclqKAcqDCrliqAAZklABZJQBWSUANklAHZJQAOSUATklHKFItFwu5BmYpn6gkZxxZBKJJLQ3N47LQpN0ADoyPwOXpPfwub6vVLA-xZSGFSHlSG1SHNSGdSGDYmfX7TQzmQBRMiQOyUJC+QSCADUUAAymZ5GYxpgoMA0C7uYIAPIAJWLHHA0AshPowDMLjM+KgZlo5AcEzMADcICMQFBMRAwHRkFAwGgJ4pZ8OENYLGZKO7gNR-QrNQnCzBc7IcCMC3i7Q2pbzEvg1qJav56ExzeiBtbbWyDocjKfBAVyqbBsYQA)
Using `this`, the limit seems to be `9`.
[Playground](http://www.typescriptlang.org/play/#code/C4TwDgpgBAgmkDsAmAVA9jATpghiAPFpilBAB7ATIDOUOCIA2gLoA0UKAfFALxQAUAKCgiBw0RKIpGCAK4BbAEYRMzAD7iJIlJqgBKFoL0BuQQEsElTADMcAY2gARHMBxQA3rpzYoALihySiqGAL6moJCw8FSoaABiaGj4AMKkFDG0zq7sAHJplDQBCsqY3HxCEglJ7t6YvnCIsUR4KYwARLVtbFA5nCGcRqaCdgA2ONS0VfhZOCTkBUiZLjjcnhK1AIR+UDPSHdhdphIA9McAqtQWAOZQAAbAABZm1Le6ZPh58xlFQaUCCL4cnptg0Yugpo9nrlVroJJgIMBZJgEEURiMNkdRCFBNikBBRt5oHY0AhqMAoNZEgAGbZTGrYXxUlj9UzHABUwxJZIpiQAnP46V4GQIaWooABGKBigBMUqgAGY5QAWOUAVjlADY5QB2OUADjlvIMzFM-UEbOOnNJ5MpaF5vB5aCpugAdGR+OK9G7+NKve75X7+ErA6rAxrA9rA3rA0bWRyAKJkSB2ShIXyCQQAaigAGUzPIzGNMFBgGhHbzBAB5ABKWY44GgFjJ9GAZhcZhJUGeaQcEzMADcICMQFA8RAwHRkFAwGg+4ph12ENYLGZKC7gNQfaq9UaMzAU7IcCN08TrY7xVSBYl8GtRLV-PQmCacQMLVbubaLw7bbzvReTEAA)
**Related Issues:**
I have no clue how to search for this.
-----
I came across this weird bug by accident, actually.
I was working on a personal project and this would be the 4th, or 6th time I've rewritten the project. I've noticed that in all the rewrites, there's this particular method on a generic class where the max number of times I could chain it was always 20.
The 21st call would always trigger the max depth error.
I was sick of it and decided to investigate possible ways to work around this limit.
I decided to write a simple repro before messing with it. (The above code snippets).
However, the simplified repro behaved **very** weirdly, and would not trigger the error.
It boggles me how TS can resolve crazy types like [this](http://www.typescriptlang.org/play/#code/C4TwDgpgBAgmkDsAmAeGAndAVKEAewEyAzlAIYIgDaAugDRRYB8UAvFABQBQUvnPfQRmxUEAVwC2AIwjoaAHwGDeWJVACUtLuoDcXLgGMANmWKkAGmkw58hEuUq0WAbzXAAhFABcsa3sF4nOreUJZwiKjCWAxRTC5qgugQwGLoCFAIEADuoRy6agC+XEVIEMZkSVAGAPYIxMBQgT6WTno1dQ3obI0AdHh5fQP96oMjw6MT41NDM2Ozk-PTc+pcC8trG0ubizvrw6tbh7vbe8dHpxcnV+fXZ3eXN4+7B-e3D69Pyy-vP29rXEA),

but will choke on simple types like the above snippets.
-----
It's also super weird because my super complex examples have a limit of 20 calls. And these super simple examples have a super low limit. | Bug | medium | Critical |
476,700,817 | pytorch | Construction of MultivariateNormal much slower on GPU than CPU | ## 🐛 Bug
Constructing a MultivariateNormal distribution is much slower when inputting GPU-based `FloatTensor`s than CPU-based ones.
On my machine the GPU version is ~33x slower than CPU.
## To Reproduce
Steps to reproduce the behavior:
```
import time
import torch
from torch.distributions.multivariate_normal import MultivariateNormal
mu = torch.FloatTensor([2, 4])
sigma = torch.FloatTensor([[5, 0], [0, 2]])
mu_gpu = mu.cuda()
sigma_gpu = sigma.cuda()
num_runs = 1000
t_cpu, t_gpu = 0, 0
for _ in range(num_runs):
st = time.perf_counter()
m1 = MultivariateNormal(mu, sigma)
t_cpu += time.perf_counter() - st
torch.cuda.synchronize()
st = time.perf_counter()
m2 = MultivariateNormal(mu_gpu, sigma_gpu)
torch.cuda.synchronize()
t_gpu += time.perf_counter() - st
print(f'[CPU] Time Taken: {t_cpu}s')
print(f'[GPU] Time Taken: {t_gpu}s')
```
Output on my machine:
```
[CPU] Time Taken: 0.08132426194060827s
[GPU] Time Taken: 2.7058167830073216s
```
## Expected behavior
I'd expect the GPU to be faster, or at least of a comparable speed to CPU.
## Environment
```
PyTorch version: 1.1.0
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Pop!_OS 18.10
GCC version: (Ubuntu 8.3.0-6ubuntu1~18.10) 8.3.0
CMake version: version 3.12.1
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 10.0.130
GPU models and configuration: GPU 0: GeForce GTX 1070 With Max-Q Design
Nvidia driver version: 410.78
cuDNN version: /usr/lib/cuda-10.0/lib64/libcudnn.so.7.4.1
Versions of relevant libraries:
[pip3] numpy==1.17.0
[pip3] torch==1.1.0
[pip3] torchvision==0.3.0
[conda] blas 1.0 mkl
[conda] mkl 2019.4 243
[conda] mkl_fft 1.0.12 py36ha843d7b_0
[conda] mkl_random 1.0.2 py36hd81dba3_0
```
| module: performance,module: distributions,module: cuda,triaged | low | Critical |
476,706,053 | scrcpy | Request: offer more options for drag&drop of files | When doing a right-button drag & drop, it could offer us various options:
1. push to main storage path (important for APK files, as the normal behavior is to install them)
2. push to Download folder
3. For single APK file, install&run.
4. Same as #3, but also grant permissions (use the "-g" parameter)
5. For multiple APK files, install&grant-permissions
6. Push and share
7. Push and open
As for the definition of "run", it depends. On most apps, it's the main Activity to launch.
On some, there is no such an Activity. On such cases, it depends on what is the app. If for example, it's a live wallpaper app, it should show the chooser. If it failed to find how to "run" the app, nothing should occur, except maybe tell the user that we can't find any way to "run" the app. | feature request | low | Critical |
476,736,996 | rust | rustdoc: explicit marker trait impl bounds are not simplified | Rustdoc currently renders automatic and explicit marker trait implementation bounds differently, for automatic implementations the bounds are simplified down to their most fundamental requirements, while explicit implementations show exactly what is specified.
As an example, here are two identically defined structs (`Bar` and `Baz`), one uses the automatic implementation for `Send` while the other has the same implementation explicitly defined, in both cases the direct requirement is `Foo<T>: Send` which can be simplified down to `T: Send` by looking at the bounds on `Foo`:
```rust
pub struct Foo<T>(T);
pub struct Bar<T>(Foo<T>);
pub struct Baz<T>(Foo<T>);
unsafe impl<T> Send for Baz<T> where Foo<T>: Send {}
```
and the associated `impl Send` renderings:


This will become more relevant if RFC 2145 is ever implemented, that [should allow bounds to refer to private types](https://github.com/rust-lang/rfcs/pull/2353#issuecomment-369224212) in which case there is no way to go from the `impl Send for Baz` in the docs to see what the actual requirements are (but it is already possible to simulate that today if `Foo` is moved into a private module).
This is also an issue for the proc-macro in https://github.com/taiki-e/pin-project (cc @taiki-e), that produces an explicit implementation for `Unpin` that must refer to the field types so that it can work generically for any struct. | T-rustdoc | low | Minor |
476,737,043 | go | proposal: x/tools: tool to audit diffs in dependencies | One of the key points from https://github.com/golang/go/issues/30240 is:
> Saved module caches do not interoperate well with version-control and code-review tools.
This point is further developed in https://github.com/golang/go/issues/30240#issuecomment-516735768.
Raising this issue as a placeholder for the discussion about this specific point, because this point has a life well beyond and decisions on `vendor` and is relevant (by and large) to all users of Go.
_Please add to/edit this description as required - this is just a placeholder_ | Proposal,modules | low | Major |
476,767,503 | node | http: OutgoingMessage streamlike | http `OutgoingMessage` is missing some methods, properties and behaviours to make it "truly" "streamlike":
- [x] `destroyed`
- [x] `writableLength` (https://github.com/nodejs/node/pull/29018)
- [x] `writableHighWaterMark` (https://github.com/nodejs/node/pull/29018)
- [x] `writableCorked`
- [x] `writableObjectMode` (https://github.com/nodejs/node/pull/29018)
- [x] `cork()` (https://github.com/nodejs/node/pull/29053)
- [x] `uncork()` (https://github.com/nodejs/node/pull/29053)
- [ ] `ERR_STREAM_DESTROYED` (https://github.com/nodejs/node/pull/31818)
- [ ] `instanceof Writable` (issue https://github.com/nodejs/node/issues/28971)
Furthermore the destroy event ordering does not seem to be fully consistent with the stream implementation (needs further investigation). | http,stream | low | Minor |
476,803,821 | thefuck | Add support for brew command not found | can you add support for https://github.com/Homebrew/homebrew-command-not-found ? | enhancement,help wanted,osx,hacktoberfest | low | Major |
476,841,390 | youtube-dl | [bilibili] bangumi links have changed. | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.08.02. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in http://yt-dl.org/escape.
- Search the bugtracker for similar issues: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a broken site support
- [x] I've verified that I'm running youtube-dl version **2019.08.02**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [x] I've searched the bugtracker for similar issues including closed ones
## Verbose log
<!--
Provide the complete verbose output of youtube-dl that clearly demonstrates the problem.
Add the `-v` flag to your command line you run youtube-dl with (`youtube-dl -v <your command line>`), copy the WHOLE output and insert it below. It should look similar to this:
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] youtube-dl version 2019.08.02
[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
<more lines>
-->
```
.\youtube-dl.exe -v https://www.bilibili.com/bangumi/play/ss5802#100643
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-v', 'https://www.bilibili.com/bangumi/play/ss5802#100643']
[debug] Encodings: locale cp936, fs mbcs, out cp936, pref cp936
[debug] youtube-dl version 2019.08.02
[debug] Python version 3.4.4 (CPython) - Windows-10-10.0.18362
[debug] exe versions: none
[debug] Proxy map: {}
[generic] ss5802#100643: Requesting header
WARNING: Falling back on generic information extractor.
[generic] ss5802#100643: Downloading webpage
[generic] ss5802#100643: Extracting information
ERROR: Unsupported URL: https://www.bilibili.com/bangumi/play/ss5802#100643
Traceback (most recent call last):
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpjwvl_v2x\build\youtube_dl\YoutubeDL.py", line 796, in extract_info
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpjwvl_v2x\build\youtube_dl\extractor\common.py", line 530, in extract
File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpjwvl_v2x\build\youtube_dl\extractor\generic.py", line 3333, in _real_extract
youtube_dl.utils.UnsupportedError: Unsupported URL: https://www.bilibili.com/bangumi/play/ss5802#100643
```
## Description
<!--
Provide an explanation of your issue in an arbitrary form. Provide any additional information, suggested solution and as much context and examples as possible.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
Now [BiliBili](https://www.bilibili.com/) redirects the link `http://bangumi.bilibili.com/anime/1869/play#40062` (which is a test in the [bilibili extractor](https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/extractor/bilibili.py)) to `https://www.bilibili.com/bangumi/play/ss1869#40062`. A new solution is needed.
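For anyone adapting the extractor, here is a rough sketch of a URL pattern covering both the old and the new bangumi link shapes; this pattern is an assumption for illustration, not the actual `_VALID_URL` used by youtube-dl:

```python
import re

# Hypothetical pattern sketch: old anime/<id>/play links and
# new bangumi/play/ss<id> links, each with a trailing #episode id.
BANGUMI_URL = re.compile(
    r'https?://(?:bangumi\.bilibili\.com/anime/(?P<anime_id>\d+)/play'
    r'|(?:www\.)?bilibili\.com/bangumi/play/ss(?P<season_id>\d+))'
    r'#(?P<episode_id>\d+)'
)

for url in ('http://bangumi.bilibili.com/anime/1869/play#40062',
            'https://www.bilibili.com/bangumi/play/ss1869#40062'):
    m = BANGUMI_URL.match(url)
    print(m.group('episode_id'))  # 40062 for both URL shapes
```

Since the site simply redirects the old shape to the new one, matching both in one pattern would let the extractor keep its existing tests while supporting the new links.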
| geo-restricted | low | Critical |
476,894,031 | pytorch | Better version of chrome://tracing | ## 📚 Documentation
Our friends at Google (specifically, Android) have built a WebAssembly-based tracing tool at https://ui.perfetto.dev/#!/viewer (source code to manually trace at https://android.googlesource.com/platform/external/perfetto/+/refs/heads/master/test/client_api_example.cc).
I think PyTorch could make use of that, or at least point to it in its documentation?
Example of uploaded trace: https://ui.perfetto.dev/#!/?s=c33b3f7ea24e145ed13a52196e1eb5e121334d53c7de6cedead7387a5f3c72 | module: docs,triaged,small | low | Minor |
476,929,029 | rust | Tracking issue for {Rc, Arc}::get_mut_unchecked | On `Rc` and `Arc` an new unsafe `get_mut_unchecked` method provides `&mut T` access without checking the reference count. `Arc::get_mut` involves multiple atomic operations whose cost can be non-trivial. `Rc::get_mut` is less costly, but we add `Rc::get_mut_unchecked` anyway for symmetry with `Arc`.
These can be useful independently, but they will presumably be typical when uninitialized constructors (tracked in https://github.com/rust-lang/rust/issues/63291) are used.
An alternative with a safe API would be to introduce `UniqueRc` and `UniqueArc` types that have the same memory layout as `Rc` and `Arc` (and so zero-cost conversion to them) but are guaranteed to have only one reference. But introducing entire new types feels “heavier” than new constructors on existing types, and initialization of `MaybeUninit<T>` typically requires unsafe code anyway.
PR https://github.com/rust-lang/rust/pull/62451 adds:
```rust
impl<T: ?Sized> Rc<T> { pub unsafe fn get_mut_unchecked(this: &mut Self) -> &mut T {…} }
impl<T: ?Sized> Arc<T> { pub unsafe fn get_mut_unchecked(this: &mut Self) -> &mut T {…} }
```
---
## Open questions
- [ ] Rename to `get_unchecked_mut` to match https://doc.rust-lang.org/std/?search=get_unchecked_mut ? | T-libs-api,B-unstable,C-tracking-issue,requires-nightly,Libs-Tracked | medium | Critical |
476,955,127 | flutter | Deprecate and remove setSurfaceSize | You can do the same with `physicalSizeTestValue`.
History: "setSurfaceSize was first IIRC, and physicalSizeTestValue was second. The first solved the specific problem of setting the size of the test surface. The second allowed you to just override anything on the Window." | a: tests,framework,c: API break,c: proposal,P2,team-framework,triaged-framework | low | Major |
477,019,733 | flutter | [Web] debugPrintStack has dart:sdk_internal frame on top of stack | A dart:sdk_internal frame is included when calling StackTrace.current on the web platform but is skipped for VM/AOT.
## Steps to Reproduce
Disable skip:isBrowser in test/foundation/assertions.dart
| dependency: dart,platform-web,has reproducible steps,P3,found in release: 3.10,found in release: 3.12,team-web,triaged-web | low | Critical |
477,069,713 | go | go/printer: deletes or inserts AST types into code | Forked from https://github.com/golang/go/issues/31291.
We have found that `go/printer.Fprint` will delete broken code.
An example of code being deleted: https://play.golang.org/p/WKRt74denE0.
We have also found that `go/printer.Fprint` will insert the string literal `BadStmt` or `BadExpr` in broken code that parses into `*ast.BadStmts` or `*ast.BadExprs`.
An example of code being rewritten: https://play.golang.org/p/_BBrwbfAtEH.
This was discovered through using the function `go/format.Node`, which uses `go/printer.Fprint`. We noticed that `format.Source` returns an error in such cases, which is the behavior I would expect. I would have expected that `format.Source` and `format.Node` would behave the same on the same input. | NeedsInvestigation | low | Critical |
477,077,685 | godot | Can't compile Godot with asan, lsan, ubsan and disabled vorbis | **Godot version:**
3.2.dev.custom_build 7126654ea
**OS/device including version:**
Ubuntu 18.04 GCC 7.4, clang 6.0
**Issue description:**
Commands
```
scons p=x11 -j6 use_llvm use_lsan=yes use_asan=yes use_ubsan=yes module_vorbis_enabled=no
```
Errors
```
Ranlib Library ==> core/libcore.x11.tools.64s.a
[Initial build] Linking Program ==> bin/godot.x11.tools.64s
/usr/bin/ld: modules/libmodules.x11.tools.64s.a(video_stream_theora.x11.tools.64s.o): undefined reference to symbol 'vorbis_block_clear'
//usr/lib/x86_64-linux-gnu/libvorbis.so.0: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
scons: *** [bin/godot.x11.tools.64s] Error 1
scons: building terminated because of errors.
```
```
Ranlib Library ==> core/libcore.x11.tools.64.llvms.a
[100%] Linking Program ==> bin/godot.x11.tools.64.llvms
[100%] /usr/bin/ld: modules/libmodules.x11.tools.64.llvms.a(video_stream_theora.x11.tools.64.llvms.o): undefined reference to symbol 'vorbis_block_clear'
//usr/lib/x86_64-linux-gnu/libvorbis.so.0: error adding symbols: DSO missing from command line
clang: error: linker command failed with exit code 1 (use -v to see invocation)
scons: *** [bin/godot.x11.tools.64.llvms] Error 1
scons: building terminated because of errors.
``` | bug,topic:buildsystem | low | Critical |
477,104,448 | go | regexp: optimize for provably too long inputs | As a complement to #31329, we should find out what the maximum length of matchable inputs is at compile-time (when it exists) and return very early.
Some patterns for which this can be computed:
- `^a{2,5}$` (max length 5)
- `^((aaa)|(aa))$` (max length 3)
- `^.$` (max length `utf8.UTFMax`=4) | Performance,NeedsInvestigation | low | Minor |
477,122,140 | pytorch | Make MultiProcessTestCase picklable | ## Issue description
If MultiProcessTestCase were picklable, there would be no need to hack testMethod in order to launch several processes,
and there would be no need to write duplicate terminated-process status-checking code.
## Code pointer
https://github.com/pytorch/pytorch/blob/725d6cd8cec9f2d2c60b8b09bf6384b38612e85a/test/common_distributed.py#L104-L161
## References
Discussion happened in #23660
[Python docs](https://docs.python.org/3/library/pickle.html#pickling-class-instances) | oncall: distributed,module: tests,triaged,better-engineering | low | Minor |
477,126,845 | pytorch | [quantization] jit::class_ for packed weights | <to be filled in>
cc @suo | oncall: jit,triaged,quantization_release_1.3,jit-backlog | low | Major |
477,131,008 | TypeScript | PositionError does not match docs in MDN | **TypeScript Version:** 3.5.3
**Search Terms:**
PositionError, Geolocation
**Code**
```ts
const positionError: PositionError = {
code: 1,
message: ''
}
```
**Expected behavior:**
Compiles successfully as this matches the docs here: https://developer.mozilla.org/en-US/docs/Web/API/PositionError
**Actual behavior:**
`Type '{ code: number; message: string; }' is missing the following properties from type 'PositionError': PERMISSION_DENIED, POSITION_UNAVAILABLE, TIMEOUT`
**Playground Link:**
https://www.typescriptlang.org/play/#code/MYewdgzgLgBADiCBLKTwFEBOmSYFwwAKiKaYWOmMAvDAN4BQMzMoAJgKYECMANEywC2HCBACGAcy4wA5DIYBfBkA
**Related Issues:**
None
| Bug,Domain: lib.d.ts | low | Critical |
477,134,098 | pytorch | Fan out calculation broken for group (depthwise) convolution | ## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
## To Reproduce
Steps to reproduce the behavior:
```python
# depthwise, should be (9, 9)
m = torch.nn.Conv2d(4, 4, 3, groups=4)
print(torch.nn.init._calculate_fan_in_and_fan_out(m.weight))
# groupwise, should be (18, 9)
m = torch.nn.Conv2d(4, 2, 3, groups=2)
print(torch.nn.init._calculate_fan_in_and_fan_out(m.weight))
```
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
## Expected behavior
1. Should return `(9,9)` instead of `(9, 36)`
1. Should return `(18, 9)` instead of `(18, 18)`
<!-- A clear and concise description of what you expected to happen. -->
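To illustrate the expected group-aware computation, here is a small, self-contained sketch (the helper and its signature are illustrative; the real `_calculate_fan_in_and_fan_out` only receives the weight tensor, which is precisely why it cannot account for `groups`):

```python
def fan_in_fan_out(out_channels, in_channels, kernel_size, groups=1):
    """Illustrative group-aware fan computation (not the torch API).

    For a grouped conv, the weight has shape
    (out_channels, in_channels // groups, k, k), so each output map
    connects to in_channels // groups inputs, and each input map
    feeds out_channels // groups outputs.
    """
    receptive_field = kernel_size * kernel_size
    fan_in = (in_channels // groups) * receptive_field
    fan_out = (out_channels // groups) * receptive_field
    return fan_in, fan_out

# Depthwise Conv2d(4, 4, 3, groups=4): expected (9, 9)
print(fan_in_fan_out(4, 4, 3, groups=4))   # (9, 9)
# Groupwise Conv2d(4, 2, 3, groups=2): expected (18, 9)
print(fan_in_fan_out(2, 4, 3, groups=2))   # (18, 9)
```

With `groups=1` this reduces to the usual `in * k * k` / `out * k * k`, matching the current behavior for ordinary convolutions.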
## Environment
- PyTorch Version (e.g., 1.0): 1.3.0
- OS (e.g., Linux): Linux
- How you installed PyTorch (`conda`, `pip`, source): pacman
- Build command you used (if compiling from source):
- Python version: 3.7
- CUDA/cuDNN version: None
- GPU models and configuration: None
- Any other relevant information: None
cc @ezyang @gchanan @zou3519 | module: convolution,triaged,module: initialization | low | Critical |
477,147,638 | flutter | why is a stateful page rebuilt every time the global route changes? | ### what
This is my [code](https://github.com/dumplings/flutter-demo/tree/master/life_cycle).
I made 5 pages: one extends StatelessWidget, the others StatefulWidget, and they are linked by `push`/`pop` to the next one. I found that when I go to the next page, the old one is rebuilt (this only happens for stateful pages), no matter what the old page is. I know that using a stateful widget at the root is a performance concern and is not recommended; I just want to know why this happens. Thank you.
### example code
#### main.dart
```dart
return MaterialApp(
...
initialRoute: '/',
routes: {
'/': (_) => HomePage(),
'/half': (_) => HomeHalfPage(),
'/first': (_) => FirstPage(),
'/second': (_) => SecondPage(),
'/third': (_) => ThirdPage(),
},
);
```
#### body.dart
```dart
// Created by hejingguo on 2019-08-06
import 'package:flutter/material.dart';

class Body extends StatelessWidget {
  final String title;
  final String routeName;
  final bool isEnd;

  Body(this.title, this.routeName, {this.isEnd = false});

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text(title),
      ),
      body: Center(
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            Text(title),
            Padding(
              padding: EdgeInsets.only(
                top: 30.0,
              ),
            ),
            RaisedButton(
              onPressed: () {
                if (isEnd) {
                  Navigator.pop(context);
                } else {
                  Navigator.pushNamed(context, routeName);
                }
              },
              child: Text('go to $routeName'),
            ),
          ],
        ),
      ),
    );
  }
}
```
#### page.dart
```dart
import 'package:flutter/material.dart';
import 'body.dart';

void printMessage(String methodName) =>
    print('Home Page ===${DateTime.now().toUtc()}===> $methodName');

class HomePage extends StatefulWidget {
  @override
  _HomePageState createState() => _HomePageState();
}

class _HomePageState extends State<HomePage> {
  @override
  void initState() {
    super.initState();
    printMessage('initState');
  }

  @override
  void dispose() {
    super.dispose();
    printMessage('dispose');
  }

  @override
  void didChangeDependencies() {
    super.didChangeDependencies();
    printMessage('didChangeDependencies');
  }

  @override
  void didUpdateWidget(HomePage oldWidget) {
    super.didUpdateWidget(oldWidget);
    printMessage('didUpdateWidget');
  }

  @override
  Widget build(BuildContext context) {
    printMessage('build');
    return Body('HomePage', '/half');
  }
}
```
#### log
```shell
flutter: Home Page ===2019-08-06 03:14:21.585795Z===> initState
flutter: Home Page ===2019-08-06 03:14:21.587966Z===> didChangeDependencies
flutter: Home Page ===2019-08-06 03:14:21.588262Z===> build
flutter: Home Page ===2019-08-06 03:14:21.854595Z===> didUpdateWidget
flutter: Home Page ===2019-08-06 03:14:21.854860Z===> build
flutter: Home Half Page ===2019-08-06 03:14:24.351334Z===> build
flutter: Home Page ===2019-08-06 03:14:24.666594Z===> build
flutter: First Page ===2019-08-06 03:14:25.497391Z===> initState
flutter: First Page ===2019-08-06 03:14:25.497650Z===> didChangeDependencies
flutter: First Page ===2019-08-06 03:14:25.497916Z===> build
flutter: Home Page ===2019-08-06 03:14:25.820592Z===> build
flutter: Second Page ===2019-08-06 03:14:29.350101Z===> initState
flutter: Second Page ===2019-08-06 03:14:29.350341Z===> didChangeDependencies
flutter: Second Page ===2019-08-06 03:14:29.350547Z===> build
flutter: Home Page ===2019-08-06 03:14:29.657926Z===> build
flutter: First Page ===2019-08-06 03:14:29.660035Z===> build
flutter: Third Page ===2019-08-06 03:14:32.350773Z===> initState
flutter: Third Page ===2019-08-06 03:14:32.351007Z===> didChangeDependencies
flutter: Third Page ===2019-08-06 03:14:32.351208Z===> build
flutter: First Page ===2019-08-06 03:14:32.661122Z===> build
flutter: Second Page ===2019-08-06 03:14:32.663558Z===> build
flutter: Home Page ===2019-08-06 03:14:32.665463Z===> build
```
[more code](https://github.com/dumplings/flutter-demo/tree/master/life_cycle)
### Flutter Version
Flutter 1.7.8+hotfix.4 • channel stable • https://github.com/flutter/flutter.git
Framework • revision 20e59316b8 (3 weeks ago) • 2019-07-18 20:04:33 -0700
Engine • revision fee001c93f
Tools • Dart 2.4.0 | framework,c: performance,d: api docs,f: routes,P2,team-framework,triaged-framework | low | Major |
477,155,369 | go | encoding/csv: do not use bufio.Writer in csv.Writer | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12.6 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/root/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/root/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build056785781=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
Consider this situation: in my program, at the lowest level of the call chain, I need to convert my `Record` to a CSV string. So I create a writer with `csv.NewWriter`, passing it a `bytes.Buffer`, write my record, call `Flush`, and finally read the CSV string out of the `bytes.Buffer`. Everything is fine and works for now. But after benchmarking, I found my program running very slowly, and the `bufio.Writer` used inside `csv.Writer` is the performance killer: for each record, a slice of `bufio.defaultBufSize` bytes is allocated, and it is not easy to avoid this allocation.
### What did you expect to see?
Leave the choice of using bufio to the user: in the stdlib, write directly to the raw `io.Writer`, and if the user wants buffering, they can wrap the writer themselves:
```go
w := csv.NewWriter(bufio.NewWriter(os.Stdout))
```
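In the meantime, the per-record allocation can be amortized by reusing one `csv.Writer` and one `bytes.Buffer` across records. This is only a sketch of the workaround (the `csvEncoder` type is mine, not a stdlib API):

```go
package main

import (
	"bytes"
	"encoding/csv"
	"fmt"
)

// csvEncoder reuses one buffer and one csv.Writer, so the internal
// bufio buffer is allocated once instead of once per record.
type csvEncoder struct {
	buf bytes.Buffer
	w   *csv.Writer
}

func newCSVEncoder() *csvEncoder {
	e := &csvEncoder{}
	e.w = csv.NewWriter(&e.buf)
	return e
}

// encode turns one record into its CSV line, resetting the buffer
// between calls so each call returns only the new record.
func (e *csvEncoder) encode(record []string) string {
	e.buf.Reset()
	e.w.Write(record)
	e.w.Flush()
	return e.buf.String()
}

func main() {
	e := newCSVEncoder()
	fmt.Print(e.encode([]string{"a", "b"})) // a,b
	fmt.Print(e.encode([]string{"c", "d"})) // c,d
}
```

Resetting the underlying `bytes.Buffer` is safe here because `Flush` has already pushed all buffered bytes through before the next `encode` call.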
| Performance,NeedsInvestigation | low | Critical |
477,178,318 | pytorch | CUDA: THTensor code complains about devices not matching when creating tensor from blob | ## 🐛 Bug
Despite PyTorch being able to work out that the device of the storage is actually "cuda:1" (this is where it was created!), it assumes it is on "cuda:0" and complains about it.
If I set the index to -1 (the default), it should use the pointer to work out the device index and bypass any complaints. Instead, it makes this assumption about "cuda:0" (which is not mentioned anywhere in my code).
Error:
> Attempted to set the storage of a tensor on device "cuda:0" to a storage on different device "cuda:1". This is no longer allowed; the devices must match. (THTensor_stealAndSetStoragePtr at ..\..\aten\src\TH\THTensor.cpp:116)
Related: #19007
## To Reproduce
Steps to reproduce the behavior:
```cpp
torch::Tensor matToTensor(const cv::cuda::GpuMat &image, const bool mean)
{
    std::vector<int64_t> dims = {image.rows, image.cols, image.channels()};
    std::vector<int64_t> strides = {image.step1(), image.channels(), 1};
    auto options = torch::TensorOptions().dtype(torch::kFloat).device(torch::kCUDA).device_index(-1);
    torch::Tensor tensorImage;
    return torch::from_blob(image.data, dims, strides, deleter, options);
}
```
## Expected behavior
## Environment
Please copy and paste the output from our
[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py)
(or fill out the checklist below manually).
You can get the script and run it with:
```
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```
- PyTorch Version (e.g., 1.0): 1.2.0-pre
- OS (e.g., Linux): Windows
- How you installed PyTorch (`conda`, `pip`, source): nightly
- Build command you used (if compiling from source): N/A
- Python version: N/A
- CUDA/cuDNN version: 10.0/7.5
- GPU models and configuration: RTX 2060
- Any other relevant information:
## Additional context
| module: cpp,triaged | low | Critical |
477,186,531 | flutter | Using escape characters in variable strings | While this could be more of a SO-type question, [I have posted there already to no useful response](https://stackoverflow.com/questions/57213902/new-line-n-not-being-respected-when-reading-from-sqflite-result) so I'm out of options and this is currently a blocker for many of our production apps that rely on data-driven icons and string formatting.
I'm trying to use escape characters in both SQFlite data and from other model class variables, neither of which seem to work at all.
For example, I have a FontAwesome .ttf that I'm trying to render glyphs from by referencing their Unicode characters, e.g. `\uf535`. If I hardcode the text value like this:
```dart
RichText(
  text: "\uf535"
)
```
then the icon renders correctly, but doing:
```dart
class Model {
  String iconString = "\uf535";
}

RichText(
  text: "${model.iconString}"
)
```
renders the literal string, and no icon/escape formatting works at all. Is there some sort of trick to getting this to work or is something going funky somewhere?
| framework,dependency: dart,a: typography,P2,team-framework,triaged-framework | low | Major |
477,221,411 | flutter | [video_player] Add Windows support | When will the video player plugin support the Windows platform? Is there any plan to support it?
## Current status (as of October 2024)
There is an in-progress PR (https://github.com/flutter/packages/pull/5884), but it needs more work to be landable, and is not currently under active development. Anyone interested in moving the PR forward is welcome to contribute. | c: new feature,platform-windows,customer: crowd,p: video_player,package,c: proposal,a: desktop,P2,has partial patch,team-windows,triaged-windows | low | Critical |
477,241,140 | flutter | Flutter doesn't honor the Gradle service directory path setting of Android Studio | ## Steps to Reproduce
1. Change the setting "Build, Execution, Deployment" - Gradle - "Service directory path" in Android Studio (it is `%userprofile%\.gradle` by default).
2. Build an Android Studio project. Note that it creates the configured path and places Gradle there.
3. Build a Flutter project. Note that it does **not** use the configured path but instead creates the default path (`%userprofile%\.gradle`) and places Gradle there.
Note: I do not have a Gradle directory configured in `flutter config`. As far as I know this should mean that it takes the one from Android Studio.
## Logs
`flutter doctor -v`:
```
[√] Flutter (Channel stable, v1.7.8+hotfix.4, on Microsoft Windows [Version 10.0.17763.615], locale de-AT)
• Flutter version 1.7.8+hotfix.4 at C:\Android\flutter
• Framework revision 20e59316b8 (3 weeks ago), 2019-07-18 20:04:33 -0700
• Engine revision fee001c93f
• Dart version 2.4.0
[√] Android toolchain - develop for Android devices (Android SDK version 29.0.1)
• Android SDK at C:\Android\Sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-29, build-tools 29.0.1
• ANDROID_HOME = C:\Android\Sdk
• Java binary at: C:\Program Files\Android\Android Studio\jre\bin\java
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
• All Android licenses accepted.
[√] Android Studio (version 3.4)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin version 38.1.1
• Dart plugin version 183.6270
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01)
[!] Connected device
! No devices available
! Doctor found issues in 1 category.
``` | platform-android,tool,t: gradle,P3,team-android,triaged-android | low | Minor |
477,255,264 | terminal | Feature Request: Bundle more icons with good names | # Description of the new feature/enhancement
There are a number of interesting icons that could apply in various situations, such as the VS icon, the Azure or Office/Exchange logos, SQL, GitHub, etc as well as various generic icons like a database drum or a terminal icon.
# Proposed technical implementation details (optional)
Adding new icon assets and giving them a good name should be easy for Microsoft-owned properties. Some may need to be invented (e.g., C# has no logo, whereas F# has a rather spiffy one).
Care should be taken to include variants for various contrast situations.
Giving them real names instead of GUID names would make them easier to use without having to track down random images on the web, or something buried in the file system.
Proposed icon set:
- [ ] Visual Studio
- [ ] C# (needs to be defined)
- [ ] F#
- [ ] Exchange
- [ ] GitHub
- [ ] Azure
- [ ] Microsoft
- [ ] Windows (CMD's icon is.. not great)
- [ ] Generic network terminal (eg, for SSH, Telnet, FTP, etc)
- [ ] Generic database
- [ ] Generic file system
Potential third party icons, if they agree?
- [ ] Git
- [ ] Python
- [ ] Julia
- [ ] Wolfram
- [ ] Create mechanism to propose new ones to engage | Issue-Feature,Area-Settings,Product-Terminal | low | Major |
477,278,538 | rust | Warn about 2015 edition on path & extern crate related errors | [Related forum thread](https://internals.rust-lang.org/t/warn-against-accidental-use-of-2015-edition/10666)
Users may be using 2015 edition without being aware of it, e.g. by copying an older template/tutorial or creating `Cargo.toml` by hand.
Accidental use of 2015 edition may cause puzzling path-related errors, such as E0432 & E0433.
When 2015-edition code fails to resolve paths, rustc could say if it works in 2018 edition (#61914) or at least emit a note saying that the failing code uses 2015 edition, and that switching to 2018 may help.
| C-enhancement,A-diagnostics,A-resolve,T-compiler,D-edition | low | Critical |
477,314,881 | vue-element-admin | Compound input box display bug | ## Bug report(问题描述)
Element's compound input box (input with an appended button) displays incorrectly when placed in a table header.
#### Steps to reproduce(问题复现步骤)
The styles from custom-theme break the display of element-ui's compound input box:
```css
.el-table th div {
  display: inline-block;
  padding-left: 10px;
  padding-right: 10px;
  line-height: 40px;
  -webkit-box-sizing: border-box;
  box-sizing: border-box;
  overflow: hidden;
  white-space: nowrap;
  text-overflow: ellipsis;
}
```
```html
<el-table>
  <el-table-column align="right">
    <template v-slot:header="{row}">
      <el-input placeholder="请输入内容" class="input-with-select">
        <el-button slot="append" icon="el-icon-search" />
      </el-input>
    </template>
  </el-table-column>
</el-table>
```
#### Screenshot or Gif(截图或动态图)

#### Other relevant information(格外信息)
- Your OS:windows 10
- Node.js version:v12.7.0.
- vue-element-admin version:4.2.1
| enhancement :star:,feature | medium | Critical |
477,317,620 | pytorch | model->to(device) costs over a millisecond when doing nothing | ## 🐛 Bug
On a system with a single GPU, with a model already loaded to this GPU, I expect model->to(torch::kCUDA) to be almost free. However, I find it takes approximately 1.5ms.
## To Reproduce
Steps to reproduce the behavior:
```C++
// Set model to correct device
using namespace std::chrono;
auto start = high_resolution_clock::now();
model->to(tens.device());
std::cout << duration_cast<microseconds>(high_resolution_clock::now() - start).count() << "us for to()" << std::endl;
```
1414us for to()
1523us for to()
...
Modified as a workaround:
```C++
// Set model to correct device
using namespace std::chrono;
auto start = high_resolution_clock::now();
auto deviceIndex = tens.device().is_cpu() ? -1 : tens.device().index();
if (currentDevice != deviceIndex)
{
    model->to(tens.device());
    currentDevice = deviceIndex;
}
std::cout << duration_cast<microseconds>(high_resolution_clock::now() - start).count() << "us for to()" << std::endl;
```
0us for to()
0us for to()
...
## Expected behavior
It should be a similar speed to the second version (workaround)
## Environment
Please copy and paste the output from our
[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py)
(or fill out the checklist below manually).
You can get the script and run it with:
```
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```
- PyTorch Version (e.g., 1.0): 1.2.0-pre
- OS (e.g., Linux): Windows 10
- How you installed PyTorch (`conda`, `pip`, source): nightly
- Build command you used (if compiling from source):
- Python version: N/A
- CUDA/cuDNN version: 10/7.5
- GPU models and configuration: Nvidia RTX 2060
- Any other relevant information:
| module: performance,triaged | low | Critical |
477,376,060 | TypeScript | "after" custom transformer seems to be called before the JS transformation. | Hi there,
I am trying to implement an `after` custom transformer as I am planning to make changes to the transpiled JS code.
According to the inline documentation (`Custom transformers to evaluate after built-in .js transformations.`), the transformer should be called after the source file is transpiled to JS. In the transformer API, the only provided argument is a "SourceFile" object which has no reference to the JS code (or at least I couldn't find it).
What's the point of having an "after" transformer if you don't get access to the transformed source code, nor any possibility to change it? Is it a bug, or is it like that on purpose?
Thanks in advance | Needs Investigation | low | Critical |
477,467,243 | TypeScript | Conditional types break with property chaining | **TypeScript Version:** 3.5.1
**Search Terms:**
property chaining, generic, object type param, conditional type
**Code**
A modified version of @fatcerberus's attempt to break TS.
```ts
type Droste<T extends {x:number|string}> = {
  value: T,
  /**
   * Should alternate between string and number
   */
  droste: Droste<{x:(T["x"] extends number ? string : number)}>
};
declare const droste: Droste<{x:number}>;
//number
const _0 = droste.droste.droste.droste.droste.droste.droste.droste.droste.droste.droste
.droste.droste.droste.droste.droste.droste.droste.droste.droste.droste
.droste.droste;
const _1 = _0.droste; //string
const _2 = _1.droste; //number
const _3 = _2.droste; //string
const _4 = _3.droste; //Expected number, actual string, ???
const _5 = _4.droste; //string
const _6 = _5.droste; //string
const _7 = _6.droste; //string
const _8 = _7.droste; //string
const _9 = _8.droste; //string
//string forever and ever. Where is `number`?
```
**Expected behavior:**
Each access to `.droste` should alternate between `string` and `number`.
Or give a max instantiation depth/count error.
**Actual behavior:**
It alternates between `string` and `number` and then breaks after while.
From that point, it just sticks with `string`.
No errors given.
**Playground Link:**
[Playground](https://www.typescriptlang.org/play/#code/C4TwDgpgBAIgTgewM7AgHgCpQgD1QOwBMkoBvHALnwFcBbAIwjgB8U4BLfAcwF8A+KAF4yAKCjioANwCGAG2oQKUDABoxEgPQAqLeolaoAZQAWCarMJQ5qOPmmoojYAHcIEfFDacuVolBoMTHriWhrBhIgoirCRqGjkFAAUGADaAEQ4aQC62HjuxP50jHBQAPyewBzcUEoBxQCU-CI8ANwiIoQQAMay0nDQXQj4KFARyKhK8OPoCXVM-G0iGhpzcCKDw8BQAPoADEKjsRAAdGNRp0cX01fnZ6g393cnTw-PR8Gvny-flz-Xf7d3hIoF8jm0NiNtgBGA57V4tKDLLzcdZDSEAJlhUPhiJWRSCEK22wAzLD0TikZVvO1CTsACyw4kUjQAURwkC6qEsqxUVk51DkFSqXF5pTFqM2OwArLC6czkVwJZCAGywqXyqko2nbADssOVGuFSqJAA5YTrDdTtQBOWEmy0oynCqAAMwQ-UkTF8lggnrgxygAHVjExoOwSAADVYR0pAA)
**Related Issues:**
It is similar to https://github.com/microsoft/TypeScript/issues/32707
Except, it gives up and resolves the type to `string`, rather than `any`.
| Bug | low | Critical |
477,492,704 | rust | bare_trait_objects help is incorrect with Box<Trait + 'lifetime> in 2015 edition | The help text message for `bare_trait_objects` is incorrect when using the 2015 edition.
The following example gives a correct help text in the 2015 and 2018 edition:
```rust
pub fn test(_: Box<::std::any::Any>) {}
```
```rust
warning: trait objects without an explicit `dyn` are deprecated
--> src/lib.rs:1:20
|
1 | pub fn test(_: Box<::std::any::Any>) {
| ^^^^^^^^^^^^^^^ help: use `dyn`: `dyn (::std::any::Any)`
|
= note: `#[warn(bare_trait_objects)]` on by default
```
but adding a lifetime to the trait bound produces an incorrect help message for the 2015 edition:
```rust
pub fn test(_: Box<::std::any::Any + 'static>) {
}
```
```rust
warning: trait objects without an explicit `dyn` are deprecated
--> src/lib.rs:1:20
|
1 | pub fn test(_: Box<::std::any::Any + 'static>) {
| ^^^^^^^^^^^^^^^^^^^^^^^^^ help: use `dyn`: `dyn ::std::any::Any + 'static`
|
= note: `#[warn(bare_trait_objects)]` on by default
```
Applying the suggest help text in 2015 edition code results in:
```rust
pub fn test(_: Box<dyn ::std::any::Any + 'static>) {
}
```
```rust
error[E0433]: failed to resolve: use of undeclared type or module `dyn`
--> src/lib.rs:1:20
|
1 | pub fn test(_: Box<dyn ::std::any::Any + 'static>) {
| ^^^ use of undeclared type or module `dyn`
warning: trait objects without an explicit `dyn` are deprecated
--> src/lib.rs:1:20
|
1 | pub fn test(_: Box<dyn ::std::any::Any + 'static>) {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: use `dyn`: `dyn dyn ::std::any::Any + 'static`
|
= note: `#[warn(bare_trait_objects)]` on by default
error: aborting due to previous error
For more information about this error, try `rustc --explain E0433`.
error: Could not compile `playground`.
To learn more, run the command again with --verbose.
```
The code actually compiles with the 2018 edition.
The correct fix for 2015 edition code (which also works for edition 2018) is probably to include parens:
```rust
pub fn test(_: Box<dyn (::std::any::Any) + 'static>) {
}
```
I see two options:
1. Add parens to the help text so that it is correct in both editions.
2. Fix the help text only for the 2015 edition, because the suggested fix is already correct for the 2018 edition.
I personally prefer the first option for consistency with the existing help text and easier copy/pasting between editions.
## Meta
`rustc --version --verbose`:
```
rustc 1.38.0-nightly (c4715198b 2019-08-05)
binary: rustc
commit-hash: c4715198b50d1cdaad44b6e250844362b77dcdd7
commit-date: 2019-08-05
host: x86_64-pc-windows-gnu
release: 1.38.0-nightly
LLVM version: 9.0
``` | A-lints,T-compiler,C-bug,A-suggestion-diagnostics,D-edition | low | Critical |
477,514,646 | TypeScript | String key-value type in basic types | ## Search Terms
typescript string key value type
## Suggestion
Add an extra, heavily used type: a simple object with string keys, commonly used as a "primitive map".
I'm tired of writing `{ [ key: string ]: T }` each time.
Want `export type StrMap<T> = { [ key: string ]: T };` as one of basic types.
Keys in Objects are not too far from strings ( `Object.keys(_object).join('/')` ).
This is not fully related to https://github.com/microsoft/TypeScript/pull/26797 as it's a wider use case, IMHO.
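For illustration, here is the alias together with a typical use (a sketch of the suggestion; only the `StrMap` alias itself is the proposed addition):

```typescript
// The proposed built-in alias (today this must be declared per project).
type StrMap<T> = { [key: string]: T };

const counts: StrMap<number> = { a: 1, b: 2 };

// Keys behave like plain strings, as noted above.
const joined = Object.keys(counts).join("/"); // "a/b"

const total = Object.keys(counts)
  .map((k) => counts[k])
  .reduce((x, y) => x + y, 0); // 3

console.log(joined, total);
```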
## Use Cases
I'm using this type in numerous projects and numerous modules.
## Examples
## Checklist
My suggestion meets these guidelines:
* [ ] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [X] This wouldn't change the runtime behavior of existing JavaScript code
* [X] This could be implemented without emitting different JS based on the types of the expressions
* [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [X] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Critical |
477,529,468 | godot | Dependency Manager doesn't validate types | <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:**
3.1.1
**Issue description:**
When you edit scene's dependencies, you can put anything for anything and it won't complain.
**Steps to reproduce:**
1. Create a Sprite node with a texture
2. Close the scene
3. Either remove the texture and open scene or edit dependencies manually
4. Select another file for the texture, like a sound file lol
5. Confirm the changes
6. Godot doesn't see any problem, not even a warning appears
You can open the scene normally, but saving it will remove the wrong dependencies. You can however edit the dependencies again and assign correct resources. | bug,topic:editor,confirmed | low | Minor |
477,529,738 | flutter | Add "optimized with asserts" engine variants. | Currently, the unoptimized engine variants are used to distribute binaries for the Flutter tester that runs on the host. This is because of the desire to have an assertions enabled engine while running tests on the host. Since these use cases don't care about the actual optimizations being disabled for ease of debugging, the engine ought to have an "optimized with asserts" variant of the engine. | engine,P3,team-engine,triaged-engine | low | Critical |
477,546,160 | kubernetes | Automate skew testing for enabled/disabled alpha API fields | https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api_changes.md#adding-unstable-features-to-stable-versions details how to add a new alpha field to an existing API, and enable it over multiple versions, ensuring:
* data is not dropped when running skewed API servers, or when disabling a previously-enabled feature gate
* persisted API objects are not treated as invalid by an n-1 API server
This is a particularly fragile/nuanced area, and is easy to get wrong (c.f. https://github.com/kubernetes/kubernetes/issues/80931, https://github.com/kubernetes/kubernetes/pull/80933, https://github.com/kubernetes/kubernetes/issues/72651).
Something like the following is required to ensure we don't regress in this area:
* Tag alpha fields using something like `// +alpha`
* In CI, detect alpha fields for served API types (via openapi or via struct traversal)
* For every alpha field, ensure there is a test fixture available for a valid object with and without the field
* Exercise all of the following cases (integration tests can start servers with all alpha features enabled, then restart them with alpha features disabled against the same etcd storage, to exercise how an API server behaves when it encounters alpha data persisted by a future beta API server)
* feature enabled
* creating object with alpha data is allowed and persisted
* creating object without alpha data is allowed and persisted
* update existing object without alpha data to one with alpha data is allowed and persisted (for objects that allow mutation)
* update existing object with alpha data to one without alpha data is allowed and persisted (for objects that allow mutation)
* feature disabled
* creating object with alpha data is either rejected or persisted without the alpha data
* creating object without alpha data is allowed and persisted
* update existing object without alpha data to one with alpha data is either rejected or persisted without alpha data (for objects that allow mutation)
* update existing object with alpha data to one without alpha data is allowed and persisted (for objects that allow mutation)
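The enabled/disabled cases above correspond to the usual "drop disabled fields" strategy pattern. A minimal sketch in Go (type and function names here are hypothetical, not actual Kubernetes code):

```go
package main

import "fmt"

// Spec is a stand-in for an API object's spec with one alpha field.
type Spec struct {
	Name       string
	AlphaField *string // gated by a feature gate
}

var alphaFeatureEnabled = false

// dropDisabledFields clears alpha data when the gate is off, unless the
// existing (old) object already uses the field, so that disabling a
// previously-enabled gate does not strand persisted data.
func dropDisabledFields(newSpec, oldSpec *Spec) {
	if alphaFeatureEnabled {
		return
	}
	if oldSpec != nil && oldSpec.AlphaField != nil {
		return // field already in use; keep it on update
	}
	newSpec.AlphaField = nil
}

func main() {
	v := "on"
	s := &Spec{Name: "x", AlphaField: &v}
	dropDisabledFields(s, nil) // create with the gate off
	fmt.Println(s.AlphaField == nil) // true: alpha data dropped
}
```

Automated skew testing would then exercise this function (and the real ones like it) against fixtures with and without the alpha data, under both gate settings.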
/sig api-machinery
/priority important-soon | priority/important-soon,sig/api-machinery,sig/architecture,lifecycle/frozen | low | Minor |
477,564,766 | pytorch | [JIT] Can't use ndim in script | ```python
import torch

def foo(x):
    if x.ndim > 3:
        return torch.neg(x)
    else:
        return x

x = torch.ones(3, 4)
x2 = torch.ones(3, 4, 5, 6)
print(foo(x))
print(foo(x2))
scripted = torch.jit.script(foo)
```
cc @suo | oncall: jit,triaged,jit-backlog | low | Minor |
477,573,612 | godot | Sprite to CollisionPolygon2D improvement | **Godot version:**
3.2.dev
It would be useful if the Sprite to CollisionPolygon feature would also work for "closed rooms"
Example:

As you can see, it's not working right now.
In addition, a "shrink" function would also be cool. | enhancement,topic:core | low | Minor |
477,586,991 | pytorch | Remove USE_C10D flag | After removing THD, `USE_C10D` flag is no longer useful (learned from @pietern ). We can now remove `USE_C10D` and keep `USE_DISTRIBUTED` to toggle distributed features. | oncall: distributed,module: build,triaged | low | Minor |
477,593,595 | go | proposal: review meeting minutes | The [proposal review group meets regularly](https://go.googlesource.com/proposal/+/master/README.md#proposal-review) (roughly weekly) to review pending proposal issues and move them along in the [proposal process](https://golang.org/s/proposal).
Review consists primarily of checking that discussion is ongoing, commenting on issues to move discussion along, summarizing long discussions, CC’ing experts, and accepting or closing proposals when the discussion on the GitHub issue has reached a clear consensus about the outcome.
Review also includes closing proposals that are untenable (for example, because the changes are [backwards-incompatible](https://golang.org/doc/go1compat) or violate key design goals of the language or packages).
**This meta-issue records minutes of the proposal review meetings as issue comments, so that they can be cross-linked easily with the relevant issues. (This meta-issue is for minutes _only_; comments that are not meeting minutes will be deleted.)** | Proposal,umbrella | high | Critical |
477,595,529 | flutter | Should be able to limit max and min time in CupertinoDateTimePicker. | The CupertinoDateTimePicker currently only supports limiting the maximum and minimum date. Sometimes you may also want to limit time. For instance, picking a DateTime before or after now.
This should work in time mode and datetime mode. | c: new feature,framework,f: cupertino,P3,team-design,triaged-design | low | Major |
477,598,413 | go | wasm: add DWARF sections | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12.5 linux/amd64
</pre>
Output source maps for use with browser developer tools to enable simple step debugging inside browsers. Details of the spec can be found here: https://sourcemaps.info/spec.html | NeedsInvestigation,FeatureRequest,arch-wasm | medium | Critical |
477,610,041 | go | regexp: LiteralPrefix returns surprising result for `^` anchored strings | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12.5 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/usr/local/google/home/eliben/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/usr/local/google/home/eliben/go"
GOPROXY="https://proxy.golang.org"
GORACE=""
GOROOT="/usr/lib/google-golang"
GOTMPDIR=""
GOTOOLDIR="/usr/lib/google-golang/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build346731385=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
```
package main
import (
"fmt"
"regexp"
)
func main() {
for _, p := range []string{"foo", "^foo"} {
prefix, _ := regexp.MustCompile(p).LiteralPrefix()
fmt.Printf("prefix = '%v'\n", prefix)
}
}
```
### What did you expect to see?
The documentation for `LiteralPrefix` says:
> LiteralPrefix returns a literal string that must begin any match of the regular expression re. It returns the boolean true if the literal string comprises the entire regular expression.
So I expected to see "foo" in both cases, since any match of the regular expression `^foo` begins with the literal "foo".
```
prefix = 'foo'
prefix = 'foo'
```
### What did you see instead?
```
prefix = 'foo'
prefix = ''
```
In https://github.com/golang/go/issues/21463#issuecomment-326151555, @rsc says:
> "It looks like the literal prefix matcher doesn't kick in for anchored patterns, and we should fix that.
Also, see discussion in https://github.com/golang/go/issues/30425
| NeedsFix | low | Critical |
477,620,103 | flutter | Get Android Back Button event | In my application I want to disable the back button, but I want the user to be able to pop the current screen if he click the `AppBar` back button.
Basically I have a form inside a `PageView`, so at each new form field, the `PageView` moves forward, the back button is supposed to go back one step and not pop the entire form from the screen. But still, if the user wants to pop out, he can using the back button at the top.
Currently we can only do it using `WillPopScope`, but this isn't enough, as `WillPopScope` doesn't differentiate between a physical Back button press or an `AppBar`'s back button press. | c: new feature,platform-android,framework,P3,team-android,triaged-android | low | Minor |
477,623,866 | godot | Calling a method of your own Reference script within PREDELETE notification fails | Godot 3.1.1
I made a script class which has a `clear()` function, for which the goal is to cleanup some external resources. It can be called by code using the class, or at the destruction of the instance. The only destructor logic that I know of is `NOTIFICATION_PREDELETE`.
However... that doesn't work :(
```gdscript
extends Node
class Test:
var a = 42
func _notification(what):
if what == NOTIFICATION_PREDELETE:
print("Predelete ", a)
clear() # Breaks here
func clear():
print("Clearing Test ", a)
func _ready():
var test1 = Test.new()
test1 = null
```
When I run this, I get the following error:
```
Attempt to call function 'clear' in base 'null instance' on a null instance.
```
This isn't helpful here, and makes no sense because `_notification` was called in the first place. I would like to be able to call functions of my own script; otherwise I have to copy/paste its contents entirely...
Note: that situation only seems to occur with `References`. If I inherit `Node` and `free()` it, the code runs normally.
I tested with inner classes but the same issue happens with file scripts. | bug,topic:gdscript,topic:editor,confirmed,crash | low | Critical |
477,630,898 | youtube-dl | Cannot download the all the videos of a webpage | ## Checklist
- [ X ] I'm asking a question
- [ X ] I've looked through the README and FAQ for similar questions
- [ X ] I've searched the bugtracker for similar questions including closed ones
## Question
Hello, I keep reading the documentation and FAQ and asking on Google, but I can't seem to get this right.
This page contains 3 videos on it: https://www.onecommune.com/clean-beauty-day-1
How can I download all 3 videos at once? Or even choose if I would like the 2nd or 3rd video only? I cannot find a "get all videos from a page" option, or any of these options, in the documentation.
There isn't anything in the documentation about this. Does anybody know if this is possible?
Bonus: It seems that the (first) video is saved as a "name_of_video.mp4-nluigeqsez.bin". I renamed it to "name_of_video.mp4" and it worked fine. But strange. | question | low | Critical |
477,639,054 | rust | chain() make collect very slow | While working on a [SO](https://stackoverflow.com/questions/57378606/how-can-i-ensure-that-a-rust-vector-only-contains-alternating-types/57378944#57378944) question.
We were wondering if `chain()` would produce acceptable speed. After some digging and benchmarking, we came to the conclusion that `collect()` is slow because it uses [`while let`](https://doc.rust-lang.org/src/alloc/vec.rs.html#1939). I don't really understand why, but that's a fact.
But we saw that the `for_each()` implementation of `chain()` (probably thanks to [`fold()`](https://doc.rust-lang.org/src/core/iter/adapters/chain.rs.html#101)) doesn't have this problem and produces something a lot faster.
```rust
#![feature(test)]
extern crate test;
use either::Either; // 1.5.2
use std::iter;
#[derive(Debug, Default)]
pub struct Data<X, Y> {
head: Option<Y>,
pairs: Vec<(X, Y)>,
tail: Option<X>,
}
impl<X, Y> Data<X, Y> {
pub fn iter(&self) -> impl Iterator<Item = Either<&X, &Y>> {
let head = self.head.iter().map(Either::Right);
let pairs = self.pairs.iter().flat_map(|(a, b)| {
let a = iter::once(Either::Left(a));
let b = iter::once(Either::Right(b));
a.chain(b)
});
let tail = self.tail.iter().map(Either::Left);
head.chain(pairs).chain(tail)
}
}
#[derive(Debug)]
struct AData(usize);
#[derive(Debug)]
struct BData(usize);
#[cfg(test)]
mod tests {
use crate::{AData, BData, Data};
use test::Bencher; // 1.5.2
#[bench]
fn test_for_each(b: &mut Bencher) {
b.iter(|| {
let data = Data {
head: Some(BData(84)),
pairs: std::iter::repeat_with(|| (AData(42), BData(84)))
.take(20998)
.collect(),
tail: Some(AData(42)),
};
let mut data_bis = Vec::with_capacity(21000);
data.iter().for_each(|x| data_bis.push(x));
});
}
#[bench]
fn test_collect(b: &mut Bencher) {
b.iter(|| {
let data = Data {
head: Some(BData(84)),
pairs: std::iter::repeat_with(|| (AData(42), BData(84)))
.take(20998)
.collect(),
tail: Some(AData(42)),
};
let _: Vec<_> = data.iter().collect();
});
}
}
```
```none
test tests::test_collect ... bench: 1,682,529 ns/iter (+/- 2,157,023)
test tests::test_for_each ... bench: 609,031 ns/iter (+/- 750,944)
```
So, should we change the implementation of `collect()` to use `for_each()`? Note that a `for` loop doesn't solve the problem; for this to be optimized we need to use `for_each()`.
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":null}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END --> | I-slow,C-enhancement,E-mentor,T-libs-api,T-compiler,A-iterators | medium | Critical |
477,681,874 | go | x/sys/cpu: add support for ARM | It would be nice to have feature detection for ARM (32-bit) in `x/sys/cpu`.
Concretely, a new `cpu.ARM` struct that closely resembles the existing `cpu.ARM64` struct, tailored to the ARM specific hardware capabilities. The following fields are proposed, which map directly to the {HWCAP, HWCAP2} auxiliary vector values on Linux and FreeBSD:
```
HasSWP bool // SWP instruction support
HasHALF bool // Half-word load and store support
HasTHUMB bool // ARM Thumb instruction set
Has26BIT bool // Address space limited to 26-bits
HasFASTMUL bool // 32-bit operand, 64-bit result multiplication support
HasFPA bool // Floating point arithmetic support
HasVFP bool // Vector floating point support
HasEDSP bool // DSP Extensions support
HasJAVA bool // Java instruction set
HasIWMMXT bool // Intel Wireless MMX technology support
HasCRUNCH bool // MaverickCrunch context switching and handling
HasTHUMBEE bool // Thumb EE instruction set
HasNEON bool // NEON instruction set
HasVFPv3 bool // Vector floating point version 3 support
HasVFPv3D16 bool // Vector floating point version 3 D8-D15
HasTLS bool // Thread local storage support
HasVFPv4 bool // Vector floating point version 4 support
HasIDIVA bool // Integer divide instruction support in ARM mode
HasIDIVT bool // Integer divide instruction support in Thumb mode
HasIDIV bool // Integer divide instruction support in ARM and Thumb mode
HasVFPD32 bool // Vector floating point version 3 D15-D31
HasLPAE bool // Large Physical Address Extensions
HasEVTSTRM bool // Event stream support
HasAES bool // AES hardware implementation
HasPMULL bool // Polynomial multiplication instruction set
HasSHA1 bool // SHA1 hardware implementation
HasSHA2 bool // SHA2 hardware implementation
HasCRC32 bool // CRC32 hardware implementation
```
As I look around, I see code detecting CPU features based on the `runtime.goarm` value (which is set by the GOARM environment variable at link time), rather than a runtime check. This means that:
1. As `runtime.goarm` is not `const`, the fast-path (e.g. using NEON) and slow-path fallback are being compiled into the binary, but only one path can **ever** be used. It would be nice if both paths can be used via run-time detection instead.
2. Using the above, one cannot have a "universal binary" that is especially problematic on Android.
In one of my projects, I have resorted to parsing `/proc/cpuinfo` for run-time detection of NEON, which only works on Linux. I'd love to instead use the standard library. | NeedsFix,FeatureRequest | medium | Major |
477,692,782 | flutter | `AnimatedSize` crops widgets on resize | I placed a `RaisedButton` inside an `AnimatedSize`, but when the widget is recreated the `AnimatedSize` is clipping the `RaisedButton` during the animation.
[Here you can see what's happening](https://imgur.com/a/0aInqLf)
My code is:
```dart
AnimatedSize(
vsync: this,
duration: Duration(milliseconds: 200),
child: RaisedButton(
onPressed: model.isStepValid() ?
() => model.onNextStepRequested()
: null,
shape: StadiumBorder(),
color: Theme.of(context).primaryColor,
child: Row(
children: <Widget>[
if (model.isLoading)
SizedBox(
height: 30,
width: 30,
child: Padding(
padding: const EdgeInsets.only(right: 12, top: 6, bottom: 6),
child: CircularProgressIndicator(
backgroundColor: Colors.white,
strokeWidth: 2.0,
),
),
),
Text(
model.isFinalStep ? 'Finalizar' : 'Próximo',
style: TextStyle(color: Colors.white),
),
],
),
),
)
```
If I wrap the `RaisedButton` in a `Padding` the problem goes away, but this messes up the whole layout, as I don't want any padding there. | framework,a: animation,a: quality,has reproducible steps,P2,found in release: 3.3,found in release: 3.7,team-framework,triaged-framework | low | Major |
477,742,304 | youtube-dl | How to identify a music video on YouTube or if a video is copyrighted? | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- Look through the README (http://yt-dl.org/readme) and FAQ (http://yt-dl.org/faq) for similar questions
- Search the bugtracker for similar questions: http://yt-dl.org/search-issues
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm asking a question
- [x] I've looked through the README and FAQ for similar questions
- [x] I've searched the bugtracker for similar questions including closed ones
## Question
<!--
Ask your question in an arbitrary form. Please make sure it's worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient.
-->
How can I check if a YouTube video is copyrighted? I'm specifically talking about music videos: is there any data that lets us identify a music video? I have checked another similar issue; the answer was to check the `license` key, but this key is available for all videos and won't help us identify a music video.
Any suggested solution to find a music video? Like `artist`, `creator` or any other data?
Thank you.
| question | low | Critical |
477,852,508 | rust | Rust with [no_std] & CustomAllocator -> rust_oom either undefined in link stage or already defined in compile stage :/ | Hi there,
I'm a bit new to Rust and writing a library for embedded Raspberry Pi bare-metal programming. I'm using ``#![no_std]`` in all of my crates. However, at a certain point functions of the ``alloc`` crate are needed, so I came across the option to implement a custom ``GlobalAllocator`` as per the doc (https://doc.rust-lang.org/core/alloc/trait.GlobalAlloc.html).
However, when compiling this, the crate also requires an ``alloc_error_handler`` to be present, like so:
```
#![no_std]
#![feature(alloc_error_handler)]
#[global_allocator]
static ALLOCATOR: RusPiRoAllocator = RusPiRoAllocator;
[...] here comes the implementation of the allocator [...]
#[alloc_error_handler]
fn alloc_error_handler(_: Layout) -> ! {
// TODO: how to handle memory allocation errors?
loop { }
}
```
However, while compiling the crate on its own works fine, using it as a dependency of an actual binary led to the linker error ``undefined reference to 'rust_oom'``.
So I just put this into the code as well:
```
#[alloc_error_handler]
fn alloc_error_handler(_: Layout) -> ! {
// TODO: how to handle memory allocation errors?
loop { }
}
#[no_mangle]
fn rust_oom() -> ! {
// well, currently there is nothing we could do on out-of-memory other than halt the core
loop { }
}
```
BUT, doing so, the compiler complains that the function ``rust_oom`` has already been defined. So for whatever reason the compiler seems to optimize the ``rust_oom`` function away before linking. As a workaround I put the ``rust_oom`` function into a separate crate that the allocator crate depends on. (You can find the whole code here: [ruspiro-allocator](https://github.com/RusPiRo/ruspiro-allocator/).)
With this workaround in place, building the binary works fine as long as the ``ruspiro-allocator`` crate is a direct dependency of it. I would rather like to bundle other crates of mine into a single crate that is easier to consume, so the final binary needs only one dependency (with some feature gates) instead of many...
So I created a crate, lets call it ``kernel-crate`` that represents the final binary to be build. This crate has a dependency to a ``library-crate`` which depends on ``allocator-crate``.
In this scenario the linker will again complain that the reference to ``rust_oom`` is undefined. So somewhere along the way the function seems to be optimized away again...
But if ``kernel-crate`` directly depends on ``allocator-crate`` everything works fine....
I would appreciate it if someone could shed some light on the issue and how to solve it properly.
Btw. I'm building for the target ``armv7-unknown-linux-gnueabihf``, cross compiling from a windows host machine with cargo version:
```
cargo 1.38.0-nightly (677a180f4 2019-07-08)
```
Any hint would be much appreciated...
Thanks in advance. | A-linkage,A-allocators,T-compiler,C-bug,O-bare-metal | low | Critical |
477,868,192 | pytorch | torch.nn.DataParallel causes incorrect gradients | # Bug Report
## Issue description
I have a model which has two nn.Conv2d modules, and I only use the first one of them in 'forward'.
In general, after executing 'loss.backward()', all weights' gradients of the second Conv2d (the unused one) should be 'None'.
Without nn.DataParallel, I got the correct result (conv2.weight.grad is None).
However, with nn.DataParallel, conv2.weight.grad is a zero tensor instead of None, so if I run optimizer.step() after backward, weight_decay and momentum will be accumulated for the unused parameters, which causes unexpected results. I would expect the gradients of unused parameters to stay 'None' instead of becoming a zero tensor.
I have a temporary sample fix for this issue, but it might cause other problems (when the real p.grad is all zeros).
```python
loss.backward()
for p in model.parameters():
    if p.grad is not None and torch.sum(torch.abs(p.grad)) == 0.0:
        p.grad = None
optimizer.step()
```
So why does this problem occur? And how to fix it correctly?
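To make the impact concrete, here is a minimal sketch (plain Python, no PyTorch) of an SGD step with weight decay, mirroring the update rule documented for `torch.optim.SGD`: a parameter whose gradient is a zero tensor still moves because of the decay term, while a parameter whose gradient is None is skipped entirely.

```python
def sgd_step(param, grad, lr=0.1, weight_decay=0.01):
    # Effective gradient under weight decay is grad + weight_decay * param.
    # A grad of None means "parameter unused": the optimizer skips it.
    if grad is None:
        return param
    return param - lr * (grad + weight_decay * param)

p = 1.0
after_zero_grad = sgd_step(p, 0.0)   # moves: weight decay still applies
after_none_grad = sgd_step(p, None)  # unchanged: parameter skipped
```

This is exactly why a zero conv2.weight.grad under nn.DataParallel silently changes the unused layer's weights.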
## Code example
See https://gist.github.com/xiaoguai0992/db8742c3fa7a5e02be36e64180693752
The output of the code should be:
```bash
Testing non-dataparallel.
conv1.weight, p.grad is None = False
conv1.bias, p.grad is None = False
conv2.weight, p.grad is None = True
conv2.bias, p.grad is None = True
Testing dataparallel
module.conv1.weight, p.grad is None = False
module.conv1.bias, p.grad is None = False
module.conv2.weight, p.grad is None = False
module.conv2.bias, p.grad is None = False
Testing repaired version
module.conv1.weight, p.grad is None = False
module.conv1.bias, p.grad is None = False
module.conv2.weight, p.grad is None = True
module.conv2.bias, p.grad is None = True
```
## System Info
PyTorch version: 1.1.0
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Ubuntu 16.04.6 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609
CMake version: version 3.5.1
Python version: 3.5
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: GeForce RTX 2080 Ti
GPU 1: GeForce RTX 2080 Ti
GPU 2: GeForce RTX 2080 Ti
GPU 3: GeForce RTX 2080 Ti
Nvidia driver version: 418.40.04
cuDNN version: Could not collect
Versions of relevant libraries:
[pip3] numpy==1.16.2
[pip3] torch==1.1.0
[pip3] torchvision==0.2.2.post3
[conda] Could not collect
| oncall: distributed,module: autograd,triaged | low | Critical |
477,892,264 | pytorch | Improve the performance of linear algebra operations in CUDA for small problem sizes | ## Issue description
MAGMA has been the primary backend for specialized linear algebra operations on CUDA devices in PyTorch for a while. MAGMA is far more effective than cuSolver or cuBLAS for large problem sizes (with some routines also supporting batching), but when it comes to small problem sizes, we see extremely regressive behavior.
This behavior is so regressive that, for specific problem size ranges, LAPACK outperforms MAGMA by several orders of magnitude.
This issue is created to discuss certain ways to overcome this regression, and solutions are outlined below. Details of these proposed solutions will be posted in a short while.
## Solutions
1. Implicitly offload computation to CPU, and move the result to GPU.
- This is adopted internally in MAGMA for the symeig routine. Here, we propose that we compute the result using LAPACK calls, and then move the results to the GPU using a `.to` call.
2. Modify the backend based on problem size (e.g. cuSolver instead of MAGMA).
- cuSolver has Jacobi based routines for SVD and Symmetric Eigendecomposition, which are inherently faster than divide and conquer routine used, especially for small problem sizes. We could consider using these instead of MAGMA routines.
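The first solution amounts to a size-based dispatch. A minimal sketch of the dispatch logic (plain Python; the cutoff value is hypothetical and would have to be benchmarked per routine and per GPU):

```python
SMALL_PROBLEM_CUTOFF = 128  # hypothetical threshold; tune per routine/hardware

def pick_backend(n, on_gpu):
    """Choose a linear-algebra backend for an n-by-n problem."""
    if not on_gpu:
        return "lapack"
    # For small GPU problems, computing on the CPU with LAPACK and copying
    # the result back can beat MAGMA by a wide margin.
    return "lapack_offload" if n < SMALL_PROBLEM_CUTOFF else "magma"
```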
These solutions are ordered in the ascending order of difficulty. If you have any other solutions that you would like to propose, please share them.
cc: @soumith @jacobrgardner @Balandat
cc @vishwakftw @SsnL | module: performance,module: cuda,triaged,module: linear algebra | low | Major |
477,975,777 | go | proposal: update proposal/README.md to stand alone | The proposal process document at https://golang.org/s/proposal has at least two problems we should fix.
First, it incorporates a few talks by reference, but most people aren't going to read/watch those or take away what we want them to. It's fine to keep the talk links, but we should extract the important messages and state them explicitly in the README.
Second, we introduced new process for Go 2 changes, with two rounds of proposals, in the [Go 2 here we come blog post](https://blog.golang.org/go2-here-we-come), but this is not mentioned in the README. | Proposal | low | Minor |
478,024,862 | kubernetes | Allow setting ownership on mounted secrets | <!-- Please only use this template for submitting enhancement requests -->
**What would you like to be added**:
Currently, you can set secret file permissions, but not ownership (see the "Secret files permissions" section):
https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets
It would be good to add a `defaultOwner`, and possible `defaultGroup` field that would allow setting the default ownership of the secret files.
**Why is this needed**:
It is possible that there is more than one process running in a container, each running as a different user. (think `processA` running as `userA` and `processB` running as `userB`). `processA` might need to use `secretA` and `processB` to use `secretB`.
To make the secrets usable today, `secretA` and `secretB` would need to be world-readable, because there is no way to set ownership on them. This is undesirable from the security standpoint, as `processA` could read `secretB` and vice versa. | sig/storage,kind/feature,help wanted,priority/important-longterm | high | Critical |
478,075,179 | pytorch | [jit] Python @property's not supported in TorchScript | These are used in nn.quantized.modules, ex:
https://github.com/pytorch/pytorch/blob/master/torch/nn/quantized/modules/linear.py#L116
cc @suo | oncall: jit,triaged,quantization_release_1.3,jit-backlog | low | Minor |
478,098,923 | storybook | Filter and search components by tag | **Is your feature request related to a problem? Please describe.**
Right now, there isn't a way to filter or search through filtered items in the left navigation. While searching is available to find a specific component, it searches through all available components.
**Describe the solution you'd like**
I would like the ability to filter AND search filtered options based on tag.
For example, the tag could be used to filter components by platform. I imagine by default, it would be set to show all components - but if I want to see just components that support Android, I could select that filter to only show me Android components. For a design system that supports different platforms, this means it's not possible to just see components that are Android, iOS, or web.
When selecting an Android-only component, the viewport should show me a view corresponding to Android. When selecting a component that only supports a web component, I should not be able to toggle my view to a mobile view.
Basically, at a glance, I should be able to see what components are supported based on platform.
**Describe alternatives you've considered**
* A separate Storybook per tag (messy, duplicate documentation)
* Using the default SB hierarchy and duplicate components based on tag (makes left nav very long, hard to read)
**Are you able to assist bring the feature to reality?**
Yes
**Additional context**
I would love to be able to filter the left nav like this: https://backpack.github.io/components/accordion/?platform=web
| feature request,ui,ui: search | high | Critical |
478,111,266 | pytorch | Error out during compilation if USE_FBGEMM=1 is ignored | USE_FBGEMM=1 is ignored if your compiler does not support AVX512. It would be nice to change it so that it errors out so that developers are not confused about if it is enabled or not.
@jamesr66a says: "I think USE_FBGEMM=1 is the default in CI and if we did that, a bunch of builds would error".
If we want to make this nicer for developers, we can just figure out which CI configurations we need to make USE_FBGEMM=0 to make this work.
cc @ZolotukhinM | module: build,module: cpu,module: ci,triaged,better-engineering | low | Critical |
478,124,544 | puppeteer | Support to emulate @media (hover: *) | <!--
STEP 1: Are you in the right place?
- For general technical questions or "how to" guidance, please search StackOverflow for questions tagged "puppeteer" or create a new post.
https://stackoverflow.com/questions/tagged/puppeteer
- For issues or feature requests related to the DevTools Protocol (https://chromedevtools.github.io/devtools-protocol/), file an issue there:
https://github.com/ChromeDevTools/devtools-protocol/issues/new.
- Problem in Headless Chrome? File an issue against Chromium's issue tracker:
https://bugs.chromium.org/p/chromium/issues/entry?components=Internals%3EHeadless&blocking=705916
For issues, feature requests, or setup troubles with Puppeteer, file an issue right here!
-->
### Steps to reproduce
**Tell us about your environment:**
* Puppeteer version: 1.19
* Platform / OS version: macOS
* URLs (if applicable):
* Node.js version: 10.16
**What steps will reproduce the problem?**
I want to emulate `@media (hover: ...)` to test my CSS at different devices.
Example:
```
a {
color: green;
}
@media (hover: hover) {
a:hover {
background: red;
}
}
```
`@media (hover: hover)` works well when I run this code in puppeteer+macOS. But I want to emulate `@media (hover: none)` for a mobile device to test it.
**What is the expected result?**
API to configure input mechanism. | feature | low | Critical |
478,150,140 | go | cmd/go: populate debug.Module.Sum even if build in devel mode | Related to #29814
When a binary is built from within a module's source tree, the output from runtime/debug.ReadBuildInfo currently reports the module as having version `(devel)`.
This is understandable.
However, I don't think there is a technical reason why the `Sum`, at least, can't be computed.
This is useful when building command-line tools from the source tree, where you want to emit at least some versioning information that distinguishes one build from another build at a different commit.
\cc @bcmills | help wanted,NeedsFix,FeatureRequest,modules | low | Critical |
478,159,590 | kubernetes | pkg/util/interrupt/interrupt.go has big bad bugs | Discovered in the context of https://github.com/kubernetes/utils/pull/87, I will duplicate some of the thoughts here. If someone takes this, please go read that PR (unless it gets fixed up and merged, in which case close this)
https://github.com/kubernetes/utils/pull/87#pullrequestreview-220631325
1) Is it allowed to call Chain() and then Run() from within a higher-level Run() ?
```
h1 := New(myfinal, foo, bar)
outer := func() error {
h2 := Chain(h1, bat, baz)
h2.Run(inner)
// other stuff
return nil
}
h1.Run(outer)
```
So outer() starts, Chains h2, and runs inner(). When inner() completes, this calls {bat, baz, foo, bar} in that order, right? Then, when outer() completes, it *WILL NOT* run foo and bar, because the `once` was already triggered.
That seems very wrong. If I understand correctly, we should at least explicitly document this. Something like:
Calling Chain on a handler which is already being run, either from within the critical section or externally, will produce undefined behavior and is almost certainly NOT what you want.
In fact, maybe we should specifically track this in `running bool` (guarded by a mutex) and have Chain() return nil (or even panic() ?) if called on a running handler?
Calling Run() on a Handler twice could also return an error rather than just silently not working.
Calling Run() on a child and a parent in serial is also wrong, and we should defend against it.
Here's a fully formed demonstration:
```
package main
import (
"fmt"
"os"
"syscall"
"time"
)
func main() {
sig()
nestClean()
nestSignal()
}
func sig() {
fmt.Println("TEST SIGNAL")
fmt.Println("open foo")
fmt.Println("open bar")
h1 := New(myfinal, foo, bar)
h1.Run(func() error {
raise(syscall.SIGTERM)
time.Sleep(100 * time.Millisecond)
return nil
})
fmt.Println("")
}
func nestClean() {
fmt.Println("TEST NESTED w/o signal")
fmt.Println("open foo")
fmt.Println("open bar")
h1 := New(myfinal, foo, bar)
h1.Run(func() error {
fmt.Println("open qux")
h2 := Chain(h1, qux)
return h2.Run(func() error { return nil })
})
fmt.Println("")
}
func nestSignal() {
fmt.Println("TEST NESTED w/ signal")
fmt.Println("open foo")
fmt.Println("open bar")
h1 := New(myfinal, foo, bar)
h1.Run(func() error {
fmt.Println("open qux")
h2 := Chain(h1, qux)
return h2.Run(func() error { return raise(syscall.SIGTERM) })
})
fmt.Println("")
}
func myfinal(s os.Signal) {
fmt.Printf("caught signal %q\n", s)
}
func foo() {
fmt.Println("close foo")
}
func bar() {
fmt.Println("close bar")
}
func qux() {
fmt.Println("close qux")
}
func raise(sig os.Signal) error {
p, err := os.FindProcess(os.Getpid())
if err != nil {
return err
}
return p.Signal(sig)
}
```
consider:
```
h1 := New(myfinal, foo, bar)
h2 := Chain(h1, bat, qux)
h2.Run(something)
```
In the case of normal completion, we call `h2.close()` which calls `bat()` and `qux()`, then `h1.close()` which calls `foo()` and `bar()`. OK
In the case of a signal being delivered, we call `h2.signal()` which calls `bat()` and `qux()`, then `h1.close()` which calls `foo()` and `bar()`. Then it calls `h1.signal()` which *DOES NOTHING* because the `once` has already been triggered. So `myfinal() is never called`.
I think you need to pass `handler.final` not `handler.signal` ?
Additionally this does not cleanly handle sub-sections - is it supposed to ? See trouble example at Run() below.
This whole Chain() API feels fraught to me. Because of the `once`, calling Chain() is effectively mutating (or invalidating) the original handler. Can we make that more obvious in the API?
Either say that Chain() "copies" the parent handler's contents (and do `append(notify, handler.notify...)`) or say that Chain() modifies the original handler (e.g. `h := New(myfinal, foo, bar); h.Chain(bat, baz); h.Run()` or something).
A fully formed example. Note that myfinal is never called in the chain case
```
package main
import (
"fmt"
"os"
"syscall"
"time"
)
func main() {
sig()
chain()
}
func sig() {
fmt.Println("TEST SIGNAL")
fmt.Println("open foo")
fmt.Println("open bar")
h1 := New(myfinal, foo, bar)
h1.Run(func() error {
raise(syscall.SIGTERM)
time.Sleep(100 * time.Millisecond)
return nil
})
fmt.Println("")
}
func chain() {
fmt.Println("TEST CHAIN SIGNAL")
fmt.Println("open foo")
fmt.Println("open bar")
h1 := New(myfinal, foo, bar)
fmt.Println("open qux")
h2 := Chain(h1, qux)
h2.Run(func() error {
raise(syscall.SIGTERM)
time.Sleep(100 * time.Millisecond)
return nil
})
fmt.Println("")
}
func myfinal(s os.Signal) {
fmt.Printf("caught signal %q\n", s)
}
func foo() {
fmt.Println("close foo")
}
func bar() {
fmt.Println("close bar")
}
func qux() {
fmt.Println("close qux")
}
func raise(sig os.Signal) error {
p, err := os.FindProcess(os.Getpid())
if err != nil {
return err
}
return p.Signal(sig)
}
```
| kind/bug,sig/architecture,lifecycle/frozen | low | Critical |
478,163,637 | youtube-dl | YoutubeDL Embedded Buffer Download | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.08.02. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Search the bugtracker for similar feature requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a feature request
- [x] I've verified that I'm running youtube-dl version **2019.08.02**
- [x] I've searched the bugtracker for similar feature requests including closed ones
## Description
<!--
Provide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
-->
Downloading into a buffer should be possible, not simply into a file.
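As a workaround sketch until there is a native API: youtube-dl can already stream a download to stdout with `-o -`, so one way to get the bytes into an in-memory buffer today is to capture that pipe from Python (illustrative only; the embedded `YoutubeDL` class itself currently writes to files):

```python
import io
import subprocess

def build_command(url, extra_args=()):
    # '-o -' asks youtube-dl to write the media to stdout.
    return ["youtube-dl", "-o", "-", *extra_args, url]

def ytdl_to_buffer(url, extra_args=()):
    """Download into an in-memory buffer by capturing youtube-dl's stdout.

    Requires youtube-dl on PATH and network access when actually run.
    """
    result = subprocess.run(build_command(url, extra_args),
                            stdout=subprocess.PIPE, check=True)
    return io.BytesIO(result.stdout)
```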
| request | low | Critical |
478,174,016 | node | Non-heap memory leak in node:10.15.3-alpine, node:10.16.0-alpine | Copied from https://github.com/nodejs/docker-node/issues/1082
We've witnessed a consistent memory leak in 2 versions of the node alpine Docker images: `node:10.15.3-alpine` and `node:10.16.0-alpine`. This leak may be present in other versions of the alpine images, but we know it is **NOT** present in `node:6.11.1-alpine`, `node:8.16.0-alpine`, `node:10.15.3-jessie`, and `node:10.15.3-stretch`.
This leak is a non-heap leak—the Node heap is not consistently growing, but the memory used by the docker container is.
Here are some graphs demonstrating the leak across ~20 containers. The **blue lines** are the RSS of the docker containers and the **red lines** are the size of the Node heaps of the processes within each container.
## `node:10.15.3-alpine`

## `node:10.16.0-alpine`

## `node:10.15.3-stretch` (no leak, for reference)

| memory | low | Major |
478,174,473 | godot | Godot Editor should only change date modified on files when contents change | # **Godot version:**
3.1.1.stable.mono.official
# **OS/device including version:**
Windows 10 Home 64-Bit
# **Issue description:**
## **What happens**
I open my project and see that Git is reporting that project.godot, default_env.tres and Game.tscn (my root node) have changed despite their contents being identical.
## **What should happen instead**
Last modified timestamps on files should only be changed when file contents change as is expected by Git and probably other version control systems.
# **Steps to reproduce:**
Create a new project.
Close Godot.
Initialize a Git repo and commit.
Open the project in Godot.
Look at Git reporting that files have changed.
# **Minimal reproduction project:**
Any project.
[Here's the project I'm working on](https://github.com/BenMcLean/Wolf3D-Godot) | bug,topic:editor,confirmed | low | Major |
478,179,397 | vue | Click Event Triggers on Complex Buttons are ignored in some environments | ### Version
2.6.10
### Reproduction link
[https://jsfiddle.net/s7hyqk13/2/](https://jsfiddle.net/s7hyqk13/2/)
### Steps to reproduce
1. Configure one of the Adobe CEP [Sample Panels](https://github.com/Adobe-CEP/Samples). The [PProPanel](https://github.com/Adobe-CEP/Samples/tree/master/PProPanel) is a good starting point as it has very clear documentation on how to [set up the environment](https://github.com/Adobe-CEP/Samples/tree/master/PProPanel#2-enable-loading-of-unsigned-panels) for testing.
1. Replace the HTML/JavaScript/CSS contents of the panel project with the contents of the [linked JSFiddle](https://jsfiddle.net/s7hyqk13/2/).
1. Open the panel.
1. Attach a debugger (with the default PProPanel setup this would mean browsing to `localhost:7777` in Chrome).
1. Set the "_Mouse > `click`_" **Event Listener Breakpoint** in the "Sources" tab of the Chrome Debugger.
1. Click the Vue icon in the center of the silver `div`.
### What is expected?
Method bound to the `@click` handler is triggered when the image embedded in the parent div is clicked.
### What is actually happening?
The method bound to the `@click` handler is _only_ triggered when clicking outside of the parent div.
---
This is a non-trivial bug to reproduce as the only place I've experienced it is in [Adobe CEP](https://github.com/Adobe-CEP/CEP-Resources/blob/master/CEP_9.x/Documentation/CEP%209.0%20HTML%20Extension%20Cookbook.md) extensions (which run NW.js under the hood). That said, it does reproduce 100% of the time there.
The debugger (CEP context) seems to show several funny things at around [this line](https://github.com/vuejs/vue/blob/d40b7ddb8177944d1dd50f4f780e6fd92c9455c2/src/platforms/web/runtime/modules/events.js#L69) in the Vue events.js file. Specifically:
1. The `e.timeStamp` value **does not change** between callbacks for _different buttons/elements_.
1. The `attachedTimestamp` is **_significantly_ larger** than the `e.timeStamp` value.
1. The `attachedTimestamp` value _does_ change when the component is updated (the `e.timeStamp` remains identical).
I should note that this affects at least [CEP 8.0 and CEP 9.0](https://github.com/Adobe-CEP/CEP-Resources/blob/master/CEP_9.x/Documentation/CEP%209.0%20HTML%20Extension%20Cookbook.md#chromium-embedded-framework-cef) (tested in Premiere Pro).
**Vue Versions Note:** This broke somewhere between versions 2.5.17 and 2.6.x. If we run a version of 2.5 (2.5.17 and some earlier versions verified), then this issue does not occur. In testing 2.6.x, we've found that this same issue occurs from 2.6.5 to 2.6.10 (latest). Versions of 2.6 prior to 2.6.5 are actually _worse_ in that the buttons basically don't work at all.
**Important Note:** I should _further_ note that apparently right-clicking to open the basic CEF [not CEP] context menu will cause the `e.timeStamp` values to begin reporting as expected. Once this occurs, the buttons will _also_ work as expected. That said, we shouldn't have to instruct users to right-click prior to interfacing with the extension.
<!-- generated by vue-issues. DO NOT REMOVE --> | contribution welcome,browser quirks,has PR | medium | Critical |
478,188,375 | flutter | Issue with refunds (IAP Plugin) | Hello,
I've been using Flutter and the in_app_purchase plugin to create an app with IAP.
Everything is working well, except that if I refund (and revoke) a test purchase, it still seems to be included in this list:
`final QueryPurchaseDetailsResponse responsePast = await InAppPurchaseConnection.instance.queryPastPurchases();`
Is there a way to verify if a purchase is still valid?
I think either this list shouldn't include revoked purchases,
Or it should be tagged as not current,
Or there should be a way to query the state of a purchase.
If this is already possible please point me in the right direction by way of documentation or examples.
Thank you.
Flutter version;
`Flutter 1.7.8+hotfix.3 • channel stable • https://github.com/flutter/flutter.git
Framework • revision b712a172f9 (4 weeks ago) • 2019-07-09 13:14:38 -0700
Engine • revision 54ad777fd2
Tools • Dart 2.4.0
`
In App Purchase version;
` in_app_purchase: 0.2.1`
| platform-android,customer: crowd,p: in_app_purchase,package,P2,team-android,triaged-android | low | Critical |
478,188,881 | youtube-dl | [ADN] How can I fix the subtitle support for the ADN extractor? | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- Look through the README (http://yt-dl.org/readme) and FAQ (http://yt-dl.org/faq) for similar questions
- Search the bugtracker for similar questions: http://yt-dl.org/search-issues
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm asking a question
- [x] I've looked through the README and FAQ for similar questions
- [x] I've searched the bugtracker for similar questions including closed ones
## Question
<!--
Ask your question in an arbitrary form. Please make sure it's worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient.
-->
Hello!
I want to fix the subtitle support in [`adn.py`](https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/extractor/adn.py).
The current situation is explained in #12724. Right now, there is a key hardcoded [here](https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/extractor/adn.py#L76) in the code:
```py
bytes_to_intlist(binascii.unhexlify(self._K + '4b8ef13ec1872730')),
```
This key changes every day, so people have been modifying the extractor code in their installation to get the subtitle extraction working -- and they have been changing the key every day. This is obviously not user friendly.
youtube-dl should automatically get the key as part of the subtitle retrieval process, and that is what I want to implement.
I know I need a JS interpreter, since the remote JS file that has the key changes every day and you can't easily get it with a regex or something similar; the key doesn't appear in the code, it is the result of an obfuscated JS computation.
Should I use PhantomJS or another interpreter to do it?
I've found that you can get the key by executing
```js
videojs.players["adn-video-js"].onChromecastCustomData().key
```
when an episode page is loaded. That's kinda hacky but it works :p
There are many more ways to do it (like proxying the `CryptoJS` object before the dom `load` event)
but I found no easier way than that, because the variable is deeply nested in obfuscated code.
More importantly, this code uses calls to [videojs](https://videojs.com/) (one of the dependencies ADN uses), so I think this code won't break for a while.
Thank you! | question | low | Critical |
478,197,762 | flutter | flutter precache --force doesn't download artifacts for desktop | I synced and ran `flutter precache --force` with the macos desktop opt-in turned on, and it downloaded things, but didn't download "darwin-x64 tools", even though it said it did, because after I got on the bus, and tried to run my desktop app, it downloaded it again (or more parts of it, or something, that took 23 seconds to download).
Here's what it said it downloaded when I did `flutter precache --force`:
```
% flutter precache --force
Downloading Dart SDK from Flutter engine 739b2dd4b27151958ca5993665ba2e8f2a6e9032...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 265M 100 265M 0 0 24.0M 0 0:00:11 0:00:11 --:--:-- 29.0M
Building flutter tool...
Downloading android-arm-profile/darwin-x64 tools... 0.8s
Downloading android-arm-release/darwin-x64 tools... 0.5s
Downloading android-arm64-profile/darwin-x64 tools... 0.6s
Downloading android-arm64-release/darwin-x64 tools... 0.5s
Downloading android-x86 tools... 0.8s
Downloading android-x64 tools... 0.9s
Downloading android-arm tools... 0.5s
Downloading android-arm-profile tools... 0.4s
Downloading android-arm-release tools... 0.5s
Downloading android-arm64 tools... 0.7s
Downloading android-arm64-profile tools... 0.5s
Downloading android-arm64-release tools... 0.6s
Downloading ios tools... 2.0s
Downloading ios-profile tools... 1.5s
Downloading ios-release tools... 1.3s
Downloading package sky_engine... 0.2s
Downloading common tools... 0.7s
Downloading common tools... 0.6s
Downloading darwin-x64 tools... 1.8s
```
And when I ran my app:
```
Downloading darwin-x64 tools... 23.4s
``` | tool,a: first hour,a: desktop,P3,team-tool,triaged-tool | low | Major |
478,213,842 | kubernetes | TOB-K8S-034: HTTPS connections are not authenticated | This issue was reported in the [Kubernetes Security Audit Report](https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Final%20Report.pdf)
**Description**
The Kubernetes system allows users to set up Public Key Infrastructure (PKI), but often fails to authenticate connections using Transport Layer Security (TLS) between components, negating any benefit of using PKI. The current status of authenticated HTTPS calls is outlined in the following diagram.
This failure to authenticate components within the system is extremely dangerous and should be changed to use authenticated HTTPS by default. Systems Kubernetes can depend on, such as Etcd, have also been impacted by the absence of authenticated TLS connections.
**Exploit Scenario**
Eve gains access to Alice’s Kubernetes cluster and registers a new malicious kubelet with the kube-apiserver. Since the kube-apiserver is not using authenticated HTTPS to authenticate the kubelet, the malicious kubelet receives Pod specifications as if it were an authorized kubelet. Eve subsequently introspects the malicious kubelet-managed Pods for sensitive information.
**Recommendation**
Short term, authenticate all HTTPS connections within the system by default, and ensure that all components use the same Certificate Authority controlled by the kube-apiserver.
Long term, disable the ability for components to communicate over HTTP, and ensure that all components only communicate over secure and authenticated channels. Additionally, use mutual, or two-way, TLS for all connections. This will allow the system to use TLS for authentication of client credentials whenever possible, and ensure that all components are communicating with their expected targets at the expected security level.
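To make the long-term recommendation concrete, here is a minimal sketch of a server-side TLS configuration that refuses unauthenticated clients. Kubernetes components are written in Go; this Python illustration only demonstrates the mutual-TLS settings involved, and `ca_file` is a hypothetical path to a cluster CA bundle.

```python
import ssl

def make_mutual_tls_server_context(ca_file=None):
    """Build a server-side TLS context that requires client certificates.

    ca_file is a hypothetical path to the cluster CA bundle used to
    verify client certificates; certificate/key loading is elided.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # Require the client to present a certificate signed by our CA
    # (mutual TLS); the default verify_mode for servers is CERT_NONE.
    ctx.verify_mode = ssl.CERT_REQUIRED
    if ca_file is not None:
        ctx.load_verify_locations(cafile=ca_file)
    return ctx

ctx = make_mutual_tls_server_context()
```

With `verify_mode` set to `CERT_REQUIRED`, the handshake itself becomes the authentication step: a component that cannot present a CA-signed certificate never gets a connection at all.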
**Anything else we need to know?**:
See #81146 for current status of all issues created from these findings.
The vendor gave this issue an ID of TOB-K8S-034 and it was finding 3 of the report.
The vendor considers this issue High Severity.
To view the original finding, begin on page 24 of the [Kubernetes Security Review Report](https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Final%20Report.pdf)
**Environment**:
- Kubernetes version: 1.13.4 | kind/bug,area/security,sig/api-machinery,sig/auth,priority/important-longterm,lifecycle/frozen,wg/security-audit | low | Critical |
478,213,851 | kubernetes | TOB-K8S-022: TOCTOU when moving PID to manager’s cgroup via kubelet | This issue was reported in the [Kubernetes Security Audit Report](https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Final%20Report.pdf)
**Description**
PIDs are not process handles. A given PID may be reused in two dependent operations leading to a “Time Of Check vs Time Of Use” (TOCTOU) attack. This occurs in the Linux container manager ensureProcessInContainerWithOOMScore function, which (Figure 1):
1. Checks if a PID is running on host via reading /proc/<pid>/ns/pid with the isProcessRunningInHost function,
2. Gets cgroups for pid via reading /proc/<pid>/cgroup by getContainer function,
3. Moves the PID to the manager’s cgroup,
4. Sets an out-of-memory killer badness heuristic, which determines the likelihood of whether a process will be killed in out-of-memory scenarios, via writing to /proc/<pid>/oom_score_adj in ApplyOOMScoreAdj.
These operations allow an attacker to move a process to the manager’s cgroup, giving it access to full devices on the host, and change the OOM-killer badness heuristic from either the node host or from a container on the machine, assuming the attacker also has access to unprivileged users on the node host.
**Exploit Scenario**
Eve gains access to an unprivileged user on a node host and a root user on a Pod container on the same host within Alice’s cluster. Eve prepares a malicious process and PID-reuse attack against the docker-containerd process. Eve spawns a process within the Pod container as the root user, taking advantage of the TOCTOU and elevates her cgroup to gain read and write access to all devices. AppArmor blocks Eve from mounting devices; however, her process is still able to read from and write to host devices.
This issue is more easily exploitable by abusing the behavior discovered in TOB-K8S-021.
See Appendix D for a proof of concept for this attack without the PID reuse.
**Recommendations**
Short term, when performing operations on files in the /proc/<pid>/ directory for a given pid, open a directory stream file descriptor for /proc/<pid>/ and use this handle when reading or writing to files.
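The short-term recommendation (hold one directory file descriptor and resolve all reads relative to it) can be sketched as follows. The actual code is Go; this is a Python illustration of the dir_fd technique only, demonstrated against the current process's own /proc entry.

```python
import os

def read_proc_file(pid, name):
    """Read /proc/<pid>/<name> through a directory file descriptor.

    Opening /proc/<pid> once and resolving entries relative to that fd
    ensures every subsequent read refers to the same process entry,
    rather than re-resolving the path each time while the PID may be
    recycled underneath us.
    """
    dir_fd = os.open("/proc/%d" % pid, os.O_RDONLY | os.O_DIRECTORY)
    try:
        fd = os.open(name, os.O_RDONLY, dir_fd=dir_fd)
        try:
            return os.read(fd, 4096)
        finally:
            os.close(fd)
    finally:
        os.close(dir_fd)

# Demonstrate against our own process entry (Linux only).
comm = read_proc_file(os.getpid(), "comm")
```

Note this only narrows the race between individual /proc reads; as the finding states, the write to cgroup.procs itself cannot currently be made race-free this way.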
It does not currently appear possible to prevent TOCTOU race conditions between the checks and moving the process to a cgroup because this is done by writing to the /sys/fs/cgroup/<cgroup>/cgroup.procs file. We recommend validating that a process associated with a given PID is the same process before and after moving the PID to cgroup. If the post-validation fails, log an error and consider reverting the cgroup movement.
Long term, we recommend tracking further development of Linux kernel cgroups features or even engaging with the community to produce a race-free method to manage cgroups. A similar effort is currently emerging to provide a race-free way of sending signals to processes via adding a process identifier file descriptor (PIDFD) which would be a proper handle to send signals to processes.
**Anything else we need to know?**:
See #81146 for current status of all issues created from these findings.
The vendor gave this issue an ID of TOB-K8S-022 and it was finding 4 of the report.
The vendor considers this issue High Severity.
To view the original finding, begin on page 26 of the [Kubernetes Security Review Report](https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Final%20Report.pdf)
**Environment**:
- Kubernetes version: 1.13.4 | kind/bug,priority/backlog,area/security,area/kubelet,sig/node,priority/important-longterm,lifecycle/frozen,wg/security-audit,triage/accepted | low | Critical |
478,213,885 | kubernetes | TOB-K8S-004: Pervasive world-accessible file permissions | This issue was reported in the [Kubernetes Security Audit Report](https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Final%20Report.pdf)
**Description**
Kubernetes uses files and directories to store information ranging from key-value data to certificate data to logs. However, a number of locations have world-writable directories:
```
cluster/images/etcd/migrate/rollback_v2.go:110: if err := os.MkdirAll(path.Join(migrateDatadir, "member", "snap"), 0777); err != nil {
cluster/images/etcd/migrate/data_dir.go:49: err := os.MkdirAll(path, 0777)
cluster/images/etcd/migrate/data_dir.go:87: err = os.MkdirAll(backupDir, 0777)
third_party/forked/godep/save.go:472: err := os.MkdirAll(filepath.Dir(dst), 0777)
third_party/forked/godep/save.go:585: err := os.MkdirAll(filepath.Dir(name), 0777)
pkg/volume/azure_file/azure_util.go:34: defaultFileMode = "0777"
pkg/volume/azure_file/azure_util.go:35: defaultDirMode = "0777"
pkg/volume/emptydir/empty_dir.go:41:const perm os.FileMode = 0777
```
Figure 7.1: World-writable (0777) directories and defaults
Other areas of the system use world-writable files as well:
```
cluster/images/etcd/migrate/data_dir.go:147: return ioutil.WriteFile(v.path, data, 0666)
cluster/images/etcd/migrate/migrator.go:120: err := os.Mkdir(backupDir, 0666)
third_party/forked/godep/save.go:589: return ioutil.WriteFile(name, []byte(body), 0666)
pkg/kubelet/kuberuntime/kuberuntime_container.go:306: if err := m.osInterface.Chmod(containerLogPath, 0666); err != nil {
pkg/volume/cinder/cinder_util.go:271: ioutil.WriteFile(name, data, 0666)
pkg/volume/fc/fc_util.go:118: io.WriteFile(fileName, data, 0666)
pkg/volume/fc/fc_util.go:128: io.WriteFile(name, data, 0666)
pkg/volume/azure_dd/azure_common_linux.go:77: if err = io.WriteFile(name, data, 0666); err != nil {
pkg/volume/photon_pd/photon_util.go:55: ioutil.WriteFile(fileName, data, 0666)
pkg/volume/photon_pd/photon_util.go:65: ioutil.WriteFile(name, data, 0666)
```
Figure 7.2: World-writable (0666) files
A number of locations in the code base also rely on world-readable directories and files. For example, Certificate Signing Requests (CSRs) are written to a directory with mode 0755 (world readable and browseable) with the actual CSR having mode 0644 (world-readable):
```
// WriteCSR writes the pem-encoded CSR data to csrPath.
// The CSR file will be created with file mode 0644.
// If the CSR file already exists, it will be overwritten.
// The parent directory of the csrPath will be created as needed with file mode 0755.
func WriteCSR(csrDir, name string, csr *x509.CertificateRequest) error {
	...
	if err := os.MkdirAll(filepath.Dir(csrPath), os.FileMode(0755)); err != nil {
		...
	}
	if err := ioutil.WriteFile(csrPath, EncodeCSRPEM(csr), os.FileMode(0644)); err != nil {
		...
	}
	...
}
```
Figure 7.3: Documentation and code from cmd/kubeadm/app/util/pkiutil/pki_helpers.go
**Exploit Scenario**
Alice wishes to migrate some etcd values during normal cluster maintenance. Eve has local access to the cluster’s filesystem, and modifies the values stored during the migration process, granting Eve further access to the cluster as a whole.
**Recommendation**
Short term, audit all locations that use world-accessible permissions. Revoke those that are unnecessary. Very few files truly need to be readable by any user on a system. Almost none should need to allow arbitrary system users write access.
Long term, use system groups and extended Access Control Lists (ACLs) to ensure that all files and directories created by Kubernetes are accessible only by those users and groups that should be able to access them. This will ensure that only the appropriate users with the correct Unix-level groups may access data. Kubernetes may describe what these groups should be, or create a role-based system to which administrators may assign users and groups.
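As a sketch of the short-term fix, restrictive modes can be supplied at creation time rather than applied afterwards, so the file never briefly exists with permissive defaults. This is a Python illustration (the Kubernetes code is Go) with a hypothetical file name:

```python
import os
import stat
import tempfile

def write_private_file(path, data):
    """Create a file readable and writable only by its owner (0600).

    Passing the mode to os.open at creation time avoids a window in
    which the file exists with world-readable default permissions.
    O_EXCL additionally refuses to follow a pre-planted file.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    try:
        os.write(fd, data)
    finally:
        os.close(fd)

with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "csr.pem")  # hypothetical CSR path
    write_private_file(target, b"-----BEGIN CERTIFICATE REQUEST-----\n")
    mode = stat.S_IMODE(os.stat(target).st_mode)
```

The same idea applies to directories: pass 0o700 to the mkdir call instead of 0777, rather than tightening permissions in a second step.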
**Anything else we need to know?**:
See #81146 for current status of all issues created from these findings.
The vendor gave this issue an ID of TOB-K8S-004 and it was finding 8 of the report.
The vendor considers this issue Medium Severity.
To view the original finding, begin on page 32 of the [Kubernetes Security Review Report](https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Final%20Report.pdf)
**Environment**:
- Kubernetes version: 1.13.4 | kind/bug,area/security,sig/storage,sig/node,help wanted,priority/important-longterm,lifecycle/frozen,good first issue,wg/security-audit,triage/accepted | medium | Critical |
478,213,906 | kubernetes | TOB-K8S-012: Use of InsecureIgnoreHostKey in SSH connections | This issue was reported in the [Kubernetes Security Audit Report](https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Final%20Report.pdf)
**Description**
Kubernetes uses Secure Shell (SSH) to connect from masters to nodes under certain, deprecated, configuration settings. As part of this connection, masters must open an SSH connection using NewSSHTunnel, which in turn uses makeSSHTunnel. However, makeSSHTunnel configures the connection to skip verification of host keys. An attacker could man-in-the-middle or otherwise tamper with the keys on the node, without alerting the master. The code for makeSSHTunnel begins with:
```
func makeSSHTunnel(user string, signer ssh.Signer, host string) (*SSHTunnel, error) {
	config := ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	// (...)
}
**Exploit Scenario**
Alice’s cluster is configured to use SSH tunnels from control plane nodes to worker nodes. Eve, a malicious privileged user, has a position sufficient to man-in-the-middle connections from control plane nodes to worker nodes. Due to the use of InsecureIgnoreHostKey, Alice is never alerted to this situation. Sensitive cluster information is leaked to Eve.
**Recommendation**
Short term, document that this restriction is in place, and provide administrators with guidance surrounding SSH host auditing. This should support something similar to the Mozilla SSH Best Practices guidance.
Long term, decide if SSH tunnels will be deprecated. If they will be deprecated, remove support completely. If tunnels will not be deprecated, include a mechanism for nodes to report the SSH keys to the cluster, and always insist that the keys remain static. This may require a process to preload the trust-on-first-use (TOFU) mechanisms for SSH.
**Anything else we need to know?**:
See #81146 for current status of all issues created from these findings.
The vendor gave this issue an ID of TOB-K8S-012 and it was finding 10 of the report.
The vendor considers this issue Medium Severity.
To view the original finding, begin on page 37 of the [Kubernetes Security Review Report](https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Final%20Report.pdf)
**Environment**:
- Kubernetes version: 1.13.4 | kind/bug,area/security,sig/api-machinery,lifecycle/frozen,wg/security-audit | low | Critical |
478,213,945 | kubernetes | TOB-K8S-013: Use of InsecureSkipVerify and other TLS weaknesses | This issue was reported in the [Kubernetes Security Audit Report](https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Final%20Report.pdf)
**Description**
Kubernetes uses Transport Layer Security (TLS) throughout the system to connect disparate components. These include kube-proxy, kubelets, and other core, fundamental components of a working cluster. However, Kubernetes does not verify TLS connections by default for certain connections, and portions of the codebase include the use of InsecureSkipVerify, which precludes verification of the presented certificate (Figure 11.1).
```
if dialer != nil {
	// We have a dialer; use it to open the connection, then
	// create a tls client using the connection.
	netConn, err := dialer(ctx, "tcp", dialAddr)
	if err != nil {
		return nil, err
	}
	if tlsConfig == nil {
		// tls.Client requires non-nil config
		klog.Warningf("using custom dialer with no TLSClientConfig. Defaulting to InsecureSkipVerify")
		// tls.Handshake() requires ServerName or InsecureSkipVerify
		tlsConfig = &tls.Config{
			InsecureSkipVerify: true,
		}
		// ...
	}
```
Figure 11.1: An example of InsecureSkipVerify in kubernetes-1.13.4/staging/src/k8s.io/apimachinery/pkg/util/proxy/dial.go
**Exploit Scenario**
Alice configures a Kubernetes cluster for her organization. Eve, a malicious privileged attacker with sufficient position, launches a man-in-the-middle attack against the kube-apiserver, allowing her to view all of the secrets shared over the channel.
**Recommendation**
Short term, audit all locations within the codebase that use InsecureSkipVerify, and move towards a model that always has the correct information present for all TLS connections in the cluster.
Long term, default to verifying TLS certificates throughout the system, even in non-production configurations. There are few reasons to support insecure TLS configurations, even in development scenarios. It is better to default to secure configurations than to insecure ones that may be updated.
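For contrast with the InsecureSkipVerify pattern above, the verifying configuration is the default in most TLS libraries. A Python sketch (illustration only; in Go the equivalent fix is populating tls.Config with RootCAs and ServerName rather than setting InsecureSkipVerify):

```python
import ssl

# create_default_context() is the verifying client configuration:
# hostname checking on, certificate validation required.
secure = ssl.create_default_context()

# The moral equivalent of InsecureSkipVerify, shown only to make the
# difference explicit; this accepts any certificate from any host and
# should never ship, even in development configurations.
insecure = ssl.create_default_context()
insecure.check_hostname = False  # must be disabled before CERT_NONE
insecure.verify_mode = ssl.CERT_NONE
```

Defaulting every component to the first configuration, and requiring an explicit opt-out, matches the long-term recommendation of verifying certificates even in non-production clusters.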
**Anything else we need to know?**:
See #81146 for current status of all issues created from these findings.
The vendor gave this issue an ID of TOB-K8S-013 and it was finding 11 of the report.
The vendor considers this issue Medium Severity.
To view the original finding, begin on page 38 of the [Kubernetes Security Review Report](https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Final%20Report.pdf)
**Environment**:
- Kubernetes version: 1.13.4 | kind/bug,area/security,sig/auth,priority/important-longterm,lifecycle/frozen,wg/security-audit | low | Major |
478,213,982 | kubernetes | TOB-K8S-020: Kubectl can cause a local Out Of Memory error with a malicious Pod specification | This issue was reported in the [Kubernetes Security Audit Report](https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Final%20Report.pdf)
**Description**
When attempting to apply a Pod to the cluster, kubectl will read in the entire Pod spec in an attempt to perform validation. This results in the entire Pod spec being loaded into memory when loading from either an on-disk or remote resource. The latter is more dangerous because it is a commonly acceptable practice to pull a Pod spec from a remote web server. A weaponized example of this has been produced leveraging a Python Flask server and kubectl in Figures 1 and 2, respectively.
```
from flask import Flask, Response

app = Flask(__name__)

@app.route('/')
def generate_large_response():
    return Response("A" * (500 * 1024 * 1024), mimetype="text/yaml")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```
Figure 15.1: The malicious web server running on 172.31.6.71:8000
```
root@node1:/home/ubuntu# kubectl apply -f http://172.31.6.71:8000/
Killed
```
Figure 15.2: The killing of kubectl due to an OOM.
The area of code requiring full loading of the Pod spec is within the validation of annotation length, visible in Figure 3.
```
func ValidateAnnotations(annotations map[string]string, fldPath *field.Path) field.ErrorList {
	allErrs := field.ErrorList{}
	var totalSize int64
	for k, v := range annotations {
		...
		totalSize += (int64)(len(k)) + (int64)(len(v))
	}
	if totalSize > (int64)(totalAnnotationSizeLimitB) {
		allErrs = append(allErrs, field.TooLong(fldPath, "", totalAnnotationSizeLimitB))
	}
	return allErrs
}
```
Figure 15.3: The calculation checking whether the total size of annotations is larger than the limit.
**Exploit Scenario**
Eve configures a malicious web server to send large responses on every request. Alice references a pod file on Eve’s web server through kubectl apply. Eve’s malicious web server returns a response that is too large for Alice’s machine to store in memory. Alice unknowingly causes an OOM on her machine running kubectl apply.
**Recommendation**
Avoid loading arbitrary data into memory regardless of size. Limit the size of a valid spec or inform the user when it consumes a substantial amount of memory, especially for specs that are fetched from remote endpoints.
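A minimal sketch of the size-capping idea, reading in chunks and failing fast once a limit is exceeded (Python for illustration with a hypothetical limit; kubectl itself is Go):

```python
import io

class SpecTooLarge(Exception):
    """Raised when a fetched spec exceeds the configured size limit."""

def read_capped(stream, limit):
    """Read at most `limit` bytes from a stream, aborting beyond it.

    Reading in fixed-size chunks means a hostile server can never force
    more than `limit` plus one chunk into memory, regardless of the
    Content-Length it advertises.
    """
    chunks = []
    total = 0
    while True:
        chunk = stream.read(64 * 1024)
        if not chunk:
            return b"".join(chunks)
        total += len(chunk)
        if total > limit:
            raise SpecTooLarge("response exceeded %d bytes" % limit)
        chunks.append(chunk)

# A well-behaved spec is returned intact; an oversized one raises
# SpecTooLarge long before it can exhaust memory.
spec = read_capped(io.BytesIO(b"kind: Pod\n"), limit=1024)
```

Applied to the Flask example above, kubectl would surface an error instead of being OOM-killed.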
**Anything else we need to know?**:
See #81146 for current status of all issues created from these findings.
The vendor gave this issue an ID of TOB-K8S-020 and it was finding 15 of the report.
The vendor considers this issue Medium Severity.
To view the original finding, begin on page 48 of the [Kubernetes Security Review Report](https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Final%20Report.pdf)
**Environment**:
- Kubernetes version: 1.13.4 | kind/bug,area/security,sig/cli,lifecycle/frozen,wg/security-audit | low | Critical |
478,214,028 | kubernetes | TOB-K8S-029: Encryption recommendations not in accordance with best practices | This issue was reported in the [Kubernetes Security Audit Report](https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Final%20Report.pdf)
**Description**
The cryptographic recommendations in the official documentation are not accurate, and may lead users to make unsafe choices with their Kubernetes encryption configuration.
https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/
The default encryption option for users should be SecretBox. It is more secure and efficient than AES-CBC. Users should be encouraged to use KMS whenever possible. We believe these should be the only two options available to users. AES-GCM is secure, but as the docs point out, requires frequent key rotation to avoid nonce reuse attacks.
Finally, AES-CBC is vulnerable to padding oracle attacks and should be deprecated. While Kubernetes doesn't lend itself to a padding oracle attack, AES-CBC being the recommended algorithm both spreads misconceptions about cryptographic security and promotes a strictly worse choice than SecretBox.
**Exploit Scenario**
Alice configures an EncryptionConfiguration following the Kubernetes official documentation. Due to the lack of correctness in regards to best practices, Alice is misled and uses the wrong encryption provider.
**Recommendation**
Short term, default to the use of the SecretBox provider.
Long term, revise the documentation regarding the available EncryptionConfiguration providers and ensure the documentation follows up-to-date best practices. The updated table included in Appendix G should be used as a replacement of the existing table.
**Anything else we need to know?**:
See #81146 for current status of all issues created from these findings.
The vendor gave this issue an ID of TOB-K8S-029 and it was finding 19 of the report.
The vendor considers this issue Medium Severity.
To view the original finding, begin on page 57 of the [Kubernetes Security Review Report](https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Final%20Report.pdf)
**Environment**:
- Kubernetes version: 1.13.4 | kind/bug,area/security,kind/documentation,sig/storage,sig/auth,sig/docs,lifecycle/frozen,wg/security-audit,sig/security | medium | Critical |
478,214,072 | kubernetes | TOB-K8S-024: kubelet liveness probes can be used to enumerate host network | This issue was reported in the [Kubernetes Security Audit Report](https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Final%20Report.pdf)
**Description**
Kubernetes supports both readiness and liveness probes to detect when a Pod is operating correctly, and when to begin or stop directing traffic to a Pod. Three methods are available to facilitate these probes: command execution, HTTP, and TCP.
Using the HTTP and TCP probes, it is possible for an operator with limited access to the cluster (purely kubernetes-service related access) to enumerate the underlying host network. This is possible due to the scope in which these probes execute. Unlike the command execution probe, which will execute a command within the container, the TCP and HTTP probes execute from the context of the kubelet process. Thus, host networking interfaces are used, and the operator is able to specify hosts that may not be reachable from the Pods the kubelet is managing.
The enumeration of the host network uses the container’s health and readiness to determine the status of the remote host. If the Pod is killed and restarted due to a failed liveness probe, this indicates that the host is inaccessible. If the Pod successfully passes the liveness check and is presented as ready, the host is accessible. These two outcomes give an attacker a boolean accessible/inaccessible signal for each host reachable from the node running the kubelet.
Additionally, an attacker can append headers through the Pod specification, which are interpreted by the Go HTTP library as authentication or additional request headers. This can allow an attacker to abuse liveness probes to access a wider-range of cluster resources.
An example Pod file that can enumerate the host network is available in Appendix E.
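The Appendix E file itself is not reproduced here, but a hypothetical Pod spec of the kind described above might look like the following sketch — note both the arbitrary `host` (probed from the kubelet's network context, not the container's) and the attacker-supplied header:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-scan        # hypothetical example, not the report's Appendix E
spec:
  containers:
    - name: scanner
      image: busybox      # placeholder image
      livenessProbe:
        httpGet:
          host: 10.0.0.1  # arbitrary target, resolved from the kubelet's host network
          port: 8080
          httpHeaders:
            - name: Authorization           # headers pass through to the Go HTTP client
              value: Bearer attacker-token  # placeholder value
```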
**Exploit Scenario**
Alice configures a cluster which restricts communications between services on the cluster. Eve gains access to Alice’s cluster, and subsequently submits many Pods enumerating the host network in an attempt to gain information about Alice’s underlying host network.
**Recommendations**
Short term, restrict the kubelet in a way that prevents the kubelet from probing hosts it does not manage directly.
Long term, consider restricting probes to the container runtime, allowing liveness to be determined within the scope of the container-networking interface.
**Anything else we need to know?**:
See #81146 for current status of all issues created from these findings.
The vendor gave this issue an ID of TOB-K8S-024 and it was finding 21 of the report.
The vendor considers this issue Medium Severity.
To view the original finding, begin on page 60 of the [Kubernetes Security Review Report](https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Final%20Report.pdf)
**Environment**:
- Kubernetes version: 1.13.4 | kind/bug,priority/backlog,area/security,sig/node,priority/important-longterm,lifecycle/frozen,wg/security-audit,triage/accepted | medium | Critical |
478,214,116 | kubernetes | TOB-K8S-007: Log rotation is not atomic | This issue was reported in the [Kubernetes Security Audit Report](https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Final%20Report.pdf)
**Description**
kubelets use a log to store metadata about the container system, such as readiness status. As is normal for logging, kubelets will rotate their logs under certain conditions:
```
// rotateLatestLog rotates latest log without compression, so that container can still write
// and fluentd can finish reading.
func (c *containerLogManager) rotateLatestLog(id, log string) error {
    timestamp := c.clock.Now().Format(timestampFormat)
    rotated := fmt.Sprintf("%s.%s", log, timestamp)
    if err := os.Rename(log, rotated); err != nil {
        return fmt.Errorf("failed to rotate log %q to %q: %v", log, rotated, err)
    }
    if err := c.runtimeService.ReopenContainerLog(id); err != nil {
        // Rename the rotated log back, so that we can try rotating it again
        // next round.
        // If kubelet gets restarted at this point, we'll lose original log.
        if renameErr := os.Rename(rotated, log); renameErr != nil {
            // This shouldn't happen.
            // Report an error if this happens, because we will lose original
            // log.
            klog.Errorf("Failed to rename rotated log %q back to %q: %v, reopen container log error: %v", rotated, log, renameErr, err)
        }
        return fmt.Errorf("failed to reopen container log %q: %v", id, err)
    }
    return nil
}
```
Figure 22.1: One of the log rotation mechanisms within kubelet
However, if the kubelet were restarted during the rotation, the logs and their contents would be lost. This could have a wide range of impacts to the end user, from missing threat-hunting data to simple error discovery.
**Exploit Scenario**
Alice is running a Kubernetes cluster for her organization. Eve has a position sufficient to watch the logs, and understands when log rotation will occur. Eve then faults a kubelet when rotation occurs, ensuring that the logs are removed.
**Recommendation**
Short term, move to a copy-then-rename approach. This will ensure that logs aren’t lost to simple rename mishaps, and that at worst they are left under a transient name.
Long term, shift away from log rotation and move towards persistent logs regardless of location. This would mean that logs would be written to in linear order, and a new log would be created whenever rotation is required.
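The short-term copy-then-rename ordering can be sketched as follows (Python for brevity — the kubelet code above is Go, and `rotate_log_copy_then_rename` is an illustrative name, not a kubelet function). A crash at any step leaves the original log intact:

```python
import os
import shutil


def rotate_log_copy_then_rename(log_path: str, rotated_path: str) -> None:
    """Rotate a log with no window in which the original file is missing.

    Worst case on a crash: a stale ``rotated_path + '.tmp'`` is left behind,
    but ``log_path`` still exists with its full contents.
    """
    tmp_path = rotated_path + ".tmp"
    shutil.copy2(log_path, tmp_path)   # 1. copy the contents to a transient name
    os.rename(tmp_path, rotated_path)  # 2. atomically publish the rotated copy
    with open(log_path, "w"):          # 3. only now truncate the original,
        pass                           #    so the writer's fd stays valid
```

Contrast this with the rename-first approach above, where a restart between the two renames loses the original log entirely.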
**Anything else we need to know?**:
See #81146 for current status of all issues created from these findings.
The vendor gave this issue an ID of TOB-K8S-007 and it was finding 24 of the report.
The vendor considers this issue Low Severity.
To view the original finding, begin on page 65 of the [Kubernetes Security Review Report](https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Final%20Report.pdf)
**Environment**:
- Kubernetes version: 1.13.4 | kind/bug,area/security,sig/node,help wanted,lifecycle/frozen,wg/security-audit,triage/accepted | medium | Critical |
478,214,127 | kubernetes | TOB-K8S-008: Arbitrary file paths without bounding | This issue was reported in the [Kubernetes Security Audit Report](https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Final%20Report.pdf)
**Description**
Kubernetes as a whole accesses files across the system, including logs, configuration files, and container descriptions. However, the system does not include a whitelist of safe file locations, nor a more centralized configuration of where values should be consumed from. For example, the following code reads a log, compresses it, and then removes the original file:
```
// compressLog compresses a log to log.gz with gzip.
func (c *containerLogManager) compressLog(log string) error {
    r, err := os.Open(log)
    if err != nil {
        return fmt.Errorf("failed to open log %q: %v", log, err)
    }
    defer r.Close()
    tmpLog := log + tmpSuffix
    f, err := os.OpenFile(tmpLog, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0644)
    if err != nil {
        return fmt.Errorf("failed to create temporary log %q: %v", tmpLog, err)
    }
    defer func() {
        // Best effort cleanup of tmpLog.
        os.Remove(tmpLog)
    }()
    defer f.Close()
    w := gzip.NewWriter(f)
    defer w.Close()
    if _, err := io.Copy(w, r); err != nil {
        return fmt.Errorf("failed to compress %q to %q: %v", log, tmpLog, err)
    }
    compressedLog := log + compressSuffix
    if err := os.Rename(tmpLog, compressedLog); err != nil {
        return fmt.Errorf("failed to rename %q to %q: %v", tmpLog, compressedLog, err)
    }
    // Remove old log file.
    if err := os.Remove(log); err != nil {
        return fmt.Errorf("failed to remove log %q after compress: %v", log, err)
    }
    return nil
}
```
Figure 23.1: Log compression in pkg/kubelet/logs/container_log_manager.go
While not concerning in and of itself, we recommend a more general approach to file locations and permissions at an architectural level. Furthermore, files such as the SSH authorized_keys file are lenient in what they accept; lines that do not match a key are simply ignored. Attackers with access to configuration data and a write location may be able to parlay this access into an attack such as inserting new keys into a log stream.
**Exploit Scenario**
Alice runs a cluster in production. Eve, a developer, does not have access to the production environment, but does have access to configuration files. Eve uses this access to remove sensitive files from the cluster’s file system, rendering the system inoperable.
**Recommendation**
Short term, audit all locations that handle file processing, and ensure that they include as much validation as possible. This should ensure that the paths are reasonable for what the component expects, and do not overwrite sensitive locations unless absolutely necessary.
Long term, combine this solution with TOB-K8S-004: File Permissions and TOB-K8S-006: Hard-coded credential paths. A central solution that combines permissions and data validation from a single source will help limit mistakes that overwrite files, and make changes to file system interaction easier from a central location.
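As a sketch of the kind of validation the short-term recommendation implies (Python for brevity; `validate_path` and the caller-supplied root list are hypothetical, not Kubernetes APIs): resolve the path first, then bound it to known-safe roots before touching the file.

```python
import os


def validate_path(path: str, allowed_roots) -> str:
    """Resolve symlinks and '..' components, then require the result to stay
    under one of the allowed roots before the caller touches the file."""
    resolved = os.path.realpath(path)
    for root in allowed_roots:
        root = os.path.realpath(root)
        if resolved == root or resolved.startswith(root + os.sep):
            return resolved
    raise ValueError("path %r escapes the allowed roots" % path)
```

Resolving with `realpath` before the prefix check matters: a lexical `startswith` alone can be defeated by `..` segments or symlinks.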
**Anything else we need to know?**:
See #81146 for current status of all issues created from these findings.
The vendor gave this issue an ID of TOB-K8S-008 and it was finding 25 of the report.
The vendor considers this issue Low Severity.
To view the original finding, begin on page 67 of the [Kubernetes Security Review Report](https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Final%20Report.pdf)
**Environment**:
- Kubernetes version: 1.13.4 | kind/bug,priority/backlog,area/security,sig/node,help wanted,priority/important-longterm,lifecycle/frozen,wg/security-audit,triage/accepted | low | Critical |
478,214,136 | kubernetes | TOB-K8S-016: Unsafe JSON construction | This issue was reported in the [Kubernetes Security Audit Report](https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Final%20Report.pdf)
**Description**
Kubernetes uses JavaScript Object Notation (JSON) and similarly structured data sources throughout the codebase. This supports inter-component communications, both internally and externally to the cluster. However, a number of locations within the codebase use unsafe methods of constructing JSON:
```
pkg/kubectl/cmd/taint/taint.go:218: conflictTaint := fmt.Sprintf("{\"%s\":\"%s\"}", taintRemove.Key, taintRemove.Effect)
pkg/apis/rbac/helpers.go:109: formatString := "{" + strings.Join(formatStringParts, ", ") + "}"
```
Figure 24.1: Examples of incorrect JSON and JSON-like construction
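The failure mode is easy to demonstrate (Python here for brevity — the flagged code is Go, and `unsafe_json_pair`/`safe_json_pair` are illustrative names): as soon as a value contains JSON metacharacters, the format-string version emits broken or ambiguous output, while a format-specific encoder escapes it.

```python
import json


def unsafe_json_pair(key: str, value: str) -> str:
    # Mirrors fmt.Sprintf("{\"%s\":\"%s\"}", key, value): no escaping at all.
    return '{"%s":"%s"}' % (key, value)


def safe_json_pair(key: str, value: str) -> str:
    # A real encoder escapes embedded quotes and braces.
    return json.dumps({key: value})
```

With a value like `NoSchedule"}{"injected":"1`, the unsafe version produces two concatenated objects that a strict parser rejects, while the safe version round-trips cleanly.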
**Exploit Scenario**
Alice runs a Kubernetes cluster in her organization. Bob, a user in Alice’s organization, attempts to add an RBAC permission that he is not entitled to, which causes his entire RBAC construction to be written to logs, and potentially improperly consumed elsewhere.
**Recommendation**
Short term, use proper format-specific encoders for all areas of the application, regardless of where the information is used.
Long term, unify the encoding method to ensure encoded values are validated before use, and that no portion of the application produces values with different validations.
**Anything else we need to know?**:
See #81146 for current status of all issues created from these findings.
The vendor gave this issue an ID of TOB-K8S-016 and it was finding 26 of the report.
The vendor considers this issue Low Severity.
To view the original finding, begin on page 69 of the [Kubernetes Security Review Report](https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Final%20Report.pdf)
**Environment**:
- Kubernetes version: 1.13.4 | kind/bug,priority/important-soon,area/security,sig/api-machinery,sig/apps,help wanted,priority/important-longterm,lifecycle/frozen,wg/security-audit,triage/accepted | medium | Critical |
478,214,194 | kubernetes | TOB-K8S-017: Use standard formats everywhere | This issue was reported in the [Kubernetes Security Audit Report](https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Final%20Report.pdf)
**Description**
Kubernetes supports multiple backends for authentication and authorization, one of which is the Attribute-Based Access Control (ABAC) backend. This backend uses a format consisting of a single-line JSON object on each line.
```
for scanner.Scan() {
    i++
    p := &abac.Policy{}

    b := scanner.Bytes()

    // skip comment lines and blank lines
    trimmed := strings.TrimSpace(string(b))
    if len(trimmed) == 0 || strings.HasPrefix(trimmed, "#") {
        continue
    }

    decodedObj, _, err := decoder.Decode(b, nil, nil)
    ...
```
Figure 31.1: A portion of NewFromFile - kubernetes-1.13.4/pkg/auth/authorizer/abac/abac.go
This line-delimited format leads to two main issues:
- The format is prone to human error. Forcing JSON objects onto a single line increases the difficulty of audits and the need for specialized tooling.
- JSON objects are arbitrarily restricted to the size of Scanner tokens, or about 65k characters as of this report.

From a more systemic perspective, the use of various formats across the system (JSON, YAML, line-delimited, etc.) leads to increased surface area for parsing vulnerabilities.
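A minimal reader for this line-delimited format (a Python sketch mirroring the Go loop above; `parse_policy_lines` is an illustrative name) shows how little structure protects against human error — every line that is not blank or a comment must be one complete, single-line JSON object:

```python
import json


def parse_policy_lines(text: str):
    """Mirror the ABAC loop: skip blank lines and '#' comments, then decode
    each remaining line as a single one-line JSON object."""
    policies = []
    for line in text.splitlines():
        trimmed = line.strip()
        if not trimmed or trimmed.startswith("#"):
            continue
        policies.append(json.loads(trimmed))  # an object wrapped onto two lines fails here
    return policies
```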
**Recommendation**
Short term, improve the semantics of ABAC configuration file parsing.
Long term, consider consolidating the use of multiple configuration file formats, and preventing arbitrary formats from being introduced into the system.
**Anything else we need to know?**:
See #81146 for current status of all issues created from these findings.
The vendor gave this issue an ID of TOB-K8S-017 and it was finding 32 of the report.
The vendor considers this issue Informational Severity.
To view the original finding, begin on page 78 of the [Kubernetes Security Review Report](https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Final%20Report.pdf)
**Environment**:
- Kubernetes version: 1.13.4 | kind/bug,area/security,priority/awaiting-more-evidence,sig/auth,lifecycle/frozen,wg/security-audit | low | Critical |
478,214,211 | kubernetes | TOB-K8S-010: Hardcoded use of insecure gRPC transport | This issue was reported in the [Kubernetes Security Audit Report](https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Final%20Report.pdf)
**Description**
Kubernetes’ gRPC client uses a hardcoded WithInsecure() transport setting when dialing a remote:
```
staging/src/k8s.io/apiserver/pkg/storage/value/encrypt/envelope/grpc_service.go
64: connection, err := grpc.Dial(addr, grpc.WithInsecure(), grpc.WithDefaultCallOptions(grpc.FailFast(false)), grpc.WithDialer(
pkg/kubelet/apis/podresources/client.go
39: conn, err := grpc.DialContext(ctx, addr, grpc.WithInsecure(), grpc.WithDialer(dialer), grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(maxMsgSize)))
pkg/kubelet/util/pluginwatcher/plugin_watcher.go
431: c, err := grpc.DialContext(ctx, unixSocketPath, grpc.WithInsecure(), grpc.WithBlock(),
pkg/kubelet/cm/devicemanager/device_plugin_stub.go
164: conn, err := grpc.DialContext(ctx, kubeletEndpoint, grpc.WithInsecure(), grpc.WithBlock(),
pkg/kubelet/cm/devicemanager/endpoint.go
179: c, err := grpc.DialContext(ctx, unixSocketPath, grpc.WithInsecure(), grpc.WithBlock(),
pkg/kubelet/remote/remote_runtime.go
51: conn, err := grpc.DialContext(ctx, addr, grpc.WithInsecure(), grpc.WithDialer(dailer), grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(maxMsgSize)))
pkg/kubelet/remote/remote_image.go
50: conn, err := grpc.DialContext(ctx, addr, grpc.WithInsecure(), grpc.WithDialer(dailer), grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(maxMsgSize)))
pkg/volume/csi/csi_client.go
709: grpc.WithInsecure(),
```
Figure 33.1: The use of grpc.WithInsecure() when dialing a remote.
This could allow for man-in-the-middle attacks between the gRPC client and server.
**Exploit Scenario**
Alice has a Kubernetes node and remote gRPC server running on her network. Mallory has gained access to Alice’s network. Due to an insecure transport protocol in Alice’s network, Mallory can actively monitor the network traffic between the gRPC server and client.
**Recommendation**
Short term, add documentation that explains to end users the simplest mechanism for securing the gRPC transport.
Long term, consider adding a configuration option allowing the gRPC transport to be selected as either secure or insecure, where the secure transport is default.
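The long-term recommendation — secure by default, with an explicit and auditable opt-out — is a general pattern. A sketch using Python's `ssl` module (not the gRPC API; `make_client_context` is an illustrative name):

```python
import ssl


def make_client_context(insecure: bool = False) -> ssl.SSLContext:
    """Default to full certificate and hostname verification; require the
    caller to opt out explicitly instead of hardcoding an insecure transport."""
    ctx = ssl.create_default_context()
    if insecure:
        # Deliberate, greppable opt-out -- the analogue of WithInsecure()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    return ctx
```

Because the opt-out is a named parameter rather than a hardcoded call, it shows up in code review and can be flagged by linters.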
**Anything else we need to know?**:
See #81146 for current status of all issues created from these findings.
The vendor gave this issue an ID of TOB-K8S-010 and it was finding 34 of the report.
The vendor considers this issue Informational Severity.
To view the original finding, begin on page 81 of the [Kubernetes Security Review Report](https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Final%20Report.pdf)
**Environment**:
- Kubernetes version: 1.13.4 | area/security,sig/storage,sig/node,kind/feature,sig/auth,priority/important-longterm,lifecycle/frozen,wg/security-audit,needs-triage | low | Major |
478,215,053 | kubernetes | Kubernetes 3rd Party Security Audit Findings | This issue is to track the findings from the recent 3rd party security audit of Kubernetes performed by Trail of Bits and Atredis on behalf of the CNCF. The intent is to have a place to track the community's response and remediation to these issues now that they've been made public.
The full output of the assessment is available on the [Security Audit Working Group](https://github.com/kubernetes/community/tree/master/wg-security-audit) site, and this issue specifically tracks the findings from the [Security Assessment Report](https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Final%20Report.pdf).
| # | Title | Issue | Status |
|---|-------|-------|-------|
| 1 | hostPath PersistentVolumes enable PodSecurityPolicy bypass |#81110| closed, addressed by https://github.com/kubernetes/website/pull/15756 |
| 2 |Kubernetes does not facilitate certificate revocation |#81111| duplicate of #18982 and will be tracked in that issue |
| 3 |HTTPS connections are not authenticated |#81112|
| 4 |TOCTOU when moving PID to manager’s cgroup via kubelet|#81113|
| 5 |Improperly patched directory traversal in kubectl cp| #76788| closed, assigned CVE-2019-11249, fixed in https://github.com/kubernetes/kubernetes/pull/80436 |
| 6 |Bearer tokens are revealed in logs|#81114| closed, assigned CVE-2019-11250, fixed in https://github.com/kubernetes/kubernetes/pull/81330 |
| 7 |Seccomp is disabled by default|#81115| closed, addressed by #101943 |
| 8 |Pervasive world-accessible file permissions|#81116|
| 9 |Environment variables expose sensitive data|#81117| closed, addressed by #84992 and #84677
| 10 |Use of InsecureIgnoreHostKey in SSH connections|#81118|
| 11 |Use of InsecureSkipVerify and other TLS weaknesses|#81119|
| 12 |Kubeadm performs potentially-dangerous reset operations|#81120| closed, fixed by #81495, #81494, and kubernetes/website#15881
| 13 |Overflows when using strconv.Atoi and downcasting the result|#81121| closed, fixed by #89120 |
| 14 |kubelet can cause an Out of Memory error with a malicious manifest|#81122| closed, fixed by #76518 |
| 15 |Kubectl can cause an Out Of Memory error with a malicious Pod specification|#81123|
| 16 |Improper fetching of PIDs allows incorrect cgroup movement|#81124|
| 17 |Directory traversal of host logs running kube-apiserver and kubelet|#81125| closed, fixed by #87273 |
| 18 |Non-constant time password comparison |#81126| closed, fixed by #81152 |
| 19 |Encryption recommendations not in accordance with best practices|#81127|
| 20 |Adding credentials to containers by default is unsafe|#81128|
| 21 |kubelet liveness probes can be used to enumerate host network|#81129|
| 22 |iSCSI volume storage cleartext secrets in logs|#81130| closed, fixed by #81215 |
| 23 |Hard coded credential paths |#81131| closed, awaiting more evidence |
| 24 |Log rotation is not atomic |#81132|
| 25 |Arbitrary file paths without bounding|#81133|
| 26 |Unsafe JSON construction |#81134|
| 27 |kubelet crash due to improperly handled errors|#81135|
| 28 |Legacy tokens do not expire|#81136| duplicate of #70679 and will be tracked in that issue |
| 29 |CoreDNS leaks internal cluster information across namespaces|#81137|Closed, resolved with CoreDNS v1.6.2. #81137 (comment)|
| 30 |Services use questionable default functions|#81138|
| 31 |Incorrect docker daemon process name in container manager|#81139| closed, fixed by #81083 |
| 32 |Use standard formats everywhere |#81140|
| 33 |Superficial health check provides false sense of safety|#81141| closed, fixed by #81319 |
| 34 |Hardcoded use of insecure gRPC transport|#81142|
| 35 |Incorrect handling of Retry-After |#81143| closed, fixed by #91048 |
| 36 |Incorrect isKernelPid check |#81144| closed, fixed by #81086|
| 37 |Kubelet supports insecure TLS ciphersuites|#81145| closed in favor of #91444 (see [this comment](https://github.com/kubernetes/kubernetes/issues/81145#issuecomment-630291221))| | kind/bug,area/security,priority/important-longterm,lifecycle/frozen,wg/security-audit,sig/security | medium | Critical |
478,216,507 | TypeScript | Immutable-By-Default Flags | ## Search Terms
readonly, default, immutable
## Suggestion
Adding flags under the `strict` umbrella to default to immutability in different cases, and a new `mutable` keyword.
## Use Cases
I'm creating this issue as a _parent issue_ to track a couple of issues that already exist for specific cases:
- Class methods: https://github.com/microsoft/TypeScript/issues/22315, https://github.com/microsoft/TypeScript/issues/22315#issuecomment-400708470 in particular
- Arrays: https://github.com/microsoft/TypeScript/issues/32467
- Interfaces: No issue yet, left a comment in https://github.com/microsoft/TypeScript/issues/32467#issuecomment-519333922
- https://github.com/microsoft/TypeScript/issues/21152
Off of the top of my head, these flags would have some direct advantages:
- They clearly state the intention of the whole project in regards to mutability
- It's very common for new members of teams to forget to add the `readonly` keyword or use `T[]` rather than `ReadonlyArray<T>` accidentally. Defaulting to immutability would help prevent these accidents, and a `mutable` keyword would be easy to find by `tslint` or even a simple `grep`, to automatically request review in PRs or leave an automated comment `this PR introduces mutation`.
## Examples
Examples should probably live in the _children_ GitHub issues, but I'm copying here [my comment](https://github.com/microsoft/TypeScript/issues/32467#issuecomment-519333922) for a quick example:
```ts
interface T {
n: number // immutable
mutable s: string // mutable
}
const o: T = {
n: 42,
s: 'hello world',
}
o.n = 43 // error
o.s = '👋🌎' // ok
```
## Checklist
My suggestion meets these guidelines:
* [ ] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
## Backwards Compatibility
@AnyhowStep made an interesting [comment on this issue](https://github.com/microsoft/TypeScript/issues/32758#issuecomment-569369439).
Basically this feature wouldn't be any problem for applications, but it could be problematic for libraries, as the emitted `.d.ts` may be imported from a library/application that isn't using this flag or is using an older TS version.
Possible solutions:
- A pre-processor directive (https://github.com/microsoft/TypeScript/issues/32758#issuecomment-569369439)
- Always emitting `d.ts` with `readonly`, never `mutable` (possibly behind another flag?)
- [downlevel-dts](https://github.com/sandersn/downlevel-dts)
## Records and Tuples
The [Record and Tuple proposal](https://github.com/tc39/proposal-record-tuple) has reached stage 2, so it may be arriving to TS soon-ish. It seems closely related to this issue, but I don't think it fully addresses it.
## `readonly interface`
The possibility of using the `readonly` keyword for an entire `interface` has been suggested in https://github.com/microsoft/TypeScript/issues/21152; a cheaper, easier-to-implement middle ground could be:
```ts
readonly interface State {
prop1: string;
prop2: string;
...
prop22: string;
}
```
This would probably be a much cheaper and less disruptive feature to implement, and could be a great starting point. | Suggestion,Awaiting More Feedback | high | Critical |
478,218,319 | terminal | Feature Request Option to add profiles to the Windows Start Menu | Allow creating profile-based entries in the Windows Start Menu.
This will allow the user to launch the specific profile that they want without the need to open the default one
A button in the profile settings adds an entry to the Windows Start Menu with the profile's icon and name; this entry launches Windows Terminal with that profile instead of the default one.
### Maintainer Notes
**SPEC REQUIRED**
Specifier should build on work already done on #576, because I am betting that it'll be along the same path! (from @DHowett-MSFT) | Issue-Feature,Help Wanted,Area-UserInterface,Product-Terminal | medium | Major |
478,227,212 | rust | Adding a rustdoc option to do case-sensitive search | It would work like [`smartcase` mode in vim][1].
> If a pattern contains an uppercase letter, it is case sensitive, otherwise, it is not.
> For example, "The" would find only "The", while "the" would find "the" or "The" etc.
[1]: https://vim.fandom.com/wiki/Searching#Case_sensitivity
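The rule is simple to state in code (a Python sketch of the matching rule only, not rustdoc's implementation):

```python
def smartcase_match(pattern: str, text: str) -> bool:
    """vim-style smartcase: case-sensitive iff the pattern contains an
    uppercase letter, case-insensitive otherwise."""
    if any(c.isupper() for c in pattern):
        return pattern in text
    return pattern.lower() in text.lower()
```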
| T-rustdoc,C-feature-request,A-rustdoc-search | low | Major |
478,275,744 | pytorch | Using `torch.utils.checkpoint.checkpoint_sequential` and `torch.autograd.grad` breaks when used in combination with `DistributedDataParallel` | ## 🐛 Bug
Using `torch.utils.checkpoint.checkpoint_sequential` and `torch.autograd.grad` breaks when used in combination with `DistributedDataParallel`, resulting in the following stack trace:
```
Traceback (most recent call last):
  File "minimal_buggy_2.py", line 198, in <module>
    train(hps)
  File "minimal_buggy_2.py", line 179, in train
    loss.backward()
  File "/opt/conda/lib/python3.7/site-packages/torch/tensor.py", line 107, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/opt/conda/lib/python3.7/site-packages/torch/autograd/__init__.py", line 93, in backward
    allow_unreachable=True)  # allow_unreachable flag
  File "/opt/conda/lib/python3.7/site-packages/torch/autograd/function.py", line 77, in apply
    return self._forward_cls.backward(self, *args)
  File "/opt/conda/lib/python3.7/site-packages/torch/utils/checkpoint.py", line 99, in backward
    torch.autograd.backward(outputs, args)
  File "/opt/conda/lib/python3.7/site-packages/torch/autograd/__init__.py", line 93, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: has_marked_unused_parameters_ ASSERT FAILED at /pytorch/torch/csrc/distributed/c10d/reducer.cpp:181, please report a bug to PyTorch.
```
## To Reproduce
Steps to reproduce the behavior:
1. Running `python -m torch.distributed.launch --nproc_per_node=2 minimal_buggy.py` works
1. Running `python -m torch.distributed.launch --nproc_per_node=2 minimal_buggy.py --parallel` breaks
The code for `minimal_buggy.py` is here:
```python
from argparse import ArgumentParser, Namespace

import numpy as np
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.utils.checkpoint import checkpoint_sequential
import torchvision.datasets as ds
from torchvision import transforms


class GradientStep(nn.Module):
    def __init__(self, energy):
        super(GradientStep, self).__init__()
        self._energy = energy
        self.step_size = 0.1

    def forward(self, x: torch.Tensor):
        with torch.enable_grad():
            x.requires_grad_()
            omega = self._energy(x).sum()
            grad_out = torch.ones_like(omega).to(x.device)
            dx = torch.autograd.grad(outputs=omega, inputs=x, grad_outputs=grad_out,
                                     create_graph=True, retain_graph=True,
                                     allow_unused=True)[0]
            dx.requires_grad_()
            x = (x - self.step_size ** 2 * dx)
        return x


class FFN(nn.Module):
    def __init__(self):
        super(FFN, self).__init__()
        self.n_hidden = 1024
        self._energy = nn.Sequential(
            nn.Linear(28**2, self.n_hidden),
            nn.LeakyReLU(),
            nn.Linear(self.n_hidden, self.n_hidden),
            nn.LeakyReLU(),
            nn.Linear(self.n_hidden, self.n_hidden),
            nn.LeakyReLU(),
            nn.Linear(self.n_hidden, 1))
        self.L = 10

    def forward(self, x: torch.Tensor):
        y = x.clone().to(x.device)
        y.requires_grad_()
        fwd = nn.Sequential(*[GradientStep(self._energy) for _ in range(self.L)])
        y = checkpoint_sequential(fwd, self.L, y)
        return y


def get_distributed_mnist_iterators(batch_size, **kwargs):
    def _worker_init_fn(worker_id):
        np.random.seed(np.random.get_state()[1][0] + worker_id)

    base_transforms = [
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Lambda(lambda x: x.reshape(-1, ))]
    train_dataset = ds.MNIST(
        '/tmp/mnist/train',
        train=True,
        download=True,
        transform=transforms.Compose(base_transforms))
    test_dataset = ds.MNIST(
        '/tmp/mnist/test',
        train=False,
        download=True,
        transform=transforms.Compose(base_transforms))
    train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset)
    test_sampler = torch.utils.data.distributed.DistributedSampler(test_dataset)
    train_loader = torch.utils.data.DataLoader(
        train_dataset,
        batch_size=batch_size,
        shuffle=False,
        num_workers=kwargs.get('workers', 4),
        pin_memory=True,
        sampler=train_sampler,
        worker_init_fn=_worker_init_fn)
    test_loader = torch.utils.data.DataLoader(
        test_dataset,
        batch_size=kwargs.get('test_batch_size', 100),
        shuffle=False,
        num_workers=kwargs.get('workers', 4),
        pin_memory=True,
        sampler=test_sampler,
        worker_init_fn=_worker_init_fn)
    return train_loader, test_loader, train_sampler, test_sampler


def get_mnist_iterators(batch_size, **kwargs):
    base_transforms = [
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Lambda(lambda x: x.reshape(-1, ))]
    train_dataset = ds.MNIST(
        '/tmp/mnist/train',
        train=True,
        download=True,
        transform=transforms.Compose(base_transforms))
    test_dataset = ds.MNIST(
        '/tmp/mnist/test',
        train=False,
        download=True,
        transform=transforms.Compose(base_transforms))
    train_loader = torch.utils.data.DataLoader(
        train_dataset,
        batch_size=batch_size,
        shuffle=True,
        num_workers=kwargs.get('workers', 4),
        pin_memory=True,
        sampler=None)
    test_loader = torch.utils.data.DataLoader(
        test_dataset,
        batch_size=kwargs.get('test_batch_size', 100),
        shuffle=True,
        num_workers=kwargs.get('workers', 4),
        pin_memory=True,
        sampler=None)
    return train_loader, test_loader


def parse_args() -> Namespace:
    parser = ArgumentParser()
    parser.add_argument('--local_rank',
                        type=int,
                        default=0,
                        help="Is being set by the pytorch distributed launcher")
    parser.add_argument('--parallel',
                        default=False,
                        action='store_true')
    hps = parser.parse_args()
    return hps


def train(hps: Namespace):
    if hps.parallel:
        train_loader, test_loader, _, _ = get_distributed_mnist_iterators(
            batch_size=32)
    else:
        train_loader, test_loader = get_mnist_iterators(
            batch_size=32)
    model = FFN()
    model.cuda()
    if hps.parallel:
        model = nn.parallel.DistributedDataParallel(model)
    crit = nn.MSELoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for epoch in range(100):
        for step, (b, lbl) in enumerate(train_loader):
            model.train()
            opt.zero_grad()
            model.zero_grad()
            corrupt_b = (b + 0.3 * torch.randn_like(b)).cuda()
            recons = model(corrupt_b)
            loss = crit(recons, b.cuda())
            loss.backward()
            opt.step()
            if dist.get_rank() == 0:
                if step % 10 == 0:
                    print(f'epoch {epoch}, batch: {step}, loss: {float(loss.cpu())}')


def setup():
    hps = parse_args()
    dist.init_process_group(backend='nccl', init_method=f'env://', rank=hps.local_rank)
    size = dist.get_world_size()
    group = torch.distributed.new_group(ranks=list(range(size)))
    return group, hps


if __name__ == "__main__":
    group, hps = setup()
    train(hps)
```
## Expected behavior
Either command is expected to work, independent of whether `DistributedDataParallel` is used.
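If the non-distributed run fails because the script calls into `torch.distributed` before (or without) an initialized process group (an assumption — the traceback is not shown here), a common workaround is to guard the rank query. A minimal sketch; the `dist_module` parameter is an illustration-only injection point, not part of the original script:

```python
def global_rank(dist_module=None):
    """Return the torch.distributed rank, or 0 for single-process runs.

    `dist_module` defaults to `torch.distributed`; it is injectable only
    so this sketch can be exercised without a working NCCL setup.
    """
    if dist_module is None:
        import torch.distributed as dist_module
    # Fall back to rank 0 whenever no process group is active.
    if not (dist_module.is_available() and dist_module.is_initialized()):
        return 0
    return dist_module.get_rank()


# The logging guard in the training loop would then read:
#     if global_rank() == 0 and step % 10 == 0:
#         print(...)
```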
## Environment
```
PyTorch version: 1.1.0
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Ubuntu 16.04.5 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609
CMake version: version 3.11.1
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.0.130
GPU models and configuration:
GPU 0: GeForce GTX 1080 Ti
GPU 1: GeForce GTX 1080 Ti
Nvidia driver version: 410.79
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.4.2
Versions of relevant libraries:
[pip3] numpy==1.16.3
[pip3] pytorch-memlab==0.0.3
[pip3] torch==1.1.0
[pip3] torchvision==0.3.0
[conda] pytorch-memlab 0.0.3 pypi_0 pypi
[conda] torch 1.1.0 pypi_0 pypi
[conda] torchvision 0.3.0 pypi_0 pypi
```
## Additional context
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @xush6528 @osalpekar @jiayisuse @agolynski | oncall: distributed,module: checkpoint,feature,triaged | medium | Critical |
478,329,724 | TypeScript | Array of tuple literal not assignable to iterable of tuple with --lib es2015 -t es5 |
**TypeScript Versions:** 3.5.1, 3.5.3 (currently `typescript@latest`), 3.6.0-beta (currently `typescript@beta`), 3.6.0-dev.20190808 (currently `typescript@next`)
**Search Terms:**
**Code**
```ts
// Works:
declare function foo<T>(i: Iterable<T>): T
const x = foo([1]); // infers x: number
// Fails:
declare function bar<T, U>(j: Iterable<[T, U]>): [T, U]
const y = bar([[1, 'a']]);
// ^^^^^^^^
// Argument of type '(string | number)[][]' is not assignable to
// parameter of type 'Iterable<[string | number, string | number]>'.
// workaround:
declare function qux<T, U>(j: Array<[T, U]> | Iterable<[T, U]>): [T, U]
const z = qux([[1, 'a']]); // infers z: [number, string]
```
**Expected behavior:** variable `y` is inferred to be of type `[number, string]` and parameter `j` is inferred to be of type `Array<[number, string]>`
**Actual behavior:** `error TS2345: Argument of type '(string | number)[][]' is not assignable to parameter of type 'Iterable<[string | number, string | number]>'.`
**[Playground Link](https://www.typescriptlang.org/play/index.html?target=1#code/PTAEHUHsCcGsGcBcAoAJgUwMYBsCG11QAzAVwDtMAXAS0jOMkgB4AVAPgAprFQBJS9NFwAjbOlZsAlDxbJMdeJVAAPUAF4GkDgG0AjAF1JAblAhQ1MkUHxQuHmRIBbYYOTIzAMVzVsSNFjwCYnIqWnphfFYAGlAAVU4AKx5+QRExJm0WGNj9KR5M7P05BSUAT3VQCOgdPRiAclw6-UMTM0pSgAdCQWgYN3cwAHcYWHxIclQUDBx8QlIKGjpQAEcSZWi4xJ4AQWghUoysuNzQAB8+ASFRcQLjvNBbnOKyRVAALwrV5Rrdesbm4ymMAWKzQGxvfIOZyCGKKaAWADmRX6A1A6GUgg6AlQxF6jlA2GowgAdOh4AAmAAMugArMTqJc0uhiahiZQ-P4ZkEAG74UAAZVKzkg2B4guF2AAwiVoCQqDAjG4LJciLhMIRxcIRdKXpRZfLoKAAN7IUBm0wAKgtpvNoAtoG2oEc6EoAAtIDi3bglARKCRoC9QG7CBhVSRsEoGalKDAGIbcPRIMIElhKMTQJLcNgxDjhOVg6B4OhHAmaJgbJAiEHXegbeb7UQYABaSuFyje4voMhputmi3AXsEXCoOjYcpRoQx6A8eBCrXYRUAXyV3cEqvVF2jMAASmTw5QJMbeyOyOgeFqRegE4rbbzsCQz6AWEuVyq1YQUpOYIeTbbT8pKA4O8HwAfh4BNSmkTcv2gXd4H3CQb3NX1-TIECgKzUDwLISDkkZKc4IQ9gkLNN1ekGdD0DA2wcKgz9vR3PcI0Q5Bl2QZU13faCmR-XttE1EV6XwmB9A4OjhOgFi2I46B1w-RlrnoqdD3RAQyFQGwlO-dgj1tfi50EicGOgUTxNSRSJKk19OI3XZ9l421gCtbipztAc9IE7AhK3EyxLw8yxC0yTiNYoA)**
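Besides the `Array<[T, U]> | Iterable<[T, U]>` overload shown above, another workaround (an assumption, not from the original report) is to annotate the argument explicitly so the tuple literal is never widened to `(string | number)[][]`. A sketch, with a runnable stand-in body for `bar`:

```typescript
function bar<T, U>(j: Iterable<[T, U]>): [T, U] {
  // Runnable stand-in: return the first pair from the iterable.
  for (const pair of j) {
    return pair;
  }
  throw new Error('empty iterable');
}

// The explicit annotation keeps the inner arrays as tuples,
// so the argument already matches Iterable<[number, string]>:
const pairs: Array<[number, string]> = [[1, 'a']];
const y = bar(pairs); // y: [number, string]
```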
**Related Issues:** maybe #29311 ?
| Needs Investigation | low | Critical |
478,402,397 | flutter | Toggle FlashLight when camera is open | Need to toggle the flashlight while the camera is in use (used with ml_kit for text recognition, where the light is needed at night). Cannot ship my app without it. | c: new feature,a: quality,p: camera,package,team-ecosystem,P3,triaged-ecosystem | low | Major |