Dataset columns:

| column | type | range / values |
| --- | --- | --- |
| id | int64 | 393k – 2.82B |
| repo | string (categorical) | 68 values |
| title | string | 1 – 936 characters |
| body | string | 0 – 256k characters (may be null) |
| labels | string | 2 – 508 characters |
| priority | string (categorical) | 3 values |
| severity | string (categorical) | 3 values |
538,717,907
flutter
General improvements to theming
Currently the theming system is difficult to use and understand. For example, what theme is applied to what? I just want to set the AppBar title text without having to set every property in the application when using themes. How do we do this? It needs to be extremely well documented. I found in AppBar that centerStyle is used to set the title: ``` TextStyle centerStyle = widget.textTheme?.title ?? appBarTheme.textTheme?.title ?? theme.primaryTextTheme.title; ``` I set appBarTheme.textTheme.title to white and I still get black. Theming is extremely difficult in Flutter.
framework,f: material design,a: quality,team-design,triaged-design
low
Major
538,720,358
TypeScript
[formatter] Semicolon is not removed on class member with initializer
*Template info added by @mjbvz* **TypeScript Version**: 3.8.0-dev.20191216 **Search terms:** - Format - semicolons --- Issue Type: <b>Bug</b> This is a bug report for the built-in TypeScript formatter. I configured the formatter to remove semicolons: ``` "typescript.format.semicolons": "remove", ``` However, semicolons are not removed on class members with an initializer. Simplest example: ```typescript export class X { private a: number; private b: boolean = false; private c = 10; } ``` After formatting: ```typescript export class X { private a: number private b: boolean = false; private c = 10; } ``` If there is an initializer, the semicolon is not removed. VS Code version: Code 1.41.0 (9579eda04fdb3a9bba2750f15193e5fafe16b959, 2019-12-11T17:58:38.338Z) OS version: Darwin x64 19.0.0 <details> <summary>System Info</summary> |Item|Value| |---|---| |CPUs|Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz (8 x 2600)| |GPU Status|2d_canvas: enabled<br>flash_3d: enabled<br>flash_stage3d: enabled<br>flash_stage3d_baseline: enabled<br>gpu_compositing: enabled<br>metal: disabled_off<br>multiple_raster_threads: enabled_on<br>oop_rasterization: disabled_off<br>protected_video_decode: unavailable_off<br>rasterization: enabled<br>skia_renderer: disabled_off<br>surface_control: disabled_off<br>surface_synchronization: enabled_on<br>video_decode: enabled<br>viz_display_compositor: enabled_on<br>viz_hit_test_surface_layer: disabled_off<br>webgl: enabled<br>webgl2: enabled| |Load (avg)|3, 2, 3| |Memory (System)|16.00GB (0.08GB free)| |Process Argv|| |Screen Reader|no| |VM|0%| </details><details><summary>Extensions (9)</summary> Extension|Author (truncated)|Version ---|---|--- vscode-svgviewer|css|2.0.0 vscode-eslint|dba|1.9.1 EditorConfig|Edi|0.14.3 flow-for-vscode|flo|1.5.0 vscode-react-typescript|inf|1.3.1 jbockle-format-files|jbo|3.0.0 graphql-for-vscode|kum|1.15.3 Kotlin|mat|1.7.0 debugger-for-chrome|msj|4.12.3 (1 theme extensions excluded) </details>
Bug,Domain: Formatter
low
Critical
538,726,331
TypeScript
class-factory mixins in type declaration files are impossible (allow implicit return types in declaration files somehow)
## Search Terms When the compiler is set to emit declaration files, class-factory mixins are no longer allowed. This is yet another issue that prevents class-factory mixins from being adopted in library code that needs to be published as JS with sibling declaration files. ## Suggestion Allow implicit return types to be somehow representable in declaration files. ## Use Cases Enables many use cases that are otherwise impossible when emitting declaration files. I believe declaration files should support all features of TS. ## Examples Write a mixin function, and try to emit declaration files. ## Workaround As a workaround I publish TypeScript source files in my NPM packages, then I point the `types` field in `package.json` to my source entry point. I don't publish declaration files because I can't build them. Pointing to source files opens up other cans of worms, and prior to TS 3.6.3 I was not able to point to source files due to pre-existing bugs in `tsc` in that scenario. After TS 3.6.3, I can successfully publish source files and point `types` to sources containing class-factory mixins, but I'm afraid this is brittle and can break with a TS upgrade (just as it was broken at some point prior to v3.6.3). ## Checklist My suggestion meets these guidelines: * [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code * [x] This wouldn't change the runtime behavior of existing JavaScript code * [x] This could be implemented without emitting different JS based on the types of the expressions * [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.) * [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
Suggestion,Awaiting More Feedback
low
Critical
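The failing pattern can be sketched as follows (class names are illustrative, not from the report). With `"declaration": true`, tsc has to write the factory's implicit anonymous return type into the .d.ts file, which is the type this suggestion asks to make representable:

```typescript
// Minimal class-factory mixin sketch (illustrative names).
type Constructor<T = {}> = new (...args: any[]) => T;

function Serializable<TBase extends Constructor>(Base: TBase) {
  // The return type is an implicit anonymous class type; this is the type
  // that declaration emit struggles to name in a .d.ts file.
  return class extends Base {
    serialize(): string {
      return JSON.stringify(this);
    }
  };
}

class Point {
  constructor(public x = 0, public y = 0) {}
}

const SerializablePoint = Serializable(Point);
const p = new SerializablePoint(1, 2);
console.log(p.serialize());
```

The code itself type-checks and runs; the problem appears only when declaration emit has to produce a name for the anonymous class returned by `Serializable`.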
538,810,804
pytorch
Memory management is inefficient, which limits performance
I encountered an out of memory error during training: RuntimeError: CUDA out of memory. Tried to allocate 1.86 GiB (GPU 0; 31.72 GiB total capacity; 24.58 GiB already allocated; 1.28 GiB free; 5.08 GiB cached) Clearly there is enough memory, but probably due to fragmentation it cannot allocate 1.86 GiB of contiguous memory out of the 5.08 GiB cached memory. This has a severe downside for performance because it limits the maximum batch size. The max batch size I can get is 226. If I add "torch.cuda.empty_cache()" to my code, the max batch size can go above 256; however, performance is severely degraded. 256+ is a big difference from 226. A larger batch size means better utilization of the GPU, so this problem has a big negative impact on performance. Can you fix or optimize this issue? Or are there any other solutions to this problem? cc @ngimel @VitalyFedyunin @mruberry
module: performance,module: cuda,module: memory usage,triaged
low
Critical
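The failure mode described above (plenty of free memory in total, but no single block big enough) can be illustrated with a toy model. This is not PyTorch's caching allocator, and the fragment sizes below are invented for illustration:

```python
# Toy model of fragmentation (NOT PyTorch's allocator): total free memory is
# large, but no single contiguous block can satisfy the 1.86 GiB request.
def largest_contiguous(free_blocks):
    """free_blocks: sizes (GiB) of separate free segments."""
    return max(free_blocks, default=0)

# Invented split of the ~6.3 GiB the error message reports as free + cached.
fragments = [1.28, 1.10, 0.95, 0.80, 0.75, 0.70, 0.76]
request = 1.86

print(f"total free: {sum(fragments):.2f} GiB")
print(f"largest contiguous block: {largest_contiguous(fragments):.2f} GiB")
if largest_contiguous(fragments) < request:
    print(f"cannot allocate {request} GiB despite {sum(fragments):.2f} GiB free")
```

`torch.cuda.empty_cache()` (which the reporter tried) returns cached blocks to the driver so free space can be merged, but subsequent allocations must go through the driver again, which matches the slowdown observed.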
538,821,028
youtube-dl
Site Support Request For Feet9
## Checklist - [x] I'm reporting a new site support request - [x] I've verified that I'm running youtube-dl version **2019.11.28** - [x] I've checked that all provided URLs are alive and playable in a browser - [x] I've checked that none of provided URLs violate any copyrights - [x] I've searched the bugtracker for similar site support requests including closed ones ## Example URLs - Single video: https://www.feet9.com/16849/two-naughty-stepsisters-strip-off-their-sexy-lingerie-and-worship-each-other-s-feet/ - Single video [embed link]: https://www.feet9.com/modules/video/player/embed.php?id=16849 - Playlist: capability not found on website ## Description From the main page of the website: > You are an amateur foot fetish, you are welcome on our website. It is the ideal platform on which you can find fetish tube freely allowing you to discover a wide range of naughty video that are classified in different categories to facilitate your research.
site-support-request
low
Critical
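A first step toward supporting the site is a URL-matching regex in the style youtube-dl extractors use for `_VALID_URL`. This is a sketch only; the pattern and group names are assumptions derived from the two example URLs above:

```python
import re

# Matches both the plain video URL and the embed URL from the examples above.
_VALID_URL = re.compile(
    r"https?://(?:www\.)?feet9\.com/"
    r"(?:(?P<id>\d+)/[\w-]+/?|modules/video/player/embed\.php\?id=(?P<embed_id>\d+))"
)

def extract_video_id(url):
    """Return the numeric video id, or None if the URL is not recognized."""
    m = _VALID_URL.match(url)
    if not m:
        return None
    return m.group("id") or m.group("embed_id")

print(extract_video_id(
    "https://www.feet9.com/modules/video/player/embed.php?id=16849"))
```

A real extractor would subclass youtube-dl's `InfoExtractor` and implement `_real_extract`; the regex above only covers the URL-recognition half of that work.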
538,829,982
pytorch
Optimizing DLRM for CPU
## 🚀 Feature A number of optimizations and performance tunings for DLRM on CPU ## Motivation Recommendation systems are one of the most common DL workloads in the cloud or enterprise server room. Very often the recommendation system burns the most compute cycles in the data center among all DL workloads. DLRM is a state-of-the-art deep learning recommendation model composed of compute-intensive MLP layers and memory-intensive, capacity-limited embedding layers. Due to the memory capacity requirements, people have found CPUs' large DRAM capacity helpful for training large DLRM models with very large embedding tables. Since the introduction of the DLRM workload, the Intel PyTorch team has been working to improve its performance using the configuration described in the DLRM paper. The work has generated ~90x performance improvement for the DLRM training workload on CPU. This RFC summarizes the work we have done on DLRM optimization and performance tuning, including a number of code improvements and command-option BKMs. 1. Parallelize embedding_bag. The embedding_bag operation contains a table lookup and a reduction operation. We parallelize index_select_add and use Caffe2's embedding lookup function caffe2::EmbeddingLookupIdx(). https://github.com/pytorch/pytorch/pull/24385 Jianyu further improved this implementation by adding a new parameter to the EmbeddingBag API: https://github.com/pytorch/pytorch/pull/27477 2. Optimize the backward path of the embedding_bag operation. We parallelized the backward embedding_bag specialized for floating-point tensors with the SUM reduction operation and contiguous stride. This greatly improves performance by avoiding the original code path, which involves offset2bag(), cumsum(), index_select(), index_add(), and _embedding_bag_sparse_backward(). https://github.com/pytorch/pytorch/pull/27804 3. Optimize the gradient update. For the non-coalesced hybrid sparse tensor input, the add_out_dense_sparse() function originally coalesces the tensor, which involves an expensive sort-and-merge operation. We parallelized the gradient update by using fine-grained locked critical sections that add the non-coalesced hybrid sparse tensor directly to the dense weight tensor. This brings more than 100x improvement on the gradient update and 20x performance improvement for the whole DLRM benchmark. https://github.com/pytorch/pytorch/pull/23057 4. Parallelize index_put. index_put is called by the backward of Zflat = Z[:, li, lj] in the interaction. We optimize the accumulation path by using atomic float adds to parallelize it. This improved the index_put accumulate operation in DLRM by about 10x. https://github.com/pytorch/pytorch/pull/29705 5. Parallelize the cat operation. On the contiguous path, we pre-calculate the cat offsets and then perform the cat copy in parallel. This gives about 8x improvement on the cat operation in DLRM. https://github.com/pytorch/pytorch/pull/30806 6. Parallelize index_select. index_select is used in the backward path of the embedding_bag operation. The current implementation is in the TH module and is only parallelized for contiguous input. We found that for DLRM the input tensor can be non-contiguous, in which case the implementation falls back to a serialized version. We optimize the non-contiguous input case and re-implement the operation in the ATen library using parallel_for. https://github.com/pytorch/pytorch/pull/30598 7. Use all threads for all OpenMP tasks. We found that the best performance is achieved when all threads run for all parallel_for() tasks. Having some light operations run on few threads doesn't help performance. https://github.com/pytorch/pytorch/issues/30803 To further tune the performance, you may consider the following command-line options and software configuration. 1. export KMP_BLOCKTIME=1 KMP_BLOCKTIME sets the time that an OpenMP worker thread should wait, after completing the execution of a parallel region, before sleeping. The default is 200 ms and can hurt performance. Setting it to 0 is not good either, since we don't want to put worker threads to sleep too often. 2. export KMP_AFFINITY="granularity=fine,compact,1,0" KMP_AFFINITY binds OpenMP threads to physical processing units. In this setting, OpenMP thread n+1 is bound to a different physical core than OpenMP thread n. 3. Use jemalloc. jemalloc is a general-purpose malloc implementation that emphasizes fragmentation avoidance and scalable concurrency support. We found this helps performance, mostly due to fewer calls to the OS's zero_pages() and better cache locality. cc @VitalyFedyunin @ngimel @mruberry
module: performance,feature,module: cpu,triaged
low
Major
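The tuning options listed above could be collected into a shell snippet like this (the jemalloc install path is an assumption and varies by distribution):

```shell
# OpenMP tuning from the RFC above
export KMP_BLOCKTIME=1                              # ms a worker waits before sleeping (default 200)
export KMP_AFFINITY="granularity=fine,compact,1,0"  # bind OpenMP threads to physical cores
# jemalloc: preload path is an assumption; adjust to where your distro installs it
# export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so
```

Run these in the shell that launches training so the PyTorch process inherits the environment.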
538,896,986
flutter
Document platform channel restrictions whose violations lead to "ServicesBinding.defaultBinaryMessenger was accessed before the binding was initialized."
Hi. I just updated my Flutter SDK. When I tried to run my main Flutter project, where I used SharedPreferences before `runApp`, I got this `ServicesBinding.defaultBinaryMessenger was accessed before the binding was initialized` error. I then added `WidgetsFlutterBinding.ensureInitialized()` before `runApp`. My app was successfully installed, but I am still getting the same error in the console. I even created a new Flutter project (the incrementer app) and tried running it; I still get this same error log, even though I didn't add the `shared_preferences` dependency. ERROR LOG: ```E/flutter ( 4991): [ERROR:flutter/lib/ui/ui_dart_state.cc(157)] Unhandled Exception: ServicesBinding.defaultBinaryMessenger was accessed before the binding was initialized. E/flutter ( 4991): If you're running an application and need to access the binary messenger before `runApp()` has been called (for example, during plugin initialization), then you need to explicitly call the `WidgetsFlutterBinding.ensureInitialized()` first. E/flutter ( 4991): If you're running a test, you can call the `TestWidgetsFlutterBinding.ensureInitialized()` as the first line in your test's `main()` method to initialize the binding.
E/flutter ( 4991): #0 defaultBinaryMessenger.<anonymous closure> (package:flutter/src/services/binary_messenger.dart:76:7) E/flutter ( 4991): #1 defaultBinaryMessenger (package:flutter/src/services/binary_messenger.dart:89:4) E/flutter ( 4991): #2 MethodChannel.binaryMessenger (package:flutter/src/services/platform_channel.dart:140:62) E/flutter ( 4991): #3 MethodChannel.invokeMethod (package:flutter/src/services/platform_channel.dart:314:35) E/flutter ( 4991): #4 MethodChannel.invokeMapMethod (package:flutter/src/services/platform_channel.dart:349:48) E/flutter ( 4991): #5 MethodChannelSharedPreferencesStore.getAll (package:shared_preferences_platform_interface/method_channel_shared_preferences.dart:54:22) E/flutter ( 4991): #6 SharedPreferences._getSharedPreferencesMap (package:shared_preferences/shared_preferences.dart:166:57) E/flutter ( 4991): #7 SharedPreferences.getInstance (package:shared_preferences/shared_preferences.dart:33:19) E/flutter ( 4991): #8 AppTranslations.loadSavedLocale (package:technicalreport/settings/appLang.dart:74:43) E/flutter ( 4991): #9 main (package:technicalreport/main.dart:13:25) E/flutter ( 4991): #10 _runMainZoned.<anonymous closure>.<anonymous closure> (dart:ui/hooks.dart:239:25) E/flutter ( 4991): #11 _rootRun (dart:async/zone.dart:1126:13) E/flutter ( 4991): #12 _CustomZone.run (dart:async/zone.dart:1023:19) E/flutter ( 4991): #13 _runZoned (dart:async/zone.dart:1518:10) E/flutter ( 4991): #14 runZoned (dart:async/zone.dart:1502:12) E/flutter ( 4991): #15 _runMainZoned.<anonymous closure> (dart:ui/hooks.dart:231:5) E/flutter ( 4991): #16 _startIsolate.<anonymous closure> (dart:isolate-patch/isolate_patch.dart:307:19) E/flutter ( 4991): #17 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:174:12) ``` FLUTTER DOCTOR: ``` [√] Flutter (Channel stable, v1.12.13+hotfix.5, on Microsoft Windows [Version 10.0.18362.535], locale en-IN) β€’ Flutter version 1.12.13+hotfix.5 at D:\Installed\FlutterSDK β€’ Framework 
revision 27321ebbad (6 days ago), 2019-12-10 18:15:01 -0800 β€’ Engine revision 2994f7e1e6 β€’ Dart version 2.7.0 [√] Android toolchain - develop for Android devices (Android SDK version 28.0.3) β€’ Android SDK at C:\Users\User1\AppData\Local\Android\sdk β€’ Android NDK location not configured (optional; useful for native profiling support) β€’ Platform android-29, build-tools 28.0.3 β€’ Java binary at: C:\Program Files\Android\Android Studio\jre\bin\java β€’ Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b03) β€’ All Android licenses accepted. [√] Android Studio (version 3.5) β€’ Android Studio at C:\Program Files\Android\Android Studio β€’ Flutter plugin version 40.2.2 β€’ Dart plugin version 191.8593 β€’ Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b03) [√] Connected device (1 available) β€’ Android SDK built for x86 β€’ emulator-5554 β€’ android-x86 β€’ Android 9 (API 28) (emulator) β€’ No issues found! ```
c: crash,d: api docs,customer: crowd,P2,a: plugins,team-framework,triaged-framework
low
Critical
538,953,259
godot
Animations not looping on inherited scenes from a 3D gltf animated mesh
___ ***Bugsquad note:** This issue has been confirmed several times already. No need to confirm it further.* ___ **Godot version:** Commit 3aa46a58cd3ae5f327929d127ac5fef0733176c9 Latest master from December 16, 2019 **OS/device including version:** Linux Pop!_OS 19.10 **Issue description:** In inherited scenes from animated 3D meshes, animations won't loop at runtime even though I set the animations to loop on the animation player. The animation does loop in the editor; the problem is only in-game. **Steps to reproduce:** 1. Import a .glb or .fbx rigged and animated mesh in Godot 2. Create a new inherited scene from the imported mesh 3. Select the animation player and set an animation to autoplay and loop 4. Create a scene to see the character in the game 5. Play the game The animation doesn't loop. The looping setting is properly saved in the inherited scene: if you set the animated mesh scene to "Editable children" in the game scene, you'll see the looping option is still checked on the AnimationPlayer. Note that if you clear the inheritance on the animated character, looping will then work as expected. **Minimal reproduction project:** Here is a working project with the issue above. The character is only in gltf2 but the issue is the same with fbx. I didn't include the fbx as the character's mesh normals won't import well at the moment. [animation-looping-bug.zip](https://github.com/godotengine/godot/files/3972519/animation-looping-bug.zip)
bug,topic:editor,confirmed,usability,topic:import,topic:animation,topic:3d
medium
Critical
538,983,372
godot
Joint spring settings
Add settings for the spring parameters. `set_flag_z(Generic6DOFJoint.FLAG_ENABLE_LINEAR_LIMIT,true)` with `set_param_y(Generic6DOFJoint.PARAM_LINEAR_UPPER_LIMIT,1)` works, but for `set_flag_y(Generic6DOFJoint.FLAG_ENABLE_LINEAR_SPRING,true)` the _stiffness_ and _damping_ parameters are missing, and the documentation does not explain that you have to use: `set("linear_spring_y/stiffness", 1000)` and `set("linear_spring_y/damping", 50)` It only says: > float linear_spring_x/stiffness [default: 0.01] set_param_x(value) setter get_param_x() getter Please add _set_param_*_ constants for these, and change the documentation to _set("linear_spring_y/stiffness", value)_ (and likewise for all joint settings/params).
enhancement,documentation,topic:physics
low
Minor
538,992,187
godot
Weird "get_node" error
**Godot version:** 3.2 beta3 **OS/device including version:** Win7 64-bit **Issue description:** I have this error: `Error 15,1: The method "get_node" isn't declared in the current class.` on this line: `onready var pos_spawn_enemy_mid:Vector2 = $Ground/SpawnPivots/EnemyMid.position` In the terminal I have this error: ``` * daemon not running; starting now at tcp:5037 erasing C:\Users\homev3\AppData\Roaming/Godot/projects/topdown-656ff94ce70ac8286e098a1814b517d0/filesystem_update4 SCRIPT ERROR: GDScript::reload: Parse Error: The method "get_node" isn't declared in the current class. At: res://src/maps/Map.gd:15 ERROR: reload: Method/Function Failed, returning: ERR_PARSE_ERROR At: modules/gdscript/gdscript.cpp:576 * daemon started successfully Running: C:\Users\homev3\Desktop\Godot_v3.2-beta3_win64.exe --path c:/Users/homev3/Documents/GoDot/release/projects/topdown 007 --allow_focus_steal_pid 19708 --position 1920,0 Godot Engine v3.2.beta3.official - https://godotengine.org OpenGL ES 3.0 Renderer: GeForce GT 650M/PCIe/SSE2 ERROR: debug_get_stack_level_instance: Condition ' _debug_parse_err_line >= 0 ' is true. returned: __null At: modules/gdscript/gdscript_editor.cpp:338 ``` In the debug dock I have this error: ``` E 0:00:00.745 debug_get_stack_level_instance: Condition ' _debug_parse_err_line >= 0 ' is true. returned: __null <C++ Source> modules/gdscript/gdscript_editor.cpp:338 @ debug_get_stack_level_instance() ``` But I have no idea if it is related. **Steps to reproduce:** I moved the $Ground **Minimal reproduction project:** NA
bug,topic:gdscript
low
Critical
539,046,956
pytorch
DDP/MP not yielding nontrivial speedup
## πŸ› Bug Following the tutorial from https://pytorch.org/tutorials/intermediate/ddp_tutorial.html I implemented a distributed policy gradient reinforcement learning algorithm. Using the script below I benchmarked 1000 steps on a simple gym environment and recorded the time per worker. Since I'm on a six core machine I was expecting a nontrivial speedup per global step in the order of 1 <= num_processes <= 6. This is not the case (see output below). I'm not trying to do any learning here, just benchmarking. I observed the same behavior when using DDP and even asynchonous code from the hogwild example at https://pytorch.org/docs/stable/notes/multiprocessing.html . ## To Reproduce Steps to reproduce the behavior: 1. Run script below with different values for num_processes ``` import sys import os import tempfile import torch import torch.distributed as dist import torch.nn as nn import torch.optim as optim import torch.multiprocessing as mp from torch.nn.parallel import DistributedDataParallel as DDP from models import DNNPolicy, FCPolicy import time import gym def setup(rank, world_size): os.environ['MASTER_ADDR'] = 'localhost' os.environ['MASTER_PORT'] = '12355' dist.init_process_group("gloo", rank=rank, world_size=world_size) torch.manual_seed(42) def cleanup(): dist.destroy_process_group() def main(rank, world_size): setup(rank, world_size) env = gym.make('CartPole-v0') policy = DNNPolicy(env) state = env.reset() loop_start = time.time() for i in range(1000): policy(torch.randn(240, 256, 3)) #time.sleep(0.005) loop_end = time.time() print("loop time worker:", rank, "nprocs:", world_size, loop_end - loop_start, "s") def run_demo(demo_fn, world_size): mp.spawn(demo_fn, args=(world_size,), nprocs=world_size, join=True) if __name__ == "__main__": for n_procs in range(6): run_demo(main, n_procs+1) ``` 2. 
Results in: loop time worker: 0 nprocs: 1 4.527710199356079 s loop time worker: 0 nprocs: 2 5.222243547439575 s loop time worker: 1 nprocs: 2 5.496668577194214 s loop time worker: 2 nprocs: 3 7.741548776626587 s loop time worker: 0 nprocs: 3 7.951193809509277 s loop time worker: 1 nprocs: 3 8.345336198806763 s loop time worker: 0 nprocs: 4 12.702503442764282 s loop time worker: 1 nprocs: 4 13.029954195022583 s loop time worker: 3 nprocs: 4 13.766679286956787 s loop time worker: 2 nprocs: 4 13.906413793563843 s loop time worker: 0 nprocs: 5 16.707841873168945 s loop time worker: 3 nprocs: 5 17.413241863250732 s loop time worker: 2 nprocs: 5 17.777271032333374 s loop time worker: 4 nprocs: 5 17.782108306884766 s loop time worker: 1 nprocs: 5 17.930848836898804 s loop time worker: 0 nprocs: 6 17.15580153465271 s loop time worker: 2 nprocs: 6 17.536866664886475 s loop time worker: 3 nprocs: 6 18.087643146514893 s loop time worker: 4 nprocs: 6 18.502392053604126 s loop time worker: 5 nprocs: 6 18.63866925239563 s loop time worker: 1 nprocs: 6 19.046801805496216 s ## Expected behavior Loop time staying roughly constant for increasing amount of workers. ## Environment PyTorch version: 1.3.1 Is debug build: No CUDA used to build PyTorch: 10.1.243 OS: Ubuntu 18.04.2 LTS GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0 CMake version: Could not collect Python version: 3.6 Is CUDA available: No CUDA runtime version: No CUDA GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA Versions of relevant libraries: [pip] numpy==1.17.4 [pip] torch==1.3.1 [conda] torch 1.3.1 pypi_0 pypi ## Additional context <!-- Add any other context about the problem here. --> Replacing the call to policy with time.sleep(0.005) yields the timing behavior I was initially expecting. Why does the cost of calling policy go up as the number of workers increase? cc @VitalyFedyunin @ngimel @mruberry
module: performance,module: multiprocessing,triaged,module: data parallel
low
Critical
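The benchmark pattern above can be stripped down to plain multiprocessing (no torch, no gym) to show one explanation for the numbers: every extra worker repeats the same CPU-bound loop, so once the work saturates the cores, per-worker wall time must grow. This sketch uses an invented arithmetic loop as a stand-in for the `policy(...)` call:

```python
# Stripped-down sketch of the benchmark above using plain multiprocessing.
import multiprocessing as mp
import time

def worker(rank, n_iters, results):
    start = time.time()
    for _ in range(n_iters):
        sum(i * i for i in range(1000))  # stand-in for the policy(...) call
    results[rank] = time.time() - start

def run_bench(world_size, n_iters=200):
    """Run `world_size` workers doing identical work; return {rank: seconds}."""
    with mp.Manager() as m:
        results = m.dict()
        procs = [mp.Process(target=worker, args=(r, n_iters, results))
                 for r in range(world_size)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        return dict(results)

if __name__ == "__main__":
    for world_size in (1, 2, 4):
        times = run_bench(world_size)
        print(world_size, {k: round(v, 2) for k, v in sorted(times.items())})
```

In the original script a likely additional factor is that each `policy(...)` forward already uses intra-op threading across all cores, so N processes oversubscribe them; calling `torch.set_num_threads(1)` in each worker is a commonly suggested mitigation.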
539,047,230
rust
[rustdoc] Hide items from search but leave them clickable under public modules
In #67346, I wanted to hide HashMap from search under the `hash_map` module. [![https://imgur.com/TkkiEV6.png](https://imgur.com/TkkiEV6.png)](https://imgur.com/TkkiEV6.png) It would be nice to have a way to * hide items from search but leave them available under the other public modules * display only one search entry for items that are re-exported in multiple modules.
T-rustdoc
low
Minor
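For context, rustdoc's existing knob is all-or-nothing: `#[doc(hidden)]` (sketched below with illustrative names, not the real `std` HashMap) removes an item from both the rendered docs and the search index, whereas this issue asks for dropping it from search only while keeping its page reachable:

```rust
// Today's only tool: #[doc(hidden)] removes the item from rustdoc output
// and from search entirely; there is no "hide from search, keep the page"
// attribute, which is what this issue requests.
pub mod hash_map {
    /// Present in the crate, absent from `cargo doc` output and search.
    #[doc(hidden)]
    pub struct HashMap {
        entries: usize,
    }

    impl HashMap {
        pub fn new() -> Self {
            HashMap { entries: 0 }
        }
        pub fn len(&self) -> usize {
            self.entries
        }
    }
}

fn main() {
    // The item is still usable in code; only its documentation is suppressed.
    let m = hash_map::HashMap::new();
    println!("len = {}", m.len());
}
```

The requested feature would sit between `#[doc(hidden)]` and fully documented: indexed nowhere, rendered somewhere.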
539,075,228
vscode
Peek call hierarchy should resend prepareCallHierarchy request after switching the direction
In the normal "Show Call Hierarchy" view, both switching the direction and refreshing resend a prepareCallHierarchy request. But in the "Peek Call Hierarchy" view, switching the direction doesn't resend a prepareCallHierarchy request. It's better to keep this consistent with the normal call hierarchy view. Generally we do some cleanup in prepareCallHierarchy.
under-discussion,callhierarchy
low
Minor
539,086,183
pytorch
sklearn and pytorch incompatibility issue
## πŸ› Bug Import `torch` after `sklearn` causes a segfault. ## To Reproduce ``` import sklearn import torch print(torch.__version__) print(sklearn.__version__) ``` ![image](https://user-images.githubusercontent.com/11019190/71001688-15df3680-20de-11ea-8441-16a58497c70b.png) In reverse order: ``` import torch import sklearn print(torch.__version__) print(sklearn.__version__) ``` ![image](https://user-images.githubusercontent.com/11019190/71001720-255e7f80-20de-11ea-9338-5f4adcf1ac2c.png) ## Environment ``` PyTorch version: 1.3.1 Is debug build: No CUDA used to build PyTorch: 10.1.243 OS: Manjaro Linux GCC version: (GCC) 9.2.0 CMake version: version 3.16.0 Python version: 3.7 Is CUDA available: Yes CUDA runtime version: 10.2.89 GPU models and configuration: GPU 0: GeForce RTX 2080 Ti Nvidia driver version: 430.64 cuDNN version: /usr/lib/libcudnn.so.7.6.5 Versions of relevant libraries: [pip3] numpy==1.17.4 [pip3] torch==1.3.1 [pip3] torchvision==0.4.2 [conda] Could not collect ``` ## Additional context Related closed issues: https://github.com/pytorch/pytorch/issues/2143 https://github.com/pytorch/pytorch/issues/786 cc @ezyang @gchanan @zou3519
needs reproduction,module: crash,triaged
low
Critical
539,098,256
TypeScript
Namespace as first-class citizen
## Search Terms namespace, functor ## Suggestion [Namespace as first-class citizen](https://reasonml.github.io/docs/en/module): add `abstract namespace` or `namespace type`, and `namespace function` or `functor` ## Use Cases Higher-order namespaces ## Examples [Module Functions (functors)](https://reasonml.github.io/docs/en/module#module-functions-functors) ## Checklist My suggestion meets these guidelines: * [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code * [x] This wouldn't change the runtime behavior of existing JavaScript code * [x] This could be implemented without emitting different JS based on the types of the expressions * [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.) * [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals). ----- An object is not allowed to contain types; only a namespace can include both types and values. The syntax I showed above would also need to be redesigned.
Suggestion,Awaiting More Feedback
low
Critical
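The closest approximation TypeScript offers today is a function over a "module object" of values, with types traveling separately via generics (illustrative sketch; the names are not from the suggestion). What cannot be done, and what the suggestion asks for, is passing a namespace carrying both types and values as one unit:

```typescript
// ReasonML-style functor approximated with what TypeScript has today:
// a function from one "module object" to another.
interface Comparable<T> {
  compare(a: T, b: T): number;
}

// "Functor": takes a module implementing Comparable, returns a Set-like module.
function MakeSet<T>(Ord: Comparable<T>) {
  return {
    add(items: T[], x: T): T[] {
      // Skip x if an equal element (per Ord) is already present.
      return items.some((y) => Ord.compare(x, y) === 0) ? items : [...items, x];
    },
  };
}

const IntOrd: Comparable<number> = { compare: (a, b) => a - b };
const IntSet = MakeSet(IntOrd);
console.log(IntSet.add([1, 2], 2)); // duplicate, left unchanged
console.log(IntSet.add([1, 2], 3)); // appended
```

Because only values cross the function boundary here, any types the "module" defines have to be smuggled through generics or declared separately, which is the gap the suggestion targets.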
539,101,537
material-ui
[Popover] Don't hide scrollbar
- [x] I have searched the [issues](https://github.com/mui-org/material-ui/issues) of this repository and believe that this is not a duplicate. ## Summary 💡 In the [documentation](https://material-ui.com/components/popover/#popover), it is written: "The scroll and click away are blocked unlike with the Popper component.". But why? Why not allow unlocking the scrollbar, as is the case for `Select`, `Menu`, `Dialog`, etc.? You will tell me: "use the Popper". I can do that, but the Popper lacks a lot of features that the Popover has (Escape key & click-away handling). Moreover, Google already uses a popover that doesn't block the scrollbar: ![image](https://user-images.githubusercontent.com/5437552/71003963-b4b96200-20e1-11ea-9678-5fff49ce1934.png) So it would be nice if there were a way to not block the scrollbar on Popover (this is the only thing that prevents me from using it every time I want to).
new feature,component: Popover,priority: important
low
Major
539,114,821
rust
Problem with `PartialEq`
When I try to `derive(PartialEq)` for a type, I get a weird compilation error in the generated code, but I'm sure it should be fine. Here's a [link to the playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=e2671b59bb8fc53d14fded0842a0c0d8)
A-DSTs,T-compiler,C-bug
low
Critical
539,115,626
create-react-app
IE11 support doesn't work in dev mode, even after adding all polyfills and enabling ie11 support
### Describe the bug IE11 support doesn't work even after adding react-app-polyfill, enabling "ie 11" in browserslist in package.json and adding import 'react-app-polyfill/ie11' and import 'react-app-polyfill/stable' into src/index.js ### Did you try recovering your dependencies? Yes ### Which terms did you search for in User Guide? Followed instructions on https://create-react-app.dev/docs/supported-browsers-features/ and https://github.com/facebook/create-react-app/blob/master/packages/react-app-polyfill/README.md ### Environment macOS node 11.15.0 npm 6.7.0 latest create-react-app VirtualBox with Microsoft's test Windows 10 and IE11 image VM ### Steps to reproduce 1. npx create-react-app test 2. cd test 3. npm i react-app-polyfill 4. edit package.json and add "ie 11" into the "browserslist->development" section: ``` "browserslist": { "production": [ ">0.2%", "not dead", "not op_mini all" ], "development": [ "ie 11", "last 1 chrome version", "last 1 firefox version", "last 1 safari version" ] } ``` 5. edit src/index.js and add 2 lines at the top: ``` import 'react-app-polyfill/ie11'; import 'react-app-polyfill/stable'; ``` 6. rm -rf node_modules/.cache 7. npm run start 8. launch the IE11 VM and open http://`<ip>`:3000 in IE11 ### Expected behavior The default Create React App page is supposed to show up ### Actual behavior SCRIPT5022: SyntaxError 0.chunk.js (19856,1)
issue: bug
medium
Critical
539,131,569
flutter
[webview_flutter] Navigation delegate does not have the same behavior on Android and iOS
## TL;DR: The navigation delegate is called several different times on iOS versus Android, which may make custom navigation handler difficult to maintain. The solution would be to rely on `isForMainFrame` to know if we enter the delegate on a real user interaction or not, but it may be confusing sometimes. ## Steps to Reproduce 1. Create a simple application with a webview with URL 'https://app-mobile-webview.mobizel.com/webview_flutter/html_with_links.html' or clone and build the following project branch ``` git clone -b issue/navigation-delegate-ios-versus-android [email protected]:stevemoreau/flutter_webview.git ``` 2. Run project on Android, click on the first link, and you should logs ``` I/flutter ( 3838): Entering build() I/flutter ( 3838): Page started loading: https://app-mobile-webview.mobizel.com/webview_flutter/html_with_links.html I/flutter ( 3838): Page finished loading: https://app-mobile-webview.mobizel.com/webview_flutter/html_with_links.html // LINK CLICK HAPPENED HERE I/flutter ( 3838): Entering navigationDelegate: requestUrl=https://www.mobizel.com/dummy-link-internal isForMainFrame=true I/flutter ( 3838): Page started loading: https://www.mobizel.com/dummy-link-internal I/flutter ( 3838): Page finished loading: https://www.mobizel.com/dummy-link-internal ``` 3. 
Then, run project on iOS, click on the first link, you should see logs ``` flutter: Entering build() flutter: Entering build() flutter: Entering navigationDelegate: requestUrl=https://app-mobile-webview.mobizel.com/webview_flutter/html_with_links.html isForMainFrame=true flutter: Page started loading: https://app-mobile-webview.mobizel.com/webview_flutter/html_with_links.html flutter: Entering navigationDelegate: requestUrl=https://www.youtube.com/embed/p9rFDcS4F00 isForMainFrame=false flutter: Entering navigationDelegate: requestUrl=about:blank isForMainFrame=false flutter: Entering navigationDelegate: requestUrl=about:blank isForMainFrame=false flutter: Page finished loading: https://app-mobile-webview.mobizel.com/webview_flutter/html_with_links.html // LINK CLICK HAPPENED HERE flutter: Entering navigationDelegate: requestUrl=https://www.mobizel.com/dummy-link-internal isForMainFrame=true flutter: Page started loading: https://www.mobizel.com/dummy-link-internal flutter: Entering navigationDelegate: requestUrl=https://www.google.com/recaptcha/api2/anchor?ar=1&k=6LdO8pAUAAAAAK6Ei_JJmKQIztUcBZaBod-3NdCL&co=aHR0cHM6Ly93d3cubW9iaXplbC5jb206NDQz&hl=en&v=mhgGrlTs_PbFQOW4ejlxlxZn&size=invisible&cb=4xw6wzque43i isForMainFrame=false flutter: Entering navigationDelegate: requestUrl=about:blank isForMainFrame=false flutter: Entering navigationDelegate: requestUrl=about:blank isForMainFrame=false flutter: Entering navigationDelegate: requestUrl=about:blank isForMainFrame=false flutter: Entering navigationDelegate: requestUrl=about:blank isForMainFrame=false flutter: Entering navigationDelegate: requestUrl=about:blank isForMainFrame=false flutter: Page finished loading: https://www.mobizel.com/dummy-link-internal flutter: Entering navigationDelegate: requestUrl=about:blank isForMainFrame=false flutter: Entering navigationDelegate: requestUrl=about:blank isForMainFrame=false flutter: Entering navigationDelegate: requestUrl=about:blank isForMainFrame=false flutter: Entering 
navigationDelegate: requestUrl=about:blank isForMainFrame=false flutter: Entering navigationDelegate: requestUrl=about:blank isForMainFrame=false ``` **Target Platform:** iOS/Android **Target OS version: Any** **Devices: Any** **Flutter configuration:** webview_flutter: ^0.3.18+1 ## Logs <!-- Run your application with `flutter run --verbose` and attach all the log output below between the lines with the backticks. If there is an exception, please see if the error message includes enough information to explain how to solve the issue. --> $ flutter run --verbose [webview_navigation_delegate_android.txt](https://github.com/flutter/flutter/files/3973980/webview_navigation_delegate_android.txt) [webview_navigation_delegate_ios.txt](https://github.com/flutter/flutter/files/3973981/webview_navigation_delegate_ios.txt) <!-- Run `flutter analyze` and attach any output of that command below. If there are any analysis errors, try resolving them before filing this issue. --> ``` $ flutter analyze Analyzing flutter_webview... No issues found! (ran in 9.5s) ``` <!-- Finally, paste the output of running `flutter doctor -v` here. --> ``` $ flutter doctor -v [βœ“] Flutter (Channel stable, v1.12.13+hotfix.5, on Mac OS X 10.15.1 19B88, locale fr-FR) β€’ Flutter version 1.12.13+hotfix.5 at /Users/steve/sdk/flutter β€’ Framework revision 27321ebbad (6 days ago), 2019-12-10 18:15:01 -0800 β€’ Engine revision 2994f7e1e6 β€’ Dart version 2.7.0 [βœ“] Android toolchain - develop for Android devices (Android SDK version 28.0.3) β€’ Android SDK at /Users/steve/Library/Android/sdk β€’ Android NDK location not configured (optional; useful for native profiling support) β€’ Platform android-28, build-tools 28.0.3 β€’ Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java β€’ Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405) β€’ All Android licenses accepted. 
[βœ“] Xcode - develop for iOS and macOS (Xcode 11.3) β€’ Xcode at /Applications/Xcode.app/Contents/Developer β€’ Xcode 11.3, Build version 11C29 β€’ CocoaPods version 1.8.4 [βœ“] Android Studio (version 3.5) β€’ Android Studio at /Applications/Android Studio.app/Contents β€’ Flutter plugin version 41.1.2 β€’ Dart plugin version 191.8593 β€’ Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405) [βœ“] Connected device (2 available) β€’ Android SDK built for x86 β€’ emulator-5554 β€’ android-x86 β€’ Android 7.0 (API 24) (emulator) β€’ iPhone β€’ 5c21e6117ed37334bb86a3667172e7317c1f609e β€’ ios β€’ iOS 13.2.3 β€’ No issues found! ```
a: fidelity,p: webview,package,team-ecosystem,has reproducible steps,P2,found in release: 2.2,found in release: 2.5,triaged-ecosystem
low
Critical
539,138,051
opencv
There seems to be a bug in modules/calib3d/src/stereobm.cpp at line 685.
##### System information (version) - OpenCV => 4.1.2 - Operating System / Platform => Ubuntu 16.04 64 bit - Compiler => GCC 5.4.0 ##### Detailed description There may be a bug in modules/calib3d/src/stereobm.cpp at line 685. At line 1161 `cost` is defined as ``` cost.create( left0.size(), CV_16S ); ``` while at line 685: ``` int* costptr = cost.data ? cost.ptr<int>() + lofs + x : &costbuf; ``` Should it not be `short* costptr`?
category: calib3d
low
Critical
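To illustrate why the pointer type in the stereobm report above matters, here is a small Python sketch (using the standard-library `struct` module, not OpenCV; the values are made up for illustration): reading a buffer allocated as int16 (`CV_16S`) through 4-byte `int`-sized reads fuses neighboring shorts into bogus values.

```python
import struct

# A buffer allocated as int16 (CV_16S), like `cost` in stereobm.cpp.
shorts = [1, 2, 3, 4]
buf = struct.pack("<4h", *shorts)  # 8 bytes of little-endian int16 data

# Correct access: reinterpret the memory as int16 values.
as_shorts = struct.unpack("<4h", buf)

# Buggy access: an `int*` walks the same memory 4 bytes at a time,
# fusing each pair of shorts into one 32-bit value.
as_ints = struct.unpack("<2i", buf)

print(as_shorts)  # (1, 2, 3, 4)
print(as_ints)    # (131073, 262147), i.e. 2*65536 + 1 and 4*65536 + 3
```

The mismatched pointer type also strides past the intended elements, which is consistent with the suggested `short* costptr` fix.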
539,165,675
electron
Create an alwaysOnTopOfWindow browser option to allow windows to be always on top of specific windows
### Preflight Checklist * [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/master/CONTRIBUTING.md) for this project. * [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/master/CODE_OF_CONDUCT.md) that this project adheres to. * [x] I have searched the issue tracker for a feature request that matches the one I want to file, without success. ### Problem Description I want a window that's opened by window.open (or other windows, I suppose) to be always on top of the main window, but I don't want them to be on top of all windows on the OS, which is how the `alwaysOnTop` flag currently works. Making the window a child of the other window gets me that behavior, but it also gets me other behavior that I don't want. For example, I don't get a separate taskbar icon for the window. ### Proposed Solution Could we get a new flag `alwaysOnTopOfWindow` that's assigned to the specific window that I want this to be on top of? (An API for toggling this would also be nice.) ``` const win = new BrowserWindow({ alwaysOnTopOfWindow: mainWindow }); ``` ### Alternatives Considered Right now, I have to use the `parent` option to make the window a proper child of the other window, but this comes with other behavior that I don't really want.
enhancement :sparkles:
low
Minor
539,171,681
flutter
No link local adapter when personal hotspot is enabled on iOS
This sometimes causes mDNS to fail when the host and device are connected to public wifi. From: https://github.com/flutter/flutter/issues/46698 @dnfield says: > When hotspot is enabled, we don't get a link local adaptor. And by default, when I'm otherwise connected, that adaptor is disabled. We should look to see if there's some way we can enable that adaptor (I'm thinking something like either `ifconfig up` or `defaults` should be able to control that somehow).
c: new feature,platform-ios,tool,a: quality,customer: crowd,P2,team-ios,triaged-ios
low
Minor
539,172,424
go
x/build: report failures fast but still make all test results available
(This feature request/enhancement is related to issue #34119, the goal is to improve developer experience at the expense of using more computational resources.) In https://github.com/golang/go/issues/14305#issuecomment-565093254, there is discussion of whether to use -k on which branches, and on post-submit only vs trybots too. There is a trade-off in that the flag improves information that a failed build reports, but makes the test failure get reported more slowly. A more ideal (for user developer experience) solution is to report a failure fast, as soon as the first test fails, but keep going and make all test results available for later viewing by interested parties. See the [fast-finish builds](https://blog.travis-ci.com/2013-11-27-fast-finishing-builds) feature of Travis CI for some related precedent. /cc @bradfitz @bcmills @toothrot @cagedmantis
Builders,NeedsInvestigation,FeatureRequest
low
Critical
539,175,748
terminal
Code Health: Update existing winrt properties to use `GETSET_PROPERTY` when possible
Pretty self-explanatory. Classes with a bunch of ```c++ uint32_t TerminalSettings::DefaultForeground() { return _defaultForeground; } void TerminalSettings::DefaultForeground(uint32_t value) { _defaultForeground = value; } ``` contain an enormous amount of boilerplate we don't need. ```c++ GETSET_PROPERTY(uint32_t, DefaultForeground, DEFAULT_FOREGROUND_WITH_ALPHA); ``` is better.
Help Wanted,Product-Terminal,Issue-Task,Area-CodeHealth
low
Minor
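As a rough conceptual analog of the `GETSET_PROPERTY` macro above (sketched in Python rather than C++/WinRT; the class, property name, and default value are hypothetical), a one-line property factory can replace each hand-written getter/setter pair:

```python
def getset_property(name, default):
    """One-line property factory: the moral equivalent of
    GETSET_PROPERTY(type, Name, Default) from the issue above."""
    attr = "_" + name

    def getter(self):
        return getattr(self, attr, default)

    def setter(self, value):
        setattr(self, attr, value)

    return property(getter, setter)

DEFAULT_FOREGROUND = 0xFFFFFFFF  # hypothetical default value

class TerminalSettings:
    # one line per property instead of a hand-written getter/setter pair
    default_foreground = getset_property("default_foreground", DEFAULT_FOREGROUND)

s = TerminalSettings()
print(hex(s.default_foreground))  # 0xffffffff
s.default_foreground = 0x00FF00
print(hex(s.default_foreground))  # 0xff00
```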
539,175,836
terminal
Add a setting to disable mouse-wheel scrolling
This is in lieu of having actual mouse bindings. Some people might actually want the mouse to not scroll, or might want to pass the mouse scrolling through. Until we have proper mouse bindings, this setting should suffice. ###### note 11/8/2021 This task is an "instead of #1553" task. If we just do #1553, then this isn't relevant. ###### note 2/22/2022 I'd add this as a profile setting, `experimental.mouseWheelScrolling: bool`. Probably just the same as #11710. Since we're using this as an "escape hatch" issue while we wait for the architecture to do #1553 right, I'd just go ahead and disable all wheel scrolling with this setting - including opacity scrolling and zooming.
Help Wanted,Area-TerminalControl,Area-Settings,Product-Terminal,Issue-Task,good first issue
low
Major
539,175,971
terminal
Use an indeterminate progress ring on the tab before a tab is "connected"
Browsers will often use a little spinning animation to indicate that a tab is loading before displaying the favicon. For shells like wsl and the Azure connector, we could maybe do something similar, where we keep it on the spinner until we get the first byte. This would probably require support from WinUI for progress wheels in tabs first
Help Wanted,Area-UserInterface,Product-Terminal,Issue-Task
low
Minor
539,176,018
terminal
Add support for pane switching via tab switcher
As brought up in the tab switcher spec, we probably also want people to be able to navigate panes with that switcher. cc @leonMSFT since I stole this idea straight from his spec.
Area-UserInterface,Product-Terminal,Issue-Task
low
Minor
539,176,326
terminal
Make `Pane` a proper WinRT type
Pretty self explanatory. Right now they're plain c++ types, but in reality they should be exposed as projected WinRT types, so they can be used across binary boundaries.
Product-Terminal,Issue-Task,Area-CodeHealth,Priority-2
low
Major
539,176,379
terminal
Scenario: Add support for 3rd-party extensions
Updated thread from #555 Now that we have a better understanding of what work will be needed to support 3rd party extensions ("plugins"), this thread is the megathread to track all the work that's needed to support them. Before any real work can begin here, we're waiting on some support from the Windows operating system that will be coming Soon<sup>TM</sup>. Fundamentally, third-party code extensions for packaged applications (like us) aren't currently supported. This is _not_ work slated for 1.0. It may not even land in 2.0. This work will likely span multiple releases. Whenever this work does end up starting formally, we'll likely have a number of minor versions with prototypes of the extensibility model, with breaking changes up until the major version release of the Terminal. So if extensions are scheduled for 3.0, then 2.4 might _introduce_ bits of the extensibility model that are later broken in 2.5, 2.6, etc. ## What should extensions be able to do? * Add their own profiles via Dynamic Profile Generators * Customize the rendering of the Terminal, with things like: - custom shaders, font renderers - custom formatting of the buffer (like hyperlinks) * Add their own Color Schemes, keymaps * Modify the UI theme (for adjusting appearance of tabs, scrollbars, etc.) * Modify the layout of the UI - Add elements to the UI like status bars, toolbars, minimaps - reposition existing elements of the UI? Tabs on bottom for example. * Create `Pane`s with their own UI elements * Read the contents of the text buffer, for parsing things like hyperlinks * When extensions modify the UI, they'll need to adjust how large the initial window size is * Add commands to the Command Palette, including nested commands * Add commands to various context menus (such as the `TermControl` context menu, `TabViewItem` menu). 
## Work needed to support extensions Some of this work is more directly related to extensions, while others on the list are simply adding features to the terminal to provide points for extensibility in the future. * [x] Make `Profile`, `ColorScheme`, `GlobalAppSettings` proper WinRT types #3998 * [ ] Make `Tab`, `Pane` proper WinRT types #3999, #3922 * [x] Support `Pane`s with non-terminal content #997 (spec: #1080) * [x] Add an optional right-click menu to the `TermControl` (#3337) * [x] Add the command palette #2046 (spec: #2193) * [x] Enable nested commands in the command palette #3994 * [ ] Add comprehensive "XAML" theming functionality #3327 * [ ] Design Extensibility model - this covers actually figuring out the API surface by which developers can write extensions for. **This is most of the work** - This will involve developing a couple extensions in-house and partnering with some community members to explore extensibility model ## Mega-list of extension ideas ### Dynamic Profiles / settings * #3821 Add Developer PowerShell and Developer Command Prompt Generator * #1394 Upon installation, add a Git Bash profile, if Git for Windows is present * #1280 Feature Request SSH-Telnet-Serialport connection * #5049 Device Portal * #5900 Include a profile that will connect to Visual Studio Codespaces * #6105 Add WSL distro default desktop wallpapers as default background * #8339 "ask parameter" for profile. 
profile with variable that is prompted-for * #9031 Auto generate profile from .ssh/config * #10943 Importing PuTTY sessions * #11641 * This isn't "dynamic profiles", it's more "commands that are provided to the command palette via an API **NOT a plaintext file**" * #12773 ### Additional Connection types * #1999 Plugin: add support for [XYZ]MODEM file transfers * #694 Suggestion: "One-click & snap" connect to bluetooth/serial devices and network hosts using QR Code and Code 128 * #3656 Add support for `tmux` Control Mode * #5321 Support HTM (headless terminal multiplexer) for remote pane/tab management * #11064 Bastion shell similar to azure cloud shell * #13245 ### Buffer Parsing/Manipulation * #3196 Feature Request: Smart Double-click Selection (regeces?) * #2671 Feature Request: link generation for files + other data types * #574 Design Clickable Links & Link preview feature/extension * #5226 Parenthesis matching in text * #6297 Highlighting matches in all history * #5916 Triggers(Including Text Strings) and Actions (internal or external calls) * #6632 ITerm2-like terminal autocomplete * #7749 ability to pipe scrollback buffer to a pager or temrinal editor ### Suggestions * #17344 * that pretty much tracks all of it. ### UI Elements #### App elements * #3459 Add an optional status bar to the bottom * ~~#2994 Feature Request: allow to pick the color of the tabs~~ - We actually did this ourselves! * #2934 Feature Request: Icon buttons to start relevant shell types - (Maybe something like a toolbar? See also #4531, #5273, #7207) * #1595 Feature Request - Scripts Panel - A panel for saved commands #11270 - #5273 Favorites/Shortcuts/Quick List - Also maybe implies some sort of integration of storing their own settings that are linked to Terminal profiles? * #2601 Feature Request: optional splitting of view into scroll pane and static pane while scrolling * ~~#1502 Feature Request: An advanced tab switcher~~ - We actually did this ourselves! 
* #835 Feature request: Enable customization for tabs on bottom * #644 Feature Request: /help commands output to collapsible pane * ~~#4444 It would be nice to have a kind of popup with a tab name when you switch tabs with CTRL+TAB in fullscreen mode.~~ - We actually did this ourselves! - Let's add a setting for this as well, this sounds like a not so terrible idea. * #5426 "Peek" button to show screen behind Terminal * #5636 Add support for Azure Cloud Shell `code` Editor - In this case, a particular connection might need to be able to spawn a new pane, with HTML content in it * #16387 - We closed this cause we're so very unlikely to do it, but it's not a bad idea post-#835 * #16420 - "Extensions should be able to add settings UI elements" * more... #### Control Elements * #3121 Enhance shell autocompletion with a cool new user interface and shell completion * #2471 Feature Request: ⬇️ button to appear when scrolled many pages up * #2226 Feature Request: Scrollable map view for each tab [minimap] * #1527 FR: IDE-style marks on scrollbar * #5278 Provide a 'Paste mode' to allow a paste keybinding to be 'smart'. - A single keybinding that lets the user chose between "paste", "send ^V", or "Cancel". - Might require adding a custom argument to a `ShortcutAction` we've defined? Or its own `ShortcutAction`. * #6025 FPS overlay for performance debugging * #6511 Control how multi-click selection operates * #6979 Have text as the background instead of just an image * mjpeg streaming for terminal control background #6242 * #6497 * #15837 - similar to the above * Display a indicator on the cursor's line #9993 * #10175 - Hey we did this in 1.19! * rainbow cursor? 
#10442 * Show and allow editing of typeahead #10690 * #15104 #### Other Elements * #5591 Opening the windows terminal inside Windows Explorer rather than from windows Explorer * #6028 Background image for tab, not per split pane ### Custom Rendering * #3520 Feature request: 'age' output visually * #781 Glow text exploration * #7013 Allow background image to be pixel/fragment shader specified by me * #7380 Add built-in D2d effects * #9221 Smooth cursor animation/transition when moving ### Advanced Settings * #3900 Rolling background image like Windows Themes * #7906 Change tab colour automatically based on tab title text using preset regex in settings ### Miscellaneous * #2516 Feature Request - Live Share support * #469 Feature request: Recording - GIF Capture tool for the Windows Terminal #8098 * #5434 Voice Command * #6372 Notification after a long running command finishes * #7279 Randomize background images from an array of filesystem paths * #8647 Auto display readme`s * #8797 Extension: Pipe output to command palette in Terminal * More elaborate SSH features (like PuTTY) - See: #4653, #11034 <hr> @ future me, 10/14/2021 I tried this in [`dev/migrie/fhl/adaptive-card-extension`](https://github.com/microsoft/terminal/tree/dev/migrie/fhl/adaptive-card-extension), but ran into a number of unfixable build issues that made me hate the world. I couldn't get another package built to be able to load that into the terminal at all. Maybe next month I'll try again. For now, it's on to honks.
Area-Extensibility,Product-Terminal,Issue-Scenario
medium
Critical
539,189,447
pytorch
`clip_grad_norm` allows negative `max_norm` values
## πŸš€ Feature Hello, this is not really a feature request, nor a bug report, more an "additional check" proposal. ## Motivation The current implementation of [nn.utils.clip_grad_norm](https://pytorch.org/docs/stable/_modules/torch/nn/utils/clip_grad.html) allows passing a negative `max_norm`. If you do so, it will fail silently and, even worse, reverse all the gradients. ## Pitch It would be good to have a sanity check, or to avoid doing anything if negative values are passed.
module: nn,triaged,enhancement
low
Critical
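A torch-free sketch of the proposed sanity check for the clip_grad_norm issue above (plain Python floats stand in for gradient tensors; this is not PyTorch's actual implementation): with a negative `max_norm` the scaling coefficient goes negative and every gradient silently flips sign, so the guard raises instead.

```python
def clip_grad_norm(grads, max_norm):
    """Simplified sketch of gradient-norm clipping on plain floats.
    Without the guard below, a negative max_norm makes `coef` negative
    and every gradient silently flips sign."""
    if max_norm <= 0:
        raise ValueError(f"max_norm must be positive, got {max_norm}")
    total_norm = sum(g * g for g in grads) ** 0.5
    coef = max_norm / (total_norm + 1e-6)
    if coef < 1:
        grads = [g * coef for g in grads]
    return grads

print(clip_grad_norm([3.0, 4.0], max_norm=1.0))  # scaled down to norm ~1.0

try:
    clip_grad_norm([3.0, 4.0], max_norm=-1.0)
except ValueError as exc:
    print(exc)  # max_norm must be positive, got -1.0
```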
539,203,270
flutter
Add color gradient support to polygon in google_maps_flutter
## Use case Gradient support? I was thinking of this between 3 points **A, B & C**, from *A(Red) to B(green) to C(blue)* ![Screenshot from 2019-12-17 22-21-07](https://user-images.githubusercontent.com/34790378/71016748-9d976600-211b-11ea-8462-bbdcc1a879e6.png) ![Screenshot from 2019-12-17 22-23-58](https://user-images.githubusercontent.com/34790378/71017035-ff57d000-211b-11ea-961c-d5826ecab982.png) ## Proposal Wouldn't it be better if the polygon could take a list of colors as a property? Like the support in https://github.com/react-native-community/react-native-maps/pull/1911 ? ![](https://camo.githubusercontent.com/61a0639364bb4f6634b44a87bacbd8fc525c4c44/68747470733a2f2f692e696d6775722e636f6d2f5a61344465766d2e706e673f31)
c: new feature,p: maps,package,c: proposal,team-ecosystem,P3,triaged-ecosystem
medium
Critical
539,203,529
terminal
Create a connection for simple testing Windows Terminal performance
Right now, the Windows Terminal's perf is pretty tightly bound to the perf of conpty. It might be a little hard for us to be able to really analyze the performance bottlenecks of the Terminal, if we're also hitting the bottlenecks of conpty in another thread. To aid in debugging, I think we should add another connection implementation. This connection should only be valuable for debugging and perf testing. I'm imagining something where it can be configured to emit some number of lines of text, either in monochrome, 16 color, or 256/rgb color, and then it just times how long it takes to emit that sequence. It'll emit all that text, then just print statistics on how long it took to emit all that text. Ideas for settings: (though I'm not sure if this is too much work) ```jsonc { "name": "Perf test", "connectionType": "some guid", "linesToTest": 10000, // or -1 for run without stopping "colors": "monochrome", // or "16color", "256color" "style": "ascii" // or unicode for CJK/emoji }, ``` This way, we can get traces on _just_ the Terminal, isolated from conpty.
Area-Performance,Area-TerminalConnection,Product-Terminal,Issue-Task
low
Critical
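A sketch of what the proposed perf-test connection above could emit, written in Python for illustration only (the mode names mirror the JSON proposal, but everything here is hypothetical): a configurable number of lines in a chosen color mode, plus a timing measurement over the emission.

```python
import time

def emit_test_lines(lines, colors="16color"):
    """Yield `lines` lines of text in the requested color mode, so the
    consumer can time how long emitting/rendering them takes."""
    for i in range(lines):
        if colors == "monochrome":
            yield f"line {i}\n"
        elif colors == "16color":
            yield f"\x1b[{31 + i % 8}mline {i}\x1b[0m\n"
        else:  # "256color"; an rgb mode could be added the same way
            yield f"\x1b[38;5;{i % 256}mline {i}\x1b[0m\n"

start = time.perf_counter()
out = "".join(emit_test_lines(10_000, "16color"))
elapsed = time.perf_counter() - start
print(f"emitted {out.count(chr(10))} lines in {elapsed:.4f}s")
```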
539,216,550
PowerToys
Improve test coverage for Runner
# Summary of the new feature/enhancement There is currently no test coverage for the runner project. # Proposed technical implementation details (optional) 1. Identify business logic and test cases that should be applied for the runner. 2. Provide unit tests that can be applied without a redesign of the runner code. 3. Propose refactoring/redesign of the runner code that may improve the testability of the codebase.
Area-Tests,Area-Quality
low
Minor
539,222,426
terminal
Design a way for TerminalConnections to receive arbitrary (JSON?) arguments
This will allow a connection type to peer into the _profile_ that spawned it to get more information out of it, like a `port` from a telnet profile and a `username` from an SSH profile.
Issue-Feature,Area-TerminalConnection,Area-Settings,Product-Terminal
low
Minor
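One possible shape for the arbitrary-argument idea above, sketched in Python with a hypothetical profile schema (the `connection` key and the `SshConnection` class are invented for illustration, not the Terminal's actual design): the profile carries an opaque JSON object, and each connection type reads only the keys it understands.

```python
import json

# Hypothetical profile: the "connection" object is passed through opaquely,
# and each connection type pulls out the keys it understands.
profile_json = """
{
  "name": "my ssh box",
  "connectionType": "ssh",
  "connection": { "host": "example.com", "port": 2222, "username": "me" }
}
"""

class SshConnection:
    def __init__(self, args):
        self.host = args["host"]                # required
        self.port = args.get("port", 22)        # connection-specific default
        self.username = args.get("username")    # optional

profile = json.loads(profile_json)
conn = SshConnection(profile["connection"])
print(conn.host, conn.port, conn.username)  # example.com 2222 me
```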
539,242,651
deno
Default timeouts and size limits for `Deno.serve`
Deno tries to have a standard library similar to Go's, which is great, but... keep in mind that even Go's authors made some mistakes when they initially developed the std library. For example, by default the HTTP server and client don't add any timeouts. In order to have a production-ready HTTP server in Go you would want to add read/write timeouts. ```go s := &http.Server{ Addr: ":8080", Handler: myHandler, ReadTimeout: 10 * time.Second, WriteTimeout: 10 * time.Second, MaxHeaderBytes: 1 << 20, } log.Fatal(s.ListenAndServe()) ``` Back to Deno. For now there are only a few options available. ```ts interface ServerConfig { port: number; hostname?: string; } ``` Example of a slow-client attack, which creates new TCP connections on the server and, by reading the response body slowly, never closes them, eventually leading to the server running out of file descriptors. Deno server ```ts import { serve } from "https://deno.land/[email protected]/http/server.ts"; const s = serve({ port: 8000 }); console.log("http://localhost:8000/"); async function main() { for await (const req of s) { console.log('request:', req); const body = new TextEncoder().encode(`Hello World ${Date.now()}\n`); req.respond({ body }); console.log('response:', body.length); } } main() ``` Go test client ```go package main import ( "fmt" "net/http" "time" ) func main() { client := &http.Client{} for { go func() { req, err := http.NewRequest("POST", "http://localhost:8000", nil) if err != nil { panic(err) } res, err := client.Do(req) if err != nil { panic(err) } defer res.Body.Close() // Simulate slow client attack by reading response body slowly. 
buf := make([]byte, 1) for { _, err := res.Body.Read(buf) if err != nil { fmt.Println(err) break } time.Sleep(100 * time.Second) } }() time.Sleep(1 * time.Second) } } ``` Check established connections with `lsof -p <PID>` ``` // 53 deno 19997 anjmao 177u IPv4 0x35f50d8024691719 0t0 TCP localhost:irdmi->localhost:61961 (ESTABLISHED) deno 19997 anjmao 178u IPv4 0x35f50d80249ec3b1 0t0 TCP localhost:irdmi->localhost:61963 (ESTABLISHED) deno 19997 anjmao 179u IPv4 0x35f50d80249eb0a1 0t0 TCP localhost:irdmi->localhost:61965 (ESTABLISHED) deno 19997 anjmao 180u IPv4 0x35f50d8024a4a0a1 0t0 TCP localhost:irdmi->localhost:61967 (ESTABLISHED) // 58 // ... // 209 ```
feat,ext/http
low
Critical
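A minimal Python sketch of the read-timeout defense the Deno issue above asks for (analogous to Go's `ReadTimeout`; this is stdlib sockets, not Deno's actual API, and the function name is invented): a client that connects but never sends data gets cut off instead of holding a file descriptor open indefinitely.

```python
import socket

def serve_once_with_timeout(timeout=0.2):
    """A client that connects but never sends anything (slowloris-style)
    gets cut off after `timeout` seconds instead of pinning the server."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)

    client = socket.socket()
    client.connect(srv.getsockname())  # connect, then send nothing

    conn, _ = srv.accept()
    conn.settimeout(timeout)           # analogous to Go's ReadTimeout
    try:
        conn.recv(1024)
        timed_out = False
    except socket.timeout:
        timed_out = True               # drop the slow client here
    finally:
        conn.close()
        client.close()
        srv.close()
    return timed_out

print(serve_once_with_timeout())  # True
```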
539,274,760
rust
Cargo fix generates broken code for type names passed to derives
When a type is given to a proc macro as a string, such as in https://github.com/diesel-rs/diesel/blob/6e2d467d131030dc0bbce4e4801c7bee7bcbf0dd/diesel/src/type_impls/primitives.rs#L20, `cargo fix --edition` will incorrectly generate `crate::"::old_type::Path"`. If there are nested types inside, such as https://github.com/diesel-rs/diesel/blob/6e2d467d131030dc0bbce4e4801c7bee7bcbf0dd/diesel/src/type_impls/primitives.rs#L42, they won't be touched at all.
A-lints,A-macros,T-compiler,C-bug,A-suggestion-diagnostics,E-needs-mcve,D-invalid-suggestion,A-edition-2018
low
Critical
539,286,507
react-native
Nested Text with onPress / TouchableOpacity Bug
Hey there. I found an issue when rendering nested Text elements. It's almost the exact same as this ticket: https://github.com/facebook/react-native/issues/1030 I was able to get it to sort of work. I had to add an `onPress` to the Text component. Problems: - TypeScript says there isn't an `onPress` on Text elements. But it does in fact work. This should probably be fixed in the type definitions. - Not possible to use TouchableOpacity, so it doesn't feel good when pressing on these items. When using TouchableOpacity like this: ```js <Text> <Text>first part</Text> <TouchableOpacity><Text>Second part</Text></TouchableOpacity> </Text> ``` Second part doesn't get rendered at all. Before you suggest using a `<View>` around the `<Text>` instead, please look at the referenced issue. When you do that, the text runs off screen, or wraps weirdly. TL;DR: I need to add a touchable opacity inside a nested Text component. Our API returns text in blocks, the RN app needs to parse it and render an array of text elements together with different styling. React Native version: 0.61.4
Component: TouchableOpacity,Bug,Never gets stale
high
Critical
539,294,909
angular
ExpressionChangedAfterChecked error improvements for Ivy
PR #34381 improves the `ExpressionChangedAfterChecked` error by adding the property name for property bindings and also the content of the entire property interpolation. There are a few more things that we can improve: - [x] ~~for attribute bindings and interpolations, include attribute name and the entire expression (similar to property bindings and interpolations)~~. Implemented in PR #34505. - [x] ~~for text interpolations (like `Some exp {{ exp1 }} and {{ exp2 }}`), we can store metadata in `tView.data` and display the whole block when we throw an error (making sure we limit the length of the output from interpolation)~~. Implemented in PR #34520. - [x] ~~when expression value is an object, the `[Object object]` is printed, we can `JSON.stringify` it (or something similar) and limit it to 1000 chars or so~~. - [ ] for i18n attributes (like `<div i18n-title title="Some exp {{ exp1 }}"></div>`) we can extract the property name from i18n metadata (stored in `tView.data`, but using a different format). Currently, the property name for such cases is not included in the `ExpressionChangedAfterChecked` error. - [x] ~~Include the component's name in the error message, so that it's easier to find the template where the problem happened~~. Implemented in #50286. - [ ] Consider including the expression that caused the problem (should work for text interpolations at least). However this is not trivial, since it'd require changes to the generated code and including the `ngDevMode` flag, so that this extra debug info doesn't make it into prod bundles.
feature,hotlist: error messages,freq3: high,area: core,core: change detection,feature: in backlog
low
Critical
539,300,241
go
go/doc: Examples includes example functions with returns
As documented at https://golang.org/pkg/testing#hdr-Examples, an example function is expected to have no parameters and no returns. `doc.Examples` considers [functions with parameters](https://github.com/golang/go/blob/4b21702fdcd17aee6a52a74cc68c7c9b0ed1b7e3/src/go/doc/example.go#L65-L67) as invalid examples and skips them. Unlike the `vet` pass, `doc.Examples` does not check if there are results. That means both of these are recognized as examples by `doc.Examples`: ```Go func ExampleHello() { fmt.Println("hello") // Output: hello } func ExampleHi() int { fmt.Println("hi") return 42 // Output: hi } ``` It means `x/tools/cmd/godoc` and other tools that display documentation may show such invalid examples: <details><summary>godoc screenshot</summary><br> ![image](https://user-images.githubusercontent.com/1924134/71032546-918fb200-20e3-11ea-83dc-21c96c3d5a04.png) </details> `go vet` catches this problem: ``` # example.org/p ./style_test.go:131:1: ExampleHi should return nothing ``` Perhaps `doc.Examples` should be changed to treat functions with returns as invalid examples, like it does for functions with parameters, and not include them in its output. /cc @griesemer @jayconrod @matloob @bcmills (This is somewhat related to issue #36184 and [CL 211598](https://golang.org/cl/211598).)
NeedsDecision
low
Major
539,323,920
pytorch
[jit] Python type hints in TorchScript classes don't work
The `self.s` assignments should be equivalent, but only the `torch.jit.annotate` one actually works ```python @torch.jit.script class SimpleNulls: def __init__(self): self.a = None # self.s = torch.jit.annotate(Optional[str], None) self.s : Optional[str] = None self.b = False self.f = 0 @torch.jit.script def foo(): options = SimpleNulls() options.s = "bar" return options.s ``` cc @suo
oncall: jit,triaged
low
Minor
539,331,797
flutter
`testWidgets` should assert that all SystemChannels are in the same state as at the start of the test
We should do this before https://github.com/flutter/flutter/issues/47233 Currently, tests can do things like `setMockMethodCallHandler` on various system channels. This actually changes global state, and the test is not held accountable for resetting it. This can cause problems for other tests, particularly if the handler is set up in a way that survives the execution of an individual test - such as how `TestTextInput` manages the handler of `SystemChannels.textInput`. I believe we cannot do this without making it a breaking change (at least for our current definition of breaking changes). We need to: - [ ] Create a way to assert that message handlers (and mock message handlers) are the same from the start to finish of the test - [ ] Update all tests to use this - [ ] Publish a migration guide on how to do this for others. /cc @goderbauer who I've talked about this with offline.
a: tests,framework,c: API break,c: proposal,P3,team-framework,triaged-framework
low
Minor
539,349,926
vscode
Add a command to add multicursors to all Highlight Occurrences
One issue with F2 to rename is it doesn't "update as you type". That's the main reason I went for the current multi-cursor approach to do auto rename tag. @aeschli and me talked a bit about having a `mirrorCursorProvider`, so this functionality can be available to more languages...But now I'm wondering if we can easily achieve this by adding a command that would put a cursor in all matching positions: ### Before ```js function add(fir|st, second) { console.log(first) return first + second } ``` ### After command ```js function add(fir|st, second) { console.log(fir|st) return fir|st + second } ``` @jrieken What do you think? I think this goes together with our approach of adding structured editing (expandSelection) by leveraging language server knowledge.
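A toy sketch of what such a command would compute, in plain Python (the helper name is made up for illustration): the offset of every whole-word occurrence, i.e. the positions where the extra cursors would be placed. The real feature would of course reuse the language server's document-highlight results instead of a regex, so that only semantically matching occurrences get a cursor.

```python
import re

def occurrence_offsets(text, word):
    """Offsets of every whole-word occurrence of `word` in `text` --
    one extra cursor would be placed at each offset."""
    return [m.start() for m in re.finditer(rf"\b{re.escape(word)}\b", text)]

# The example from above, flattened into one string.
snippet = "function add(first, second) {\n  console.log(first)\n  return first + second\n}"
cursors = occurrence_offsets(snippet, "first")
```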
feature-request,editor-multicursor,editor-symbols
low
Major
539,350,844
TypeScript
"Did you mean parent object or child property?" in error messages
Discussed with @RyanCavanaugh from his personal dev experience. It's common to get an error message when accidentally using an inner property of an object. Similarly, we might forget to actually fetch an inner property of an object. ```ts // @strict: true interface Person { residence: House; } interface House { isHouseOfPain: boolean; } declare let home: House; declare let person: Person; home = person.residence.isHouseOfPain; // ~~~~ // error: Type 'boolean' is not assignable to type 'House'. home = person; // ~~~~ // error: Property 'isHouseOfPain' is missing in type 'Person' but required in type 'House'. // related: 'isHouseOfPain' is declared here. ``` It'd be nice if we could hint to the user "hey, you might have dotted too far into this object" or "hey, did you mean to grab the `residence` property?"
Suggestion,Help Wanted,Effort: Moderate,Domain: Error Messages,Experience Enhancement
low
Critical
539,356,392
pytorch
Error "builtin cannot be used as a value" when adding Python snippets in C++
## πŸ› Bug It looks like a great way to define methods in Python format from C++, for debugging and testing purpose. However, I realized that not all eager-mode definitions are accepted. ## To Reproduce As an example, if I add this snipped in my test, ``` script::Module m("m"); m.define(R"JIT( def test_shape_prop(self, x): # type: (int) -> int if not bool(x): return x else: z = torch.zeros([2, 2], dtype=torch.int64) return int(z[0]) )JIT"); std::vector<IValue> inputs; auto minput = 0.5 * torch::ones({}); inputs.emplace_back(minput); auto ref = m.run_method("test_shape_prop", minput); ``` There's run-time error: ``` unknown file: Failure C++ exception with description " builtin cannot be used as a value: File "<string>", line 7 return x else: z = torch.zeros([2, 2], dtype=torch.int64) ~~~~~~~~~~~ <--- HERE return int(z[0]) " thrown in the test body. ``` ## Expected behavior There should be no difference between the models/methods I could add in eager mode vs. adding them in C++. ## Environment PyTorch version: 1.4.0a0+59f180c Is debug build: Yes CUDA used to build PyTorch: None OS: Mac OSX 10.14.6 GCC version: Could not collect CMake version: version 3.14.0 Python version: 3.7 Is CUDA available: No CUDA runtime version: No CUDA GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA Versions of relevant libraries: [pip] numpy==1.16.4 [pip] numpydoc==0.9.1 [pip] torch==1.4.0a0+e3d5d46 [pip] torchvision==0.1.8 [conda] blas 1.0 mkl [conda] mkl 2019.4 233 [conda] mkl-include 2019.4 233 [conda] mkl-service 2.0.2 py37h1de35cc_0 [conda] mkl_fft 1.0.12 py37h5e564d8_0 [conda] mkl_random 1.0.2 py37h27c97d8_0 [conda] torch 1.4.0a0+b6113dc pypi_0 pypi [conda] torchvision 0.1.8 pypi_0 pypi ## Additional context n/a cc @suo
oncall: jit,low priority,triaged
low
Critical
539,366,666
flutter
non-zero status exit from Android emulator process should throwToolExit
When the tool spins up an Android emulator here: https://github.com/flutter/flutter/blob/a15a81be218f8b2fa2f63d7cd0f8cb0b3a6c2e08/packages/flutter_tools/lib/src/android/android_emulator.dart#L51 the process can come back with a non-zero status and useful information in the message and/or stdout and stderr. Since the tool is unlikely to be able to fix the issues causing the errors, and the error messages from the emulator are usually very informative, the tool should `throwToolExit()` instead of crashing. /cc @blasten
c: crash,platform-android,tool,P2,team-android,triaged-android
low
Critical
539,370,250
TypeScript
Report "cannot emit file" error if any output file would overwrite a referenced project's output
Found as part of investigating #35468, where a referencing project overwrites `index.js` at the same location as the referenced project's output.
Bug
low
Critical
539,370,924
pytorch
torch.scatter_logsumexp
## πŸš€ Feature torch.scatter_add will distribute values over an output tensor, summing if multiple values land in the same destination coordinate. torch.logsumexp performs addition in linear space of quantities with log-space values, useful for adding probabilities which are stored as log likelihoods, for example. torch.scatter_logsumexp would behave the same as torch.scatter_add, but instead of summing in linear space, it would combine values with logsumexp. ## Motivation I am experimenting with alterations to CTC, and Python implementations of CTC have to sum probabilities from identical labels in a target sequence to get the total probability of a particular token. The algorithm is much more numerically stable if implemented with log likelihoods instead of linear probabilities, and torch.scatter_logsumexp would simplify this kind of thing. ## Alternatives Currently, log likelihoods either have to be exponentiated and then scatter_add'ed (numerically unstable), or scattered into a target tensor with an expanded extra dimension (so no two values land together) and then logsumexp'd over that extra dimension. cc @ezyang @gchanan @zou3519
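A minimal pure-Python sketch of the requested semantics (the function name and list-based signature are invented for illustration; a real implementation would operate on tensors and a dim argument): each source value is accumulated into its destination slot with the usual max-shift trick, so the whole thing stays stable in log space.

```python
import math

NEG_INF = float("-inf")

def scatter_logsumexp(index, src, size):
    """out[i] = log(sum(exp(s) for s, j in zip(src, index) if j == i)).
    Empty slots stay at -inf, i.e. log(0)."""
    out = [NEG_INF] * size
    for i, s in zip(index, src):
        a, b = out[i], s
        m = max(a, b)
        if m == NEG_INF:        # both values are log(0); avoid -inf - -inf
            continue
        # Stable pairwise logsumexp: m + log(exp(a-m) + exp(b-m)).
        out[i] = m + math.log(math.exp(a - m) + math.exp(b - m))
    return out

# Two probabilities of 0.25 land in slot 0; their log-space sum is log(0.5).
result = scatter_logsumexp(
    [0, 0, 1],
    [math.log(0.25), math.log(0.25), math.log(0.5)],
    2,
)
```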
triaged,function request,module: scatter & gather ops
low
Major
539,398,378
rust
Implicit jobserver token is yielded by rayon
Rayon is currently yielding the implicit token "owned" by the process which means we can end up in a situation where e.g. a `-j1` build has more than 1 rustc running (even though only one of them is actively running, but they're all consuming memory and other resources, so this is a bug). The current intended fix is to stop yielding the implicit token by keeping track of the amount of tokens we've acquired/yielded and just skipping the yield (and eventual reacquire) for that implicit token. cc https://github.com/rust-lang/rust/issues/64750
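The intended bookkeeping can be sketched in a few lines of Python (names are illustrative only, not the real jobserver API): the process counts the tokens it holds, and a release of the last held token, i.e. the implicit one, is silently skipped instead of being yielded back.

```python
class TokenTracker:
    """Sketch: track held tokens so the implicit one is never yielded."""

    def __init__(self):
        self.held = 1          # the implicit token every process owns

    def acquire(self):
        # In the real jobserver this would read a byte from the pipe.
        self.held += 1

    def release(self):
        """Return True only if a token was actually given back."""
        if self.held > 1:
            self.held -= 1     # in reality: write the byte back to the pipe
            return True
        return False           # keep the implicit token; skip the yield
```

With this in place, a `-j1` build can never hand its single implicit token to a second rustc, because `release()` on the last held token is a no-op.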
T-compiler,C-bug,WG-compiler-parallel
low
Critical
539,411,971
TypeScript
"types" field in package.json pointing to a `.ts` file in node_modules results in the file being compiled and type checked
Continuing from https://github.com/Microsoft/TypeScript/issues/22228 cc @evil-shrike I have this issue. @mhegazy said > the lib has either `"types": "index.ts"` or a `.ts` file at its root. which is wrong. a library should not expose its sources, only its declarations. But features like inferred return types (f.e. when making class-factory mixins) are not compilable to declaration files, resulting in errors like ``` error TS4025: Exported variable 'html' has or is using private name 'htmlBind'. error TS4031: Public property '_currentArea' of exported class has or is using private name 'AreaInternal'. error TS4055: Return type of public method from exported class has or is using private name 'PartHelper'. error TS4073: Parameter 'partHelper' of public method from exported class has or is using private name 'PartHelper'. error TS4078: Parameter 'options' of exported function has or is using private name 'ExtendOptions'. ``` > A .d.ts does not have expressions.. it represents the shape of the API. Not entirely true. As far as I know, the only way to use features (that declaration files don't support) in downstream projects is to get types directly from `.ts` source files. This makes the need to point `types` to `.ts` source files a valid use case. --- This is what I think should happen: If `"types"` points to a `.ts` file, and `"main"` points to a `.js` file, then the compiler should use the `.ts` file only for type definitions and not compile or type-check the code. `"main"` can serve as a guide to telling the compiler whether it should compile sources, or read js files. `"types"` should be for... specifying the source of _types_. Unless I missed it, there's no other way to include types for features that aren't representable in declaration files. Why is it that declaration features don't match source features? It seems that an important goal should be for declaration features to always have the capability of matching source features.
Suggestion,Awaiting More Feedback
low
Critical
539,419,711
flutter
[url_launcher] Result doesn't fire on early dismissal (iOS)
If you launch a URL on iOS and hit the Done button before the initial load completes, the FlutterResult never fires back to the caller. I think the SafariVC delegate method which fires the FlutterResult does not get called in this case.
platform-ios,p: url_launcher,package,P2,team-ios,triaged-ios
low
Major
539,422,667
go
cmd/link: should log searching directory
### What version of Go are you using (`go version`)? <pre> $ go version go version go1.13 linux/amd64 </pre> ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? <details><summary><code>go env</code> Output</summary><br><pre> $ go env GO111MODULE="" GOARCH="amd64" GOBIN="" GOCACHE="/root/.cache/go-build" GOENV="/root/.config/go/env" GOEXE="" GOFLAGS="" GOHOSTARCH="amd64" GOHOSTOS="linux" GONOPROXY="" GONOSUMDB="*.ctripcorp.com" GOOS="linux" GOPATH="/root/go" GOPRIVATE="" GOPROXY="https://goproxy.io,direct" GOROOT="/usr/local/go" GOSUMDB="sum.golang.org" GOTMPDIR="" GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64" GCCGO="gccgo" AR="ar" CC="gcc" CXX="g++" CGO_ENABLED="1" GOMOD="" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build028007412=/tmp/go-build -gno-record-gcc-switches" </pre></details> ### What did you do? <pre> $ go tool link -v main.o HEADER = -H5 -T0x401000 -R0x1000 searching for runtime.a in /usr/local/go/pkg/linux_amd64/runtime.a 0.00 deadcode 0.02 symsize = 0 0.03 pclntab=251242 bytes, funcdata total 57287 bytes 0.04 dodata 0.04 dwarf 0.11 asmb 0.11 codeblk 0.12 rodatblk 0.12 datblk 0.12 reloc 0.12 sym 0.12 symsize = 2688 0.13 symsize = 34560 0.13 dwarf 0.13 headr 0.13 cpu time 21167 symbols 34132 liveness data </pre> ### What did you expect to see? searching for runtime.a in /usr/local/go/pkg/linux_amd64/ ### What did you see instead? searching for runtime.a in /usr/local/go/pkg/linux_amd64/runtime.a
NeedsInvestigation,compiler/runtime
low
Critical
539,565,558
create-react-app
Remove fsevents from optionalDependencies
Is there a reason why `fsevents` is in the optional dependencies at the react-scripts package? It's never used.
issue: proposal
low
Major
539,615,504
TypeScript
Add support for URI style import
# Search Terms browser es module import url ## Suggestion Related issues: #28985, #19942 In browser and deno, import from a "package" is different from the node style. ```ts // it's valid in browser but not in TypeScript import lodash from "https://unpkg.com/[email protected]/lodash.js"; // it's valid in deno but not in TypeScript import { serve } from "https://deno.land/[email protected]/http/server.ts"; ``` Currently there is no way to let TypeScript automatically map the URI to another place like `@types/*` or `$DENO_DIR/deps/https/deno.land/*` The current `path` can map a simple pattern of import module specifier to another place, but in the URI style import, a more flexible way to map the URI is required. ### Proposal (maybe add a new `moduleResolution: browser`) Add a new `uriPaths` that allows to map from a RegExp to a local path. It will NOT effect the emitted code. Just a way to find the type definition for those URIs. I'm willing to implement this feature but I'm not sure if TypeScript will accept this. ## Use Cases For TypeScript to find definition file for imports like `https://unpkg.com/lodash` ## Examples ```jsonc { "compilerOptions": { "uriPaths": { "https://unpkg.com/(.+?)@.+?/.+": "./node_modules/@types/$1", "https://deno.land/(.+?)@v.+?/(.+?)/(.+)": "$DENO_DIR/deps/https/deno.land/$1/$2/$3", "std:(.+)": "./node_modules/builtin-module-types/$1" } } } ``` This rule map https://unpkg.com/[email protected]/lodash.js to `@types/lodash-es` Map https://deno.land/[email protected]/http/server.ts to `$DENO_DIR/deps/https/deno.land/std/http/server.ts`. \$DENO_DIR is an environment variable. By default, on Windows, it's `~\AppData\Local\deno\deps\https\deno.land\std\http\server.ts`. By default on Linux, it is `~/.deno/deps/https/deno.land/std/http/server.ts`. 
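The proposed lookup can be sketched in plain Python (illustrative only; real resolution would live inside the module resolver). The patterns mirror the `uriPaths` entries proposed above: try each URI pattern in order and expand the capture groups into the mapped local path.

```python
import re

# Ordered (pattern, replacement) pairs, as in the proposed "uriPaths".
URI_PATHS = [
    (r"https://unpkg\.com/(.+?)@.+?/.+", r"./node_modules/@types/\1"),
    (r"https://deno\.land/(.+?)@v.+?/(.+?)/(.+)",
     r"$DENO_DIR/deps/https/deno.land/\1/\2/\3"),
]

def resolve_types(uri):
    """Map an import URI to the local path holding its type definitions."""
    for pattern, target in URI_PATHS:
        m = re.fullmatch(pattern, uri)
        if m:
            return m.expand(target)
    return None   # not a URI import; fall back to normal resolution
```

(Scoped packages like `@angular/core` would need a smarter unpkg pattern; this is just the shape of the mechanism.)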
## Checklist My suggestion meets these guidelines: * [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code * [x] This wouldn't change the runtime behavior of existing JavaScript code * [x] This could be implemented without emitting different JS based on the types of the expressions * [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.) * [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
Suggestion,In Discussion
high
Critical
539,637,375
angular
Route is overridden with base href in Location
# 🐞 bug report ### Affected Package The issue is caused by package @angular/common ### Is this a regression? No ### Description I have the following structure in my project: - `APP_BASE_HREF` provided globally as value `/a/` - Defined route `/article` When entering the `/article` page, I expect to get redirected to `/a/article`, but instead Location "helps" the Router get to `/a/rticle`, thus producing a `route not found` error. ## πŸ”¬ Minimal Reproduction https://stackblitz.com/edit/angular-issue-repro2-bhrjmo ## πŸ”₯ Exception or Error <pre><code> ERROR Error: Uncaught (in promise): Error: Cannot match any routes. URL Segment: 'rticle' Error: Cannot match any routes.
URL Segment: 'rticle' at ApplyRedirects.noMatchError (apply_redirects.ts:110) at CatchSubscriber.eval [as selector] (apply_redirects.ts:87) at CatchSubscriber.error (catchError.ts:119) at MapSubscriber.Subscriber._error (Subscriber.ts:145) at MapSubscriber.Subscriber.error (Subscriber.ts:115) at MapSubscriber.Subscriber._error (Subscriber.ts:145) at MapSubscriber.Subscriber.error (Subscriber.ts:115) at MapSubscriber.Subscriber._error (Subscriber.ts:145) at MapSubscriber.Subscriber.error (Subscriber.ts:115) at TapSubscriber._error (tap.ts:126) at resolvePromise (zone.js:814) at resolvePromise (zone.js:771) at eval (zone.js:873) at ZoneDelegate.invokeTask (zone.js:421) at Object.onInvokeTask (ng_zone.ts:262) at ZoneDelegate.invokeTask (zone.js:420) at Zone.runTask (zone.js:188) at drainMicroTaskQueue (zone.js:595) </code></pre> ## 🌍 Your Environment **Angular Version:** <pre><code> <!-- run `ng version` and paste output below --> <!-- ✍️--> Angular CLI: 8.3.20 Node: 13.1.0 OS: win32 x64 Angular: 8.2.14 ... animations, common, compiler, compiler-cli, core, forms ... language-service, platform-browser, platform-browser-dynamic ... 
platform-server, router, service-worker Package Version -------------------------------------------------------------------- @angular-devkit/architect 0.803.20 @angular-devkit/build-angular 0.803.20 @angular-devkit/build-optimizer 0.803.20 @angular-devkit/build-webpack 0.803.20 @angular-devkit/core 8.3.20 @angular-devkit/schematics 8.3.20 @angular/cdk 8.2.3 @angular/cli 8.3.20 @angular/pwa 0.803.20 @ngtools/webpack 8.3.20 @nguniversal/express-engine 8.1.1 @nguniversal/module-map-ngfactory-loader 8.1.1 @schematics/angular 8.3.20 @schematics/update 0.803.20 rxjs 6.5.3 typescript 3.5.3 webpack 4.41.2 </code></pre> **Anything else relevant?** It would be great to have Location service overridable so I can add some custom [normalize ](https://github.com/angular/angular/blob/master/packages/common/src/location/location.ts#L122) logic, but I'm unable to do so since base href field inside Location is private
type: bug/fix,freq2: medium,area: router,state: confirmed,router: URL parsing/generation,router: config matching/activation/validation,P3
low
Critical
539,650,536
godot
Make upgrading Godot remove existing .mono and .import directories.
**Godot version:** Since the `.mono` and `.import` directories were added. **OS/device including version:** N/A **Issue description:** With newer versions of Godot (especially upgrades such as 3.1 to 3.2), `.mono` (and sometimes `.import`) directories, being left untouched, cause issues when building/running/exporting projects that have been upgraded from a previous version. Some mechanism for newer versions of Godot to detect this and remove the `.mono` and/or `.import` directories as required would be useful.
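One possible shape of such a mechanism, sketched in Python (the `.godot_version` marker file is invented for illustration; the engine would use its own project metadata): on open, compare the last-seen version with the current one and clear the generated caches if they differ.

```python
import shutil
from pathlib import Path

def clear_stale_caches(project: Path, current_version: str) -> bool:
    """Remove generated caches when the project was last opened by a
    different Godot version, so they get rebuilt from scratch."""
    marker = project / ".godot_version"        # hypothetical marker file
    last = marker.read_text().strip() if marker.exists() else None
    if last == current_version:
        return False                           # same version: keep caches
    for name in (".mono", ".import"):
        shutil.rmtree(project / name, ignore_errors=True)
    marker.write_text(current_version)
    return True
```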
enhancement,topic:import
low
Major
539,652,530
create-react-app
HOST is ignored in yarn start command (see #2843 as reference)
Reopening of #2843 as it is locked and resolved. In all Unix networking a bind call is based on HOST and PORT. E.g. you can bind two servers, one to 123.123.123.123:3000 and one to 127.0.0.1:3000. The authors in #2843 propose that this is confusing and disallow starting a dev server (`yarn start`) when the port (regardless of the host) is already in use, violating basic Unix principles. This prohibits the use of a dev server behind a proxy that, e.g., exposes the dev port behind an authorization layer. I propose to follow standard Unix principles and simplify the code by just trying to bind to the host:port combination. If the port is busy, Node will give an adequate error message; there is no need to use an extra dependency or an error message that restricts versatility.
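The claim that a port is scoped to a host is easy to check with a few lines of Python (on Linux the whole 127.0.0.0/8 range is loopback, so 127.0.0.2 is bindable without configuration; other systems may differ):

```python
import socket

# Two listeners on the SAME port but DIFFERENT host addresses.
a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
a.bind(("127.0.0.1", 0))              # let the OS pick a free port
port = a.getsockname()[1]

b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
b.bind(("127.0.0.2", port))           # same port is fine: the host differs

a_addr, b_addr = a.getsockname(), b.getsockname()
a.close()
b.close()
```

Refusing to start because the port number alone is taken rejects exactly this valid configuration.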
issue: proposal
low
Critical
539,708,742
pytorch
Support DataParallel with PackedSequence
## πŸš€ Feature Ability to feed PackedSequence into DataParallel model as input. This can reduce wasteful data transfer when dealing with variable length data. ## Motivation Dealing with variable length data as input to DataParallel is painful. If you put them into a python list, DataParallel scatter them along wrong axis (instead of splitting the list into equal size, it splits the elements inside the list). You can pad them to the same size, but then you waste memory and transfer bandwidth moving meaningless padding values. The ideal solution would be DataParallel supporting PackedSequence input, because PackedSequence is designed for handling variable length data. ## Pitch I propose that we can do something like the below example. ```python from torch.nn.utils.rnn import pack_sequence # batch size = 2 and sequence lengths = (2, 3) data = pack_sequence([ torch.tensor([[0.1, 0.1, 0.1], [0.2, 0.2, 0.2]]), torch.tensor([[0.1, 0.1, 0.1], [0.2, 0.2, 0.2], [0.3, 0.3, 0.3]]) ], enforce_sorted=False ) model = nn.LSTM(3, 100) dp_model = nn.DataParallel(model) dp_model.to('cuda') output, _ = dp_model(data) output # PackedSequence ``` ## Alternatives Padding variable length data to the same length and use them as input. This is wasteful in terms of memory and transfer bandwidth. 
## Additional context FYI, the above example produces this error with 1.2.0 version ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__ result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 152, in forward outputs = self.parallel_apply(replicas, inputs, kwargs) File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 162, in parallel_apply return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 85, in parallel_apply output.reraise() File "/opt/conda/lib/python3.7/site-packages/torch/_utils.py", line 369, in reraise raise self.exc_type(msg) AttributeError: Caught AttributeError in replica 0 on device 0. Original Traceback (most recent call last): File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker output = module(*input, **kwargs) File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__ result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/rnn.py", line 564, in forward return self.forward_tensor(input, hx) File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/rnn.py", line 539, in forward_tensor max_batch_size = input.size(0) if self.batch_first else input.size(1) AttributeError: 'tuple' object has no attribute 'size' ``` cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @xush6528
oncall: distributed,module: bootcamp,triaged,enhancement,module: data parallel
low
Critical
539,717,361
storybook
Add Flutter Support
I love storybook and would like Flutter support for it... I know that today it only works for JS frameworks, but can we change it? Have you considered changing storybookjs to just storybook? I love the JS stack, but storybook is already big enough to support other languages. Dart, for example... I can't help a lot with bringing it to life, but I know a great project ([Dashbook](https://github.com/erickzanardo/dashbook)) and developer ([Erick Zanardo](https://github.com/erickzanardo)) trying it in storybook standards. Can we give it a try? ![dashbook](https://user-images.githubusercontent.com/11022437/71094623-0a494980-218a-11ea-9210-55fb4fe06a75.gif)
feature request
high
Critical
539,736,386
godot
Animation position shifts by 0.0001 second, regardless of animation step size
I'm using Godot 3.2.beta3 on Windows 10 Home; this issue was the same for beta2. When using the AnimationPlayer for 2D animation with a bone-rigged (Polygon2D, Skeleton2D, Bone2D) character, when I try to step between key positions in my animation using the up and down arrows next to the "animation position" text box, the time shifts by 0.0001 second, regardless of the animation step I'm using (0.05 second in the example). I would think this is a bug, since stepping by so little has sadly rendered the arrows useless to me, and I would think it's the same for other animators. ![animationposition](https://user-images.githubusercontent.com/50481103/71097077-b9e4e280-21af-11ea-8edc-c6166d92352a.png)
bug,topic:editor,regression,topic:animation
low
Critical
539,755,149
rust
Windows DllMain name is exported
On Windows, Rust exports the name of `DllMain` (`std`) / `_DllMainCRTStartup` (`no_std`), which is not standard behaviour and unwanted in most cases (that I am aware of). [As far as I understand](https://github.com/rust-lang/rust/issues/37530#issuecomment-259799687) this happens because they must be marked with either `#[no_mangle]` or `#[export_name = ".."]` so the `link.exe` can find the symbol. The `extern` keyword must also be used to mark them as `stdcall` for 32 bit targets. Example project: - Cargo.toml ```toml [lib] crate-type = ["cdylib"] ``` - lib.rs (`std`) ```rust #[no_mangle] extern "system" fn DllMain(_: *const u8, _: u32, _: *const u8) -> u32 { 1 } ``` - lib.rs (`no_std`) ```rust #![no_std] #[panic_handler] fn panic(_: &core::panic::PanicInfo) -> ! { loop {} } #[no_mangle] extern "system" fn _DllMainCRTStartup(_: *const u8, _: u32, _: *const u8) -> u32 { 1 } ``` ## dumpbin output - Example Windows dll file: ``` Dump of file C:\Windows\System32\httpprxp.dll File Type: DLL Section contains the following exports for httpprxp.dll 00000000 characteristics 0.00 version 1 ordinal base 8 number of functions 8 number of names ordinal hint RVA name 1 0 00001870 ProxyHelperGetProxyEventInformation 2 1 00001D00 ProxyHelperProviderConnectToServer 3 2 00001DC0 ProxyHelperProviderDisconnectFromServer 4 3 000010E0 ProxyHelperProviderFreeMemory 5 4 00001220 ProxyHelperProviderRegisterForEventNotification 6 5 00001AC0 ProxyHelperProviderSetProxyConfiguration 7 6 00001BA0 ProxyHelperProviderSetProxyCredentials 8 7 00001620 ProxyHelperProviderUnregisterEventNotification Summary 1000 .data 1000 .didat 1000 .pdata 2000 .rdata 1000 .reloc 1000 .rsrc 2000 .text ``` - Rust example from above (`std`): ``` Dump of file std.dll File Type: DLL Section contains the following exports for std.dll 00000000 characteristics 0.00 version 1 ordinal base 2 number of functions 2 number of names ordinal hint RVA name 1 0 00001000 DllMain = DllMain 2 1 00001010 rust_eh_personality = 
rust_eh_personality Summary 1000 .data 1000 .pdata 1000 .rdata 1000 .reloc 1000 .text ``` - Rust example from above (`no_std`): ``` Dump of file no_std.dll File Type: DLL Section contains the following exports for no_std.dll 00000000 characteristics 0.00 version 1 ordinal base 1 number of functions 1 number of names ordinal hint RVA name 1 0 00001000 _DllMainCRTStartup = _DllMainCRTStartup Summary 1000 .rdata 1000 .text ``` ## Expected output `DllMain` and `_DllMainCRTStartup` should not be exported by name, just as in the first dumpbin output example of a Windows system dll. ## Tested compiler version ``` rustc 1.41.0-nightly (c8ea4ace9 2019-12-14) binary: rustc commit-hash: c8ea4ace9213ae045123fdfeb59d1ac887656d31 commit-date: 2019-12-14 host: x86_64-pc-windows-msvc release: 1.41.0-nightly LLVM version: 9.0 ```
A-linkage,O-windows,T-compiler,O-windows-msvc,C-bug
low
Major
539,763,615
rust
Missing StorageLive and StorageDead Statements for some locals
It seems that the StorageLive and StorageDead statements are not generated for every local of a function. I tried this code: ```rust pub fn main() { let mut x = 5; x = call(x); } fn call(i: usize) -> usize { i * 2 } ``` I expected to see something like this happen: ``` fn call(_1: usize) -> usize { let mut _0: usize; let mut _2: usize; let mut _3: (usize, bool); bb0: { StorageLive(_2); _2 = _1; StorageLive(_3); // ADDED _3 = CheckedMul(move _2, const 2usize); assert(!move (_3.1: bool), "attempt to multiply with overflow") -> bb1; } bb1: { _0 = move (_3.0: usize); StorageDead(_2); StorageDead(_3); // ADDED return; } } ``` Instead, this happened: ``` fn call(_1: usize) -> usize { let mut _0: usize; let mut _2: usize; let mut _3: (usize, bool); bb0: { StorageLive(_2); _2 = _1; _3 = CheckedMul(move _2, const 2usize); assert(!move (_3.1: bool), "attempt to multiply with overflow") -> bb1; } bb1: { _0 = move (_3.0: usize); StorageDead(_2); return; } } ``` ## Meta `rustc --version --verbose`: rustc 1.41.0-nightly (25d8a9494 2019-11-29) binary: rustc commit-hash: 25d8a9494ca6d77361e47c1505ecf640b168819e commit-date: 2019-11-29 host: x86_64-unknown-linux-gnu release: 1.41.0-nightly LLVM version: 9.0 Reproducible on the [Rust Playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&code=pub%20fn%20main()%20%7B%0A%20%20%20%20let%20mut%20x%20%3D%205%3B%0A%20%20%20%20x%20%3D%20call(x)%3B%0A%7D%0A%0Afn%20call(i%3A%20usize)%20-%3E%20usize%20%7B%0A%20%20%20%20i%20*%202%0A%7D%0A) in the stable and nightly channels
T-compiler,A-MIR,C-bug
low
Critical
539,772,659
godot
VERTEX in canvas_item shader: visibility problem
**Godot version:** 3.2 beta 4 **OS/device including version:** Windows 7 **Issue description:** Hello everyone, I find it difficult to unambiguously call this a bug or a missing feature; the problem is the following: when a Sprite or Polygon2D is entirely off the screen, the engine does not draw it, which is logical. But if the object is slightly expanded by moving VERTEX in the shader (canvas_item), a piece of it will fall into the screen area, yet it still will not be drawn, apparently because the shader runs after the check of whether the object is inside the screen or viewport. Is there a tool to solve the problem? (For example, a way to somehow force this object to be drawn.) **Steps to reproduce:** Place a Sprite or Polygon2D outside the screen or viewport, then in the canvas_item shader change the VERTEX position so that the visible part of the object is inside the screen. **Minimal reproduction project:** https://github.com/mishkarch/vertex-bug
enhancement,topic:rendering,confirmed,topic:shaders,topic:2d
low
Critical
539,783,388
pytorch
Add option in LSTM layer to access all cell states of all time steps
## πŸš€ Feature The LSTM layer in torch.nn should have the option to output the cell states of all time steps along with the hidden states of each time step. ## Motivation When implementing Recurrent Replay Distributed DQN (R2D2) ([see here](https://openreview.net/forum?id=r1lyTjAqYX)), I need access to all the hidden states and cell states of all the time steps of a sequence. Those hidden states and cell states are stored in an array to later continue training the LSTM-based network from a different time step. This is possible using the LSTMCell and a for-loop, but this is painfully slow, around 15-30 times slower than using an LSTM layer, even when using a GPU. This is the first major roadblock that I encountered when using PyTorch because it hinders me from using the R2D2 technique, as the current technique of using LSTMCells is simply way too slow. ## Pitch I would like a flag in the LSTM layer which can be toggled to True, which will change the output behavior of the LSTM layer from `output, (h_n, c_n)` to `output, cell_states, (h_n, c_n)`. Output comprises all hidden states (h_1, ... h_n) and cell_states comprises all cell states (c_1, ... c_n). The name of the flag could be "return_states" or "return_cell_states". ## Alternatives An alternative solution would be to somehow add a way to make the speed of using LSTMCells match the speed of the LSTM layer. This seems quite difficult to me to achieve.
## Additional Context: Other people seem to have this issue as well, see [here](https://discuss.pytorch.org/t/lstm-internal-cells-for-each-state/9987/3) or [here](https://discuss.pytorch.org/t/how-to-get-cell-states-for-each-timestep-from-nn-lstm/10907/5). cc @zou3519
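To illustrate why per-step cell states matter, here is a minimal pure-Python sketch of an unrolled LSTM cell that collects both h_t and c_t at every step. This is a toy 1-unit cell with all weights fixed to 0.5 and zero biases, purely to show the data flow, not PyTorch's actual implementation; conceptually it is what the LSTMCell-plus-for-loop workaround does, just without the speed of the fused LSTM kernel:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_cell(x, h, c):
    # Toy 1-unit LSTM cell: every weight is 0.5 and every bias is 0,
    # purely to keep the example self-contained and deterministic.
    i = sigmoid(0.5 * x + 0.5 * h)     # input gate
    f = sigmoid(0.5 * x + 0.5 * h)     # forget gate
    g = math.tanh(0.5 * x + 0.5 * h)   # candidate cell state
    o = sigmoid(0.5 * x + 0.5 * h)     # output gate
    c_new = f * c + i * g
    h_new = o * math.tanh(c_new)
    return h_new, c_new

def unroll(sequence):
    """Run the cell over a sequence, keeping BOTH h_t and c_t per step."""
    h, c = 0.0, 0.0
    hidden_states, cell_states = [], []
    for x in sequence:
        h, c = lstm_cell(x, h, c)
        hidden_states.append(h)
        cell_states.append(c)
    return hidden_states, cell_states
```

In the proposed API, `nn.LSTM` would return the stacked `cell_states` alongside `output`, instead of only the final `c_n`, so this per-step bookkeeping would not need a slow Python loop.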
feature,module: nn,module: rnn,triaged
low
Major
539,838,940
pytorch
cuDNN convolution does not handle empty input tensor
## πŸ› Bug Discovered by @ngimel code ```python import torch print(torch.__version__) print(torch.version.cuda) b = torch.nn.Conv2d(32, 32, 2, bias=False, padding=1) a = torch.randn(2, 32, 0, 5) print('cpu:', b(a).shape) b.cuda() a = a.cuda() print('cuda:', b(a).shape) ``` output ``` 1.4.0.dev20191217 10.1 cpu: torch.Size([2, 32, 1, 6]) Traceback (most recent call last): File "conv-empty.py", line 10, in <module> print('cuda:', b(a).shape) File "/home/xgao/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 539, in __call__ result = self.forward(*input, **kwargs) File "/home/xgao/anaconda3/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 345, in forward return self._conv_forward(input, self.weight) File "/home/xgao/anaconda3/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 342, in _conv_forward self.padding, self.dilation, self.groups) RuntimeError: cuDNN error: CUDNN_STATUS_BAD_PARAM ``` ## Environment ``` PyTorch version: 1.4.0.dev20191217 Is debug build: No CUDA used to build PyTorch: 10.1 OS: Ubuntu 18.04.3 LTS GCC version: (Ubuntu 8.3.0-6ubuntu1~18.04.1) 8.3.0 CMake version: version 3.14.0 Python version: 3.7 Is CUDA available: Yes CUDA runtime version: 10.1.324 GPU models and configuration: GPU 0: Tesla V100-DGXS-16GB GPU 1: Tesla V100-DGXS-16GB GPU 2: Tesla V100-DGXS-16GB GPU 3: Tesla V100-DGXS-16GB Nvidia driver version: 430.50 cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.5.0 Versions of relevant libraries: [pip] numpy==1.17.2 [pip] numpydoc==0.9.1 [pip] pytorch-ignite==0.2.1 [pip] torch==1.4.0.dev20191217 [pip] torch-nightly==1.2.0.dev20190722 [pip] torchani==1.0.2.dev29+gd32081e [pip] torchvision==0.4.2 [conda] blas 1.0 mkl [conda] magma-cuda100 2.5.1 1 pytorch [conda] magma-cuda101 2.5.1 1 pytorch [conda] mkl 2019.4 243 [conda] mkl-include 2019.4 243 [conda] mkl-service 2.3.0 py37h516909a_0 conda-forge [conda] mkl_fft 1.0.15 py37h516909a_1 conda-forge [conda] mkl_random 1.1.0 py37hb3f55d8_0 conda-forge 
[conda] pytorch-ignite 0.2.1 pypi_0 pypi [conda] torch 1.4.0.dev20191217 pypi_0 pypi [conda] torch-nightly 1.2.0.dev20190722 pypi_0 pypi [conda] torchani 1.0.2.dev29+gd32081e dev_0 <develop> [conda] torchvision 0.4.2 pypi_0 pypi ``` cc @ngimel @csarofeen @ptrblck @ezyang @gchanan @zou3519
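For reference, the CPU shape above follows directly from the standard convolution output-size formula; a quick pure-Python check (`conv_out_size` is an illustrative helper, not a PyTorch API) shows that the zero-size height dimension legitimately maps to a size-1 output because of the padding, so the cuDNN path should produce the same shape instead of raising `CUDNN_STATUS_BAD_PARAM`:

```python
def conv_out_size(in_size, kernel, padding=0, stride=1, dilation=1):
    # Standard convolution output-size formula (as documented for nn.Conv2d).
    eff_kernel = dilation * (kernel - 1) + 1
    return (in_size + 2 * padding - eff_kernel) // stride + 1

# The failing case from the report: Conv2d(32, 32, 2, padding=1)
# on an input with spatial dims (0, 5).
out_h = conv_out_size(0, kernel=2, padding=1)  # empty height dim
out_w = conv_out_size(5, kernel=2, padding=1)
# (out_h, out_w) == (1, 6), matching the CPU result torch.Size([2, 32, 1, 6])
```

A backend-agnostic guard could compute this expected shape up front and skip the cuDNN call when the input has zero elements, returning a correctly shaped output directly.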
module: cudnn,module: cuda,triaged,small
low
Critical
539,858,110
flutter
TextField Cut/Copy/Paste popup menu is clipped on iOS using Larger Text setting
## Steps to Reproduce 1. Run the Flutter Gallery sample application on a real iOS device. 2. Select Material > Text fields. 3. Type some text into one of the fields. 4. Double-tap on the text just entered to expose the Cut | Copy | Paste popup menu. 5. Notice the popup menu text is clipped on the bottom half. ## Flutter doctor ```console [βœ“] Flutter (Channel beta, v1.12.13+hotfix.6, on Mac OS X 10.15.1 19B88, locale en-US) β€’ Flutter version 1.12.13+hotfix.6 at /Users/name/Development/flutter β€’ Framework revision 18cd7a3601 (7 days ago), 2019-12-11 06:35:39 -0800 β€’ Engine revision 2994f7e1e6 β€’ Dart version 2.7.0 [βœ“] Android toolchain - develop for Android devices (Android SDK version 28.0.3) β€’ Android SDK at /Users/name/Library/Android/sdk β€’ Android NDK location not configured (optional; useful for native profiling support) β€’ Platform android-28, build-tools 28.0.3 β€’ Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java β€’ Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405) β€’ All Android licenses accepted. [βœ“] Xcode - develop for iOS and macOS (Xcode 11.1) β€’ Xcode at /Applications/Xcode.app/Contents/Developer β€’ Xcode 11.1, Build version 11A1027 β€’ CocoaPods version 1.8.4 [βœ“] Android Studio (version 3.5) β€’ Android Studio at /Applications/Android Studio.app/Contents β€’ Flutter plugin version 42.1.1 β€’ Dart plugin version 191.8593 β€’ Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405) [βœ“] VS Code (version 1.39.1) β€’ VS Code at /Applications/Visual Studio Code.app/Contents β€’ Flutter extension version 3.5.1 [βœ“] Connected device (1 available) β€’ iPhone 11 Pro β€’ E8363CE9-9E6B-4344-9096-654BEA2FE787 β€’ ios β€’ com.apple.CoreSimulator.SimRuntime.iOS-13-1 (simulator) β€’ No issues found! ``` ## Screenshot ![IMG_F76542EBBF82-1](https://user-images.githubusercontent.com/17515596/71114693-9741d200-218d-11ea-9fd8-d3963a163735.jpeg)
a: text input,platform-ios,framework,f: material design,a: accessibility,a: fidelity,a: typography,has reproducible steps,found in release: 3.3,found in release: 3.7,team-design,triaged-design
low
Major
539,858,791
pytorch
[jit] `del` with slices doesn't work
See #31273 cc @suo
oncall: jit,triaged
low
Minor
539,858,988
flutter
We should probably warn people if OS-generated files are getting included in their assets
See https://github.com/flutter/flutter/issues/46163#issuecomment-565780153 - we probably want to warn people if they're inadvertently including something like `.DS_Store` in their assets.
tool,a: assets,P3,team-tool,triaged-tool
low
Minor
539,875,355
PowerToys
Side by side virtual desktops on widescreen monitors
# Summary of the new feature/enhancement I’ve always been a big fan of side-by-side window snapping, but I wish I could pin specific apps/windows to a particular side and have them do all their window management within that rect. Obvious cases: browser on the left, VS Code on the right, etc. # Proposed technical implementation details (optional) Ideally, just allow me to pin two existing virtual desktops into a single display, side by side. TBD on taskbars; there could be one in each, or alternately they share a single one and window management/new apps go to the last active desktop.
Idea-Enhancement,Product-Virtual Desktop
low
Major
539,887,726
godot
Unable to move parts in a new animation for a 2D character who already has animations
**Godot version:** ->3.1.1 stable.flathub ->3.1.2 stable.official I witnessed the issue on both versions, just to verify that it wasn't an issue related only to the flatpak version. **OS/device including version:** ->Ubuntu Linux 19.04 64bit Vanilla Gnome on Xorg ->Ryzen 2400G ->Radeon RX 560 (Polaris11, DRM 3.27.0, 5.0.0-37-generic) - driver: amdgpu **Issue description:** I make some animations for the player (everything works fine here), test them, save them to the project and then quit Godot. The next time I have free time to get back to work on the animations, I launch Godot, open the player scene, create a new animation and then: - Can create an animation and keys in the animation track editor. - Can't move the parts of the character to create a new pose. - Can't modify the values in the inspector to create the pose. Typing a value in the fields makes the numbers appear, but these numbers revert to the previous value once Enter is pressed, thus not allowing the values to be changed. Making a new player and creating new animations works. The issue arises when the character already has some animations made and new ones are to be created. **Steps to reproduce:** These steps are presumed, as it happened both using the character with and without bones, and both with the official and flathub stables: - Create some animations for a character. - Save the animations, quit Godot. - Launch Godot and try to create more animations. **Minimal reproduction project:** [Player.zip](https://github.com/godotengine/godot/files/3980242/Player.zip) There are two scenes: Player (with bones) and Hero (without bones), as the issue happened first with the boned character, so I tried with an unboned one as well.
bug,topic:editor
low
Minor
539,898,204
PowerToys
Storage Allocation Visualizer
I want to know what's using space on my various storage devices. I've used WinDirStat, but it's dated and its UI is certainly neither modern nor Fluent. I would appreciate the ability to delete, recycle, or show items in Explorer.
Idea-New PowerToy
low
Minor
539,907,663
godot
AnimationTree animation flickering
**Godot version:** 3.1.2 **OS/device including version:** Linux **Issue description:** All animations driven by the AnimationTree flicker IF one animation in the AnimationPlayer is left with the AutoPlay flag enabled. No error is reported, which makes this mistake hard to identify and resolve. It might be better to deactivate that flag automatically when the AnimationTree is activated.
bug,topic:core,confirmed,usability
low
Critical
539,930,838
PowerToys
Customize Explorer Navigation Pane
# Summary of the new feature/enhancement I would like a PowerToy that allows the user to easily customize which root items appear in the File Explorer Navigation pane. For example, currently I use [registry edits like this](https://www.tenforums.com/tutorials/7299-add-remove-recycle-bin-navigation-pane-windows-10-a.html) to remove the "Network" section and add "Recycle Bin" to the navigation pane. I imagine a tool that allows a user to select which items show up in that area. Perhaps a dialog that allows a user to check/uncheck which of the default items are visible, or add their own folders there. Perhaps they could optionally choose if the item allows for expanding to view subfolders or behaves more like a shortcut (like quick access items do). A user could even pin any arbitrary folder (similar to the old "explorer from here as root" tool from years past) so they could have quick access to that folder/subfolders. Here's a very rough mockup of what it might look like: ![image](https://user-images.githubusercontent.com/4219926/71124226-03323380-21aa-11ea-9578-ac8082b75aaa.png) **Crutkas:** From https://github.com/microsoft/PowerToys/issues/1984 as well. It would be super useful to be able to tweak File Explorer to be more useful in the sense that we have shortcuts that are relevant for me. For me personally, I don't need "3D Objects" but it would be sweet if I had my Source\Repos folder ("C:\Users\crutkas\source\repos")
Idea-New PowerToy,Product-Tweak UI Design,Product-File Explorer
low
Major
539,935,031
flutter
Image.network's caching strategy is problematic
Currently, `Image.network` will cache the decoded version of the image in the `ImageCache` under the following circumstances: - The cache max allowed size is > 0 - The cache is not already full One problem we have is that if the network image is > cache size, we automatically resize it to make the cache bigger and try again. @gaaclarke has been working on fixing that. Another problem is that we _only_ provide ways to cache the decoded image, which may be much much larger than the encoded data. A developer may want to cache the encoded bytes to avoid a network call, but not cache the decoded image (which might be very large). We should make it easier to do the right thing here. /cc @gaaclarke @goderbauer @jonahwilliams @liyuqian @zmtzawqlp @cbracken
framework,c: performance,a: images,perf: memory,P3,team-framework,triaged-framework
medium
Critical
539,935,325
TypeScript
Poor parameter elaboration
![image](https://user-images.githubusercontent.com/972891/71124747-6025ed80-219a-11ea-95f4-7d601885ccfb.png) From https://twitter.com/artyorsh/status/1206695784101695496
Needs Proposal,In Discussion,Domain: Error Messages,Experience Enhancement
low
Minor
539,936,413
flutter
It should be possible to exclude an image from the ImageCache
A large sliver list or grid of images can cause serious memory issues when scrolling because of caching. By default, we'll cache every image child we can during scrolling, and we don't get to clean up until later. A recent PR https://github.com/flutter/flutter/pull/46357 has tried to address this (and contains some very good visualizations of the memory graph in https://github.com/flutter/flutter/pull/46357#issuecomment-565259623), but it would probably be better to just offer some way to _not_ cache such images at all. If they are network images, we'll want to address https://github.com/flutter/flutter/issues/47378 as well, but separately. As a bonus, we should provide some hint to developers about when they should use this, e.g. by putting a timer on cache entries and detecting cache entries that are only accessed once over some substantive period of time. /cc @gaaclarke @goderbauer @jonahwilliams @liyuqian @zmtzawqlp @cbracken
framework,c: performance,f: scrolling,a: images,perf: memory,P2,team-framework,triaged-framework
low
Major
539,941,446
three.js
Serializing custom classes should use a known type
(title originally "[suggestion] Examples should extend `this.name`, not `this.type`") ##### Description of the problem The example components generally override the `type` property of custom objects/materials/etc., like [Water](https://github.com/mrdoob/three.js/blob/dev/examples/js/objects/Water.js) or the [PMREM Materials](https://github.com/mrdoob/three.js/blob/dev/examples/js/pmrem/PMREMGenerator.js) with types of 'EquirectangularToCubeUV' and 'SphericalGaussianBlur'. As these are still Mesh and ShaderMaterial classes respectively, the original type is lost after serialization, e.g. serializing Water shows its type as `Water`, and there's no longer a reference to this object being a Mesh, other than inference. ObjectLoader also fails because of this, with the resulting objects being Object3Ds. While not all 'custom classes' can be deserialized, understandably, it should be possible for simple Mesh classes that just generate geometry and material. I think for the most part, `this.type` should not be overwritten, and instead `this.name` should be used, or perhaps another property (e.g. `coreType`). Happy to make the changes, but want to be sure I understand how `type` and `name` are to be used and that this change is agreeable.
Suggestion
low
Major
539,945,015
youtube-dl
youtube_dl.utils.UnsupportedError: Unsupported URL:
## Checklist - [X ] I'm reporting a broken site support - [X] I've verified that I'm running youtube-dl version **2019.11.28** - [X] I've checked that all provided URLs are alive and playable in a browser - [X] I've checked that all URLs and arguments with special characters are properly quoted or escaped - [X] I've searched the bugtracker for similar issues including closed ones ## Verbose log
``` youtube-dl is up-to-date (2019.11.28) C:\Users\darku\Downloads>yt -v https://www.viacomcbspressexpress.com/cbs-entertainment/video?watch=3gia1tte9r [debug] System config: [] [debug] User config: [] [debug] Custom config: [] [debug] Command-line args: ['-v', 'https://www.viacomcbspressexpress.com/cbs-entertainment/video?watch=3gia1tte9r'] [debug] Encodings: locale cp1252, fs mbcs, out cp850, pref cp1252 [debug] youtube-dl version 2019.11.28 [debug] Python version 3.4.4 (CPython) - Windows-10-10.0.18362 [debug] exe versions: none [debug] Proxy map: {} [generic] video?watch=3gia1tte9r: Requesting header WARNING: Falling back on generic information extractor.
[generic] video?watch=3gia1tte9r: Downloading webpage [generic] video?watch=3gia1tte9r: Extracting information ERROR: Unsupported URL: https://www.viacomcbspressexpress.com/cbs-entertainment/video?watch=3gia1tte9r Traceback (most recent call last): File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpwy0zjfmc\build\youtube_dl\YoutubeDL.py", line 796, in extract_info File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpwy0zjfmc\build\youtube_dl\extractor\common.py", line 530, in extract File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmpwy0zjfmc\build\youtube_dl\extractor\generic.py", line 3347, in _real_extract youtube_dl.utils.UnsupportedError: Unsupported URL: https://www.viacomcbspressexpress.com/cbs-entertainment/video?watch=3gia1tte9r ``` ## Description This was working a couple of days ago, but I'm now getting the Unsupported URL error.
geo-restricted
low
Critical
539,988,205
PowerToys
Colored Tabs for Virtual Desktops
# Summary of the new feature/enhancement Tabs can be colorized. If you've got a lot of tabs it can be hard to quickly find the right one, so it'd be helpful if there was a second way to identify a tab besides its name. The color could be applied as a background color or as a small marker next to the name.
Idea-Enhancement,Product-Virtual Desktop
low
Minor
540,046,414
pytorch
Integrate `torch.xxx` and `Tensor.xxx`
## πŸš€ Feature Make all `torch.xxx` and `Tensor.xxx` that share the same name actually the same function. ## Motivation Most functions from `torch` and methods from `Tensor` that share the same name actually do the same thing. `a.xxx(b)` is equal to `Tensor.xxx(a,b)` and is equal to `torch.xxx(a,b)`. Therefore, `torch.xxx` ideally should be equivalent to `Tensor.xxx`. But this is not true currently. `torch` functions might have (surprisingly) different arg specs compared to `Tensor` functions. For example, #18095 indicates that `Tensor.flip` allows an integer dimension while `torch.flip` only allows a tuple dimension. There might be more inconsistencies like this hidden in the code base. It's probably better to simply make `torch.xxx` and `Tensor.xxx` the same function. This would fully solve such inconsistency problems. Besides solving the surprising inconsistencies, there are extra advantages: * Easier to port a `torch` function to `Tensor` (for convenience) * Easier to port a `Tensor` function to `torch` (for convenience; currently not all of them are ported) * Easier to develop new functions for both * Better documentation readability (rather than seeing "refer to xxx" in jupyter-notebook) ## Pitch Alias all proper functions in `torch` to the `Tensor` method directly, i.e. something like ```python torch.flip = Tensor.flip ``` (maybe it should be more than this? I currently can't think of any reason not to do this) ## Alternatives One "possible" alternative is to port the whole namespace under `Tensor` (except names starting with an underscore) into `torch`. Another possibility would be to discourage (and deprecate) the use of one of them. ## Additional proposal For in-place operations, there could be a uniformly named flag on every function. Override the `__getattr__` method of `Tensor` such that, when getting an attribute ending with an underscore, it converts to the non-underscored one with `inplace=True`. This also helps make things unified.
## Additional context In `numpy`, `np.xxx` and `np.ndarray.xxx` are also different, but that might be mainly because `np.xxx` allows the first argument to be anything convertible to a `np.ndarray`, while `np.ndarray.xxx` requires the first argument to be a `np.ndarray`. In `pytorch`, both sides need the first argument to be a `Tensor`, therefore both sides provide exactly the same functionality.
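A rough sketch of the aliasing mechanics with stand-in classes (`FakeTensor` and `fake_torch` are hypothetical names; this deliberately glosses over the C-level bindings, overloads, and documentation, which is where the real work would be):

```python
class FakeTensor:
    """Stand-in for torch.Tensor, just to illustrate the aliasing idea."""
    def __init__(self, data):
        self.data = data

    def flip(self, dims=None):
        # Toy 1-D flip; `dims` is accepted only to mirror a real signature.
        return FakeTensor(list(reversed(self.data)))


class fake_torch:
    """Stand-in for the `torch` module namespace."""


# The proposal boils down to one line per function: in Python 3 a method
# accessed on the class is a plain function, so both spellings become
# literally the same object and their arg specs can never diverge.
fake_torch.flip = FakeTensor.flip
```

After the assignment, `fake_torch.flip(t)` and `t.flip()` hit exactly the same function object, so an inconsistency like the integer-vs-tuple `dims` in #18095 could not arise.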
triaged,module: ux
low
Minor
540,074,193
PowerToys
quick action to hide/show icons on the desktop
# Summary of the new feature/enhancement In the desktop's right-click context menu there is an option to hide/show desktop icons. This PowerToy could let the user set a quick action, like a double-click on the desktop or a keyboard shortcut, to toggle that option.
Idea-New PowerToy,Product-Tweak UI Design
medium
Critical
540,082,792
pytorch
DataParallel doesn't properly handle kwargs
## πŸ› Bug When DP is given a specified non-tensor kwarg and the batch tensor doesn't have enough elements to scatter to each device, a weird error is thrown: ```py In [5]: import torch ...: import torch.nn as nn ...: ...: class Net(nn.Module): ...: def forward(self, x, *, delta=2): ...: return x + 2 ...: ...: dpn = nn.DataParallel(Net(), device_ids=(0, 1)).cuda() ...: x = torch.randn(1, 3, device='cuda') ...: dpn(x) ...: dpn(x, delta=3) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-5-740170232696> in <module> 9 x = torch.randn(1, 3, device='cuda') 10 dpn(x) ---> 11 dpn(x, delta=3) /data/vision/torralba/scratch2/tongzhou/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 530 result = self._slow_forward(*input, **kwargs) 531 else: --> 532 result = self.forward(*input, **kwargs) 533 for hook in self._forward_hooks.values(): 534 hook_result = hook(self, input, result) /data/vision/torralba/scratch2/tongzhou/miniconda3/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py in forward(self, *inputs, **kwargs) 150 return self.module(*inputs[0], **kwargs[0]) 151 replicas = self.replicate(self.module, self.device_ids[:len(inputs)]) --> 152 outputs = self.parallel_apply(replicas, inputs, kwargs) 153 return self.gather(outputs, self.output_device) 154 /data/vision/torralba/scratch2/tongzhou/miniconda3/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py in parallel_apply(self, replicas, inputs, kwargs) 160 161 def parallel_apply(self, replicas, inputs, kwargs): --> 162 return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) 163 164 def gather(self, outputs, output_device): /data/vision/torralba/scratch2/tongzhou/miniconda3/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py in parallel_apply(modules, inputs, kwargs_tup, devices) 83 output = results[i] 84 if isinstance(output, 
ExceptionWrapper): ---> 85 output.reraise() 86 outputs.append(output) 87 return outputs /data/vision/torralba/scratch2/tongzhou/miniconda3/lib/python3.7/site-packages/torch/_utils.py in reraise(self) 392 # (https://bugs.python.org/issue2651), so we work around it. 393 msg = KeyErrorMessage(msg) --> 394 raise self.exc_type(msg) TypeError: Caught TypeError in replica 1 on device 1. Original Traceback (most recent call last): File "/data/vision/torralba/scratch2/tongzhou/miniconda3/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker output = module(*input, **kwargs) File "/data/vision/torralba/scratch2/tongzhou/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) TypeError: forward() missing 1 required positional argument: 'x' ``` The key issue is that https://github.com/pytorch/pytorch/blob/fb24f7c4adbd3469f552f3aa0db006317889f88f/torch/nn/parallel/scatter_gather.py#L36-L41 does not actually care about whether the scattered thing is a tensor or not, and blindly pads args and kwargs to have the same length.
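The mismatch can be reproduced with plain lists. Below is a simplified model of the padding logic (the function names are illustrative, not the real `scatter_gather` internals): tensors are chunked into at most batch-size pieces while non-tensor kwargs are replicated once per device, and the independent padding then leaves replica 1 with kwargs but an empty args tuple:

```python
def chunk(seq, n):
    # Mimics torch.chunk: at most n chunks, so a batch of 1 yields 1 chunk.
    size = -(-len(seq) // n)  # ceil division
    return [seq[i:i + size] for i in range(0, len(seq), size)]

def scatter_kwargs_sketch(inputs, kwargs, num_devices):
    """Illustrative model of the buggy behavior, not the real implementation."""
    scattered_inputs = [(x,) for x in chunk(inputs, num_devices)]
    # Non-tensor kwargs are replicated once per device...
    scattered_kwargs = [dict(kwargs) for _ in range(num_devices)]
    # ...then the args list is blindly padded with empty tuples to match.
    while len(scattered_inputs) < len(scattered_kwargs):
        scattered_inputs.append(())
    return scattered_inputs, scattered_kwargs
```

Replica 1 then calls `forward(*(), **{"delta": 3})`, which fails with "missing 1 required positional argument", matching the traceback above; dropping the padded-out replicas (or trimming kwargs to the number of input chunks) would avoid it.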
module: nn,triaged,module: data parallel
medium
Critical
540,096,044
pytorch
Make RRef.to_here() non-blocking
## πŸš€ Feature Currently, when creating a Remote Ref and fetching it to the local worker with `to_here()`, we tie up a thread while waiting for the future to be resolved. This could potentially be expensive and we should not block the thread if the user does not need the value immediately. ## Pitch Add a non-blocking flavor for `to_here()` that returns a `Future<T>` that can be resolved via a `future.wait()`. cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @rohan-varma @xush6528
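A sketch of the proposed shape using Python's `concurrent.futures` as a stand-in for the RPC layer (`to_here_async` and `fetch_remote_value` are hypothetical names; the real API would return a torch Future resolved via `wait()`):

```python
from concurrent.futures import Future, ThreadPoolExecutor
import time

_executor = ThreadPoolExecutor(max_workers=1)

def fetch_remote_value():
    # Stand-in for the RPC that materializes the RRef's value remotely.
    time.sleep(0.01)  # pretend network latency
    return 42

def to_here_async() -> "Future":
    # Non-blocking flavor: kick off the fetch and hand back a future
    # instead of tying up the calling thread until the value arrives.
    return _executor.submit(fetch_remote_value)

fut = to_here_async()
# ... the caller is free to do other work here ...
value = fut.result()  # the equivalent of future.wait()
```

The point is that the calling thread only blocks at `result()`/`wait()`, and only if the user actually needs the value at that moment.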
triaged,module: rpc
low
Minor
540,100,429
kubernetes
Namespace controller cannot keep up with e2e namespace deletion rate
**What happened**: e2e tests deleting namespaces timed out waiting for namespace cleanup. this was the root cause of https://github.com/kubernetes/kubernetes/issues/86181. I graphed namespace controller lag times: * from namespace delete to start of processing by the namespace cleanup controller (average 3:36, min 0:05, max 5:41) * from start of processing to end of processing by the namespace cleanup controller (average 0:43, min 0:01, max 23:35) https://docs.google.com/spreadsheets/d/1hYxDyvZ9o-3T0WrOJ7LgW-sxQn62fU8RWfDjm02OoLc/edit#gid=1101829529 The namespace controller, as configured, can't keep up with the parallelism of the e2e jobs. For most tests, this doesn't fail the test because the e2e job waits at the end for namespace deletion to complete. For the GCEPD test, namespace deletion is a synchronous part of the test. Depending on where the GCEPD test fell, the controller was sometimes too backed up to finish removing the namespace in time. Things we could do: * improve reuse of discovery information; currently, the controller fully rediscovers before every namespace sync. Instead, we could only rediscover if cached discovery data is stale or older than the deletionTimestamp of the namespace being synced * increase the number of parallel workers from the current default of 10 (might need to adjust qps/burst settings as well) * parallelize deletecollection calls when syncing a single namespace * lengthen max retry backoff so that accumulating a few dozen namespaces that need retrying because of finalizing objects doesn't overwhelm the queue /sig api-machinery /cc @deads2k @msau42 Note that this blocks moving https://github.com/kubernetes/kubernetes/issues/86181 back into the main e2e (at least as written)
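On the last bullet, a capped exponential backoff is the usual shape here (the numbers below are illustrative, not the namespace controller's actual settings); raising the cap spaces out the retries for namespaces stuck on finalizing objects so they stop dominating the work queue, while leaving the first few retries just as fast:

```python
def backoff_schedule(base=0.005, factor=2.0, cap=1000.0, retries=20):
    """Capped exponential backoff: the delay doubles on every retry until
    it hits `cap`. Lengthening `cap` keeps a few dozen namespaces that are
    perpetually retrying (e.g. blocked on finalizers) from flooding the
    queue, without slowing down the early retries."""
    delays, delay = [], base
    for _ in range(retries):
        delays.append(min(delay, cap))
        delay *= factor
    return delays
```

Comparing `backoff_schedule(cap=1.0)` with `backoff_schedule(cap=300.0)` shows the early entries are identical and only the tail stretches out.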
kind/bug,priority/important-soon,sig/api-machinery,lifecycle/frozen
low
Minor
540,148,362
flutter
Flutter drawer open gesture can't properly be disabled for the drawer and endDrawer separately
I am making an app with a hamburger drawer on the left, but also another drawer on the other side. Everything is running fine, but I want to disable the swipe-to-open gesture ONLY for the endDrawer. There is an ugly way of disabling the gesture by setting drawerEdgeDragWidth to 0, but by doing this, neither drawer can be opened with a gesture. It would be really nice to have a proper way to disable the gesture through separate properties, for example by adding drawerSwipeGestureEnabled and endDrawerSwipeGestureEnabled properties to Scaffold that can be turned off individually by setting them to false.
framework,f: material design,d: stackoverflow,c: proposal,team-design,triaged-design
low
Minor
540,206,130
flutter
Cupertino back gesture are disabled when using PageRouteBuilder
## Use case I am using `PageRouteBuilder` to add fade transitions to some of my routes. I've noticed however that the swipe-to-go-back gesture on iOS does not work when doing so, which is something I'm used to as an iOS user. This left me choosing between very pleasing page transitions and a better user experience. ## Proposal I was thinking that exposing the private classes `_CupertinoBackGestureDetector` and `_CupertinoBackGestureController`, which are used in `CupertinoPageRoute` to achieve the swipe-to-go-back behavior, or improving `PageRouteBuilder` to support that feature, would be a great thing to have!
c: new feature,framework,f: cupertino,f: routes,P3,workaround available,team-design,triaged-design
medium
Critical
540,225,300
TypeScript
"paths" option should allow untyped entries
## Search Terms

paths option untyped declare module

## Suggestion

The existing `"paths"` option allows you to control the types used for bare specifiers in your source code, by mapping them as keys to a list of candidate target specifiers.

```js
// tsconfig.json
{
  "paths": {
    // <bare-specifier>: [ <candidate1>, <candidate2> ]
    "foo": [ "external-foo", "fallback-foo" ] // if "external-foo" is not found, "fallback-foo" will be attempted, otherwise error
  }
}
```

I suggest we allow the final element in the array of candidates to be the empty string `""`. If hit, it would mean _"This module exists but is untyped."_

## Use Cases

Our build system uses `"paths"` to implement a searching loader for typed dependencies. We know all the acceptable source specifiers ahead of time, and so create one property for each - this ensures precise auto-completions for external dependencies in VS Code. We do not know if the target specifiers will exist on disk or not, e.g. the user may not yet have populated the disk via `npm install`.

***Strict:*** Some users want to be _strict_ and see errors if no match was found. This behavior comes for free from `"paths"` today.

***Sloppy:*** Sometimes users want to be _sloppy_ and have the source specifier be typed as `any` if no match was found. This behavior is not possible today because there is no way to express _"this module exists but is untyped"_ in `"paths"`.

## Workaround

We have not discovered any alternative that allows _sloppy_ behavior to be implemented. The closest workaround is to write `declare module "foo"` in an ambient declaration file. However this does not work because it clobbers (takes precedence over) the same entry in `"paths"`. Our desire is for untyped behavior to be the last resort, not the first choice.

## Examples

**Current behaviour:** This works today and is fine for implementing ***strict*** behavior.

```js
// tsconfig.json
{
  "paths": {
    "foo": [ "external-foo", "fallback-foo" ] // assume "external-foo" and "fallback-foo" do not exist on disk
  }
}

// index.ts
import aDefaultImport, { aNamedImport } from "foo"; // Error(2307): Cannot find module
```

**Proposed behaviour:** Here's how we could achieve ***sloppy*** behavior.

```js
// tsconfig.json
{
  "paths": {
    "foo": [ "external-foo", "fallback-foo", "" ] // assume "external-foo" and "fallback-foo" do not exist on disk
  }
}

// index.ts
import aDefaultImport, { aNamedImport } from "foo"; // no error
```

The usage of "foo" would result in no compile-time error.

## Checklist

My suggestion meets these guidelines:

* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
Suggestion,Awaiting More Feedback
low
Critical
540,233,189
pytorch
Distributed hangs on process termination with world_size=1
## πŸ› Bug This was originally reported by @adamlerer. Turning it into an issue. ## To Reproduce ```python from tempfile import TemporaryDirectory from unittest import TestCase, main class TestInitPg(TestCase): def setUp(self): self.basedir = TemporaryDirectory() self.addCleanup(self.basedir.cleanup) def test_init_pg(self): distributed_init_method = f"file://{self.basedir.name}/test1" import torch.distributed as td td.init_process_group("gloo", init_method=distributed_init_method, world_size=1, rank=0) if __name__ == "__main__": main() ``` ## Expected behavior The process should terminate. ## Environment This reproduces on master. ## Additional context I took a stack at the hang and got the following. The `FileStore` appears to assume `world_size > 1`. ``` #0 0x00007f7cfc51bf85 in __GI___nanosleep (requested_time=0x7f7cf7cc4ea0, remaining=0x7f7cf7cc4ea0) at ../sysdeps/unix/sysv/linux/nanosleep.c:27 #1 0x00007f7c6d9c9388 in void std::this_thread::sleep_for<long, std::ratio<1l, 1000l> >(std::chrono::duration<long, std::ratio<1l, 1000l> > const&) () at third-party-buck/platform007/build/libgcc/include/c++/trunk/thread:376 #2 0x00007f7c6d9c6e0d in c10d::(anonymous namespace)::File::File(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int, std::chrono::duration<long, std::ratio<1l, 1000l> >) () at caffe2/torch/lib/c10d/FileStore.cpp:100 #3 0x00007f7c6d9c5b29 in c10d::FileStore::addHelper(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, long) () at caffe2/torch/lib/c10d/FileStore.cpp:274 #4 0x00007f7c6d9c5753 in c10d::FileStore::~FileStore() () at caffe2/torch/lib/c10d/FileStore.cpp:225 #5 0x00007f7c6d9c60a9 in c10d::FileStore::~FileStore() () at caffe2/torch/lib/c10d/FileStore.cpp:222 ``` cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @xush6528
oncall: distributed,module: bootcamp,triaged
low
Critical
540,259,906
flutter
Can I get a data of clipboard on flutter driver?
I am writing a test that copies clipboard data. I found that I can't import `package:flutter/services.dart` in a driver test. What can I do if I want to test clipboard data? Thanks in advance for your help.
a: tests,tool,d: api docs,t: flutter driver,c: proposal,P3,team-tool,triaged-tool
low
Major
540,270,816
TypeScript
multi return type declaration emit not as expected
**TypeScript Version:** 3.8.0-dev.20191213

**Search Terms:**

**Code**

## Original code

```ts
export const enum EnumParseInputUrl {
	UNKNOWN,
	STRING,
	URL,
	URLSEARCHPARAMS,
}

export function parseInputUrl<T extends string | number | URL | URLSearchParams>(_input: T) {
	if (typeof _input === 'number') {
		let value = _input.toString();
		return {
			type: EnumParseInputUrl.STRING,
			_input,
			value,
		}
	} else if (typeof _input === 'string') {
		let value = _input.toString();
		return {
			type: EnumParseInputUrl.STRING,
			_input,
			value,
		}
	} else if (_input instanceof URL) {
		let value = _input;
		return {
			type: EnumParseInputUrl.URL,
			_input,
			value,
		}
	} else if (_input instanceof URLSearchParams) {
		let value = _input;
		return {
			type: EnumParseInputUrl.URLSEARCHPARAMS,
			_input,
			value,
		}
	}

	let value = _input.toString();
	return {
		type: EnumParseInputUrl.UNKNOWN,
		_input,
		value,
	}
}
```

## The code I have to write if I want the declaration emitted as expected

```ts
export function parseInputUrl<T extends string | number | URL | URLSearchParams>(_input: T) {
	if (typeof _input === 'number') {
		let value = _input.toString();
		return {
			type: EnumParseInputUrl.STRING as const,
			_input,
			value,
		}
	} else if (typeof _input === 'string') {
		let value = _input.toString();
		return {
			type: EnumParseInputUrl.STRING as const,
			_input,
			value,
		}
	} else if (_input instanceof URL) {
		let value = _input;
		return {
			type: EnumParseInputUrl.URL as const,
			_input,
			value,
		}
	} else if (_input instanceof URLSearchParams) {
		let value = _input;
		return {
			type: EnumParseInputUrl.URLSEARCHPARAMS as const,
			_input,
			value,
		}
	}

	let value = _input.toString();
	return {
		type: EnumParseInputUrl.UNKNOWN as const,
		_input,
		value,
	}
}
```

**Expected behavior:**

```ts
export declare function parseInputUrl<T extends string | number | URL | URLSearchParams>(_input: T): {
	type: EnumParseInputUrl.STRING;
	_input: T & number;
	value: string;
} | {
	type: EnumParseInputUrl.STRING;
	_input: T & string;
	value: string;
} | {
	type: EnumParseInputUrl.URL;
	_input: T & URL;
	value: T & URL;
} | {
	type: EnumParseInputUrl.URLSEARCHPARAMS;
	_input: T & URLSearchParams;
	value: T & URLSearchParams;
} | {
	type: EnumParseInputUrl.UNKNOWN;
	_input: T;
	value: string;
};
```

**Actual behavior:**

```ts
export declare const enum EnumParseInputUrl {
	UNKNOWN = 0,
	STRING = 1,
	URL = 2,
	URLSEARCHPARAMS = 3
}
export declare function parseInputUrl<T extends string | number | URL | URLSearchParams>(_input: T): {
	type: EnumParseInputUrl;
	_input: T & URL;
	value: T & URL;
} | {
	type: EnumParseInputUrl;
	_input: T & URLSearchParams;
	value: T & URLSearchParams;
} | {
	type: EnumParseInputUrl;
	_input: T;
	value: string;
};
```

**Playground Link:**

**Related Issues:**
Needs Investigation
low
Critical
540,275,595
PowerToys
Overlay of commands / shortcuts / keys pressed - Screencast Mode
# Summary of the new feature/enhancement It would be great to have a Screencast mode similar to what VSCode features: https://code.visualstudio.com/updates/v1_31#_screencast-mode Very useful for making demos and tutorials. # Proposed technical implementation details I can see the need for some basic control over size and position. Probably font, size, color and position should be flexible enough. Another consideration I'm not sure how to best address is multiple monitors. Should the key stroke message be displayed on all monitors? Or just the active one? Or a fixed one?
Idea-New PowerToy,Product-Tweak UI Design
high
Critical
540,276,943
TypeScript
Heap of out memory for recursive type
**TypeScript Version:** 3.8.0-dev.20191219

**Search Terms:** heap out of memory, recursive, crash

**Code**

```ts
export type PathOf<
    T extends { [x: string]: any },
    Path extends [K1?, K2?, K3?, K4?, K5?, K6?, K7?, K8?, K9?, K10?, K11?, K12?, K13?],
    K1 extends keyof T,
    K2 extends keyof T[K1],
    K3 extends keyof T[K1][K2],
    K4 extends keyof T[K1][K2][K3],
    K5 extends keyof T[K1][K2][K3][K4],
    K6 extends keyof T[K1][K2][K3][K4][K5],
    K7 extends keyof T[K1][K2][K3][K4][K5][K6],
    K8 extends keyof T[K1][K2][K3][K4][K5][K6][K7],
    K9 extends keyof T[K1][K2][K3][K4][K5][K6][K7][K8],
    K10 extends keyof T[K1][K2][K3][K4][K5][K6][K7][K8][K9],
    K11 extends keyof T[K1][K2][K3][K4][K5][K6][K7][K8][K9][K10],
    K12 extends keyof T[K1][K2][K3][K4][K5][K6][K7][K8][K9][K10][K11],
    K13 extends keyof T[K1][K2][K3][K4][K5][K6][K7][K8][K9][K10][K11][K12],
    K14 extends keyof T[K1][K2][K3][K4][K5][K6][K7][K8][K9][K10][K11][K12][K13],
    K15 extends keyof T[K1][K2][K3][K4][K5][K6][K7][K8][K9][K10][K11][K12][K13][K14]
> = Path;
```

**Expected behavior:**

It should compile as it did in older versions of `tsc`. This starts to happen with versions > 3.4. If I reduce the list of keys to 10 it compiles.

**Actual behavior:**

Heap out of memory error:

```
<--- Last few GCs --->

[62544:0x55eeea5d1670]    54060 ms: Scavenge 1377.4 (1415.1) -> 1373.9 (1416.6) MB, 3.6 / 0.0 ms  (average mu = 0.250, current mu = 0.187) allocation failure
[62544:0x55eeea5d1670]    54086 ms: Scavenge 1377.7 (1416.6) -> 1374.2 (1426.6) MB, 5.0 / 0.0 ms  (average mu = 0.250, current mu = 0.187) allocation failure
[62544:0x55eeea5d1670]    54148 ms: Scavenge 1382.1 (1426.6) -> 1375.0 (1429.1) MB, 15.8 / 0.0 ms  (average mu = 0.250, current mu = 0.187) allocation failure

<--- JS stacktrace --->

==== JS stack trace =========================================

    0: ExitFrame [pc: 0x319579adbe1d]
Security context: 0x06b12e21e6e9 <JSObject>
    1: getConstraintOfIndexedAccess(aka getConstraintOfIndexedAccess) [0x1791c358f071] [/home/mateusz/.config/yarn/global/node_modules/typescript/lib/tsc.js:~34610] [pc=0x319579ea17a8](this=0x353782f026f1 <undefined>,type=0x33d411a782c9 <Type map = 0x1f09a070b789>)
    2: getConstraintOfType(aka getConstraintOfType) [0x1791c358f031] [/home/mateusz/.config/ya...

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
 1: 0x55eee7c6ba01 node::Abort() [node]
 2: 0x55eee7c6ba4f  [node]
 3: 0x55eee7e4f132 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [node]
 4: 0x55eee7e4f387 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [node]
 5: 0x55eee81f3e83  [node]
 6: 0x55eee81f3fd1  [node]
 7: 0x55eee8204768 v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [node]
 8: 0x55eee820510f v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
 9: 0x55eee8207285 v8::internal::Heap::AllocateRawWithLigthRetry(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [node]
10: 0x55eee82072d2 v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [node]
11: 0x55eee81d28b7 v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationSpace) [node]
12: 0x55eee8464778 v8::internal::Runtime_AllocateInNewSpace(int, v8::internal::Object**, v8::internal::Isolate*) [node]
13: 0x319579adbe1d
```

**Playground Link:** https://github.com/SirWojtek/path-of-issue

Author's code: https://github.com/Morglod/ts-pathof

**Related Issues:**

https://github.com/microsoft/TypeScript/issues/35289
https://github.com/microsoft/TypeScript/issues/35156
Bug
low
Critical
540,367,022
pytorch
Mnasnet0_5 first layer shape incorrect
## πŸ› Bug Documentation here lays out the code for Mnasnet as used in pytorch: https://pytorch.org/docs/stable/_modules/torchvision/models/mnasnet.html Running this with alpha = 0.5 results in a depth[0] of 16 but the first layer of mnasnet0_5 has 32, indicating it has been trained with alpha = 1 Output from my pdb: (Pdb) alpha = 0.5 (Pdb) _get_depths(alpha) [16, 8, 16, 24, 40, 48, 96, 160] (Pdb) alpha = 0.6 (Pdb) _get_depths(alpha) [24, 16, 16, 24, 48, 56, 112, 192] (Pdb) alpha = 0.8 (Pdb) _get_depths(alpha) [24, 16, 24, 32, 64, 80, 152, 256] (Pdb) alpha = 0.9 (Pdb) _get_depths(alpha) [32, 16, 24, 40, 72, 88, 176, 288] However mnasnet0_5 has 32 filters in first layer (Pdb) mnasnet = mnasnet0_5(pretrained=True) (Pdb) mnasnet.layers[0].weight.shape torch.Size([32, 3, 3, 3]) Steps to reproduce the behaviour: See above code. ## Expected behaviour i would expect mnasnet0_5 to have 16 filters in first layer OR for the code documented at https://pytorch.org/docs/stable/_modules/torchvision/models/mnasnet.html to produce a network with 32 filters in first layer. ## Environment Please copy and paste the output from our [environment collection script]Collecting environment information... 
PyTorch version: 1.3.1 Is debug build: No CUDA used to build PyTorch: 10.1.243 OS: Ubuntu 18.04.3 LTS GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0 CMake version: Could not collect Python version: 3.7 Is CUDA available: Yes CUDA runtime version: 9.1.85 GPU models and configuration: GPU 0: GeForce RTX 2080 Ti GPU 1: GeForce RTX 2080 Ti Nvidia driver version: 430.26 cuDNN version: Could not collect Versions of relevant libraries: [pip] flashtorch==0.0.7 [pip] numpy==1.17.0 [pip] numpydoc==0.9.1 [pip] torch==1.3.1 [pip] torchsummary==1.5.1 [pip] torchvision==0.4.2 [conda] blas 1.0 mkl [conda] flashtorch 0.0.7 pypi_0 pypi [conda] mkl 2019.4 243 [conda] mkl-service 2.3.0 py37he904b0f_0 [conda] mkl_fft 1.0.14 py37ha843d7b_0 [conda] mkl_random 1.1.0 py37hd6b4f25_0 [conda] pytorch 1.3.1 py3.7_cuda10.1.243_cudnn7.6.3_0 pytorch [conda] torch 1.2.0 pypi_0 pypi [conda] torchsummary 1.5.1 pypi_0 pypi [conda] torchvision 0.4.0 pypi_0 pypi cc @fmassa
triaged,module: vision
low
Critical
540,373,923
flutter
FocusNode change notifier not called when focused TextField is removed
The following test will fail the final expect:

```
import 'package:flutter/material.dart';
import 'package:flutter_test/flutter_test.dart';

Widget buildApp(Widget body) {
  return MaterialApp(
    home: Scaffold(
      body: body,
    ),
  );
}

TextField buildTextField(FocusNode focus) {
  return TextField(
    focusNode: focus,
    autofocus: true,
  );
}

main() {
  testWidgets('Focus node dispose', (WidgetTester tester) async {
    final FocusNode focus = FocusNode();
    bool isFocused = focus.hasFocus;
    final VoidCallback focusChanged = () => isFocused = focus.hasFocus;
    focus.addListener(focusChanged);
    expect(isFocused, isFalse);

    await tester.pumpWidget(buildApp(buildTextField(focus)));
    expect(focus.hasFocus, isTrue);
    expect(isFocused, isTrue);

    await tester.pumpWidget(buildApp(Text('No focus')));
    expect(focus.hasFocus, isFalse);
    expect(isFocused, isFalse);
  });
}
```

I believe this violates the contract established by FocusNode's ChangeNotifier interface, since clearly the FocusNode no longer has focus after its TextField is removed.

In our app we expect the change notifier to be called so that we can manage the focus change cleanly. Without this, we need to manage the focus change by chasing down every logic path that can cause the app to rebuild without the text field, which leads to error-prone duplication of code.
a: text input,framework,f: material design,customer: octopod,f: focus,has reproducible steps,found in release: 3.0,found in release: 3.1,team-design,triaged-design
low
Critical
540,375,100
rust
Irrelevant error caused by illegal blanket impl
I have a minimal example, using stable rustc 1.39.0:

`Cargo.toml`:

```
[package]
name = "minimal"
version = "0.1.0"
authors = ["Erlend Langseth <[email protected]>"]
edition = "2018"

[dependencies]
# Actix
actix = "0.9.0-alpha.2"
actix-web = "2.0.0-alpha.6"
actix-rt = "1.0.0"
actix-http = "1.0.0"
# Redis
redis-async = "0.6.1"
actix-redis = "0.8.0-alpha.1"
# Serde
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
```

`main.rs`:

```
use actix::prelude::*;
use actix_web::{
    get,
    web::{Json, Data},
    App, HttpServer,
};
use serde::{Deserialize};
use redis_async::{
    resp_array,
    resp::{RespValue, FromResp},
    client::paired::PairedConnection as RedisConn
};
use std::str::FromStr;
use std::net::SocketAddr;

#[actix_rt::main]
async fn main() -> std::io::Result<()> {
    let redis = redis_async::client::paired::paired_connect(&SocketAddr::from_str("127.0.0.1:6379").unwrap()).await.unwrap();
    HttpServer::new(move || {
        App::new()
            .wrap(actix_web::middleware::Logger::new(
                "%a \"%r\" Response: %s %b",
            ))
            .data(redis.clone())
            .service(get_health)
    })
    .bind("127.0.0.1:8088")?
    .start()
    .await
}

impl<'a, T: Deserialize<'a>> FromResp for T {
    fn from_resp_int(resp: RespValue) -> Result<T, redis_async::error::Error> {
        unimplemented!()
    }
}

#[get("/admin/v1/health")]
async fn get_health(redis: Data<RedisConn>) -> Json<usize> {
    let res = redis.send::<String>(resp_array!["PING"]).await;
    Json(0)
}
```

It yields this weird error:

```
error[E0283]: type annotations required: cannot resolve `std::string::String: redis_async::resp::FromResp`
  --> src/main.rs:43:21
   |
43 |     let res = redis.send::<String>(resp_array!["PING"]).await;
   |                     ^^^^

error: aborting due to previous error
```

Whereas the real error is the illegal blanket impl `impl<'a, T: Deserialize<'a>> FromResp for T`. If I remove that impl block, it all compiles fine.
A-diagnostics,T-compiler,C-bug,S-needs-repro
low
Critical
540,408,424
pytorch
Slow clip_grad_norm_ because of .item() calls when run on device
## Issue description

While profiling a slow training process of a quite deep network I got the following results:

![image](https://user-images.githubusercontent.com/1045411/71187912-c96f3480-225e-11ea-85b7-3cca0f34fdad.png)

Since the problem was that the `.item()` calls copy values from the GPU across the PCI bus, I went ahead and built a version that avoids that. I replaced:

```python
total_norm = 0.0
for p in parameters:
    param_norm = p.grad.data.norm(norm_type)
    total_norm += param_norm ** norm_type
```

with:

```python
# PERF: We ensure we are all in the same device to avoid the penalty of moving data back and forth.
total_norm = torch.sum(torch.stack([torch.norm(p.grad, norm_type) for p in parameters]) ** norm_type)
```

There I saw quite an improvement:

![image](https://user-images.githubusercontent.com/1045411/71188107-18b56500-225f-11ea-984c-2d37618c2729.png)

But, to my surprise, performance was still very bad for such boilerplate code, and the remaining bottleneck is the call to the `.norm()` function, which is horribly slow.

## System Info

```
PyTorch version: 1.3.1
Is debug build: No
CUDA used to build PyTorch: 10.1

OS: Microsoft Windows 10 Pro
GCC version: Could not collect
CMake version: Could not collect

Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect

Versions of relevant libraries:
[pip] numpy==1.16.5
[pip] numpydoc==0.9.1
[pip] torch==1.3.1
[pip] torchvision==0.4.2
[conda] blas         1.0  mkl
[conda] mkl          2019.4  245
[conda] mkl-service  2.3.0  py37hb782905_0
[conda] mkl_fft      1.0.14  py37h14836fe_0
[conda] mkl_random   1.1.0  py37h675688f_0
[conda] pytorch      1.3.1  py3.7_cuda101_cudnn7_0  pytorch
[conda] torchvision  0.4.2  py37_cu101  pytorch
```

cc @ngimel @VitalyFedyunin
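*Editor's note:* for readers unfamiliar with what both versions above compute, the accumulation is the global p-norm of all gradients combined. A torch-free sketch (the `per_param_norms` argument is a hypothetical stand-in for the per-parameter norms `p.grad.norm(norm_type)` would return):

```python
def total_norm(per_param_norms, norm_type=2.0):
    """Combine per-parameter gradient norms into one global norm:
    (sum_i ||g_i||_p ** p) ** (1 / p)."""
    return sum(n ** norm_type for n in per_param_norms) ** (1.0 / norm_type)


# A 3-4-5 sanity check: two parameters with L2 norms 3 and 4
# combine into a global L2 norm of 5.
print(total_norm([3.0, 4.0]))  # 5.0
```

The performance question in the issue is purely about *where* each term of that sum lives (GPU vs. host), not about the math itself.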
module: performance,module: cuda,triaged,module: norms and normalization
low
Critical
540,434,733
PowerToys
Implement more taskbar customization features with FalconX as a PowerToy
# Summary of the new feature/enhancement This feature would help users to have more customization options for their taskbars without any dangerous registry editing tools etc. It allows users to position their taskbar icons in the center of the taskbar similar to the dock approach on other OS. It also allows the user to customize the look and feel of the taskbar's background with options like **transparent** background, **blurred** background or **acrylic** background. This will bring a lot of joy to some group of users who always look for more ways to customize their desktop experience. See here: * Direct link: https://chrisandriessen.nl/web/FalconX.html * Store link: https://www.microsoft.com/en-us/p/falconx-center-taskbar/9pcmz6bxk8gh?activetab=pivot:overviewtab I don't know how big of a problem the paid Store release is since the actual tool is open source. Thanks a lot already ## Examples: ![Demo](https://chrisandriessen.nl/web/img/p1.jpg) ![Demo](https://chrisandriessen.nl/web/img/p2.jpg) ![Demo](https://chrisandriessen.nl/web/img/p5.jpg) ![Demo](https://chrisandriessen.nl/web/img/p4.jpg) ![Demo](https://chrisandriessen.nl/web/img/p3.jpg)
Idea-New PowerToy,Product-Tweak UI Design
low
Major
540,443,510
pytorch
DataParallel has different tensor copy behavior if batch size = 1
A probable bug: https://github.com/pytorch/pytorch/blob/master/torch/nn/parallel/data_parallel.py#L149-L150

`parallel_apply` slices the data and copies it onto the corresponding GPUs (I'm not sure if it uses `non_blocking=True` under the hood, but maybe it's a good idea!). If batch_size equals 1, DP assumes that the data has already been copied onto the first GPU.

A question: is it faster to copy the input data onto GPU-0 and then have DP slice and distribute it across all GPUs, or to keep the input data in RAM and have DP copy it directly to all GPUs?
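*Editor's note:* to illustrate why batch size 1 is a special case, here is a minimal plain-Python stand-in for the chunking DataParallel's scatter performs along dim 0 (this is an illustration, not the real torch scatter, and `chunk_batch` is a hypothetical name):

```python
import math


def chunk_batch(batch, num_devices):
    """Split a batch into up to `num_devices` contiguous chunks,
    mimicking how DataParallel scatters dim 0 across GPUs."""
    size = math.ceil(len(batch) / num_devices)
    return [batch[i:i + size] for i in range(0, len(batch), size)]


print(chunk_batch([0, 1, 2, 3], 2))  # [[0, 1], [2, 3]] -- one chunk per device
print(chunk_batch([0], 2))           # [[0]] -- only the first device gets data
```

With a single example there is only one chunk, so only the first device receives data, which is why the batch-size-1 path behaves differently from the general case.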
triaged,module: data parallel
low
Critical
540,465,862
go
encoding/json: SyntaxError does not have "json:" prefix
It was my understanding that most errors created inside of a standard library package should be prefixed with the package name. For example, the error message for `sql.ErrNoRows` is `errors.New("sql: no rows in result set")`, not `errors.New("no rows in result set")`.

Some errors in the `json` package are returned without a leading prefix. I noticed this because I hit one and it took some digging to figure out where in the codebase the error was actually coming from.

These tests in scanner_test.go check that the error matches a SyntaxError without a leading prefix:

```go
var indentErrorTests = []indentErrorTest{
	{`{"X": "foo", "Y"}`, &SyntaxError{"invalid character '}' after object key", 17}},
	{`{"X": "foo" "Y": "bar"}`, &SyntaxError{"invalid character '\"' after object key:value pair", 13}},
}

func TestIndentErrors(t *testing.T) {
	for i, tt := range indentErrorTests {
		slice := make([]uint8, 0)
		buf := bytes.NewBuffer(slice)
		if err := Indent(buf, []uint8(tt.in), "", ""); err != nil {
			if !reflect.DeepEqual(err, tt.err) {
				t.Errorf("#%d: Indent: %#v", i, err)
				continue
			}
		}
	}
}
```

Is that right? Shouldn't those error messages have a leading "json:" prefix?
NeedsDecision
low
Critical
540,482,362
pytorch
Disable PSIMD? Why Pytorch is trying to download PSimd when PSIMD_SOURCE_DIR is defined?
Can I disable PSimd during installation of PyTorch from source? Why is PyTorch trying to download PSimd to /pytorch/build/confu-srcs/psimd even when PSIMD_SOURCE_DIR is defined? PSimd is already in third_party, so it is unclear why downloading is necessary.

These options do not disable PSimd: `BUILD_CAFFE2_OPS=0 USE_NNPACK=0 USE_QNNPACK=0 USE_PYTORCH_QNNPACK=0`
module: build,triaged
low
Minor
540,567,204
electron
Memory leak when opening and closing windows
### Issue Details

* **Electron Version:**
  * 5, 6, 7, 8
* **Operating System:**
  * Windows 10
* **Last Known Working Electron version:**
  * ?

### Expected Behavior

Opening and closing windows should not increase memory usage.

### Actual Behavior

Memory increases for each window even after the window has been destroyed.

### To Reproduce

```
const { app, BrowserWindow, BrowserView } = require("electron");

const init = () => {
  let browserWindow2 = new BrowserWindow({
    show: true,
  });

  let j = 0;
  const interval = setInterval(() => {
    j++;
    if (j > 10) {
      clearInterval(interval);
    }
    for (let i = 0; i < 20; i++) {
      let browserWindow = new BrowserWindow({
        show: true,
      });
      setTimeout(() => {
        browserWindow.close();
      }, 5000)
    }
  }, 1000);
};

app.on("ready", () => {
  init();
});
```

The above code snippet should not result in more memory than just opening a single window. I also tried calling the garbage collector manually, and I also tried `browserWindow.destroy()`. No difference.
11-x-y
medium
Critical