id: int64 (393k to 2.82B)
repo: stringclasses (68 values)
title: stringlengths (1 to 936)
body: stringlengths (0 to 256k)
labels: stringlengths (2 to 508)
priority: stringclasses (3 values)
severity: stringclasses (3 values)
607,893,953
godot
CSGMesh GetAabb() / GetTransformedAabb() returns zero
**Godot version:** 3.2.2.beta1 **OS/device including version:** Windows 10 Pro (10.0.18362) **Issue description:** The CSGMesh methods `GetAabb()` and `GetTransformedAabb()` return `Vector3.Zero`. `csgMesh.Mesh.GetAabb()` seems to return the correct value. **Steps to reproduce:** 1. Create a CSGMesh node. 2. Add any mesh to it. 3. Call `GetAabb()` or `GetTransformedAabb()` on the CSGMesh node. 4. A zero value is returned from both methods.
bug,topic:core,confirmed
low
Minor
607,909,454
TypeScript
Simple Comparison of Generics not allowed?
**TypeScript Version:** 3.8 **Search Terms:** generic comparison compare generic strings **Code** ```ts function test<A, B>(one: A, two: B): A { if (one === two) { throw new Error('no'); } return one; } const obj = {}; test(obj, obj); ``` **Expected behavior:** While the generics generally indicate that `one` and `two` are independent values, they can still potentially be the same underlying value, so it should be possible to at least check them for equality without casting them first, especially since no error is produced when calling the function with identical values in the first place. At the very least the error message should be more specific, since the assumption it states is not correct and the condition can return true :-P ( This is clearly a simplified version of where I ran into this, since there'd be no reason to use generics in this specific case :-) ) **Actual behavior:** Shown in example **Playground Link:** [Playground](https://www.typescriptlang.org/v2/en/play?ts=3.8.3#code/GYVwdgxgLglg9mABFApgZygHgIIBpEBCAfABQIoBcieyA7nFQQJRXaIDeAsAFCKIzBEZMCkQBeCXThMOPPnygALAE5xaiEeoCiy1cpIByMHANMA3HMQBfS8pRQQypOQvcb3HhAQZEcAEYAVuIcVq48qBhkgfj+AeY8QA)
Suggestion,Experience Enhancement
low
Critical
607,916,936
TypeScript
Enforce consistent imports when directory is in jsconfig's paths
*TS Template added by @mjbvz* **TypeScript Version**: 3.9.0-dev.20200427 **Search Terms** - javascript - auto import - symlink --- - VSCode Version: 1.43.2 - OS Version: macOS 10.15.3 Steps to Reproduce: 1. Specify a `jsconfig.json` that includes `compilerOptions.paths` pointing to a specific directory 2. Let intellisense import a variable from this specific directory 3. Depending on how deep you are in the code base, it will resolve that variable relatively or absolutely You can take a look at this reproduction repo: https://github.com/Floriferous/vscode-symlink-import More specifically, the two files `code.js` and `deepCode.js` both resolve the variable differently, since `javascript.preferences.importModuleSpecifier` is set to `auto`. I'd like a folder specified in `paths` to always resolve as `myPathsFolder/...`
Needs Investigation
low
Critical
607,944,085
pytorch
Error running trace on Pytorch Crowd Counting model
Hi, I'm trying to use your prebuilt Pytorch crowd counting model with [NVIDIA's Triton Inference Server](https://docs.nvidia.com/deeplearning/sdk/triton-inference-server-guide/docs/index.html). Looks like it requires the PyTorch model to be saved by `torch.jit.save()`. I'm facing errors while tracing the model with an example. Appreciate any help. The model is available from [this link](https://drive.google.com/open?id=1TRJr9YuP1dFpnbQvSSQHqIqhLFdElo_Q). The model runs fine when I run single predictions using this [predict file](https://github.com/kevhnmay94/SS-DCNet/blob/master/predict.py). First, inference server gave error while I directly use the `best_epoch.pth` checkpoint - ![image](https://user-images.githubusercontent.com/7122670/80325147-f54c8900-8801-11ea-8469-77a41c02ba1c.png) Then, I try to run a trace with `torch.jit.trace()` as [documented here](https://pytorch.org/tutorials/advanced/cpp_export.html) to convert it to torchscript type but it gives me an error on some of the tensors used. Below is the script and stack trace - Python: 3.6.9 Pytorch Version: 1.3.1 Tensorrtserver: 1.11.0 ```python import os import sys import torch import numpy as np from Network.SSDCNet import SSDCNet_classify # Intialize params to model label_indice = np.arange(0.5,22+0.5,0.5) add = np.array([1e-6,0.05,0.10,0.15,0.20,0.25,0.30,0.35,0.40,0.45]) label_indice = np.concatenate((add,label_indice)) class_num = len(label_indice)+1 # Create a model instance net = SSDCNet_classify(class_num,label_indice,div_times=2,frontend_name='VGG16',block_num=5,IF_pre_bn=False,IF_freeze_bn=False,load_weights=True,psize=64,pstride = 64,parse_method ='maxp') all_state_dict = torch.load('/home/ubuntu/mayub/Github/SS-DCNet/SHA/best_epoch.pth',map_location=torch.device('cpu')) # load the state_dict net.load_state_dict(all_state_dict['net_state_dict']) # Change to eval mode net.eval() # Create a sample instance (I get this shape from sample predict file) example = torch.rand(1, 3, 768, 1024) #Run a trace with example traced_script_module = torch.jit.trace(net, example) ``` Trace give me the following error - ```python --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-22-af5c103c2ac9> in <module> ----> 1 traced_script_module = torch.jit.trace(net, example) ~/anaconda3/envs/Crowd_Detection_mayub/lib/python3.6/site-packages/torch/jit/__init__.py in trace(func, example_inputs, optimize, check_trace, check_inputs, check_tolerance, _force_outplace, _module_class, _compilation_unit) 856 return trace_module(func, {'forward': example_inputs}, None, 857 check_trace, wrap_check_inputs(check_inputs), --> 858 check_tolerance, _force_outplace, _module_class) 859 860 if (hasattr(func, '__self__') and isinstance(func.__self__, torch.nn.Module) and ~/anaconda3/envs/Crowd_Detection_mayub/lib/python3.6/site-packages/torch/jit/__init__.py in trace_module(mod, inputs, optimize, check_trace, check_inputs, check_tolerance, _force_outplace, _module_class, _compilation_unit) 995 func = mod if method_name == "forward" else getattr(mod, method_name) 996 example_inputs = make_tuple(example_inputs) --> 997 module._c._create_method_from_trace(method_name, func, example_inputs, var_lookup_fn, _force_outplace) 998 check_trace_method = module._c._get_method(method_name) 999 RuntimeError: Tracer cannot infer type of {'conv1': [], 'conv2': [], 'conv3': tensor([[[[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], [ 0.0000, 10.7944, 12.7899, ..., 19.2663, 1.8643, 
5.8381], [ 0.0000, 7.0714, 8.2250, ..., 6.6337, 8.9443, 4.3685], ..., [ 0.0000, 0.0000, 0.1529, ..., 0.0000, 3.9005, 2.5733], [ 0.0000, 0.0000, 1.3247, ..., 9.6619, 5.4776, 12.0128], [ 0.0000, 3.9498, 7.2731, ..., 12.5920, 3.4570, 4.9324]], [[ 6.2152, 4.2109, 1.4538, ..., 9.0008, 4.8270, 5.3406], [ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], [ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], ..., [ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], [ 2.7461, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], [ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]], [[ 3.3530, 12.9000, 0.0000, ..., 10.2020, 5.6883, 13.4378], [ 8.3358, 0.0000, 0.4850, ..., 0.0000, 0.0000, 1.2402], [10.6660, 0.0000, 0.0000, ..., 0.0000, 2.6775, 3.4912], ..., [ 7.2844, 0.0000, 0.0000, ..., 0.0000, 2.4160, 4.3557], [ 0.0000, 0.0000, 3.2270, ..., 0.0000, 0.0000, 14.2611], [ 0.0000, 8.8993, 8.3703, ..., 0.0000, 9.0703, 7.7137]], ..., [[ 0.0000, 6.9046, 8.8958, ..., 13.0320, 2.4187, 8.6959], [ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], [ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 1.3236], ..., [ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], [ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 1.7601], [ 2.6597, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]], [[ 0.0000, 0.0000, 27.1142, ..., 2.5222, 4.2344, 16.2329], [13.0420, 5.2728, 3.5170, ..., 0.0000, 2.5529, 12.2657], [ 7.4807, 2.7274, 4.9321, ..., 0.0000, 1.3603, 9.9757], ..., [ 0.0000, 5.1109, 0.0000, ..., 0.0000, 12.0808, 12.9898], [ 5.5294, 4.1575, 7.9874, ..., 11.1128, 7.7980, 20.6503], [10.8941, 31.3485, 13.9562, ..., 16.0611, 6.2370, 12.5933]], [[ 2.4946, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], [14.3443, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], [ 0.6402, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], ..., [ 5.3813, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], [ 4.0173, 0.0000, 0.0000, ..., 0.0000, 0.2018, 0.0000], [15.4178, 6.2313, 4.7818, ..., 5.4549, 0.0000, 14.1680]]]], grad_fn=<MaxPool2DWithIndicesBackward>), 'conv4': tensor([[[[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], [0.0000, 0.0000, 0.0000, ..., 0.0000, 1.0539, 0.0000], [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], ..., [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], [0.0000, 0.0658, 0.2390, ..., 0.0000, 0.0000, 0.0000], [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]], [[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], ..., [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]], [[0.6219, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], ..., [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], [1.3965, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]], ..., [[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], ..., [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]], [[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], [0.0000, 0.0000, 0.6745, ..., 0.0000, 0.0000, 
0.0000], ..., [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.7359, 0.0000], [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]], [[1.6867, 5.0112, 3.1645, ..., 0.6046, 2.9714, 0.0000], [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], ..., [0.1528, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000], [0.0000, 0.0000, 0.0000, ..., 2.5369, 1.0550, 2.8817]]]], grad_fn=<MaxPool2DWithIndicesBackward>), 'conv5': tensor([[[[1.3796e+01, 1.4452e+01, 1.2862e+01, ..., 6.7781e+00, 9.2815e+00, 9.9267e+00], [8.3384e+00, 8.3635e+00, 4.7208e+00, ..., 7.1589e-01, 1.5264e+00, 1.6482e+00], [2.8230e+00, 6.3764e-01, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00, 7.7796e-01], ..., [4.9598e+00, 2.4025e+00, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00, 1.0588e+00], [5.8041e+00, 3.8463e+00, 0.0000e+00, ..., 8.4420e-03, 4.0681e-01, 2.0312e+00], [6.6123e+00, 5.5535e+00, 2.7047e+00, ..., 2.8173e+00, 3.5163e+00, 5.7462e+00]], [[2.1448e+00, 2.8523e+00, 1.5730e+00, ..., 9.4367e-01, 2.1874e+00, 1.3170e+00], [4.5012e+00, 4.9501e+00, 2.8207e+00, ..., 1.8323e+00, 4.1885e+00, 4.0239e+00], [7.6315e-01, 2.9784e-01, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00, 8.0759e-01], ..., [2.9077e+00, 2.0675e+00, 0.0000e+00, ..., 0.0000e+00, 9.1627e-01, 8.6342e-01], [4.7475e+00, 3.7969e+00, 1.4891e+00, ..., 2.4889e-01, 2.4533e+00, 2.8074e+00], [2.3529e+00, 2.9757e+00, 2.3051e+00, ..., 9.2457e-01, 1.9125e+00, 1.6066e+00]], [[0.0000e+00, 0.0000e+00, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [0.0000e+00, 0.0000e+00, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [0.0000e+00, 0.0000e+00, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], ..., [0.0000e+00, 0.0000e+00, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [0.0000e+00, 0.0000e+00, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [0.0000e+00, 0.0000e+00, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00]], ..., [[0.0000e+00, 0.0000e+00, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [0.0000e+00, 0.0000e+00, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [0.0000e+00, 0.0000e+00, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], ..., [0.0000e+00, 0.0000e+00, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [0.0000e+00, 0.0000e+00, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [0.0000e+00, 0.0000e+00, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00]], [[0.0000e+00, 0.0000e+00, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [0.0000e+00, 0.0000e+00, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [0.0000e+00, 0.0000e+00, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], ..., [0.0000e+00, 0.0000e+00, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [1.1057e+00, 2.5535e-01, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [0.0000e+00, 0.0000e+00, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00]], [[8.2280e-02, 2.7121e+00, 3.0622e+00, ..., 2.3801e+00, 4.3697e+00, 1.9868e+00], [2.6579e+00, 5.2237e+00, 4.1553e+00, ..., 4.4512e+00, 5.2473e+00, 3.5402e+00], [0.0000e+00, 2.0596e+00, 4.3195e-01, ..., 4.8868e-01, 2.0991e-01, 0.0000e+00], ..., [4.6866e+00, 6.1409e+00, 6.4256e-01, ..., 0.0000e+00, 2.2996e+00, 1.2674e+00], [5.3775e+00, 6.8193e+00, 3.7130e+00, ..., 4.2376e+00, 4.7349e+00, 3.6980e+00], [0.0000e+00, 4.8565e-01, 1.4950e-01, ..., 2.2111e+00, 1.1456e+00, 5.6986e-01]]]], grad_fn=<MaxPool2DWithIndicesBackward>)} :List trace inputs must have elements 
(toTraceableIValue at /opt/conda/conda-bld/pytorch_1573049304260/work/torch/csrc/jit/pybind_utils.h:286) frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x47 (0x7f3cc8476687 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/lib/python3.6/site-packages/torch/lib/libc10.so) frame #1: <unknown function> + 0x4f9175 (0x7f3cfdeb6175 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/lib/python3.6/site-packages/torch/lib/libtorch_python.so) frame #2: <unknown function> + 0x5731e2 (0x7f3cfdf301e2 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/lib/python3.6/site-packages/torch/lib/libtorch_python.so) frame #3: <unknown function> + 0x58a724 (0x7f3cfdf47724 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/lib/python3.6/site-packages/torch/lib/libtorch_python.so) frame #4: <unknown function> + 0x206b86 (0x7f3cfdbc3b86 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/lib/python3.6/site-packages/torch/lib/libtorch_python.so) frame #5: _PyCFunction_FastCallDict + 0x154 (0x55b2047adc54 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #6: <unknown function> + 0x199c0e (0x55b204835c0e in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #7: _PyEval_EvalFrameDefault + 0x30a (0x55b20485875a in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #8: <unknown function> + 0x192e66 (0x55b20482ee66 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #9: <unknown function> + 0x193e73 (0x55b20482fe73 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #10: <unknown function> + 0x199b95 (0x55b204835b95 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #11: _PyEval_EvalFrameDefault + 0x30a (0x55b20485875a in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #12: <unknown function> + 0x192e66 (0x55b20482ee66 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #13: <unknown function> + 0x193e73 (0x55b20482fe73 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #14: <unknown function> + 0x199b95 (0x55b204835b95 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #15: _PyEval_EvalFrameDefault + 0x30a (0x55b20485875a in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #16: PyEval_EvalCodeEx + 0x329 (0x55b2048309b9 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #17: PyEval_EvalCode + 0x1c (0x55b20483175c in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #18: <unknown function> + 0x1ba167 (0x55b204856167 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #19: _PyCFunction_FastCallDict + 0x91 (0x55b2047adb91 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #20: <unknown function> + 0x199abc (0x55b204835abc in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #21: _PyEval_EvalFrameDefault + 0x30a (0x55b20485875a in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #22: _PyGen_Send + 0x256 (0x55b204838be6 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #23: _PyEval_EvalFrameDefault + 0x144f (0x55b20485989f in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #24: _PyGen_Send + 0x256 (0x55b204838be6 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #25: _PyEval_EvalFrameDefault + 0x144f (0x55b20485989f in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #26: _PyGen_Send + 0x256 
(0x55b204838be6 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #27: _PyCFunction_FastCallDict + 0x115 (0x55b2047adc15 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #28: <unknown function> + 0x199abc (0x55b204835abc in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #29: _PyEval_EvalFrameDefault + 0x30a (0x55b20485875a in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #30: <unknown function> + 0x193c5b (0x55b20482fc5b in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #31: <unknown function> + 0x199b95 (0x55b204835b95 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #32: _PyEval_EvalFrameDefault + 0x30a (0x55b20485875a in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #33: <unknown function> + 0x193c5b (0x55b20482fc5b in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #34: <unknown function> + 0x199b95 (0x55b204835b95 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #35: _PyEval_EvalFrameDefault + 0x30a (0x55b20485875a in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #36: <unknown function> + 0x192e66 (0x55b20482ee66 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #37: _PyFunction_FastCallDict + 0x3d8 (0x55b204830598 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #38: _PyObject_FastCallDict + 0x26f (0x55b2047ae01f in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #39: _PyObject_Call_Prepend + 0x63 (0x55b2047b2aa3 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #40: PyObject_Call + 0x3e (0x55b2047ada5e in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #41: _PyEval_EvalFrameDefault + 0x19e7 (0x55b204859e37 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #42: <unknown function> + 0x193136 (0x55b20482f136 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #43: <unknown function> + 0x193ed6 (0x55b20482fed6 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #44: <unknown function> + 0x199b95 (0x55b204835b95 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #45: _PyEval_EvalFrameDefault + 0x10cc (0x55b20485951c in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #46: <unknown function> + 0x19c764 (0x55b204838764 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #47: _PyCFunction_FastCallDict + 0x91 (0x55b2047adb91 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #48: <unknown function> + 0x199abc (0x55b204835abc in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #49: _PyEval_EvalFrameDefault + 0x30a (0x55b20485875a in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #50: <unknown function> + 0x193136 (0x55b20482f136 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #51: <unknown function> + 0x193ed6 (0x55b20482fed6 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #52: <unknown function> + 0x199b95 (0x55b204835b95 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #53: _PyEval_EvalFrameDefault + 0x30a (0x55b20485875a in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #54: <unknown function> + 0x19c764 (0x55b204838764 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #55: 
_PyCFunction_FastCallDict + 0x91 (0x55b2047adb91 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #56: <unknown function> + 0x199abc (0x55b204835abc in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #57: _PyEval_EvalFrameDefault + 0x30a (0x55b20485875a in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #58: <unknown function> + 0x193136 (0x55b20482f136 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #59: <unknown function> + 0x193ed6 (0x55b20482fed6 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #60: <unknown function> + 0x199b95 (0x55b204835b95 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #61: _PyEval_EvalFrameDefault + 0x30a (0x55b20485875a in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #62: <unknown function> + 0x19c764 (0x55b204838764 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) frame #63: _PyCFunction_FastCallDict + 0x91 (0x55b2047adb91 in /home/ubuntu/anaconda3/envs/Crowd_Detection_mayub/bin/python) ``` Thanks ! cc @suo
oncall: jit,triaged
medium
Critical
607,952,618
pytorch
Add BufferDict container
## 🚀 Feature A `torch.nn.BufferDict` container that mirrors `ParameterDict`. ## Motivation We sometimes work with a lot of buffers, and it would be nice to have a convenient way of dealing with those, similar to `ParameterDict`, but with the values being buffers (plain tensors) rather than `torch.nn.Parameter` instances. Currently `ParameterDict` specifically uses `register_parameter`, so it's not straightforward to use it transparently for buffers. ## Pitch Implement `BufferDict` by mirroring `ParameterDict`. Like so: #37385 ## Alternatives Manually handle tons of buffers. That gets clunky quickly. cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
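A minimal sketch of what such a container could look like, built on `register_buffer` (the method set and names below are illustrative assumptions, not the implementation proposed in #37385):

```python
import torch
from torch import nn

class BufferDict(nn.Module):
    """Dict-like container for buffers, mirroring the feel of ParameterDict."""

    def __init__(self, buffers=None):
        super().__init__()
        if buffers is not None:
            for name, tensor in buffers.items():
                self.register_buffer(name, tensor)

    def __getitem__(self, key):
        return self._buffers[key]

    def __setitem__(self, key, tensor):
        # register_buffer makes the tensor follow .to()/.cuda() and state_dict().
        self.register_buffer(key, tensor)

    def __len__(self):
        return len(self._buffers)

    def __iter__(self):
        return iter(self._buffers)

    def keys(self):
        return self._buffers.keys()

    def items(self):
        return self._buffers.items()


running_stats = BufferDict({"mean": torch.zeros(10), "var": torch.ones(10)})
print(list(running_stats.keys()))  # ['mean', 'var']
```

Buffers registered this way move with the module and appear in `state_dict()`, which is most of what `ParameterDict` provides for parameters.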
feature,module: nn,triaged,actionable
medium
Major
607,974,917
flutter
Support iPadOS pointer interactions
This is an umbrella issue that tracks the support for iPadOS pointer interactions. However, features beyond basic cursor type and hover effect will likely have a low priority for a while. ## Reference - Apple's [Design guidelines](https://developer.apple.com/design/human-interface-guidelines/ios/user-interaction/pointers/) - UIKit's [Documentation](https://developer.apple.com/documentation/uikit/pointer_interactions) ## TODO - Support basic mouse features - [ ] Support background color change in hover effect - [ ] Support basic cursor styles (those without magnetism) on hoverable regions - Support region-aware features - [ ] Support highlight effect - [ ] Support lift effect - [ ] Support trajectory prediction ## Related issues - Previous discussion: https://github.com/flutter/flutter/issues/52912 - Scroll wheels: https://github.com/flutter/flutter/issues/54663 - General mouse cursor: https://github.com/flutter/flutter/issues/31952
c: new feature,platform-ios,framework,a: fidelity,f: cupertino,f: gestures,a: desktop,a: mouse,P3,team-design,triaged-design
medium
Critical
607,981,441
pytorch
The PyTorch graph lacks common names for its nodes
## 🐛 Bug ![image](https://user-images.githubusercontent.com/44078961/80440384-401dec00-893b-11ea-8a79-7c3838019d23.png) Many nodes do not have a COMMON name. It is difficult to figure out what each one is. ## To Reproduce Steps to reproduce the behavior: 1. Use a forward function with a conv2d call and `jit.trace` it into a graph; you will get a graph like the one below. ## Expected behavior I believe that nodes named with numbers should instead be named with words, like bias and weight. ## Environment The graph was built with the TensorBoard example of PyTorch; I think plain PyTorch behaves the same. You can get the script and run it with: ``` wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py # For security purposes, please check the contents of collect_env.py before running it. python collect_env.py ``` - PyTorch Version (e.g., 1.0): - OS (e.g., Linux): - How you installed PyTorch (`conda`, `pip`, source): - Build command you used (if compiling from source): - Python version: - CUDA/cuDNN version: - GPU models and configuration: - Any other relevant information: ## Additional context ```shell graph(%self : ClassType<LSTM>, %input : Float(1, 1, 3), %6 : (Float(1, 1, 3), Float(1, 1, 3))): %1 : Tensor = prim::GetAttr[name="weight_ih_l0"](%self) %2 : Tensor = prim::GetAttr[name="weight_hh_l0"](%self) %3 : Tensor = prim::GetAttr[name="bias_ih_l0"](%self) %4 : Tensor = prim::GetAttr[name="bias_hh_l0"](%self) %hx.1 : Float(1, 1, 3), %hx : Float(1, 1, 3) = prim::TupleUnpack(%6) %48 : Tensor[] = prim::ListConstruct(%hx.1, %hx), scope: LSTM %49 : Tensor[] = prim::ListConstruct(%1, %2, %3, %4), scope: LSTM ... ``` This is the graph of the ScriptModule built by `jit.trace`. It seems the nodes use numbers as their names. Maybe we could use the `name="weight_ih_l0"` attribute to name the node. cc @suo
module: bootcamp,feature,module: tensorboard,oncall: visualization,days
low
Critical
608,003,134
godot
AnimationPlayer doesn't show the keyframe icon for AnimatedSprite, and consequently that node hides the keyframe icon for the rest of the nodes
**Godot version:** 3.2.1 **OS/device including version:** Win 10 v1909 **Issue description:** When creating a new track on the AnimationPlayer and switching to an AnimatedSprite node, the keyframe icons do not appear. Switching to any other node will hide the keyframe icon for that node as well. The only way to restore the icon is to select the AnimationPlayer node and then select any node other than the AnimatedSprite node. **Steps to reproduce:** 1. Create a simple node 2. Add an AnimatedSprite child node 3. Add an AnimationPlayer child node (to the node or the AnimatedSprite) 4. Create a track 5. Select the AnimatedSprite (no keyframe icon) 6. Select the Node (no keyframe icon) 7. Reselect the AnimationPlayer node 8. Select the Node (keyframe icon is showing) Video capture of the bug: https://youtu.be/wEoUN-Cr7Lo **Minimal reproduction project:** Video capture of the bug: https://youtu.be/wEoUN-Cr7Lo
bug,topic:editor
low
Critical
608,003,528
godot
AnimatedSprite node doesn't show keyframe icon and disables the icons for any other node.
**Godot version:** 3.2.1 **OS/device including version:** Win 10 v1909 **Issue description:** When creating a new track on the AnimationPlayer and switching to an AnimatedSprite node, the keyframe icons do not appear. Switching to any other node will hide the keyframe icon for that node as well. The only way to restore the icon is to select the AnimationPlayer node and then select any node other than the AnimatedSprite node. **Steps to reproduce:** 1. Create a simple node 2. Add an AnimatedSprite child node 3. Add an AnimationPlayer child node (to the node or the AnimatedSprite) 4. Create a track 5. Select the AnimatedSprite (no keyframe icon) 6. Select the Node (no keyframe icon) 7. Reselect the AnimationPlayer node 8. Select the Node (keyframe icon is showing) Video capture of the bug: https://youtu.be/wEoUN-Cr7Lo **Minimal reproduction project:** Video capture of the bug: https://youtu.be/wEoUN-Cr7Lo
bug,topic:editor
low
Critical
608,042,021
flutter
Flutter Textstyle class ignores invalid fontFamily and fontSize
Everything is recently installed. ``` Android 3.63(April 13, 2020) C:\Users\allen>flutter --version Flutter 1.17.0-3.2.pre • channel beta • https://github.com/flutter/flutter.git Framework • revision 2a7bc389f2 (6 days ago) • 2020-04-21 20:34:20 -0700 Engine • revision 4c8c31f591 Tools • Dart 2.8.0 (build 2.8.0-dev.20.10) ======================================================= ``` The code snippet: ``` // Calling Widget child: Text("The quick brown fox jumps over the lazy dog", // Center align text textAlign: TextAlign.center, // set a text style which defines a custom font style: myTextStyle()), // Function TextStyle myTextStyle( {myColor = Colors.blueAccent, myFontFamily='nofont', myFontWeight = FontWeight.w400, myFontSize = -1.0 } ) { return TextStyle( // set color of text color: myColor, // set the font family as defined in pubspec.yaml fontFamily: myFontFamily, // set the font weight fontWeight: myFontWeight, // set the font size fontSize: myFontSize); } ``` ================================================== In either scenario, the console output is normal ``` Performing hot restart... Syncing files to device SM N910V... Restarted application in 8,496ms. ================================================== ``` Two issues here; I would be happy to file a separate bug if asked. Issue one: If you enter an invalid (misspelled) fontFamily, the output on the Android phone uses some default font, I assume 'Roboto'. Expected behavior: At a minimum an error message to the console, perhaps a runtime error. Testing consideration: You could not run automated tests since there would be string output, just not the right one. A plain text output would not be noticed by testing right away. =========================================================== Issue two: If you enter a negative value, like -36.0, for fontSize, I get no text at all. Expected behavior: At a minimum an error message to the console, perhaps a runtime error. Testing consideration: Without a console message, you couldn't run automated tests. ========================================================== The answer to the question "Why would anyone do that?" Answer: Because they can. ========================================================= I have not tried Flutter "lint"; maybe I should.
framework,engine,a: typography,a: error message,has reproducible steps,P2,found in release: 3.3,found in release: 3.7,team-engine,triaged-engine
low
Critical
608,109,277
node
http: Reduce API surface
I would like to suggest that [`complete`](https://nodejs.org/dist/latest-v14.x/docs/api/http.html#http_message_complete) and [`aborted`](https://nodejs.org/dist/latest-v14.x/docs/api/http.html#http_message_aborted) are unnecessary API surface and should at least be doc-deprecated. The user can keep track of this state themselves by registering an `'aborted'` handler. In particular, the exact semantics of e.g. `complete` are slightly unclear and might cause more harm than good.
http
medium
Major
608,115,992
pytorch
3D grouped & depthwise convolution very slow on backward pass
Hi, It seems that 3D grouped & depthwise convolution is very slow on the **backward** pass. Backward pass on depthwise convolution takes about 10 times the time of a standard 3D convolution's forward pass. I understand cudnn does not yet support fp32 depthwise convolution (ie forward pass is slower than standard convolution by about 2 times or so for 2D convolution). However, this problem seems to be significantly worse for 3D convolution's _backward_ pass. Reading cudnn docs, it seems 3D depthwise isn't supported. Any suggestions? ```python import torch import torch.nn as nn import time # Timing code taken from https://github.com/facebookresearch/pycls/blob/master/pycls/utils/benchmark.py class Timer(object): def __init__(self): self.total_time = 0.0 self.calls = 0 self.start_time = 0.0 self.diff = 0.0 self.average_time = 0.0 def tic(self): self.start_time = time.time() def toc(self): self.diff = time.time() - self.start_time self.total_time += self.diff self.calls += 1 self.average_time = self.total_time / self.calls def compute_precise_time(model, input_size, loss_fun, batch_size, num_iter=3, label_dtype=torch.int64): """Computes precise time.""" # Generate a dummy mini-batch inputs = torch.rand(batch_size, *input_size) labels = torch.zeros(batch_size, dtype=label_dtype) # Copy the data to the GPU inputs = inputs.cuda(non_blocking=False) labels = labels.cuda(non_blocking=False) # Compute precise time fw_timer = Timer() bw_timer = Timer() model.train() for _cur_iter in range(num_iter): # Forward fw_timer.tic() preds = model(inputs) loss = loss_fun(preds, labels) torch.cuda.synchronize() fw_timer.toc() # Backward bw_timer.tic() loss.backward() torch.cuda.synchronize() bw_timer.toc() torch.cuda.synchronize() print({"prec_train_fw_time": fw_timer.average_time, "prec_train_bw_time": bw_timer.average_time}) class GroupedNet3D(nn.Module): def __init__(self, data_size, layer_channel, groups): super(GroupedNet3D, self).__init__() if groups is None: groups = layer_channel # depth wise self.f = nn.Sequential( nn.Conv3d(data_size[0], layer_channel, 3, bias=False), nn.Conv3d(layer_channel, layer_channel, 3, groups=groups, bias=False), nn.Conv3d(layer_channel, layer_channel, 3, groups=groups, bias=False), nn.Conv3d(layer_channel, layer_channel, 3, groups=groups, bias=False), nn.Conv3d(layer_channel, layer_channel, 3, groups=groups, bias=False), nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(layer_channel, 1) ) def forward(self, x): return self.f(x) loss_fun = nn.CrossEntropyLoss().cuda() data_size = (1, 50, 50, 50) batch_size = 16 channels = 32 Model = GroupedNet3D # standard 3D convolution model = Model(data_size, channels, 1).cuda() compute_precise_time(model, data_size, loss_fun, batch_size) # {'prec_train_fw_time': 0.1468369960784912, 'prec_train_bw_time': 0.3601357142130534} # grouped convolution model = Model(data_size, channels, 8).cuda() compute_precise_time(model, data_size, loss_fun, batch_size) # {'prec_train_fw_time': 0.3155516783396403, 'prec_train_bw_time': 1.0136785507202148} # depth wise convolution model = Model(data_size, channels, None).cuda() compute_precise_time(model, data_size, loss_fun, batch_size) # {'prec_train_fw_time': 0.41037146250406903, 'prec_train_bw_time': 1.7994076410929363} torch.backends.cudnn.version() # 7603 torch.version.cuda # '10.1' torch.__version__ # '1.4.0' ``` cc @ngimel @csarofeen @ptrblck
module: cudnn,module: cuda,triaged
low
Major
608,156,720
angular
A destroyed view can be interacted with in ivy
In Angular with ViewEngine a destroyed view can't be interacted with (ex. one can't trigger change detection on such views). With ivy it is possible to call methods on a destroyed view and there is no error message / logs indicating that a view was destroyed. Here is [the test exposing difference in VE vs. ivy behaviour](https://github.com/pkozlowski-opensource/angular/commit/e972be659827be252fec3b6c20c4986161de253c): ```typescript it('should prevent usage of a destroyed fixture', () => { @Component({selector: 'test-cmpt', template: ``}) class TestCmpt { } TestBed.configureTestingModule({declarations: [TestCmpt]}); const fixture = TestBed.createComponent(TestCmpt); fixture.destroy(); expect(() => {fixture.detectChanges()}) .toThrowError(/Attempt to use a destroyed view: detectChanges/); }); ```
freq1: low,area: core,state: confirmed,core: dynamic view creation,type: confusing,P4
low
Critical
608,159,908
ant-design
Slider component: onAfterChange event firing issue
- [ ] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate. ### Reproduction link [![Edit on CodeSandbox](https://codesandbox.io/static/img/play-codesandbox.svg)](https://codesandbox.io/s/antd-reproduction-template-sldlq) ### Steps to reproduce Click the slider to change its value; onAfterChange fires once and the Slider is now focused. Then click somewhere outside the page (for example the Windows taskbar) so the Slider loses focus. Then click anywhere on the page, and onAfterChange fires again. ### What is expected? onAfterChange should only fire when the Slider is clicked. ### What is actually happening? onAfterChange fires even though the Slider was not clicked. | Environment | Info | |---|---| | antd | 3.25.0 | | React | 16.13.1 | | System | Windows10 | | Browser | Chrome63.0.3239.132 |
🐛 Bug,Inactive
low
Minor
608,167,055
pytorch
torch.cdist() implementation without using contiguous() calls
## 🚀 Feature A decent torch.cdist() implementation that does not rely on contiguous() calls, which usually result in excessive GPU memory usage. ## Motivation See the original problem at https://discuss.pytorch.org/t/understanding-cdist-function/76296/12?u=promach ## Pitch Modify [these two lines](https://github.com/pytorch/pytorch/blob/a836c4ca78b72ecc8e0664e1b684af64ce83be42/aten/src/ATen/native/Distance.cpp#L78-L79) inside the torch.cdist() implementation ## Alternatives I found some other implementations of cdist() ([code 1](https://github.com/pytorch/pytorch/pull/25799#issuecomment-529021810) and [code 2](https://github.com/pytorch/pytorch/issues/15253#issuecomment-491467128)), but they still consume an excessive amount of GPU memory. ## Additional context None cc @VitalyFedyunin @ngimel
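For reference, the usual workaround in these threads is to compute squared distances with a matrix multiplication instead of materialising the broadcasted difference tensor; the sketch below only illustrates that idea (it is not the proposed change to Distance.cpp) and trades the large intermediate for a little numerical error:

```python
import torch

def cdist_matmul(x1, x2):
    """Pairwise Euclidean distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b.

    Avoids building the (n, m, d) difference tensor that a broadcasting-based
    implementation (plus extra .contiguous() copies) would need.
    """
    x1_sq = x1.pow(2).sum(dim=-1, keepdim=True)      # (n, 1)
    x2_sq = x2.pow(2).sum(dim=-1, keepdim=True).t()  # (1, m)
    sq_dist = x1_sq + x2_sq - 2.0 * x1 @ x2.t()      # (n, m)
    return sq_dist.clamp_min(0).sqrt()               # clamp guards tiny negatives

a = torch.randn(1000, 128)
b = torch.randn(2000, 128)
print((cdist_matmul(a, b) - torch.cdist(a, b)).abs().max())  # small numerical error
```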
module: performance,triaged,enhancement,module: distance functions
low
Minor
608,167,791
pytorch
Compatibility of subset dataset with disabled batch sampling
I think there is a compatibility issue with disabled batch sampling and subset dataset The use-case - define custom batch sampling, and split the dataset using PyTorch split utility function Here's a minimal working example ``` self.train_dataset, self.val_dataset, self.test_dataset = torch.utils.data.random_split( self.dataset, [100, 100, 100]) loader = DataLoader( dataset=self.train_dataset, batch_size=None, batch_sampler=None, sampler=BatchSampler( SequentialSampler(dataset), batch_size=self.hparams.batch_size, drop_last=False), num_workers=self.hparams.num_data_workers, ) ``` And when iterating the subset datasets this is the error ``` Exception has occurred: TypeError list indices must be integers or slices, not list File "/path/utils/data/dataset.py", line 257, in __getitem__ return self.dataset[self.indices[idx]] File "/path/utils/data/_utils/fetch.py", line 46, in fetch data = self.dataset[possibly_batched_index] File "/path/utils/data/dataloader.py", line 385, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "/path/utils/data/dataloader.py", line 345, in __next__ data = self._next_data() File "/path/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 251, in _evaluate for batch_idx, batch in enumerate(dataloader): File "/path/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 843, in run_pretrain_routine False) File "/path/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 477, in single_gpu_train self.run_pretrain_routine(model) File "/path/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 704, in fit self.single_gpu_train(model) File "/path/train.py", line 152, in main_train trainer.fit(model) File "/path/train.py", line 66, in main main_train(model_class_pointer, hyperparams, logger) File "/path/train.py", line 161, in <module> main() File "/path/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/path/lib/python3.7/runpy.py", line 96, in _run_module_code mod_name, mod_spec, pkg_name, script_name) File "/path/lib/python3.7/runpy.py", line 263, in run_path pkg_name=pkg_name, script_name=fname) File "/path/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/path/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) ``` As the self.indices of the `subset` object is a simple python list. Please refer to the forum post, where @ptrblck approves reproducing the bug and offering a workaround. cc @SsnL
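One possible workaround, sketched below purely as an illustration (it is not necessarily the workaround from the linked forum thread): when automatic batching is disabled (`batch_size=None`), the `BatchSampler` hands `__getitem__` a whole list of indices, so a subset can be taught to accept that.

```python
from torch.utils.data import Subset

class BatchCompatibleSubset(Subset):
    """Subset that also accepts a list of indices, as produced by BatchSampler
    when automatic batching is disabled (batch_size=None)."""

    def __getitem__(self, idx):
        if isinstance(idx, list):
            return [self.dataset[self.indices[i]] for i in idx]
        return self.dataset[self.indices[idx]]

# Hypothetical usage: wrap the split indices instead of using random_split directly.
# train_set = BatchCompatibleSubset(dataset, train_indices)
```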
module: dataloader,triaged
low
Critical
608,170,112
pytorch
Reset a `torch.optim.Optimizer`
## 🚀 Feature A 'reset_state' method that resets the `state` of an optimizer. ## Motivation Sometimes we may need to reuse an optimizer to train the same model on another dataset. Some optimizers, though, keep track of some additional stats (e.g. Adam's moving averages), and currently there is no method to reset these stats unless we expose some internal attributes of the class. Check [this question](https://discuss.pytorch.org/t/reset-optimizer-stats/78516/2) in the forum for more info. ## Pitch My proposal consists of adding a method `reset_state(self, state=None)`. If `state` is `None`, we just reset the state, i.e. `self.state = defaultdict(dict)`; otherwise `self.state = state`. ## Alternatives Either you manually retrieve all the parameters of an optimizer and create a new one, or you manually reset the `state` attribute. In both cases, we're exposing some internals of the optimizer. cc @vincentqb
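A small sketch of what this looks like today versus the proposed method (the `reset_state` calls at the end only show the shape proposed in the pitch, not an existing API):

```python
from collections import defaultdict

import torch

model = torch.nn.Linear(10, 2)
opt = torch.optim.Adam(model.parameters())

# One step so Adam populates its per-parameter state (step count, moving averages).
model(torch.randn(4, 10)).sum().backward()
opt.step()
print(len(opt.state))  # > 0

# Current workaround: reach into the internals and wipe the state dict.
opt.state = defaultdict(dict)

# Proposed API (not implemented):
# opt.reset_state()             # clear the state
# opt.reset_state(saved_state)  # restore a previously captured state
```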
module: optimizer,triaged
medium
Critical
608,225,309
rust
Unhelpful error message, and endless loop in cargo, when a procedural macro overflows its stack
Using `rustc 1.44.0-nightly (b2e36e6c2 2020-04-22)` on Windows 10. Create a proc-macro crate `crate0` with these contents: extern crate proc_macro; use proc_macro::TokenStream; #[proc_macro] pub fn overflow_stack(item: TokenStream) -> TokenStream { fn recurse(i: u64) { if i < std::u64::MAX { recurse(i + 1); } } recurse(0); item } And a binary crate `crate1` with these contents: #![feature(proc_macro_hygiene)] fn main() { crate0::overflow_stack!(); } `cargo build` for `crate1` will produce this error message: thread 'rustc' has overflowed its stack error: could not compile `crate1`. Caused by: process didn't exit successfully: `rustc {...a long list of arguments follows...}` When a stack-overflow bug is introduced into a procedural macro by making changes to one of the procedural macro's local dependencies, this error message is not helpful for localising the bug. (I experienced this today, and wasted some time thinking that I had triggered an ICE.) I also experienced an endless loop in `cargo build` triggered by that same stack overflow, but I haven't been able to isolate a test-case.
I-crash,C-enhancement,A-diagnostics,A-macros,T-compiler,D-confusing,D-papercut,D-verbose,D-terse,A-proc-macros
low
Critical
608,231,282
go
cmd/compile: experiment with more integer comparison optimizations
In Go 1.15 we've added support for integer-in-range optimizations to the SSA backend. This same infrastructure (mostly in https://github.com/golang/go/blob/master/src/cmd/compile/internal/ssa/fuse_comparisons.go) can quite easily be extended to perform other potential control flow transformations. I've opened this issue in order to track them and get more ideas. Note: these transformations may or may not be worthwhile optimizations. Disjunctions (||): | Before | After | Comments | CL (if applicable) | |-|-|-|-| | `x == 1 \|\| x == 2` | `uint(x - 1) <= 1` | Integer range | [CL 224878](https://golang.org/cl/224878)| | `x == 4 \|\| x == 6` | `x\|2 == 6` | Power of 2 difference | [CL 471815](https://golang.org/cl/471815) | | `x != 0 \|\| y != 0` | `x\|y != 0` | Neq with 0 | | | `x < 0 \|\| y < 0` | `x\|y < 0` | Less with 0 | | | `x >= 0 \|\| y >= 0` | `x&y >= 0` | Geq with 0 | | Conjunctions (&&): | Before | After | Comments | CL (if applicable) | |-|-|-|-| | `x != 1 && x != 2` | `uint(x - 1) > 1` | Integer range | [CL 224878](https://golang.org/cl/224878)| | `x != 4 && x != 6` | `x\|2 != 6` | Power of 2 difference | [CL 471815](https://golang.org/cl/471815) | | `x == 0 && y == 0` | `x\|y == 0` | Eq with 0 | | | `x < 0 && y < 0` | `x&y < 0` | Less with 0 | | | `x >= 0 && y >= 0` | `x\|y >= 0` | Geq with 0 | |
Performance,NeedsInvestigation,compiler/runtime
low
Minor
608,273,668
godot
Inspector: Click-drag to increase or decrease property values does not work with tablet/pen
**Godot version:** 3.2.1 **OS/device including version:** Win64, Wacom Intuos **Issue description:** With tablet and pen the values get stuck: ![click_drag_values_bug1](https://user-images.githubusercontent.com/47016402/80486484-871bd980-895b-11ea-98b5-8f0e1e110c4c.gif) With mouse it works fine: ![click_drag_values_bug2](https://user-images.githubusercontent.com/47016402/80486498-8d11ba80-895b-11ea-8978-0089c25aa6e8.gif)
bug,topic:editor,confirmed,usability,topic:input
low
Critical
608,274,785
vscode
NVDA doesn't read the status line
Testing https://github.com/microsoft/vscode/issues/96265 All NVDA reads is: `No status line found`.
bug,upstream,accessibility,upstream-issue-linked
low
Major
608,287,975
node
Different configurations watched same file returned same instance
* **Version**: * **Platform**: all * **Subsystem**: fs When two different configurations watch the same file, the second configuration does not take effect. ### What steps will reproduce the bug? 1. The process will exit. ``` js const fs = require('fs'); fs.watchFile('a.js', {persistent: false} , () => { console.log(1) }); fs.watchFile('a.js', () => { console.log(2) }); ``` 2. The process does not exit. ``` js const fs = require('fs'); fs.watchFile('a.js', () => { console.log(2) }); fs.watchFile('a.js', {persistent: false} , () => { console.log(1) }); ``` ### How often does it reproduce? Is there a required condition? Always; no special condition is required. ### What is the expected behavior? Neither 1 nor 2 will exit the process. ### What do you see instead? 1. The process will exit. ### Additional information When a file is watched, the StatWatcher instance is keyed by filename, and the same instance is returned each time the same file is watched, so the second call's options are ignored.
fs
low
Critical
608,335,489
godot
Tilemap: set_cellv(map_point, -1) does not work with Ysort
**Godot version:** 3.2.1 **OS/device including version:** Wine64 **Issue description:** ![ysort_set_tile_bug](https://user-images.githubusercontent.com/47016402/80494673-37dba600-8967-11ea-9490-47a5037d24ee.gif) If the Tilemap in question has the **ysort** property enabled, only the first tile will be set to -1. As soon as ysort is disabled, set_cellv works as expected. In fact, the Tilemap does not seem to ever leave the Area2D of the player, even though it should when moving from tile to tile. So this might be an Area issue too. Player Code: ``` func _on_Area2D_body_entered(body): if "yellow_tile" in body.name: $Label.text = "is in tile" yellow_tile = body var map_point = yellow_tile.world_to_map(global_position + Vector2(16,16)) #-16,-16 = yellowtile position yellow_tile.set_cellv(map_point, -1, false, true) yellow_tile.call_deferred("update_dirty_quadrants") ``` Thanks to user Grandro on Discord, who figured this out! **Steps to reproduce:** Move the player onto a Tilemap tile. **Minimal reproduction project:** [Grid_set_tile_ysort_bug.zip](https://github.com/godotengine/godot/files/4546130/Grid_set_tile_ysort_bug.zip)
bug,topic:physics
low
Critical
608,410,810
pytorch
[docs] Unclear return type of torch.randint and extra comma in arg spec
1. Note two commas in function spec 2. It's unclear if default output is `torch.int64` (in practice it is) or `torch.get_default_dtype()` which is usually `torch.float32`. It's unclear what `torch.set_default_tensor_type()` reference has to do with `torch.randint`'s return type since by default it returns an integral tensor https://pytorch.org/docs/master/torch.html?highlight=torch%20randint#torch.randint ![image](https://user-images.githubusercontent.com/1041752/80506047-62346000-8975-11ea-99f7-81c7832a1cfb.png)
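A quick check of the behaviour being described (this is an observation of current behaviour, not documentation): the default dtype is `torch.int64`, independent of the float default dtype.

```python
import torch

t = torch.randint(0, 10, (3,))
print(t.dtype)                    # torch.int64 in practice
print(torch.get_default_dtype())  # torch.float32, i.e. unrelated to the result above

# Passing dtype explicitly overrides the integral default:
f = torch.randint(0, 10, (3,), dtype=torch.float32)
print(f.dtype)                    # torch.float32
```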
module: docs,triaged
low
Minor
608,426,841
TypeScript
Missing autocomplete with optional chaining operator
**TypeScript Version:** 3.8 and 3.9 nightly <!-- Search terms you tried before logging this (so others can find this issue more easily) --> **Search Terms:** optional chaining, autocomplete, intellisense **Code** The `BaseDataClass` experiences the issue here, most likely due to some of the funkiness with generic types. (Other classes extend `BaseDataClass` in practice, which is why it has these generics.) I recognize this example is pretty unusual, but I'm reporting the optional chaining operator issue specifically in case it is helpful for improving autocomplete for the operator in general. ```ts export interface BaseClass<TProps> { getValue<TKey extends keyof TProps>(propertyName: TKey): TProps[TKey]; } export class BaseClass<TProps> { } interface IBaseData { BaseDataProp: string; } export interface BaseDataClassProps<TData extends IBaseData> { Data?: TData; } export class BaseDataClass< TData extends IBaseData = IBaseData, TProps extends BaseDataClassProps<TData> = BaseDataClassProps<TData>> extends BaseClass<TProps & BaseDataClassProps<TData>> { public exampleMethod(): void { // this.getValue("Data")!. // autocompletes to: this.getValue("Data")!.BaseDataProp; // this.getValue("Data")?. // does not autocomplete to: this.getValue("Data")?.BaseDataProp; } } ``` **Expected behavior:** In the `exampleMethod` above, typing `this.getValue("Data")?.` and then requesting autocomplete (Ctrl+Enter) should give the same suggestions as given for `this.getValue("Data")!.` **Actual behavior:** No suggestions given for `this.getValue("Data")?.` **Playground Link:** [Playground Link](https://www.typescriptlang.org/play/?ts=3.9.0-dev.20200427&ssl=34&ssc=1&pln=1&pc=1#code/KYDwDg9gTgLgBASwHY2FAZgQwMbDgIUwGdgBhAG2KIB4AVABSgjCID4BYAKAG8u5+4Ac2AwAapnIBXYHQDSwAJ5xQqJABMicANaKI6OAyYtWACjBG0MBQDlMAW2AAuA-IUBKZ4eZEA2rVcAugDcXAC+XKCQsHDYlESahCQUVHSM3qxw3HDhnFxcyKgYOHgAkonAACKYMJhcvJwAkOVVNWlgzkQwUMiCIZw5EeDQ8AVoWLgExJXVmMnxbTS0LZjKIKoacGVTyxw8XA3LAPyey30DnJHDMXEJ2zNzNPtLM6vrmlsky3AAvJvNMwAaJ4LV7AdS3T73G4LOg7H6TSE1B4w541Vi7BoqMEbcoPVJGTQAMgR0yR0IJsJm6Lq+zAkgARuQENhVvYwORgABZEQACwgahMHjgADcIAg1Pt6g0GgB6GVwGA8hBEAB0wjEEmkJgARMttW4AIQq-ay+WYSQwCDYCB2dkiYCaS2OE2K5VqkTiKTAHV6w0q-6tIx9E1yhVK1Xqz1a3UzfWHY2NU1wNQQB1wJAQeDmy3W20c1AKiDOxOuiMezXemM1OP+u6B5h9Bo5HJAA)
Bug,Domain: Completion Lists
low
Minor
608,451,627
rust
`rustc` should prefer statically linking native dependencies with unspecified kind if `crt-static` is enabled
The main goal of `+crt-static` (at least on non-Windows targets) is to produce self-contained statically linked executables. With this goal in mind `rustc` switches linking of 1) Rust crates and 2) libc to static if possible. (The `libc` part is done in the libraries through lazy `cfg`s though.) However, for native dependencies with unspecified kind `rustc` will still prefer the dynamic version of the library if both are available. Unfortunately, doing this is not entirely trivial because `-Bstatic -lmylib` won't just prefer the static version (e.g. `libmylib.a`), it will *require* it. So we need to check for the existence of `libmylib.a` first (in which directories exactly?) and then pass `-l:libmylib.a` instead of `-lmylib` if it exists. cc https://github.com/rust-lang/rust/pull/71586 https://github.com/rust-lang/rust/issues/39998
A-linkage,T-compiler,C-bug
low
Major
608,468,518
flutter
String.fromEnvironment without a const silently does the wrong thing in the VM
Since [44083](https://github.com/flutter/flutter/pull/44083) was merged, I would expect it to be working at least on dev, but I am not able to get it to work as expected. I set the channel to dev (flutter channel dev), then did an upgrade (flutter upgrade). Then I issued the following: flutter run -t lib/main_generic.dart --dart-define=ENV=DEV In my main_generic.dart I do this: var env = String.fromEnvironment("ENV", defaultValue: "PROD"); The env variable always returns PROD. Should this not be picked up and return DEV instead of defaulting to PROD? @jonahwilliams
engine,dependency: dart,customer: crowd,has reproducible steps,P2,found in release: 2.2,team-engine,triaged-engine
low
Critical
608,474,030
go
x/text/collate: unexpected result of KeyFromString()
<!-- Please answer these questions before submitting your issue. Thanks! For questions please use one of our forums: https://github.com/golang/go/wiki/Questions --> ### What version of Go are you using (`go version`)? <pre> $ go version 1.1.13 </pre> ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? <details><summary><code>go env</code> Output</summary><br><pre> $ go env ``` go env GO111MODULE="" GOARCH="amd64" GOBIN="" GOCACHE="/Users/bba/Library/Caches/go-build" GOENV="/Users/bba/Library/Application Support/go/env" GOEXE="" GOFLAGS="" GOHOSTARCH="amd64" GOHOSTOS="darwin" GONOPROXY="" GONOSUMDB="" GOOS="darwin" GOPATH="/Users/bba/.gvm/pkgsets/go1.13/global" GOPRIVATE="" GOPROXY="https://proxy.golang.org,direct" GOROOT="/Users/bba/.gvm/gos/go1.13" GOSUMDB="sum.golang.org" GOTMPDIR="" GOTOOLDIR="/Users/bba/.gvm/gos/go1.13/pkg/tool/darwin_amd64" GCCGO="gccgo" AR="ar" CC="clang" CXX="clang++" CGO_ENABLED="1" GOMOD="/Users/bba/text/go.mod" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/tg/tb44y2693qd2s6j53chrywyw0000gn/T/go-build299346317=/tmp/go-build -gno-record-gcc-switches -fno-common" ``` </pre></details> ### What did you do? <!-- If possible, provide a recipe for reproducing the error. A complete runnable program is good. A link on play.golang.org is best. --> https://play.golang.org/p/__hCo4kcE86 ### What did you expect to see? str = bytesToString(ctor.KeyFromString(buf, "L·")) println("", str) // 1711 The str will be "17110183" instead of "1711". ### What did you see instead? "1711"
NeedsInvestigation
low
Critical
608,498,658
terminal
About dialog appears abruptly, without animation
@DHowett-MSFT I think this PR introduced a new problem: the about dialog no longer opens and closes smoothly with an animation; it appears and disappears abruptly. Also, the dialog border color is greyish in the light theme instead of the accent color (maybe it's intended). _Originally posted by @AnuthaDev in https://github.com/microsoft/terminal/pull/5224#issuecomment-620703529_
Issue-Bug,Area-UserInterface,Product-Terminal,Priority-3
low
Minor
608,499,951
rust
Enabling `+crt-static` in a blanket way breaks dynamic libraries including proc macros
This is pretty much https://github.com/rust-lang/cargo/issues/7563 generalized; that issue was fixed in https://github.com/rust-lang/rust/pull/69519, which I don't consider a correct or satisfactory solution. https://github.com/rust-lang/rust/pull/71586 may be a good preliminary reading. --- If you are building something with Cargo and set `-C target-feature=+crt-static` through `RUSTFLAGS`, or `crt-static` is enabled by default through a target specification, it will be set for all crates during the build, including proc macros and cdylibs (and build scripts?). In most cases this is not the intention when enabling `crt-static`. In most cases the intention is to enable it for executables only. So if enabling `crt-static` for executables (via env var or a target spec) also enables it for libraries, it usually results only in "collateral damage". If the target doesn't support `+crt-static` for libraries, we just get an error like https://github.com/rust-lang/cargo/issues/7563 reported. If the target supports `+crt-static` for libraries, we get a very weird library which is unlikely to work when dynamically loaded as a proc macro crate (I didn't verify that though). As a result, we need a way to enable `crt-static` for executables without collaterally damaging libraries. Moreover, this way should be the default. --- https://github.com/rust-lang/rust/pull/71586#discussion_r416054543 lists some possible alternatives for doing this. - Automatically enable `-Ctarget-feature=-crt-static` for proc macro crates or all libraries in Cargo (cc @ehuss), modifying RUSTFLAGS. Question: how to opt-out? - Introduce a new option `-Ctarget-feature=+crt-static-dylib` controlling static linking of libraries instead of `-Ctarget-feature=-crt-static`. It would almost never be used on Unix-like targets (not sure about windows-msvc and wasm). - Keep the existing meaning of `crt-static`, but introduce a new option `-C disable-crt-static-for-dylibs` or something. Cargo would then use it for proc macro crates or all libraries. Question: how to opt-out? - Ignore `+crt-static` for libraries if the target doesn't support it. This is fragile: musl actually supports it despite the current value of the flag of the musl target spec. If the flag is enabled, proc macros will break. - Ignore `+crt-static` for libraries always. It is almost never used on Unix-like targets (not sure about windows-msvc and wasm). `+crt-static-dylib` looks strictly better. - Have two `crt-static` defaults in target specs - one for executables and one for libraries. Solves one half of the problem, but explicit `+crt-static` in `RUSTFLAGS` will still cause collateral damage.
A-linkage,T-compiler,C-bug
medium
Critical
608,528,298
pytorch
rsub incorrectly exposed in torch
```
>>> torch.rsub
<built-in method rsub of type object at 0x113435ad0>
```
I believe this is supposed to be an implementation for the magic method `__rsub__`; so this should be made uniform with the rest of the magic methods (which are directly exposed by their names).
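For context, a minimal sketch of what `torch.rsub` appears to compute (a reversed subtraction, i.e. `other - input`, the operation that backs `__rsub__`); this equivalence is an assumption from observed behavior, not documented API:

```python
import torch

a = torch.tensor([1.0, 2.0, 3.0])
b = torch.tensor([10.0, 10.0, 10.0])

# torch.rsub(a, b) appears to compute b - a, which is why exposing it as a
# public torch.* function alongside torch.sub is surprising.
print(torch.rsub(a, b))  # tensor([9., 8., 7.])
print(b - a)             # same result via the ordinary operator
```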
triaged,better-engineering
low
Minor
608,536,581
pytorch
Tensor.is_same_size not documented
https://pytorch.org/docs/master/search.html?q=is_same_size&check_keywords=yes&area=default# It seems to be a convenience for testing whether two tensors (including sparse tensors) actually have the same size. Maybe it shouldn't be a method in its current form.
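A quick sketch of the current behavior, assuming the method simply compares the two tensors' sizes:

```python
import torch

a = torch.zeros(2, 3)
b = torch.ones(2, 3)
c = torch.zeros(3, 2)

# Undocumented convenience: equivalent to comparing the shapes directly.
print(a.is_same_size(b))   # True
print(a.is_same_size(c))   # False
print(a.shape == b.shape)  # True - the documented way to express the same check
```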
module: docs,triaged
low
Minor
608,537,627
pytorch
Tensor.as_strided_ is not documented
https://pytorch.org/docs/master/search.html?q=as_strided_&check_keywords=yes&area=default#
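For reference, a small sketch contrasting the documented out-of-place `as_strided` with the undocumented in-place variant; the in-place semantics shown here are an assumption based on the trailing-underscore naming convention:

```python
import torch

x = torch.arange(6.)

# Documented: returns a new view of x with the given size and stride.
y = torch.as_strided(x, (2, 2), (1, 2))
print(y)

# Undocumented in-place variant: restrides x itself to the given size/stride.
x.as_strided_((2, 2), (1, 2))
print(x)
```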
module: docs,triaged
low
Minor
608,539,716
pytorch
torch.clamp_max clamp_min shouldn't be there
~~Maybe they're not supposed to be in torch namespace?~~
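For context, a minimal sketch of what these undocumented functions appear to do (one-sided clamping, seemingly equivalent to `torch.clamp` with only `max=` or `min=` given); the equivalence is an assumption, not documented behavior:

```python
import torch

t = torch.tensor([-2.0, -0.5, 0.0, 0.5, 2.0])

# Undocumented one-sided clamps exposed in the torch namespace.
print(torch.clamp_max(t, 0.5))   # upper bound only
print(torch.clamp_min(t, -0.5))  # lower bound only

# Apparently the same results via the documented API.
print(torch.clamp(t, max=0.5))
print(torch.clamp(t, min=-0.5))
```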
triaged,better-engineering
low
Minor
608,539,770
pytorch
Multi-Process Single-GPU is bad
I'm sorry to do this, but since your release states that you plan to deprecate Single-Process Multi-GPU, I have to talk about all the issues I have with MPSG. First, this mode takes up a CPU thread per GPU you have. I'm sure that the developers of pytorch are used to working with servers that have tens of CPU cores and the number of cores is much much greater than the number of GPUs, but students like me who are trying to stretch their compute budgets don't necessarily have this luxury. For example, I have a workstation with 4 GPU's and 8 cores. Another with 2 GPU's and 4 cores. Those extra threads are needed for data loading, let alone anything else I want to do with the machine. Second, killing jobs is hard and often results in zombie processes that have to be tracked down and killed manually. CTRL-c should kill all processes and the user shouldn't have to do anything else. This is how it works with DataParallel. On a related note, these extra processes often output stuff to the console in duplicate, which really clutters output. Sometimes even after the head is killed. Third, the mode is error prone when it comes to data loading. We're supposed to use DistributedSampler to ensure that each node loads a different partition of the data. It's not easy to verify that it is working and not creating duplicates, and I'm often trying to do something just a little fancier than what DistributedSampler was designed to do. Failure in this case is silent. You just get worse results. Finally, I just don't like the API of it. It feels like there's all this code clutter "if args.distributed" everywhere and you have to call "reduce_tensor" for each metric you compute. If you forget, everything still "works" your results are just inaccurate. You also have to launch your program with "python -m torch.distributed.launch --nproc_per_node=xx" which is esoteric and clutters your console. As a result, I find myself using DataParallel almost all the time, even though I know it's slower. It's usually not that much slower though. I hope to be able to keep it. What I really hope for is more support for it. Why is it slower? Can that speed gap be closed somewhat? It uses more GPU memory too, especially on GPU 0. Trying to use the new "Channels Last" with it results in no speedup, but if I test on a single GPU, sometimes the speedup is significant. Thanks! I really appreciate PyTorch on the whole. It's an invaluable tool for students like me. cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @xush6528 @osalpekar
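On the point about verifying that `DistributedSampler` partitions the data without duplicates: a hypothetical sanity check one could run outside a real distributed job, instantiating one sampler per rank and confirming the index sets cover the dataset exactly once (the dataset size and world size here are made up for illustration):

```python
import torch
from torch.utils.data import TensorDataset
from torch.utils.data.distributed import DistributedSampler

dataset = TensorDataset(torch.arange(100))
world_size = 4

# One sampler per simulated rank; shuffle=False so the partition is deterministic.
indices = [
    list(DistributedSampler(dataset, num_replicas=world_size, rank=r, shuffle=False))
    for r in range(world_size)
]

all_idx = [i for rank_idx in indices for i in rank_idx]
print(len(all_idx), len(set(all_idx)))  # 100 100 -> full coverage, no duplicates
```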
oncall: distributed,triaged,module: data parallel
low
Critical
608,562,123
go
x/playground: detect and run Benchmark functions
### What did you do? Wrote a Playground source file containing a Benchmark function, for which I wanted to see the allocations per operation (a number that does not depend in any way on the absolute running time of the program): https://play.golang.org/p/LKZwUYcGLaG ``` package main import ( "runtime" "testing" ) func BenchmarkAlloc(b *testing.B) { b.ReportAllocs() for n := b.N; n > 0; n-- { x := new(int) runtime.KeepAlive(x) } } ``` ### What did you expect to see? Output similar to `go test -bench .`: ``` goos: linux goarch: amd64 pkg: example.com BenchmarkAlloc-6 1000000000 0.285 ns/op 0 B/op 0 allocs/op PASS ok example.com 0.346s ``` ### What did you see instead? ``` runtime.main_main·f: function main is undeclared in the main package Go build failed. ``` ![6b1nRZe6KaY](https://user-images.githubusercontent.com/5200974/80529459-7e300580-8965-11ea-94d8-ca9f7cd43b34.png) CC @andybons @dmitshur @toothrot @cagedmantis See previously #24311, #6511, #32403.
NeedsInvestigation,FeatureRequest
low
Critical
608,562,663
pytorch
RuntimeError: CUDA error: an illegal memory access was encountered with channels_last
I get an illegal memory access when trying to train mnasnet (any version) with apex (O1) and channels_last <!-- A clear and concise description of what the bug is. --> ## To Reproduce Steps to reproduce the behavior: use the apex imagenet example: python -m torch.distributed.launch --nproc_per_node=2 main_amp.py -a=mnasnet1_3 --b 224 --workers 4 --channels-last=True --opt-level=O1 -b=256 /intel_nvme/imagenet_data/ <!-- If you have a code sample, error messages, stack traces, please provide it here as well --> Traceback (most recent call last): File "main_amp.py", line 542, in <module> main() File "main_amp.py", line 247, in main train(train_loader, model, criterion, optimizer, epoch) File "main_amp.py", line 353, in train scaled_loss.backward() File "/home/tstand/anaconda3/lib/python3.7/contextlib.py", line 119, in __exit__ next(self.gen) File "/home/tstand/anaconda3/lib/python3.7/site-packages/apex/amp/handle.py", line 123, in scale_loss optimizer._post_amp_backward(loss_scaler) File "/home/tstand/anaconda3/lib/python3.7/site-packages/apex/amp/_process_optimizer.py", line 249, in post_backward_no_master_weights post_backward_models_are_masters(scaler, params, stashed_grads) File "/home/tstand/anaconda3/lib/python3.7/site-packages/apex/amp/_process_optimizer.py", line 135, in post_backward_models_are_masters scale_override=(grads_have_scale, stashed_have_scale, out_scale)) File "/home/tstand/anaconda3/lib/python3.7/site-packages/apex/amp/scaler.py", line 184, in unscale_with_stashed out_scale/stashed_have_scale) File "/home/tstand/anaconda3/lib/python3.7/site-packages/apex/amp/scaler.py", line 148, in unscale_with_stashed_python self.dynamic) File "/home/tstand/anaconda3/lib/python3.7/site-packages/apex/amp/scaler.py", line 22, in axpby_check_overflow_python cpu_sum = float(model_grad.float().sum()) RuntimeError: CUDA error: an illegal memory access was encountered terminate called after throwing an instance of 'c10::Error' what(): CUDA error: an illegal memory access was encountered (insert_events at /pytorch/c10/cuda/CUDACachingAllocator.cpp:771) frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x46 (0x7f5827507536 in /home/tstand/.local/lib/python3.7/site-packages/torch/lib/libc10.so) frame #1: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0x7ae (0x7f582774afbe in /home/tstand/.local/lib/python3.7/site-packages/torch/lib/libc10_cuda.so) frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7f58274f7abd in /home/tstand/.local/lib/python3.7/site-packages/torch/lib/libc10.so) frame #3: <unknown function> + 0x5236b2 (0x7f58732c06b2 in /home/tstand/.local/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #4: <unknown function> + 0x523756 (0x7f58732c0756 in /home/tstand/.local/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #5: <unknown function> + 0x19dfce (0x55748c40bfce in /home/tstand/anaconda3/bin/python) frame #6: <unknown function> + 0x103948 (0x55748c371948 in /home/tstand/anaconda3/bin/python) frame #7: <unknown function> + 0x114267 (0x55748c382267 in /home/tstand/anaconda3/bin/python) frame #8: <unknown function> + 0x11427d (0x55748c38227d in /home/tstand/anaconda3/bin/python) frame #9: <unknown function> + 0x11427d (0x55748c38227d in /home/tstand/anaconda3/bin/python) frame #10: PyDict_SetItem + 0x502 (0x55748c3cd602 in /home/tstand/anaconda3/bin/python) frame #11: PyDict_SetItemString + 0x4f (0x55748c3ce0cf in /home/tstand/anaconda3/bin/python) frame #12: PyImport_Cleanup + 0x9e (0x55748c40d91e in 
/home/tstand/anaconda3/bin/python) frame #13: Py_FinalizeEx + 0x67 (0x55748c483367 in /home/tstand/anaconda3/bin/python) frame #14: <unknown function> + 0x227d93 (0x55748c495d93 in /home/tstand/anaconda3/bin/python) frame #15: _Py_UnixMain + 0x3c (0x55748c4960bc in /home/tstand/anaconda3/bin/python) frame #16: __libc_start_main + 0xf3 (0x7f5875ba81e3 in /lib/x86_64-linux-gnu/libc.so.6) frame #17: <unknown function> + 0x1d0990 (0x55748c43e990 in /home/tstand/anaconda3/bin/python) THCudaCheck FAIL file=/pytorch/aten/src/THC/THCCachingHostAllocator.cpp line=278 error=700 : an illegal memory access was encountered Traceback (most recent call last): File "main_amp.py", line 542, in <module> main() File "main_amp.py", line 247, in main train(train_loader, model, criterion, optimizer, epoch) File "main_amp.py", line 353, in train scaled_loss.backward() File "/home/tstand/anaconda3/lib/python3.7/contextlib.py", line 119, in __exit__ next(self.gen) File "/home/tstand/anaconda3/lib/python3.7/site-packages/apex/amp/handle.py", line 123, in scale_loss optimizer._post_amp_backward(loss_scaler) File "/home/tstand/anaconda3/lib/python3.7/site-packages/apex/amp/_process_optimizer.py", line 249, in post_backward_no_master_weights post_backward_models_are_masters(scaler, params, stashed_grads) File "/home/tstand/anaconda3/lib/python3.7/site-packages/apex/amp/_process_optimizer.py", line 135, in post_backward_models_are_masters scale_override=(grads_have_scale, stashed_have_scale, out_scale)) File "/home/tstand/anaconda3/lib/python3.7/site-packages/apex/amp/scaler.py", line 184, in unscale_with_stashed out_scale/stashed_have_scale) File "/home/tstand/anaconda3/lib/python3.7/site-packages/apex/amp/scaler.py", line 148, in unscale_with_stashed_python self.dynamic) File "/home/tstand/anaconda3/lib/python3.7/site-packages/apex/amp/scaler.py", line 22, in axpby_check_overflow_python cpu_sum = float(model_grad.float().sum()) RuntimeError: CUDA error: an illegal memory access was encountered terminate called after throwing an instance of 'c10::Error' what(): CUDA error: an illegal memory access was encountered (insert_events at /pytorch/c10/cuda/CUDACachingAllocator.cpp:771) frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x46 (0x7f911c250536 in /home/tstand/.local/lib/python3.7/site-packages/torch/lib/libc10.so) frame #1: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0x7ae (0x7f911c493fbe in /home/tstand/.local/lib/python3.7/site-packages/torch/lib/libc10_cuda.so) frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7f911c240abd in /home/tstand/.local/lib/python3.7/site-packages/torch/lib/libc10.so) frame #3: <unknown function> + 0x5236b2 (0x7f91680096b2 in /home/tstand/.local/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #4: <unknown function> + 0x523756 (0x7f9168009756 in /home/tstand/.local/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #5: <unknown function> + 0x19dfce (0x5599d63f8fce in /home/tstand/anaconda3/bin/python) frame #6: <unknown function> + 0x103948 (0x5599d635e948 in /home/tstand/anaconda3/bin/python) frame #7: <unknown function> + 0x114267 (0x5599d636f267 in /home/tstand/anaconda3/bin/python) frame #8: <unknown function> + 0x11427d (0x5599d636f27d in /home/tstand/anaconda3/bin/python) frame #9: <unknown function> + 0x11427d (0x5599d636f27d in /home/tstand/anaconda3/bin/python) frame #10: PyDict_SetItem + 0x502 (0x5599d63ba602 in /home/tstand/anaconda3/bin/python) frame #11: PyDict_SetItemString + 0x4f (0x5599d63bb0cf in 
/home/tstand/anaconda3/bin/python) frame #12: PyImport_Cleanup + 0x9e (0x5599d63fa91e in /home/tstand/anaconda3/bin/python) frame #13: Py_FinalizeEx + 0x67 (0x5599d6470367 in /home/tstand/anaconda3/bin/python) frame #14: <unknown function> + 0x227d93 (0x5599d6482d93 in /home/tstand/anaconda3/bin/python) frame #15: _Py_UnixMain + 0x3c (0x5599d64830bc in /home/tstand/anaconda3/bin/python) frame #16: __libc_start_main + 0xf3 (0x7f916a8f11e3 in /lib/x86_64-linux-gnu/libc.so.6) frame #17: <unknown function> + 0x1d0990 (0x5599d642b990 in /home/tstand/anaconda3/bin/python) Collecting environment information... PyTorch version: 1.5.0 Is debug build: No CUDA used to build PyTorch: 10.2 OS: Ubuntu 19.10 GCC version: (Ubuntu 9.2.1-9ubuntu2) 9.2.1 20191008 CMake version: Could not collect Python version: 3.7 Is CUDA available: Yes CUDA runtime version: Could not collect (not installed outside pytorch) GPU models and configuration: GPU 0: TITAN RTX GPU 1: TITAN RTX Nvidia driver version: 440.82 cuDNN version: Could not collect (not installed outside pytorch) Versions of relevant libraries: [pip3] numpy==1.18.3 [pip3] torch==1.5.0 [pip3] torchvision==0.6.0 [conda] blas 1.0 mkl [conda] cudatoolkit 10.2.89 hfd86e86_0 [conda] mkl 2020.0 166 [conda] mkl-service 2.3.0 py37he904b0f_0 [conda] mkl_fft 1.0.15 py37ha843d7b_0 [conda] mkl_random 1.1.0 py37hd6b4f25_0 [conda] numpy 1.18.1 py37h4f9e942_0 [conda] numpy-base 1.18.1 py37hde5b4d6_1 [conda] numpydoc 0.9.2 py_0 conda-forge [conda] pytorch 1.5.0 py3.7_cuda10.2.89_cudnn7.6.5_0 pytorch [conda] torchvision 0.6.0 py37_cu102 pytorch cc @ezyang @gchanan @zou3519 @bdhirsh @heitorschueroff @seemethere @malfet @walterddr @ngimel @csarofeen @ptrblck
high priority,module: dependency bug,module: binaries,module: cudnn,module: cuda,triaged
medium
Critical
608,577,155
PowerToys
[Run][Folder plugin] No autocomplete text on directories
<!-- **Important: When reporting BSODs or security issues, DO NOT attach memory dumps, logs, or traces to Github issues**. Instead, send dumps/traces to [email protected], referencing this GitHub issue. --> # Environment ``` Windows build number: [run "ver" at a command prompt] PowerToys version: PowerToy module for which you are reporting the bug (if applicable): ``` # Steps to reproduce <!-- A description of how to trigger this bug. --> There is no autocomplete text when searching for directory paths. # Expected behavior <!-- A description of what you're expecting, possibly containing screenshots or reference material. --> # Actual behavior <!-- What's actually happening? --> # Screenshots <!-- If applicable, add screenshots to help explain your problem. --> ![image](https://user-images.githubusercontent.com/56318517/80531818-0d322300-8950-11ea-84e6-ad985734218d.png)
Idea-Enhancement,Product-PowerToys Run,Run-Plugin
low
Critical
608,577,707
flutter
ExpansionTile subtitle should have the same color as ListTile subtitle
<!-- Thank you for using Flutter! If you are looking for support, please check out our documentation or consider asking a question on Stack Overflow: * https://flutter.dev/ * https://api.flutter.dev/ * https://stackoverflow.com/questions/tagged/flutter?sort=frequent If you have found a bug or if our documentation doesn't have an answer to what you're looking for, then fill our the template below. Please read our guide to filing a bug first: https://flutter.dev/docs/resources/bug-reports --> ## Steps to Reproduce <!-- You must include full steps to reproduce so that we can reproduce the problem. --> 1. Add ExpansionTile and ListTile with the exact same contents of title and subtitle in one list and compare **Expected results:** <!-- what did you want to see? --> There should be no difference in both ExpansionTile and ListTile. **Actual results:** <!-- what did you see? --> As you can see, there is a color difference between both of them. <img src="https://user-images.githubusercontent.com/5113916/80529947-94a38480-8998-11ea-8683-5e5c600117ef.png" width="420"> <details> <summary>Logs</summary> <!-- Run `flutter analyze` and attach any output of that command below. If there are any analysis errors, try resolving them before filing this issue. --> ``` $ flutter analyze Running "flutter pub get" in stepslow... 2,0s Analyzing stepslow... warning • 'avoid_escaping_inner_quotes' is not a recognized lint rule • analysis_options.yaml:18:7 • undefined_lint_warning warning • 'avoid_redundant_argument_values' is not a recognized lint rule • analysis_options.yaml:28:7 • undefined_lint_warning warning • 'leading_newlines_in_multiline_strings' is not a recognized lint rule • analysis_options.yaml:68:7 • undefined_lint_warning warning • 'missing_whitespace_between_adjacent_strings' is not a recognized lint rule • analysis_options.yaml:74:7 • undefined_lint_warning warning • 'no_logic_in_create_state' is not a recognized lint rule • analysis_options.yaml:77:7 • undefined_lint_warning warning • 'no_runtimeType_toString' is not a recognized lint rule • analysis_options.yaml:78:7 • undefined_lint_warning warning • 'unnecessary_raw_strings' is not a recognized lint rule • analysis_options.yaml:151:7 • undefined_lint_warning warning • 'unnecessary_string_escapes' is not a recognized lint rule • analysis_options.yaml:153:7 • undefined_lint_warning warning • 'unnecessary_string_interpolations' is not a recognized lint rule • analysis_options.yaml:154:7 • undefined_lint_warning warning • 'use_key_in_widget_constructors' is not a recognized lint rule • analysis_options.yaml:160:7 • undefined_lint_warning warning • 'use_raw_strings' is not a recognized lint rule • analysis_options.yaml:161:7 • undefined_lint_warning 11 issues found. (ran in 14.7s) ``` <!-- Finally, paste the output of running `flutter doctor -v` here. --> ``` $ flutter doctor -v [✓] Flutter (Channel stable, v1.12.13+hotfix.9, on Linux, locale cs_CZ.UTF-8) • Flutter version 1.12.13+hotfix.9 at /home/pavel/flutter • Framework revision f139b11009 (4 weeks ago), 2020-03-30 13:57:30 -0700 • Engine revision af51afceb8 • Dart version 2.7.2 [✓] Android toolchain - develop for Android devices (Android SDK version 29.0.3) • Android SDK at /home/pavel/Android/Sdk • Android NDK location not configured (optional; useful for native profiling support) • Platform android-29, build-tools 29.0.3 • Java binary at: /opt/android-studio/jre/bin/java • Java version OpenJDK Runtime Environment (build 1.8.0_212-release-1586-b4-5784211) • All Android licenses accepted. [!] 
Android Studio (version 3.6) • Android Studio at /opt/android-studio ✗ Flutter plugin not installed; this adds Flutter specific functionality. ✗ Dart plugin not installed; this adds Dart specific functionality. • Java version OpenJDK Runtime Environment (build 1.8.0_212-release-1586-b4-5784211) [✓] Connected device (1 available) • HUAWEI CAM L21 • LHTDU16A22004756 • android-arm64 • Android 6.0 (API 23) ! Doctor found issues in 1 category. ``` Flutter 1.17 also affected </details>
framework,f: material design,c: API break,has reproducible steps,found in release: 3.0,found in release: 3.1,team-design,triaged-design
low
Critical
608,603,730
pytorch
[RuntimeError] Tensor creation using storage fails
```
>>> x=torch.tensor(4)
>>> x.storage()
 4
[torch.LongStorage of size 1]
>>> torch.Tensor(x.storage())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: result.storage().dtype() == storage.dtype() INTERNAL ASSERT FAILED at "../aten/src/ATen/native/Resize.h":103, please report a bug to PyTorch.
```
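The failure seems to come from `torch.Tensor(...)` constructing a tensor with the default (float) dtype while the storage is a `LongStorage`. A possible workaround sketch, assuming `Tensor.set_` accepts a dtype-matched storage (untested here):

```python
import torch

x = torch.tensor(4)
storage = x.storage()  # torch.LongStorage

# Build an empty tensor whose dtype matches the storage, then point it at the
# storage, instead of going through the default-float torch.Tensor constructor.
y = torch.empty(0, dtype=torch.long).set_(storage)
print(y)  # expected: tensor([4])
```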
module: error checking,triaged
low
Critical
608,626,260
flutter
pod install error NoMethodError - undefined method `size' for nil:NilClass at macho_file.rb in `populate_mach_header'
There is a lot of issues filed against CocoaPods from a situation that arises in Flutter for example: https://github.com/CocoaPods/CocoaPods/issues/8377. They all have the following text: `NoMethodError - undefined method `size' for nil:NilClass` I tried the following things to fix the error: 1) `flutter upgrade; flutter clean; flutter run` 1) delete `ios/Pods` folder 1) `gem update ruby-macho` 1) `sudo gem install cocoapods` The only thing that worked to fix the issue was `git clean -fX` to delete all untracked files. There is something that is messing up CocoaPods that should get flushed with `flutter clean`. Seen on latest stable and master (`1.18.0-9.0.pre.46`) <details> <summary> Full Log </summary> ``` aaclarke-macbookpro2:example aaclarke$ flutter run Launching lib/main.dart on iPhone (2) in debug mode... Warning: Missing build name (CFBundleShortVersionString). Warning: Missing build number (CFBundleVersion). Action Required: You must set a build name and number in the pubspec.yaml file version field before submitting to the App Store. Automatically signing iOS for device deployment using specified development team in Xcode project: S8QB4VV633 Running pod install... 1.6s CocoaPods' output: ↳ Preparing Analyzing dependencies Inspecting targets to integrate Using `ARCHS` setting to build architectures of target `Pods-Runner`: (``) Finding Podfile changes - Flutter - e2e - video_player - video_player_web Fetching external sources -> Fetching podspec for `Flutter` from `Flutter` -> Fetching podspec for `e2e` from `.symlinks/plugins/e2e/ios` -> Fetching podspec for `video_player` from `.symlinks/plugins/video_player/ios` -> Fetching podspec for `video_player_web` from `.symlinks/plugins/video_player_web/ios` Resolving dependencies of `Podfile` CDN: trunk Relative path: CocoaPods-version.yml exists! Returning local because checking is only perfomed in repo update Comparing resolved specification to the sandbox manifest A Flutter A e2e A video_player A video_player_web Downloading dependencies -> Installing Flutter (1.0.0) -> Installing e2e (0.0.1) -> Installing video_player (0.0.1) -> Installing video_player_web (0.0.1) - Running pre install hooks Generating Pods project - Creating Pods project - Installing files into Pods project - Adding source files - Adding frameworks - Adding libraries - Adding resources - Adding development pod helper files - Linking headers - Installing Pod Targets - Installing target `Flutter` iOS 8.0 - Installing target `e2e` iOS 8.0 - Generating module map file at `Pods/Target Support Files/e2e/e2e.modulemap` - Generating umbrella header at `Pods/Target Support Files/e2e/e2e-umbrella.h` - Generating dummy source at `Pods/Target Support Files/e2e/e2e-dummy.m` - Installing target `video_player` iOS 8.0 - Generating module map file at `Pods/Target Support Files/video_player/video_player.modulemap` - Generating umbrella header at `Pods/Target Support Files/video_player/video_player-umbrella.h` - Generating dummy source at `Pods/Target Support Files/video_player/video_player-dummy.m` - Installing target `video_player_web` iOS 8.0 - Installing Aggregate Targets - Installing target `Pods-Runner` iOS 8.0 CDN: trunk Relative path: CocoaPods-version.yml exists! Returning local because checking is only perfomed in repo update ――― MARKDOWN TEMPLATE ――――――――――――――――――――――――――――――――――――――――――――――――――――――――――― ### Command ``` /Users/aaclarke/.rvm/gems/ruby-2.4.3/bin/pod install --verbose ``` ### Report * What did you do? * What did you expect to happen? 
* What happened instead? ### Stack ``` CocoaPods : 1.8.3 Ruby : ruby 2.3.7p456 (2018-03-28 revision 63024) [universal.x86_64-darwin18] RubyGems : 2.5.2.3 Host : Mac OS X 10.14.6 (18G2022) Xcode : 11.3 (11C29) Git : git version 2.26.0.110.g2183baf09c-goog Ruby lib dir : /System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/lib Repositories : master - git - https://github.com/CocoaPods/Specs.git @ e7f31f463f3333f320be4de3f5734141bb3d8921 trunk - CDN - https://cdn.cocoapods.org/ ``` ### Plugins ``` cocoapods-deintegrate : 1.0.4 cocoapods-plugins : 1.0.0 cocoapods-search : 1.0.0 cocoapods-stats : 1.1.0 cocoapods-trunk : 1.4.1 cocoapods-try : 1.1.0 ``` ### Podfile ```ruby # Uncomment this line to define a global platform for your project # platform :ios, '9.0' # CocoaPods analytics sends network stats synchronously affecting flutter build latency. ENV['COCOAPODS_DISABLE_STATS'] = 'true' project 'Runner', { 'Debug' => :debug, 'Profile' => :release, 'Release' => :release, } def parse_KV_file(file, separator='=') file_abs_path = File.expand_path(file) if !File.exists? file_abs_path return []; end generated_key_values = {} skip_line_start_symbols = ["#", "/"] File.foreach(file_abs_path) do |line| next if skip_line_start_symbols.any? { |symbol| line =~ /^\s*#{symbol}/ } plugin = line.split(pattern=separator) if plugin.length == 2 podname = plugin[0].strip() path = plugin[1].strip() podpath = File.expand_path("#{path}", file_abs_path) generated_key_values[podname] = podpath else puts "Invalid plugin specification: #{line}" end end generated_key_values end target 'Runner' do # Flutter Pod copied_flutter_dir = File.join(__dir__, 'Flutter') copied_framework_path = File.join(copied_flutter_dir, 'Flutter.framework') copied_podspec_path = File.join(copied_flutter_dir, 'Flutter.podspec') unless File.exist?(copied_framework_path) && File.exist?(copied_podspec_path) # Copy Flutter.framework and Flutter.podspec to Flutter/ to have something to link against if the xcode backend script has not run yet. # That script will copy the correct debug/profile/release version of the framework based on the currently selected Xcode configuration. # CocoaPods will not embed the framework on pod install (before any build phases can generate) if the dylib does not exist. generated_xcode_build_settings_path = File.join(copied_flutter_dir, 'Generated.xcconfig') unless File.exist?(generated_xcode_build_settings_path) raise "Generated.xcconfig must exist. If you're running pod install manually, make sure flutter pub get is executed first" end generated_xcode_build_settings = parse_KV_file(generated_xcode_build_settings_path) cached_framework_dir = generated_xcode_build_settings['FLUTTER_FRAMEWORK_DIR']; unless File.exist?(copied_framework_path) FileUtils.cp_r(File.join(cached_framework_dir, 'Flutter.framework'), copied_flutter_dir) end unless File.exist?(copied_podspec_path) FileUtils.cp(File.join(cached_framework_dir, 'Flutter.podspec'), copied_flutter_dir) end end # Keep pod path relative so it can be checked into Podfile.lock. pod 'Flutter', :path => 'Flutter' # Plugin Pods # Prepare symlinks folder. We use symlinks to avoid having Podfile.lock # referring to absolute paths on developers' machines. 
system('rm -rf .symlinks') system('mkdir -p .symlinks/plugins') plugin_pods = parse_KV_file('../.flutter-plugins') plugin_pods.each do |name, path| symlink = File.join('.symlinks', 'plugins', name) File.symlink(path, symlink) pod name, :path => File.join(symlink, 'ios') end end # Prevent Cocoapods from embedding a second Flutter framework and causing an error with the new Xcode build system. install! 'cocoapods', :disable_input_output_paths => true post_install do |installer| installer.pods_project.targets.each do |target| target.build_configurations.each do |config| config.build_settings['ENABLE_BITCODE'] = 'NO' end end end ``` ### Error ``` NoMethodError - undefined method `size' for nil:NilClass /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/ruby-macho-1.4.0/lib/macho/macho_file.rb:455:in `populate_mach_header' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/ruby-macho-1.4.0/lib/macho/macho_file.rb:233:in `populate_fields' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/ruby-macho-1.4.0/lib/macho/macho_file.rb:55:in `initialize_from_bin' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/ruby-macho-1.4.0/lib/macho/macho_file.rb:33:in `new_from_bin' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/ruby-macho-1.4.0/lib/macho/fat_file.rb:365:in `block in populate_machos' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/ruby-macho-1.4.0/lib/macho/fat_file.rb:364:in `each' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/ruby-macho-1.4.0/lib/macho/fat_file.rb:364:in `populate_machos' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/ruby-macho-1.4.0/lib/macho/fat_file.rb:156:in `populate_fields' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/ruby-macho-1.4.0/lib/macho/fat_file.rb:95:in `initialize' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/ruby-macho-1.4.0/lib/macho.rb:31:in `new' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/ruby-macho-1.4.0/lib/macho.rb:31:in `open' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/sandbox/file_accessor.rb:457:in `dynamic_binary?' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/sandbox/file_accessor.rb:171:in `block in vendored_dynamic_frameworks' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/sandbox/file_accessor.rb:170:in `select' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/sandbox/file_accessor.rb:170:in `vendored_dynamic_frameworks' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/sandbox/file_accessor.rb:259:in `vendored_dynamic_artifacts' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/target/build_settings.rb:1123:in `block (3 levels) in <class:AggregateTargetSettings>' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/target/build_settings.rb:1122:in `any?' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/target/build_settings.rb:1122:in `block (2 levels) in <class:AggregateTargetSettings>' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/target/build_settings.rb:1121:in `any?' 
/Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/target/build_settings.rb:1121:in `block in <class:AggregateTargetSettings>' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/target/build_settings.rb:114:in `block in define_build_settings_method' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/target/build_settings.rb:1113:in `block in <class:AggregateTargetSettings>' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/target/build_settings.rb:114:in `block in define_build_settings_method' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/target/build_settings.rb:362:in `public_send' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/target/build_settings.rb:362:in `block in to_h' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/target/build_settings.rb:361:in `each' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/target/build_settings.rb:361:in `to_h' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/target/build_settings.rb:174:in `block in <class:BuildSettings>' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/target/build_settings.rb:970:in `block in <class:AggregateTargetSettings>' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/target/build_settings.rb:114:in `block in define_build_settings_method' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/target/build_settings.rb:190:in `save_as' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/installer/xcode/pods_project_generator/targ et_installer_helper.rb:24:in `update_changed_file' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/installer/xcode/pods_project_generator/aggr egate_target_installer.rb:102:in `block in create_xcconfig_file' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/installer/xcode/pods_project_generator/aggr egate_target_installer.rb:98:in `each' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/installer/xcode/pods_project_generator/aggr egate_target_installer.rb:98:in `create_xcconfig_file' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/installer/xcode/pods_project_generator/aggr egate_target_installer.rb:18:in `block in install!' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/user_interface.rb:145:in `message' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/installer/xcode/pods_project_generator/aggr egate_target_installer.rb:14:in `install!' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/installer/xcode/pods_project_generator.rb:1 30:in `block (2 levels) in install_aggregate_targets' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/installer/xcode/pods_project_generator.rb:1 28:in `map' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/installer/xcode/pods_project_generator.rb:1 28:in `block in install_aggregate_targets' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/user_interface.rb:145:in `message' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/installer/xcode/pods_project_generator.rb:1 27:in `install_aggregate_targets' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/installer/xcode/single_pods_project_generat or.rb:20:in `generate!' 
/Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/installer.rb:308:in `block in create_and_save_projects' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/user_interface.rb:64:in `section' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/installer.rb:303:in `create_and_save_projects' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/installer.rb:294:in `generate_pods_project' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/installer.rb:173:in `integrate' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/installer.rb:162:in `install!' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/command/install.rb:52:in `run' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/claide-1.0.3/lib/claide/command.rb:334:in `run' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/lib/cocoapods/command.rb:52:in `run' /Users/aaclarke/.rvm/gems/ruby-2.4.3/gems/cocoapods-1.8.3/bin/pod:55:in `<top (required)>' /Users/aaclarke/.rvm/gems/ruby-2.4.3/bin/pod:22:in `load' /Users/aaclarke/.rvm/gems/ruby-2.4.3/bin/pod:22:in `<main>' ``` ――― TEMPLATE END ―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――― [!] Oh no, an error occurred. Search for existing GitHub issues similar to yours: https://github.com/CocoaPods/CocoaPods/search?q=undefined+method+%60size%27+for+nil%3ANilClass&type=Issues If none exists, create a ticket, with the template displayed above, on: https://github.com/CocoaPods/CocoaPods/issues/new Be sure to first read the contributing guide for details on how to properly submit a ticket: https://github.com/CocoaPods/CocoaPods/blob/master/CONTRIBUTING.md Don't forget to anonymize any private data! Looking for related issues on cocoapods/cocoapods... - NoMethodError - undefined method `size' for nil:NilClass https://github.com/CocoaPods/CocoaPods/issues/9484 [closed] [9 comments] 2 weeks ago - pod install crashes https://github.com/CocoaPods/CocoaPods/issues/9654 [closed] [2 comments] 4 weeks ago - NoMethodError - undefined method `size' for nil:NilClass https://github.com/CocoaPods/CocoaPods/issues/8377 [closed] [11 comments] 17 Feb 2020 and 4 more at: https://github.com/cocoapods/cocoapods/search?q=undefined%20method%20%60size%27%20for%20nil&type=Issues&utf8=✓ Error output from CocoaPods: ↳ Ignoring eventmachine-1.2.7 because its extensions are not built. Try: gem pristine eventmachine --version 1.2.7 Ignoring executable-hooks-1.6.0 because its extensions are not built. Try: gem pristine executable-hooks --version 1.6.0 Ignoring ffi-1.11.1 because its extensions are not built. Try: gem pristine ffi --version 1.11.1 Ignoring gem-wrappers-1.4.0 because its extensions are not built. Try: gem pristine gem-wrappers --version 1.4.0 Ignoring http_parser.rb-0.6.0 because its extensions are not built. Try: gem pristine http_parser.rb --version 0.6.0 Ignoring nokogiri-1.10.4 because its extensions are not built. Try: gem pristine nokogiri --version 1.10.4 Ignoring sassc-2.2.0 because its extensions are not built. Try: gem pristine sassc --version 2.2.0 [!] Automatically assigning platform `iOS` with version `8.0` on target `Runner` because no platform was specified. Please specify a platform for this target in your Podfile. See `https://guides.cocoapods.org/syntax/podfile.html#platform`. Error running pod install ``` </details>
platform-ios,tool,P3,team-ios,triaged-ios
low
Critical
608,646,450
go
cmd/compile: arch.MAXWIDTH on amd64 out of date?
https://github.com/golang/go/blob/b1b67841d1e229b483b0c9dd50ddcd1795b0f90f/src/cmd/compile/internal/amd64/galign.go#L17 Most amd64 implementations only support 48 bits of linear address space, but Intel's Ice Lake product line, which apparently launched last year, supports up to 57 bits. In theory, fixing this should be as simple as bumping that constant up, but I'm nervous that other constants will need to be bumped too. E.g., code that uses the `(*[big]T)(ptr)[:n:n]` idiom might need to be tweaked. Noticed in discussion on #37805. /cc @ianlancetaylor
NeedsInvestigation,compiler/runtime
low
Minor
608,669,698
godot
get_string_size() and get_wordwrap_string_size() don't recognize the tab character.
**Godot version:** 3.2.1 **OS/device including version:** Windows 10 **Issue description:** In a TextEdit you can insert a TAB character into your text and it takes up space. However, if you measure the text with `get_string_size()` or `get_wordwrap_string_size()`, the tab contributes 0 width. The call `get_string_size("\t\tHello")` only returns the size of "Hello", and `get_wordwrap_string_size("\t\tHello\n", 100)` only returns the size of "Hello" and "\n".
bug,topic:gui
low
Minor
608,672,273
PowerToys
[Run][WindowWalker Plugin] Find apps minimized in Systray
<!-- **Important: When reporting BSODs or security issues, DO NOT attach memory dumps, logs, or traces to Github issues**. Instead, send dumps/traces to [email protected], referencing this GitHub issue. --> Windows 10
```
Windows build number: 1909
PowerToys version: v0.16.1
PowerToy module for which you are reporting the bug (if applicable): Window Walker
```
# Steps to reproduce
1. Launch PowerToys
2. Launch outlook.exe
3. Set outlook.exe to hide to the systray when minimized
4. Hit Windows Key + Ctrl
5. Type outlook

# Expected behavior
Items such as Outlook that are minimized to the systray would be found.

# Actual behavior
Any app, including Outlook, will not be found when minimized to the systray. If this should be a feature request, please move it to the feature queue.

# Screenshots
<!-- If applicable, add screenshots to help explain your problem. -->
Idea-Enhancement,Product-PowerToys Run,Run-Plugin
low
Critical
608,672,931
vue
`src` attribute of `img` inside `picture` should be set after `img` is appended to `picture` to avoid unnecessary requests
### Version
2.6.11

### Reproduction link
[https://codepen.io/CaseJnr/pen/VwvWbPE](https://codepen.io/CaseJnr/pen/VwvWbPE)

### Steps to reproduce
1. Open the codepen link in Safari
2. Inspect the element
3. Reduce the view width below 900px and refresh the page
4. You will notice that both the red and blue images are requested.
5. Comment out the Vue instance and refresh the page.
6. You will notice only the red image is requested (as expected).

### What is expected?
Only the required picture resource is requested.

### What is actually happening?
Both of the picture resources are requested, causing redundant downloads.

---

In Safari, adding a Vue instance to any page will cause redundant picture sources to be requested. The picture element behaves correctly if the Vue instance is removed. E.g.

```html
<picture>
  <source media="(max-width: 900px)" srcset="small.jpg">
  <img src="large.jpg" alt="">
</picture>
```

By default, only small.jpg should be requested when the view width is below 900px. However, if a Vue instance is added to the page, then both small.jpg and large.jpg are requested. The mobile inspector shows the small.jpg request initiator as the page (expected). The large.jpg initiator is actually the Vue instance.

<!-- generated by vue-issues. DO NOT REMOVE -->
improvement,browser quirks
low
Major
608,676,958
flutter
StreamBuilder not setting the correct value for connectionState for some Streams
In order to simplify reproduction of this issue, I created a sample App [here](https://github.com/feinstein/streambuilder_connectionstate). I found this bug while addressing an issue reported in [my library](https://github.com/feinstein/firebase_user_stream), so I adapted the user's example here. I know that `StreamBuilder` shouldn't be used like this for this particular use case, but still the bug shouldn't exist either if the user chooses to use it. The App has 2 bottom navigation buttons. On the first page click to login, this will login a test user, then go to the second page, look at the console outputs, then go back into the first page, then again go to the second page, keep alternating between the pages and looking at the logs, they should look like this: ``` I/flutter (28560): created I/flutter (28560): init I/flutter (28560): ConnectionState.waiting I/flutter (28560): snapshot has data: false I/flutter (28560): user name: null I/flutter (28560): ConnectionState.active I/flutter (28560): snapshot has data: true I/flutter (28560): user name: Test user I/flutter (28560): disposed I/flutter (28560): init I/flutter (28560): ConnectionState.waiting I/flutter (28560): snapshot has data: true I/flutter (28560): user name: Test user I/flutter (28560): disposed I/flutter (28560): init I/flutter (28560): ConnectionState.waiting I/flutter (28560): snapshot has data: true I/flutter (28560): user name: Test user I/flutter (28560): disposed ``` As you can see in the first time there's an emission by the stream, things works just fine, `snapshot.connectionState` goes from `ConnectionState.waiting` to `ConnectionState.active` and we get a result. But then afterwards `snapshot.connectionState` stays at `ConnectionState.waiting` even though we get a value from the stream, which goes against the `ConnectionState` docs: ```dart /// Not currently connected to any asynchronous computation. /// /// For example, a [FutureBuilder] whose [FutureBuilder.future] is null. none, /// Connected to an asynchronous computation and awaiting interaction. waiting, /// Connected to an active asynchronous computation. /// /// For example, a [Stream] that has returned at least one value, but is not /// yet done. active, /// Connected to a terminated asynchronous computation. done, ``` A `StreamBuilder` on `ConnectionState.waiting` shouldn't have any data on `snapshot.data` (`awaiting interaction`), data should only be available after it goes to `ConnectionState.done` (`a [Stream] that has returned at least one value`). I think I found the origin of the bug, looking at `_StreamBuilderBaseState` we can find this: ```dart void _subscribe() { if (widget.stream != null) { _subscription = widget.stream.listen((T data) { setState(() { _summary = widget.afterData(_summary, data); }); }, onError: (Object error) { setState(() { _summary = widget.afterError(_summary, error); }); }, onDone: () { setState(() { _summary = widget.afterDone(_summary); }); }); _summary = widget.afterConnected(_summary); } } ``` Some streams, I suspect streams with a cached value (if you know more about it, please do let me know), will go immediately into the `listen` callback activating `_summary = widget.afterData(_summary, data);`, which should trigger `ConnectionState.active`, but since this was called at the same time that the `_subscribe()` function is running, `_summary = widget.afterConnected(_summary);` will be called just afterwards, overriding the value of `_summary` to `ConnectionState.waiting`. 
One solution is to check if `_summary` can be set to `ConnectionState.waiting` before setting it, which isn't simple as this is a base class calling generic functions. So another soluting might be using a `try-catch` block, something like this: ```dart void _subscribe() { if (widget.stream != null) { try { S _oldSummary = _summary; _summary = widget.afterConnected(_summary); _subscription = widget.stream.listen((T data) { setState(() { _summary = widget.afterData(_summary, data); }); }, onError: (Object error) { setState(() { _summary = widget.afterError(_summary, error); }); }, onDone: () { setState(() { _summary = widget.afterDone(_summary); }); }); } catch(error) { _summary = _oldSummary; rethrow; } } } ``` I didn't test it, but I believe this will set the `StreamBuilder` to `ConnectionState.waiting` at first and if some callback is ready to be called synchronously it will just set the correct value to `_summary`. In case there were any errors while subscribing the listener the `catch` block sets `_summary` back to its previous value, as this is the behavior we have right now. I believe that's why `ConnectionState.waiting` was being set last, just in case if there were any errors on subscription it's value wouldn't change, but still this solution isn't flawless as `widget.afterConnected(_summary);` being called first on a derived class could have side effects. This is just a recommendation on how to fix this, as I didn't test this code and I am not that much experienced with all the details of `StreamBuilder` and how this might effect it.
framework,a: quality,P2,team-framework,triaged-framework
low
Critical
608,690,673
vscode
Custom editor autosave menubar state not updated properly
Hi @mjbvz @jrieken, I've been catching up with the daily updates, thanks for them btw. While working on our stuff, I've been noticing some issues on VSCode. Probably they're known issues but I'm sharing here in case they've been missed somehow. I'm able to reproduce them on both our code and [this](https://github.com/mjbvz/vscode-extension-samples/tree/custom-editor-examples/custom-editor-sample) code. **Linux and Windows** - [File -> Auto Save] Auto save enabled: "Save" is not triggered if you quickly (less than 1s) close the editor after making a change. **Linux-only** - [File -> Auto Save] The "check" symbol is not shown when you enable "auto save" within the custom editor; however, the operation works. - [File -> Revert File] Sometimes the menu button is kept disabled when you start VSCode, even after changes. - [File -> Save All] Same behavior ^ - [File -> Save] Same behavior ^; Ctrl+S works. - [File -> Save As] Same behavior ^; Ctrl+Shift+S works. _note: the menu buttons become enabled when I open a text editor_ **Windows-only** - [File -> Revert File] Can't use; always disabled. Please let me know if you need any further information. _Originally posted by @caponetto in https://github.com/microsoft/vscode/issues/77131#issuecomment-620571200_
bug,help wanted,custom-editors
low
Major
608,734,773
go
encoding/asn1, crypto/x509: failed to parse Intermediate CA certificate
A intermediate CA certificate contains X.509 certificate policy extension with OID **1.2.36.67006527840.666.66.1.1**. The fourth oid (67006527840) in that extension is greater than math.MaxInt32, which causes a certificate unmarshaling error. Specifically the error occurs in the asn1.Unmarshal function while trying to unmarshal the certificate policy extension. The CA cert has been issued by an Internet provider in New Zealand. I am not affiliated with the certifucate authority, and I don't know how that OID was assigned. I cannot force the CA to issue a new CA cert with sub-OIDs that have a value lower than math.MaxInt32; to the best of my understanding there is nothing in the ASN.1 spec that states sub-oids must be lower than 2^31 - 1. I've been able to successfully load the CA certificate using openssl, macOS key chain, and browsers. I understand it may be challenging to change `type ObjectIdentifier []int`, but it's also not a good long term option to close this issue. I saw related issues #30757, #19933, #36881. It looks like in practice there are certificates that have OID values higher than 2^31 - 1. So the reality is golang is unable to process some legitimate certificates. ### What version of Go are you using (`go version`)? go version go1.14.2 linux/amd64 ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? ``` GO111MODULE="" GOARCH="amd64" GOBIN="" GOCACHE="/home/serosset/.cache/go-build" GOENV="/home/serosset/.config/go/env" GOEXE="" GOFLAGS="" GOHOSTARCH="amd64" GOHOSTOS="linux" GOINSECURE="" GONOPROXY="" GONOSUMDB="" GOOS="linux" GOPATH="/home/serosset/goroot" GOPRIVATE="" GOPROXY="https://proxy.golang.org,direct" GOROOT="/usr/local/go" GOSUMDB="sum.golang.org" GOTMPDIR="" GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64" GCCGO="gccgo" AR="ar" CC="gcc" CXX="g++" CGO_ENABLED="1" GOMOD="" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build216516397=/tmp/go-build -gno-record-gcc-switches" ``` ### What did you do? Parse CA certificate: https://play.golang.org/p/StRBpHZhgDM Issue narrowed to parsing issue of ASN.1 object identifier: 1. Encode the ASN.1 Object Identifier **1.2.36.67006527840.666.66.1.1** in DER format using the golang asn1.Marshal function. 1. Manually verify the DER encoded data is correct. I used 2 different Linux tools to compare the ASN.1 DER encoding with what is produced by asn1.Marshal. 1. Take the ASN.1 DER encoded byte array and call asn1.Unmarshal. https://play.golang.org/p/f3mP_NRAiFI ### What did you expect to see? Unmarshal in step (3) should succeed and the same original object identifier should have been retrieved. Until a long term fix is determined, maybe the golang asn1 (and x509) package documentation should explicitly mention the implementation is not fully compliant with ITU-T Rec X.690, i.e. object identifiers cannot have values higher than 2^31-1. Also, the error message when parsing a certificate is very low-level and does not provide context: **asn1: structure error: base 128 integer too large**. ### What did you see instead? Step (3) is failing. asn1.Unmarshal fails with error **asn1: structure error: base 128 integer too large**
NeedsInvestigation
low
Critical
608,776,753
pytorch
No speedup from channels_last with DataParallel
## 🐛 Bug For single GPU training, channels last improves performance for many networks that I have tried, but I get no speedup at all when running the same code with DataParallel and two Titan RTX's with nvlink. ## To Reproduce Steps to reproduce the behavior: Unfortunately, apex's imagenet example does not support DataParallel, and Pytorch's imagenet example doesn't use fp16 training (channels last seems to be a bad idea for 32 bit training at the moment, with an epoch of resnet50 taking several hours). So I have to point to my own implementation, which should be very straightforward and based on the aforementioned examples. You can get the repo with: ` git clone https://github.com/tstandley/imagenet_training.git cd imagenet_training ` On a single Titan RTX, resnet50, nvidia apex level O1... ` CUDA_VISIBLE_DEVICES=1 python train_imagenet.py -a=resnet50 -t=imagenet/train -v=imagenet/val --b=512 --workers=4 -n=DL_test --fp16 --channels_last ` ...finishes an epoch about 28 minutes. The same code without --channels_last... ` CUDA_VISIBLE_DEVICES=1 python train_imagenet.py -a=resnet50 -t=imagenet/train -v=imagenet/val --b=512 --workers=4 -n=DL_test --fp16 ` ...takes 37 minutes or so. This is like a 25% speedup. Awesome! Resnet101 takes about an hour without, and 45 minutes with. Also Awesome. Xception (which uses grouped convolution heavily) takes about 54 minutes without and 41 minutes with. A similar speedup. But with DataParallel on 2 Titan RTX w/ nvlink, resnet50 takes... ` CUDA_VISIBLE_DEVICES=1,0 python train_imagenet.py -a=resnet50 -t=imagenet/train -v=imagenet/val --b=512 --workers=8 -n=DL_test --fp16 --channels_last ` ... 20 minutes with channels last and 20 minutes without. It's the same. resnet101 takes 32 minutes with and 32 minutes without. Also the same Xception takes 32 minutes with and 30 without. It's actually slightly worse. ## Expected behavior A similar speedup for DataParallel models ## Environment Collecting environment information... PyTorch version: 1.5.0 Is debug build: No CUDA used to build PyTorch: 10.2 OS: Ubuntu 19.10 GCC version: (Ubuntu 9.2.1-9ubuntu2) 9.2.1 20191008 CMake version: Could not collect Python version: 3.7 Is CUDA available: Yes CUDA runtime version: Could not collect (not installed outside pytorch) GPU models and configuration: GPU 0: TITAN RTX GPU 1: TITAN RTX Nvidia driver version: 440.82 cuDNN version: Could not collect (not installed outside pytorch) Versions of relevant libraries: [pip3] numpy==1.18.3 [pip3] torch==1.5.0 [pip3] torchvision==0.6.0 [conda] blas 1.0 mkl [conda] cudatoolkit 10.2.89 hfd86e86_0 [conda] mkl 2020.0 166 [conda] mkl-service 2.3.0 py37he904b0f_0 [conda] mkl_fft 1.0.15 py37ha843d7b_0 [conda] mkl_random 1.1.0 py37hd6b4f25_0 [conda] numpy 1.18.1 py37h4f9e942_0 [conda] numpy-base 1.18.1 py37hde5b4d6_1 [conda] numpydoc 0.9.2 py_0 conda-forge [conda] pytorch 1.5.0 py3.7_cuda10.2.89_cudnn7.6.5_0 pytorch [conda] torchvision 0.6.0 py37_cu102 pytorch cc @VitalyFedyunin @jamesr66a @ngimel
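For reference, a minimal sketch of the setup being compared (the model choice and batch size are assumptions based on the description above, and the apex O1 mixed-precision wrapping from the real training script is omitted here):

```python
import torch
import torchvision

# Model weights in channels_last (NHWC) layout; assumes at least one CUDA GPU.
model = torchvision.models.resnet50().cuda().to(memory_format=torch.channels_last)

# Wrapping the same model in DataParallel is where the reported speedup disappears.
model = torch.nn.DataParallel(model)

# Inputs must also be converted to channels_last for each batch.
images = torch.randn(64, 3, 224, 224, device="cuda")
images = images.contiguous(memory_format=torch.channels_last)

out = model(images)
print(out.shape)
```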
module: performance,triaged,module: data parallel,module: memory format
low
Critical
608,804,560
pytorch
Log-linear version of cumsum and cumprod
## 🚀 Feature Log-linear version of cumsum and cumprod, as the current version has quadratic time-complexity. ## Motivation The current cumsum and cumprod are too slow for long sequences. They need a faster implementation such as the one I wrote here, which I translated from Jax ([here](https://github.com/google/jax/commit/824ac86620572285b86dd09529f8869ef36883ad)) to Pytorch ([here](https://gist.github.com/AranKomat/be50d1bcee38411681f7218d2b81dede)). This one doesn't utilize a custom CUDA kernel, and it's slower if the sequence length is short. Nevertheless, it's substantially faster than the existing one due to the log-linear time-complexity if it's sufficiently long. ## Pitch If you like it, maybe you can add it (maybe just copy mine) to PyTorch possibly along with many variants of this, including logcumsumexp. cc @VitalyFedyunin @ngimel
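For illustration, a minimal sketch of the kind of scan being asked for: a Hillis–Steele-style inclusive prefix sum over the last dimension, doing O(n log n) work in ceil(log2(n)) vectorized steps. This is a simplified stand-in for the linked gist, not its actual code:

```python
import torch

def cumsum_scan(x: torch.Tensor) -> torch.Tensor:
    """Inclusive prefix sum over the last dimension in ceil(log2(n)) vectorized steps."""
    x = x.clone()
    n = x.size(-1)
    shift = 1
    while shift < n:
        # Each step adds the value from `shift` positions earlier (Hillis-Steele scan);
        # the right-hand side is evaluated before the in-place write, so reads see old values.
        x[..., shift:] = x[..., shift:] + x[..., :-shift]
        shift *= 2
    return x

t = torch.arange(1., 9.)
print(cumsum_scan(t))       # tensor([ 1.,  3.,  6., 10., 15., 21., 28., 36.])
print(torch.cumsum(t, -1))  # same values
```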
module: performance,feature,triaged
low
Major
608,808,595
pytorch
undefined reference to pthreadpool_compute*
## 🐛 Bug <!-- A clear and concise description of what the bug is. --> When building with current master 07bb442b240edc8bdc944fb466b1fb56c84fa4ee with following script ```bash #!/bin/bash export REL_WITH_DEB_INFO=1 export USE_CUDA=1 export BUILD_TEST=0 export BUILD_BINARY=0 export BUILD_CAFFE2_OPS=0 export USE_MKLDNN=0 export USE_FBGEMM=0 export USE_NNPACK=0 export USE_QNNPACK=0 export USE_XNNPACK=0 pip install -e . -v ``` ```none /home/cloudhan/workspaces/pytorch/build/lib/libtorch_cpu.so: undefined reference to `pthreadpool_compute_1d_tiled' /home/cloudhan/workspaces/pytorch/build/lib/libtorch_cpu.so: undefined reference to `pthreadpool_compute_3d_tiled' /home/cloudhan/workspaces/pytorch/build/lib/libtorch_cpu.so: undefined reference to `caffe2::mobile_pthreadpool()' /home/cloudhan/workspaces/pytorch/build/lib/libtorch_cpu.so: undefined reference to `pthreadpool_compute_1d' /home/cloudhan/workspaces/pytorch/build/lib/libtorch_cpu.so: undefined reference to `pthreadpool_compute_2d' /home/cloudhan/workspaces/pytorch/build/lib/libtorch_cpu.so: undefined reference to `pthreadpool_compute_4d_tiled' ``` similar issue mentioned in https://github.com/pytorch/pytorch/issues/34606#issuecomment-619635665 ## To Reproduce Steps to reproduce the behavior: 1. build the code <!-- If you have a code sample, error messages, stack traces, please provide it here as well --> ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> ## Environment PyTorch version: N/A Is debug build: N/A CUDA used to build PyTorch: N/A OS: Ubuntu 16.04.6 LTS GCC version: (Ubuntu 5.5.0-12ubuntu1~16.04) 5.5.0 20171010 CMake version: version 3.17.0 Python version: 3.7 Is CUDA available: N/A CUDA runtime version: Could not collect GPU models and configuration: GPU 0: GeForce RTX 2080 Ti GPU 1: GeForce RTX 2080 Ti GPU 2: GeForce RTX 2080 Ti GPU 3: GeForce RTX 2080 Ti Nvidia driver version: 418.87.01 cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.4 Versions of relevant libraries: [pip3] numpy==1.16.4 [conda] blas 1.0 mkl [conda] mkl 2020.0 166 [conda] mkl-service 2.3.0 py37he904b0f_0 [conda] mkl_fft 1.0.15 py37ha843d7b_0 [conda] mkl_random 1.1.0 py37hd6b4f25_0 [conda] msgpack-numpy 0.4.4.3 pypi_0 pypi [conda] numpy 1.18.3 pypi_0 pypi [conda] torch 1.4.0 pypi_0 pypi [conda] torch2trt 0.0.0 dev_0 <develop> <!-- Add any other context about the problem here. --> cc @malfet
module: build,triaged,module: xnnpack
low
Critical
608,831,966
go
gccgo: arm64: programs built with the -static option crashed during runtime initialization
<!-- Please answer these questions before submitting your issue. Thanks! For questions please use one of our forums: https://github.com/golang/go/wiki/Questions --> ### What version of Go are you using (`go version`)? <pre> $ go version go version go1.14.2 gccgo (GCC) 10.0.1 20200421 (experimental) linux/arm64 $ gcc --version gcc (GCC) 10.0.1 20200421 (experimental) Copyright (C) 2020 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. $ g++ --version g++ (GCC) 10.0.1 20200421 (experimental) Copyright (C) 2020 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. </pre> ### Does this issue reproduce with the latest release? yes ### What operating system and processor architecture are you using (`go env`)? <details><summary><code>go env</code> Output</summary><br><pre> $ go env GO111MODULE="" GOARCH="arm64" GOBIN="" GOCACHE="/home/hostname/.cache/go-build" GOENV="/home/hostname/.config/go/env" GOEXE="" GOFLAGS="" GOHOSTARCH="arm64" GOHOSTOS="linux" GOINSECURE="" GONOPROXY="" GONOSUMDB="" GOOS="linux" GOPATH="/home/hostname/gopath" GOPRIVATE="" GOPROXY="https://proxy.golang.org,direct" GOROOT="/home/hostname/gccbuild/install" GOSUMDB="sum.golang.org" GOTMPDIR="" GOTOOLDIR="/home/hostname/gccbuild/install/libexec/gcc/aarch64-unknown-linux-gnu/10.0.1" GCCGO="/home/hostname/gccbuild/install/bin/gccgo" AR="ar" CC="gcc" CXX="g++" CGO_ENABLED="1" GOMOD="" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build614590846=/tmp/go-build -gno-record-gcc-switches -funwind-tables" </pre></details> ### What did you do? $ cat hello.go ``` package main import ( "fmt" ) func main() { fmt.Println("hello") } ``` go build -gccgoflags="-static -g -v" -x -a hello.go <!-- If possible, provide a recipe for reproducing the error. A complete runnable program is good. A link on play.golang.org is best. --> ### What did you expect to see? The program outputs "hello" ### What did you see instead? ``` Aborted (core dumped) GDB debugging information: (gdb) r Starting program: /home/erifan01/hello [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib/aarch64-linux-gnu/libthread_db.so.1". Program received signal SIGABRT, Aborted. raise (sig=6) at ../sysdeps/unix/sysv/linux/raise.c:51 51 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory. 
(gdb) bt #0 raise (sig=6) at ../sysdeps/unix/sysv/linux/raise.c:51 #1 0x000000000050c4d4 in abort () #2 0x00000000005048ec in uw_init_context_1 (context=context@entry=0xfffffffe8f70, outer_cfa=outer_cfa@entry=0xfffffffe99b0, outer_ra=0x4df204 <runtime.probestackmaps+36>) at ../../../../gcc/libgcc/unwind-dw2.c:1600 #3 0x0000000000505270 in _Unwind_Backtrace (trace=trace@entry=0x4de780 <probestackmaps_callback>, trace_argument=trace_argument@entry=0x0) at ../../../../gcc/libgcc/unwind.inc:295 #4 0x00000000004df204 in runtime.probestackmaps () at ../../../../gcc/libgo/runtime/go-unwind.c:868 #5 0x0000000000486708 in runtime.schedinit () at ../../../../gcc/libgo/go/runtime/proc.go:548 #6 0x0000000000400520 in main (argc=<optimized out>, argv=<optimized out>) at ../../../../gcc/libgo/runtime/go-main.c:56 #7 0x0000000000507574 in __libc_start_main () #8 0x00000000004005a0 in _start () Backtrace stopped: previous frame identical to this frame (corrupt stack?) ``` The crash happened here https://github.com/gcc-mirror/gcc/blob/df30ab70690d6088f367e74757f0b5dd6a2587e5/libgcc/unwind-dw2.c#L1593 I found this issue while looking at the issue https://github.com/golang/go/issues/37257. It may be related to precise stack scan, but I haven't figured out why it happened? /cc @ianlancetaylor /cc @cherrymui /cc @thanm
NeedsInvestigation,arch-arm64
low
Critical
608,843,562
go
doc: specify grammar of Deprecated convention
This is a follow-up to #10909, which [defined the convention as follows](https://github.com/golang/go/wiki/Deprecated): > To signal that an identifier should not be used, add a paragraph to its doc comment that begins with `Deprecated:` followed by some information about the deprecation, and a recommendation on what to use instead, if applicable. This specifies that the convention requires the addition of at least one paragraph, but fails to specify how subsequent paragraphs are to be treated (i.e., are they considered part of the deprecation notice or part of the "regular" documentation). The phrase "followed by some information" is ambiguous as to whether it applies only to the paragraph added or includes subsequent paragraphs. This has come up with the implementation of tools trying to automatically surface deprecation notices to users during code review. An example of this: ```go // ClearAllExtensions clears all extensions from m. // This includes populated fields and unknown fields in the extension range. // // Deprecated: Use RangeExtensions instead to clear all populated extension fields: // // proto.RangeExtensions(m, func(xt protoreflect.ExtensionType, _ interface{}) bool { // proto.ClearExtension(m, xt) // return true // }) // // The example rewrite above does not clear unknown fields in the extension range, // which is unlikely to matter in practice. func ClearAllExtensions(m Message) { ... } ``` In this example, the intent of the deprecation notice is to include all subsequent paragraphs and code blocks as part of the deprecation notice. Should they be considered part of the deprecation message or not? At the present moment, the tool we have only surfaces: > Use RangeExtensions instead to clear all populated extension fields: in code review since it only treats the single paragraph as the deprecation warning. The fact that the code block suggesting the alternative is not shown is unfortunate. \cc @stapelberg @bsiegert @dmitshur @neild @FiloSottile
Documentation,NeedsDecision
low
Major
608,844,983
terminal
Add support for Azure Cloud Shell `code` Editor
# Environment ```none Windows build number: Windows 10 Version 1909 Build 18363.657 Windows Terminal version: 0.11.1121.0 ``` # Steps to reproduce 1. Open the Windows Terminal 2. Open Azure Cloud Shell 3. Login to Azure 4. Type code . or code file.txt # Expected behavior The azure cloud shell editor should open in the background with the provided file like explained here: https://docs.microsoft.com/en-us/azure/cloud-shell/using-cloud-shell-editor. # Actual behavior Nothing happens no editor will be opened.
Issue-Feature,Area-Extensibility,Product-Terminal,Area-AzureShell
low
Major
608,903,329
flutter
Access to NavigationRequest Headers in webview NavigationDelegate Function
**Issue -** If I am viewing a page in a webview and a link opens a PDF on the page, Flutter's webview can't handle that. We can work around this by downloading the file onto the user's machine if it is something other than an HTML page. I was trying to do this based on the url, preventing the navigation if the url contains "pdf" or something like that. However, if the url doesn't contain such text and the server masks the file type in the url, it isn't possible to download the file to the user's device. Hence, if we had access to the response headers, which indicate the content-type, this would become easy. **Current Code** ``` navigationDelegate: (NavigationRequest request) async { print(request.url); if (request.url.contains("download")) { setState(() { shouldChangeStack = false; }); if (await canLaunch(request.url)) { await launch(request.url); } return NavigationDecision.prevent; } else { setState(() { shouldChangeStack = true; }); return NavigationDecision.navigate; } }, ```
c: new feature,d: examples,p: webview,package,team-ecosystem,P3,triaged-ecosystem
low
Major
609,039,409
rust
Tracking Issue for "unsafe blocks in unsafe fn" (RFC #2585)
This is a tracking issue for the RFC "unsafe blocks in unsafe fn" (rust-lang/rfcs#2585). The lint `unsafe_block_in_unsafe_fn` is stable, but the RFC proposes some further things we might want to do. ### About tracking issues Tracking issues are used to record the overall progress of implementation. They are also used as hubs connecting to other relevant issues, e.g., bugs or open design questions. A tracking issue is however *not* meant for large scale discussion, questions, or bug reports about a feature. Instead, open a dedicated issue for the specific matter and add the relevant feature gate label. ### Steps - [x] Introduce an opt-in lint that, when enabled, causes unsafe blocks in unsafe functions to be considered required, and warns if they are absent when an unsafe operation is performed. - [x] Stabilization PR in Rust 1.52.0 (#79208) - [x] Include a suggestion with the lint that can insert required unsafe blocks. This could be as simple as adding a block across the entire function, though more granular insertion is probably better. (#112017) - [ ] Adjust documentation to describe the (somewhat unusual) effect of the lint, and to describe the possibility that the lint will be enabled by default ([see instructions on rustc-dev-guide][doc-guide]) - [ ] Write a blog post describing the change, perhaps? [stabilization-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#stabilization-pr [doc-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#documentation-prs ### Unresolved Questions * What is the timeline for adding the lint, and cranking up its default level? * Should the default level depend on the edition? * Should we ever make this deny-by-default or even a hard error, in a future edition? * Should we require `cargo fix` to be able to do *something* about this warning before making it even warn-by-default? And how precise should it be? ### Implementation history * Implemented in #71862 * Stabilized in https://github.com/rust-lang/rust/pull/79208 * Fixable suggestion added in #112017 * Made warn-by-default for 2024 edition in #112038
B-RFC-approved,T-lang,C-tracking-issue,disposition-merge,finished-final-comment-period,F-unsafe-block-in-unsafe-fn,S-tracking-impl-incomplete
high
Critical
609,071,598
rust
Why implementation of iterator is not generic enough in async context?
[Cross posting stackoverflow](https://stackoverflow.com/questions/61491070/why-implementation-of-iterator-is-not-generic-enough-in-async-context) because it's look like a compiler bug/limitation. --- Given the [following][1] snippet: ```rust use futures::stream::{self, StreamExt}; async fn from_bar(bar: &[Vec<&u8>]) { let x = bar.iter().flat_map(|i| i.iter().map(|_| async { 42 })); let foo: Vec<_> = stream::iter(x).collect().await; } #[tokio::main] async fn main() { for bar in vec![] { tokio::spawn(async { from_bar(bar).await; }); } } ``` I get the following errors: ```none error[E0308]: mismatched types --> src/main.rs:11:9 | 11 | tokio::spawn(async { | ^^^^^^^^^^^^ one type is more general than the other | = note: expected type `std::ops::FnOnce<(&&u8,)>` found type `std::ops::FnOnce<(&&u8,)>` error: implementation of `std::iter::Iterator` is not general enough --> src/main.rs:11:9 | 11 | tokio::spawn(async { | ^^^^^^^^^^^^ implementation of `std::iter::Iterator` is not general enough | = note: `std::iter::Iterator` would have to be implemented for the type `std::slice::Iter<'0, &u8>`, for any lifetime `'0`... = note: ...but `std::iter::Iterator` is actually implemented for the type `std::slice::Iter<'1, &u8>`, for some specific lifetime `'1` ``` I was expecting no error because the lifetimes seem to be correct to me. Note that removing `main()` or removing the code inside `from_bar()` *both* eliminate the errors. Not only that, the error messages are also very strange. They may be related to [a regression in the compiler][2], though more than that they seem to be in the wrong place ([maybe related][3]). Version `rustc 1.43.0 (4fb7144ed 2020-04-20)`: ```toml [dependencies] futures = '0.3.1' [dependencies.tokio] version = '0.2' features = ['full'] ``` --- Maybe related https://github.com/rust-lang/rust/issues/64650 [1]: https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=bece604aa99c31dc710f5f3129296f1f [2]: https://stackoverflow.com/questions/54341465/rust-expected-type-error-prints-mismatched-types-that-are-exactly-the-same/54344401#54344401 [3]: https://github.com/rust-lang/rust/issues/54326
A-diagnostics,T-compiler,C-bug,A-async-await,AsyncAwait-Triaged,D-confusing
low
Critical
609,074,352
flutter
[camera] Add byte streaming capability for the camera
```startImageStream``` was added to the Flutter camera plugin, but this isn't sufficient. We really need ```startByteStream```, which also includes audio and is much faster. Many of us are trying to create live streams from our apps, and at the moment it's impossible. ```startVideoRecording``` locks the file; as soon as, for example, ```FlutterFFmpeg``` tries to read the file for live RTMP streaming, the camera stops.
c: new feature,p: camera,package,c: proposal,team-ecosystem,P3,triaged-ecosystem
medium
Major
609,079,621
vscode
QuickPick in web with VoiceOver does not read anything
1. yarn web 2. turn on VoiceOver 3. Cmd + P, use keyboard navigation to change focus -> VoiceOver does not read anything. I can repro with Safari and Chrome. It does not repro on Linux or Windows, so it might be a VoiceOver issue.
bug,accessibility,quick-pick,web
low
Major
609,081,437
flutter
Font size is different on Physical Device and Simulator/Emulator
## Steps to Reproduce After building the project and install it on a physical device (iPhone 7, XR and XR MAX), I realized that font size is different. **Expected results:** <!-- what did you want to see? --> Same font size on device and simulator. **Actual results:** <!-- what did you see? --> --- - Font size is **bigger** on an iPhone 7 (and XR): ![font_iPhone 8](https://user-images.githubusercontent.com/1607281/80603894-48f0e980-8a31-11ea-91cd-224f56e723f0.jpg) --- - Font size is **smaller** on an iPhone XR Max: ![font_iPhone_11ProMax](https://user-images.githubusercontent.com/1607281/80603904-4d1d0700-8a31-11ea-91b2-db00a6902b59.jpg) <details> flutter doctor -v [✓] Flutter (Channel stable, v1.12.13+hotfix.9, on Mac OS X 10.15.4 19E287, locale en-US) • Flutter version 1.12.13+hotfix.9 at /Users/peruho/Developer/flutter • Framework revision f139b11009 (4 weeks ago), 2020-03-30 13:57:30 -0700 • Engine revision af51afceb8 • Dart version 2.7.2 [✓] Android toolchain - develop for Android devices (Android SDK version 28.0.3) • Android SDK at /Users/peruho/Library/Android/sdk • Android NDK location not configured (optional; useful for native profiling support) • Platform android-29, build-tools 28.0.3 • Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java • Java version OpenJDK Runtime Environment (build 1.8.0_212-release-1586-b4-5784211) • All Android licenses accepted. [✓] Xcode - develop for iOS and macOS (Xcode 11.4.1) • Xcode at /Applications/Xcode.app/Contents/Developer • Xcode 11.4.1, Build version 11E503a • CocoaPods version 1.9.0 [✓] Android Studio (version 3.6) • Android Studio at /Applications/Android Studio.app/Contents • Flutter plugin version 44.0.2 • Dart plugin version 192.7761 • Java version OpenJDK Runtime Environment (build 1.8.0_212-release-1586-b4-5784211) [✓] VS Code (version 1.44.2) • VS Code at /Applications/Visual Studio Code.app/Contents • Flutter extension version 3.9.1 [✓] Connected device (3 available) • No issues found! </details>
framework,f: cupertino,a: quality,a: typography,has reproducible steps,found in release: 3.3,found in release: 3.7,team-design,triaged-design
medium
Major
609,084,444
godot
Can't assign mouse buttons or mouse wheel to editor shortcuts
**Godot version:** All versions including v4.0.dev.custom_build.1d45a269f **OS/device including version:** W10 1903 **Issue description:** Can't assign mouse button inputs to editor shortcuts **Steps to reproduce:** Editor Settings > Shortcuts > Try to set any mouse button, can't **Minimal reproduction project:** None Some related issues: #6366 #26999 The editor settings only allow key inputs, as shown below, where the event is cast to InputEventKey. https://github.com/godotengine/godot/blob/f6e29addd433130100061935b9a2b8f6de38254b/editor/settings_config_dialog.cpp#L318-L330 I think being able to assign BUTTON_XBUTTON1 and BUTTON_XBUTTON2 would be helpful, especially for back/forward editor navigation. In order to not let people mess up the general usage of the editor, I think it may be a case of only allowing the above 2 mouse button input types to be assigned... i.e. not allowing rebinding of MB1, MB2, scroll, etc (unless they are modified...? e.g. with shift, alt, ctrl). Making this change would be relatively simple (I think - haven't looked into it too deeply), but it would require changing a decent amount of existing code within the file above, since `last_wait_for_key` would need its type altered. Happy to work on a PR if the general consensus is that this is worthwhile.
enhancement,topic:editor,usability
medium
Major
609,109,490
godot
Unable to perform a headless Android build (with custom builds) without first opening the GUI
**Godot version:** 3.2.1+ **OS/device including version:** Android **Issue description:** The headless version of godot can be used to export projects without opening the editor. This is important for Continuous Delivery & automation of development. Since Godot 3.2.1, we have been able to use a custom android export when exporting our Android games. In a newly checked out project, one has to go to the Project>"Install Android Build Template" in order to begin exporting for android. However, there is no way to do this with the headless version of godot. Because of this, you are unable to perform an initial build of a godot Android game with the headless version, without first opening it in the GUI. I think there should be a godot command to run this gradle setup from the command line, to keep this tool consistent.
enhancement,platform:android,topic:editor,topic:export
low
Major
609,164,978
flutter
FocusPointer for GestureDetection
## Use case It's come to my attention that it is possible for a Widget to be impervious to gestures (`Ignore-` and `AbsorbPointer`). However, it isn't easily possible to write a widget with a second `GestureDetector` and have the `Gesture`s not be considered by parent widgets. ## Proposal A `FocusPointer` which, when detected in the WidgetTree on a `Gesture`, only passes the event through to the Widgets within, ignoring Widgets higher up in the WidgetTree. (This would allow things like having a map <with vertical and horizontal gestures> within a container.)
c: new feature,framework,f: gestures,c: proposal,P3,team-framework,triaged-framework
low
Minor
609,181,849
flutter
Service worker caching should be smarter
There is a reported hit to IPL when using the service worker caching on the new gallery https://github.com/flutter/gallery . The current caching strategy is optimized for small applications, but larger ones will need a more sophisticated strategy to avoid downloading absolutely everything on first load. Some ideas: - Part files should not be part of the initial cache, since this defeats the goal of deferred loading. These should be cached on demand - assets _might_ not need to be part of the initial cache - instead these could be moved into cache on demand. Fonts might cause some issues here, since I think the engine still eagerly loads them. - Should install cache everything? My gut says yes but it isn't quite clear. - Are there any assets that absolutely must be cached during the initial load? Probably main JavaScript, html, manifest?
tool,c: performance,platform-web,P3,team-web,triaged-web
low
Major
609,186,793
node
http: ServerRequest & ServerResponse does not set destroyed false
https://github.com/nodejs/node/pull/33131 and https://github.com/nodejs/node/pull/33120 fix this for client side, we should do the same thing server side and ensure that streams that have emitted `'close'` also have `destroyed` set to `false`
http
low
Minor
609,194,765
youtube-dl
Adobe Spark
<!-- ###################################################################### WARNING! IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE ###################################################################### --> ## Checklist <!-- Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl: - First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2020.03.24. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED. - Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser. - Make sure that site you are requesting is not dedicated to copyright infringement, see https://yt-dl.org/copyright-infringement. youtube-dl does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights. - Search the bugtracker for similar site support requests: http://yt-dl.org/search-issues. DO NOT post duplicates. - Finally, put x into all relevant boxes (like this [x]) --> - [ x] I'm reporting a new site support request - [ x] I've verified that I'm running youtube-dl version **2020.03.24** - [x ] I've checked that all provided URLs are alive and playable in a browser - [ x] I've checked that none of provided URLs violate any copyrights - [x ] I've searched the bugtracker for similar site support requests including closed ones ## Example URLs <!-- Provide all kinds of example URLs support for which should be included. Replace following example URLs by yours. --> https://spark.adobe.com/video/YS8iG2JaYeaOr?w=_7758 ## Description > <!-- > Provide any additional information. > If work on your issue requires account credentials please provide them or explain how one can obtain them. > --> > Adobe Spark is an integrated suite of media creation applications for mobile and web developed by Adobe Systems. It comprises three separate design apps: Spark Page, Spark Post, and Spark Video. Wikipedia > Initial release: May 19, 2016 > Developer(s): Adobe Systems Falls back to generic, but fails with: youtube-dl.py -v "https://spark.adobe.com/video/YS8iG2JaYeaOr?w=_7758" [debug] System config: [] [debug] User config: [] [debug] Custom config: [] [debug] Command-line args: ['-v', 'https://spark.adobe.com/video/YS8iG2JaYeaOr?w =_7758'] [debug] Encodings: locale cp1252, fs utf-8, out utf-8, pref cp1252 [debug] youtube-dl version 2020.03.24 [debug] Python version 3.6.3 (CPython) - Windows-7-6.1.7601-SP1 [debug] exe versions: ffmpeg N-71727-g46778ab, rtmpdump 2.4 [debug] Proxy map: {} [generic] YS8iG2JaYeaOr?w=_7758: Requesting header WARNING: Falling back on generic information extractor. [generic] YS8iG2JaYeaOr?w=_7758: Downloading webpage [generic] YS8iG2JaYeaOr?w=_7758: Extracting information ERROR: Unsupported URL: https://spark.adobe.com/video/YS8iG2JaYeaOr?w=_7758 Traceback (most recent call last): File "C:\Transmogrifier\youtube-dl.py\youtube_dl\YoutubeDL.py", line 797, in e xtract_info ie_result = ie.extract(url) File "C:\Transmogrifier\youtube-dl.py\youtube_dl\extractor\common.py", line 53 0, in extract ie_result = self._real_extract(url) File "C:\Transmogrifier\youtube-dl.py\youtube_dl\extractor\generic.py", line 3 352, in _real_extract raise UnsupportedError(url) youtube_dl.utils.UnsupportedError: Unsupported URL: https://spark.adobe.com/vide o/YS8iG2JaYeaOr?w=_7758 Thanks Ringo
site-support-request
low
Critical
609,214,064
go
x/playground: very large output is sometimes corrupted
Today and yesterday, I've observed instances where a playground execution seems to be successful (there's no error), but the output is truncated rather than complete. Filing this issue for tracking purposes, will fill in more information later. **Edit:** A workaround that is available is to modify the program (e.g., add whitespace) and re-run it until it works. /cc @toothrot @bradfitz
NeedsInvestigation
medium
Critical
609,223,046
pytorch
Cannot build pytorch with linker arguments in C{,XX}FLAGS
## 🐛 Bug When building pytorch with the following flags, the compiler aborts due to unknown arguments. To be specific, arguments of the for `-Wl,--sort-common` or `-Wl,-z,relro` get transformed to `-Wl --sort-common` or `-Wl -z relro` respectively. Apparently the commas are not respected by the build system and the different parts of the argument are separated. Used flags: ``` * CFLAGS='-march=native -O3 -pipe -fuse-linker-plugin -fgraphite-identity -floop-nest-optimize -fno-math-errno -fno-trapping-math -fassociative-math -fno-trapping-math -fno-signed-zeros -freciprocal-math -fdevirtualize-at-ltrans -fipa-pta -fno-semantic-interposition -flto=8 -Wl,-O1 -Wl,--as-needed -Wl,--sort-common,-z,now' * CXXFLAGS='-march=native -O3 -pipe -fuse-linker-plugin -fgraphite-identity -floop-nest-optimize -fno-math-errno -fno-trapping-math -fassociative-math -fno-trapping-math -fno-signed-zeros -freciprocal-math -fdevirtualize-at-ltrans -fipa-pta -fno-semantic-interposition -flto=8 -fvisibility-inlines-hidden -Wl,-O1 -Wl,--as-needed -Wl,--sort-common,-z,now' * FFLAGS='-march=native -O3 -pipe -fuse-linker-plugin -fgraphite-identity -floop-nest-optimize -fno-math-errno -fno-trapping-math -fassociative-math -fno-trapping-math -fno-signed-zeros -freciprocal-math -fdevirtualize-at-ltrans -fipa-pta -fno-semantic-interposition -flto=8 -Wl,-O1 -Wl,--as-needed -Wl,--sort-common,-z,now' * FCFLAGS='-march=native -O3 -pipe -fuse-linker-plugin -fgraphite-identity -floop-nest-optimize -fno-math-errno -fno-trapping-math -fassociative-math -fno-trapping-math -fno-signed-zeros -freciprocal-math -fdevirtualize-at-ltrans -fipa-pta -fno-semantic-interposition -flto=8 -Wl,-O1 -Wl,--as-needed -Wl,--sort-common,-z,now' * F77FLAGS='-march=native -O3 -pipe -fuse-linker-plugin -fgraphite-identity -floop-nest-optimize -fno-math-errno -fno-trapping-math -fassociative-math -fno-trapping-math -fno-signed-zeros -freciprocal-math -fdevirtualize-at-ltrans -fipa-pta -fno-semantic-interposition -flto=8 -Wl,-O1 -Wl,--as-needed -Wl,--sort-common,-z,now' * LDFLAGS='-Wl,-O1 -Wl,--as-needed -Wl,--sort-common,-z,now -march=native -O3 -pipe -fuse-linker-plugin -fgraphite-identity -floop-nest-optimize -fno-math-errno -fno-trapping-math -fassociative-math -fno-signed-zeros -freciprocal-math -fdevirtualize-at-ltrans -fipa-pta -fno-semantic-interposition -flto=8 -fvisibility-inlines-hidden' * MAKEOPTS='-j9 -l8.4' ``` (I tried to install pytorch from the official gentoo science overlay) ## To Reproduce Install the the pytorch package on Gentoo from the science overlay. Alternatively, it should be reproducible by simply configuring and building the project with the given `C{,XX}FLAGS`. I've attached the console output from the build here: [build.log](https://github.com/pytorch/pytorch/files/4553238/build.log) ## Expected behavior Successful compilation. ## Environment - PyTorch Version (e.g., 1.0): 1.4 - OS (e.g., Linux): Gentoo Linux - Compilers: gcc 9.3.0 and gcc 8.3.0 - How you installed PyTorch (`conda`, `pip`, source): emerge sci-libs/pytorch (from https://github.com/gentoo/sci/tree/master/sci-libs/pytorch) - Build command you used (if compiling from source): - Python version: 3.6.10 - CUDA/cuDNN version: 7.6.5 (CUDA 10.1) - GPU models and configuration: GT 730 / RX 580 - Any other relevant information: cc @malfet
module: build,triaged
low
Critical
609,223,365
terminal
ColorTool: Add ps1xml as an accepted extension
<!-- 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨 I ACKNOWLEDGE THE FOLLOWING BEFORE PROCEEDING: 1. If I delete this entire template and go my own path, the core team may close my issue without further explanation or engagement. 2. If I list multiple bugs/concerns in this one issue, the core team may close my issue without further explanation or engagement. 3. If I write an issue that has many duplicates, the core team may close my issue without further explanation or engagement (and without necessarily spending time to find the exact duplicate ID number). 4. If I leave the title incomplete when filing the issue, the core team may close my issue without further explanation or engagement. 5. If I file something completely blank in the body, the core team may close my issue without further explanation or engagement. All good? Then proceed! --> # Description of the new feature/enhancement If you are using Windows Powershell, you'll discover that Windows Terminal changes it's default color scheme. However you are unable to change it back. This is due to a custom scheme extension which you can view in PowerShell ISE under Tools - Options. You can export the built-in themes but ColorTool won't allow you to use them. Per ColorTool doc, only json, .itermcolors, and ini file extensions are accepted. So add the option to read those files in, this may require cooperation with the PowerShell team. Ideally we'd be able to convert the color schemes to json so we can read them into Windows Terminal (ideally these should be added to defaults.json once we can export to json). # A clear and concise description of what the problem is that the new feature would solve. Lets you continue to use the Windows PowerShell color themes if you enjoy using them. # Proposed technical implementation details (optional) # A clear and concise description of what you want to happen. I want my Windows Powershell profile in WT to keep the same color scheme it originally had and/or change it to one of the other defaults listed in ISE.
Product-Colortool,Help Wanted,Issue-Task,Needs-Tag-Fix
low
Critical
609,255,789
godot
Deleting freed object from scene causes game crash
**Godot version:** 92d4a0c **Steps to reproduce:** 1. Enable "Sync Scene Changes" 2. Make some object that gets deleted, e.g. Timer self-destructing with timeout connected to free 3. Run the game and wait for the object to be deleted 4. Remove it in the local tree in the editor 5. Return to game 6. It freezes and then proceeds to crash
bug,topic:editor
low
Critical
609,270,876
TypeScript
Function name not preserved when exported on definition
**TypeScript Version:** 4.0.0-dev.20200429 **Search Terms:** function name shorthand function name missing exported function missing name **Code** ```bash cat > test.ts <<"EOF" var test1 = () => {}; export var test2 = () => {}; console.log(`name for test1 = "${test1.name}"`); console.log(`name for test2 = "${test2.name}"`); EOF npx -p typescript@next tsc test.ts --module umd node test.js ``` **Expected output:** (And also the output from node@latest running the .ts file with .mjs extension) > name for test1 = "test1" > name for test2 = "test2" **Actual output:** > name for test1 = "test1" > name for test2 = "" Also the compiler complains about `.name` property. Isn't the .name property supported at all on shorthand functions? > test.ts:3:40 - error TS2339: Property 'name' does not exist on type '() => void'. **Playground Link:** Couldn't get the [Playground Link](https://www.typescriptlang.org/play/?target=2&module=3#code/G4QwTgBALgpgzlAjBAvBAFASlQPggbwF8BuAKBgA8AHAezCglElgQCZUNsU8iyBjGgDs4NADYwAdKJoBzdAANBIALYwIAMzrR4SDgCIAJPhZIJS1YT3zM-ISPFTZC82s3Md7NIeMezKmJbWxEA) with an `export` to run in the browser.. :/ **Related Issues:** #5611 seems similar, but this case is simpler and should be able to be solved I believe. Generated code seems to be something like: ```js // ... var test1 = function () { }; exports.test2 = function () { }; // ... ``` Should probably be either ```js var test1 = function () { }; var test2 = function () { }; exports.test2 = test2 ``` Or ```js var test1 = function () { }; exports.test2 = function test2() { }; ```
Bug
low
Critical
609,275,934
pytorch
Improve visibility in test suite timings
We now have enough tests that it is quite hard to maintain them and CI takes a significant amount of time. A framework to get a better understanding of the runtime of the tests could be very useful to prioritize refactoring and cleaning of tests to speed things up. Key features: - Individual tests timing (using setUp/tearDown for example) - Aggregate results across all the CI-configs - Allow simple queries like slowest tests for a given config. Or slowest test overall. Or slowest aggregated runtime for a test across all configs. - Optional: nice visualization of the evolution with time (if variance is small enough) cc @ezyang @seemethere
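A minimal sketch of the setUp/tearDown idea mentioned above (the class and method names here are hypothetical, not part of the existing test harness): each test records its own wall-clock time, and the resulting table could be dumped per CI config and aggregated offline to answer queries like "slowest test overall".

```python
import time
import unittest

class TimedTestCase(unittest.TestCase):
    """Sketch: record per-test wall-clock time in setUp/tearDown."""
    _timings = {}

    def setUp(self):
        self._start = time.perf_counter()

    def tearDown(self):
        # key by test id so results can be aggregated across CI configs later
        self._timings[self.id()] = time.perf_counter() - self._start

class ExampleTest(TimedTestCase):
    def test_fast(self):
        self.assertEqual(1 + 1, 2)

    def test_slow(self):
        time.sleep(0.2)
        self.assertTrue(True)

if __name__ == "__main__":
    unittest.main(exit=False)
    for name, secs in sorted(TimedTestCase._timings.items(), key=lambda kv: -kv[1]):
        print(f"{secs:8.3f}s  {name}")
```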
module: ci,module: tests,triaged,better-engineering
low
Major
609,279,573
excalidraw
Text gets blurred after text editing loses focus
### How to reproduce 1. start with a fresh canvas 2. enable the text tool, choose the “code” font 3. click in the empty space to insert text 4. type "hallo" 5. click somewhere else to end text entry / lose focus from the text object ### What happens The text gets a bit blurry. ### What should happen The text should stay sharp. ### Environment Ubuntu 19.10, Firefox 75.0 After step 4 | After step 5 ------------ | ------------- ![Screenshot from 2020-04-29 20-43-12](https://user-images.githubusercontent.com/138279/80634661-165be680-8a5b-11ea-9e3c-90af1773719a.png) | ![Screenshot from 2020-04-29 20-43-23](https://user-images.githubusercontent.com/138279/80634657-152ab980-8a5b-11ea-9d1f-04b490d1749c.png)
bug
low
Major
609,280,078
TypeScript
Add the Node that comes with Visual Studio as a NodePath fallback in the .targets file in the Microsoft.TypeScript.MSBuild Nuget package
I am working on a new project using ASPNET Core 3.1 and I wanted to write our font-end code in TypeScript, but found a little problem: You can't use TypeScript and build the project using the dotnet CLI commands if you don't have a stand alone installation of Node. If you build using Visual Studio, it works ok because VS now comes with Node, but building in Azure Pipelines or local with the "dotnet build" command results in an error saying Node could not be found. My project is referencing the Microsoft.TypeScript.MSBuild nuget package. On March 20th 2020, I created a simple repro at https://github.com/kelps/TypeScriptBuildRepro and also submitted a VS feedback about this at https://developercommunity.visualstudio.com/content/problem/957892/typescript-files-do-not-build-using-dotnet-cli-com.html. @timheuer helped me diagnose it and the problem was that Node needed to be in the Path to work. I didn't like that because Visual Studio already comes with Node and I shouldn't need to have another separate Node install to run this. @madskristensen's BuildWebCompiler Nuget package also uses Node to compile .less/.sass/... files and it just works, without a stand alone Node install. After a long time testing and researching I was able to create a very simple work around. I added a msbuild Task to my .csproj file that runs before the TypeScript compilation and sets the NodePath to the one that comes with VS if it is still empty. When it worked, I removed that code from my .csproj file and placed it in the Directory.Build.targets file in the solution directory. **My suggestion** is to add some form of that code to the .targets file in the Microsoft.TypeScript.MSBuild Nuget package. The master in my repo now has the code that works. To see the build failling, just rename the Directory.Build.targets file and try to build the solution with "dotnet build" in a computer without a stand alone Node installation (or at least without Node in the %path%). It is a simple "fix" that provides a good fallback for TypeScript compilation with ASPNET Core on Windows. I am sure my code can be improved, but it is a good start. It uses vswhere to find the VS Node path. For reference, here is the code in my .targets file. ```xml <Project ToolsVersion="15.0"> <!--https://stackoverflow.com/questions/31664834/customize-system-environment-variable-path-for-msbuild-exec-task#31670922--> <!--http://blog.jdhardy.ca/2011/12/setting-environment-variables-for.html--> <!--bat script for findind the Visual Studio instalation path for Node and setting it in the appropriate variable before building TypeScript files--> <PropertyGroup> <SetNodePath> <![CDATA[ @setlocal enabledelayedexpansion @for /f "usebackq tokens=*" %%i in (`"%ProgramFiles(x86)%\Microsoft Visual Studio\Installer\vswhere" -latest -find **\node.exe`) do (@set nodePath=%%~dpi) @echo %NodePath% ]]></SetNodePath> </PropertyGroup> <!--Target that runs the script above if the node path isn't set yet and it is running on Windows--> <Target Name="FindAndSetNodePath" BeforeTargets="PreComputeCompileTypeScriptWithTSConfig" Condition="'$(NodePath)' == '' AND '$(OS)' == 'Windows_NT' AND exists('$(MSBuildProgramFiles32)\Microsoft Visual Studio\Installer\vswhere.exe')"> <Exec Command="$(SetNodePath)" ConsoleToMSBuild="true"> <Output TaskParameter="ConsoleOutput" PropertyName="NodePath"/> </Exec> <Message ContinueOnError="true" Text="NodePath: $(NodePath)" Importance="high" /> </Target> </Project> ```
Suggestion,Visual Studio
low
Critical
609,286,164
go
x/build: add a Windows builder with clang
In theory, we should be able to use either GCC or Clang on all platforms where cgo is supported. Currently, none of the Windows builders use Clang. It would be good to verify cgo works in that configuration.
OS-Windows,Builders,NeedsInvestigation,new-builder
low
Major
609,305,636
rust
`TryFrom` for now-incompatible type falls back to `From` error message
I am working on a library which at some point converts a `mysql_common::value::Value` to a type `DataType` via `DataType::try_from(<value>)`. I am using an `impl TryFrom<mysql_common::value::Value> for DataType` defined in a separate crate, which worked well initially. However, this other crate has since then bumped up the dependencies, such that my project was now using a version of `Value` incompatible to the one used by the other crate. The compiler complained (see error message) that "the trait `std::convert::From<mysql_common::value::Value>` is not implemented for `DataType`", and that the conversion was infallible. I think it would have been clearer and easier to debug had the compiler error message rather said that `TryFrom` was implemented for a type by the same name but incompatible (different version), rather than fall back on a blanket implementation error message which was deceptive in this case. ### Error message ``` error[E0277]: the trait bound `noria::data::DataType: std::convert::From<mysql_common::value::Value>` is not satisfied --> src/lib.rs:283:19 | 283 | match DataType::try_from(value) { | ^^^^^^^^^^^^^^^^^^ the trait `std::convert::From<mysql_common::value::Value>` is not implemented for `noria::data::DataType` | = help: the following implementations were found: <noria::data::DataType as std::convert::From<&'a nom_sql::common::Literal>> <noria::data::DataType as std::convert::From<&'a noria::data::DataType>> <noria::data::DataType as std::convert::From<&'a str>> <noria::data::DataType as std::convert::From<chrono::naive::datetime::NaiveDateTime>> and 11 others = note: required because of the requirements on the impl of `std::convert::Into<noria::data::DataType>` for `mysql_common::value::Value` = note: required because of the requirements on the impl of `std::convert::TryFrom<mysql_common::value::Value>` for `noria::data::DataType` error[E0308]: mismatched types --> src/lib.rs:285:77 | 285 | Err(e) => return Err(WriteProxyErr::MysqlToNoriaDatatypeErr(e)), | ^ expected `&str`, found enum `std::convert::Infallible` ``` ### Meta rustc 1.44.0-nightly (94d346360 2020-04-09) binary: rustc commit-hash: 94d346360da50f159e0dc777dc9bc3c5b6b51a00 commit-date: 2020-04-09 host: x86_64-unknown-linux-gnu release: 1.44.0-nightly LLVM version: 9.0
C-enhancement,A-diagnostics,T-compiler,A-suggestion-diagnostics,D-confusing,D-newcomer-roadblock,D-crate-version-mismatch
low
Critical
609,307,126
TypeScript
Support declaring @types as the supported route for typings dependencies in the package.json
## Search Terms package json types field types types module ## Suggestion Allow libraries written in JavaScript to be able to declare that they have types in DT (or other modules), and that they support a certain version of it. ### For example: Express does not support TypeScript but trusts that the DefinitelyTyped express maintainers know what they are doing generally. They want to declare that DT is the way for TypeScript users to get types out of the box. Today that's not really possible. ## Examples Two potential routes: 1. Override the "types" parameter to support: `"types": "@types/express:^3.0"` in a package.json 1. Recommend a new field in the package.json: `"typesModule": "@types/express:^3.0"` - then when TypeScript normally offers a 'no types found', then it can offer the command to install with the correct semver range. ## Checklist My suggestion meets these guidelines: * [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code * [x] This wouldn't change the runtime behavior of existing JavaScript code * [x] This could be implemented without emitting different JS based on the types of the expressions * [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.) * [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals). /cc @wesleytodd @isaacs @ljharb
Suggestion,Needs Proposal,In Discussion
low
Major
609,315,063
TypeScript
Declaration file generation not working as expected
**TypeScript Version:** 4.0.0-dev.20200429 (when trying with `@next`) **TypeScript Version:** 3.8.0.0-dev.20200429 (when using my version `^3.8.0`) **Search Terms:** module resolution, lerna, monorepo, import type, declaration file **Code** Repository https://github.com/Volox/tsc-declaration-issue To bootstrap and run: ``` npx lerna bootstrap npx lerna run compile ``` **Expected behavior:** The generated declaration files are only related to their own package. **Actual behavior:** The declaration files are generated correctly only for `lib1`; `lib2` ends up with the full tree of declaration files of both packages. ![Screenshot 2020-04-29 at 15 43 25](https://user-images.githubusercontent.com/1468697/80637776-d51a0580-8a5f-11ea-90f5-42973c9b0b7b.png) **Related Issues:** None that I was able to find **Notes:** The generation of declaration files is done correctly if I remove the "paths" property from `tsconfig.base.json`, but I receive compilation errors like: ``` src/index.ts(1,19): error TS2307: Cannot find module 'lib1'. src/index.ts(3,24): error TS2307: Cannot find module 'lib1/foo/bar'. ```
Needs Investigation
low
Critical
609,318,794
flutter
Don’t warn about running as root within systemd container
<!-- Thank you for using Flutter! Please check out our documentation first: * https://flutter.dev/ * https://api.flutter.dev/ If you can't find the answer there, please consider asking a question on the Stack Overflow Web site: * https://stackoverflow.com/questions/tagged/flutter?sort=frequent Please don't file a GitHub issue for support requests. GitHub issues are for tracking defects in the product. If you file a bug asking for help, we will consider this a request for a documentation update. --> The warning about running as root isn't shown when inside a docker container: https://github.com/flutter/flutter/pull/10535. Can we do the same for a systemd container? I frequently run flutter inside systemd containers and would like to silence this warning. Possible fix: ``` # Test if running as superuser – but don't warn if running within Docker or systemd container if [[ "$EUID" == "0" ]] && ! [[ -f /.dockerenv ]] && [ -z "$container" ]; then echo " Woah! You appear to be trying to run flutter as root." echo " We strongly recommend running the flutter tool without superuser privileges." echo " /" ``` _Detecting if inside a systemd container:_ https://github.com/systemd/systemd/blob/f20078df0b568cf365eea8278e98170e59ce2b2d/src/basic/virt.c#L500
c: new feature,tool,a: quality,P3,team-tool,triaged-tool
low
Critical
609,319,077
opencv
CMake: make public version of BUILD_LIST feature
Problem: - current name is not conflict free. For example `OPENCV_` or `CV_` prefix is preferable. - ambiguous purpose: it is about modules only (for example, it is not about 3rdparty build dependencies). `MODULES` suffix can clarify such purpose. Acceptance criteria: - add new name - add information into build-related tutorials relates #9893 relates #10841 relates https://github.com/opencv/opencv_contrib/pull/2417
feature,category: documentation,category: build/install
low
Minor
609,340,909
TypeScript
Rename refactoring doesn't handle changing a member to a non-Identifier
**TypeScript Version:** 3.8.3 **Search Terms:** refactor rename space identifier **Code** ```ts interface Foo { x: number; } var obj: Foo = { x: 1, }; obj.x = 2; ``` Rename the `x` member of `Foo` to ` x` (prefixed with space). **Expected behavior:** ```ts interface Foo { " x": number; } var obj: Foo = { " x": 1, }; obj[" x"] = 2; ``` **Actual behavior:** ```ts interface Foo { x: number; } var obj: Foo = { x: 1, }; obj. x = 2; ``` My use case was I decided to hide some members from intellisense using this trick, but the same thing applies if you want to rename to anything that's not a valid identifier (e.g. things with spaces or invalid identifier characters).
Bug,Needs Proposal
low
Minor
609,342,749
pytorch
Implement generic function scheduler in c10/util
## 🚀 Feature It would be useful to have a general utility in PyTorch that allows functions to be run periodically at a given time interval. This would be similar to folly's functionScheduler (https://github.com/facebook/folly/blob/master/folly/experimental/FunctionScheduler.cpp). This will be useful in a few different cases for RPC, such as eliminating several background watchdog threads that we currently have in favor of using this function scheduler. We also plan on having a metrics handler for RPC that reports metrics at a predefined interval, for which this would also be useful for (see https://github.com/pytorch/pytorch/issues/28245) cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @rohan-varma @xush6528 @jjlilley @osalpekar
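To illustrate the concept only (shown in Python for brevity; the proposed utility would be C++ in c10/util, and none of these names are real PyTorch APIs): the core idea is a background thread that runs registered callbacks at fixed intervals, similar in spirit to folly's FunctionScheduler.

```python
import threading
import time

class FunctionScheduler:
    """Minimal concept sketch of a periodic function scheduler."""

    def __init__(self):
        self._jobs = []                       # [interval_s, fn, next_run_time]
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._loop, daemon=True)

    def add_function(self, fn, interval_s):
        self._jobs.append([interval_s, fn, time.monotonic() + interval_s])

    def start(self):
        self._thread.start()

    def shutdown(self):
        self._stop.set()
        self._thread.join()

    def _loop(self):
        # poll every 10 ms until shutdown; run any job whose deadline has passed
        while not self._stop.wait(timeout=0.01):
            now = time.monotonic()
            for job in self._jobs:
                interval, fn, next_run = job
                if now >= next_run:
                    fn()
                    job[2] = now + interval

sched = FunctionScheduler()
sched.add_function(lambda: print("report RPC metrics"), interval_s=0.1)
sched.start()
time.sleep(0.35)   # expect roughly 3 runs
sched.shutdown()
```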
triaged,module: rpc
low
Minor
609,363,278
vscode
Tasks (and TaskExecutions) are not === in the onDid(Start|End)Task callbacks.
<!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ --> <!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ --> <!-- Please search existing issues to avoid creating duplicates. --> <!-- Also please test using the latest insiders build to make sure your issue has not already been fixed: https://code.visualstudio.com/insiders/ --> <!-- Use Help > Report Issue to prefill these. --> - VSCode Version: 1.44.2 - OS Version: macOS 10.5.4 Steps to Reproduce: 1. Create a new extension using `yo code`. 2. Add the following to the 'activate' function body: ``` let disposable = vscode.commands.registerCommand('extension.helloWorld', async () => { let echoTask = new vscode.Task({type: "taskbug"}, vscode.TaskScope.Workspace, 'echo', 'taskbug', new vscode.ProcessExecution('/bin/echo', ['hello world'])); const echoTaskExecution = await vscode.tasks.executeTask(echoTask); vscode.tasks.onDidStartTaskProcess(e => { if (e.execution === echoTaskExecution) { console.log(`Detected that my task started with pid ${e.processId}`); } else if (e.execution.task === echoTask) { console.log(`Detected that my task started with pid ${e.processId}`); } }); vscode.tasks.onDidEndTaskProcess(e => { if (e.execution === echoTaskExecution) { console.log(`Detected that my task exited with exit code ${e.exitCode}`); } else if (e.execution.task === echoTask) { console.log(`Detected that my task exited with exit code ${e.exitCode}`); } }); }); ``` 3. Run debug the extension and run the 'Hello World' command. If you break inside the onDidStartTaskProcess and onDidEndTaskProcess that comparison fail, which means its difficult to determine when the task starts or ends. It's worth noting that in VSCode 1.43 the above comparisons both work, but in 1.44 this is not working. <!-- Launch with `code --disable-extensions` to check. --> Does this issue occur when all extensions are disabled?: Yes
bug,tasks
low
Critical
609,405,197
PowerToys
Dynamically show Preview Pane when there's a preview available
# Dynamically show Preview Pane when there's a preview available I'm not sure if this is even possible to do at an application level; it might require a Windows feature update. But effectively, I do want previews for MD/SVGs like this PowerToy provides, but I don't want an empty "Preview Pane" taking up space the entire time, and I certainly don't want it showing "No preview available"; that's such a waste of space and so genuinely annoying that I turn preview panes off altogether, making the MD/SVG previews useless :( # Proposed implementation details - Configured via a settings flag - When selecting a file, check whether a valid previewer exists; if it does, enable the preview pane - Disable it when clicking off the item It might be possible to do this with some registry magic, but that seems a bit hacky... I haven't looked into the Windows APIs to see if there's a better way of handling this
Idea-Enhancement,Product-File Explorer
low
Major
609,408,089
bitcoin
Add new getrpcwhitelist call
For an RPC user it would be nice to be able to discover its whitelist (#12248). A new RPC `getrpcwhitelist` should be added for that. #### Useful skills: Basic understanding of the Bitcoin Core RPC interface, RPC whitelists and our functional test framework #### Want to work on this issue? The purpose of the `good first issue` label is to highlight which issues are suitable for a new contributor without a deep understanding of the codebase. You do not need to request permission to start working on this. You are encouraged to comment on the issue if you are planning to work on it. This will help other contributors monitor which issues are actively being addressed and is also an effective way to request assistance if and when you need it. For guidance on contributing, please read [CONTRIBUTING.md](https://github.com/bitcoin/bitcoin/blob/master/CONTRIBUTING.md) before opening your pull request.
RPC/REST/ZMQ
low
Major
609,448,409
pytorch
Are there any differences in kernel memory between RX2080's and Quadro RTX4000?
🐛 Bug My libtorch deep-learning code runs correctly on a GeForce RTX 2080, but it does not run on a Quadro RTX 4000. Both the GeForce RTX 2080 and the Quadro RTX 4000 have 8GB of GDDR memory, and the memory capacity seemed to be sufficient on the Quadro RTX 4000. When I added a call to "torch.cuda.empty_cache()" after each iteration on the Quadro RTX 4000, my code ran correctly, but the time per iteration increased. The RTX 2080 did not need the "torch.cuda.empty_cache()" call. Both cards have 8GB of GDDR, so why does only the Quadro RTX 4000 need "torch.cuda.empty_cache()" after each iteration? (1) Are there any differences in kernel memory between the RTX 2080 and the Quadro RTX 4000? (2) What does "torch.cuda.empty_cache()" do? ・Does it release all memory areas that were allocated by cudaMalloc()? ・Does it release the L1 cache in the SMs of the GPU? [Environment] I wrote the deep learning code to detect defects in mechanical parts. I used libtorch version 1.3.1. OS: Windows 10 Pro 64bit build 1909 Compiler: Visual Studio 2017 Professional CUDA 10.1, cuDNN 7.6.5 for CUDA 10.1 GPU: RTX 2080 (ZOTAC) and Quadro RTX 4000 (Nvidia). Only 1 GPU installed, in a PCI-E Gen3.0 x16 slot. I'd really appreciate it if anyone could give me advice. Regards, cc @ngimel
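On question (2), as far as I understand it, `torch.cuda.empty_cache()` returns cached blocks held by PyTorch's caching allocator (memory obtained via cudaMalloc but no longer backing any live tensor) to the driver; it does not free live tensors and it has nothing to do with on-chip L1/SM caches. A rough way to observe this on a recent PyTorch build (these memory-stat APIs may not all exist in libtorch 1.3.1):

```python
import torch

assert torch.cuda.is_available()

x = torch.randn(1024, 1024, 256, device="cuda")   # ~1 GiB of fp32
print(torch.cuda.memory_allocated() // 2**20, "MiB allocated")
print(torch.cuda.memory_reserved() // 2**20, "MiB reserved by the caching allocator")

del x                                  # tensor is gone, but the blocks stay cached
print(torch.cuda.memory_allocated() // 2**20, "MiB allocated after del")
print(torch.cuda.memory_reserved() // 2**20, "MiB still reserved")

torch.cuda.empty_cache()               # hand unused cached blocks back to the driver
print(torch.cuda.memory_reserved() // 2**20, "MiB reserved after empty_cache()")
```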
module: cuda,module: memory usage,triaged
low
Critical
609,461,829
godot
Extending engine classes in GDScript or C# not working
**Godot version:** 3.2.1 **OS/device including version:** All platforms **Issue description:** Extending engine classes is essentially broken. **No control over call of super functions** In the current state, godot will call all the super classes' implementations of a function and the user has no way to opt out of that behavior, [as discussed in this issue](https://github.com/godotengine/godot/issues/6500). I completely understand [the reasoning](https://github.com/godotengine/godot/issues/6500#issuecomment-247370255) for this behavior and agree on that. While for user-written classes, there is a workaround: ``` # override this in your subclass func _on_process(delta): pass func _process(delta): _on_process(delta) ``` **The problem** is, _we cannot apply this workaround on engine classes_. As a result, extending engine classes in GDScript is barely supported in the moment, as it is not possible to override functions of engine classes properly. Another problem is that some functions work like this, and _others don't_. There is a lack of consistency. I thought quite a bit about it, and from the various options available, I think introducing the new keyword `override` might be the best option: ``` override func _process(delta): if we_want: ._process(delta) ``` So essentially, `override` will prevent the super classes' implementations of this function to be called. The user opts-in to that behavior and is then in full control if/when/how the super method is called. The "[squirrel programmer](https://github.com/godotengine/godot/issues/6500#issuecomment-247370255)" is still fine and everybody else used to OOP got his powers back. More as a site note: the second reason why extending engine classes is essentially broken in godot in the moment is that it is impossible to extend _virtual_ engine classes and provide a GDScript-implementation of these interfaces. This is [discussed here](https://github.com/godotengine/godot/issues/38294) and [here](https://github.com/godotengine/godot-docs/issues/3460), and affects _all virtual engine classes_ exposed to GDScript. **How to fix** In an ideal world, I would love to have the `override` keyword and have full control on extending engine classes. Also, I would like to be able to implement virtual engine classes. Naturally, this would increase flexibility of GDScript a lot [as discussed here](https://github.com/godotengine/godot/issues/38294). This would essentially fix inheritance of engine classes in godot. If this is not possible for technical reasons, we should at least add a warning/errors to save the user from some frustration: - a warning when a user tries to extend an engine class, because what he is trying to do might in fact not work out at all, because he will have no control over if and when super functions are called. - an error when the user tries to extend virtual engine classes, because this is in fact not working at all - a warning when the user tries to call super functions like `._process` or `._ready`, because this would call the super method twice, and this is most likely _not_ intended Thank you guys for your great work on this wonderful engine. I think it's normal there are flaws at some point, but it's important to save the user from endless frustrating hours trying figure out what's going wrong. Thank you for considering, I appreciate your thoughts about this.
topic:gdscript,documentation,topic:dotnet
low
Critical
609,468,343
material-ui
[Autocomplete] New API to control the highlighted option
- [x] I have searched the [issues](https://github.com/mui-org/material-ui/issues) of this repository and believe that this is not a duplicate. ## Summary 💡 Typing into the auto-complete and pressing enter should select something, such as the suggested option. The suggested option to select should be highlighted. ## Examples 🌈 For example type in `Good` into the [playground](https://material-ui.com/components/autocomplete/#playground) and press enter, nothing gets selected. Lets say I have a text matching score when searching for `Good` as following: `The Good, the Bad and the Ugly` = 0.3 `Goodfellas` = 0.8 I would want Goodfellas to be highlighted and selected when enter is pressed. ## Motivation 🔦 The current auto-complete logic makes suggesting an option difficult. Passing in a highlightedValue/suggestedValue would make the above possible. Set the first value in the list to be highlighted if one is not provided.
new feature,component: autocomplete
medium
Critical
609,469,853
flutter
`window.locale` is `null` on Android unless the locale is sent via the channel
This came across after https://github.com/flutter/engine/pull/17473 was merged. Can `window.locale` be `null` on Android? If yes, what is the recommended practice on Android? I presume if a 3P dependency is consuming `window.locale`, then this would be unexpected in an add-to-app scenario, for example. cc @xster, @GaryQian
platform-android,engine,P2,team-android,triaged-android
low
Minor
609,489,609
pytorch
Add name to Class Parameter()
## 🚀 Feature

Add a name to class `Parameter()` ([link](https://github.com/pytorch/pytorch/blob/master/torch/nn/parameter.py)).

## Motivation

Currently, parameter names are available via `nn.Module.named_parameters()`, which is good enough for a model that lives on a single machine. However, once we start to pass parameters around via RPC, a stable name inside `class Parameter()` becomes really handy.

A stable name facilitates a lot of distributed use cases. Say we need to save a distributed model "distributedly": an obvious key to look up a parameter (for offline manipulation or restoring) is the parameter name. Debugging a distributed model will be easier too.

There are a couple of requirements for this name:

1. The name should be stable. If one instantiates the same model class multiple times, the name of a particular parameter should be exactly the same.
2. The name should be readable.
3. The name might need to reflect a certain set (not all) of operations on the parameter, e.g. a parameter produced by `detach()` would have the keyword "detached" in it.

P.S. I did a quick search and did not see an issue for this; feel free to dupe it if there is an open one already.

cc @albanD @mruberry @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @rohan-varma @xush6528 @jjlilley @osalpekar
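For context, the closest workaround today is a manual tagging pass over `named_parameters()`; a minimal sketch (the `_param_name` attribute is ad hoc, not a real API, and does not survive serialization):

```python
import torch.nn as nn

def tag_parameter_names(module: nn.Module) -> None:
    # Stamp each parameter with its fully qualified name so it can be
    # identified after being passed around. A built-in Parameter.name
    # would make this unnecessary.
    for name, param in module.named_parameters():
        param._param_name = name  # ad-hoc attribute, not an official API

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
tag_parameter_names(model)
print(model[0].weight._param_name)  # -> "0.weight"
```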
oncall: distributed,module: nn,triaged,enhancement,module: rpc
low
Critical
609,503,525
flutter
[Web] Web does not support animation sheet golden tests
https://github.com/flutter/flutter/pull/55527 added animation sheet golden tests, which are very useful when testing the animation of widgets. However, the feature is implemented using screenshots, which are not supported on the Web. It'd be great to find a way to support it.

More specifically, the current approach takes a screenshot of the target widget after every passed interval, and displays them in a grid afterwards. Multiple solutions were considered before we ended up choosing the current one, despite knowing it was web-incompatible:

1. Move the target widget to a new position, and display a new widget in front. In this way, the animating widget would be displayed backwards.
   - Problem: We want to support interacting with the widget. This approach interrupts such interactions.
2. Render the target widget at both a new location and the original location, with the original location used for interactions.
   - Problem: Flutter supports neither cloning widget states nor rendering the same object twice.
a: tests,c: new feature,framework,a: animation,platform-web,P3,team: skip-test,team-web,triaged-web
low
Minor
609,505,398
pytorch
torch.cartesian_prod(*tensors) error when you have tensors with [x,y]
## To Reproduce

Steps to reproduce the behavior:

```python
import itertools

import torch

a = [[1, 1], [2, 3], [3, 5]]
b = [[4, 5], [3, 6]]
print(list(itertools.product(a, b)))  # works as expected

tensor_a = torch.tensor(a)
tensor_b = torch.tensor(b)
torch.cartesian_prod(tensor_a, tensor_b)  # raises RuntimeError
```

## Error:

```
[([1, 1], [4, 5]), ([1, 1], [3, 6]), ([2, 3], [4, 5]), ([2, 3], [3, 6]), ([3, 5], [4, 5]), ([3, 5], [3, 6])]
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-346-78fb1274668d> in <module>
      4 tensor_a = torch.tensor(a)
      5 tensor_b = torch.tensor(b)
----> 6 torch.cartesian_prod(tensor_a, tensor_b)

c:\users\hbb9279\appdata\local\programs\python\python37\lib\site-packages\torch\functional.py in cartesian_prod(*tensors)
    603             [3, 5]])
    604     """
--> 605     return torch._C._VariableFunctions.cartesian_prod(tensors)
    606
    607

RuntimeError: Expect a 1D vector, but got shape [3, 2]
```

## Expected behavior

I expected `torch.cartesian_prod` to behave like `itertools.product`, but it only accepts 1-D tensors, so it fails as soon as the inputs have more than one dimension.

## Environment

Collecting environment information...
PyTorch version: 1.3.0
Is debug build: No
CUDA used to build PyTorch: 10.1

OS: Microsoft Windows 10 Enterprise
GCC version: Could not collect
CMake version: Could not collect

Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.0.130
GPU models and configuration: GPU 0: Quadro P6000
Nvidia driver version: 426.00
cuDNN version: Could not collect

Versions of relevant libraries:
[pip3] numpy==1.17.3
[pip3] pytorch-transformers==1.2.0
[pip3] torch==1.3.0
[pip3] torchvision==0.4.1
[conda] Could not collect

cc @vincentqb @vishwakftw @SsnL @jianyuh
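For comparison, here is a workaround sketch that reproduces the `itertools.product` pairing by taking the cartesian product of the row indices (my own approach, not an official API):

```python
import torch

a = torch.tensor([[1, 1], [2, 3], [3, 5]])
b = torch.tensor([[4, 5], [3, 6]])

# The row indices are 1-D, so cartesian_prod accepts them; the rows are
# then gathered to build the (row of a, row of b) pairs.
idx = torch.cartesian_prod(torch.arange(a.size(0)), torch.arange(b.size(0)))
pairs = torch.stack([a[idx[:, 0]], b[idx[:, 1]]], dim=1)
print(pairs.shape)  # torch.Size([6, 2, 2])
```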
triaged,module: linear algebra
low
Critical
609,529,134
rust
Missing PartialEq<&str> impls on Path and PathBuf
Given that `&OsStr` is comparable to `&str`, I would expect `&Path` (and `PathBuf`) to be comparable to `&str` since `&OsStr` and `&Path` can freely convert to each other.

## Example

```rust
use std::path::{Path, PathBuf};

fn main() {
    let path = PathBuf::from("-");
    println!("{}", path == Path::new("-"));
    println!("{}", path.as_os_str() == Path::new("-"));
    println!("{}", path.as_os_str() == "-");
    println!("{}", path == "-");

    let path = Path::new("-");
    println!("{}", path == PathBuf::from("-"));
    println!("{}", path.as_os_str() == Path::new("-"));
    println!("{}", path.as_os_str() == "-");
    println!("{}", path == "-");
}
```

## Playground

https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=acfe18e120c54faf105d655c12a86ea7
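A minimal sketch of the workaround this currently forces (the `path_eq_str` helper is my own; coherence rules prevent adding the missing impl outside `std`):

```rust
use std::path::{Path, PathBuf};

// Comparison has to go through OsStr explicitly until Path/PathBuf get
// PartialEq<str> impls of their own.
fn path_eq_str(path: &Path, s: &str) -> bool {
    path.as_os_str() == s
}

fn main() {
    let buf = PathBuf::from("-");
    assert!(path_eq_str(&buf, "-"));
    assert!(path_eq_str(Path::new("-"), "-"));
    // What I'd like to be able to write directly:
    // assert!(buf == "-");
}
```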
T-libs-api,C-feature-request,A-str
low
Critical
609,615,273
TypeScript
Add readonly for all possible function parameter for lib.dom.d.ts
## Search Terms

library, dom

## Suggestion

Add `readonly` to all possible function parameters in `lib.dom.d.ts`.

## Examples

Here is an example of adding readonly types to a constructor's parameters:

```ts
declare var URLSearchParams: {
    prototype: URLSearchParams;
    new(init?: string[][] | Record<string, string> | string | URLSearchParams): URLSearchParams;
    toString(): string;
};
```

It should be

```ts
declare var URLSearchParams: {
    prototype: URLSearchParams;
    new(init?: ReadonlyArray<readonly [string, string]> | Readonly<Record<string, string>> | string | URLSearchParams): URLSearchParams;
    toString(): string;
};
```

Then I can use

```ts
const myGlobalConfig = {
    myURLSearchParamsInit1: [['foo', 'bar']],
    myURLSearchParamsInit2: { foo: 'bar' }
} as const;

const params = new URLSearchParams(myGlobalConfig.myURLSearchParamsInit1)
```
Bug,Domain: lib.d.ts
low
Critical
609,628,480
godot
Use Local Space doesn't work when multiple Spatials are selected
**Godot version:** v3.2.1

**Issue description:**
Use Local Space doesn't work when more than one Spatial is selected.

![image](https://user-images.githubusercontent.com/28286961/80679034-64ed9d00-8abc-11ea-988d-31ad038580a5.png)

This shouldn't happen when every selected node has the same transform.

**Steps to reproduce:**
Create two MeshInstances, move them away from each other, then select both and rotate them. Enable Use Local Space and move them: global space is used instead.

**Minimal reproduction project:**
[LocalSpaceBug.zip](https://github.com/godotengine/godot/files/4556471/LocalSpaceBug.zip)
bug,topic:editor,usability
low
Critical
609,648,759
flutter
Native widgets should support fromMap factory
## Use case

Sometimes developers need to build a dynamic decorator (wrap a widget in another widget programmatically, or style the current widget with a pre-defined set of styles), and with how things are implemented currently, this is not possible.

## Proposal

```dart
Map<String, dynamic> containerData = {
  "margin": EdgeInsets.all(10.0),
  "color": Colors.pink,
  "width": 25.0
};

Widget container = Container.fromMap(containerData);
```

The content (child) could be set either from the map or by a method such as `.setChild()`. This would help immensely.
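A rough sketch of the idea as a plain helper function, just to illustrate the shape of the API (the supported keys and the `containerFromMap` name are only examples; a real implementation would be a factory constructor on the widget itself):

```dart
import 'package:flutter/material.dart';

Widget containerFromMap(Map<String, dynamic> data, {Widget child}) {
  // Each recognized key maps straight onto the corresponding
  // Container constructor argument.
  return Container(
    margin: data['margin'] as EdgeInsetsGeometry,
    color: data['color'] as Color,
    width: data['width'] as double,
    child: child,
  );
}

final decorated = containerFromMap({
  'margin': EdgeInsets.all(10.0),
  'color': Colors.pink,
  'width': 25.0,
}, child: Text('hello'));
```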
c: new feature,framework,c: proposal,P3,team-framework,triaged-framework
low
Critical
609,671,920
excalidraw
How to use https
Hi, can I run the Excalidraw Docker container with HTTPS support?
support
low
Minor
609,690,483
PowerToys
PowerSymlink - Creates Symlinks from Context Menu ("mklink /D")
Please add a **PowerToy to create symlinks** from the Windows Explorer context menu.

For web projects, I often have to create symlinks (using `mklink /D`) from the command line. Having this feature in the Windows Explorer context menu would be a great time-saver.

The context menu should have two entries for this new PowerToy:

1) **Only if a folder/file is selected**
   - Context menu shows "**PowerSymlink - link to ...**"
   - (_Idea: Maybe it's possible here to use a modified version of the existing Windows "Create new Shortcut" feature._)
   - The click opens a popup with the title "Set Symlink destination name".
   - This popup contains a text field for the symlink destination filename, a folder selector button, and a confirm button.
2) Context menu shows "**PowerSymlink - create new**" (visible with or without a selection)
   - Implementation like the existing Windows "Create new Shortcut" functionality (see screenshot)
   - If a folder is selected, this folder is automatically set in the popup.

I normally use the following command to create symlinks on the command line. A context menu entry for symlinks would also help to make fewer mistakes while creating them.

`mklink /D "C:\Users\username\Local Sites\__CHANGE-PROJECT-NAME__\app\public\wp-content\plugins\__CHANGE-PLUGIN-FOLDER-NAME__" "C:\Users\username\Local Sites - Shared Project Data\plugins\__CHANGE-PLUGIN-FOLDER-NAME__"`

### More ideas:

- Popup: It should be possible to directly copy and paste the complete path, including a not-yet-existing filename, into the text field. If the file does not exist, a confirmation popup asks if you really want to create this symlink destination.

![symlink_3](https://user-images.githubusercontent.com/827658/80686640-4d68e100-8ac9-11ea-9c73-ffebea627ac7.jpg)

![windows-shortcut](https://user-images.githubusercontent.com/827658/80685006-d599b700-8ac6-11ea-8c0a-c1463b96be2f.jpg)
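For reference, a PowerShell equivalent of the `mklink /D` call above that such a context menu entry could wrap (requires an elevated prompt or Developer Mode; the placeholder paths are the same as above):

```powershell
New-Item -ItemType SymbolicLink `
  -Path "C:\Users\username\Local Sites\__CHANGE-PROJECT-NAME__\app\public\wp-content\plugins\__CHANGE-PLUGIN-FOLDER-NAME__" `
  -Target "C:\Users\username\Local Sites - Shared Project Data\plugins\__CHANGE-PLUGIN-FOLDER-NAME__"
```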
Idea-New PowerToy
medium
Critical
609,729,824
flutter
[flutter_driver] Support finding/tapping gesture recognizer on TextSpan from RichText
I have faced this problem while writing Gherkin BDD integration tests for my Flutter application: I am not able to get the driver to tap on TextSpan items inside a RichText. I know that a TextSpan cannot be given a key, as it is not a widget itself but part of one (styled text inside a RichText, for example). Therefore, please provide a way for flutter_driver integration tests to find and tap a TextSpan, or add a key-like property to TextSpan.

Thank you very much in advance.
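A minimal sketch of the pattern I mean; the tappable part is a TextSpan with a gesture recognizer, so there is no widget (and no key) for the driver to find (names here are illustrative):

```dart
import 'package:flutter/gestures.dart';
import 'package:flutter/material.dart';

Widget termsText(VoidCallback onTapTerms) {
  return RichText(
    text: TextSpan(
      text: 'I agree to the ',
      style: TextStyle(color: Colors.black),
      children: [
        TextSpan(
          text: 'terms and conditions',
          style: TextStyle(decoration: TextDecoration.underline),
          // The tap target lives on the span, not on a widget.
          recognizer: TapGestureRecognizer()..onTap = onTapTerms,
        ),
      ],
    ),
  );
}
```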
a: tests,c: new feature,framework,t: flutter driver,a: typography,P2,team-framework,triaged-framework
low
Major
609,827,621
TypeScript
Inconsistency in generic parameter inference
**TypeScript Version:** 3.8.3, nightly, etc.

**Search Terms:** infer function return

**Code**

```ts
declare function h<T extends keyof HTMLElementTagNameMap>(
  tag: T,
  ...init: ((e: HTMLElementTagNameMap[T]) => void)[]
): HTMLElementTagNameMap[T];

declare function events<T extends HTMLElement>(
  handlers: Partial<{ [K in keyof HTMLElementEventMap]: (this: T, e: HTMLElementEventMap[K]) => void }>
): (e: T) => void;

declare function props<T extends HTMLElement>(
  props: Partial<T>
): (e: T) => void;

h(
  'input',
  events({
    input() {
      this.value;
    },
  }),
  props({ value: '42' })
)
```

**Expected behavior:** no errors

**Actual behavior:**

```
Argument of type '{ value: string; }' is not assignable to parameter of type 'Partial<HTMLElement>'.
  Object literal may only specify known properties, and 'value' does not exist in type 'Partial<HTMLElement>'.
```

**Playground Link:** [playground](https://www.typescriptlang.org/play/?ssl=24&ssc=2&pln=1&pc=1#code/CYUwxgNghgTiAEAzArgOzAFwJYHtXwAsAeAFXhAA8MRVgBneAaxAE8dF4AJEgWQBkAohBABbGhhJQA5gDkoYnlAAOAPgAUAKHjb4GaQC54JADRadAOktZUWDIbVqQh7vyGjxk2fJCKlAbRIAXQBKeABeFXgANxwsYGC-QNNg515BYTFUCWk5BWUAwIBuDQ1QSFgEFHRsPHIo8TpScioaei40t0yMdTNtAihaYRg6QwAFWGwoCCIAb3g-AGl4ayZWdnbXDPEBeqzfQPsMAiwRo2NyVM33LJ3xX0WQ8MiYuPgAXxUNFPhHQxJQiLRWLAYqlcDQOBINCYXD4JQwHBKRpkSjUWgMFzpa7dTQ6eDwxGncYwSbTEifb6-IwA57A0EEXE6ADk1iUyAwTNMeJAuwwdDUM16eOWqDZGDUoUFwulumOdHMUSmyBAxRl7y5OjewQ12gJSIFQrxiogysMTIALAAmJlCrVfIA)

**Related Issues:** I'm pretty sure I didn't choose a decent title for the issue, so my ability to search for this is limited as well. I can understand this could be viewed as a feature request/improvement, but there are a couple of reasons that make the behaviour feel buggy:

1. The inference works correctly for the `events` call, which is resolved as `events<HTMLInputElement>`.
2. IntelliSense works on the `props` argument, which means that at some point the compiler understands I'm writing a `Partial<HTMLInputElement>`. But for some reason it resolves the call to `props<HTMLElement>`, so the code fails to typecheck.

I can work around it by adding the type parameter, but it degrades the development experience.
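For completeness, the workaround mentioned above, reusing the same `h`/`events`/`props` declarations from the snippet: pinning the type parameter on `props` makes the call typecheck:

```ts
h(
  'input',
  events({
    input() {
      this.value;
    },
  }),
  // Explicit type argument as a workaround for the inference issue above.
  props<HTMLInputElement>({ value: '42' })
)
```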
Needs Investigation
low
Critical