id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
534,839,464 | flutter | main.dart.js is too large |
The files packaged by `flutter build web` are large.
While main.dart.js is loading, the entire webpage stays blank.
How can the output be split and loaded on demand (deferred/lazy loading),
so that the page loads progressively like a normal web page? | c: performance,a: size,platform-web,perf: app size,found in release: 1.17,P3,team-web,triaged-web | high | Critical |
534,909,274 | pytorch | How can I add masks to parameters | Hi,
Can I use a hook to add a parameter-masking function to Conv2d? Specifically, I'd like to add a binary mask buffer to each Conv2d module; during each training step, I need to update the mask buffer and then use it to mask the weight.
Or is there any method to add masks and apply them to the Conv2d modules in a given model?
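A minimal sketch of one way to do this with the existing hook machinery (illustrative only, not an official API; recent PyTorch releases also ship `torch.nn.utils.prune` for this kind of weight masking):
```python
import torch
import torch.nn as nn

def add_weight_mask(conv: nn.Conv2d) -> None:
    # Register the binary mask as a buffer so it follows .to()/.cuda()
    # and is included in the state_dict.
    conv.register_buffer("weight_mask", torch.ones_like(conv.weight))

    def apply_mask(module, inputs):
        # Re-apply the (possibly updated) mask right before every forward pass.
        with torch.no_grad():
            module.weight.mul_(module.weight_mask)

    conv.register_forward_pre_hook(apply_mask)

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 8, 3))
for m in model.modules():
    if isinstance(m, nn.Conv2d):
        add_weight_mask(m)
```
Between training steps you can update `weight_mask` in place; the hook re-masks the weight on the next forward.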
Thanks! | module: nn,triaged | low | Minor |
534,957,331 | flutter | StyleComponents to CSS-like | Every Widget with style components contains other Widgets that handle the styling.
Would it not be better if styling were coded like CSS? It would not create new objects, the widget tree would be much smaller, performance would improve, and app sizes would be smaller.
For Example (copied from Flutter documentation):
```dart
Text(
'Hello, $_name! How are you?',
textAlign: TextAlign.center,
overflow: TextOverflow.ellipsis,
style: TextStyle(fontWeight: FontWeight.bold),
)
```
As you can see, the tree gets a Text widget that has a TextStyle child.
Would it not be better to integrate the style settings as properties directly on the Text? Like:
```dart
Text(
'Hello, $_name! How are you?',
textAlign: TextAlign.center,
overflow: TextOverflow.ellipsis,
style: [
fontWeight: bold,
fontSize: 42px,
]
)
```
and so on...
Correct me if I'm wrong, but each property creates a new object that needs to be processed in the tree, which also has its own headers and other content. | framework,c: proposal,P3,team-framework,triaged-framework | low | Major |
534,984,475 | pytorch | JIT breaks with postponed annotations | Targeting the correct issue this time, sorry for the noise
## 🐛 Bug
As per [PEP 563 (Postponed Evaluation of Annotations)](https://www.python.org/dev/peps/pep-0563), type annotations are no longer automatically evaluated at definition time, starting with Python 3.7, when using `from __future__ import annotations`.
The solution is to avoid using `__annotations__` directly in [`jit._recursive.infer_concrete_type_builder()`](https://github.com/pytorch/pytorch/blob/master/torch/jit/_recursive.py#L74) and to call [`typing.get_type_hints()`](https://docs.python.org/3.7/library/typing.html#typing.get_type_hints) instead, which is available since Python 3.7 and has the nice benefit that type hints given as strings (which is possible even without the `__future__` import) will also be correctly evaluated. Should I make a PR?
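A minimal illustration of the difference the reporter is pointing at (plain Python, no PyTorch involved):
```python
from __future__ import annotations
import typing

class M:
    const: int

# With postponed evaluation, raw annotations are just strings ...
print(M.__annotations__)         # {'const': 'int'}
# ... whereas get_type_hints() evaluates them back into real type objects.
print(typing.get_type_hints(M))  # {'const': <class 'int'>}
```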
## Testcase
```python
from __future__ import annotations
from typing_extensions import Final
import torch
import torch.jit
class ModuleWithFinal(torch.nn.Module):
    const: Final[int]

    def __init__(self, const):
        super().__init__()
        self.const = const

mod = ModuleWithFinal(0)
torch.jit.script(mod)
```
This fails with traceback
```text
File "testcase.py", line 14, in <module>
torch.jit.script(mod)
File "torch/jit/__init__.py", line 1256, in script
return torch.jit._recursive.recursive_script(obj)
File "torch/jit/_recursive.py", line 534, in recursive_script
return create_script_module(nn_module, infer_methods_to_compile(nn_module))
File "torch/jit/_recursive.py", line 293, in create_script_module
concrete_type = concrete_type_store.get_or_create_concrete_type(nn_module)
File "torch/jit/_recursive.py", line 236, in get_or_create_concrete_type
concrete_type_builder = infer_concrete_type_builder(nn_module)
File "torch/jit/_recursive.py", line 131, in infer_concrete_type_builder
if torch._jit_internal.is_final(ann):
File "torch/_jit_internal.py", line 645, in is_final
return ann.__module__ == 'typing_extensions' and \
AttributeError: 'str' object has no attribute '__module__'
```
cc @suo | oncall: jit,triaged | low | Critical |
534,990,545 | flutter | CupertinoPageRoute should support _kBackGestureWidth customization | CupertinoPageRoute should support _kBackGestureWidth customization
## Use case
Our project manager told us we should expand the back-gesture area on iOS. I found that CupertinoPageRoute already has a const named _kBackGestureWidth, and it worked after modifying it.
But I cannot change it when using the CupertinoPageRoute builder,
so I just copied the file and modified it.
Could you please make this customizable? Thanks.
## Proposal
I think many users need this. | c: new feature,framework,f: cupertino,c: proposal,P3,team-design,triaged-design | low | Major |
535,005,162 | flutter | Style action label on SnackBar |
## Use case
The designers on my team want a consistent font family for all text within the app.
There is a need to display a SnackBar with an action. Currently, the action label cannot be styled.
## Proposal
I should be able to pass a TextStyle to the SnackBarAction to specify what I want that text to look like.
| c: new feature,framework,f: material design,a: typography,P3,has partial patch,team-design,triaged-design | low | Critical |
535,005,446 | rust | Strange loop performance with generators and very slow `for` in `debug` build [on nightly] | Having the following simple benchmark code:
<details><summary> Very large code </summary>
```rust
#![feature(generators, generator_trait)]
#![feature(trait_alias)]
#![allow(while_true)]
#![allow(clippy::print_literal)]
use std::marker::PhantomData;
use std::ops::{Add, Generator, GeneratorState::*};
use std::pin::Pin;
trait FibValue<T> = Add<Output = T> + From<u8> + Copy;
struct Fibs<T: FibValue<T>, G: Generator<Yield = T, Return = !> + Unpin>(
Pin<Box<G>>,
PhantomData<T>,
);
impl<T: FibValue<T>, G: Generator<Yield = T, Return = !> + Unpin> Iterator for Fibs<T, G> {
type Item = T;
fn next(&mut self) -> Option<T> {
if let Yielded(element) = self.0.as_mut().resume() {
element.into()
} else {
None
}
}
}
fn create_fibs_for_gen<T: FibValue<T>>() -> Fibs<T, impl Generator<Yield = T, Return = !> + Unpin> {
Fibs(
Box::pin(|| {
let mut previous: T = 0.into();
let mut current: T = 1.into();
yield previous;
yield current;
for _ in 0usize.. {
let element = current + previous;
previous = current;
current = element;
yield element
}
unreachable!()
}),
PhantomData,
)
}
fn create_fibs_loop_gen<T: FibValue<T>>() -> Fibs<T, impl Generator<Yield = T, Return = !> + Unpin>
{
Fibs(
Box::pin(|| {
let mut previous: T = 0.into();
let mut current: T = 1.into();
yield previous;
yield current;
loop {
let element = current + previous;
previous = current;
current = element;
yield element
}
}),
PhantomData,
)
}
fn create_fibs_while_gen<T: FibValue<T>>() -> Fibs<T, impl Generator<Yield = T, Return = !> + Unpin>
{
Fibs(
Box::pin(|| {
let mut previous: T = 0.into();
let mut current: T = 1.into();
yield previous;
yield current;
while true {
let element = current + previous;
previous = current;
current = element;
yield element
}
unreachable!()
}),
PhantomData,
)
}
fn generate_fibs_loop<T: FibValue<T>>(result: &mut Vec<T>) {
let fib_count = result.capacity();
let mut previous: T = 0.into();
let mut current: T = 1.into();
let mut num = 2;
if fib_count > 0 {
result.push(previous);
if fib_count > 1 {
result.push(current);
}
}
loop {
if fib_count <= num {
break;
} else {
num += 1;
}
let element = current + previous;
previous = current;
current = element;
result.push(element);
}
}
fn generate_fibs_for<T: FibValue<T>>(result: &mut Vec<T>) {
let fib_count = result.capacity();
let mut previous: T = 0.into();
let mut current: T = 1.into();
if fib_count > 0 {
result.push(previous);
if fib_count > 1 {
result.push(current);
}
}
for _ in 2..fib_count {
let element = current + previous;
previous = current;
current = element;
result.push(element);
}
}
fn generate_fibs_while<T: FibValue<T>>(result: &mut Vec<T>) {
let fib_count = result.capacity();
let mut previous: T = 0.into();
let mut current: T = 1.into();
let mut num = 2;
if fib_count > 0 {
result.push(previous);
if fib_count > 1 {
result.push(current);
}
}
while num < fib_count {
num += 1;
let element = current + previous;
previous = current;
current = element;
result.push(element);
}
}
fn main() {
let mut fibs = Vec::with_capacity(10000);
let mut start_time = std::time::Instant::now();
generate_fibs_loop::<f64>(&mut fibs);
println!(
"{} => {:?}, total count: {}, test value: {}",
"fib via `loop`!",
std::time::Instant::now() - start_time,
fibs.len(),
fibs[500].log(std::f64::consts::PI)
);
fibs = Vec::with_capacity(10000);
start_time = std::time::Instant::now();
generate_fibs_for::<f64>(&mut fibs);
println!(
"{} => {:?}, total count: {}, test value: {}",
"fib via `for`!",
std::time::Instant::now() - start_time,
fibs.len(),
fibs[500].log(std::f64::consts::PI)
);
fibs = Vec::with_capacity(10000);
start_time = std::time::Instant::now();
generate_fibs_while::<f64>(&mut fibs);
println!(
"{} => {:?}, total count: {}, test value: {}",
"fib via `while`!",
std::time::Instant::now() - start_time,
fibs.len(),
fibs[500].log(std::f64::consts::PI)
);
start_time = std::time::Instant::now();
fibs = create_fibs_loop_gen::<f64>()
.take(10000)
.collect::<Vec<f64>>();
println!(
"{} => {:?}, total count: {}, test value: {}",
"fib via `loop` generator!",
std::time::Instant::now() - start_time,
fibs.len(),
fibs[500].log(std::f64::consts::PI)
);
start_time = std::time::Instant::now();
fibs = create_fibs_for_gen::<f64>()
.take(10000)
.collect::<Vec<f64>>();
println!(
"{} => {:?}, total count: {}, test value: {}",
"fib via `for` generator!",
std::time::Instant::now() - start_time,
fibs.len(),
fibs[500].log(std::f64::consts::PI),
);
start_time = std::time::Instant::now();
fibs = create_fibs_while_gen::<f64>()
.take(10000)
.collect::<Vec<f64>>();
println!(
"{} => {:?}, total count: {}, test value: {}",
"fib via `while` generator!",
std::time::Instant::now() - start_time,
fibs.len(),
fibs[500].log(std::f64::consts::PI),
);
}
```
</details>
[**If the number of benchmark iterations for each case is increased, the median time difference becomes less significant**]
But for 1 iteration I got very strange results (across many recompilations and reruns, the results are approximately the same):
- Debug on my machine:
```
fib via `loop`! => 283.689µs, total count: 10000, test value: 209.48277540220764
fib via `for`! => 1.037341ms, total count: 10000, test value: 209.48277540220764
fib via `while`! => 363.379µs, total count: 10000, test value: 209.48277540220764
fib via `loop` generator! => 911.265µs, total count: 10000, test value: 209.48277540220764
fib via `for` generator! => 1.331597ms, total count: 10000, test value: 209.48277540220764
fib via `while` generator! => 897.928µs, total count: 10000, test value: 209.48277540220764
```
- Debug on playground:
```
fib via `loop`! => 518.857µs, total count: 10000, test value: 209.48277540220764
fib via `for`! => 1.706352ms, total count: 10000, test value: 209.48277540220764
fib via `while`! => 621.504µs, total count: 10000, test value: 209.48277540220764
fib via `loop` generator! => 1.53625ms, total count: 10000, test value: 209.48277540220764
fib via `for` generator! => 2.30295ms, total count: 10000, test value: 209.48277540220764
fib via `while` generator! => 1.276106ms, total count: 10000, test value: 209.48277540220764
```
The `for` loop is **up to 3 times slower**
- Release on my machine:
```
fib via `loop`! => 67.293µs, total count: 10000, test value: 209.48277540220764
fib via `for`! => 60.185µs, total count: 10000, test value: 209.48277540220764
fib via `while`! => 17.95µs, total count: 10000, test value: 209.48277540220764
fib via `loop` generator! => 55.928µs, total count: 10000, test value: 209.48277540220764
fib via `for` generator! => 116.994µs, total count: 10000, test value: 209.48277540220764
fib via `while` generator! => 48.887µs, total count: 10000, test value: 209.48277540220764
```
- Release on playground:
```
fib via `loop`! => 60.817µs, total count: 10000, test value: 209.48277540220764
fib via `for`! => 67.948µs, total count: 10000, test value: 209.48277540220764
fib via `while`! => 28.903µs, total count: 10000, test value: 209.48277540220764
fib via `loop` generator! => 46.099µs, total count: 10000, test value: 209.48277540220764
fib via `for` generator! => 158.023µs, total count: 10000, test value: 209.48277540220764
fib via `while` generator! => 108.421µs, total count: 10000, test value: 209.48277540220764
```
`for` and `loop` are slower for some reason. Also, I know that generators aren't stable yet, but they already work **very fast** and are very convenient to use for some purposes. However, this behaviour looks strange: `for` in generators is **up to 3 times** less performant than `loop` and `while true` (which is an antipattern).
Is this my fault, or is something wrong with how loops are compiled? I'm quite worried... Thanks!
[Playground](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=b4c2be76169f0f6dc03871055ada362b) | I-slow,C-enhancement,T-compiler,A-coroutines | low | Critical |
535,016,185 | opencv | When creating VisualStudio project with AVX512 baseline, AVX2 seems to override the AVX512 option | I'm trying to compile OpenCV with AVX512_SKX baseline with VisualStudio 2019 generator.
After generating the project and inspecting the build options, I notice that under
ConfigurationProperties > C/C++ > Code Generation > Enable Enhanced Instruction Set
/arch:AVX2 is set
and under
ConfigurationProperties > C/C++ > All Options > Additional Options
/arch:AVX512
The first option seems to override the second one, however, because when looking at
ConfigurationProperties > C/C++ > Code Generation > Command Line
only /arch:AVX2 makes it into the command line | category: build/install,incomplete,platform: win32 | low | Minor |
535,031,042 | rust | Don't evaluate promoteds for each monomorphization if it does not depend on generic parameters | In the following code snippet
```rust
fn foo<T>() -> &'static i32 {
&(5 + 6)
}
```
we evaluate the `&(5 + 6)` for every monomorphization of `foo`, even though we could just do this once. We should find a way to get rid of this duplicate evaluation. Maybe we can already do this at the time promoteds are created (const qualif + `promote.rs`) by checking `promoted_mir.needs_substs()` and if it's false, make the `ty::Const` that refers to it not have any substs itself. `const_eval` will then automatically memoize all the evaluations of this promoted.
Alternatively const prop could attempt to evaluate the promoted and if it fails, assume it has generics and leave it to monomorphization. This would require more CPU time than the const qualif version, since we'd do a lot of prospective evaluations that may not succeed. | C-enhancement,I-compiletime,T-compiler,A-const-eval | low | Major |
535,039,681 | kubernetes | Test coverage of volume relabeling is lacking | **What happened**:
SELinux volume relabeling regressed in 1.16/1.17 with no test failures. See https://github.com/kubernetes/kubernetes/issues/83679
**What you expected to happen**:
Test failures would have prevented the regression. Currently, we apparently only have manual test guarantees that this functions correctly.
/sig node storage
/priority critical-urgent | kind/bug,area/test,priority/backlog,sig/storage,sig/node,help wanted,lifecycle/frozen,needs-triage | medium | Critical |
535,061,614 | pytorch | Categorical.sample too slow | ## 🐛 Bug
Categorical.sample(shape) calls ```torch.multinomial(probas, 1, True)``` _shape_ times.
This is slow when we want to get many samples from a large number of classes.
## To Reproduce
Steps to reproduce the behavior:
```py
import torch
sampling = torch.distributions.categorical.Categorical(probs=torch.rand((10_000_000)))
%timeit sampling.sample((100,))
```
Outputs : ```3.16 s ± 117 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)```
## Expected behavior
Shouldn't be slower than a single call to torch.multinomial
```py
probas = torch.rand(10_000_000)
%timeit torch.multinomial(probas, 100, True)
```
which outputs : ```51.7 ms ± 1.31 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)```
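A possible workaround sketch for the flat (non-batched) probability case shown above: draw all samples in a single `torch.multinomial` call and reshape, instead of one call per sample. The helper name is made up for illustration.
```python
import torch

def fast_categorical_sample(probs, sample_shape):
    # Only covers 1-D (non-batched) probability vectors.
    n = int(torch.Size(sample_shape).numel())
    flat = torch.multinomial(probs, n, replacement=True)
    return flat.reshape(sample_shape)

probas = torch.rand(10_000_000)
samples = fast_categorical_sample(probas, (100,))
```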
## Environment
PyTorch version: 1.3.1
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: Ubuntu 18.04.1 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: version 3.10.2
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.0.130
GPU models and configuration:
GPU 0: Quadro GP100
GPU 1: Quadro GP100
Nvidia driver version: 410.79
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy==1.16.4
[pip] numpysane==0.17
[pip] torch==1.3.1
[conda] blas 1.0 mkl
[conda] faiss-gpu 1.6.0 py37h1a5d453_0 pytorch
[conda] mkl 2019.4 intel_243 intel
[conda] mkl_fft 1.0.12 py37ha843d7b_0
[conda] mkl_random 1.0.2 py37hd81dba3_0
[conda] pytorch 1.3.1 py3.7_cuda10.0.130_cudnn7.6.3_0 pytorch
cc @vincentqb @fritzo @neerajprad @alicanb @vishwakftw | module: distributions,triaged | low | Critical |
535,213,527 | go | doc: filename conventions | > If I have simply missed where this is documented, please let me know where I should have looked, and close this out.
Many places describe how custom filename suffixes interact with the toolchain: `_test`, `_linux`, `_arm`. However I cannot find anywhere that says you should use snake case for multi-word file names, as is suggested by the standard library: e.g. `time/zoneinfo_read.go`/`os/removeall_at.go` not `time/zoneinfoRead.go`/`os/removeallAt.go` or `time/zoneinfo-read.go`/`os/removeall-at.go`.
This may seem self evident to experienced users, but I have seen many people get confused by the fact that variables use `CamelCase := ""`, while files use `snake_case.go`. | Documentation,NeedsDecision | medium | Critical |
535,219,915 | pytorch | Remove `.data` | Even though it is not documented, many users still use it. And it leads to many bugs in user code.
So we should remove it completely to prevent this.
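For context, a small sketch of the kind of silent bug `.data` causes and the usual replacement, `.detach()` (illustrative; exact behaviour and error wording may differ between versions):
```python
import torch

a = torch.ones(3, requires_grad=True)
b = a.exp()              # exp() saves its output for the backward pass

# `.data` shares storage but its in-place changes are invisible to autograd,
# so the saved output is corrupted and the gradient comes out silently wrong.
b.data.add_(1)
b.sum().backward()

a2 = torch.ones(3, requires_grad=True)
b2 = a2.exp()
# `.detach()` also shares storage, but the mutation bumps the shared version
# counter, so the same mistake is caught at backward time instead.
b2.detach().add_(1)
# b2.sum().backward()    # raises a "modified by an inplace operation" error
```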
The expected steps are:
- [ ] Add a new api to make a shallow copy that does not share version? Or a cat + unbind function that does not share version counter if that is enough.
- [ ] Remove the use of .data in all our internal code
  - [ ] `benchmarks/`
  - [ ] `docs/source/scripts/build_activation_images.py` and `docs/source/notes/extending.rst`
  - [ ] `test/`
  - [ ] `torch/autograd/` Blocked by https://github.com/pytorch/pytorch/pull/30258#issuecomment-558344600 (the ones that were not blocked were done in #31479)
  - [ ] `torch/cuda/` This use requires the new api mentioned above.
  - [x] `torch/jit` #31480
  - [ ] `torch/nn` Simple ones done in #31481 and #31482
  - [ ] `torch/optim`
  - [ ] `torch/utils/tensorboard/`
  - [ ] `torch/tensor.py`
  - [x] *ignored* `caffe2/`
- [ ] Add warning to every call to `.data`
- [ ] Actually remove `.data` ?
cc @ezyang @SsnL @albanD @zou3519 @gqchen | module: autograd,triaged,enhancement,better-engineering,actionable | medium | Critical |
535,335,371 | rust | Missing TryInto from usize to float | Currently there is no trait bound that will satisfy this very simple algorithm:
```rust
use std::convert::TryFrom;
use std::ops::{Add, Div};

fn average<T>(list: &Vec<T>) -> T
where T: Div<Output=T> + Add<Output=T> + Default + TryFrom<usize>
{
let mut total = Default::default();
if list.len() == 0 {
total
}
else {
for val in list.iter() {
total = total + *val;
}
total / TryFrom::try_from(list.len()).unwrap()
}
}
```
| T-libs-api,C-feature-request | low | Minor |
535,341,781 | flutter | libEGL.so & libGLESv2.so missing in Linux exe.unstripped directory | On Linux, we should copy `libEGL.so` and `libGLESv2.so` from `out/xxx/` to `out/xxx/exe.unstripped` in order to properly debug the binaries inside the `exe.unstripped` directory (e.g., using gdb). Otherwise, Linux will load the system libraries instead of our SwiftShader GL implementation. | team,engine,platform-linux,P2,team-engine,triaged-engine | low | Critical |
535,363,329 | rust | rustc requires 7GB of memory for clean build | (This is a followup to https://users.rust-lang.org/t/why-does-rustc-require-7gb-of-memory-for-a-medium-size-crate/35506.)
I have a rust project (https://github.com/samuela/rustybox as of 37c0a1d) that requires ~7GB of memory to build. Why does rustc require so much memory? Are there any particular rust features that lead to this kind of outsized memory usage? Using so much memory seems buggy/unusual to me, and it was suggested on the forum that I open a bug report.
```
$ cargo clean
$ /usr/bin/time -v cargo build --jobs 1
...
Finished dev [unoptimized + debuginfo] target(s) in 3m 17s
Command being timed: "cargo build --jobs 1"
User time (seconds): 156.60
System time (seconds): 17.12
Percent of CPU this job got: 87%
Elapsed (wall clock) time (h:mm:ss or m:ss): 3:17.98
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 7053216
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 27
Minor (reclaiming a frame) page faults: 768571
Voluntary context switches: 38426
Involuntary context switches: 304785
Swaps: 0
File system inputs: 334648
File system outputs: 632816
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
``` | C-enhancement,A-borrow-checker,T-compiler,I-compilemem | low | Critical |
535,376,729 | pytorch | The inference speed of torch compiled manually is slower than torch built from official binaries? | I compiled torch (v1.4.0, cudatoolkit=10.2, cudnn=7.6.5) from source with the command "python setup.py install". I then load a ShuffleNet V2 0.5 model with the compiled library (batch size 8), and the measured time is 0.0083. But when I load the same model with the torch (v1.3.1, cudatoolkit=10.0, cudnn=7.6.4) built from the official binaries via conda, the measured time is 0.0075. Why is the official build faster?
My GPU is an RTX 2080 Ti.
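As a methodological aside (not implying this was missed): CUDA calls are asynchronous, so comparing builds needs warm-up and explicit synchronization. A minimal timing sketch, where the torchvision ShuffleNet V2 0.5 model and the input shape are stand-ins rather than the reporter's exact setup:
```python
import time
import torch
import torchvision

model = torchvision.models.shufflenet_v2_x0_5().cuda().eval()
x = torch.randn(8, 3, 224, 224, device="cuda")

with torch.no_grad():
    for _ in range(10):            # warm-up (cuDNN autotuning, lazy init)
        model(x)
    torch.cuda.synchronize()       # make sure all queued work is done
    start = time.time()
    for _ in range(100):
        model(x)
    torch.cuda.synchronize()       # wait for the GPU before stopping the clock
    print((time.time() - start) / 100, "s per batch")
```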
Relevant config:
```
No OpenMP library needs to be linked against
-- Found CUDA: /usr/local/cuda-10.2 (found version "10.2")
-- Caffe2: CUDA detected: 10.2
-- Caffe2: CUDA nvcc is: /usr/local/cuda-10.2/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda-10.2
-- Caffe2: Header version is: 10.2
-- Found cuDNN: v7.6.5 (include: /usr/local/cuda-10.2/include, library: /usr/local/cuda-10.2/lib64/libcudnn.so)
-- Autodetected CUDA architecture(s): 7.5 7.5 7.5
-- Added CUDA NVCC flags for: -gencode;arch=compute_75,code=sm_75
-- Autodetected CUDA architecture(s): 7.5 7.5 7.5
-- Could NOT find CUB (missing: CUB_INCLUDE_DIR)
-- Found CUDA: /usr/local/cuda-10.2 (found suitable version "10.2", minimum required is "7.0")
-- CUDA detected: 10.2
-- Could NOT find NCCL (missing: NCCL_INCLUDE_DIR NCCL_LIBRARY)
```
[cmakelog.log](https://github.com/pytorch/pytorch/files/3942238/cmakelog.log)
cc @ezyang @VitalyFedyunin @ngimel @mruberry | needs reproduction,module: binaries,module: performance,triaged | low | Major |
535,380,926 | godot | [empty] instead of default value in properties inspector for exported Resource | **Godot version:** 3.0
**OS/device including version:** Any
**Issue description:** Value is "[empty]" for an exported resource with a default value
**Steps to reproduce:**
Create some custom resource class. For example MyResource
Use it in exported variables. Example:
```
export var int_value : int = 12
export var my_resource_value : Resource = MyResource.new()
```
Check inspector:
Actual result:
int_value = 12
my_resource_value = [empty]
Expected result:
int_value = 12
my_resource_value = MyResource with exported values of this resource
**Minimal reproduction project:**
[ExportResource.zip](https://github.com/godotengine/godot/files/3942038/ExportResource.zip)
| discussion,topic:editor | low | Major |
535,384,922 | pytorch | GPU version of minimal example for libtorch fails with "no kernel image is available..." | ## 🐛 Bug
The following GPU version of the minimal example for `libtorch` crashes on `torch::relu` with the error "CUDA error: no kernel image is available for execution on the device".
#include <torch/torch.h>
#include <iostream>
int main() {
torch::Device device(torch::kCUDA);
torch::Tensor tensor = torch::rand({2, 3}).to(device);
tensor = torch::relu(tensor);
std::cout << tensor << std::endl;
}
## To Reproduce
Contents of `CMakeLists.txt`:
cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(example-app)
find_package(Torch REQUIRED)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}")
add_executable(example-app example-app.cpp)
target_link_libraries(example-app "${TORCH_LIBRARIES}")
set_property(TARGET example-app PROPERTY CXX_STANDARD 14)
Steps to reproduce:
mkdir build
cd build
cmake -DCMAKE_PREFIX_PATH=/absolute/path/to/libtorch ..
make
./example-app
Error message:
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: no kernel image is available for execution on the device (launch_kernel at /pytorch/aten/src/ATen/native/cuda/Loops.cuh:102)
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x6a (0x7fa2edaacc4a in /home/dbergh/sdk/libtorch/lib/libc10.so)
frame #1: void at::native::gpu_kernel_impl<__nv_hdl_wrapper_t<false, false, __nv_dl_tag<void (*)(at::TensorIterator&, float, float), &(void at::native::threshold_kernel_impl<float>(at::TensorIterator&, float, float)), 1u>, float (float, float), float, float> >(at::TensorIterator&, __nv_hdl_wrapper_t<false, false, __nv_dl_tag<void (*)(at::TensorIterator&, float, float), &(void at::native::threshold_kernel_impl<float>(at::TensorIterator&, float, float)), 1u>, float (float, float), float, float> const&) + 0xca6 (0x7fa2f0604d56 in /home/dbergh/sdk/libtorch/lib/libtorch.so)
frame #2: void at::native::gpu_kernel<__nv_hdl_wrapper_t<false, false, __nv_dl_tag<void (*)(at::TensorIterator&, float, float), &(void at::native::threshold_kernel_impl<float>(at::TensorIterator&, float, float)), 1u>, float (float, float), float, float> >(at::TensorIterator&, __nv_hdl_wrapper_t<false, false, __nv_dl_tag<void (*)(at::TensorIterator&, float, float), &(void at::native::threshold_kernel_impl<float>(at::TensorIterator&, float, float)), 1u>, float (float, float), float, float> const&) + 0x17b (0x7fa2f0605b2b in /home/dbergh/sdk/libtorch/lib/libtorch.so)
frame #3: void at::native::gpu_kernel_with_scalars<__nv_hdl_wrapper_t<false, false, __nv_dl_tag<void (*)(at::TensorIterator&, float, float), &(void at::native::threshold_kernel_impl<float>(at::TensorIterator&, float, float)), 1u>, float (float, float), float, float> >(at::TensorIterator&, __nv_hdl_wrapper_t<false, false, __nv_dl_tag<void (*)(at::TensorIterator&, float, float), &(void at::native::threshold_kernel_impl<float>(at::TensorIterator&, float, float)), 1u>, float (float, float), float, float> const&) + 0x51b (0x7fa2f061b65b in /home/dbergh/sdk/libtorch/lib/libtorch.so)
frame #4: <unknown function> + 0x28f8fde (0x7fa2f05bffde in /home/dbergh/sdk/libtorch/lib/libtorch.so)
frame #5: <unknown function> + 0x28f9d54 (0x7fa2f05c0d54 in /home/dbergh/sdk/libtorch/lib/libtorch.so)
frame #6: <unknown function> + 0x3add796 (0x7fa2f17a4796 in /home/dbergh/sdk/libtorch/lib/libtorch.so)
frame #7: <unknown function> + 0x3ad1aa8 (0x7fa2f1798aa8 in /home/dbergh/sdk/libtorch/lib/libtorch.so)
frame #8: at::native::threshold(at::Tensor const&, c10::Scalar, c10::Scalar) + 0x49 (0x7fa2f1798e49 in /home/dbergh/sdk/libtorch/lib/libtorch.so)
frame #9: <unknown function> + 0x414ab21 (0x7fa2f1e11b21 in /home/dbergh/sdk/libtorch/lib/libtorch.so)
frame #10: <unknown function> + 0x4013979 (0x7fa2f1cda979 in /home/dbergh/sdk/libtorch/lib/libtorch.so)
frame #11: at::native::relu(at::Tensor const&) + 0x2b0 (0x7fa2f179ab50 in /home/dbergh/sdk/libtorch/lib/libtorch.so)
frame #12: <unknown function> + 0x642fe45 (0x7fa2f40f6e45 in /home/dbergh/sdk/libtorch/lib/libtorch.so)
frame #13: <unknown function> + 0x4012601 (0x7fa2f1cd9601 in /home/dbergh/sdk/libtorch/lib/libtorch.so)
frame #14: <unknown function> + 0x2a063a6 (0x7fa2f06cd3a6 in /home/dbergh/sdk/libtorch/lib/libtorch.so)
frame #15: <unknown function> + 0x5afd3c2 (0x7fa2f37c43c2 in /home/dbergh/sdk/libtorch/lib/libtorch.so)
frame #16: <unknown function> + 0x4012601 (0x7fa2f1cd9601 in /home/dbergh/sdk/libtorch/lib/libtorch.so)
frame #17: at::Tensor c10::KernelFunction::callUnboxed<at::Tensor, at::Tensor const&>(at::Tensor const&) const + 0x97 (0x55cc0be33c6f in ./example-app)
frame #18: c10::impl::OperatorEntry::callUnboxed<at::Tensor, at::Tensor const&>(c10::TensorTypeId, at::Tensor const&) const::{lambda(c10::DispatchTable const&)#1}::operator()(c10::DispatchTable const&) const + 0x61 (0x55cc0be32479 in ./example-app)
frame #19: std::result_of<c10::impl::OperatorEntry::callUnboxed<at::Tensor, at::Tensor const&>(c10::TensorTypeId, at::Tensor const&) const::{lambda(c10::DispatchTable const&)#1} (c10::DispatchTable const&)>::type c10::LeftRight<c10::DispatchTable>::read<c10::impl::OperatorEntry::callUnboxed<at::Tensor, at::Tensor const&>(c10::TensorTypeId, at::Tensor const&) const::{lambda(c10::DispatchTable const&)#1}>(c10::impl::OperatorEntry::callUnboxed<at::Tensor, at::Tensor const&>(c10::TensorTypeId, at::Tensor const&) const::{lambda(c10::DispatchTable const&)#1}&&) const + 0x125 (0x55cc0be33f29 in ./example-app)
frame #20: at::Tensor c10::impl::OperatorEntry::callUnboxed<at::Tensor, at::Tensor const&>(c10::TensorTypeId, at::Tensor const&) const + 0x53 (0x55cc0be324eb in ./example-app)
frame #21: at::Tensor c10::Dispatcher::callUnboxed<at::Tensor, at::Tensor const&>(c10::OperatorHandle const&, c10::TensorTypeId, at::Tensor const&) const + 0x6e (0x55cc0be2fcd2 in ./example-app)
frame #22: <unknown function> + 0x1e22e (0x55cc0be2822e in ./example-app)
frame #23: main + 0xc9 (0x55cc0be28398 in ./example-app)
frame #24: __libc_start_main + 0xe7 (0x7fa2ecc84b97 in /lib/x86_64-linux-gnu/libc.so.6)
frame #25: _start + 0x2a (0x55cc0be26eda in ./example-app)
Aborted (core dumped)
## Expected behavior
Obvious
## Environment
- PyTorch Version: 1.3.1
- OS (e.g., Linux): Linux (Ubuntu 18.04)
- How you installed PyTorch: https://download.pytorch.org/libtorch/cu101/libtorch-cxx11-abi-shared-with-deps-1.3.1.zip
- CUDA/cuDNN version: 10.1.243/7.6.5.32-1
- GPU models and configuration: GeForce GTX 780 (arch 3.5)
## Additional context
Output from `nvidia-smi`
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.33.01 Driver Version: 440.33.01 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 780 On | 00000000:01:00.0 N/A | N/A |
| 37% 41C P0 N/A / N/A | 488MiB / 3016MiB | N/A Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 Not Supported |
+-----------------------------------------------------------------------------+
Output from cmake
cmake -DCMAKE_PREFIX_PATH=/home/dbergh/sdk/libtorch ..
-- The C compiler identification is GNU 7.4.0
-- The CXX compiler identification is GNU 7.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Found CUDA: /usr/local/cuda (found version "10.1")
-- Caffe2: CUDA detected: 10.1
-- Caffe2: CUDA nvcc is: /usr/local/cuda/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda
-- Caffe2: Header version is: 10.1
-- Found CUDNN: /usr/lib/x86_64-linux-gnu/libcudnn.so
-- Found cuDNN: v7.6.5 (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libcudnn.so)
-- Autodetected CUDA architecture(s): 3.5
-- Added CUDA NVCC flags for: -gencode;arch=compute_35,code=sm_35
-- Found torch: /home/dbergh/sdk/libtorch/lib/libtorch.so
-- Configuring done
-- Generating done
-- Build files have been written to: /home/dbergh/test/build
cc @ngimel | module: cuda,triaged | low | Critical |
535,451,745 | go | proposal: crypto/subtle: constant time comparison (eq/neq) for int64 | Several well-used programs in Golang use int64 as an identifier, but do not use constant time comparison when authenticating. This could be used to leak information to an adversary (potentially). Unfortunately, `crypto/subtle` does not have a constant-time comparison algorithm for int64, which would clearly be useful to have.
I do not have a clear understanding of how things work "under the hood" in Golang, so I do not trust myself to write a proper constant time int64 comparison algorithm for `crypto/subtle`. However, I think one could be easily implemented/adapted.
@FiloSottile @rsc @agl It looks like you guys know what you're doing on `crypto/subtle`. Could you help out here? | Proposal,Proposal-Crypto | low | Minor |
535,461,841 | opencv | python: Relink `/usr/lib/x86_64-linux-gnu/libsystemd.so.0' with `/usr/lib/x86_64-linux-gnu/librt.so.1' for IFUNC symbol `clock_gettime' |
##### System information (version)
- OpenCV => 4.1.2 (Build from source)
- Operating System / Platform => Ubuntu 19.04
- Compiler =>
```console
longervision-GT72-6QE% gcc --version
gcc (Ubuntu 8.3.0-6ubuntu1) 8.3.0
Copyright (C) 2018 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
```
I failed to **import cv2** using **python**.
```
longervision-GT72-6QE% python
Python 3.7.3 (default, Oct 7 2019, 12:56:13)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
python: Relink `/usr/lib/x86_64-linux-gnu/libsystemd.so.0' with `/usr/lib/x86_64-linux-gnu/librt.so.1' for IFUNC symbol `clock_gettime'
python: Relink `/usr/lib/x86_64-linux-gnu/libudev.so.1' with `/usr/lib/x86_64-linux-gnu/librt.so.1' for IFUNC symbol `clock_gettime'
[1] 25419 segmentation fault (core dumped) python
```
Any suggestions?
Cheers
Pei
| incomplete | low | Critical |
535,479,574 | TypeScript | Add full type hover popup to VS Code commands | Several issues have been opened for this problem, but they have been closed as a duplicate of an issue which barely mentions the problem, let alone addresses it: https://github.com/microsoft/vscode/issues/64566
The closed issues are:
- https://github.com/microsoft/vscode/issues/66405
- https://github.com/microsoft/vscode/issues/76480
The problem is simple. The three dots mean you can't copy and paste definitions like you used to be able to. This is an absolutely essential feature because TypeScript isn't even close to being used everywhere, and the IntelliSense for JavaScript files inside TypeScript files is much worse than inside JavaScript files. This basically means the easiest way to create a d.ts file is to copy the type definition from the hover inside JavaScript and then paste it into a d.ts file so TypeScript files can use it. It also helps me debug overloads and unions when things are going awry.
Even just a command that would popup the hover with the full definition instead of the abbreviated definition would be incredibly useful. Currently the shortcut is `Ctrl+k, Ctrl+i` for the abbreviated definition, so I take it there is a command behind it that can be typed into the command list (`ctrl+shift+p`). So even just adding the command for people to manually add a shortcut for would be really useful.
I can work around it, but I spend hours more time per week because of it. | Suggestion,Needs Proposal | high | Critical |
535,559,137 | pytorch | Pytorch 1.3.0 on RTX cards: CUDA error: an illegal memory access was encountered | Hi, after upgrade from torch==1.2.0 to torch==1.3.0 (or 1.3.1) I'm seeing these errors in my application running on RTX cards:
**CUDA error: an illegal memory access was encountered**
Error happens randomly in different parts of code, when accessing cuda functions.
For example: image_tensor = self.tensors[image_size].cuda(non_blocking=True)
System runs fine for a while, and then starts producing errors like this one.
## To Reproduce
It's a production system with a large code base so it's hard for me to provide you with a reproduction sample.
## Environment
I have noticed this error only with pytorch==1.3.0 or pytorch==1.3.1 running on RTX 1660Ti and RTX 2080 cards. I have instances of the same application running on older hardware (GTX 1070, GTX 1080) without any problems. Also, using the older version of pytorch (1.2.0) on the RTX cards, there is no problem either.
- PyTorch Version: 1.3.0, 1.3.1
- OS: CentOS docker environment
- How you installed PyTorch: pip
- Python version: 3.6
- CUDA/cuDNN version: 10.1, 10.2
- GPU models and configuration: RTX 1660Ti, RTX 2080
cc @ngimel | module: cuda,module: memory usage,triaged | low | Critical |
535,584,954 | PowerToys | Transform (lowercase, Titlecase, UPPERCASE) Feature | # Summary of the new feature/enhancement
As there is a PowerRename feature, there should be a PowerTransform feature as well. PowerTransform should help quickly transform *any text* throughout the Windows 10 ecosystem, and especially file names, from **`lowercase`** to **`Titlecase`** or **`UPPERCASE`**.
It should be available as a Right click context menu. This is an absolutely missing feature in any Windows OS, and the third party solutions are not great.
Please leave a like if you would want to have this feature implemented. | Idea-New PowerToy | medium | Critical |
535,647,669 | flutter | Dropdown list incorrect position when on-screen keyboard closes | ## Steps to Reproduce
1. Open dialog with a text field and a dropdown button.
2. Focus the text field by tapping on it, this opens the keyboard and changes the position of the dialog.
3. Tap on the dropdown button: its list opens in the correct position, but the keyboard closes, making the dialog move downwards while the dropdown list stays where it opened.
This can also be reproduced without a dialog (in any form that scrolls), but the behaviour is much more visible with a dialog.
Screenshot of this behaviour:

Code sample:
```
import 'package:flutter/material.dart';
void main() => runApp(MyApp());
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
primarySwatch: Colors.blue,
),
home: MyHomePage(title: 'Flutter Demo Home Page'),
);
}
}
class MyHomePage extends StatefulWidget {
MyHomePage({Key key, this.title}) : super(key: key);
final String title;
@override
_MyHomePageState createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text(widget.title),
),
floatingActionButton: FloatingActionButton(
onPressed: () => showDialog(
context: context,
builder: (context) => MyDialog(),
),
tooltip: 'Increment',
child: Icon(Icons.add),
),
);
}
}
class MyDialog extends StatelessWidget {
@override
Widget build(BuildContext context) => AlertDialog(
content: Row(
children: <Widget>[
Expanded(
child: TextField(),
),
Expanded(
child: DropdownButton<String>(
items: ['1', '2', '3']
.map((i) => DropdownMenuItem<String>(
value: i,
child: Text(i),
))
.toList(),
onChanged: (_) {},
),
),
],
),
);
}
```
**Target Platform: Android**
**Target OS version/browser: tested with Android 9 and 8**
**Devices: tested with Android 9 emulator, Lenovo Tab M10 (Android 8)**
## Logs
```
[√] Flutter (Channel beta, v1.12.13+hotfix.4, on Microsoft Windows [Version 10.0.18362.476], locale lt-LT)
• Flutter version 1.12.13+hotfix.4 at C:\Users\Rokas\FlutterSDK\flutter
• Framework revision fb60324e6f (11 hours ago), 2019-12-09 15:58:15 -0800
• Engine revision ac9391978e
• Dart version 2.7.0
[√] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
• Android SDK at C:\Users\Rokas\AppData\Local\Android\sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-29, build-tools 29.0.2
• Java binary at: C:\Program Files\Android\Android Studio\jre\bin\java
• Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b03)
• All Android licenses accepted.
[√] Android Studio (version 3.5)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin version 42.0.1
• Dart plugin version 191.8593
• Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b03)
[√] VS Code (version 1.40.2)
• VS Code at C:\Users\Rokas\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.6.0
[√] Connected device (1 available)
• AOSP on IA Emulator • emulator-5554 • android-x86 • Android 9 (API 28) (emulator)
• No issues found!
```
| framework,f: material design,c: rendering,has reproducible steps,found in release: 3.3,found in release: 3.6,team-design,triaged-design | medium | Major |
535,650,507 | opencv | Robust variant of solvePnPRefineLM | Currently the LM part of the solvePnPRefineLM function minimizes the L2 loss. However, in the refinement setting - where we can assume a well-behaved initial estimate - it is desirable to use a robust loss.
This would allow dealing with outliers (points with non-Gaussian noise). In this context the Tukey loss seems appropriate. See: https://github.com/albanie/mcnRobustLoss
interacts with
- #15650
some more robust functions to choose from: https://core.ac.uk/download/pdf/55191204.pdf
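For reference, a small NumPy sketch (not OpenCV API) of the Tukey biweight function and how its weights would drive one IRLS step; the tuning constant c ≈ 4.685 and the lack of residual scale normalization are simplifying assumptions:
```python
import numpy as np

def tukey_weights(residuals, c=4.685):
    # Full weight near zero residual, smoothly decaying to zero
    # for residuals beyond c (those points are treated as outliers).
    u = np.abs(residuals) / c
    w = (1.0 - u**2) ** 2
    w[u > 1.0] = 0.0
    return w

def irls_step(A, b, x):
    # One iteratively-reweighted least-squares step for a linear model A x ~ b:
    # solve the weighted normal equations (A^T W A) x = A^T W b.
    w = tukey_weights(b - A @ x)
    Aw = A * w[:, None]
    return np.linalg.solve(A.T @ Aw, A.T @ (w * b))
```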
requires extension of [LMSolver](https://github.com/opencv/opencv/blob/a0041125c67dcbbd7483dee0d364d5f5e65e1366/modules/calib3d/include/opencv2/calib3d.hpp#L604) for IRLS
- weighted iteration (HZ A6)
- the actual robust weight function to use (flag) | feature,category: calib3d | low | Minor |
535,667,122 | opencv | OpenEXR loading not making use of multi-threaded IO | The OpenEXR library supports multi-threaded file IO via Imf::setGlobalThreadCount(). This currently has no effect on loading, however, since the loader code loops over the image and reads the data line by line (forcing it to be single-threaded). | feature,priority: low,category: imgcodecs,category: 3rdparty | low | Minor |
535,671,209 | vscode | Cannot input with Chrome Extension Google Input Tool | **monaco-editor version:** 0.18.1
**Browser:** Chrome / Edge β
**OS:** Windows
**Steps or JS usage snippet reproducing the issue:**
<!-- Do you have a question? Please ask it on https://stackoverflow.com/questions/tagged/monaco-editor -->
Cannot input with [Chrome Extension Google Input Tool](https://chrome.google.com/webstore/detail/google-input-tools/mclkkofklkfljcocdinagocijmpgbhab).
Alphabetic characters can be entered.
Japanese, Chinese, etc. cannot be entered.
To reproduce: install the extension and try to type in Japanese or Chinese.

| feature-request,editor-input,web,chromium | low | Minor |
535,682,733 | go | cmd/compile: minimize stack occupation before function calls | In go 1.13 variables on the stack are kept ~alive~ allocated until the function returns. This is normally harmless, but can lead to surprising results: e.g. in the following example with a buf size of 1<<15 the process crashes because we reach the maximum stack size, but with a size of 1<<16 or more the process does not crash (as the slice is not allocated on the stack).
```go
package main
func main() {
rec(0)
}
func rec(n int) (m int) {
buf := make([]byte, 1<<15)
foo(buf)
// buf is dead now
if n >= 1<<16 {
return n
}
return rec(n+1)
}
//go:noinline
func foo([]byte) {}
```
This is obviously a contrived example, but it would help if the set of ~live~ allocated variables were pruned as stack variables die, or if that is too expensive, at least immediately before performing a function call (especially if recursive).
Such an optimization would not only help in case of recursive functions that allocate a lot on the stack, but would rather help minimize time spent in morestack (https://github.com/golang/go/issues/18138) by reducing stack growth on any call tree that does not contain only live variables (as a further example, in API servers it is somewhat common to have handlers wrapped in middleware, each middleware being often used to execute some logic before and/or after the wrapped handler is executed. In the case of logic executed only or mostly before the wrapped handler, it is likely that most variables in the stack frame of the middleware are dead during the call to the wrapped handler)
A more compact stack, besides requiring less time to be spent in morestack, would also help dcache efficiency. | NeedsInvestigation,compiler/runtime | low | Critical |
535,709,347 | create-react-app | Provide a way to configure the `mainFields` for webpack | The application build relies on the default webpack resolution of entry points for imported libraries, which are `["browser", "module", "main"]` for a web application.
Per spec (I believe) the `module` entry contains the ESM5 version of the library, i.e. tree shaken but transpiled to ES5.
Some libraries, in particular those built with `ng-packagr` offer additional entry points, such as `ESM2015` that contains a tree shaken version compiled to ES2015.
I suggest that the react scripts allow to override the desired priority order in which entry points are resolved, such that the application author can decide to consume a smaller version of a library (because it has been compiled to ES2015 instead to ES5).
In webpack this is accessible via the https://webpack.js.org/configuration/resolve/#resolvemainfields field, but this configuration is not exposed by react scripts.
| issue: proposal | low | Major |
535,720,918 | flutter | Allow custom handling of Notch tap on iOS | This issue is about the handling of the tap on the Notch and how it's currently not customizable from the outside.
## Use case
For normal apps, the current behavior works excellent in a matter that you don't have to do anything to get this working out of the box.
As soon as you want to have a bit more control over your `ScrollController` or have more complicated scroll setups, there is no way of intercepting the click and making it work for the advanced structure.
## Proposal
The `Scaffold` could receive an argument called `onStatusBarTap`, which when provided would override the default behavior found in [scaffold.dart](https://github.com/flutter/flutter/blob/f30b7f4db93ee747cd727df747941a28ead25ff5/packages/flutter/lib/src/material/scaffold.dart#L2133)
This would allow the developer to customize the behavior to his liking and needs.
I would be willing to implement this if there is no blocker. | c: new feature,platform-ios,framework,f: scrolling,P3,team-ios,triaged-ios | low | Major |
535,739,686 | flutter | unresolved supertypes: androidx.lifecycle.LifecycleOwner | ## Steps to Reproduce
1. create new flutter project in vscode
2. add to new repo
3. git clone using another computer (and another user)
4. flutter build apk
**Target Platform:** flutter
<details>
<summary>flutter run --verbose</summary>
```bash
[ +46 ms] executing: [C:\flutter_files\SDK\flutter\] git log -n 1 --pretty=format:%H
[ +143 ms] Exit code 0 from: git log -n 1 --pretty=format:%H
[ +5 ms] 68587a0916366e9512a78df22c44163d041dd5f3
[ +39 ms] executing: [C:\flutter_files\SDK\flutter\] git describe --match v*.*.* --first-parent --long --tags
[ +141 ms] Exit code 0 from: git describe --match v*.*.* --first-parent --long --tags
[ +1 ms] v1.9.1+hotfix.6-0-g68587a091
[ +21 ms] executing: [C:\flutter_files\SDK\flutter\] git rev-parse --abbrev-ref --symbolic @{u}
[ +197 ms] Exit code 0 from: git rev-parse --abbrev-ref --symbolic @{u}
[ +1 ms] origin/stable
[ +1 ms] executing: [C:\flutter_files\SDK\flutter\] git ls-remote --get-url origin
[ +105 ms] Exit code 0 from: git ls-remote --get-url origin
[ +1 ms] https://github.com/flutter/flutter.git
[ +211 ms] executing: [C:\flutter_files\SDK\flutter\] git rev-parse --abbrev-ref HEAD
[ +109 ms] Exit code 0 from: git rev-parse --abbrev-ref HEAD
[ +2 ms] stable
[ +307 ms] executing: H:\ANDROID_HOME\platform-tools\adb.exe devices -l
[ +50 ms] Exit code 0 from: H:\ANDROID_HOME\platform-tools\adb.exe devices -l
[ +4 ms] List of devices attached
[ +19 ms] No supported devices connected.
[ +120 ms] "flutter run" took 598ms.
#0 throwToolExit (package:flutter_tools/src/base/common.dart:28:3)
#1 RunCommand.validateCommand (package:flutter_tools/src/commands/run.dart:281:7)
<asynchronous suspension>
#2 FlutterCommand.verifyThenRunCommand (package:flutter_tools/src/runner/flutter_command.dart:465:11)
<asynchronous suspension>
#3 FlutterCommand.run.<anonymous closure> (package:flutter_tools/src/runner/flutter_command.dart:407:33)
<asynchronous suspension>
#4 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:154:29)
<asynchronous suspension>
#5 _rootRun (dart:async/zone.dart:1124:13)
#6 _CustomZone.run (dart:async/zone.dart:1021:19)
#7 _runZoned (dart:async/zone.dart:1516:10)
#8 runZoned (dart:async/zone.dart:1463:12)
#9 AppContext.run (package:flutter_tools/src/base/context.dart:153:18)
<asynchronous suspension>
#10 FlutterCommand.run (package:flutter_tools/src/runner/flutter_command.dart:397:20)
#11 CommandRunner.runCommand (package:args/command_runner.dart:197:27)
<asynchronous suspension>
#12 FlutterCommandRunner.runCommand.<anonymous closure> (package:flutter_tools/src/runner/flutter_command_runner.dart:402:21)
<asynchronous suspension>
#13 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:154:29)
<asynchronous suspension>
#14 _rootRun (dart:async/zone.dart:1124:13)
#15 _CustomZone.run (dart:async/zone.dart:1021:19)
#16 _runZoned (dart:async/zone.dart:1516:10)
#17 runZoned (dart:async/zone.dart:1463:12)
#18 AppContext.run (package:flutter_tools/src/base/context.dart:153:18)
<asynchronous suspension>
#19 FlutterCommandRunner.runCommand (package:flutter_tools/src/runner/flutter_command_runner.dart:356:19)
<asynchronous suspension>
#20 CommandRunner.run.<anonymous closure> (package:args/command_runner.dart:112:25)
#21 new Future.sync (dart:async/future.dart:224:31)
#22 CommandRunner.run (package:args/command_runner.dart:112:14)
#23 FlutterCommandRunner.run (package:flutter_tools/src/runner/flutter_command_runner.dart:242:18)
#24 run.<anonymous closure>.<anonymous closure> (package:flutter_tools/runner.dart:63:22)
<asynchronous suspension>
#25 _rootRun (dart:async/zone.dart:1124:13)
#26 _CustomZone.run (dart:async/zone.dart:1021:19)
#27 _runZoned (dart:async/zone.dart:1516:10)
#28 runZoned (dart:async/zone.dart:1500:12)
#29 run.<anonymous closure> (package:flutter_tools/runner.dart:61:18)
<asynchronous suspension>
#30 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:154:29)
<asynchronous suspension>
#31 _rootRun (dart:async/zone.dart:1124:13)
#32 _CustomZone.run (dart:async/zone.dart:1021:19)
#33 _runZoned (dart:async/zone.dart:1516:10)
#34 runZoned (dart:async/zone.dart:1463:12)
#35 AppContext.run (package:flutter_tools/src/base/context.dart:153:18)
<asynchronous suspension>
#36 runInContext (package:flutter_tools/src/context_runner.dart:59:24)
<asynchronous suspension>
#37 run (package:flutter_tools/runner.dart:50:10)
#38 main (package:flutter_tools/executable.dart:65:9)
<asynchronous suspension>
#39 main (file:///C:/flutter_files/SDK/flutter/packages/flutter_tools/bin/flutter_tools.dart:8:3)
#40 _startIsolate.<anonymous closure> (dart:isolate-patch/isolate_patch.dart:303:32)
#41 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:172:12)
```
</details>
**flutter analyze :**
```bash
flutter analyze
Analyzing code_001...
No issues found! (ran in 8.2s)
```
**flutter doctor -v :**
```bash
[√] Flutter (Channel stable, v1.9.1+hotfix.6, on Microsoft Windows [Version 10.0.18362.116], locale en-US)
• Flutter version 1.9.1+hotfix.6 at C:\flutter_files\SDK\flutter
• Framework revision 68587a0916 (3 months ago), 2019-09-13 19:46:58 -0700
• Engine revision b863200c37
• Dart version 2.5.0
[√] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
• Android SDK at H:\ANDROID_HOME
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-29, build-tools 29.0.2
• ANDROID_HOME = H:\ANDROID_HOME
• ANDROID_SDK_ROOT = H:\ANDROID_HOME
• Java binary at: C:\Program Files\Java\jre1.8.0_221\bin\java
• Java version Java(TM) SE Runtime Environment (build 1.8.0_221-b11)
• All Android licenses accepted.
[!] Android Studio (not installed)
• Android Studio not found; download from https://developer.android.com/studio/index.html
(or visit https://flutter.dev/setup/#android-setup for detailed instructions).
[√] VS Code (version 1.40.2)
• VS Code at C:\Users\MH\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.7.0
[!] Connected device
! No devices available
! Doctor found issues in 2 categories.
```
<details>
<summary>flutter build apk </summary>
```bash
You are building a fat APK that includes binaries for android-arm, android-arm64.
If you are deploying the app to the Play Store, it's recommended to use app bundles or split the APK to reduce the APK size.
To generate an app bundle, run:
flutter build appbundle --target-platform android-arm,android-arm64
Learn more on: https://developer.android.com/guide/app-bundle
To split the APKs per ABI, run:
flutter build apk --target-platform android-arm,android-arm64 --split-per-abi
Learn more on: https://developer.android.com/studio/build/configure-apk-splits#configure-abi-split
Initializing gradle... 2.5s
Resolving dependencies... 13.0s
Running Gradle task 'assembleRelease'...
e: Supertypes of the following classes cannot be resolved. Please make sure you have the required dependencies in the classpath:
class io.flutter.embedding.android.FlutterActivity, unresolved supertypes: androidx.lifecycle.LifecycleOwner
class io.flutter.embedding.engine.FlutterEngine, unresolved supertypes: androidx.lifecycle.LifecycleOwner
e: C:\flutter_files\projects\code_001\android\app\src\main\kotlin\com\example\taj\MainActivity.kt: (3, 28): Unresolved reference: NonNull
e: C:\flutter_files\projects\code_001\android\app\src\main\kotlin\com\example\taj\MainActivity.kt: (9, 42): Unresolved reference: NonNull
e: C:\flutter_files\projects\code_001\android\app\src\main\kotlin\com\example\taj\MainActivity.kt: (10, 48): Type mismatch: inferred type is FlutterEngine but PluginRegistry! was expected
Running Gradle task 'assembleRelease'...
Running Gradle task 'assembleRelease'... Done 146.8s (!)
*******************************************************************************************
The Gradle failure may have been because of AndroidX incompatibilities in this Flutter app.
See https://goo.gl/CP92wY for more information on the problem and how to fix it.
*******************************************************************************************
Gradle task assembleRelease failed with exit code 1
```
</details> | c: crash,platform-android,tool,t: gradle,dependency: android,a: build,P2,team-android,triaged-android | low | Critical |
535,742,513 | rust | rust-gdb fails at passing command line arguments to the executable | This in particular happens when using servo's mach run utility.
I'm on `rustc 1.41.0-nightly (1bd30ce2a 2019-11-15)`
Output when using plain gdb:
```
$ ./mach -v run --debug --debugger gdb -d "http://nolp.dhl.de/nextt-online-public/set_identcodes.do?lang=de"
Reading symbols from /home/marc/Dokumente/02_GIT/servo/target/debug/servo...fertig.
(gdb) r
Starting program: /home/marc/Dokumente/02_GIT/servo/target/debug/servo http://nolp.dhl.de/nextt-online-public/set_identcodes.do\?lang=de
```
Output when using rust-gdb:
```
$ ./mach -v run --debug --debugger rust-gdb -d "http://nolp.dhl.de/nextt-online-public/set_identcodes.do?lang=de"
GNU gdb (Ubuntu 8.1-0ubuntu3.2) 8.1.0.20180409-git
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /home/marc/Dokumente/02_GIT/servo/target/debug/servo...fertig.
/home/marc/Dokumente/02_GIT/servo/http://nolp.dhl.de/nextt-online-public/set_identcodes.do?lang=de: Datei oder Verzeichnis nicht gefunden.
(gdb) q
```
For some reason gdb seems to treat the URL argument as an additional executable or shared library to load.
Edit: looking at https://github.com/rust-lang/rust/blob/master/src/etc/rust-gdb#L16, would adding `--args` be enough there? | T-dev-tools,C-bug | low | Critical |
535,757,259 | go | cmd/go: go generate should be more resilient to source changes | ```
go version devel +3c0fbeea7d Tue Nov 5 05:22:07 2019 +0000 linux/amd64
```
When `go generate` runs, it's quite likely to add and remove source files.
This can result in errors which are inappropriate.
Here's a [testscript command](https://pkg.go.dev/github.com/rogpeppe/go-internal/cmd/testscript) script that demonstrates the issue. It's a pared-down version of some real code:
```
go generate ./...
cp names.new names
go generate ./...
-- names --
a
b
c
-- names.new --
a
b
d
-- gen.go --
// +build ignore

package main
import (
"io/ioutil"
"log"
"os"
"path/filepath"
"strings"
)
func main() {
os.RemoveAll("generated")
testData, err := ioutil.ReadFile("names")
if err != nil {
log.Fatal(err)
}
for _, name := range strings.Fields(string(testData)) {
if err := os.MkdirAll(filepath.Join("generated", name), 0777); err != nil {
log.Fatal(err)
}
err := ioutil.WriteFile(filepath.Join("generated", name, "f.go"), []byte(`
// Code generated for this test. DO NOT EDIT.
package `+name+`
func F() {}
`), 0666)
if err != nil {
log.Fatal(err)
}
}
}
-- go.mod --
module m
go 1.14
-- main.go --
package main
//go:generate go run gen.go
func main() {
}
```
When I run it, I see this:
```
% testscript goissue.txtar
> go generate ./...
> cp names.new names
> go generate ./...
[stderr]
generate: open $WORK/generated/c/f.go: no such file or directory
[exit status 1]
FAIL: /tmp/testscript089422577/0/script.txt:3: unexpected go command failure
error running tst.txtar in /tmp/testscript089422577/0
```
That is, the second time that `go generate` runs, it fails.
It seems that `go generate` is evaluating all the files when it runs, and not being resilient when they change (in this case a file changed name).
Perhaps it should be silent in cases like this.
There may also be other cases where `go generate` could be quieter about errors (for example when there's a package name mismatch in files in a package, which can be caused when a package is renamed and `go generate` is called again to regenerate the new files).
One possible approach might be for `go generate` to ignore any errors on files which contain the standard "DO NOT EDIT" comment. | NeedsInvestigation | low | Critical |
535,769,314 | godot | Spotlight Has An Un-wanted "Darkening Effect" | **Godot version:**
3.1.1 / 3.1.2
**OS/device including version:**
Windows 10 Version 1909
**Issue description:**
Spotlight darkens a radius around itself
**Steps to reproduce:**
Create a Spotlight
**Minimal reproduction project:**
Just that
Example:
https://www.reddit.com/r/godot/comments/e8c7sf/spotlight_creating_an_unwanted_darkening_effect/ | bug,topic:rendering | low | Major |
535,774,412 | vscode | Support for RTL languages (such as Arabic / Hebrew / Persian etc.) | My name is Tomer Mahlin. I lead a development team in IBM named Bidi Development Lab. We are specializing (for more than 20 years) in development of support for languages with bidirectional scripts (or "bidi lang." for short) .
We recently ran a sniff assessment on Monaco capabilities with respect to bidi lang. display. We believe there are several functional areas which require improvements (please see more details below).
My team can work on the necessary modifications and suggest them via a separate pull request, assuming the community is interested in addressing the requirements detailed below.
**Plain text editing**
1. There should be a parameter through which it is possible to communicate the default text direction for content being authored in a specific instance of the editor (see the sketch after this list). This is similar to the parameter used in CKEditor: contentsLangDirection ( http://docs.ckeditor.com/#!/api/CKEDITOR.config-cfg-contentsLangDirection ).
Possible values should be:
- ltr (left-to-right),
- rtl (right-to-left),
- contextual (or auto as used in HTML)
2. In addition to that, there should be an explicit way for the end user to interactively change the text direction of the selected text (or of the paragraph in which the cursor is positioned, in case the current selection has zero length). This can be achieved via:
- GUI buttons - similar to all rich text editors (i.e. http://ckeditor.com/addon/bidi)
AND / OR (in case there is no toolbars for any new buttons)
- Keyboard shortcuts (i.e. in Notepad it is Ctrl - <Left|Right-Shift>)
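A purely hypothetical TypeScript sketch of what such a parameter could look like (none of these names exist in the Monaco/VS Code editor options today; they are modelled on CKEditor's contentsLangDirection):
```typescript
// Hypothetical only: proposed default-text-direction option. None of these
// names exist in Monaco today; they only illustrate the proposal above.
type ContentDirection = 'ltr' | 'rtl' | 'contextual';

interface ProposedTextDirectionOptions {
  /** Default direction for content authored in this editor instance. */
  contentDirection?: ContentDirection;
}

const exampleOptions: ProposedTextDirectionOptions = {
  contentDirection: 'contextual', // let the first strong character decide
};
```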
**Programming lang. editing**
1. As opposed to plain text, a programming language has a well-defined syntax. Part of this syntax is visualized via the color scheme used for coloring the different elements of the language (i.e. comments vs. variables etc.). It is critical to enforce the visual appearance associated with the syntax regardless of the language used for the different elements (i.e. comments, variables etc.). If this is not done, it becomes virtually impossible to work with the code when bidi text is used. Simple English example:
a = b + c; // hello world
If bidi characters are used instead you would expect to see:
A = B + C; // DLROW OLLEH
Instead at the moment you see:
DLROW OLLEH // ;C+B=A
The more complex the example, the less intuitive the display becomes.
2. A special case is that of comments and/or constants. Those usually do include bidi characters (or at least much more frequently than variable names, for example). It is thus preferable to display text in those contexts using the natural text direction for bidi languages (which is RTL). We can't store text direction information with the text (a source code file is still a plain text file which can't include any meta information about the text such as font size, color, direction etc.). Consequently we should be able to make a smart choice while displaying the text (relying just on the text itself). The most straightforward approach is to enforce auto (aka contextual or first strong) text direction for each paragraph included in comments.
For example, currently the display of sample text is as follows:
res = var1 + var2; // SI EMAN YM tomer !!!
If we enforce auto text direction on the comment we will see:
res = var1 + var2; // !!! tomer SI EMAN YM
That is, the comment text will appear with actual RTL direction, which is the natural one for bidi languages.
Display of text with natural text direction makes it considerably more readable and thus should greatly enhance user experience for bidi users.
**Relevant requests**
At some point support for bidi lang. was requested in vscode via https://github.com/Microsoft/vscode/issues/4994
| feature-request,editor-RTL | high | Critical |
535,797,721 | TypeScript | Allow folding of JSDoc comments | **VsCode version:** 1.40.2 (Oct 2019, 2)
**OS:** Windows 10 v1709 os build 16299.847
**TypeScript Version:** 3.6.3
**Search Terms:** vs code fold "app.get" outlining jsdoc comment expressjs folding
**Code**
```ts
/**
* @api {get} /ProjectsList/:projectId/ Get project
*
* @apiDescription Get a project by projectId.
*
* @apiName Get User Projects
* @apiGroup Project
* @apiVersion 1.0.0
*
*/
app.get('/:retailUnit/:language/ProjectsList/:projectId/', retailUnit, (req, res) => {
const projectListController = new ProjectListController();
});
```
**Repro:**
* Create file foobar.js, copy paste the code from above.
* Try fold all.
**Expected behavior:** Comment and method fold
**Actual behavior:** method fold, comment does not
**Expected behavior:** I can fold the jsdoc comment above the app.get function
**Actual behavior:** I cannot fold the jsdoc comment, outlining not available.
**Related Issues:** Some were (perhaps incorrectly) filed against VS Code; these are the ones I found. The first one is **very** similar to this one, but for arrow functions instead:
* https://github.com/microsoft/TypeScript/issues/26014
* https://github.com/Microsoft/vscode/issues/55249
* https://github.com/microsoft/vscode/issues/80186
_This is my first report here, I hope it is ok._ :-) | Suggestion,Help Wanted,Effort: Moderate,Domain: Outlining | low | Major |
535,799,690 | rust | Missed optimization on only used once allocation | godbolt link: https://rust.godbolt.org/z/d_VvSo
Here are two snippets that I expect to produce the same assembly:
```rust
pub fn foo(s: &str) -> bool {
s != "FOOFOOLONGLONG"
}
```
```rust
pub fn foo(s: &str) -> bool {
s.to_string() != "FOOFOOLONGLONG"
}
```
EDIT: For the [C version](https://rust.godbolt.org/z/eTcTrchGq), gcc and clang also fail to optimize this case.
| A-LLVM,I-slow,C-enhancement,T-compiler,I-heavy,C-optimization | low | Minor |
535,804,474 | node | move things away from process.binding('util') | `process.binding('util')` has been some kind of hotch-potch of "bindings the JS internals need but don't fall in the scope of a particular binding namespace". There is no compatibility guarantee about what's in this object and changes to it are completely at the whims of Node.js maintainers.
However `process.binding('util')` has been accessible in the user land and is abused by users (refs: https://github.com/nodejs/node/pull/29947#issuecomment-562996397 - considering how volatile it is, when touching this binding I had always been wondering if anyone actually abuses this, until I saw the issue). I propose we try creating a new, internal namespace (that is, a namespace that's not in [the whitelist here](https://github.com/nodejs/node/blob/28efa4fe953c0da6be25cd10e708b84deaada15f/lib/internal/bootstrap/loaders.js#L69)) and try moving everything away to that internal namespace to reduce the abuse.
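For context, the user-land abuse usually boils down to something like the sketch below (TypeScript; the `any` cast and the key dump are purely illustrative, since nothing exposed by this binding is a stable API):
```typescript
// Illustrative only: user-land code reaching into the internal 'util' binding.
// Nothing returned here is covered by any compatibility guarantee, which is
// exactly why such usage is a problem.
const utilBinding = (process as any).binding('util');

// Whatever helpers happen to live here today can be renamed or removed in any
// release without notice.
console.log(Object.keys(utilBinding));
```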
On a side note, I think there should be a comment in `node_utils.cc` warning that any additions to the binding is at the risk of user-land abuse. | lib / src | low | Minor |
535,809,486 | pytorch | MathJax too small in Firefox | MathJax in your docs is by default rendered tiny (too small to read) in Firefox...
Example screenshot from: https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html
<img width="862" alt="image" src="https://user-images.githubusercontent.com/464192/70542681-6d732480-1b69-11ea-8daf-e3fa3aa62822.png">
My guess is some CSS problem... | module: docs,triaged | low | Minor |
535,852,508 | terminal | Rolling background image like Windows Themes | # Description of the new feature/enhancement
What about a set of rolling background images instead of a single one that Windows Terminal currently supports? Looking at the same background all the time gets boring.
I like [Windows Themes](https://www.microsoft.com/en-us/store/collections/windowsthemes?SilentAuth=1&wa=wsignin1.0) w/ the desktop backgrounds slideshow feature. I download a theme and I get a set of rolling background images on my desktop. What about that same concept w/ Windows Terminal? Thoughts? 💭
### Screenshot reference of Windows 10:

# Proposed technical implementation details (optional)
add a `"backgroundImages" : [ "file1.jpg", "file2.jpg" ]` property to `profiles`. | Help Wanted,Area-UserInterface,Area-Extensibility,Area-Settings,Product-Terminal,Issue-Task | low | Major |
535,864,020 | flutter | Flutter process internals prone to flagging from antivirus on Windows | ## Steps to Reproduce
1. Extract flutter release zip to a folder on a Windows machine with Carbon Black defense software installed
2. make sure flutter\bin is in the path
3. run "flutter doctor"
4. Nothing happens
5. Check Windows Application Event Logs
6. You will notice "CbDefense" detected the start of cmd.exe and killed the process
After looking at the dart.exe code online, I see that in many places there is a "runInShell" flag passed to process-related methods (in classes like ProcessImpl). When this flag is set, cmd.exe is used to execute these statements on Windows. This is something that many anti-virus and intrusion detection applications look for on Windows. Is there a reason we need to use the native shell for these operations on Windows? Would Win32 process methods not be sufficient?
## Logs
After running "flutter doctor" I see this entry in the Windows application even log:
Source: CbDefense
Text: Information: The application <my flutter folder>\flutter\bin\cache\dart-sdk\bin\dart.exe invoked the application C:\Windows\System32\cmd.exe. The operation was blocked and the application terminated by Confer.
No output results from "flutter doctor -v" as Carbon Black kills the process prematurely
| tool,dependency: dart,platform-windows,P2,team-tool,triaged-tool | low | Major |
535,895,540 | flutter | FlexibleSpaceBar title doesn't remain vertically center when custom fontSize is given | ## Steps to Reproduce
1. Set fontSize parameter to the text given as title to the `flexibleSpace` parameter in `SliverAppBar`.
2. After collapsing the title isn't vertically centered.
## Cause
The title uses a fixed `bottomPadding` of `16.0` [here](https://github.com/flutter/flutter/blob/master/packages/flutter/lib/src/material/flexible_space_bar.dart#L378) and a fixed `bottom***` alignment [here](https://github.com/flutter/flutter/blob/master/packages/flutter/lib/src/material/flexible_space_bar.dart#L248-L257). This results in increasing / decreasing top space in the app bar if a fontSize less than / greater than 20 (the default) is used, respectively.
Also, changing the [alignment](https://github.com/flutter/flutter/blob/master/packages/flutter/lib/src/material/flexible_space_bar.dart#L390) property to anything other than `bottom***` doesn't work as expected.
## Flutter Doctor
```
[√] Flutter (Channel stable, v1.9.1+hotfix.6, on Microsoft Windows [Version 10.0.18363.476], locale en-IN)
• Flutter version 1.9.1+hotfix.6 at D:\Flutter
• Framework revision 68587a0916 (3 months ago), 2019-09-13 19:46:58 -0700
• Engine revision b863200c37
• Dart version 2.5.0
[√] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
• Android SDK at C:\Users\rashedmyt\AppData\Local\Android\sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-29, build-tools 29.0.2
• Java binary at:
C:\Users\rashedmyt\AppData\Local\JetBrains\Toolbox\apps\AndroidStudio\ch-0\191.6010548\jre\bin\java
• Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b03)
• All Android licenses accepted.
[√] Android Studio (version 3.5)
• Android Studio at C:\Users\rashedmyt\AppData\Local\JetBrains\Toolbox\apps\AndroidStudio\ch-0\191.6010548
• Flutter plugin version 40.2.2
• Dart plugin version 191.8580
• Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b03)
[√] VS Code (version 1.40.2)
• VS Code at C:\Users\rashedmyt\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.7.0
[√] Connected device (1 available)
• Android SDK built for x86 • emulator-5554 • android-x86 • Android 10 (API 29) (emulator)
• No issues found!
```
| framework,f: material design,has reproducible steps,found in release: 2.10,found in release: 2.13,team-design,triaged-design | low | Major |
535,896,035 | go | cmd/go: cgo #line directives cause non-reproducibility | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.13.5 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/jayconrod/Library/Caches/go-build"
GOENV="/Users/jayconrod/Library/Application Support/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GONOPROXY=""
GONOSUMDB=""
GOOS="darwin"
GOPATH="/Users/jayconrod/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/opt/go/installed"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/opt/go/installed/pkg/tool/darwin_amd64"
GCCGO="gccgo"
AR="ar"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD="/Users/jayconrod/Code/test/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/rq/x0692kqj6ml8cvrhcqh5bswc008xj1/T/go-build209284358=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
Build an executable that depends on one or more cgo packages using `-trimpath`.
This can be reproduced by adding some cgo code to the program in `build_trimpath.txt`, the test case that covers `-trimpath`.
```
diff --git a/src/cmd/go/testdata/script/build_trimpath.txt b/src/cmd/go/testdata/script/build_trimpath.txt
index cfab80743e..363a01f156 100644
--- a/src/cmd/go/testdata/script/build_trimpath.txt
+++ b/src/cmd/go/testdata/script/build_trimpath.txt
@@ -100,6 +100,9 @@ import (
"strings"
)
+// const int x = 42;
+import "C"
+
func main() {
exe := os.Args[1]
data, err := ioutil.ReadFile(exe)
```
Also observe that `strings $(go env GOROOT)/pkg/darwin_amd64/os/user.a` prints file names in random temporary directories under `/private/var`, indicating these paths weren't completely scrubbed from standard library packages that include cgo.
Below is a longer test case. It builds a small cgo program, then lists file names in the DWARF data in the executable. The working directory should not be listed (but it is).
```
go build -trimpath -o hello .
go run dwarf-list-files.go hello
! stdout $WORK
-- go.mod --
module example.com/go-reproducibility
go 1.13
-- hello.go --
package main
// const int x = 42;
import "C"
import "fmt"
func main() {
fmt.Println(C.x)
}
-- dwarf-list-files.go --
// +build tool

package main
import (
"debug/dwarf"
"debug/macho"
"fmt"
"io"
"log"
"os"
"sort"
)
func main() {
if len(os.Args) != 2 {
fmt.Fprintf(os.Stderr, "usage: dwarf-list-files file\n")
os.Exit(1)
}
files, err := run(os.Args[1])
if err != nil {
log.Fatal(err)
}
for _, file := range files {
fmt.Println(file)
}
}
func run(path string) ([]string, error) {
machoFile, err := macho.Open(path)
if err != nil {
return nil, err
}
defer machoFile.Close()
dwarfData, err := machoFile.DWARF()
if err != nil {
return nil, err
}
dwarfReader := dwarfData.Reader()
files := make(map[string]bool)
for {
e, err := dwarfReader.Next()
if err != nil {
return nil, err
}
if e == nil {
break
}
lr, err := dwarfData.LineReader(e)
if err != nil {
return nil, err
}
if lr == nil {
continue
}
var le dwarf.LineEntry
for {
if err := lr.Next(&le); err != nil {
if err == io.EOF {
break
}
return nil, err
}
files[le.File.Name] = true
}
}
sortedFiles := make([]string, 0, len(files))
for file := range files {
sortedFiles = append(sortedFiles, file)
}
sort.Strings(sortedFiles)
return sortedFiles, nil
}
```
### What did you expect to see?
When the `-trimpath` flag is used, the package root directories (including GOROOT, GOPATH roots, and module roots) should not appear in binaries produced by the `go` command or in any intermediate file.
### What did you see instead?
.c files created by cgo include `#line` directives that point back to the original source files. These .c files are intermediate files, so with `-trimpath`, they shouldn't include absolute paths. The paths should just be derived from the import path, the module path, and the version.
When we compile these files, we use `-fdebug-prefix-map` to strip the temporary working directory, but we don't use that flag to remove the source directory. If `#line` directives are correctly stripped, we don't need this for intermediate `.c` files, but we still need it for source `.c` files. Without this flag, absolute paths to the source directory end up in the DWARF data in the compiled `.o` files.
I don't know much about the DWARF format, but I think paths are compressed in the linked binaries we're producing. `strings` doesn't show any inappropriate source paths, but `dwarf-list-files.go` in the test case above does. Also, it seems like the linker has special handling for `GOROOT_FINAL` for binaries in `GOROOT`, but I'm not sure how that works. | NeedsFix | low | Critical |
535,906,829 | godot | Can't compile bindings MinGW or MSVC | **Godot version:** 3.1.2.stable.official
**OS/device including version:** Microsoft Windows 10 Pro 10.0.18362 compilation 18362
**Issue description:**
Well, this issue needs a little workaround first (two issues in one?), because _scons_ completely ignores the **use_mingw** flag; the patched version is included in the issue.
After that, the step that generates the bindings with _MinGW_ stops compiling at some point.
**Steps to reproduce:**
Before the patch I've run the godot command to generate the API file:
```
godot --gdnative-generate-json-api api.json
```
(Window flashes and api.json file appears).
After the patch, running:
```
scons platform=windows use_mingw=yes generate_bindings=yes use_custom_api_file=yes custom_api_file=api.json
```
Will start compiling as expected, until encounters this line:
```shell
g++ -o src\gen\CameraFeed.o -c -g -O3 -std=c++0x -Wwrite-strings -I. -Igodot_headers -Iinclude -Iinclude\gen -Iinclude\core src\gen\CameraFeed.cpp
```
Which yields:
```
src\gen\CameraFeed.cpp: In member function 'void godot::CameraFeed::_set_YCbCr_imgs(godot::Ref<godot::Image>, godot::Ref<godot::Image>)':
src\gen\CameraFeed.cpp:53:112: error: '___godot_icall_void_Object_Object' was not declared in this scope
___godot_icall_void_Object_Object(___mb.mb__set_YCbCr_imgs, (const Object *) this, y_img.ptr(), cbcr_img.ptr());
^
scons: *** [src\gen\CameraFeed.o] Error 1
scons: building terminated because of errors.
```
**Minimal reproduction project:**
While not a project, here is the patched version of SConstruct file: [https://pastebin.com/raw/MYducSmZ](https://pastebin.com/raw/MYducSmZ)
The rest of the project is a blank godot project, not involved at all in the process and godot-cpp is cloned following the tutorial.
Maybe I'm missing something, sorry in that case
Thanks!
| bug,topic:buildsystem | low | Critical |
535,910,890 | flutter | Supporting the Turkmen language for Cupertino/Material widgets | Creating this issue to track interest for adding Turkmen as a supported language for Material/Cupertino widgets.
Related to an [attempt to add Cupertino Turkmen translations](https://github.com/flutter/flutter/pull/44763) and an [attempt to add Material Turkmen translations](https://github.com/flutter/flutter/pull/44765).
cc/ @kea2288
| c: new feature,framework,f: material design,a: internationalization,f: cupertino,P3,workaround available,team-design,triaged-design | low | Minor |
535,937,536 | TypeScript | Add more telemetry for ATA | Currently ATA just fires an event with the list of packages to be installed, the success/failure and the typings installer version. Some more useful telemetry to gather would be per package success/failure, time for each package install, and the registry used (github vs. npm).
The event currently goes from typings installer to tsserver to whoever is listening to tsserver events (the editor). So adding the extra telemetry to the typings installer event seems like the best place (the other option is to calculate it in the tsserver based on the current event payload from the typing installer). | Suggestion,Experience Enhancement | low | Critical |
535,954,822 | flutter | make it easier to use chained `Tween`s in `Hero.createRectTween` | ## Use case
When creating custom animations with a `Hero` widget, you will most likely get to use `createRectTween`; unfortunately it requires a `Tween` as the return value, not an `Animatable`. This makes it difficult to use chains (you have to create your own Tween and do the chaining by hand).
This example couldn't work with current implementation:
```
Hero(
createRectTween: (Rect begin, Rect end) {
return RectTween(begin: begin, end: end).chain(
CurveTween(
curve: Curves.easeInOut,
),
);
},
tag: tag,
child: child,
);
```
## Proposal
~~Replace `Tween` => `Animatable` in Hero implementation.~~
~~The only blocker for this change is usage of `.begin` and `.end` which could be safely replaced with `.transform(0)` and `.transform(1)` respectively, since `Tween` implementation highly depends on these values as begin and end.~~
~~I can create pull request with these changes if my proposal seems reasonable.~~
Initial proposal was a bit confusing since for an `Animatable` it's not guaranteed that 1.0 is the end and 0.0 is the beginning. And it's not good to allow any `Animatable` to be returned from `createRectTween`. Discussion in this PR #47657
Simply overriding `Tween.chain` to return a `Tween` is not an option, since we should be able to pass an `Animatable`, and the return type wouldn't be `Tween` then. It's not possible to override a non-generic method with a generic one, so it's not possible to solve it like this either.
I have a few possible solutions:
- Add `Tween.chainTween` which would behave just like `Animatable.chain` but with `Tween` as arg/return type.
- Implement `TweenChain` class (like `TweenSequence`)
Would really appreciate any other suggestions!
I can create pull request with implementation of solution which seems most appropriate. | c: new feature,framework,a: animation,P3,team-framework,triaged-framework | low | Major |
535,969,793 | flutter | Example app and test code sharing between endorsed implementations | Right now we're copy-pasting the example apps into the endorsed implementations. We should have a standard way to share example app code. | team,package,team-ecosystem,P2,triaged-ecosystem | low | Major |
535,988,020 | flutter | [web] re-enable modifier state for lock keys | Sending the modifier state with lock keys was causing an issue for Flutter For Web flutter/flutter#45250
The ideal solution is having another API on Flutter Framework specifically for sending the locks.
After this [PR](https://github.com/flutter/engine/pull/14165), the web engine will stop sending the modifier state for lock keys. We should re-enable them once the framework side is ready. | framework,platform-web,P3,team-web,triaged-web | low | Minor |
536,004,034 | rust | #[link_section] is only usable from the root crate | Consider the following MNWE (minimal non-working example :smile:):
```bash
~ $ mkdir mwe
~ $ cd mwe
~/mwe $ cargo new a
Created binary (application) `a` package
~/mwe $ cargo new --lib b
Created library `b` package
~/mwe $ cat <<EOF > b/src/lib.rs
> #[used]
> #[link_section = ".mysection"]
> static X: u32 = 0;
> EOF
~/mwe $ echo 'b = { path="../b" }' >> a/Cargo.toml
~/mwe $ cat <<EOF > a/src/main.rs
> extern crate b; // link the crate
>
> fn main() {}
> EOF
~/mwe $ cd a
~/mwe $ cat <<EOF > link.x
> ENTRY(main)
>
> SECTIONS {
> .mysection : ALIGN(8) {
> _START_ = .;
> KEEP(*(.mysection))
> _END_ = .;
> }
> }
>
> ASSERT(_END_ != _START_, "Section empty");
> EOF
~/mwe $ mkdir .cargo
~/mwe $ cat <<EOF > .cargo/config
> [build]
> rustflags = ["-C", "link-arg=-Tlink.x"]
> EOF
~/mwe/a $ cargo build
Compiling a v0.1.0 (/home/jfrimmel/mwe/a)
error: linking with `cc` failed: exit code: 1
|
= note: "cc" [long output omitted (libb-<hash>.rlib included)] -T../link.x"
= note: /usr/bin/ld: Section empty
```
The setup contains two crates: the binary `a` and its dependency `b`. A custom linker script is used in order to put the contents of `.mysection` in a section with the same name. The assert statement makes the error visible: the static variable `X` should be moved into that section, making the section 4 bytes in size, but the section is empty.
If the `static X` is defined in crate `a`, everything works fine:
```bash
~/mwe/a $ cat <<EOF > src/main.rs
#[used]
#[link_section = ".mysection"]
static X: u32 = 0;
fn main() {}
EOF
~/mwe/a $ cargo build
Compiling a v0.1.0 (/home/jfrimmel/mwe/a)
Finished dev [unoptimized + debuginfo] target(s) in 0.19s
```
So the behavior is different when the `#[used] #[link_section] static` is placed in a dependency or in the currently build crate.
Am I doing something wrong (in which case it should be documented more clearly) or is there really a bug in the linker? | A-linkage,A-attributes,E-needs-test,T-compiler,C-bug | low | Critical |
536,017,003 | material-ui | Support for bottom sheets from Material Design specs | - [x] I have searched the [issues](https://github.com/mui-org/material-ui/issues) of this repository and believe that this is not a duplicate.
## Summary 💡
It would be great to have an implementation of the Bottom sheet component from the 2019 Material Design specs (https://material.io/components/sheets-bottom/#). I didn't see an issue for it already so I figured I'd create this one to get the conversation started. I'd be potentially interested in helping to create that implementation, though not in the immediate future.
The specs break the component down into three types, which each have pretty distinct behaviour:
1. Standard bottom sheets
* These are permanently anchored to the bottom of the app and are elevated above the other content
* There are no guidelines on how these should look, other than that they should always be 100% width and anchored to the very bottom of the page
2. Modal bottom sheets
* These are sheets that appear after some user interaction and cover part of the screen, with a modal backdrop covering the rest of it
* Once open, they can potentially opened in "full screen" by swiping up
* They can be closed by swiping down or tapping on the backdrop
* It's recommended to have some UI like a close button to close the fullscreen mode
* Again, there are very few guidelines on how these should look
3. Expanding bottom sheets
* These are a very specific UI component that appears at the bottom right of the screen and, when clicked, open a fullscreen modal sheet
I don't think it's necessary to provide a component for Standard bottom sheets, since this can already be achieved very easily; however, it would be useful to have components for Modal bottom sheets and Expanding bottom sheets. Since they are quite different, I think they should be separate components. I am more interested in the Modal bottom sheets, which I have seen used much more in real applications and which I think have more use cases, so I think that work should be considered a higher priority.
Since the specs don't give much direction on the visuals of the modal sheet, I think that should be completely left to the user, and only the behaviour should be implemented. Perhaps, with the exception of an optional bar element (not sure what to call it, see screenshots).
P.S. maybe we could call the bar element a "handle bar" or "handle" ?
<details>
<summary>Examples 🌈</summary>
Standard bottom sheets are permanently anchored to the bottom of the app, and cannot be resized or moved:

Modal bottom sheets can be opened over top of other content:

Modal bottom sheets should only open to full height if their content fills < 50% of the screen's height:

Modal bottom sheets can be opened to full screen mode:

Expanding bottom sheets should follow the spec exactly:

Examples of modal bottom sheets "in the wild":
Google Tasks:


Note the "bar" thing on the small version and how it disappears on the medium and fullscreen versions:



Google Keep:


OneVersion:
Note the "bar" thing on the small version, and how it morphs to the "collapse" icon in the fullscreen version:


</details>
## Motivation 🔦
I think this is a really useful component which I've been seeing more and more in modern apps and which I find delightful to use. It is a great way to make options available to mobile users since it keeps things accessible to the thumbs at the bottom of the screen. It also helps to keep context with the activity you were in before opening the sheet. I would like to use something like this in one of my projects in the future.
I think the most complex part of implementing a bottom sheet is the swiping interactions and making them nice and fluid. If they don't work well, it loses much of its magic and delightfulness. This is why I think it should be part of mui core, so that it is easy for developers to add a bottom sheet with solid behaviour with little effort. | new feature,design: material | low | Major |
536,041,218 | flutter | TextEditingController.text docs miss a reason | The `TextEditingController.text` docs say it shouldn't be updated in a `build` method, but they don't explain why; they just say the listeners will be called. Is this for performance reasons, so the listeners won't lock the `build` method? Or are there any other issues as well?
I think the docs should include a better explanation of the reason and its side effects, so devs can better evaluate the impact of their code and their decisions, and fix bugs more easily. | a: text input,framework,d: api docs,a: first hour,has reproducible steps,P2,found in release: 3.3,found in release: 3.6,team-framework,triaged-framework | low | Critical |
536,046,620 | rust | Test exit code should distinguish between failed test and bad args | Today, if I run a test binary with bad args like `--foo-bar`, it prints an error message and exits with code 101. Similarly, if I run a test binary correctly but one of the tests fails, it logs the test failure and exits with 101.
It would be nice if we could distinguish between these cases using only the exit code.
Hopefully nothing depends on either of those cases being exactly 101 today, but if we change the code to distinguish between them, we should probably document it and keep it stable from then on. | C-enhancement,T-compiler,T-dev-tools,A-libtest | low | Critical |
536,075,096 | pytorch | Common lookup of generic types across full script and mobile parsers | ## 🚀 Feature
Common lookup of generic types across full script and mobile parsers
## Motivation
To maximize the code that can be shared between the two parsers.
## Pitch
As in the title. Build the lookup of generic types (especially Dict, Tuple, List and other composing types) and make it shared by both of the parsers.
## Alternatives
## Additional context
In the future, the type part of the script parsers can be pulled out so that it can be used directly by both the full script and mobile interpreters. | feature,triaged | low | Minor |
536,079,710 | go | x/tools/gopls: normalize feature sets of unimported completions and organize imports | ### What version of Go are you using (`go version`)?
<pre>
go version go1.13.4 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
Can't test on latest release, will update when brew makes go1.13.5 available
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/steelphase/Library/Caches/go-build"
GOENV="/Users/steelphase/Library/Application Support/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GONOPROXY="go.company.org"
GONOSUMDB="go.company.org"
GOOS="darwin"
GOPATH="/Users/steelphase/Code/Go"
GOPRIVATE="go.company.org"
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/Cellar/go/1.13.4/libexec"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.13.4/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
AR="ar"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/0b/zk7t542s1sj_31mwpwyk3cy81gy8z8/T/go-build062639987=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
Pasted in some code that referenced a dependency that needed to be imported.
### What did you expect to see?
import to be autocompleted for direct dependency `go.company.org/org/lib/v4/errors`
### What did you see instead?
import autocompleted to transitive dependency `github.com/pkg/errors`
### Additional Details
This is the `go.mod` of the module I'm working in
```mod
module go.company.org/org/thing
go 1.13
require (
go.uber.org/zap v1.13.0
go.company.org/org/lib/v4 v4.1.0
)
```
output of `go mod why`
```bash
steelphase@macbook:lib$go mod why github.com/pkg/errors
# github.com/pkg/errors
go.company.org/org/thing/cmd/thing
go.uber.org/zap
go.uber.org/zap.test
github.com/pkg/errors
``` | gopls,Tools,gopls/imports | medium | Critical |
536,081,857 | flutter | TextField should have a selectAllOnFocus | `TextField` or `TextEditingController` should have a `selectAllOnFocus` property, just like Android has, to easily make the whole text selected once the `TextField` gets in focus. | a: text input,c: new feature,framework,f: material design,c: proposal,P3,workaround available,team-text-input,triaged-text-input | low | Major |
536,146,608 | flutter | Expose a way to see which font engine ended up loading | Internal: b/292548466
customer: dream requested to be able to know what font family is being loaded by the engine.
They are unable to bundle all the fonts they intend to use with their app. They want to rely on system fonts. However, while doing development, they sometimes see odd behavior and want to know whether it is due to a missing font. AFAIK, there's no way of knowing this unless you instrument the engine and print the chosen font family (not an option internally if you are using prebuilts).
Could tracing be added to font loader so that the loaded fonts are printed? Would it be possible to go further and see which font famil(ies) were chosen given a single Text widget? | engine,a: typography,customer: dream (g3),P2,team-engine,triaged-engine | low | Minor |
536,158,373 | nvm | Cannot install node on OpenBSD | #### Operating system and version:
OpenBSD 6.6
#### `nvm debug` output:
<details>
<!-- do not delete the following blank line -->

```sh
nvm --version: v0.35.1
$SHELL: /usr/local/bin/zsh
$SHLVL: 2
${NVM_DIR}: '${HOME}/.nvm'
$PREFIX: ''
${NPM_CONFIG_PREFIX}: ''
$NVM_NODEJS_ORG_MIRROR: ''
$NVM_IOJS_ORG_MIRROR: ''
shell version: 'zsh 5.7.1 (x86_64-unknown-openbsd6.6)'
uname -a: 'OpenBSD 6.6 GENERIC.MP#510 amd64'
curl: /usr/local/bin/curl, curl 7.67.0 (x86_64-unknown-openbsd6.6) libcurl/7.67.0 LibreSSL/3.0.2 zlib/1.2.3 nghttp2/1.40.0
wget: not found
git: /usr/local/bin/git, git version 2.24.0
ls: grep:: No such file or directory
grep: grep: aliased to grep (grep), grep version 0.9
awk: /usr/bin/awk, awk: unknown option --version ignored
sed: /usr/bin/sed, sed: unknown option -- -
cut: /usr/bin/cut, cut: unknown option -- -
basename: /usr/bin/basename, basename: unknown option -- -
rm: /bin/rm, rm: unknown option -- -
mkdir: /bin/mkdir, mkdir: unknown option -- -
xargs: /usr/bin/xargs, xargs: unknown option -- -
nvm current: system
which node: /usr/local/bin/node
which iojs: iojs not found
which npm: /usr/local/bin/npm
npm config get prefix: /usr/local
npm root -g: /usr/local/lib/node_modules
```
</details>
#### `nvm ls` output:
<details>
<!-- do not delete the following blank line -->

```sh
-> system
iojs -> N/A (default)
node -> stable (-> N/A) (default)
unstable -> N/A (default)
lts/* -> lts/erbium (-> N/A)
lts/argon -> v4.9.1 (-> N/A)
lts/boron -> v6.17.1 (-> N/A)
lts/carbon -> v8.16.2 (-> N/A)
lts/dubnium -> v10.17.0 (-> N/A)
lts/erbium -> v12.13.1 (-> N/A)
```
</details>
#### How did you install `nvm`?
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.1/install.sh | bash
#### What steps did you perform?
nvm install 12
#### What happened?
tar does not recognize -J
```
Can not determine how many core(s) are available, running in single-threaded mode.
Please report an issue on GitHub to help us make nvm run faster on your computer!
Clang v3.5+ detected! CC or CXX not specified, will use Clang as C/C++ compiler!
Downloading https://nodejs.org/dist/v12.13.1/node-v12.13.1.tar.xz...
####################################################################################################################################################################### 100.0%
Computing checksum with sha256 -q
Checksums matched!
tar: unknown option -- J
usage: tar {crtux}[014578befHhjLmNOoPpqsvwXZz]
[blocking-factor | archive | replstr] [-C directory] [-I file]
[file ...]
tar {-crtux} [-014578eHhjLmNOoPpqvwXZz] [-b blocking-factor]
[-C directory] [-f archive] [-I file] [-s replstr] [file ...]
nvm: install v12.13.1 failed!
```
#### What did you expect to happen?
Successfully install node version 12
#### Is there anything in any of your profile files that modifies the `PATH`?
N/A
| OS: FreeBSD / OpenBSD,installing node | low | Critical |
536,504,972 | vscode | Disambiguate quickSuggestion from editor.action.triggerSuggest | From a CompletionItemProvider it would be useful to be able to distinguish between completions triggered by quickSuggest and those triggered by manual invocation of the command (via Ctrl+Space). This could be used if specific completions are expensive or particularly low confidence.
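For illustration, a minimal provider sketch in TypeScript (the `plaintext` language id, the `.` trigger character, and the item labels are made up) showing where such a distinction would be consumed:
```typescript
import * as vscode from 'vscode';

const provider: vscode.CompletionItemProvider = {
  provideCompletionItems(document, position, token, context) {
    if (context.triggerKind === vscode.CompletionTriggerKind.TriggerCharacter) {
      // Only fires for the registered trigger character ('.' below).
      return [new vscode.CompletionItem('cheapMemberSuggestion')];
    }
    // Today this branch is reached both for quick suggest (as-you-type) and
    // for an explicit Ctrl+Space, so expensive work cannot be limited to the
    // manual case.
    return [new vscode.CompletionItem('expensiveSuggestion')];
  },
};

vscode.languages.registerCompletionItemProvider('plaintext', provider, '.');
```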
Currently quickSuggest suggestions and ctrl-space suggestions both come through with CompletionContext.triggerKind = 0 and triggerCharacter = undefined. | suggest,under-discussion | low | Minor |
536,308,429 | ant-design | Select (type: multi/tags) change event cannot be prevented by onInputKeyDown | - [x] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate.
### Reproduction link
[](https://codesandbox.io/s/dawn-silence-s2hkq)
### Steps to reproduce
1. Open codesandbox link (https://codesandbox.io/s/dawn-silence-s2hkq)
2. Select some items in the select component
3. Press the ENTER key
You will notice that the selection changed, because the standard onChange event of the select component still triggered although it should not have.
### What is expected?
It is expected that the last selection before having pressed the ENTER key remains.
### What is actually happening?
The last item selected (or unselected) is reversed by the ENTER press because the standard behaviour of the ENTER key is to select/unselect items.
| Environment | Info |
|---|---|
| antd | 3.26.2 |
| React | 16.12 |
| System | Linux |
| Browser | All |
---
What I am trying to do is save the current selection with the ENTER key, because that was the most natural behaviour expected by the users of my test group. Therefore I used the onInputKeyDown event to call the blur event. However, the side effect is that the default behaviour of selecting an item cannot be prevented.
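For reference, a minimal TSX sketch of this kind of workaround (the options, logging, and the exact blur call are illustrative assumptions, not copied from the sandbox):
```tsx
import * as React from 'react';
import { Select } from 'antd';

// Attempted workaround: commit the current selection on ENTER by preventing
// the event and blurring the field from onInputKeyDown.
export const FruitPicker = () => (
  <Select
    mode="multiple"
    style={{ width: 300 }}
    onChange={(values: any) => console.log('changed:', values)}
    onInputKeyDown={e => {
      if (e.key === 'Enter') {
        // Neither call stops the built-in ENTER handling, so the focused
        // option is still toggled before the blur takes effect.
        e.preventDefault();
        e.stopPropagation();
        (document.activeElement as HTMLElement | null)?.blur();
      }
    }}
  >
    <Select.Option value="apple">apple</Select.Option>
    <Select.Option value="banana">banana</Select.Option>
  </Select>
);
```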
Please have a look at the sandbox.
| Inactive,improvement | low | Major |
536,319,499 | pytorch | Add torch.version.nccl | ## 🚀 Feature
Make NCCL version available at `torch.version.nccl`.
## Motivation
We have `torch.version.cuda` and should have a similar field for the NCCL version.
## Pitch
We should add it. The version is available as a compile time constant by looking at the header. Alternatively, we can pull the version dynamically through [`ncclGetVersion`](https://docs.nvidia.com/deeplearning/sdk/nccl-developer-guide/docs/api/comms.html#ncclgetversion). The latter has my preference, because it will always be correct, whereas the header approach leaves some ambiguity (e.g. you can use header of version X but load version Y).
## Additional context
Once added, this should be added to the version information scraper script as well.
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @xush6528 | oncall: distributed,module: bootcamp,triaged,enhancement,module: nccl | low | Minor |
536,363,793 | pytorch | torch.nn.Softplus threshold argument bug? | ## 🐛 Bug
I noticed a potential bug in the torch.nn.Softplus function:
when plotting `torch.nn.Softplus(beta=10, threshold=0.4)` it appears that the linear threshold starts around 0.05 instead of 0.4 as specified, leading to a discontinuity in the function.
## To Reproduce
Steps to reproduce the behavior:
```
import torch
import matplotlib.pyplot as plt
print(torch.version.__version__)
beta = 10
threshold = 0.4
softplus = torch.nn.Softplus(beta=beta, threshold=threshold)
x = torch.arange(-1, 1, 0.01, dtype=torch.float32)
y = softplus(x)
plt.plot(x,y, label="torch.nn.Softplus")
plt.scatter(threshold, softplus(torch.tensor(threshold)), label="threshold value")
plt.legend()
plt.title("torch.nn.Softplus(beta=%s, threshold=%s)"%(beta, threshold))
```
Output:
`>>> 1.2.0`

## Expected behavior
```
def expected_softplus(x, beta=1, threshold=20):
"Expected softplus function"
y=torch.zeros_like(x)
y[x<=threshold] = (1/beta)*torch.log(1+torch.exp(beta*x[x<=threshold]))
y[x>threshold] = x[x>threshold]# revert to linear outside of threshold
return y
beta = 10
threshold = 0.4
softplus = torch.nn.Softplus(beta=beta, threshold=threshold)
x = torch.arange(-1, 1, 0.01, dtype=torch.float32)
y = softplus(x)
plt.plot(x,y, label="torch.nn.Softplus")
plt.plot(x,expected_softplus(x, beta=beta, threshold=threshold), label="expected softplus function")
plt.scatter(threshold, softplus(torch.tensor(threshold)), label="threshold value")
plt.legend()
plt.title("torch.nn.Softplus(beta=%s, threshold=%s)"%(beta, threshold))
```
Output:

## Environment
Collecting environment information...
PyTorch version: 1.2.0
Is debug build: No
CUDA used to build PyTorch: None
OS: Mac OSX 10.14.6
GCC version: Could not collect
CMake version: Could not collect
Python version: 3.7
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Versions of relevant libraries:
[pip] numpy==1.16.4
[pip] numpydoc==0.9.1
[pip] torch==1.2.0
[pip] torchvision==0.4.0a0+6b959ee
[conda] blas 1.0 mkl
[conda] mkl 2019.4 233
[conda] mkl-service 2.3.0 py37hfbe908c_0
[conda] mkl_fft 1.0.14 py37h5e564d8_0
[conda] mkl_random 1.0.2 py37h27c97d8_0
[conda] pytorch 1.2.0 py3.7_0 pytorch
[conda] torchvision 0.4.0 py37_cpu pytorch | module: nn,triaged | low | Critical |
536,380,489 | create-react-app | Add documentation for using create-react-app with yarn pnp | ### Is your proposal related to a problem?
Documentation request for using create-react-app with yarn 2 pnp.
### Describe the solution you'd like
To use create-react-app with yarn 2 pnp.
Document solution for using create-react-app with yarn 2 pnp
thanks | issue: proposal | low | Major |
536,419,361 | rust | Passing -Z embed-bitcode doesn't embed bitcode for debug builds of external dependencies | I'm not 100% sure if this is a cargo bug or a rustc bug, but it looks like a rustc bug.
I tested using `rustc 1.41.0-nightly (412f43ac5 2019-11-24)` running on MacOSX Catalina.
I put my files in a small repository, which you can find here: https://github.com/davehylands/simple-lib
What I noticed is that if I build using cargo normally (i.e. without using --release) then the object files for external crates do not get bitcode embedded. If I build using cargo with --release then the object files for external crates do get bitcode embedded. In either case, cargo is being run with `RUSTFLAGS="-Z embed-bitcode"`.
You can pass `--release` to the build.sh script to get the --release behaviour, and pass no flags to get the debug behaviour. The build.sh script runs otool over the objects included in libsimple_lib.a to show whether bitcode is included or not.
If you modify the for loop to use
```bash
for obj in rand_core*.o; do
```
rather than:
```bash
for obj in *.o; do
```
then you'll get simpler output.
Normally, I use a rust toolchain that has the rust standard libraries built with bitcode, so I realize that these objects won't have bitcode using the nightly compiler.
Cargo.toml
```
[package]
name = "simple_lib"
version = "0.1.0"
authors = ["Dave Hylands <[email protected]>"]
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
rand = "0.7"
[lib]
name = "simple_lib"
# rustc cannont create dylibs for iOS.
# https://github.com/rust-lang/rust/issues/21727#issuecomment-424026535
crate-type = ["staticlib"]
```
src/lib.rs
```
use rand;
pub fn test_func2() -> String {
let x = rand::random::<u16>();
format!("Your number is {}", x)
}
```
build.sh
```
#!/bin/sh
set -e
if [ "$1" == "--release" ]; then
BUILD_TYPE="release"
CARGO_ARGS="--release"
else
BUILD_TYPE="debug"
fi
TARGET_DIR=target
rm -rf ${TARGET_DIR}
RUSTFLAGS="-Z embed-bitcode" cargo build -v ${CARGO_ARGS}
cd ${TARGET_DIR}/${BUILD_TYPE}
ar x libsimple_lib.a
for obj in *.o; do
if otool -l ${obj} | grep bitcode > /dev/null ; then
echo "${obj} has bitcode"
else
echo "${obj} DOES NOT have bitcode"
fi
done
``` | O-ios,T-compiler,C-bug | low | Critical |
536,423,958 | create-react-app | Move react-scripts/config/jest into react-dev-utils | ### Is your proposal related to a problem?
All the files in the folder `react-scripts/config/jest` are util transforms used in a `jest.config.js`.
They're really useful and very common, so much so that they're also described in the [official Jest guide](https://jestjs.io/docs/en/webpack.html).
I am building a custom jest config and would rather use those files than write my own, so I added `react-scripts` as a dependency and am importing those files from there. However, I would prefer if they were in `react-dev-utils` instead, because that's more semantically correct and I don't want to install another dependency, with a lot of other dependencies inside that I don't need, just for a couple of functions.
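For context, the current workaround looks roughly like this sketch of a `jest.config.js` (the transform file names are assumed from react-scripts 3.x and may differ between versions):
```js
// Current workaround in jest.config.js: reuse the CRA transforms directly.
module.exports = {
  transform: {
    '^.+\\.(js|jsx|ts|tsx)$': require.resolve('react-scripts/config/jest/babelTransform'),
    '^.+\\.css$': require.resolve('react-scripts/config/jest/cssTransform'),
    '^(?!.*\\.(js|jsx|ts|tsx|css|json)$)': require.resolve('react-scripts/config/jest/fileTransform'),
  },
};
```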
### Describe the solution you'd like
Move `react-scripts/config/jest` into `react-dev-utils`.
| issue: proposal,needs triage | low | Minor |
536,447,854 | rust | Casting or adding type ascription to panic!() triggers unreachable_code | ```rust
fn test() -> impl Iterator<Item = i32> {
panic!()
}
```
doesn't compile, as `()` (nor `!`) implement `Iterator<Item = i32>`.
```rust
#![feature(never_type_fallback)] // required for `panic!() as _`
fn test() -> impl Iterator<Item = i32> {
panic!() as std::iter::Empty<_>
}
```
compiles, but gives this warning:
```
warning: unreachable expression
--> src/lib.rs:3:5
|
3 | panic!() as std::iter::Empty<_>
| --------^^^^^^^^^^^^^^^^^^^^^^^
| |
| unreachable expression
| any code following this expression is unreachable
|
= note: `#[warn(unreachable_code)]` on by default
= note: this warning originates in a macro outside of the current crate (in Nightly builds, run with -Z external-macro-backtrace for more info)
```
```rust
#![feature(type_ascription)]
fn test() -> impl Iterator<Item = i32> {
panic!(): std::iter::Empty<_>
}
```
also compiles, but gives the same warning. | C-enhancement,A-lints,P-medium,T-compiler,F-never_type,requires-nightly | low | Major |
536,493,959 | TypeScript | Add a Merge utility type | ## Search Terms
merge types
## Suggestion
It would be great if you could merge two types. The closest I can get right now is a union:
```typescript
type Test1 = { id: number, code: string }
type Test2 = { id: string, code: number }
type Test3 = Test1 | Test2
// This is not allowed, because x doesn't match Test1 OR Test2
export const x: Test3 = {
id: "bob",
code: "bob"
}
```
Basically I want to be able to create the following type by merging Test1 and Test2:
```typescript
type Test3 = { id: number | string, code: number | string }
```
Perhaps there could be a utility to merge the types:
```typescript
type Test3 = Merge<Test1, Test2>
```
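For what it's worth, a user-land `Merge` along these lines can already be written with mapped and conditional types; here is a minimal sketch (colliding property types are simply unioned, matching the example above):
```typescript
type Test1 = { id: number; code: string };
type Test2 = { id: string; code: number };

// A possible user-land implementation: union the values of colliding keys.
type Merge<A, B> = {
  [K in keyof A | keyof B]:
    | (K extends keyof A ? A[K] : never)
    | (K extends keyof B ? B[K] : never);
};

// Produces { id: number | string; code: number | string }, so this compiles:
const merged: Merge<Test1, Test2> = {
  id: 'bob',
  code: 'bob',
};
```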
## Use Cases
React-Native has a few style types (TextStyle, ImageStyle, ViewStyle) and they are very useful for autocomplete and type checking in my code. The TextStyle has a property `fontSize: number`. I want to be able to redefine the TextStyle type to allow a string value for all of the properties (so that I can use my own variable system). Something like `fontSize: "$fontSizeMD"`
## Examples
```typescript
type EImageStyle = ImageStyle | Partial<Record<keyof ImageStyle, string>>
type ETextStyle = TextStyle | Partial<Record<keyof TextStyle, string>>
type EViewStyle = ViewStyle | Partial<Record<keyof ViewStyle, string>>
type EComponentStyle = EImageStyle | ETextStyle | EViewStyle
type EComponentStyles = { [P in keyof any]: EComponentStyle }
```
With the merge it would look like:
```typescript
type EImageStyle = Merge<ImageStyle, Partial<Record<keyof ImageStyle, string>>>
type ETextStyle = Merge<TextStyle, Partial<Record<keyof TextStyle, string>>>
type EViewStyle = Merge<ViewStyle, Partial<Record<keyof ViewStyle, string>>>
type EComponentStyle = EImageStyle | ETextStyle | EViewStyle
type EComponentStyles = { [P in keyof any]: EComponentStyle }
```
## Checklist
My suggestion meets these guidelines:
* [X] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [X] This wouldn't change the runtime behavior of existing JavaScript code
* [X] This could be implemented without emitting different JS based on the types of the expressions
* [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [X] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Major |
536,504,972 | flutter | Improve the indexability (SEO) of Flutter apps on the web | Latest status: https://github.com/flutter/flutter/issues/46789#issuecomment-1007835929
----
I just wanted to know whether it's SEO friendly or not, and about the status of the initial paintful load. | c: new feature,framework,customer: crowd,platform-web,c: proposal,P2,team-web,triaged-web | low | Critical |
536,509,638 | vscode | Allow to configure symbol visibility in pickers that may show symbols | It's sometimes difficult to use "Go To Symbol" to go to a specific method of a class because the namespace of the palette is polluted with variable names. Personally, I have no need to use "Go To Symbol" to navigate to variables, so just like I can hide them in the outline, I'd like to be able to hide them from the palette.
| feature-request,editor-symbols,quick-open | medium | Critical |
536,514,769 | vscode | [scss] add parameter hints | Currently when we complete a function in SCSS, it completes all parameters:

I think a better experience would be to use parameter hints and do not show the parameter names. | feature-request,css-less-scss | low | Minor |
536,515,768 | vscode | [css] hover for property values | There's completion details for property values:

But no hover for them.
| feature-request,css-less-scss | low | Minor |
536,516,412 | vscode | [scss] provide hover for scss functions | In SCSS, when hovering over a function, we should show its documentation. Currently it only shows the property name's documentation:

Depends on #86764. | feature-request,css-less-scss | low | Minor |
536,533,253 | node | Debugger doesn't pause again after stepping through blackboxed function |
* **Version**: v13.3.0 (and older versions. Works in 8.x)
* **Platform**: macOS
Have a script like
```js
// test.js
debugger;
console.log('hello there');
for (var i = 0; i < 100; i++) {
console.log('helllo there')
}
```
- Run `node --inspect-brk test.js`
- Open the developer tools against that process
- Set the following blackbox pattern: `^(?!file:\/\/\/).*`. This should blackbox any script that does not start with `file:///`, and should blackbox all internal node scripts while allowing user scripts to be debugged
- "step in" into the `console.log` statement
- All scripts called by `console.log` should be blackboxed and execution should continue to the next line
- Instead, the script runs to completion
I am not sure whether this is node-specific but I wasn't able to come up with a repro in Chrome. It worked at least in Node 8.
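As a sanity check that the pattern itself classifies scripts as intended (the script URLs below are illustrative, not taken from a real inspector session), the regex can be exercised directly:
```typescript
// The blackbox pattern from the repro steps above.
const blackboxPattern = /^(?!file:\/\/\/).*/;

const scriptUrls = [
  "file:///Users/me/test.js",             // user script -> should NOT be blackboxed
  "internal/console/constructor",         // node internal -> should be blackboxed
  "internal/per_context/primordials.js",  // node internal -> should be blackboxed
];

for (const url of scriptUrls) {
  console.log(`${url}: blackboxed=${blackboxPattern.test(url)}`);
}
// Prints blackboxed=false for the file:/// URL and blackboxed=true for the
// internal scripts, which matches the intent of the pattern; the report is
// about the debugger's stepping behavior, not about how the pattern matches.
```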
If I set up other scenarios of stepping into node code with this blackbox pattern set, sometimes it works and sometimes I see other wrong behavior - it pauses in `internal/per_context/primordials.js` even though that script is marked as "This script is blackboxed in debugger" | inspector | low | Critical |
536,538,338 | flutter | [web] exceptions on mouse move when window loses focus | On desktop chrome for Linux, running a flutter application in debug mode and then losing window focus causes all subsequent mouse moves to throw the assertion below:
Error:
```
dart_sdk.js:5312 Uncaught Error: Assertion failed: org-dartlang-sdk:///flutter_web_sdk/lib/_engine/engine/pointer_converter.dart:271:18
!state.down
is not true
at Object.throw_ [as throw] (dart_sdk.js:4021)
at Object.assertFailed (dart_sdk.js:3969)
at _engine.PointerDataConverter.new.convert (dart_sdk.js:145428)
at _engine.PointerAdapter.new.[_convertEventToPointerData] (dart_sdk.js:145120)
at dart_sdk.js:145088
at HTMLElement.<anonymous> (dart_sdk.js:144990)
```
Doctor:
```
[✓] Flutter (Channel master, v1.13.1-pre.94, on Linux, locale en_US.UTF-8)
• Flutter version 1.13.1-pre.94 at /usr/local/google/home/jonahwilliams/Documents/flutter
• Framework revision 36e599eb5d (12 hours ago), 2019-12-11 07:27:13 +0100
• Engine revision 12bf95fd49
• Dart version 2.7.0 (build 2.7.0-dev.2.1 8b8894648f)
[!] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
• Android SDK at /usr/local/google/home/jonahwilliams/Android/Sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-29, build-tools 29.0.2
• Java binary at: /opt/android-studio-with-blaze-3.5/jre/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405)
! Some Android licenses not accepted. To resolve this, run: flutter doctor --android-licenses
[✓] Chrome - develop for the web
• Chrome at google-chrome
[✓] Linux toolchain - develop for Linux desktop
• clang++ 8.0.1
• GNU Make 4.2.1
[!] Android Studio (version 3.4)
• Android Studio at /opt/android-studio-with-blaze-3.4
✗ Flutter plugin not installed; this adds Flutter specific functionality.
✗ Dart plugin not installed; this adds Dart specific functionality.
✗ Unable to find bundled Java version.
• Try updating or re-installing Android Studio.
[!] Android Studio (version 3.3)
• Android Studio at /usr/local/google/home/jonahwilliams/Documents/android-studio
✗ Flutter plugin not installed; this adds Flutter specific functionality.
✗ Dart plugin not installed; this adds Dart specific functionality.
• Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1248-b01)
[!] Android Studio (version 3.5)
• Android Studio at /opt/android-studio-with-blaze-3.5
✗ Flutter plugin not installed; this adds Flutter specific functionality.
✗ Dart plugin not installed; this adds Dart specific functionality.
• Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405)
[✓] IntelliJ IDEA Community Edition (version 2019.1)
• IntelliJ at /usr/local/google/home/jonahwilliams/Documents/idea-IC-191.7479.19
• Flutter plugin version 36.1.4
• Dart plugin version 191.7830
[!] IntelliJ IDEA Ultimate Edition (version 2018.3)
• IntelliJ at /opt/intellij-ue-2018.3
✗ Flutter plugin not installed; this adds Flutter specific functionality.
• Dart plugin version 183.5912.23
• For information about installing plugins, see
https://flutter.dev/intellij-setup/#installing-the-plugins
[✓] IntelliJ IDEA Ultimate Edition (version 2019.1)
• IntelliJ at /opt/intellij-ue-2019.1
• Flutter plugin version 39.0.2
• Dart plugin version 191.8423
[!] VS Code (version 1.39.2)
• VS Code at /usr/share/code
✗ Flutter extension not installed; install from
https://marketplace.visualstudio.com/items?itemName=Dart-Code.flutter
[✓] Connected device (3 available)
• Linux • Linux • linux-x64 • Linux
• Chrome • chrome • web-javascript • Google Chrome 79.0.3945.79
• Web Server • web-server • web-javascript • Flutter Tools
``` | engine,platform-web,platform-linux,a: desktop,a: mouse,P2,team-engine,triaged-engine | low | Critical |
536,541,668 | rust | Clarify pitfalls of references and NonNull in FFI | Add a note like the one in this commit to the documentation of references and `NonNull`: https://github.com/rust-lang/rust/pull/62514/commits/fafa4897985f932490960e90ddd2ff39134e967e
Something like what that commit does for `Box<T>`:
```rust
//! **Important.** At least at present, you should avoid using
//! `Box<T>` types for functions that are defined in C but invoked
//! from Rust. In those cases, you should directly mirror the C types
//! as closely as possible. Using types like `Box<T>` where the C
//! definition is just using `T*` can lead to undefined behavior, as
//! described in [rust-lang/unsafe-code-guidelines#198][ucg#198].
```
but replacing `Box<T>` with the reference types and `NonNull<T>`. | C-enhancement,A-FFI,T-lang,A-docs | low | Minor |
536,554,763 | vscode | Make css auto completion items overtype on semicolon | 
Now that auto complete adds a `;` when selecting properties, it would be great if typing a `;` right before the existing semicolon would overtype it, so that I can quickly get to the end of the line and keep typing. I know that you can type `cmd/ctrl+enter` to start a new line, but typing a semicolon is a bit faster.
| feature-request,suggest,editor-autoclosing | low | Minor |
536,560,001 | pytorch | [jit] Python objects as arguments are not mutated | In Python the original object passed in is not modified by running the script function, but in C++ it is (which is the correct behavior).
```python
from typing import Dict
import torch
class XYZ(torch.nn.Module):
def __init__(self):
super().__init__()
@torch.jit.export
def f(self, x: int, d: Dict[int, int]) -> Dict[int, int]:
d[x] = x + 2
return d
jit = torch.jit.script(XYZ())
d0 = {11: 11}
d1 = jit.f(23, d0)
# this assertion passes
assert 23 not in d0 and d1[23] == 25
jit.save("saved.pt")
```
```cpp
#include <torch/script.h>
#include <iostream>
int main() {
torch::jit::script::Module module =
torch::jit::load("saved.pt");
c10::Dict<int64_t, int64_t> d0;
d0.insert(11, 11);
auto d1 = module ({23, d0}).toGenericDict();
std::cout << d0.at(23) << " " << d0.at(11)<<std::endl;
std::cout << d1.at(23) << " " << d1.at(11)<<std::endl;
}
/*
The above code prints:
25 11
25 11
*/
```
cc @suo | oncall: jit,low priority,triaged | low | Minor |
536,566,783 | go | x/net/http2: test timeouts on js-wasm builder | [2019-12-09T16:08:50-c0dbc17/js-wasm](https://build.golang.org/log/16b8f0dc90c74888bb31f97b8741461cbfb016de) has an interesting timeout failure.
It's not obvious to me whether it is due to a bug in the test, a bug in the `js/wasm` standard library, or a bug in `x/net/http2` or one of its dependencies that happens to be exposed by the different goroutine scheduling on `js/wasm`.
The presence of a goroutine blocked in `net.(*bufferedPipe).Read` via `crypto/tls` makes me suspect a concurrency bug in either `golang.org/x/net/http2` or `crypto/tls`, but admittedly that evidence is not compelling.
I also can't reliably determine how often this occurs or on what platforms because I can't fetch most of the `x/net` logs (#35515).
CC @bradfitz @FiloSottile @tombergan | help wanted,NeedsInvestigation,arch-wasm | low | Critical |
536,598,786 | flutter | [proposal] provide layout widget that enforces minimum height | I see many widgets meant to make their child as big as it wants, like `Expanded`, `Flexible`, `BoxConstraints.expand()`, `SizedBox.expand`, but many times I want to set a widget to be as small as possible, analogous to Android's `wrap_content`.
Sometimes I get rendering Exceptions because the framework can't calculate the size of a widget, as when using `Row` and `Column` together, as explained [here](https://flutter.dev/docs/development/ui/layout/box-constraints#flex). The solution presented is always to use `Expanded`, but what if I don't want that content to expand? What if I want it as small as it can be without clipping anything?
The solution is usually to use multiple `Expanded` and `Spacer` widgets with different `flex` parameters, but this still isn't perfect, as negotiating `flex` values isn't always what we want: the `Spacer` will not vanish if the screen gets smaller, it will try to maintain its proportionality, forcing my widget to get smaller.
Maybe I am missing something about the way the framework works, in which case this would be a lack of tutorials, or maybe the framework is missing a `ShrinkWrap` widget (as this is already a property of some widgets) or something with a similar name, that will size the wrapped widget to be as small as it can be.
Example of a problem that could benefit from this [here](https://stackoverflow.com/questions/59294421/how-to-make-a-widget-to-be-as-small-as-it-can-wrap-content-in-android), as I couldn't find how to solve this, even with `IntrinsicHeight`. | c: new feature,framework,c: proposal,P3,team-framework,triaged-framework | medium | Major |
536,653,709 | terminal | Epic: Search v2 | # Description of the new feature/enhancement
###### [Original Issue: #605] [Original Spec: #3299] [Initial PR: #3590]
Terminal text search already supports single-tab, case-sensitive/insensitive, forward/backward search. In phase 2, we are considering these new features:
* [x] Add a "Find" button in the dropdown menu to trigger search. Currently the search dialog is launched by key bindings; we may need to consider binding the dialog to a button in the dropdown menu for mouse-only operations.
- sorta in #13055. Close enough.
* [x] Highlight while you type. Emphasize all the matches in the buffer with an outline or a selection in another color. This provides a clearer view of the searched text.
- [x] #7561 Feature Request: Highlight all finding
* [x] #6319 No results notification (or reveal number of results)
- in #14045
* [x] #7695 next/previous search key bindings
* [x] #8307 Selected text should populate search box
* [x] #8274 Add capability to insert matched hit
* [x] #1527 Display "find" matches in the scrollbar, #8631
- in #14045
* [x] Pressing next/previous match keybindings should move to the next needle, even if the dialog is closed. (Maybe it should auto display the dialog)
- Mentioned in #8522 and #8588
* [x] It'd be cool if there was a shadow underneath the search dialog, like there is for the command palette.
- Mentioned in https://github.com/microsoft/terminal/pull/11105#issuecomment-911929402
- Fixed in #12913
* [x] Regular expression (regex) match. This is a useful search pattern and is implemented in some editors.
- first proto in #9228
- Will be effortless after #15858 merges
- merged in #17316, in 1.22
* [ ] Add a size handle. Some text editors let the user resize the search box via a size handle on its left side. This helps users when they search for long text.
* [ ] #5314 Fuzzy Search
- Go look at #16586
* [x] #4407
* [ ] Search from all tabs. In phase one search are limited within one Terminal window. However, the community also requests search from all tabs. This may require a big change to the search algorithm.
* [x] Depends on #997 by all accounts
* [ ] Search history. Sometimes users repeat the same search several times, so storing the search history is useful.
Also related, from #5001:
> * [ ] Add a setting to configure the regex used to detect patterns
> * [ ] Allow "hyperlink" matching for arbitrary patterns and adding custom handlers #8849
> - Theoretically, this could supersede #7562
> - Should be self-consistent with #8294
> * [ ] Control the styling of hyperlinks - attributes used for autodetected hyperlinks, and then different attributes for hovered ones #8294 | Area-UserInterface,Area-TerminalControl,Product-Terminal,Issue-Scenario | medium | Major |
536,657,802 | pytorch | [JIT] tensor(device=...) and tensor.to(device = ...) does not work properly in traced functions and modules | ## 🐛 Bug
The tensor.to(device) function results in a fixed destination device when traced by torch.jit.trace.
It seems very similar to #13969, which was closed a year ago, but this still fails.
## To Reproduce
```
def x(a):
b = torch.tensor((1.,), device=a.device)
return a+b
y = torch.jit.trace(x, torch.randn(1))
z = y(torch.tensor((2.,), device='cuda'))
```
results in error
```
expected device cuda:0 but got device cpu
```
The same error arises if `b` is initialized in another way:
```
b = torch.tensor((1.,)).to(device=a.device)
```
It is caused by the fact that the target device is fixed as a literal constant (equal to the device of the sample input) in the script file, and hence a.device is ignored when the script is run:
```
y.save(...) ->
op_version_set = 1
class PlaceholderModule(Module):
__parameters__ = []
training : bool
def forward(self: __torch__.PlaceholderModule,
a: Tensor) -> Tensor:
_0 = torch.to(CONSTANTS.c0, torch.device("cpu"), 6, False, False)
b = torch.to(torch.detach(_0), dtype=6, layout=0, device=torch.device("cpu"), pin_memory=False, non_blocking=False, copy=False)
return torch.add(a, b, alpha=1)
```
## Expected behavior
The device parameter should be honored when the script runs.
## Environment
Collecting environment information...
PyTorch version: 1.3.0a0+de394b6
Is debug build: No
CUDA used to build PyTorch: 10.0
OS: Microsoft Windows 10 Pro
GCC version: Could not collect
CMake version: version 3.12.2
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 10.0.130
GPU models and configuration: GPU 0: GeForce GTX 1080 Ti
Nvidia driver version: 411.31
cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\bin\cudnn64_7.dll
Versions of relevant libraries:
[pip3] numpy==1.17.2
[pip3] pytorch-ignite==0.2.0
[pip3] pytorch-toolbelt==0.2.1
[pip3] segmentation-models-pytorch==0.0.2
[pip3] torch==1.3.0a0+de394b6
[pip3] torchvision==0.4.1a0+d94043a
[conda] blas 1.0 mkl
[conda] cuda92 1.0 0 pytorch
[conda] mkl 2019.0 <pip>
[conda] mkl 2019.0 118
[conda] mkl-include 2019.0 <pip>
[conda] mkl-include 2019.0 118
[conda] mkl_fft 1.0.6 py36hdbbee80_0
[conda] mkl_random 1.0.1 py36h9258bd6_0
[conda] pytorch-ignite 0.2.0 <pip>
[conda] pytorch-toolbelt 0.2.1 <pip>
[conda] segmentation-models-pytorch 0.0.2 <pip>
[conda] torch 1.3.0a0+de394b6 <pip>
[conda] torchvision 0.4.0 <pip>
[conda] torchvision 0.4.1a0+d94043a <pip>
cc @suo | oncall: jit,triaged | low | Critical |
536,657,934 | godot | MouseMode.Confined locks the mouse when the game starts with its window unfocused | **Version:** Godot 3.2 beta 1
When you start the game while, for example, browsing the web with Chrome, the game starts unfocused; still, the game locks the mouse, and you have to focus the window and unfocus it again to unlock the mouse.
Input.SetMouseMode(Input.MouseMode.Confined) is in an object's _Ready() in the first scene, if that matters. | bug,usability,topic:input | low | Minor |
536,684,810 | TypeScript | compilerOptions:outDir is warning if set to null inside VSCode | **TypeScript Version:** 3.7.2
**Search Terms:**
Unset outDir in extended tsconfig
outDir null
outDir Type
**Code**
```
{
"extends": "./tsconfig",
"compilerOptions": {
//...
"outDir": null,
}
}
```
**Expected behavior:**
No warning inside VSCode.
**Actual behavior:**
```
{
"resource": "/home/jogo/dev/boomsaas/tsconfig.single.json",
"owner": "_generated_diagnostic_collection_name_#0",
"severity": 4,
"message": "Incorrect type. Expected \"string\".",
"startLineNumber": 6,
"startColumn": 13,
"endLineNumber": 6,
"endColumn": 17
}
```
I want to unset the outDir configuration inside my extended config to have the transpiled files next to their source files. Adding a string like "" would instead make them appear in the root folder though. Null seems to work fine.
| Bug | low | Minor |
536,700,980 | flutter | [Web] Icon is not rendered using the right font during widget tests on web | Discrepancy found in newly enabled screenshot tests for web in PR #46820
Web:

Flutter tester:

Tests: Multiple, but here's one: test/material/bottom_navigation_bar_test.dart | a: tests,framework,platform-web,P2,team-web,triaged-web | low | Minor |
536,701,000 | flutter | [Web] Inverting colors doesn't work on Web | Discrepancy found in newly enabled screenshot tests for web in PR #46820
Web:

Flutter tester:

Test: test/widgets/invert_colors_test.dart | a: tests,framework,platform-web,P2,team-web,triaged-web | low | Minor |
536,701,008 | flutter | [Web] ListWheelScrollView works incorrectly on Web | Discrepancy found in newly enabled screenshot tests for web in PR #46820
First test:
Web:

Flutter tester:

Second test:
Web:

Flutter tester:

Test: test/widgets/list_wheel_scroll_view_test.dart | framework,f: material design,platform-web,P2,team-web,triaged-web | low | Minor |
536,707,451 | react | Input nodes leaked by the browser retain React fibers | **Do you want to request a *feature* or report a *bug*?**
🐛
**What is the current behavior?**
Browsers retain references to inputs in their undo stacks, which in turn retain React fibers (including `memoizedProps`)
See https://bugs.chromium.org/p/chromium/issues/detail?id=1029189
**If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem. Your bug will get fixed much faster if we can run your code and it doesn't have dependencies other than React. Paste the link to your JSFiddle (https://jsfiddle.net/Luktwrdm/) or CodeSandbox (https://codesandbox.io/s/new) example below:**
https://jsfiddle.net/altxg/nzu6ab5e/3/
**What is the expected behavior?**
Although the leak originates from the browser, it might be helpful if React detached internal fiber references from input and contenteditable nodes on unmount
**Which versions of React, and which browser / OS are affected by this issue? Did this work in previous versions of React?**
All versions of React as far as I know.
I can reproduce the leak on Chrome and Firefox on Mac
---
Potentially related issues:
https://github.com/facebook/react/issues/12692
https://github.com/facebook/react/issues/16087 | Type: Needs Investigation | medium | Critical |
536,718,714 | kubernetes | Better protect secrets from common application vulnerabilities that expose the filesystem | We've done a lot with secrets to meet rigid applications where they are. Is there anything we can do to optionally give flexible applications better security? Can we do better than directly mounting secrets onto the filesystem? Ideally filesystem information leaks (e.g. directory traversal bugs) wouldn’t result in secrets lost to bad guys.
Options discussed in sig-auth:
* kubelet encrypts secret files and stores the encryption key in an xattr. Filesystem traversals rarely grant attackers access to extended file attributes. Would this work for sandboxed runtimes? What do we do on windows?
* kubelet encrypts secret files and stores the encryption key in an environment variable. We've occasionally had environment variable leaks, but by storing secret encryption keys in environment variables, we force an attacker to gain access to the intersection of the process environment and the filesystem. This wouldn't help when an attacker has full read access to the filesystem (due to /proc/self/environ).
* kubelet or pod unmounts secrets when the container is ready. Many secrets need to be read during initialization and never again. This would help in those circumstances but also precludes rotation.
/sig auth
/sig storage
/kind feature | priority/backlog,sig/storage,kind/feature,sig/auth,lifecycle/frozen | medium | Critical |
536,765,642 | flutter | Performance: FPS Drops dramatically with 4k+ particles. (Desktop - Darwin) | ## Details
I have been testing Flutter to evaluate its rendering capabilities, because I intend to create a graph viewer that needs to deal with millions or billions of nodes, with colors, movements, arrows and so on. Browser applications using Canvas or OpenGL work well, but I believed Flutter would be even better because it claims to be "native". My experience says otherwise. Or maybe I'm not doing it correctly.
I tested the Flame engine, I did a test with Flutter-Particles which uses Flutter canvas painting (almost all animations I found about Flutter used it), and all the other tests I did were using the same painting canvas. The last one I tested was gskinnerTeam's "Guide To the Stars Particles". That one is a perfect, easy one to reproduce. And beautiful.
All tests were similar. All of them worked well below 1k particles. They started to degrade from 3k and were visibly bad above 4k. 30k particles is impossible to deal with.
I used Perf Overlay to analyze it.
I thought of creating a practical example myself, but with the simple tests I found I already came across performance issues, so I could not do anything better than what is already done. And I have two Macs, a MacBook Pro '18 and a Mac mini '17. Both had similar performance and could not handle above 5k ~ 30k.
That's all I have. Maybe add support for a GPU API for direct access by frameworks like Flame. Or even check whether it is true that this rendering uses a single thread, and make it work with more. Dunno.
Cheers.
## To reproduce it
UPDATE: Use this example https://dartpad.dartlang.org/40308e0a5f47acba46ba62f4d8be2bf4
Increase the number of discs in child: VariousDiscs. Above 500, performance starts to decrease.
Clone flutter_vignettes repo and use this example https://github.com/gskinnerTeam/flutter_vignettes/tree/master/vignettes/constellations_list
In main.dart
```
import 'package:flutter/foundation.dart'
show debugDefaultTargetPlatformOverride;
import 'demo.dart';
void main() {
// See https://github.com/flutter/flutter/wiki/Desktop-shells#target-platform-override
debugDefaultTargetPlatformOverride = TargetPlatform.fuchsia;
runApp(new App());
}
```
>and of course add the macos folder.
And done.
To add more particles, go to demo.dart, go to line 70, and change it:
```
int starCount = 5000;
```
Let it run, keep increasing the number of particles, and check the perf overlay. Also, you can use the Xcode debug analyzer and see that it seems to be using a single thread. In there we can't see the GPU usage, but let's trust the Flutter overlay for this.
More context.
Conversation with the Flame guys https://github.com/flame-engine/flame/issues/149
My first suspect was this https://github.com/flutter/flutter/issues/32569
**Target Platform:** MacOS Mojave.
**Target OS version/browser:** Mojave 10.14.6
**Devices:** Desktop
Running Flutter in the master channel.
## Logs
```
[✓] Flutter (Channel master, v1.13.1-pre.76, on Mac OS X 10.14.6 18G95, locale pt-BR)
• Flutter version 1.13.1-pre.76 at /Users/micheldiz/flutter
• Framework revision d0526d3f92 (2 days ago), 2019-12-09 18:58:29 -0800
• Engine revision b9080c92b9
• Dart version 2.7.0 (build 2.7.0-dev.2.1 8b8894648f)
[✗] Android toolchain - develop for Android devices
✗ Unable to locate Android SDK.
Install Android Studio from: https://developer.android.com/studio/index.html
On first launch it will assist you in installing the Android SDK components.
(or visit https://flutter.dev/setup/#android-setup for detailed instructions).
If the Android SDK has been installed to a custom location, set ANDROID_HOME to
that location.
You may also want to add it to your PATH environment variable.
[✓] Xcode - develop for iOS and macOS (Xcode 11.2.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 11.2.1, Build version 11B500
• CocoaPods version 1.8.4
[!] Android Studio (not installed)
• Android Studio not found; download from
https://developer.android.com/studio/index.html
(or visit https://flutter.dev/setup/#android-setup for detailed instructions).
[✓] VS Code (version 1.40.2)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.7.1
[✓] Connected device (1 available)
• macOS • macOS • darwin-x64 • Mac OS X 10.14.6 18G95
```
| engine,c: performance,platform-mac,platform-windows,c: rendering,a: desktop,has reproducible steps,P3,found in release: 3.19,found in release: 3.20,team-macos,triaged-macos | low | Critical |
536,777,901 | go | html/template: incorrectly escapes urls with fragments |
### What version of Go are you using (`go version`)?
<pre>
1.13.5
</pre>
### Does this issue reproduce with the latest release?
yes (can reproduce it on go playground)
### What did you do?
The code contains a URL used by Kibana (an Elasticsearch plugin which uses RISON for URLs).
https://play.golang.org/p/7-WnsX9FgKk
```go
package main
import (
"fmt"
"html/template"
"net/url"
"os"
)
func main() {
testUrl := "https://host.com/_plugin/kibana/app/kibana#/discover?_g=(refreshInterval:(pause:!t,value:0),time:(from:now-15m,mode:quick,to:now))&_a=(columns:!(_source),index:'745526c0-a366-11e9-ada4-f767130ae0b4',interval:auto,query:(language:lucene,query:'app:\"app\" AND proc:\"web\" AND environment:\"staging\"'),sort:!('@timestamp',desc))"
parsedUrl, err := url.Parse(testUrl)
if err != nil {
panic(err)
}
tmpl, err := template.New("name").Parse(`<a href="{{.}}">link</a>`)
if err != nil {
panic(err)
}
fmt.Println(testUrl)
fmt.Println(parsedUrl)
tmpl.Execute(os.Stdout, parsedUrl)
}
```
### What did you expect to see?
I expected to see the original URL, with the fragment left as-is:
`https://host.com/_plugin/kibana/app/kibana#/discover?_g=(refreshInterval:(pause:!t,value:0),time:(from:now-15m,mode:quick,to:now))&_a=(columns:!(_source),index:'745526c0-a366-11e9-ada4-f767130ae0b4',interval:auto,query:(language:lucene,query:'app:"app" AND proc:"web" AND environment:"staging"'),sort:!('@timestamp',desc))`
URL as escaped by net.url package is also acceptable (Kibana seems to be able to handle it):
`https://host.com/_plugin/kibana/app/kibana#/discover?_g=(refreshInterval:(pause:!t,value:0),time:(from:now-15m,mode:quick,to:now))&_a=(columns:!(_source),index:%27745526c0-a366-11e9-ada4-f767130ae0b4%27,interval:auto,query:(language:lucene,query:%27app:%22app%22%20AND%20proc:%22web%22%20AND%20environment:%22staging%22%27),sort:!(%27@timestamp%27,desc))`
### What did you see instead?
The URL that is invalid and has wrong parts escaped:
`<a href="https://host.com/_plugin/kibana/app/kibana#/discover?_g=%28refreshInterval:%28pause:!t,value:0%29,time:%28from:now-15m,mode:quick,to:now%29%29&_a=%28columns:!%28_source%29,index:%27745526c0-a366-11e9-ada4-f767130ae0b4%27,interval:auto,query:%28language:lucene,query:%27app:%22app%22%20AND%20proc:%22web%22%20AND%20environment:%22staging%22%27%29,sort:!%28%27@timestamp%27,desc%29%29">link</a>`
If this is indeed a bug, is there any workaround I can apply right now to make the template do the right thing?
| NeedsInvestigation | low | Critical |
536,810,499 | pytorch | torch::nn::functional::interpolate crash | ## 🐛 Bug
## To Reproduce
I'm sharing a minor error.
Steps to reproduce the behavior:
```c++
/*
images[index] is a C * H * W (3 * 480 * 640) image tensor.
When calling interpolate, an unknown exception occurs if the data type is not kFloat.
*/
images[index] = F::interpolate(images[index], F::InterpolateFuncOptions().scale_factor({ scale_factor }).mode(torch::kLinear).align_corners(false));
images[index] = normalizeChannels(images[index]);
/*
If the data type is changed first, it works normally:
img_tensor = img_tensor.toType(at::kFloat);
*/
```
## Expected behavior
## Environment
Please copy and paste the output from our
[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py)
(or fill out the checklist below manually).
You can get the script and run it with:
```
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```
- PyTorch Version (e.g., 1.0): Libtorch Nightly build date : 19.11.22
- OS (e.g., Linux): Windows 10
- How you installed PyTorch (`conda`, `pip`, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version: 10.2 , 7.5
- GPU models and configuration: 2080TI
- Any other relevant information:
## Additional context
cc @yf225 | module: cpp,triaged | low | Critical |
536,819,348 | PowerToys | [FancyZones] Zone always on top | # Summary of the new feature/enhancement
Currently I have 3 zones; one is a console zone, but the windows in the other zones overlay on top of it, hiding the zone.
As you can see, the console is over the browser:

but as soon as I click the browser, the console is sent behind it:

A setting on zone creation to mark a zone as always-on-top would be a massive help.
# Proposed technical implementation details (optional)
| Idea-Enhancement,FancyZones-Editor,Product-FancyZones | low | Major |
536,827,766 | flutter | Document old Android embedding's FlutterView migration | The new FlutterView class does not have enough documentation.
https://github.com/flutter/flutter/wiki/Upgrading-pre-1.12-Android-projects
The replacements for the methods removed from the old FlutterView should be mentioned.
Thanks. | platform-android,engine,d: api docs,a: existing-apps,customer: crowd,P3,team-android,triaged-android | low | Major |
536,828,408 | youtube-dl | Request for ruv.is support |
## Checklist
- [X] I'm reporting a feature request
- [X] I've verified that I'm running youtube-dl version **2019.11.28**
- [ ] I've searched the bugtracker for similar feature requests including closed ones
## Description
www.ruv.is is Icelandic TV; they have a "play channel" with the direct link https://www.ruv.is/sjonvarp/
Is it possible to add support for this website? | request | low | Critical |
536,851,319 | vue | slot fallback content is always rendered even when not used | ### Version
2.6.10
### Reproduction link
[https://codesandbox.io/s/vue-test-default-slot-always-exec-syuny](https://codesandbox.io/s/vue-test-default-slot-always-exec-syuny)
### Steps to reproduce
When a computed prop or a method is referenced in the slot fallback content, like this:
```
// XXX component
<slot>
no render, but always run {{p}}, {{print()}} <Func/>
</slot>
// use it
<XXX>
actual replaced content
</XXX>
```
### What is expected?
The computed property and the method should not be invoked, because the fallback content is not rendered.
### What is actually happening?
The computed property is invoked once per change.
The method is invoked once per render.
<!-- generated by vue-issues. DO NOT REMOVE --> | improvement,has workaround | low | Major |
536,867,465 | angular | [Animations]: Animating from current style, not initial one | - | type: bug/fix,area: animations,freq2: medium,state: confirmed,P3 | low | Minor |
536,885,092 | go | math/big: big.Int JSON marshalling to/from string | `big.Int` by design marshals/unmarshals to a number in JSON, but for actually big numbers this ends up as a mess in other languages, especially JavaScript...
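For context, a small JavaScript/TypeScript sketch of the consumer-side problem described above (the values are made up for illustration):
```typescript
// A big integer emitted as a JSON number loses precision once a JavaScript
// client parses it, because JSON.parse maps numbers to IEEE-754 doubles.
const asNumber = '{"balance": 123456789012345678901234567890}';
const parsed = JSON.parse(asNumber);
console.log(parsed.balance);                        // ~1.2345678901234568e+29, precision lost
console.log(Number.isSafeInteger(parsed.balance));  // false

// If the producer marshalled the value as a string instead, the client could
// convert it losslessly, e.g. with BigInt or a decimal library.
const asString = '{"balance": "123456789012345678901234567890"}';
console.log(BigInt(JSON.parse(asString).balance));  // 123456789012345678901234567890n
```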
Now `encoding/json` has a `,string` struct tag to flag a field to be quoted as a string instead, but at the moment it's restricted: only `strings, floats, integers, and booleans can be quoted.`
If this could also be applied to `big.Int` then we'd have a good solution for BIG numbers without breaking the existing functionality at all!
related; https://github.com/golang/go/issues/28154
and there are probably many more people running into this issue...
| NeedsInvestigation | low | Minor |
536,885,271 | flutter | Code coverage does not include const values | Here is sample code. I found that `static const InfoWindow noText = InfoWindow();` was not executed while running the unit test, so the code coverage could not reach 100% and the pull request was rejected. Currently, I am just changing the static const instance to a function; if you have any idea, please tell me, thank you!
```dart
//test class example
class InfoWindow {
const InfoWindow({
this.title,
this.snippet,
this.anchor = const Offset(0.5, 0.0),
this.onTap,
});
  static const InfoWindow noText = InfoWindow();
  // (other fields and members elided in the original snippet)
}

//test unit
test('notext infowindow', (){
  expect(InfoWindow.noText.title, "");
});
```
| a: tests,engine,dependency: dart,P2,team-engine,triaged-engine | low | Major |
536,896,934 | node | Debugger isn't stopped at the `debugger;` statement when debugging using `--inspect` | Note: previous thread was https://github.com/nodejs/node/issues/10457, but it got closed recommending `--inspect-brk` which is more a workaround than a solution imo.
**Is your feature request related to a problem? Please describe.**
When debugging a CLI program, I want to break at a specific location. For DevX reasons I don't want to use `--inspect-brk`, as it would require me to attach the debugger before even knowing whether the codepath covered by the breakpoint is hit - which is particularly cumbersome when using nested processes (with `--inspect-port=0`).
**Describe the solution you'd like**
I'd like `debugger;` statements to pause the execution even if a debugger isn't attached yet; I'd then attach it myself.
**Describe alternatives you've considered**
Pausing the process manually using `execSync` w/ `sleep` or similar doesn't work: the debugger doesn't see the Node process. I suspect an internal integration is required for the process to answer the debugger during a synchronous call? | inspector | low | Critical |