id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---|
424,588,793 | TypeScript | Symbol definition improvement | <!-- 🚨 STOP 🚨 STOP 🚨 STOP 🚨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Please read the FAQ first, especially the "Common Feature Requests" section.
-->
## Search Terms
<!-- List of keywords you searched for before creating this issue. Write them down here so that others can find this suggestion more easily -->
symbol in:title, symbol improvement, symbol description definition
## Suggestion
As I use `symbol` more and more in my projects, I fill in the optional description of `Symbol` to make debugging easier in development mode. Then, in production, I remove all descriptions during the bundling process:

So I was thinking that it could be really useful to have the same approach for TypeScript - [playground](https://www.typescriptlang.org/play/#src=type%20xsymbol%3CT%3E%20%3D%20%7B%7D%3B%0D%0A%0D%0Ainterface%20xSymbolConstructor%20%7B%0D%0A%20%20%20%20()%3A%20symbol%3B%0D%0A%20%20%20%20%3CT%20extends%20string%20%7C%20number%3E(description%3A%20T)%3A%20xsymbol%3CT%3E%3B%0D%0A%7D%0D%0A%0D%0Avar%20xSymbol%3A%20xSymbolConstructor%3B%0D%0A%0D%0Avar%20s1%20%3D%20xSymbol()%3B%0D%0Avar%20s2%20%3D%20xSymbol(%22test%22)%3B):


Just imagine that instead of `xsymbol<"test">` we had a "native" `symbol<"test">` or `symbol("test")`, to be closer to DevTools.
Extending this approach further, I wonder whether it could also help improve type inference.
Given this code:
```ts
const foo = {
  type: 'foo' as 'foo',
  foo: ''
};

const bar = {
  type: 'bar' as 'bar',
  bar: ''
};

type FooBarAction = typeof foo | typeof bar;

function getType<T>(input: {type: T}): T {
  return input.type;
}

function foobar(action: FooBarAction) {
  switch (action.type) {
    case getType(foo):
      action.foo;
      break;
    case getType(bar):
      action.bar;
      break;
  }
}
```

Here, depending on the value of the `type` property, `action` correctly exposes either the `foo` property or the `bar` property.
But if I replace them with `symbol`:
```ts
const sFoo = {
  type: Symbol('sfoo'),
  sfoo: ''
};

const sBar = {
  type: Symbol('sbar'),
  sbar: ''
};

type SFooBarAction = typeof sFoo | typeof sBar;

function sfoobar(action: SFooBarAction) {
  switch (action.type) {
    case getType(sFoo):
      action.sfoo;
      break;
    case getType(sBar):
      action.sbar;
      break;
  }
}
```


Here TypeScript cannot properly narrow the type of my `action` variable and keeps reporting it as the `SFooBarAction` type, so only the `type` property is available.
Judging from PR #30196, I understand that working on `symbol` in TypeScript's core is not easy, but it could be helpful to use the optional `description` of `Symbol` as a definition descriptor, so that a unique symbol appears in the definition. Of course, as soon as two symbols share the same `description`, we would face the same problem as today.
[Full Playground](https://www.typescriptlang.org/play/#src=type%20xsymbol%3CT%3E%20%3D%20%7B%7D%3B%0D%0A%0D%0Ainterface%20xSymbolConstructor%20%7B%0D%0A%20%20%20%20()%3A%20symbol%3B%0D%0A%20%20%20%20%3CT%20extends%20string%20%7C%20number%3E(description%3A%20T)%3A%20xsymbol%3CT%3E%3B%0D%0A%7D%0D%0A%0D%0Avar%20xSymbol%3A%20xSymbolConstructor%3B%0D%0A%0D%0Avar%20s1%20%3D%20xSymbol()%3B%0D%0Avar%20s2%20%3D%20xSymbol(%22test%22)%3B%0D%0A%0D%0Aconst%20foo%20%3D%20%7B%0D%0A%20%20%20%20type%3A%20'foo'%20as%20'foo'%2C%0D%0A%20%20%20%20foo%3A%20''%0D%0A%7D%3B%0D%0A%0D%0Aconst%20bar%20%3D%20%7B%0D%0A%20%20%20%20type%3A%20'bar'%20as%20'bar'%2C%0D%0A%20%20%20%20bar%3A%20''%0D%0A%7D%3B%0D%0A%0D%0Atype%20FooBarAction%20%3D%20typeof%20foo%20%7C%20typeof%20bar%3B%0D%0A%0D%0Afunction%20getType%3CT%3E(input%3A%20%7Btype%3A%20T%7D)%3A%20T%20%7B%0D%0A%20%20%20%20return%20input.type%3B%0D%0A%7D%0D%0A%0D%0Afunction%20foobar(action%3A%20FooBarAction)%20%7B%0D%0A%20%20%20%20switch%20(action.type)%20%7B%0D%0A%20%20%20%20%20%20%20%20case%20getType(foo)%3A%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20action.foo%3B%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20break%3B%0D%0A%20%20%20%20%20%20%20%20case%20getType(bar)%3A%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20action.bar%3B%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20break%3B%0D%0A%20%20%20%20%7D%0D%0A%7D%0D%0A%0D%0Aconst%20sFoo%20%3D%20%7B%0D%0A%20%20%20%20type%3A%20Symbol('sfoo')%2C%0D%0A%20%20%20%20sfoo%3A%20''%0D%0A%7D%3B%0D%0A%0D%0Aconst%20sBar%20%3D%20%7B%0D%0A%20%20%20%20type%3A%20Symbol('sbar')%2C%0D%0A%20%20%20%20sbar%3A%20''%0D%0A%7D%3B%0D%0A%0D%0Atype%20SFooBarAction%20%3D%20typeof%20sFoo%20%7C%20typeof%20sBar%3B%0D%0A%0D%0Afunction%20sfoobar(action%3A%20SFooBarAction)%20%7B%0D%0A%20%20%20%20switch%20(action.type)%20%7B%0D%0A%20%20%20%20%20%20%20%20case%20getType(sFoo)%3A%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20action.sfoo%3B%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20break%3B%0D%0A%20%20%20%20%20%20%20%20case%20getType(sBar)%3A%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20action.sbar%3B%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20break%3B%0D%0A%20%20%20%20%7D%0D%0A%7D%0D%0A%0D%0A%0D%0Aconst%20xFoo%20%3D%20%7B%0D%0A%20%20%20%20type%3A%20xSymbol('sfoo')%2C%0D%0A%20%20%20%20xfoo%3A%20''%0D%0A%7D%3B%0D%0A%0D%0Aconst%20xBar%20%3D%20%7B%0D%0A%20%20%20%20type%3A%20xSymbol('sbar')%2C%0D%0A%20%20%20%20xbar%3A%20''%0D%0A%7D%3B%0D%0A%0D%0Atype%20XFooBarAction%20%3D%20typeof%20xFoo%20%7C%20typeof%20xBar%3B%0D%0A%0D%0Afunction%20xfoobar(action%3A%20XFooBarAction)%20%7B%0D%0A%20%20%20%20switch%20(action.type)%20%7B%0D%0A%20%20%20%20%20%20%20%20case%20getType(xFoo)%3A%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20action.xfoo%3B%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20break%3B%0D%0A%20%20%20%20%20%20%20%20case%20getType(xBar)%3A%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20action.xbar%3B%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20break%3B%0D%0A%20%20%20%20%7D%0D%0A%7D%0D%0A)
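For comparison, here is a minimal sketch (my own, not part of the proposal) of the narrowing that already works today when each symbol is hoisted into a `const` annotated as `unique symbol`:
```ts
// Workaround sketch: `unique symbol` constants can act as discriminants.
const FOO: unique symbol = Symbol('sfoo');
const BAR: unique symbol = Symbol('sbar');

interface SFoo { type: typeof FOO; sfoo: string; }
interface SBar { type: typeof BAR; sbar: string; }
type SFooBarAction2 = SFoo | SBar;

function sfoobar2(action: SFooBarAction2) {
  switch (action.type) {
    case FOO:
      action.sfoo; // narrowed to SFoo
      break;
    case BAR:
      action.sbar; // narrowed to SBar
      break;
  }
}
```
The suggestion above would effectively give the same narrowing without having to declare and annotate a separate `const` for every symbol.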
## Use Cases
For the first part, everywhere we use `Symbol`.
For the second part, whenever a unique value is required, e.g. for an `Event Emitter` or an `Action` in a `Redux` architecture, ...
## Examples
See the `Suggestion` section above.
## Checklist
My suggestion meets these guidelines:
* [ ] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [ ] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,In Discussion | low | Critical |
424,591,482 | flutter | Change viewportFraction dynamically in PageController | <!-- Thank you for using Flutter!
Please check out our documentation first:
* https://flutter.io/
* https://docs.flutter.io/
If you can't find the answer there, please consider asking a question on
the Stack Overflow Web site:
* https://stackoverflow.com/questions/tagged/flutter?sort=frequent
Please don't file a GitHub issue for support requests. GitHub issues are
for tracking defects in the product. If you file a bug asking for help, we
will consider this a request for a documentation update.
-->
I have searched everywhere and posted on SO but no luck.
I have a PageView and want to zoom on double tap.
Here's the code:
```dart
@override
Widget build(BuildContext context) {
  this.controller = PageController(
    initialPage: 0,
    viewportFraction: _viewportScale,
  );
  return PageView(
    controller: controller,
    onPageChanged: (int pageIndex) {
      setState(() {
        _viewportScale = 1.0;
      });
    },
    children: this.urls.map((String url) {
      return Container(
        child: GestureDetector(
          child: Image.network(url),
          onTap: () => Navigator.pop(context),
          onDoubleTap: () {
            setState(() {
              _viewportScale = _viewportScale == 1.0 ? 2.0 : 1.0;
            });
          },
        ),
      );
    }).toList(),
  );
}
```
And when I double tap, I get this error:
```
I/flutter ( 6492): ══╡ EXCEPTION CAUGHT BY WIDGETS LIBRARY ╞═══════════════════════════════════════════════════════════
I/flutter ( 6492): The following assertion was thrown building NotificationListener<ScrollNotification>:
I/flutter ( 6492): Unexpected call to replaceSemanticsActions() method of RawGestureDetectorState.
I/flutter ( 6492): The replaceSemanticsActions() method can only be called outside of the build phase.
I/flutter ( 6492):
I/flutter ( 6492): When the exception was thrown, this was the stack:
I/flutter ( 6492): #0 RawGestureDetectorState.replaceSemanticsActions.<anonymous closure> (package:flutter/src/widgets/gesture_detector.dart:737:9)
I/flutter ( 6492): #1 RawGestureDetectorState.replaceSemanticsActions (package:flutter/src/widgets/gesture_detector.dart:743:6)
I/flutter ( 6492): #2 ScrollableState.setSemanticsActions (package:flutter/src/widgets/scrollable.dart:379:40)
I/flutter ( 6492): #3 ScrollPosition._updateSemanticActions (package:flutter/src/widgets/scroll_position.dart:445:13)
I/flutter ( 6492): #4 ScrollPosition.notifyListeners (package:flutter/src/widgets/scroll_position.dart:695:5)
I/flutter ( 6492): #5 ScrollPosition.forcePixels (package:flutter/src/widgets/scroll_position.dart:318:5)
I/flutter ( 6492): #6 _PagePosition.viewportFraction= (package:flutter/src/widgets/page_view.dart:263:7)
I/flutter ( 6492): #7 PageController.attach (package:flutter/src/widgets/page_view.dart:173:18)
I/flutter ( 6492): #8 ScrollableState.didUpdateWidget (package:flutter/src/widgets/scrollable.dart:356:26)
I/flutter ( 6492): #9 StatefulElement.update (package:flutter/src/widgets/framework.dart:3884:58)
I/flutter ( 6492): #10 Element.updateChild (package:flutter/src/widgets/framework.dart:2752:15)
I/flutter ( 6492): #11 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:3752:16)
I/flutter ( 6492): #12 Element.rebuild (package:flutter/src/widgets/framework.dart:3564:5)
I/flutter ( 6492): #13 StatelessElement.update (package:flutter/src/widgets/framework.dart:3801:5)
I/flutter ( 6492): #14 Element.updateChild (package:flutter/src/widgets/framework.dart:2752:15)
I/flutter ( 6492): #15 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:3752:16)
I/flutter ( 6492): #16 Element.rebuild (package:flutter/src/widgets/framework.dart:3564:5)
I/flutter ( 6492): #17 StatefulElement.update (package:flutter/src/widgets/framework.dart:3899:5)
I/flutter ( 6492): #18 Element.updateChild (package:flutter/src/widgets/framework.dart:2752:15)
I/flutter ( 6492): #19 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:3752:16)
I/flutter ( 6492): #20 Element.rebuild (package:flutter/src/widgets/framework.dart:3564:5)
I/flutter ( 6492): #21 BuildOwner.buildScope (package:flutter/src/widgets/framework.dart:2277:33)
I/flutter ( 6492): #22 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding&PaintingBinding&SemanticsBinding&RendererBinding&WidgetsBinding.drawFrame (package:flutter/src/widgets/binding.dart:700:20)
I/flutter ( 6492): #23 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding&PaintingBinding&SemanticsBinding&RendererBinding._handlePersistentFrameCallback (package:flutter/src/rendering/binding.dart:275:5)
I/flutter ( 6492): #24 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding._invokeFrameCallback (package:flutter/src/scheduler/binding.dart:990:15)
I/flutter ( 6492): #25 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding.handleDrawFrame (package:flutter/src/scheduler/binding.dart:930:9)
I/flutter ( 6492): #26 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding._handleDrawFrame (package:flutter/src/scheduler/binding.dart:842:5)
I/flutter ( 6492): #30 _invoke (dart:ui/hooks.dart:209:10)
I/flutter ( 6492): #31 _drawFrame (dart:ui/hooks.dart:168:3)
I/flutter ( 6492): (elided 3 frames from package dart:async)
I/flutter ( 6492): ════════════════════════════════════════════════════════════════════════════════════════════════════
```
As you can see, `viewportFraction` is changed in onDoubleTap. It's initially set to 1.0 in the class field declaration.
I found something similar in https://github.com/flutter/flutter/issues/23873, but that one is about values smaller than 1.0; this is about changing `viewportFraction` dynamically.
Usually I solve this kind of issue by trial and error, but here I have no clue what's going on. | c: new feature,framework,P2,team-framework,triaged-framework | low | Critical |
424,659,533 | TypeScript | refactoring for listing parameters from single line to column and back | parameters
```ts
// before
function fn(a: A, b: B, c: C) { /*..*/ }
// after
function fn(
  a: A,
  b: B,
  c: C
) { /*..*/ }
// and back
function fn(a: A, b: B, c: C) { /*..*/ }
```
arguments
```ts
// before
return foldArray(toReversed(values), toListOf<T>(), (result, value) => add(result, value));
// after
return foldArray(
  toReversed(values),
  toListOf<T>(),
  (result, value) => add(result, value)
);
``` | Suggestion,Awaiting More Feedback | low | Minor |
424,674,770 | TypeScript | syntax to control distributiveness | from: https://github.com/Microsoft/TypeScript/issues/30569#issuecomment-476007534
So basically, we need a way to state clearly in the language whether we want:
```ts
Promise<A> | Promise<B>
```
or
```ts
Promise<A | B>
```
as a result of a type operation | Suggestion,In Discussion | medium | Major |
424,679,779 | react-native | JavaScript strings with NULL character are not handled properly | ## 🐛 Bug Report
JavaScript strings with NULL character are not handled properly
## To Reproduce
```jsx
<Text style={styles.welcome}>{'Hello \u0000 World'}</Text>
```
The text is cut off to just "Hello".
It does not happen when "Debug JS Remotely" is enabled.
## Expected Behavior
Hello World
## Code Example
https://github.com/gaodeng/RN-NULL-character-ISSUE
## Environment
```
info
React Native Environment Info:
System:
OS: macOS 10.14.3
CPU: (4) x64 Intel(R) Core(TM) i3-4130 CPU @ 3.40GHz
Memory: 282.02 MB / 8.00 GB
Shell: 3.2.57 - /bin/bash
Binaries:
Node: 10.15.1 - /usr/local/bin/node
Yarn: 1.5.1 - /usr/local/bin/yarn
npm: 6.8.0 - /usr/local/bin/npm
Watchman: 4.7.0 - /usr/local/bin/watchman
SDKs:
iOS SDK:
Platforms: iOS 12.1, macOS 10.14, tvOS 12.1, watchOS 5.1
Android SDK:
API Levels: 22, 23, 24, 25, 26, 27, 28
Build Tools: 23.0.1, 25.0.0, 25.0.1, 25.0.2, 25.0.3, 26.0.0, 26.0.1, 26.0.2, 26.0.3, 27.0.2, 27.0.3, 28.0.0, 28.0.3
System Images: android-18 | Google APIs Intel x86 Atom, android-22 | Google APIs Intel x86 Atom, android-23 | Google APIs Intel x86 Atom_64, android-24 | Intel x86 Atom_64, android-25 | Google APIs ARM EABI v7a, android-25 | Google APIs Intel x86 Atom_64, android-27 | Google APIs Intel x86 Atom, android-P | Google APIs Intel x86 Atom, android-P | Google Play Intel x86 Atom_64
IDEs:
Android Studio: 3.3 AI-182.5107.16.33.5314842
Xcode: 10.1/10B61 - /usr/bin/xcodebuild
npmPackages:
react: 16.8.3 => 16.8.3
react-native: 0.59.1 => 0.59.1
npmGlobalPackages:
create-react-native-app: 1.0.0
react-native-cli: 2.0.1
react-native-create-library: 3.1.2
react-native-git-upgrade: 0.2.7
react-native-rename: 2.1.5
``` | JavaScript,Bug,Never gets stale | medium | Critical |
424,691,645 | pytorch | Add layer-wise adaptive rate scaling (LARS) optimizer | ## 🚀 Feature
Add a PyTorch implementation of layer-wise adaptive rate scaling (LARS) from the paper "[Large Batch Training of Convolutional Networks](https://arxiv.org/abs/1708.03888)" by You, Gitman, and Ginsburg. Namely, include the implementation found [here](https://github.com/noahgolmant/pytorch-lars) in the `torch.optim.optimizers` submodule.
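For context, the update rule looks roughly like the following sketch (my own illustration of the paper's formula, not the code from the linked repo; the trust-coefficient name `eta` and the default values are assumptions):
```python
import torch
from torch.optim.optimizer import Optimizer

class LARSSketch(Optimizer):
    """Illustrative SGD-with-LARS: each parameter's step is scaled by a layer-wise trust ratio."""

    def __init__(self, params, lr=0.1, momentum=0.9, weight_decay=0.0, eta=0.001):
        defaults = dict(lr=lr, momentum=momentum, weight_decay=weight_decay, eta=eta)
        super(LARSSketch, self).__init__(params, defaults)

    def step(self, closure=None):
        loss = closure() if closure is not None else None
        with torch.no_grad():
            for group in self.param_groups:
                for p in group['params']:
                    if p.grad is None:
                        continue
                    w_norm = torch.norm(p)
                    g_norm = torch.norm(p.grad)
                    # Layer-wise trust ratio: eta * ||w|| / (||grad|| + wd * ||w||)
                    if w_norm > 0 and g_norm > 0:
                        trust = group['eta'] * w_norm / (g_norm + group['weight_decay'] * w_norm)
                    else:
                        trust = 1.0
                    d_p = p.grad + group['weight_decay'] * p
                    buf = self.state[p].setdefault('momentum_buffer', torch.zeros_like(p))
                    # v = momentum * v + (global lr * trust ratio) * (grad + wd * w); then w -= v
                    buf.mul_(group['momentum']).add_(d_p * group['lr'] * trust)
                    p.add_(-buf)
        return loss
```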
## Motivation
LARS is one of the most popular optimization algorithms for large-batch training. This feature will expose the algorithm through the high-level optimizer interface. There are currently no other implementations of this algorithm available in widely used frameworks.
## Pitch
I have already written/tested a PyTorch optimizer for LARS in [this repo](https://github.com/noahgolmant/pytorch-lars). Changes would be minimal (I just have to make the imports relative at the top of the file).
## Alternatives
The only existing code for this is in caffe, and it looks like it is not exposed at a higher level within the solver framework. I also do not know whether or not that code has been tested.
## Additional context
Notably, I was unable to reproduce their results (namely reducing the generalization gap) on CIFAR-10 with ResNet. I would welcome additional tests to see if this implementation is able to replicate their performance on larger datasets like ImageNet.
| feature,module: optimizer,triaged | medium | Major |
424,691,705 | godot | Complex model render incorrectly using GLES3 unless increasing blend shape max buffer size | **Godot version:**
3.1 official
**OS/device including version:**
Windows 10, Vega 8 and Intel integrated, latest drivers
**Issue description:**
The imported model is not completely rendered when using GLES3, but it is when using GLES2; it also rendered correctly in version 3.0.6.
**Steps to reproduce:**
I'm linking a Git project because, due to the models' complexity, the project size is too big to attach. I'm importing models using Fuse CC -> Mixamo -> custom script.
[In this video](https://youtu.be/QYfEXgkOYnE) a few gaps are visible in the forehead and the neck areas.
[In this video](https://youtu.be/yzclSGAC4s4) the full left arm is missing.
The models have the same base body mesh, with 65 bones and about 80 blend shapes; the second has more animations, meshes and materials (meshes were added to hide as much skin as possible, and skull polygons were removed too).
**Minimal reproduction project:**
[Git project](https://gitlab.com/DavidOC/GodotJam062018) | enhancement,topic:rendering,documentation,topic:import,topic:3d | low | Major |
424,709,346 | godot | Support offset in BoneAttachment | When adding a BoneAttachment to a Skeleton, the BoneAttachment's position is set to that of the bone it is attached to. This makes sense, but then if you explicitly alter the transform of the BoneAttachment in the editor, it always goes back to the bone's position at runtime. This is causing us issues, as the only bones we have in our character's hands are a) at the wrist, and b) at the fingers. This means that when we place a weapon as a child of the BoneAttachment, it is never in the appropriate position (it's either placed on the wrist, or on the fingers). Ideally, if the BoneAttachment is moved in the editor, that offset would be taken into account when setting the BoneAttachment's position. | enhancement,topic:core | low | Major |
424,719,040 | rust | Cannot implement trait on type alias involving another trait's associated type | When trying to implement `From` for nalgebra types, I discovered the following surprising behavior:
crate2/src/lib.rs
```rust
pub trait Foo {
    type Bar;
}

pub struct Baz;

impl Foo for Baz {
    type Bar = ();
}

pub struct Quux<T>(pub T);

pub type Corge = Quux<<Baz as Foo>::Bar>;
```
crate1/src/lib.rs
```rust
pub struct Grault;

// this doesn't compile
impl From<Grault> for crate2::Corge {
    fn from(x: Grault) -> Self {
        Self(())
    }
}

// but this seemingly equivalent impl works fine
impl From<Grault> for crate2::Quux<()> {
    fn from(x: Grault) -> Self {
        Self(())
    }
}
```
Error:
```
error[E0210]: type parameter `<crate2::Baz as crate2::Foo>::Bar` must be used as the type parameter for some local type (e.g. `MyStruct<<crate2::Baz as crate2::Foo>::Bar>`)
--> src/lib.rs:3:1
|
3 | impl From<Grault> for crate2::Corge {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type parameter `<crate2::Baz as crate2::Foo>::Bar` must be used as the type parameter for some local type
|
= note: only traits defined in the current crate can be implemented for a type parameter
error: aborting due to previous error
For more information about this error, try `rustc --explain E0210`.
```
The error occurs no matter which crate the type alias is in. Besides the fact that failing at all here is almost certainly incorrect, the error message is strange: `Foo::Bar` is not a type parameter at all! | A-type-system,A-associated-items,T-compiler,C-bug,T-types | low | Critical |
424,753,998 | svelte | `bind:group` does not work with nested components | I'm trying to bind a store variable to a group of checkboxes. It works until I move the checkbox into a separate component; after that only one checkbox can be chosen at a time. Here's an example from the REPL: https://gist.github.com/imbolc/e29205d6901d135c8c1bd8c3eec26d67 | feature request | high | Critical |
424,767,122 | ant-design | Please add a data row (tr) grouping feature to the Table component | - [ ] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate.
### What problem does this feature solve?
When displaying data in a table, I would like to be able to show related data rows as a group, for example by drawing a border around multiple rows so that they are presented as a single unit.
### What does the proposed API look like?
```js
const data = [
  [{}, {}], // group 1
  [{}, {}], // group 2
  [{}, {}], // group 3
  [{}, {}], // group 4
  // ...
];
<Table dataSource={data} />
```
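Separately from the proposed API above, here is a rough sketch of how row groups can be approximated today with cell merging (`rowSpan`) in a column's `render`; the data shape and field names are assumptions for illustration:
```jsx
import React from 'react';
import { Table } from 'antd';

// Groups of two rows each; field names are made up for illustration.
const groups = [
  [{ key: '1', group: 'A', value: 1 }, { key: '2', group: 'A', value: 2 }],
  [{ key: '3', group: 'B', value: 3 }, { key: '4', group: 'B', value: 4 }],
];
const dataSource = [].concat(...groups);

const columns = [
  {
    title: 'Group',
    dataIndex: 'group',
    // Merge the group cell: span it on the first row of each group, hide it on the second.
    render: (text, row, index) => ({
      children: text,
      props: { rowSpan: index % 2 === 0 ? 2 : 0 },
    }),
  },
  { title: 'Value', dataIndex: 'value' },
];

const GroupedTable = () => (
  <Table dataSource={dataSource} columns={columns} pagination={false} />
);

export default GroupedTable;
```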
**If possible, this could be an extension of the existing `dataSource` prop, with the rows to be grouped prepared by the developer.**
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | 💡 Feature Request,Inactive | medium | Major |
424,798,765 | go | net/url: make URL parsing return an error on IPv6 literal without brackets | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
go version go1.11.2 windows/amd64
</pre>
### Does this issue reproduce with the latest release?
<pre>
yes
</pre>
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
set GOARCH=amd64
set GOHOSTARCH=amd64
set GOHOSTOS=windows
</pre></details>
### What did you do?
url.URL provides the methods `.Hostname()` and `.Port()`, which should split the URL's host (`.Host`) into its address and port parts. In certain cases, these functions are not able to interpret IPv6 addresses correctly, leading to invalid output.
Here is a test function feeding different sample URLs, demonstrating the issue (**all test URLs are valid and should succeed!**):
```
func TestUrl(t *testing.T) {
    tests := []struct {
        name     string
        input    string
        wantHost string
        wantPort string
    }{
        {"domain-unknonw-scheme", "asfd://localhost/?q=0&p=1#frag", "localhost", ""},
        {"domain-implicit", "http://localhost/?q=0&p=1#frag", "localhost", ""},
        {"domain-explicit", "http://localhost:80/?q=0&p=1#frag", "localhost", "80"},
        {"domain-explicit-other-port", "http://localhost:80/?q=0&p=1#frag", "localhost", "80"},
        {"ipv4-implicit", "http://127.0.0.1/?q=0&p=1#frag", "127.0.0.1", ""},
        {"ipv4-explicit", "http://127.0.0.1:80/?q=0&p=1#frag", "127.0.0.1", "80"},
        {"ipv4-explicit-other-port", "http://127.0.0.1:80/?q=0&p=1#frag", "127.0.0.1", "80"},
        {"ipv6-explicit", "http://[1::]:80/?q=0&p=1#frag", "1::", "80"},
        {"ipv6-explicit-other-port", "http://[1::]:80/?q=0&p=1#frag", "1::", "80"},
        {"ipv6-implicit-1", "http://[1::]/?q=0&p=1#frag", "1::", ""},
        {"ipv6-implicit-2", "http://1::/?q=0&p=1#frag", "1::", ""},
        {"ipv6-implicit-3", "http://1::2008/?q=0&p=1#frag", "1::2008", ""},
        {"ipv6-implicit-4", "http://1::2008:1/?q=0&p=1#frag", "1::2008:1", ""},
    }
    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            // Prepare URL
            u, err := url.Parse(tt.input)
            if err != nil {
                t.Errorf("could not parse url: %v", err)
                return
            }
            // Extract hostname and port
            host := u.Hostname()
            port := u.Port()
            // Compare result
            if host != tt.wantHost {
                t.Errorf("TestUrl() got = %v, want %v", host, tt.wantHost)
            }
            if port != tt.wantPort {
                t.Errorf("TestUrl() got1 = %v, want %v", port, tt.wantPort)
            }
        })
    }
}
```
Output:

### What did you expect to see?
All sample URLs in the test cases above are valid ones; hence, all tests should succeed as defined.
### What did you see instead?
The top test samples work as expected; however, the bottom three return incorrect results. The bottom three samples are valid IPv6 URLs with implicit port specification, but `.Hostname()` and `.Port()` interpret them as IPv4 addresses, returning parts of the IPv6 address as if it were the explicit port of an IPv4 input. E.g., in one of the test outputs, ":2008" is returned as the port, but it is actually part of the IPv6 address.
### Where is the bug?
`.Hostname()` and `.Port()` implement their own logic to split the port from the hostname. I've found that there is already a closely related function in the net package, called `net.SplitHostPort()`, which does its job correctly. If `.Hostname()` and `.Port()` just called that function instead of re-implementing the logic, everything should work as expected. Below is the proof, in the form of a test function with exactly the same inputs as above, but using `net.SplitHostPort()` instead of `.Hostname()` / `.Port()`.
```
func TestUrlNetSplit(t *testing.T) {
    tests := []struct {
        name     string
        input    string
        wantHost string
        wantPort string
    }{
        {"domain-unknonw-scheme", "asfd://localhost/?q=0&p=1#frag", "localhost", ""},
        {"domain-implicit", "http://localhost/?q=0&p=1#frag", "localhost", ""},
        {"domain-explicit", "http://localhost:80/?q=0&p=1#frag", "localhost", "80"},
        {"domain-explicit-other-port", "http://localhost:80/?q=0&p=1#frag", "localhost", "80"},
        {"ipv4-implicit", "http://127.0.0.1/?q=0&p=1#frag", "127.0.0.1", ""},
        {"ipv4-explicit", "http://127.0.0.1:80/?q=0&p=1#frag", "127.0.0.1", "80"},
        {"ipv4-explicit-other-port", "http://127.0.0.1:80/?q=0&p=1#frag", "127.0.0.1", "80"},
        {"ipv6-explicit", "http://[1::]:80/?q=0&p=1#frag", "1::", "80"},
        {"ipv6-explicit-other-port", "http://[1::]:80/?q=0&p=1#frag", "1::", "80"},
        {"ipv6-implicit-1", "http://[1::]/?q=0&p=1#frag", "1::", ""},
        {"ipv6-implicit-2", "http://1::/?q=0&p=1#frag", "1::", ""},
        {"ipv6-implicit-3", "http://1::2008/?q=0&p=1#frag", "1::2008", ""},
        {"ipv6-implicit-4", "http://1::2008:1/?q=0&p=1#frag", "1::2008:1", ""},
    }
    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            // Prepare URL
            u, err := url.Parse(tt.input)
            if err != nil {
                t.Errorf("could not parse url: %v", err)
                return
            }
            host, port, err := net.SplitHostPort(u.Host)
            if err != nil {
                // Could not split port from host, as there is no port specified
                host = u.Host
                port = ""
            }
            // Compare result
            if host != tt.wantHost {
                t.Errorf("TestUrl() got = %v, want %v", host, tt.wantHost)
            }
            if port != tt.wantPort {
                t.Errorf("TestUrl() got1 = %v, want %v", port, tt.wantPort)
            }
        })
    }
}
```
Output:

| help wanted,NeedsFix | medium | Critical |
424,800,710 | electron | TitleBar/MenuBar Settings | Good day
I like Electron because it has a good code implementation.
BrowserWindow has a lot of nice options for customization
[https://electronjs.org/docs/api/browser-window](url)
But I hope you will add some simple options for TitleBar customization
For some simple applications, changing the title bar color is more than enough. For more advanced applications I use an index.html file with custom navbar settings and styles.
Proposed Solution
You have already added `titleBarStyle` (String) and it works really well for me and my users on macOS.
So I hope you will add some more options in the near future.
My proposition is:
titleBarColor: String (optional), a hex color (e.g. #80FFFFFF) // title bar color; this is what I really wish for my users
titleBarHeight: Integer (optional), a value such as 50 // title bar height
titleBarTextColor: String (optional), a hex color (e.g. #80FFFFFF) // text color in the title bar
titleBarElementsColor: String (optional), a hex color (e.g. #80FFFFFF) // color of button elements in the title bar
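To make the proposal concrete, here is a sketch of how these options might sit next to the existing ones; everything in the commented block is the proposed (not yet existing) API, while `titleBarStyle` and `backgroundColor` already exist today:
```js
const { app, BrowserWindow } = require('electron');

app.on('ready', () => {
  const win = new BrowserWindow({
    width: 800,
    height: 600,
    backgroundColor: '#2e2c29',   // existing option
    titleBarStyle: 'hiddenInset', // existing option (macOS)
    // proposed options from this request (do not exist yet):
    // titleBarColor: '#80FFFFFF',
    // titleBarHeight: 50,
    // titleBarTextColor: '#FFFFFF',
    // titleBarElementsColor: '#FFFFFF',
  });
  win.loadFile('index.html');
});
```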
With Great respect
A.W.
| enhancement :sparkles: | low | Minor |
424,872,689 | flutter | PaintingStyle should have a strokeAndFill enum value | Hi,
I know that this has been removed here https://github.com/flutter/engine/pull/3037
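For reference, the current workaround is to paint the same shape twice, once filled and once stroked, roughly like this sketch:
```dart
import 'package:flutter/material.dart';

class CirclePainter extends CustomPainter {
  @override
  void paint(Canvas canvas, Size size) {
    final Offset center = size.center(Offset.zero);
    final double radius = size.shortestSide / 3;

    final Paint fill = Paint()
      ..style = PaintingStyle.fill
      ..color = Colors.amber;
    final Paint stroke = Paint()
      ..style = PaintingStyle.stroke
      ..strokeWidth = 4
      ..color = Colors.black;

    // The same pixels get touched twice: once for the fill, once for the stroke.
    canvas.drawCircle(center, radius, fill);
    canvas.drawCircle(center, radius, stroke);
  }

  @override
  bool shouldRepaint(CustomPainter oldDelegate) => false;
}
```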
Is there any reason to remove this?
Or any other solution to avoid hitting the same pixels twice with a stroke draw and a fill draw? | framework,engine,d: api docs,P3,team-engine,triaged-engine | medium | Critical |
424,918,202 | scrcpy | Can you make scrcpy work as an IP stream, e.g. played live in any browser or media player? | | feature request | low | Minor |
424,920,401 | rust | [eRFC] Include call graph information in LLVM IR | ## Summary
Add an experimental compiler feature / flag to add call graph information, in
the form of LLVM metadata, to the LLVM IR (`.ll`) files produced by the
compiler.
## Motivation
(This section ended up being a bit long-winded. The TL;DR is: improving existing
stack usage analysis tools.)
Stack usage analysis is a hard requirement for [certifying] safety critical
(embedded) applications. This analysis is usually implemented as a static
analysis tool that computes the worst case stack usage of an application. The
information provided by this tool is used in the certification process to
demonstrate that the application won't run into a stack overflow at runtime.
[certifying]: https://www.absint.com/qualification/safety.htm
[Several][a] [such][b] [tools][c] exist for C/C++ programs, mainly in commercial
and closed source forms. A few months ago the Rust compiler gained a feature
that's a pre-requisite for implementing such tools in the Rust world: [`-Z
emit-stack-sizes`]. This flag makes stack usage information about every Rust
function available in the binary artifacts (object files) produced by the
compiler.
[a]: https://www.absint.com/stackanalyzer/index.htm
[b]: https://www.iar.com/support/tech-notes/general/stack-usage-and-stack-usage-control-files/
[c]: https://www.adacore.com/gnatpro/toolsuite/gnatstack
[`-Z emit-stack-sizes`]: https://doc.rust-lang.org/unstable-book/compiler-flags/emit-stack-sizes.html
And just recently a tool that uses this flag and call graph analysis to perform
whole program stack usage analysis has been developed: [`cargo-call-stack`][]
(*full disclaimer: I'm the author of said tool*). The tool does OK when dealing
with programs that only uses direct function calls but it's lacking (either
over-pessimistic or flat out incorrect) when analyzing programs that contain
indirect function calls, that is function pointer calls and/or dynamic
dispatch.
[`cargo-call-stack`]: https://github.com/japaric/cargo-call-stack
Call graph analysis in the presence of indirect function calls is notoriously
hard, but Rust's strong typing makes the problem tractable -- in fact, dynamic
dispatch is easier to reason about than function pointer calls. However, this
last part only holds true when Rust type information is available to the tool,
which is not the case today.
To elaborate: it's important that the call graph is extracted from
post-LLVM-optimization output as that greatly reduces the chance of inserting
invalid edges. For example, what appears to be a function call at the (Rust)
source level (e.g. `let x = foo();`) may not actually exist in the final binary
due to inlining or dead code elimination. For this reason, Rust stack usage
analysis tools are limited to two sources of call graph information: the machine
code in the final executable and post-optimization LLVM IR (`rustc`'s
`--emit=llvm-ir`). The former contains no type information and the latter
contains *LLVM* type information, not Rust type information.
`cargo-call-stack` currently uses the type information available in the
LLVM IR of a crate to reason about indirect function calls. However, LLVM type
information is not as complete as Rust type information because the conversion
is lossy. Consider the following Rust source code and corresponding LLVM IR.
``` rust
#[no_mangle] // shorter name
static F: AtomicPtr<fn() -> u32> = AtomicPtr::new(foo as *mut _);

fn main() {
    if let Some(f) = unsafe { F.load(Ordering::Relaxed).as_ref() } {
        f(); // function pointer call
    }

    // volatile load to preserve the return value of `bar`
    unsafe {
        core::ptr::read_volatile(&baz());
    }
}

#[no_mangle]
fn foo() -> u32 {
    F.store(bar as *mut _, Ordering::Relaxed);
    0
}

#[no_mangle]
fn bar() -> u32 {
    1
}

#[inline(never)]
#[no_mangle]
fn baz() -> i32 {
    F.load(Ordering::Relaxed) as usize as i32
}
```
``` llvm
define void @main() unnamed_addr #3 !dbg !1240 {
start:
%_14 = alloca i32, align 4
%0 = load atomic i32, i32* bitcast (<{ i8*, [0 x i8] }>* @F to i32*) monotonic, align 4
%1 = icmp eq i32 %0, 0, !dbg !1251
br i1 %1, label %bb5, label %bb3, !dbg !1251
bb3: ; preds = %start
%2 = inttoptr i32 %0 to i32 ()**, !dbg !1252
%3 = load i32 ()*, i32 ()** %2, align 4, !dbg !1254, !nonnull !46
%4 = tail call i32 %3() #9, !dbg !1254
br label %bb5, !dbg !1255
; ..
}
; Function Attrs: norecurse nounwind
define internal i32 @foo() unnamed_addr #0 !dbg !1189 {
; ..
}
; Function Attrs: norecurse nounwind readnone
define internal i32 @bar() unnamed_addr #1 !dbg !1215 {
; ..
}
; Function Attrs: noinline norecurse nounwind
define internal fastcc i32 @baz() unnamed_addr #2 !dbg !1217 {
; ..
}
```
Note how in the LLVM IR output `foo`, `bar` and `baz` all have the same function
signature: `i32 ()`, which is the LLVM version of `fn() -> i32`. There are no
unsigned integer types in LLVM IR so both Rust types, `i32` and `u32`, get
converted to `i32` in the LLVM IR.
Line `%4 = ..` in the LLVM IR is the function pointer call. This too,
incorrectly, indicates that a function pointer with signature `i32 ()` (`fn() ->
i32`) is being invoked.
This lossy conversion leads `cargo-call-stack` to incorrectly add an edge
between the node that represents the function pointer call and `baz`. See below:

If the tool had access to call graph information from the compiler it would have
produced the following accurate call graph.

This eRFC proposes adding a feature to aid call graph and stack usage analysis.
(For a more in depth explanation of how `cargo-call-stack` works please refer to
this blog post: https://blog.japaric.io/stack-analysis/)
## Design
We propose that call graph information is added to the LLVM IR that `rustc`
produces in the form of [LLVM metadata] when the unstable `-Z call-metadata`
`rustc` flag is used.
[LLVM metadata]: https://llvm.org/docs/LangRef.html#metadata
### Function pointer calls
Functions that are converted into function pointers in Rust source (e.g. `let x:
fn() -> i32 = foo`) will get extra LLVM metadata in their definitions (IR:
`define`). The metadata will have the form `!{!"fn", !"fn() -> i32"}`, where the
second node represents the signature of the function. Likewise, function pointer
calls will get similar LLVM metadata at call site (IR: `call`/ `invoke`).
Revisiting the previous example, the IR would change as shown below:
``` llvm
define void @main() unnamed_addr #3 !dbg !1240 {
; ..
%4 = tail call i32 %3() #9, !dbg !1254, !rust !0
; .. ^^^^^^^^ (ADDED)
}
; Function Attrs: norecurse nounwind
define internal i32 @foo() unnamed_addr #0 !dbg !1189 !rust !0 {
; .. ^^^^^^^^ (ADDED)
}
; Function Attrs: norecurse nounwind readnone
define internal i32 @bar() unnamed_addr #1 !dbg !1215 !rust !0 {
; .. ^^^^^^^^ (ADDED)
}
; Function Attrs: noinline norecurse nounwind
define internal fastcc i32 @baz() unnamed_addr #2 !dbg !1217 {
; ..
}
; ..
; (ADDED) at the bottom of the file
!0 = !{!"fn", "fn() -> i32"}
; ..
```
Note how `main` and `baz` didn't get the extra `!rust` metadata because they are
never converted into function pointers. Whereas both `foo` and `bar` got the
same metadata because they have the same signature and are converted into
function pointers in the source code (lines `static F` and `F.store`).
When tools parse this LLVM IR they will know that line `%4 = ..` can invoke
`foo` or `bar` (`!rust !0`), but not `baz` or `main` because the latter two
don't have the same "fn" metadata.
This `-Z` flag only promises two things with respect to "fn" metadata:
- Only functions that are converted (coerced) into function pointers in the
source code will get "fn" metadata -- note that this does *not* necessarily mean that
function will be called via a function pointer call
- That the string node that comes after the `!"fn"` node will be *unique* for
each function type -- the flag does *not* make any promise about the contents
or syntax of this string node. (Having a stringified version of the function
signature in the LLVM IR would be nice to have but it's not required to
produce an accurate call graph.)
Adding this kind of metadata doesn't affect LLVM optimization passes and more
importantly our previous experiments show that this custom metadata is not
removed by LLVM passes.
### Trait objects
There's one more of bit of information we can encode in the metadata to make the
analysis less pessimistic: information about trait objects.
Consider the following Rust source code and corresponding LLVM IR.
``` rust
static TO: Mutex<&'static (dyn Foo + Sync)> = Mutex::new(&Bar);
static X: AtomicI32 = AtomicI32::new(0);

fn main() {
    (*TO.lock()).foo();

    if X.load(Ordering::Relaxed).quux() {
        // side effect to keep `quux`'s return value
        unsafe { asm!("" : : : "memory" : "volatile") }
    }
}

trait Foo {
    fn foo(&self) -> bool {
        false
    }
}

struct Bar;

impl Foo for Bar {}

struct Baz;

impl Foo for Baz {
    fn foo(&self) -> bool {
        true
    }
}

trait Quux {
    fn quux(&self) -> bool;
}

impl Quux for i32 {
    #[inline(never)]
    fn quux(&self) -> bool {
        *TO.lock() = &Baz;

        unsafe { core::ptr::read_volatile(self) > 0 }
    }
}
```
``` llvm
; Function Attrs: noinline noreturn nounwind
define void @main() unnamed_addr #2 !dbg !1418 {
; ..
%8 = tail call zeroext i1 %7({}* nonnull align 1 %4) #8, !dbg !1437, !rust !0
; .. ^^^^^^^^
}
; app::Foo::foo
; Function Attrs: norecurse nounwind readnone
define internal zeroext i1 @_ZN3app3Foo3foo17h5a849e28d8bf9a2eE(
%Bar* noalias nocapture nonnull readonly align 1
) unnamed_addr #0 !dbg !1224 !rust !1 {
; .. ^^^^^^^^
}
; <app::Baz as app::Foo>::foo
; Function Attrs: norecurse nounwind readnone
define internal zeroext i1
@"_ZN37_$LT$app..Baz$u20$as$u20$app..Foo$GT$3foo17h9e4a36340940b841E"(
%Baz* noalias nocapture nonnull readonly align 1
) unnamed_addr #0 !dbg !1236 !rust !2 {
; .. ^^^^^^^^
}
; <i32 as app::Quux>::quux
; Function Attrs: noinline norecurse nounwind
define internal fastcc zeroext i1
@"_ZN33_$LT$i32$u20$as$u20$app..Quux$GT$4quux17haf5232e76b46052fE"(
i32* noalias readonly align 4 dereferenceable(4)
) unnamed_addr #1 !dbg !1245 !rust !3 {
; .. ^^^^^^^^
}
; ..
!0 = "fn(*mut ()) -> bool"
!1 = "fn(&Bar) -> bool"
!2 = "fn(&Baz) -> bool"
!3 = "fn(&i32) -> bool"
; ..
```
In this case we have dynamic dispatch, which shows up in the LLVM IR at line
`%8` as a function pointer call where the signature of the function pointer is
`i1 ({}*)`, which is more or less equivalent to Rust's `fn(*mut ()) -> bool` --
the `{}*` denotes an "erased" type.
With just the function signature metadata tools could at best assume that the
dynamic dispatch could invoke `Bar.foo()`, `Baz.foo()` or `i32.quux()` resulting
in the following, incorrect call graph.

Thus, we also propose that the `-Z call-metadata` flag adds trait-method
information to trait method implementations (IR: `define`) *of traits that are
converted into trait objects*, and to dynamic dispatch sites (IR: `call _ %_({}*
_, ..)`) using the following metadata syntax: `!{!"dyn", !"Foo", !"foo"}`, where
the second node represents the trait and the third node represents the method
being dispatched / defined.
Building upon the previous example, here's how the "dyn" metadata would be
emitted by the compiler:
``` llvm
; Function Attrs: noinline noreturn nounwind
define void @main() unnamed_addr #2 !dbg !1418 {
; ..
%8 = tail call zeroext i1 %7({}* nonnull align 1 %4) #8, !dbg !1437, !rust !0
; ..
}
; app::Foo::foo
; Function Attrs: norecurse nounwind readnone
define internal zeroext i1 @_ZN3app3Foo3foo17h5a849e28d8bf9a2eE(
%Bar* noalias nocapture nonnull readonly align 1
) unnamed_addr #0 !dbg !1224 !rust !0 {
; .. ^^^^^^^^ (CHANGED)
}
; <app::Baz as app::Foo>::foo
; Function Attrs: norecurse nounwind readnone
define internal zeroext i1
@"_ZN37_$LT$app..Baz$u20$as$u20$app..Foo$GT$3foo17h9e4a36340940b841E"(
%Baz* noalias nocapture nonnull readonly align 1
) unnamed_addr #0 !dbg !1236 !rust !0 {
; .. ^^^^^^^^ (CHANGED)
}
; <i32 as app::Quux>::quux
; Function Attrs: noinline norecurse nounwind
define internal fastcc zeroext i1
@"_ZN33_$LT$i32$u20$as$u20$app..Quux$GT$4quux17haf5232e76b46052fE"(
i32* noalias readonly align 4 dereferenceable(4)
) unnamed_addr #1 !dbg !1245 {
; .. ^^^^^^^^ (REMOVED)
}
; ..
!0 = !{!"dyn", !"Foo", !"foo"}" ; CHANGED
; ..
```
Note that `<i32 as Quux>::quux` loses its `!rust` metadata because there's no
`dyn Quux` in the source code.
With trait-method information tools would be able to limit the candidates of
dynamic dispatch to the actual implementations of the trait being dispatched.
Thus the call graph produced by the tools would become:

Like "fn" metadata, "dyn" metadata only promises two things:
- Only trait method implementations (including default implementations) of
traits *that appear as trait objects* (e.g. `&dyn Foo`, `Box<dyn Bar>`) in the
source code will get this kind of metadata
- That the string nodes that come after the `!"dyn"` node will be *unique* for
each trait and method -- the flag does *not* make any promise about the
contents or syntax of these string nodes.
#### Destructors
Calling the destructor of a trait object (e.g. `Box<dyn Foo>`) can result in the
destructor of any `Foo` implementer being invoked. This information will also be
encoded in the LLVM IR using "drop" metadata of the form: `!{!"drop", !"Foo"}`
where the second node represents the trait.
Here's an example of this kind of metadata:
``` rust
trait Foo {
    fn foo(&self) {}
}

struct Bar;
impl Foo for Bar {}

struct Baz;
impl Foo for Baz {}

static MAYBE: AtomicBool = AtomicBool::new(false);

fn main() {
    let mut x: Box<dyn Foo> = Box::new(Bar);

    if MAYBE.load(Ordering::Relaxed) {
        x = Box::new(Baz);
    }

    drop(x);
}
```
Unoptimized LLVM IR:
``` llvm
; core::ptr::real_drop_in_place
define internal void @_(%Baz* nonnull align 1) unnamed_addr #4 !rust !199 {
; ..
}
; core::ptr::real_drop_in_place
define internal void @_(%Bar* nonnull align 1) unnamed_addr #4 !rust !199 {
; ..
}
; hello::main
define internal void @() {
; ..
; `drop(x)`
invoke void @_ZN4core3ptr18real_drop_in_place17h258eb03c50ca2fcaE(..)
; ..
}
; core::ptr::real_drop_in_place
define internal void @_ZN4core3ptr18real_drop_in_place17h258eb03c50ca2fcaE(..) {
; ..
; calls destructor on the concrete type behind the trait object
invoke void %8({}* align 1 %3)
to label %bb3 unwind label %cleanup, !dbg !209, !rust !199
; ..
}
!199 = !{!"drop", !"Foo"}
```
Here dropping `x` can result in `Bar`'s or `Baz`'s destructor being invoked (see `!199`).
### Multiple metadata
Some function definitions may get more than one different metadata kind or
different instances of the same kind. In that case we'll use a metadata tuple
(e.g. `!{!1, !2}`) to group the different instances. An example:
``` rust
trait Foo {
    fn foo(&self) -> bool;
}

trait Bar {
    fn bar(&self) -> i32 {
        0
    }
}

struct Baz;

impl Foo for Baz {
    fn foo(&self) -> bool {
        false
    }
}

impl Bar for Baz {}

fn main() {
    let x: &Foo = &Baz;
    let y: &Bar = &Baz;
    let z: fn(&Baz) -> bool = Baz::foo;
}
```
Unoptimized LLVM IR:
``` llvm
; core::ptr::real_drop_in_place
define internal void @_(%Baz* nonnull align 1) unnamed_addr #2 !rust !107 {
; ..
}
; <hello::Baz as hello::Foo>::foo
define internal zeroext i1 @_(%Baz* noalias nonnull readonly align 1) unnamed_addr #2 !rust !157 {
; ..
}
!105 = !{!"drop", !"Foo"}
!106 = !{!"drop", !"Bar"}
!107 = !{!105, !106}
; ..
!155 = !{!"dyn", !"Foo", !"foo"}
!156 = !{!"fn", !"fn(&Baz) -> bool"}
!157 = !{!155, !156}
```
### Summary
In summary, these are the proposed changes:
- Add an unstable `-Z call-metadata` flag
- Using this flag adds extra LLVM metadata to the LLVM IR produced by `rustc`
(`--emit=llvm-ir`). Three kinds of metadata will be added:
- `!{!"fn", !"fn() -> i32"}` metadata will be added to the definitions of
functions (IR: `define`) *that are coerced into function pointers in the
source code* and to function pointer calls (IR: `call _ %_(..)`). The second
node is a string that uniquely identifies the signature (type) of the
function.
- `!{!"dyn", !"Trait", !"method"}` metadata will be added to the trait method
implementations (IR: `define`) of traits *that appear as trait objects in
the source code* and to dynamic dispatch sites (IR: `call _ %_({}* _, ..)`).
The second node is a string that uniquely identifies the trait and the third
node is a string that uniquely identifies the trait method.
- `!{!"drop", "Trait"}` metadata will be added to destructors (IR: `define`)
of types that implement traits *that appear as trait objects in the source
code* and to the invocations of trait object destructors. The second node is
a string that uniquely identifies the implemented trait / trait object.
## Alternatives
An alternative would be to make type information available in the final binary
artifact, that is in the executable, rather than in the LLVM IR. This would make
the feature harder to implement and *less* portable. Making the type information
available in, say, the ELF format would require designing a (binary) format
to encode the information in a linker section plus non-trivial implementation
work. Making this feature available in other formats (Mach-O, WASM, etc.) would
only multiply the amount of required work, likely leading to this feature being
implemented for some formats but not others.
## Drawbacks
LLVM IR is tied to the LLVM backend; this makes the proposed feature hard, or
maybe even impossible, to port to other backends like cranelift. I don't think
this is much of an issue as this is an experimental feature; backend portability
can, and should, be revisited when we consider stabilizing this feature (if
ever).
---
Since this is a (hopefully small) experimental compiler feature (along the lines
of [`-Z emit-stack-sizes`][pr51946]) and not a language (semantics or syntax)
change I'm posting this in rust-lang/rust for FCP consideration. If this
warrants a formal RFC I'd be happy to repost this in rust-lang/rfcs.
[pr51946]: https://github.com/rust-lang/rust/pull/51946
cc @rust-lang/compiler @oli-obk | A-LLVM,T-compiler,WG-embedded,needs-fcp,A-CLI | medium | Major |
424,934,543 | scrcpy | Hotkeys for record screen | To record a video I need to run the command:
```scrcpy -r file.mkv```
This method is inconvenient if scrcpy is already running and I need to make a video: I have to return to the console, close the program, and enter the command. After recording the video, I have to close scrcpy and reopen it without parameters.
I would like to be able to manage video recording from within scrcpy, namely:
1. Start recording
2. Pause recording
3. Stop recording
For this I could use third-party software such as OBS, but when I test mobile applications in scrcpy, switching between different windows is distracting. | feature request,record | low | Major |
424,949,450 | vscode | [html] bracket matching in strings | Issue Type: <b>Bug</b>
In HTML, the less-than sign inside of tags (for example in attribute values) is sometimes recognised as opening a new tag.
For Example:
`<div data-ng-if="(something <= 0)" class="alert alert-warning">`

VS Code version: Code 1.32.3 (a3db5be9b5c6ba46bb7555ec5d60178ecc2eaae4, 2019-03-14T23:43:35.476Z)
OS version: Windows_NT x64 10.0.16299
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i5-3210M CPU @ 2.50GHz (4 x 2494)|
|GPU Status|2d_canvas: enabled<br>checker_imaging: disabled_off<br>flash_3d: enabled<br>flash_stage3d: enabled<br>flash_stage3d_baseline: enabled<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>native_gpu_memory_buffers: disabled_software<br>rasterization: unavailable_off<br>surface_synchronization: enabled_on<br>video_decode: enabled<br>webgl: enabled<br>webgl2: enabled|
|Memory (System)|7.94GB (3.81GB free)|
|Process Argv||
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (3)</summary>
Extension|Author (truncated)|Version
---|---|---
python|ms-|2019.2.5558
cpptools|ms-|0.22.1
vscode-spotify|shy|3.1.0
(2 theme extensions excluded)
</details>
<!-- generated by issue reporter --> | bug,html | low | Critical |
424,956,527 | go | x/tools/cmd/goimports: prefer main module requirements | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12.1 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/myitcv/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/myitcv/gostuff"
GOPROXY=""
GORACE=""
GOROOT="/home/myitcv/gos"
GOTMPDIR=""
GOTOOLDIR="/home/myitcv/gos/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build898038110=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
Ran [`testscript`](https://github.com/rogpeppe/go-internal/blob/master/cmd/testscript/README.md) on the following:
```
# install goimports
env GOMODPROXY=$GOPROXY
env GOPROXY=
go install golang.org/x/tools/cmd/goimports
# switch back to our local proxy
env GOPROXY=$GOMODPROXY
# "warm" the module (download) cache
go get gopkg.in/tomb.v1
go get gopkg.in/tomb.v2
# test goimports
cd mod
go mod tidy
exec goimports main.go
stdout '\Q"gopkg.in/tomb.v2"\E'
-- go.mod --
module goimports
require golang.org/x/tools v0.0.0-20190322203728-c1a832b0ad89
-- mod/go.mod --
module mod
require gopkg.in/tomb.v2 v2.0.0
-- mod/main.go --
package main

import (
    "fmt"
)

func main() {
    fmt.Println(tomb.Tomb)
}
-- .gomodproxy/gopkg.in_tomb.v1_v1.0.0/.mod --
module gopkg.in/tomb.v1
-- .gomodproxy/gopkg.in_tomb.v1_v1.0.0/.info --
{"Version":"v1.0.0","Time":"2018-10-22T18:45:39Z"}
-- .gomodproxy/gopkg.in_tomb.v1_v1.0.0/go.mod --
module gopkg.in/tomb.v1
-- .gomodproxy/gopkg.in_tomb.v1_v1.0.0/tomb.go --
package tomb
const Tomb = "A great package v1"
-- .gomodproxy/gopkg.in_tomb.v2_v2.0.0/.mod --
module gopkg.in/tomb.v2
-- .gomodproxy/gopkg.in_tomb.v2_v2.0.0/.info --
{"Version":"v2.0.0","Time":"2018-10-22T18:45:39Z"}
-- .gomodproxy/gopkg.in_tomb.v2_v2.0.0/go.mod --
module gopkg.in/tomb.v2
-- .gomodproxy/gopkg.in_tomb.v2_v2.0.0/tomb.go --
package tomb
const Tomb = "A great package v2"
```
### What did you expect to see?
A passing run.
### What did you see instead?
A failed run.
`goimports` should be "consulting" the main module for matches before dropping down to a module cache-based search. Here that would have resulted in `gopkg.in/tomb.v2` being correctly resolved, instead of `gopkg.in/tomb.v1`.
This will, I suspect, also massively improve the speed of `goimports` in a large majority of cases.
cc @heschik | NeedsInvestigation,Tools | low | Critical |
424,959,472 | opencv | New JasPer release fixing CVEs | JasPer 2.0.16 got recently released.
I would like to see a transition to OpenJPEG, as suggested in https://github.com/opencv/opencv/issues/5849, but in the meantime I think updating the sources for libjasper would be a good idea since the new release fixes two CVEs.
I would have created a PR instead, but given the directory structure at `opencv/3rdparty/libjasper`, which is unfamiliar to me, I'd rather let you handle it properly. | category: imgcodecs | low | Minor |
424,982,212 | react | Memoized components should forward displayName | **Do you want to request a *feature* or report a *bug*?**
I'd like to report a bug.
**What is the current behavior?**
First of all, thanks for the great work on fixing https://github.com/facebook/react/issues/14807. However there is still an issue with the current implementation.
`React.memo` does not forward displayName for tests. In snapshots, components display as `<Component />` and string assertions such as `.find('MyMemoizedComponent')` won't work.
**What is the expected behavior?**
`React.memo` should forward displayName for the test renderer.
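For reference, a minimal sketch of the manual `displayName` workaround (component names here are made up; whether the test renderer picks the name up may depend on the adapter version):
```jsx
import React from 'react';

const MyComponent = (props) => <div>{props.label}</div>;

const MyMemoizedComponent = React.memo(MyComponent);
// Manual workaround until React.memo forwards the wrapped component's name.
MyMemoizedComponent.displayName = 'MyMemoizedComponent';

export default MyMemoizedComponent;
```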
**Which versions of React, and which browser / OS are affected by this issue? Did this work in previous versions of React?**
* React 16.8.5
* Jest 24.5.0
* enzyme 3.9.0
* enzyme-adapter-react-16 1.11.2
---
N.B. - Potentially related to https://github.com/facebook/react/issues/14319, but this is related to the more recent changes to support `memo` in the test renderer. Please close if needed, I'm quite new here!
I'd be happy to submit a PR if the issue is not too complex to look into :smile: | Type: Enhancement,Component: Shallow Renderer | medium | Critical |
424,995,504 | flutter | google_maps_flutter package - CameraPosition does not follow location updates | The GoogleMap widget already is getting the location and listening to location updates if it is instantiated with myLocationEnabled. I would like it to be able to do two additional things with this information:
1. Instantiate the map with the CameraPosition set to the user's current location.
2. Update the CameraPosition when the user's location changes. As it is, the blue dot showing the location moves, but the camera does not, letting the location marker go off screen. (A sketch of the current two-plugin workaround follows below.)
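Roughly, the two-plugin workaround looks like the sketch below; `animateCamera` and `CameraUpdate.newLatLng` are existing google_maps_flutter APIs, while the `location` package usage is an assumption for illustration:
```dart
import 'package:flutter/material.dart';
import 'package:google_maps_flutter/google_maps_flutter.dart';
import 'package:location/location.dart';

class FollowMeMap extends StatefulWidget {
  @override
  _FollowMeMapState createState() => _FollowMeMapState();
}

class _FollowMeMapState extends State<FollowMeMap> {
  GoogleMapController _controller;
  final Location _location = Location();

  void _onMapCreated(GoogleMapController controller) {
    _controller = controller;
    // Re-center the camera on every location update.
    _location.onLocationChanged().listen((LocationData current) {
      _controller?.animateCamera(
        CameraUpdate.newLatLng(LatLng(current.latitude, current.longitude)),
      );
    });
  }

  @override
  Widget build(BuildContext context) {
    return GoogleMap(
      onMapCreated: _onMapCreated,
      myLocationEnabled: true,
      initialCameraPosition:
          const CameraPosition(target: LatLng(0, 0), zoom: 15),
    );
  }
}
```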
I have not found any way to do this without using another plugin that gets the user's initial position and then listens to location updates. This is not ideal: since the map widget is already listening for the location, it seems redundant to have two plugins listening for locations. | c: new feature,p: maps,package,c: proposal,team-ecosystem,P3,triaged-ecosystem | low | Minor |
425,000,028 | pytorch | improve jit error message for legacy constructor | Reported by @jph00
> new_tensor is a legacy constructor and is not supported in the JIT
It'd be helpful to know in the error message what the non-legacy alternative is. | module: docs,triaged | low | Critical |
425,011,574 | go | x/net/nettest: extend TestConn with optional interface checks | EDIT: we'll proceed with Option 1.
---
For the same project as discussed in #30984, I'm using `nettest.TestConn` to test a custom `net.Conn` implementation.
Now that I've got the basics covered, I realize it'd be useful to extend the tests to make sure that my `CloseRead`/`CloseWrite` methods, which are not a mandatory part of `net.Conn`, can also be tested on my type.
I see two options here:
1) Allow `TestConn` to perform a couple of type assertion tests to see if methods such as `CloseRead` and `CloseWrite` are implemented on the `net.Conn` type, and then run additional tests if so.
This has the advantage that anyone who consumes this package and passes a `net.Conn` with these methods will have these tests run. `net.TCPConn`, `net.UnixConn`, and my custom `vsock.Conn` type (as an example) would run these added tests, but `net.UDPConn` would not.
Perhaps it'd also make sense to have an optional test for `SyscallConn` as well, since it is widely implemented.
2) Export `timeoutWrapper` and `connTester` in some form, to enable callers to create their own local tests in a `nettest.TestConn`-style.
This could be nice to keep the amount of code in this package smaller, but it'd also mean that certain tests such as the `CloseRead`/`CloseWrite` would need to be duplicated between different projects.
With this said, I think option 1 is preferable in order to reduce duplication in the community, but I can understand why one wouldn't want to add additional complexity to this package to test methods that are not a required part of `net.Conn`.
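To make option 1 concrete, here is a rough sketch of the kind of type-assertion probing it implies; the interface and helper names are my own, not a proposed API:
```go
package conntest

import (
    "net"
    "testing"
)

// Optional methods that some net.Conn implementations provide.
type closeReader interface{ CloseRead() error }
type closeWriter interface{ CloseWrite() error }

// testOptionalClose runs extra checks only when the optional methods exist.
func testOptionalClose(t *testing.T, c net.Conn) {
    if cr, ok := c.(closeReader); ok {
        if err := cr.CloseRead(); err != nil {
            t.Errorf("CloseRead: %v", err)
        }
    }
    if cw, ok := c.(closeWriter); ok {
        if err := cw.CloseWrite(); err != nil {
            t.Errorf("CloseWrite: %v", err)
        }
    }
}
```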
/cc @dsnet @mikioh @acln0 | Proposal,Proposal-Accepted | low | Minor |
425,015,793 | flutter | Support better scaling of Android platform views | Right now when a scale transform is applied to an AndroidView the texture is drawn scaled without scaling the virtual display and the internal view.
This can be improved by sending the scale factors to the PlatformViewsController and calling [setScaleX](https://developer.android.com/reference/android/view/View.html#setScaleX(float)) and [setScaleY]( https://developer.android.com/reference/android/view/View.html#setScaleY(float)) on the embedded view.
There will still be timing issues when the scale changes at runtime after the view is visible, but for cases where the scale is always the same this should be a net improvement. | platform-android,framework,a: platform-views,c: proposal,P2,team-android,triaged-android | low | Minor |
425,043,188 | pytorch | UnpicklingError when trying to load multiple objects from a file | ## 🐛 UnpicklingError when trying to load multiple objects from a file
Pickle allows dumping multiple self-contained objects into the same file and later loading them through subsequent reads. The pickling mechanism from PyTorch has the same behavior when using an in-memory buffer like `io.BytesIO`, but raises an error when using regular files.
## To Reproduce
This is the behavior of the standard `pickle` module:
```python
import io
import torch
import pickle
b=open('/tmp/file.pt', 'wb')
for i in range(3):
was_at = b.tell()
pickle.dump(torch.ones(10), b)
print(f'{i}: {was_at:04d}-{b.tell():04d} ({b.tell()-was_at:03d})')
b.close()
```
```
>>> 0: 0000-0427 (427)
>>> 1: 0427-0854 (427)
>>> 2: 0854-1281 (427)
```
```python
i = 0
b=open('/tmp/file.pt', 'rb')
while True:
try:
was_at = b.tell()
pickle.load(b)
print(f'{i}: {was_at:04d}-{b.tell():04d} ({b.tell()-was_at:03d})')
i+=1
except EOFError:
break
b.close()
```
```
>>> 0: 0000-0427 (427)
>>> 1: 0427-0854 (427)
>>> 2: 0854-1281 (427)
```
PyTorch works fine with `io.BytesIO`; I get the same behavior:
```python
b=io.BytesIO()
for i in range(3):
was_at = b.tell()
torch.save(torch.ones(10), b)
print(f'{i}: {was_at:04d}-{b.tell():04d} ({b.tell()-was_at:03d})')
```
```
>>> 0: 0000-0377 (377)
>>> 1: 0377-0754 (377)
>>> 2: 0754-1131 (377)
```
```python
i = 0
b.seek(0)
while True:
try:
was_at = b.tell()
torch.load(b)
print(f'{i}: {was_at:04d}-{b.tell():04d} ({b.tell()-was_at:03d})')
i+=1
except EOFError:
break
```
```
>>> 0: 0000-0377 (377)
>>> 1: 0377-0754 (377)
>>> 2: 0754-1131 (377)
```
However, `UnpicklingError` is raised when using the serialization methods from PyTorch on a regular file:
```python
b=open('/tmp/file.pt', 'wb')
for i in range(3):
was_at = b.tell()
torch.save(torch.ones(10), b)
print(f'{i}: {was_at:04d}-{b.tell():04d} ({b.tell()-was_at:03d})')
b.close()
```
```
>>> 0: 0000-0377 (377)
>>> 1: 0377-0754 (377)
>>> 2: 0754-1131 (377)
```
```python
i = 0
b=open('/tmp/file.pt', 'rb')
while True:
try:
was_at = b.tell()
torch.load(b)
print(f'{i}: {was_at:04d}-{b.tell():04d} ({b.tell()-was_at:03d})')
i+=1
except EOFError:
break
b.close()
```
```
>>> 0: 0000--425 (-425)
---------------------------------------------------------------------------
UnpicklingError Traceback (most recent call last)
<ipython-input-38-a8789bdba75a> in <module>
12 try:
13 was_at = b.tell()
---> 14 torch.load(b)
15 print(f'{i}: {was_at:04d}-{b.tell():04d} ({b.tell()-was_at:03d})')
16 i+=1
.../python3.7/site-packages/torch/serialization.py in load(f, map_location, pickle_module)
366 f = open(f, 'rb')
367 try:
--> 368 return _load(f, map_location, pickle_module)
369 finally:
370 if new_fd:
.../python3.7/site-packages/torch/serialization.py in _load(f, map_location, pickle_module)
530 f.seek(0)
531
--> 532 magic_number = pickle_module.load(f)
533 if magic_number != MAGIC_NUMBER:
534 raise RuntimeError("Invalid magic number; corrupt file?")
UnpicklingError: invalid load key, '\x0a'.
```
Note how the read location inside the file given by `b.tell()` ends up negative: `-425`.
## Expected behavior
The serialization methods of pickle and PyTorch should work in similar ways.
## Environment
```
PyTorch version: 1.0.1.post2
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: Ubuntu 18.04.2 LTS
GCC version: (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0
CMake version: Could not collect
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.1
GPU models and configuration: GPU 0: GeForce GTX 1050 Ti with Max-Q Design
Nvidia driver version: 418.43
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] 19.0.3
[conda] pytorch 1.0.1 py3.7_cuda10.0.130_cudnn7.4.2_2 pytorch
```
## Additional context
The main reason I want to serialize multiple objects individually rather than packing them in a list is that they represent inputs that might be created at different times from different processes and that I still want to process in batch.
| module: pickle,module: serialization,triaged | low | Critical |
425,071,304 | vue | TypeScript: Vue types $attrs should be type Record<string, any> | ### Version
2.6.10
### What is expected?
`Vue.prototype.$attrs` should be of type `Record<string, any>` and not `Record<string, string>`.
Since `2.4.0`, `vm.$attrs` has contained extracted bindings not recognized as props. The type of these values is unknown.
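Roughly, the requested change in the typings would look like this (a sketch of the relevant declaration only, not the full `vue/types/vue.d.ts` file):
```ts
// Sketch: the current declaration uses Record<string, string>.
interface Vue {
  readonly $attrs: Record<string, any>;
}
```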
### Steps to reproduce
Take the following component:
```javascript
new Vue({
data: function () {
return {
count: 0
}
},
mounted() {
let someFunc = this.$attrs.someBoundAttr as Function
// Type 'string' cannot be converted to type 'Function'
},
template: `
<button v-on:click="count++" :someBoundAttr="() => count">
You clicked me {{ count }} times.
</button>`
})
```
Notice the error in TypeScript checking:
```
Type 'string' cannot be converted to type 'Function'
```
### What is actually happening?
...
### Reproduction link
[http://www.typescriptlang.org/play/](http://www.typescriptlang.org/play/)
| typescript | medium | Critical |
425,105,330 | go | x/net/nettest: add TestListener API | I'm currently implementing my own `net.Listener` (see also #30984 and #31033), https://godoc.org/github.com/mdlayher/vsock#Listener, and would like to ensure it is in full compliance with the `net.Listener` contract; that is:
- Accept blocks until a connection is received, or it is interrupted
- Close terminates the listener, but can also unblock Accept
- Maybe: test for SetDeadline method (which is common for `net.Listener`, see #6892) and verify that an expired deadline unblocks Accept as well (this idea is in line with what I propose with #31033)
I've played around with this a bit locally to try to see what makes sense, and I will send a draft CL with my proposed API and a single test. The basic idea is to mirror what `TestConn` is doing, and perhaps the two can share a fair bit of internal code.
/cc @dsnet @mikioh @acln0 | Proposal,Proposal-Accepted | low | Major |
425,122,888 | TypeScript | Enum redeclared as var inside block scope |
**TypeScript Version:** TS Playground as of 2019-03-25
**Search Terms:** enum declarations can only merge, enum block scope, enum redeclaration
**Code**
```ts
{ enum Foo { } var Foo }
enum Bar { } var Bar;
```
**Expected behavior:**
They should both error, or they should be both accepted
**Actual behavior:**
Only `Bar` is an error
**Playground Link:**
https://www.typescriptlang.org/play/index.html#src=%7B%20enum%20Foo%20%7B%20%7D%20var%20Foo%20%7D%0A%0Aenum%20Bar%20%7B%20%7D%20var%20Bar%3B%0A%0A(function%20()%20%7B%20enum%20Baz%20%7B%20%7D%20var%20Baz%20%7D)
**Related Issues:**
Found while fixing https://github.com/babel/babel/issues/9763
| Bug | low | Critical |
425,140,496 | vscode | [json] Provide support for highlighting the source of the error in json files rather than highlighting the entire file | In VSCode, I have a RunConfigurationSchema.json file and a runconfig.json file open. When I edit my runconfig.json file and make an invalid change to a parameter (as seen for the 'framework' parameter in the picture attached), all the lines in the json file are highlighted and underlined with green squiggly lines. It is difficult to decipher where the actual error is stemming from unless the user hovers specifically over the parameter containing the error. Can we instead highlight only the line that contains the error?

| feature-request,json | low | Critical |
425,170,780 | rust | Add guidance when unused_imports fires on an item only used in cfg(test) | This is a very common source of confusion for new Rust users: They import a module for use in tests, then get an "unused import" warning when building a non-test target. Meanwhile, removing the import breaks their tests. [See here](https://users.rust-lang.org/t/seemingly-invalid-unused-import-warning/21465) for an example.
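A minimal illustration of the pattern (hypothetical snippet):
```rust
use std::collections::HashMap; // "unused import" warning on non-test builds

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn uses_the_import() {
        // Removing the import above breaks this test, even though
        // `cargo build` reports it as unused.
        let m: HashMap<u32, u32> = HashMap::new();
        assert!(m.is_empty());
    }
}
```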
I think it would be tremendously helpful to people learning the language to provide a hint if the item is actually used in `cfg(test)`. I'm not sure how difficult this would be to implement, however. | C-enhancement,A-lints,T-compiler,A-libtest,A-suggestion-diagnostics,D-confusing,D-newcomer-roadblock | low | Major |
425,172,751 | godot | [3.x] Autocomplete of vars fails unless directly below and also if line previously had error | **Godot version:**
3.1 64-bit
**OS/device including version:**
Windows 10
**Issue description:**
Autocomplete does not work
**Steps to reproduce:**
Hello,
New user here. Two autocomplete bugs that I encountered on using Godot for the first time, hampering my ability to learn and use the editor/language properly...
Autocomplete not working for instances
1. Using latest 64-bit Windows 3.1 version standard edition download the 2D Kinematic demo (or simply use the code below)
2. In the script, move the cursor to directly below line 11 where 'motion' variable is defined as Vector2D
3. Type motion. and auto complete will present the correct list, e.g. find 'normalized()'
4. Move cursor to bottom of script and repeat item 3. All you see are generic 'object' constants and methods; specific methods, e.g. 'normalized()', are not available
I saw a bug report for this from a while ago that said it was fixed, but clearly not quite. This bug means you simply cannot rely on autocomplete, and scripting (for new users) becomes trial and error.
Autocomplete fails to run at all after a fault is found, even after fault is cleared:
1. Using the sample script place the cursor anywhere below the 'var motion=Vector2D()' line
2. Type 'motion. ' then press enter. You will get an error message saying 'expected identifier' and the line turns brown.
3. Go back to this line and remove the '.' then press '.' again and no autocomplete is shown
4. Delete all the text to the start of the line and repeat 'motion.' and still autocomplete is not shown
5. Delete all the line, then delete again to remove the line, and the error disappears and you can type again
i.e. once an error is found, even after you start editing the offending line, it never goes away, and features such as autocomplete remain unavailable.
I have also found that sometimes, even after removing a line and retyping it, autocomplete doesn't give you any hints; you have to remove the line a couple of times and then eventually it works!
Sample code for player.gd is as below:
**Minimal reproduction project:**
```gdscript
extends KinematicBody2D
# This is a demo showing how KinematicBody2D
# move_and_slide works.
# Member variables
const MOTION_SPEED = 160 # Pixels/second
func _physics_process(delta):
var motion = Vector2()
if Input.is_action_pressed("move_up"):
motion += Vector2(0, -1)
if Input.is_action_pressed("move_bottom"):
motion += Vector2(0, 1)
if Input.is_action_pressed("move_left"):
motion += Vector2(-1, 0)
if Input.is_action_pressed("move_right"):
motion += Vector2(1, 0)
motion = motion.normalized() * MOTION_SPEED
move_and_slide(motion)
```
| bug,topic:gdscript,topic:editor | low | Critical |
425,198,177 | go | x/review: use alternative remote repository | ### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12.1 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/user/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/user/go"
GOPROXY=""
GORACE=""
GOROOT="/home/user/.local/share/umake/go/go-lang"
GOTMPDIR=""
GOTOOLDIR="/home/user/.local/share/umake/go/go-lang/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/home/user/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build019412724=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
```bash
git codereview mail
```
### What did you expect to see?
```
remote: Processing changes: refs: 1, new: 1, done
remote:
remote: SUCCESS
remote:
remote: https://go-review.googlesource.com/c/gddo/+/123456 pkg: commit message [NEW]
remote:
```
### What did you see instead?
```
```
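One possible workaround in the meantime (sketch; the remote names and repository URL are only examples) is to keep the Gerrit remote named `origin` and give the GitHub remote another name:
```bash
git remote rename origin github
git remote add origin https://go.googlesource.com/gddo
git codereview mail
```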
In #9266, this high level issue was identified: `git-codereview` only references the `origin` remote when [pushing changes](https://github.com/golang/review/blob/master/git-codereview/mail.go#L138). This behavior is probably nice in preventing unnecessary configuration via `git branch --set-upstream-to=`, but it makes having an `origin` that does not point to `go.googlesource.com` impossible. Would it be possible to read the `remote` out of the gitconfig or allow overriding via flags? | NeedsInvestigation | low | Critical |
425,199,998 | go | cmd/compile: pack structs containing anonymous fields more tightly | I attempted to refactor some common fields (E) out of a struct (T1), yielding (T2). Unfortunately, this changed the packing of the fields, which makes this refactoring infeasible in this case.
```go
type E struct {
X int
B uint8
}
type T1 struct {
X int
B uint8
C uint8
}
type T2 struct {
E
C uint8
}
```
Observe the Offset of C in this code: https://play.golang.com/p/ac5CCIWIlZS
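For reference, a self-contained way to observe the difference (exact numbers depend on the architecture; the comments assume a 64-bit platform):
```go
package main

import (
	"fmt"
	"unsafe"
)

type E struct {
	X int
	B uint8
}

type T1 struct {
	X int
	B uint8
	C uint8
}

type T2 struct {
	E
	C uint8
}

func main() {
	var a T1
	var b T2
	// T1 packs C right after B (offset 9, size 16 on amd64).
	fmt.Println(unsafe.Offsetof(a.C), unsafe.Sizeof(a))
	// T2 pads the embedded E to its full size first, pushing C to offset 16
	// and growing the struct to 24 bytes on amd64.
	fmt.Println(unsafe.Offsetof(b.C), unsafe.Sizeof(b))
}
```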
On a first pass, I don't see anything in the spec that forbids T1 and T2 being laid out identically in memory. | Performance,NeedsInvestigation,compiler/runtime | low | Minor |
425,202,876 | godot | [Bullet] Convex Collision Sibling wrong margin with Bullet | **Godot version:**
3.1
**OS/device including version:**
Windows 10 x64 / NVIDIA GeForce GTX 1060 6GB / Intel Core i7 4790K
**Issue description:**
When a CollisionShape is created from a mesh using "**Create Convex Collision Sibling**", it is offset from the bottom cube by the "collision margin" distance (0.04):

This only happens with **ConvexPolygonShape**.
With **ConcavePolygonShape** or even a BoxShape or other native collision shapes, **this does not occur**.

**Steps to reproduce:**
1) Run the attached project. It's with a CollisionShape as **ConvexPolygonShape** and you'll see the margin.
2) Change the red cube CollisionShape to BoxShape or generate a ConcavePolygonShape and the margin is not there.
**Minimal reproduction project:**
[testes.zip](https://github.com/godotengine/godot/files/3006314/testes.zip)
| bug,discussion,confirmed,documentation,topic:physics | low | Major |
425,235,348 | flutter | Make SnackBarAction's check for Scaffold nullOk | Currently it's not possible to use a SnackBar with a SnackBarAction somewhere other than a Scaffold (like a persistent, global SnackBar) because of the SnackBarAction's attempt to hide the SnackBar through the Scaffold.
It seems all that would have to change is _handlePressed in _SnackBarActionState, changing the last line:
```dart
Scaffold.of(context).hideCurrentSnackBar(reason: SnackBarClosedReason.action);
```
to
```dart
Scaffold.of(context, nullOk: true)?.hideCurrentSnackBar(reason: SnackBarClosedReason.action);
```
| framework,f: material design,c: proposal,P3,team-design,triaged-design | low | Minor |
425,235,633 | godot | RayShape2D Collides with the bottom of one way collision tiles |
**Godot version:**
v3.1.stable.official (Steam)
**OS/device including version:**
macOS Mojave 10.14.3
**Issue description:**
If you use RayShape2D with one way collision tiles, it collides with the bottom of them.
**Steps to reproduce:**
Create a KinematicBody2D with a RayShape2D:

Create a tilemap with one way collision tiles:


Create a basic jump logic, and then when you jump you will collide with the bottom of the tiles:

**Minimal reproduction project:**
[rayshape2d_bug.zip](https://github.com/godotengine/godot/files/3006660/rayshape2d_bug.zip)
| bug,confirmed,topic:physics | low | Critical |
425,319,953 | vscode | Debug: Server ready action pattern should have validation | Testing #71074
When I input an invalid regex in the `serverReadyAction.pattern` field, there is no JSON validation or warning on debug start:

| bug,feature-request,debug | low | Critical |
425,321,685 | angular | ReactiveForms: FormGroup.removeControl doesn't call control.setParent(null) | # ๐ bug report
### Affected Package
`@angular/forms`
### Is this a regression?
Most probably no.
### Description
The `FormGroup` class' `removeControl` method does not call `setParent(null)` for the target control. This appears to be an inconsistency in the implementation.
Our expectation is that `removeControl` would call `control.setParent(null)`, to clear the control's parent once detached. This would also make it symmetric with `addControl`.
#### Use case
We have a dynamic form implementation where a class extending `FormGroup` is dynamically attaching/detaching child groups/controls using `addControl`/`removeControl`. The controls can react to their inclusion/exclusion in their `setParent` method (to e.g. subscribe to a parent's `attachedState` observable).
Since we're extending `FormGroup` this is easy to work around by just overriding `removeControl`, but IMHO the inconsistency in the framework should be fixed.
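For reference, the override we use looks roughly like this (a workaround sketch; the cast is needed because `setParent`'s declared parameter type does not include `null`):
```ts
import { FormGroup } from '@angular/forms';

// Workaround sketch: mirror addControl's behaviour by clearing the parent on removal.
export class DetachingFormGroup extends FormGroup {
  removeControl(name: string): void {
    const control = this.controls[name];
    super.removeControl(name);
    if (control) {
      control.setParent(null as any);
    }
  }
}
```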
## ๐ฌ Minimal Reproduction
The call is missing from: https://github.com/angular/angular/blob/master/packages/forms/src/model.ts#L1298 .
## ๐ฅ Exception or Error
None.
## ๐ Your Environment
**Angular Version: 7.2.0**
<pre><code>
Angular CLI: 7.3.0
Node: 10.15.3
OS: win32 x64
</code></pre>
**Anything else relevant?**
No. | area: forms,P4 | low | Critical |
425,342,005 | flutter | [tool]Provide better feedback when "Initializing Gradle" | Take a look at #26077, #25808 and especially #15106, all of which are about the "Initializing Gradle" step taking forever, and all of which have been closed with no changes, despite tons of people having (apparently) the same problem.
There seem to be a number of issues - it downloads 400MB of Gradle (wtf), I get this error: `java.lang.RuntimeException: Timeout of 120000 reached waiting for exclusive access to file: ~/wrapper/dists/gradle-4.10.2-all/9fahxiiecdb76a5g3aw9oi8rv/gradle-4.10.2-all.zip`, and so on.
Clearly this step is not robust enough to have "Initializing Gradle" as the only feedback. It should show more steps - "Downloading Gradle", "Unzipping Gradle", etc.
Btw I am talking about the VSCode extension, and the `flutter run` command line. I suspect it may output more information in Android Studio. | tool,t: gradle,a: quality,c: proposal,P3,team-tool,triaged-tool | low | Critical |
425,350,620 | rust | compiletest: emit time spent for each test suite | Right now we have test suites with thousands of files each. It would be useful for compiletest to indicate how long it takes when it runs each such suite, so that someone reviewing where time is going during testing can know where the bulk of time overall is being spent without having to stare at the output holding a stopwatch. | A-testsuite,C-enhancement,T-bootstrap,E-medium,A-compiletest | low | Minor |
425,383,503 | create-react-app | Defer loading of code-split CSS | I have used code splitting to split my CRA code which is working great. However, I want to be able to delay the loading of certain CSS files. For example, I lazy load my page's footer using reat-lazyload (which loads the component as you scroll down the page) but the CSS is always loaded in on the load of the page by CRA. Is there a way to delay the loading of certain CSS chunks with CRA to remove unnecessary download bandwidth during the critical page load? | issue: needs investigation | low | Major |
425,420,793 | kubernetes | Separate cacher is created for each version of a resource | This is forked from https://github.com/kubernetes/kubernetes/issues/51825 which was closed without fixing this particular issue.
Currently a separate cacher is created for every version of an API.
That may introduce significant overhead for both memory (every object is kept in memory by each cacher) and CPU (each cacher separately watches all changes).
The reason for that is that we're creating registry for each version completely independently from each other.
One example (out of many)
https://github.com/kubernetes/kubernetes/blob/master/pkg/registry/apps/rest/storage_apps.go#L60
We create storage for every version (v1beta1, v1beta2, v1) independently, which internally calls the storage decorator:
https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/registry/generic/registry/store.go#L1289
and that results in a separate cacher for every single version.
We should finally fix that.
@kubernetes/sig-api-machinery-bugs @kubernetes/sig-scalability-bugs @smarterclayton
@resouer - this may be something your team wants to look into | kind/bug,sig/scalability,sig/api-machinery,priority/important-longterm,lifecycle/frozen | medium | Critical |
425,443,687 | vscode | No clue that server ready action is running | re https://github.com/Microsoft/vscode/issues/71074
* have a slow vm
* configure service action to open external app
* have firefox as default browser (which likely is slow starting or installing updates on start)
* start debugging
* nothing happens... firefox isn't showing UI, the debug output doesn't hint towards anything
I would expect that the debug output/terminal shows a message like "detected localhost:3000, starting default browser..." | feature-request,debug | low | Critical |
425,448,265 | pytorch | [CPP] Allow binding config structs into the Python front end | ## ๐ Feature
Allow binding config structs into the Python front end.
Like
```
struct config
{
int x_min, x_max, y_min, y_max;
};
torch::jit::RegisterStruct("config::config", &config);
```
## Motivation
Right now, I have to pass configs into the custom ops as individual arguments, which gets really hairy in practice:
```
torch::Tensor my_kernel(torch::Tensor my_actual_tensor, int x_min, int x_max, int y_min, int y_max, int z_min, int z_max, ...other a ton of configs...)
```
Instead, it would be much nicer if it could be:
```
torch::Tensor my_kernel(torch::Tensor my_actual_tensor, config &config)
```
cc @yf225 @glaringlee | oncall: jit,module: cpp | low | Minor |
425,453,208 | scrcpy | kudos and compliments! | I just wanted to compliment you on this excellent effort. This is an unbelievably stable and useful project. I couldn't find any other way to compliment this project on GitHub, so I thought that among the sea of issues and feature requests, I would drop one that's just a compliment. You can close/delete or do whatever should be done to this issue :-) | wontfix | low | Major |
425,460,467 | godot | AutoExposure flickering wildly when framebuffer allocation = 2D | **Godot version:**
81292665d5dcc991d3c9341245b269193329ee22 (latest master as of this issue), but occurs since at least 3.1
**OS/device including version:**
Antergos, kernel 5.0.4-arch1-1-ARCH
NVIDIA GeForce GTX 1080 Ti, proprietary drivers, version 418.56
**Issue description:**
AutoExposure causes the screen to flicker like it has a personal vendetta against people with epilepsy when there is a sprite with a normalmap on-screen and the framebuffer allocation is set to 2D. Possibly related to #16224?
**Steps to reproduce:**
- Set Project Settings -> Rendering -> Quality -> Intended Usage -> Framebuffer Allocation to 2D
- Create a Sprite with a normal map texture in a 2D scene
- Add a WorldEnvironment node with Background Mode = Canvas and Auto Exposure enabled
**Minimal reproduction project:**
[bugrepro.tar.gz](https://github.com/godotengine/godot/files/3008629/bugrepro.tar.gz)
| bug,topic:rendering,confirmed,topic:2d | low | Critical |
425,469,329 | vscode | Accessibility service not firing onDidChangeAccessibilitySupport | Found testing https://github.com/Microsoft/vscode/issues/67744
All testing was done in the explorer view.
I turned on NVDA and saw that the list keyboard navigation changed.
When I turned off NVDA, I expected that the list keyboard navigation would go back to the new filtering behavior. It did not.
I reloaded. It still stayed on the simple navigation.
I had to fully restart VS Code before the navigation went back to the filtering behavior. If VS Code can automatically switch to simple when there is a screen reader (no restart) then it should be able to switch back. | upstream,accessibility,windows,workbench-launch,upstream-issue-linked,chromium | low | Major |
425,507,418 | flutter | showSearch keyboardType | You don't always search with text; sometimes you search with numbers. There should be an option to show a numeric TextInputType on a showSearch screen. | c: new feature,framework,f: material design,c: proposal,P3,team-design,triaged-design | low | Minor |
425,512,047 | go | net/http/httptest: make it possible to use a Server (TLS or not) to test cookies | # httptest.NewTLSServer uses a cert that is not valid for localhost
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.12.1 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes.
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/danielcormier/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/danielcormier/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/Cellar/go/1.12.1/libexec"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.12.1/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/84/_6l41bt970l9fmsrwc_p1gv00000gn/T/go-build114647702=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
I'm trying to test something that involves cookies that set the `Domain` attribute. As discussed in #12610, `*net/http/cookiejar.Jar` won't return cookies for an IP (as per the relevant RFCs). `(*httptest.Server).URL` has the host set to an IP address (defaults to `127.0.0.1` or `::1`).
To do the test I needed, I spun up an `*httptest.Server` using `httptest.NewTLSServer(...)`, replaced the IP in `(*httptest.Server).URL` with `localhost` and attempted to send a request to it with `(*httptest.Server).Client()`.
### What did you expect to see?
I expected the `httptest.NewTLSServer(...)` to use a TLS cert that could be valid for `localhost`, as well as the loopback IP addresses.
I expected to be able to successfully make an HTTPS request to `localhost` at the correct port that the `*httptest.Server` was listening on by using `(*httptest.Server).Client()`.
### What did you see instead?
`x509: certificate is valid for example.com, not localhost`
### Example
For completeness, I'm including the suite of tests showing the different behaviors with `*cookiejar.Jar` and the various combinations of `*httptest.Server`. The problematic test here is `TestCookies/tls/localhost/default_cert` (line 174). The test at line 185 shows that the original issue with cookies is resolved if I send requests to `localhost` with a cert valid for that hostname.
```golang
package cookies_test
import (
"crypto/tls"
"fmt"
"io"
"io/ioutil"
"net"
"net/http"
"net/http/cookiejar"
"net/http/httptest"
"net/url"
"testing"
"time"
)
func TestCookies(t *testing.T) {
const (
routeSetCookie = "/set-cookie"
routeExpectCookie = "/expect-cookie"
cookieName = "token"
)
handler := func(tb testing.TB) http.Handler {
setCookie := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
host, _, err := net.SplitHostPort(r.Host)
if err != nil {
host = r.Host
}
cookie := &http.Cookie{
Name: cookieName,
Value: "the value",
Domain: host,
Path: "/",
HttpOnly: true,
}
tb.Logf("Setting cookie: %s", cookie)
http.SetCookie(w, cookie)
})
expectCookie := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
_, err := r.Cookie(cookieName)
switch err {
case nil:
// Success!
case http.ErrNoCookie:
msg := "No cookie"
tb.Log(msg)
http.Error(w, msg, http.StatusBadRequest)
return
default:
msg := fmt.Sprintf("Failed to get cookie: %v", err)
tb.Log(msg)
http.Error(w, msg, http.StatusInternalServerError)
return
}
tb.Log("The cookie was set")
})
mux := http.NewServeMux()
mux.Handle(routeSetCookie, setCookie)
mux.Handle(routeExpectCookie, expectCookie)
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
target := r.RequestURI
tb.Logf("---- START %s", target)
mux.ServeHTTP(w, r)
tb.Logf("---- END %s", target)
})
}
testCookies := func(tb testing.TB, svr *httptest.Server) {
jar, err := cookiejar.New(nil)
if err != nil {
tb.Fatal(err)
}
httpClient := svr.Client()
httpClient.Timeout = 1 * time.Second
httpClient.Jar = jar
resp, err := httpClient.Get(svr.URL + routeExpectCookie)
if err != nil {
tb.Fatal(err)
}
defer func() {
io.Copy(ioutil.Discard, resp.Body)
resp.Body.Close()
}()
if resp.StatusCode != http.StatusBadRequest {
body, _ := ioutil.ReadAll(resp.Body)
tb.Fatalf("Should not have cookie: %s\n%s", resp.Status, body)
}
resp, err = httpClient.Get(svr.URL + routeSetCookie)
if err != nil {
tb.Fatal(err)
}
if resp.StatusCode != http.StatusOK {
body, _ := ioutil.ReadAll(resp.Body)
tb.Fatalf("%s\n%s", resp.Status, body)
}
resp, err = httpClient.Get(svr.URL + routeExpectCookie)
if err != nil {
tb.Fatal(err)
}
if resp.StatusCode != http.StatusOK {
body, _ := ioutil.ReadAll(resp.Body)
tb.Fatalf("%s\n%s", resp.Status, body)
}
}
useLocalhost := func(tb testing.TB, svr *httptest.Server) {
svrURL, err := url.Parse(svr.URL)
if err != nil {
tb.Fatal(err)
}
svrURL.Host = net.JoinHostPort("localhost", svrURL.Port())
svr.URL = svrURL.String()
}
t.Run("no tls", func(t *testing.T) {
t.Run("ip", func(t *testing.T) {
// This fails because `cookiejar.CookieJar` follows the RFCs and drops the `Domain` of a
// cookie where its set to an IP, rather than a domain. We'll skip it, but it's here if
// you want to see for yourself.
t.SkipNow()
svr := httptest.NewServer(handler(t))
defer svr.Close()
testCookies(t, svr)
})
t.Run("localhost", func(t *testing.T) {
// This works.
svr := httptest.NewServer(handler(t))
defer svr.Close()
useLocalhost(t, svr)
testCookies(t, svr)
})
})
t.Run("tls", func(t *testing.T) {
t.Run("ip", func(t *testing.T) {
// This fails because `cookiejar.CookieJar` follows the RFCs and drops the `Domain` of a
// cookie where its set to an IP, rather than a domain. We'll skip it, but it's here if
// you want to see for yourself.
t.SkipNow()
svr := httptest.NewTLSServer(handler(t))
defer svr.Close()
testCookies(t, svr)
})
t.Run("localhost", func(t *testing.T) {
t.Run("default cert", func(t *testing.T) {
// This fails because the cert `httptest.NewTLSServer` serves up is valid for
// 127.0.0.1, ::1, and examlple.com. Not localhost.
svr := httptest.NewTLSServer(handler(t))
defer svr.Close()
useLocalhost(t, svr)
testCookies(t, svr)
})
t.Run("localhost cert", func(t *testing.T) {
// This works. But, you need to generate your own cert for localhost.
svr := httptest.NewUnstartedServer(handler(t))
certPEM := []byte(`-----BEGIN CERTIFICATE-----
MIIDJTCCAg2gAwIBAgIQas3l/GRJkOGvAZP1CLIvRzANBgkqhkiG9w0BAQsFADAb
MRkwFwYDVQQKExBBY21lIENvcnBvcmF0aW9uMCAXDTAwMDEwMTAwMDAwMFoYDzIx
MDAwMTAxMDAwMDAwWjAbMRkwFwYDVQQKExBBY21lIENvcnBvcmF0aW9uMIIBIjAN
BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAn46okXbPHDmuwHMcQHyPtDl1qoKL
WA/U5x1VcLHGOR6vQKjkNUXbW0yU0HYcyBtmr5gugdmlaFvCRlMSaG1pyC5iCCha
HlTyFyaZi0o2zGT34fS8Jj2WUKE/pR9pOqEoWx8UezBHw/NBZjGCjKe4ASzCQqbn
KA6DxQfRBypU+OFAIK3KsRP6Xvwqd2N/a5FybL9ixKYNbAj7b2vAhW7NIWw++m2T
Hif+bTsEhLAGUG3KGW9OGcJiAewyZb4DgZPgE1ourEud9goVbcCTZBYbpV3U0tZa
XxqJIOfhsfCe4fDcqe1Hspq47SLdvP8FP/qKTbFOqoA/NAlrmboxOw+mXwIDAQAB
o2MwYTAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAwwCgYIKwYBBQUHAwEwDAYDVR0T
AQH/BAIwADAsBgNVHREEJTAjgglsb2NhbGhvc3SHEAAAAAAAAAAAAAAAAAAAAAGH
BH8AAAEwDQYJKoZIhvcNAQELBQADggEBAG2cVK1TZvHoaiqA40QEjKehKqq4vKLc
At/FrITEgvNTIkvguEvLw5wsUO/3Nt/atjWtFdSJCLWCLzrgiLOLtJubkrDzzbus
/OsI0cf/fMTyCnjt64efSz2RDPPllRbJd3zZBkuOWhPoxx/Sz0VRvQKGFb9mvPoI
PTZ22ugwZdS/3PnMEoVO46iQumGARXQEbiGApeXPObK0E6Fs7pqwomU9Ny2XsyXS
je06pfouDv8UlLzZLY/fVJLHN6aM7odw5iPp2p7ttFdgn1l/LVlZVX9FBwegHXet
5OSC7pDc+kbLg1cJE8/7dF47VBEVKSvr5ldgRuvDtEf4PupKRl4rhik=
-----END CERTIFICATE-----`)
keyPEM := []byte(`-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEAn46okXbPHDmuwHMcQHyPtDl1qoKLWA/U5x1VcLHGOR6vQKjk
NUXbW0yU0HYcyBtmr5gugdmlaFvCRlMSaG1pyC5iCChaHlTyFyaZi0o2zGT34fS8
Jj2WUKE/pR9pOqEoWx8UezBHw/NBZjGCjKe4ASzCQqbnKA6DxQfRBypU+OFAIK3K
sRP6Xvwqd2N/a5FybL9ixKYNbAj7b2vAhW7NIWw++m2THif+bTsEhLAGUG3KGW9O
GcJiAewyZb4DgZPgE1ourEud9goVbcCTZBYbpV3U0tZaXxqJIOfhsfCe4fDcqe1H
spq47SLdvP8FP/qKTbFOqoA/NAlrmboxOw+mXwIDAQABAoIBAQCLVxlNF5WdT56W
ALDOfDk/KeLhSmoIOKM0RkDETuwOHAbuj8/j2iLLo6BeQJe4BX3yoRMUYQ77iQ6r
PYbY3ZxAroj8GMlCrepRX2s94kziyNZVZNYfCy/HMFqViE3sXqsQkJ7hSfOSY1Bc
v6YD0cB2fjET5g/+wlY+7imUeVqFkUd5+CuSa4MVheWRiXCFydm+GdUMbHGJauZk
KYSz6oE5vXkDCbcjpyH2Ay7QuHiE00wI2DqsvkZJy8et+XgYL8iNj10JulnDXJg6
MmSf0ZsDfhfJW9AQDzZjXfbSRsskztnehN4UcJH8enLaLbanlYisPpIsj9jpqLwt
EDcsHX+ZAoGBAMC0nO9MSoxu4Q9WP8Lq5qT1QYrRvmSO9mHz+4wmYvConofucqlK
M6HXD/qXIU8pTHZ5WHjnnEyNOvVdsK/P6uYkdqWRXig8/zgoi6DGuujlthJ7BKYW
I7Fvh2z217p3y0IHQvHYjxQk0ag9kOxkdqiYW6WxNcUj2QeXgDkEjcddAoGBANP2
0XI1tEm+IThXHnnT8DYci4xIi9JBvkWgq+atw8ZNicNfN+a0RX5f42fFUCkGE3F6
JgQgSwIAr6zbLKH8RzwU/V5dpO7vuPrgsCRwFsovKAhyCpW0PflJXIKPY6xrbRnc
t2cSOitZzWBdGQJQANGcd+qdGDG/NBcsYdchKfTrAoGAMS/ovsviW2YR3DBPphj/
NivDxwMybchv6yCznFpP9s2TaW7bpYpjE3Qph/T7c5E/Cx5+Dp5Prtp9qhN3/eg8
NPIptqkcN3kaS+NNgIQ5QSkhCCaOUTZldezZzF5VQitBnmDsHX8BRkr/mMneK/iY
sP/ypKBO8TrtMprhB6y546ECgYEAgFXwejYJ8pwrgPE+goTP6/NcipNiFOu5SG7/
pauP3YEU6DW+ovCDIwDrrujIoA4Nt6c9XUIwKAZCV2Zcn7cfakFLJteMBR8f4MYp
3+X95mym0HY78mgvHcBNQr+OmdZxODdq0/01OwokTzQO8FeAJ2mVMXfsLjKWV3GH
y7lIrgECgYEAiZIEx3fBc3TBcaZb5vbWyAQyfC5vgI0N25ZaIwoG5g6CkjKt8nfH
Xfl1da9pWbcAgRLlq+XhqAJQdUjZ0NfKeWSQxT8TQob8ZfiAHXwjTf20qFrarsPl
jVyKqKuj7Vl7evexIhY03RL6S/koyDGJWdUt9myZB6mdFJBBFQIuv8U=
-----END RSA PRIVATE KEY-----`)
cert, err := tls.X509KeyPair(certPEM, keyPEM)
if err != nil {
t.Fatal(err)
}
svr.TLS = &tls.Config{
Certificates: []tls.Certificate{cert},
}
svr.StartTLS()
defer svr.Close()
useLocalhost(t, svr)
testCookies(t, svr)
})
})
})
}
```
I attempted to have [a conversation](https://groups.google.com/d/msg/golang-nuts/rTWMjLJa9U8/ty4mDuXEBAAJ) about the hosts included in the cert used by `httptest.NewTLSServer()` in the golang-nuts group, but it went nowhere. | help wanted,NeedsInvestigation,FeatureRequest | low | Critical |
425,514,249 | vscode | Call hierarchy: use different icons to express semantics of call graph nodes | testing #71083:
Since all nodes of a call graph are functions, representing them with a function icon does not provide additional information:

Instead we could use different icons to express the following semantics:
- an icon for the "peeked" function (root of the tree). This could be the icon you are using today.
- a strong (bold) up-arrow-like icon (โ) for direct callers
- a dimmed up-arrow-like icon for indirect callers
Another dimension to encode in the icon could be information about the caller being local or external. | feature-request,callhierarchy | low | Minor |
425,565,628 | vscode | File Provider: support symbolic link operations | With the new file service we can support move/copy across file system providers for files and folders. However, if one of the items is a symbolic link, there is currently no way to re-create such a link on the target provider.
In node.js terms there is a method called [`symlink`](https://nodejs.org/docs/v10.2.0/api/fs.html#fs_fs_symlink_target_path_type_callback).
I think conceptually we would need new API such as `link(source, target)` and then providers have to deal with the quirks.
On top of that we also need a way to resolve the target of the link, otherwise we cannot make a decision how to restore it.
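A rough TypeScript sketch of the shape such provider methods could take (all names are hypothetical and paths are shown as plain strings for brevity):
```ts
// Hypothetical additions to a file system provider; none of these names exist today.
interface LinkCapableProvider {
  // Re-create a symbolic link at `target` that points to `source`.
  link(source: string, target: string): Promise<void>;
  // Resolve where an existing link points, so move/copy can decide how to restore it.
  readLink(resource: string): Promise<string>;
}
```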
Bottom line: I think it only makes sense to preserve symlinks if they are self-contained in the data that is being moved/copied, not when they point outside of it. | feature-request,api,remote,file-io | low | Minor |
425,581,631 | godot | mesh render problems with concave n-gons | **Godot version:**
3.1 stable
**OS/device including version:**
Windows 10, GTX 1080, 417.71
**Issue description:**
Imported a self-made model and got an incorrect rendering in the preview window.
The model was made with Blender. The exported Wavefront object is correctly rendered when imported back into an empty Blender project. The object also renders correctly with the Unity engine.
(godot)
http://prntscr.com/n39qih
(blender)
http://prntscr.com/n39rjl
(unity)
http://prntscr.com/n39snu
**Steps to reproduce:**
- Use in zip included object (curve.obj)
- Import it into Godot
- Drag it into any scene
- Create a new empty material for the first material slot of the mesh, for better visualisation
- Inspect model
**Minimal reproduction project:**
[ModelBug.zip](https://github.com/godotengine/godot/files/3009668/ModelBug.zip) | bug,documentation,topic:import | low | Critical |
425,583,883 | flutter | "flutter" tool should check for announcements | We should have a well-defined location where we store announcements, and the flutter tool should check this location regularly and report currently-active announcements when it is run.
Such announcements should also be reported to IDEs using `--machine`.
We would use this relatively rarely. One example of when we would use this is to announce the developer survey.
Announcements should only be shown for a few days before being automatically muted.
There should be a "flutter announcements" command that shows all the announcements (even ones that got muted). Maybe also a "flutter announcements mute" command that mutes all current announcements (in which case "flutter announcements" would be a shorthand for "flutter announcements show" or some such). | c: new feature,tool,from: study,customer: product,P2,team-tool,triaged-tool | low | Major |
425,593,881 | TypeScript | Incremental --build, then delete generated js file, then another incremental --build does not recreate js file |
**TypeScript Version:** 3.4.0-dev.20190326
**Search Terms:**
incremental
**Code**
tsconfig.json:
```json
{
"compilerOptions": {
"incremental": true
}
}
```
x.ts:
```ts
console.log("hello");
```
**Expected behavior:**
* `$ tsc --build`
* `x.js` has been generated
* `$ rm x.js`
* `$ tsc --build`
Expected: x.js has been regenerated
**Actual behavior:**
x.js still does not exist
**Related Issues:**
Did not find any. | Suggestion,Awaiting More Feedback,Domain: --incremental | high | Critical |
425,644,053 | vscode | http.systemCertificates requires window reload | Testing #71108.
Following the steps in the test item, once I'm in a state where the extension receives data from the server, unchecking `http.systemCertificates` has no effect. I need to reload the window to make the extension throw an error again. | bug,proxy | low | Critical |
425,652,657 | TypeScript | Change API for compiler option "target" to use strings instead of numbers | Currently, `ScriptTarget.ESNext` has different values depending on which version of TS you are using.
This makes it very hard for clients of tsserver to resiliently allow users to specify "ESNext" and have it work across changes to `ScriptTarget`.
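For illustration, a sketch of the difference from a client's point of view (using the public compiler API; the enum's numeric values are what shift between releases):
```ts
import * as ts from "typescript";

// Today: clients pass a numeric enum member whose value differs across TS versions.
const numeric: ts.CompilerOptions = { target: ts.ScriptTarget.ESNext };

// String form: letting the compiler map "esnext" keeps clients version-agnostic.
const { options, errors } = ts.convertCompilerOptionsFromJson({ target: "esnext" }, ".");

console.log(numeric.target, options.target, errors.length);
```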
After discussing with @RyanCavanaugh and @weswigham, it seems to make sense to change this setting to expect strings instead of numbers, so that moving forward, `esnext` is interpreted as the version of ES Next supported by that version of TS (i.e. 2019 for 3.3, "2020" for 3.4). | Needs Investigation | low | Minor |
425,663,594 | flutter | Platform messages sent before Flutter initializes are lost | There are many cases where the platform will respond to lower-level events prior to Flutter fully initializing. For example (on Android):
- Execution via a shortcut
- Push notifications
- Low-level callbacks
- App/Deep links
In these instances, the platform may wish to notify Flutter that these things have happened via a `MethodChannel` (or other mechanism) when it starts. It's fairly trivial to write a plugin to buffer this information and to check for these conditions on startup, but I am guessing that this makes more sense as part of the engine.
The way I'm thinking of it is that there are two possible scenarios:
1. The platform (developer) can add values to an initialization object that can be queried on startup and flushed at a point (either automatically or manually)
2. Any platform messages sent before Flutter initialization has completed could be buffered / queued and sent when Flutter executes.
Option 2 could create an issue as it could flood the application on initialization and put the app into an unstable state. | c: new feature,engine,a: fidelity,c: proposal,P3,team-engine,triaged-engine | low | Minor |
425,711,015 | youtube-dl | Error on litv.tv | F:\youtube-dl 2019.03.18>youtube-dl -i https://www.litv.tv/vod/comic/content.do?content_id=VOD00142950
[LiTV] VOD00142950: Downloading webpage
[LiTV] VOD00142950: Downloading JSON metadata
ERROR: Unable to download JSON metadata: HTTP Error 405: Method Not Allowed (caused by HTTPError()); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Can the litv script be modified to fix this?
| geo-restricted | low | Critical |
425,739,330 | flutter | Closing BottomSheet - managing bottomsheet controller is a pain | In order to replace the bottom sheet you need to first close it, if it already exists, then showBottomSheet.
The Scaffold.of(context) method is very useful when showing a bottom sheet, but you have to save the controller returned by Scaffold.showBottomSheet() in order to be able to close it, because you can't just show a new bottom sheet without closing an existing one.
Since a scaffold would only ever show one bottom sheet, IMHO the controller should be stored in the scaffold state for us, so we could then simply call Scaffold.of(context).closeBottomSheet(), or even better, make the showBottomSheet method clean out an existing sheet on our behalf.
Right now, I have to save the controller and pass it around, or make it available using some other method (Bloc etc). | framework,f: material design,a: quality,c: proposal,P3,team-design,triaged-design | low | Minor |
425,767,944 | three.js | GLTFExporter: support animations from multiple scenes | The exporter currently has this:
```javascript
for ( var i = 0; i < options.animations.length; ++ i ) {
processAnimation( options.animations[ i ], input[ 0 ] );
}
```
This should be based on `input[ j ]`, so that animations targeting objects in any of the exported scenes are processed, not just those in `input[ 0 ]`. | Enhancement | low | Minor |
425,803,242 | TypeScript | Pull required references from referenced projects. | ## Search Terms
Inherit references from referenced project.
Pull required references from referenced projects.
## Suggestion
I have projects that reference other projects, but one of those referenced projects itself references another project it depends on. That transitive reference seems to be completely ignored, and is causing compiler errors. It forces me to know all the references of the referenced projects and pull those in as well, which I should not have to do.
## Use Cases
This is pretty obvious. In a typical C# project in Visual Studio (for example), referencing a project resolves other references from that referenced project, and so on. It is not good practice to force projects to pull direct dependencies that are referenced indirectly by 3rd parties.
## Examples
_tsconfig.json_
```
{
"references": [
{ "path": "3rdPartyProject" } // 3rdPartyProjectFile.d.ts
],
```
_3rdPartyProject/tsconfig.json_
```
{
"references": [
{ "path": "Another3rdPartyProject" } // Another3rdPartyProjectFile.d.ts
],
```
This setup only outputs `3rdPartyProjectFile.d.ts`, as required by the first referenced project. Intellisense fails, however, because the references in `3rdPartyProject/tsconfig.json` are not included, thus `Another3rdPartyProjectFile.d.ts` is missing. This forces me to open all related tsconfig files and pollute my project json with their references. This also may not scale well.
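Concretely, the workaround today is to repeat the transitive reference in the top-level tsconfig.json, e.g.:
```json
{
  "references": [
    { "path": "3rdPartyProject" },
    { "path": "Another3rdPartyProject" }
  ]
}
```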
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Critical |
425,827,913 | TypeScript | Allow "T extends enum" generic constraint | TypeScript has a discrete `enum` type that allows various compile-time checks and constraints to be enforced when using such types. It would be extremely useful to allow generic constraints to be limited to enum types - currently the only way to do this is via `T extends string | number` which neither conveys the intent of the programmer, nor imposes the requisite type enforcement.
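For comparison, a small sketch of why the current workaround falls short (the requested syntax is shown in the example below):
```ts
enum Order { Ascending, Descending }

// Today's closest constraint conveys nothing about enums and accepts any string/number.
interface IUsesAnOrder<T extends string | number> {
  sortOrder: T;
}

const fine: IUsesAnOrder<Order> = { sortOrder: Order.Ascending };
const alsoFine: IUsesAnOrder<42> = { sortOrder: 42 }; // compiles, though 42 is not an enum
```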
```ts
export enum StandardSortOrder {
Default,
Most,
Least
}
export enum AlternativeSortOrder {
Default,
High,
Medium,
Low
}
export interface IThingThatUsesASortOrder<T extends enum> { // doesn't compile
sortOrder: T;
}
``` | Suggestion,Awaiting More Feedback | high | Critical |
425,835,718 | pytorch | JIT fails with TracingCheckError when tracing a model with layers created after init | ## 🐛 Bug
The JIT fails with a TracingCheckError when tracing a model with layers created after init.
## To Reproduce
``` python
import torch
from torch import nn
class InputDepthAdapter(nn.Module):
def __init__(self, channels):
super().__init__()
self.channels = channels
self.reshape_conv_dict = nn.ModuleDict()
self.inputs_list = None
def adopt_depth(self, input):
input_depth = input.shape[1]
input_depth_name = str(input_depth)
if input_depth_name not in self.reshape_conv_dict:
reshape_conv = nn.Conv2d(
input_depth,
self.channels,
kernel_size=1,
stride=1,
padding=0,
)
reshape_conv.to(input.device)
self.reshape_conv_dict[input_depth_name] = reshape_conv
reshape_conv = self.reshape_conv_dict[input_depth_name]
return reshape_conv(input)
def forward(self, input):
input = self.adopt_depth(input)
return input
model = InputDepthAdapter(10)
dummy_input1 = torch.randn(1, 3, 10, 10)
model(dummy_input1);
torch.jit.trace(model, dummy_input1)
```
```
TracingCheckError Traceback (most recent call last)
<ipython-input-7-25d3e20ac58b> in <module>()
38
39 model(dummy_input1);
---> 40 torch.jit.trace(model, dummy_input1)
/opt/conda/lib/python3.6/site-packages/torch/jit/__init__.py in trace(func, example_inputs, optimize, check_trace, check_inputs, check_tolerance, _force_outplace)
641 _check_trace(check_inputs, func, executor_options, module, check_tolerance, _force_outplace)
642 else:
--> 643 _check_trace([example_inputs], func, executor_options, module, check_tolerance, _force_outplace)
644
645 return module
/opt/conda/lib/python3.6/site-packages/torch/autograd/grad_mode.py in decorate_no_grad(*args, **kwargs)
41 def decorate_no_grad(*args, **kwargs):
42 with self:
---> 43 return func(*args, **kwargs)
44 return decorate_no_grad
45
/opt/conda/lib/python3.6/site-packages/torch/jit/__init__.py in _check_trace(check_inputs, func, executor_options, module, check_tolerance, force_outplace)
543 diag_info = graph_diagnostic_info()
544 if any(info is not None for info in diag_info):
--> 545 raise TracingCheckError(*diag_info)
546
547
TracingCheckError: Tracing failed sanity checks!
ERROR: Graphs differed across invocations!
Graph diff:
graph(%input : Tensor
%1 : Tensor
+ %2 : Tensor
+ %3 : Tensor
- %2 : Tensor) {
? ^
+ %4 : Tensor) {
? ^
- %tensor.2 : Tensor = prim::Constant[value=<Tensor>](), scope: InputDepthAdapter
- %4 : float = prim::Constant[value=-0.57735](), scope: InputDepthAdapter
- %5 : float = prim::Constant[value=0.57735](), scope: InputDepthAdapter
? ^^^^ ^^^^^^^
+ %5 : int = prim::Constant[value=1](), scope: InputDepthAdapter/Conv2d
? ^^ ^ +++++++
- %6 : Generator = prim::NoneGenerator(), scope: InputDepthAdapter
- %tensor.3 : Tensor = aten::uniform_(%tensor.2, %4, %5, %6), scope: InputDepthAdapter
- %tensor : Tensor = prim::Constant[value= 0.2530 0.3104 -0.2102 0.5556 -0.0427 0.4699 0.1543 0.2224 -0.5463 -0.4535 [ CPUFloatType{10} ]](), scope: InputDepthAdapter
+ %6 : int = prim::Constant[value=1](), scope: InputDepthAdapter/Conv2d
+ %7 : int[] = prim::ListConstruct(%5, %6), scope: InputDepthAdapter/Conv2d
+ %8 : int = prim::Constant[value=0](), scope: InputDepthAdapter/Conv2d
- %9 : float = prim::Constant[value=-0.57735](), scope: InputDepthAdapter
? ^^^^ - ------
+ %9 : int = prim::Constant[value=0](), scope: InputDepthAdapter/Conv2d
? ^^ +++++++
+ %10 : int[] = prim::ListConstruct(%8, %9), scope: InputDepthAdapter/Conv2d
- %10 : float = prim::Constant[value=0.57735](), scope: InputDepthAdapter
- %11 : Generator = prim::NoneGenerator(), scope: InputDepthAdapter
- %12 : Tensor = aten::uniform_(%tensor, %9, %10, %11), scope: InputDepthAdapter
- %13 : int = prim::Constant[value=1](), scope: InputDepthAdapter/Conv2d
? ^
+ %11 : int = prim::Constant[value=1](), scope: InputDepthAdapter/Conv2d
? ^
- %14 : int = prim::Constant[value=1](), scope: InputDepthAdapter/Conv2d
? ^
+ %12 : int = prim::Constant[value=1](), scope: InputDepthAdapter/Conv2d
? ^
- %15 : int[] = prim::ListConstruct(%13, %14), scope: InputDepthAdapter/Conv2d
? ^ ^ ^
+ %13 : int[] = prim::ListConstruct(%11, %12), scope: InputDepthAdapter/Conv2d
? ^ ^ ^
+ %14 : bool = prim::Constant[value=0](), scope: InputDepthAdapter/Conv2d
+ %15 : int = prim::Constant[value=0](), scope: InputDepthAdapter/Conv2d
%16 : int = prim::Constant[value=0](), scope: InputDepthAdapter/Conv2d
- %17 : int = prim::Constant[value=0](), scope: InputDepthAdapter/Conv2d
- %18 : int[] = prim::ListConstruct(%16, %17), scope: InputDepthAdapter/Conv2d
? ^ ^ ^
+ %17 : int[] = prim::ListConstruct(%15, %16), scope: InputDepthAdapter/Conv2d
? ^ ^ ^
- %19 : int = prim::Constant[value=1](), scope: InputDepthAdapter/Conv2d
? ^
+ %18 : int = prim::Constant[value=1](), scope: InputDepthAdapter/Conv2d
? ^
- %20 : int = prim::Constant[value=1](), scope: InputDepthAdapter/Conv2d
? ^^ ^^^ ^
+ %19 : bool = prim::Constant[value=0](), scope: InputDepthAdapter/Conv2d
? ^^ ^^^^ ^
- %21 : int[] = prim::ListConstruct(%19, %20), scope: InputDepthAdapter/Conv2d
- %22 : bool = prim::Constant[value=0](), scope: InputDepthAdapter/Conv2d
? ^
+ %20 : bool = prim::Constant[value=0](), scope: InputDepthAdapter/Conv2d
? ^
- %23 : int = prim::Constant[value=0](), scope: InputDepthAdapter/Conv2d
- %24 : int = prim::Constant[value=0](), scope: InputDepthAdapter/Conv2d
- %25 : int[] = prim::ListConstruct(%23, %24), scope: InputDepthAdapter/Conv2d
- %26 : int = prim::Constant[value=1](), scope: InputDepthAdapter/Conv2d
- %27 : bool = prim::Constant[value=0](), scope: InputDepthAdapter/Conv2d
- %28 : bool = prim::Constant[value=0](), scope: InputDepthAdapter/Conv2d
- %29 : bool = prim::Constant[value=1](), scope: InputDepthAdapter/Conv2d
? ^
+ %21 : bool = prim::Constant[value=1](), scope: InputDepthAdapter/Conv2d
? ^
- %30 : Tensor = aten::_convolution(%input, %tensor.3, %12, %15, %18, %21, %22, %25, %26, %27, %28, %29), scope: InputDepthAdapter/Conv2d
+ %22 : Tensor = aten::_convolution(%input, %3, %4, %7, %10, %13, %14, %17, %18, %19, %20, %21), scope: InputDepthAdapter/Conv2d
- return (%30);
? ^^
+ return (%22);
? ^^
}
First diverging operator:
Node diff:
- %tensor.2 : Tensor = prim::Constant[value=<Tensor>](), scope: InputDepthAdapter
+ %5 : int = prim::Constant[value=1](), scope: InputDepthAdapter/Conv2d
Trace source location:
/opt/conda/lib/python3.6/site-packages/torch/nn/init.py(187): _calculate_fan_in_and_fan_out
/opt/conda/lib/python3.6/site-packages/torch/nn/init.py(257): _calculate_correct_fan
/opt/conda/lib/python3.6/site-packages/torch/nn/init.py(288): kaiming_uniform_
/opt/conda/lib/python3.6/site-packages/torch/nn/modules/conv.py(47): reset_parameters
/opt/conda/lib/python3.6/site-packages/torch/nn/modules/conv.py(43): __init__
/opt/conda/lib/python3.6/site-packages/torch/nn/modules/conv.py(315): __init__
<ipython-input-7-25d3e20ac58b>(24): adopt_depth
<ipython-input-7-25d3e20ac58b>(33): forward
/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py(477): _slow_forward
/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py(487): __call__
/opt/conda/lib/python3.6/site-packages/torch/jit/__init__.py(636): trace
<ipython-input-7-25d3e20ac58b>(40): <module>
/opt/conda/lib/python3.6/site-packages/IPython/core/interactiveshell.py(2963): run_code
/opt/conda/lib/python3.6/site-packages/IPython/core/interactiveshell.py(2909): run_ast_nodes
/opt/conda/lib/python3.6/site-packages/IPython/core/interactiveshell.py(2785): _run_cell
/opt/conda/lib/python3.6/site-packages/IPython/core/interactiveshell.py(2662): run_cell
/opt/conda/lib/python3.6/site-packages/ipykernel/zmqshell.py(537): run_cell
/opt/conda/lib/python3.6/site-packages/ipykernel/ipkernel.py(208): do_execute
/opt/conda/lib/python3.6/site-packages/ipykernel/kernelbase.py(399): execute_request
/opt/conda/lib/python3.6/site-packages/ipykernel/kernelbase.py(233): dispatch_shell
/opt/conda/lib/python3.6/site-packages/ipykernel/kernelbase.py(283): dispatcher
/opt/conda/lib/python3.6/site-packages/tornado/stack_context.py(276): null_wrapper
/opt/conda/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py(432): _run_callback
/opt/conda/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py(480): _handle_recv
/opt/conda/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py(450): _handle_events
/opt/conda/lib/python3.6/site-packages/tornado/stack_context.py(276): null_wrapper
/opt/conda/lib/python3.6/site-packages/tornado/platform/asyncio.py(117): _handle_events
/opt/conda/lib/python3.6/asyncio/events.py(145): _run
/opt/conda/lib/python3.6/asyncio/base_events.py(1432): _run_once
/opt/conda/lib/python3.6/asyncio/base_events.py(422): run_forever
/opt/conda/lib/python3.6/site-packages/tornado/platform/asyncio.py(127): start
/opt/conda/lib/python3.6/site-packages/ipykernel/kernelapp.py(486): start
/opt/conda/lib/python3.6/site-packages/traitlets/config/application.py(658): launch_instance
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py(16): <module>
/opt/conda/lib/python3.6/runpy.py(85): _run_code
/opt/conda/lib/python3.6/runpy.py(193): _run_module_as_main
Check source location:
/opt/conda/lib/python3.6/site-packages/torch/nn/modules/conv.py(320): forward
/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py(477): _slow_forward
/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py(487): __call__
<ipython-input-7-25d3e20ac58b>(30): adopt_depth
<ipython-input-7-25d3e20ac58b>(33): forward
/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py(477): _slow_forward
/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py(487): __call__
/opt/conda/lib/python3.6/site-packages/torch/jit/__init__.py(636): trace
/opt/conda/lib/python3.6/site-packages/torch/jit/__init__.py(436): _check_trace
/opt/conda/lib/python3.6/site-packages/torch/autograd/grad_mode.py(43): decorate_no_grad
/opt/conda/lib/python3.6/site-packages/torch/jit/__init__.py(643): trace
<ipython-input-7-25d3e20ac58b>(40): <module>
/opt/conda/lib/python3.6/site-packages/IPython/core/interactiveshell.py(2963): run_code
/opt/conda/lib/python3.6/site-packages/IPython/core/interactiveshell.py(2909): run_ast_nodes
/opt/conda/lib/python3.6/site-packages/IPython/core/interactiveshell.py(2785): _run_cell
/opt/conda/lib/python3.6/site-packages/IPython/core/interactiveshell.py(2662): run_cell
/opt/conda/lib/python3.6/site-packages/ipykernel/zmqshell.py(537): run_cell
/opt/conda/lib/python3.6/site-packages/ipykernel/ipkernel.py(208): do_execute
/opt/conda/lib/python3.6/site-packages/ipykernel/kernelbase.py(399): execute_request
/opt/conda/lib/python3.6/site-packages/ipykernel/kernelbase.py(233): dispatch_shell
/opt/conda/lib/python3.6/site-packages/ipykernel/kernelbase.py(283): dispatcher
/opt/conda/lib/python3.6/site-packages/tornado/stack_context.py(276): null_wrapper
/opt/conda/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py(432): _run_callback
/opt/conda/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py(480): _handle_recv
/opt/conda/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py(450): _handle_events
/opt/conda/lib/python3.6/site-packages/tornado/stack_context.py(276): null_wrapper
/opt/conda/lib/python3.6/site-packages/tornado/platform/asyncio.py(117): _handle_events
/opt/conda/lib/python3.6/asyncio/events.py(145): _run
/opt/conda/lib/python3.6/asyncio/base_events.py(1432): _run_once
/opt/conda/lib/python3.6/asyncio/base_events.py(422): run_forever
/opt/conda/lib/python3.6/site-packages/tornado/platform/asyncio.py(127): start
/opt/conda/lib/python3.6/site-packages/ipykernel/kernelapp.py(486): start
/opt/conda/lib/python3.6/site-packages/traitlets/config/application.py(658): launch_instance
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py(16): <module>
/opt/conda/lib/python3.6/runpy.py(85): _run_code
/opt/conda/lib/python3.6/runpy.py(193): _run_module_as_main
```
## Expected behavior
## Environment
Please copy and paste the output from our
[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py)
(or fill out the checklist below manually).
You can get the script and run it with:
```
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```
PyTorch version: 1.0.1.post2
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Ubuntu 16.04.4 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609
CMake version: version 3.11.1
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 9.0.176
GPU models and configuration:
GPU 0: GeForce GTX 1080
GPU 1: GeForce GTX 1080
GPU 2: GeForce GTX 1080
GPU 3: GeForce GTX 1080
GPU 4: GeForce GTX 1080
GPU 5: GeForce GTX 1080
GPU 6: GeForce GTX 1080
GPU 7: GeForce GTX 1080
Nvidia driver version: 410.78
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.1.4
Versions of relevant libraries:
[pip] numpy==1.14.3
[pip] numpydoc==0.8.0
[pip] torch==1.0.1.post2
[pip] torchvision==0.2.2.post3
[conda] blas 1.0 mkl
[conda] magma-cuda90 2.5.0 1 pytorch
[conda] mkl 2018.0.2 1
[conda] mkl-include 2019.3 199
[conda] mkl-service 1.1.2 py36h17a0993_4
[conda] mkl_fft 1.0.1 py36h3010b51_0
[conda] mkl_random 1.0.1 py36h629b387_0
[conda] mkldnn 0.13.0 0 mingfeima
[conda] torch 1.0.1.post2 pypi_0 pypi
[conda] torchvision 0.2.2.post3 pypi_0 pypi
## Code that sets layers as attributes doesn't work either
```python
import torch
from torch import nn


class InputDepthAdapter(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.channels = channels
        # self.reshape_conv_dict = nn.ModuleDict()
        self.inputs_list = None

    def adopt_depth(self, input):
        input_depth = input.shape[1]
        input_depth_name = str(input_depth)
        # if input_depth_name not in self.reshape_conv_dict:
        if not hasattr(self, input_depth_name):
            reshape_conv = nn.Conv2d(
                input_depth,
                self.channels,
                kernel_size=1,
                stride=1,
                padding=0,
            )
            reshape_conv.to(input.device)
            # self.reshape_conv_dict[input_depth_name] = reshape_conv
            setattr(self, input_depth_name, reshape_conv)
        reshape_conv = getattr(self, input_depth_name)
        return reshape_conv(input)

    def forward(self, input):
        input = self.adopt_depth(input)
        return input


model = InputDepthAdapter(10)
dummy_input1 = torch.randn(1, 3, 10, 10)
model(dummy_input1)
torch.jit.trace(model, dummy_input1)
```
```
TracingCheckError: Tracing failed sanity checks!
ERROR: Graphs differed across invocations!
Graph diff:
graph(%input : Tensor
%1 : Tensor
+ %2 : Tensor
+ %3 : Tensor
- %2 : Tensor) {
? ^
+ %4 : Tensor) {
? ^
- %tensor.2 : Tensor = prim::Constant[value=<Tensor>](), scope: InputDepthAdapter
- %4 : float = prim::Constant[value=-0.57735](), scope: InputDepthAdapter
- %5 : float = prim::Constant[value=0.57735](), scope: InputDepthAdapter
? ^^^^ ^^^^^^^
+ %5 : int = prim::Constant[value=1](), scope: InputDepthAdapter/Conv2d[tensor(3)]
? ^^ ^ ++++++++++++++++++
- %6 : Generator = prim::NoneGenerator(), scope: InputDepthAdapter
- %tensor.3 : Tensor = aten::uniform_(%tensor.2, %4, %5, %6), scope: InputDepthAdapter
- %tensor : Tensor = prim::Constant[value= 0.0085 0.1566 0.0217 -0.5607 -0.1521 -0.4864 0.0343 0.2710 0.1833 0.2357 [ CPUFloatType{10} ]](), scope: InputDepthAdapter
+ %6 : int = prim::Constant[value=1](), scope: InputDepthAdapter/Conv2d[tensor(3)]
+ %7 : int[] = prim::ListConstruct(%5, %6), scope: InputDepthAdapter/Conv2d[tensor(3)]
+ %8 : int = prim::Constant[value=0](), scope: InputDepthAdapter/Conv2d[tensor(3)]
- %9 : float = prim::Constant[value=-0.57735](), scope: InputDepthAdapter
? ^^^^ - ------
+ %9 : int = prim::Constant[value=0](), scope: InputDepthAdapter/Conv2d[tensor(3)]
? ^^ ++++++++++++++++++
+ %10 : int[] = prim::ListConstruct(%8, %9), scope: InputDepthAdapter/Conv2d[tensor(3)]
- %10 : float = prim::Constant[value=0.57735](), scope: InputDepthAdapter
- %11 : Generator = prim::NoneGenerator(), scope: InputDepthAdapter
- %12 : Tensor = aten::uniform_(%tensor, %9, %10, %11), scope: InputDepthAdapter
- %13 : int = prim::Constant[value=1](), scope: InputDepthAdapter/Conv2d[tensor(3)]
? ^
+ %11 : int = prim::Constant[value=1](), scope: InputDepthAdapter/Conv2d[tensor(3)]
? ^
- %14 : int = prim::Constant[value=1](), scope: InputDepthAdapter/Conv2d[tensor(3)]
? ^
+ %12 : int = prim::Constant[value=1](), scope: InputDepthAdapter/Conv2d[tensor(3)]
? ^
- %15 : int[] = prim::ListConstruct(%13, %14), scope: InputDepthAdapter/Conv2d[tensor(3)]
? ^ ^ ^
+ %13 : int[] = prim::ListConstruct(%11, %12), scope: InputDepthAdapter/Conv2d[tensor(3)]
? ^ ^ ^
+ %14 : bool = prim::Constant[value=0](), scope: InputDepthAdapter/Conv2d[tensor(3)]
+ %15 : int = prim::Constant[value=0](), scope: InputDepthAdapter/Conv2d[tensor(3)]
%16 : int = prim::Constant[value=0](), scope: InputDepthAdapter/Conv2d[tensor(3)]
- %17 : int = prim::Constant[value=0](), scope: InputDepthAdapter/Conv2d[tensor(3)]
- %18 : int[] = prim::ListConstruct(%16, %17), scope: InputDepthAdapter/Conv2d[tensor(3)]
? ^ ^ ^
+ %17 : int[] = prim::ListConstruct(%15, %16), scope: InputDepthAdapter/Conv2d[tensor(3)]
? ^ ^ ^
- %19 : int = prim::Constant[value=1](), scope: InputDepthAdapter/Conv2d[tensor(3)]
? ^
+ %18 : int = prim::Constant[value=1](), scope: InputDepthAdapter/Conv2d[tensor(3)]
? ^
- %20 : int = prim::Constant[value=1](), scope: InputDepthAdapter/Conv2d[tensor(3)]
? ^^ ^^^ ^
+ %19 : bool = prim::Constant[value=0](), scope: InputDepthAdapter/Conv2d[tensor(3)]
? ^^ ^^^^ ^
- %21 : int[] = prim::ListConstruct(%19, %20), scope: InputDepthAdapter/Conv2d[tensor(3)]
- %22 : bool = prim::Constant[value=0](), scope: InputDepthAdapter/Conv2d[tensor(3)]
? ^
+ %20 : bool = prim::Constant[value=0](), scope: InputDepthAdapter/Conv2d[tensor(3)]
? ^
- %23 : int = prim::Constant[value=0](), scope: InputDepthAdapter/Conv2d[tensor(3)]
- %24 : int = prim::Constant[value=0](), scope: InputDepthAdapter/Conv2d[tensor(3)]
- %25 : int[] = prim::ListConstruct(%23, %24), scope: InputDepthAdapter/Conv2d[tensor(3)]
- %26 : int = prim::Constant[value=1](), scope: InputDepthAdapter/Conv2d[tensor(3)]
- %27 : bool = prim::Constant[value=0](), scope: InputDepthAdapter/Conv2d[tensor(3)]
- %28 : bool = prim::Constant[value=0](), scope: InputDepthAdapter/Conv2d[tensor(3)]
- %29 : bool = prim::Constant[value=1](), scope: InputDepthAdapter/Conv2d[tensor(3)]
? ^
+ %21 : bool = prim::Constant[value=1](), scope: InputDepthAdapter/Conv2d[tensor(3)]
? ^
- %30 : Tensor = aten::_convolution(%input, %tensor.3, %12, %15, %18, %21, %22, %25, %26, %27, %28, %29), scope: InputDepthAdapter/Conv2d[tensor(3)]
+ %22 : Tensor = aten::_convolution(%input, %3, %4, %7, %10, %13, %14, %17, %18, %19, %20, %21), scope: InputDepthAdapter/Conv2d[tensor(3)]
- return (%30);
? ^^
+ return (%22);
? ^^
}
First diverging operator:
Node diff:
- %tensor.2 : Tensor = prim::Constant[value=<Tensor>](), scope: InputDepthAdapter
+ %5 : int = prim::Constant[value=1](), scope: InputDepthAdapter/Conv2d[tensor(3)]
Trace source location:
/opt/conda/lib/python3.6/site-packages/torch/nn/init.py(187): _calculate_fan_in_and_fan_out
/opt/conda/lib/python3.6/site-packages/torch/nn/init.py(257): _calculate_correct_fan
/opt/conda/lib/python3.6/site-packages/torch/nn/init.py(288): kaiming_uniform_
/opt/conda/lib/python3.6/site-packages/torch/nn/modules/conv.py(47): reset_parameters
/opt/conda/lib/python3.6/site-packages/torch/nn/modules/conv.py(43): __init__
/opt/conda/lib/python3.6/site-packages/torch/nn/modules/conv.py(315): __init__
<ipython-input-8-968b96d263cc>(25): adopt_depth
<ipython-input-8-968b96d263cc>(35): forward
/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py(477): _slow_forward
/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py(487): __call__
/opt/conda/lib/python3.6/site-packages/torch/jit/__init__.py(636): trace
<ipython-input-8-968b96d263cc>(42): <module>
/opt/conda/lib/python3.6/site-packages/IPython/core/interactiveshell.py(2963): run_code
/opt/conda/lib/python3.6/site-packages/IPython/core/interactiveshell.py(2909): run_ast_nodes
/opt/conda/lib/python3.6/site-packages/IPython/core/interactiveshell.py(2785): _run_cell
/opt/conda/lib/python3.6/site-packages/IPython/core/interactiveshell.py(2662): run_cell
/opt/conda/lib/python3.6/site-packages/ipykernel/zmqshell.py(537): run_cell
/opt/conda/lib/python3.6/site-packages/ipykernel/ipkernel.py(208): do_execute
/opt/conda/lib/python3.6/site-packages/ipykernel/kernelbase.py(399): execute_request
/opt/conda/lib/python3.6/site-packages/ipykernel/kernelbase.py(233): dispatch_shell
/opt/conda/lib/python3.6/site-packages/ipykernel/kernelbase.py(283): dispatcher
/opt/conda/lib/python3.6/site-packages/tornado/stack_context.py(276): null_wrapper
/opt/conda/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py(432): _run_callback
/opt/conda/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py(480): _handle_recv
/opt/conda/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py(450): _handle_events
/opt/conda/lib/python3.6/site-packages/tornado/stack_context.py(276): null_wrapper
/opt/conda/lib/python3.6/site-packages/tornado/platform/asyncio.py(117): _handle_events
/opt/conda/lib/python3.6/asyncio/events.py(145): _run
/opt/conda/lib/python3.6/asyncio/base_events.py(1432): _run_once
/opt/conda/lib/python3.6/asyncio/base_events.py(422): run_forever
/opt/conda/lib/python3.6/site-packages/tornado/platform/asyncio.py(127): start
/opt/conda/lib/python3.6/site-packages/ipykernel/kernelapp.py(486): start
/opt/conda/lib/python3.6/site-packages/traitlets/config/application.py(658): launch_instance
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py(16): <module>
/opt/conda/lib/python3.6/runpy.py(85): _run_code
/opt/conda/lib/python3.6/runpy.py(193): _run_module_as_main
Check source location:
/opt/conda/lib/python3.6/site-packages/torch/nn/modules/conv.py(320): forward
/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py(477): _slow_forward
/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py(487): __call__
<ipython-input-8-968b96d263cc>(32): adopt_depth
<ipython-input-8-968b96d263cc>(35): forward
/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py(477): _slow_forward
/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py(487): __call__
/opt/conda/lib/python3.6/site-packages/torch/jit/__init__.py(636): trace
/opt/conda/lib/python3.6/site-packages/torch/jit/__init__.py(436): _check_trace
/opt/conda/lib/python3.6/site-packages/torch/autograd/grad_mode.py(43): decorate_no_grad
/opt/conda/lib/python3.6/site-packages/torch/jit/__init__.py(643): trace
<ipython-input-8-968b96d263cc>(42): <module>
/opt/conda/lib/python3.6/site-packages/IPython/core/interactiveshell.py(2963): run_code
/opt/conda/lib/python3.6/site-packages/IPython/core/interactiveshell.py(2909): run_ast_nodes
/opt/conda/lib/python3.6/site-packages/IPython/core/interactiveshell.py(2785): _run_cell
/opt/conda/lib/python3.6/site-packages/IPython/core/interactiveshell.py(2662): run_cell
/opt/conda/lib/python3.6/site-packages/ipykernel/zmqshell.py(537): run_cell
/opt/conda/lib/python3.6/site-packages/ipykernel/ipkernel.py(208): do_execute
/opt/conda/lib/python3.6/site-packages/ipykernel/kernelbase.py(399): execute_request
/opt/conda/lib/python3.6/site-packages/ipykernel/kernelbase.py(233): dispatch_shell
/opt/conda/lib/python3.6/site-packages/ipykernel/kernelbase.py(283): dispatcher
/opt/conda/lib/python3.6/site-packages/tornado/stack_context.py(276): null_wrapper
/opt/conda/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py(432): _run_callback
/opt/conda/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py(480): _handle_recv
/opt/conda/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py(450): _handle_events
/opt/conda/lib/python3.6/site-packages/tornado/stack_context.py(276): null_wrapper
/opt/conda/lib/python3.6/site-packages/tornado/platform/asyncio.py(117): _handle_events
/opt/conda/lib/python3.6/asyncio/events.py(145): _run
/opt/conda/lib/python3.6/asyncio/base_events.py(1432): _run_once
/opt/conda/lib/python3.6/asyncio/base_events.py(422): run_forever
/opt/conda/lib/python3.6/site-packages/tornado/platform/asyncio.py(127): start
/opt/conda/lib/python3.6/site-packages/ipykernel/kernelapp.py(486): start
/opt/conda/lib/python3.6/site-packages/traitlets/config/application.py(658): launch_instance
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py(16): <module>
/opt/conda/lib/python3.6/runpy.py(85): _run_code
/opt/conda/lib/python3.6/runpy.py(193): _run_module_as_main
```
| oncall: jit | low | Critical |
425,919,623 | TypeScript | Add support for using declaration |
## Search Terms
using-declaration, using, declaration, overload
## Suggestion
Add language support similar to Using-declaration in C++ (https://en.cppreference.com/w/cpp/language/using_declaration) to pull in names from base class.
## Use Cases
In a derived class, a new overload is added to an API. This requires that the overloads from the base class are copied into the derived class, otherwise TypeScript complains. A keyword like the C++ using-declaration to pull names in from the base class would be handy to avoid the duplication.
## Examples
```ts
class Emitter {
    addListener(event: "foo", cb: (p: string) => void): void;
    addListener(event: "bar", cb: (p: string, e: number) => void): void;
    addListener(event: string, cb: (...arg: any[]) => void): void {}
}

class SpecialEmitter extends Emitter {
    // I would prefer using Emitter.addListener or similar here
    addListener(event: "foo", cb: (p: string) => void): void;
    addListener(event: "bar", cb: (p: string, e: number) => void): void;
    // new overloads supported by SpecialEmitter
    addListener(event: "new", cb: (x: RegExp) => void): void;
    addListener(event: "next", cb: (x: number, b: boolean) => void): void;
    addListener(event: string, cb: (...arg: any[]) => void): void {}
}
```
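For reference, here is a minimal sketch of the C++ using-declaration that this proposal points to (see the cppreference link above); it is plain C++, not proposed TypeScript syntax:
```cpp
struct Emitter {
    void addListener(int event) {}
    void addListener(const char* event) {}
};

struct SpecialEmitter : Emitter {
    using Emitter::addListener;       // pulls the base overloads into this class
    void addListener(double event) {} // new overload; the base ones are not hidden
};

int main() {
    SpecialEmitter e;
    e.addListener(1);      // base overload is still visible
    e.addListener(2.5);    // new overload
    e.addListener("next"); // base overload taking a string
}
```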
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Awaiting More Feedback | low | Critical |
426,034,784 | flutter | UI collapses when writing in TextFormField | I am trying to make a simple UI where a column holds three elements: the first is a ListView, the second is a TextFormField, and the last is a Container. When I try to type in the TextFormField, the bottom container also moves, which I don't understand.
Here is my code to help you understand my problem more clearly.
```dart
import 'package:flutter/material.dart';

class demo extends StatefulWidget {
  @override
  _demoState createState() => _demoState();
}

class _demoState extends State<demo> {
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: Text("demo")),
      body: new Column(
        children: <Widget>[
          Expanded(
            child: new ListView.builder(
              itemCount: 1000,
              itemBuilder: (context, index) {
                return Container(
                  padding: EdgeInsets.all(10.0),
                  height: 25.0,
                  color: Colors.grey,
                  child: Text(index.toString()),
                );
              },
            ),
          ),
          new TextFormField(
            decoration: InputDecoration(
              border: OutlineInputBorder(),
              hintText: "Enter Demo",
              hintStyle: TextStyle(color: Colors.grey),
            ),
          ),
          new Container(
            height: 20.0,
            child: Text("Demo Text"),
          )
        ],
      ),
    );
  }
}
```
| c: new feature,framework,f: material design,d: stackoverflow,c: proposal,P3,team-design,triaged-design | low | Minor |
426,110,298 | flutter | RefreshProgressIndicator looks weird without square constraints | If I put a `RefreshProgressIndicator` in a `SizedBox` with an unequal `width` and `height`, I end up with an animating elliptical spinner drawn over a circular background. I don't think that this is intended.
Either the background should also be elliptical or the spinner should be constrained to be circular. (My preference would be for the latter and to make `CircularProgressIndicator` actually circular: it'd match the name and documentation, and I doubt anyone really wants elliptical spinners.)

```dart
import 'package:flutter/material.dart';

void main() {
  return runApp(MaterialApp(
    home: Scaffold(
      body: Center(
        child: SizedBox(
          width: 400,
          height: 200,
          child: RefreshProgressIndicator(),
        ),
      ),
    ),
  ));
}
```
Flutter 1.4.2-pre.3 • channel unknown • unknown source
Framework • revision bafe7cbbb4 (13 hours ago) • 2019-03-26 21:40:53 -0700
Engine • revision 0d83a2ecd1
Tools • Dart 2.2.1 (build 2.2.1-dev.2.0 None) | framework,f: material design,a: fidelity,has reproducible steps,P2,found in release: 3.3,found in release: 3.7,team-design,triaged-design | low | Minor |
426,110,410 | kubernetes | Volume operation metrics improvements |
**What would you like to be added**:
* Filter out user configuration errors from system errors
* Separately measure storage plugin time from pure Kubernetes overhead
* Measure total latency from a K8s user's perspective (time from creating a Pod to the volume being attached and/or mounted) instead of the current operation metrics which only measures one retry loop.
**Why is this needed**:
Improve visibility of the user experience with volumes, and be able to better account for Kubernetes overhead. | sig/storage,kind/feature,lifecycle/frozen | low | Critical |
426,142,230 | flutter | [Discussion] Scope spanning multiple screens | Having a way to manage scopes in Flutter.

(From https://proandroiddev.com/dagger-2-part-ii-custom-scopes-component-dependencies-subcomponents-697c1fa1cfc)
_An example:_
The login is implemented using multiple pages (which are managed by the `Navigator`). Each login page wants to access the `LoginBloc`; once the login is completed, the `LoginBloc` is no longer needed and should be disposed.
**Possible implementations right now:**
1. Nested navigators
This solves the issue as each `Navigator` can hold the necessary state, though this approach introduces quite a few problems because `Navigators` are not really intended to be nested this way.
2. Having the Bloc above the `Navigator` and only exposing them to the necessary routes
Two problems with this: nesting these dependencies can become pretty cumbersome because each route needs to be explicitly wrapped in all of the used dependencies (by dependencies I mean some kind of `InheritedWidget` which exposes the actual data).
And, even though other routes have no access to the data, the data is not disposed, which leads to a memory leak and leaves the bloc in an "undefined" state.
_(Bloc in these examples could be replaced with any kind of business logic/ data storage)_
The main problem is that each route is completely separate and there doesn't seem to be an easy way to accomplish this.
I was wondering if this is anything that should be in the framework or should be provided as a separate package.
I've talked to @rrousselGit about this; he had a few ideas for how this could be solved, but we'd like to hear other thoughts.
| framework,customer: crowd,c: proposal,a: error message,P3,team-framework,triaged-framework | medium | Critical |
426,143,026 | pytorch | Make it easier to figure out what CuDNN convolution algorithm we actually chose | While debugging #12484 I wanted PyTorch to tell me which cuDNN convolution algorithm it selected. But I have no way of getting it to do so, except running nvprof and reading the tea leaves. | module: cudnn,module: logging,triaged | low | Critical |
426,164,192 | TypeScript | allow inline intellisense comment on es6 export |
## Search Terms
intellisense export
## Suggestion
Allow inline IntelliSense documentation on an ES6 export, or at least propagate the IntelliSense comments of the exported module. Currently no documentation is carried over (other than the type definitions).
## Use Cases
Allows index files for large libraries to carry better descriptions of the sub-modules they contain.
The current (working) approach is to use the old pre-ES6 form `/** comment */ export import x = require("x")`.
## Examples
```typescript
/** promise fs based api and related quality-of-life helper functions */
import * as fs from "./fs";
export { fs };
```
The above IntelliSense comment doesn't propagate.
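By contrast, the pre-ES6 form mentioned under Use Cases does carry the comment through; a sketch of that workaround (the module path is just for illustration):
```ts
/** promise fs based api and related quality-of-life helper functions */
export import fs = require("./fs");
```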
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,Experience Enhancement | low | Minor |
426,171,310 | godot | [3.x] Code completion is not full for custom functions | **Godot version:** 3.1
**OS/device including version:** Ubuntu 18.04
**Issue description:**
Code completion does not provide parameter hints for custom functions if they are used from a script other than the one they are declared in.
```gdscript
# one script
extends Object
class_name KnownClass
func test(param: String):
    pass
```
```gdscript
# another script
extends Node2D
func _ready():
    var kc: KnownClass = KnownClass.new()
    kc.test("here completion does not show parameter list")
```
**Minimal reproduction project:**
[repro_cc_custom_func.tar.gz](https://github.com/godotengine/godot/files/3015179/repro_cc_custom_func.tar.gz)
| bug,topic:gdscript,topic:editor,confirmed | low | Minor |
426,187,886 | flutter | Allow switching to verbose mode without having to kill the flutter tool | It would be nice if verbose logging can be turned on via a keyboard shortcut:
r - hot reload
R - hot restart
v - enable/disable verbose logging
This is a minor FR. The motivation is to support users who either forget to pass the `-v` flag, or who use Flutter tooling in build/run subsystems that wrap the `flutter` tool, where it might not be trivial to pass options through. | c: new feature,tool,P3,team-tool,triaged-tool | low | Minor |
426,201,449 | three.js | UniformUtils.cloneUniforms() does not clone arrays of objects. | ##### Description of the problem
Right now in `UniformUtils.cloneUniforms` when a uniform is a THREE object, it properly clones it, and when it is an array of primitives, it properly slices it, but if the uniform is an array of THREE objects (say an array of Vector3), it does not clone the individual objects.
There needs to be another check within the `} else if ( Array.isArray( property ) ) {` condition that checks whether the items in the array are THREE objects or just primitives. Since they all must be the same type because of GLSL restrictions, I suppose that only the first element needs to be checked.
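A minimal sketch of the proposed check, written as a standalone helper rather than the actual `cloneUniforms` loop (property names and style in the real code may differ):
```js
// Clone a single uniform value: deep-clone arrays of three.js math objects
// (anything exposing a .clone() method), slice arrays of primitives,
// and clone lone objects as before.
function cloneUniformValue( value ) {

	if ( Array.isArray( value ) ) {

		return ( value.length > 0 && value[ 0 ] && value[ 0 ].clone !== undefined )
			? value.map( function ( v ) { return v.clone(); } )
			: value.slice();

	}

	return ( value && value.clone !== undefined ) ? value.clone() : value;

}
```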
##### Three.js version
- [x] Dev
- [x] r103
- [ ] ...
##### Browser
- [x] All of them
- [ ] Chrome
- [ ] Firefox
- [ ] Internet Explorer
##### OS
- [x] All of them
- [ ] Windows
- [ ] macOS
- [ ] Linux
- [ ] Android
- [ ] iOS
##### Hardware Requirements (graphics card, VR Device, ...)
| Enhancement | low | Major |
426,207,610 | flutter | Embedded Android views may send accessibility events before the accessibility bridge knows about them | Right now the main problem with this is that it might result in the Android accessibility framework asking the accessibility bridge for an embedded node (we do generate the flutter<-->embedded ID mapping on the fly when an event is sent) while the bridge cannot create the node, as the bounds of the embedded view are not yet known.
Right now we are not returning the node in this case.
Potential ways we could handle this better are either to communicate the bounds of the embedded view to the accessibility bridge earlier, or to delay view creation until it is in the semantics tree. | platform-android,framework,a: accessibility,a: platform-views,P2,team-android,triaged-android | low | Major |
426,214,240 | neovim | Cursorline has unwanted underline in DiffChange/DiffAdd regions | - `nvim --version`: v0.3.4
- Vim (version: v8.1, patches 1-577) behaves differently?
- Operating system/version: macOS 10.13.6 (High Sierra)
- Terminal name/version: iTerm2
- `$TERM`: tmux-256color (but behavior is same outside tmux with xterm-256color)
### Steps to reproduce using `nvim -u NORC`
```
# Create two files to diff, a and b, containing some text with some differences, then:
nvim -u NONE -d a b
```
Turn on `'cursorline'` and define a highlight group for it that does not contain an underline.
```
:set cursorline
:hi clear CursorLine
:hi CursorLine gui=bold
```
### Actual behaviour
On moving the cursor through a changed region in the diff, the lines get underline styling applied, as though `CursorLine` had `gui=bold,underline`. Move the cursor onto a line outside the changed region and see the lines no longer have underline, only bold.
Doesn't happen if `CursorLine` has a fg color (eg. `guifg=#ff0000`), although in practice you probably don't ever want to assign a fg color to `CursorLine`.
Note the underlining on the cursor line in the left-hand pane visible in the screenshot:
<img width="1436" alt="diff" src="https://user-images.githubusercontent.com/7074/55116877-1ddaee80-50e9-11e9-8ad2-92b889654f79.png">
### Expected behaviour
The cursorline shouldn't have underline styling applied when within changed regions in diff mode, unless `CursorLine` actually has underline style turned on. | compatibility,bug-regression,display,core,highlight | low | Major |
426,221,214 | godot | Remember the last used folder for materials and others | **Godot version:**
3.1
**OS/device including version:**
Windows 10 x64 / NVIDIA GeForce GTX 1060 6GB / Intel Core i7 4790K
**Issue description:**
If I do repetitive tasks like inserting materials that live in subfolders, Godot always returns to the `res://` folder, so for each operation I have to repeat all the steps until I find that folder again.
**Steps to reproduce:**
1) Load a material (which is located on a subfolder) to a mesh
2) Repeat the process to other mesh and you'll have to repeat all steps to find the subfolder.
| enhancement,topic:editor,usability | low | Minor |
426,226,498 | flutter | [Inspector] Widget mode tool tips don't respect SafeArea | From @willlarche
Using inspector widget mode on iPhone X+, the notch occludes the tool tip for selected widgets in that general area.

They should respect the MediaQuery padding (SafeArea).
Moving this from: https://github.com/flutter/flutter-intellij/issues/2677
| framework,f: inspector,P2,team-framework,triaged-framework | low | Minor |
426,229,308 | go | x/build/cmd/gopherbot: doesn't close re-opened backport issues | Issue #29270 was filed about gopherbot being too aggressive about closing backport issues. It fought humans (more than one time) who tried to re-open a backport issue. /cc @bradfitz
The fix to that issue was to have gopherbot check for a `gi.HasEvent("reopened")` event and not consider any re-opened backport issues. It worked to resolve that issue, but it doesn't take into account a situation where a backport issue is re-opened (e.g., because the fix is reverted) and then resolved via another future CL.
This happened in #30266.
Until this is resolved, the workaround is to manually close a backport issue. This situation happens very infrequently, so this is not a high priority bug. | help wanted,Builders,NeedsFix | low | Critical |
426,234,100 | go | net/http/httputil: Reverse Proxy X-Forwarded-For include Port | In the reverse proxy the port is not being added to the `X-Forwarded-For` header. The port is not required, but it is allowed. From RFC 7239:
```
If the server receiving proxied requests requires some
address-based functionality, this parameter MAY instead contain an IP
address (and, potentially, a port number).
```
As the port currently isn't being included by default, and it is something I need, I am explicitly setting the header. The result is I end up with `X-Forwarded-For: 123.34.567.89:9876, 123.34.567.89`. It would be great to either add an option to include the ports or do a check that the value about to be added doesn't already exist in the string (duplicate without the port).
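A sketch of the duplicate check suggested here, written as a standalone helper rather than a patch to `reverseproxy.go` (the package and function names are made up for illustration):
```go
// Package proxyutil is a hypothetical home for this helper.
package proxyutil

import (
	"net"
	"strings"
)

// alreadyForwarded reports whether an existing X-Forwarded-For value already
// names clientIP, with or without a port, so a proxy could skip appending a
// second, port-less copy of the same address.
func alreadyForwarded(prior []string, clientIP string) bool {
	for _, v := range prior {
		for _, entry := range strings.Split(v, ",") {
			entry = strings.TrimSpace(entry)
			if host, _, err := net.SplitHostPort(entry); err == nil {
				entry = host
			}
			if entry == clientIP {
				return true
			}
		}
	}
	return false
}
```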
https://github.com/golang/go/blob/3089d189569ed272eaf2bc6c4330e848a46e9999/src/net/http/httputil/reverseproxy.go#L249-L257 | help wanted,NeedsFix,FeatureRequest | low | Minor |
426,251,350 | go | math/big: optimize amd64 asm shlVU and shrVU for shift==0 case | When `shift == 0`, `shlVU` and `shrVU` reduce to a memcopy. When `z.ptr == x.ptr`, it further reduces to a no-op. The pure Go implementation has these optimizations, as of https://go-review.googlesource.com/c/go/+/164967. The arm64 implementation has one of them (see https://github.com/golang/go/issues/31084#issuecomment-477406354). We should add both to the amd64 implementation.
cc @griesemer
| Performance,help wanted,NeedsFix | low | Minor |
426,300,927 | TypeScript | [Design Policy] Consider JSDoc feature parity with Typescript | ## Search Terms
jsdoc parity, jsdoc equivalence, jsdoc
## Suggestion
The JSDoc mode of TypeScript is very useful in cases where a build step isn't desired (for Node libraries, for example) or when other constraints require staying in plain JS. However, it can be annoying to try to implement something that can't be expressed in JSDoc mode because it requires TypeScript syntax.
As such, I would like to propose that TypeScript ensure that anything that can be written inside a `.ts` file can be expressed (in at least some way) within a pure JavaScript + JSDoc file.
In order to get an idea of the current scope needed for feature parity, this is a list of issues and features that break parity between the two modes (if any are missing, just say so and I'll add them to the list):
- <s>[Bug?] No way to express the `object` type
- Currently in JSDoc `/** @type {object} */` is equivalent to `/** @type {any} */`, there doesn't seem to be any way to represent `const x: object` purely in JS + JSDoc, this seems like a bug.</s> Fixed
- `interface`
- `abstract class`
- <s>`protected`/`private` members</s> Fixed
- ~function overloading~ v5.0
- ~[open issue](https://github.com/Microsoft/TypeScript/issues/25590)~
- ~defaults for generics~
- ~[open issue](https://github.com/Microsoft/TypeScript/issues/29401)~
- `declare` syntax in it's various forms and declaration merging
- `declare global { ... }`
- `declare module "moduleName"`
- `declare class Foo`
- `declare interface`
- `declare namespace`
- `namespace`
- `enum`
- /** @enum */ is not quite equivalent
- ~`as const`~ v4.5
- ~[open issue](https://github.com/Microsoft/TypeScript/issues/30445)~
- non-null assertion `expr!`
## Checklist
My suggestion meets these guidelines:
* [โ] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [โ] This wouldn't change the runtime behavior of existing JavaScript code
* [โ] This could be implemented without emitting different JS based on the types of the expressions
* [โ] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [โ] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Meta-Issue | medium | Critical |
426,307,198 | TypeScript | Type narrowing on object properties lost in async IIFE |
**TypeScript Version:** 3.4.0-dev.20190327
**Search Terms:**
narrowing iife control flow
**Code**
```ts
declare const x: string | undefined;
declare const y: { z: string | undefined };
function needsString(it: string) { return it; }

function a() {
    if (!x) {
        throw new Error("Missing x");
    }
    const res1 = (() => needsString(x))(); // all good, per #8849
    const res2 = (async () => needsString(x))(); // ditto

    if (!y.z) {
        throw new Error("Missing z.")
    }

    const res3 = (() => needsString(y.z))(); // still good
    const res4 = (async () => needsString(y.z))(); // now things blow up
}
```
**Expected behavior:**
The call to `needsString` in the expression for res4 succeeds, like the one in `res2` and `res3`. Even though the function is async, the code in it -- at least that's before an await -- runs immediately, I believe, and so should be subject to the narrowing (esp. given that `res2` works).
**Actual behavior:**
Error for `needsString(y.z)`: Argument of type 'string | undefined' is not assignable to parameter of type 'string'.
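For what it's worth, copying the narrowed property into a local constant avoids the error, which points at the narrowing being lost specifically for property accesses inside async function bodies; a small sketch that could be appended to the example above:
```ts
const z = y.z; // after the check above, this is inferred as string
const res5 = (async () => needsString(z))(); // compiles fine
```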
**Playground Link:** https://www.typescriptlang.org/play/index.html#src=declare%20const%20x%3A%20string%20%7C%20undefined%3B%0Adeclare%20const%20y%3A%20%7B%20z%3A%20string%20%7C%20undefined%20%7D%3B%0Afunction%20needsString(it%3A%20string)%20%7B%20return%20it%3B%20%7D%0A%0Afunction%20a()%20%7B%0A%20%20if%20(!x)%20%7B%0A%20%20%20%20throw%20new%20Error(%22Missing%20x%22)%3B%0A%20%20%7D%0A%20%20const%20res1%20%3D%20(()%20%3D%3E%20needsString(x))()%3B%20%2F%2F%20all%20good%2C%20per%20%238849%0A%20%20const%20res2%20%3D%20(async%20()%20%3D%3E%20needsString(x))()%3B%20%20%2F%2F%20ditto%0A%0A%20%20if%20(!y.z)%20%7B%0A%20%20%20%20throw%20new%20Error(%22Missing%20z.%22)%0A%20%20%7D%0A%0A%20%20const%20res3%20%3D%20(()%20%3D%3E%20needsString(y.z))()%3B%20%20%2F%2F%20still%20good%0A%20%20const%20res4%20%3D%20(async%20()%20%3D%3E%20needsString(y.z))()%3B%20%20%2F%2F%20now%20things%20blow%20up%20%20%0A%7D
**Related Issues:**
https://github.com/Microsoft/TypeScript/pull/8849, which I think was supposed to fix IIFE issues like this, but appears to have missed a case (or I'm missing something). | Suggestion,Experience Enhancement | low | Critical |
426,355,918 | rust | Rust generate erroneous debug line information for non-local panic handlers | Related to https://github.com/rust-lang/rust/issues/55352, Rustc 1.33 generates incorrect debugging information for panic handlers not in the current crate:
Compiling:
```rust
// Taken from https://github.com/rust-lang/rust/issues/55352
pub struct Person {
    pub name: String,
    pub age: u32
}

pub fn get_age() -> u32 {
    42
}

pub fn create_bob() -> Person {
    Person {
        name: "Bob".to_string(),
        age: get_age()
    }
}

#[cfg(test)]
mod tests {
    use super::create_bob;
    use super::get_age;

    #[test]
    fn test_create_bob() {
        create_bob();
    }

    #[test]
    fn test_get_age() {
        assert_eq!(get_age(), 42);
    }
}
```
as `src/lib.rs`, Rust generates a `.debug_line` section, which contains the following (from `readelf --debug-dump=decodedline`):
```
CU: /rustc/2aa4c46cfdd726e97360c2734835aa3515e8c858/src/libtest/lib.rs:
File name Line number Starting address
lib.rs 330 0x1e580
lib.rs 331 0x1e587
lib.rs 0 0x1e590
./<::core::macros::assert_eq macros>:[++]
<::core::macros::assert_eq macros> 14 0x1e597
<::core::macros::assert_eq macros> 15 0x1e5a6
<::core::macros::assert_eq macros> 15 0x1e5b0
<::core::macros::assert_eq macros> 16 0x1e5bd
<::core::macros::assert_eq macros> 16 0x1e5c4
<::core::macros::assert_eq macros> 16 0x1e5cc
<::core::macros::assert_eq macros> 16 0x1e5d2
<::core::macros::assert_eq macros> 16 0x1e5d6
<::core::macros::assert_eq macros> 0 0x1e5de
/rustc/2aa4c46cfdd726e97360c2734835aa3515e8c858/src/libtest/lib.rs:
lib.rs 335 0x1e5e5
src/lib.rs:
lib.rs 1 0x1e5f2
```
This means that the `/rustc/2aa4c46cfdd726e97360c2734835aa3515e8c858/src/libtest/lib.rs` compilation unit points a debug line at line 1 of `src/lib.rs`
Reading through the generated assembly (via `objdump -S -d`)
```
000000000001e580 <_ZN4test18assert_test_result17haf3989bfec92a9a7E>:
1e580: 48 81 ec 68 01 00 00 sub $0x168,%rsp
1e587: e8 54 e4 ff ff callq 1c9e0 <_ZN54_$LT$$LP$$RP$$u20$as$u20$std..process..Termination$GT$6report17h05d35b96322ac0d1E>
1e58c: 89 44 24 64 mov %eax,0x64(%rsp)
1e590: 48 8d 05 25 dd 0a 00 lea 0xadd25(%rip),%rax # cc2bc <_IO_stdin_used+0xfc>
1e597: 48 8d 4c 24 64 lea 0x64(%rsp),%rcx
1e59c: 48 89 4c 24 68 mov %rcx,0x68(%rsp)
1e5a1: 48 89 44 24 70 mov %rax,0x70(%rsp)
1e5a6: 48 8b 44 24 68 mov 0x68(%rsp),%rax
1e5ab: 48 89 44 24 78 mov %rax,0x78(%rsp)
1e5b0: 48 8b 44 24 70 mov 0x70(%rsp),%rax
1e5b5: 48 89 84 24 80 00 00 mov %rax,0x80(%rsp)
1e5bc: 00
1e5bd: 48 8b 44 24 78 mov 0x78(%rsp),%rax
1e5c2: 8b 10 mov (%rax),%edx
1e5c4: 48 8b 84 24 80 00 00 mov 0x80(%rsp),%rax
1e5cb: 00
1e5cc: 3b 10 cmp (%rax),%edx
1e5ce: 40 0f 94 c6 sete %sil
1e5d2: 40 80 f6 ff xor $0xff,%sil
1e5d6: 40 f6 c6 01 test $0x1,%sil
1e5da: 75 02 jne 1e5de <_ZN4test18assert_test_result17haf3989bfec92a9a7E+0x5e>
1e5dc: eb 3d jmp 1e61b <_ZN4test18assert_test_result17haf3989bfec92a9a7E+0x9b>
1e5de: 48 8d 35 3b 15 0a 00 lea 0xa153b(%rip),%rsi # bfb20 <_ZN4core3fmt3num52_$LT$impl$u20$core..fmt..Display$u20$for$u20$i32$GT$3fmt17ha3b2ffd94f72c608E>
1e5e5: 48 8d 44 24 64 lea 0x64(%rsp),%rax
1e5ea: 48 89 84 24 40 01 00 mov %rax,0x140(%rsp)
1e5f1: 00
// Taken from https://github.com/rust-lang/rust/issues/55352
1e5f2: 48 8b 84 24 40 01 00 mov 0x140(%rsp),%rax
```
`1e5f2` is part of the panic handler that runs if a test fails.
Thus it's not possible to cover the code without having a test fail! | A-debuginfo,T-compiler,E-help-wanted,C-bug | low | Critical |
426,500,666 | godot | Can't use `new()` or reference own class name in static function (fixed in `master`) | IMO the title is pretty self-explanatory. I'll provide some of the code I was trying to write:
```gdscript
# @param string|null name
# @param int|BaseType type
static func build(name, type = 0): # Argument|int
    # --- SOME CODE ---
    return new(name, type)
```
Now I get: Parser Error: Method 'new' is not declared in the current class.
I guess I could create a separate class, but when you have only a 10-line method in each class it may create a great deal of overhead, doubling the loading of those code chains. | bug,topic:gdscript,confirmed | medium | Critical |
426,505,396 | rust | eventual goal: re-remove leak-check from compiler | PR #58592 re-added the so-called leak check.
Re-adding the leak-check masked a number of bugs, where you need to use `-Z no-leak-check` to observe them again.
We plan to eventually re-remove the leak check. But before doing so, we need to double-check that all of the aforementioned masked bugs are either fixed, or at the very least revisited in terms of determining their priority in a leak-check-free world. | T-compiler,C-tracking-issue,S-tracking-unimplemented,T-types | low | Critical |
426,509,598 | flutter | Add shadow to an image or icon | Hi all! I searched for an answer on how to do this I didn't find it. I tried copying [this method](https://github.com/flutter/flutter/issues/3402) used for shadowing text, but couldn't make it work for my image.
Basically, I had an image of an icon (like the one below). I wanted to use it as a button (the iOS style), coloring it and then shadowing it so it would stand out.
I looked for shadows already in flutter, but I only found square or rounded shadows, but I wanted the shadow to copy the image content (when it wasn't transparent).
As I said I tried to copy the above method used for texts, but couldn't make it work correctly. The shadow didn't show in the correct place (I was using a Stack and two Positioned, one for the image and one for the shadow).

I think it would be great if this could be done directly by flutter as a new Widget or Decoration. | c: new feature,framework,P3,team-framework,triaged-framework | medium | Critical |
426,523,759 | flutter | webview_flutter black screen | 
| e: device-specific,framework,f: routes,p: webview,package,c: rendering,team-ecosystem,has reproducible steps,P2,found in release: 2.2,found in release: 2.5,triaged-ecosystem | low | Critical |
426,532,318 | flutter | Return data with popUntil | Navigator.pop supports returning data, I wonder why popUntil doesn't?
Ideally it should also accept a result argument, and pass it to the pop method it calls.
Would the Flutter team accept such a proposal? | c: new feature,framework,f: routes,P3,team-framework,triaged-framework | high | Critical |
426,573,772 | react | Noscript tags no longer rendering components in 16.5.0 | **Do you want to request a *feature* or report a *bug*?**
Bug
**What is the current behavior?**
Starting in version 16.4.3, the following code:
```
<noscript>
<a href="/cat">Cat</a>
<a href="/dog">Dog</a>
</noscript>
```
is being rendered in the browser as:
```
<noscript></noscript>
```
**If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem.**
Prior to 16.4.3: https://codesandbox.io/embed/5mww4nzpwp
After 16.4.3: https://codesandbox.io/embed/6v8m4yo303
(The changes are not visible, but if you `inspect element` you can see that, in the first example, the links are being rendered, and in the second example they're not being rendered.)
**What is the expected behavior?**
It should render in the browser the same as in the code:
```
<noscript>
<a href="/cat">Cat</a>
<a href="/dog">Dog</a>
</noscript>
```
**Which versions of React, and which browser / OS are affected by this issue? Did this work in previous versions of React?**
See above, it was working in versions prior to 16.4.3. (I couldn't find a previous issue mentioning this bug. I think it could have been introduced in the fix for https://github.com/facebook/react/issues/11423)
**Why is this a problem?**
I use a snapshot tool with React to generate a set of static pages from a React project. These pages have less functionality than the full application, but they allow webcrawlers and users who have disabled JavaScript to use the site at a basic level. For example, the code might look like this:
```
<FancyInteractiveButton linksTo="page">link</FancyInteractiveButton>
<noscript>
<a href="page">link</a>
</noscript>
```
Preventing components in `<noscript>` tags from rendering breaks this functionality for users with JavaScript disabled. The generated snapshots no longer contain the links. It also makes the site harder to navigate by webcrawlers, even if they have JavaScript enabled, because they have to be smart enough to use the fancy button instead of following the link. | Type: Discussion | low | Critical |
426,597,099 | react | Dancing between state and effects - a real-world use case | I started this as a gist but Dan mentioned that this would be a good discussion issue so here goes. I've been writing with and refactoring code into hooks for a while now. For 95% of code, they are great and very straight-forward once you get the basic idea. There are still a few more complex cases where I struggle with the right answer though. This is an attempt to explain them.
## The use case
This is a real-world use case from an app I'm building: interacting with a list of items. I've simplified the examples into codesandboxes though to illustrate the basic idea.
Here's the first one: https://codesandbox.io/s/lx55q0v3qz. It renders a list of items, and if you click on any of them, an editable input will appear to edit it (it doesn't save yet). The colored box on the right will change whenever an item rerenders.
If you click around in the items, you can see that when changing the edited item, all items rerender. But the `Row` component is wrapped with `React.memo`! They all rerender because the `onEdit` is new each time the app renders, causing all items to rerender.
## Maintaining callback identity
We want `onEdit` to be the same function for all future renders. In this case, it's easy because it doesn't depend on anything. We can simply wrap it in `useCallback` with an empty dependencies array:
```js
let onEdit = useCallback(id => {
setEditingId(id);
}, []);
```
Now, you can see clicking around only rerenders the necessary items (only those colors change): https://codesandbox.io/s/k33klz68yr
## Implementing saving
We're missing a crucial feature: after editing an item, on blur it should save the value. In my app the way the "save" event gets triggered is different, but doing it on blur is fine here.
To do this, we create an `onSave` callback in the app and pass it down to each item, which calls it on blur with the new value. `onSave` takes a new item and updates the items array with the new item and sets the `items` state.
Here it is running: https://codesandbox.io/s/yvl79qj5vj
You'll notice that all items are rerendering again when saving. The list rerenders twice when you click another item: first when you click down and the input loses focus, and then again to edit a different item. So all the colors change once, and then only the editing rows color changes again.
The reason all of them are rerendering is because `onSave` is now a new callback every render. But we can't fix it with the same technique as `onEdit` because it depends on `items` - so we *have* to create a new callback which closes over `items` otherwise you'd lose previous edits. This is the "callbacks are recreated too many times" problem with hooks.
One solution is to switch to `useReducer`. Here's that implementation:
https://codesandbox.io/s/nrq5y77kj0
Note that I still wrap up the reducer into `onEdit` and `onSave` callbacks that are passed down to the row. I find passing callbacks to be clearer in most cases, and works with any components in the ecosystem that already expect callbacks. We can simply use `useCallback` with no dependencies though since `dispatch` is always the same.
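The wrappers described here are tiny; a sketch, assuming the `dispatch` returned by `useReducer` and the action shapes handled by the reducer shown below:
```js
let onEdit = useCallback(id => dispatch({ type: "edit-item", id }), []);
let onSave = useCallback(item => dispatch({ type: "save-item", item }), []);
```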
Note how, even when saving an item, only the necessary rows rerender.
## The difference between event handlers and dispatch
There's a problem though. This works with a simple demo, but in my real app `onSave` *both* optimistically updates local state and saves it off to the server. It does a side effect.
It's something like this:
```js
async function onSave(transaction) {
let { changes, newTransactions } = updateTransaction(transactions, transaction);
// optimistic update
setTransactions(newTransactions)
// save to server
await postToServer('apply-changes', { changes })
}
```
There's a big difference between the phase in which an event handler runs and the phase in which a dispatched action runs. Event handlers run whenever they are triggered (naturally), but dispatching the action (running the `reducer`) happens while rendering. The reducer must be pure because of this.
Here's the reducer from https://codesandbox.io/s/nrq5y77kj0:
```js
function reducer(state, action) {
switch (action.type) {
case "save-item": {
let { item } = action;
return {
...state,
items: state.items.map(it => (it.id === item.id ? item : it))
};
}
case "edit-item": {
return { ...state, editingId: action.id };
}
}
}
```
How is `save-item` also supposed to trigger a side effect? First, it's important to understand these 3 phases:
```
Event handler -> render -> commit
```
Events are run in the first phase, which causes a render (when dispatches happen), and when everything is finally ready to be flushed to the DOM it does it in a "commit" phase, which is when all effects are run (more or less).
We need our side effect to run in the commit phase.
### Option 1
One option is to use a ref to "mark" the saving effect to be run. Here's the code: https://codesandbox.io/s/m5xrrm4ym8
Basically you create a flag as a ref:
```js
let shouldSave = useRef(false);
```
Luckily, we've already wrapped the save dispatch into an event handler. Inside `onSave` we mark this flag as true. We can't do it inside of the reducer because it must be pure:
```js
let onSave = useCallback(item => {
shouldSave.current = true;
dispatch({ type: "save-item", item });
}, []);
```
Finally, we define an effect that always runs after rendering and checks the flag and resets it:
```js
useEffect(() => {
if (shouldSave.current) {
// save... all the items to the server?
post(items)
shouldSave.current = false;
}
});
```
I thought this option was going to work, but just ran into this issue. We don't know *what* to save anymore. We certainly don't want to send the entire items array to the server! I suppose we could store the item in the ref, but what happens if multiple events are fired before the effect runs? I suppose you could store an array, but... do we really need that?
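For what it's worth, here is a sketch of the "store an array" variant mentioned above: queue the pending items in the ref and flush them from the effect (using the same placeholder `post` helper as the other snippets):
```js
let pendingSaves = useRef([]);

let onSave = useCallback(item => {
  pendingSaves.current.push(item);
  dispatch({ type: "save-item", item });
}, []);

useEffect(() => {
  if (pendingSaves.current.length > 0) {
    let toSave = pendingSaves.current;
    pendingSaves.current = [];
    toSave.forEach(item => post("/save-item", { item }));
  }
});
```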
### Option 2
**Note**: I just noticed this option is documented in [How to read an often-changing value from useCallback?](https://reactjs.org/docs/hooks-faq.html#how-to-read-an-often-changing-value-from-usecallback), but I disagree with the tone used. I think this is a fine pattern an better in many cases than `dispatch`, even if it's not quite as robust. Especially since it's not as powerful as callbacks. (see end of this section)
Keeping around all of the data we need to do the effect might work in some cases, but it feels a little clunky. If we could "queue up" effect from the reducer, that would work, but we can't do that. Instead, another option is to embrace callbacks.
Going back to the version which used a naive `onSave` which forced all items to rerender (https://codesandbox.io/s/yvl79qj5vj), `onSave` looks like this:
```js
let onSave = useCallback(
item => {
setItems(items.map(it => (it.id === item.id ? item : it)));
},
[items]
);
```
The core problem is that it depends on items. We need to recreate `onSave` because it closes over `items`. But what if it didn't close over it? Instead, let's create a ref:
```js
let latestItems = useRef(items);
```
And an effect which keeps it up-to-date with items:
```js
useEffect(() => {
latestItems.current = items
});
```
Now, the `onSave` callback can reference the ref to always get the up-to-date items. Which means we can memoize it with `useCallback`:
```js
let onSave = useCallback(item => {
setItems(latestItems.current.map(it => (it.id === item.id ? item : it)));
}, []);
```
We are **intentionally** opting to always reference the latest items. The biggest change with hooks in my opinion is that they are safe by default: an async function will always reference the exact same state that existed at the time it was called. Classes operate the other way: you access state from `this.state`, which can be mutated between async work. Sometimes you want that, though, so you can maintain callback identity.
Here is the running sandbox for it: https://codesandbox.io/s/0129jop840. Notice how you can edit items and only the necessary rows rerender, even though it updates `items`. Now, we can do anything we want in our callback, like posting to a server:
```js
let onSave = useCallback(item => {
setItems(latestItems.current.map(it => (it.id === item.id ? item : it)));
// save to server
post('/save-item', { item })
}, []);
```
Basically, if all you need is the latest data since last commit, **callbacks can be memoized as well as reducers**. The drawback is that you need to put each piece of data you need in a ref. If you have lots of pieces of data and only a few simple effects, reducers would be better, but in my case (and I suspect in many others) it's easier to use callbacks with refs.
It's nice too because in my real app the save process is more complicated. It needs to get changes back from the server and apply them locally as well, so it looks more like this:
```js
let onSave = useCallback(async item => {
setItems(latestItems.current.map(it => (it.id === item.id ? item : it)));
// save to server
let changes = await post('/save-item', { item })
applyChanges(latestItems.current, changes)
}, []);
```
Maintainability-wise, it's *really* nice to see this whole flow here in one place. Breaking this up to try to manually queue up effects and do a dance with `useReducer` feels much more convoluted.
### Option 3
I suppose another option would be to try to "mark" the effect to be run in state itself. That way you could do it in `useReducer` as well, and it would look something like this:
```js
function reducer(state, action) {
switch (action.type) {
case "save-item": {
let { item } = action;
return {
...state,
items: state.items.map(it => (it.id === item.id ? item : it)),
itemsToSave: state.itemsToSave.concat([item])
};
}
// ...
}
}
```
And an effect would check the `itemsToSave` state and save them off. The problem is resetting that state: the effect would have to change state, causing a useless rerender, and there's no deterministic way to make sure that the effect does not run multiple times before `itemsToSave` gets reset.
In my experience mixing effects into state, causing renders, make things a lot more difficult to maintain and debug.
### What's the difference between Option 1 and 2?
Is there a crucial difference between 1 and 2? Yes, but I'd argue it's not a big deal if you can accept it. Remember these three phases:
```
Event handler -> render -> commit
```
The big difference is option 1 is doing the side effect in the commit phase, and option 2 is doing it in the event handler phase. Why does this matter?
If, for some reason, an item called `onSave` multiple times before the next commit phase happened, option 1 is more robust. A reducer will "queue up" the actions and run them in order, updating state in between them, so if you did:
```js
onSave({ id: 1, name: "Foo" })
onSave({ id: 2, name: "Bar" })
```
which runs the callback twice immediately, the reducer will process the first save and update the items, and process the second save **passing in the already updated state**.
However, with option 2, when processing the second save **the commit phase hasn't been run yet** so the `latestItems` ref hasn't been updated yet. **The first save will be lost**.
However, the ergonomics of option 2 is much better for many use cases, and I think it's fine to weight these benefits and never need the ability to handle such quick updates. Although concurrent mode might introduce some interesting arguments against that.
## Another small use case for triggering effects
In case this wasn't already long enough, there's a similar use case I'll describe quickly. You can also add new items to the list by editing data in an empty row, and the state of this "new item" is tracked separately. "Saving" this item doesn't touch the backend, but simply updates the local state, and a separate explicit "add" action is needed to add it to the list.
The hard part is that there is a keybinding for adding the item to the list while editing - something like alt+enter. The problem is that I want to coordinate with the state change: first save the existing field and *then* add the item to the list. The saving process is complicated, so it needs to run through that first (I can't just duplicate it all in `onAdd`).
This isn't a problem specific to hooks; it's just about coordinating with state changes. When I was working with reducers, I had thought that something like this would be neat: basically, when the new item detects that you want to save + add, it first fires an action like `{ type: 'save-item', fields: { name: 'Foo' }, shouldAdd: true }`:
```js
function reducer(state, action) {
  switch (action.type) {
    case "save-item": {
      let { fields } = action;
      let newItem = { ...state.newItem, ...fields };
      if (action.shouldAdd) {
        shouldAdd.current = true;
      }
      return { ...state, newItem };
    }
    // ...
  }
}
```
where `shouldAdd` is a ref that is checked in the commit phase and saves the item off to the server. This isn't possible though.
Another option would be for the item to call `onAdd` instead of `onSave` when saving + adding, and you could manually call the reducer yourself to process the changes:
```js
async function onAdd(fields) {
  let action = { type: 'save-item', fields };
  dispatch(action);
  let { newItem } = reducer(state, action);
  post('/add-item', { newItem });
}
```
This is kind of a neat trick: you are manually running the reducer to get the updated state, and React will run the reducer again whenever it wants.
Since I ended up liking callbacks for my original problem, I went with a similar approach where I have a ref flag that I just set in `onSave`:
```js
let [newItem, setNewItem] = useState({});
let latestNewItem = useRef(newItem);
let shouldAdd = useRef(false);

useEffect(() => {
  latestNewItem.current = newItem;
});

useEffect(() => {
  if (shouldAdd.current) {
    setNewItem({});
    post('/add-item', { newItem });
    shouldAdd.current = false;
  }
});

let onSave = useCallback((fields, { add }) => {
  // In my real app, applying the changes to the current item is a bit more complicated than this,
  // so it's not an option to separate out an `onAdd` function that duplicates this logic
  setNewItem({ ...latestNewItem.current, ...fields });
  // This action should also add, so mark the effect to be run
  if (add) {
    shouldAdd.current = true;
  }
}, []);
```
## Conclusions
Sorry for the length of this. I figured I'd rather be over-detailed than under-detailed, and I've been brewing these thoughts since hooks came out. I'll try to conclude my thoughts here:
* Effects are **very nice**. It feels like we have easy access to the "commit" phase of React, whereas previously it was all in `componentDidUpdate` and not composable at all. Now it's super easy to add code to the commit phase, which makes coordinating things with state easier.
* Reducers have interesting properties, and I can see how they are fully robust in a concurrent world, but for many cases they are too limited. The ergonomics of implementing many effect-ful workflows with them requires an awkward dance, kind of like when you try to track effect states in local state and split up workflows. Keeping a linear workflow in a callback is not only nice, but necessary in many cases for maintainability.
* Callbacks can be made memoizable without much work. In many cases I think it's easier to use the ref trick than reducers, but the question is: just *how* dangerous is it? Right now it's not that dangerous, but maybe concurrent mode really is going to break it.
* If that's the case, we should figure out a better way to weave together effects and state changes.
I hope all of this made sense. Let me know if something is unclear and I'll try to fix it. | Type: Discussion | high | Critical |
426,648,338 | opencv | custom deallocator | ##### Detailed description
The `cv::Mat` constructor that takes user-allocated memory should allow the user to provide a custom deallocator function that is called when the `cv::Mat` goes out of scope.
When a `cv::Mat` is constructed from a pointer to data and passed around, it is hard to keep track of the original object whose data the `cv::Mat` was created from. This becomes even more difficult if the data is part of another object and that object uses a custom deallocator, e.g. `delete_<object>()`, instead of the usual `free()` call. | feature,category: core | low | Minor |
426,652,677 | opencv | Mat::copyTo should not re-allocate existing data | follow up of the discussion in #14175.
Currently `Mat::copyTo` will re-allocate the `dst` storage if dimensions or type mismatch (e.g. if src is larger, but also if dst is larger).
This is typically not the desired behavior and should lead to an assertion, which was already implemented in 62ed6cdc742db2f3065f2c472dc8216970d130b9. However, the respective code is currently disabled (defined out).
The implemented solution notably excludes creation of `dst` which *is* a typical use case for OutputArrays as in
> use the pre-allocated memory for output or allocate it initially
The other solutions discussed in #14175 are not desirable:
- > copyTo(): always allocate new dst
same as clone(). Does not allow for the valid use-case described above
- > copyInto(): forbid allocation/re-allocation of dst
  too strict, as discussed above. It would also be a new API that is basically copyTo. | category: core,RFC | low | Minor |
426,683,511 | TypeScript | If node_modules is excluded by default, it should not be part of a confusing documentation example | Hi,
This is not a code issue, but documentation that leads to confusion. By default the node_modules folder is excluded by tsc. This is good (I guess), but the documentation at https://www.typescriptlang.org/docs/handbook/tsconfig-json.html has an example that uses node_modules as a folder to exclude. Personally I think this example was badly picked, because it leads one to think that you need to exclude "node_modules" explicitly (which I was doing in all my projects for that reason). You need to read the full document to notice that it is already excluded by default. It is confusing.
PS: This has led to confusion elsewhere, as you can see in another TypeScript project (https://github.com/TypeStrong/ts-loader/issues/278)
Thanks for the great typescript thing by the way!
| Docs | low | Minor |
426,695,212 | flutter | FloatingActionButton tap recognition has clicking problems with Transform | I am building a complex action menu consisting of multiple FloatingActionButtons. Unfortunately, if a FAB uses a Transform, there are problems with tapping. If the example below is executed, you will see that the button can be pressed on its lower half, but not on its upper half. It seems to me that you can only tap in the area that is both inside the "normal" FAB location and inside the FAB as it is currently drawn.
## Steps to Reproduce
Tested within android emulator:
```dart
import 'package:flutter/material.dart';
class TestView extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      floatingActionButtonLocation: FloatingActionButtonLocation.centerDocked,
      floatingActionButton: Transform(
        transform: Matrix4.identity()..translate(0.0, -32.0),
        child: FloatingActionButton(
          child: const Icon(Icons.add),
          onPressed: () => actionButtonPressed(),
        ),
      ),
      body: Text(''),
    );
  }

  void actionButtonPressed() {
    print('pressed');
  }
}
```
## Flutter Doctor
```
[โ] Flutter (Channel stable, v1.2.1, on Microsoft Windows [Version 6.1.7601], locale de-DE)
โข Flutter version 1.2.1 at E:\flutter
โข Framework revision 8661d8aecd (6 weeks ago), 2019-02-14 19:19:53 -0800
โข Engine revision 3757390fa4
โข Dart version 2.1.2 (build 2.1.2-dev.0.0 0a7dcf17eb)
[โ] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
โข Android SDK at C:\Users\T\AppData\Local\Android\android-sdk
โข Android NDK location not configured (optional; useful for native profiling support)
โข Platform android-28, build-tools 28.0.3
โข ANDROID_HOME = C:\Users\T\AppData\Local\Android\android-sdk
โข Java binary at: F:\androidstudio3\android-studio\jre\bin\java
โข Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1248-b01)
โข All Android licenses accepted.
[โ] Connected device (1 available)
โข Android SDK built for x86 โข emulator-5554 โข android-x86 โข Android 7.1.1 (API 25) (emulator)
```
| framework,f: material design,f: gestures,has reproducible steps,P2,found in release: 3.3,found in release: 3.6,team-design,triaged-design | low | Major |
426,700,386 | TypeScript | `readonly` mapped type modifiers for `Map` / `ReadonlyMap` | ## Search Terms
ReadonlyMap
ReadWrite
Writable
## Suggestion
3.4.0-rc introduced the ability to use readonly mapped type modifiers with arrays.
(https://devblogs.microsoft.com/typescript/announcing-typescript-3-4-rc/)
It would be useful if the same syntax worked on `Map` / `ReadonlyMap`.
## Use Cases
This is useful in any library dealing with immutable versions of types. It would allow mapped types to easily convert between Map and ReadonlyMap.
## Examples
```
type ReadWrite<T> = { -readonly [P in keyof T]: T[P] };
const map: ReadWrite<ReadonlyMap<string, string>> = new Map();
map.set('a', 'c'); // this should work, since readonly has been removed.
```
```
type Readonly<T> = { readonly [P in keyof T]: T[P] };
const map: Readonly<Map<string, string>> = new Map();
map.set('a', 'c'); // this should not work, since readonly has been applied
```
Similar examples using `Array` instead of `Map` didn't work before 3.4 and did work afterwards.
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
| Suggestion,In Discussion | low | Minor |
426,704,564 | TypeScript | Unconstrained generics are incorrectly assignable to _any_ "Partial" type | <!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. -->
**TypeScript Version:** 3.4.0-dev.201xxxxx
**Code**
```ts
declare let x: Partial<HTMLElement>;

function f<T>(y: T) {
    x = y;
}

f({ innerHTML: Symbol() }); // this is blatantly incompatible with `HTMLElement`

if (x.innerHTML) {
    x.innerHTML.toLowerCase(); // We just set it to a `symbol` and not a `string`, this will error at runtime
}
```
**Expected behavior:**
An error on `x = y`.
**Actual behavior:**
No error.
[**Playground Link**](http://www.typescriptlang.org/play/#src=declare%20let%20x%3A%20Partial%3CHTMLElement%3E%3B%0D%0A%0D%0Afunction%20f%3CT%3E(y%3A%20T)%20%7B%0D%0A%20%20%20%20x%20%3D%20y%3B%0D%0A%7D%0D%0A%0D%0Af(%7B%20innerHtml%3A%20Symbol()%20%7D)%3B%0D%0A%0D%0Aif%20(x.innerHTML)%20%7B%0D%0A%20%20%20%20x.innerHTML.toLowerCase()%3B%0D%0A%7D%0D%0A)
The root cause is this relationship in `structuredTypeRelatedTo` added way back in [this](https://github.com/Microsoft/TypeScript/pull/26517):
```ts
if (relation !== subtypeRelation && isPartialMappedType(target) && isEmptyObjectType(source)) {
    return Ternary.True;
}
```
This is unsound when `source` is the empty type resulting from the constraint of an unconstrained generic type (which, if changed to `unknown`, catches this issue!). It's unsound when it comes from a constraint at all, actually. Fixing this will break `react-redux`, whose recursive `Shared` type (which we have in our test suite) actually only checks because of this unsoundness. | Bug | low | Critical |
426,724,120 | scrcpy | Register as unlock agent | Android allows you to disable locking with SmartLock:
- when connected to a Wi-Fi network
- when connected to a Bluetooth device
- when the phone is at a trusted location
So it would be nice to disable locking while scrcpy is running. | feature request | low | Minor |
426,806,999 | opencv | Draw lines with transparency: several lines with different transparency on a color image (not using addWeighted, which is too slow - I have 500 lines) | I know there is an addWeighted method. But what I need is to draw many lines, each line with a different transparency.
That means I cannot draw each line on its own mask and then call addWeighted, because that is far too slow.
To draw a line on a BGR format image:
```
line(img_a, p1, p2, c, 2, CV_AA);
```
It does not apply alpha, even if `c` is a Scalar with an alpha component.
I converted the image from BGR to BGRA with 4 channels, but the line is still drawn fully opaque rather than with transparency.
Could anybody help me out with a simple way to do what I need?
> several lines with different transparency on a color image (not using addWeighted, which is too slow - I have 500 lines). | category: imgproc,priority: low | low | Major |
426,871,949 | go | x/tools/analysis/passes/httpresponse: check whether http.Response Body is closed. | [httpresponse](https://github.com/golang/tools/tree/master/go/analysis/passes/httpresponse) really helps you avoid mistakes when using the net/http package.
But it only checks whether your `Body.Close()` call is placed on the correct line.
So I recently created my own linter to check whether you call `Body.Close()` at all, as in the case below.
```
func f() {
	_, err := http.Get("http://example.com/") // want "response body must be closed"
	if err != nil {
		// handle error
	}
	return
}
```
https://github.com/timakin/bodyclose
After I released it, I realized it might be better if the official `httpresponse` pass supported this check as well.
So, can I contribute this feature? | NeedsInvestigation,Tools,Analysis | low | Critical |