id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---
649,872,156 |
flutter
|
Add a Sidebar widget to Cupertino for a native look on iPadOS
|
## Use case
Build iPadOS apps that look native and up to date with the new sidebar UI concept on iPad apps.
## Proposal
iPadOS 14 introduces the UI concept of a toggle sidebar, which is prominently used in a variety of Apple's own apps and surely in lots of updated 3rd-party apps to make them feel native. In SwiftUI, there is already an implementation for it:
> SwiftUI will automatically take care of showing a button to slide in your bar from the side of the screen, and also collapse it with your primary view if you're in a compact size class. If you're presenting a list inside your sidebar, it's a good idea to use the .listStyle() to give it the system-standard theme for sidebars, like this: .... (Source: [Hacking with Swift - How to add a sidebar for iPadOS](https://www.hackingwithswift.com/quick-start/swiftui/how-to-add-a-sidebar-for-ipados))
It would be fantastic to have the sidebar as a "native" Flutter widget in Cupertino.
|
c: new feature,framework,f: cupertino,P2,team-design,triaged-design
|
low
|
Major
|
649,874,667 |
excalidraw
|
Feature: Import data (local file and json link) to existing canvas
|
When we load a new JSON file or open a JSON link, it erases the existing canvas before loading the data.
There are certain use cases where we would want to keep the existing canvas and load the new data in addition.
----
#1861 & #1862 : Slightly related as the problem is erasing canvas without confirmation.
#1091 : Related as my original idea is to create a gallery of reusable drawings (and developed [an external tool](https://github.com/dai-shi/excalidraw-gallery)).
#1537 & #859 : Because system copy & paste is not implemented, this would be the only way to merge two drawings.
|
enhancement
|
low
|
Minor
|
649,934,964 |
pytorch
|
Inconsistent handling of torch.Size.__add__
|
`+ list` is allowed in JIT scripting, but not in regular (eager) execution.
```py
In [50]: def f(x: torch.Tensor) -> torch.Tensor:
...: # `x` is known to have dim -1 of size 18
...: shape = x.shape[:-1] + [6, 3]
...: return x.reshape(shape)
...:
...:
In [51]: f(torch.randn(18))
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-51-472433a8673d> in <module>
----> 1 f(torch.randn(18))
<ipython-input-50-0fb98040e0b1> in f(x)
1 def f(x: torch.Tensor) -> torch.Tensor:
2 # `x` is known to have dim -1 of size 18
----> 3 shape = x.shape[:-1] + [6, 3]
4 return x.reshape(shape)
5
TypeError: can only concatenate tuple (not "list") to tuple
In [53]: torch.jit.script(f)(torch.randn(18))
Out[53]:
tensor([[-0.1262, -0.7522, 1.0233],
[ 0.2715, -1.5179, 0.1224],
[-0.2018, 0.6756, 2.3353],
[ 0.0312, -0.0629, -0.8199],
[ 2.0379, -1.4921, 0.5088],
[ 0.1909, 1.2629, -0.6989]])
```
`TypeError: can only concatenate tuple (not "list") to tuple`
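Since `torch.Size` is a subclass of `tuple`, eager mode follows plain tuple semantics here; a torch-free sketch of the error and the usual workarounds:

```python
# torch.Size subclasses tuple, so eager mode follows tuple concatenation rules.
shape = (6,)                 # stand-in for x.shape[:-1]

try:
    shape + [6, 3]           # tuple + list raises, exactly as in the report
except TypeError as e:
    print(e)                 # can only concatenate tuple (not "list") to tuple

print(shape + (6, 3))        # workaround 1: concatenate with a tuple -> (6, 6, 3)
print(list(shape) + [6, 3])  # workaround 2: convert to a list first -> [6, 6, 3]
```

Either workaround produces a shape that `reshape` accepts in both eager and scripted modes.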
cc @suo @gmagogsfm
|
oncall: jit,weeks,TSUsability,TSRootCause:PyTorchParityGap
|
low
|
Critical
|
649,953,020 |
TypeScript
|
"Unreachable code detected" does not work for `if` clauses.
|
**TypeScript Version:** 3.9.2
**Search Terms:** unreachable
**Code**
```ts
const hel2 = (x: number) => {
switch (typeof x) {
case 'number': return 0
}
x // Unreachable code detected
}
const hel12 = (x: number) => {
if(typeof x === 'number'){
return 0;
}
x // expect same error here!
}
```
**Expected behavior:**
`x` after the `if` block is flagged as unreachable, the same as in the `switch` version (see the comments in the code).
**Actual behavior:**
No error is reported for the `if` version.
**Playground Link:**
https://www.staging-typescript.org/play?#code/MYewdgzgLgBAFgUwDYCYYF4YAoAeAuGMAVwFsAjBAJwEoMA+GAbwCgYYIB3ASymDmygBPAA4IQAMxg5aLNm2ABDCAhgByYuSqqClBFCKUwMAAysYAXzM5ml5qEixESAIxpMuAhoo16TM13EsIVEJKQx0THVSb1VqWTldfUMTAG4zSzZrSyA
**Related Issues:**
|
Suggestion,Awaiting More Feedback
|
low
|
Critical
|
649,966,890 |
next.js
|
Invalid HTML inside `dangerouslySetInnerHTML` breaks the page.
|
# Bug report
## Describe the bug
If invalid HTML is added to `dangerouslySetInnerHTML`, Next.js will output a blank page without providing any feedback. This can be hard to track when working with a CMS provider or markdown files.
## To Reproduce
Steps to reproduce the behavior, please provide code snippets or a repository:
1. Clone https://github.com/lfades/nextjs-inner-html-bug
2. Run `yarn && yarn dev` or `npm i && npm run dev`
3. See that `pages/index.js` is a blank page with no errors
## Expected behavior
Invalid HTML inside `dangerouslySetInnerHTML` should throw and/or let the user know that there's something wrong.
The demo also has an `index.html` and `index.js` in the root directory that show how the same code behaves in React alone: it doesn't produce an error either, but it does show the content.
|
good first issue
|
medium
|
Critical
|
649,967,030 |
pytorch
|
Vectorized torch.eig()
|
## 🚀 Feature
The **torch.eig()** function does not support vectorized calculation: it only takes a single matrix (n, n) as input.
Please make it support vectorized (batched) matrices of shape (*, n, n), where * is the batch dimension.
## Motivation
Even though **torch.symeig()** accepts vectorized (batched) matrices of shape (*, n, n), **torch.eig() does not**.
## Pitch
Given a tensor of shape (batch, n, n), it should output batched eigenvalues (real and imaginary parts) and batched eigenvectors.
## Alternatives
I think borrowing from the implementation of **torch.symeig()** could make this request much easier to fulfill.
## Additional context
I added a toy example:
~~~
>>> x = torch.zeros([4000,4,4])
>>> x = x + torch.eye(4)
>>> torch.symeig(x)
torch.return_types.symeig(
eigenvalues=tensor([[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.],
...,
[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.]]),
eigenvectors=tensor([]))
>>> torch.eig(x)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: invalid argument 1: A should be 2 dimensional at /pytorch/aten/src/TH/generic/THTensorLapack.cpp:193
~~~
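For contrast, NumPy's `np.linalg.eig` already broadcasts over leading batch dimensions, which is essentially the behavior requested here; a small runnable sketch (NumPy only, for illustration):

```python
import numpy as np

# NumPy's eig is already "vectorized": it accepts stacked matrices of
# shape (*, n, n) and broadcasts over the leading batch dimensions.
x = np.zeros((4000, 4, 4)) + np.eye(4)   # batch of 4000 identity matrices

w, v = np.linalg.eig(x)                  # works directly on (*, n, n)
print(w.shape)                           # (4000, 4)   eigenvalues per matrix
print(v.shape)                           # (4000, 4, 4) eigenvectors per matrix
```

Until `torch.eig` gains this, the same effect can be had by looping over the batch dimension, at a corresponding performance cost.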
cc @VitalyFedyunin @ngimel
|
module: performance,triaged,enhancement,module: vectorization
|
low
|
Critical
|
649,990,995 |
flutter
|
[tool_crash] ProcessException: The system cannot find the file specified. Command: C:\src\flutter\flutter\bin\flutter.BAT upgrade --continue --no-version-check
|
## Command
```
flutter upgrade
```
## Steps to Reproduce
1. ...
2. ...
3. ...
## Logs
```
ProcessException: ProcessException: The system cannot find the file specified.
Command: C:\src\flutter\flutter\bin\flutter.BAT upgrade --continue --no-version-check
```
```
[✓] Flutter (Channel stable, v1.17.4, on Microsoft Windows [Version 10.0.18363.900], locale en-US)
    • Flutter version 1.17.4 at C:\src\flutter\flutter
    • Framework revision 1ad9baa8b9 (2 days ago), 2020-06-30 12:53:55 -0700
    • Engine revision ee76268252
    • Dart version 2.8.4
[!] Android toolchain - develop for Android devices (Android SDK version 29.0.3)
    • Android SDK at C:\Users\Pratiksha\AppData\Local\Android\sdk
    • Platform android-29, build-tools 29.0.3
    • Java binary at: C:\Program Files\Android\Android Studio\jre\bin\java
    • Java version OpenJDK Runtime Environment (build 1.8.0_242-release-1644-b01)
    ✗ Android license status unknown.
      Try re-installing or updating your Android SDK Manager.
      See https://developer.android.com/studio/#downloads or visit
      https://flutter.dev/docs/get-started/install/windows#android-setup for detailed instructions.
[✓] Android Studio (version 4.0)
    • Android Studio at C:\Program Files\Android\Android Studio
    • Flutter plugin version 47.1.2
    • Dart plugin version 193.7361
    • Java version OpenJDK Runtime Environment (build 1.8.0_242-release-1644-b01)
[✓] VS Code (version 1.46.1)
    • VS Code at C:\Users\Pratiksha\AppData\Local\Programs\Microsoft VS Code
    • Flutter extension version 3.11.0
[!] Connected device
    ! No devices available
! Doctor found issues in 2 categories.
```
## Flutter Application Metadata
No pubspec in working directory.
|
c: crash,tool,platform-windows,P2,team-tool,triaged-tool
|
low
|
Critical
|
650,061,472 |
neovim
|
Indent alignment lines without displaying the first-level tab character
|
- `nvim --version`: NVIM v0.4.3 Build type: Release LuaJIT 2.0.5
- `vim -u DEFAULTS` (version: ) behaves differently? No
- Operating system/version: MacOS Catalina 10.15.5
- Terminal name/version: iTerm2
- `$TERM`: xterm-256color
I need indent alignment lines that do not display the first-level tab character:
```
set listchars=tab:\¦\ ,trail:·,extends:>,precedes:<,nbsp:+
set list
hi SpecialKey ctermfg=239 ctermbg=202
```
It started like this:
<img src="https://user-images.githubusercontent.com/32320149/86309990-9830e180-bc4f-11ea-89eb-86e1cd658cce.png" width="400px" alt="">
But I would like this **which does not display the first-level tab character**:
<img src="https://user-images.githubusercontent.com/32320149/86310020-a5e66700-bc4f-11ea-9f53-5c5c0dee7d05.png" width="400px" alt="">
> Why?
I think it will be more reasonable, simple, and beautiful. "Radish or cabbage; each to his own delight."
I hope I explained it clearly.
|
enhancement
|
low
|
Minor
|
650,071,802 |
youtube-dl
|
YouTube chat replay support
|
## Checklist
- [x] I'm reporting a site feature request
- [x] I've verified that I'm running youtube-dl version **2020.06.16.1**
- [x] I've searched the bugtracker for similar site feature requests including closed ones
## Description
YouTube now has "chat replay" for recorded livestreams, in the same style as Twitch, whose chat replay youtube-dl already supports extracting as a "subtitle". It would be beneficial for youtube-dl to also support this kind of extraction for YouTube since, as on Twitch, chat can form a very important part of the livestream in question. There is no existing support for this in youtube-dl, nor a similar option that I can see.
There is a Python library at https://github.com/taizan-hokuto/pytchat which may be useful for the implementation of this.
Amongst other formats, it supports output as JSON, which could simply be passed back as the output for a new "subtitle" - the same style as the Twitch chat replay.
Use case example: The archiving of a YouTube channel, including all metadata. At the moment the chat replay would not be saved, meaning there is no context for content in the video which may refer to it.
|
incomplete
|
low
|
Critical
|
650,108,359 |
angular
|
Ivy: Animation events removed when node's position is changed
|
# 🐞 bug report
### Affected Package
The issue is caused by package @angular/animations
### Is this a regression?
Yes; without Ivy, animation events are not removed.
### Description
Animation events are removed from `AnimationTransitionNamespace` when a node's position inside a `ViewContainerRef` changes. This causes an issue inside `mat-tab-group` when the tab order is changed, because displaying the content depends on animation events.
When a node's index changes, the node is detached from its old position and inserted into the new one. During detach the animation events are removed, but they are not re-added when the node is inserted at its new position.
## 🔬 Minimal Reproduction
[StackBlitz](https://stackblitz.com/github/ghostlytalamaur/angular-rearrange-tab-bug) [GitHub Repo](https://github.com/ghostlytalamaur/angular-rearrange-tab-bug)
#### Steps to reproduce:
1. Open StackBlitz
2. Enable Ivy in settings
3. Click on rearrange button
4. Click on Tab 2. There is no content.
|
area: animations,state: confirmed,P3
|
low
|
Critical
|
650,142,089 |
pytorch
|
MultiheadAttention set(-inf) cause 'Nan' in loss computing
|
## 🐛 Bug
I plan to reimplement a transformer variant model. I import MultiheadAttention from torch.nn.modules.activation.
In the encoder part, specifically the self multi-head attention, if **the whole input is padded**, the key_padding_mask parameter is all True.
"When the value is True, the corresponding value on the attention layer will be filled with -inf."
This setting leads to **NaN** in model parameters and raises ValueError("nan loss encountered").
## To Reproduce
Steps to reproduce the behavior:
1. Initialize a MultiheadAttention:
   `self.self_attn = MultiheadAttention(embed_dim=embed_dim, num_heads=nhead, dropout=dropout)`
2. In the forward() function:
   `src, attn = self.self_attn(src, src, src, attn_mask=src_mask, key_padding_mask=src_key_padding_mask)`
3. Then pass an input x. The src_key_padding_mask vector is all True; the original sentence in src is \<pad\> * max_seq_length.
I use allennlp, and this raises "ValueError: nan loss encountered".
I found that one example in the batch is entirely \<pad\>, which causes this issue.
## Expected behavior
I found some descriptions of almost the same problem in fairseq.
[fairseq](https://github.com/pytorch/fairseq/blob/master/fairseq/modules/transformer_layer.py)
line 103:
```
# anything in original attn_mask = 1, becomes -1e8
# anything in original attn_mask = 0, becomes 0
# Note that we cannot use -inf here, because at some edge cases,
# the attention weight (before softmax) for some padded element in query
# will become -inf, which results in NaN in model parameters
```
Hopefully the same practice can be adopted here.
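The root cause is easy to reproduce in isolation: a softmax over a row that is entirely -inf yields NaN, while the finite -1e8 mask that fairseq uses stays well defined. A minimal NumPy sketch (not the actual MultiheadAttention code):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))   # standard numerically-stabilized softmax
    return e / e.sum()

fully_masked_inf = np.full(4, -np.inf)   # key_padding_mask all True -> -inf fill
fully_masked_big = np.full(4, -1e8)      # fairseq-style large finite fill

# -inf - (-inf) = nan, so every entry becomes NaN and poisons the gradients
print(softmax(fully_masked_inf))         # -> all NaN

# with a finite fill the row degrades gracefully to a uniform distribution
print(softmax(fully_masked_big))         # -> [0.25 0.25 0.25 0.25]
```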
## Environment
- PyTorch Version (e.g., 1.0): 1.5.1
- OS (e.g., Linux): Ubuntu 16.04.6 LTS
- How you installed PyTorch (`conda`, `pip`, source): pip
- Python version: 3.7
- CUDA/cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.4
- GPU models and configuration: TITAN Xp
- Any other relevant information:
numpy==1.18.5
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
|
module: nn,triaged,module: NaNs and Infs
|
low
|
Critical
|
650,147,883 |
TypeScript
|
Give better errors comparing massive singleton union types
|

|
Suggestion,Needs Proposal,Domain: Error Messages
|
low
|
Critical
|
650,150,193 |
TypeScript
|
Provide documentation on writing fourslash tests
|
> You know what fourslash needs? Extensive documentation/comments on how fourslash works and the fourslash API. Half the time I can never figure out what I need to call to test something and have to dig through other tests to figure out when to use `/**/` or `[| |]`, etc.
— @rbuckton
|
Docs,Infrastructure
|
low
|
Minor
|
650,151,977 |
TypeScript
|
Hovering on imported module shows path from TypeScript folder.
|
*TS Template added by @mjbvz*
**TypeScript Version**: 3.9.6
**Search Terms**
- quickinfo
- types
- javascript
---
- VSCode Version: 1.46.1
- OS Version: Windows 10
Steps to Reproduce:
1. Require modules like yargs and validate in a Node.js project (installed in the project folder).
2. On hover, TypeScript's node_modules path is shown instead of the project's node_modules folder.
3. Packages not in TypeScript's node_modules folder have no such issue.
Hovering on yargs:

Hovering on chalk:

Does this issue occur when all extensions are disabled?: Yes
|
Suggestion,Awaiting More Feedback
|
low
|
Critical
|
650,160,527 |
pytorch
|
[RFC] [RPC] Automatic retries of all requests in TensorPipe agent
|
To increase resiliency to infra issues, we could provide an automatic transparent retry mechanism in the RPC agent.
There already exists something of that kind, but my understanding is that it's an internal API used only for RRef operations. A challenge of extending it to all requests is that, due to that retry system living outside the agent, it retries each failed request as a new separate request, and would thus only work for idempotent functions (the RRef ops are, but user functions may not be).
What I propose is to implement the retry inside the agent (in particular, I am thinking about the TensorPipe one; I'm not sure how it would work for the other ones). The idea is to assign each request and each response message a unique ID on the sender side, and then have the receiver confirm the transfer with an ACK message. The sender would keep a map from ID to data of all the messages to which it hasn't received an ACK yet. When a pipe fails and is re-established, the sender will re-send all those pending messages. The receiver will also keep a set of IDs that it has already received and dealt with, and can thus check if the sender is sending an ID for the second time, and in that case just send back the ACK without performing the operation anew. This way the operation becomes effectively idempotent. (Some details will still have to be ironed out, but this is the gist of the idea).
Note that the above works when there is an error in the transmission but both endpoints stay up and running and are able to reconnect. If one endpoint goes down and then comes up again, this wouldn't work, as its map of pending messages wouldn't be preserved. However, such a scenario seems out of scope, as retrying a stateful RPC call on a worker that has lost all its state will probably not make sense either. So we could probably safely assume that when one worker goes down, it will never come back up, or that all of the workers will be restarted (which I believe is what elastic does).
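The ID/ACK bookkeeping described above can be sketched in plain Python (all names here are illustrative, not the actual agent API):

```python
import uuid

class Sender:
    def __init__(self):
        self.pending = {}                 # msg_id -> payload awaiting ACK

    def send(self, receiver, payload):
        msg_id = uuid.uuid4().hex         # unique ID assigned on the sender side
        self.pending[msg_id] = payload
        receiver.deliver(msg_id, payload)

    def on_ack(self, msg_id):
        self.pending.pop(msg_id, None)    # safe to forget once ACKed

    def retry_all(self, receiver):
        # after a pipe fails and is re-established, re-send everything un-ACKed
        for msg_id, payload in list(self.pending.items()):
            receiver.deliver(msg_id, payload)

class Receiver:
    def __init__(self, handler):
        self.seen = set()                 # IDs already received and dealt with
        self.handler = handler
        self.acks = []

    def deliver(self, msg_id, payload):
        if msg_id not in self.seen:       # run the operation only the first time
            self.seen.add(msg_id)
            self.handler(payload)
        self.acks.append(msg_id)          # always ACK: delivery becomes idempotent
```

Re-delivering an already-seen ID only re-sends the ACK, which is exactly the property that makes non-idempotent user functions safe to retry.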
The alternatives to this approach are two:
- We could push the retry even further down, into TensorPipe. The problem there is that TensorPipe doesn't natively operate on a request-response protocol (this is added on top by the agent) and thus wouldn't normally send ACKs. Moreover, this would require TensorPipe to keep data alive even after it has finished sending it, just in case the transfer may later fail and it needs to send it again. This logic may not be desirable in all circumstances and IMO should live at a higher level of the stack.
- We could avoid retrying automatically and count on the user to catch exceptions in their RPC calls and, knowing which ones are idempotent and which ones aren't, pick their own logic of retrying. This seems burdensome on the user, and scales poorly, as each user would have to reimplement what is effectively a similar logic.
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @rohan-varma @xush6528 @jjlilley @osalpekar @jiayisuse @lw @beauby
|
triaged,module: rpc,module: tensorpipe
|
low
|
Critical
|
650,167,619 |
godot
|
Invalid cast exception when using C# and GDScript
|
**Godot version:**
3.2.2
**OS/device including version:**
Windows 10 64 bits
**Issue description:**
Error when trying to combine [this accessibility plugin](https://github.com/lightsoutgames/godot-accessibility) (which is in GDScript) with C# scripts that I want to use in my project.
I get an invalid cast exception when I try to compile a simple hello-world project with an empty C# script. If I don't add any C# script to the project, the plugin works fine.
And no, not using the plugin isn't an option, because it brings accessibility to the whole Godot ecosystem, both the editor itself and the resulting game. There are some more details in [this post](https://godotforums.org/discussion/22953/c-cant-create-a-simple-script-in-a-small-example-project).
I am using Godot 3.2, but if I downgrade to 3.1 it works fine.
```
ERROR: debug_send_unhandled_exception_error: System.InvalidCastException: Specified cast is not valid.
At: modules/mono/mono_gd/gd_mono_utils.cpp:357
ERROR: call_build: An EditorPlugin build callback failed.
At: editor/editor_node.cpp:5268
ERROR: debug_send_unhandled_exception_error: System.InvalidCastException: Specified cast is not valid.
At: modules/mono/mono_gd/gd_mono_utils.cpp:357
```
**Steps to reproduce:**
1. Download or clone [this project](https://github.com/lightsoutgames/godot-accessible-starter) and attach a C# script to it.
2. Run the project.
|
bug,topic:dotnet
|
medium
|
Critical
|
650,179,620 |
opencv
|
FR: Implement "smart" reference on js similar to tensorflow.tidy()
|
Developers are required to manually manage memory in OpenCV.js, and keeping track of some of the references can lead to cumbersome code. I suggest providing a mechanism similar to Tensorflow.js's tf.tidy, which is used to clean up GPU textures after processing TF code: https://js.tensorflow.org/api/0.11.7/#tidy
Here is an example usage:
```
// y = 2 ^ 2 + 1
const y = tf.tidy(() => {
// a, b, and one will be cleaned up when the tidy ends.
const one = tf.scalar(1);
const a = tf.scalar(2);
const b = a.square();
console.log('numTensors (in tidy): ' + tf.memory().numTensors);
// The value returned inside the tidy function will return
// through the tidy, in this case to the variable y.
return b.add(one);
});
console.log('numTensors (outside tidy): ' + tf.memory().numTensors);
y.print();
```
|
feature,category: javascript (js)
|
low
|
Minor
|
650,267,782 |
go
|
cmd/doc: package search inconsistent with methods or fields
|
```
$ go version
go version devel +5de90d33c8 Thu Jul 2 22:08:11 2020 +0000 linux/amd64
```
Running `go doc rand`, as expected, gives the documentation for `crypto/rand`, as it finds that one before `math/rand`:
```
package rand // import "crypto/rand"
Package rand implements a cryptographically secure random number generator.
var Reader io.Reader
func Int(rand io.Reader, max *big.Int) (n *big.Int, err error)
func Prime(rand io.Reader, bits int) (p *big.Int, err error)
```
Running `go doc rand.rand`, however, gives the documentation for `math/rand.Rand`, as it uses the lack of the symbol in `crypto/rand` to know that that's not the right package. So far so good:
```
package rand // import "math/rand"
type Rand struct {
// Has unexported fields.
}
A Rand is a source of random numbers.
func New(src Source) *Rand
func (r *Rand) ExpFloat64() float64
func (r *Rand) Float32() float32
func (r *Rand) Float64() float64
func (r *Rand) Int() int
func (r *Rand) Int31() int32
func (r *Rand) Int31n(n int32) int32
func (r *Rand) Int63() int64
func (r *Rand) Int63n(n int64) int64
func (r *Rand) Intn(n int) int
func (r *Rand) NormFloat64() float64
func (r *Rand) Perm(n int) []int
func (r *Rand) Read(p []byte) (n int, err error)
func (r *Rand) Seed(seed int64)
func (r *Rand) Shuffle(n int, swap func(i, j int))
func (r *Rand) Uint32() uint32
func (r *Rand) Uint64() uint64
```
However, running `go doc rand.rand.int` fails:
```
doc: symbol rand is not a type in package rand installed in "crypto/rand"
exit status 1
```
It seems like it doesn't actually search past the first package if there's a method specified. Manually specifying the package, such as `go doc math/rand.rand.int`, works as expected.
|
NeedsInvestigation
|
low
|
Critical
|
650,297,336 |
pytorch
|
Will the model run slower when deployed using libtorch?
|
I'm trying to deploy a yolov5s model in my C++ program.
I followed the instructions in [this gist](https://gist.github.com/jakepoz/eb36163814a8f1b6ceb31e8addbba270) to get a TorchScript-converted model.
This is my C++ code; I put it in a thread in ORB-SLAM (a SLAM system) while the other SLAM threads are running.
```cpp
std::string modelpath = "yolov5s.torchscript";
cout << "before loading" << endl;
long start = time_in_ms();
model = torch::jit::load(modelpath);
long end = time_in_ms();
cout << "it took " << end - start << " ms to load the model" << endl;
torch::jit::getProfilingMode() = false;
torch::jit::getExecutorMode() = false;
torch::jit::setGraphExecutorOptimize(false);
torch::Tensor tensor_image = torch::zeros({1, 3, 640, 640});
start = time_in_ms();
std::vector<torch::jit::IValue> inputs;
inputs.push_back(torch::ones({1, 3, 640, 640}));
//inputs.emplace_back(tensor_image);
torch::jit::IValue output = model.forward(inputs);
end = time_in_ms();
cout << "it took " << end - start << " ms to run the model once" << endl;
```
It took 720ms to load the model and 1300ms to run the model once.
But when I run this model in a Python environment, it only takes 200 ms.
I would like to know whether this is reasonable, and what I should do to accelerate it.
cc @yf225 @glaringlee @VitalyFedyunin @ngimel
|
module: performance,module: cpp,triaged
|
low
|
Major
|
650,297,482 |
TypeScript
|
JS Find function definition involving a function generator
|
*TS Template added by @mjbvz*
**TypeScript Version**: 4.0.0-dev.20200702
**Search Terms**
- go to definition
---
Issue Type: <b>Bug</b>
```
// x.js
module.exports.a = () => {return ()=>{}};
// y.js
let n = require('./x.js');
module.exports.b = n.a();
// anywhere other than y.js
let m = require('./y.js');
let c = m.b(); // ctrl-click b
```
Ctrl-clicking `b` leads to 2 definitions, and the default is `a`'s definition instead of `b`'s.
VS Code version: Code 1.46.1 (cd9ea6488829f560dc949a8b2fb789f3cdc05f5d, 2020-06-17T21:13:20.174Z)
OS version: Windows_NT x64 10.0.17134
<details><summary>Extensions (15)</summary>
Extension|Author (truncated)|Version
---|---|---
project-manager|ale|11.1.0
vscode-eslint|dba|2.1.5
clipboard-manager|Edg|1.4.2
vscode-npm-script|eg2|0.3.12
vscode-test-explorer|hbe|2.19.1
vscode-heroku|iva|1.2.6
docthis|joe|0.7.1
mongodb-vscode|mon|0.0.4
csharp|ms-|1.22.1
mssql|ms-|1.9.0
python|ms-|2020.6.91350
cpptools|ms-|0.28.3
prettier-now|rem|1.4.9
tabnine-vscode|Tab|2.8.6
poor-mans-t-sql-formatter-vscode|Tao|1.6.10
</details>
|
Bug
|
low
|
Critical
|
650,300,140 |
godot
|
PhysicalBones do not move with skeleton when simulation is off
|
**Godot version:**
3.2.1.stable.custom_build.f0a489cf4
**OS/device including version:**
Linux 5.7.6-arch1-1
**Issue description:**
Maybe this is expected behavior, but it is a bit surprising.
When you create a physical skeleton, the `PhysicalBone`s will not move with the rig, but will remain in place until you start simulation.
However, they _do_ still detect collisions, which means you end up with these invisible blockades littered around your world.
You can see this in the [platformer demo](https://github.com/godotengine/godot-demo-projects/tree/master/3d/platformer):

It would be nice if, before starting the simulation, the `PhysicalBone`s just followed the transforms of their parent bones.
Failing that, maybe they should at least not collide (or we should call out in the docs that the user should clear the collision layer/mask and set it when starting the simulation).
**Steps to reproduce:**
1. Generate PhysicalBones from a skeleton
2. Move the character
3. Note that you collide with your own frozen, invisible skeleton
**Minimal reproduction project:**
[platformer demo](https://github.com/godotengine/godot-demo-projects/tree/master/3d/platformer)
|
bug,confirmed,topic:physics
|
low
|
Minor
|
650,330,293 |
TypeScript
|
Adding a compilerOption to disable error on property override accessor in 4.0.beta
|
## Search Terms
property override accessor
## Suggestion
Previously, in 3.9, a property could override an accessor with no emit errors. In the 4.0 beta, a breaking change was introduced by #33509.
It's reasonable to report an error on a property overriding an accessor in most cases, but I'm using `experimentalDecorators` to inject a property accessor in the prototype.
Currently I cannot find a solution for this use case, so I'm suggesting adding a new compilerOption to disable this check (strictOnly?) for compatibility.
## Use Cases
```ts
class Animal {
  private _age: number
  get age() { return this._age }
  set age(value) { this._age = value }
}

class Dog extends Animal {
  @defaultValue(100) age: number; // Unexpected error here
}

function defaultValue(value) {
  return (obj, name) => {
    Object.defineProperty(obj, name, {
      get() { return value }
    })
  }
}
```
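For illustration, here is a rough Python analogue of the pattern (the names are hypothetical; it only mirrors the idea of the `@defaultValue` decorator above, not TypeScript's semantics): the decorator installs an accessor on the class that falls back to a default value.

```python
def default_value(value):
    """Install a property on the class that falls back to `value`,
    mirroring what the @defaultValue decorator above does on the prototype."""
    def install(cls, name):
        storage = "_" + name

        def getter(self):
            return getattr(self, storage, value)

        def setter(self, v):
            setattr(self, storage, v)

        setattr(cls, name, property(getter, setter))
        return cls
    return install

class Dog:
    pass

default_value(100)(Dog, "age")

d = Dog()
print(d.age)  # 100 (injected default)
d.age = 5
print(d.age)  # 5
```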
## Checklist
My suggestion meets these guidelines:
* [ ] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
|
Suggestion,Awaiting More Feedback
|
low
|
Critical
|
650,334,847 |
go
|
proposal: crypto/x509: implement OCSP verifier
|
Although we have a golang.org/x/crypto/ocsp package, we don't in fact have an OCSP verifier. The existing package provides serialization and parsing, but not APIs for "get me an OCSP response for this certificate" and "given this certificate and this OCSP response tell me if it's revoked". (They are separate because you only want the latter when checking stapled responses.)
There is a lot of complexity, subtlety, and dark knowledge involved in OCSP unfortunately. Here are a few notes on things the verifier needs to do (from reading [this thread](https://groups.google.com/d/msg/mozilla.dev.security.policy/EzjIkNGfVEE/XSfw4tZPBwAJ)):
* check that the response is signed directly by the issuer (without needing the OCSP EKU) or that it's signed by a Delegated Responder issued directly by the issuer (with the OCSP EKU)
* for Delegated Responders, **not** require the EKU to be enforced up the chain
* check that the signer has the correct KeyUsage
* for Delegated Responders, require them to be End Entity certificates (i.e. not a CA; this is an off-spec Mozilla check that protected them from the mess of CAs giving the OCSP EKU away to intermediates)
* for Delegated Responders, maybe check that the `id-pkix-ocsp-nocheck` extension is present (this is a BR requirement, but if it's not an IETF requirement we might want to skip it)
There are definitely a lot more things to consider (for example, the `id-pkix-ocsp-nocheck` extension needs to be processed itself), the list above are just notes of things I learned from that one incident.
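To make the shape of those checks concrete, here is a toy predicate over dict-shaped "certificates". The field names are illustrative stand-ins, not a real x509 API, and the sketch deliberately ignores chain building, signature verification, and `id-pkix-ocsp-nocheck` processing:

```python
def ocsp_signer_acceptable(signer, issuer):
    """Sketch of the responder checks listed above, over toy dicts.
    A real implementation would operate on parsed certificates."""
    if signer["subject"] == issuer["subject"]:
        # Response signed directly by the issuing CA: no OCSP EKU required.
        return True
    # Otherwise it must be a Delegated Responder:
    return (
        signer["issuer"] == issuer["subject"]          # issued directly by the issuer
        and "OCSPSigning" in signer["ekus"]            # with the OCSP EKU
        and "digitalSignature" in signer["key_usage"]  # correct KeyUsage
        and not signer["is_ca"]                        # end-entity only (Mozilla check)
    )

ca = {"subject": "CA", "issuer": "Root", "ekus": [], "key_usage": [], "is_ca": True}
delegated = {"subject": "Resp", "issuer": "CA",
             "ekus": ["OCSPSigning"], "key_usage": ["digitalSignature"], "is_ca": False}
print(ocsp_signer_acceptable(ca, ca))         # True (direct)
print(ocsp_signer_acceptable(delegated, ca))  # True (delegated)
```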
A difficult question is where to put the code, and how to surface it. We'll want to use it in crypto/tls for Must-Staple (#22274) but it feels like the wrong place for the code to live. An obvious answer would be golang.org/x/crypto/ocsp, but then we can't use it from crypto/x509 without an import loop. Should `x509.VerifyOptions` have an option to verify a stapled response? Probably, I feel like we'd regret doing OCSP verification separately from path building and certificate verification anyway. What about the API to fetch a response? Maybe that can stay in golang.org/x/crypto/ocsp, separating the concerns of obtaining responses and verifying them.
We should also look around the ecosystem, because surely someone had to implement this, and we should compare results.
|
Proposal,Proposal-Hold,Proposal-Crypto
|
medium
|
Critical
|
650,427,607 |
flutter
|
Error connecting to the service protocol: failed to connect to http://127.0.0.1:1027/
|
### Steps to Reproduce
I am trying to debug _Firebase Dynamic Links_ and I cannot reproduce the issue with the Simulator, so I need to debug on a physical device. So from _Visual Studio Code_, I choose my physical device and _Start Debugging_. Alas, it crashes during startup with:
```
Error connecting to the service protocol: failed to connect to http://127.0.0.1:1027/
Exited (sigterm)
```
_What can I do?_
<details>
<summary>Logs</summary>
```
[✓] Flutter (Channel stable, v1.17.5, on Mac OS X 10.15.5 19F101, locale en-US)
    • Flutter version 1.17.5 at /Users/anthony/flutter
    • Framework revision 8af6b2f038 (3 days ago), 2020-06-30 12:53:55 -0700
    • Engine revision ee76268252
    • Dart version 2.8.4
[✓] Android toolchain - develop for Android devices (Android SDK version 30.0.0-rc4)
    • Android SDK at /Users/anthony/Library/Android/sdk
    • Platform android-R, build-tools 30.0.0-rc4
    • Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
    • Java version OpenJDK Runtime Environment (build 1.8.0_242-release-1644-b3-6222593)
    • All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 11.5)
    • Xcode at /Applications/Xcode.app/Contents/Developer
    • Xcode 11.5, Build version 11E608c
    • CocoaPods version 1.9.3
[✓] Android Studio (version 4.0)
    • Android Studio at /Applications/Android Studio.app/Contents
    • Flutter plugin version 46.0.2
    • Dart plugin version 193.7361
    • Java version OpenJDK Runtime Environment (build 1.8.0_242-release-1644-b3-6222593)
[✓] VS Code (version 1.46.1)
    • VS Code at /Applications/Visual Studio Code.app/Contents
    • Flutter extension version 3.12.1
[✓] Connected device (3 available)
    • Anthony's iPhone X • dd69530247333fb75a3729579a8510e4c02268d4 • ios • iOS 13.5.1
    • iPhone 11 Pro Max • 24F5F921-BD98-497F-9DEF-7F99FEDA41E1 • ios • com.apple.CoreSimulator.SimRuntime.iOS-13-5 (simulator)
    • iPad (7th generation) • 9A327E6C-836E-497B-B174-86AD0B20D40E • ios • com.apple.CoreSimulator.SimRuntime.iOS-13-5 (simulator)
• No issues found!
```
</details>
|
platform-ios,tool,P2,c: fatal crash,team-ios,triaged-ios
|
low
|
Critical
|
650,477,734 |
angular
|
The entire use of NgModule should be deprecated
|
The aim of this post is to create a discussion on the topic. There is also a really simple solution I offer at the end, but I would really like to hear your thoughts...
It is my opinion that creating a module system in Angular (**NgModule**) in addition to the already existing module system of JavaScript is a bad idea and should be changed, or at least developers should have the choice to work in a non-NgModule mode...
Here is why I think NgModules will hold angular progress back:
1. Bundle size - if I'm creating or installing a library containing a module with 10 components and I only want to use one of those components in my app, I add the module to my imports array and my bundle will contain 9 components I'm not even using (this should no longer be the case with the new Ivy).
2. Maybe this is why Angular Material wraps almost every component in the library in its own module - a bit ugly in my opinion, but necessary because of the NgModule system - and it still requires me to add tons of stuff to my imports array.
3. I know how to solve 2: I'll just put the shared imports in one CommonModule and then every module that uses them will add that module to its imports. But isn't that pattern hurting my bundle size as well?
For example, let's say I have common stuff that every module is using, and then a bunch of lazy-loaded modules, divided into one group that uses the common ones and another group that uses other common modules. I don't want to add everything to CommonModule, so I would have to create Common1Module, Common2Module, etc. for every group, because if I throw everything into CommonModule it will affect the bundle size of every lazy-loaded part of my app, won't it?
4. It's not a reason, but it's a big clue that the NgModule system is not needed: there is no module system in any other frontend framework or library that I'm aware of, including React and Vue; they simply use the module system in JS.
5. Learning curve - it's pretty easy for people to learn that they need modules; separation of concerns is a well-understood concept, and people know they should split their code into "libraries", so the Angular module system itself is pretty simple to understand. But usually when I teach and talk to Angular developers, they get lost in how the DI works, because the module system is heavily intertwined with how the DI searches for providers.
6. HttpClientModule is a good example here. Let's say I'm creating a library which contains an NgModule; obviously that module can be added to the imports of a lazy-loaded module. I cannot add HttpClientModule to my library because it can create another instance of HttpClient if it is used, and then mess up my interceptors. So now I have to worry about peerDependencies as well. For example, ngrx libraries require us to add HttpClientModule ourselves. We already have a peerDependency mechanism in JavaScript modules; now we have to do the same with Angular.
Currently I see the Ivy renderer allows us to create components without modules, but such a component is still heavily limited in which other directives and providers it can use without a wrapping module that supplies them.
It somewhat seems to me that the NgModule system is something that was carried over from the Angular.js ecosystem to the current Angular, and I don't think it's needed at all; not only that, it creates more harm than good.
I think NgModule requires a drastic change, which will in turn require a drastic change in the DI, and we will not achieve the bundle sizes of the competitors React and Vue unless a major change here is achieved.
I think NgModule has more cons than pros and would love to hear your thoughts.
Making a really general proposal from this post these are the changes I think should be made:
1. The DI system should not automatically create different trees for modules and for the view; rather, it should be a simple solution inspired by the TypeDI library.
You import the DI and register services with it, and if you want to create a tree structure in your DI you do it manually: you import the DI system and fork it so a certain component can have its own dependencies.
Or, at least to support backward compatibility, allow the DI to work in a manual mode, just like we did with the change detection strategy. The manual mode of the DI would let developers control the forking of the DI into trees, and thus they can choose to work in a non-NgModule mode.
2. After that, remove the modules entirely; every component that uses a certain directive or component should rely on the JS module system and a plain import.
Maybe this step can also somehow be integrated into the compiler, so that based on the selector Angular will know which component to include without using the JS module system. I'm not entirely strong on the compilation process, so I don't think I'm qualified to make a suggestion here.
|
area: core,core: NgModule,needs: discussion
|
high
|
Critical
|
650,498,501 |
rust
|
rustc performs auto-ref when a raw pointer would be enough
|
The following code:
```rust
#![feature(slice_ptr_len)]

pub struct Test {
    data: [u8],
}

pub fn test_len(t: *const Test) -> usize {
    unsafe { (*t).data.len() }
}
```
generates MIR like
```
_2 = &((*_1).0: [u8]);
_0 = const core::slice::<impl [u8]>::len(move _2) -> bb1;
```
This means that a reference to `data` gets created, even though a raw pointer would be enough. That is a problem because creating a reference makes aliasing and validity assumptions that could be avoided. It would be better if rustc would not implicitly introduce such assumptions.
Cc @matthewjasper
|
T-compiler,A-MIR,C-bug,F-arbitrary_self_types
|
medium
|
Critical
|
650,511,015 |
flutter
|
[google_sign_in] Can't sign in to youtube channels?
|
I'm trying to login with youtube channels (not just the Google account itself.)
On the console, I've added auth/youtube to the OAuth screen (not sure if this is needed).
I've also added this to my scopes
```dart
GoogleSignIn googleSignIn = GoogleSignIn(
  scopes: [
    'email',
    'https://www.googleapis.com/auth/youtube', // YouTube scope
  ],
);
```
(I'm also using FirebaseUser to capture emails, not sure if this is possible to interlace?)
When I try to log in it shows me my e-mails, and when one is selected it gives me a screen asking if I want to accept "project......" (not sure how to change this to the app name, btw) to manage my YouTube channels and whatnot. However, when I press yes, it doesn't show me the channel list so I can select the one I want. So I don't really have access to the user's channel stuff like playlists, name, memberships and subscriptions, image, etc... how could I fix this?
Here are the screens I DO get


also here's my sign in code
```dart
class AuthService {
  String OAuthClientId = '*hidden but came from the dev console*'; // Don't use this for nothing?
  String discoveryDocs = 'https://www.googleapis.com/discovery/v1/apis/youtube/v3/rest';
  String scopes = 'http://www.googleapis.com/auth/youtube.readonly';

  final FirebaseAuth auth = FirebaseAuth.instance;
  GoogleSignIn googleSignIn = GoogleSignIn(
    scopes: [
      'email',
      'https://www.googleapis.com/auth/youtube', // YouTube scope
    ],
  );

  Future<FirebaseUser> login() async {
    final GoogleSignInAccount googleSignInAccount = await googleSignIn.signIn();
    final GoogleSignInAuthentication googleSignInAuthentication = await googleSignInAccount.authentication;
    final AuthCredential credential = GoogleAuthProvider.getCredential(
      accessToken: googleSignInAuthentication.accessToken,
      idToken: googleSignInAuthentication.idToken,
    );
    // You'll need this token to call the YouTube API. It expires every 30 minutes.
    final token = googleSignInAuthentication.accessToken;
    final AuthResult authResult = await auth.signInWithCredential(credential);
    final FirebaseUser user = authResult.user;
    assert(!user.isAnonymous);
    assert(await user.getIdToken() != null);
    final FirebaseUser currentUser = await auth.currentUser();
    assert(user.uid == currentUser.uid);
    return currentUser;
  }
}
```
|
platform-android,d: stackoverflow,p: google_sign_in,package,P2,team-android,triaged-android
|
low
|
Major
|
650,543,385 |
react-native
|
TouchableOpacity setOpacityTo is not a function
|
Originally reported in expo/expo#9026, I ran into the same problem so I'm re-reporting this here
## Description
The `setOpacityTo` on TouchableOpacity is no longer available.
## React Native version:
0.62.2
```
Expo CLI 3.20.3 environment info:
    System:
      OS: macOS Mojave 10.14.6
      Shell: 5.3 - /bin/zsh
    Binaries:
      Node: 10.20.1 - /var/folders/qt/10542x9d2wbg0x93mmh1xsw00000gn/T/yarn--1593777562592-0.5909086070884821/node
      Yarn: 1.22.4 - /var/folders/qt/10542x9d2wbg0x93mmh1xsw00000gn/T/yarn--1593777562592-0.5909086070884821/yarn
      npm: 6.14.4 - ~/.asdf/installs/nodejs/10.20.1/bin/npm
      Watchman: 4.9.0 - /usr/local/bin/watchman
    IDEs:
      Android Studio: 3.5 AI-191.8026.42.35.5791312
      Xcode: 11.0/11A420a - /usr/bin/xcodebuild
    npmPackages:
      expo: ^38.0.0 => 38.0.4
      react: 16.11.0 => 16.11.0
      react-dom: 16.11.0 => 16.11.0
      react-native: https://github.com/expo/react-native/archive/sdk-38.0.0.tar.gz => 0.62.2
      react-native-web: ^0.12.0 => 0.12.3
      react-navigation: ^4 => 4.3.9
## Steps To Reproduce
1. Create a reference to `TouchableOpacity`
2. Try calling `setOpacityTo`
## Expected Results
Opacity of the element is changed
## Actual Results
Fatal crash due to missing method
## Snack, code example, screenshot, or link to a repository:
https://snack.expo.io/@zanona/3cb338
|
Issue: Author Provided Repro,Component: TouchableOpacity
|
low
|
Critical
|
650,589,526 |
go
|
cmd/go: go test -json splits log records
|
I maintain a test runner tool (https://github.com/grasparv/testie).
Here is my problem:
```
$ go version
go version go1.14 linux/amd64
$ cat hello_test.go
package hello
import (
	"testing"
)

func TestSomething(t *testing.T) {
	t.Logf("first item\nsecond item")
	t.Logf("third item")
}
$ go test -v -json
{"Time":"2020-07-03T15:08:12.426294755+02:00","Action":"run","Package":"example","Test":"TestSomething"}
{"Time":"2020-07-03T15:08:12.426393632+02:00","Action":"output","Package":"example","Test":"TestSomething","Output":"=== RUN TestSomething\n"}
{"Time":"2020-07-03T15:08:12.426403571+02:00","Action":"output","Package":"example","Test":"TestSomething","Output":" TestSomething: hello_test.go:8: first item\n"}
{"Time":"2020-07-03T15:08:12.426407445+02:00","Action":"output","Package":"example","Test":"TestSomething","Output":" second item\n"}
{"Time":"2020-07-03T15:08:12.426411074+02:00","Action":"output","Package":"example","Test":"TestSomething","Output":" TestSomething: hello_test.go:9: third item\n"}
{"Time":"2020-07-03T15:08:12.426415993+02:00","Action":"output","Package":"example","Test":"TestSomething","Output":"--- PASS: TestSomething (0.00s)\n"}
{"Time":"2020-07-03T15:08:12.426419389+02:00","Action":"pass","Package":"example","Test":"TestSomething","Elapsed":0}
{"Time":"2020-07-03T15:08:12.426423735+02:00","Action":"output","Package":"example","Output":"PASS\n"}
{"Time":"2020-07-03T15:08:12.4264854+02:00","Action":"output","Package":"example","Output":"ok \texample\t0.001s\n"}
{"Time":"2020-07-03T15:08:12.42649666+02:00","Action":"pass","Package":"example","Elapsed":0.001}
```
I expected to see the first and second items reported as a single event (instead, the second item gets its own log event). This is an issue since it destroys information by splitting lines: for instance, my test runner tool might want to change the indentation of some messages, but here it is impossible to tell which log lines actually belong together.
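As a tool-side workaround, a runner can heuristically re-join continuation lines by their deeper indentation. This is a sketch only: the eight-space-indent heuristic is an observation about current `go test` output, not something the JSON format guarantees.

```python
import json

def merge_output_events(lines):
    """Merge an "output" event into the previous one when it looks like a
    continuation of a multi-line t.Logf message (same test, deeper indent)."""
    merged = []
    for line in lines:
        ev = json.loads(line)
        if (ev.get("Action") == "output"
                and merged
                and merged[-1].get("Action") == "output"
                and ev.get("Test") == merged[-1].get("Test")
                and ev["Output"].startswith("        ")):  # continuation indent
            merged[-1]["Output"] += ev["Output"]
        else:
            merged.append(ev)
    return merged
```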
|
NeedsInvestigation,GoCommand
|
low
|
Minor
|
650,623,270 |
material-ui
|
[Autocomplete] Jumps between being expanded to the top/bottom
|
- [x] The issue is present in the latest release.
- [x] I have searched the [issues](https://github.com/mui-org/material-ui/issues) of this repository and believe that this is not a duplicate.
## Current Behavior 😯
When having not that much space below the bottom of the input field the Autocomplete opens to the top.
By entering text into the input field the Autocomplete field gets shorter and jumps to the bottom.
## Expected Behavior 🤔
After opening, the Autocomplete should stay at the same position, as the switch to the bottom is not expected and the user has to re-orient to find the list of suggestions.
## Steps to Reproduce 🕹
CodeSandbox used for reproduction: https://codesandbox.io/s/naughty-hermann-24rmn
Steps:
1. The input field

2. Ensure that there is not much space to the bottom
3. After clicking into the field, the Autocomplete opens to the top (as expected βοΈ)

4. After typing an "a", the Autocomplete jumps to the bottom (not expected, should stay β)

## Your Environment 🌎
| Tech | Version |
| ----------- | --------- |
| Material-UI | v4.11.0 |
| React | v16.13.1 |
| Browser | Chrome 83 |
|
bug 🐛,component: autocomplete,design
|
low
|
Major
|
650,631,106 |
rust
|
SpecForElem for i16/u16 and other digits
|
We have a `SpecForElem` specialization for `i8` and `u8` (which provides a small performance win over plain extend for `Vec`), but we could look into specializing for wider integers, where filling would use `rep stosw` (or `rep stosd`/`rep stosq`). It may need benchmarking to ensure that it's actually faster, as mentioned by @joshtriplett.
I am not sure how to get the compiler to emit the `rep stosw` assembly; someone who knows could take up this issue.
Previous discussion: https://rust-lang.zulipchat.com/#narrow/stream/219381-t-libs/topic/SpecForElem.20for.20other.20integers
|
I-slow,C-enhancement,A-collections,A-specialization,T-libs
|
low
|
Major
|
650,651,318 |
pytorch
|
Inconsistent behaviour when parameter appears multiple times in parameter list
|
## 🐛 Bug
When we pass a list of parameters or parameter groups to an optimizer, and one parameter appears multiple times we get different behaviours, and it is not clear whether this is intended that way:
* If the parameter appears twice within one parameter group, everything works. That parameter will get updated *twice* though.
* If the parameter appears in distinct parameter groups, then we get an error.
## To Reproduce
```
import torch
x = torch.zeros((1,), requires_grad=True)
# uncomment one of the following three lines for each of the cases:
# a = torch.optim.SGD(params=[dict(params=[x])], lr=0.1) # baseline
# a = torch.optim.SGD(params=[dict(params=[x, x])], lr=0.1) # apparently acceptable (?); x is updated twice
# a = torch.optim.SGD(params=[dict(params=x), dict(params=x)], lr=0.1) # apparently not acceptable
x.sum().backward()
a.step()
print(x)
```
## Expected behavior
I would expect that no matter in what way a parameter appears multiple times, we get the same behaviour in both situations: either we get errors in both situations, or both situations are deemed acceptable.
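Until the behaviour is unified, a caller can defensively de-duplicate the parameter list before handing it to the optimizer. A minimal sketch with a stand-in class (with real tensors this relies on `torch.Tensor` hashing by identity, which is worth double-checking):

```python
def dedup_params(params):
    """Drop duplicates while preserving order, so each parameter would be
    updated exactly once. dict.fromkeys keeps insertion order (Python 3.7+)."""
    return list(dict.fromkeys(params))

class FakeParam:  # stand-in for torch.nn.Parameter; hashed by identity
    pass

x, y = FakeParam(), FakeParam()
print(dedup_params([x, x, y]) == [x, y])  # True
```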
## Environment
- PyTorch Version (e.g., 1.0): 1.5
- OS (e.g., Linux): Win/Linux
- How you installed PyTorch: conda
- Python version: 3.7
cc @jlin27 @vincentqb
|
module: docs,module: optimizer,triaged
|
low
|
Critical
|
650,659,649 |
excalidraw
|
Feature request: See myself in the user list
|
In collaborative edition, I only see others in the list of users on the top right. Since a color is automatically assigned to my user avatar, I regularly use a similar color to write content, so everybody can track who wrote what. Currently, I can't see myself in the list so I need to ask others which color I am.
I would like to see myself in the list. Any reason not to?
|
enhancement
|
low
|
Minor
|
650,662,978 |
pytorch
|
Pytorch 1.4 compilation hangs on AMD Epyc
|
## 🐛 Bug
[1352/3291] : && /software/CMake/3.14.0-GCCcore-8.3.0/bin/cmake -E remove lib/libfbgemm.a && /software/binutils/2.32-GCCcore-8.3.0/bin/ar qc lib/libfbgemm.a third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/ExecuteKernel.cc.o third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/ExecuteKernelU8S8.cc.o third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/Fbgemm.cc.o third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/FbgemmFP16.cc.o third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/FbgemmConv.cc.o third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/FbgemmI64.cc.o third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/FbgemmI8Spmdm.cc.o third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/FbgemmSpConv.cc.o third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/FbgemmSpMM.cc.o third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/Fused8BitRowwiseEmbeddingLookup.cc.o third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/GenerateKernelU8S8S32ACC16.cc.o third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/GenerateKernelU8S8S32ACC16Avx512.cc.o third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/GenerateKernelU8S8S32ACC16Avx512VNNI.cc.o third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/GenerateKernelU8S8S32ACC32.cc.o third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/GenerateKernelU8S8S32ACC32Avx512.cc.o third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/GenerateKernelU8S8S32ACC32Avx512VNNI.cc.o third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/GroupwiseConvAcc32Avx2.cc.o third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/PackAMatrix.cc.o third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/PackAWithIm2Col.cc.o third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/PackBMatrix.cc.o third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/PackMatrix.cc.o third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/PackAWithQuantRowOffset.cc.o third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/PackAWithRowOffset.cc.o 
third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/PackWeightMatrixForGConv.cc.o third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/PackWeightsForConv.cc.o third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/QuantUtils.cc.o third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/RefImplementations.cc.o third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/Utils.cc.o third_party/fbgemm/CMakeFiles/fbgemm_avx2.dir/src/FbgemmFP16UKernelsAvx2.cc.o third_party/fbgemm/CMakeFiles/fbgemm_avx2.dir/src/FbgemmI8Depthwise3DAvx2.cc.o third_party/fbgemm/CMakeFiles/fbgemm_avx2.dir/src/FbgemmI8Depthwise3x3Avx2.cc.o third_party/fbgemm/CMakeFiles/fbgemm_avx2.dir/src/FbgemmI8DepthwiseAvx2.cc.o third_party/fbgemm/CMakeFiles/fbgemm_avx2.dir/src/FbgemmI8DepthwisePerChannelQuantAvx2.cc.o third_party/fbgemm/CMakeFiles/fbgemm_avx2.dir/src/OptimizedKernelsAvx2.cc.o third_party/fbgemm/CMakeFiles/fbgemm_avx2.dir/src/PackDepthwiseConvMatrixAvx2.cc.o third_party/fbgemm/CMakeFiles/fbgemm_avx2.dir/src/QuantUtilsAvx2.cc.o third_party/fbgemm/CMakeFiles/fbgemm_avx2.dir/src/U
## To Reproduce
Steps to reproduce the behavior:
1. Compile pytorch from source on AMD Epyc.
It hangs mid-compilation, as seen above.
## Expected behavior
An error message, at least.
## Environment
- PyTorch Version (e.g., 1.0): 1.4
- OS (e.g., Linux): Centos 7
- How you installed PyTorch (`conda`, `pip`, source): source
- Build command you used (if compiling from source): easybuild recipe
- Python version: 3.6.8
- CUDA/cuDNN version: 7.5.1.10
- GPU models and configuration: nvidia v100
- Any other relevant information: JUSUF supercomputer at Jülich Supercomputing Centre, Germany
## Additional context
It's the exact same recipe used to compile PyTorch 1.4 on every other platform at the institute. The only difference is the CPU.
cc @malfet
|
module: build,triaged,module: vectorization
|
low
|
Critical
|
650,683,473 |
TypeScript
|
Conditional + annotation causes erroneous typing
|
**TypeScript Version:** 3.9.4 & 4.0.0-beta
**Search Terms:** Type annotation wrongly overrides generic with errors
**Code**
```ts
const createMatrix = <D extends number, T>(
  dimensions: D,
  initialValues: T | null = null
): Matrix<D, T> => {
  const currentDimensionLength = dimensions;
  const remainingDimensions = dimensions - 1;
  const needsRecursion = remainingDimensions > 0;

  const currentMatrix = Array(currentDimensionLength).fill(initialValues);
  const finalMatrix = needsRecursion
    ? currentMatrix.map(() =>
        createMatrix(remainingDimensions, initialValues)
      )
    : currentMatrix;

  return finalMatrix as Matrix<D, T>;
};
```
```ts
type Matrix<D extends number, T> = D extends 1
  ? T[]
  : D extends 2
  ? T[][]
  : D extends 3
  ? T[][][]
  : any[][][][];
```
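For reference, the runtime recursion of `createMatrix` can be mirrored in Python to show the value shape the annotations are meant to describe (per the `dimensions - 1` recursion, each nesting level is one shorter than the previous):

```python
def create_matrix(d, initial=None):
    """Python mirror of the createMatrix recursion above: a d-deep nested
    list where the level built with remaining dimension d has length d."""
    if d <= 1:
        return [initial]
    return [create_matrix(d - 1, initial) for _ in range(d)]

print(create_matrix(2))  # [[None], [None]]
```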
**Expected behavior:**
Adding type annotations to variables should be compatible with the function's return signature, causing no type errors.
**Actual behavior:**
Assigning an annotated variable to the return value of the function causes an error. The generic `T` is wrongly inferred to be _any of_ `array where depth < D`, where it should be inferred as only the base type of the nested array. Removing the annotation solves the problem but removes type safety when variable assignment happens after creation.
The problem seems to be that (when annotated) the type is inferred to be the union of each type in `Matrix` that precedes the conditional branch that returns true.
```ts
const n1: number[] = createMatrix(1); // <- works - generic T is set to `number`
const n2: number[][] = createMatrix(2); // <- doesn't work - generic T is set to `number | number[]`
const n3: number[][][] = createMatrix(3); // <- doesn't work - generic T is set to `number | number[] | number[][]`
const n4 = createMatrix(2); // <- regular inference works - generic T is `unknown`, return type is unknown[][]
const n5: number[][] = createMatrix(2, 0); // <- works when - initialValues is set - generic T is `number`, return type is number[][]
const n6: unknown[][] = createMatrix(2); // <- works fine, as type T is unknown. A more specific annotation should constrain `unknown` to `number`
```
**Playground Link:** [Link](https://www.staging-typescript.org/play?#code/PTAEBUE8AcFNQIYDskHsAuD0EtVNAO4BOeA5gDaSioBusRR2AJrAM6imxL3YDGh2dAAtQ9EkVYAoSSFABRAB5xe6WE1AAjWEIQ1cRAFwywAQSZNsSUqHQx4yNJhx526VKBoJGCDeTahWIVQAV3J1LVBeVABbaCxsX3gCQRFheAAzYKQVXCR2Ilh0YKIkYwDsUiQsYtgAGkiEYNZLazQbO1EGVAlpWRMVYIRyTW1dfSM+1mbKlsR8BwwsNQ8vbB8-G3c00AKikpXyYPhUdJshDKycvAam-2RO8QA6MvBzji4efnBQbHZiMkoPyQ6TEyzcIzmVBOoAA5F4iAgqARzgVQCxoMJQAAeUAAERhhBR8EEASCoSYZQilhBDDB7jwgO2GgQrHgtjg1FO224rFU6nhiMeoAASrBorRZtsFk5cgFUOQ6FJZNtoCREtFNMF0DsxbR-Oz4KwECDbISuCtvIlEFMKkholxtTpoHA8oh0qoiJECvE8M9egAqf2SUD+kWFYquhCgaJYRgKUAACmioRwAFoLPa8rkhogGIiAJQCTGQEKeljNArqDNcZouObqTyHNjPEPB0MAATiCI11azddToAABrjBwYIG9y9hK2jsJna67odsY+g423QJ2vAgNZZBGtyAA1IZHdgD17wRtHM5YUQIXgiaCoSzayxneDLuNA3dDX761BC3GwOkjTkOgrjuIOSChOQg4th2uwRuwJjRrG2DxtCw6DjOc65Kw9TJJisC3vej5IM+SBfuQP7qOC2yDjuOBDIeTasJhXZboU9CwcAkhRHk2q8N6qgALIofGAC82K4qICiqEgTDsJB0RaEQ9TgAAfAmwagNpvbzqwY64rUWnafRe5MceY7fAAPqAkHkMMEl2eQkj5mOIkrqhWKGRAamgGJvkAN7GZELj8cUBSkbis41rkAAyXCkJiEm6ThADcwW8byOoxjuVhRdhdbJdFfaugOACM6XadpmXatwaisKKvDFPOfnZQguWkPlMV1r5AAM6UZaFkThQ67kfhJJh5pACZNbSkXFfO8VWMI+aPOk2D2QmpmMUebD5gNVUhXxoDrVU5BjahrV1fJjXNbkwWHQA-MNc3oBdCiPDG0AJgmhb+Q9h2AwJhHCaJCYFDl5F5QtOH1NtB67aw+YA4dyOA1VY6zRFb2iQdVXwfsp1DO91qgO9XmqWp6UAL4DbIZ6gKqqDqgEsBimBELCNeCbIua0pLEwhbbAaPzsNSoLUe4ETbFksonGUhF3u0HKqrAvBqJKby8RYzhnVe2oE66K5HM8Ivk1JsAyVw8m2cESn0JTrUW1bcnsGVwXPeAADaAC6wUGdJsk2wATB7EC+77-t4oH1vsAAzGH3s+xHfuHQHltB+wAAsicpynUfIJAefJyXA01TYbDoGVY6KcpvutcDSzvQmZX7aAshYgOBDdAA1ie7zcIwXyi6z2rghBdvKYOPFDaovLBzXk-0CnDeCbAzfB23HcDkwqBsEgMLat3RA96AA6cIPfAQCPrJj+Btf0KANkP0QvuYQOgQhGEEKDlkPdoAQJAg56hpHwDVBElg6S23tkQGex057oDjovGBxdV4g3XmDOOW8wCdzRHvVgB8j69zPgPT419fij02EOF+T9oF1x9rQl+Kd36ki-uEeAv8kD-1QIA4Br4wGhQgdwSWdD6DSHLggrOaCm5g03qlduOCBwFFIKELwQIaRcHVoQXu-cL5kO+BQzh3DeH1ENsrYk7A-4AKQPnCRlcACsyD6H1wko3UGHkFAJmDvUXq2DsRdx0WafAdFyIMQRsxTCFDb4kL0UPch7AJ4wL4WYkWFCmElzgVlBBAA2McVieE2JLtI9xcYvF+NwcfPuJ1IH1BZOY+JoB8mAL9L0MAAAhWA5AeEj22PKJgeFe4tHqJkbIushQAEkYQahXJASU7gIZ6moHQIg
XSEA23SN0RAIx0AejxAADXqOcbIbI3jcAICdS4utEw+D1MjSQIyrhgLXu9AA8mELE6kEwpRcGOMqcNQlmURo9Syrlw4+wGg8q5biMEeLeUwD5GlgrfLyGObxwV4bmTYMCiALlLIpwhZc2U0LXnvM+UimGPzQBxyModDFQLLK4rBfi6QkKiXPNEnChFmlDrIv0qALONKqp0uYti8AjKk7FwJaMtl6CSXwrJTyilKLREqXRQCnaIqGWgrlQiqVjyvSyo5aSxFiqCrKpfoKky6rwkWWvs-KCV0oKMp1epUAQVDrlyxg6LqJUlqJREEVM1rBKpVXLhDdqUNOpKvYIG7qpVQAVUGsda6DU1Z3WuBJcNHUfV6VAH1PG1UhpetIiTCaU0ZojXmmav1K01obXIFta1mKkYFqOllIm51RJXTZjdNNEh7ro20s9YtOMPGfQQN9X6fk1Io0HcSo1TBwZigjS0HNsNPxhObWjQd2lt3o0xpW0dcZW1mI7e9Gm0gzYLoRa1CVJcV42RdWpX2A1Mlj0rnC6uKqXEGpkbCsILc-GVPYKdWAb6K68jhQvb9RTXHsv-YuuRCjtEnxA5A8DCC4VIJgyvODhqEMJiwfI2QwHqncHEbPD9YQpF4b-XGOFZTiNgFI6BjDVGmCOJwww2jJTUIMe8aAXxTGUNVNY3YyDYRcmNK4dY3Dv7eMKH40BwJoGgA)
**Related Issues:**
None
|
Needs Investigation
|
low
|
Critical
|
650,686,658 |
pytorch
|
Regarding graphs page on site
|
## 🚀 Feature
A graphs page on site, to track the evolution of PyTorch, along with the time required to train neural networks, along with the data available.
## Motivation
Without graphs, it becomes difficult to see where things are headed.
## Pitch
I suggest a few graphs, a few more could be added,
1) Evolution of PyTorch
2) Evolution of time required to train neural network, since the beginning of PyTorch
3) Evolution of data available since the beginning of PyTorch in terms of size
4) Evolution of number of parameters in neural network
5) Evolution of FLOPS
cc @jlin27
|
module: docs,triaged
|
low
|
Minor
|
650,690,509 |
excalidraw
|
Feature request: Automatically change the color when joining a live-collaboration session
|
Following #1868, it would be nice to auto-change the default color to the avatar's color, e.g. if I enter a live-collaboration session and I'm assigned a red avatar, my default color would be red and everything I type would be red instead of black. It wouldn't prevent me from changing the color.
I'm not sure this is a behavior we would always want. It could be a toggle when creating a session. My use case is team retrospectives, where we write and draw stuff one by one.
|
enhancement
|
low
|
Minor
|
650,700,078 |
godot
|
SkeletonIK Targets cause bones to deform
|
**Godot version:**
3.2.2.stable.official
**OS/device including version:**
Windows 10 Pro - GLES3
**Issue description:**
Godot deforms a skeleton's bones instead of rotating them. This seems to be related to, or a regression of, #34415

This happens regardless of whether I use magnets or not
**Steps to reproduce:**
Create a chain of bones longer than 2-3 bones, and set an IK target for the tip.
**Minimal reproduction project:**
[.escn file containing the model + rig](https://github.com/moonwards1/Moonwards-Virtual-Moon/blob/7c00c0e0fdbbf92609512b0d656fab24a6b5d486/Assets/MoonTown/Models/Athlete_Rover/AtheleteRover_Rigged.escn)
[Commit](https://github.com/moonwards1/Moonwards-Virtual-Moon/blob/7c00c0e0fdbbf92609512b0d656fab24a6b5d486l)
[blend file](https://github.com/moonwards1/Moonwards-Assets/blob/master/Models/AtheleteRover_Rigged.blend)
Credits to [Neurotremolo](https://github.com/Neurotremolo) for finding this one
|
bug,topic:core,confirmed
|
low
|
Minor
|
650,709,217 |
pytorch
|
The values calculated according to the document aren't equal to the values calculated by the framework
|
## 📚 Documentation
Torch 1.5.0 CPU, Linux
https://github.com/pytorch/pytorch/blob/master/torch/optim/lr_scheduler.py#L448
Bug API: `torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max, eta_min=0, last_epoch=-1)`
If set T_max =5, initial learning_rate of optimizer are 0.5, eta_min=0:
The values calculated by the framework are as follows:

But the values calculated by the formula in the documentation are as follows:
https://pytorch.org/docs/stable/optim.html?highlight=cosineannealinglr#torch.optim.lr_scheduler.CosineAnnealingLR

These two don't match: at epoch 5 one is 0, but the other is 0.0954915028125. I want to know which one is right, thank you very much!
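For reference, here is a minimal sketch (plain Python, no torch) of the closed-form schedule from the documentation, assuming an initial learning rate of 0.5, `T_max = 5`, `eta_min = 0`:

```python
import math

def closed_form_lr(t, eta_max=0.5, eta_min=0.0, t_max=5):
    # eta_t = eta_min + (eta_max - eta_min) * (1 + cos(pi * t / T_max)) / 2
    return eta_min + (eta_max - eta_min) * (1 + math.cos(math.pi * t / t_max)) / 2

print([round(closed_form_lr(t), 6) for t in range(6)])
```

At epoch 5 the documented closed form gives exactly 0 (since cos(π) = −1), so the 0 in the comparison corresponds to the formula.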
cc @jlin27 @vincentqb
|
module: docs,module: optimizer,triaged
|
low
|
Critical
|
650,719,783 |
scrcpy
|
A simple thank you
|
Hello!
I just wanted to say thank you for your great work. I have used your software and I am really happy with it. Thank you so much for your effort and hard work!
|
wontfix
|
low
|
Major
|
650,724,352 |
pytorch
|
len of dataloader when using iterable dataset does not reflect batch size
|
## 🐛 Bug
If I construct an iterable dataset of length 100 and then use it in a dataloader with `batch_size=4`, one would expect the length of the dataloader to be 25, but instead it is 100. This is because `DataLoader.__len__` is implemented as
```python
if self._dataset_kind == _DatasetKind.Iterable:
    length = self._IterableDataset_len_called = len(self.dataset)
    return length
```
## To Reproduce
Steps to reproduce the behavior:
```python
from torch.utils.data import DataLoader, IterableDataset, Dataset

test_items = list(range(100))

class TestDataset(Dataset):
    def __init__(self, test_items):
        self.x = test_items

    def __getitem__(self, item):
        return self.x[item]

    def __len__(self) -> int:
        return len(self.x)

class TestIterableDataset(IterableDataset):
    def __init__(self, test_items):
        self.x = test_items

    def __iter__(self):
        return iter(self.x)

    def __len__(self) -> int:
        return len(self.x)

print(len(TestIterableDataset(test_items)))
print(len(TestDataset(test_items)))
print(len(DataLoader(TestIterableDataset(test_items), batch_size=4)))
print(len(DataLoader(TestDataset(test_items), batch_size=4)))
```
this prints:
```
100
100
100
25
```
## Expected behavior
I would expect this to print:
```
100
100
25
25
```
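A possible fix could apply the same rounding that `DataLoader.__len__` already uses for map-style datasets. A hypothetical sketch of that rounding (helper name invented for illustration):

```python
import math

def dataloader_len(dataset_len, batch_size, drop_last=False):
    # Hypothetical helper: the rounding DataLoader.__len__ applies to
    # map-style datasets, applied here to a sized iterable dataset.
    if drop_last:
        return dataset_len // batch_size
    return math.ceil(dataset_len / batch_size)

print(dataloader_len(100, 4))  # 25, matching the map-style case
```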
## Environment
torch '1.5.1'
## Additional context
If people agree this should be fixed, I would love to put out a PR for it! Seems manageable and I've been wanting to contribute for a while now :)
cc @SsnL @albanD @mruberry
|
module: nn,module: dataloader,triaged
|
low
|
Critical
|
650,728,127 |
rust
|
Naming an associated type can cause a compile failure
|
I have found two different pieces of code that should be equivalent, but one compiles and the other does not.
This code compiles successfully: ([Playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=907f7fefa2a2f9d02f4a6a178a4f55ed))
```rust
struct Lazy<Ptr>(Ptr) where Ptr: Deref, Ptr::Target: Record;

impl<Ptr: Deref> Record for Lazy<Ptr> where Ptr::Target: Record {
    fn try_field_ref<F: Field>(&self) -> Option<&F> {
        self.0.try_field_ref()
    }
}
```
But this code fails to compile: ([Playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=f9d0dca192e5702b1528ab19d3e5bba5))
```rust
struct Lazy<Ptr>(Ptr) where Ptr: Deref, Ptr::Target: Record;

impl<R: Record, Ptr: Deref<Target = R>> Record for Lazy<Ptr> {
    fn try_field_ref<F: Field>(&self) -> Option<&F> {
        self.0.try_field_ref()
    }
}
```
Error Message:
```
Compiling playground v0.0.1 (/playground)
error[E0311]: the parameter type `R` may not live long enough
--> src/lib.rs:14:9
|
14 | self.0.try_field_ref()
| ^^^^^^
|
= help: consider adding an explicit lifetime bound for `R`
note: the parameter type `R` must be valid for the anonymous lifetime #1 defined on the method body at 13:5...
--> src/lib.rs:13:5
|
13 | / fn try_field_ref<F:Field>(&self)->Option<&F> {
14 | | self.0.try_field_ref()
15 | | }
| |_____^
note: ...so that the type `R` is not borrowed for too long
--> src/lib.rs:14:9
|
14 | self.0.try_field_ref()
| ^^^^^^
error[E0311]: the parameter type `R` may not live long enough
--> src/lib.rs:14:16
|
14 | self.0.try_field_ref()
| ^^^^^^^^^^^^^
|
= help: consider adding an explicit lifetime bound for `R`
note: the parameter type `R` must be valid for the anonymous lifetime #1 defined on the method body at 13:5...
--> src/lib.rs:13:5
|
13 | / fn try_field_ref<F:Field>(&self)->Option<&F> {
14 | | self.0.try_field_ref()
15 | | }
| |_____^
note: ...so that the reference type `&R` does not outlive the data it points at
--> src/lib.rs:14:16
|
14 | self.0.try_field_ref()
| ^^^^^^^^^^^^^
error: aborting due to 2 previous errors
error: could not compile `playground`.
```
|
A-lifetimes,A-associated-items,T-compiler,C-bug
|
low
|
Critical
|
650,747,205 |
rust
|
Missed inference with array of nonzero divisors
|
This is an enhancement request about a possibly missed optimization. I am not sure if I am asking too much from LLVM here. This code shows three different implementations of a simple function (reduced from other code):
```rust
#![feature(core_intrinsics)]
use std::intrinsics::assume;

pub fn foo1(x: u32, i: usize) -> Option<u32> {
    const PS: [u32; 6] = [2, 3, 5, 7, 11, 13];
    if i < PS.len() {
        let psi = PS[i];
        Some(x % psi)
    } else {
        None
    }
}

pub fn foo2(x: u32, i: usize) -> Option<u32> {
    const PS: [u32; 6] = [2, 3, 5, 7, 11, 13];
    if i < PS.len() {
        let psi = PS[i];
        unsafe { assume(psi != 0); }
        Some(x % psi)
    } else {
        None
    }
}

pub fn foo3(x: u32, i: usize) -> Option<u32> {
    match i {
        0 => Some(x % 2),
        1 => Some(x % 3),
        2 => Some(x % 5),
        3 => Some(x % 7),
        4 => Some(x % 11),
        5 => Some(x % 13),
        _ => None,
    }
}
```
Gives (rustc 1.46.0-nightly 3503f565e 2020-07-02):
```asm
foo1:
push rax
cmp rsi, 5
ja .LBB0_1
lea rax, [rip + .L__unnamed_1]
mov ecx, dword ptr [rax + 4*rsi]
test ecx, ecx
je .LBB0_5
mov eax, edi
xor edx, edx
div ecx
mov eax, 1
pop rcx
ret
.LBB0_1:
xor eax, eax
pop rcx
ret
.LBB0_5:
lea rdi, [rip + str.0]
lea rdx, [rip + .L__unnamed_2]
mov esi, 57
call qword ptr [rip + core::panicking::panic@GOTPCREL]
ud2
foo2:
cmp rsi, 5
ja .LBB1_1
lea rcx, [rip + .L__unnamed_1]
mov eax, edi
xor edx, edx
div dword ptr [rcx + 4*rsi]
mov eax, 1
ret
.LBB1_1:
xor eax, eax
ret
foo3:
cmp rsi, 5
ja .LBB2_1
lea rax, [rip + .LJTI2_0]
movsxd rcx, dword ptr [rax + 4*rsi]
add rcx, rax
jmp rcx
.LBB2_3:
and edi, 1
mov eax, 1
mov edx, edi
ret
.LBB2_1:
xor eax, eax
mov edx, edi
ret
.LBB2_4:
mov eax, edi
mov ecx, 2863311531
imul rcx, rax
shr rcx, 33
lea eax, [rcx + 2*rcx]
sub edi, eax
mov eax, 1
mov edx, edi
ret
.LBB2_5:
mov eax, edi
mov ecx, 3435973837
imul rcx, rax
shr rcx, 34
lea eax, [rcx + 4*rcx]
sub edi, eax
mov eax, 1
mov edx, edi
ret
.LBB2_6:
mov eax, edi
imul rax, rax, 613566757
shr rax, 32
mov ecx, edi
sub ecx, eax
shr ecx
add ecx, eax
shr ecx, 2
lea eax, [8*rcx]
sub ecx, eax
add ecx, edi
mov eax, 1
mov edx, ecx
ret
.LBB2_7:
mov eax, edi
mov ecx, 3123612579
imul rcx, rax
shr rcx, 35
lea eax, [rcx + 4*rcx]
lea eax, [rcx + 2*rax]
sub edi, eax
mov eax, 1
mov edx, edi
ret
.LBB2_8:
mov eax, edi
imul rax, rax, 1321528399
shr rax, 34
lea ecx, [rax + 2*rax]
lea eax, [rax + 4*rcx]
sub edi, eax
mov eax, 1
mov edx, edi
ret
.LJTI2_0:
.long .LBB2_3-.LJTI2_0
.long .LBB2_4-.LJTI2_0
.long .LBB2_5-.LJTI2_0
.long .LBB2_6-.LJTI2_0
.long .LBB2_7-.LJTI2_0
.long .LBB2_8-.LJTI2_0
```
The PS array contains only nonzero values, so "attempt to calculate the remainder with a divisor of zero" can't happen. Is this worth submitting upstream?
|
I-slow,C-enhancement,T-compiler
|
low
|
Minor
|
650,762,152 |
terminal
|
Support fonts and fonts attributes through inline escape sequences like mintty and vte52
|
# Escape sequences to set fonts and their attributes are ignored, instead they are displayed as raw
Terminals support escape sequences to alter fonts; Windows Terminal at the moment only supports basic colors and reverse video.
This is linked to #109 #2916 #5461 #5462 #6703 and #6205 as currently most of the VTE52 font attributes are not supported.
Full support can be tested with:
https://github.com/csdvrx/sixel-testsuite/ansi-vte52.sh
Expected output:
https://github.com/csdvrx/sixel-testsuite/blob/master/test-passed-part1.jpg
It would be desirable to support all mintty font attributes, including support for alternative fonts, using ECMA-48 SGR codes. This would be linked to #1163 for usecases like CJK.
Mintty patch implementing this is on https://github.com/mintty/mintty/commit/1250829806d3da2f05b3207183b190747d2cc127
This allows sequences like:

`echo "\e[12mVery thin font\e[0m"`
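These are plain ECMA-48 SGR codes. For quick experimentation, here is a tiny illustrative sketch (Python, names invented) that builds such sequences; whether an attribute actually renders depends on the terminal:

```python
ESC = "\x1b"

def sgr(code, text):
    # Wrap text in an SGR attribute and reset with SGR 0 afterwards.
    return f"{ESC}[{code}m{text}{ESC}[0m"

# 1 = bold, 3 = italic, 4 = underline; 11-19 select alternative fonts
print(sgr(1, "bold"), sgr(3, "italic"), sgr(12, "alternative font 2"))
```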

|
Help Wanted,Area-VT,Area-Fonts,Product-Terminal,Issue-Task
|
low
|
Major
|
650,802,775 |
godot
|
get_property_list() differs between build configurations
|
**Godot version:**
3.2.2 stable.mono
**OS/device including version:**
Windows 10 x64
**Issue description:**
I've been pulling my hair out trying to get my project to behave properly after upgrading to 3.2.2. The build configurations needed to be updated to compile and debug again, and after pointing everything to the new locations, everything seemed to get along just fine. However, an insidious problem, one that originally even confused the debugger when it appeared under an incorrect build configuration, popped up again when I attempted to export either a release or debug build.
When attempting to access and assign a c# node's property from a separate gdscript file, it becomes unreachable in exported builds. The property in question wraps a generics class, which may be related to the issue, but I'm more inclined to believe something broke between godot versions in terms of export template implementation. I've created and attached a minimal example project using some of the same code from my larger project. The relevant files are pretty much all in the root, including `Test.cs` which is the sample implementation of the C# node that other GDScripts would be fetching data from.
**Steps to reproduce:**
Run attached project in editor. Text box will be filled with information fetched from the test node's c# property `Ksr`. Pressing F12 will assign dummy data to Ksr. If successful, a series of alert boxes will appear. (These will continue to appear because of the way input is hooked until the project is killed, sorry.) The relevant code to pop the alert box is located in `FMCore/RTable.cs`, lines 188 and 196. (testing to validate a Set actually changed the value to something sane.)
Perform the same steps as above, only instead of running from the editor, build a debug release and run it instead. Console will display the message `ResponseCurve: Can't find the target property 'Ksr' to send the table to."`. This is triggered in line 43 of `Node2D.gd` for failing to find the relevant property in the object property list.
**Minimal reproduction project:**
[CSharpPropertyUnreachable.zip](https://github.com/godotengine/godot/files/4871942/CSharpPropertyUnreachable.zip)
|
bug,topic:dotnet
|
low
|
Critical
|
650,803,698 |
flutter
|
macOS Edit menu doesn't auto-enable text-related menu items
|
## Details
```dart
SelectableText(
  'You have pushed the button this many times:',
  toolbarOptions: ToolbarOptions(
    selectAll: true,
    copy: true,
  ),
),
```

**Target Platform:** macOS
**Target OS version/browser:** macOS 10.15
**Devices:** MacBook Pro (16-inch, 2019)
## Logs
flutter analyze
```
C02CN0H4MD6V:selectable_tour sunbreak$ flutter analyze
Analyzing selectable_tour...
No issues found! (ran in 2.2s)
```
flutter doctor -v
```
C02CN0H4MD6V:selectable_tour sunbreak$ flutter doctor -v
[✓] Flutter (Channel master, 1.19.0-4.1.pre, on Mac OS X 10.15.4 19E287, locale en-CN)
    • Flutter version 1.19.0-4.1.pre at /Users/sunbreak/flutter/flutter-1.19.0-4.1.pre
    • Framework revision f994b76974 (3 weeks ago), 2020-06-09 15:53:13 -0700
    • Engine revision 9a28c3bcf4
    • Dart version 2.9.0 (build 2.9.0-14.1.beta)
    • Pub download mirror https://pub.flutter-io.cn
    • Flutter download mirror https://storage.flutter-io.cn
[✓] Android toolchain - develop for Android devices (Android SDK version 29.0.3)
    • Android SDK at /Users/sunbreak/Library/Android/sdk
    • Platform android-29, build-tools 29.0.3
    • Java binary at: /Users/sunbreak/Library/Application Support/JetBrains/Toolbox/apps/AndroidStudio/ch-0/192.6392135/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
    • Java version OpenJDK Runtime Environment (build 1.8.0_212-release-1586-b4-5784211)
    • All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 11.5)
    • Xcode at /Applications/Xcode.app/Contents/Developer
    • Xcode 11.5, Build version 11E608c
    • CocoaPods version 1.9.2
[✓] Chrome - develop for the web
    • Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 3.6)
    • Android Studio at /Users/sunbreak/Library/Application Support/JetBrains/Toolbox/apps/AndroidStudio/ch-0/192.6392135/Android Studio.app/Contents
    • Flutter plugin version 46.0.1
    • Dart plugin version 192.8052
    • Java version OpenJDK Runtime Environment (build 1.8.0_212-release-1586-b4-5784211)
[✓] IntelliJ IDEA Ultimate Edition (version 2020.1.2)
    • IntelliJ at /Users/sunbreak/Applications/JetBrains Toolbox/IntelliJ IDEA Ultimate.app
    • Flutter plugin version 47.1.3
    • Dart plugin version 201.7846.93
[✓] VS Code (version 1.46.1)
    • VS Code at /Applications/Visual Studio Code.app/Contents
    • Flutter extension version 3.11.0
[✓] Connected device (3 available)
    • macOS • macOS • darwin-x64 • Mac OS X 10.15.4 19E287
    • Web Server • web-server • web-javascript • Flutter Tools
    • Chrome • chrome • web-javascript • Google Chrome 78.0.3904.108
    • No issues found!
```
|
a: text input,framework,platform-mac,a: desktop,P2,fyi-text-input,team-macos,triaged-macos
|
low
|
Major
|
650,804,992 |
pytorch
|
Add a launching script for RPC
|
See forum discussion: https://discuss.pytorch.org/t/distributed-model-parallel-using-distributed-rpc/87875/3
It might be helpful to provide a launching script for RPC, as we did for DDP.
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @rohan-varma @xush6528 @jjlilley @osalpekar @jiayisuse
|
feature,triaged,module: rpc
|
low
|
Minor
|
650,805,681 |
rust
|
Additional register with same const behaviour
|
On `rustc 1.44.1 (c7087fe00 2020-06-17)`, `linux x86_64`: when using a struct, an additional, useless register is used. An example is a simple byte copy. Without the structure: https://github.com/botika/buf-min/blob/3ad9833bee4f748f2089ad39fcf9047f9a39b064/benches/src/main.rs#L74-L108
```asm
benches::raw_dyn:
push rbx
sub rsp, 32
mov rbx, rdi
; ...
```
Only the `rbx` register is pushed.
While with the structure and exactly the same behavior: https://github.com/botika/buf-min/blob/3ad9833bee4f748f2089ad39fcf9047f9a39b064/benches/src/main.rs#L110-L156
```asm
benches::ibuffer:
push r14
push rbx
sub rsp, 40
mov rbx, rdi
mov r14d, 12
; ...
```
In `r14d` it keeps the constant length of `"Hello World!"`.
~~Therefore, the optimization does not work correctly when using a structure.~~
Edit: Sorry, my English is not very advanced (the translator's fault). The optimizer does not behave the same in the case of a structure.
|
I-slow,T-compiler,C-bug
|
low
|
Minor
|
650,825,082 |
PowerToys
|
Add support for .emf in File Explorer Preview
|
Please add the feature:
`.emf` preview in File Explorer.
PLEASE!!!
|
Idea-New PowerToy,Product-File Explorer
|
low
|
Minor
|
650,832,512 |
go
|
x/tools/present: unexpected eof on slide header only
|
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.14.4 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/tz70s/Library/Caches/go-build"
GOENV="/Users/tz70s/Library/Application Support/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOINSECURE=""
GONOPROXY=""
GONOSUMDB=""
GOOS="darwin"
GOPATH="/Users/tz70s/WorkSpace/golang/golang"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/Cellar/go/1.14.4/libexec"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.14.4/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
AR="ar"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/f9/zc8k9cr145557jz3bqwmsz880000gn/T/go-build573678616=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
The present tool implicitly assumes that the header and the content (the next slide page) exist simultaneously.
E.g. if we have such slide file:
```
# Go present tool
Present tool reproducer
Tags: go, tool
Tzu-Chiao Yeh
5 Jul 2020
[email protected]
```
instead of
```
# Go present tool
Present tool reproducer
Tags: go, tool
Tzu-Chiao Yeh
5 Jul 2020
[email protected]
## Slide page
```
We'll get an 'unexpected eof' error.
### What did you expect to see?
This error message can be confusing to users:
* The legacy syntax actually has the same issue as well.
* Ideally, the "header-only" behavior would be acceptable, since we usually start by writing the first slide.
* If this behavior is intentional, documenting it somewhere (sorry if I missed it) and providing a better error message would be helpful.
### What did you see instead?
'unexpected eof' error
Actually this issue can be divided into two scenarios:
* Provided header but no author and no sections.
* Provided header and author but no sections.
I think a simple improvement would be to just skip sections when we reach EOF in the header or author?
|
NeedsInvestigation,Tools
|
low
|
Critical
|
650,832,975 |
go
|
net/smtp: add Client.TLSConfig field and Client.SendMail method
|
`smtp.SendMail` actually does a great job, but the fact that it always uses `smtp.Dial` makes it less flexible, both for testing and for certain cases like ours, where we wanted to customize `Client.localName`.
In fact, for certain operations, `Client.localName` must correspond to the FQDN hostname sending the email (always `localhost` in the current implementation). This lack of flexibility led us to duplicate the entire `SendMail` code (and its tests) just to address this issue.
I propose that we add a new public method `smtp.SendMailWithDialer` or `smtp.SendMailWithDialerFunc` and a new type `Dialer`/`DialerFunc`
```go
type Dialer func() (*Client, error) // or DialerFunc ?
func SendMailWithDialer(dial Dialer, a Auth, from string, to []string, msg []byte) error
```
The change set required is minimal and backward compatible: `smtp.SendMail` could use `smtp.SendMailWithDialer` just by passing `smtp.Dial` as the dialer:
```go
func makeDialer(addr string) func() (*Client, error) {
	return func() (*Client, error) {
		return Dial(addr)
	}
}

func SendMail(addr string, a Auth, from string, to []string, msg []byte) error {
	return SendMailWithDialer(makeDialer(addr), a, from, to, msg)
}
```
This would allow for much more control both for testing and real uses cases with minimum code, for example:
```go
func myDialer() (*smtp.Client, error) {
	cl, err := smtp.Dial("remoteSmtp:25")
	if err != nil {
		return nil, err
	}
	if err := cl.Hello("my_FQDN"); err != nil {
		return nil, err
	}
	return cl, nil
}

smtp.SendMailWithDialer(myDialer, auth, from, to, msg)
```
I am willing to make a CL if it is accepted
Edit:
- At first the proposal was about adding `SendMailWithClient`, but it turned out that [line validation](https://github.com/golang/go/blob/master/src/net/smtp/smtp.go#L320) would be duplicated in both `SendMailWithClient` and `SendMail`, as the validation must occur before the dial
- The proposal also included making `config.localName` public, but since `Client.Hello` exists, and `SendMail` internally uses `Client.hello` (which does nothing if `Hello` was called before), this change is no longer required: the dialer function can call `Hello` before handing the client to `SendMailWithDialer`
|
Proposal,Proposal-Accepted
|
medium
|
Critical
|
650,834,263 |
pytorch
|
[FR] NCCL and bool type
|
NCCL should support `bool`. Currently if you use a `bool` buffer in your module then DataParallel complains since
https://github.com/pytorch/pytorch/blob/5b194b0fb21e4cb1ef0357b3315e311db4432338/torch/csrc/cuda/nccl.cpp#L95
gets triggered in replicate.
We should just map `bool` to `ncclChar` too.
|
triaged,module: nccl,small,module: data parallel
|
low
|
Minor
|
650,858,739 |
javascript
|
Configuration of react/jsx-no-bind allows to use arrow functions, while the style guide prohibits it
|
The current configuration of `react/jsx-no-bind` allows using arrow functions as event handlers (and ignores the use of bind in DOM components too): https://github.com/airbnb/javascript/blob/master/packages/eslint-config-airbnb/rules/react.js#L104

However, here's what the style guide has to say about this:
---

---
1. I agree with the guide's argument and believe that it would be best to update the rule configuration to match it. Using an arrow function to initialize an event handler creates a brand new function on each render, the same as `bind()` does, which can lead to unnecessary re-renders.
2. To be honest, I'd even turn `ignoreDOMComponents` to `false` too: it is unclear why DOM components deserve a different treatment. I'd say the reasoning from the style guide fully applies to them too.
3. If, however, there is some reasoning behind these exceptions, then the style guide should probably reflect it.
|
question,react
|
low
|
Minor
|
650,881,264 |
java-design-patterns
|
Functional core, imperative shell pattern
|
## Description
The Functional Core Imperative Shell (FCIS) design pattern aims to segregate the purely functional part of the code (Functional Core) from the side-effect-laden part (Imperative Shell). This separation enhances testability, maintainability, and robustness by isolating side effects and minimizing their scope.
### Main Elements of the Pattern:
1. **Functional Core**:
- Pure functions without side effects.
- Deterministic outputs based on inputs.
- Contains the business logic of the application.
- Facilitates easy unit testing.
2. **Imperative Shell**:
- Encapsulates side effects such as I/O operations, database access, and API calls.
- Interfaces with the outside world.
- Manages state changes and interactions.
- Bridges the Functional Core with the real-world environment.
### Implementation Steps:
1. Identify and extract the business logic into pure functions.
2. Encapsulate side-effect operations within dedicated modules.
3. Define clear interfaces between the Functional Core and Imperative Shell.
4. Write unit tests for the Functional Core functions.
5. Ensure integration tests cover interactions within the Imperative Shell.
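The split can be sketched in a few lines (illustrated here in Python for brevity; the names are invented for the example, and the same structure applies in Java):

```python
def total_with_discount(prices, discount_rate):
    """Functional Core: pure and deterministic, so it is trivially unit-testable."""
    return sum(prices) * (1 - discount_rate)

def checkout(read_prices, write_receipt, discount_rate=0.1):
    """Imperative Shell: performs the side effects and delegates all logic to the core."""
    prices = read_prices()                # side effect: input
    total = total_with_discount(prices, discount_rate)
    write_receipt(f"Total: {total:.2f}")  # side effect: output
    return total
```

The shell stays thin enough that integration tests only need to verify the wiring, while the business logic in the core is covered by fast unit tests.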
## References
- [Functional Core, Imperative Shell - Gary Bernhardt](https://www.destroyallsoftware.com/screencasts/catalog/functional-core-imperative-shell)
- [Building Modern Architectures: Functional Core, Imperative Shell - Albert Llousas](https://medium.com/@albert.llousas/building-modern-architectures-functional-core-imperative-shell-revamp-0bb5ae62b589)
- [Functional Core Imperative Shell - Kenneth Lange](https://kennethlange.com/functional-core-imperative-shell/)
- [A Look at the Functional Core and Imperative Shell Pattern - SSENSE Tech](https://medium.com/ssense-tech/a-look-at-the-functional-core-and-imperative-shell-pattern-be2498da153a)
## Acceptance Criteria
1. The project codebase should have a clear separation between the Functional Core and the Imperative Shell.
2. All business logic should reside in pure functions within the Functional Core, with appropriate unit tests.
3. All side-effect operations should be encapsulated in the Imperative Shell, with integration tests to ensure correct interactions.
|
info: help wanted,epic: pattern,type: feature
|
low
|
Minor
|
650,886,985 |
rust
|
Missing warning about `target_feature` when using for `core::arch::x86_64` functions
|
I tried this code: https://rust.godbolt.org/z/b9A7oP
```rust
extern crate core;
use core::arch::x86_64::__m128;
use core::arch::x86_64::_mm_extract_ps;
use core::arch::x86_64::_mm_hadd_ps;
pub unsafe fn add_reduce(a: __m128, b: __m128) -> f32 {
let c = _mm_hadd_ps(a, b);
f32::from_bits(_mm_extract_ps(c, 0) as u32)
}
```
I expected to see this happen: a rustc lint about the missing `target_feature` in `add_reduce` when using `_mm_extract_ps`.
Instead, this happened: there is no warning and the functions are not inlined.
### Meta
* rustc 1.46.0-nightly (3503f565e 2020-07-02) x86_64-unknown-linux-gnu
|
A-lints,T-compiler,C-feature-request
|
low
|
Minor
|
650,887,969 |
tensorflow
|
Zero AUROC when validation set is only comprised of positive labels
|
**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Tested on Ubuntu 18.04 and MacOS Catalina 10.15.5
- TensorFlow installed from (source or binary): installed from Conda (tensorflow-gpu) on Ubuntu and from pip on MacOS
- TensorFlow version (use command below): v2.2.0-rc4-8-g2b96f3662b 2.2.0
- Python version: 3.7.6 (on both machines)
- CUDA/cuDNN version: 11.0 (on Ubuntu machine)
- GPU model and memory: 4 NVIDIA V100 GPUs
**Describe the current behavior**
The AUROC is zero when there are no negatives in the validation set. This may lead users to believe that the model is not performing correctly, while it is just an error related to a misuse of the AUROC metric.
**Describe the expected behavior**
The AUROC should be NaN when no negatives are present in the validation set. This way the user is aware that there is something wrong with the metric that is being used, with the possibility to add a warning since there is a 0/0 somewhere in the computation of the AUROC.
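As an illustration of the expected behavior, a tiny sketch (plain Python, names invented) of a division that propagates NaN instead of silently returning a number:

```python
def safe_div(num, den):
    # Return NaN instead of an arbitrary value when the denominator is zero,
    # so degenerate inputs (e.g. no negative labels for AUROC) stay visible.
    return float("nan") if den == 0 else num / den

print(safe_div(1.0, 2.0))  # 0.5
print(safe_div(0.0, 0.0))  # nan
```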
**Standalone code to reproduce the issue**
```python
from tensorflow.keras.metrics import AUC
metric = AUC()
metric.update_state([1, 1, 1], [1, 0.5, 0.3])
metric.result()
```
I believe that similar issues also happen for other metrics, as uncanny values were displayed also for Recall and Precision, but I have not identified a simple reproducible example.
|
stat:awaiting tensorflower,type:bug,comp:keras,TF 2.11
|
low
|
Critical
|
650,889,083 |
rust
|
Better error for import of associated items
|
Hi. I was using a crate which had a struct with a number of associated items and I was trying to save typing in an error handling function. But:
```rust
12 | use rocket::http::Status::*;
| ^^^^^^ `Status` is a struct, not a module
```
I looked in the Reference and I wasn't able to find any discussion of this. In particular, I would expect it to be documented in the explanation of _Use declarations_ under _Items_.
After considerably more digging, I found:
* #24915, a closed issue where the resolution was that this is not expected to work
* https://internals.rust-lang.org/t/importing-associated-constants/6610/5, a discussion about how to maybe implement this (although it seems unlikely that that approach could work for the glob import I was trying...)
It seems that this situation is likely to persist. Maybe the error message could be improved? Something like
> hint: importing associated items is not supported; you must use a path to the parent type each time
could help the user by letting them know they can't avoid the extra typing.
I will also file an issue against the Reference. Thanks for your attention.
|
C-enhancement,A-diagnostics,A-associated-items,T-compiler
|
low
|
Critical
|
650,895,083 |
go
|
cmd/compile: reclaim binary size increase from CL 35554 constant to interface allocation optimizations
|
In [CL 35554](https://go-review.googlesource.com/c/go/+/35554/) for #18704, @josharian added some very nice allocation optimizations, as part of his efforts to claw back some of the allocation penalty of #8405, but a result was a modest increase in binary size.
In CL 35554, @josharian mentioned that the ~0.5% binary size increase could likely be recovered in the future, and there was some discussion of adding a tracking issue for that.
I did a quick look for that follow-up tracking issue a couple years ago, and again searched just now, but both times haven't found it.
Posting this new issue now in case it is helpful to either:
1. serve as the tracking issue for that possible future work, or
2. if that is already tracked (or even completed!), help a future gopher more easily follow the issue trail through cross linking ;-)
Here is a snippet from CL 35554:
> This CL adds ~0.5% to binary size, despite
> decreasing the size of many functions,
> because it also adds many static symbols.
>
> This binary size regression could be recovered in
> future (but currently unplanned) work.
>
> There is a lot of content-duplication in these
> symbols; this statement generates six new symbols,
> three containing an int 1 and three containing
> a pointer to the string "a":
>
> fmt.Println(1, 1, 1, "a", "a", "a")
>
> These symbols could be made content-addressable.
>
> Furthermore, these symbols are small, so the
> alignment and naming overhead is large.
> As with the go.strings section, these symbols
> could be hidden and have their alignment reduced.
Another reason for posting now is that there is some recent renewed energy around binary size (#6853).
Sorry if I missed something obvious. @josharian or anyone else, feel free to close if this is already addressed or tracked elsewhere.
|
Performance,NeedsInvestigation,compiler/runtime
|
low
|
Minor
|
650,916,019 |
vscode
|
"Add Selection To Previous Find Match" interferes with "Add Selection To Next Find Match"
|
<!-- β οΈβ οΈ Do Not Delete This! bug_report_template β οΈβ οΈ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Also please test using the latest insiders build to make sure your issue has not already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- Use Help > Report Issue to prefill these. -->
- VSCode Version:1.46.1
- OS Version:2004 Build 19041.329
Steps to Reproduce:
1. Add Selection To Next Find Match [Ctrl+D]
2. Now say you also want to expand the selection upwards --> Add Selection To Previous Find Match
3. Nothing happens.
Expected Behavior:
The two commands should complement each other, so that, for example, when you start in the middle of a function and want to edit a specific word, you can quickly expand the selection towards both the top and the bottom. Currently the two commands in conjunction are useless; they only help if you want to expand the selection solely upwards OR solely downwards. Fixing this issue would make the commands useful in either case.
<!-- Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
|
feature-request,editor-commands
|
low
|
Critical
|
650,916,903 |
rust
|
std::line macro can be wrong in 1.44.1 stable under proc-macros
|
Hi,
This issue seems to be solved on nightly but I couldn't find an explicit change in that code, so if this issue isn't helpful feel free to close.
I have a proc-macro that generates loggings, and on one occurrence it gets the line number wrong on stable but right on nightly:
The test: https://github.com/elichai/log-derive/blob/2020-test-logger/tests/first.rs#L121
Fails on stable: https://travis-ci.com/github/elichai/log-derive/jobs/357565126#L305
Passes on nightly: https://travis-ci.com/github/elichai/log-derive/jobs/357565129#L314
Using `cargo-bisect-rustc` I found that the bug was fixed in https://github.com/rust-lang/rust/commit/a9ca1ec9280ca1e5020edd699917c3367a30a798 (https://pastebin.com/raw/0rwkWwZU), but I don't see anything related there
If this is helpful I can try and create a minimal reproducible code, but because it's fixed in nightly I'm not sure if it's worth the time
|
E-needs-test,A-macros,T-compiler,C-bug,E-needs-mcve,A-proc-macros
|
low
|
Critical
|
650,926,746 |
node
|
stream: Readable batched iteration
|
This is a continuation of https://github.com/nodejs/node/pull/34035 and the promises session we had on OpenJS about async iteration performance of streams. One alternative discussed was to batch reading.
I was thinking we could do something along the lines of:
```js
async function* createBatchedAsyncIterator(stream, batchLen) {
  const nop = () => {};
  let callback = nop;

  function next(resolve) {
    if (this === stream) {
      callback();
      callback = nop;
    } else {
      callback = resolve;
    }
  }

  stream
    .on('readable', next)
    .on('error', next)
    .on('end', next)
    .on('close', next);

  try {
    const state = stream._readableState;
    while (true) {
      let buffer;
      while (true) {
        const chunk = stream.read();
        if (chunk === null) break;
        if (!buffer) buffer = [];
        buffer.push(chunk);
        if (batchLen && buffer.length >= batchLen) break;
      }
      if (buffer) {
        yield buffer;
      } else if (state.errored) {
        throw state.errored;
      } else if (state.ended) {
        break;
      } else if (state.closed) {
        // TODO(ronag): ERR_PREMATURE_CLOSE?
        break;
      } else {
        await new Promise(next);
      }
    }
  } catch (err) {
    // destroyImpl is Node's internal stream destroy helper.
    destroyImpl.destroyer(stream, err);
    throw err;
  } finally {
    destroyImpl.destroyer(stream, null);
  }
}

Readable.batched = function (stream, batchLen) {
  return createBatchedAsyncIterator(stream, batchLen);
};
```
Which would make the following possible:
```js
// Concurrency
for await (const requests of Readable.batched(stream, 128)) {
// Process in parallel with concurrency limit of 128.
await Promise.all(requests.map(dispatch))
}
// Speed
for await (const requests of Readable.batched(stream, 128)) {
for (const request of requests) {
// All in the same tick
}
}
```
It's still not perfect, since if one element takes very long it would reduce concurrency. However, it would still be a step forward, and it also reduces the async-iteration overhead.
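The sketch above leans on Node internals (`_readableState`, `destroyImpl`), but the batching contract itself is simple. A language-agnostic sketch of the same idea (here in Python, with invented names, purely illustrative): collect up to `batch_len` items, yield each full batch, and flush any remainder at the end.

```python
import asyncio

async def batched(source, batch_len):
    """Yield lists of up to `batch_len` items from an async iterable."""
    buffer = []
    async for item in source:
        buffer.append(item)
        if len(buffer) >= batch_len:
            yield buffer
            buffer = []
    if buffer:
        yield buffer  # flush the final partial batch

async def main():
    async def numbers():
        for i in range(10):
            yield i

    batches = [b async for b in batched(numbers(), 4)]
    print(batches)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]

asyncio.run(main())
```

The Node version additionally has to wake up on `'readable'`/`'end'`/`'error'`/`'close'`; the async-iterator protocol hides that bookkeeping here.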
|
stream
|
medium
|
Critical
|
650,939,983 |
godot
|
GDScript: get_property_default_value always returns Null
|
<!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:**
<!-- Specify commit hash if using non-official build. -->
3.2.2 stable
**OS/device including version:**
<!-- Specify GPU model, drivers, and the backend (GLES2, GLES3, Vulkan) if graphics-related. -->
Windows
**Issue description:**
<!-- What happened, and what was expected. -->
`get_property_default_value()` should return the property's default value, but it returns `Null` instead
**Steps to reproduce:**
```gdscript
var a = 1
func _ready():
print (get_script().get_property_default_value("a"))
```
**Minimal reproduction project:**
<!-- A small Godot project which reproduces the issue. Drag and drop a zip archive to upload it. -->
not required
|
bug,topic:gdscript,confirmed
|
low
|
Minor
|
650,943,998 |
go
|
cmd/cgo: add #cgo CFLAGS and #CGO CXXFLAGS directives that only apply to files in the current package
|
My situation is that instead of distributing static libraries, I have the C++ and C sources of the C package I'm wrapping stored in the same directory as my Go files, so that the package can, in theory, be compiled with any C compiler.
In order to improve performance, I added a build tag which added the -flto flag, in order to enable link-time-optimization between all the C++ files.
However, I got an error when I added this line:
#CGO CFLAGS: -flto
to the file which implements LTO.
cannot load DWARF output from $WORK\b002\\_cgo_.o: decoding dwarf section info at offset 0x0: too short
So I'd like a way to apply CFLAGS and CXXFLAGS to only the C and C++ files that are actually part of the package, and not CGO generated C files.
|
NeedsInvestigation,FeatureRequest,compiler/runtime
|
low
|
Critical
|
650,956,516 |
rust
|
Bad suggestion `dyn pub` on proc macro (#61963 regressed)
|
#72306 caused the compiler to emit this bad suggestion (and blessed it) in the [test](https://github.com/rust-lang/rust/blob/0cd7ff7ddfb75a38dca81ad3e76b1e984129e939/src/test/ui/suggestions/issue-61963.rs) for #61963 ([output](https://github.com/rust-lang/rust/blob/0cd7ff7ddfb75a38dca81ad3e76b1e984129e939/src/test/ui/suggestions/issue-61963.stderr)):
```
error: trait objects without an explicit `dyn` are deprecated
--> $DIR/issue-61963.rs:18:1
|
LL | pub struct Foo {
| ^^^ help: use `dyn`: `dyn pub`
```
This is the behaviour the test is supposed to guard against (almost - the original bad suggestion was to insert `dyn` before the proc macro attribute), so the issue is regressed.
The issue was originally fixed nearly a year ago, so this is a regression from stable.
|
A-diagnostics,P-medium,A-macros,T-compiler,regression-from-stable-to-stable,C-bug,A-suggestion-diagnostics,D-invalid-suggestion,A-proc-macros
|
low
|
Critical
|
650,966,433 |
go
|
runtime: libunwind is unable to unwind CGo to Go's stack
|
### What version of Go are you using (`go version`)?
`master` as of the build
<pre>
$ go version
devel +dd150176c3 Fri Jul 3 03:31:29 2020 +0000
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/steeve/Library/Caches/go-build"
GOENV="/Users/steeve/Library/Application Support/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOINSECURE=""
GOMODCACHE="/Users/steeve/go/pkg/mod"
GONOPROXY=""
GONOSUMDB=""
GOOS="darwin"
GOPATH="/Users/steeve/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/Users/steeve/code/github.com/znly/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/Users/steeve/code/github.com/znly/go/pkg/tool/darwin_amd64"
GCCGO="gccgo"
AR="ar"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD="/Users/steeve/code/github.com/znly/go/src/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/bs/51dlb_nn5k35xq9qfsxv9wc00000gr/T/go-build842228435=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
Following @cherrymui's comment on #39524, I tried to check why lots of our backtraces on iOS stop at `runtime.asmcgocall`.
Since I wanted to reproduce it on my computer and `lldb` manages to backtrace properly, I figured I'd give `libunwind` a try, since that is what iOS uses when a program crashes.
Unfortunately `libunwind` didn't manage to walk the stack past CGo generated `_Cfunc_` functions.
Given this program:
```go
package main
/*
#cgo CFLAGS: -O0
#include <libunwind.h>
#include <stdio.h>
void backtrace() {
unw_cursor_t cursor;
unw_context_t context;
// Initialize cursor to current frame for local unwinding.
unw_getcontext(&context);
unw_init_local(&cursor, &context);
// Unwind frames one by one, going up the frame stack.
while (unw_step(&cursor) > 0) {
unw_word_t offset, pc;
unw_get_reg(&cursor, UNW_REG_IP, &pc);
if (pc == 0) {
break;
}
printf("0x%llx:", pc);
char sym[256];
if (unw_get_proc_name(&cursor, sym, sizeof(sym), &offset) == 0) {
printf(" (%s+0x%llx)\n", sym, offset);
} else {
printf(" -- error: unable to obtain symbol name for this frame\n");
}
}
}
void two() {
printf("two\n");
backtrace();
}
void one() {
printf("one\n");
two();
}
*/
import "C"
//go:noinline
func goone() {
C.one()
}
func main() {
goone()
}
```
It prints:
```
one
two
0x40617fe: (two+0x1e)
0x406182e: (one+0x1e)
0x406168b: (_cgo_7c45d1c2feef_Cfunc_one+0x1b)
```
I tried doing Go(1) -> C(1) -> Go(2) -> C(2) and backtrace, and it only unwinds C(2).
Also, I tried to give `asmcgocall` a 16-byte stack, hoping that the generated frame pointer would help, but it didn't.
### What did you expect to see?
The complete backtrace.
### What did you see instead?
A backtrace for C functions only.
|
NeedsInvestigation,compiler/runtime
|
medium
|
Critical
|
650,976,282 |
terminal
|
Anti-Aliasing on profile icons.
|
<!--
π¨π¨π¨π¨π¨π¨π¨π¨π¨π¨
I ACKNOWLEDGE THE FOLLOWING BEFORE PROCEEDING:
1. If I delete this entire template and go my own path, the core team may close my issue without further explanation or engagement.
2. If I list multiple bugs/concerns in this one issue, the core team may close my issue without further explanation or engagement.
3. If I write an issue that has many duplicates, the core team may close my issue without further explanation or engagement (and without necessarily spending time to find the exact duplicate ID number).
4. If I leave the title incomplete when filing the issue, the core team may close my issue without further explanation or engagement.
5. If I file something completely blank in the body, the core team may close my issue without further explanation or engagement.
All good? Then proceed!
-->
# Description of the new feature/enhancement
An option to add image filtering on profile icons to remove jagged edges from icons.
<!--
A clear and concise description of what the problem is that the new feature would solve.
Describe why and how a user would use this new functionality (if applicable).
-->
As it is right now, profile icons are shrunk using what appears to be nearest-neighbor scaling; adding an option to change the resize algorithm would make icons appear less jagged.
# Proposed technical implementation details (optional)
Implement a resize filter such as bilinear or trilinear and let the user change that in the settings file.
<!--
A clear and concise description of what you want to happen.
-->
In a future version, add an option to apply a different resize filter to profile icons to remove strange image-scaling artifacts and make the icons more legible.
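For context on the requested filters: nearest-neighbor copies the single closest pixel (hence the jagged edges), while bilinear blends the four surrounding pixels. A minimal, illustrative sketch of bilinear sampling on a tiny grayscale image (not the Terminal's actual rendering code):

```python
def bilinear_sample(img, x, y):
    """Sample a 2D grayscale image (list of rows) at fractional (x, y)."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)  # clamp at the right/bottom edge
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0            # fractional offsets within the cell
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bottom = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

img = [[0, 10], [20, 30]]
print(bilinear_sample(img, 0.5, 0.5))  # 15.0 — the blend of all four pixels
```

Nearest-neighbor would instead snap (0.5, 0.5) to one of the four corner values, which is what produces the hard, aliased edges on shrunken icons.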
|
Area-UserInterface,Product-Terminal,Issue-Task,Priority-3
|
low
|
Critical
|
650,998,489 |
rust
|
Non-'static Lifetimes in Const Generics
|
We currently don't correctly handle non-`'static` lifetimes in const generics. Until #74051 is merged we ICE when a non-`'static` lifetime is hit, as seen in #60814.
Many uses for const generics does not hit this limitation, but the following case has been brought up:
```rust
fn test<'a, const VALUE: std::mem::Discriminant<Enum<'a>>>(v: Enum<'a>) -> bool {
std::mem::discriminant(&v) == VALUE
}
```
as quoted from @lcnr on [Zulip](https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/lifetime.20in.20const-generic/near/202866360)
> `mem::Discriminant` is invariant so `VALUE` has to be `mem::Discriminant` of `Enum<'a>` and can't use `Enum<'static>`.
@varkor @eddyb @nikomatsakis
|
C-enhancement,A-lifetimes,T-lang,A-const-generics
|
low
|
Minor
|
650,999,974 |
godot
|
[Bullet] Area zero gravity with Bullet
|
**Godot version:**
Godot 3.2.2 Official, Standard 64bit (Windows)
**OS/device including version:**
Windows 10
GPU: Nvidia 1060m
Backend: GLES2
**Issue description:**
Short: a RigidBody inside an Area with a zero-gravity override still has some initial (internal) force applied when using Bullet, but not with Godot physics. The expected result is that no force is applied and the RigidBody remains still. Potentially related to issue #32776 or #35378.
**Steps to reproduce:**
- Ensure Bullet physics is in use (default?), no other global physics modifications.
- Create Scene with RigidBody entirely within an Area. The Area should use Replace Space Override and set gravity to zero (both vector and magnitude).
- Run the scene.
- The expectation is that no force is applied to RigidBody as it has always been within the Area with no gravity. Same results when creating an instance of the RigidBody after the scene is running (not in minimal project)
- Change physics back-end to Godot and run Scene; expected results achieved (no movement)
**Minimal reproduction project:**
[bzg.zip](https://github.com/godotengine/godot/files/4874428/bzg.zip)
|
bug,confirmed,topic:physics
|
low
|
Minor
|
651,038,525 |
PowerToys
|
Always open new windows in center
|
Idea-New PowerToy
|
low
|
Major
|
|
651,057,472 |
youtube-dl
|
Add support for pops.vn
|
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this checklist in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2020.06.16.1. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated versions will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that site you are requesting is not dedicated to copyright infringement, see https://yt-dl.org/copyright-infringement. youtube-dl does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
- Search the bug tracker for similar site support requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2020.06.16.1**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of the provided URLs violate any copyrights
- [x] I've searched the bug tracker for similar site support requests including closed ones
## Example URLs
<!--
Provide all kinds of example URLs support for which should be included. Replace the following example URLs by yours.
-->
- Single video: https://pops.vn/video/one-piece-s15-tap-517-ngay-tai-ngo-cua-bang-hai-tac-mu-rom-5ef97425f6e78e1592773b86
- A series: https://pops.vn/series/one-piece-dao-hai-tac-5e0afd7a08b95c003d0c39ed
## Description
<!--
Provide any additional information.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
A Vietnamese and Thai entertainment video streaming platform, including anime. Anime series are geo-restricted in all other countries and are fully licensed.
|
site-support-request
|
low
|
Critical
|
651,073,496 |
PowerToys
|
Preview word, ppt, excel files without Office
|
Many people don't have Office installed. It would be useful to preview Word, PowerPoint, Excel, and PDF files without other office software.
|
Idea-New PowerToy,Product-File Explorer
|
low
|
Major
|
651,081,352 |
pytorch
|
'_mm256_extract_epi64' was not declared in this scope when compiling on Debian 32-bit
|
## π Bug
I am attempting to compile pytorch on a Debian 32-bit VM. Compilation errors about AVX have been mentioned before in #17901, but I still came across similar errors mentioning `_mm256_extract_epi64`, and they seemed to come from ATen, not Caffe2.
## To Reproduce
Steps to reproduce the behavior:
1. Debian 32-bit install miniconda, create python 3.6 environment
1. Follow https://github.com/pytorch/pytorch#from-source to compile
1. Using flags USE_CUDA=0; USE_DISTRIBUTED=0; USE_MKLDNN=0; USE_NNPACK=0; USE_QNNPACK=0; USE_FBGEMM=0
```
[2394/3508] Building CXX object caffe2...ve/cpu/batch_norm_kernel.cpp.AVX.cpp.o
FAILED: caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/cpu/batch_norm_kernel.cpp.AVX.cpp.o
/usr/bin/c++ -DCPUINFO_SUPPORTED_PLATFORM=1 -DFMT_HEADER_ONLY=1 -DFXDIV_USE_INLINE_ASSEMBLY=0 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTH_BLAS_MKL -DUSE_EXTERNAL_MZCRC -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS -Iaten/src -I../aten/src -I. -I../ -I../cmake/../third_party/benchmark/include -Icaffe2/contrib/aten -I../third_party/onnx -Ithird_party/onnx -I../third_party/foxi -Ithird_party/foxi -I../caffe2/../torch/csrc/api -I../caffe2/../torch/csrc/api/include -I../caffe2/aten/src/TH -Icaffe2/aten/src/TH -I../caffe2/../torch/../aten/src -Icaffe2/aten/src -Icaffe2/../aten/src -Icaffe2/../aten/src/ATen -I../caffe2/../torch/csrc -I../caffe2/../torch/../third_party/miniz-2.0.8 -I../aten/src/TH -I../aten/../third_party/catch/single_include -I../aten/src/ATen/.. -Icaffe2/aten/src/ATen -I../third_party/miniz-2.0.8 -I../caffe2/core/nomnigraph/include -I../third_party/FXdiv/include -I../c10/.. 
-I../third_party/pthreadpool/include -I../third_party/cpuinfo/include -I../aten/src/ATen/native/quantized/cpu/qnnpack/include -I../aten/src/ATen/native/quantized/cpu/qnnpack/src -I../third_party/cpuinfo/deps/clog/include -I../third_party/FP16/include -I../third_party/fmt/include -isystem ../cmake/../third_party/googletest/googlemock/include -isystem ../cmake/../third_party/googletest/googletest/include -isystem ../third_party/protobuf/src -isystem /home/pi/miniconda3/envs/py36/include -isystem ../third_party/XNNPACK/include -isystem ../cmake/../third_party/eigen -isystem /home/pi/miniconda3/envs/py36/include/python3.6m -isystem /home/pi/miniconda3/envs/py36/lib/python3.6/site-packages/numpy/core/include -isystem ../cmake/../third_party/pybind11/include -isystem include -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow -DHAVE_AVX_CPU_DEFINITION -O3 -DNDEBUG -DNDEBUG -fPIC -DHAVE_GCC_GET_CPUID -DUSE_AVX -DTH_HAVE_THREAD -Wall -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wno-write-strings -Wno-unknown-pragmas -Wno-missing-braces -Wno-maybe-uninitialized -fvisibility=hidden -O2 -fopenmp -DCAFFE2_BUILD_MAIN_LIB -pthread -std=gnu++14 -O3 -mavx -DCPU_CAPABILITY=AVX -DCPU_CAPABILITY_AVX -MD -MT 
caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/cpu/batch_norm_kernel.cpp.AVX.cpp.o -MF caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/cpu/batch_norm_kernel.cpp.AVX.cpp.o.d -o caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/cpu/batch_norm_kernel.cpp.AVX.cpp.o -c aten/src/ATen/native/cpu/batch_norm_kernel.cpp.AVX.cpp
In file included from ../aten/src/ATen/cpu/vec256/vec256.h:13,
from ../aten/src/ATen/native/cpu/Loops.h:35,
from aten/src/ATen/native/cpu/batch_norm_kernel.cpp.AVX.cpp:7:
../aten/src/ATen/cpu/vec256/vec256_qint.h: In member function βat::vec256::{anonymous}::Vec256<c10::qint8>::float_vec_return_type at::vec256::{anonymous}::Vec256<c10::qint8>::dequantize(at::vec256::{anonymous}::Vec256<float>, at::vec256::{anonymous}::Vec256<float>, at::vec256::{anonymous}::Vec256<float>) constβ:
../aten/src/ATen/cpu/vec256/vec256_qint.h:567:40: error: β_mm256_extract_epi64β was not declared in this scope
__m128i int_val0 = _mm_set1_epi64x(_mm256_extract_epi64(vals, 0));
^~~~~~~~~~~~~~~~~~~~
../aten/src/ATen/cpu/vec256/vec256_qint.h:567:40: note: suggested alternative: β_mm256_extract_epi8β
__m128i int_val0 = _mm_set1_epi64x(_mm256_extract_epi64(vals, 0));
^~~~~~~~~~~~~~~~~~~~
_mm256_extract_epi8
../aten/src/ATen/cpu/vec256/vec256_qint.h: In member function βat::vec256::{anonymous}::Vec256<c10::quint8>::float_vec_return_type at::vec256::{anonymous}::Vec256<c10::quint8>::dequantize(at::vec256::{anonymous}::Vec256<float>, at::vec256::{anonymous}::Vec256<float>, at::vec256::{anonymous}::Vec256<float>) constβ:
../aten/src/ATen/cpu/vec256/vec256_qint.h:838:40: error: β_mm256_extract_epi64β was not declared in this scope
__m128i int_val0 = _mm_set1_epi64x(_mm256_extract_epi64(vals, 0));
^~~~~~~~~~~~~~~~~~~~
../aten/src/ATen/cpu/vec256/vec256_qint.h:838:40: note: suggested alternative: β_mm256_extract_epi8β
__m128i int_val0 = _mm_set1_epi64x(_mm256_extract_epi64(vals, 0));
^~~~~~~~~~~~~~~~~~~~
_mm256_extract_epi8
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "setup.py", line 734, in <module>
build_deps()
File "setup.py", line 318, in build_deps
cmake=cmake)
File "/home/pi/pytorch/tools/build_pytorch_libs.py", line 62, in build_caffe2
cmake.build(my_env)
File "/home/pi/pytorch/tools/setup_helpers/cmake.py", line 345, in build
self.run(build_args, my_env)
File "/home/pi/pytorch/tools/setup_helpers/cmake.py", line 141, in run
check_call(command, cwd=self.build_dir, env=env)
File "/home/pi/miniconda3/envs/py36/lib/python3.6/subprocess.py", line 291, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '1']' returned non-zero exit status 1.
```
## Expected behavior
Expected no compilation error regarding AVX on 32-bit system
## Environment
PyTorch version: latest
Is debug build: N/A
CUDA used to build PyTorch: N/A
OS: Debian GNU/Linux 10 (buster)
GCC version: (Debian 8.3.0-6) 8.3.0
CMake version: version 3.12.2
Python version: 3.6
Is CUDA available: N/A
CUDA runtime version: Could not collect
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy==1.15.4
[conda] blas 1.0 mkl
[conda] mkl 2019.1 144
[conda] mkl-include 2019.1 144
[conda] mkl_fft 1.0.6 py36hd81dba3_0
[conda] mkl_random 1.0.2 py36hd81dba3_0
[conda] numpy 1.15.4 py36h7e9f1db_0
[conda] numpy-base 1.15.4 py36hde5b4d6_0
cc @malfet
|
module: build,triaged
|
low
|
Critical
|
651,085,983 |
go
|
cmd/compile: type inference could be less strict when there are interface arguments
|
The [following program](https://go2goplay.golang.org/p/emZ6kZE6kT1) fails:
```
func Equal(type T comparable)(x, y T) bool {
return x == y
}
func main() {
var x interface{} = 5
var y = 5
fmt.Println(Equal(x, y))
}
```
The error is:
```
prog.go2:15:23: type int of y does not match inferred type interface{} for T
```
Although it's true that the types don't match exactly, it seems to me that it might be nice
to allow the type argument to unify to `interface{}` in the same way that we allow
a concrete type to be passed to an interface type.
|
NeedsDecision,TypeInference
|
low
|
Critical
|
651,088,862 |
create-react-app
|
Erroneous CSS optimization
|
### Preface
First of all, I'd like to apologize if this issue is not related to `create-react-app` itself. Please redirect me to the proper package if it isn't. I am not particularly knowledgeable when it comes to packages and setup.
### Describe the bug
When creating an optimized build (`npm build`) I get erroneous CSS optimization (I assume "something" is optimizing incorrectly, but I have no idea what). This does not reproduce when simply running locally through `npm start`.
This is the CSS in question:
```CSS
.selector {
border-width: var(--border-width);
border-color: var(--border-color);
}
```
it gets optimized into:
```CSS
.selector {
border: var(--border-width) solid var(--border-color);
}
```
which may seem correct at first, except `border-width` can be a set of values for the different "sides" of the border, and that does not work with the shorthand. (`border: 20px 0 30px 0 solid red` is not valid, for instance). So the properties should not be getting aggregated. At least as far as my understanding goes.
### Environment
current version of create-react-app: 3.4.1
System:
OS: Windows 10 10.0.19041
CPU: (8) x64 Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz
Binaries:
Node: 8.16.0 - C:\Program Files\nodejs\node.EXE
Yarn: Not Found
npm: 6.4.1 - C:\Program Files\nodejs\npm.CMD
Browsers:
Edge: 44.19041.1.0
Internet Explorer: Not Found
npmPackages:
react: ^16.12.0 => 16.13.1
react-dom: ^16.12.0 => 16.13.1
react-scripts: 3.4.1 => 3.4.1
npmGlobalPackages:
create-react-app: Not Found
### Steps to reproduce
In "description". Essentially:
- Create a CSS class that contains variable `border-width` and `border-color`.
- Programmatically set `border-width` to represent multiple "sides" (`10px 20px 0 30px`).
- Run `npm start`. Observe correct CSS.
- Run `npm build`. Observe incorrect CSS.
For brevity, here's how to do step 2:
```JS
const rootStyle = document.documentElement.style;
rootStyle.setProperty("--border-width", "10px 20px 0 30px");
```
### Reproducible demo
You can find my published, erroneously optimized page here:
https://protos.now.sh
Inspect any "input" and you'll see the CSS is incorrect.
|
stale,issue: bug report
|
medium
|
Critical
|
651,090,493 |
pytorch
|
Difference between allocated and reserved CUDA memory
|
## β Questions and Help
I've asked this [here](https://discuss.pytorch.org/t/difference-between-allocated-and-reserved-memory/87278) but got no response.
---
I imagined that the difference between allocated and reserved memory is the following:
- **allocated** memory is the amount of memory that is actually used by PyTorch.
- **reserved** is the allocated memory plus pre-cached memory.
If that is correct the following should hold:
- **reserved** memory >= **allocated** memory
- **reserved** memory == **allocated** memory after calling `torch.cuda.empty_cache()`
Is my understanding correct?
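Those two bullets can be illustrated with a toy model of a caching allocator. This is an assumption-laden sketch with invented names, not PyTorch's actual CUDA caching allocator, which also deals with block sizes and fragmentation — fragmentation being one reason reserved memory can exceed a naive extrapolation:

```python
class CachingAllocator:
    """Toy model: 'reserved' = memory held from the device,
    'allocated' = the part currently handed out to tensors."""
    def __init__(self):
        self.allocated = 0
        self.cached = 0

    @property
    def reserved(self):
        return self.allocated + self.cached

    def malloc(self, n):
        reuse = min(n, self.cached)  # serve from the cache first
        self.cached -= reuse
        self.allocated += n          # the remainder comes from the device

    def free(self, n):
        self.allocated -= n
        self.cached += n             # freed blocks stay cached, not returned

    def empty_cache(self):
        self.cached = 0              # hand cached blocks back to the device

a = CachingAllocator()
a.malloc(100); a.free(40)
print(a.allocated, a.reserved)  # 60 100 -> reserved >= allocated
a.empty_cache()
print(a.allocated, a.reserved)  # 60 60  -> equal after empty_cache()
```

In the real allocator, freed blocks of mismatched sizes may not be reusable for a new request, so reserved memory can grow even while allocated memory stays below the device limit.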
---
Iβm asking this since I have trouble determining the peak memory requirement for a piece of code. My setup is as follows:
```python
for param in params:
torch.cuda.empty_cache()
torch.cuda.reset_peak_memory_stats()
try:
test_function(param)
except RuntimeError:
break
finally:
print(torch.cuda.memory_summary())
```
Here is the compressed output:
| `param` | failing | Allocated memory (Peak Usage) | GPU reserved memory (Peak Usage) |
| ------- | ------- | ----------------------------- | -------------------------------- |
| `1` | `False` | 15006 MB | 16726 MB |
| `2` | `False` | 17402 MB | 19354 MB |
| `3` | `False` | 19961 MB | 22184 MB |
| `4` | `True` | 20609 MB | 22454 MB |
(Note that for `param==4` the memory report was generated after the error was raised and thus does not reflect the actual memory usage for the whole of `test_function`)
The memory requirement is growing approx. quadratic with `param`. A quick extrapolation for the failing `param` (`param==4`) gives 22683 MB of allocated memory and 25210 MB of reserved memory. I have 24189 MB available and no other processes are running. Thus, if my understanding about allocated and reserved memory is correct, the case should not fail, but it does.
Can someone explain why this is not the case?
## Environment
```
PyTorch version: 1.5.1+cu101
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Ubuntu 18.04.4 LTS
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
CMake version: Could not collect
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 10.1.243
GPU models and configuration: GPU 0: TITAN RTX
Nvidia driver version: 418.87.00
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
Versions of relevant libraries:
[pip3] numpy==1.19.0
[pip3] torch==1.5.1+cu101
[pip3] torchvision==0.6.1+cu101
[conda] Could not collect
```
|
module: memory usage,triaged
|
low
|
Critical
|
651,113,749 |
rust
|
Inconsistent borrow check in async function
|
I tried this code:
```rust
use std::collections::HashMap;
fn f1(map: &mut HashMap<i32, String>, k: i32) -> &String {
if let Some(s) = map.get(&k) {
return s;
}
map.insert(k, k.to_string());
map.get(&k).unwrap()
}
async fn f2(map: &mut HashMap<i32, String>, k: i32) -> &String {
if let Some(s) = map.get(&k) {
return s;
}
map.insert(k, k.to_string());
map.get(&k).unwrap()
}
```
`f2` seems to be correct but cannot be compiled.
```
Compiling playground v0.0.1 (/playground)
error[E0502]: cannot borrow `*map` as mutable because it is also borrowed as immutable
--> src/lib.rs:15:5
|
12 | if let Some(s) = map.get(&k) {
| --- immutable borrow occurs here
13 | return s;
| - returning this value requires that `*map` is borrowed for `'1`
14 | }
15 | map.insert(k, k.to_string());
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ mutable borrow occurs here
16 | map.get(&k).unwrap()
17 | }
| - return type of generator is &'1 std::string::String
error: aborting due to previous error
For more information about this error, try `rustc --explain E0502`.
error: could not compile `playground`.
To learn more, run the command again with --verbose.
```
### Meta
According to the playground, the problem exists on `1.44.1` and `1.46.0-nightly (2020-07-04 0cd7ff7ddfb75a38dca8)`.
Other versions may be the same.
|
P-medium,A-borrow-checker,T-compiler,C-bug,NLL-polonius,fixed-by-polonius
|
low
|
Critical
|
651,140,258 |
transformers
|
TaBERT
|
# 🌟 New model addition
## Model description
A pre-trained language model for learning joint representations of natural language utterances and (semi-)structured tables for semantic parsing. TaBERT is pre-trained on a massive corpus of 26M Web tables and their associated natural language context, and can be used as a drop-in replacement for a semantic parser's original encoder to compute representations for utterances and table schemas (columns).
<!-- Important information -->
## Open source status
* [X] the model implementation is available: (give details)
https://github.com/facebookresearch/TaBERT
* [X] the model weights are available: (give details)
https://github.com/facebookresearch/TaBERT
* [ ] who are the authors: (mention them, if possible by @gh-username)
|
New model,Feature request
|
low
|
Major
|
651,166,842 |
youtube-dl
|
Please add support for StageIt <--streaming concerts
|
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2020.06.16.1. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that site you are requesting is not dedicated to copyright infringement, see https://yt-dl.org/copyright-infringement. youtube-dl does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
- Search the bugtracker for similar site support requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2020.06.16.1**
- [x] I've checked that all provided URLs are alive and playable in a browser*
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
<!--
Provide all kinds of example URLs support for which should be included. Replace following example URLs by yours.
-->
Main site: https://www.stageit.com/site/landing
*Sample URL: https://www.stageit.com/michael_mcdermott/the_american_in_me/84726
*Note: It's impossible for me to provide an "alive" URL, because all concerts are streamed live (no archived concerts).
Sample Request URL: https://uhsakamai-a.akamaihd.net/wdc04/wdc04-uhs-omega02/live/23889591/1593961198216/plain/uhs/6/chunk_1593961374_d112c4cd78.m4v
## Description
<!--
Provide any additional information.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
StageIt is a platform which enables musicians to sell tickets and receive tips to their livestream concerts.
On StageIt's front page, they proudly proclaim: "On StageIt, artists perform live online shows from their laptop that are never recorded or archived. That's right! Every StageIt show is a once-in-a-lifetime experience that's not to be missed."
Well, with your help, I would like to change this. I've been recording concerts my entire life. When a concert ends, I like to leave with a souvenir...a recording of the performance I just witnessed, so that I can relive it again and again.
It's free to set up a StageIt account. Most concerts are "pay what you can", so you can usually buy a ticket for $1, although $10 is the recommended price. StageIt uses "Notes" as currency: 10 Notes = $1.00 USD. When you buy Notes, you must buy a minimum of $5.00 worth, which you can use for concert tickets and/or tips.
Here's the URL from this morning's Michael McDermott concert, although obviously, as per their policy, this stream is no longer available:
https://www.stageit.com/michael_mcdermott/the_american_in_me/84726
I've attached some screenshots of my Developer Tools window. It's very easy to identify the a/v elements; in this case, they are streamed as separate .m4a and .m4v segments. For example:
chunk_1593961228_e1f584b7d3.m4a
chunk_1593961228_e1f584b7d3.m4v
chunk_1593961229_e1f584b7d3.m4a
chunk_1593961229_e1f584b7d3.m4v
chunk_1593961230_b0c4b201c0.m4a
chunk_1593961230_b0c4b201c0.m4v
chunk_1593961231_b0c4b201c0.m4a
chunk_1593961231_b0c4b201c0.m4v
Thanks for your help!
Cheers,
poochbeast57




|
site-support-request
|
low
|
Critical
|
651,186,469 |
PowerToys
|
[Run] Skin / theme / customization / size / transparency / opacity / overlay
|
Since I frequently use PT Run, I have found it a bit dull that only a color theme can be used to decorate Run.
Can you add an "add skin" function?
**Anticipated effect**
Allow users to set a background picture for their PT Run.
|
Idea-Enhancement,Product-PowerToys Run,Area-User Interface
|
medium
|
Critical
|
651,188,066 |
PowerToys
|
Combine Windows Snap with FancyZone? Add more border targets to the screen and link them to zones? Combined window resizing like in Snap?
|
Windows snap (move window to top screen for full screen, to the left for 50% etc.) is a fantastic feature with a great UX and actually even faster to use than FancyZones. I use both in combination.
The only limitation in Snap since it was introduced is that there are too few targets. 50% left/right and full screen is not enough. There is now also the 25% zone when you drag a window into a screen corner, but I don't find it too useful, and it is a bit limited on larger screens. I often want something like 1/3 or 2/3, or something centred with free space to the left and right. That is easy to do with FancyZones.
But the Snap UX feels great and even faster than FancyZones. I like FancyZones for accessing more complicated zone possibilities, but using Snap for your most common zones would be great and, for me, even more efficient.
My idea is to add more targets to the screen for window Snap, and give the possibility to map these targets to zones defined in FancyZone.
For example:
- Left corner is not 25% resize, but re-mapped to FancyZone e.g. zone 1.
- Also consider adding additional Snap targets on the screen border. For example, moving a window to the top edge at 0-20% from the left could be its own target, used for something other than full screen; full screen would then be, say, the top edge between 20% and 80%.
In addition: when you Snap two windows 50%/50% and resize one of them, Windows 10 resizes both windows at the same time. This is also a great feature. It would be great if this worked the same way for directly touching zones in FancyZones too; that way you would only need to resize one window instead of two.
This is just a rough idea. Especially whether and how to set up additional Snap targets needs some further thought. But this would feel like the ultimate solution for me.
|
Idea-Enhancement,FancyZones-Dragging&UI,Product-FancyZones
|
medium
|
Major
|
651,188,687 |
godot
|
Mesh and materials clear on imported obj at runtime and on project load
|
**Godot version:**
3.2.1
**OS/device including version:**
Windows 10 Pro Education
**Issue description:**
Materials seem to be being cleared at runtime and the mesh property of the MeshInstance is cleared during project load
**Steps to reproduce:**
- Open ViewportContainer3D and assign the ShapeA mesh to it
- Run the ViewportContainer3D and see that the material is reset to the original material (in this case null)
- Open the Remote tab and Try to assign a material to the MeshInstance's mesh during runtime
- See that it's set to the original material before you can view it
- Close Project and Reopen it
- See that the mesh is cleared
**Minimal reproduction project:**
[AstroWars3D.zip](https://github.com/godotengine/godot/files/4876321/AstroWars3D.zip)
|
bug,topic:core
|
low
|
Minor
|
651,191,646 |
storybook
|
Controls: JSX support
|
Currently React nodes, such as `children`, show a JSON object editor by default. They should show a text editor instead that supports full JSX editing.
|
feature request,react,addon: contexts
|
medium
|
Critical
|
651,191,753 |
storybook
|
Controls: React text node support
|
Currently React nodes, such as `children`, show a JSON object editor by default. They should show a text editor instead, in the short term. Long term, we should support JSX #11428
|
feature request,react,has workaround,addon: controls
|
medium
|
Critical
|
651,201,171 |
pytorch
|
Is this a bug? The values calculated according to the document aren't equal to the values calculated by the framework
|
Torch 1.5.0 CPU, Linux
Bug API: `torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max, eta_min=0, last_epoch=-1)`
<!-- A clear and concise description of what content in https://pytorch.org/docs is an issue. If this has to do with the general https://pytorch.org website, please file an issue at https://github.com/pytorch/pytorch.github.io/issues/new/choose instead. If this has to do with https://pytorch.org/tutorials, please file an issue at https://github.com/pytorch/tutorials/issues/new -->
https://github.com/pytorch/pytorch/blob/master/torch/optim/lr_scheduler.py#L448
If we set T_max = 5, the optimizer's initial learning rate to 0.5, and eta_min = 0:
The values calculated by the framework are as follows:

But the values calculated by the formula in the document are as follows:
https://pytorch.org/docs/stable/optim.html?highlight=cosineannealinglr#torch.optim.lr_scheduler.CosineAnnealingLR

These two don't match: in epoch 5, one is 0 but the other is 0.0954915028125. I want to know which one is right, and whether this is a bug.
thank you very much!
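For reference, the document's closed-form formula can be evaluated directly with the values above (initial lr 0.5, T_max = 5, eta_min = 0); it does give exactly 0 at epoch 5, so the divergence comes from the implementation side (the library uses a different, recursive update):

```python
import math

def closed_form_lr(t, lr0=0.5, eta_min=0.0, T_max=5):
    # Documented schedule:
    # eta_t = eta_min + (eta_0 - eta_min) * (1 + cos(pi * t / T_max)) / 2
    return eta_min + (lr0 - eta_min) * (1 + math.cos(math.pi * t / T_max)) / 2

for t in range(6):
    print(t, closed_form_lr(t))
```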
cc @vincentqb
|
module: optimizer,triaged
|
low
|
Critical
|
651,283,792 |
flutter
|
[tool_crash] ArgumentError: Invalid argument(s): Cannot find executable for /home/abhishek/flutter/bin/cache/dart-sdk/bin/pub.
|
## Command
```
flutter pub get
```
## Steps to Reproduce
1. flutter pub get (taking long time)
2. flutter upgrade (thrown error)
## Logs
ArgumentError: Invalid argument(s): Cannot find executable for /home/abhishek/flutter/bin/cache/dart-sdk/bin/pub.
```
#0 _getExecutable (package:process/src/interface/local_process_manager.dart:127:5)
#1 LocalProcessManager.start (package:process/src/interface/local_process_manager.dart:43:30)
#2 _DefaultProcessUtils.start (package:flutter_tools/src/base/process.dart:466:28)
#3 _DefaultProcessUtils.stream (package:flutter_tools/src/base/process.dart:484:35)
#4 _DefaultPub.batch (package:flutter_tools/src/dart/pub.dart:281:34)
<asynchronous suspension>
#5 _DefaultPub.get (package:flutter_tools/src/dart/pub.dart:209:15)
#6 PackagesGetCommand._runPubGet (package:flutter_tools/src/commands/packages.dart:94:17)
#7 PackagesGetCommand.runCommand (package:flutter_tools/src/commands/packages.dart:125:11)
#8 FlutterCommand.verifyThenRunCommand (package:flutter_tools/src/runner/flutter_command.dart:860:18)
#9 _rootRunUnary (dart:async/zone.dart:1198:47)
#10 _CustomZone.runUnary (dart:async/zone.dart:1100:19)
#11 _FutureListener.handleValue (dart:async/future_impl.dart:143:18)
#12 Future._propagateToListeners.handleValueCallback (dart:async/future_impl.dart:696:45)
#13 Future._propagateToListeners (dart:async/future_impl.dart:725:32)
#14 Future._completeWithValue (dart:async/future_impl.dart:529:5)
#15 Future._asyncCompleteWithValue.<anonymous closure> (dart:async/future_impl.dart:567:7)
#16 _rootRun (dart:async/zone.dart:1190:13)
#17 _CustomZone.run (dart:async/zone.dart:1093:19)
#18 _CustomZone.runGuarded (dart:async/zone.dart:997:7)
#19 _CustomZone.bindCallbackGuarded.<anonymous closure> (dart:async/zone.dart:1037:23)
#20 _microtaskLoop (dart:async/schedule_microtask.dart:41:21)
#21 _startMicrotaskLoop (dart:async/schedule_microtask.dart:50:5)
#22 _runPendingImmediateCallback (dart:isolate-patch/isolate_patch.dart:118:13)
#23 _Timer._runTimers (dart:isolate-patch/timer_impl.dart:404:11)
```
```
[✓] Flutter (Channel master, 1.20.0-3.0.pre.126, on Linux, locale en_IN)
• Flutter version 1.20.0-3.0.pre.126 at /home/abhishek/flutter
• Framework revision 462b0ea76e (11 hours ago), 2020-07-05 14:58:01 -0400
• Engine revision f8bbcc396b
• Dart version 2.9.0 (build 2.9.0-20.0.dev 8afe9875a6)
[✓] Android toolchain - develop for Android devices (Android SDK version 30.0.0)
• Android SDK at /home/abhishek/Android/Sdk
• Platform android-30, build-tools 30.0.0
• Java binary at: /home/abhishek/Downloads/android-studio-ide-191.5900203-linux/android-studio/jre/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405)
• All Android licenses accepted.
[✓] Android Studio (version 3.5)
• Android Studio at /home/abhishek/Downloads/android-studio-ide-191.5900203-linux/android-studio
• Flutter plugin version 44.0.1
• Dart plugin version 191.8593
• Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405)
[✓] Connected device (1 available)
• CPH1909 (mobile) • EUHQSOYLY9V8KFMV • android-arm64 • Android 8.1.0 (API 27)
• No issues found!
```
## Flutter Application Metadata
**Type**: app
**Version**: 1.0.0+1
**Material**: true
**Android X**: false
**Module**: false
**Plugin**: false
**Android package**: null
**iOS bundle identifier**: null
**Creation channel**: stable
**Creation framework version**: f139b11009aeb8ed2a3a3aa8b0066e482709dde3
|
c: crash,tool,P2,team-tool,triaged-tool
|
low
|
Critical
|
651,298,858 |
rust
|
using inner tool attributes in crate root induces compiler error on intra-crate macro use
|
Take a crate that has the following `foo.rs`, which contains a macro, `foomacro!()`:
```rust
#[macro_export]
macro_rules! foomacro {
($f:expr) => ({
println!("foomacro: {}", $f);
});
}
```
This macro is used in `bar.rs`:
```rust
use crate::foomacro;
pub fn bar() {
foomacro!("bar");
}
```
Finally, here is the root of the crate and `main.rs`:
```rust
#![feature(custom_inner_attributes)]
// Setting any inner tool attribute here causes a compiler error on bar's use of
// foomacro!():
//
// error: macro-expanded `macro_export` macros from the current crate cannot
// be referred to by absolute paths
//
// To see this, uncomment either of the following lines:
//
// #![clippy::cyclomatic_complexity = "100"]
// #![rustfmt::skip]
mod foo;
mod bar;
fn main() {
foomacro!("foo");
bar::bar();
}
```
As the comment there indicates, as written, this compiles and runs as expected:
```
$ cargo run
Compiling modmac v0.1.0 (/home/bmc/modmac)
Finished dev [unoptimized + debuginfo] target(s) in 0.18s
Running `target/debug/modmac`
foomacro: foo
foomacro: bar
```
If, however, either of the inner tool attributes is uncommented, the code fails to compile `bar.rs`, complaining about its use of `foomacro!()`:
```
$ cargo run
Compiling modmac v0.1.0 (/home/bmc/modmac)
error: macro-expanded `macro_export` macros from the current crate cannot be referred to by absolute paths
--> src/bar.rs:2:5
|
2 | use crate::foomacro;
| ^^^^^^^^^^^^^^^
|
= note: `#[deny(macro_expanded_macro_exports_accessed_by_absolute_paths)]` on by default
= warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release!
= note: for more information, see issue #52234 <https://github.com/rust-lang/rust/issues/52234>
note: the macro is defined here
--> src/foo.rs:2:1
|
2 | / macro_rules! foomacro {
3 | | ($f:expr) => ({
4 | | println!("foomacro: {}", $f);
5 | | });
6 | | }
| |_^
error: aborting due to previous error
error: could not compile `modmac`.
```
This behavior seems surprising, especially because tool attributes are generally thought to only be relevant to the specified tool:
> When a tool is not in use, the tool's attributes are accepted without a warning. When the tool is in use, the tool is responsible for processing and interpretation of its attributes.
Thanks in advance for any consideration of this issue -- and apologies if this is an elaborate form of pilot error!
|
A-attributes,A-resolve,A-macros,T-compiler,C-bug
|
low
|
Critical
|
651,345,569 |
godot
|
AnimationTree in inherited scene does not work
|
<!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:**
<!-- Specify commit hash if using non-official build. -->
V3.2.2.stable.official (opened via steam)
**OS/device including version:**
<!-- Specify GPU model, drivers, and the backend (GLES2, GLES3, Vulkan) if graphics-related. -->
GLES2. Have tried this on GLES3, too.
**Issue description:**
<!-- What happened, and what was expected. -->
When I played my project that contains a scene that inherits from another scene which has an animation tree, that animation tree will not play any animation when called.
**Specifically**: an inherited scene's AnimationTree parameter/state/playback seems to keep switching to a blank AnimationTreeStateMachine. This is shown visually by a reload icon appearing next to that property, and, as a result of the newly created AnimationTreeStateMachine, the AnimationTree cannot change the animation state when called.
**Steps to reproduce:**
1. Create new scene (let it be called Base)
2. Add AnimationPlayer and AnimationTree and fill the appropriate details (such as adding a new AnimationTreeStateMachine to its Tree Root).
3. Add another Node for the AnimationPlayer to interact with.
4. Attach script to AnimationTree to call for its state change.
5. Create New Inherited Scene from Base (let it be called Inherited)
6. Add button to have AnimationTree change state when pressed.
7. Play Scene (Inherited)
**Minimal reproduction project:**
<!-- A small Godot project which reproduces the issue. Drag and drop a zip archive to upload it. -->
[inherited_animtree.zip](https://github.com/godotengine/godot/files/4877619/inherited_animtree.zip)
Note: There is no Main Scene; Base and/or Inherited have to be opened from the FileSystem dock.
Note2: I have added print statements in AnimationTree script to show that the function is called, but doesn't change state.
|
bug,topic:core,topic:animation
|
low
|
Major
|
651,389,115 |
youtube-dl
|
please add support for tv.lonelyplanet.com
|
## Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I'm running youtube-dl version **2020.06.16.1**
- [X] I've checked that all provided URLs are alive and playable in a browser
- [X] I've checked that none of provided URLs violate any copyrights
- [X] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
- Single video: https://tv.lonelyplanet.com/play/#/tv/837/season/17/episode/1/1.-building-england:-part-2
- Single video: https://tv.lonelyplanet.com/tv/planet-food/4/#episode-1
- Playlist: https://tv.lonelyplanet.com/tv/globe-trekker/17/
## Description
tv.lonelyplanet.com lets you watch copies of their own show (Lonely Planet/Globe Trekker) and some other travel related programmes.
Some of the shows may be watched just by registering an email address, others require an account and payment but all the links given above can be watched just by registering your email address to create a free account.
|
site-support-request
|
low
|
Critical
|
651,404,788 |
TypeScript
|
Bundling typescript using webpack: the request of a dependency is an expression (+possible fix)
|
**TypeScript Version:** 3.9.6 and 4.0.0-dev.20200706
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:**
webpack request of a dependency expression
**Code**
```ts
import ts from 'typescript';
console.log(ts)
```
**Expected behavior:**
Bundle successfully, and without warnings, using webpack.
**Actual behavior:**
Bundles successfully, but a warning is shown:
```
WARNING in ./node_modules/typescript/lib/typescript.js 5710:41-60
Critical dependency: the request of a dependency is an expression
```
**Playground Link:** <!-- A link to a TypeScript Playground "Share" link which demonstrates this behavior -->
N/A
**Related Issues:** <!-- Did you find other bugs that looked similar? -->
Couldn't find any that talked about this issue
**Suggested fix:**
The warning is printed due to the following dynamic require call:
```ts
require: function (baseDir, moduleName) {
try {
var modulePath = ts.resolveJSModule(moduleName, baseDir, nodeSystem);
return { module: require(modulePath), modulePath: modulePath, error: undefined };
}
catch (error) {
return { module: undefined, modulePath: undefined, error: error };
}
}
```
Rather than calling require directly, create a helper function:
```ts
function requireModule(requestingModule, specifier) {
return requestingModule.require(specifier)
}
```
And call it the following way:
```ts
return { module: requireModule(module, modulePath), modulePath: modulePath, error: undefined };
```
The resulting code will behave 1:1 in Node, while not triggering any warnings in bundlers trying to resolve these dynamic calls.
|
Suggestion,Experience Enhancement
|
medium
|
Critical
|
651,434,177 |
rust
|
Parameter usage is not properly propagated through trait aliases
|
It seems to me that both `S1` and `S2` should compile:
```rust
#![feature(trait_alias)]
// Works
struct S1<'a, T: 'a, I: Iterator<Item = &'a T> + Clone> {
field: I,
}
trait SpecialIterator<'a, T: 'a> = Iterator<Item = &'a T> + Clone;
// Doesn't work
struct S2<'a, T: 'a, I: SpecialIterator<'a, T>> {
field: I,
}
```
[Playground](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=27cb8b2de9ddc65a62f440e1b907ec39). On `1.46.0-nightly (2020-07-05 2753fab7ce3647033146)`.
|
T-compiler,C-bug,F-trait_alias,requires-nightly
|
low
|
Critical
|
651,446,134 |
pytorch
|
Simplify Adam Optimizer
|
<!-- A clear and concise description of the feature proposal -->
At the moment in the Adam optimizer the exponential moving averages are decoupled from the bias correction, as per the original paper. However, it is possible to combine these operations into a single update step.
An unbiased exponential moving average with parameter `b` can be computed using the following update rule:
`b_t = 1-(1-b)/(1-b^t)`
`u_t = b_t * u_(t-1) + (1-b_t)*x_t`
This eliminates the separate bias-correction operation for the second-moment estimate, so it should be slightly faster and use less memory, and it would be a simple change.
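Not from the proposal itself, but as a quick sanity check of the algebra: the combined update with `b_t = 1 - (1-b)/(1-b^t)` can be compared numerically against the decoupled "biased EMA, then divide by `(1-b^t)`" form, and the two agree to floating-point precision:

```python
def ema_then_correct(xs, b=0.9):
    # Current Adam-style form: biased EMA followed by a separate
    # bias-correction divide at each step.
    m, out = 0.0, []
    for t, x in enumerate(xs, start=1):
        m = b * m + (1 - b) * x
        out.append(m / (1 - b ** t))
    return out

def single_step(xs, b=0.9):
    # Proposed form: fold the correction into a time-varying decay b_t.
    u, out = 0.0, []
    for t, x in enumerate(xs, start=1):
        b_t = 1 - (1 - b) / (1 - b ** t)
        u = b_t * u + (1 - b_t) * x
        out.append(u)
    return out

xs = [1.0, 2.0, -0.5, 3.0]
assert all(abs(a - c) < 1e-12
           for a, c in zip(ema_then_correct(xs), single_step(xs)))
```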
I'm happy to implement this.
cc @vincentqb
|
module: optimizer,triaged,enhancement
|
low
|
Minor
|
651,455,433 |
TypeScript
|
Object member completions are offered from all union constituents even after union has been discriminated
|
<!-- 🚨 STOP 🚨 STOP 🚨 STOP 🚨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Even if you think you've found a *bug*, please read the FAQ first, especially the Common "Bugs" That Aren't Bugs section!
Please help us by doing the following steps before logging an issue:
* Search: https://github.com/Microsoft/TypeScript/search?type=Issues
* Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ
Please fill in the *entire* template below.
-->
<!--
Please try to reproduce the issue with the latest published version. It may have already been fixed.
For npm: `typescript@next`
This is also the 'Nightly' version in the playground: http://www.typescriptlang.org/play/?ts=Nightly
-->
**TypeScript Version:** 3.9.5 (probably more)
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:**
**Code**
```ts
type Choice = {
kind: number, x: 42
} | {
kind: undefined, y: 42
}
const c: Choice = {kind: undefined, X_OR_Y: 42} // X_OR_Y is a placeholder
```
**Expected behavior:**
When you use autocompletion to replace `X_OR_Y` by `x` or `y`, the editor should suggest only `y`:
<img width="305" alt="Screenshot 2020-07-06 at 13 08 01" src="https://user-images.githubusercontent.com/1295054/86587535-152ec480-bf8a-11ea-96fa-23212c5e40a1.png">
**Actual behavior:**
The editor suggests both `x` and `y`:
<img width="315" alt="Screenshot 2020-07-06 at 13 11 32" src="https://user-images.githubusercontent.com/1295054/86587624-40b1af00-bf8a-11ea-9e70-b97e0cef4b8c.png">
Of course, if you choose `x`, the compiler will then complain about the wrongly typed assignment.
**Playground Link:** <!-- A link to a TypeScript Playground "Share" link which demonstrates this behavior -->
[Playground](https://www.typescriptlang.org/play?#code/C4TwDgpgBAwgFgewJYGNoF4oG8BQV9QDWSAdgCYBcUJArgLYBGEATgDRQAeVALAEw4BfKAB9seAsXJUa5CADNSEMuxA9+AnDhQISAZ2BQUVeMjRRMWSZSgyy8xcqgANAPoB5AEouAmmoFA)
**Related Issues:** <!-- Did you find other bugs that looked similar? -->
|
Bug,Domain: Completion Lists
|
low
|
Critical
|
651,466,881 |
youtube-dl
|
Site support request: livesets.online
|
## Checklist
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2020.06.16.1**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
<!--
Provide all kinds of example URLs support for which should be included. Replace following example URLs by yours.
-->
- Single audio: https://www.livesets.online/markus_schulz_presents_-_global_dj_broadcast_(29_may_2014).shtml (has a link to the site below)
- Single audio: https://www.livesets.online/stream/markus_schulz_presents_-_global_dj_broadcast_(29_may_2014).html (plays the stream, even if takes 10-20s to load)
- Playlist: n/a
## Description
I believe this could be fairly easy. The first site links to the second; the second contains
<script>start_set('https://stream.livesets.stream/playlist/1b82e26f1e7430d285343d966b0a2349/markus_schulz_presents_-_global_dj_broadcast_(29_may_2014)/0/stream.m3u8','256kbps cbr - markus schulz presents global dj broadcast 29 may 2014')</script>
and
youtube-dl https://stream.livesets.stream/playlist/1b82e26f1e7430d285343d966b0a2349/markus_schulz_presents_-_global_dj_broadcast_(29_may_2014)/0/stream.m3u8
works fine already.
The main tasks would be finding the playlist URL and extracting a proper file name from it.
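A quick sketch of those two tasks (not an actual youtube-dl extractor, just illustrating the idea against the `start_set()` snippet quoted above):

```python
import re

# The page embeds the HLS URL as the first argument of start_set().
html = ("<script>start_set('https://stream.livesets.stream/playlist/"
        "1b82e26f1e7430d285343d966b0a2349/"
        "markus_schulz_presents_-_global_dj_broadcast_(29_may_2014)/0/stream.m3u8',"
        "'256kbps cbr - markus schulz presents global dj broadcast "
        "29 may 2014')</script>")

# Task 1: find the playlist URL.
m = re.search(r"start_set\('([^']+\.m3u8)'", html)
url = m.group(1)

# Task 2: derive a title; the path segment two levels above
# stream.m3u8 carries the set name.
title = url.rsplit("/", 3)[-3].replace("_", " ")
print(title)
```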
|
site-support-request
|
low
|
Critical
|
651,485,858 |
node
|
readable[Symbol.asyncIterator]().next() on stream causes immediate exit (no error)
|
* **v14.5.0**:
* **Darwin helmholtz 19.5.0 Darwin Kernel Version 19.5.0: Tue May 26 20:41:44 PDT 2020; root:xnu-6153.121.2~2/RELEASE_X86_64 x86_64**:
* **Subsystem**:
### What steps will reproduce the bug?
When `next()` is called in the program below, nothing else gets run. list.txt and list2.txt are just two-line files: `line1\nline2` and `line3\nline4`.
This repl.it shows the issue.
https://repl.it/repls/FrightenedPastLogin#index.js
```
const fs = require("fs");
const readline = require("readline");
async function main() {
const linereader = readline.createInterface({
input: fs.createReadStream("./list.txt"),
});
for await (const s of fs.createReadStream("./list2.txt")){}
console.log(await linereader[Symbol.asyncIterator]().next())
// nothing below here gets run
console.log("=============");
for await (let s of linereader) {
console.log(s);
}
console.log("=============");
return "test";
}
main()
.then((t) => console.log(t))
.catch((e) => console.log(e));
```
### How often does it reproduce? Is there a required condition?
Always
### What is the expected behavior?
The rest of the program should execute. The following should print
```
node test.js
{ value: 'line1', done: false }
=============
line2
=============
test
```
### What do you see instead?
Nothing:
```
node test.js
```
Possibly related to #33792 #34035?
|
readline,stream
|
low
|
Critical
|
651,506,399 |
opencv
|
still get "FFMPEG: tag 0x00000021/'!???' is not found" error after built from source in pyhon3.7 Ubuntu16.04.
|
##### System information (version)
<!-- Example
- OpenCV => 4.2
- Operating System / Platform => Windows 64 Bit
- Compiler => Visual Studio 2017
-->
- OpenCV => 4.4.0-pre
- Operating System / Platform => ubuntu 16.04:
- Compiler => gcc?(not sure)
##### Detailed description
Guys, I still get the "FFMPEG: tag 0x00000021/'!???' is not found" error after building from source with Python 3.7 on Ubuntu 16.04.
$ python3.7
Python 3.7.8 (default, Jun 29 2020, 05:46:05)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
>>> cv2.__version__
'4.4.0-pre'
>>> cv2.VideoWriter("test.mp4",0x21,10,(10,10))
OpenCV: FFMPEG: tag 0x00000021/'!???' is not found (format 'mp4 / MP4 (MPEG-4 Part 14)')'
<VideoWriter 0x7f6c63947a70>
this is my cmake script:
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D WITH_FFMPEG=ON \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules \
-D OPENCV_SKIP_PYTHON_LOADER=ON \
-D PYTHON3_EXECUTABLE=/usr/bin/python3.7 \
-D PYTHON_INCLUDE_DIR=/usr/include/python3.7 \
-D PYTHON_INCLUDE_DIR2=/usr/include/x86_64-linux-gnu/python3.7m \
-D PYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.7m.so \
-D PYTHON3_NUMPY_INCLUDE_DIRS=/usr/local/lib/python3.7/dist-packages/numpy/core/include/ \
..
and I have verified the ffmpeg on my machine is workable with h264(x264):
$ffmpeg -i input.mp4 -vcodec libx264 -f mp4 x264-output.mp4
ffmpeg version 4.3-2~16.04.york1 Copyright (c) 2000-2020 the FFmpeg developers
built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.12) 20160609
#....
[aac @ 0x5601ecc5d640] Qavg: 62966.328
$ffmpeg -codecs | grep x264
ffmpeg version 4.3-2~16.04.york1 Copyright (c) 2000-2020 the FFmpeg developers
built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.12) 20160609
configuration: --prefix=/usr --extra-version='2~16.04.york1' --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-pocketsphinx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
libavutil 56. 51.100 / 56. 51.100
libavcodec 58. 91.100 / 58. 91.100
libavformat 58. 45.100 / 58. 45.100
libavdevice 58. 10.100 / 58. 10.100
libavfilter 7. 85.100 / 7. 85.100
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 7.100 / 5. 7.100
libswresample 3. 7.100 / 3. 7.100
libpostproc 55. 7.100 / 55. 7.100
DEV.LS h264 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (decoders: h264 h264_v4l2m2m ) (encoders: libx264 libx264rgb h264_omx h264_v4l2m2m h264_vaapi )
On the first build, my ffmpeg had not been installed with x264, so I installed x264 and rebuilt OpenCV; maybe that somehow caused the problem?
Maybe it's because my OpenCV somehow failed to link against my ffmpeg, but I don't know how to check that or how to fix it.
I would appreciate any ideas.
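A side note on the tag itself (my reading, not a confirmed diagnosis): 0x21 is simply the ASCII code for '!', which is why FFMPEG prints "'!???'". The mp4 muxer generally expects a full four-character code such as 'mp4v'; the packing that `cv2.VideoWriter_fourcc` performs can be reproduced with plain Python (no cv2 needed), using a hypothetical helper:

```python
def fourcc(c1, c2, c3, c4):
    # Same little-endian packing as cv2.VideoWriter_fourcc:
    # first character goes in the lowest byte.
    return ord(c1) | (ord(c2) << 8) | (ord(c3) << 16) | (ord(c4) << 24)

print(hex(fourcc(*"mp4v")))  # a commonly recognized MPEG-4 tag
print(hex(ord("!")))         # 0x21, the tag from the error message
```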
This is my cv2.getBuildInformation() output:
>>> print(cv2.getBuildInformation())
General configuration for OpenCV 4.4.0-pre =====================================
Version control: 4.3.0-533-g992c908
Extra modules:
Location (extra): /home/wangxinping/Documents/workspace/opencv/opencv_contrib/modules
Version control (extra): 4.3.0-78-g4683455
Platform:
Timestamp: 2020-07-06T11:13:58Z
Host: Linux 4.15.0-64-generic x86_64
CMake: 3.5.1
CMake generator: Unix Makefiles
CMake build tool: /usr/bin/make
Configuration: RELEASE
CPU/HW features:
Baseline: SSE SSE2 SSE3
requested: SSE3
Dispatched code generation: SSE4_1 SSE4_2 FP16 AVX AVX2 AVX512_SKX
requested: SSE4_1 SSE4_2 AVX FP16 AVX2 AVX512_SKX
SSE4_1 (17 files): + SSSE3 SSE4_1
SSE4_2 (2 files): + SSSE3 SSE4_1 POPCNT SSE4_2
FP16 (1 files): + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 AVX
AVX (5 files): + SSSE3 SSE4_1 POPCNT SSE4_2 AVX
AVX2 (31 files): + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 FMA3 AVX AVX2
AVX512_SKX (7 files): + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 FMA3 AVX AVX2 AVX_512F AVX512_COMMON AVX512_SKX
C/C++:
Built as dynamic libs?: YES
C++ standard: 11
C++ Compiler: /usr/bin/c++ (ver 5.4.0)
C++ flags (Release): -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Winit-self -Wsuggest-override -Wno-delete-non-virtual-dtor -Wno-comment -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -fvisibility-inlines-hidden -O3 -DNDEBUG -DNDEBUG
C++ flags (Debug): -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Winit-self -Wsuggest-override -Wno-delete-non-virtual-dtor -Wno-comment -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -fvisibility-inlines-hidden -g -O0 -DDEBUG -D_DEBUG
C Compiler: /usr/bin/cc
C flags (Release): -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Winit-self -Wno-comment -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -DNDEBUG -DNDEBUG
C flags (Debug): -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Winit-self -Wno-comment -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -g -O0 -DDEBUG -D_DEBUG
Linker flags (Release): -Wl,--exclude-libs,libippicv.a -Wl,--exclude-libs,libippiw.a -Wl,--gc-sections -Wl,--as-needed
Linker flags (Debug): -Wl,--exclude-libs,libippicv.a -Wl,--exclude-libs,libippiw.a -Wl,--gc-sections -Wl,--as-needed
ccache: NO
Precompiled headers: NO
Extra dependencies: dl m pthread rt
3rdparty dependencies:
OpenCV modules:
To be built: aruco bgsegm bioinspired calib3d ccalib core datasets dnn dnn_objdetect dnn_superres dpm face features2d flann freetype fuzzy gapi hfs highgui img_hash imgcodecs imgproc intensity_transform line_descriptor ml objdetect optflow phase_unwrapping photo plot python3 quality rapid reg rgbd saliency shape stereo stitching structured_light superres surface_matching text tracking ts video videoio videostab xfeatures2d ximgproc xobjdetect xphoto
Disabled: world
Disabled by dependency: -
Unavailable: alphamat cnn_3dobj cudaarithm cudabgsegm cudacodec cudafeatures2d cudafilters cudaimgproc cudalegacy cudaobjdetect cudaoptflow cudastereo cudawarping cudev cvv hdf java js julia matlab ovis python2 sfm viz
Applications: tests perf_tests apps
Documentation: NO
Non-free algorithms: NO
GUI:
GTK+: YES (ver 2.24.30)
GThread : YES (ver 2.48.2)
GtkGlExt: NO
VTK support: NO
Media I/O:
ZLib: /usr/lib/x86_64-linux-gnu/libz.so (ver 1.2.8)
JPEG: /usr/lib/x86_64-linux-gnu/libjpeg.so (ver 80)
WEBP: build (ver encoder: 0x020f)
PNG: /usr/lib/x86_64-linux-gnu/libpng.so (ver 1.2.54)
TIFF: /usr/lib/x86_64-linux-gnu/libtiff.so (ver 42 / 4.0.6)
JPEG 2000: /usr/lib/x86_64-linux-gnu/libjasper.so (ver 1.900.1)
OpenEXR: /usr/lib/x86_64-linux-gnu/libImath.so /usr/lib/x86_64-linux-gnu/libIlmImf.so /usr/lib/x86_64-linux-gnu/libIex.so /usr/lib/x86_64-linux-gnu/libHalf.so /usr/lib/x86_64-linux-gnu/libIlmThread.so (ver 2_2)
HDR: YES
SUNRASTER: YES
PXM: YES
PFM: YES
Video I/O:
DC1394: YES (2.2.4)
FFMPEG: YES
avcodec: YES (58.91.100)
avformat: YES (58.45.100)
avutil: YES (56.51.100)
swscale: YES (5.7.100)
avresample: NO
GStreamer: NO
v4l/v4l2: YES (linux/videodev2.h)
Parallel framework: pthreads
Trace: YES (with Intel ITT)
Other third-party libraries:
Intel IPP: 2020.0.0 Gold [2020.0.0]
at: /home/wangxinping/Documents/workspace/opencv/opencv/build/3rdparty/ippicv/ippicv_lnx/icv
Intel IPP IW: sources (2020.0.0)
at: /home/wangxinping/Documents/workspace/opencv/opencv/build/3rdparty/ippicv/ippicv_lnx/iw
Lapack: NO
Eigen: NO
Custom HAL: NO
Protobuf: build (3.5.1)
OpenCL: YES (no extra features)
Include path: /home/wangxinping/Documents/workspace/opencv/opencv/3rdparty/include/opencl/1.2
Link libraries: Dynamic load
Python 3:
Interpreter: /usr/bin/python3.7 (ver 3.7.8)
Libraries: /usr/lib/x86_64-linux-gnu/libpython3.7m.so (ver 3.7.8)
numpy: /usr/local/lib/python3.7/dist-packages/numpy/core/include (ver 1.18.5)
install path: lib/python3.7/dist-packages
Python (for build): /usr/bin/python2.7
Java:
ant: NO
JNI: NO
Java wrappers: NO
Java tests: NO
Install to: /usr/local
----------------------------------------------------------------
##### Steps to reproduce
I can't try to reproduce it elsewhere, since I only have one machine.
##### Issue submission checklist
- [x] I report the issue, it's not a question
- [x] I checked the problem with documentation, FAQ, open issues,
answers.opencv.org, Stack Overflow, etc. and have not found a solution
- [x] I updated to latest OpenCV version and the issue is still there
- [x] There is reproducer code and related data files: videos, images, onnx, etc
|
invalid,question (invalid tracker)
|
low
|
Critical
|
651,521,358 |
vscode
|
Allow DAP progress notifications to be shown more prominently than the status bar
|
Extracting this from https://github.com/microsoft/vscode/issues/101405#issuecomment-652358374 since it's not really the same issue:
I changed Dart/Flutter over to using DAP progress notifications (from previous custom messages that showed full notifications for progress). I've had a few comments suggesting the status bar notifications may not be obvious enough, requesting that the full notifications be brought back.
I think progress messages tend to fall into two fairly distinct categories:
- things like "Analyzing...", where we want to show something is happening, but it's not directly related to what the user is doing
- things like hot reload/launching/building - something specifically started by the user and that they are waiting for to complete before doing anything else
The second type, I think, could be better shown as more obvious notifications (it's something I'd like myself - sometimes when I press F5 to launch my VS Code extension code I wonder why nothing is happening, since the only indication is "Building..." in the status bar - I'd rather have an expanded notification).
Requests for this:
- https://github.com/Dart-Code/Dart-Code/issues/2597#issuecomment-652204865
- https://github.com/Dart-Code/Dart-Code/issues/2601#issuecomment-653356813
- https://github.com/flutter/flutter/issues/60439 / https://github.com/Dart-Code/Dart-Code/issues/2628
|
feature-request,debug
|
medium
|
Major
|
651,541,978 |
flutter
|
ReorderableListView animate insertion/removal of items
|
## Use case
## Proposal
|
c: new feature,framework,f: material design,P3,team-design,triaged-design
|
low
|
Critical
|
651,543,841 |
rust
|
LLDB shows a non-existent variable at O1
|
LLDB shows a variable 'res' (when stepping at line 12) that does not exist in the code.
```bash
$ cat -n a.rs
1 // run-pass
2 use std::ptr;
3 use std::rc::Rc;
4 use std::sync::Arc;
5
6 fn main() {
7 let p: *const u8 = ptr::null();
8 let rc = Rc::new(1usize);
9 let arc = Arc::new(1usize);
10 let b = Box::new("hi");
11
12 let _ = format!("{:p}{:p}{:p}",
13 rc, arc, b);
14
15 if cfg!(target_pointer_width = "32") {
16 assert_eq!(format!("{:#p}", p),
17 "0x00000000");
18 } else {
19 assert_eq!(format!("{:#p}", p),
20 "0x0000000000000000");
21 }
22 assert_eq!(format!("{:p}", p),
23 "0x0");
24 }
$ rustc --version
rustc 1.46.0-nightly (3503f565e 2020-07-02)
$ lldb -v
lldb version 11.0.0
clang revision ee26a31e7b02e124d71091d47f2ae624774e5e0a
llvm revision ee26a31e7b02e124d71091d47f2ae624774e5e0a
$ rustc -g -C opt-level=1 -o opt a.rs
$ lldb opt
(lldb) target create "opt"
Current executable set to 'opt' (x86_64).
(lldb) b -l 12
Breakpoint 1: 2 locations.
(lldb) r
Process 62 launched: 'opt' (x86_64)
Process 62 stopped
* thread #1, name = 'opt', stop reason = breakpoint 1.1
frame #0: 0x00005555555590f6 opt`a::main::h1122ef72fcd4218b at a.rs:12:13
9 let arc = Arc::new(1usize);
10 let b = Box::new("hi");
11
-> 12 let _ = format!("{:p}{:p}{:p}",
13 rc, arc, b);
14
15 if cfg!(target_pointer_width = "32") {
(lldb) frame var
(unsigned char *) p = <empty constant data>
(alloc::rc::Rc<unsigned long>) rc = <variable not available>
(alloc::sync::Arc<unsigned long>) arc = <variable not available>
(&str *) b = 0x000055555578fa80
(lldb) c
Process 62 resuming
Process 62 stopped
* thread #1, name = 'opt', stop reason = breakpoint 1.2
frame #0: 0x000055555555918f opt`a::main::h1122ef72fcd4218b at a.rs:12:13
9 let arc = Arc::new(1usize);
10 let b = Box::new("hi");
11
-> 12 let _ = format!("{:p}{:p}{:p}",
13 rc, arc, b);
14
15 if cfg!(target_pointer_width = "32") {
(lldb) frame var
(unsigned char *) p = <empty constant data>
(alloc::rc::Rc<unsigned long>) rc = <variable not available>
(alloc::sync::Arc<unsigned long>) arc = <variable not available>
(&str *) b = <variable not available>
(alloc::string::String) res = {
vec = {
buf = {
ptr = (pointer = "0x55555578fa500x55555578fa700x55555578fa80\U00000002", _marker = core::marker::PhantomData<unsigned char> @ 0x00007fffffffe290)
```
|
A-debuginfo,E-needs-test,A-macros,T-compiler
|
low
|
Minor
|
651,624,649 |
TypeScript
|
Write typescript as modules, but output js without modules
|
## Search Terms
Searched the whole internet for anything like it; only webpack- and browserify-related 'solutions' are proposed.
## Suggestion
Split the module options in tsconfig so the module syntax used in TypeScript sources can be configured separately from the module format of the emitted JS.
## Use Cases
Our use case:
- We have a huge codebase, and we want to move to TypeScript gradually.
- We have a sophisticated build system that we don't want to change - we want TypeScript to transpile to JS and let us keep using that JS exactly as before.
- We don't need nor want to use modules in the JS. We never used modules - we have our own system of imports, minification, versioning, caching, etc.
- The only reason we need to use modules in TypeScript is tsc's incremental watch build. With no modules, it rebuilds the complete codebase on each change. We want to rebuild only the changed file and its dependents.

So currently we have two bad options - don't use modules (unacceptably slow build times) or use modules (get output that we don't want). Currently we 'mitigate' the issue by removing parts of the JS tsc has output (removing the stuff we recognize as related to exports and require), but that's just terrible.
Even worse, even if we were to use modules, require/exports calls are inlined even in output we want to inline into index.html as critical JS...
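For what it's worth, the post-processing described above (stripping the require/exports wrapper lines from tsc's CommonJS output) can be sketched roughly like this - the regexes are my own guesses at the emitted patterns, not an exhaustive list:

```python
import re

# Patterns for CommonJS wrapper lines tsc emits that a no-modules
# build doesn't want (hypothetical, not exhaustive).
_MODULE_BOILERPLATE = [
    re.compile(r'^"use strict";$'),
    re.compile(r'^Object\.defineProperty\(exports, "__esModule", .*\);$'),
    re.compile(r'^exports\.\w+ = .*;$'),
    re.compile(r'^(?:var|const) \w+ = require\(.*\);$'),
]

def strip_module_wrappers(js: str) -> str:
    """Drop require/exports boilerplate lines from tsc's CommonJS output."""
    kept = [
        line for line in js.splitlines()
        if not any(p.match(line.strip()) for p in _MODULE_BOILERPLATE)
    ]
    return "\n".join(kept)

emitted = "\n".join([
    '"use strict";',
    'Object.defineProperty(exports, "__esModule", { value: true });',
    'function add(a, b) { return a + b; }',
    'exports.add = add;',
])
print(strip_module_wrappers(emitted))  # only the add() function survives
```

This illustrates why the workaround is fragile: any change in tsc's emit (helper shims, `__exportStar`, etc.) silently breaks the filter, which is exactly why a first-class "modules in, no modules out" option would help.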
## Examples
I guess I explained it all above.
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
|
Suggestion,Awaiting More Feedback
|
medium
|
Major
|
651,637,141 |
flutter
|
Google_maps_flutter KML support
|
## Use case
I have an app where I want to load data for approximately 1000+ streets (polylines) to draw in initState, but it takes too long to load and blocks the UI when I call setState to place the polylines on the map. I think that if I could load a KML instead, the computation would be less expensive, so my app would load the lines (or any other data) faster. Alongside this I have 2 other apps in which I wanted to use KML, though they don't have as much data to load on the map.
There is currently no plugin/package on pub.dev that provides a facility for loading a KML onto the map.
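For context on the cost argument: a KML polyline is just a named list of lon,lat coordinates, so a native layer can consume it without building thousands of widget-side Polyline objects. The sketch below (the sample document and helper are mine, not part of google_maps_flutter) shows what such a layer would be parsing:

```python
import xml.etree.ElementTree as ET

KML_NS = {"kml": "http://www.opengis.net/kml/2.2"}

# Minimal KML document with one street polyline (hypothetical sample data).
sample_kml = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Main Street</name>
    <LineString>
      <coordinates>
        -122.0,37.0,0 -122.1,37.1,0 -122.2,37.2,0
      </coordinates>
    </LineString>
  </Placemark>
</kml>"""

def parse_polylines(kml_text):
    """Extract (name, [(lon, lat), ...]) pairs from KML LineStrings."""
    root = ET.fromstring(kml_text)
    lines = []
    for pm in root.iter("{http://www.opengis.net/kml/2.2}Placemark"):
        name = pm.findtext("kml:name", namespaces=KML_NS)
        coords = pm.findtext(".//kml:LineString/kml:coordinates",
                             namespaces=KML_NS)
        points = [tuple(map(float, c.split(",")[:2]))
                  for c in coords.split()]
        lines.append((name, points))
    return lines

print(parse_polylines(sample_kml))
```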
## Proposal
So, if the google_maps_flutter package could support adding a KML layer to the map, it would help many developers who are waiting and asking for this feature. There was already a feature request in https://github.com/flutter/flutter/issues/33563, but there was apparently no response from the Flutter team and the author closed it; perhaps at that time nobody was concerned about this feature, but the Flutter community is growing and many developers are now looking for an official feature from google_maps_flutter. I will be waiting for your response.
|
c: new feature,p: maps,package,c: proposal,team-ecosystem,P3,triaged-ecosystem
|
medium
|
Major
|