id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
578,045,699 |
go
|
cmd/go: 'go mod why' should return module results even without '-m'
|
<!--
Please answer these questions before submitting your issue. Thanks!
For questions please use one of our forums: https://github.com/golang/go/wiki/Questions
-->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.14 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes.
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/meling/Library/Caches/go-build"
GOENV="/Users/meling/Library/Application Support/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOINSECURE=""
GONOPROXY=""
GONOSUMDB=""
GOOS="darwin"
GOPATH="/Users/meling/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/Cellar/go/1.14/libexec"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.14/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
AR="ar"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD="/Users/meling/work/gorums/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/xd/1g4dygzx1_g3thyggq8qllh40000gn/T/go-build624263632=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
<!--
If possible, provide a recipe for reproducing the error.
A complete runnable program is good.
A link on play.golang.org is best.
-->
Ran `go mod why` without the `-m` flag when the argument was a module instead of a package, leading to the message:
```
% go mod why github.com/labstack/echo-contrib
# github.com/labstack/echo-contrib
(main module does not need package github.com/labstack/echo-contrib)
```
This is unhelpful and has bitten me many times now. How am I supposed to know whether an entry in my go.mod file is a package or a module? This is confusing to me and probably to others.
### What did you expect to see?
I expected to see which package or module was using the relevant module (or package):
```
% go mod why -m github.com/labstack/echo-contrib
# github.com/labstack/echo-contrib
github.com/autograde/aguis/web
github.com/labstack/echo-contrib/session
```
### Proposal
I propose that the `go mod why` command should return a result either way. If the tool finds that the main module does not depend on the supplied package, it should check whether it depends on a corresponding module instead, obviating the need for the `-m` flag.
|
NeedsFix,modules
|
low
|
Critical
|
578,109,290 |
nvm
|
Error EACCES -13 when using npm globally
|
#### Operating system and version:
Ubuntu 18.04.4 (WSL)
#### `nvm debug` output:
<details>
<!-- do not delete the following blank line -->
```sh
$ nvm debug
nvm --version: v0.35.3
$SHELL: /usr/bin/zsh
$SHLVL: 1
${HOME}: /home/mithic
${NVM_DIR}: '${HOME}/.nvm'
${PATH}: ${NVM_DIR}/versions/node/v12.16.1/bin:${HOME}/.local/texlive/2019/bin/x86_64-linux:${HOME}/.local/bin:${HOME}/bin:/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/mnt/c/Program Files/Python38/Scripts/:/mnt/c/Program Files/Python38/:/mnt/c/Program Files (x86)/Common Files/Oracle/Java/javapath:/mnt/c/WINDOWS/system32:/mnt/c/WINDOWS:/mnt/c/WINDOWS/System32/Wbem:/mnt/c/WINDOWS/System32/WindowsPowerShell/v1.0:/mnt/c/WINDOWS/System32/OpenSSH:/mnt/c/Program Files/nodejs:/mnt/c/Program Files/AMD/StoreMI/ECmd:/mnt/c/Program Files (x86)/LilyPond/usr/bin:/mnt/c/Users/rpc01/AppData/Local/Programs/Python/Python37-32:/mnt/c/Program Files (x86)/texlive/2019/bin/win32:/mnt/c/ProgramData/chocolatey/bin:/mnt/c/Hunspell/src/tools:/mnt/c/WINDOWS/System32/WindowsPowerShell/v1.0/:/mnt/c/WINDOWS/System32/OpenSSH/:/mnt/c/Users/rpc01/.windows-build-tools/python27/:/mnt/c/Users/rpc01/AppData/Local/Microsoft/WindowsApps:/mnt/c/Users/rpc01/AppData/Local/Microsoft/WindowsApps:/mnt/c/Users/rpc01/AppData/Local/Programs/Microsoft VS Code/bin
$PREFIX: ''
${NPM_CONFIG_PREFIX}: ''
$NVM_NODEJS_ORG_MIRROR: ''
$NVM_IOJS_ORG_MIRROR: ''
shell version: 'zsh 5.4.2 (x86_64-ubuntu-linux-gnu)'
uname -a: 'Linux 4.4.0-19041-Microsoft #1-Microsoft Fri Dec 06 14:06:00 PST 2019 x86_64 x86_64 x86_64 GNU/Linux'
OS version: Ubuntu 18.04.4 LTS
curl: /usr/bin/curl, curl 7.58.0 (x86_64-pc-linux-gnu) libcurl/7.58.0 OpenSSL/1.1.1 zlib/1.2.11 libidn2/2.0.4 libpsl/0.19.1 (+libidn2/2.0.4) nghttp2/1.30.0 librtmp/2.3
wget: /usr/bin/wget, GNU Wget 1.19.4 built on linux-gnu.
git: /usr/bin/git, git version 2.17.1
ls: cannot access 'grep:': No such file or directory
grep: grep: aliased to grep --color (grep --color), grep (GNU grep) 3.1
awk: /usr/bin/awk, GNU Awk 4.1.4, API: 1.1 (GNU MPFR 4.0.1, GNU MP 6.1.2)
sed: /bin/sed, sed (GNU sed) 4.4
cut: /usr/bin/cut, cut (GNU coreutils) 8.28
basename: /usr/bin/basename, basename (GNU coreutils) 8.28
ls: cannot access 'rm:': No such file or directory
rm: rm: aliased to rm -i (rm -i), rm (GNU coreutils) 8.28
ls: cannot access 'mkdir:': No such file or directory
mkdir: mkdir: aliased to nocorrect mkdir (nocorrect mkdir), mkdir (GNU coreutils) 8.28
xargs: /usr/bin/xargs, xargs (GNU findutils) 4.7.0-git
nvm current: v12.16.1
which node: ${NVM_DIR}/versions/node/v12.16.1/bin/node
which iojs: iojs not found
which npm: ${NVM_DIR}/versions/node/v12.16.1/bin/npm
npm config get prefix: ${NVM_DIR}/versions/node/v12.16.1
npm root -g: ${NVM_DIR}/versions/node/v12.16.1/lib/node_modules
```
</details>
#### `nvm ls` output:
<details>
<!-- do not delete the following blank line -->
```sh
$ nvm ls
-> v12.16.1
default -> lts/* (-> v12.16.1)
node -> stable (-> v12.16.1) (default)
stable -> 12.16 (-> v12.16.1) (default)
iojs -> N/A (default)
unstable -> N/A (default)
lts/* -> lts/erbium (-> v12.16.1)
lts/argon -> v4.9.1 (-> N/A)
lts/boron -> v6.17.1 (-> N/A)
lts/carbon -> v8.17.0 (-> N/A)
lts/dubnium -> v10.19.0 (-> N/A)
lts/erbium -> v12.16.1
```
</details>
#### How did you install `nvm`?
<!-- (e.g. install script in readme, Homebrew) -->
Install and update script from readme:
`wget -qO- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.3/install.sh | bash`
#### What steps did you perform?
I only ran `npm -g upgrade`
#### What happened?
I got the EACCES -13 error:
<details>
```sh
$ npm -g upgrade
npm ERR! code EACCES
npm ERR! syscall rename
npm ERR! path /home/mithic/.nvm/versions/node/v12.16.1/lib/node_modules/.staging/npm-d9b9c5ef/node_modules/yargs
npm ERR! dest /home/mithic/.nvm/versions/node/v12.16.1/lib/node_modules/.staging/yargs-b6502a51
npm ERR! errno -13
npm ERR! Error: EACCES: permission denied, rename '/home/mithic/.nvm/versions/node/v12.16.1/lib/node_modules/.staging/npm-d9b9c5ef/node_modules/yargs' -> '/home/mithic/.nvm/versions/node/v12.16.1/lib/node_modules/.staging/yargs-b6502a51'
npm ERR! [OperationalError: EACCES: permission denied, rename '/home/mithic/.nvm/versions/node/v12.16.1/lib/node_modules/.staging/npm-d9b9c5ef/node_modules/yargs' -> '/home/mithic/.nvm/versions/node/v12.16.1/lib/node_modules/.staging/yargs-b6502a51'] {
npm ERR! cause: [Error: EACCES: permission denied, rename '/home/mithic/.nvm/versions/node/v12.16.1/lib/node_modules/.staging/npm-d9b9c5ef/node_modules/yargs' -> '/home/mithic/.nvm/versions/node/v12.16.1/lib/node_modules/.staging/yargs-b6502a51'] {
npm ERR! errno: -13,
npm ERR! code: 'EACCES',
npm ERR! syscall: 'rename',
npm ERR! path: '/home/mithic/.nvm/versions/node/v12.16.1/lib/node_modules/.staging/npm-d9b9c5ef/node_modules/yargs',
npm ERR! dest: '/home/mithic/.nvm/versions/node/v12.16.1/lib/node_modules/.staging/yargs-b6502a51'
npm ERR! },
npm ERR! stack: "Error: EACCES: permission denied, rename '/home/mithic/.nvm/versions/node/v12.16.1/lib/node_modules/.staging/npm-d9b9c5ef/node_modules/yargs' -> '/home/mithic/.nvm/versions/node/v12.16.1/lib/node_modules/.staging/yargs-b6502a51'",
npm ERR! errno: -13,
npm ERR! code: 'EACCES',
npm ERR! syscall: 'rename',
npm ERR! path: '/home/mithic/.nvm/versions/node/v12.16.1/lib/node_modules/.staging/npm-d9b9c5ef/node_modules/yargs',
npm ERR! dest: '/home/mithic/.nvm/versions/node/v12.16.1/lib/node_modules/.staging/yargs-b6502a51'
npm ERR! }
npm ERR!
npm ERR! The operation was rejected by your operating system.
npm ERR! It is likely you do not have the permissions to access this file as the current user
npm ERR!
npm ERR! If you believe this might be a permissions issue, please double-check the
npm ERR! permissions of the file and its containing directories, or try running
npm ERR! the command again as root/Administrator.
npm ERR! A complete log of this run can be found in:
npm ERR! /home/mithic/.npm/_logs/2020-03-09T18_06_31_294Z-debug.log
```
</details>
#### What did you expect to happen?
I expected it to upgrade any packages that might be out of date, and not throw any errors, especially given that the reason I began using nvm was to solve this very issue (as npm docs explain [here](https://docs.npmjs.com/resolving-eacces-permissions-errors-when-installing-packages-globally)).
#### Is there anything in any of your profile files that modifies the `PATH`?
<!-- (e.g. `.bashrc`, `.bash_profile`, `.zshrc`, etc) -->
Yes:
`export PATH="/home/mithic/.local/texlive/2019/bin/x86_64-linux:$HOME/.local/bin:$HOME/bin:/usr/local/bin:$PATH"`
Additionally, since I am using WSL it also uses the PATH from Windows (the full PATH can be seen in the `nvm debug` output above). I do not believe this to be part of the issue, as all of the directories seem to be the correct ones.
|
OS: windows,needs followup
|
low
|
Critical
|
578,145,666 |
TypeScript
|
Incorrect codegen and error detection for static property used as computed key in instance property of the same class
|
<!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Even if you think you've found a *bug*, please read the FAQ first, especially the Common "Bugs" That Aren't Bugs section!
Please help us by doing the following steps before logging an issue:
* Search: https://github.com/Microsoft/TypeScript/search?type=Issues
* Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ
Please fill in the *entire* template below.
-->
<!--
Please try to reproduce the issue with the latest published version. It may have already been fixed.
For npm: `typescript@next`
This is also the 'Nightly' version in the playground: http://www.typescriptlang.org/play/?ts=Nightly
-->
**TypeScript Version:** 3.8.3
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:** static class properties, computed property access
TypeScript is inconsistent in its handling of static properties used as keys.
**Code**
```ts
class Some {
static readonly prop = Symbol();
[Some.prop] = 22; // compile time error
}
const s = class Some1 {
static readonly prop = Symbol();
[Some1.prop] = 22; // runtime error
};
```
**Expected behavior:**
Both examples work like the following JavaScript:
```js
// javascript
const s = Symbol();
class Some {
static get prop() { return s; };
constructor() {
this[Some.prop] = 22;
}
}
```
**Actual behavior:**
The first example yields the compile-time error `TS2449: Class 'Some' used before its declaration`.
The second example results in emitted code that will throw at runtime:
```js
"use strict";
var _a, _b;
const s = (_b = class Some {
constructor() {
this[_a] = 22;
}
},
_a = Some.prop,
_b.prop = Symbol(),
_b);
```
**Playground Links:** <!-- A link to a TypeScript Playground "Share" link which demonstrates this behavior -->
[Playground Link 1](https://www.typescriptlang.org/play/?ssl=1&ssc=11&pln=1&pc=1#code/MYGwhgzhAEDKD2BbAptA3gKGt6EAuYeAlsNAE7JgAm8AdiAJ7QAOZ8z0AvHA4gEbwQACgCUAbgxYcAbQQoAdK3YBdLtABM6iQF8MQA)
[Playground Link 2](https://www.typescriptlang.org/play/#code/MYewdgzgLgBBMF4bADYEMLwMogLYFMYBvAKBnLijSgEtgYAnfNAE3BQE8YAHBkbxDCwdcAIxAoAFAEoA3CTIUA2jgIA6XvwC6ggEy75AXxJA)
**Relevant**
[proposal for class fields](https://github.com/tc39/proposal-class-fields)
|
Bug
|
low
|
Critical
|
578,160,659 |
flutter
|
Consider adding Performance overlay to platform-web
|
**Note:** I'm not sure how this would work or how much work this entails. But it came up as a topic from conversations with @liyuqian, and we noticed there is no bug tracking it.
## Use case
* Performance optimization: Running a Flutter app on the web and seeing some skipped frames, I'd like to reuse my knowledge of Flutter performance tools to start addressing the issue.
* Education: Writing docs about performance, I'd like to give the users two versions of an app in [DartPad](https://dartpad.dartlang.org/). Users are able to bring up the performance overlay to see how one version of the app performs much faster than the other.
## Proposal
* Enable the performance overlay for Flutter web apps.
For clarity, this is what I mean by performance overlay.

It is [an engine layer](https://github.com/flutter/engine/blob/master/flow/layers/performance_overlay_layer.h) added to a `--profile` (or `--debug`) version of an app by the engine. It shows the build times for the UI thread (a.k.a. main thread, app thread, etc.) and the GPU thread (a.k.a. raster thread -- i.e. not actually running on the GPU).
|
c: new feature,framework,platform-web,c: proposal,P2,team-web,triaged-web
|
low
|
Critical
|
578,187,259 |
angular
|
Using slot API with Angular Elements throws an error
|
# 🐞 bug report
### Affected Package
angular/elements
### Description
I am making a container component in Angular 9 to be turned into a web component.
The container component is first turned into a Web Component; once that is done, ANY children can be passed into it and manipulated. I have code that wraps the `<ng-content>`, but once it is turned into a web component the children are not wrapped.
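For reference, here is a minimal sketch of the kind of wrapping container described above (hypothetical names, not the actual SmartContainer source):
```ts
import { Component } from '@angular/core';

// Hypothetical container component: projected children are expected to end up
// inside the wrapping div, both when used as a regular Angular component and
// when packaged as a custom element via @angular/elements.
@Component({
  selector: 'app-smart-container',
  template: `
    <div class="smart-container-wrapper">
      <ng-content></ng-content>
    </div>
  `,
})
export class SmartContainerComponent {}
```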
## 🔬 Minimal Reproduction
First clone https://github.com/alevyKorio/SmartContainer.git
then run `npm install`
then `npm run build:ngelement`
go into `elements` folder and the container has become a web component `SmartContainer.js`
copy that file
clone https://github.com/alevyKorio/SmartContainerShell.git
paste file into root directory of SmartContainerShell folder
run `npm install`
run `npm run serve`
## 🔥 Exception or Error
As you can see in this image

The `div`s are siblings to the `app-smart-container` instead of children
## 🌍 Your Environment
**Angular Version:**
<pre><code>
<!-- run `ng version` and paste output below -->
<!-- ✍️-->
Angular CLI: 9.0.5
Node: 10.16.2
OS: win32 x64
Angular: 9.0.5
... animations, cli, common, compiler, compiler-cli, core, forms
... language-service, platform-browser, platform-browser-dynamic
... router
Ivy Workspace: Yes
Package Version
-----------------------------------------------------------
@angular-devkit/architect 0.900.5
@angular-devkit/build-angular 0.900.5
@angular-devkit/build-optimizer 0.900.5
@angular-devkit/build-webpack 0.900.5
@angular-devkit/core 9.0.5
@angular-devkit/schematics 9.0.5
@ngtools/webpack 9.0.5
@schematics/angular 9.0.5
@schematics/update 0.900.5
rxjs 6.5.4
typescript 3.7.5
webpack 4.41.2
</code></pre>
**Anything else relevant?**
Once changes are made to SmartContainer, run `npm run build:ngelement` again,
go into the `elements` folder, copy the new `SmartContainer.js` to the shell, and restart the shell.
|
type: bug/fix,area: elements,state: confirmed,state: needs more investigation,P4
|
low
|
Critical
|
578,189,795 |
flutter
|
Add check for existence of system requirement command line tools
|
Check for the presence of all required executables as soon as possible (before artifacts are downloaded) in the tool and give a good error message. `flutter doctor` makes sense but the validators may be run too late to catch `git` or `zip` exceptions.
https://flutter.dev/docs/get-started/install/windows#system-requirements
- Windows PowerShell 5.0 or newer (this is pre-installed with Windows 10)
- Git for Windows 2.x, with the Use Git from the Windows Command Prompt option.
https://flutter.dev/docs/get-started/install/macos#system-requirements
- bash
- curl
- git 2.x
- mkdir
- rm
- unzip
- which
- zip
https://flutter.dev/docs/get-started/install/linux#system-requirements
(above list + xz-utils)
|
tool,t: flutter doctor,P3,team-tool,triaged-tool
|
low
|
Critical
|
578,190,383 |
storybook
|
storySort isn't finding all stories in the project.
|
**Describe the bug**
To control the order of my stories I have the following storySort function:
```js
storySort: (a, b) => {
  // Control root level sort order.
  const sort = [
    'Library/Base Components',
    'Library/Abstractions',
    'Library/Modules',
    'Library/Transient',
    'Library/Page Specific',
    'Library/Deprecated',
    'Library/Uncategorized',
    'Pages',
  ];
  const sortObj = {};
  sort.forEach(function(a, i) {
    sortObj[a] = i + 1;
  });
  const aSplit = a[1].kind.split('/');
  const bSplit = b[1].kind.split('/');
  if (aSplit && bSplit) {
    return (
      sortObj[`${aSplit[0]}/${aSplit[1]}`] -
      sortObj[`${bSplit[0]}/${bSplit[1]}`]
    );
  }
  return a - b;
},
```
I have story files in two locations, e.g. `src/components/SomeComponent/SomeComponent.stories.js` and `src/pages/SomePage/SomePage.stories.js`.
My `main.js` includes stories from these locations with `stories: ['../src/**/*.stories.([tj]s|mdx)'],`. All components appear in Storybook; however, the stories in `src/pages` don't get ordered and aren't available in `storySort`. If I `console.log(a[1].kind === 'Pages/SomePage')` I never get true. If I do the same for any of the stories that live in `src/components`, e.g. `console.log(a[1].kind === 'Library/SomeComponent')`, I always get true.
**To Reproduce**
Repeat what I've outlined above.
**Expected behavior**
Using my sort function above, I'd expect stories with a title starting with `Pages` to appear after `Library`.
**Screenshots**

**System:**
```
Environment Info:
System:
OS: macOS Mojave 10.14.1
CPU: (8) x64 Intel(R) Core(TM) i7-8559U CPU @ 2.70GHz
Binaries:
Node: 12.14.1 - ~/.nvm/versions/node/v12.14.1/bin/node
Yarn: 1.15.2 - /usr/local/bin/yarn
npm: 6.13.4 - ~/.nvm/versions/node/v12.14.1/bin/npm
Browsers:
Chrome: 80.0.3987.132
Firefox: 72.0.2
Safari: 12.0.1
npmPackages:
@storybook/addon-backgrounds: ^5.3.13 => 5.3.13
@storybook/addon-docs: ^6.0.0-alpha.9 => 6.0.0-alpha.12
@storybook/addon-knobs: ^5.3.12 => 5.3.13
@storybook/addon-storysource: ^5.3.13 => 5.3.13
@storybook/preset-typescript: ^1.2.0 => 1.2.0
@storybook/source-loader: ^5.3.13 => 5.3.13
@storybook/vue: ^5.3.12 => 5.3.13
```
|
question / support,core
|
low
|
Critical
|
578,201,800 |
rust
|
Detect introduction of deadlock by using `if lock.read()`/`else lock.write()` in the same expression
|
A coworker came across the following:
```rust
use std::sync::RwLock;
use std::collections::HashMap;
fn foo() {
    let lock: RwLock<HashMap<u32, String>> = RwLock::new(HashMap::new());
    let test = if let Some(item) = lock.read().unwrap().get(&5) {
        println!("its in there");
        item.clone()
    } else {
        lock.write().unwrap().entry(5).or_insert(" eggs".to_string());
        println!("ok we put it there");
        " eggs".to_string()
    };
    println!("There were {}", test);
}

fn main() {
    foo();
}
```
This code compiles, but it will deadlock: the temporary returned by `lock.read().unwrap()` in the `if let` is kept alive until the end of the whole `if`/`else` expression, so the read lock is still held when `lock.write()` is called in the `else` branch. Miri actually catches this, but it would be nice to have a lint against this kind of usage, given that it is a latent foot-gun.
|
C-enhancement,A-lints,T-lang,C-feature-request
|
low
|
Minor
|
578,202,524 |
create-react-app
|
When Ejecting - should not use scripts folder
|
I already have a `scripts` folder in my project; when I ran `npm run eject`, it complained about a clash with the existing `scripts` folder. It might be a good idea to move the ejected scripts to a folder with a slightly less common name.
|
issue: proposal,needs triage
|
low
|
Minor
|
578,203,856 |
flutter
|
flutter clean times out on Runner.xcworkspace clean
|
<!-- Thank you for using Flutter!
If you are looking for support, please check out our documentation
or consider asking a question on Stack Overflow:
* https://flutter.dev/
* https://api.flutter.dev/
* https://stackoverflow.com/questions/tagged/flutter?sort=frequent
If you have found a bug or if our documentation doesn't have an answer
to what you're looking for, then fill our the template below. Please read
our guide to filing a bug first: https://flutter.dev/docs/resources/bug-reports
-->
## Steps to Reproduce
<!-- You must include full steps to reproduce so that we can reproduce the problem. -->
0. find a mac os machine
1. install flutter
2. `flutter test`
3. `flutter clean`
4. see it times out (after running for 5 hours)
Please see this run for more details:
https://github.com/tianhaoz95/photochat/runs/494113600?check_suite_focus=true#step:8:9
**Expected results:** <!-- what did you want to see? -->
It should clean up the build in less than 5 minutes.
**Actual results:** <!-- what did you see? -->
It gets stuck and never returns.
|
tool,platform-mac,P2,team-tool,triaged-tool
|
low
|
Critical
|
578,207,346 |
rust
|
Compiler selects invalid `--lldb-python` path
|
<!--
Thank you for finding an Internal Compiler Error! 🧊 If possible, try to provide
a minimal verifiable example. You can read "Rust Bug Minimization Patterns" for
how to create smaller examples.
http://blog.pnkfx.org/blog/2019/11/18/rust-bug-minimization-patterns/
-->
For my mac (macOS Mojave), the `--lldb-python` path is being set to `/usr/bin/python3`, which doesn't exist.
Furthermore, `/usr/bin` is immutable, so I cannot simply symlink to `/usr/local/bin/python3`, the result of `which python3`.
This is causing the `debuginfo` test suite to fail.
### Code
```
./x.py -i test src/test/debuginfo
```
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
rust version:
Top of `master`, specifically:
https://github.com/rust-lang/rust/commit/3dbade652ed8ebac70f903e01f51cd92c4e4302c
### Error output
```
running 115 tests
iFFFFFFFiFFFFFiFFFFFFFFFFFiFFFFiFFiiFFiFFiFFiiFiFFFFFFFFFFFFFiFiFiiFFFFFFFFiFFFFFiFiiFFFF.iFFFFiFFiF 100/115
FFFFiiFiFFFFFFF
failures:
---- [debuginfo-lldb] debuginfo/basic-types-globals-metadata.rs stdout ----
NOTE: compiletest thinks it is using LLDB version 1100
NOTE: compiletest thinks it is using LLDB without native rust support
error: Failed to setup Python process for LLDB script: No such file or directory (os error 2)
[ERROR compiletest::runtest] fatal error, panic: "Failed to setup Python process for LLDB script: No such file or directory (os error 2)"
thread 'main' panicked at 'fatal error', src/tools/compiletest/src/runtest.rs:2133:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
// …
```
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary><strong>Backtrace</strong></summary>
<p>
```
running 115 tests
iFFFFFFFiFFFFFiFFFFFFFFFFFiFFFFiFFiiFFiFFiFFiiFiFFFFFFFFFFFFFiFiFiiFFFFFFFFiFFFFFiFiiFFFF.iFFFFiFFiF 100/115
FFFFiiFiFFFFFFF
failures:
---- [debuginfo-lldb] debuginfo/basic-types-globals-metadata.rs stdout ----
NOTE: compiletest thinks it is using LLDB version 1100
NOTE: compiletest thinks it is using LLDB without native rust support
error: Failed to setup Python process for LLDB script: No such file or directory (os error 2)
[ERROR compiletest::runtest] fatal error, panic: "Failed to setup Python process for LLDB script: No such file or directory (os error 2)"
thread 'main' panicked at 'fatal error', src/tools/compiletest/src/runtest.rs:2133:9
stack backtrace:
0: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
1: core::fmt::write
2: std::io::Write::write_fmt
3: std::io::impls::<impl std::io::Write for alloc::boxed::Box<W>>::write_fmt
4: std::sys_common::backtrace::print
5: std::panicking::default_hook::{{closure}}
6: std::panicking::default_hook
7: std::panicking::rust_panic_with_hook
8: std::panicking::begin_panic
9: compiletest::runtest::TestCx::fatal
10: compiletest::runtest::TestCx::cmd2procres
11: compiletest::runtest::TestCx::run_revision
12: compiletest::runtest::run
13: core::ops::function::FnOnce::call_once{{vtable.shim}}
14: <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once
15: <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once
16: __rust_maybe_catch_panic
17: std::panicking::try
18: test::run_test_in_process
19: test::run_test::run_test_inner
20: test::run_test
21: test::run_tests
22: test::console::run_tests_console
23: compiletest::main
24: std::rt::lang_start::{{closure}}
25: std::panicking::try::do_call
26: __rust_maybe_catch_panic
27: std::panicking::try
28: std::rt::lang_start_internal
29: main
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
```
</p>
</details>
|
A-testsuite,A-debuginfo,T-compiler,T-bootstrap,C-bug,A-compiletest
|
low
|
Critical
|
578,252,732 |
pytorch
|
Expose chunk_sizes for DataParallel
|
## 🚀 Feature
<!-- A clear and concise description of the feature proposal -->
Allow users to add custom chunk_sizes when running DataParallel.
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->
I am using the SparseConvNet package (https://github.com/facebookresearch/SparseConvNet), where tensors cannot be split evenly because each sample may have different sizes.
## Pitch
<!-- A clear and concise description of what you want to happen. -->
Add a new if statement in `torch.nn.parallel.scatter_gather.scatter.scatter_map` to allow a custom scatter function to handle chunk_sizes if desired:
```python
def scatter_map(obj):
    if isinstance(obj, torch.Tensor):
        return Scatter.apply(target_gpus, None, dim, obj)
    if hasattr(obj, 'scatter'):
        return obj.scatter(target_gpus, dim=dim)
    if isinstance(obj, tuple) and len(obj) > 0:
        return list(zip(*map(scatter_map, obj)))
    if isinstance(obj, list) and len(obj) > 0:
        return list(map(list, zip(*map(scatter_map, obj))))
    if isinstance(obj, dict) and len(obj) > 0:
        return list(map(type(obj), zip(*map(scatter_map, obj.items()))))
    return [obj for targets in target_gpus]
```
https://github.com/pytorch/pytorch/blob/f62a0060972d594cc1c4ab99d44267373eee4ec6/torch/nn/parallel/scatter_gather.py#L11
And the same for gather/gather_map.
## Alternatives
<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
|
module: nn,triaged,enhancement,module: data parallel
|
low
|
Minor
|
578,265,139 |
flutter
|
Add an example project to the new package template
|
All Flutter packages should include an example project that demonstrates how to use the package.
Typically developers create a Flutter project under `/example` for this purpose.
Flutter should automatically create a new Flutter project under `/example` when creating a new Flutter package using the Flutter tool.
By adding this subproject, more developers are likely to implement an example project, and it alleviates the need for every package developer to manually create this subproject for every new package.
|
tool,P3,team-tool,triaged-tool
|
low
|
Major
|
578,342,473 |
vscode
|
Welcome -> Interface Overview Screen - Title Text for the first two features (Search & File explorer) Needs to change/swap
|
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Also please test using the latest insiders build to make sure your issue has not already been fixed: https://code.visualstudio.com/insiders/ -->
Interface Overview screen: the title text for the first two features (Search & File Explorer) needs to be changed/swapped.
<!-- Use Help > Report Issue to prefill these. -->
- VSCode Version: Visual Studio Code - Version: 1.41.1
- OS Version: Mac OS
Steps to Reproduce:
1. Initial Welcome screen (Help Menu -> Welcome)
2. Click on Interface Overview from Right Side Learn Section
<!-- Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes. It's a common issue and occurs every time.
<img width="1440" alt="vscode-interface-overview-screen" src="https://user-images.githubusercontent.com/41334766/76281734-f78f6e00-62bb-11ea-9ae6-0e64aab10aa7.png">
|
bug,workbench-welcome
|
low
|
Minor
|
578,436,730 |
TypeScript
|
Type definitions overshadowing
|
<!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Even if you think you've found a *bug*, please read the FAQ first, especially the Common "Bugs" That Aren't Bugs section!
Please help us by doing the following steps before logging an issue:
* Search: https://github.com/Microsoft/TypeScript/search?type=Issues
* Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ
Please fill in the *entire* template below.
-->
<!--
Please try to reproduce the issue with the latest published version. It may have already been fixed.
For npm: `typescript@next`
This is also the 'Nightly' version in the playground: http://www.typescriptlang.org/play/?ts=Nightly
-->
**TypeScript Version:** 3.8.3
(also tried 3.9.0-dev.20200310)
<!-- Search terms you tried before logging this (so others can find this issue more easily) -->
**Search Terms:**
"Module '"immutable"' has no exported member"
**Code**
_(not self contained)_
```ts
import { hash } from "immutable";
export function test(): number {
return hash(1);
}
```
**Actual behavior:**
```
src/index.ts:1:10 - error TS2305: Module '"immutable"' has no exported member 'hash'.
1 import { hash } from "immutable";
~~~~
```
even though
`grep "export function hash" node_modules/immutable/dist/immutable.d.ts`
reveals
`export function hash(value: any): number;`
This happens because the compiler picks up the type definitions in
`node_modules/@types/draft-js/node_modules/immutable/dist/immutable.d.ts`
instead of
`node_modules/immutable/dist/immutable.d.ts`.
**Demo:**
Sadly I did not find any way to put this in a playground; in any case:
https://gitlab.com/rawieo/issue_demo_1
**Related Issues:**
- https://github.com/facebook/create-react-app/issues/8578
- https://github.com/immutable-js/immutable-js/issues/1502 (probably)
|
Bug
|
low
|
Critical
|
578,440,025 |
rust
|
todo! and unimplemented! do not work with impl Trait return types
|
This code
```rust
trait SomeTrait {
    fn some_func();
}
fn todo_impl_trait() -> impl SomeTrait { todo!() }
```
does not compile because of
``the trait `SomeTrait` is not implemented for ()``.
But such code
```rust
trait SomeTrait {
    fn some_func();
}
fn todo_impl_trait<T: SomeTrait>() -> T { todo!() }
```
compiles correctly. Can this be resolved so that `todo!()` can be used with `impl Trait` return types?
|
C-enhancement,A-diagnostics,T-compiler,A-impl-trait
|
medium
|
Critical
|
578,459,689 |
vscode
|
SCM - Provide multiple ScmResourceGroup in menu commands
|
Hi,
When implementing a menu on a ResourceGroup in the Source Control view, there is always only one group in the command args, even if the user selects multiple ResourceGroups and performs a common menu action there.
```ts
// groups length is always 1 even when multiple group selected
commands.registerCommand("commandId", (...groups: SourceControlResourceGroup[]) => {
```
The expected behavior is to receive the list of selected SourceControlResourceGroups in the command args, just like when selecting multiple SourceControlResources and executing a common menu action:
```ts
// resources length is greater than 1 when there is multiple resources selected in SourceControl View
commands.registerCommand("commandId", (...resources: SourceControlResourceState[]) => {
```
Regards.
- VSCode Version: 1.42.1
- OS Version: win10
|
help wanted,feature-request,scm
|
low
|
Minor
|
578,496,930 |
angular
|
Dynamic FormControl binding is broken
|
<!--🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅
Oh hi there! 😄
To expedite issue processing please search open and closed issues before submitting a new one.
Existing issues often contain information about workarounds, resolution, or progress updates.
🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅-->
# 🐞 bug report
### Affected Package
<!-- Can you pin-point one or more @angular/* packages as the source of the bug? -->
<!-- ✍️edit: --> The issue is caused by package @angular/forms
### Is this a regression?
<!-- Did this behavior use to work in the previous version? -->
<!-- ✍️--> No, none that I'm aware of.
### Description
<!-- ✍️--> We have a setup in our project where we bind different form controls to a single input. Additionally, we handle blur events to set a formatted value on a form control. The binding is done via a function call that determines which form control to bind the input to. The problem is that when I change the value the function depends on, the form-control binding is kept, as if it were cached or memoized.
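As a rough illustration of the pattern described above (hypothetical names and simplified code, not our actual project):
```ts
import { Component } from '@angular/core';
import { FormControl } from '@angular/forms';

// Hypothetical component: a single input is bound to whichever control
// getControl() returns, and blur writes a formatted value back to it.
// (ReactiveFormsModule is assumed to be imported in the enclosing module.)
@Component({
  selector: 'app-dynamic-control',
  template: `<input [formControl]="getControl()" (blur)="onBlur()" />`,
})
export class DynamicControlComponent {
  useFirst = true;
  first = new FormControl('');
  second = new FormControl('');

  // Flipping useFirst should rebind the input to the other control,
  // but the original binding appears to be kept.
  getControl(): FormControl {
    return this.useFirst ? this.first : this.second;
  }

  onBlur(): void {
    this.getControl().setValue('formatted value');
  }
}
```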
## 🔬 Minimal Reproduction
<!--
Please create and share minimal reproduction of the issue starting with this template: https://stackblitz.com/fork/angular-issue-repro2
-->
<!-- ✍️-->
[Link](https://stackblitz.com/edit/angular-issue-repro2-uakgvb)
<!--
If StackBlitz is not suitable for reproduction of your issue, please create a minimal GitHub repository with the reproduction of the issue.
A good way to make a minimal reproduction is to create a new app via `ng new repro-app` and add the minimum possible code to show the problem.
Share the link to the repo below along with step-by-step instructions to reproduce the problem, as well as expected and actual behavior.
Issues that don't have enough info and can't be reproduced will be closed.
You can read more about issue submission guidelines here: https://github.com/angular/angular/blob/master/CONTRIBUTING.md#-submitting-an-issue
-->
|
type: bug/fix,area: forms,state: confirmed,forms: Controls API,P4
|
low
|
Critical
|
578,517,755 |
TypeScript
|
A way to ignore prefix/suffix in module ID, then resolve the remaining ID as usual
|
<!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨
Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker.
Please help us by doing the following steps before logging an issue:
* Search: https://github.com/Microsoft/TypeScript/search?type=Issues
* Read the FAQ, especially the "Common Feature Requests" section: https://github.com/Microsoft/TypeScript/wiki/FAQ
-->
## Search Terms
<!-- List of keywords you searched for before creating this issue. Write them down here so that others can find this suggestion more easily -->
wildcard module declarations ignore prefix suffix relative path
## Suggestion
<!-- A summary of what you'd like to see added or changed -->
```ts
import url, { other, exported, values } from 'css:./whatever.css';
```
**TL;DR: I'm looking for a way to say "for all modules starting `css:`, ignore the `css:` bit, and resolve the remaining identifier as normal".**
As identified by [wildcard module declarations](https://www.typescriptlang.org/docs/handbook/modules.html#wildcard-module-declarations), some build tools use prefixes/suffixes to influence how a file is imported.
You can provide types for these:
```ts
declare module "css:*" {
const value: string;
export default value;
export const other: string;
export const exported: string;
export const values: string;
}
```
The above allows developers to say "all modules starting `css:` look like this…".
However, what if the exports from each CSS file differ? This is true for CSS modules, where the class names in a CSS file are 'exported'.
In these cases it's typical to generate a `.css.d.ts` file alongside your CSS that defines the types. However, if you're using a prefix like `css:`, TypeScript can't find the location of the `.css.d.ts` file.
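For instance, a generated declaration for the hypothetical `whatever.css` from the import above might look like this (illustrative only; it mirrors the exports used in the wildcard declaration):
```ts
// whatever.css.d.ts (hypothetically generated next to whatever.css)
declare const url: string;
export default url;
export const other: string;
export const exported: string;
export const values: string;
```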
**I'm looking for a way to say "for all modules starting `css:`, ignore the `css:` bit, and resolve the remaining identifier as normal".**
## Use Cases
<!--
What do you want to use this for?
What shortcomings exist with current approaches?
-->
- Build tools that use prefixes/suffixes to indicate how to process files.
- Files that may export different things per file (like CSS modules)
## Examples
<!-- Show how this would be used and what the behavior would be -->
It isn't clear to me if this should be done in tsconfig or in a type definition file.
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
|
Suggestion,In Discussion
|
low
|
Critical
|
578,598,402 |
vscode
|
Settings Sync : Allow for custom backend service end points
|
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
According to the Settings Sync plan #90129 and to the Settings Sync documentation ( https://code.visualstudio.com/docs/editor/settings-sync ), it seems that only Microsoft and public GitHub accounts are supported.
What if we wanted to use a GitHub Enterprise backend, or any other git remote (GitLab, Gogs, whatever...)?
|
feature-request,settings-sync
|
high
|
Critical
|
578,604,488 |
youtube-dl
|
https://nation.foxnews.com support request
|
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2020.03.08. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that site you are requesting is not dedicated to copyright infringement, see https://yt-dl.org/copyright-infringement. youtube-dl does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
- Search the bugtracker for similar site support requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2020.03.08**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
<!--
Provide all kinds of example URLs support for which should be included. Replace following example URLs by yours.
-->
Single Video: https://nation.foxnews.com/watch/0667a730448e69c75ab7eab7cb3225a2/
## Description
<!--
Provide any additional information.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
WRITE DESCRIPTION HERE
Request https://nation.foxnews.com/watch/0667a730448e69c75ab7eab7cb3225a2/
|
site-support-request
|
low
|
Critical
|
578,652,575 |
TypeScript
|
Recognize JS namespace pattern in TS
|
In short, I propose two things:
1. In TS files recognize JavaScript namespace patterns that are recognized in JS files with `--allowJs`. Like `var my = my || {}; my.app = my.app || {};` (#7632)
2. Recognize methods that create global JavaScript namespaces from a provided string. For example, calling [`Ext.ns("Company.data")`](https://stackoverflow.com/a/18152489/350384) should have the same effect as writing it "by hand" with `var Company = Company || {}; Company.data = Company.data || {};`
## Why 1.
It's now possible to have a project with the `--allowJs` flag where you can define such code in a JS file and a TS file recognizes it:
_JS file_:
```js
var app = app || {};
app.pages = app.pages || {};
app.pages.admin = app.pages.admin || {};
app.pages.admin.mailing = (function () {
return {
/**
* @param {string} email
*/
sendMail: function (email) { }
}
})()
```
_TS file_:
```ts
(function(){
app.pages.admin.mailing.sendMail("[email protected]")
})();
```
See what I get when I hover over `mailing` property in my TS file.

The problem is that I want to incrementally migrate an old JavaScript project to TypeScript and convert each JS file to TS. It's not easy or obvious how to migrate a JS file that uses this namespace pattern without adding a lot of type definitions that can be inferred in a JS file with the `allowJs` compiler flag. Why not recognize this pattern in TypeScript files?
## Why 2. and example
This is the actual pattern that my current project uses in JS files
```js
namespace('app.pages.admin');
app.pages.admin.mailing = (function () {
return {
/**
* @param {string} email
*/
sendMail: function (email) { }
}
})()
```
The `namespace` function is a global function that basically creates global namespace objects. It's logically equivalent to writing `var app = app || {}; app.pages = app.pages || {}; app.pages.admin = app.pages.admin || {};` from the previous example.
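For context, here is a minimal sketch of what such a helper typically does (illustrative only, not the project's actual implementation):
```ts
// Creates (or reuses) nested global objects for a dotted path, so that
// namespace('app.pages.admin') ensures globalThis.app.pages.admin exists.
function namespace(path: string): void {
  let current: any = globalThis;
  for (const part of path.split('.')) {
    current[part] = current[part] || {};
    current = current[part];
  }
}
```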
I want to be able to change a JS file to a TS file and be able to use this code in new TS files (but with the static safety that TS provides).
I propose that we recognize a special type alias definition for which the compiler will recognize this pattern and act as if this global "javascript namespace" (or ["expando"](https://github.com/microsoft/TypeScript/issues/10566) object) was created. For example, this type alias would be added to the standard lib:
```ts
type JavaScriptNamespace = string;
```
Then I could declare the global `namespace` function like this:
```ts
declare function namespace(n : JavaScriptNamespace): void;
```
and this TS code would be valid:
```ts
namespace('app.pages.admin');
app.pages.admin.mailing = (function () {
return {
sendMail: function (email: string): void { }
}
})()
```
It's a bit similar in spirit to `ThisType`, in the sense that the compiler has special handling for a particular type. But if an old compiler sees this type then nothing happens (it's just a string type).
## Use cases and current "workarounds"
It's all about making the migration of the current JS project to TS easier. It's hard to convince my team (and even myself) that we should use TS when you need to write code like this to get the same behavior you had in JS but with type safety:
```ts
//file: global.d.ts
declare var app: NamespaceApp
declare function namespace(namespace: string): void;
interface NamespaceApp {
pages: NamespacePages;
}
interface NamespacePages {
admin: NamespaceAdmin;
}
interface NamespaceAdmin {
}
//file: module1.js
interface NamespaceAdmin {
module1: {
foo: (arg: string) => void;
bar: (arg: string[]) => void;
}
}
namespace("app.pages.admin")
app.pages.admin.module1 = (function () {
return {
foo: function (arg: string) {
},
bar: function (arg: string[]) {
}
}
})()
//file: module2.js
interface NamespaceAdmin {
module2: {
foo2: (arg: string) => void;
bar2: (arg: string[]) => void;
}
}
namespace("app.pages.admin")
app.pages.admin.module2 = (function () {
return {
foo2: function (arg: string) {
},
bar2: function (arg: string[]) {
}
}
})()
```
you need to write type definitions for your methods twice and split your type definitions into interfaces to merge them 🤮
My current workaround is using TypeScript namespaces like this (I have two approaches and don't like either of them):
```ts
//first approach
namespace app1.pages.admin.mailing {
function privateFunc(){}
export function sendMail(email: string): void{
privateFunc();
}
}
//second approach
namespace app2.pages.admin {
export const mailing = (function(){
function privateFunc(){}
function sendMail(email: string){}
return {
sendMail
}
})()
}
```
There are many problems with this use of Typescript's namespace:
* the generated code is bigger, because this is how TS's namespaces work
* The first approach would almost look similar to the original JS code if you could use "export" syntax like in ES modules. Then at least it would visually look similar to the previous JS code that uses `return` in an IIFE. Something like this:
```ts
//second approach
namespace app5.pages.admin {
function privateFunc(){}
function sendMail(email: string): void{
privateFunc();
}
export { sendMail } //Error: "Export declarations are not permitted in a namespace.",
}
```
But it's not supported. I guess it's not worth changing now since namespaces are not used that often nowadays.
* You basically can't represent namespaces that have the same name as part of the namespace. For example, in JS I had an `app.pages.app` namespace and I cannot have it in TS:
```ts
namespace app.pages.app {
export const mailing = (function(){
function privateFunc(){}
function sendMail(email: string){
app.pages.someOtherModule.foo(); //compiler error Property 'pages' does not exist on type 'typeof app'.
}
return {
sendMail
}
})()
}
```
In the end I had to use a different name in TS files and use a workaround to make the old name work in other JS files:
```ts
(app.pages as any).app = app.pages.otherName;
```
## Summary
As I mentioned, it's all about easing the migration to TypeScript of old JavaScript projects that use the old namespace pattern (for example because they use the ExtJS library). I know that nowadays this pattern is not that popular because you should use ES modules. But I believe there are a lot of people that would love to move their projects to TS, and it's hard because it would require moving to ES modules first, which is a huge task in itself. Actually, my plan is to migrate the current code to TS with the old namespaces and then try to migrate it to ES modules. It should be much easier to migrate when most of your code is typed. You have more confidence when the compiler helps you.
If the TS team thinks it's only worth doing proposal 1 (just recognizing what's recognized with `allowJs` now, without implementing the `JavaScriptNamespace` alias proposal), it would be "good enough" for me because it would be much better than my current namespace workarounds.
But if I had this `JavaScriptNamespace` alias feature, then it would be possible to include all my JS files in the TS compilation with `allowJs`, and this JS code would be available in TS (of course function arguments would be `any`, but still). I would also get better IntelliSense in current JS files because this `namespace` function would be recognized as the creator of a namespace (when using Salsa).
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
|
Suggestion,In Discussion
|
low
|
Critical
|
578,655,245 |
node
|
No stack trace with missing async keyword and dynamic imports
|
<!--
Thank you for reporting an issue.
This issue tracker is for bugs and issues found within Node.js core.
If you require more general support please file an issue on our help
repo. https://github.com/nodejs/help
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify affected core module name
-->
* **Version**: v13.9.0
* **Platform**: Linux PC_NAME 5.3.0-40-generic #32~18.04.1-Ubuntu SMP Mon Feb 3 14:05:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
* **Subsystem**:
### What steps will reproduce the bug?
Consider the following code:
#### a.mjs
```js
(async () => {
"use strict";
let b = await import("./b.mjs");
await b.default();
})();
```
#### b.mjs
```js
import fs from 'fs';
function bug() {
// The something doesn't have to exist
console.log(await fs.promises.readFile("/proc/cpuinfo", "utf-8"));
}
export default async function () {
await bug();
}
```
#### Explanation
There is obviously a bug in `b.mjs` above: the `bug()` function is missing the `async` keyword. However, when I attempt to execute the above, I get the following error:
```
(node:30870) UnhandledPromiseRejectionWarning: SyntaxError: Unexpected reserved word
at Loader.moduleStrategy (internal/modules/esm/translators.js:81:18)
at async link (internal/modules/esm/module_job.js:37:21)
(node:30870) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:30870) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
```
This isn't particularly helpful. Now, consider the following snippet:
**c.mjs:**
```js
import fs from 'fs';
function bug() {
// The something doesn't have to exist
console.log(await fs.promises.readFile("/proc/cpuinfo", "utf-8"));
}
(async () => {
"use strict";
await bug();
})();
```
There's a bug in this one too, but executing it yields a very different and much more helpful error:
```
file:///tmp/c.mjs:5
console.log(await fs.promises.readFile("/proc/cpuinfo", "utf-8"));
^^^^^
SyntaxError: Unexpected reserved word
at Loader.moduleStrategy (internal/modules/esm/translators.js:81:18)
at async link (internal/modules/esm/module_job.js:37:21)
```
Much better. Node gives us a helping hand by telling us where the error occurred.
### How often does it reproduce? Is there a required condition?
This bug in the error message only appears to occur when a module is dynamically imported with `import("./path/to/file.mjs")`. Specifically, the first error message is missing this bit:
```
file:///tmp/c.mjs:5
console.log(await fs.promises.readFile("/proc/cpuinfo", "utf-8"));
^^^^^
```
...obviously the file path and line number would be different for b.mjs if this bit were added.
### What is the expected behavior?
The first error message should look like the second.
<!--
If possible please provide textual output instead of screenshots.
-->
When dynamically importing a module that is missing the async keyword on a method, the error message should tell me where the error occurred.
### What do you see instead?
The bit where it tells me where the "unexpected reserved word" was found is missing. See above for examples of what's gone wrong.
<!--
If possible please provide textual output instead of screenshots.
-->
### Additional information
If possible, when it's the `await` keyword that was detected as unexpected, the error message should reflect this more closely. For example, it might be helpful to say `SyntaxError: Unexpected reserved word "await" (did you forget the "async" keyword?)` or something like that.
<!--
Tell us anything else you think we should know.
-->
|
esm
|
low
|
Critical
|
578,682,986 |
flutter
|
DropdownButton does not grow based on content when isDense is true.
|
https://b.corp.google.com/issues/151121131.
|
framework,f: material design,d: api docs,customer: money (g3),has reproducible steps,found in release: 3.0,found in release: 3.1,team-design,triaged-design
|
low
|
Major
|
578,731,505 |
godot
|
Tilesets behave in an incredibly buggy way when multiple Tiles share the same texture
|
**Godot version:** 3.1.2 stable
**OS/device including version:** Win64
**Issue description:**
Because the same texture can't be added more than once

and because clicking on a texture acts as selecting the Tile, and because clicking those arrows will advance between tiles, subtiles, and shapes,

Tilesets start to behave unpredictably when multiple Tiles try to share the same texture.
**Steps to reproduce:**
1. Have a simple spritesheet,
2. create an Autotile Tile and select all the spritesheet as region
3. create another Autotile Tile and select all the spritesheet as region, change modulate/bitmask/collision/priority whatever
4. create another Atlas Tile and select all the spritesheet as region
5. paint some tiles
6. try to go back into Tilesets to select a specific tile and change settings
7. witness the chaos unfold
**Minimal reproduction project:**
[Tileset_single_texture_issue.zip](https://github.com/godotengine/godot/files/4313749/Tileset_single_texture_issue.zip)
|
bug,topic:core,topic:editor
|
low
|
Critical
|
578,777,312 |
pytorch
|
Strange behaviour of F.interpolate with bicubic mode.
|
## 🐛 Bug
Input:
X with shape (2,3,256,256)
RUN:
Y = F.interpolate(X, [256,256], mode='bicubic', align_corners=True)
Z = F.interpolate(Y, [256,256], mode='bicubic', align_corners=True)
ISSUE:
Y[1,:,:,:] = zeros and Z[1,:,:,:] = zeros.
Following is my code:
```python
import PIL
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import transforms


def interpolate_torch(x, s=256, mode='bicubic', align_corners=True):
    return F.interpolate(x, [s, s], mode=mode, align_corners=True)


def get_transform(resize_crop, colorJitter, horizen_flip, inputsize=1024):
    transform_list = []
    if resize_crop:
        transform_list.append(transforms.RandomResizedCrop(inputsize, scale=(0.8, 1.0), interpolation=PIL.Image.BICUBIC))
    else:
        if inputsize != 1024:
            transform_list.append(transforms.Resize(inputsize, interpolation=PIL.Image.BICUBIC))
    if colorJitter:
        transform_list.append(transforms.ColorJitter())
    if horizen_flip:
        transform_list.append(transforms.RandomHorizontalFlip())
    transform_list += [transforms.ToTensor(),
                       transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]
    return transforms.Compose(transform_list)


def img2tensor(name):
    transf = get_transform(resize_crop=False, colorJitter=False,
                           horizen_flip=False, inputsize=256)
    img = Image.open(name)
    img_torch = transf(img)
    return img_torch


y2 = img2tensor('content.png').unsqueeze(0)
y1 = img2tensor('content.png').unsqueeze(0)
y = torch.cat([y2, y1], 0)
print(y[1,:,:,:])
y_bicubic = interpolate_torch(y, s=256, mode='bicubic')
print(y_bicubic[1,:,:,:])
y_bicubic_bicubic = interpolate_torch(y_bicubic, s=128, mode='bicubic')
```
Environment:
pytorch 1.1.0
torchvision 0.2.2.post3
scikit-image 0.15.0
python3
|
module: nn,triaged
|
low
|
Critical
|
578,802,347 |
flutter
|
Changing MaterialApp to CupertinoApp exceptions on routes.
|
Hi, I have changed my Flutter app from MaterialApp to CupertinoApp. It throws an error on routes.
Tested on Android Device and Android Emulator.
The following code works with MaterialApp.
```dart
import 'package:flutter/material.dart';
import 'package:wasd/onboarding.dart';
import 'strings.dart';

void main() => runApp(WASD());

class WASD extends StatelessWidget {
  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: Strings.appTitle,
      initialRoute: Strings.navOnboarding,
      routes: {
        Strings.navOnboarding: (context) => Onboarding()
      },
    );
  }
}
```
However, when I changed it to CupertinoApp as follows, it shows an error.
```dart
import 'package:flutter/cupertino.dart';
import 'package:wasd/onboarding.dart';
import 'strings.dart';

void main() => runApp(WASD());

class WASD extends StatelessWidget {
  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return CupertinoApp(
      title: Strings.appTitle,
      initialRoute: Strings.navOnboarding,
      routes: {
        Strings.navOnboarding: (context) => Onboarding()
      },
    );
  }
}
```
<details>
<summary>Logs</summary>
```
Performing hot reload...
Syncing files to device Android SDK built for x86...
════════ Exception caught by widgets library ═══════════════════════════════════════════════════════
The following assertion was thrown building Builder(dirty, dependencies: [CupertinoUserInterfaceLevel, _InheritedCupertinoTheme]):
Either the home property must be specified, or the routes table must include an entry for "/", or there must be on onGenerateRoute callback specified, or there must be an onUnknownRoute callback specified, or the builder property must be specified, because otherwise there is nothing to fall back on if the app is started with an intent that specifies an unknown route.
'package:flutter/src/widgets/app.dart':
Failed assertion: line 178 pos 10: 'builder != null ||
home != null ||
routes.containsKey(Navigator.defaultRouteName) ||
onGenerateRoute != null ||
onUnknownRoute != null'
Either the assertion indicates an error in the framework itself, or we should provide substantially more information in this error message to help you determine and fix the underlying cause.
In either case, please report this assertion by filing a bug on GitHub:
https://github.com/flutter/flutter/issues/new?template=BUG.md
The relevant error-causing widget was:
CupertinoApp file:///Users/abdullahbalta/StudioProjects/wasd-flutter/lib/main.dart:11:12
When the exception was thrown, this was the stack:
#2 new WidgetsApp (package:flutter/src/widgets/app.dart:178:10)
#3 _CupertinoAppState.build.<anonymous closure> (package:flutter/src/cupertino/app.dart:278:22)
#4 Builder.build (package:flutter/src/widgets/basic.dart:6757:41)
#5 StatelessElement.build (package:flutter/src/widgets/framework.dart:4291:28)
#6 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:4223:15)
...
════════════════════════════════════════════════════════════════════════════════════════════════════
Reloaded 1 of 639 libraries in 418ms.
```
</details>
<details>
<summary>strings.dart</summary>
```
class Strings {
  static String appTitle = "WASD";

  //routes
  static String navOnboarding = '/onboarding';
}
```
</details>
So the assertion error message is very clear, but why is this not working?
|
framework,f: material design,d: api docs,has reproducible steps,found in release: 3.0,found in release: 3.1,team-design,triaged-design
|
low
|
Critical
|
578,842,855 |
kubernetes
|
Issues in published repos are not very visible
|
E.g. https://github.com/kubernetes/apimachinery, https://github.com/kubernetes/client-go, etc
People file issues there because it's logical, but we (api machinery) don't include them in our triage meeting.
We should either adjust our query, change our process in some other way (like tag issues when we've looked at them), or adjust the readmes and issue templates in the published repos.
/sig api-machinery
|
kind/bug,sig/api-machinery,lifecycle/frozen
|
low
|
Major
|
578,871,592 |
godot
|
Tileset: Neither Autotile nor Atlas Tile Priority allow "per Tile" priority
|
**Godot version:** 3.2.1.stable
**OS/device including version:** Win64
**Issue description:**
My goal is to have a Tile with 8 Subtiles and every time I paint a tile, a random one out of those 8 is picked. This is what the "Priority" tab in the Tileset is for.
Both Atlas textures and Autotiles have a Priority tab. So I assumed that a Tile with 8 identical bitmasks should provide me with 8 randomly chosen Tiles, if every tile is set to the default 1/8 priority. [At least according to GDQuest](https://youtu.be/F6VerW98gEc?t=333), 1/8 should mean that out of 8 painted tiles, the probability for each subtile to appear is 1.
I would expect this to work; however, what seems to be the case is that priority only takes effect if there are multiple tiles for the same bitmask, **and only if the bitmask _rules_ return true.**
As a consequence, despite **Atlas Tiles** having a "Priority" tab, it seems to have absolutely no purpose or effect:

**Autotiles** _without_ a bitmask set will also show no effect of priority, just like with Atlas tiles:

Note the 1/0 there.
If you set the 3x3 bitmask to _fill the entire tile_, all tiles surrounded by other tiles will be affected by "priority", but not those that are not surrounded on all sides:

If you set the 3x3 bitmask _only in the center_ of each tile, the opposite is the case, only tiles not surrounded by any other tiles will be affected by the priority setting:

The desired result, however, would be to allow the user control over which tiles are set according to priority and which are not. The way bitmasks are currently set up seems to make the priority feature either broken or useless.
See it in action:

**Minimal reproduction project:**
[Tileset_Priority_issue.zip](https://github.com/godotengine/godot/files/4314839/Tileset_Priority_issue.zip)
PS: If you wonder why Atlas tiles don't show up in the right color: https://github.com/godotengine/godot/issues/36964
|
bug,enhancement,topic:core,topic:editor,topic:2d
|
medium
|
Major
|
578,877,320 |
go
|
x/pkgsite: search algorithm doesn't find similarly named packages
|
### What is the URL of the page with the issue?
https://pkg.go.dev/search?q=blake2
### What did you do?
Searched for `blake2` at pkg.go.dev
### What did you expect to see?
Search results which included both https://pkg.go.dev/golang.org/x/crypto/blake2b and https://pkg.go.dev/golang.org/x/crypto/blake2s
### What did you see instead?
Search results that did not include those packages
It looks like the search isn't finding similarly named packages. In this case, I knew there were two blake2-related packages in x/crypto, but when I searched for "blake2", neither of the similarly named `blake2s` or `blake2b` packages was in the results.
|
NeedsInvestigation,pkgsite,pkgsite/search
|
low
|
Minor
|
578,886,341 |
flutter
|
Standardize FPS computation
|
Investigating the best way of measuring FPS in Flutter with sporadic user inputs and animations. On top of that, there are also multiple threads and pipelining involved.
This seems to also be a common issue for Chrome-based OS and Fuchsia.
We can either come up with an API, or at least document some standardized algorithm or specifications on how to compute the FPS in such scenarios.
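As a purely illustrative sketch (not a proposal for the API), one simple definition is "frames observed in a sliding time window divided by the window length"; any standardized algorithm would still have to account for sporadic input and the multi-threaded pipeline:

```python
from collections import deque

class FpsCounter:
    """Sliding-window FPS: frames seen in the last `window_s` seconds / window length."""
    def __init__(self, window_s=1.0):
        self.window_s = window_s
        self.timestamps = deque()

    def on_frame(self, t):
        self.timestamps.append(t)
        # Drop frames that fell out of the window.
        while self.timestamps and t - self.timestamps[0] > self.window_s:
            self.timestamps.popleft()
        return len(self.timestamps) / self.window_s

counter = FpsCounter()
for t in [0.0, 0.016, 0.033, 0.050, 1.2]:  # made-up frame times with a gap
    print(counter.on_frame(t))
```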
|
c: new feature,engine,c: performance,perf: speed,P3,team-engine,triaged-engine
|
low
|
Major
|
578,886,968 |
terminal
|
Disable (or Customize) key to suppress mouse events
|
# Description of the new feature/enhancement
Work item introduced by #4856. Shift is used to suppress mouse events. A user may want to disable this feature.
If we want to be able to customize it (which I don't think anybody is asking for but I'll include the idea in here anyways), this may be a part of #1553.
|
Issue-Feature,Area-Settings,Product-Terminal
|
low
|
Major
|
578,924,054 |
terminal
|
shift+left at the end of a line in the console selects a non existent character
|
# Environment
19582 rs_prerelease
# Steps to reproduce
open cmd
type: type f:\mx\public\x86fre.nocil\onecore\external\sdk\inc\crt\direct.h
# Expected behavior
shift+left should select the h, and ctrl+shift+left should select the whole path
# Actual behavior
* shift+left selects the empty character after the h
* ctrl+shift+left selects the empty character after the h. A 2nd ctrl+shift+left selects the path
|
Product-Conhost,Area-Input,Issue-Bug,Priority-3
|
low
|
Minor
|
578,950,867 |
TypeScript
|
Disable or remove specific code action/fixes/refactoring
|
## Search Terms
## Suggestion
https://github.com/microsoft/vscode/issues/92305
When some code action is available, an icon shows in the code editing area.
If there are fixes I never want, it's annoying that the clickable icon shows every time my cursor moves onto the code, selects some code, or performs some other action. Especially when the text to the left of the cursor on the current line is not whitespace, the icon shows on the previous line and covers that line's code.
For example, I always want to use require in JS files, not import.
Also, maybe visual effects like the dashed underline, the show-fixes button in the popup tip, and the dimmed text color should be removed when the related code action is disabled, because they hint that there are problems in my code when actually there are not.
## Use Cases
## Examples
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
|
Suggestion,In Discussion
|
low
|
Major
|
578,974,017 |
flutter
|
Prevent status and system bar appearance on touch or on keyboard in flutter
|
I have asked my question on [stackoverflow](https://stackoverflow.com/questions/60611854/prevent-status-and-system-bar-appearance-on-touch-or-on-keyboard-in-flutter) but there is no answer or comment as of now.
My problem is as below.
I have a Flutter app that will be used by the public on a tablet or an Android box. My app should run in full screen. I have used `SystemChrome.setEnabledSystemUIOverlays([]);` in `initState()`.
But the problem is that whenever the user swipes from the top or bottom, the status and system bars appear, as in this [stackoverflow answer](https://stackoverflow.com/a/57126488/3436326). The public can then close or exit the app with the back button. The same thing happens when the keyboard appears.
What is the way to prevent or hide the status and system navigation bars until I close the app? The app can be closed from an administrator access page. Is there something that can be done in MainActivity.java? I currently cannot find any way to prevent users from accessing the status and system bars while the app is running.
In Xamarin Forms, which I used before, I had the code below, which I applied from an online source:
```
//Remove title bar
RequestWindowFeature(WindowFeatures.NoTitle);
//Remove notification bar
Window.SetFlags(WindowManagerFlags.Fullscreen, WindowManagerFlags.Fullscreen);
//Wake screen when app is added to task
Window.SetFlags(WindowManagerFlags.TurnScreenOn, WindowManagerFlags.TurnScreenOn);
//Keep screen on without timeout
Window.SetFlags(WindowManagerFlags.KeepScreenOn, WindowManagerFlags.KeepScreenOn);
if (Build.VERSION.SdkInt >= BuildVersionCodes.Lollipop)
{
var stBarHeight = typeof(Xamarin.Forms.Platform.Android.FormsAppCompatActivity).GetField("statusBarHeight", System.Reflection.BindingFlags.Instance | System.Reflection.BindingFlags.NonPublic);
if (stBarHeight == null)
{
stBarHeight = typeof(Xamarin.Forms.Platform.Android.FormsAppCompatActivity).GetField("_statusBarHeight", System.Reflection.BindingFlags.Instance | System.Reflection.BindingFlags.NonPublic);
}
stBarHeight?.SetValue(this, 0);
}
```
|
platform-android,framework,c: proposal,a: layout,P3,team-android,triaged-android
|
low
|
Major
|
578,974,275 |
pytorch
|
Restructure `multi_head_attention_forward`
|
## 🚀 Feature
Restructure the function `multi_head_attention_forward` in [nn.functional](https://github.com/pytorch/pytorch/blob/23b2fba79a6d2baadbb528b58ce6adb0ea929976/torch/nn/functional.py#L3573) into several functions to improve the ability to experiment. In particular, decompose the function so that the following are available:
* The input embedding functions.
* The computation of attention weights.
* The output embedding function.
This will allow users to try different embeddings or attention mechanisms without having to recode the rest.
## Motivation
Addresses the issue of decomposing the function as mentioned in #32590. It also moves forward on including more support for attention mechanisms.
## Pitch
Currently, the `multi_head_attention_forward` function encapsulates the projection of the `query`, `key`, and `value`, computing attention for these projections, and computing the output projection after applying attention. Furthermore, the input embedding utilizes several code paths that are different embeddings. By decomposing the function into several parts, we can make it more readable and open to experimentation.
The following plan is based on the above (a rough sketch of the decomposition follows the list):
* Functions for computing the input embeddings `q`, `k`, and `v`. There are currently four code paths used for doing this, and three unique embeddings are used. Each embedding should be an individual function so that it's clearer what method is being used. The embeddings used are labeled 'self-attention' (where `query = key = value`), 'encoder-decoder attention' (where `key = value`) and one that is unlabeled but is probably just called `attention`. The last embedding has two code paths depending on whether `in_proj_weight` is used or separate weights are used for `query`, `key` and `value`. (See L3669-L3748.)
* A function for applying attention to get a new query. Some models rely on computing attention in different ways, and separating this out would allow us to use those more freely. This should optionally return the attention weights. Specifically, this is the Scaled Dot-Product Attention. (See L3750-L3824.)
* A function for computing the output projection of the query. There is currently only one function needed for doing this. (See L3826-L3836.)
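For illustration only (my own rough sketch, not the proposed implementation), the three pieces could look something like this with simplified signatures — no masks, multiple heads, dropout on the inputs, or biases:

```python
import torch
import torch.nn.functional as F

def in_projection(query, key, value, w_q, w_k, w_v):
    # Input embedding: project query/key/value with separate weights.
    return F.linear(query, w_q), F.linear(key, w_k), F.linear(value, w_v)

def scaled_dot_product_attention(q, k, v, dropout_p=0.0):
    # Attention weights followed by a weighted sum of the values.
    attn = torch.softmax(q @ k.transpose(-2, -1) / q.size(-1) ** 0.5, dim=-1)
    if dropout_p > 0.0:
        attn = F.dropout(attn, p=dropout_p)
    return attn @ v, attn

def out_projection(attn_output, w_out, b_out=None):
    # Output embedding back to the model dimension.
    return F.linear(attn_output, w_out, b_out)
```

Users could then swap out any one of these pieces (for example a different attention mechanism) without re-implementing the other two.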
## Alternatives
Some of the restructurings I have suggested could be skipped in favor of introducing fewer functions. In particular, only one of the input embeddings needs to be provided. The rest could be left to the end-user.
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
|
module: nn,triaged
|
medium
|
Critical
|
578,987,703 |
pytorch
|
Significant speed difference between P100 and V100
|
## 🐛 Bug
P100 (running time is about 30 seconds) is much slower than V100 (2-3 seconds) when running the `affine_grid` and `grid_sample` forward/backward passes.
## To Reproduce
My code is as follows:
```python
import time
import torch

# `fixed`/`moving` (numpy arrays), `optim`, `loss_fn` and `max_iteration`
# are defined earlier in the original script.
fixed = torch.from_numpy(fixed).float().cuda()
moving = torch.from_numpy(moving).float().cuda()
theta = torch.eye(3, 4, requires_grad=True).unsqueeze(0).float().cuda()

torch.cuda.synchronize()
start = time.time()
for i in range(max_iteration):
    grid = torch.nn.functional.affine_grid(theta, fixed.size())
    output = torch.nn.functional.grid_sample(moving, grid)
    optim.zero_grad()
    loss = loss_fn(fixed, output)
    loss.backward()
    optim.step()
torch.cuda.synchronize()
end = time.time()
print(end - start)
```
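One way to narrow down where the time goes (my suggestion, not from the original report) is to profile a few iterations with the autograd profiler and compare per-operator CUDA times on both cards. The shapes below are made up and 2-D for brevity:

```python
import torch

# Made-up 2-D example just to see where time goes; the report uses 3-D volumes.
fixed = torch.randn(2, 1, 256, 256, device="cuda")
moving = torch.randn(2, 1, 256, 256, device="cuda")
theta = torch.eye(2, 3, device="cuda").repeat(2, 1, 1).requires_grad_()

with torch.autograd.profiler.profile(use_cuda=True) as prof:
    for _ in range(10):
        grid = torch.nn.functional.affine_grid(theta, fixed.size())
        output = torch.nn.functional.grid_sample(moving, grid)
        loss = (output - fixed).pow(2).mean()
        loss.backward()
print(prof.key_averages().table(sort_by="cuda_time_total"))
```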
## Expected behavior
Usually V100 gives a 2x or 3x speedup over the P100. What causes this large difference?
## Environment
- PyTorch Version: 1.1.0
- OS: Ubuntu 16.04
- How you installed PyTorch: pip
- Python version: 3.7.3
- CUDA version: 9.2
- GPU models and configuration: P100 and V100
cc @csarofeen @ptrblck @ngimel @VitalyFedyunin
|
module: performance,module: cuda,triaged,module: cublas
|
low
|
Critical
|
579,006,294 |
create-react-app
|
support to use resourceQuery to match `css module` css rule.
|
### Is your proposal related to a problem?
It is not a problem but an idea.
CRA supports using a file extension such as `.module.css` or `.module.scss` to enable CSS modules.
I think using `resourceQuery` is also a good choice.
### Describe the solution you'd like
```javascript
import style from 'app.module.css'
import style from 'app.css?css_module'
```
Both of the above would enable CSS modules.
When a CSS rule matches the resource query `css_module`, CSS modules would be enabled.
### Describe alternatives you've considered
It would make it easier for projects that do not follow the file-extension rule to adopt CRA, because we could use a Babel plugin to add the query to import statements.
So we could enable CSS modules automatically without changing file names.
```javascript
import 'common.css'
import style from 'app.css'
// after babel plugin
import 'common.css' // no need to add query
import style from 'app.css?css_module'
```
### Additional context
It is easy to add a Babel plugin, but changing the webpack rules is not possible without `eject`.
|
issue: proposal,needs triage
|
low
|
Minor
|
579,051,999 |
TypeScript
|
Enforce correct bounds on assignability when the target is a lookup type on a constrained type parameter
|
## Search Terms
interface extends generic argument
## Suggestion
### Currently
There is a [difference](https://www.typescriptlang.org/play/?ssl=1&ssc=1&pln=11&pc=2#code/JYOwLgpgTgZghgYwgAgGIHt0AU5TgW2QG8BYAKGWQCFcAuZdAIwCsIEwBucgX3PJgCuIdsHQhkMTAB4AKsggAPSCAAmAZzSYcefAD4AFAEpi5SgjFqwyANb0ZAbQBENKI4C6yALzFuHZAHp-ZAB5AGkePjJBYTBRcUY4AC9ZeSUIVQ0mVnYDY1IKZHMQSxs7Lx8-QPkoKHQoHiA) between the validity of assignment to `k` in `foo` and `baz`.
```ts
interface FooParam {
  Bar: object;
}

function foo<T extends FooParam>() {
  const k: T["Bar"] = {}; // OK
}

function baz<T extends object>() {
  const k: T = {}; // error
}
```
In `baz`, it is not possible to assign the empty object to `k` even though `k` is type `T` and `T extends object`. This, as I understand, is the correct behavior since an empty object cannot be guaranteed to be a subtype of `T` even though it is a subtype of `object`. (For instance, `T` could be `number[]` and then we'd have a problem).
In `foo`, on the other hand, it is possible to assign the empty object to `k`, even though it cannot be guaranteed that the empty object would be a subtype of `T["Bar"]`. For example, `T` could be something like `{ Bar: number[] }`, and we would have essentially the same problem as in the case of `baz`.
### Wanted
It should not be possible to assign to `k` in both cases.
## Use Cases
I am currently using the above strategy of supplying an `interface` as the generic argument to reduce the number of positional generic arguments and to make the code more resistant to semantic changes in their order.
The situation I am facing is a piece of functionality that needs many delegates accepting a large set of different kinds of inputs and returning different kinds of outputs, where the exact set of inputs and outputs is kept generic and type-checked. Using positional generic arguments is error-prone and hard to read.
As it is currently, protection against the type not even being a subtype of the interface property type is already afforded (so, with the example above, assigning `2` to `k`, which expects `T["Bar"]`, is an error). This issue just requests that the full protection afforded by "standalone" type parameters is also afforded here.
## Examples
```ts
function foo<T extends /*...*/, U extends /*...*/, V /*... and so on */, W, X, Y, Z>(
  input: T,
  logic1: (input: T) => U,
  logic2: (input: U) => W,
  logic3: (input: W) => X,
  logic4: (input1: W, input2: X) => U,
  /* ... many more contextual callbacks */
) {
  /* ... */
}
```
This would then turn to something like so:
```ts
interface FooParams {
  T: /* ... whatever T extends */
  U: /* ... whatever U extends */,
  /* ... and so on, obviously with more sensible names */
}

function foo<Params extends FooParams>(
  input: Params["T"],
  logic1: (input: Params["T"]) => Params["U"],
  /* ... */
) {
  /* ... */
}
```
## Checklist
My suggestion meets these guidelines:
* :question: This wouldn't be a breaking change in existing TypeScript/JavaScript code.
:point_up: JavaScript wouldn't break obviously. But since the TypeScript behavior would restrict what are currently valid typings, I'm honestly not sure because I don't know how idiomatic this usage is elsewhere and what/whether people currently actively expect to be the behavior.
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
|
Suggestion,In Discussion
|
low
|
Critical
|
579,056,469 |
rust
|
"cannot borrow a constant which may contain interior mutability" for local variable inside constant evaluation
|
This code [(playground link)](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=9eaf4438991432c41b272b35023cdc82):
```rust
#![feature(const_raw_ptr_deref)]
use std::cell::UnsafeCell;
const FOO: u32 = {
    let x = UnsafeCell::new(42);
    unsafe { *x.get() }
};
```
produces on latest nightly:
> error[E0492]: cannot borrow a constant which may contain interior mutability, create a static instead
However, following the advice to create a static simply results in a "constants cannot refer to statics" error.
The limitation seems unnecessary. We're not really borrowing a constant; `x` only exists during constant evaluation and will not be emitted into the binary, so there's no reason it would be problematic to mutate it.
[This issue affects `memoffset`.](https://github.com/Gilnaa/memoffset/issues/37)
|
T-lang,T-compiler,C-bug,A-const-eval
|
low
|
Critical
|
579,088,587 |
ant-design
|
Sizing class suffixes are inconsistent
|
- [x] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate.
### Reproduction link
[https://gist.github.com/romu70/6f56ecace444fc68c02d309b06f4c0c0](https://gist.github.com/romu70/6f56ecace444fc68c02d309b06f4c0c0)
### Steps to reproduce
Just inspect the HTML rendered by Ant.
### What is expected?
Same naming convention for components sizes. Bootstrap suffixes are great: -sm and -lg everywhere.
### What is actually happening?
For layout naming, Ant uses Bootstrap suffixes: xs, sm, md, lg, xl, xxl. But for components, those suffixes are inconsistent:
- button, list, input, select use the same Bootstrap suffixes, great!
- steps, switch, DatePicker, TimePicker, Progress (and maybe others) use "-small" and "-large" suffixes.
- there is no size class for the circular Progress component.
IMHO, those latter ones should be changed to use the same convention (-sm and -lg).
| Environment | Info |
|---|---|
| antd | 4.0.1 |
| React | doesn't matter |
| System | doesn't matter |
| Browser | doesn't matter |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
|
help wanted,Inactive,improvement
|
low
|
Major
|
579,115,152 |
opencv
|
Error loading ONNX model (FasterRCNN50 from Torchvision)
|
I am trying to load the FasterRCNN model from torchvision but am facing some errors. Here is the code for exporting the model:
```
import torch
import torchvision
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
x = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]
model.eval()
predictions = model(x)
torch.onnx.export(model, x, "fasterrcnn50.onnx", opset_version=11)
```
And I am loading my model using:
```
import cv2 as cv
net = cv.dnn.readNet('fasterrcnn50.onnx')
```
Here I get the following error:
```
cv2.error: OpenCV(4.2.0-dev) /Users/adityakumar/opencv/modules/dnn/src/graph_simplifier.cpp:79: error: (-212:Parsing error) Input node with name 2780 not found in function 'getInputNodeId'
```
torch==1.4.0
torchvision==0.5.0
OpenCV==4.2.0-dev
Here is the link of model: [model](https://drive.google.com/open?id=1WvsBv9gsAulTL-SOivfAgNWn7ZGvsmM3)
How can I resolve this error?
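As a first diagnostic (my suggestion, not something from the report), it may help to confirm that the exported file is a structurally valid ONNX graph before loading it with OpenCV:

```python
import onnx

model = onnx.load("fasterrcnn50.onnx")
onnx.checker.check_model(model)  # raises if the graph itself is malformed
print(onnx.helper.printable_graph(model.graph)[:2000])  # peek at the first nodes
```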
|
category: dnn,category: dnn (onnx)
|
low
|
Critical
|
579,128,936 |
TypeScript
|
No error with Partial<Record<number, {}>> in object spreading
|
**TypeScript Version:** 3.8.3
**Search Terms:** [type inference], [partial], [record], [object spreading]
**Code**
```ts
type Point = {
  x: number
  y: number
}

type Space = Partial<Record<number, Point>>

function addPointReducer(point: Point, pointId: number, prevSpace: Space): Space {
  const prevPoint = prevSpace[pointId]
  return {
    ...prevPoint,
    [pointId]: point,
  }
}
```
**Expected behavior:**
It should throw an error on `...prevPoint` because `Point` is not the same type as `Space`.
**Actual behavior:**
No error.
**Playground Link:** https://www.typescriptlang.org/play/?ssl=1&ssc=1&pln=17&pc=1#code/C4TwDgpgBACg9gSwHbCgXigbwFBSgDwC4okBXAWwCMIAnXKEYsq27AX221EigGUwAhgGNoGGAJrAEAgDYAeAEoQhcGgBM5zajQA0sRCgB8hztgBmpJEKlwkUAWrXxkwJWtIiaACjAHgxZxQ9XxcASTUmCm1gmggAN35hCGJEkQBKFMERLHoVJABnVDBYuMDUDGL41IgAbRCUcIBdTjxY4FIaOxw8PAA6fsrSvx16PDq-JuJ64BG8Dg4gA
**Related Issues:**
|
Needs Investigation
|
low
|
Critical
|
579,226,837 |
opencv
|
How to call a saved_model formatted model via opencv
|
We know that saved_model is often used with TF Serving for model deployment. It has the following structure:
```
assets/
assets.extra/
variables/
    variables.data-?????-of-?????
    variables.index
saved_model.pb|saved_model.pbtxt
```
Now I want to load a saved_model-format model through OpenCV. How can I do this? Thank you very much.
I know that a .pb model exported by TF 1.x can be imported together with its .pbtxt structure file, but what should be done with saved_model?
The platform is C++, Ubuntu 18.04, OpenCV 4.1.
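For what it's worth, one commonly suggested workaround is sketched below, assuming TensorFlow 2.x is available; the directory name and signature key are placeholders, and some graphs may additionally need a generated .pbtxt:

```python
import cv2 as cv
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

loaded = tf.saved_model.load("saved_model_dir")      # placeholder directory
concrete = loaded.signatures["serving_default"]      # placeholder signature key
frozen = convert_variables_to_constants_v2(concrete)
tf.io.write_graph(frozen.graph.as_graph_def(), ".", "frozen_graph.pb", as_text=False)

# OpenCV can then read the frozen GraphDef instead of the SavedModel directory.
net = cv.dnn.readNetFromTensorflow("frozen_graph.pb")
```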
|
feature,priority: low,category: dnn
|
low
|
Major
|
579,243,485 |
rust
|
Can't use anonymous functions with calling conventions other than Rust
|
Hi all
The compiler refuses to compile this code when a calling convention other than "Rust" is used.
I tried this code:
```rust
#![feature(abi_vectorcall)]

type Dyad = unsafe extern "fastcall" fn(u32, u32) -> u32; // DOES NOT WORK
// type Dyad = unsafe extern "vectorcall" fn(u32, u32) -> u32; // DOES NOT WORK
// type Dyad = fn(u32, u32) -> u32; // WORKS as expected

fn f(t: u32, x: u32, y: u32) -> u32 {
    let d: Dyad = match t {
        1 => |x, y| x + y,
        _ => unimplemented!()
    };
    d(x, y)
}

fn main() {
    println!("{}", f(1, 10, 20));
}
```
### Meta
`rustc --version --verbose`:
```
rustc 1.43.0-nightly (3dbade652 2020-03-09)
binary: rustc
commit-hash: 3dbade652ed8ebac70f903e01f51cd92c4e4302c
commit-date: 2020-03-09
host: x86_64-unknown-linux-gnu
release: 1.43.0-nightly
LLVM version: 9.0
```
|
A-closures,T-lang,C-feature-request
|
low
|
Minor
|
579,289,228 |
go
|
runtime: TestGdbPythonCgo is flaky on linux-mips64le-rtrk builder
|
[2020-03-10T16:25:09-9f74f0a/linux-mips64le-rtrk](https://build.golang.org/log/831e6d961dc437580e7a28b21de9eda7f58bc052)
[2020-02-01T06:01:05-866920a/linux-mips64le-rtrk](https://build.golang.org/log/ad72868121a4b2d90e55564ce3555ccb31990fb1)
[2019-11-27T16:41:28-b11d02e/linux-mips64le-rtrk](https://build.golang.org/log/29671fce6fe13a427763ea98c037a631cb003738)
[2019-11-19T02:41:53-2d8c199/linux-mips64le-rtrk](https://build.golang.org/log/edacd43bb72f84c16e2aedd1f92d82a3662564cb)
[2019-11-15T18:31:19-398f9e1/linux-mips64le-rtrk](https://build.golang.org/log/093f96e39ec0226e76e0911e3b6e9413282060ca)
[2019-11-13T15:52:21-bf49905/linux-mips64le-rtrk](https://build.golang.org/log/e70febbaa80764d873d914f0588b3d46edfd3e02)
[2019-11-12T22:30:48-995ade8/linux-mips64le-rtrk](https://build.golang.org/log/12463488e52cafb2ce5ab8b9f0d1caec3ebf2196)
CC @ianlancetaylor @bogojevic @milanknezevic
```
##### GOMAXPROCS=2 runtime -cpu=1,2,4 -quick
--- FAIL: TestGdbPythonCgo (6.02s)
runtime-gdb_test.go:69: gdb version 8.2
runtime-gdb_test.go:236: gdb output: Loading Go Runtime support.
Loading Go Runtime support.
Breakpoint 1 at 0x1200ad4b4: file /tmp/gobuilder-mips64le/tmp/go-build883242937/main.go, line 15.
Couldn't get registers: No such process.
BEGIN info goroutines
Traceback (most recent call last):
File "/tmp/gobuilder-mips64le/go/src/runtime/runtime-gdb.py", line 442, in invoke
for ptr in SliceValue(gdb.parse_and_eval("'runtime.allgs'")):
File "/tmp/gobuilder-mips64le/go/src/runtime/runtime-gdb.py", line 91, in __getitem__
if i < 0 or i >= self.len:
File "/tmp/gobuilder-mips64le/go/src/runtime/runtime-gdb.py", line 84, in len
return int(self.val['len'])
gdb.MemoryError: Cannot access memory at address 0x120193958
Error occurred in Python command: Cannot access memory at address 0x120193958
END
BEGIN print mapvar
No symbol "mapvar" in current context.
END
BEGIN print strvar
No symbol "strvar" in current context.
END
BEGIN info locals
No frame selected.
END
BEGIN goroutine 1 bt
Traceback (most recent call last):
File "/tmp/gobuilder-mips64le/go/src/runtime/runtime-gdb.py", line 536, in invoke
self.invoke_per_goid(goid, cmd)
File "/tmp/gobuilder-mips64le/go/src/runtime/runtime-gdb.py", line 539, in invoke_per_goid
pc, sp = find_goroutine(goid)
File "/tmp/gobuilder-mips64le/go/src/runtime/runtime-gdb.py", line 467, in find_goroutine
for ptr in SliceValue(gdb.parse_and_eval("'runtime.allgs'")):
File "/tmp/gobuilder-mips64le/go/src/runtime/runtime-gdb.py", line 91, in __getitem__
if i < 0 or i >= self.len:
File "/tmp/gobuilder-mips64le/go/src/runtime/runtime-gdb.py", line 84, in len
return int(self.val['len'])
gdb.MemoryError: Cannot access memory at address 0x120193958
Error occurred in Python command: Cannot access memory at address 0x120193958
END
BEGIN goroutine 2 bt
Traceback (most recent call last):
File "/tmp/gobuilder-mips64le/go/src/runtime/runtime-gdb.py", line 536, in invoke
self.invoke_per_goid(goid, cmd)
File "/tmp/gobuilder-mips64le/go/src/runtime/runtime-gdb.py", line 539, in invoke_per_goid
pc, sp = find_goroutine(goid)
File "/tmp/gobuilder-mips64le/go/src/runtime/runtime-gdb.py", line 467, in find_goroutine
for ptr in SliceValue(gdb.parse_and_eval("'runtime.allgs'")):
File "/tmp/gobuilder-mips64le/go/src/runtime/runtime-gdb.py", line 91, in __getitem__
if i < 0 or i >= self.len:
File "/tmp/gobuilder-mips64le/go/src/runtime/runtime-gdb.py", line 84, in len
return int(self.val['len'])
gdb.MemoryError: Cannot access memory at address 0x120193958
Error occurred in Python command: Cannot access memory at address 0x120193958
END
BEGIN goroutine all bt
Traceback (most recent call last):
File "/tmp/gobuilder-mips64le/go/src/runtime/runtime-gdb.py", line 530, in invoke
for ptr in SliceValue(gdb.parse_and_eval("'runtime.allgs'")):
File "/tmp/gobuilder-mips64le/go/src/runtime/runtime-gdb.py", line 91, in __getitem__
if i < 0 or i >= self.len:
File "/tmp/gobuilder-mips64le/go/src/runtime/runtime-gdb.py", line 84, in len
return int(self.val['len'])
gdb.MemoryError: Cannot access memory at address 0x120193958
Error occurred in Python command: Cannot access memory at address 0x120193958
END
Breakpoint 2 at 0x1200ad55c: file /tmp/gobuilder-mips64le/tmp/go-build883242937/main.go, line 20.
Cannot execute this command while the selected thread is running.
BEGIN goroutine 1 bt at the end
Traceback (most recent call last):
File "/tmp/gobuilder-mips64le/go/src/runtime/runtime-gdb.py", line 536, in invoke
self.invoke_per_goid(goid, cmd)
File "/tmp/gobuilder-mips64le/go/src/runtime/runtime-gdb.py", line 539, in invoke_per_goid
pc, sp = find_goroutine(goid)
File "/tmp/gobuilder-mips64le/go/src/runtime/runtime-gdb.py", line 467, in find_goroutine
for ptr in SliceValue(gdb.parse_and_eval("'runtime.allgs'")):
File "/tmp/gobuilder-mips64le/go/src/runtime/runtime-gdb.py", line 91, in __getitem__
if i < 0 or i >= self.len:
File "/tmp/gobuilder-mips64le/go/src/runtime/runtime-gdb.py", line 84, in len
return int(self.val['len'])
gdb.MemoryError: Cannot access memory at address 0x120193958
Error occurred in Python command: Cannot access memory at address 0x120193958
END
runtime-gdb_test.go:265: info goroutines failed: Traceback (most recent call last):
File "/tmp/gobuilder-mips64le/go/src/runtime/runtime-gdb.py", line 442, in invoke
for ptr in SliceValue(gdb.parse_and_eval("'runtime.allgs'")):
File "/tmp/gobuilder-mips64le/go/src/runtime/runtime-gdb.py", line 91, in __getitem__
if i < 0 or i >= self.len:
File "/tmp/gobuilder-mips64le/go/src/runtime/runtime-gdb.py", line 84, in len
return int(self.val['len'])
gdb.MemoryError: Cannot access memory at address 0x120193958
Error occurred in Python command: Cannot access memory at address 0x120193958
FAIL
FAIL runtime 177.962s
```
|
NeedsInvestigation,arch-mips,compiler/runtime
|
low
|
Critical
|
579,328,266 |
TypeScript
|
Call hierarchy becomes infinitely nested on derived class methods using super call
|
**TypeScript Version:** 3.8.3
**Search Terms:**
call hierarchy, call tree, find references, function calls
**Code**
```ts
// A *self-contained* demonstration of the problem follows...
// Test this by running 'Show Call Hierarchy' command on the line marked below
class A {
  foo() {
  }
}

class B extends A {
  foo() { // <-- show call hierarchy
    super.foo();
  }
}

function bar() {
  let a: A = new B();
  a.foo();
}
```
**Expected behavior:**
Run the 'Show Call Hierarchy' command in vscode on the line marked above. Resulting call tree should look like this:

**Actual behavior:**
Instead, an infinitely nested call tree is generated:

NOTE: if the line with the `super.foo()` call is commented out, the call tree is displayed as expected.
**Playground Link:**
**Related Issues:**
|
Bug,Help Wanted,Domain: Signature Help
|
low
|
Critical
|
579,395,692 |
TypeScript
|
@description JSDoc tag interferes with callback parameter documentation
|
**TypeScript Version:** 3.8.3
**Search Terms:** jsdoc param
**Expected behavior:**
Below code creates no errors, type of `x` is recognized as `(value: number) => boolean`
**Actual behavior:**
Type of `x` is `() => boolean`, and the invocation creates a type error.
Removing the `@description` tag makes the code work as expected.
My interpretation of https://jsdoc.app/tags-description.html is that you should be able to place `@description` anywhere in the comment without interfering with other tags:
> By using the @description tag, you can place the description anywhere in the JSDoc comment.
**Related Issues:** Ran into a crash #37265 while trying to repro this with template parameters.
**Code**
```ts
// @ts-check
/**
* @callback IterablePredicate
* @description return true if given element of iterable matches an internal condition
* @param {number} value the current item being evaluated
* @returns {boolean} true if the entry satisfies given condition
**/
/**
* @type {IterablePredicate}
*/
let x;
x(3) // Expected 0 arguments, but got 1.
```
<details><summary><b>Compiler Options</b></summary>
```json
{
"compilerOptions": {
"noImplicitAny": true,
"strictFunctionTypes": true,
"strictPropertyInitialization": true,
"strictBindCallApply": true,
"noImplicitThis": true,
"noImplicitReturns": true,
"alwaysStrict": true,
"esModuleInterop": true,
"checkJs": true,
"allowJs": true,
"declaration": true,
"experimentalDecorators": true,
"emitDecoratorMetadata": true,
"moduleResolution": 2,
"target": "ES2017",
"jsx": "React",
"module": "ESNext"
}
}
```
</details>
**Playground Link:** [Provided](https://www.typescriptlang.org/v2/en/play?useJavaScript=true#code/FAehAIAEBcGcFoDGALApog1sUAqHxwcpEBDAGzICMTNwBJaVAJxMrNQAUnUATAS1KMCRSD1SxETPgAdofAPYA7cN2gBXJsuhM1qcHwBm4AOZ8AbqmWp2AW0vRw8o30Ys2emyWgpx4Esr5FV0VycEQlfjklYShpEhYbcABvRTUbSmYAX3Azcl1waDQwjW4g-UZEjMDjcFRcsjUvXhjIVQ1FWGTKeXl2f2ztfMMCovsmAE9wWC8+WAM+X1MLZXDFSIVFYRwQbBA8Fuhx6T0khmZWdi5eASbM4R32BwAPAG5gJ4AKAGYASiA)
|
Bug,Help Wanted,checkJs,PursuitFellowship
|
low
|
Critical
|
579,433,921 |
node
|
Move to Electron's docs-parser tooling for Node.js documentation
|
Over the past four months, I've gotten more and more involved in Electron as a project. Generally, they are extremely forward on automation and tooling intended to reduce maintainer burden since they are such a small team maintaining such a large project.
One of the tools they created, [docs-parser](https://github.com/electron/docs-parser), is especially interesting and I think Node.js could benefit from adopting it as a fundamental piece of our documentation tooling.
## Why
I see quite significant benefits to adopting docs-parser:
- **More consistency in Node.js documentation.** docs-parser _requires_ you to write according to the [documentation styleguide](https://github.com/electron/electron/blob/master/docs/styleguide.md) that Electron uses. As a reader and consumer I've never had a particularly good experience using our docs. Whether it's the hierarchical improvements or the relative consistency from section to section, docs-parser's enforced writing style helps ensure that our users' experience while consuming documentation is _consistent_.
- **Straightforward detection of missing context.** In the `v8.getHeapStatistics()` example later in this issue, you can see that each of the properties on the returned object has an empty "description" property. This emptiness is extremely useful in finding places where additional context can be added. For example, it's trivial to write a tool that checks for empty "description" properties (a small sketch of such a check follows this list). This provides an excellent path forward to enriching our documentation with context that's useful to users who don't have the same understanding that we do.
- **Potential internal tooling to reduce maintainer burden.** Electron doesn't just ship this by itself. They have two other tools, [typescript-definitions](https://github.com/electron/typescript-definitions) and [archaeologist](https://github.com/electron/archaeologist) which - when used together - provide a GitHub check that surfaces a representation of the documentation API as a .d.ts, diffing the current PR's representation with the representation of the documentation in `master`. This provides a way for maintainers to parse out the changes to APIs in code in addition to just personally reading the docs themselves.
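As a small illustration of the "empty description" check mentioned above (my own sketch; the output filename is hypothetical), walking the docs-parser JSON could look like this:

```python
import json

def find_empty_descriptions(node, path=""):
    # Walk the docs-parser JSON and report every empty "description" field.
    if isinstance(node, dict):
        if node.get("description") == "":
            print(path or "<root>", "->", node.get("name", "<unnamed>"))
        for key, value in node.items():
            find_empty_descriptions(value, f"{path}/{key}")
    elif isinstance(node, list):
        for i, item in enumerate(node):
            find_empty_descriptions(item, f"{path}[{i}]")

with open("node-api.json") as f:  # hypothetical docs-parser output file
    find_empty_descriptions(json.load(f))
```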
## How does docs-parser differ from what we currently have?
### Document Structure
docs-parser requires a specific document structure that is currently documented in [the Electron Styleguide API Reference section](https://github.com/electron/electron/blob/master/docs/styleguide.md#api-references).
This structure has specific expectations about titles, descriptions, modules, methods, events, classes (undocumented here: multi-class mode, which would be needed in Node.js and is currently supported by docs-parser), static properties, instance methods, and instance properties.
I've worked on converting Node.js's `querystring`, `v8`, and `worker-threads` docs in a personal repo which you can find here in the docs/api directory: [bnb/node-docs-parser](https://github.com/bnb/node-docs-parser). Please take a look if you're interested in what the differences are - the first commit on each file was the state I started with and the final commit in each is the docs-parser version. Additionally, there's a directory with the original versions that you can compare side-by-side if you'd prefer to approach it that way.
In doing this I found that - while a few things did need to be shuffled around to correctly parse - the overall updated structure was more clear and approachable with minimal additional effort.
### Actual Markdown
docs-parser requires a comparatively strict structure around markdown _input_, since it directly parses markdown.
Here's an example from Node.js:
```md
## `querystring.escape(str)`
<!-- YAML
added: v0.1.25
-->
* `str` {string}
The `querystring.escape()` method performs URL percent-encoding on the given
`str` in a manner that is optimized for the specific requirements of URL
query strings.
The `querystring.escape()` method is used by `querystring.stringify()` and is
generally not expected to be used directly. It is exported primarily to allow
application code to provide a replacement percent-encoding implementation if
necessary by assigning `querystring.escape` to an alternative function.
```
And here's the current equivalent in docs-parser:
```
### `querystring.escape(str)`
- `str` String
The `querystring.escape()` method performs URL percent-encoding on the given
`str` in a manner that is optimized for the specific requirements of URL
query strings.
The `querystring.escape()` method is used by `querystring.stringify()` and is
generally not expected to be used directly. It is exported primarily to allow
application code to provide a replacement percent-encoding implementation if
necessary by assigning `querystring.escape` to an alternative function.
```
They seem nearly identical, and indeed they basically are. This is a good example of how minor some of the necessary changes are. Here's another slightly more complicated example:
#### Node.js version:
```
### `v8.getHeapStatistics()`
Returns `Object`
* `total_heap_size` number
* `total_heap_size_executable` number
* `total_physical_size` number
* `total_available_size` number
* `used_heap_size` number
* `heap_size_limit` number
* `malloced_memory` number
* `peak_malloced_memory` number
* `does_zap_garbage` number
* `number_of_native_contexts` number
* `number_of_detached_contexts` number
`does_zap_garbage` is a 0/1 boolean, which signifies whether the
`--zap_code_space` option is enabled or not. This makes V8 overwrite heap
garbage with a bit pattern. The RSS footprint (resident memory set) gets bigger
because it continuously touches all heap pages and that makes them less likely
to get swapped out by the operating system.
`number_of_native_contexts` The value of native_context is the number of the
top-level contexts currently active. Increase of this number over time indicates
a memory leak.
`number_of_detached_contexts` The value of detached_context is the number
of contexts that were detached and not yet garbage collected. This number
being non-zero indicates a potential memory leak.
<!-- eslint-skip -->
` ` `js
{
total_heap_size: 7326976,
total_heap_size_executable: 4194304,
total_physical_size: 7326976,
total_available_size: 1152656,
used_heap_size: 3476208,
heap_size_limit: 1535115264,
malloced_memory: 16384,
peak_malloced_memory: 1127496,
does_zap_garbage: 0,
number_of_native_contexts: 1,
number_of_detached_contexts: 0
}
` ` `
```
#### docs-parser version:
```md
### `v8.getHeapStatistics()`
Returns `Object`
* `total_heap_size` number
* `total_heap_size_executable` number
* `total_physical_size` number
* `total_available_size` number
* `used_heap_size` number
* `heap_size_limit` number
* `malloced_memory` number
* `peak_malloced_memory` number
* `does_zap_garbage` number
* `number_of_native_contexts` number
* `number_of_detached_contexts` number
`does_zap_garbage` is a 0/1 boolean, which signifies whether the
`--zap_code_space` option is enabled or not. This makes V8 overwrite heap
garbage with a bit pattern. The RSS footprint (resident memory set) gets bigger
because it continuously touches all heap pages and that makes them less likely
to get swapped out by the operating system.
`number_of_native_contexts` The value of native_context is the number of the
top-level contexts currently active. Increase of this number over time indicates
a memory leak.
`number_of_detached_contexts` The value of detached_context is the number
of contexts that were detached and not yet garbage collected. This number
being non-zero indicates a potential memory leak.
<!-- eslint-skip -->
` ` `js
{
total_heap_size: 7326976,
total_heap_size_executable: 4194304,
total_physical_size: 7326976,
total_available_size: 1152656,
used_heap_size: 3476208,
heap_size_limit: 1535115264,
malloced_memory: 16384,
peak_malloced_memory: 1127496,
does_zap_garbage: 0,
number_of_native_contexts: 1,
number_of_detached_contexts: 0
}
` ` `
```
However, docs-parser has an interesting technical benefit. Like our current setup, it outputs JSON. Compare the two following JSON _outputs_:
#### Node.js JSON output:
```
{
"textRaw": "`v8.getHeapStatistics()`",
"type": "method",
"name": "getHeapStatistics",
"meta": {
"added": [
"v1.0.0"
],
"changes": [
{
"version": "v7.2.0",
"pr-url": "https://github.com/nodejs/node/pull/8610",
"description": "Added `malloced_memory`, `peak_malloced_memory`, and `does_zap_garbage`."
},
{
"version": "v7.5.0",
"pr-url": "https://github.com/nodejs/node/pull/10186",
"description": "Support values exceeding the 32-bit unsigned integer range."
}
]
},
"signatures": [
{
"return": {
"textRaw": "Returns: {Object}",
"name": "return",
"type": "Object"
},
"params": []
}
],
"desc": "<p>Returns an object with the following properties:</p>\n<ul>\n<li><code>total_heap_size</code> <a href=\"https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures#Number_type\" class=\"type\"><number></a></li>\n<li><code>total_heap_size_executable</code> <a href=\"https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures#Number_type\" class=\"type\"><number></a></li>\n<li><code>total_physical_size</code> <a href=\"https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures#Number_type\" class=\"type\"><number></a></li>\n<li><code>total_available_size</code> <a href=\"https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures#Number_type\" class=\"type\"><number></a></li>\n<li><code>used_heap_size</code> <a href=\"https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures#Number_type\" class=\"type\"><number></a></li>\n<li><code>heap_size_limit</code> <a href=\"https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures#Number_type\" class=\"type\"><number></a></li>\n<li><code>malloced_memory</code> <a href=\"https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures#Number_type\" class=\"type\"><number></a></li>\n<li><code>peak_malloced_memory</code> <a href=\"https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures#Number_type\" class=\"type\"><number></a></li>\n<li><code>does_zap_garbage</code> <a href=\"https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures#Number_type\" class=\"type\"><number></a></li>\n<li><code>number_of_native_contexts</code> <a href=\"https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures#Number_type\" class=\"type\"><number></a></li>\n<li><code>number_of_detached_contexts</code> <a href=\"https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures#Number_type\" class=\"type\"><number></a></li>\n</ul>\n<p><code>does_zap_garbage</code> is a 0/1 boolean, which signifies whether the\n<code>--zap_code_space</code> option is enabled or not. This makes V8 overwrite heap\ngarbage with a bit pattern. The RSS footprint (resident memory set) gets bigger\nbecause it continuously touches all heap pages and that makes them less likely\nto get swapped out by the operating system.</p>\n<p><code>number_of_native_contexts</code> The value of native_context is the number of the\ntop-level contexts currently active. Increase of this number over time indicates\na memory leak.</p>\n<p><code>number_of_detached_contexts</code> The value of detached_context is the number\nof contexts that were detached and not yet garbage collected. This number\nbeing non-zero indicates a potential memory leak.</p>\n<!-- eslint-skip -->\n<pre><code class=\"language-js\">{\n total_heap_size: 7326976,\n total_heap_size_executable: 4194304,\n total_physical_size: 7326976,\n total_available_size: 1152656,\n used_heap_size: 3476208,\n heap_size_limit: 1535115264,\n malloced_memory: 16384,\n peak_malloced_memory: 1127496,\n does_zap_garbage: 0,\n number_of_native_contexts: 1,\n number_of_detached_contexts: 0\n}\n</code></pre>"
},
```
#### docs-parser JSON output:
```
{
"name": "getHeapStatistics",
"signature": "()",
"description": "* `total_heap_size` number\n* `total_heap_size_executable` number\n* `total_physical_size` number\n* `total_available_size` number\n* `used_heap_size` number\n* `heap_size_limit` number\n* `malloced_memory` number\n* `peak_malloced_memory` number\n* `does_zap_garbage` number\n* `number_of_native_contexts` number\n* `number_of_detached_contexts` number\n\n`does_zap_garbage` is a 0/1 boolean, which signifies whether the `--zap_code_space` option is enabled or not. This makes V8 overwrite heap garbage with a bit pattern. The RSS footprint (resident memory set) gets bigger because it continuously touches all heap pages and that makes them less likely to get swapped out by the operating system.\n\n`number_of_native_contexts` The value of native_context is the number of the top-level contexts currently active. Increase of this number over time indicates a memory leak.\n\n`number_of_detached_contexts` The value of detached_context is the number of contexts that were detached and not yet garbage collected. This number being non-zero indicates a potential memory leak.\n\n<!-- eslint-skip -->",
"parameters": [],
"returns": {
"collection": false,
"type": "Object",
"properties": [
{
"name": "total_heap_size",
"description": "",
"required": true,
"additionalTags": [],
"collection": false,
"type": "number"
},
{
"name": "total_heap_size_executable",
"description": "",
"required": true,
"additionalTags": [],
"collection": false,
"type": "number"
},
{
"name": "total_physical_size",
"description": "",
"required": true,
"additionalTags": [],
"collection": false,
"type": "number"
},
{
"name": "total_available_size",
"description": "",
"required": true,
"additionalTags": [],
"collection": false,
"type": "number"
},
{
"name": "used_heap_size",
"description": "",
"required": true,
"additionalTags": [],
"collection": false,
"type": "number"
},
{
"name": "heap_size_limit",
"description": "",
"required": true,
"additionalTags": [],
"collection": false,
"type": "number"
},
{
"name": "malloced_memory",
"description": "",
"required": true,
"additionalTags": [],
"collection": false,
"type": "number"
},
{
"name": "peak_malloced_memory",
"description": "",
"required": true,
"additionalTags": [],
"collection": false,
"type": "number"
},
{
"name": "does_zap_garbage",
"description": "",
"required": true,
"additionalTags": [],
"collection": false,
"type": "number"
},
{
"name": "number_of_native_contexts",
"description": "",
"required": true,
"additionalTags": [],
"collection": false,
"type": "number"
},
{
"name": "number_of_detached_contexts",
"description": "",
"required": true,
"additionalTags": [],
"collection": false,
"type": "number"
}
]
},
"additionalTags": []
},
```
The latter output has a significantly larger amount of _useful_ context extracted from the same Markdown.
## Current Challenges and Potential Blockers
I would like to get a feeling for how folks feel about these potential technical/functional blockers.
- [ ] Three elements of metadata that Node.js uses are currently missing from docs-parser: changes, introduced in, and stability.
- Potential solution: I've talked with @MarshallOfSound and it seems that there's potential for adding an extensible metadata section to docs-parser.
- [ ] docs-parser does not currently output individual, per-API JSON files.
- Potential solution: this could be PR'ed or extracted in an additional step from the all-in-one file.
- [ ] some elements of docs-parser are currently hardcoded to be electron-specific.
- Potential solution: [electron/docs-parser#21](https://github.com/electron/docs-parser/pull/21), additional configuration by file that @MarshallOfSound said he'd be interested in shipping.
- [ ] Bug in the multi-class mode which results in a parsing error in docs that have multiple classes ([@electronjs/docs-parser#27](https://github.com/electron/docs-parser/issues/27))
- Potential solution: slated to be fixed.
|
doc,discuss
|
medium
|
Critical
|
579,450,224 |
go
|
x/pkgsite: searching by smartcontractkit and chainlink does not show github.com/smartcontractkit/chainlink
|
There's a fairly active Go project on GitHub that's not found on go.dev so there might be others.
[smartcontractkit/chainlink](https://github.com/smartcontractkit/chainlink) has 6 releases this year, 42 contributors, and 9,372 commits.

Just a hunch: could this be related to the project being installed using `git clone` instead of `go get`?
### What did you expect to see?
smartcontractkit/chainlink existing on go.dev.
### What did you see instead?
smartcontractkit/chainlink is not found in go.dev.
|
NeedsInvestigation,pkgsite,pkgsite/search
|
low
|
Major
|
579,478,841 |
kubernetes
|
Cannot set the eventTime field when creating/updating Events
|
**What happened**:
We cannot create/update an `Event` resource if we set the `eventTime` field to a valid value, e.g., `"2020-03-05T19:40:14.000000Z"`. The Kubernetes API responds with HTTP 422 and the following error:
```
events "istio-ingressgateway.15f97fc1c9cd685f" was not valid:
* reportingController: Required value
* reportingController: Invalid value: "": name part must be non-empty
* reportingController: Invalid value: "": name part must consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyName', or 'my.name', or '123-abc', regex used for validation is '([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]')
* reportingInstance: Required value
* action: Required value
```
If the `eventTime` field is `null`, the `Event` resource is created/updated successfully. In both cases, the `Event` resource does have a name.
**What you expected to happen**:
The `Event` resource should be created/updated, and it should have the provided value in its `eventTime` field.
**How to reproduce it (as minimally and precisely as possible)**:
In a Kubernetes cluster, choose a random event and edit it with `kubectl edit events <event>`. In the editor window, change `eventTime` to `"2020-03-05T19:40:14.000000Z"`. The update will fail with the aforementioned message.
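For completeness, a programmatic version of the same reproduction (my own sketch using the official Python client; the namespace and event choice are placeholders):

```python
import datetime
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Take any existing event in the "default" namespace and try to set eventTime on it.
ev = v1.list_namespaced_event("default").items[0]
ev.event_time = datetime.datetime(2020, 3, 5, 19, 40, 14, tzinfo=datetime.timezone.utc)

# Expected to fail with HTTP 422 and the validation errors quoted above.
v1.patch_namespaced_event(ev.metadata.name, "default", ev)
```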
**Anything else we need to know?**:
We see that the same applies to v1.15.
**Environment**:
- Kubernetes version (use `kubectl version`):
```
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:18:23Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.10-gke.22", GitCommit:"be6e649e67f82e64e43e32119b2b14250a7f9542", GitTreeState:"clean", BuildDate:"2020-01-30T19:02:16Z", GoVersion:"go1.12.12b4", Compiler:"gc", Platform:"linux/amd64"}
```
- Cloud provider or hardware configuration: GKE
- OS (e.g: `cat /etc/os-release`): Ubuntu 18.04.4 LTS
- Kernel (e.g. `uname -a`): `Linux hostname 4.15.0-1052-gke #55-Ubuntu SMP Tue Feb 11 21:45:20 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux`
- Install tools:
- Network plugin and version (if this is a network-related bug):
- Others:
|
kind/bug,sig/api-machinery,lifecycle/frozen,triage/accepted
|
low
|
Critical
|
579,495,648 |
go
|
cmd/go: "+build comment … blank line" warning not produced by "go run" or "go build"
|
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.14 linux/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/wolf/.cache/go-build"
GOENV="/home/wolf/.config/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOINSECURE=""
GONOPROXY="go.showmax.cc"
GONOSUMDB="go.showmax.cc"
GOOS="linux"
GOPATH="/home/wolf/go"
GOPRIVATE="go.showmax.cc"
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/lib/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/lib/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/tmp/ww/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build147562273=/tmp/go-build -gno-record-gcc-switches"
</pre></details>
### What did you do?
<!--
If possible, provide a recipe for reproducing the error.
A complete runnable program is good.
A link on play.golang.org is best.
-->
```
+$ tree .
.
├── a.go
├── b.go
├── c.go
└── go.mod
0 directories, 4 files
+$ cat a.go
package main
import "fmt"
func main() {
fmt.Println(value)
}
+$ cat b.go
// +build !windows
package main
var value = "POSIX"
+$ cat c.go
// +build windows
package main
var value = "WIN"
+$ cat go.mod
module test
go 1.14
+$ go run .
# test
./c.go:4:5: value redeclared in this block
previous declaration at ./b.go:4:5
```
### What did you expect to see?
```
# test
./b.go:1:1: +build comment must appear before package clause and be followed by a blank line
./c.go:1:1: +build comment must appear before package clause and be followed by a blank line
```
### What did you see instead?
```
# test
./c.go:4:5: value redeclared in this block
previous declaration at ./b.go:4:5
```
|
NeedsInvestigation
|
low
|
Critical
|
579,509,580 |
pytorch
|
Multiple CMake target errors ever since commit 0e52627358
|
## 🐛 Bug
Building pytorch on platform `ppc64le`, since Feb 28, has resulted in a flurry of CMake Error
messages partway through the build. An example of the error message appears below, though the error will repeat with some variations and different target names:
```core header install: /home/jenkins/pytorch/build/aten/src/ATen/core/TensorBody.h
core header install: /home/jenkins/pytorch/build/aten/src/ATen/core/TensorMethods.h
CMake Error at cmake/Codegen.cmake:207 (add_custom_target):
add_custom_target cannot create target "ATEN_CPU_FILES_GEN_TARGET" because
another target with the same name already exists. The existing target is a
custom target created in source directory "/home/jenkins/pytorch/caffe2".
See documentation for policy CMP0002 for more details.
Call Stack (most recent call first):
caffe2/CMakeLists.txt:2 (include)
```
To see a full log with the varying CMake errors, here is an example CI output on ppc64le:
https://powerci.osuosl.org/job/pytorch-linux-cuda92-cudnn7-py3-mpi-build-only-gpu/518/consoleFull
## To Reproduce
Steps to reproduce the behavior:
1. Run the build scripts for pytorch on a `ppc64le` system. In my case, it is a docker container
running Ubuntu and using CUDA 10.2 (or 10.1 or 10.0), but I'm doubtful the OS or CUDA level matters. Wait a few minutes; the errors seem to start occurring in under 5 min.
NOTE:
I manually tried different builds with different levels of pytorch master until I narrowed it
down to the exact pull request on which the problem started to happen: The commit ID
where it starts failing is `0e52627358` -- this corresponds to PR #338669. The errors
don't appear if I simply build from the prior commit.
Reference: https://github.com/pytorch/pytorch/commit/0e52627358c4ec60243f9d8e19b722b7b2265ffc
Not being an expert in this, I haven't determined what it is in this commit that triggered the error,
but it has happened continuously in every CI run since that change was included.
I'll continue to look a bit closer, but I wanted to get the issue documented. Perhaps @kimishpatel has some suggestions or thoughts on what to look at (as the user who made the aforementioned commit)?
## Environment
Ubuntu 18.04.4
CUDA 10.2 (also reproduces on 10.1 and 10.0).
Build using current master branch of pytorch, or any since 0e52627358
Built on ppc64le in a docker container, but I don't know if this is necessarily tied to
the one platform or not.
|
module: build,triaged
|
medium
|
Critical
|
579,521,840 |
pytorch
|
Eigen version for PyTorch ?
|
Which version of Eigen works with PyTorch?
```
In file included from /usr/include/eigen3/unsupported/Eigen/CXX11/src/util/CXX11Meta.h:14:0,
from /usr/include/eigen3/unsupported/Eigen/CXX11/Tensor:31,
from ....../pytorch/caffe2/operators/conv_op_eigen.cc:10:
/usr/include/eigen3/unsupported/Eigen/CXX11/src/util/EmulateArray.h:254:30: error: redeclared with 1 template parameter
template <typename T> struct array_size;
^~~~~~~~~~
In file included from /usr/include/eigen3/Eigen/Core:151:0,
from ....../pytorch/caffe2/operators/conv_op_eigen.cc:1:
/usr/include/eigen3/Eigen/src/Core/util/Meta.h:393:55: note: previous declaration ‘template<class T, class EnableIf> struct Eigen::internal::array_size’ used 2 template parameters
template<typename T, typename EnableIf = void> struct array_size {
^~~~~~~~~~
In file included from /usr/include/eigen3/unsupported/Eigen/CXX11/src/util/CXX11Meta.h:14:0,
from /usr/include/eigen3/unsupported/Eigen/CXX11/Tensor:31,
from ....../pytorch/caffe2/operators/conv_op_eigen.cc:10:
/usr/include/eigen3/unsupported/Eigen/CXX11/src/util/EmulateArray.h:255:41: error: redefinition of ‘struct Eigen::internal::array_size<const std::array<_Tp, _Nm> >’
template<class T, std::size_t N> struct array_size<const std::array<T,N> > {
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /usr/include/eigen3/Eigen/Core:151:0,
from ....../pytorch/caffe2/operators/conv_op_eigen.cc:1:
/usr/include/eigen3/Eigen/src/Core/util/Meta.h:409:44: note: previous definition of ‘struct Eigen::internal::array_size<const std::array<_Tp, _Nm> >’
template<typename T, std::size_t N> struct array_size<const std::array<T,N> > {
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /usr/include/eigen3/unsupported/Eigen/CXX11/src/util/CXX11Meta.h:14:0,
from /usr/include/eigen3/unsupported/Eigen/CXX11/Tensor:31,
from ....../pytorch/caffe2/operators/conv_op_eigen.cc:10:
/usr/include/eigen3/unsupported/Eigen/CXX11/src/util/EmulateArray.h:258:30: error: redeclared with 1 template parameter
template <typename T> struct array_size;
^~~~~~~~~~
In file included from /usr/include/eigen3/Eigen/Core:151:0,
from ....../pytorch/caffe2/operators/conv_op_eigen.cc:1:
/usr/include/eigen3/Eigen/src/Core/util/Meta.h:393:55: note: previous declaration ‘template<class T, class EnableIf> struct Eigen::internal::array_size’ used 2 template parameters
template<typename T, typename EnableIf = void> struct array_size {
^~~~~~~~~~
In file included from /usr/include/eigen3/unsupported/Eigen/CXX11/src/util/CXX11Meta.h:14:0,
from /usr/include/eigen3/unsupported/Eigen/CXX11/Tensor:31,
from ....../pytorch/caffe2/operators/conv_op_eigen.cc:10:
/usr/include/eigen3/unsupported/Eigen/CXX11/src/util/EmulateArray.h:259:41: error: redefinition of ‘struct Eigen::internal::array_size<std::array<_Tp, _Nm> >’
template<class T, std::size_t N> struct array_size<std::array<T,N> > {
^~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /usr/include/eigen3/Eigen/Core:151:0,
from ....../pytorch/caffe2/operators/conv_op_eigen.cc:1:
/usr/include/eigen3/Eigen/src/Core/util/Meta.h:412:44: note: previous definition of ‘struct Eigen::internal::array_size<std::array<_Tp, _Nm> >’
template<typename T, std::size_t N> struct array_size<std::array<T,N> > {
^~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /usr/include/eigen3/unsupported/Eigen/CXX11/Tensor:113:0,
from ....../pytorch/caffe2/operators/conv_op_eigen.cc:10:
/usr/include/eigen3/unsupported/Eigen/CXX11/src/Tensor/TensorContraction.h: In member function ‘void Eigen::TensorContractionEvaluatorBase<Derived>::evalGemm(Eigen::TensorContractionEvaluatorBase<Derived>::Scalar*) const’:
/usr/include/eigen3/unsupported/Eigen/CXX11/src/Tensor/TensorContraction.h:466:111: error: wrong number of template arguments (6, should be at least 7)
internal::gemm_pack_lhs<LhsScalar, Index, typename LhsMapper::SubMapper, mr, Traits::LhsProgress, ColMajor> pack_lhs;
^
In file included from /usr/include/eigen3/Eigen/Core:270:0,
from ....../pytorch/caffe2/operators/conv_op_eigen.cc:1:
/usr/include/eigen3/Eigen/src/Core/util/BlasUtil.h:28:8: note: provided for ‘template<class Scalar, class Index, class DataMapper, int Pack1, int Pack2, class Packet, int StorageOrder, bool Conjugate, bool PanelMode> struct Eigen::internal::gemm_pack_lhs’
struct gemm_pack_lhs;
^~~~~~~~~~~~~
In file included from /usr/include/eigen3/unsupported/Eigen/CXX11/Tensor:113:0,
```
|
module: build,triaged
|
low
|
Critical
|
579,525,002 |
kubernetes
|
Stateful set controller does not appear to backoff when pods are evicted
|
The SS controller appears to go into a very tight loop when the `Should recreate evicted statefulset` e2e test runs. The controller should go into backoff after an eviction for that pod instead of hammering the node.
```
I0311 16:27:00.916503 1968 kubelet.go:1913] SyncLoop (ADD, "api"): "ss-0_e2e-statefulset-1676(56d12171-0f09-48a1-bf6e-9bddbaaaccdf)"
I0311 16:27:00.919123 1968 predicate.go:132] Predicate failed on Pod: ss-0_e2e-statefulset-1676(56d12171-0f09-48a1-bf6e-9bddbaaaccdf), for reason: Predicate PodFitsHostPorts failed
I0311 16:27:00.928857 1968 status_manager.go:696] Patch status for pod "ss-0_e2e-statefulset-1676(56d12171-0f09-48a1-bf6e-9bddbaaaccdf)" with "{\"metadata\":{\"uid\":\"56d12171-0f09-48a1-bf6e-9bddbaaaccdf\"},\"status\":{\"message\":\"Pod Predicate PodFitsHostPorts failed\",\"phase\":\"Failed\",\"qosClass\":null
I0311 16:27:00.928950 1968 status_manager.go:723] Status for pod "ss-0_e2e-statefulset-1676(56d12171-0f09-48a1-bf6e-9bddbaaaccdf)" updated successfully after 9ms: (1, {Phase:Failed Conditions:[] Message:Pod Predicate PodFitsHostPorts failed Reason:PodFitsHostPorts NominatedNodeName: HostIP: PodIP: PodIPs:[] Sta
I0311 16:27:00.947085 1968 kubelet.go:1929] SyncLoop (DELETE, "api"): "ss-0_e2e-statefulset-1676(56d12171-0f09-48a1-bf6e-9bddbaaaccdf)"
I0311 16:27:00.947306 1968 status_manager.go:696] Patch status for pod "ss-0_e2e-statefulset-1676(56d12171-0f09-48a1-bf6e-9bddbaaaccdf)" with "{\"metadata\":{\"uid\":\"56d12171-0f09-48a1-bf6e-9bddbaaaccdf\"}}"
I0311 16:27:00.947326 1968 status_manager.go:721] Status for pod "ss-0_e2e-statefulset-1676(56d12171-0f09-48a1-bf6e-9bddbaaaccdf)" is up-to-date after 0s: (2)
W0311 16:27:00.953181 1968 status_manager.go:746] Failed to delete status for pod "ss-0_e2e-statefulset-1676(56d12171-0f09-48a1-bf6e-9bddbaaaccdf)": pods "ss-0" not found
I0311 16:27:00.955849 1968 kubelet.go:1923] SyncLoop (REMOVE, "api"): "ss-0_e2e-statefulset-1676(56d12171-0f09-48a1-bf6e-9bddbaaaccdf)"
I0311 16:27:00.955893 1968 kubelet.go:2122] Failed to delete pod "ss-0_e2e-statefulset-1676(56d12171-0f09-48a1-bf6e-9bddbaaaccdf)", err: pod not found
I0311 16:27:00.971667 1968 kubelet.go:1913] SyncLoop (ADD, "api"): "ss-0_e2e-statefulset-1676(db1fdaf1-f0ee-4ce3-90a4-d81212ff05b7)"
I0311 16:27:00.974231 1968 predicate.go:132] Predicate failed on Pod: ss-0_e2e-statefulset-1676(db1fdaf1-f0ee-4ce3-90a4-d81212ff05b7), for reason: Predicate PodFitsHostPorts failed
I0311 16:27:00.981680 1968 status_manager.go:696] Patch status for pod "ss-0_e2e-statefulset-1676(db1fdaf1-f0ee-4ce3-90a4-d81212ff05b7)" with "{\"metadata\":{\"uid\":\"db1fdaf1-f0ee-4ce3-90a4-d81212ff05b7\"},\"status\":{\"message\":\"Pod Predicate PodFitsHostPorts failed\",\"phase\":\"Failed\",\"qosClass\":null
I0311 16:27:00.981731 1968 status_manager.go:723] Status for pod "ss-0_e2e-statefulset-1676(db1fdaf1-f0ee-4ce3-90a4-d81212ff05b7)" updated successfully after 7ms: (1, {Phase:Failed Conditions:[] Message:Pod Predicate PodFitsHostPorts failed Reason:PodFitsHostPorts NominatedNodeName: HostIP: PodIP: PodIPs:[] Sta
I0311 16:27:01.004014 1968 kubelet.go:1929] SyncLoop (DELETE, "api"): "ss-0_e2e-statefulset-1676(db1fdaf1-f0ee-4ce3-90a4-d81212ff05b7)"
I0311 16:27:01.004231 1968 status_manager.go:696] Patch status for pod "ss-0_e2e-statefulset-1676(db1fdaf1-f0ee-4ce3-90a4-d81212ff05b7)" with "{\"metadata\":{\"uid\":\"db1fdaf1-f0ee-4ce3-90a4-d81212ff05b7\"}}"
I0311 16:27:01.004252 1968 status_manager.go:721] Status for pod "ss-0_e2e-statefulset-1676(db1fdaf1-f0ee-4ce3-90a4-d81212ff05b7)" is up-to-date after 0s: (2)
W0311 16:27:01.019111 1968 status_manager.go:746] Failed to delete status for pod "ss-0_e2e-statefulset-1676(db1fdaf1-f0ee-4ce3-90a4-d81212ff05b7)": pods "ss-0" not found
I0311 16:27:01.022066 1968 kubelet.go:1923] SyncLoop (REMOVE, "api"): "ss-0_e2e-statefulset-1676(db1fdaf1-f0ee-4ce3-90a4-d81212ff05b7)"
I0311 16:27:01.024450 1968 kubelet.go:2122] Failed to delete pod "ss-0_e2e-statefulset-1676(db1fdaf1-f0ee-4ce3-90a4-d81212ff05b7)", err: pod not found
I0311 16:27:01.041625 1968 kubelet.go:1913] SyncLoop (ADD, "api"): "ss-0_e2e-statefulset-1676(e4b733fb-dce6-4c25-8439-227ffea4f4c1)"
I0311 16:27:01.044126 1968 predicate.go:132] Predicate failed on Pod: ss-0_e2e-statefulset-1676(e4b733fb-dce6-4c25-8439-227ffea4f4c1), for reason: Predicate PodFitsHostPorts failed
I0311 16:27:01.054016 1968 status_manager.go:696] Patch status for pod "ss-0_e2e-statefulset-1676(e4b733fb-dce6-4c25-8439-227ffea4f4c1)" with "{\"metadata\":{\"uid\":\"e4b733fb-dce6-4c25-8439-227ffea4f4c1\"},\"status\":{\"message\":\"Pod Predicate PodFitsHostPorts failed\",\"phase\":\"Failed\",\"qosClass\":null
I0311 16:27:01.054064 1968 status_manager.go:723] Status for pod "ss-0_e2e-statefulset-1676(e4b733fb-dce6-4c25-8439-227ffea4f4c1)" updated successfully after 9ms: (1, {Phase:Failed Conditions:[] Message:Pod Predicate PodFitsHostPorts failed Reason:PodFitsHostPorts NominatedNodeName: HostIP: PodIP: PodIPs:[] Sta
I0311 16:27:01.073734 1968 kubelet.go:1929] SyncLoop (DELETE, "api"): "ss-0_e2e-statefulset-1676(e4b733fb-dce6-4c25-8439-227ffea4f4c1)"
I0311 16:27:01.074068 1968 status_manager.go:696] Patch status for pod "ss-0_e2e-statefulset-1676(e4b733fb-dce6-4c25-8439-227ffea4f4c1)" with "{\"metadata\":{\"uid\":\"e4b733fb-dce6-4c25-8439-227ffea4f4c1\"}}"
I0311 16:27:01.074108 1968 status_manager.go:721] Status for pod "ss-0_e2e-statefulset-1676(e4b733fb-dce6-4c25-8439-227ffea4f4c1)" is up-to-date after 0s: (2)
I0311 16:27:01.076842 1968 kubelet.go:1923] SyncLoop (REMOVE, "api"): "ss-0_e2e-statefulset-1676(e4b733fb-dce6-4c25-8439-227ffea4f4c1)"
I0311 16:27:01.076883 1968 kubelet.go:2122] Failed to delete pod "ss-0_e2e-statefulset-1676(e4b733fb-dce6-4c25-8439-227ffea4f4c1)", err: pod not found
W0311 16:27:01.078630 1968 status_manager.go:746] Failed to delete status for pod "ss-0_e2e-statefulset-1676(e4b733fb-dce6-4c25-8439-227ffea4f4c1)": pods "ss-0" not found
I0311 16:27:01.096107 1968 kubelet.go:1913] SyncLoop (ADD, "api"): "ss-0_e2e-statefulset-1676(44747d3d-745b-4495-93e0-4f72da04a371)"
I0311 16:27:01.102192 1968 predicate.go:132] Predicate failed on Pod: ss-0_e2e-statefulset-1676(44747d3d-745b-4495-93e0-4f72da04a371), for reason: Predicate PodFitsHostPorts failed
I0311 16:27:01.114357 1968 status_manager.go:696] Patch status for pod "ss-0_e2e-statefulset-1676(44747d3d-745b-4495-93e0-4f72da04a371)" with "{\"metadata\":{\"uid\":\"44747d3d-745b-4495-93e0-4f72da04a371\"},\"status\":{\"message\":\"Pod Predicate PodFitsHostPorts failed\",\"phase\":\"Failed\",\"qosClass\":null
I0311 16:27:01.114408 1968 status_manager.go:723] Status for pod "ss-0_e2e-statefulset-1676(44747d3d-745b-4495-93e0-4f72da04a371)" updated successfully after 12ms: (1, {Phase:Failed Conditions:[] Message:Pod Predicate PodFitsHostPorts failed Reason:PodFitsHostPorts NominatedNodeName: HostIP: PodIP: PodIPs:[] St
I0311 16:27:01.135433 1968 kubelet.go:1929] SyncLoop (DELETE, "api"): "ss-0_e2e-statefulset-1676(44747d3d-745b-4495-93e0-4f72da04a371)"
I0311 16:27:01.135659 1968 status_manager.go:696] Patch status for pod "ss-0_e2e-statefulset-1676(44747d3d-745b-4495-93e0-4f72da04a371)" with "{\"metadata\":{\"uid\":\"44747d3d-745b-4495-93e0-4f72da04a371\"}}"
I0311 16:27:01.135684 1968 status_manager.go:721] Status for pod "ss-0_e2e-statefulset-1676(44747d3d-745b-4495-93e0-4f72da04a371)" is up-to-date after 0s: (2)
I0311 16:27:01.140683 1968 kubelet.go:1923] SyncLoop (REMOVE, "api"): "ss-0_e2e-statefulset-1676(44747d3d-745b-4495-93e0-4f72da04a371)"
I0311 16:27:01.140738 1968 kubelet.go:2122] Failed to delete pod "ss-0_e2e-statefulset-1676(44747d3d-745b-4495-93e0-4f72da04a371)", err: pod not found
W0311 16:27:01.147333 1968 status_manager.go:746] Failed to delete status for pod "ss-0_e2e-statefulset-1676(44747d3d-745b-4495-93e0-4f72da04a371)": pods "ss-0" not found
I0311 16:27:01.167503 1968 kubelet.go:1913] SyncLoop (ADD, "api"): "ss-0_e2e-statefulset-1676(ff780fca-b041-4a49-8ee9-2e090a4ba4ec)"
I0311 16:27:01.171201 1968 predicate.go:132] Predicate failed on Pod: ss-0_e2e-statefulset-1676(ff780fca-b041-4a49-8ee9-2e090a4ba4ec), for reason: Predicate PodFitsHostPorts failed
I0311 16:27:01.182253 1968 status_manager.go:696] Patch status for pod "ss-0_e2e-statefulset-1676(ff780fca-b041-4a49-8ee9-2e090a4ba4ec)" with "{\"metadata\":{\"uid\":\"ff780fca-b041-4a49-8ee9-2e090a4ba4ec\"},\"status\":{\"message\":\"Pod Predicate PodFitsHostPorts failed\",\"phase\":\"Failed\",\"qosClass\":null
I0311 16:27:01.182306 1968 status_manager.go:723] Status for pod "ss-0_e2e-statefulset-1676(ff780fca-b041-4a49-8ee9-2e090a4ba4ec)" updated successfully after 10ms: (1, {Phase:Failed Conditions:[] Message:Pod Predicate PodFitsHostPorts failed Reason:PodFitsHostPorts NominatedNodeName: HostIP: PodIP: PodIPs:[] St
I0311 16:27:01.203646 1968 kubelet.go:1929] SyncLoop (DELETE, "api"): "ss-0_e2e-statefulset-1676(ff780fca-b041-4a49-8ee9-2e090a4ba4ec)"
I0311 16:27:01.203906 1968 status_manager.go:696] Patch status for pod "ss-0_e2e-statefulset-1676(ff780fca-b041-4a49-8ee9-2e090a4ba4ec)" with "{\"metadata\":{\"uid\":\"ff780fca-b041-4a49-8ee9-2e090a4ba4ec\"}}"
I0311 16:27:01.203955 1968 status_manager.go:721] Status for pod "ss-0_e2e-statefulset-1676(ff780fca-b041-4a49-8ee9-2e090a4ba4ec)" is up-to-date after 0s: (2)
W0311 16:27:01.212792 1968 status_manager.go:746] Failed to delete status for pod "ss-0_e2e-statefulset-1676(ff780fca-b041-4a49-8ee9-2e090a4ba4ec)": pods "ss-0" not found
I0311 16:27:01.215840 1968 kubelet.go:1923] SyncLoop (REMOVE, "api"): "ss-0_e2e-statefulset-1676(ff780fca-b041-4a49-8ee9-2e090a4ba4ec)"
I0311 16:27:01.215899 1968 kubelet.go:2122] Failed to delete pod "ss-0_e2e-statefulset-1676(ff780fca-b041-4a49-8ee9-2e090a4ba4ec)", err: pod not found
I0311 16:27:01.230930 1968 kubelet.go:1913] SyncLoop (ADD, "api"): "ss-0_e2e-statefulset-1676(84703c07-ef17-41ad-a74e-f02616301cb3)"
I0311 16:27:01.241160 1968 predicate.go:132] Predicate failed on Pod: ss-0_e2e-statefulset-1676(84703c07-ef17-41ad-a74e-f02616301cb3), for reason: Predicate PodFitsHostPorts failed
I0311 16:27:01.251997 1968 status_manager.go:696] Patch status for pod "ss-0_e2e-statefulset-1676(84703c07-ef17-41ad-a74e-f02616301cb3)" with "{\"metadata\":{\"uid\":\"84703c07-ef17-41ad-a74e-f02616301cb3\"},\"status\":{\"message\":\"Pod Predicate PodFitsHostPorts failed\",\"phase\":\"Failed\",\"qosClass\":null
I0311 16:27:01.252053 1968 status_manager.go:723] Status for pod "ss-0_e2e-statefulset-1676(84703c07-ef17-41ad-a74e-f02616301cb3)" updated successfully after 10ms: (1, {Phase:Failed Conditions:[] Message:Pod Predicate PodFitsHostPorts failed Reason:PodFitsHostPorts NominatedNodeName: HostIP: PodIP: PodIPs:[] St
I0311 16:27:01.265596 1968 kubelet.go:1929] SyncLoop (DELETE, "api"): "ss-0_e2e-statefulset-1676(84703c07-ef17-41ad-a74e-f02616301cb3)"
I0311 16:27:01.265830 1968 status_manager.go:696] Patch status for pod "ss-0_e2e-statefulset-1676(84703c07-ef17-41ad-a74e-f02616301cb3)" with "{\"metadata\":{\"uid\":\"84703c07-ef17-41ad-a74e-f02616301cb3\"}}"
I0311 16:27:01.265857 1968 status_manager.go:721] Status for pod "ss-0_e2e-statefulset-1676(84703c07-ef17-41ad-a74e-f02616301cb3)" is up-to-date after 0s: (2)
I0311 16:27:01.274502 1968 kubelet.go:1923] SyncLoop (REMOVE, "api"): "ss-0_e2e-statefulset-1676(84703c07-ef17-41ad-a74e-f02616301cb3)"
I0311 16:27:01.274561 1968 kubelet.go:2122] Failed to delete pod "ss-0_e2e-statefulset-1676(84703c07-ef17-41ad-a74e-f02616301cb3)", err: pod not found
W0311 16:27:01.277282 1968 status_manager.go:746] Failed to delete status for pod "ss-0_e2e-statefulset-1676(84703c07-ef17-41ad-a74e-f02616301cb3)": pods "ss-0" not found
I0311 16:27:01.296172 1968 kubelet.go:1913] SyncLoop (ADD, "api"): "ss-0_e2e-statefulset-1676(4a76c75f-f835-4982-974e-097480c2d292)"
```
The surge in this one test was enough to cause the Kubelet to significantly slow down its processing of status (another bug I intend to fix).
/sig apps
|
kind/bug,sig/apps,lifecycle/frozen,needs-triage
|
low
|
Critical
|
579,527,734 |
kubernetes
|
Kubelet has no rate controls after a pod was evicted
|
There is no mechanism today to prevent abusive eviction loops (a controller recreates a pod that was just evicted). #89067 seems to indicate that the kubelet and the controller will just fight until the controller stops. It seems reasonable that an eviction decision should apply some rate limiting to subsequent pods from that controller, namespace, or name, although this could be tricky.
/sig node
|
sig/node,kind/feature,needs-triage
|
low
|
Major
|
579,539,725 |
rust
|
VecDeque should support splice / extend_front
|
`VecDeque` should support `splice` for convenience.
([`SliceDeque`](https://docs.rs/slice-deque/0.3.0/slice_deque/struct.SliceDeque.html#method.splice) supports it, but it doesn't build on wasm and seems kind of abandoned (several unmerged PRs with no updates).)
`VecDeque` should also support `extend_front` (like a repeated `push_front` from an iterator; compared to calling `q.splice(..0, iter)`, the elements added with `q.extend_front(iter)` would end up in reverse order).
|
A-collections,T-libs-api,C-feature-request
|
medium
|
Major
|
579,558,253 |
excalidraw
|
Investigate simplifying collaboration session workflow
|
There are two main pain points:
1. Too much text.
2. User having to manually start the sharing session.
@vjeux suggests that we should remove the `start session` button in favor of automatically starting it when user clicks on the menu :busts_in_silhouette: button.
His proposal (as I understand it) would be as such:
1. Click on the menu button starts the session, updates URL, and opens the dialog with the link to be shared with others.
2. When you close the dialog, and later click on the menu button again, it would stop the session, effectively making the menu button into a start/stop button.
I have a problem with this workflow for these reasons:
1. Users may just want to check what the button is, or how it works, and decide they don't want to start a session. Thus, after reading through the dialog, they'll close it, possibly not realizing the session is already in progress.
Note that currently the behavior of a singleplayer scene differs from a multiplayer one. When you refresh in multiplayer, you lose the scene data. There may be other differences. For this reason alone, I don't think we can go forward with this until at least https://github.com/excalidraw/excalidraw/issues/910 is resolved.
Another consideration: we've heard some voices that wanted to ensure absolute privacy (despite our E2E encryption), going as far as expressing a desire to be able to run excalidraw without support for creating shareable links. If we make it this easy to start a collaborative session, and automatically start sending data to a server, this will move us further away from achieving that goal.
2. Overloading the menu button to also stop the server is not intuitive. Users may want to open the dialog while a session is in progress for several reasons (while not wanting to stop the session):
1. To re-read the instructions.
2. To copy the link to send to others (note that many users won't be aware they can simply copy the link from the URL, as it's not obvious it's the same link we show in the dialog).
Moreover, if they inadvertently stop the session in this way and want to start it again (e.g. to reconnect to the room where their colleagues are), they'll generate a new link for a different room.
### Counter-proposal
1. Clicking on the menu button will open the dialog, start the session automatically, and update the URL, and show the link in the modal. But, we'll also show a `stop session` button in the dialog so they can manually stop it.
2. Clicking on the menu button again won't do anything except show the same dialog.
This proposal has the same benefit of removing one extra step, but makes it more obvious how to stop the server, and doesn't have the disadvantage of users mistakenly stopping the server just by clicking on the menu button.
----
That said, I'm still not certain even my counter-proposal is a good way to go, for some of the aforementioned reasons (mainly, starting a session automatically is not how these things usually work --- you want to warn the users first).
|
enhancement,discussion,collaboration
|
low
|
Major
|
579,559,717 |
pytorch
|
A better way to show users all build options
|
We currently have build options enumerated at the top of setup.py. This, however, is redundant (as the info is largely available in CMakeLists.txt) and is pretty fallible and confusing. We should make it more user-friendly, say, `python setup.py help` should list the options extracted from `CMakeLists.txt`.
---
This dates back to a discussion a long time ago with @ezyang, but was never written down as a formal issue.
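For illustration, a minimal sketch of how such a listing could be produced (the `list_build_options` helper and the regex are hypothetical, not existing PyTorch code, and only handle plain `option()` declarations):
```python
import re
from pathlib import Path

# Hypothetical helper: scan CMakeLists.txt for option(NAME "help text" ON/OFF)
# declarations so a command such as `python setup.py help` could print them
# instead of duplicating the list at the top of setup.py.
OPTION_RE = re.compile(
    r'^\s*option\(\s*(\w+)\s+"([^"]*)"\s+(ON|OFF)\b',
    re.IGNORECASE | re.MULTILINE,
)

def list_build_options(cmakelists="CMakeLists.txt"):
    text = Path(cmakelists).read_text()
    return OPTION_RE.findall(text)  # list of (name, help_text, default)

if __name__ == "__main__":
    for name, help_text, default in list_build_options():
        print(f"{name} (default {default}): {help_text}")
```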
|
module: build,triaged
|
low
|
Minor
|
579,592,292 |
go
|
net/textproto: reduce allocations in ReadMIMEHeader
|
### What version of Go are you using (`go version`)?
<pre>
1.14
</pre>
### What did you do?
For our use case (an HTTP proxy), `textproto.ReadMIMEHeader` is in the top 3 in terms of allocations. In order to ease the GC, we could pre-allocate 512 bytes to be used for small header values. This can reduce the number of allocations for this function by more than half.
```
name old time/op new time/op delta
ReadMIMEHeader/client_headers-16 2.30µs ± 5% 2.11µs ± 2% -8.21% (p=0.000 n=10+10)
ReadMIMEHeader/server_headers-16 1.94µs ± 3% 1.85µs ± 3% -4.56% (p=0.000 n=10+10)
name old alloc/op new alloc/op delta
ReadMIMEHeader/client_headers-16 1.53kB ± 0% 1.69kB ± 0% +10.48% (p=0.000 n=10+10)
ReadMIMEHeader/server_headers-16 1.09kB ± 0% 1.44kB ± 0% +32.32% (p=0.000 n=10+10)
name old allocs/op new allocs/op delta
ReadMIMEHeader/client_headers-16 14.0 ± 0% 6.0 ± 0% -57.14% (p=0.000 n=10+10)
ReadMIMEHeader/server_headers-16 14.0 ± 0% 6.0 ± 0% -57.14% (p=0.000 n=10+10)
```
The patch is pretty small:
```
index d26e981ae4..6126a6685c 100644
--- a/src/net/textproto/reader.go
+++ b/src/net/textproto/reader.go
@@ -13,6 +13,7 @@ import (
"strconv"
"strings"
"sync"
+ "unsafe"
)
// A Reader implements convenience methods for reading requests
@@ -502,6 +503,12 @@ func (r *Reader) ReadMIMEHeader() (MIMEHeader, error) {
return m, ProtocolError("malformed MIME header initial line: " + string(line))
}
+ // Create a pre-allocated byte slice for all header values to save
+ // allocations for the first 512 bytes of values, a size that fit typical
+ // small header values, larger ones will get their own allocation.
+ const valuesPreAllocSize = 1 << 9
+ valuesPreAlloc := make([]byte, 0, valuesPreAllocSize)
+
for {
kv, err := r.readContinuedLineSlice(mustHaveFieldNameColon)
if len(kv) == 0 {
@@ -527,7 +534,17 @@ func (r *Reader) ReadMIMEHeader() (MIMEHeader, error) {
for i < len(kv) && (kv[i] == ' ' || kv[i] == '\t') {
i++
}
- value := string(kv[i:])
+
+ // Try to fit the value in the pre-allocated buffer to save allocations.
+ var value string
+ if len(kv[i:]) <= valuesPreAllocSize-len(valuesPreAlloc) {
+ off := len(valuesPreAlloc)
+ valuesPreAlloc = append(valuesPreAlloc, kv[i:]...)
+ v := valuesPreAlloc[off:]
+ value = *(*string)(unsafe.Pointer(&v))
+ } else {
+ value = string(kv[i:])
+ }
vv := m[key]
if vv == nil && len(strs) > 0 {
```
Please tell me if it's worth submitting a PR with this change.
|
NeedsDecision
|
medium
|
Critical
|
579,628,574 |
rust
|
Tracking Issue for Read::is_read_vectored/Write::is_write_vectored.
|
### Unresolved Questions
- [ ] What's the right naming for these methods?
### Implementation history
* #67841
|
T-libs-api,B-unstable,C-tracking-issue,A-io,Libs-Tracked,Libs-Small
|
low
|
Major
|
579,690,901 |
flutter
|
Refactoring FlutterViewController to support addToExistingApp better
|
<!-- Thank you for using Flutter!
If you are looking for support, please check out our documentation
or consider asking a question on Stack Overflow:
* https://flutter.dev/
* https://api.flutter.dev/
* https://stackoverflow.com/questions/tagged/flutter?sort=frequent
If you have found a bug or if our documentation doesn't have an answer
to what you're looking for, then fill our the template below. Please read
our guide to filing a bug first: https://flutter.dev/docs/resources/bug-reports
-->
## Use case
As we know, Flutter was originally designed to support only one Flutter view container in the whole app. As a result, the life cycle of FlutterViewController is the same as that of the whole app: the FlutterViewController starts, resumes, and pauses the app life cycle state according to the view appear/disappear events. What is more, it handles the surface update process and does not allow subclasses to override it.
Now we are trying to support adding it to an existing app, and sometimes to support multiple Flutter view controllers.
However, due to the original design, we run into a number of problems, such as the following:
1. The app's life cycle is no longer the same as that of a single FlutterViewController, because we have more than one Flutter VC.
2. The ordering of multiple views' life cycle events has side effects on the surface updates. For example:
I start FlutterVC#1 to render flutter page#1, then push a second FlutterVC#2 to render flutter page#2, so FlutterVC#2 covers FlutterVC#1. Note that we have only one FlutterEngine, which holds only one surface; so when FlutterVC#2 starts, its view appears and triggers surfaceUpdated:YES, and a moment later FlutterVC#1's view disappears and triggers surfaceUpdated:NO. This leaves the FlutterEngine holding a null surface and breaks rendering for FlutterVC#2.
<!--
Please tell us the problem you are running into that led to you wanting
a new feature.
Is your feature request related to a problem? Please give a clear and
concise description of what the problem is.
Describe alternative solutions you've considered. Is there a package
on pub.dev/flutter that already solves this?
-->
## Proposal
I suggest that:
1. We refactor the FlutterViewController to extract the sub-functions invoked in the view's appear and disappear events, and make those sub-functions publicly accessible to subclasses.
Take the viewWillAppear as an example:
```objc
- (void)viewWillAppear:(BOOL)animated {
TRACE_EVENT0("flutter", "viewWillAppear");
if (_engineNeedsLaunch) {
[_engine.get() launchEngine:nil libraryURI:nil];
[_engine.get() setViewController:self];
_engineNeedsLaunch = NO;
}
// Send platform settings to Flutter, e.g., platform brightness.
[self onUserSettingsChanged:nil];
// Only recreate surface on subsequent appearances when viewport metrics are known.
// First time surface creation is done on viewDidLayoutSubviews.
if (_viewportMetrics.physical_width) {
[self surfaceUpdated:YES];
}
[[_engine.get() lifecycleChannel] sendMessage:@"AppLifecycleState.inactive"];
[super viewWillAppear:animated];
}
```
We can refactor it into the following sub-functions:
```objc
- (void)onEngineNeedsLaunch {...} // make this public so subclasses can call it
- (void)deactivateEngine {...} // public to subclasses
- (void)surfaceUpdated:(BOOL)appeared {...} // make this public to subclasses
- (int)physical_width {...} // just an example
- (void)viewWillAppear:(BOOL)animated {
TRACE_EVENT0("flutter", "viewWillAppear");
[self onEngineNeedsLaunch];
// Only recreate surface on subsequent appearances when viewport metrics are known.
// First time surface creation is done on viewDidLayoutSubviews.
if ([self physical_width]) {
[self surfaceUpdated:YES];
}
[self deactivateEngine];
[super viewWillAppear:animated];
}
```
Then in my MyFlutterViewController, which inherits from FlutterViewController, I can decide when to call the surfaceUpdated function during the view's life cycle, and avoid introducing the problem mentioned in problem #2 above.
```objc
@implementation MYFlutterViewController // subclass of FlutterViewController
- (void)viewWillAppear:(BOOL)animated {
TRACE_EVENT0("myflutter", "viewWillAppear");
[self onEngineNeedsLaunch];
// Only recreate surface on subsequent appearances when viewport metrics are known.
// First time surface creation is done on viewDidLayoutSubviews.
if ([self physical_width]) {
[self surfaceUpdated:YES];
}
[self deactivateEngine];
// here I can skip calling super's implementation, avoid surfaceUpdated:YES, and move that call to the view-appear handler
// [super viewWillAppear:animated];
}
```
<!--
Briefly but precisely describe what you would like Flutter to be able to do.
Consider attaching images showing what you are imagining.
Does this have to be provided by Flutter directly, or can it be provided
by a package on pub.dev/flutter? If so, maybe consider implementing and
publishing such a package rather than filing a bug.
-->
|
platform-ios,engine,customer: alibaba,a: existing-apps,c: proposal,P3,team-ios,triaged-ios
|
low
|
Critical
|
579,701,624 |
flutter
|
[Camera] Audio stream
|
```
[✓] Flutter (Channel stable, v1.12.13+hotfix.8, on Mac OS X 10.15.3 19D76, locale en-TW)
• Flutter version 1.12.13+hotfix.8 at /Users/tonypottera/.flutter_stable
• Framework revision 0b8abb4724 (4 weeks ago), 2020-02-11 11:44:36 -0800
• Engine revision e1e6ced81d
• Dart version 2.7.0
```
I'm building a video recording application which needs to show audio amplitudes before, during, and after video recording.
Currently, on iOS, I can use `flutter_sound: ^2.1.1` to record audio, and use `camera: ^0.5.7+4` to capture video without any problem.
However, on Android, `startVideoRecording` will crash if `flutter_sound` is already recording audio.
I'm wondering if there's any way to achieve this goal.
For example, it would be great if `camera` provides `startAudioStream` functionality.
|
c: new feature,p: camera,package,team-ecosystem,P3,triaged-ecosystem
|
low
|
Critical
|
579,764,074 |
TypeScript
|
Order member intellisense by inheritance
|
## Suggestion
I would like to see an optional way to have the members of an object displayed in the order in which they are inherited. In other words, the members should be listed from most specific to most general: subclasses before superclasses. Perhaps a user could have this be their default intellisense behavior, or there could be a key-press to toggle this behavior.
## Use Cases
In several systems I've either worked in or looked at, I found myself thinking that it would be beneficial to order members by their order of inheritance. Consider this case: You have a `Node` in TypeScript or ESLint, and you narrow it to a certain kind of `Node`, for example, a `CallExpression`. You narrowed it to a `CallExpression` type because you want to do something specific to the `Node` only if it is the `CallExpression` type. You probably want to call a certain method or access a certain member which did not exist on `Node` but exists on `CallExpression`. As far as I'm concerned, **the most important members in any particular object are the ones most recently inherited**. If I only needed the information from `Node`, I would not have needed to narrow the type. Hence, I would like to see members from subclasses prioritized over members from superclasses.
Often times when working in a system where objects have tons of methods and properties, I find myself putting the `ts.CallExpression` type somewhere so I can go-to-definition and look for myself what the "latest" methods are for a narrowed object. Here's a little reenactment of what I've actually done during development many times:

###### This example actually uses ts-node, which, as a side effect of being a wrapper for TS, wraps properties as methods. E.g. `returnType: Type` becomes `getReturnType(): Type` in ts-node. It also aggregates all useful operations you could perform on/with an object, e.g. it allows you to do things like call `getType()` directly on any `Node`, which under the hood delegates to the proper TS typechecker API.
<!--
What do you want to use this for?
What shortcomings exist with current approaches?
-->
## Examples
How it could look:

###### In the above image, `B.y` comes before `A.x` because `B` is the lowest subclass and `A` is the superclass.
Members inherited from the same class or same type would be sorted alphabetically.
It is possible for a particular type to inherit members from multiple types along the same inheritance level via intersection types. In this case, members of each individual type would be grouped together, with subsequent intersected types taking priority. E.g. `A & B` would display all the members of `B` as a group, followed by all the members of `A` as a group.

Groups could optionally be separated by extra spacing or perhaps a horizontal bar:

Keep in mind that types which have been reconstructed out of intersected types would not be affected:

Members from subclasses or subsequent types in an intersection which intersect with superclasses or prior types in an intersection should be listed with the superclass or prior type group.

There could also optionally be a keybind for VSCode which toggles this setting, so you could switch between alphabetical and inheritance-based ordering on the fly.
## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
|
Suggestion,Awaiting More Feedback
|
low
|
Major
|
579,855,250 |
neovim
|
Add flag and environment variable to _not_ touch STDIN
|
The change to default STDIN handling from #2087 is very welcome, but the current situation leaves a whole set of possible use cases unaddressed.
Consider a pattern like this:
```
find -type d | while read dir; do (cd $dir && zsh -i); done
```
This is greatly simplified from actual usage and obviously if I was _only_ trying to run Neovim just passing it a list of files would be better, but in cases where other activities need to be done semi-manually, I use this kind of loop frequently to open interactive shells in all the places that need work.
Now if in that shell instance I run `nvim MYFILE`, it will open two buffers, one with whatever was remaining that `while read` hadn't read from STDIN yet, and one with the requested file. This is very obnoxious because it ruins the rest of the pipeline.
A current workaround is to only ever launch with `nvim MYFILE < /dev/null` so that the STDIN read by nvim is not any pending parent-loop input but an empty stream of its own.
It would be nice to be able to set an environment variable or pass a flag of some kind to suppress any attempt to read STDIN.
|
enhancement
|
low
|
Minor
|
579,863,726 |
svelte
|
Event bindings on animate directive
|
It would be nice to be able to bind events on the `animate:*` directive, much like the one that are available with the [transition directive](https://svelte.dev/docs#Transition_events) (`introstart`, `introend,` `outrostart`, `outroend`).
This feature would be useful in order to prevent other changes to the current component state while an animation is occurring, as a component update during an animation can make it glitch.
I think two events named `animstart` and `animend` would be appropriate.
In the meantime, is there a way to emulate this feature with what Svelte already offers?
|
feature request,temp-stale
|
medium
|
Major
|
579,892,885 |
rust
|
Add in memory object files support to CompiledModule and ArchiveBuilder
|
https://doc.rust-lang.org/nightly/nightly-rustc/rustc_codegen_ssa/struct.CompiledModule.html
https://doc.rust-lang.org/nightly/nightly-rustc/rustc_codegen_ssa/back/archive/trait.ArchiveBuilder.html
rustc_codegen_cranelift can emit object files to memory without writing them to the disk. With the current setup they need to be written to the disk before an rlib can be created. This doubles the amount of data written to the disk compared to emitting the object files to memory and then creating an rlib file from that. Linking a dylib or executable will still need to write the object files to the disk, but at least rlibs don't have to anymore.
|
C-enhancement,A-codegen,I-compiletime,T-compiler,A-cranelift
|
low
|
Minor
|
579,911,394 |
pytorch
|
Support creating a CPU tensor from ctypes pointer in Python / from_blob(ptr, shape, strides, dtype)
|
NumPy has utilities that allow to create an array from a ctypes pointer in pure Python (without C++ extensions):
- https://docs.scipy.org/doc/numpy/reference/routines.ctypeslib.html#module-numpy.ctypeslib
- https://numpy.org/doc/stable/reference/routines.ctypeslib.html#numpy.ctypeslib.as_array
- https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.ctypes.html
In C++ land, it seems that [`torch::from_blob`](https://pytorch.org/cppdocs/api/function_namespacetorch_1ad7fb2a7759ef8c9443b489ddde494787.html#function-documentation) should do the trick, but it has no binding to Python.
I propose to have utilities that would allow doing this without going through NumPy first or creating a C++ extension for a single-function `torch::from_blob` call, i.e. a Python binding for `torch::from_blob` (which exists in the Java bindings, btw), potentially with a custom deleter.
Context: I'd like to make a very thin ffmpeg audio-reading wrapper: the C code would do the allocation and return it to the calling code, and the calling code would be responsible for freeing that memory. Pseudocode for NumPy (I still need to fix the C code) is here: https://github.com/vadimkantorov/audioprimer/blob/master/decode_audio.py . Ideally I'll have a single C file that's independent of NumPy/Torch and just slightly different versions of interfacing with it. I can think of some alternatives for this particular case, but a Python way of creating a tensor from a raw ctypes pointer may still be useful for existing ctypes codebases.
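For reference, a minimal sketch of the NumPy detour mentioned above, using only public NumPy/PyTorch APIs (the ctypes buffer below just stands in for memory owned by a C library):
```python
import ctypes
import numpy as np
import torch

# Stand-in for a pointer returned by a C library (allocated here so the
# example is self-contained); the C side remains the owner of the memory.
n = 4
c_buf = (ctypes.c_float * n)(0.0, 1.0, 2.0, 3.0)
ptr = ctypes.cast(c_buf, ctypes.POINTER(ctypes.c_float))

# Wrap the raw pointer without copying, then share the memory with torch.
arr = np.ctypeslib.as_array(ptr, shape=(n,))
t = torch.as_tensor(arr)
print(t)  # tensor([0., 1., 2., 3.])
```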
|
module: docs,module: memory usage,triaged,enhancement,module: numpy
|
low
|
Major
|
579,942,508 |
node
|
buffer: ~2x slowdown in master compared to v12.x
|
* **Version**: master
* **Platform**: `Linux foo 5.0.0-36-generic #39~18.04.1-Ubuntu SMP Tue Nov 12 11:09:50 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux`
* **Subsystem**: buffer
I was running some benchmarks (for private code) and noticed a significant slowdown with some Buffer methods. Here is a comparison of the C++ portion of `--prof` between v12.16.1 and master:
v12.16.1:
<details>
```
[C++]:
ticks total nonlib name
66 4.2% 8.4% void node::Buffer::(anonymous namespace)::StringSlice<(node::encoding)1>(v8::FunctionCallbackInfo<v8::Value> const&)
54 3.4% 6.9% __libc_read
48 3.0% 6.1% node::Buffer::(anonymous namespace)::ParseArrayIndex(node::Environment*, v8::Local<v8::Value>, unsigned long, unsigned long*)
35 2.2% 4.5% v8::ArrayBuffer::GetContents()
35 2.2% 4.5% epoll_pwait
27 1.7% 3.4% v8::internal::Builtin_TypedArrayPrototypeBuffer(int, unsigned long*, v8::internal::Isolate*)
26 1.6% 3.3% v8::String::NewFromUtf8(v8::Isolate*, char const*, v8::NewStringType, int)
24 1.5% 3.1% node::native_module::NativeModuleEnv::CompileFunction(v8::FunctionCallbackInfo<v8::Value> const&)
22 1.4% 2.8% __pthread_cond_signal
20 1.3% 2.5% node::StringBytes::Encode(v8::Isolate*, char const*, unsigned long, node::encoding, v8::Local<v8::Value>*)
16 1.0% 2.0% v8::Value::IsArrayBufferView() const
14 0.9% 1.8% v8::ArrayBufferView::Buffer()
9 0.6% 1.1% v8::internal::FixedArray::set(int, v8::internal::Object)
8 0.5% 1.0% v8::ArrayBufferView::ByteLength()
7 0.4% 0.9% v8::Value::IntegerValue(v8::Local<v8::Context>) const
6 0.4% 0.8% v8::ArrayBufferView::ByteOffset()
5 0.3% 0.6% __libc_malloc
4 0.3% 0.5% write
4 0.3% 0.5% v8::internal::libc_memmove(void*, void const*, unsigned long)
4 0.3% 0.5% node::binding::GetInternalBinding(v8::FunctionCallbackInfo<v8::Value> const&)
3 0.2% 0.4% v8::internal::libc_memset(void*, int, unsigned long)
3 0.2% 0.4% __lll_unlock_wake
2 0.1% 0.3% void node::StreamBase::JSMethod<&node::StreamBase::WriteBuffer>(v8::FunctionCallbackInfo<v8::Value> const&)
2 0.1% 0.3% fwrite
2 0.1% 0.3% do_futex_wait.constprop.1
2 0.1% 0.3% __clock_gettime
2 0.1% 0.3% __GI___pthread_mutex_unlock
1 0.1% 0.1% void node::StreamBase::JSMethod<&(int node::StreamBase::WriteString<(node::encoding)1>(v8::FunctionCallbackInfo<v8::Value> const&))>(v8::FunctionCallbackInfo<v8::Value> const&)
1 0.1% 0.1% v8::internal::Scope::DeserializeScopeChain(v8::internal::Isolate*, v8::internal::Zone*, v8::internal::ScopeInfo, v8::internal::DeclarationScope*, v8::internal::AstValueFactory*, v8::internal::Scope::DeserializationMode)
1 0.1% 0.1% v8::internal::RuntimeCallTimerScope::RuntimeCallTimerScope(v8::internal::Isolate*, v8::internal::RuntimeCallCounterId)
1 0.1% 0.1% v8::EscapableHandleScope::Escape(unsigned long*)
1 0.1% 0.1% std::ostreambuf_iterator<char, std::char_traits<char> > std::num_put<char, std::ostreambuf_iterator<char, std::char_traits<char> > >::_M_insert_int<long>(std::ostreambuf_iterator<char, std::char_traits<char> >, std::ios_base&, char, long) const
1 0.1% 0.1% std::ostream::sentry::sentry(std::ostream&)
1 0.1% 0.1% std::basic_ostream<char, std::char_traits<char> >& std::__ostream_insert<char, std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&, char const*, long)
1 0.1% 0.1% std::__detail::_Prime_rehash_policy::_M_need_rehash(unsigned long, unsigned long, unsigned long) const
1 0.1% 0.1% non-virtual thunk to node::LibuvStreamWrap::GetAsyncWrap()
1 0.1% 0.1% node::LibuvStreamWrap::ReadStart()::{lambda(uv_stream_s*, long, uv_buf_t const*)#2}::_FUN(uv_stream_s*, long, uv_buf_t const*)
1 0.1% 0.1% node::CustomBufferJSListener::OnStreamRead(long, uv_buf_t const&)
1 0.1% 0.1% node::AsyncWrap::EmitTraceEventBefore()
1 0.1% 0.1% mprotect
1 0.1% 0.1% getpid
1 0.1% 0.1% cfree
1 0.1% 0.1% __lll_lock_wait
```
</details>
master:
<details>
```
[C++]:
ticks total nonlib name
155 6.5% 14.0% std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release()
88 3.7% 7.9% __GI___pthread_mutex_lock
87 3.7% 7.8% v8::ArrayBuffer::GetBackingStore()
71 3.0% 6.4% __GI___pthread_mutex_unlock
54 2.3% 4.9% void node::Buffer::(anonymous namespace)::StringSlice<(node::encoding)1>(v8::FunctionCallbackInfo<v8::Value> const&)
36 1.5% 3.2% node::Buffer::(anonymous namespace)::ParseArrayIndex(node::Environment*, v8::Local<v8::Value>, unsigned long, unsigned long*)
32 1.4% 2.9% v8::internal::Builtin_TypedArrayPrototypeBuffer(int, unsigned long*, v8::internal::Isolate*)
31 1.3% 2.8% v8::Value::IntegerValue(v8::Local<v8::Context>) const
27 1.1% 2.4% node::native_module::NativeModuleEnv::CompileFunction(v8::FunctionCallbackInfo<v8::Value> const&)
27 1.1% 2.4% epoll_pwait
26 1.1% 2.3% v8::String::NewFromUtf8(v8::Isolate*, char const*, v8::NewStringType, int)
20 0.8% 1.8% v8::Value::IsArrayBufferView() const
20 0.8% 1.8% __libc_read
19 0.8% 1.7% __pthread_cond_signal
16 0.7% 1.4% node::StringBytes::Encode(v8::Isolate*, char const*, unsigned long, node::encoding, v8::Local<v8::Value>*)
13 0.5% 1.2% v8::ArrayBufferView::Buffer()
10 0.4% 0.9% v8::internal::FixedArray::set(int, v8::internal::Object)
6 0.3% 0.5% v8::internal::libc_memset(void*, int, unsigned long)
6 0.3% 0.5% v8::internal::RuntimeCallTimerScope::RuntimeCallTimerScope(v8::internal::Isolate*, v8::internal::RuntimeCallCounterId)
5 0.2% 0.5% v8::internal::libc_memmove(void*, void const*, unsigned long)
5 0.2% 0.5% v8::ArrayBufferView::ByteLength()
4 0.2% 0.4% write
4 0.2% 0.4% void node::StreamBase::JSMethod<&node::StreamBase::WriteBuffer>(v8::FunctionCallbackInfo<v8::Value> const&)
4 0.2% 0.4% do_futex_wait.constprop.1
4 0.2% 0.4% __lll_lock_wait
3 0.1% 0.3% node::binding::GetInternalBinding(v8::FunctionCallbackInfo<v8::Value> const&)
3 0.1% 0.3% _init
2 0.1% 0.2% v8::BackingStore::Data() const
2 0.1% 0.2% fwrite
2 0.1% 0.2% cfree
2 0.1% 0.2% __lll_unlock_wake
2 0.1% 0.2% __libc_malloc
1 0.0% 0.1% v8::internal::DeclarationScope::DeclareDefaultFunctionVariables(v8::internal::AstValueFactory*)
1 0.0% 0.1% v8::internal::Builtin_HandleApiCall(int, unsigned long*, v8::internal::Isolate*)
1 0.0% 0.1% v8::internal::AstValueFactory::GetOneByteStringInternal(v8::internal::Vector<unsigned char const>)
1 0.0% 0.1% std::num_put<char, std::ostreambuf_iterator<char, std::char_traits<char> > >::do_put(std::ostreambuf_iterator<char, std::char_traits<char> >, std::ios_base&, char, long) const
1 0.0% 0.1% operator new[](unsigned long)
1 0.0% 0.1% node::contextify::ContextifyContext::CompileFunction(v8::FunctionCallbackInfo<v8::Value> const&)
1 0.0% 0.1% node::TCPWrap::Connect(v8::FunctionCallbackInfo<v8::Value> const&)
1 0.0% 0.1% node::Environment::RunAndClearNativeImmediates(bool)
1 0.0% 0.1% node::AsyncWrap::EmitTraceEventBefore()
1 0.0% 0.1% mprotect
1 0.0% 0.1% __pthread_cond_timedwait
1 0.0% 0.1% __fxstat
1 0.0% 0.1% __GI___pthread_getspecific
```
</details>
As you will see, master has these additional items at the top of the list:
```
155 6.5% 14.0% std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release()
88 3.7% 7.9% __GI___pthread_mutex_lock
87 3.7% 7.8% v8::ArrayBuffer::GetBackingStore()
71 3.0% 6.4% __GI___pthread_mutex_unlock
```
Is there some way we can avoid this slowdown?
|
buffer,c++,performance
|
medium
|
Major
|
579,951,854 |
opencv
|
Feature request: equivalent of moveaxis (Python/Numpy) and permute (MATLAB)
|
References:
https://docs.scipy.org/doc/numpy/reference/generated/numpy.moveaxis.html
https://nl.mathworks.com/help/matlab/ref/permute.html
As far as I know, this operation is not available in OpenCV. About a year ago, someone on Stack Overflow also asked for an easy way to do this, but unfortunately the post remains unanswered.
See https://stackoverflow.com/questions/55117581/how-to-permute-the-axes-on-cvmat-in-c
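For comparison, this is what the requested behaviour looks like on the Python side, where a `cv::Mat` is exposed as a NumPy array (plain NumPy, no OpenCV-specific call involved):
```python
import numpy as np

# In the Python bindings a cv::Mat is just an ndarray, so np.moveaxis already
# provides this; the request is to expose an equivalent for cv::Mat in C++.
img = np.zeros((480, 640, 3), dtype=np.uint8)  # H x W x C
chw = np.moveaxis(img, -1, 0)                  # C x H x W, returned as a view
print(chw.shape)  # (3, 480, 640)
```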
|
feature,category: core,effort: few days
|
low
|
Minor
|
579,981,139 |
godot
|
Light2D mask disables CanvasItemMaterial
|
**Godot version:3.2.1**
**OS/device including version: Windows 10 1809**
**Issue description:**
To properly display alphas from a transparent Viewport (Transparent Bg property on), you need to apply a CanvasItemMaterial to the Sprite with *Blend Mode = Premult Alpha*, or your viewport does not display correctly (colors and opacity get less intense).
As soon as a Light2D with *Mode = Mask* touches the Sprite, the CanvasItemMaterial seems to disappear.
Visual appearance (this is what it is supposed to look like) when the Light2D does not touch the sprite:

Visual appearance when the Light2D touches the sprite:

**Steps to reproduce:**
1. Make a viewport with transparent content
2. Apply the viewport texture to a sprite
3. Move a Light2D with mode=mask over the sprite
**Minimal reproduction project:**
[test_light_vs_canvasitemmaterial.zip](https://github.com/godotengine/godot/files/4324814/test_light_vs_canvasitemmaterial.zip)
|
bug,topic:rendering
|
low
|
Minor
|
579,989,783 |
godot
|
incorrect error message: The class couldn't be fully loaded...
|
**Godot version:** 3.2.1 stable
**OS/device including version:** Mac OS 10.15.3
**Issue description:** Incorrect error message: `The class "SingleItemClickable" couldn't be fully loaded...`. There's no reason for such an error, and the game works without any problems.
**Steps to reproduce:** Open attached project and open BaseLevel.gd. The error appears at the end of the file.
**Minimal reproduction project:**
[Lake copy.zip](https://github.com/godotengine/godot/files/4324756/Lake.copy.zip)
Tried to strip the project down to a bare minimum; that's why it looks weird. I suppose the problem may be with autoloads, so I left the classes there.
This might be a duplicate of https://github.com/godotengine/godot/issues/21461 , but here you have a minimal project that reproduces the bug every time (at least in my case).
|
discussion,topic:gdscript
|
low
|
Critical
|
580,058,696 |
pytorch
|
Confusing error message of tensor constructor when passing a storage
|
I tried to use torch.Tensor and pass it a torch.Storage (maybe wrongly) and got this:
```
>>> torch.Tensor(torch.zeros(1, dtype = torch.int16).storage())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: Expected object of data type float but got data type 4687170077835386990 for argument #2 'source'
```
Even if this is not allowed, a better error message would help a lot!
Context: I have a storage of one of various types (can be float32, uint8, int16, etc.) and I want to construct a tensor from it without writing an if statement over the possible types. torch.as_tensor doesn't work either.
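For context, this is roughly the boilerplate needed today to go from a typed storage back to a tensor (a rough sketch only: the mapping is illustrative and incomplete, and the names refer to the legacy typed-storage classes of this PyTorch version):
```python
import torch

# Illustrative, incomplete mapping from legacy typed storage classes to dtypes.
_DTYPE_FOR_STORAGE = {
    torch.FloatStorage: torch.float32,
    torch.ByteStorage: torch.uint8,
    torch.ShortStorage: torch.int16,
}

def tensor_from_storage(storage):
    dtype = _DTYPE_FOR_STORAGE[type(storage)]
    # set_() points an empty tensor of the matching dtype at the storage.
    return torch.tensor([], dtype=dtype).set_(storage)

s = torch.zeros(1, dtype=torch.int16).storage()
print(tensor_from_storage(s))  # tensor([0], dtype=torch.int16)
```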
|
triaged
|
low
|
Critical
|
580,076,496 |
pytorch
|
Support passing memoryview to torch.as_tensor
|
As far as I understand, Python memoryview objects support itemsize, shape, ndim, strides, etc.:
https://docs.python.org/3/library/stdtypes.html#memoryview :
```python
# from python docs example
import struct
buf = struct.pack("i"*12, *list(range(12)))
x = memoryview(buf)
y = x.cast('i', shape=[2,2,3])
print(y.shape) # (2, 2, 3)
import torch
torch.as_tensor(y)
# ValueError: could not determine the shape of object type 'memoryview'
# kind of funny, because shape exists :0
```
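Until such support exists, one workaround is to go through NumPy, which does understand multi-dimensional memoryviews (only public NumPy/PyTorch APIs involved):
```python
import struct
import numpy as np
import torch

buf = bytearray(struct.pack("i" * 12, *range(12)))  # writable, avoids a warning
y = memoryview(buf).cast("i", shape=[2, 2, 3])

# np.asarray keeps the memoryview's shape/strides and shares its memory;
# torch.as_tensor then shares the NumPy array's memory in turn.
t = torch.as_tensor(np.asarray(y))
print(t.shape)  # torch.Size([2, 2, 3])
```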
|
triaged,enhancement
|
low
|
Critical
|
580,123,812 |
pytorch
|
Add "strict" flag to ignore missing parameters in Optimizer.load_state_dict
|
## 🚀 Feature
Add a `strict` flag to `Optimizer.load_state_dict` to match the interface for `nn.Module.load_state_dict`. This allows the optimizer to ignore missing parameters in the optimizer state. `nn.Module.load_state_dict` already supports this behavior.
## Motivation
Currently, loading optimizer state fails when the new optimizer object has distinct parameters from the saved state. This makes finetuning difficult since I would prefer to load partial optimizer state, e.g. to create new Adam moment estimates for the new layers.
## Pitch
Add a `strict` optional positional argument to `Optimizer.load_state_dict`, so that the signature is `load_state_dict(self, state_dict, strict=True)`.
## Alternatives
The current workaround is to take the newly initialized optimizer state, update this dictionary with the old one, and then call `load_state_dict`.
```
opt = optim.SGD(...)
checkpoint = torch.load(...)
old_opt_state = checkpoint["opt"]
new_opt_state = opt.state_dict()
new_opt_state.update(old_opt_state)
opt.load_state_dict(new_opt_state)
```
(EDIT: the above does not work. It is necessary to update the 'param_group' list in the state dict too. Will update that in a second)
The new code would look like:
```
opt = optim.SGD(...)
checkpoint = torch.load(...)
old_opt_state = checkpoint["opt"]
opt.load_state_dict(old_opt_state, strict=False)
```
cc @vincentqb
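For what it's worth, a fuller version of the manual workaround that also keeps `param_groups` consistent might look like the sketch below. This is illustrative only: it assumes parameters shared by the old and new model end up with the same indices, and the `nn.Linear` model and in-memory checkpoint just stand in for a real finetuning setup.
```python
import torch
from torch import nn, optim

model = nn.Linear(4, 2)                      # stands in for the finetuned model
opt = optim.SGD(model.parameters(), lr=0.1)

checkpoint = {"opt": opt.state_dict()}       # stands in for torch.load(...)
old_opt_state = checkpoint["opt"]

# Keep the *new* param_groups (they describe the current parameters) and only
# merge in the per-parameter state present in the old checkpoint.
new_opt_state = opt.state_dict()
new_opt_state["state"].update(old_opt_state["state"])
opt.load_state_dict(new_opt_state)
```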
|
module: optimizer,triaged,enhancement
|
low
|
Major
|
580,129,556 |
ant-design
|
Configurable layout for Upload control
|
- [x] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate.
### What problem does this feature solve?
Permits customizing the render location of the Upload button/dragger separately from file list.
### What does the proposed API look like?
Provide separate components for the file list and the Upload button/dragger, so they can be arranged where needed. They can both live under an ancestor Upload.
```
<Upload>
<MyLayout>
<Upload.FileList/>
<Upload.Dragger/>
</MyLayout>
</Upload>
```
<!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
|
help wanted,Inactive
|
low
|
Minor
|
580,147,204 |
flutter
|
Flutter stops animations even when screen is only partially obscured
|
See: b/151332834
When showing a native bottom sheet or alert dialog, the screen is only partially obscured. If there's an ongoing animation, it stops when the native UI starts displaying, leading to jarring effects. See repro:
```dart
import 'package:flutter/material.dart';
import 'package:url_launcher/url_launcher.dart';
void main() => runApp(
MaterialApp(
home: MyApp(),
theme: ThemeData(
splashFactory: InkRipple.splashFactory,
),
)
);
class MyApp extends StatefulWidget {
@override
_MyAppState createState() => _MyAppState();
}
class _MyAppState extends State<MyApp> {
_launchURL() async {
const url = 'tel://9000000';
if (await canLaunch(url)) {
// This uses url launcher plugin to display an overlay.
await launch(url);
} else {
throw 'Could not launch $url';
}
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text("Flutter"),
),
body: Center(
child: FlatButton(
child: Text("A long launch URL"),
shape: RoundedRectangleBorder(
side: BorderSide(
color: Colors.blue,
width: 1,
),
borderRadius: BorderRadius.all(Radius.circular(24)),
),
onPressed: _launchURL,
)
),
);
}
}
```
Video is attached. Ripple gets stuck in the middle when the overlay is shown.
[RPReplay_Final1583912940.mp4.gz](https://github.com/flutter/flutter/files/4325907/RPReplay_Final1583912940.mp4.gz)
|
framework,a: animation,customer: money (g3),P2,team-framework,triaged-framework
|
low
|
Major
|
580,153,450 |
node
|
re-enable crypto dh keygen testing on Arm
|
A Crypto test has been disabled on AArch64 as part of PR: https://github.com/nodejs/node/pull/31178
After a brief chat with Sam, he suggests a discussion about this should include @tniessen. I would like to discover the root-cause of the poor performance. The build logs for the job have expired for the Arm builds - is there a record of the hardware that was used for the Arm build?
EDIT: corrected summary.
|
crypto,test,arm
|
low
|
Major
|
580,173,426 |
go
|
x/build/cmd/coordinator: include -longtest and -race builders for trybots on release branches
|
After a major version of a Go release is out, we send relatively few CLs to the release branch in order to backport fixes for minor releases, per https://golang.org/wiki/MinorReleases policy.
We should consider including -longtest and -race builders in the default tryset (set of builders included in a normal trybot run) for those branches:
- [ ] darwin-amd64-longtest (#35678)
- [x] linux-386-longtest
- [x] linux-amd64-longtest
- [x] linux-arm64-longtest (#49649)
- [x] windows-amd64-longtest
- darwin-amd64-race
- freebsd-amd64-race
- linux-amd64-race
- windows-amd64-race
Those builders already run as post-submit builders, but also running them during trybots will give us more information to work with before a CL is submitted to a release branch.
We may want to do this only for minor releases, i.e., after a major Go release is out. Or we can do this always, which would include CLs sent to a release branch after the first release candidate is made.
An existing alternative is to manually request those builders via [SlowBots](https://golang.org/wiki/SlowBots).
/cc @cagedmantis @toothrot @golang/osp-team
|
Builders,NeedsFix
|
low
|
Major
|
580,194,221 |
flutter
|
[tool_crash] Invalid argument(s): Cannot find executable for /flutter/bin/cache/artifacts/
|
## TL;DR Workaround
Wiping out the flutter cache will cause the flutter tool to re-try downloading the required cached artifacts when next invoked, which will usually fix the issue.
```
$ rm -rf <path to flutter installation>/bin/cache
$ flutter run -v  # or whatever command you were attempting
```
## Issue
The current caching mechanism in the tool does not guarantee that a cached binary is present before invoking.
## Proposed solution
Rather than directly invoking a binary from the flutter cache using processutils, we should have a method on the cache that first checks for the file's existence, and if it isn't there tries to download it first, then invoke.
|
c: crash,tool,P2,team-tool,triaged-tool
|
low
|
Critical
|
580,200,288 |
youtube-dl
|
ert webtv
|
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2020.03.08. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that site you are requesting is not dedicated to copyright infringement, see https://yt-dl.org/copyright-infringement. youtube-dl does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
- Search the bugtracker for similar site support requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2020.03.08**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
<!--
Provide all kinds of example URLs support for which should be included. Replace following example URLs by yours.
-->
- Live stream: https://webtv.ert.gr/ert-sports-live/
## Description
<!--
Provide any additional information.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
ERT WebTV
Olympic Torch Relay 2020
```
C:\Users\andry>y -v https://webtv.ert.gr/ert-sports-live/
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-v', 'https://webtv.ert.gr/ert-sports-live/']
[debug] Encodings: locale cp1252, fs mbcs, out cp850, pref cp1252
[debug] youtube-dl version 2020.03.08
[debug] Python version 3.4.4 (CPython) - Windows-Vista-6.0.6002-SP2
[debug] exe versions: ffmpeg N-94698-g83e0b71-Reino, ffprobe N-94698-g83e0b71-Reino, rtmpdump 2.4
[debug] Proxy map: {}
[generic] ert-sports-live: Requesting header
WARNING: Falling back on generic information extractor.
[generic] ert-sports-live: Downloading webpage
[generic] ert-sports-live: Extracting information
ERROR: Unsupported URL: https://webtv.ert.gr/ert-sports-live/
Traceback (most recent call last):
  File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmp378onz6j\build\youtube_dl\YoutubeDL.py", line 797, in extract_info
  File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmp378onz6j\build\youtube_dl\extractor\common.py", line 530, in extract
  File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\ytdl-org\tmp378onz6j\build\youtube_dl\extractor\generic.py", line 3351, in _real_extract
youtube_dl.utils.UnsupportedError: Unsupported URL: https://webtv.ert.gr/ert-sports-live/
```
|
site-support-request
|
low
|
Critical
|
580,210,485 |
pytorch
|
[feature request]Support dilation parameter for unfold2d_* function (slow cpu maxpool2d #28733)
|
## Issue description
@soumith, @VitalyFedyunin, @karimhasebou, I was investigating the slow maxpool2d on CPU issue #28733.
I thought I could use the conv2d implementation, but two important functions, unfold2d_acc and unfold2d_copy (https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/cpu/Unfold2d.cpp), lack dilation parameters, which are required in maxpool2d. This also leads to slow conv2d with dilation on CPU.
Here is my proposal:
1. Either update the Unfold2d.cpp file by adding a dilation parameter, then address the other issues later. This plan involves lots of changes and work, but I can give it a try.
2. Or find a temporary solution for maxpool2d only: if there is no dilation, use the conv2d-equivalent implementation; if there is dilation, keep the current implementation.
What do you guys think?
## Code example
```python
import torch
import time
import torch.nn as nn

model = nn.Conv2d(in_channels=16, out_channels=16, kernel_size=3, stride=2, dilation=2)
net = nn.MaxPool2d(kernel_size=3, stride=2, dilation=2)
blob = torch.randn(1, 16, 5000, 5000, device='cpu')

t0 = time.time()
with torch.no_grad():
    outputs = model(blob)
print("PyTorch Conv: {}".format(time.time() - t0))

t0 = time.time()
with torch.no_grad():
    pred = net(blob)
print("PyTorch MaxPool: {}".format(time.time() - t0))
```
running time:
with dilation:
conv2d 2.02s
maxpool2d 2.32s
without dilation:
conv2d **0.35s**
maxpool2d 2.69s
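For illustration only, a Python-level sketch of the idea behind option 2 (the real change would live in the C++ kernels, and the helper name is made up): non-dilated max pooling can be emulated with an unfold (im2col) pass, which is the same layout trick the conv2d path uses.
```python
import torch
import torch.nn.functional as F

def maxpool2d_via_unfold(x, kernel_size, stride):
    n, c, h, w = x.shape
    cols = F.unfold(x, kernel_size=kernel_size, stride=stride)   # (N, C*k*k, L)
    cols = cols.view(n, c, kernel_size * kernel_size, -1)
    out = cols.max(dim=2).values                                  # max over each window
    oh = (h - kernel_size) // stride + 1
    ow = (w - kernel_size) // stride + 1
    return out.view(n, c, oh, ow)

x = torch.randn(1, 16, 64, 64)
ref = F.max_pool2d(x, kernel_size=3, stride=2)
print(torch.allclose(ref, maxpool2d_via_unfold(x, 3, 2)))  # True
```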
## System Info
PyTorch version: 1.5.0a0+a22008f
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Ubuntu 18.04.4 LTS
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
CMake version: version 3.14.0
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: GeForce RTX 2080 Ti
GPU 1: GeForce RTX 2080 Ti
Nvidia driver version: 430.50
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
Versions of relevant libraries:
[pip] numpy==1.18.1
[pip] torch==1.5.0a0+a22008f
[conda] blas 1.0 mkl
[conda] magma-cuda101 2.5.2 1 pytorch
[conda] mkl 2020.0 166
[conda] mkl-include 2020.0 166
[conda] mkl-service 2.3.0 py37he904b0f_0
[conda] mkl_fft 1.0.15 py37ha843d7b_0
[conda] mkl_random 1.1.0 py37hd6b4f25_0
[conda] torch 1.5.0a0+a22008f pypi_0 pypi
cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @VitalyFedyunin @albanD @mruberry @ngimel @heitorschueroff
|
module: performance,module: nn,triaged,module: mkldnn,module: pooling,function request
|
low
|
Critical
|
580,213,962 |
next.js
|
paths get url-encoded with next export and getStaticPaths
|
# Bug report
## Describe the bug
Paths get url-encoded when using `next export`. This causes them to get double-encoded when live.
## To Reproduce
test case: https://gitlab.com/meesvandongen/test-case-next-export
build & export the project, serve it and go to the relevant page:
* [/test/path%20with%20spaces](https://meesvandongen.gitlab.io/test-case-next-export/test/path%20with%20spaces)
doesn't work
* [/test/path%2520with%2520spaces](https://meesvandongen.gitlab.io/test-case-next-export/test/path%2520with%2520spaces)
works; because we encoded the percent in `%20` (space), giving `%2520`
## Expected behavior
The routes are exported without encoding.
## Additional context
This behaviour is different from exportPathMap, where the behaviour was as desired.
|
Pages Router
|
medium
|
Critical
|
580,230,784 |
PowerToys
|
[FZ Editor] Add a Ruler
|
Split from #585 as requested.
When creating new layouts, it would be very useful to have on-screen coordinates/a ruler. In addition to the previous suggestions (#585), this would aid in perfectly aligning the various zones, for those of us that must have precision in our lives.
|
Idea-Enhancement,FancyZones-Editor,Product-FancyZones
|
low
|
Major
|
580,247,575 |
pytorch
|
Pytorch not compatible with react native android
|
## 🐛 Bug
I started using PyTorch in my app, in which a module is already using React Native. The application crashes while running the model.
## To Reproduce
Steps to reproduce the behavior:
1. Download the sample React Native Android code
2. Integrate PyTorch
3. Try to load a sample model using PyTorch.
The app crashes with the following error:
java.lang.UnsatisfiedLinkError: couldn't find DSO to load: libpytorch_jni.so caused by: dlopen failed: cannot locate symbol "_ZN8facebook3jni11JByteBuffer5orderENS0_9alias_refINS0_10JByteOrderEEE" referenced by "/data/app/com.******-sAs98rDRDFL87ujKDr_Ctw==/lib/arm64/libpytorch_jni.so"...
## Expected behavior
App should not crash while loading model
- PyTorch Version (e.g., 1.4.0):
- OS (e.g., Linux):
- How you installed PyTorch (`conda`, `pip`, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information:
|
oncall: mobile
|
low
|
Critical
|
580,259,265 |
go
|
x/build: build infrastructure credentials should be recycled
|
After migrating all of the build infrastructure secrets into a secret management system #37171, we should recycle all of the secrets.
@toothrot @dmitshur @FiloSottile
|
Builders,NeedsFix
|
low
|
Minor
|
580,296,168 |
godot
|
Vehicle wheel doesn't spin in the air
|
Windows 10
Ryzen 5 2600, Nvidia GTX 1060
Godot 3.2
When applying an engine force to a wheel, it only spins when in contact with something.
I applied an engine force to a single wheel and also to the vehicle body, and while in the air the wheel mesh doesn't appear to be spinning and wheel.get_rpm() returns 0.
Also, when on the ground and skidding (I set friction slip to 0.1), the wheel doesn't keep rotating if you accelerate; it matches the speed of the road. For example, if the car is rolling backwards and you accelerate forwards, the wheels keep rotating backwards.
|
bug,confirmed,topic:physics,topic:3d
|
low
|
Major
|
580,297,862 |
go
|
net/http: Allow obtaining original header capitalization
|
### What version of Go are you using (`go version`)?
1.13
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
Windows, x64
### What did you do?
Make an HTTP request, read its headers, send those headers somewhere else.
### What did you expect to see?
Headers are preserved exactly
### What did you see instead?
Header capitalization gets canonicalized
I realize this has been discussed [before (about 7 years ago)](https://github.com/golang/go/issues/5022).
However, in the time since then many users have been adversely affected by this. E.g.
https://github.com/containous/traefik/issues/466
https://github.com/Azure/azure-storage-azcopy/issues/113
So I'd like to find out, would the Go team consider not a change to the current behaviour, but simply a way for an HTTPResponse to provide an _additional_ map, that maps canonicalizedName -> originalName. If we could just get that map, then those of us who really need header case preservation could use the information it contains to achieve what we need. (Find out the original capitalization when we read a response, and then directly manipulate the outbound request map when we forward that data on).
I'd be happy to contribute the code for the above, if we expected it to be accepted.
|
NeedsInvestigation
|
medium
|
Critical
|
580,341,326 |
puppeteer
|
Feature request: Implement environment variable for `--no-sandbox`
|
### Steps to reproduce
* Puppeteer version: all
* Platform / OS version: Docker
* Node.js version: All
**What steps will reproduce the problem?**
Running puppeteer as the root user fails with the common error: `Running as root without --no-sandbox is not supported. See https://crbug.com/638180.`. You can get around this by adding `--no-sandbox` to every invocation of puppeteer, but this is not easily feasible or desirable when there are thousands of tests present.
For our specific use case we are utilizing puppeteer to run integration tests, and chowning all files for suitable permissions takes way too long. We are looking at speeding this up by moving to a no-sandbox environment.
**One possible solution**
Please consider adding support for an environment variable, such as `PUPPETEER_DANGEROUS_NO_SANDBOX=true`, whenever this environment variable is set, run all tests as no-sandbox.
|
feature,good first issue,confirmed
|
medium
|
Critical
|
580,378,092 |
flutter
|
[in_app_purchase] Support InApp purchase local verification
|
InApp purchase plugin should support local/offline purchase verification
Android Purchase Verification:
https://developer.android.com/google/play/billing/billing_library_overview#Verify-purchase-device
iOS Receipt Validation: https://developer.apple.com/library/archive/releasenotes/General/ValidateAppStoreReceipt/Chapters/ValidateLocally.html
This is not only a security enhancement but also a functional bug fix. I can explain why.
In Android, when we purchase an auto-renewing subscription, the plugin returns that subscription when we query the past purchases and when the subscription is expired or the user cancels it, the plugin will not return the purchase when we query it. So in Android, this works as expected. But in iOS there is a behaviour change when the subscription is expired. The plugin still returns the purchase when we query the past purchase, which is unexpected in the perspective of the plugin user, especially if he successfully implemented the Android part and according to the user, this is a bug in the plugin.
This may be a problem of the iOS ecosystem that the validity of the receipt may be checked locally in order to determine whether the subscription is active or not. But since this is a plugin, it should abstract away all of the complexities like this and provide idiomatic APIs to handle the situation like this.
So the plugin should provide some basic offline verification of the purchase for security and functional consistency.
Tested in in_app_purchase: 0.2.0+7
|
c: new feature,p: in_app_purchase,package,c: proposal,team-ecosystem,P3,triaged-ecosystem
|
medium
|
Critical
|
580,388,985 |
TypeScript
|
No way to use composite project where outDir is determined by build tool
|
In https://github.com/microsoft/TypeScript/issues/37257 I discussed one way to host TS project references under Bazel, where each build step gets only `.d.ts` and `tsconfig.json` files from the referenced projects. This is broken because TS module resolution fails to locate the `.d.ts` files in their `outDir`.
In this issue, let's try a different approach for Bazel: to satisfy current TS module resolution, each build step should get the `.ts` sources of its referenced projects, as well as the `.tsbuildinfo` output.
Bazel is picky about the setting of `--outDir`, because it supports cross-compiled builds in distinct output locations. For example if you're on a Mac and building for local platform, the outDir is `<workspace root>/bazel-out/darwin-fastbuild/bin`. But if you're on a Mac and building for a docker image, you need native binaries to be for the target platform so outDir is `<workspace root>/bazel-out/k8-fastbuild/bin` and if you want binaries with debug information it's `<workspace root>/bazel-out/k8-dbg/bin`.
Since the outDir is platform-specific, we don't want to check in a config file that hard-codes it. So we prefer to always have Bazel pass the `--outDir` option on the command-line to `tsc`, rather than include it in the tsconfig files.
Now we come to the problem. TypeScript "composite" setting enables up-to-dateness checking in `tsc`. With this input structure, and trying to `tsc -p b`
```
a/a.ts
a/tsconfig.json (no outDir specified; a was compiled with tsc --outDir=bazel-out/darwin-fastbuild/bin/a)
b/b.ts
b/tsconfig.json
bazel-out/darwin-fastbuild/bin/a/a.d.ts
bazel-out/darwin-fastbuild/bin/a/tsconfig.tsbuildinfo
```
we get `b/b.ts(1,17): error TS6305: Output file '/private/var/tmp/_bazel_alx/df60115ea7f2a64e10fb4aa64b7d827f/sandbox/darwin-sandbox/54/execroot/ts_composite/a/a.d.ts' has not been built from source file '/private/var/tmp/_bazel_alx/df60115ea7f2a64e10fb4aa64b7d827f/sandbox/darwin-sandbox/54/execroot/ts_composite/a/a.ts'.`
This indicates that TS is looking for `a.d.ts` next to `a.ts`. If we hard-code the platform-dependent outDir into a/tsconfig.json like I've done here
https://github.com/alexeagle/ts_composite/pull/5 it solves this problem, but not in a way we can ship.
Note that the compilation of a/tsconfig.json produces a .tsbuildinfo file with a correct "outDir" (based on the --outDir flag that was passed to the compilation of `a`):
```
"options": {
"project": "../../../../a/tsconfig.json",
"outDir": "./",
}
```
So it seems like the behavior for compiling `b` is to read the `outDir` setting from a/tsconfig.json rather than trust the `options.outDir` reflected in the .tsbuildinfo output.
|
Needs Investigation
|
medium
|
Critical
|
580,403,020 |
flutter
|
SliverAppBar: Copy/Paste controls for TextField are enlarged when AppBar is expanded.
|
This example uses a NestedScrollView + SliverAppBar as a header. Inside the AppBar is a TextField.
http://screens.gskinner.com/shawn/2020-03-13_00-36-22.mp4
You can see that the Copy/Paste/Select all controls grow with the scale of the widget, which is not desired.
```
List<String> items = List.generate(30, (index) => "List Item $index");
return NestedScrollView(
headerSliverBuilder: (context, bool isScrolled) => <Widget>[
SliverAppBar(
pinned: true,
expandedHeight: 200,
flexibleSpace: FlexibleSpaceBar(
centerTitle: false,
titlePadding: EdgeInsets.only(left: Insets.med, bottom: Insets.med),
background: Opacity(opacity: .2, child: Image.network("https://picsum.photos/600/300", fit: BoxFit.cover)),
title: TextField(
decoration: InputDecoration(hintText: "Untitled List"),
style: TextStyle(color: Colors.white, fontWeight: FontWeight.bold),
showCursor: false),
),
)
],
body: ListView.builder(
itemExtent: 50,
itemCount: items.length,
itemBuilder: (_, int index) {
return Container(alignment: Alignment.center, child: Text(items[index]));
}),
);
```
|
a: text input,framework,f: material design,f: scrolling,a: quality,has reproducible steps,P2,found in release: 3.7,found in release: 3.9,team-design,triaged-design
|
low
|
Major
|
580,403,346 |
pytorch
|
RuntimeError: derivative for grid_sampler_2d_backward is not implemented
|
Hi
When trying to compute the second order derivative of `grid_sampler`, the following error occurs: `RuntimeError: derivative for grid_sampler_2d_backward is not implemented`.
It seems useful to support second order derivative for bilinear interpolation operations like this, given bilinear interpolation is used in some common operations like deformable convs and roialign.
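For context, a minimal repro sketch of the error (the shapes and flags here are arbitrary choices on my part):
```python
import torch
import torch.nn.functional as F

inp = torch.randn(1, 1, 4, 4, requires_grad=True)
grid = (torch.rand(1, 4, 4, 2) * 2 - 1).requires_grad_()   # normalized sampling coords

out = F.grid_sample(inp, grid, mode='bilinear', align_corners=True)
g_grid, = torch.autograd.grad(out.sum(), grid, create_graph=True)

# This second-order derivative is what fails with
# "derivative for grid_sampler_2d_backward is not implemented".
gg = torch.autograd.grad(g_grid.sum(), inp)
```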
Another side question: if I write a custom operation in CUDA, do I need to implement a function myself to support the second-order derivative of that operation? For example, the deformable conv operation in mmdet has its own CUDA implementation (see [here](https://github.com/open-mmlab/mmdetection/blob/master/mmdet/ops/dcn/deform_conv.py#L51)); is it possible to get the second-order derivative of this operation without having to write my own implementation?
Thanks a lot
|
triaged,module: interpolation
|
high
|
Critical
|
580,449,872 |
pytorch
|
PyTorch reports INTERNAL ASSERT FAILED at ..\torch\csrc\jit\ir.cpp:1529 when using torch.jit.script to convert a model
|
## 🐛 Bug
```
<!-- A clear and concise description of what the bug is. -->
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\torch\jit\_recursive.py", line 534, in recursive_script
return create_script_module(nn_module, infer_methods_to_compile(nn_module))
File "C:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\torch\jit\_recursive.py", line 296, in create_script_mod
ule
return create_script_module_impl(nn_module, concrete_type, cpp_module, stubs)
File "C:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\torch\jit\_recursive.py", line 336, in create_script_mod
ule_impl
script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
File "C:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\torch\jit\__init__.py", line 1593, in _construct
init_fn(script_module)
File "C:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\torch\jit\_recursive.py", line 328, in init_fn
scripted = recursive_script(orig_value)
File "C:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\torch\jit\_recursive.py", line 534, in recursive_script
return create_script_module(nn_module, infer_methods_to_compile(nn_module))
File "C:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\torch\jit\_recursive.py", line 296, in create_script_mod
ule
return create_script_module_impl(nn_module, concrete_type, cpp_module, stubs)
File "C:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\torch\jit\_recursive.py", line 340, in create_script_mod
ule_impl
create_methods_from_stubs(concrete_type, stubs)
File "C:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\torch\jit\_recursive.py", line 259, in create_methods_fr
om_stubs
concrete_type._create_methods(defs, rcbs, defaults)
RuntimeError: values[i]->type()->isSubtypeOf(value_type) INTERNAL ASSERT FAILED at ..\torch\csrc\jit\ir.cpp:1529, please
report a bug to PyTorch. (createDict at ..\torch\csrc\jit\ir.cpp:1529)
(no backtrace available)
```
## To Reproduce
Steps to reproduce the behavior:
1.
1.
1.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
## Environment
Please copy and paste the output from our
[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py)
(or fill out the checklist below manually).
You can get the script and run it with:
```
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```
(pytorch) PS D:\test> python .\collect_env.py
Collecting environment information...
PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Microsoft Windows 10 Pro
GCC version: Could not collect
CMake version: Could not collect
Python version: 3.8
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Versions of relevant libraries:
[pip] numpy==1.18.1
[pip] torch==1.4.0
[pip] torchvision==0.5.0
[conda] libblas 3.8.0 15_mkl https://mirrors.ustc.edu.cn/anaconda/cloud/conda-forge
[conda] libcblas 3.8.0 15_mkl https://mirrors.ustc.edu.cn/anaconda/cloud/conda-forge
[conda] liblapack 3.8.0 15_mkl https://mirrors.ustc.edu.cn/anaconda/cloud/conda-forge
[conda] liblapacke 3.8.0 15_mkl https://mirrors.ustc.edu.cn/anaconda/cloud/conda-forge
[conda] mkl 2020.0 166 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
[conda] pytorch 1.4.0 py3.8_cuda101_cudnn7_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch
[conda] torchvision 0.5.0 py38_cu101 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch
## Additional context
<!-- Add any other context about the problem here. -->
cc @suo
|
needs reproduction,oncall: jit,triaged
|
low
|
Critical
|
580,458,222 |
flutter
|
cannot find symbol import io.flutter.embedding.engine.plugins.lifecycle.FlutterLifecycleAdapter;
|
When I want to build the app for Android with the command "flutter build apk --release", I get this error:
```bash
cannot find symbol
import io.flutter.embedding.engine.plugins.lifecycle.FlutterLifecycleAdapter;
```
The error is inside the local_auth plugin. I opened it with Android Studio and saw that FlutterLifecycleAdapter cannot be imported;
there is only one class in that path, and its name is "HiddenLifecycleReference".
This is the output of my build command:
```bash
$ flutter build apk --release
You are building a fat APK that includes binaries for android-arm, android-arm64, android-x64.
If you are deploying the app to the Play Store, it's recommended to use app bundles or split the APK to reduce the APK size.
To generate an app bundle, run:
flutter build appbundle --target-platform android-arm,android-arm64,android-x64
Learn more on: https://developer.android.com/guide/app-bundle
To split the APKs per ABI, run:
flutter build apk --target-platform android-arm,android-arm64,android-x64 --split-per-abi
Learn more on: https://developer.android.com/studio/build/configure-apk-splits#configure-abi-split
Running Gradle task 'assembleRelease'...
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':vibrate:verifyReleaseResources'.
> A failure occurred while executing com.android.build.gradle.internal.tasks.Workers$ActionFacade
> 1 exception was raised by workers:
com.android.builder.internal.aapt.v2.Aapt2Exception: Android resource linking failed
C:\Users\Pedram\.gradle\caches\transforms-2\files-2.1\60a5a3b619f09ca92e9f1b4798715d52\core-1.0.0\res\values\values.xml:57:5-88:25: AAPT: error: resource android:attr/fontStyle not found.
C:\Users\Pedram\.gradle\caches\transforms-2\files-2.1\60a5a3b619f09ca92e9f1b4798715d52\core-1.0.0\res\values\values.xml:57:5-88:25: AAPT: error: resource android:attr/font not found.
C:\Users\Pedram\.gradle\caches\transforms-2\files-2.1\60a5a3b619f09ca92e9f1b4798715d52\core-1.0.0\res\values\values.xml:57:5-88:25: AAPT: error: resource android:attr/fontWeight not found.
C:\Users\Pedram\.gradle\caches\transforms-2\files-2.1\60a5a3b619f09ca92e9f1b4798715d52\core-1.0.0\res\values\values.xml:57:5-88:25: AAPT: error: resource android:attr/fontVariationSettings not found.
C:\Users\Pedram\.gradle\caches\transforms-2\files-2.1\60a5a3b619f09ca92e9f1b4798715d52\core-1.0.0\res\values\values.xml:57:5-88:25: AAPT: error: resource android:attr/ttcIndex not found.
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
* Get more help at https://help.gradle.org
BUILD FAILED in 24s
Running Gradle task 'assembleRelease'... ۲۵٫۰s
The built failed likely due to AndroidX incompatibilities in a plugin. The tool is about to try using Jetfier to solve the incompatibility.
Building plugin audioplayers...
Running Gradle task 'assembleAarRelease'... ۲٫۴s
√ Built build\app\outputs\repo.
Building plugin barcode_scan...
Running Gradle task 'assembleAarRelease'... ۲٫۷s
√ Built build\app\outputs\repo.
Building plugin connectivity...
Running Gradle task 'assembleAarRelease'... ۲٫۲s
√ Built build\app\outputs\repo.
Building plugin connectivity_macos...
Running Gradle task 'assembleAarRelease'... ۳٫۴s
√ Built build\app\outputs\repo.
Building plugin flutter_downloader...
Running Gradle task 'assembleAarRelease'... ۱۲٫۰s
√ Built build\app\outputs\repo.
Building plugin flutter_money_formatter...
Running Gradle task 'assembleAarRelease'... ۲٫۲s
√ Built build\app\outputs\repo.
Building plugin flutter_plugin_android_lifecycle...
Running Gradle task 'assembleAarRelease'... ۳٫۱s
√ Built build\app\outputs\repo.
Building plugin flutter_secure_storage...
Running Gradle task 'assembleAarRelease'... ۳٫۹s
√ Built build\app\outputs\repo.
Building plugin local_auth...
Running Gradle task 'assembleAarRelease'... ۲٫۵s
> Task :assembleAarRelease UP-TO-DATE
> Task :preBuild UP-TO-DATE
> Task :preReleaseBuild UP-TO-DATE
> Task :compileReleaseAidl NO-SOURCE
> Task :compileReleaseRenderscript UP-TO-DATE
> Task :checkReleaseManifest UP-TO-DATE
> Task :generateReleaseBuildConfig UP-TO-DATE
> Task :generateReleaseResValues UP-TO-DATE
> Task :generateReleaseResources UP-TO-DATE
> Task :packageReleaseResources UP-TO-DATE
> Task :processReleaseManifest UP-TO-DATE
> Task :generateReleaseRFile UP-TO-DATE
> Task :prepareLintJar UP-TO-DATE
> Task :generateReleaseSources UP-TO-DATE
> Task :javaPreCompileRelease
> Task :compileReleaseJavaWithJavac FAILED
> Task :mergeReleaseConsumerProguardFiles UP-TO-DATE
> Task :mergeReleaseShaders UP-TO-DATE
> Task :compileReleaseShaders UP-TO-DATE
> Task :generateReleaseAssets UP-TO-DATE
> Task :packageReleaseAssets UP-TO-DATE
> Task :packageReleaseRenderscript NO-SOURCE
> Task :processReleaseJavaRes NO-SOURCE
> Task :compileReleaseNdk NO-SOURCE
> Task :mergeReleaseJniLibFolders UP-TO-DATE
> Task :transformNativeLibsWithMergeJniLibsForRelease UP-TO-DATE
> Task :transformNativeLibsWithSyncJniLibsForRelease UP-TO-DATE
17 actionable tasks: 2 executed, 15 up-to-date
C:\Users\Pedram\AppData\Roaming\Pub\Cache\hosted\pub.dartlang.org\local_auth-0.6.1+3\android\src\main\java\io\flutter\plugins\localauth\LocalAuthPlugin.java:20: error: cannot find symbol
import io.flutter.embedding.engine.plugins.lifecycle.FlutterLifecycleAdapter;
^
symbol: class FlutterLifecycleAdapter
location: package io.flutter.embedding.engine.plugins.lifecycle
C:\Users\Pedram\AppData\Roaming\Pub\Cache\hosted\pub.dartlang.org\local_auth-0.6.1+3\android\src\main\java\io\flutter\plugins\localauth\LocalAuthPlugin.java:186: error: cannot find symbol
lifecycle = FlutterLifecycleAdapter.getActivityLifecycle(binding);
^
symbol: variable FlutterLifecycleAdapter
location: class LocalAuthPlugin
C:\Users\Pedram\AppData\Roaming\Pub\Cache\hosted\pub.dartlang.org\local_auth-0.6.1+3\android\src\main\java\io\flutter\plugins\localauth\LocalAuthPlugin.java:199: error: cannot find symbol
lifecycle = FlutterLifecycleAdapter.getActivityLifecycle(binding);
^
symbol: variable FlutterLifecycleAdapter
location: class LocalAuthPlugin
3 errors
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':compileReleaseJavaWithJavac'.
> Compilation failed; see the compiler error output for details.
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
* Get more help at https://help.gradle.org
BUILD FAILED in 2s
```
The plugin local_auth could not be built due to the issue above.
|
c: crash,platform-android,p: local_auth,package,a: build,P3,team-android,triaged-android
|
medium
|
Critical
|
580,478,181 |
nvm
|
[Bug/Feature] Incompatibility error
|
Hello,
Sorry if this is already an issue, but I have tried searching Google without any good results, so I want to help others.
OS: Ubuntu 18.04
NVM: latest (0.35.3)
OpenSSL: 1.1.0h (1.1.0h-2.0+ubuntu16.04.1+deb.sury.org+1)
npm: v8.17.0
This setup works, but when I change the version to v10.19.0 or v12.16.1 (the latest versions) I get the error: "error:2406C06E:random number generator:RAND_DRBG_instantiate:error retrieving entropy"
So I think there is an incompatibility between the OS, OpenSSL and Node.
When I downgrade to v8.17.0 it works fine.
Maybe there should be a requirements check for the npm/node version when installing?
|
needs followup
|
low
|
Critical
|
580,493,098 |
ant-design
|
Table header can't scroll when appending a new column
|
- [ ] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate.
### Reproduction link
[https://github.com/wang12124468/antd-bugs-table/tree/bugs-header-scroll-when-append-column](https://github.com/wang12124468/antd-bugs-table/tree/bugs-header-scroll-when-append-column)
### Steps to reproduce
1. Download from the repository
2. Then checkout branch bugs-header-scroll-when-append-column
3. Run the app and click the '追加一列' (append a column) button.
### What is expected?
Scroll the table, the body and header will scroll together.
### What is actually happening?
Scroll the table, the header can't scroll.
| Environment | Info |
|---|---|
| antd | 4.0.2 |
| React | * |
| System | * |
| Browser | * |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
|
🐛 Bug,Inactive,4.x
|
low
|
Critical
|
580,564,644 |
pytorch
|
torch.jit.script reports an error when using an index to subscript nn.ModuleList
|
## 🐛 Bug
torch.jit.script reports an error when using an index to subscript nn.ModuleList
## To Reproduce
Steps to reproduce the behavior:
1. test.py
<pre>import torch.nn as nn
class my_model(nn.Module):
def __init__(self):
super(my_model, self).__init__()
self.m = nn.ModuleList([nn.Conv3d(3, 64, kernel_size=(5, 7, 7), stride=(1, 2, 2), padding=(2, 3, 3), bias=False),
nn.BatchNorm3d(64),
nn.ReLU(True),
nn.MaxPool3d(kernel_size=(1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1))])
def forward(self, x):
for i in range(0, 4):
x = self.m[i](x)
return x
</pre>
2. import test.py and use torch.jit.script:
<pre>
(pytorch) PS D:\test> python
Python 3.8.2 | packaged by conda-forge | (default, Feb 28 2020, 16:38:51) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from test import *
>>> import torch
>>> model=my_model()
>>> model.eval()
my_model(
(m): ModuleList(
(0): Conv3d(3, 64, kernel_size=(5, 7, 7), stride=(1, 2, 2), padding=(2, 3, 3), bias=False)
(1): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): MaxPool3d(kernel_size=(1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1), dilation=1, ceil_mode=False)
)
)
>>> torch.jit.script(model)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\torch\jit\__init__.py", line 1255, in script
return torch.jit._recursive.recursive_script(obj)
File "C:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\torch\jit\_recursive.py", line 534, in recursive_script
return create_script_module(nn_module, infer_methods_to_compile(nn_module))
File "C:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\torch\jit\_recursive.py", line 296, in create_script_module
return create_script_module_impl(nn_module, concrete_type, cpp_module, stubs)
File "C:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\torch\jit\_recursive.py", line 340, in create_script_module_impl
create_methods_from_stubs(concrete_type, stubs)
File "C:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\torch\jit\_recursive.py", line 259, in create_methods_from_stubs
concrete_type._create_methods(defs, rcbs, defaults)
RuntimeError:
'module' object is not subscriptable:
File "D:\test\test.py", line 12
def forward(self, x):
for i in range(0, 4):
x = self.m[i](x)
~~~~~~~~ <--- HERE
return x
</pre>
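As a side note, a possible workaround sketch (this does not address the underlying TorchScript limitation): iterate over the ModuleList directly instead of indexing it with a runtime integer, which torch.jit.script can compile.
<pre>
import torch
import torch.nn as nn

class my_model(nn.Module):
    def __init__(self):
        super(my_model, self).__init__()
        self.m = nn.ModuleList([nn.Conv3d(3, 64, kernel_size=(5, 7, 7), stride=(1, 2, 2), padding=(2, 3, 3), bias=False),
                                nn.BatchNorm3d(64),
                                nn.ReLU(True),
                                nn.MaxPool3d(kernel_size=(1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1))])

    def forward(self, x):
        # Iterating over the ModuleList (instead of self.m[i]) is scriptable.
        for layer in self.m:
            x = layer(x)
        return x

scripted = torch.jit.script(my_model())
</pre>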
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
## Environment
<pre>
(pytorch) PS D:\test> python .\collect_env.py
Collecting environment information...
PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Microsoft Windows 10 Pro
GCC version: Could not collect
CMake version: Could not collect
Python version: 3.8
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Versions of relevant libraries:
[pip] numpy==1.18.1
[pip] torch==1.4.0
[pip] torchvision==0.5.0
[conda] libblas 3.8.0 15_mkl https://mirrors.ustc.edu.cn/anaconda/cloud/conda-forge
[conda] libcblas 3.8.0 15_mkl https://mirrors.ustc.edu.cn/anaconda/cloud/conda-forge
[conda] liblapack 3.8.0 15_mkl https://mirrors.ustc.edu.cn/anaconda/cloud/conda-forge
[conda] liblapacke 3.8.0 15_mkl https://mirrors.ustc.edu.cn/anaconda/cloud/conda-forge
[conda] mkl 2020.0 166 https://mirrors.ustc.edu.cn/anaconda/pkgs/main
[conda] pytorch 1.4.0 py3.8_cuda101_cudnn7_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch
[conda] torchvision 0.5.0 py38_cu101 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch
(pytorch) PS D:\test>
</pre>
## Additional context
<!-- Add any other context about the problem here. -->
cc @suo
|
triage review,oncall: jit
|
low
|
Critical
|
580,615,264 |
pytorch
|
Maybe gpu_kernel shouldn't ASSERT_HOST_DEVICE_LAMBDA
|
I am not really sure why this assert exists (don't you only need a device lambda for gpu kernels?), but I was recently working on #31278 and discovered a case where it was actively harmful to define things as a `__host__ __device__` lambda. Consider this code:
```
at::native::gpu_kernel(iter,
[seeds] GPU_LAMBDA (scalar_t count, scalar_t prob) {
curandStatePhilox4_32_10_t state;
curand_init(
seeds.first,
blockIdx.x * blockDim.x + threadIdx.x,
seeds.second,
&state);
auto uniform_lambda = [&state] GPU_LAMBDA () {
return curand_uniform(&state);
};
BaseSampler<accscalar_t, decltype(uniform_lambda)> standard_uniform(uniform_lambda);
```
This doesn't work, because lambdas with captures only work for pure `__device__` lambdas, not `__host__ __device__` lambdas. So to make it work I had to make a separate `__device__` only template function, and then call it from the `__host__ __device__` function. Why couldn't I just pass a device only lambda directly?
cc @colesbury
|
triaged
|
low
|
Major
|
580,653,550 |
godot
|
Function names can be built-in names, but do not override them
|
**Godot version:** 3.2.1
**OS/device including version:** Any
**Issue description:**
A function can be given the same name as a built-in (a built-in function or constant), but calling it without indexing is then invalid.
1. We either have to make built-in methods overridable (like Python),
2. or prevent function names from shadowing built-ins.
**Steps to reproduce:**

**Minimal reproduction project:**
N/A
|
topic:gdscript,documentation
|
low
|
Major
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.