Dataset columns:
id: int64 (393k – 2.82B)
repo: stringclasses (68 values)
title: stringlengths (1 – 936)
body: stringlengths (0 – 256k; ⌀ = may be null)
labels: stringlengths (2 – 508)
priority: stringclasses (3 values)
severity: stringclasses (3 values)
480,317,387
flutter
Elide updates to backing stores when the contents of that layer don't change.
During custom composition (custom embedders & iOS only today), the backing stores of all layers are rendered into and presented on each update. This happens even though some layers may not have been updated during that frame. These updates need to be elided. Once this optimization is in place, `FlutterLayer::did_update` can sometimes return `false`. Currently, it is always `true`.
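The proposed behavior can be sketched abstractly (Python for illustration only — the real embedder API is C, and `Layer`/`present` here are made-up names): a layer's backing store is re-rendered only when its contents changed, and `did_update` reports whether that happened.

```python
class Layer:
    """Illustrative stand-in for an embedder layer with a backing store."""

    def __init__(self, content):
        self.content = content
        self._last_rendered = None
        self.did_update = False  # mirrors FlutterLayer::did_update

    def present(self):
        # Elide the backing-store update when contents are unchanged.
        if self.content == self._last_rendered:
            self.did_update = False
        else:
            self._last_rendered = self.content
            self.did_update = True
        return self.did_update


layers = [Layer("a"), Layer("b")]
first = [l.present() for l in layers]   # every store is rendered once
layers[0].content = "a2"                # only layer 0 changes
second = [l.present() for l in layers]  # layer 1's update is elided
```

With the elision in place, the second pass touches only the changed layer, which is exactly when `did_update` would start returning `false`.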
platform-ios,engine,e: embedder,P2,team-ios,triaged-ios
low
Minor
480,318,949
flutter
window.locales returns different values depending on how the app is run
I was checking the value of `window.locales` at different points in the app's lifetime:

1. App start.
2. After `window.onLocaleChanged()` is called.
3. After user interaction.

## Steps to Reproduce
1. Create a new Flutter project.
2. Update the `main.dart` file as below.
3. Run the app with debugging; you will see result 1.
4. Kill the app and open it directly; you will see result 2.

```dart
import 'dart:async';
import 'dart:ui';

import 'package:flutter/material.dart';

void main() => runApp(MyApp());

class MyApp extends StatefulWidget {
  @override
  State<StatefulWidget> createState() => MyAppState();
}

class MyAppState extends State<MyApp> {
  var text = "empty";
  var textStreamController = StreamController<String>();

  @override
  void dispose() {
    textStreamController.close();
    super.dispose();
  }

  @override
  void initState() {
    super.initState();
    var localList = window.locales; // <----- this after: app start
    print(localList);
    text = 'Flutter Demo Home Page:${localList ?? "null"}';
    textStreamController.sink.add(text);
    window.onLocaleChanged = () {
      localList = window.locales; // <----- this after: onLocaleChanged
      text = 'Flutter Demo Home Page(New):${localList ?? "null"}';
      textStreamController.sink.add(text);
    };
  }

  void onClick() {
    var localList = window.locales; // <----- this after: user interact
    text = 'Flutter Demo Home Page(Click):${localList ?? "null"}';
    textStreamController.sink.add(text);
    showToast(localList ?? 'null');
  }

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: Material(
        child: Center(
          child: Column(
            mainAxisAlignment: MainAxisAlignment.center,
            children: <Widget>[
              Text(text),
              StreamBuilder(
                stream: textStreamController.stream,
                builder: (context, AsyncSnapshot<String> snapshot) {
                  return Text(
                    '${snapshot.data}',
                    style: TextStyle(color: Colors.green),
                  );
                },
              ),
              FlatButton(
                onPressed: onClick,
                color: Colors.blue,
                child: Text('Click'),
              )
            ],
          ),
        ),
      ),
    );
  }
}
```

### The result when the app is run with debugging
[![enter image description here][1]][1][![enter image description here][2]][2]

### The result when the app is run directly (not debugging)
[![enter image description here][3]][3][![enter image description here][4]][4]

## Is this the expected design?
- The locales have no initial value when the app is `directly run`?
- The locales have an initial value when `debugging`?

[1]: https://i.stack.imgur.com/iti9G.png
[2]: https://i.stack.imgur.com/pJf17.png
[3]: https://i.stack.imgur.com/30qaG.png
[4]: https://i.stack.imgur.com/uGUjH.png
framework,a: internationalization,d: api docs,P2,team-framework,triaged-framework
low
Critical
480,324,861
pytorch
TensorIterator "builder" options should be documented.
We got rid of the TensorIterator builder for efficiency reasons, but the current documentation is now pretty confusing. For example, it isn't clear which members are options (maybe they should start with a common prefix?), and none of them actually have documentation. For instance: https://github.com/pytorch/pytorch/blob/bd054e7cef00da543b80844aeb7ae0ac78c91675/aten/src/ATen/native/TensorIterator.h#L281-L283 — I have no idea what Device and Dtype are for.
module: internals,triaged
low
Minor
480,330,889
flutter
Document the structs in the embedder API that are sealed.
This can be done via comments in the header. Most structs are safe to modify by appending members at the end of the struct. However, some are not (these are the ones without `struct_size` or the ones that are non-pointer union members in other structs.)
engine,d: api docs,e: embedder,P3,team-engine,triaged-engine
low
Minor
480,332,924
go
runtime: unexpected fault address runtime.memhash16
### What version of Go are you using (`go version`)?
<pre>
go version go1.12.5 linux/amd64
</pre>

### Does this issue reproduce with the latest release?
Yes

### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ cat /etc/*release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04.5 LTS"
NAME="Ubuntu"
VERSION="14.04.5 LTS, Trusty Tahr"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 14.04.5 LTS"
VERSION_ID="14.04"
$ uname -a
Linux {pod name} 4.15.0-1033-aws #35-Ubuntu SMP Wed Feb 6 13:29:46 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
</pre></details>

### What did you do?

### What did you expect to see?

### What did you see instead?
Our application is occasionally crashing with the following error:

```
unexpected fault address 0x432200
fatal error: fault
[signal SIGSEGV: segmentation violation code=0x2 addr=0x432200 pc=0x432200]

goroutine 2373 [running]:
runtime.throw(0x1306be9, 0x5)
	/root/.gimme/versions/go1.12.5.linux.amd64/src/runtime/panic.go:617 +0x72 fp=0xc007f9cd28 sp=0xc007f9ccf8 pc=0x45dc92
runtime.sigpanic()
	/root/.gimme/versions/go1.12.5.linux.amd64/src/runtime/signal_unix.go:397 +0x401 fp=0xc007f9cd58 sp=0xc007f9cd28 pc=0x473251
runtime.memhash16(0xc007f9ce1e, 0xd2bd649e, 0x0)
	/root/.gimme/versions/go1.12.5.linux.amd64/src/runtime/alg.go:56 fp=0xc007f9cd60 sp=0xc007f9cd58 pc=0x432200
runtime.mapassign(0x1146c80, 0xc001b329f0, 0xc007f9ce1e, 0x11)
	/root/.gimme/versions/go1.12.5.linux.amd64/src/runtime/map.go:593 +0x73 fp=0xc007f9cde8 sp=0xc007f9cd60 pc=0x43e273
...snip...
```

The code in question looks like:

```golang
type MessageReader struct {
	//...
	latency map[string]map[uint16]int64
}

func (mr *MessageReader) captureLatency(timestamp int64, cluster string, partition uint16) {
	if _, ok := mr.latency[cluster]; !ok {
		mr.latency[cluster] = make(map[uint16]int64)
	}
	mr.latency[cluster][partition] = time.Now().Unix() - timestamp
}
```

This particular application is processing messages and calling this once per message at a rate of ~75k / second. We've seen this happen twice over the past week or so.
NeedsInvestigation,compiler/runtime
low
Critical
480,353,224
bitcoin
ASN-based bucketing of the network nodes
Currently we bucket peers (or potential peers) based on /16 network groups, which directly correlate to IP addresses. This is done to diversify the connections every node maintains, for example to avoid connecting to nodes that all belong to the same region/provider. Currently peers.dat (the serialized version of addrman) does not store ip->bucket mappings explicitly, and all the known IPs from peers.dat are re-hashed and re-bucketed at every restart (although that's very cheap).

## Idea
It was [recently](http://www.erisian.com.au/bitcoin-core-dev/log-2019-06-20.html) suggested by @TheBlueMatt to use ASN-based bucketing instead. This is strictly better if the goal is to diversify connections: the distribution of IPs among ASNs is not uniform, and because of that netgroup-based bucketing may result in having 8 peers from just 2 large ASNs. If we allow connecting to each ASN at most once, this would increase the security of the network. We have @sipa's script to create a compressed representation of the mapping (ip->ASN), which is less than 2 megabytes. However, there are integration-related design questions.

### Distribution of the .map file
During the meeting there was rough consensus (not unanimous though, @jnewbery) that the mapping file should be distributed along with the release, instead of becoming part of the binary. If you want to question this, feel free to comment below.

### Legacy /16 bucketing
There was a suggestion to keep the old method available as well. I think we should do it.

### ~Loading the map~
Maybe there will be concerns here; I have an approach in mind for now.
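To illustrate why the two bucketing schemes differ, here is a toy comparison in Python (the IPs and the `ip_to_asn` mapping are invented stand-ins for @sipa's compressed map): peers that look diverse under /16 grouping can collapse into a single ASN bucket.

```python
def netgroup16(ip):
    # /16 netgroup: the first two octets of an IPv4 address
    a, b, *_ = ip.split(".")
    return f"{a}.{b}"

# Hypothetical ip->ASN mapping (stand-in for the compressed .map file)
ip_to_asn = {
    "203.0.113.5":  "AS64500",
    "203.1.114.9":  "AS64500",   # different /16, but the same AS
    "198.51.100.7": "AS64501",
}

peers = list(ip_to_asn)
groups_16 = {netgroup16(ip) for ip in peers}    # 3 distinct /16 groups
groups_asn = {ip_to_asn[ip] for ip in peers}    # only 2 distinct ASNs
```

Under /16 bucketing the three peers look fully diverse; under ASN bucketing two of them share a bucket, which is the property the proposal wants to limit connections by.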
P2P
medium
Critical
480,367,195
pytorch
Check PyTorch version when initializing process groups
@VitalyFedyunin asked whether `c10d` checks PyTorch versions during initialization. IIUC, we don't check that currently, and it will work in the best-effort manner, which might lead to weird errors (e.g., when pickling format changes). @pietern @zhaojuanmao @pritamdamania87 Please do correct me if I misunderstand the implementation. cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera
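A hedged sketch of what such a check might look like (plain Python; `parse_version` and `check_versions` are hypothetical helpers, not actual c10d APIs): ranks exchange version strings during initialization and fail fast on a (major, minor) mismatch instead of hitting weird pickling errors later.

```python
def parse_version(v):
    # "1.2.0a0+abc123" -> (1, 2, 0); tolerate dev/local suffixes
    core = v.split("+")[0].split("a")[0]
    return tuple(int(x) for x in core.split(".")[:3])

def check_versions(my_version, peer_versions):
    """Raise if any peer reports a different (major, minor) than ours."""
    mine = parse_version(my_version)[:2]
    for rank, v in enumerate(peer_versions):
        if parse_version(v)[:2] != mine:
            raise RuntimeError(
                f"rank {rank} runs PyTorch {v}, this rank runs {my_version}"
            )

check_versions("1.2.0", ["1.2.0", "1.2.0"])  # matching versions: no error
```

The real implementation would piggyback the version string on the existing rendezvous/handshake step; this only shows the comparison logic.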
oncall: distributed,module: bootcamp,triaged,enhancement
low
Critical
480,389,102
puppeteer
page.addScriptTag({content: 'window.__foobar'}) should throw when CSP prevents operation
With https://crrev.com/683391, Chrome no longer throws exceptions when it does not execute inline script tags due to CSP. We can fix this by probing the runtime first — checking whether inline scripts have any power. Alternatively, we might want to check the current page's CSP policy.
bug,confirmed,P3
low
Minor
480,417,863
rust
Desugaring of struct update syntax is unnecessarily restrictive for private fields
Take the following example struct:

```rust
mod module {
    #[derive(Default)]
    pub struct Foo {
        pub a: i32,
        b: u32
    }
}
```

Let's say you don't have access to `b`, and you want to create an instance of `Foo` with a non-default value for `a`. The idiomatic way to do that is with the struct update syntax, like so:

```rust
let foo = Foo { a: 2, ..Foo::default() };
```

Right now, that fails with the following error:

```
error[E0451]: field `b` of struct `main::module::Foo` is private
  --> src/main.rs:15:7
   |
15 |     ..Foo::default()
   |       ^^^^^^^^^^^^^^ field `b` is private
```

That error is unintuitive since we never directly reference `b` in our source code. What's more, it's entirely unnecessary, since it's merely a side effect of what seems to be the current desugaring strategy, as follows:

```rust
let foo = {
    // we create temp_a before calling Foo::default() to preserve source code ordering
    let temp_a = 2;
    let temp = Foo::default();
    Foo { a: temp_a, b: temp.b }
};
```

Ideally we'd desugar to the following, which would allow the initial example to compile as expected:

```rust
let foo = {
    let temp_a = 2;
    let mut temp = Foo::default();
    temp.a = temp_a;
    temp
};
```

This issue proposes moving to the second method. I can't see any disadvantages to doing that, although that may just be because I'm not looking closely enough. This admittedly changes the language's behavior, but I'm submitting it as an issue instead of an RFC because the current behavior seems unintentional and this is an extremely minor change. I'll happily create an RFC if that's necessary, though.
T-lang,C-feature-request
low
Critical
480,435,637
vscode
Bring the "track changes" feature to VS Code
Anyone who has worked with Visual Studio is likely familiar with the "track changes" feature, which adds green/yellow bars to the side of the editor, as seen in the attached image. ![TrackChangesIndicators](https://user-images.githubusercontent.com/36111895/62987162-7bf43e00-be0c-11e9-8fdd-f9ca3d2a9060.png) Green bars are new lines added and saved since opening the file; yellow bars are unsaved lines. Unfortunately, this doesn't exist in VS Code. However, VS Code does have a very similar feature called "gutter indicators", as mentioned in [the documentation](https://code.visualstudio.com/docs/editor/versioncontrol#_gutter-indicators), but it only applies if you're using version control, whereas in Visual Studio the feature works with locally stored projects/solutions as well. Since the functionality already exists in VS Code (for version control), are there any plans to extend it to local files as well, just like in Visual Studio?
feature-request,editor-contrib
medium
Major
480,443,545
scrcpy
Auto reconnect mode
It would be nice if the app would wait for the same device to reconnect to it automatically (in case of a restart etc)
feature request
medium
Major
480,443,817
scrcpy
Better support for wireless adb
I would love to have a dedicated input for the wifi IP, or maybe scanning. Currently I'm using:

```batch
@echo off
set ip=192.168.2.50:5555
title SCRCPY - Wireless (IP: %ip%)
adb devices
:START
adb connect %ip%
scrcpy --bit-rate 2M --max-size 700 --show-touches --turn-screen-off
REM timeout /t 3
GOTO START
```
feature request
low
Minor
480,448,439
pytorch
Port `masked_fill` operator from the TH code to Aten
`masked_fill` is a point-wise operator, so porting it from the TH code to ATen (and TensorIterator) is expected to be easy. Such a migration will help clean up the code, simplify dispatch, and provide an immediate 2-3x operator performance gain.

Porting guide: https://github.com/pytorch/pytorch/wiki/TH-to-ATen-porting-guide
Example PR with the porting of adaptive_avg_pool2d: https://github.com/pytorch/pytorch/pull/14714
How to use TensorIterator: https://github.com/pytorch/pytorch/wiki/How-to-use-TensorIterator
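For reference, the point-wise semantics being ported are simple; a plain-Python toy version (no torch, purely illustrative of what the TensorIterator elementwise loop computes) is:

```python
def masked_fill(values, mask, fill):
    # Point-wise: out[i] = fill where mask[i] is true, else values[i].
    # A TensorIterator port implements exactly this kind of elementwise loop.
    return [fill if m else v for v, m in zip(values, mask)]

out = masked_fill([1, 2, 3, 4], [False, True, False, True], 0)
# out == [1, 0, 3, 0]
```

Because each output element depends only on the corresponding input and mask elements, the operator fits TensorIterator's elementwise model directly.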
module: cuda,module: cpu,triaged,module: porting,better-engineering
low
Major
480,465,751
flutter
TextField doesn't appear within a direction:Axis.vertical Wrap
code: ![image](https://user-images.githubusercontent.com/8746914/62993013-4bea8080-be88-11e9-84ea-93f3908b57bd.png) screenshot: <img src="https://user-images.githubusercontent.com/8746914/62993022-5442bb80-be88-11e9-9b76-9d620f7147fa.png" width ="300">
a: text input,framework,f: material design,has reproducible steps,P2,found in release: 3.3,found in release: 3.7,team-design,triaged-design
low
Minor
480,488,591
go
cmd/compile: reflected methods have extra wrapping
### What version of Go are you using (`go version`)?
1.12.7

### Does this issue reproduce with the latest release?
Probably.

### What operating system and processor architecture are you using (`go env`)?
amd64, or amd64p32. Reproduces on playground, probably not specific to it.

### What did you do?
https://play.golang.org/p/tEu9qKfYdEB

Obtain a method using (reflect.Value).MethodByName(), convert it to a function type, and call it. The method has a stack trace.

### What did you expect to see?
A function pointer identical to one I could call directly.

### What did you see instead?
A fancy wrapped function pointer which is doing some kind of marshalling and then unmarshalling of its parameters.

```
trace:
goroutine 1 [running]:
main.(*Updater).Update(0x414020, 0x7d0, 0x13, 0x0)
	/tmp/sandbox371765888/prog.go:15 +0x80
main.callFn(...)
	/tmp/sandbox371765888/prog.go:22
main.main()
	/tmp/sandbox371765888/prog.go:29 +0x160

trace:
goroutine 1 [running]:
main.(*Updater).Update(0x414020, 0x7d0, 0x0, 0x446494)
	/tmp/sandbox371765888/prog.go:15 +0x80
reflect.callMethod(0x43e320, 0x454f70, 0x454f64, 0x0)
	/usr/local/go/src/reflect/value.go:690 +0x180
reflect.methodValueCall(0x7d0, 0x414020, 0x201, 0x138901, 0x121ac0, 0x43e320, 0x125060, 0x414020, 0x213, 0x1d, ...)
	/usr/local/go/src/reflect/asm_amd64p32.s:35 +0x40
main.callFn(...)
	/tmp/sandbox371765888/prog.go:22
main.main()
	/tmp/sandbox371765888/prog.go:30 +0x180
```

(The stray "callFn" is an attempt to ensure that the compiler isn't just outsmarting me here by taking short-cuts when it can easily see that the method it's calling is a method of a local object...) It seems to me like it would be really nice if there were a way for the methods reflect.MethodByName yields to be the same functions that you get from method values.
NeedsInvestigation,compiler/runtime
low
Minor
480,494,643
flutter
Video_player plugin: Get the current played frame number
I was wondering if there is/will be any feature to get the currently played frame of a video in progress. Right now, when I use the plugin, I can easily attach a listener callback to the videoController, and I can easily do stuff like this:

```
videoPlayerController.addListener(() {
  print("VIDEO POSITION IS ${videoPlayerController.value.position.inMilliseconds}");
  print("VIDEO DURATION (s) ${videoPlayerController.value.duration.inSeconds}");
  print("VIDEO DURATION (milsec) ${videoPlayerController.value.duration.inMilliseconds}");
  print("VIDEO DUR (mcrosec) ${videoPlayerController.value.duration.inMicroseconds}");
});
```

But I am really not sure how to proceed (if possible) to get the current frame played at any time while the video is playing. Or, for example, if I pause the video and slide to another time position, how do I know which frame I am on? Would this be an interesting feature to develop?
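One possible approximation (sketched in Python rather than Dart; `frame_for_position` is a hypothetical helper, and the frame rate must be known out-of-band since the plugin does not expose it): derive the frame index from the playback position.

```python
def frame_for_position(position_ms, fps):
    """Approximate the current frame number from the playback position.

    Assumes a constant frame rate; rounding down matches the frame
    currently being displayed.
    """
    return int(position_ms * fps / 1000)

# e.g. 1500 ms into a 30 fps video -> frame 45
frame = frame_for_position(1500, 30)
```

This is only an approximation for variable-frame-rate videos, which is presumably why a real plugin-level API would be preferable.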
c: new feature,p: video_player,package,team-ecosystem,P3,triaged-ecosystem
low
Minor
480,525,938
pytorch
Value_select to perform region-wise selection
## πŸš€ Feature
```python
import torch

a = torch.tensor([[0, 0, 1, 1, 1], [0, 0, 1, 1, 2], [3, 3, 3, 3, 2]])
c = 2
feat = torch.randn(c, *a.size())
num_sp = a.max().long() + 1

# For now, I have to use torch.masked_select many times
output = []
for i in range(num_sp):
    # (c*1), mean of each SP.
    output.append(torch.masked_select(feat, a == i).view(c, -1).mean(-1, keepdim=True))
output = torch.stack(output)

print('feat')
print(feat)
print('ref_mask')
print(a)  # the reference mask (shown as ref_mask below)
print('output')
print(output)

# Something like this would be better:
output, return_index = torch.value_select(feat, mask_ref=a, values=torch.arange(num_sp))
```

```bash
feat
tensor([[[-0.6140, -1.5337,  1.4974, -1.2668, -0.3866],
         [-0.1943,  0.0868,  1.0380,  0.1135,  1.1820],
         [-1.4744,  1.5889, -0.2627, -1.7951, -0.0045]],

        [[-0.8168, -0.6949, -1.9554, -0.5807, -0.2200],
         [-0.7170, -0.8191,  0.5452,  1.6736, -0.6598],
         [-0.6220,  0.8880, -0.6597,  0.6438, -1.5148]]])
ref_mask
tensor([[0, 0, 1, 1, 1],
        [0, 0, 1, 1, 2],
        [3, 3, 3, 3, 2]])
output
tensor([[[-0.5638],
         [-0.7619]],

        [[ 0.1991],
         [-0.1075]],

        [[ 0.5887],
         [-1.0873]],

        [[-0.4858],
         [ 0.0625]]])
```

## Motivation
In some cases, a label matrix (`ref_mask` above) indicates which class each pixel should be categorized into. Suppose we get some labels from a super-pixel algorithm, and I want to calculate the mean of each SP region or perform other operations per region. Then a `value_select` function is quite useful. In a word, it performs region-wise selection and would be useful for other tasks too.

## Pitch
As the number of pixels contained in each region is not constant, we need to return the indexes for each region, logged in `return_index`.
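The per-region mean computed by the masked_select loop above can also be expressed in plain Python — a torch-free toy of the proposed `value_select` semantics (the function and its `mask_ref`/`values` names come from the proposal; no such API exists in PyTorch):

```python
def value_select(feat, mask_ref, values):
    # For each label v, collect the feat elements where mask_ref == v,
    # returning per-region means plus each region's member indexes
    # (the proposal's return_index).
    means, return_index = [], []
    for v in values:
        idx = [i for i, m in enumerate(mask_ref) if m == v]
        means.append(sum(feat[i] for i in idx) / len(idx))
        return_index.append(idx)
    return means, return_index

feat = [1.0, 2.0, 3.0, 4.0]
mask = [0, 1, 0, 1]
means, idx = value_select(feat, mask, [0, 1])
# means == [2.0, 3.0], idx == [[0, 2], [1, 3]]
```

A native implementation would do this in one pass over the tensor instead of one `masked_select` per label.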
triaged,function request
low
Minor
480,589,306
vscode
macOS Text Selection: shift+left should expand selection
- VSCode Version: 1.37.0 - OS Version: macOS Mojave 10.14.6 On macOS, the standard text selection behaviour is to expand the current selection when shift + the left arrow key is pressed: > Here is the **example** text with 'example' selected (e.g. by double-clicking the text) > Here is th**e example** text on macOS in most applications after pressing shift + the left arrow key twice. > Here is the **exampl**e text on macOS in Visual Studio code after the same action. To be consistent with platform conventions, shift + the left arrow should ideally expand the selection. I believe the current behaviour matches the behaviour on Windows. If there is already a configuration option that changes this behaviour, then I would suggest that the default on macOS be changed to the standard system behaviour.
feature-request,macos,editor-commands
low
Major
480,590,273
TypeScript
Documentation: usage of types/interfaces defined in modules on global
## Search Terms
- export from module to global
- global namespace
- declare as global
- assign to global
- globalThis

## Suggestion
Documentation says that
> It can be surprisingly difficult to access or declare values in the global scope, perhaps because we're writing our code in modules...

(https://devblogs.microsoft.com/typescript/announcing-typescript-3-4/#type-checking-for-globalthis)

It shows an example of how a typed variable may be declared in the global scope. But it doesn't cover the scenario where one needs to declare a variable/function/namespace which uses a type/interface declared in some module. I think the documentation should clearly explain that it's not possible, or show an example of how to do it (if it is possible).

## Use Case
One creates a function with an input parameter and/or return type declared in a module. She wants to make this function available on global. It's an anti-pattern in the same way as augmenting types from external modules, but as discussed in #5292 it exists in real life. One may have a well-organised, non-polluting-global, modules-based codebase. But at some point, for some reason, she has to provide an interface for a global consumer. Moving all exposed interfaces and types to global and rewriting the codebase which used to consume them from modules would be ugly and painful.

## Example
```typescript
import { SomeType, SomeInterface } from 'some/module';

function functionForGlobalConsumer(input: SomeType): SomeInterface {
  // some code...
}

globalThis.functionForGlobalConsumer = functionForGlobalConsumer;
```

## Checklist
My suggestion meets these guidelines:
* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
Docs
low
Minor
480,602,481
electron
feat: implement prevent shutdown / sign out procedure
### Preflight Checklist
* [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/master/CONTRIBUTING.md) for this project.
* [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/master/CODE_OF_CONDUCT.md) that this project adheres to.
* [x] I have searched the issue tracker for a feature request that matches the one I want to file, without success. (There was something like this, but it has been closed; I hope this will receive new consideration.)

### Problem Description
OS: Windows

I would like a way to prevent Windows sign out/shutdown until I finish/stop my running tasks.

### Proposed Solution
Add the following functions:

`app.preventShutDown(reason)`
When it is called, the following should happen:
- [ ] call the `ShutdownBlockReasonCreate` API to prevent the shutdown.
- [ ] all windows that receive the `WM_QUERYENDSESSION` event should return false

`app.releaseShutDown()`
This function should call `ShutdownBlockReasonDestroy`, which will allow the sign out/shutdown of Windows.

### Alternatives Considered
For now, the method I use to achieve this result is the node-ffi addon. But that is a complicated and not very efficient method.
enhancement :sparkles:
low
Minor
480,618,392
electron
globalShortcuts don't work for alternate keyboards on Mac 10.14.6
* **Electron Version:** 6.0.2
* **Operating System:** macOS 10.14.6

### Expected Behavior
globalShortcuts work regardless of the chosen keyboard layout.

### Actual Behavior
On Mac, globalShortcuts seem to default to either qwerty or the keyboard's layout (I think? I don't have a dvorak keyboard) rather than the chosen keyboard layout.

### To Reproduce
Here's a fiddle that I tested on all three platforms. Windows works perfectly, Linux works with the caveat that it doesn't pick up keyboard changes made while the app is running, and Mac is broken. https://gist.github.com/Kilian/995123ef2e31d6d15ee147a613831bed

To test this, add dvorak to your keyboard layouts. If set to Dvorak, the `l` is in the position of the `p` on a qwerty layout. If you press `ctrl + p` on a physical qwerty keyboard but are using the dvorak layout, that should correspond to `ctrl + l`, and pressing it should give a console.log in the fiddle.
platform/macOS,bug :beetle:,status/confirmed,5-0-x,7-0-x,8-x-y,6-1-x,7-1-x,9-x-y,10-x-y,11-x-y
medium
Critical
480,642,594
rust
How to install a single target from source without building the whole compiler?
I'm using Arch Linux and my distribution provides a (usually) up-to-date version of rustc & cargo. In particular, I have a system-wide copy of the `x86_64-unknown-linux-gnu` target, placed in `/usr/lib/rustlib`. Please note that rustup is not used here at all.

Sometimes I need to cross-compile to other targets, such as `x86_64-pc-windows-gnu` or `wasm32-wasi`. This has issues, for instance https://github.com/rust-lang/rust/issues/49078. It also results in unnecessary duplication. Currently it's possible to build a Rust cross compiler manually, such as [the rust-mingw AUR package](https://aur.archlinux.org/packages/mingw-w64-rust/). Still, this requires building the whole of LLVM, the compiler, and cargo, which takes a lot of time, and it is basically emulation of rustup without rustup (there are separate copies of rustc, cargo, etc.).

Is it possible to compile only the required bits, so that one can just supplement the existing system-wide instance of Rust with a new target? We can assume that their versions will match.

For reference, the rustup steps for `wasm32-wasi` are:
```
rustup target add wasm32-wasi
```
for `x86_64-pc-windows-gnu`:
```
rustup install stable-x86_64-pc-windows-gnu
rustup target add x86_64-pc-windows-gnu
```
A-cross,T-compiler,C-feature-request
low
Minor
480,669,087
TypeScript
"Move to new file" breaks imports when using re-exports
**TypeScript Version:** 3.5.2

**Search Terms:** re-export es modules import move to new file

**Code**

Given:

```ts
// ./foo.ts
export const bar = 1;
export const foo = 1;
```

```ts
// ./re-exports.ts
export { foo } from './foo';
```

```ts
// ./main.ts
import { foo } from './re-exports';
console.log(foo);
```

If I action "move to new file" on this line from `./foo.ts`:

```ts
export const foo = 1;
```

**Expected behavior:**
- The import in `./main.ts` should still work

**Actual behavior:**
- The import in `./main.ts` is now broken (because the re-export in `./re-exports.ts` is now broken)

**Related Issues:** https://github.com/microsoft/TypeScript/issues/32344
Bug,Domain: Refactorings
low
Critical
480,688,273
rust
warning lint on chained assignment
(This is a first impl of https://users.rust-lang.org/t/warn-about-using-the-value-of-an-assignment-expression/31324, thanks to @sourcefrog)

Code like `a = b = 3` is a "common" pattern in C, Python and other languages. People who are coming to Rust may be surprised to see that it doesn't work the intended way. There's no warning whatsoever, just a clippy lint:

```text
warning: this let-binding has unit value
 --> src/main.rs:2:5
  |
2 |     let a;
  |     ^^^^^^
```

I managed to implement this:

```text
warning: chaining assignment is not supported in Rust
 --> a.rs:4:5
  |
4 |     a = b = 3;
  |     ^^^^^^^^^
  |
  = note: `#[warn(chained_assignment)]` on by default
```

But I have a few questions. What does the user actually want? Does he want
1. `a = 3; b = 3`
2. `a = b == 3`

The first one only works if the value is `Copy` for obvious reasons. The second one only works if the value implements `PartialEq`. I would like to see this in the Rust compiler, what do you think? What is missing? Am I overlooking something? Is everything covered? Would it break any existing code?
C-enhancement,A-lints,A-diagnostics,T-lang,T-compiler,A-suggestion-diagnostics
low
Major
480,704,065
opencv
Documentation deprecated
Hi,

OpenCV version > 4.1.1

Please update the docs to remove features that have disappeared:
https://docs.opencv.org/4.1.1/d9/d80/classcv_1_1cuda_1_1CascadeClassifier.html
https://answers.opencv.org/question/205087/how-to-use-cascadeclassifiercreate-in-opencv-40/

Or add a big "no longer working, go back to 3.4.XX...".

And remove sample code that therefore cannot work (samples/gpu/cascadeclassifier.cpp), or call out a big "no longer working, go back to 3.4.XX..." there as well.
category: documentation,category: gpu/cuda (contrib),Hackathon
low
Minor
480,769,119
kubernetes
JSONPatch accepts invalid paths with a spurious leading segment
**What happened**: ```sh kubectl patch namespaces/default -o json --type json -p '[{"op":"replace", "path":"bogus/metadata/annotations","value":{"k":"v"}}]' ``` The patch was applied and metadata.annotations modified **What you expected to happen**: The invalid path (no leading `/`) would be rejected **How to reproduce it (as minimally and precisely as possible)**: **Anything else we need to know?**: The JSONPatch library splits the path by `/` and discards the first segment, assuming it is empty: https://github.com/kubernetes/kubernetes/blob/f6a70ef27163d1171412b6f97049968896b8d1f9/vendor/github.com/evanphx/json-patch/patch.go#L307-L313 Before fixing this, we should consider the compatibility implications of rejecting patches that were previously accepted. /sig api-machinery /cc @jpbetz
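A minimal sketch of the stricter behavior being proposed (Python for illustration, not the vendored Go library; `validate_patch_paths` is a hypothetical helper): reject any JSON-Pointer path that is non-empty and does not start with `/`, instead of silently discarding the spurious leading segment.

```python
def validate_patch_paths(patch_ops):
    """Per RFC 6901, a JSON-Pointer path must be "" or start with "/".

    A path like "bogus/metadata/annotations" has a spurious leading
    segment and should be rejected rather than silently accepted.
    """
    for op in patch_ops:
        path = op.get("path", "")
        if path and not path.startswith("/"):
            raise ValueError(f"invalid JSON-Pointer path: {path!r}")

validate_patch_paths([{"op": "replace", "path": "/metadata/annotations"}])  # valid
```

As the issue notes, any real fix would also need to weigh the compatibility cost of rejecting patches that were previously accepted.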
kind/bug,sig/api-machinery,priority/important-longterm,lifecycle/frozen
low
Minor
480,784,808
storybook
Addon-docs: Automatically display component import paths
<img width="468" alt="Screen Shot 2019-08-14 at 1 12 23 PM" src="https://user-images.githubusercontent.com/52427513/63041165-2e6fe380-be95-11e9-893d-c85404ee2dc2.png"> The above screenshot is from React Styleguidist, which automatically displays a copy/paste link of the import path for end-user convenience. I think this would be a great feature for Storybook. Here's a link where you can see this live in action: https://react-styleguidist.js.org/examples/basic/ (Developers do not need to set this up on a per-component basis; it is just configured somewhere once, and every component gets this copy & paste feature.) @shilman
feature request,addon: docs
medium
Critical
480,785,414
pytorch
Auto tuner takes too much time in serialized model
## πŸ› Bug Serialized models have a huge overhead on a first run. With the model I'm using, it takes at least more than 3-4 minutes (than I'm just shutting it down, though it doesn't respond to `SiGSTOP` or `SIGINT`). I am providing the slightly trimmed version of the final model, but still you can see the huge overhead here too. ## To Reproduce Steps to reproduce the behavior: I'm providing the trimmed version of `poseresnet` (`posenet.py`): ``` import torch import torch.nn as nn BN_MOMENTUM = 0.1 class Bottleneck_down(nn.Module): expansion = 4 def __init__(self, inplanes, planes, stride=1): super(Bottleneck_down, self).__init__() self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False) self.bn1 = nn.BatchNorm2d(planes, momentum=BN_MOMENTUM) self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, padding=1, bias=False) self.bn2 = nn.BatchNorm2d(planes, momentum=BN_MOMENTUM) self.conv3 = nn.Conv2d(planes, planes * self.expansion, kernel_size=1, bias=False) self.bn3 = nn.BatchNorm2d(planes * self.expansion, momentum=BN_MOMENTUM) self.relu = nn.ReLU(inplace=True) #self.downsample = downsample self.stride = stride self.downsample = nn.Sequential( nn.Conv2d(inplanes, planes * self.expansion, kernel_size=1, stride=stride, bias=False), nn.BatchNorm2d(planes * self.expansion, momentum=BN_MOMENTUM), ) def forward(self, x): residual = x out = self.conv1(x) out = self.bn1(out) out = self.relu(out) out = self.conv2(out) out = self.bn2(out) out = self.relu(out) out = self.conv3(out) out = self.bn3(out) residual = self.downsample(x) out += residual out = self.relu(out) return out resnet_spec = {50: (Bottleneck_down, [3, 4, 6, 3]), 101: (Bottleneck_down, [3, 4, 23, 3]), 152: (Bottleneck_down, [3, 8, 36, 3])} class PoseResNet_2(nn.Module): def __init__(self, block, layers, num_joints, num_input_channels=3, deconv_with_bias=False, num_deconv_layers=3, num_deconv_filters=(256, 256, 256), num_deconv_kernels=(4, 4, 4), final_conv_kernel=1, ): 
super().__init__() self.num_joints = num_joints self.num_input_channels = num_input_channels self.inplanes = 64 self.deconv_with_bias = deconv_with_bias self.num_deconv_layers, self.num_deconv_filters, self.num_deconv_kernels = num_deconv_layers, num_deconv_filters, num_deconv_kernels self.conv1 = nn.Conv2d(num_input_channels, 64, kernel_size=7, stride=2, padding=3, bias=False) self.bn1 = nn.BatchNorm2d(64, momentum=BN_MOMENTUM) self.relu = nn.ReLU(inplace=True) self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) self.layer1 = self._make_layer(block, 64, layers[0]) self.layer2 = self._make_layer(block, 128, layers[1], stride=2) self.layer3 = self._make_layer(block, 256, layers[2], stride=2) def _make_layer(self, block, planes, blocks, stride=1): layers = [] layers.append(block(self.inplanes, planes, stride)) self.inplanes = planes * block.expansion for i in range(1, blocks): layers.append(block(self.inplanes, planes)) return nn.Sequential(*layers) def forward(self, x): x = self.conv1(x) x = self.bn1(x) x = self.relu(x) x = self.maxpool(x) x = self.layer1(x) x = self.layer2(x) x = self.layer3(x) return x def get_pose_net_2(): block_class, layers = resnet_spec[152] model = PoseResNet_2( block_class, layers, 17, ) return model ``` and the experiment file itself: ``` import torch import torch.nn as nn import poseresnet import time torch.backends.cudnn.benchmark=True device = torch.device("cuda:0") module = poseresnet.get_pose_net_2().to(device) module = torch.jit.script(module) print("serialized") module(torch.randn((4, 3, 304, 304), device=device)) module(torch.randn((4, 3, 304, 304), device=device)) torch.cuda.synchronize() start = time.time() for i in range(400): module(torch.randn((4, 3, 304, 304), device=device)) torch.cuda.synchronize() print(time.time() - start) ``` <!-- If you have a code sample, error messages, stack traces, please provide it here as well --> ## Environment - PyTorch Version (e.g., 1.2.0): - OS (e.g., Linux): - GPU models and 
configuration: RTX 2080Ti
oncall: jit,triaged
low
Critical
480,787,107
material-ui
[Select] getting slow when loading 2000 options
## Current Behavior 😯

- When the Select component is rendered with 2000 options, there is a noticeable delay in showing the drop-down and selecting a value.

## Expected Behavior πŸ€”

Showing the drop-down and selecting a value should be fast.

## Steps to Reproduce πŸ•Ή

https://codesandbox.io/s/material-demo-5ygxr

## Your Environment 🌎

| Tech | Version |
| ----------- | ------- |
| Material-UI | v4.3.2 |
| React | 16.9.1 |
| Browser | chrome |
| etc. | |
performance,component: select
medium
Critical
480,787,110
pytorch
Consider changing the behavior of Tensor.__contains__(Tensor) to make more sense
## πŸ› Bug ``` import torch x = torch.tensor([1, 2, 3]) y = torch.tensor([5, 6, 3]) y in x # True ``` ## Expected behavior This particular case should be False. Related: https://github.com/pytorch/pytorch/pull/17733 https://github.com/pytorch/pytorch/pull/24156 The incorrect semantics was introduced in PyTorch 1.2 (May), but no one else has complained about it yet in the three months since. ## Environment pytorch master. cc @mruberry @rgommers @heitorschueroff
triaged,module: numpy,module: ux
low
Critical
480,789,384
rust
Confusing error deriving PartialEq when child type impls PartialEq<OtherType>
Compiling the following code

```rust
struct TypeA;
struct TypeB;

#[derive(PartialEq)]
struct TypeC(TypeA);

impl std::cmp::PartialEq<TypeA> for TypeB {
    fn eq(&self, other: &TypeA) -> bool {
        true
    }
}

impl std::cmp::PartialEq<TypeB> for TypeA {
    fn eq(&self, other: &TypeB) -> bool {
        true
    }
}

fn main() {
    let one = TypeC(TypeA);
    let two = TypeC(TypeA);
    if one == two {
        eprintln!("yay");
    }
}
```

produces

```
error[E0308]: mismatched types
 --> src/main.rs:8:14
  |
8 | struct TypeC(TypeA);
  |              ^^^^^ expected struct `TypeB`, found struct `TypeA`
  |
  = note: expected type `TypeB`
             found type `TypeA`
```

which I found very confusing. It would be helpful, at least, if we could indicate that this error occurred during macro expansion. Playground here: https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=ac13781b2c40c1966e18adfb919c236c
A-type-system,C-enhancement,A-diagnostics,A-macros,T-compiler,T-types
low
Critical
480,812,250
rust
Tracking issue for `#![feature(maybe_uninit_slice)]`
This is a tracking issue for the RFC "Deprecate `uninitialized` in favor of a new `MaybeUninit` type" (rust-lang/rfcs#1892). Most of this has been stabilized, this issue now only tracks the below unstable methods. ### Public API ```rust impl<T> [MaybeUninit<T>] { pub unsafe fn assume_init_drop(&mut self); pub const unsafe fn assume_init_ref(&self) -> &[T]; pub const unsafe fn assume_init_mut(&mut self) -> &mut [T]; pub const fn slice_as_ptr(this: &[MaybeUninit<T>]) -> *const T; pub const fn slice_as_mut_ptr(this: &mut [MaybeUninit<T>]) -> *mut T; } ``` ### Steps / History - [x] Implementation - [ ] Make slice methods inherent: https://github.com/rust-lang/rust/pull/129259 - [ ] Ensure documentation has examples - [ ] Final comment period (FCP) - [ ] Stabilization PR ### Unresolved Questions - Should `slice_as_ptr`/`slice_as_mut_ptr` be methods (with some `Self` parameter) instead of functions?
T-libs-api,B-unstable,C-tracking-issue,requires-nightly,A-slice,Libs-Tracked,A-raw-pointers
medium
Critical
480,816,615
pytorch
Unified representation for enum types
Right now we have several enums in PyTorch: * dtype https://github.com/pytorch/pytorch/blob/master/torch/csrc/Dtype.cpp * device (actually a tuple) https://github.com/pytorch/pytorch/blob/master/torch/csrc/Device.cpp * qscheme https://github.com/pytorch/pytorch/blob/master/torch/csrc/QScheme.cpp (and I'm sure there's more) These use the Python C API to define the representation in Python, and in TorchScript they use special cases in conversion functions. For example, for dtype: https://github.com/pytorch/pytorch/blob/0f8d1fbe96114131e59d7b064a336b1acb420125/torch/csrc/jit/pybind_utils.h#L339 https://github.com/pytorch/pytorch/blob/0f8d1fbe96114131e59d7b064a336b1acb420125/torch/csrc/jit/pybind_utils.h#L445 Proposal: * Keep defining enums in C++, as today * Use pybind11 to bind the values into Python (maybe unnecessary) * Add native enum support to TorchScript * Extend TorchBind to support these enum types https://github.com/pytorch/pytorch/blob/master/torch/custom_class.h#L62 cc @suo
oncall: jit,triaged
low
Minor
480,846,500
vscode
[css] Add support for CSS @supports selector() function
The **CSS Conditional Rules Module Level 4** adds [the `selector()` function to the `@supports` at-rule](https://drafts.csswg.org/css-conditional-4/#at-supports-ext), which is used to test whether the user agent [supports a new **CSS Selectors** feature](https://drafts.csswg.org/css-conditional-4/#support-definition-ext) (e.g. the `:is(…)` pseudo-class or the multi-value `:not(…)`). **Visual Studio Code** should provide syntax highlighting and auto-completion for this.
feature-request,css-less-scss,grammar
low
Minor
480,848,330
TypeScript
Disallow values of type 'symbol' from being passed to 'Number' function/constructor or 'String' constructor
## Suggestion Change the `NumberConstructor` and `StringConstructor` interfaces to disallow passing values of type `Symbol`. Currently it accepts values of type `any`. Attempting to call `Number(Symbol())`, `new Number(Symbol())`, or `new String(Symbol())` all throw `TypeError`s at runtime. **Note**: `String(Symbol())` is fine - only `new String(Symbol())` will throw. ## Use Cases Any time a value of type `any` is passed to `Number`, there is a risk of that value being a `symbol` and throwing a `TypeError`. Object keys are explicitly allowed to be of type `symbol` (as `number | string | symbol`). Passing an arbitrary object key to `Number` can throw a `TypeError` as well. ## Examples ### Checking if a value can be coerced to a number I just ran into this issue when checking if an object key could be a valid array index. The following should be an error as it will throw if `value` is of type `symbol`. ``` function keyCanBeArrayIndex(value: string | number | symbol): boolean { return !Number.isNaN(Number(value)); } ``` Having this be invalid would have forced me to write something like: ``` function canBeArrayIndex(value: string | number | symbol): boolean { try { return !Number.isNaN(Number(value)); } catch { return false; } } ``` ## Checklist My suggestion meets these guidelines: * [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code * [x] This wouldn't change the runtime behavior of existing JavaScript code * [x] This could be implemented without emitting different JS based on the types of the expressions * [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.) * [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
Suggestion,Awaiting More Feedback
low
Critical
480,851,583
pytorch
Visual Studio Code not providing autosuggestions for submodules
Visual Studio Code is not providing autocomplete suggestions for (at least) the `nn` and `optim` submodules as of PyTorch 1.2, despite the type stubs for them being valid (as shown by mypy making correct use of them). This seems to be an upstream bug with Microsoft's Python language server implementation, https://github.com/microsoft/python-language-server/issues/1301.
triaged
low
Critical
480,855,019
pytorch
Default warning handler in C++ doesn't seem to unique warnings
This default warning handler just prints the warning to cerr: https://github.com/pytorch/pytorch/blob/4bfd33ed36f853bd6a5c3d410eeebbcee1d1653c/c10/util/Exception.cpp#L77 However, this causes repeated output if TORCH_WARN is used, for example, in an operator implementation that runs on every call. We could possibly unique the warnings here based on the SourceLocation and only print them once.
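A sketch of the proposed de-duplication, in Python for brevity (the class and names are hypothetical): key each warning on its source location and emit it only the first time.

```python
class DedupingWarningHandler:
    """Hypothetical sketch: emit each warning only once per source
    location, the analogue of keying on the C++ SourceLocation."""

    def __init__(self, emit=print):
        self._seen = set()
        self._emit = emit

    def process(self, msg, file, line):
        key = (file, line)  # stand-in for SourceLocation
        if key in self._seen:
            return False  # duplicate: suppressed
        self._seen.add(key)
        self._emit("Warning: {} ({}:{})".format(msg, file, line))
        return True


handler = DedupingWarningHandler(emit=lambda s: None)
first = handler.process("non-contiguous input", "op.cpp", 42)
repeat = handler.process("non-contiguous input", "op.cpp", 42)
```

In C++ the set would likewise live on the handler, so warnings from the same TORCH_WARN site inside a hot loop print once instead of once per iteration.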
module: error checking,triaged
low
Minor
480,872,810
rust
Bad compiler warning when code resolves to multiple experimental APIs
With Rust 1.36.0, `Range` has two implementations of `is_empty`, one in an inherent impl, the other in an impl for the `std::iter::ExactSizeIterator` trait. Both of these methods are experimental in this version of Rust. Given the following code: ```rust fn main() { let r = 1u16..10; println!("{:?}", r.is_empty()); } ``` This produces the following output: ``` error[E0034]: multiple applicable items in scope --> src/main.rs:3:24 | 3 | println!("{:?}", r.is_empty()); | ^^^^^^^^ multiple `is_empty` found | = note: candidate #1 is defined in an impl for the type `std::ops::Range<_>` = note: candidate #2 is defined in an impl of the trait `std::iter::ExactSizeIterator` for the type `std::ops::Range<u16>` = help: to disambiguate the method call, write `std::iter::ExactSizeIterator::is_empty(r)` instead ``` The problem is this warning does not tell me that either method is experimental. I also get the exact same error with `Range::is_empty`: ```rust use std::ops::Range; fn main() { let r = 1u16..10; println!("{:?}", Range::is_empty(&r)); } ``` Applying the suggested advice to use the `ExactSizeIterator` version produces the expected error (telling me that it's experimental and how to enable it), but the only way to get the expected error when trying to call the inherent impl is to disable the implicit prelude: ```rust #![no_implicit_prelude] extern crate std; fn main() { let r = 1u16..10; std::println!("{:?}", r.is_empty()); } ```
C-enhancement,A-diagnostics,T-lang,T-libs-api,T-compiler
low
Critical
480,875,481
flutter
Allow programatically maximizing windows in GLFW embedding
It would be useful to be able to set the window icon, state (for example, starting the app maximized), and the window's start position. As far as I can tell, currently only the window width, height, and title can be changed. Or am I wrong?
engine,a: desktop,e: glfw,P3,team-linux,triaged-linux
low
Minor
480,925,365
opencv
JS: TypeScript type definitions
I didn't find any OpenCV typings in DefinitelyTyped. Before I start writing some basic types, I would like to know whether there is any initiative toward this, or what the plans are, so that we have type checking for the OpenCV.js project. Thanks
feature,category: javascript (js),effort: ∞
medium
Major
480,985,707
TypeScript
[Feature Request] Non-Union Generic Type Params
<!-- 🚨 STOP 🚨 𝗦𝗧𝗒𝗣 🚨 𝑺𝑻𝑢𝑷 🚨 Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Please read the FAQ first, especially the "Common Feature Requests" section. --> ## Search Terms + non-union + generic + type param + one of ## Suggestion A way to annotate that a generic type param **will not** accept union types. I don't have the syntax or keyword in mind, but I'll just go ahead and use `nonUnion` as a kind of type param modifier, ```ts type NonUnionX<nonUnion T> = ( /*Implementation*/ ); ``` ## Use Cases #### What do you want to use this for? I work with pretty complex types (in my opinion); and **a lot** of them. When working on a complex type, I tend to break the problem down into smaller subproblems. The base case is usually assuming that all type params are **non-union**. After that, I build on top of the base case and implement types that support union type params. ----- Here is an example type that is a base case, [`PrimaryKey_NonUnion<TableT>`](https://github.com/AnyhowStep/tsql/blob/master/src/primary-key/primary-key.ts#L19) ----- And here is an example of a type building upon the base case, [`PrimaryKey_Output<TableT>`](https://github.com/AnyhowStep/tsql/blob/master/src/primary-key/primary-key.ts#L31) It distributes `TableT` and uses `PrimaryKey_NonUnion`. The result is a union if `TableT` is a union. ----- And here is an example of another type building upon the base case, [`PrimaryKey_Input<TableT>`](https://github.com/AnyhowStep/tsql/blob/master/src/primary-key/primary-key.ts#L43) It distributes `TableT` and uses `PrimaryKey_NonUnion`. Then, it uses `UnionToIntersection<>` to combine the results into one type. ----- That experimental repository of mine is filled with instances of generic types that support union types and those that do not. There are times where you **really** do not want to pass a union type to a generic type param because it'll result in bugs that may not be noticed till later. 
#### What shortcomings exist with current approaches? One approach is to just write a comment that says, ```ts /** * + Assumes `T` is not a union * + Assumes `U` may be a union * + Assumes `V` is not a union */ ``` This gets **very** error-prone when you start having hundreds of types. ----- Another approach is to give your types names that are descriptive, ```ts type SomeOperation_NonUnionT_UnionU_NonUnionV< T, U, V > = ( //Implementation ); ``` This is still error-prone; you may still use it incorrectly. Even if the name of the type says `NonUnionT`, you may still pass a union type to `T`. ## Examples ```ts type NonUnionX<nonUnion T> = ( /*Implementation*/ ); //OK type nonUnionX1 = NonUnionX<string>; //Error, Type `NonUnionX` expects non-union for type parameter `0` // `string|number` is a union type type nonUnionX2 = NonUnionX<string|number>; // ~~~~~~~~~~~~~ type UnionX<T> = ( T extends any ? //OK! `T` has been distributed NonUnionX<T> : never ); //OK type unionX1 = UnionX<string>; //OK type unionX2 = UnionX<string|number>; type Blah<T> = ( //Error, Type `NonUnionX` expects non-union for type parameter `0` // `T` may be a union type NonUnionX<T> // ~ ); ``` ----- A **non-approach** is to expose only types that handle union type params and to leave non-union implementations unexported. This is not a useful approach because it means building new types in a different file using these non-union implementations becomes impossible. Since they're unexported. ## Checklist My suggestion meets these guidelines: * [X] This wouldn't be a breaking change in existing TypeScript/JavaScript code * [X] This wouldn't change the runtime behavior of existing JavaScript code * [X] This could be implemented without emitting different JS based on the types of the expressions * [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.) 
* [X] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals). ## Similar issues https://github.com/microsoft/TypeScript/issues/24085 https://github.com/microsoft/TypeScript/issues/27808
Suggestion,Awaiting More Feedback
low
Critical
481,005,922
flutter
Flutter.gradle uses implementation instead of api, which may cause problems in some contexts
See https://github.com/flutter/flutter/pull/27237 - which was closed because it fell out of date and lacked tests. From that PR:

> 1. Use `flutter create -t module flutter_module` and `flutter make-host-app-editable` to create a project.
> 2. Add a flutter plugin dependency to pubspec.yaml.
>
> For a simple plugin example:

```java
/** FlutterPlugin */
public class FlutterPlugin implements MethodCallHandler {
  /** Plugin registration. */
  public static void registerWith(Registrar registrar) {
    final MethodChannel channel = new MethodChannel(registrar.messenger(), "flutter_plugin");
    channel.setMethodCallHandler(new FlutterPlugin());
  }

  // This method should be called in the app module. The plugin doesn't know when to init.
  public static void init() {
    // dosth
  }

  @Override
  public void onMethodCall(MethodCall call, Result result) {
    if (call.method.equals("getPlatformVersion")) {
      result.success("Android " + android.os.Build.VERSION.RELEASE);
    } else {
      result.notImplemented();
    }
  }
}
```

/cc @blasten @zanderso @supsaiyajin
platform-android,tool,t: gradle,P2,team-android,triaged-android
low
Minor
481,025,299
java-design-patterns
Store and Process pattern
**Description:** The Store and Process design pattern focuses on decoupling data storage from data processing to allow for flexible and scalable data stream handling. This pattern is particularly useful in scenarios where data needs to be ingested, stored, and then processed asynchronously. The main elements of this pattern include: 1. **Data Store:** A storage component that receives and holds incoming data. This component should be capable of handling high-throughput data ingestion and provide reliable storage. 2. **Processor:** A processing component that retrieves data from the store, processes it, and outputs the results. This processor can work independently of the data ingestion process, allowing for flexible and scalable data handling. 3. **Decoupling:** By separating the storage and processing responsibilities, the system can scale each component independently, optimizing for performance and resource usage. **References:** - [Store and Process Design Pattern - Jenkov Tutorials](https://jenkov.com/tutorials/data-streaming/store-and-process.html) - [Project Contribution Guidelines](https://github.com/iluwatar/java-design-patterns/wiki) **Acceptance Criteria:** 1. Implement a data storage component that can ingest and store high-throughput data reliably. 2. Implement a data processing component that retrieves data from the storage component and processes it asynchronously. 3. Ensure the data storage and processing components are decoupled to allow independent scaling and optimization.
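The three elements above can be sketched with a bounded queue decoupling ingestion from processing. Python is used here only to keep the sketch short (the same shape applies in Java with a `BlockingQueue`); all names are hypothetical.

```python
import queue
import threading

# Store: a bounded, thread-safe buffer that absorbs incoming records.
store = queue.Queue(maxsize=1000)
results = []

def processor():
    # Processor: drains the store independently of the ingestion side.
    while True:
        item = store.get()
        if item is None:  # sentinel value: shut down
            break
        results.append(item * 2)  # stand-in for real processing

worker = threading.Thread(target=processor)
worker.start()

# Ingestion side: hands records to the store and moves on.
for record in range(5):
    store.put(record)
store.put(None)
worker.join()
```

Because the store is the only coupling point, ingestion and processing can be scaled (or replaced) independently, which is the core of the pattern.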
epic: pattern,type: feature
medium
Major
481,053,038
pytorch
SyncBatchNorm error when using model.eval() with DistributedDataParallel
## πŸ› Bug <!-- A clear and concise description of what the bug is. --> ``` Traceback (most recent call last): File "train.py", line 977, in <module> main() File "train.py", line 97, in main main_worker(ngpus_per_node, args, options) File "train.py", line 742, in main_worker total_loss.backward() File "/mnt/lustre/matao/anaconda3/envs/py36/lib/python3.6/site-packages/torch/tensor.py", line 107, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "/mnt/lustre/matao/anaconda3/envs/py36/lib/python3.6/site-packages/torch/autograd/__init__.py", line 93, in backward allow_unreachable=True) # allow_unreachable flag RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [64]] is at version 1423; expected version 1420 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck! ``` ## To Reproduce Steps to reproduce the behavior: 1. Use `nn.SyncBatchNorm.convert_sync_batchnorm(model)` to convert the nn.BatchNorm2d to SyncBatchNorm2d layer; 2. Use `torch.nn.parallel.DistributedDataParallel(model.cuda(args.gpu), device_ids=[args.gpu], find_unused_parameters=True)` to wrap the model; 3. Start training, e.g., 100 epochs; 4. At 40th epoch, I use `model.eval()` to fix the parameters of Batchnorm layer, and continue to train other parameters of model, and the error happens now. 
Brief code for the clear representation: ``` import torch.multiprocessing as mp import torch.distributed as dist import torch.nn as nn dist.init_process_group(backend='nccl', init_method=args.dist_url, world_size=world_size, rank=rank) model = nn.SyncBatchNorm.convert_sync_batchnorm(model) model = torch.nn.parallel.DistributedDataParallel(model.cuda(args.gpu), device_ids=[args.gpu], find_unused_parameters=True) train_sampler = torch.utils.data.distributed.DistributedSampler(train_dset) model.train() for epoch in range(0, epochs): if epoch >= 40 and model.training: model.eval() loss.back_ward() ``` <!-- If you have a code sample, error messages, stack traces, please provide it here as well --> - PyTorch Version: 1.1 - OS: Linux - How you installed PyTorch: conda - Python version: 3.6 - CUDA/cuDNN version: 9.0 - GPU models and configuration: - Any other relevant information: ## Additional context <!-- Add any other context about the problem here. --> I fix batchnorm layer at 40th epoch for the better performance of my model's training. And this will work when I use `nn.Dataparallel()` on single node multi gpus, but it doesn't work as I mentioned above on multi nodes multi gpus. And I'm sure the error is caused by setting `model.eval()`, is this the conflict between `SyncBatchNorm` and `model.eval()`?
needs reproduction,oncall: distributed,module: autograd,triaged
low
Critical
481,053,117
pytorch
Significantly slower in latest version than in 0.4.0
Hi, if I run the following script to train a model, one can observe a significant performance regression in 1.2.0 compared to 0.4.0.

```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim.lr_scheduler import MultiStepLR
import time

class DummyModel(nn.Module):
    def __init__(self):
        super(DummyModel, self).__init__()
        self.conv1Ds = nn.Sequential(
            nn.Conv1d(60, 512, 5, padding=2), nn.ReLU(), nn.BatchNorm1d(512),
            nn.Conv1d(512, 512, 3, padding=1), nn.ReLU(), nn.BatchNorm1d(512),
            nn.Conv1d(512, 512, 3, dilation=2, padding=2), nn.ReLU(), nn.BatchNorm1d(512),
            nn.Conv1d(512, 512, 3, dilation=4, padding=4), nn.ReLU(), nn.BatchNorm1d(512),
            nn.Conv1d(512, 512, 3, dilation=6, padding=6), nn.ReLU(), nn.BatchNorm1d(512),
            nn.Conv1d(512, 512, 1), nn.ReLU(), nn.BatchNorm1d(512)).cuda()
        self.linears = nn.Sequential(
            nn.Linear(512, 512), nn.ReLU(), nn.BatchNorm1d(512),
            nn.Linear(512, 512), nn.ReLU(), nn.BatchNorm1d(512),
            nn.Linear(512, 30)).cuda()

    def forward(self, x):
        x = x.permute(0, 2, 1)
        x = self.conv1Ds(x)
        x = x.permute(0, 2, 1)
        x = self.linears(x)
        return x

def train():
    batch_size = 150
    feature = torch.rand(batch_size, 512, 60).cuda()
    label = torch.rand(batch_size, 30).long().cuda()
    criterion = nn.CrossEntropyLoss(ignore_index=-1).cuda()
    optimizer = optim.Adam(net.parameters(), lr=0.0005)
    for epoch in range(20):
        net.zero_grad()
        predict = net(feature)
        loss = criterion(predict, label)
        loss.backward()
        optimizer.step()

if __name__ == '__main__':
    print('torch version: {}'.format(torch.__version__))
    from_time = time.time()
    net = DummyModel()
    train()
    running_time = time.time() - from_time
    print('running time: {}'.format(running_time))
```

I'm not sure if this is a duplicate of [this one](https://github.com/pytorch/pytorch/issues/14456), since according to [soumith](https://github.com/pytorch/pytorch/issues/15054#issuecomment-488482160), a newly installed PyTorch should have the corrected cuDNN that fixed that issue.

Config:
- OS: Ubuntu 16.04.6 LTS
- Cuda: 10.1
- NVIDIA Driver Version: 418.56
- GPU: 1080Ti
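Note that the script above starts the clock before model construction, so CUDA context initialization and cuDNN autotuning are included in the measurement. A framework-agnostic warm-up pattern gives a fairer version comparison; the `benchmark` helper below is a hypothetical sketch (for GPU work one would pass a blocking flush such as `torch.cuda.synchronize` as `sync`).

```python
import time

def benchmark(fn, warmup=3, iters=20, sync=lambda: None):
    """Hypothetical helper: time `fn` only after warm-up runs, so lazy
    initialization and autotuning are excluded from the measurement."""
    for _ in range(warmup):
        fn()
    sync()  # make sure queued async work is finished before timing
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    sync()  # include all queued async work in the timed window
    return (time.perf_counter() - start) / iters

avg = benchmark(lambda: sum(range(10_000)))
```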
needs reproduction,module: performance,module: cuda,triaged
low
Major
481,078,817
go
cmd/go: checksum mismatch lacks resolution information
<!-- Please answer these questions before submitting your issue. Thanks! --> ### What version of Go are you using (`go version`)? <pre> $ go version go version devel +5f45a3337e Wed Aug 14 19:49:15 2019 +0000 linux/amd64 </pre> ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? <details><summary><code>go env</code> Output</summary><br><pre> $ go env GO111MODULE="auto" GOARCH="amd64" GOBIN="" GOCACHE="/home/iand/.cache/go-build" GOENV="/home/iand/.config/go/env" GOEXE="" GOFLAGS="" GOHOSTARCH="amd64" GOHOSTOS="linux" GONOPROXY="" GONOSUMDB="" GOOS="linux" GOPATH="/home/iand" GOPRIVATE="" GOPROXY="https://proxy.golang.org,direct" GOROOT="/opt/go" GOSUMDB="sum.golang.org" GOTMPDIR="" GOTOOLDIR="/opt/go/pkg/tool/linux_amd64" GCCGO="gccgo" AR="ar" CC="gcc" CXX="g++" CGO_ENABLED="1" GOMOD="/home/iand/wip/--elided---/go.mod" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build133450829=/tmp/go-build -gno-record-gcc-switches" </pre></details> ### What did you do? Ran `go mod tidy` ### What did you expect to see? Instructions for investigating the cause and potential resolution of a checksum mismatch. ### What did you see instead? ``` 10:24 $ go mod tidy verifying [email protected]+incompatible: checksum mismatch downloaded: h1:y0IMTfclpMdsdIbr6uwmJn5/WZ7vFuObxDMdrylFM3A= sum.golang.org: h1:VsBPFP1AI068pPrMxtb/S8Zkgf9xEmTLJjfM+P5UIEo= SECURITY ERROR This download does NOT match the one reported by the checksum server. The bits may have been replaced on the origin server, or an attacker may have intercepted the download attempt. For more information, see 'go help module-auth'. ``` This information, while correct, is not very informative for most users. It tells them the problem but doesn't indicate how they can resolve it. 
Their build will not progress until they resolve the problem. `go help module-auth` provides very detailed information on the operation of the module checksum mechanism but doesn't help the typical user get their build working. In my case `go clean -modcache` solved my problem although I am left with the feeling that I just did the module analog of "switching it off and on again". The message would be more informative if it told me where the downloaded code is and how I can cross reference that with the code the checksum server saw.
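Conceptually, the failed check is a recorded-hash comparison. The sketch below only illustrates that idea; it is not the real algorithm — the actual `h1:` value is computed over a hash tree of the module zip's file listing, not over raw bytes as shown here.

```python
import base64
import hashlib

def h1_of(data: bytes) -> str:
    # Illustration only: base64-encoded SHA-256, prefixed like go.sum's
    # "h1:" entries. The real hash covers a sorted file listing.
    digest = hashlib.sha256(data).digest()
    return "h1:" + base64.b64encode(digest).decode()

def verify(data: bytes, recorded: str) -> bool:
    # A mismatch here is what `go` reports as a SECURITY ERROR.
    return h1_of(data) == recorded

good = h1_of(b"module contents")
```

A more helpful error message could point at the two inputs of this comparison: where the downloaded bits live in the module cache, and how to re-fetch or inspect them.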
NeedsInvestigation,modules
low
Critical
481,098,609
pytorch
When running model forward with large batch size, it reports the error: THCudaTensor sizes too large for THCDeviceTensor conversion
## πŸ› Bug <!-- A clear and concise description of what the bug is. --> <!-- If you have a code sample, error messages, stack traces, please provide it here as well --> ## Expected behavior When running forward for a language model with a large batch (700 instances per gpu), a tensor storing prediction scores is created with the size of about 700 (batch size) * 128 (sequence length) * 30000 (vocabulary size), and then it reports the following error: ~~~~ File "/home/work/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__ result = self.forward(*input, **kwargs) File "/home/work/anaconda3/lib/python3.6/site-packages/torch/nn/modules/loss.py", line 904, in forward ignore_index=self.ignore_index, reduction=self.reduction) File "/home/work/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py", line 1970, in cross_entropy return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction) File "/home/work/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py", line 1790, in nll_loss ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) RuntimeError: THCudaTensor sizes too large for THCDeviceTensor conversion at /pytorch/aten/src/THC/THCDeviceTensorUtils.cuh:71 ~~~~ However, it's probably not the out-of-memory error because it's still below the gpu memory capacity. <!-- A clear and concise description of what you expected to happen. 
--> ## Environment * item PyTorch version: 1.0.1.post2 * Is debug build: No * CUDA used to build PyTorch: 10.0.130 * OS: Ubuntu 18.04.2 LTS * GCC version: (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0 * CMake version: version 3.10.2 * Python version: 3.6 * Is CUDA available: Yes * CUDA runtime version: Could not collect * GPU models and configuration: GPU 0: Tesla V100-SXM2-32GB GPU 1: Tesla V100-SXM2-32GB GPU 2: Tesla V100-SXM2-32GB GPU 3: Tesla V100-SXM2-32GB * Nvidia driver version: 418.67 * cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.5.0 ## Additional context <!-- Add any other context about the problem here. -->
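Until the kernel-side size limit is lifted, a common workaround for errors like this is to split the oversized batch into smaller pieces, run the loss per piece, and accumulate. The chunking step itself is framework-free; `chunks` below is a hypothetical helper:

```python
def chunks(seq, size):
    """Yield successive slices of at most `size` items (hypothetical helper)."""
    for start in range(0, len(seq), size):
        yield seq[start:start + size]

# e.g. split a 700-instance batch into pieces small enough for the
# THCDeviceTensor conversion, compute loss/backward per piece, and average.
batch = list(range(700))
pieces = list(chunks(batch, 256))
```

Applied to the report's setting, each piece would be a slice along the batch dimension before the cross-entropy call.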
module: cuda,triaged
low
Critical
481,155,399
TypeScript
feature: ability to extract union of valid index numbers for tuple
## Search Terms

indexof array
indexof tuple
restrict argument to tuple index

## Suggestion

I'd like the ability to restrict a type to a valid index number of a tuple. For example, given the tuple `[string]`, I'd like to extract the type `0`. Given the tuple `[string, number, object]`, I'd like to extract the type `0 | 1 | 2`.

## Use Cases

_[see related S.O. question](https://stackoverflow.com/questions/57509699/how-to-restrict-method-argument-to-index-number-of-tuple-type)_

Currently, if a generic class receives a type argument which is a tuple, it isn't possible to create a method on the class which restricts its argument to a valid index of that tuple.

## Examples

```ts
class FormArray<T extends [any, ...any[]]> {
  constructor(public value: T) {}

  // how to restrict `I` to only valid index numbers of `T` ?
  get<I extends keyof T>(index: I): T[I] {
    return this.value[index];
  }
}
```

## Implementation ideas

I imagine there are several routes to achieving my goal.

#### Idea 1 (perhaps heavy handed):

Add a new keyword, such as `indexof`, which can only be used on arrays and tuples. If `indexof` is used on an array, it will always return `number`. If `indexof` is used on a tuple, it returns a union of `0 ... tuple length - 1` (e.g. if the tuple was of length 3, `indexof` would return `0 | 1 | 2`).

#### Idea 2:

Ability to cast a string type to the equivalent number type. For example, the ability to cast type `"0"` to `0`. This would allow using the `keyof` keyword to get all properties of a tuple, cast property types to numbers (if possible), and then filter out any property types which aren't numbers (i.e. filter out `"length"`, `"splice"`, etc. and leave `0 | 1 | 2`). For example, [as pointed out in this comment](https://github.com/microsoft/TypeScript/issues/27995#issuecomment-441157546), it is currently possible to get the indexes of a tuple in string form (i.e. `"0" | "1" | "2"`):

```ts
type ArrayKeys = keyof any[];
type Indices<T> = Exclude<keyof T, ArrayKeys>;

Indices<[string, number]>; // "0" | "1"
```

#### Idea 3:

As pointed out in a comment, you can get the index numbers of a tuple using the following type:

```ts
type Indices<T extends {length: number}> = Exclude<Partial<T>["length"], T['length']>;
```

Unfortunately, the result of this type is not considered a `keyof` the input tuple (which results in a type error if you try to use the type as a key for the tuple). If there were some way of using a type assertion to tell the compiler that this is, in fact, a `keyof T`, that might also work.

_note: this type differs from the type presented in **idea 2** (above) because, unlike this type, the type in idea 2 is a `keyof T`_

## Checklist

My suggestion meets these guidelines:

* [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
* [x] This wouldn't change the runtime behavior of existing JavaScript code
* [x] This could be implemented without emitting different JS based on the types of the expressions
* [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.)
* [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
Suggestion,Awaiting More Feedback
low
Critical
481,220,374
create-react-app
node_modules Mocks are not picked up by jest with latest react-scripts
### Describe the bug

When upgrading from [email protected] to latest, mocks are no longer picked up from the `__mocks__` directory.

### Did you try recovering your dependencies?

YES

### Which terms did you search for in User Guide?

jest manual mocks broken

### Environment

Environment Info:

System:
  OS: macOS 10.14.5
  CPU: (8) x64 Intel(R) Core(TM) i7-7920HQ CPU @ 3.10GHz
Binaries:
  Node: 12.8.0 - /usr/local/bin/node
  Yarn: 1.17.3 - ~/.yarn/bin/yarn
  npm: 6.10.2 - /usr/local/bin/npm
Browsers:
  Chrome: 76.0.3809.100
  Firefox: 68.0.1
  Safari: 12.1.1
npmPackages:
  react: ^16.9.0 => 16.9.0
  react-dom: ^16.9.0 => 16.9.0
  react-scripts: 3.1.1 => 3.1.1
npmGlobalPackages:
  create-react-app: 2.1.3

### Steps to reproduce

1. create a manual mock of any node_modules package
2. put it in the `__mocks__` directory
3. run your test

### Expected behavior

Mocks are picked up by jest.

### Actual behavior

Mocks are not picked up.

### Reproducible demo

1. git clone https://github.com/tomitrescak/react-boilerplate -b Demo
2. cd react-boilerplate
3. yarn
4. yarn test
5. p -> 'button'
6. error: Trans not found (not being picked up by mocks)

To see that it works with the previous react-scripts, do:

1. yarn add [email protected]
2. yarn test
3. You will receive a snapshot error, which is fine.
issue: bug,contributions: up for grabs!
medium
Critical
481,228,343
go
cmd/go: fetching dependencies can be very aggressive when going via an HTTP proxy
### What version of Go are you using (`go version`)?

<pre>
$ go version
go version go1.12.8 windows/amd64
</pre>

### Does this issue reproduce with the latest release?

Yes.

### What operating system and processor architecture are you using (`go env`)?

`windows/amd64`

### What did you do?

1. Set `http_proxy` environment variable
2. `git clone https://github.com/weaveworks/eksctl && cd eksctl`
3. `go build ./cmd/eksctl`

### What did you expect to see?

All dependencies get downloaded and the binary gets built. If errors occur, there are a few retries with a back-off.

### What did you see instead?

Some dependencies get downloaded; some fail to download with a 407 error in `git fetch`. The binary is not built. I had to rerun `go build` multiple times before it succeeded.
NeedsInvestigation,modules
low
Critical
481,228,846
go
proposal: identify large changes & add more process
The proposal process was designed in 2015, when the language was effectively frozen and we were not making large changes. Even now, the vast majority of proposed changes are small. But some are large.

The lightweight process we have works well for small changes, and we wouldn't want to add unnecessary process to those. But if we can reliably identify large changes then we can think about adding extra process for those.

One way to identify large changes is with a simple checklist of how much impact it would have. For example:

- Is the change user-visible at all?
- Does it require any changes to any documentation?
- Are its effects confined to a single package?
- Does it require changes to the language spec?
- Does it require users to change existing scripts or workflows?
- Does it require updating introductory materials?
- And so on.

Once a change has been identified as large, we could add more process, although we need to decide exactly what that is too. One idea is to keep iterating on design drafts before making an official proposal. Another is to require an experimental prototype be available. And so on. We can collect those ideas here too.
Proposal
low
Major
481,232,197
godot
Provide an error message for each function that currently returns an Error enum
Following on from https://github.com/godotengine/godot/issues/31393, I noticed that I get the following error:

```
E 0:00:04:0301   Condition ' err ' is true. returned: ERR_CANT_OPEN
  <C Source>     scene/resources/resource_format_text.cpp:1488 @ save()
  <Stack Trace>  Game.gd:56 @ save()
                 SavePopup.gd:70 @ _on_new_save_confirmed()
                 NewSavePopup.gd:14 @ _on_confirm_pressed()
```

I would like to show an `AcceptDialog` that says something like:

> Error saving game: can't open "blah/blah/save.tscn"

Currently I have to check the return type of the function and account for every possible type of failure for that function - something which is not currently documented, and hence would require me to look through Godot's source.

In the comments of another bug report, @bojidar-bg proposed a more elaborate return type: https://github.com/godotengine/godot/issues/7643#issuecomment-280099751

> One other way I was thinking about in the past is to have
>
> ```
> CallResult Object::call_error(method_name, args...);
>
> class CallResult {
>     Variant result = NULL;
>     Error error = OK;
>     String error_text = "";
> }
> ```
>
> That way we can make a simple try catch already, with something like this:
>
> ```
> func call_something_that_fails():
>     var result = some_object.call_error("oh_no", argument1, argument2)
>     if result.error != OK:
>         raise "How totally expected"
>         return result.error
>     else:
>         return result.result
> ```

This would be perfect: it saves me from having to account for each possible error that the function can return, and I can just show the message directly to the user.

Is it possible to do this in a backwards-compatible way, or is it not really possible?
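To make the shape of that proposal concrete, here is a hedged sketch in Python (not GDScript, and none of these names exist in Godot's API - `ERR_CANT_OPEN`'s numeric value and `save_scene` are made up for illustration) of a caller consuming such a result object:

```python
# Illustration only: this is Python, not Godot, and the names are borrowed
# from the CallResult shape quoted above. The point is a single return value
# bundling the result, an error code, and a ready-to-display message.
OK = 0
ERR_CANT_OPEN = 12  # hypothetical numeric code, for illustration


class CallResult:
    def __init__(self, result=None, error=OK, error_text=""):
        self.result = result
        self.error = error
        self.error_text = error_text


def save_scene(path):
    # pretend the save failed because the file could not be opened
    return CallResult(error=ERR_CANT_OPEN,
                      error_text='Error saving game: can\'t open "%s"' % path)


res = save_scene("blah/blah/save.tscn")
if res.error != OK:
    print(res.error_text)  # one branch handles every failure mode
```

The appeal is exactly what the issue asks for: the caller never has to enumerate every possible `Error` value, because the human-readable message travels with the error code.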
enhancement,discussion,topic:core
low
Critical
481,247,424
youtube-dl
Unsupported URL: mytaratata.com
## Checklist

- [x] I'm reporting a new site support request
- [x] I've verified that I'm running youtube-dl version **2019.08.13**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones

## Example URLs

https://mytaratata.com/taratata/228/no-one-is-innocent-la-peur-2007
https://mytaratata.com/taratata/203/cassius-toop-toop-2007

## Description

```
youtube-dl https://mytaratata.com/taratata/203/cassius-toop-toop-2007
[generic] cassius-toop-toop-2007: Requesting header
WARNING: Falling back on generic information extractor.
[generic] cassius-toop-toop-2007: Downloading webpage
[generic] cassius-toop-toop-2007: Extracting information
ERROR: Unsupported URL: https://mytaratata.com/taratata/203/cassius-toop-toop-2007
```
site-support-request
low
Critical
481,256,501
rust
pgo: -Cprofile-use can't find file
````bash git clone https://github.com/matthiaskrgr/cargo-cache cd cargo-cache RUSTFLAGS="-Cprofile-generate=target/pgodata" cargo build --release ./target/release/cargo-cache ./target/release/cargo-cache ./target/release/cargo-cache q hello ./target/release/cargo-cache r ./target/release/cargo-cache --top-cache-items 10 llvm-profdata merge -o target/merged.profdata ./target/pgodata/ file target/merged.profdata # target/merged.profdata: LLVM indexed profile data, version 5 RUSTFLAGS="-Cprofile-use=target/merged.profdata" cargo build --release -j 1 ```` ```` Compiling serde v1.0.98 error: File `target/merged.profdata` passed to `-C profile-use` does not exist. error: aborting due to previous error error: Could not compile `serde`. To learn more, run the command again with --verbose. ```` The file is definitely there, but for some reason it is not found?? `rustc 1.38.0-nightly (c43d03a19 2019-08-14)`
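Not a diagnosis of this particular failure, but one generic way a file that "is definitely there" can still be reported missing is relative-path resolution: a flag value like `target/merged.profdata` is resolved against whatever working directory the compiler process has at that moment, which may not be the workspace root. A small Python illustration of that general mechanism (nothing here is rustc or cargo):

```python
import os
import tempfile

# Generic illustration (this is not rustc): the same *relative* path names
# different files depending on the current working directory of the process
# that resolves it, while an *absolute* path is cwd-independent.
workspace = tempfile.mkdtemp()   # stands in for the cargo workspace root
elsewhere = tempfile.mkdtemp()   # stands in for some other build directory

os.makedirs(os.path.join(workspace, "target"))
profile = os.path.join(workspace, "target", "merged.profdata")
open(profile, "w").close()

rel = os.path.join("target", "merged.profdata")

os.chdir(workspace)
found_from_workspace = os.path.exists(rel)  # resolved against workspace

os.chdir(elsewhere)
found_from_elsewhere = os.path.exists(rel)  # resolved against elsewhere
found_absolute = os.path.exists(profile)    # found regardless of cwd

print(found_from_workspace, found_from_elsewhere, found_absolute)
```

If that is what is happening here, passing an absolute path to `-Cprofile-use` would be a cheap experiment.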
A-frontend,C-enhancement,A-diagnostics,T-compiler,T-cargo,A-PGO
low
Critical
481,265,595
flutter
Add FlutterView systemGestureInsets tests in the Flutter engine
Currently, JUnit tests are blocked for tests requiring Android SDK 29 (Q) because of issues with Robolectric, our Android testing framework. To make progress on high priority tickets, we will land PRs now and land tests as a follow-up. This issue is created to track where such tests need to be added once the issues with Robolectric have been resolved.

Issue tracking implementation of Robolectric support of Q: https://github.com/flutter/flutter/issues/38471

Edit: The above issue has now been fixed, so the only thing that needs to be done here is to introduce tests for systemGestureInsets!

Tests required:

- [ ] FlutterView systemGestureInsets tests - https://github.com/flutter/engine/pull/10413
a: tests,c: new feature,platform-android,engine,P3,team-android,triaged-android
low
Major
481,266,326
pytorch
1.0rc0-6216 installs empty directories under include and duplicates under /
The FreeBSD ports framework complains: ``` @dir include/c10/cuda/test/impl @dir include/c10/hip @dir include/c10/test/core/impl @dir include/c10/test/util @dir include/caffe2/contrib/aten/docs @dir include/caffe2/contrib/docker-ubuntu-14.04 @dir include/caffe2/contrib/ideep @dir include/caffe2/contrib/nnpack @dir include/caffe2/contrib/opencl/OpenCL @dir include/caffe2/contrib/playground/resnetdemo @dir include/caffe2/contrib/pytorch @dir include/caffe2/contrib/script/examples @dir include/caffe2/contrib/tensorboard ... @dir include/caffe2/python/trt @dir include/caffe2/share/contrib/depthwise @dir include/caffe2/share/contrib/nnpack @dir include/caffe2/test/assets @dir include/caffe2/utils/hip @dir include/torch/csrc/api/src/data/datasets @dir include/torch/csrc/api/src/data/samplers @dir include/torch/csrc/api/src/nn/modules @dir include/torch/csrc/api/src/optim @dir include/torch/csrc/api/src/python @dir include/torch/csrc/api/src/serialize @dir include/torch/csrc/jit/backends @dir include/torch/csrc/jit/docs @dir include/torch/csrc/jit/generated @dir /usr/torch/csrc/jit/script @dir /usr/torch/csrc/jit @dir /usr/torch/csrc @dir /usr/torch ``` I have to remove them manually for the FreeBSD port, but they shouldn't be installed by the project in the first place. ```/usr/torch``` has duplicates of what is already installed under ```/usr/local/torch```.
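For what it's worth, the manual cleanup described above can be scripted. A hedged sketch (generic Python, not part of the ports framework or PyTorch's build, with made-up paths) that prunes empty directories bottom-up so parents that become empty are removed too:

```python
import os
import tempfile


def remove_empty_dirs(root):
    """Remove empty directories under root, bottom-up, leaving root itself."""
    removed = []
    for dirpath, _dirnames, _filenames in os.walk(root, topdown=False):
        if dirpath != root and not os.listdir(dirpath):
            os.rmdir(dirpath)
            removed.append(dirpath)
    return removed


# quick demo on a throwaway tree (paths are made up for illustration)
stage = tempfile.mkdtemp()
os.makedirs(os.path.join(stage, "include", "c10", "hip"))     # empty leaf
os.makedirs(os.path.join(stage, "lib"))
open(os.path.join(stage, "lib", "libtorch.so"), "w").close()  # non-empty
remove_empty_dirs(stage)
print(sorted(os.listdir(stage)))  # the empty include/ tree is gone
```

Walking with `topdown=False` is the key detail: children are visited before their parents, so `include/c10/hip`, then `include/c10`, then `include` all get removed in one pass.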
module: build,triaged
low
Minor
481,268,256
terminal
Terminal cannot use SVG files in backgroundImage, icon (xaml limitation?)
Over at UsingJsonSettings.md, in [the section about adding a background image](https://github.com/microsoft/terminal/blob/master/doc/user-docs/UsingJsonSettings.md#add-a-custom-background-to-the-wsl-debian-terminal-profile), it directs me to download the .SVG image in Step 1.

Step 4 then says to use the .JPG (`"backgroundImage": "ms-appdata:///Roaming/openlogo.jpg",`), but this doesn't display the image in the Terminal.

Changing Step 4 to use the .SVG also doesn't display the image.

Changing the file to be an actual .JPG (downloaded from the [Debian logo page](https://www.debian.org/logos/)) causes the image to appear.

The docs definitely need to be tweaked :)

(I'm not sure if .SVG is supposed to work or not, otherwise I'd do a PR to offer a fix)
Issue-Bug,Area-Settings,Product-Terminal,Tracking-External,External-Blocked-WinUI3
low
Major
481,274,057
pytorch
Building Python bits separate from C++ bits and making one play well with the other
I am in the process of creating the FreeBSD port for PyTorch. So far I am able to build the C++ part (with ```BUILD_PYTHON=OFF```), and intend to create a port/package named ```science/pytorch```. Since there are C++ examples available, I concluded that the C++ part is usable in itself.

Obviously, the next step will be to create the pythonic port that would use the C++ library. Are there instructions available on how to build the Python part once the C++ library is available? I'm asking because the source tree is the same for some reason.
module: build,triaged,enhancement
low
Minor
481,277,499
pytorch
Label tracking meta-issue (edit me to get automatically CC'ed on issues! cc bot)
This issue is used by [pytorch-probot](https://github.com/pytorch/test-infra/torchci) to manage subscriptions to labels. To subscribe yourself to a label, add a line `* label @yourusername`, or add your username to an existing line (space separated) in the body of this issue. **DO NOT COMMENT, COMMENTS ARE NOT RESPECTED BY THE BOT.** As a courtesy to others, please do not edit the subscriptions of users who are not you. * high priority @ezyang @gchanan @zou3519 @kadeng @msaroufim * critical @ezyang @gchanan @zou3519 @kadeng * module: binaries @seemethere @malfet @osalpekar @atalman * module: autograd @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan * module: complex @ezyang @anjali411 @dylanbespalko @mruberry @nikitaved @amjames * module: complex_autograd @anjali411 @albanD @soulitze @nikitaved * module: devx @ZainRizvi @kit1980 @huydhn @clee2000 * module: doc infra @ezyang @zou3519 @svekars * module: docs @svekars @brycebortree @sekyondaMeta @AlannaBurke * module: ci @seemethere @malfet @pytorch/pytorch-dev-infra * module: typing @ezyang @malfet @xuzhao9 @gramster * module: dataloader @andrewkho @divyanshk @SsnL @VitalyFedyunin @dzhulgakov * module: data @andrewkho @divyanshk @VitalyFedyunin @dzhulgakov * module: determinism @mruberry @kurtamohler * module: nn @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki * module: fft @mruberry * module: tests @mruberry @ZainRizvi * module: numpy @mruberry @rgommers * module: python array api @mruberry @rgommers @asmeurer @leofang @AnirudhDagar @asi1024 @emcastillo @kmaehashi * module: bc-breaking @ezyang @gchanan * module: quansight @ezyang @rgommers @mruberry * oncall: distributed @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o * module: distributed_checkpoint @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0 * module: rpc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @jjlilley @osalpekar @jiayisuse @mrzzd * module: fsdp 
@zhaojuanmao @mrshenli @rohan-varma @awgu @fegin @kwen2501 @chauhang * module: dtensor @wanchaol @tianyu-l @wz337 @XilunWu @d4l3k * module: tensorpipe @osalpekar @jiayisuse @lw @beauby @pritamdamania87 @mrshenli @jjlilley @gqchen * module: hub @nairbv @NicolasHug @vmoens @jdsgomes * oncall: jit @EikanWang @jgong5 @wenzhe-nrv @sanchitintel * module: windows @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex * module: linear algebra @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano * module: distributions @fritzo @neerajprad @alicanb @nikitaved * module: optimizer @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar * module: pt2 optimizer @mlazos @janeyx99 * oncall: quantization @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim * module: quantize script @jerryzh168 * module: mkldnn @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal * module: rnn @mikaylagawarecki * module: cpp @jbschlosser * module: cpp-extensions @malfet @zou3519 @xmfan * module: nestedtensor @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ * module: named tensor @zou3519 * shadow review @ezyang * module: performance @msaroufim * module: cpu @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 * module: memory format @jamesr66a * module: sparse @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip * backward compatibility @houseroad @nikitaved @jcaip * module: cuda @ptrblck @msaroufim @eqy * module: cublas @csarofeen @ptrblck @xwang233 @eqy * module: cudnn @csarofeen @ptrblck @xwang233 @eqy * module: vision @datumbox @vfdev-5 * module: xla @bdhirsh * module: build @malfet @seemethere * module: random @pbelevich * module: amp (automated mixed precision) @mcarilli @ptrblck @leslie-fang-intel @jgong5 * module: type promotion @nairbv @mruberry 
* module: rocm @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd * internals @ezyang @bhosmer @smessmer @ljk53 @bdhirsh * module: internals @ezyang @bhosmer @smessmer @ljk53 @bdhirsh * module: dispatcher @ezyang @bhosmer @smessmer @ljk53 @bdhirsh * module: codegen @ezyang @bhosmer @bdhirsh @kadeng * oncall: transformer/mha @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg @mikaylagawarecki * module: multi-headed-attention @drisspg @mikaylagawarecki * csprng @pbelevich * module: serialization @mruberry @mikaylagawarecki * module: pooling @mikaylagawarecki * module: sorting and selection * module: __torch_function__ @hameerabbasi @rgommers @ezyang * module: tf32 @zasdfgbnm @ptrblck * module: selective build @dhruvbird @ljk53 * module: tensor creation @gchanan @mruberry * module: special @mruberry @kshitij12345 * module: functional UX @albanD @jbschlosser @mruberry @zou3519 * module: arm @malfet @snadampal @milpuz01 * module: profiler @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise * oncall: profiler @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise * module: reductions * module: deploy @dzhulgakov * module: mta @crcrpar @mcarilli @janeyx99 * module: macos @malfet @albanD * module: multiprocessing @VitalyFedyunin @albanD * module: multithreading @albanD * module: __torch_dispatch__ @albanD @zou3519 @ezyang @kadeng * module: cuda graphs @mcarilli @ezyang @eellison @penguinwu @BoyuanFeng * module: vmap @zou3519 * module: jetson @ptrblck @puririshi98 * module: ZeroTensor @anjali411 * module: jiterator @mruberry * module: functionalization @bdhirsh @ezyang * module: pytree @zou3519 @XuehaiPan * module: __torch_dispatch__ @Chillee @ezyang @zou3519 @albanD @samdow * module: intel @frank-wei @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 * module: primTorch @ezyang @mruberry * 
module: library @anjali411 * module: scatter & gather ops * module: mps @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen * module: op_tags @anjali411 * oncall: fx @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv * fx @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv * module: fx @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv * oncall: pt2 @chauhang @penguinwu * module: pt2-dispatcher @zou3519 @bdhirsh @penguinwu @yf225 * tensor subclass @ezyang @albanD * module: functorch @zou3519 @Chillee @samdow @kshitij12345 * module: meta tensors @ezyang @eellison @bdhirsh * module: decompositions @SherlockNoMad * module: python frontend @albanD * module: dynamo @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames * module: inductor @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov * module: nvfuser @kevinstephano @jjsjann123 * NNC @EikanWang @jgong5 * intel @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 * module: derivatives @albanD * skip-pr-sanity-checks @albanD * module: elastic @dzhulgakov * oncall: r2p @dzhulgakov * module: dynamic shapes @ezyang @penguinwu @bobrenjc93 * module: error checking @malfet * module: backend @bdhirsh * module: float8 @yanbing-j @vkuzo @albanD @kadeng @penguinwu * module: fakeTensor @eellison * vllm-compile @zou3519 * module: higher order operators @zou3519 @ydwu4 @penguinwu * oncall: export @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @penguinwu * module: opcheck @zou3519 * module: checkpoint @soulitzer * module: flaky-tests @clee2000 @wdvr * module: user triton @oulgen @aakhundov @davidberard98 * module: startup-tracing-compile @oulgen @jamesjwu @aorenste @anijain2305 @laithsakka @penguinwu * module: 
aotinductor @desertfire @chenyang78 @penguinwu * module: copy on write @kurtamohler * module: mtia @egienvalue * module: xpu @gujinghui @EikanWang @fengyuan14 @guangyey * module: PrivateUse1 @NmomoN @mengpenghui @fwenguang @cdzhan @1274085042 @PHLens * oncall: distributed checkpointing @LucasLLC @pradeepfn * upstream triton @bertmaher @int3 @davidberard98 @nmacchioni @chenyang78 @embg @peterbell10 @aakhundov * module: compiled autograd @xmfan @yf225 * module: core aten @manuelcandales @SherlockNoMad @angelayi * module: flex attention @Chillee @drisspg @yanboliang @BoyuanFeng * module: slowgradcheck @soulitzer * oncall: debug-build @malfet @albanD * module: accelerator @albanD @guangyey @EikanWang
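As an aside, the `* label @user1 @user2` line format described at the top is easy to process mechanically. A hedged sketch (illustrative only, not pytorch-probot's actual parser) of turning such lines into a label-to-subscribers map:

```python
# Hypothetical sketch -- not pytorch-probot's actual implementation -- of how
# subscription lines of the form "* label @user1 @user2" could be parsed from
# the issue body into a {label: [usernames]} mapping. Labels may contain
# spaces (e.g. "module: rocm"), so the label is everything before the first
# @-mention on the line.
def parse_subscriptions(body):
    subs = {}
    for raw in body.splitlines():
        line = raw.strip()
        if not line.startswith("* "):
            continue  # only bullet lines declare subscriptions
        parts = line[2:].split()
        mention = next((i for i, p in enumerate(parts) if p.startswith("@")), None)
        if mention is None:
            continue  # a label with no subscribers yet
        label = " ".join(parts[:mention])
        subs[label] = [p.lstrip("@") for p in parts[mention:]]
    return subs


example = "* high priority @ezyang @gchanan\n* module: rocm @jeffdaily"
print(parse_subscriptions(example))
# {'high priority': ['ezyang', 'gchanan'], 'module: rocm': ['jeffdaily']}
```

Whitespace-splitting after the first mention is what makes the "space separated" usernames convention work.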
triaged
high
Critical
481,282,758
opencv
JS: npm package
I see there's no package for opencv.js, so JavaScript users can't install it using `npm install` as JS users are accustomed to. A further benefit would be better integration with web bundlers like webpack, parcel, and browserify, which simplify the development workflow generally used for web development.

I think for supporting this, the .wasm version could be better. BTW, although the current opencv.js file works since it supports UMD, it would not be the best choice: bundlers will combine it into the user's app (so the resulting .js file will contain it and will be huge, blocking the user's apps). Instead it should be downloaded separately, optionally allowing users to define an alternative URL for it (like a CDN).

I'm working on a proof of concept for this, and also on support/tests for node.js as an API and a little CLI tool too, and I wonder if something like this would be accepted as a PR, at least to start the discussion. Thanks
category: javascript (js)
medium
Critical
481,307,204
flutter
`flutter upgrade` crashes with ProcessException trying to run git on windows
*@lanaspectrum commented on Aug 14, 2019, 12:01 PM UTC:* ## What happened I got the exception when I tried to run Flutter upgrade following a prompt that a new version is now available ## Version information Android Studio `3.4.2` β€’ Flutter plugin `io.flutter 38.2.1` β€’ Dart plugin `183.6270` Error getting Flutter sdk information. ## Exception null ``` java.lang.Throwable: Flutter device daemon #1: process exited during startup. Exit code: 255, stderr: CreateProcessW failed 5 CreateProcessW failed 5 CreateProcessW failed 5 Unhandled exception: ProcessException: Access is denied. Command: "\"C:\Program Files\Git\cmd\git.EXE\"" log -n 1 --pretty=format:%H #0 _ProcessImpl._runAndWait (dart:io-patch/process_patch.dart:496:7) #1 _runNonInteractiveProcessSync (dart:io-patch/process_patch.dart:641:18) #2 Process.runSync (dart:io-patch/process_patch.dart:66:12) #3 LocalProcessManager.runSync (package:process/src/interface/local_process_manager.dart:93:20) #4 _runWithLoggingSync (package:flutter_tools/src/base/process.dart:349:48) #5 runSync (package:flutter_tools/src/base/process.dart:321:10) #6 _runGit (package:flutter_tools/src/version.dart:535:10) #7 new FlutterVersion (package:flutter_tools/src/version.dart:23:26) #8 runInContext.<anonymous closure> (package:flutter_tools/src/context_runner.dart:85:29) #9 AppContext._generateIfNecessary.<anonymous closure> (package:flutter_tools/src/base/context.dart:99:41) #10 __InternalLinkedHashMap&_HashVMBase&MapMixin&_LinkedHashMapMixin.putIfAbsent (dart:collection-patch/compact_hash.dart:291:23) #11 AppContext._generateIfNecessary (package:flutter_tools/src/base/context.dart:87:20) #12 AppContext.get (package:flutter_tools/src/base/context.dart:115:32) #13 FlutterVersion.instance (package:flutter_tools/src/version.dart:186:49) #14 new _DefaultUsage (package:flutter_tools/src/reporting/usage.dart:148:58) #15 new Usage (package:flutter_tools/src/reporting/usage.dart:70:9) #16 runInContext.<anonymous closure> 
(package:flutter_tools/src/context_runner.dart:104:20) #17 AppContext._generateIfNecessary.<anonymous closure> (package:flutter_tools/src/base/context.dart:99:41) #18 __InternalLinkedHashMap&_HashVMBase&MapMixin&_LinkedHashMapMixin.putIfAbsent (dart:collection-patch/compact_hash.dart:291:23) #19 AppContext._generateIfNecessary (package:flutter_tools/src/base/context.dart:87:20) #20 AppContext.get (package:flutter_tools/src/base/context.dart:115:32) #21 Usage.instance (package:flutter_tools/src/reporting/usage.dart:76:40) #22 flutterUsage (package:flutter_tools/src/reporting/usage.dart:60:33) #23 _handleToolError (package:flutter_tools/runner.dart:120:7) <asynchronous suspension> #24 run.<anonymous closure>.<anonymous closure> (package:flutter_tools/runner.dart:77:13) <asynchronous suspension> #25 _rootRunBinary (dart:async/zone.dart:1148:13) #26 _CustomZone.runBinary (dart:async/zone.dart:1037:19) #27 runZoned.<anonymous closure> (dart:async/zone.dart:1479:21) #28 _CustomZone.handleUncaughtError (dart:async/zone.dart:1003:19) #29 Future._propagateToListeners (dart:async/future_impl.dart:589:16) #30 Future._completeError (dart:async/future_impl.dart:491:5) #31 _SyncCompleter._completeError (dart:async/future_impl.dart:55:12) #32 _Completer.completeError (dart:async/future_impl.dart:27:5) #33 _AsyncAwaitCompleter.completeError (dart:async-patch/async_patch.dart:40:18) #34 run.<anonymous closure>.<anonymous closure> (package:flutter_tools/runner.dart) <asynchronous suspension> #35 _rootRun (dart:async/zone.dart:1124:13) #36 _CustomZone.run (dart:async/zone.dart:1021:19) #37 _runZoned (dart:async/zone.dart:1516:10) #38 runZoned (dart:async/zone.dart:1500:12) #39 run.<anonymous closure> (package:flutter_tools/runner.dart:61:18) <asynchronous suspension> #40 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:154:29) <asynchronous suspension> #41 _rootRun (dart:async/zone.dart:1124:13) #42 _CustomZone.run (dart:async/zone.dart:1021:19) #43 
_runZoned (dart:async/zone.dart:1516:10) #44 runZoned (dart:async/zone.dart:1463:12) #45 AppContext.run (package:flutter_tools/src/base/context.dart:153:18) <asynchronous suspension> #46 runInContext (package:flutter_tools/src/context_runner.dart:58:24) <asynchronous suspension> #47 run (package:flutter_tools/runner.dart:50:10) #48 main (package:flutter_tools/executable.dart:65:9) <asynchronous suspension> #49 main (file:///C:/Users/oladimeji/flutter/packages/flutter_tools/bin/flutter_tools.dart:8:3) #50 _startIsolate.<anonymous closure> (dart:isolate-patch/isolate_patch.dart:303:32) #51 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:172:12) at com.intellij.openapi.diagnostic.Logger.error(Logger.java:137) at io.flutter.run.daemon.DeviceDaemon$Command.start(DeviceDaemon.java:239) at io.flutter.run.daemon.DeviceService.chooseNextDaemon(DeviceService.java:230) at io.flutter.utils.Refreshable.runInBackground(Refreshable.java:221) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) ``` *This issue was moved by [devoncarew](https://github.com/devoncarew) from [flutter/flutter-intellij#3780](https://github.com/flutter/flutter-intellij/issues/3780).*
c: crash,tool,P2,team-tool,triaged-tool
low
Critical
481,315,851
pytorch
Use a ScriptModule on GPU that was saved from CPU
## πŸ› Bug When a ``nn.Module`` that have been instantiated on CPU is traced and saved, it expects a `CPUFloatType` tensor even if the `ScriptModule` is mapped onto GPU. ## To Reproduce Steps to reproduce the behavior: 1.Build a `nn.Module` on CPU 1.Trace and save the `ScriptModule` in a file 1.Load the file with `map_location=GPU` ```python module = torch.jit.trace_module(model, {'forward': torch.rand(1, 1, 200, 200)}) torch.jit.save(module, 'test_torchscript_cpu.pt') script_module = torch.jit.load('test_torchscript_cpu.pt', map_location=torch.device('cuda:0')) input_tensor = torch.rand(1, 1, 200, 200).to(torch.device('cuda:0')) # --> RuntimeError: expected device cuda:0 and dtype Float but got device cpu and dtype Float input_tensor = torch.rand(1, 1, 200, 200)) # --> RuntimeError: Input type (Variable[CPUFloatType]) and weight type (Variable[CUDAFloatType]) should be the same script_module(input_tensor) ``` <!-- If you have a code sample, error messages, stack traces, please provide it here as well --> ## Expected behavior I expected that whether a `nn.Module` was traced on GPU or CPU, when loaded on CPU (resp. GPU) it needs a tensor from the CPU(resp. 
GPU) ## Environment PyTorch version: 1.2.0 Is debug build: No CUDA used to build PyTorch: 10.0.130 OS: Ubuntu 18.04.3 LTS GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0 CMake version: version 3.10.2 Python version: 3.7 Is CUDA available: Yes CUDA runtime version: 10.0.130 GPU models and configuration: GPU 0: GeForce GTX 1080 Ti GPU 1: GeForce GTX 1080 Ti Nvidia driver version: 430.14 cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.0 Versions of relevant libraries: [pip] numpy==1.16.4 [pip] torch==1.2.0 [pip] torchsummary==1.5.1 [pip] torchvision==0.4.0a0+6b959ee [conda] Could not collect ## What works If I expect the model to run on GPU, it needs to be loaded onto GPU before tracing, then saved ```python model.to(torch.device('cuda:0')) input_tensor = torch.rand(1, 1, 200, 200).to(torch.device('cuda:0')) module = torch.jit.trace_module(model, {'forward': input_tensor}) torch.jit.save(module, 'test_torchscript_gpu.pt') script_module = torch.jit.load('test_torchscript_gpu.pt', map_location=torch.device('cuda:0')) script_module(input_tensor) ``` cc @suo
oncall: jit,module: serialization,triaged
low
Critical
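The device mismatch in the report above can be pictured with a small, hedged sketch. This is pure Python with no torch dependency, and the checkpoint layout and names are invented for illustration only: `map_location` conceptually remaps where saved storages live, while a device expectation recorded at trace time is a separate piece of state that the remap does not touch.

```python
# Hypothetical, torch-free sketch -- NOT the real torch.jit internals.
# It illustrates one reading of the report: a map_location-style remap
# rewrites where the saved parameter *storages* live, while the device
# expectation recorded at trace time is untouched, so the CPU-traced
# module still demands CPU inputs.

def remap_storages(checkpoint, map_location):
    """Return a copy of the checkpoint with every storage moved."""
    return {
        name: {**entry, "device": map_location}
        for name, entry in checkpoint.items()
    }

# Toy "checkpoint": parameter storages plus a device recorded by tracing.
ckpt = {
    "conv.weight": {"data": [0.1, 0.2], "device": "cpu"},
    "conv.bias": {"data": [0.0], "device": "cpu"},
}
traced_input_device = "cpu"  # baked in at trace time, not a storage

remapped = remap_storages(ckpt, "cuda:0")
print(remapped["conv.weight"]["device"])  # storages follow the remap
print(traced_input_device)                # the traced expectation does not
```

This matches the "What works" section of the report: moving the model to the GPU before tracing makes the recorded expectation and the storages agree.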
481,340,496
flutter
InkWell from UserAccountsDrawerHeader not showing
The `UserAccountsDrawerHeader` is supposed to have an `InkWell` in its `_AccountDetailsState`, but this isn't visible at all when using it. This can be confirmed by downloading the Flutter Gallery App and tapping the user details in the Drawer Navigation example.
framework,f: material design,has reproducible steps,P2,found in release: 3.0,found in release: 3.1,team-design,triaged-design
low
Minor
481,348,832
godot
Godot ignores Viewports with size 0 even though their size is determined by OpenVR
**Godot version:** 3.1.1 **OS/device including version:** Ubuntu 19.10 Nvidia GTX 1080 Nvidia Driver: 430.34 Headset: HTC Vive SteamVR Version: 1.7.4 **Issue description:** Attempting to use ARVR with a secondary viewport results in a blank screen on the headset **Steps to reproduce:** Add godot-openvr addon Add camera to scene Add viewport to scene Add VR components (incl. ARVRCamera) to viewport Disable HDR on viewport, enable ARVR Add the following script to root of scene: ``` var vr = ARVRServer.find_interface("OpenVR") if vr and vr.initialize(): OS.vsync_enabled = false Engine.target_fps = 90 ``` Start scene Expected Result: Main window should render the fixed camera VR headset should render the ARVRCamera Actual Results: Main window renders fixed camera VR headset is blank **Minimal reproduction project:** https://github.com/mrmakeit/VR-Viewport-Test Additional Notes: The version of the openvr addon does not seem to matter. I have used both the latest and one downloaded for 3.0.6 with exactly the same results. Everything works as expected with 3.0.6. I've opened an issue in godot-openvr as well, but since the bug shows up only in 3.1.1, regardless of addon version, I thought I'd open an issue here as well. Issue for godot-openvr: godotvr/godot_openvr#61
bug,topic:core,usability,topic:xr
low
Critical
481,364,947
rust
object lifetime bound defaults in trait bindings are incompletely implemented
In https://github.com/rust-lang/rust/pull/63376, we fixed the behavior of default object lifetime bounds in "binding arguments", as described in [this gist](https://gist.github.com/nikomatsakis/e0ac0276744d10b952cf4ea3d3d8a814). For example, the following type: ```rust dyn Foo<Bar = dyn Baz> ``` will now default `dyn Baz` to `dyn Baz + 'static`. However, the rules were incompletely implemented. If the trait has a lifetime parameter, we currently require an explicit lifetime bound. The expected behavior is to use the bounds declared on the associated type to set the default. So, given `Foo` like so: ```rust trait Foo<'a> { type Bar: 'a + ?Sized; } ``` then `dyn Foo<'x, Bar = dyn Baz>` would default to `dyn Baz + 'x`. Presently, however, it gives an error. NB. There is one "design decision" to be made; does a where clause like `where Self::Bar: 'x` adjust the default? And how does that work when extended to GATs? My personal opinion is that we should limit ourselves to bounds declared on the associated type itself (and hence `where Self::Bar: 'x` would not count towards the default object lifetime bound), but this merits a bit of further discussion. This is largely because the complexity to specify and implement the behavior around where clauses and GATs feels out of proportion with the benefit.
A-type-system,A-lifetimes,A-trait-system,T-lang,T-compiler,T-types,A-trait-objects
low
Critical
481,386,694
TypeScript
Quick fix for using bracket accessor instead of get/set on Map
**TypeScript Version:** 3.6.0-dev.20190814 **Search Terms:** - quick fix - code action - getCodeFixes **Code** With `strict: true`: ```ts class Foo { public readonly map = new Map<string, string>(); constructor() { console.log(this.map['af123']); } } ``` This produces the error: ``` Element implicitly has an 'any' type because type 'Map<string, string>' has no index signature. Did you mean to call 'get'? ``` **Expected behavior:** The error message suggests a fix. There should be a quick fix that changes the code to call `.get`. **Actual behavior:** No quick fix returned.
Suggestion,Domain: Quick Fixes,Experience Enhancement
low
Critical
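The requested quick fix amounts to a mechanical text edit. A hedged, Python-based sketch of that rewrite's before/after shape (a real TypeScript quick fix would operate on the type-checked AST via the language service, not on regexes; `suggest_get_fix` and the pattern are invented for illustration):

```python
import re

# Hypothetical sketch of the text edit such a quick fix could apply:
# rewrite a bracket access with a string-literal key into a .get() call.
BRACKET_ACCESS = re.compile(r"(\w[\w.]*)\[('[^']*')\]")

def suggest_get_fix(line: str) -> str:
    """Rewrite obj['key'] into obj.get('key')."""
    return BRACKET_ACCESS.sub(r"\1.get(\2)", line)

print(suggest_get_fix("console.log(this.map['af123']);"))
# console.log(this.map.get('af123'));
```

A real implementation would also need the symmetric rewrite of assignments (`m['k'] = v` into `m.set('k', v)`) and would only fire when the receiver's type is actually `Map`.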
481,431,511
pytorch
Building PyTorch 1.2.0 fails with `recipe for target 'bin/test_parallel' failed`
## πŸ› Bug When building PyTorch 1.2.0, occur `recipe for target 'bin/test_parallel' failed` error. ## To Reproduce Steps to reproduce the behavior: 1. `git clone https://github.com/pytorch/pytorch.git` 2. `git git checkout tags/v1.2.0 -b build-v1.2.0` 3. `python setup.py install` ## Environment ``` PyTorch version: 1.2.0+cpu Is debug build: No CUDA used to build PyTorch: None OS: Ubuntu 18.04.2 LTS GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0 CMake version: version 3.10.2 Python version: 3.7 Is CUDA available: No CUDA runtime version: No CUDA GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA ``` ## Additional context Here is the [build.log](https://gist.github.com/rivergold/750cd567d35ffe0820b6fe2c98f713ea)
module: build,triaged
low
Critical
481,439,013
TypeScript
union types not type guarded
**TypeScript Version:** 3.5.2 **Search Terms:** extended function type restriction **Code** ```ts class C { name: string; constructor(private handler: F) { this.name = handler.name; } } type F = (...args: any[]) => any; type F2 = F & { inject?: string[] } function f(x: C | F2): C { if (typeof x == 'function') x = new C(f); return x; } ``` **Expected behavior:** x in function f should be of type C at the point of the return statement **Actual behavior:** x is considered as C or F2, apparently because of the combined type (it works well if we replace F2 with F) **Playground Link:** https://www.typescriptlang.org/play/index.html#code/MYGwhgzhAEDCBQBve1XQHZgLYFMBc0EALgE4CW6A5gNwprAD26xJArsEQyQBQAO5ANzBEc0ABZh0AExA4SBAGIBKOqmRoN0ImLIQAdJlzQAvOMky5B7DloaAvvAfwiAT16iFJ6Nz2+wJSggCSRcAbQBdJRMAPmgQ2mc3DwAmL08AMmhEaAoAKxwOAH4CFgpKCOgnADNWdA4yJmgq7gAPAlhoAB9oBWSlPAR1NDIq71d3BlGWk1MAchq6ogb0WZVNNGnTdBwAdzhuKqUEjRIcIlYSdGgW2jsgA **Related Issues:** #27143
Bug,Fix Available,Rescheduled
low
Critical
481,441,358
neovim
Vim script syntax highlighting messed up after fold marker
- `nvim --version`: v0.3.8 - `vim -u DEFAULTS` (version: ) behaves differently? no - Operating system/version: linux - Terminal name/version: kitty - `$TERM`: xterm-kitty ### Steps to reproduce using `nvim -u NORC` Open a .vim file. Run `:set foldenable | set foldmethod=marker`. Declare a variable `let g:my_var = 'test'`. Insert a newline and create a fold on the previous line using `o<esc>kzfj`. Type some vimscript inside the fold. ### Actual behaviour Syntax highlighting doesn't work correctly inside the fold unless a space is added between the string and the initial fold marker (`"{{{`). ### Expected behaviour Syntax highlighting works as usual. ##### Without space: ![2019-08-16-011255_maim](https://user-images.githubusercontent.com/29720696/63144067-99b9d280-bfc7-11e9-84c1-f2434cca1f35.png) ##### With space: ![2019-08-16-011447_maim](https://user-images.githubusercontent.com/29720696/63144082-a76f5800-bfc7-11e9-9538-197a30efabb4.png)
runtime,bug-vim,syntax,needs:vim-patch
low
Minor
481,465,055
youtube-dl
download from boomerang.com audio and video
- [x] I'm reporting a new site support request - [x] I've verified that I'm running youtube-dl version **2019.08.13** - [x] I've checked that all provided URLs are alive and playable in a browser - [x] I've checked that none of provided URLs violate any copyrights - [x] I've searched the bugtracker for similar site support requests including closed ones ## Example URLs - Single video: https://watch.boomerang.com/watch/227/1 ## Description I'm trying to download audio and video from boomerang.com, but it is not supported. There are free episodes for testing.
site-support-request
low
Critical
481,512,895
rust
rustc (>= 1.20.0) fails to optimize moves in trivial cases
The example code below generates extra stack copies of String (meta)data in function `got()`, which is expected to produce identical *optimized* code to `expect()`. To quickly verify the issue, compare the `sub rsp, $FRAME_SIZE` instructions which initialize the stack frames at the beginning of the `got()` & `expect()` functions compiled with `-C opt-level=3` (or measure & compare the generated code sizes). [Rust Playground link](https://play.rust-lang.org/?version=nightly&mode=release&edition=2018&gist=0438199d12435fdcdcc8e65605b762af). rustc versions before 1.20.0 produce the expected optimized assembly. ```rust pub fn got() -> String { let mut res = String::new(); let s0 = String::from("foobar"); res.push_str(&s0); let s0 = s0; res.push_str(&s0); let s0 = s0; res.push_str(&s0); let s0 = s0; res.push_str(&s0); res } pub fn expect() -> String { let mut res = String::new(); let s0 = String::from("foobar"); res.push_str(&s0); res.push_str(&s0); res.push_str(&s0); res.push_str(&s0); res } /* * s0: String required, s0: &str generates correctly optimized assembly * res.push_str(...) line repetitions can be increased for a greater effect * let s0 = s0 used for illustration purposes (variable names do not matter) */ ```
A-LLVM,I-slow,C-enhancement,T-compiler,C-optimization
low
Major
481,517,156
react
SSR: Cannot set property 'memoizedState' of null
**Do you want to request a *feature* or report a *bug*?** A bug? **What is the current behavior?** > Cannot set property 'memoizedState' of null **If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem. Your bug will get fixed much faster if we can run your code and it doesn't have dependencies other than React. Paste the link to your JSFiddle (https://jsfiddle.net/Luktwrdm/) or CodeSandbox (https://codesandbox.io/s/new) example below:** ```js const processLink = html => { return renderToStaticMarkup(<Link />) }; const RichText = ({ html }) => { const htmlProcessed = useMemo(() => processLink(html), [html]); } ``` See https://codesandbox.io/s/cannot-set-property-memoizedstate-of-null-mrxfr **What is the expected behavior?** **Which versions of React, and which browser / OS are affected by this issue? Did this work in previous versions of React?** 16.8~16.9
Type: Bug
medium
Critical
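The error shape in this report can be modeled with a small, language-agnostic sketch. This is an assumption about the mechanism, not React's actual source: hooks read a shared "currently rendering" record, and if a nested `renderToStaticMarkup`-style call resets that record instead of restoring it, the outer hook dereferences nothing. All names below are invented for illustration.

```python
# Hypothetical model of a hooks dispatcher; names do not correspond to
# React internals. A nested renderer that *resets* the shared
# "current fiber" global instead of restoring it leaves the outer hook
# dereferencing None -- the same shape as the reported error.
current_fiber = {"memoized_state": []}

def use_memo(compute):
    value = compute()  # the nested render runs inside this call
    current_fiber["memoized_state"].append(value)  # fails if clobbered
    return value

def render_to_static_markup_buggy(component):
    global current_fiber
    current_fiber = {"memoized_state": []}  # fresh fiber for nested render
    html = component()
    current_fiber = None  # bug: resets instead of restoring the outer fiber
    return html

def rich_text():
    # Nested render inside a memoized computation, as in the report.
    return use_memo(lambda: render_to_static_markup_buggy(lambda: "<a/>"))

try:
    rich_text()
except TypeError as exc:
    print("outer hook failed:", exc)  # NoneType is not subscriptable
```

Under this reading, moving the nested render out of the memoized computation (e.g. into an effect or a precomputed prop) would sidestep the clobbering.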
481,525,587
create-react-app
Automatically convert manifest.json fields to corresponding iOS meta tags
### Is your proposal related to a problem? On iOS, Safari doesn't make use of the manifest.json icons, splash screen, and other settings. Instead, it expects some special meta tags in the HTML page. ### Describe the solution you'd like It would be very useful if CRA included some webpack plugin that automatically reads the manifest.json entries and injects the needed HTML tags in the index.html page. ### Describe alternatives you've considered N/A ### Additional context I know there are some plugins such as webpack-pwa-manifest, but they require a special format that then generates both the manifest.json **and** the HTML tags. I think it would make more sense if the plugin directly read the manifest.json settings instead, so that we don't tie the users to a custom syntax.
issue: proposal,needs triage
low
Minor
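The proposed translation can be sketched in a few lines. This is a hedged illustration, not a CRA implementation: the input keys are standard Web App Manifest fields, the output uses the widely documented `apple-touch-icon` / `apple-mobile-web-app-*` conventions, and the exact tag set a real plugin should emit is an assumption.

```python
import json

# Sketch only: a minimal translation from Web App Manifest keys to the
# meta/link tags Safari understands. The tag selection is an assumption.
def manifest_to_ios_tags(manifest_json: str) -> list:
    m = json.loads(manifest_json)
    tags = []
    if m.get("short_name"):
        tags.append('<meta name="apple-mobile-web-app-title" content="%s">'
                    % m["short_name"])
    if m.get("display") == "standalone":
        tags.append('<meta name="apple-mobile-web-app-capable" content="yes">')
    for icon in m.get("icons", []):
        tags.append('<link rel="apple-touch-icon" sizes="%s" href="%s">'
                    % (icon["sizes"], icon["src"]))
    return tags

manifest = json.dumps({
    "short_name": "MyApp",
    "display": "standalone",
    "icons": [{"src": "logo192.png", "sizes": "192x192", "type": "image/png"}],
})
for tag in manifest_to_ios_tags(manifest):
    print(tag)
```

A webpack plugin doing this would read the manifest at build time and hand the generated tags to the HTML plugin that emits index.html.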
481,537,032
opencv
CUDA (opencv_contrib) LINK : fatal error LNK1181: Unable to open the input file "Files/NVIDIA.obj"
##### System information (version) - OpenCV => 4.1.0 - Operating System / Platform => Windows 10 64 Bit - Compiler => Visual Studio 2015 --> NMake ##### Detailed description ``` LINK: command "C:\PROGRA~2\MICROS~1.0\VC\bin\amd64\link.exe /nologo @CMakeFiles\opencv_world.dir\objects1.rsp /out:..\..\bin\opencv_world410.dll /implib:..\..\lib\opencv_world410.lib /pdb:D:\projects\cpp\opencv-4.1.0\cmake-build-release\bin\opencv_world410.pdb /dll /version:4.1 /NODEFAULTLIB:atlthunk.lib /NODEFAULTLIB:atlsd.lib /NODEFAULTLIB:libcmt.lib /DEBUG /machine:x64 /INCREMENTAL:NO -LIBPATH:C:\PROGRA~1\NVIDIA~2\CUDA\v9.1\lib\x64 ..\..\3rdparty\lib\ippiw.lib ..\..\3rdparty\ippicv\ippicv_win\icv\lib\intel64\ippicvmt.lib cudart_static.lib nppc.lib nppial.lib nppicc.lib nppicom.lib nppidei.lib nppif.lib nppig.lib nppim.lib nppist.lib nppisu.lib nppitc.lib npps.lib cublas.lib cufft.lib -LIBPATH:C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.1/lib/x64 cublas.lib cufft.lib ..\..\3rdparty\lib\libprotobuf.lib ..\..\lib\ade.lib ..\..\3rdparty\lib\zlib.lib ..\..\3rdparty\lib\libjpeg-turbo.lib ..\..\3rdparty\lib\libwebp.lib ..\..\3rdparty\lib\libpng.lib 
..\..\3rdparty\lib\libtiff.lib ..\..\3rdparty\lib\libjasper.lib ..\..\3rdparty\lib\IlmImf.lib C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.1\lib\x64\cuda.lib C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.1\lib\x64\nvcuvid.lib comctl32.lib gdi32.lib ole32.lib setupapi.lib ws2_32.lib ..\..\3rdparty\lib\zlib.lib ..\..\3rdparty\lib\ittnotify.lib C:\Program Files\OSGeo4W64\lib\hdf5.lib ..\..\3rdparty\lib\quirc.lib cudart_static.lib nppc.lib nppial.lib nppicc.lib nppicom.lib nppidei.lib nppif.lib nppig.lib nppim.lib nppist.lib nppisu.lib nppitc.lib npps.lib cublas.lib cufft.lib C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.1\lib\x64\cuda.lib C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.1\lib\x64\nvcuvid.lib comctl32.lib gdi32.lib ole32.lib setupapi.lib ws2_32.lib C:\Program Files\OSGeo4W64\lib\hdf5.lib kernel32.lib user32.lib gdi32.lib winspool.lib shell32.lib ole32.lib oleaut32.lib uuid.lib comdlg32.lib advapi32.lib /MANIFEST /MANIFESTFILE:..\..\bin\opencv_world410.dll.manifest" failed (exit code 1181) with the following output: LINK : fatal error LNK1181: Unable to open the input file "Files/NVIDIA.obj" ``` ##### Steps to reproduce > Step 1 ``` cmake.exe -DCMAKE_BUILD_TYPE=Release -DOPENCV_EXTRA_MODULES_PATH=D:\projects\cpp\opencv_contrib-4.1.0\modules -DOPENCV_ENABLE_NONFREE=ON -DWITH_OPENCL=ON -DWITH_CUDA=ON "-DCUDA_TOOLKIT_ROOT_DIR=C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.1" -DWITH_FFMPEG=ON -DINSTALL_CREATE_DISTRIB=ON -DENABLE_SSE=ON -DENABLE_SSE2=ON -DBUILD_TESTS=OFF -DBUILD_PERF_TESTS=OFF -DBUILD_opencv_python2=OFF -DBUILD_opencv_python3=OFF -G "CodeBlocks - NMake Makefiles" D:\projects\cpp\opencv-4.1.0 ``` > Step 2 ``` NMake ``` > Cmake output: ``` C:\Users\wangxiaoming\AppData\Local\JetBrains\Toolbox\apps\CLion\ch-0\192.5728.100\bin\cmake\win\bin\cmake.exe 
-DCMAKE_BUILD_TYPE=Release -DOPENCV_EXTRA_MODULES_PATH=D:\projects\cpp\opencv_contrib-4.1.0\modules -DOPENCV_ENABLE_NONFREE=ON -DWITH_OPENCL=ON -DWITH_CUDA=ON "-DCUDA_TOOLKIT_ROOT_DIR=C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.1" -DWITH_FFMPEG=ON -DINSTALL_CREATE_DISTRIB=ON -DENABLE_SSE=ON -DENABLE_SSE2=ON -DBUILD_TESTS=OFF -DBUILD_PERF_TESTS=OFF -DBUILD_opencv_python2=OFF -DBUILD_opencv_python3=OFF -G "CodeBlocks - NMake Makefiles" D:\projects\cpp\opencv-4.1.0 -- The CXX compiler identification is MSVC 19.0.24215.1 -- The C compiler identification is MSVC 19.0.24215.1 -- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/amd64/cl.exe -- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/amd64/cl.exe -- works -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Detecting CXX compile features -- Detecting CXX compile features - done -- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/amd64/cl.exe -- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/amd64/cl.exe -- works -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Detecting C compile features -- Detecting C compile features - done -- Performing Test HAVE_CXX11 (check file: cmake/checks/cxx11.cpp) -- Performing Test HAVE_CXX11 - Success -- Found PythonInterp: C:/Users/wangxiaoming/AppData/Local/Programs/Python/Python36/python.exe (found suitable version "3.6.5", minimum required is "2.7") CMake Warning at cmake/OpenCVDetectPython.cmake:81 (message): CMake's 'find_host_package(PythonInterp 2.7)' founds wrong Python version: PYTHON_EXECUTABLE=C:/Users/wangxiaoming/AppData/Local/Programs/Python/Python36/python.exe PYTHON_VERSION_STRING=3.6.5 Consider specify 'PYTHON2_EXECUTABLE' variable via CMake command line or environment variables Call Stack (most recent call first): 
cmake/OpenCVDetectPython.cmake:275 (find_python) CMakeLists.txt:689 (include) -- Found Python2: C:/Python27/python.exe (found version "2.7.15") found components: Interpreter -- Found PythonInterp: C:/Python27/python.exe (found version "2.7.15") -- Found PythonLibs: C:/Python27/libs/python27.lib (found suitable exact version "2.7.15") -- Found PythonInterp: C:/Users/wangxiaoming/AppData/Local/Programs/Python/Python36/python.exe (found suitable version "3.6.5", minimum required is "3.2") -- Could NOT find PythonLibs: Found unsuitable version "3.6.4", but required is exact version "3.6.5" (found C:/ProgramData/Anaconda3/libs/python36.lib) -- WARNING: Option ENABLE_SSE='ON' is deprecated and should not be used anymore -- Behaviour of this option is not backward compatible -- Refer to 'CPU_BASELINE'/'CPU_DISPATCH' CMake options documentation -- WARNING: Option ENABLE_SSE2='ON' is deprecated and should not be used anymore -- Behaviour of this option is not backward compatible -- Refer to 'CPU_BASELINE'/'CPU_DISPATCH' CMake options documentation -- Performing Test HAVE_CPU_SSE3_SUPPORT (check file: cmake/checks/cpu_sse3.cpp) -- Performing Test HAVE_CPU_SSE3_SUPPORT - Success -- Performing Test HAVE_CPU_SSSE3_SUPPORT (check file: cmake/checks/cpu_ssse3.cpp) -- Performing Test HAVE_CPU_SSSE3_SUPPORT - Success -- Performing Test HAVE_CPU_SSE4_1_SUPPORT (check file: cmake/checks/cpu_sse41.cpp) -- Performing Test HAVE_CPU_SSE4_1_SUPPORT - Success -- Performing Test HAVE_CPU_POPCNT_SUPPORT (check file: cmake/checks/cpu_popcnt.cpp) -- Performing Test HAVE_CPU_POPCNT_SUPPORT - Success -- Performing Test HAVE_CPU_SSE4_2_SUPPORT (check file: cmake/checks/cpu_sse42.cpp) -- Performing Test HAVE_CPU_SSE4_2_SUPPORT - Success -- Performing Test HAVE_CXX_ARCH:AVX (check file: cmake/checks/cpu_fp16.cpp) -- Performing Test HAVE_CXX_ARCH:AVX - Success -- Performing Test HAVE_CXX_ARCH:AVX2 (check file: cmake/checks/cpu_avx2.cpp) -- Performing Test HAVE_CXX_ARCH:AVX2 - Success -- Performing 
Test HAVE_CPU_AVX_512F_SUPPORT (check file: cmake/checks/cpu_avx512.cpp) -- Performing Test HAVE_CPU_AVX_512F_SUPPORT - Failed -- AVX_512F is not supported by C++ compiler -- Performing Test HAVE_CPU_AVX512_SKX_SUPPORT (check file: cmake/checks/cpu_avx512skx.cpp) -- Performing Test HAVE_CPU_AVX512_SKX_SUPPORT - Failed -- AVX512_SKX is not supported by C++ compiler -- Dispatch optimization AVX512_SKX is not available, skipped -- Performing Test HAVE_CPU_BASELINE_FLAGS -- Performing Test HAVE_CPU_BASELINE_FLAGS - Success -- Performing Test HAVE_CPU_DISPATCH_FLAGS_SSE4_1 -- Performing Test HAVE_CPU_DISPATCH_FLAGS_SSE4_1 - Success -- Performing Test HAVE_CPU_DISPATCH_FLAGS_SSE4_2 -- Performing Test HAVE_CPU_DISPATCH_FLAGS_SSE4_2 - Success -- Performing Test HAVE_CPU_DISPATCH_FLAGS_FP16 -- Performing Test HAVE_CPU_DISPATCH_FLAGS_FP16 - Success -- Performing Test HAVE_CPU_DISPATCH_FLAGS_AVX -- Performing Test HAVE_CPU_DISPATCH_FLAGS_AVX - Success -- Performing Test HAVE_CPU_DISPATCH_FLAGS_AVX2 -- Performing Test HAVE_CPU_DISPATCH_FLAGS_AVX2 - Success -- Check if the system is big endian -- Searching 16 bit integer -- Looking for sys/types.h -- Looking for sys/types.h - found -- Looking for stdint.h -- Looking for stdint.h - found -- Looking for stddef.h -- Looking for stddef.h - found -- Check size of unsigned short -- Check size of unsigned short - done -- Using unsigned short -- Check if the system is big endian - little endian -- Looking for fseeko -- Looking for fseeko - not found -- Check size of off64_t -- Check size of off64_t - failed -- libjpeg-turbo: VERSION = 2.0.2, BUILD = opencv-4.1.0-libjpeg-turbo -- Check size of size_t -- Check size of size_t - done -- Check size of unsigned long -- Check size of unsigned long - done -- Looking for include file intrin.h -- Looking for include file intrin.h - found -- Looking for assert.h -- Looking for assert.h - found -- Looking for fcntl.h -- Looking for fcntl.h - found -- Looking for inttypes.h -- Looking for 
inttypes.h - found -- Looking for io.h -- Looking for io.h - found -- Looking for limits.h -- Looking for limits.h - found -- Looking for malloc.h -- Looking for malloc.h - found -- Looking for memory.h -- Looking for memory.h - found -- Looking for search.h -- Looking for search.h - found -- Looking for string.h -- Looking for string.h - found -- Performing Test C_HAS_inline -- Performing Test C_HAS_inline - Success -- Check size of signed short -- Check size of signed short - done -- Check size of unsigned short -- Check size of unsigned short - done -- Check size of signed int -- Check size of signed int - done -- Check size of unsigned int -- Check size of unsigned int - done -- Check size of signed long -- Check size of signed long - done -- Check size of signed long long -- Check size of signed long long - done -- Check size of unsigned long long -- Check size of unsigned long long - done -- Check size of unsigned char * -- Check size of unsigned char * - done -- Check size of ptrdiff_t -- Check size of ptrdiff_t - done -- Looking for memmove -- Looking for memmove - found -- Looking for setmode -- Looking for setmode - found -- Looking for strcasecmp -- Looking for strcasecmp - not found -- Looking for strchr -- Looking for strchr - found -- Looking for strrchr -- Looking for strrchr - found -- Looking for strstr -- Looking for strstr - found -- Looking for strtol -- Looking for strtol - found -- Looking for strtol -- Looking for strtol - found -- Looking for strtoull -- Looking for strtoull - found -- Looking for lfind -- Looking for lfind - found -- Performing Test HAVE_SNPRINTF -- Performing Test HAVE_SNPRINTF - Success -- Check if the system is big endian -- Searching 16 bit integer -- Using unsigned short -- Check if the system is big endian - little endian -- IPPICV: Download: ippicv_2019_win_intel64_20180723_general.zip -- found Intel IPP (ICV version): 2019.0.0 [2019.0.0 Gold] -- at: 
D:/projects/cpp/opencv-4.1.0/cmake-build-release/3rdparty/ippicv/ippicv_win/icv -- found Intel IPP Integration Wrappers sources: 2019.0.0 -- at: D:/projects/cpp/opencv-4.1.0/cmake-build-release/3rdparty/ippicv/ippicv_win/iw -- CUDA detected: 9.1 -- CUDA NVCC target flags: -gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-D_FORCE_INLINES -- Could not find OpenBLAS include. Turning OpenBLAS_FOUND off -- Could not find OpenBLAS lib. Turning OpenBLAS_FOUND off -- Looking for sgemm_ -- Looking for sgemm_ - not found -- Looking for pthread.h -- Looking for pthread.h - not found -- Found Threads: TRUE -- Could NOT find BLAS (missing: BLAS_LIBRARIES) -- LAPACK requires BLAS -- A library with LAPACK API not found. Please specify library location. -- Found apache ant: C:/Program Files/apache-ant-1.9.11/bin/ant.bat (1.9.11) -- Found JNI: C:/Users/wangxiaoming/AppData/Local/JetBrains/Toolbox/apps/CLion/ch-0/192.5728.100/jbr/lib/jawt.lib -- VTK is not found. 
Please set -DVTK_DIR in CMake to VTK build directory, or to VTK install subdirectory with VTKConfig.cmake file -- ADE: Download: v0.1.1d.zip -- OpenCV Python: during development append to PYTHONPATH: D:/projects/cpp/opencv-4.1.0/cmake-build-release/python_loader -- Could NOT find PkgConfig (missing: PKG_CONFIG_EXECUTABLE) -- FFMPEG: Download: opencv_ffmpeg.dll -- FFMPEG: Download: opencv_ffmpeg_64.dll -- FFMPEG: Download: ffmpeg_version.cmake -- Looking for mfapi.h -- Looking for mfapi.h - found -- Looking for d3d11_4.h -- Looking for d3d11_4.h - not found -- Caffe: NO -- Protobuf: NO -- Glog: NO -- freetype2: NO -- harfbuzz: NO -- Module opencv_ovis disabled because OGRE3D was not found -- No preference for use of exported gflags CMake configuration set, and no hints for include/library directories provided. Defaulting to preferring an installed/exported gflags CMake configuration if available. -- Failed to find installed gflags CMake configuration, searching for gflags build directories exported with CMake. -- Failed to find gflags - Failed to find an installed/exported CMake configuration for gflags, will perform search for installed gflags components. -- Failed to find gflags - Could not find gflags include directory, set GFLAGS_INCLUDE_DIR to directory containing gflags/gflags.h -- Failed to find glog - Could not find glog include directory, set GLOG_INCLUDE_DIR to directory containing glog/logging.h -- Module opencv_sfm disabled because the following dependencies are not found: Eigen Glog/Gflags -- Processing WORLD modules... -- module opencv_cudev... -- module opencv_core... -- module opencv_cudaarithm... -- module opencv_flann... -- module opencv_hdf... -- module opencv_imgproc... -- module opencv_ml... -- module opencv_phase_unwrapping... -- module opencv_plot... -- module opencv_quality... -- module opencv_reg... -- module opencv_surface_matching... -- module opencv_cudafilters... -- module opencv_cudaimgproc... -- module opencv_cudawarping... 
-- module opencv_dnn... -- module opencv_fuzzy... -- module opencv_gapi... -- module opencv_hfs... -- module opencv_imgcodecs... -- module opencv_photo... -- module opencv_videoio... -- module opencv_xphoto... -- module opencv_cudacodec... -- module opencv_highgui... -- module opencv_bioinspired... -- module opencv_dnn_objdetect... -- module opencv_features2d... -- module opencv_line_descriptor... -- module opencv_saliency... -- module opencv_text... -- Tesseract: NO -- module opencv_calib3d... -- module opencv_ccalib... -- module opencv_cudafeatures2d... -- module opencv_cudastereo... -- module opencv_datasets... -- module opencv_objdetect... -- module opencv_rgbd... -- module opencv_shape... -- module opencv_structured_light... -- module opencv_video... -- module opencv_xfeatures2d... -- xfeatures2d/boostdesc: Download: boostdesc_bgm.i -- xfeatures2d/boostdesc: Download: boostdesc_bgm_bi.i -- xfeatures2d/boostdesc: Download: boostdesc_bgm_hd.i -- xfeatures2d/boostdesc: Download: boostdesc_binboost_064.i -- xfeatures2d/boostdesc: Download: boostdesc_binboost_128.i -- xfeatures2d/boostdesc: Download: boostdesc_binboost_256.i -- xfeatures2d/boostdesc: Download: boostdesc_lbgm.i -- xfeatures2d/vgg: Download: vgg_generated_48.i -- xfeatures2d/vgg: Download: vgg_generated_64.i -- xfeatures2d/vgg: Download: vgg_generated_80.i -- xfeatures2d/vgg: Download: vgg_generated_120.i -- module opencv_ximgproc... -- module opencv_xobjdetect... -- module opencv_aruco... -- module opencv_bgsegm... -- module opencv_cudabgsegm... -- module opencv_cudalegacy... -- module opencv_cudaobjdetect... -- module opencv_dpm... -- module opencv_face... -- data: Download: face_landmark_model.dat -- module opencv_optflow... -- module opencv_stitching... -- module opencv_tracking... -- module opencv_cudaoptflow... -- module opencv_stereo... -- module opencv_superres... -- module opencv_videostab... -- Processing WORLD modules... 
DONE -- Excluding from source files list: modules/imgproc/src/sumpixels.avx512_skx.cpp -- Excluding from source files list: <BUILD>/modules/world/layers/layers_common.avx512_skx.cpp -- -- General configuration for OpenCV 4.1.0 ===================================== -- Version control: unknown -- -- Extra modules: -- Location (extra): D:/projects/cpp/opencv_contrib-4.1.0/modules -- Version control (extra): unknown -- -- Platform: -- Timestamp: 2019-08-16T09:50:10Z -- Host: Windows 10.0.17763 AMD64 -- CMake: 3.14.5 -- CMake generator: NMake Makefiles -- CMake build tool: nmake -- MSVC: 1900 -- Configuration: Release -- -- CPU/HW features: -- Baseline: SSE SSE2 SSE3 -- requested: SSE3 -- required: SSE SSE2 -- Dispatched code generation: SSE4_1 SSE4_2 FP16 AVX AVX2 -- requested: SSE4_1 SSE4_2 AVX FP16 AVX2 AVX512_SKX -- SSE4_1 (13 files): + SSSE3 SSE4_1 -- SSE4_2 (1 files): + SSSE3 SSE4_1 POPCNT SSE4_2 -- FP16 (0 files): + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 AVX -- AVX (4 files): + SSSE3 SSE4_1 POPCNT SSE4_2 AVX -- AVX2 (27 files): + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 FMA3 AVX AVX2 -- -- C/C++: -- Built as dynamic libs?: YES -- C++ Compiler: C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/amd64/cl.exe (ver 19.0.24215.1) -- C++ flags (Release): /DWIN32 /D_WINDOWS /W4 /GR /D _CRT_SECURE_NO_DEPRECATE /D _CRT_NONSTDC_NO_DEPRECATE /D _SCL_SECURE_NO_WARNINGS /Gy /bigobj /Oi /FS /EHa /wd4127 /wd4251 /wd4324 /wd4275 /wd4512 /wd4589 /MP8 /MD /O2 /Ob2 /DNDEBUG -- C++ flags (Debug): /DWIN32 /D_WINDOWS /W4 /GR /D _CRT_SECURE_NO_DEPRECATE /D _CRT_NONSTDC_NO_DEPRECATE /D _SCL_SECURE_NO_WARNINGS /Gy /bigobj /Oi /FS /EHa /wd4127 /wd4251 /wd4324 /wd4275 /wd4512 /wd4589 /MP8 /MDd /Zi /Ob0 /Od /RTC1 -- C Compiler: C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/amd64/cl.exe -- C flags (Release): /DWIN32 /D_WINDOWS /W3 /D _CRT_SECURE_NO_DEPRECATE /D _CRT_NONSTDC_NO_DEPRECATE /D _SCL_SECURE_NO_WARNINGS /Gy /bigobj /Oi /FS /MP8 /MD /O2 /Ob2 /DNDEBUG -- C flags (Debug): /DWIN32 
/D_WINDOWS /W3 /D _CRT_SECURE_NO_DEPRECATE /D _CRT_NONSTDC_NO_DEPRECATE /D _SCL_SECURE_NO_WARNINGS /Gy /bigobj /Oi /FS /MP8 /MDd /Zi /Ob0 /Od /RTC1 -- Linker flags (Release): /machine:x64 /INCREMENTAL:NO -- Linker flags (Debug): /machine:x64 /debug /INCREMENTAL -- ccache: NO -- Precompiled headers: NO -- Extra dependencies: cudart_static.lib nppc.lib nppial.lib nppicc.lib nppicom.lib nppidei.lib nppif.lib nppig.lib nppim.lib nppist.lib nppisu.lib nppitc.lib npps.lib cublas.lib cufft.lib -LIBPATH:C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.1/lib/x64 -- 3rdparty dependencies: -- -- OpenCV modules: -- To be built: aruco bgsegm bioinspired calib3d ccalib core cudaarithm cudabgsegm cudacodec cudafeatures2d cudafilters cudaimgproc cudalegacy cudaobjdetect cudaoptflow cudastereo cudawarping cudev datasets dnn dnn_objdetect dpm face features2d flann fuzzy gapi hdf hfs highgui img_hash imgcodecs imgproc line_descriptor ml objdetect optflow phase_unwrapping photo plot quality reg rgbd saliency shape stereo stitching structured_light superres surface_matching text tracking video videoio videostab world xfeatures2d ximgproc xobjdetect xphoto -- Disabled: python2 -- Disabled by dependency: - -- Unavailable: cnn_3dobj cvv freetype java js matlab ovis python3 sfm ts viz -- Applications: apps -- Documentation: NO -- Non-free algorithms: YES -- -- Windows RT support: NO -- -- GUI: -- Win32 UI: YES -- VTK support: NO -- -- Media I/O: -- ZLib: build (ver 1.2.11) -- JPEG: build-libjpeg-turbo (ver 2.0.2-62) -- WEBP: build (ver encoder: 0x020e) -- PNG: build (ver 1.6.36) -- TIFF: build (ver 42 - 4.0.10) -- JPEG 2000: build (ver 1.900.1) -- OpenEXR: build (ver 1.7.1) -- HDR: YES -- SUNRASTER: YES -- PXM: YES -- PFM: YES -- -- Video I/O: -- DC1394: NO -- FFMPEG: YES (prebuilt binaries) -- avcodec: YES (58.35.100) -- avformat: YES (58.20.100) -- avutil: YES (56.22.100) -- swscale: YES (5.3.100) -- avresample: YES (4.0.0) -- GStreamer: NO -- DirectShow: YES -- Media Foundation: 
YES -- DXVA: NO -- -- Parallel framework: Concurrency -- -- Trace: YES (with Intel ITT) -- -- Other third-party libraries: -- Intel IPP: 2019.0.0 Gold [2019.0.0] -- at: D:/projects/cpp/opencv-4.1.0/cmake-build-release/3rdparty/ippicv/ippicv_win/icv -- Intel IPP IW: sources (2019.0.0) -- at: D:/projects/cpp/opencv-4.1.0/cmake-build-release/3rdparty/ippicv/ippicv_win/iw -- Lapack: NO -- Eigen: NO -- Custom HAL: NO -- Protobuf: build (3.5.1) -- -- NVIDIA CUDA: YES (ver 9.1, CUFFT CUBLAS NVCUVID) -- NVIDIA GPU arch: 30 35 37 50 52 60 61 70 -- NVIDIA PTX archs: -- -- OpenCL: YES (NVD3D11) -- Include path: D:/projects/cpp/opencv-4.1.0/3rdparty/include/opencl/1.2 -- Link libraries: Dynamic load -- -- Python (for build): C:/Python27/python.exe -- -- Java: -- ant: C:/Program Files/apache-ant-1.9.11/bin/ant.bat (ver 1.9.11) -- JNI: C:/Users/wangxiaoming/AppData/Local/JetBrains/Toolbox/apps/CLion/ch-0/192.5728.100/jbr/include C:/Users/wangxiaoming/AppData/Local/JetBrains/Toolbox/apps/CLion/ch-0/192.5728.100/jbr/include/win32 C:/Users/wangxiaoming/AppData/Local/JetBrains/Toolbox/apps/CLion/ch-0/192.5728.100/jbr/include -- Java wrappers: NO -- Java tests: NO -- -- Install to: D:/projects/cpp/opencv-4.1.0/cmake-build-release/install -- ----------------------------------------------------------------- -- -- Configuring done -- Generating done -- Build files have been written to: D:/projects/cpp/opencv-4.1.0/cmake-build-release ``` > NMake output ``` [ 2%] Built target zlib [ 5%] Built target libjpeg-turbo [ 8%] Built target libtiff [ 16%] Built target libwebp [ 19%] Built target libjasper [ 20%] Built target libpng [ 25%] Built target IlmImf [ 27%] Built target ippiw [ 32%] Built target libprotobuf [ 32%] Built target quirc [ 32%] Built target ittnotify [ 33%] Built target ade [ 33%] Built target opencv_videoio_plugins Scanning dependencies of target opencv_world [ 33%] Building CXX object modules/world/CMakeFiles/opencv_world.dir/__/core/src/system.cpp.obj system.cpp [ 33%] 
Linking CXX shared library ..\..\bin\opencv_world410.dll LINK: command "C:\PROGRA~2\MICROS~1.0\VC\bin\amd64\link.exe /nologo @CMakeFiles\opencv_world.dir\objects1.rsp /out:..\..\bin\opencv_world410.dll /implib:..\..\lib\opencv_world410.lib /pdb:D:\projects\cpp\opencv-4.1.0\cmake-build-release\bin\opencv_world410.pdb /dll /version:4.1 /NODEFAULTLIB:atlthunk.lib /NODEFAULTLIB:atlsd.lib /NODEFAULTLIB:libcmt.lib /DEBUG /machine:x64 /INCREMENTAL:NO -LIBPATH:C:\PROGRA~1\NVIDIA~2\CUDA\v9.1\lib\x64 ..\..\3rdparty\lib\ippiw.lib ..\..\3rdparty\ippicv\ippicv_win\icv\lib\intel64\ippicvmt.lib cudart_static.lib nppc.lib nppial.lib nppicc.lib nppicom.lib nppidei.lib nppif.lib nppig.lib nppim.lib nppist.lib nppisu.lib nppitc.lib npps.lib cublas.lib cufft.lib -LIBPATH:C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.1/lib/x64 cublas.lib cufft.lib ..\..\3rdparty\lib\libprotobuf.lib ..\..\lib\ade.lib ..\..\3rdparty\lib\zlib.lib ..\..\3rdparty\lib\libjpeg-turbo.lib ..\..\3rdparty\lib\libwebp.lib ..\..\3rdparty\lib\libpng.lib ..\..\3rdparty\lib\libtiff.lib ..\..\3rdparty\lib\libjasper.lib ..\..\3rdparty\lib\IlmImf.lib C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.1\lib\x64\cuda.lib C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.1\lib\x64\nvcuvid.lib comctl32.lib gdi32.lib ole32.lib setupapi.lib ws2_32.lib ..\..\3rdparty\lib\zlib.lib ..\..\3rdparty\lib\ittnotify.lib C:\Program Files\OSGeo4W64\lib\hdf5.lib ..\..\3rdparty\lib\quirc.lib cudart_static.lib nppc.lib nppial.lib nppicc.lib nppicom.lib nppidei.lib nppif.lib nppig.lib nppim.lib nppist.lib nppisu.lib nppitc.lib npps.lib cublas.lib cufft.lib C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.1\lib\x64\cuda.lib C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.1\lib\x64\nvcuvid.lib comctl32.lib gdi32.lib ole32.lib setupapi.lib ws2_32.lib C:\Program Files\OSGeo4W64\lib\hdf5.lib kernel32.lib user32.lib gdi32.lib winspool.lib shell32.lib ole32.lib oleaut32.lib uuid.lib comdlg32.lib advapi32.lib /MANIFEST 
/MANIFESTFILE:..\..\bin\opencv_world410.dll.manifest" failed (exit code 1181) with the following output: LINK : fatal error LNK1181: cannot open input file "Files/NVIDIA.obj" NMAKE : fatal error U1077: "C:\Users\wangxiaoming\AppData\Local\JetBrains\Toolbox\apps\CLion\ch-0\192.5728.100\bin\cmake\win\bin\cmake.exe": return code "0xffffffff" Stop. NMAKE : fatal error U1077: ""C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\amd64\nmake.exe"": return code "0x2" Stop. NMAKE : fatal error U1077: ""C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\amd64\nmake.exe"": return code "0x2" Stop. ```
priority: low,category: build/install,category: gpu/cuda (contrib)
low
Critical
481,557,432
flutter
Allow nested hero widgets
## Scenario In my application I have a Markdown editor with three screens: 1. Screen to look at markdown (Screen A) 2. Screen to edit markdown (Screen B) 3. Screen to enlarge pictures in markdown (Screen C) To make the transition more comfortable for the user, I would like to use two independent `Hero` transitions: 1. From the first screen to the second screen to edit markdown (A -> B) Transition goes from a `MarkdownView` widget to a `MarkdownEdit` widget. 2. From the first screen to the third screen to view images in markdown enlarged (A -> C) Transition goes from an `Image` widget inside the `MarkdownView` widget to an `Image` widget on the new screen. ## Problem The second `Hero` widget is inside the first `Hero` widget and causes an assertion error because of PR #28470, which asserts that one should not nest `Hero` widgets. I don't know why this is technically impossible. The nested `Hero` widgets have different `Hero` tags, and on the second and third screen there is only one `Hero` widget each. If needed, I will write a small program to demonstrate the error, but since the error has already been mentioned several times in the given issues, I would like to present a scenario in which it makes sense to nest `Hero` widgets. I would also be happy about alternative solutions. ## Related issues #1398 introduces the problem, but was unfortunately closed without explanation. #28470 adds the assert that causes the error; a few people complain about the change there, but it is not discussed further. ## Logs ``` I/flutter (24765): ══║ EXCEPTION CAUGHT BY WIDGETS LIBRARY β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β• I/flutter (24765): The following assertion was thrown building Hero(tag: image-hero, I/flutter (24765): dirty, state: _HeroState#1595c): I/flutter (24765): A Hero widget cannot be the descendant of another Hero widget. 
I/flutter (24765): 'package:flutter/src/widgets/heroes.dart': I/flutter (24765): Failed assertion: line 359 pos 7: 'context.ancestorWidgetOfExactType(Hero) == null' I/flutter (24765): I/flutter (24765): Either the assertion indicates an error in the framework itself, or we should provide substantially I/flutter (24765): more information in this error message to help you determine and fix the underlying cause. I/flutter (24765): In either case, please report this assertion by filing a bug on GitHub: I/flutter (24765): https://github.com/flutter/flutter/issues/new?template=BUG.md I/flutter (24765): I/flutter (24765): When the exception was thrown, this was the stack: I/flutter (24765): #2 _HeroState.build (package:flutter/src/widgets/heroes.dart:359:7) I/flutter (24765): #3 StatefulElement.build (package:flutter/src/widgets/framework.dart:4012:27) I/flutter (24765): #4 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:3924:15) I/flutter (24765): #5 Element.rebuild (package:flutter/src/widgets/framework.dart:3721:5) I/flutter (24765): #6 ComponentElement._firstBuild (package:flutter/src/widgets/framework.dart:3907:5) I/flutter (24765): #7 StatefulElement._firstBuild (package:flutter/src/widgets/framework.dart:4053:11) I/flutter (24765): #8 ComponentElement.mount (package:flutter/src/widgets/framework.dart:3902:5) I/flutter (24765): #9 Element.inflateWidget (package:flutter/src/widgets/framework.dart:3084:14) ... 
I/flutter (24765): #470 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding.handleDrawFrame (package:flutter/src/scheduler/binding.dart:958:9) I/flutter (24765): #471 _WidgetsFlutterBinding&BindingBase&GestureBinding&ServicesBinding&SchedulerBinding._handleDrawFrame (package:flutter/src/scheduler/binding.dart:874:5) I/flutter (24765): #475 _invoke (dart:ui/hooks.dart:236:10) I/flutter (24765): #476 _drawFrame (dart:ui/hooks.dart:194:3) I/flutter (24765): (elided 5 frames from class _AssertionError and package dart:async) I/flutter (24765): ════════════════════════════════════════════════════════════════════════════════════════════════════ ``` ``` [βœ“] Flutter (Channel unknown, v1.7.8+hotfix.3, on Mac OS X 10.14.5 18F132, locale en-DE) [βœ“] Android toolchain - develop for Android devices (Android SDK version 28.0.3) [βœ“] Xcode - develop for iOS and macOS (Xcode 10.2) [βœ“] iOS tools - develop for iOS devices [βœ“] Chrome - develop for the web [βœ“] Android Studio (version 3.4) [βœ“] VS Code (version 1.34.0) [βœ“] Connected device (3 available) β€’ No issues found! ```
framework,a: animation,f: material design,f: routes,customer: crowd,c: proposal,P3,team-design,triaged-design
high
Critical
481,567,029
pytorch
Crashes on torch.cuda.memory_allocated(device)
## πŸ› Bug I have two Nvidia cards and after setting in the environment`CUDA_VISIBLE_DEVICES=1`, then running import torch device = torch.device("cuda:1") print(torch.cuda.memory_allocated(device)) I get Traceback (most recent call last): File "bug.py", line 4, in <module> print(torch.cuda.memory_allocated(device)) File "/home/hovnatan/miniconda3/lib/python3.7/site-packages/torch/cuda/__init__.py", line 461, in memory_allocated return torch._C._cuda_memoryAllocated(device) RuntimeError: 0 <= device && device < device_num INTERNAL ASSERT FAILED at /opt/conda/conda-bld/pytorch_1565272271120/work/c10/cuda/CUDACachingAllocator.cpp:667, please report a bug to PyTorch. Invalid device argument. ## Environment Collecting environment information... PyTorch version: 1.2.0 Is debug build: No CUDA used to build PyTorch: 10.0.130 OS: Ubuntu 16.04.6 LTS GCC version: (Ubuntu 8.3.0-16ubuntu3~16.04) 8.3.0 CMake version: version 3.15.1 Python version: 3.7 Is CUDA available: Yes CUDA runtime version: 7.5.17 GPU models and configuration: GPU 0: GeForce RTX 2080 GPU 1: GeForce RTX 2080 Nvidia driver version: 418.56 cuDNN version: Could not collect Versions of relevant libraries: [pip] numpy==1.17.0 [pip] torch==1.2.0 [pip] torchvision==0.4.0a0+6b959ee [conda] mkl 2019.4 243 [conda] pytorch 1.2.0 py3.7_cuda10.0.130_cudnn7.6.2_0 pytorch [conda] torchvision 0.4.0 py37_cu100 pytorch
module: error checking,triaged
low
Critical
481,600,401
rust
Remove repr(simd) attribute and use a lang-item instead
After #63531, we should, at some point, expose a `#[lang = "packed_simd"] struct Simd<T, const N: usize>([T; N]);` from libcore that we handle specially in the compiler instead of using a `#[repr(simd)]` attribute for it.
C-cleanup,T-compiler,A-SIMD
low
Minor
481,615,712
godot
Android: copy/paste menu not available for LineEdit/TextEdit
<!-- Please search existing issues for potential duplicates before filing yours: https://github.com/godotengine/godot/issues?q=is%3Aissue --> **Godot version:** <!-- Specify commit hash if non-official. --> 3.1.1 stable flathub **OS/device including version:** <!-- Specify GPU model and drivers if graphics-related. --> Linux (Ubuntu 18.04 LTS) + Android JDK: jdk-8u222-ojdkbuild-linux-x64 **Issue description:** <!-- What happened, and what was expected. --> Normal Android text input has a context menu, shown after a long press on the text field. That menu allows copying or cutting the text (if text is selected) or pasting previously copied text (if an empty text input is selected). An Android application exported by Godot doesn't show that context menu. **Steps to reproduce:** **Minimal reproduction project:** <!-- Recommended as it greatly speeds up debugging. Drag and drop a zip archive to upload it. -->
bug,platform:android,topic:porting,topic:gui
low
Critical
481,637,075
godot
Check texture dimension and warn when exporting for mobile/web platforms
<!-- Please search existing issues for potential duplicates before filing yours: https://github.com/godotengine/godot/issues?q=is%3Aissue --> **Godot version:** <!-- Specify commit hash if non-official. --> 3.x **OS/device including version:** <!-- Specify GPU model and drivers if graphics-related. --> iOS and Android **Issue description:** <!-- What happened, and what was expected. --> It would be better to check texture dimensions when exporting for mobile and warn the user about textures that may cause problems there. **Steps to reproduce:** **Minimal reproduction project:** <!-- Recommended as it greatly speeds up debugging. Drag and drop a zip archive to upload it. -->
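To make the request concrete, here is a minimal sketch of the kind of export-time check that could emit these warnings. The specific criteria (power-of-two dimensions for GLES2 targets and a maximum size) are assumptions about what matters on mobile/web, not something Godot has committed to:

```python
def texture_export_warnings(width, height, max_size=4096):
    # Hypothetical export-time check; the thresholds are illustrative.
    def is_pot(n):
        return n > 0 and (n & (n - 1)) == 0

    warnings = []
    if not (is_pot(width) and is_pot(height)):
        warnings.append("non-power-of-two texture: mipmaps/repeat "
                        "may be unavailable on GLES2 mobile/web targets")
    if width > max_size or height > max_size:
        warnings.append(f"texture exceeds common mobile limit of {max_size}px")
    return warnings
```

The exporter could print each returned string next to the offending resource path during export.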
enhancement,platform:web,platform:ios,platform:android,topic:editor,topic:export
low
Critical
481,661,729
pytorch
[Tensorboard] Write summaries to S3 or GCS bucket
## πŸš€ Feature When creating a SummaryWriter, specifying a path in an S3 or GCS bucket should write directly to the bucket instead of the local filesystem. ## Motivation In both tensorflow and tensorboardX you can specify s3:// or gs:// paths in your logdir, which greatly simplifies distributed training and monitoring. You can also launch tensorboard directly from your local machine, or from a notebook, pointing straight at the bucket, which means there is no need to launch a machine just to share results: the URL of the results inside the bucket is enough. ## Additional context tensorboardX implementation: https://github.com/lanpa/tensorboardX/blob/master/tensorboardX/record_writer.py#L57
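The tensorboardX approach linked above boils down to dispatching on the URL scheme of the logdir. A self-contained sketch of that idea (names are illustrative; the real tensorboardX registry and its S3/GS writers differ):

```python
_WRITER_FACTORIES = {}

def register_writer_factory(scheme, factory):
    # Map a URL scheme ("s3", "gs", ...) to a writer constructor.
    _WRITER_FACTORIES[scheme] = factory

def open_record_writer(logdir):
    # Choose a writer based on the logdir's scheme, falling back
    # to the local filesystem when there is no scheme.
    if "://" in logdir:
        scheme, _, path = logdir.partition("://")
        if scheme in _WRITER_FACTORIES:
            return _WRITER_FACTORIES[scheme](path)
    return ("local", logdir)

# Stand-in factories; a real implementation would wrap boto3 / GCS clients.
register_writer_factory("s3", lambda path: ("s3", path))
register_writer_factory("gs", lambda path: ("gs", path))
```

With this layout, supporting a new backend only requires registering one more factory, which is why the scheme-dispatch design is attractive here.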
feature,triaged,module: tensorboard
medium
Major
481,669,099
storybook
More control over sidebar content
**Is your feature request related to a problem? Please describe.** One of my stories has a lot of text about the project and is broken up into several parts (very minimal on code). Because they are not Stories in the traditional sense, they are not indexed in the sidebar where it would be ideal. **Describe the solution you'd like** With `addon-docs` enabled, perhaps allow indexing to occur on text groupings with `<Article>` tags just like we do for `<Story>`, like so: ``` <Article name="History"> Lorem ipsum dolor sit amet... </Article> <Article name="Setup"> Lorem ipsum dolor sit amet... </Article> ``` As we aim to consolidate documentation with code seamlessly into one view I think it would be nice to index documentation to the sidebar equally. **Describe alternatives you've considered** I tried wrapping `<Story>` around the text but Storybook didn't like that. **Are you able to assist bring the feature to reality?** No; wish I could!
feature request,addon: docs
low
Critical
481,671,102
pytorch
Benchmark cudnn version of grid sampler
As discussed in #23923, we should benchmark the perf difference between the cudnn version and the native version of the grid sampler. The cudnn version has very limited support; we should remove it if the perf doesn't differ much. cc @csarofeen @ptrblck @VitalyFedyunin @ngimel
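As a starting point for that comparison, a generic timing harness like the following could be used (a sketch only; for the CUDA kernels in question you would additionally need `torch.cuda.synchronize()` around the timed region to get meaningful numbers):

```python
import time
import statistics

def benchmark(fn, *args, warmup=3, iters=20):
    # Median wall-clock time of fn(*args) after a short warmup,
    # to reduce noise from caching and one-time setup costs.
    for _ in range(warmup):
        fn(*args)
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        fn(*args)
        samples.append(time.perf_counter() - t0)
    return statistics.median(samples)
```

One would then compare something like `benchmark(cudnn_grid_sample, x, grid)` against `benchmark(native_grid_sample, x, grid)` (both function names hypothetical) across the input shapes cudnn actually supports.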
module: performance,module: cudnn,triaged
low
Minor
481,673,999
rust
rustdoc: Poor CPU usage on Windows during thousands of doctests
When running the doctests for the `core` crate on my 28-core system, the system unfortunately never really gets above 40% CPU usage, yet the doctests being compiled are embarrassingly parallel and should result in full usage of the available cores. For now this can be reproduced with `./x.py test src/libcore --doc` and watching CPU usage via `python src/ci/cpu-usage-over-time.py`. One suspicion I have is that rustdoc has always used in-process compilation for all the doctests, and there may be an excessive amount of synchronization in LLVM or rustc which prevents all CPU cores from being used. I'm not sure if this is the cause though.
O-windows,T-rustdoc,I-compiletime,C-bug,A-doctests
medium
Major
481,674,140
godot
Basis.get_euler() can return pitch outside of [-Ο€/2, Ο€/2] range
<!-- Please search existing issues for potential duplicates before filing yours: https://github.com/godotengine/godot/issues?q=is%3Aissue --> **Godot version:** v3.2.dev.custom_build.188a10df8 de8ce3e **OS/device including version:** Linux Mint 19.2 x86_64 **Issue description:** Euler angles are supposed to be returned in the [-Ο€, Ο€] range for yaw and roll, and the [-Ο€/2, Ο€/2] range for pitch. However, when yaw and roll are both zero, pitch is returned in the [-Ο€, Ο€] range. **Steps to reproduce:** * Write a script that can rotate an object. * Print the angles returned by basis.get_euler() **Minimal reproduction project:** [EulerTest.zip](https://github.com/godotengine/godot/files/3510055/EulerTest.zip) This project contains two Aircraft nodes, which are identical except for their overall orientation, and their scripts, which print either "Aircraft 1" or "Aircraft 2". You can rotate both planes with the arrow keys and page up/down keys, and you can reset them by pressing R. If you pitch up or down, you will notice that pitch values will differ when passing Ο€/2 (90Β°). Any amount of rotation along roll and yaw will make the incorrect pitch value jump to the correct value. Aircraft 2 is for demonstration purposes; it is rotated such that its axes follow the XYZ convention the get_euler() function is based upon: X forward, Y to the right, Z down. When rotated back to the same orientation as Aircraft 1, pitch is returned in the expected range. This might be related to the rotation not being exactly 90Β°, though. The pitch issue might come from the rotation back to Godot's Y-up orientation.
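For reference, the two Euler triples describe the same rotation via the standard Tait-Bryan identity (yaw, pitch, roll) ~ (yaw + Ο€, Ο€ βˆ’ pitch, roll + Ο€), so an out-of-range pitch can always be folded back into [-Ο€/2, Ο€/2]. A small sketch of that normalization (my own illustration, not Godot's `Basis` code):

```python
import math

def wrap_pi(a):
    # Wrap an angle to [-pi, pi).
    return a - 2.0 * math.pi * math.floor((a + math.pi) / (2.0 * math.pi))

def normalize_euler(yaw, pitch, roll):
    # Fold pitch into [-pi/2, pi/2] using the equivalent-rotation
    # identity (y, p, r) == (y + pi, pi - p, r + pi).
    if abs(wrap_pi(pitch)) > math.pi / 2:
        yaw, pitch, roll = yaw + math.pi, math.pi - pitch, roll + math.pi
    return wrap_pi(yaw), wrap_pi(pitch), wrap_pi(roll)
```

For example, a reported triple of (0, 2.0, 0) would normalize to roughly (Ο€, Ο€ βˆ’ 2.0, Ο€), matching what get_euler() returns once any yaw/roll rotation is applied.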
bug,discussion,topic:core,confirmed
low
Minor
481,704,441
godot
Warn user when saving Signal with unsavable binding
<!-- Please search existing issues for potential duplicates before filing yours: https://github.com/godotengine/godot/issues?q=is%3Aissue --> **Godot version:** <!-- Specify commit hash if non-official. --> 3.1 This is essentially a feature request. When creating signals in editor tools, plugins, and tool scripts, you can inadvertently create connections with binding arguments that cannot be saved to a scene file, for example node references (which are often bound so you can easily identify which node is emitting a signal). I think it would be a good idea to print a warning when such a signal is being saved, so that there is no unexpected runtime behavior.
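A sketch of what such a save-time check might look like, in Python for illustration (the set of "savable" types and the function names are my assumptions, not Godot's actual serialization rules):

```python
# Stand-ins for Variant types a .tscn file can store.
SAVABLE_TYPES = (bool, int, float, str)

def bind_warnings(signal_name, binds):
    # Warn about bound arguments that would not survive being saved
    # to a scene file (e.g. live node references).
    warnings = []
    for i, value in enumerate(binds):
        if not isinstance(value, SAVABLE_TYPES):
            warnings.append(
                f"signal '{signal_name}': bind #{i} of type "
                f"{type(value).__name__} cannot be saved to the scene")
    return warnings
```

The scene saver would run this over each connection's binds and print the returned messages instead of silently dropping the values.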
enhancement,topic:editor
low
Minor
481,705,509
pytorch
Port CPU_tensor_apply functions to TensorIterator (umbrella issue)
- [x] Implement `cpu_serial_kernel` and `cpu_serial_kernel_vec` for TensorIterator #24472 - [ ] #24479 Migrate CPU_tensor_apply to TensorIterator in aten/src/ATen/native/Normalization.cpp:173 https://github.com/pytorch/pytorch/blob/eabfca3577ea85df2d68bdf747c62dd4a5fff5cf/aten/src/ATen/native/Normalization.cpp#L173 - [ ] #24480 Migrate CPU_tensor_apply to TensorIterator in aten/src/ATen/native/Normalization.cpp:205 https://github.com/pytorch/pytorch/blob/eabfca3577ea85df2d68bdf747c62dd4a5fff5cf/aten/src/ATen/native/Normalization.cpp#L205 - [ ] #24481 Migrate CPU_tensor_apply to TensorIterator in aten/src/ATen/native/Normalization.cpp:213 https://github.com/pytorch/pytorch/blob/eabfca3577ea85df2d68bdf747c62dd4a5fff5cf/aten/src/ATen/native/Normalization.cpp#L213 - [ ] #24482 Migrate CPU_tensor_apply to TensorIterator in aten/src/ATen/native/Normalization.cpp:284 https://github.com/pytorch/pytorch/blob/eabfca3577ea85df2d68bdf747c62dd4a5fff5cf/aten/src/ATen/native/Normalization.cpp#L284 - [ ] #24483 Migrate CPU_tensor_apply to TensorIterator in aten/src/ATen/native/Normalization.cpp:290 https://github.com/pytorch/pytorch/blob/eabfca3577ea85df2d68bdf747c62dd4a5fff5cf/aten/src/ATen/native/Normalization.cpp#L290 - [ ] #24484 Migrate CPU_tensor_apply to TensorIterator in aten/src/ATen/native/Normalization.cpp:305 https://github.com/pytorch/pytorch/blob/eabfca3577ea85df2d68bdf747c62dd4a5fff5cf/aten/src/ATen/native/Normalization.cpp#L305 - [ ] #24485 Migrate CPU_tensor_apply to TensorIterator in aten/src/ATen/native/Normalization.cpp:310 https://github.com/pytorch/pytorch/blob/eabfca3577ea85df2d68bdf747c62dd4a5fff5cf/aten/src/ATen/native/Normalization.cpp#L310 - [ ] #24486 Migrate CPU_tensor_apply to TensorIterator in aten/src/ATen/native/Normalization.cpp:318 https://github.com/pytorch/pytorch/blob/eabfca3577ea85df2d68bdf747c62dd4a5fff5cf/aten/src/ATen/native/Normalization.cpp#L318 - [ ] #24487 Migrate CPU_tensor_apply to TensorIterator in aten/src/ATen/native/Loss.cpp:80 
https://github.com/pytorch/pytorch/blob/eabfca3577ea85df2d68bdf747c62dd4a5fff5cf/aten/src/ATen/native/Loss.cpp#L80 - [ ] #24488 Migrate CPU_tensor_apply to TensorIterator in aten/src/ATen/native/Distributions.cpp:142 https://github.com/pytorch/pytorch/blob/eabfca3577ea85df2d68bdf747c62dd4a5fff5cf/aten/src/ATen/native/Distributions.cpp#L142 - [ ] #24489 Migrate CPU_tensor_apply to TensorIterator in aten/src/ATen/native/Distributions.cpp:151 https://github.com/pytorch/pytorch/blob/eabfca3577ea85df2d68bdf747c62dd4a5fff5cf/aten/src/ATen/native/Distributions.cpp#L151 - [ ] #24490 Migrate CPU_tensor_apply to TensorIterator in aten/src/ATen/native/Distributions.cpp:176 https://github.com/pytorch/pytorch/blob/eabfca3577ea85df2d68bdf747c62dd4a5fff5cf/aten/src/ATen/native/Distributions.cpp#L176 - [ ] #24491 Migrate CPU_tensor_apply to TensorIterator in aten/src/ATen/native/Distributions.cpp:189 https://github.com/pytorch/pytorch/blob/eabfca3577ea85df2d68bdf747c62dd4a5fff5cf/aten/src/ATen/native/Distributions.cpp#L189 - [ ] #24492 Migrate CPU_tensor_apply to TensorIterator in aten/src/ATen/native/Distributions.cpp:201 https://github.com/pytorch/pytorch/blob/eabfca3577ea85df2d68bdf747c62dd4a5fff5cf/aten/src/ATen/native/Distributions.cpp#L201 - [ ] #24493 Migrate CPU_tensor_apply to TensorIterator in aten/src/ATen/native/Distributions.cpp:220 https://github.com/pytorch/pytorch/blob/eabfca3577ea85df2d68bdf747c62dd4a5fff5cf/aten/src/ATen/native/Distributions.cpp#L220 - [ ] #24494 Migrate CPU_tensor_apply to TensorIterator in aten/src/ATen/native/Distributions.cpp:235 https://github.com/pytorch/pytorch/blob/eabfca3577ea85df2d68bdf747c62dd4a5fff5cf/aten/src/ATen/native/Distributions.cpp#L235 - [ ] #24495 Migrate CPU_tensor_apply to TensorIterator in aten/src/ATen/native/Distributions.cpp:266 https://github.com/pytorch/pytorch/blob/eabfca3577ea85df2d68bdf747c62dd4a5fff5cf/aten/src/ATen/native/Distributions.cpp#L266 - [ ] #24496 Migrate CPU_tensor_apply to TensorIterator in 
aten/src/ATen/native/Distributions.cpp:286 https://github.com/pytorch/pytorch/blob/eabfca3577ea85df2d68bdf747c62dd4a5fff5cf/aten/src/ATen/native/Distributions.cpp#L286 - [x] #24497 Migrate CPU_tensor_apply to TensorIterator in aten/src/ATen/native/TensorCompare.cpp:18 https://github.com/pytorch/pytorch/blob/eabfca3577ea85df2d68bdf747c62dd4a5fff5cf/aten/src/ATen/native/TensorCompare.cpp#L18 - [ ] #24498 Migrate CPU_tensor_apply to TensorIterator in aten/src/ATen/native/TensorCompare.cpp:30 https://github.com/pytorch/pytorch/blob/eabfca3577ea85df2d68bdf747c62dd4a5fff5cf/aten/src/ATen/native/TensorCompare.cpp#L30 - [ ] Remove `aten/src/ATen/CPUApplyUtils.h`, `torch/include/ATen/CPUApplyUtils.h` and `aten/src/ATen/test/apply_utils_test.cpp`
triaged,better-engineering,module: CPU_tensor_apply
low
Minor
481,709,252
pytorch
Migrate CPU_tensor_apply to TensorIterator in aten/src/ATen/native/TensorCompare.cpp:30
Migrate CPU_tensor_apply to TensorIterator https://github.com/pytorch/pytorch/blob/eabfca3577ea85df2d68bdf747c62dd4a5fff5cf/aten/src/ATen/native/TensorCompare.cpp#L30 How to use TensorIterator: https://github.com/pytorch/pytorch/wiki/How-to-use-TensorIterator Additional instructions: #24478 Blocked by: #24472
triaged,better-engineering,module: CPU_tensor_apply
low
Minor
481,710,764
pytorch
Failed to compile PyTorch on IBM Power 9 architecture with CUDA 10
I started to build `pytorch` and `libtorch` on an IBM Power 9 machine with RedHat 7.6 and CUDA 10.0 installed. For both `pytorch` and `libtorch` I got an error while building NVCC device objects. I appreciate any help or comments. For `pytorch`, I tried `python setup.py install` and it resulted in an error while building caffe2: ``` [1667/1694] Building NVCC (Device) object modules/detectron/CMakeFiles/c...ir/caffe2_detectron_ops_gpu_generated_sigmoid_cross_entropy_loss_op.cu.o ninja: build stopped: subcommand failed. Traceback (most recent call last): File "tools/build_libtorch.py", line 23, in <module> rerun_cmake=True, cmake_only=False, cmake=CMake()) File "/c/bb04na2a/vol/bigdisk/plx/rl/pytorch/tools/build_pytorch_libs.py", line 64, in build_caffe2 cmake.build(my_env) File "/c/bb04na2a/vol/bigdisk/plx/rl/pytorch/tools/setup_helpers/cmake.py", line 329, in build self.run(build_args, my_env) File "/c/bb04na2a/vol/bigdisk/plx/rl/pytorch/tools/setup_helpers/cmake.py", line 133, in run check_call(command, cwd=self.build_dir, env=env) File "/bigdisk/plx/rl/anaconda3/lib/python3.7/subprocess.py", line 347, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '160']' returned non-zero exit status 1. ``` For `libtorch` I called `python tools/build_libtorch.py` and it similarly resulted in an error while building caffe2: ``` [2633/2662] Building NVCC (Device) object modules/detectron/CMakeFiles/c...tectron_ops_gpu.dir/caffe2_detectron_ops_gpu_generated_sample_as_op.cu.o ninja: build stopped: subcommand failed. 
Traceback (most recent call last): File "setup.py", line 756, in <module> build_deps() File "setup.py", line 321, in build_deps cmake=cmake) File "/c/bb04na2a/vol/bigdisk/plx/rl/pytorch/tools/build_pytorch_libs.py", line 64, in build_caffe2 cmake.build(my_env) File "/c/bb04na2a/vol/bigdisk/plx/rl/pytorch/tools/setup_helpers/cmake.py", line 329, in build self.run(build_args, my_env) File "/c/bb04na2a/vol/bigdisk/plx/rl/pytorch/tools/setup_helpers/cmake.py", line 133, in run check_call(command, cwd=self.build_dir, env=env) File "/bigdisk/plx/rl/anaconda3/lib/python3.7/subprocess.py", line 347, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '160']' returned non-zero exit status 1. (base) machine1> (base) machine1> ls tools/ amd_build build_pytorch_libs.py cwrap generated_dirs.txt __init__.py pyi setup_helpers aten_mirror.sh build_variables.py docker git_add_generated_dirs.sh jit pytorch.version shared autograd clang_format.py download_mnist.py git-pre-commit nnwrap README.md build_libtorch.py clang_tidy.py flake8_hook.py git_reset_generated_dirs.sh __pycache__ run-clang-tidy-in-ci.sh (base) machine1> python tools/build_variables.py Traceback (most recent call last): File "tools/build_variables.py", line 5, in <module> load("@bazel_skylib//lib:new_sets.bzl", "sets") NameError: name 'load' is not defined (base) machine1> python tools/build_pytorch_libs.py Traceback (most recent call last): File "tools/build_pytorch_libs.py", line 6, in <module> from .setup_helpers import escape_path ModuleNotFoundError: No module named '__main__.setup_helpers'; '__main__' is not a package ```
module: build,triaged
low
Critical
481,712,299
flutter
Do macOS engine binaries need LTO?
For Android and iOS we build the release and profile engine binaries with LTO. This adds a significant amount of time to the build process, but results in smaller binaries. For the macOS desktop profile and release engines, the benefit of space saving might not be as pronounced. Additionally, we're a bit under capacity in terms of macOS infra, so skipping LTO at least in the short-term has some benefits. cc @dnfield @chinmaygarde @stuartmorgan
engine,platform-mac,a: desktop,P3,team-macos,triaged-macos
low
Major
481,743,652
youtube-dl
unable to download video from duboku.net
## Checklist - [x] I'm reporting a new site support request - [x] I've verified that I'm running youtube-dl version **2019.08.13** - [x] I've checked that all provided URLs are alive and playable in a browser - [x] I've checked that none of provided URLs violate any copyrights - [x] I've searched the bugtracker for similar site support requests including closed ones ## Example URLs - Single video: https://www.youtube.com/watch?v=BaW_jenozKc - Single video: https://youtu.be/BaW_jenozKc - Playlist: https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc ## Description Unable to download video from https://www.duboku.net/vodplay/459-1-7.html. Please add support for duboku.net.

site-support-request
low
Critical
481,748,074
go
cmd/cover: incorrect coverage for source file generated by x/tools/cmd/goyacc
# What version of Go are you using? go version go1.12.7 windows/amd64 ### What did you do? ~~~ go get github.com/rillig/go-yacc-examples cd $GOPATH/src/github.com/rillig/go-yacc-examples/json go generate go test -test.coverprofile coverage.out ~~~ ### What did you expect to see? The `coverage.out` file contains coverage markers for `json.y`, as that is where the source code comes from, according to the `//line` comments. ### What did you see instead? The `coverage.out` file contains: ~~~ github.com/rillig/go-yacc-examples/json/y.go:25.0,27.0 1 0 ~~~ y.go lines 25 to 27 contain: ~~~ var yyToknames = [...]string{ "$end", "error", ~~~ There is no code to be covered here, and these lines do not form a block at all. Some lines below that, y.go contains: ~~~ //line yaccpar:1 ~~~ The output of `go tool objdump` on the test executable contains: ~~~ TEXT github.com/rillig/go-yacc-examples/json.(*yyParserImpl).Lookahead(SB) yaccpar yaccpar:25 0x586a00 b801000000 MOVL $0x1, AX yaccpar:25 0x586a05 488d0d74341e00 LEAQ something(SB), CX yaccpar:25 0x586a0c f00fc101 LOCK XADDL AX, 0(CX) yaccpar:26 0x586a10 488b442408 MOVQ 0x8(SP), AX yaccpar:26 0x586a15 488b8010010000 MOVQ 0x110(AX), AX yaccpar:26 0x586a1c 4889442410 MOVQ AX, 0x10(SP) yaccpar:26 0x586a21 c3 RET ~~~ This makes me suspect that the file name `yaccpar` is not taken into account when generating the coverage data. See https://youtrack.jetbrains.com/issue/GO-7513
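For reference, the quoted `coverage.out` entry uses Go's cover-profile block format, `file:startline.startcol,endline.endcol numstmts count`. A small parser sketch for such entries (the function and regex names below are ours, not part of the Go toolchain):

```python
# Sketch of a parser for a Go cover-profile block line, as quoted above:
#   "file:startline.startcol,endline.endcol numstmts count"
# The names here are ours, not part of the Go toolchain.
import re

BLOCK = re.compile(
    r"^(?P<file>.+):(?P<sl>\d+)\.(?P<sc>\d+),(?P<el>\d+)\.(?P<ec>\d+)"
    r" (?P<stmts>\d+) (?P<count>\d+)$")

def parse_block(line):
    m = BLOCK.match(line)
    if m is None:
        raise ValueError("not a cover-profile block: %r" % line)
    d = m.groupdict()
    # Everything except the file name is numeric.
    return {k: (v if k == "file" else int(v)) for k, v in d.items()}

block = parse_block("github.com/rillig/go-yacc-examples/json/y.go:25.0,27.0 1 0")
print(block["file"], block["sl"], block["el"], block["stmts"], block["count"])
```

The bug report's point is that the `file` field here names `y.go` rather than the `json.y` source indicated by the `//line` comments.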
NeedsInvestigation
low
Critical
481,761,221
pytorch
[jit] Dict iterator invalidation doesn't match Python
This runs in TorchScript but throws an error in Python ```python @torch.jit.script def fn(): a_dict = {'a': 1, 'b': 2, 'c': 3} sum = 0 for key in a_dict: a_dict[str(a_dict[key])] = 1 sum += a_dict[key] return sum print(fn()) ``` It outputs ``` 6 ``` but should output ``` Traceback (most recent call last): File "../test.py", line 15, in <module> print(fn()) File "../test.py", line 10, in fn for key in a_dict: RuntimeError: dictionary changed size during iteration ``` cc @suo
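The expected behavior can be confirmed in plain CPython, which raises as soon as the dict grows mid-iteration; this is the semantics the TorchScript version above fails to reproduce:

```python
# The expected behavior, in plain CPython: mutating a dict while iterating
# over it raises RuntimeError at the next iteration step. This is the
# semantics the TorchScript version above fails to reproduce.
def fn():
    a_dict = {'a': 1, 'b': 2, 'c': 3}
    total = 0
    for key in a_dict:
        a_dict[str(a_dict[key])] = 1  # inserts a new key mid-iteration
        total += a_dict[key]
    return total

result, error = None, ""
try:
    result = fn()
except RuntimeError as e:
    error = str(e)
print(result, error)
```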
oncall: jit,triaged,jit-backlog
low
Critical
481,772,419
rust
Audit sources of shared state across rustc
Infrastructure for parallel compilation has landed in rustc, but the shared state involved has not been fully documented (in terms of what state exists, invariants, atomicity, lock ordering, etc) or assessed for removal by refactoring. This issue tracks the PR history for parallelization infrastructure, which is being re-audited to seed an initial list of shared state to assess. The initial output of the audit is [this paper doc](https://paper.dropbox.com/doc/rustc-shared-state-audit--Ai__733hU70wACu6fchZrhFMAg-zsOrKjkMZbBYZJWhgXPke), which will ultimately be turned into a set of individual issues. ### PR List - [x] #46709 - [x] #46779 - [x] #47906 - [x] #48586 - [x] #48587 - [x] #48690 - [x] #48691 - [x] #48808 - [x] #48811 - [x] #48904 - [x] #48936 - [x] #49030 - [ ] #49045 - make queries thread-safe - [x] #49349 - [x] #49396 - [x] #49558 - [ ] #49732 - make incremental compilation thread-safe - [x] #49834 - Make SelectionCache and EvaluationCache thread-safe - [ ] #49882 - More thread-safety changes - [ ] #49894 - Rename InternedString to LocalInternedString and introduce a new thread-safe InternedString - [x] #50108 - Make GlobalCtxt thread-safe - [x] #50699 - Blocking Rayon queries - [x] #51383 - Run some stuff in parallel - [x] #51487 - Make more passes incremental - [x] #56614 - Replace LockCell with atomic types - [x] #56946 - Add support for using a jobserver with Rayon - [x] #57065 - Optimize try_mark_green and eliminate the lock on dep node colors - [x] #57232 - Parallelize and optimize parts of HIR map creation - [x] #57308 - Make CompileController thread-safe - [x] #58010 - Move privacy checking later in the pipeline and make some passes run in parallel - [x] #58019 - Combine all builtin late lints and make lint checking parallel - [x] #58679 - Refactor passes and pass execution to be more parallel - [x] #59540 - Use arenas to avoid Lrc in queries #1 - [x] #59545 - Use arenas to avoid Lrc in queries #2 - [x] #59804 - Clean up jobserver integration - 
[x] #59809 - Make trait_methods_not_found use a lock ### Issue creation - [ ] Issues created for individual pieces of state, together with a tracking issue
T-compiler,WG-compiler-performance,A-parallel-queries
low
Minor
481,782,830
flutter
Re-evaluate findBundlePath() because it has been changed to simply return a constant.
Re-evaluate findBundlePath() because it has been changed to simply return a constant. There may still be add-to-app situations where a developer wants to provide their own app bundle path.
platform-android,engine,P2,team-android,triaged-android
low
Major
481,791,720
flutter
AppBarTheme not localized correctly
## Steps to Reproduce **Minimal Reproduction + Workaround:** https://gist.github.com/JonasWanke/40b49e683810641f9f682ddfda28cc2f I am using a customized `AppBarTheme` because I want the background color to be a light grey instead of the primary color. My primary color is dark, hence the inferred white text color is not legible on the new bar. Therefore I manually overwrite the `AppBarTheme.textTheme` to the default `textTheme` of my (light) theme. The problem with this method is that no geometry is set on the new `textTheme` (see screenshot below) because [ThemeData.localize](https://github.com/flutter/flutter/blob/20e59316b8/packages/flutter/lib/src/material/theme_data.dart#L870) (used by [Theme.of](https://github.com/flutter/flutter/blob/20e59316b8/packages/flutter/lib/src/material/theme.dart#L127)) only applies geometry to `primaryTextTheme`, `accentTextTheme` and `textTheme`.
Maybe this could be extended to include something like the following: ``` appBarTheme: baseTheme.appBarTheme.copyWith( textTheme: localTextGeometry.merge(baseTheme.appBarTheme.textTheme) ) ``` *Note:* Though I didn't test it, this problem might also occur with other widgets' themes. ![image](https://user-images.githubusercontent.com/19330937/63198888-756bfd80-c07c-11e9-8632-b2105c537a88.png) ## Logs `flutter analyze`: ``` Analyzing flutter_bug_appbartheme... No issues found! (ran in 13.9s) ``` `flutter doctor -v`: ``` [√] Flutter (Channel master, v1.8.5-pre.117, on Microsoft Windows [Version 10.0.18362.267], locale en-US) β€’ Flutter version 1.8.5-pre.117 at C:\programs\flutter β€’ Framework revision 83a8a575ee (6 days ago), 2019-08-10 12:38:20 -0700 β€’ Engine revision ff49ca1c6e β€’ Dart version 2.5.0 (build 2.5.0-dev.1.0 f29f41f1a5) [√] Android toolchain - develop for Android devices (Android SDK version 28.0.3) β€’ Android SDK at C:\Users\jowan\AppData\Local\Android\sdk β€’ Android NDK location not configured (optional; useful for native profiling support) β€’ Platform android-28, build-tools 28.0.3 β€’ Java binary at: C:\programs\android-studio 3.5-beta.4\jre\bin\java β€’ Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b03) β€’ All Android licenses accepted.
[√] Visual Studio - develop for Windows (Visual Studio Community 2019 16.1.0) β€’ Visual Studio at C:\Program Files (x86)\Microsoft Visual Studio\2019\Community β€’ Visual Studio Community 2019 version 16.1.28917.181 [√] Android Studio (version 3.4) β€’ Android Studio at C:\Program Files\Android\Android Studio β€’ Flutter plugin version 36.1.1 β€’ Dart plugin version 183.6270 β€’ Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01) [√] Android Studio (version 3.5) β€’ Android Studio at C:\programs\android-studio 3.5-beta.4 β€’ Flutter plugin version 36.1.1 β€’ Dart plugin version 183.6270 β€’ Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b03) [!] IntelliJ IDEA Ultimate Edition (version 2019.2) β€’ IntelliJ at C:\Program Files\JetBrains\IntelliJ IDEA 2019.1.1 X Flutter plugin not installed; this adds Flutter specific functionality. X Dart plugin not installed; this adds Dart specific functionality. β€’ For information about installing plugins, see https://flutter.dev/intellij-setup/#installing-the-plugins [√] VS Code, 64-bit edition (version 1.36.1) β€’ VS Code at C:\Program Files\Microsoft VS Code β€’ Flutter extension version 3.3.0 [√] Connected device (2 available) β€’ SM G955F β€’ 9889db373853574a45 β€’ android-arm64 β€’ Android 9 (API 28) β€’ Windows β€’ Windows β€’ windows-x64 β€’ Microsoft Windows [Version 10.0.18362.267] ! Doctor found issues in 1 category. ``` (using VS Code)
framework,f: material design,a: quality,has reproducible steps,P2,found in release: 3.3,found in release: 3.7,team-design,triaged-design
low
Critical
481,810,211
flutter
Expose canvas transform matrix
## Use case dart:ui canvas doesn't expose the current transformation matrix. We can use canvas.translate(), scale(), rotate() to manipulate it but we can't get the resulting matrix after the transformations have been added. Skia exposes it (https://skia.org/user/api/SkCanvas_Reference#SkCanvas_getTotalMatrix), just Flutter doesn't. Why? For hit testing, for instance. There is hit testing at the canvas level, but if, say, I draw many objects to my canvas and later need to learn where the user has tapped, the normal way would be to ask for the actual matrix after drawing each object, calling a MapRect(bounds) so that its bounds are mapped according to the transform, store the resulting rect and use that later for hit testing. There is a workaround: duplicate all transform manipulation functions in your code and in addition to passing them on to the canvas, also calculate the same transformation for yourself and use that later. Not really nice, requires implementing all matrix operations and stack support for save/restore while the real solution wouldn't mean any extra work for Flutter at all, just allowing a single function to get through to the engine below. ## Proposal Add this getTotalMatrix() function.
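The workaround described above can be sketched as a small matrix tracker that mirrors the canvas calls; all names here are hypothetical and none of this touches Flutter's actual Canvas API:

```python
# A minimal sketch of the workaround described above: mirror the canvas's
# transform calls in your own matrix stack so object bounds can be mapped
# for hit testing later. All names are hypothetical; this does not touch
# Flutter's Canvas API.
import math

class TransformTracker:
    def __init__(self):
        self.m = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # 3x3 affine, identity
        self._stack = []

    def _mul(self, b):
        # Post-multiply, matching canvas semantics: later transforms apply
        # to geometry drawn afterwards.
        a = self.m
        self.m = [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
                  for i in range(3)]

    def translate(self, dx, dy):
        self._mul([[1, 0, dx], [0, 1, dy], [0, 0, 1]])

    def scale(self, sx, sy):
        self._mul([[sx, 0, 0], [0, sy, 0], [0, 0, 1]])

    def rotate(self, radians):
        c, s = math.cos(radians), math.sin(radians)
        self._mul([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    def save(self):
        self._stack.append([row[:] for row in self.m])

    def restore(self):
        self.m = self._stack.pop()

    def map_point(self, x, y):
        m = self.m
        return (m[0][0] * x + m[0][1] * y + m[0][2],
                m[1][0] * x + m[1][1] * y + m[1][2])

t = TransformTracker()
t.save()
t.translate(10, 5)
t.scale(2, 2)
mapped = t.map_point(1, 1)    # where (1, 1) lands under the current transform
t.restore()
unmapped = t.map_point(1, 1)  # back to identity
print(mapped, unmapped)
```

With a `getTotalMatrix()` exposed by the engine, all of this bookkeeping would collapse to a single call.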
c: new feature,engine,P3,team-engine,triaged-engine
low
Major
481,812,278
rust
missing symbol issue in some environments (rustdoc bug?)
I have encountered an issue whereby I get a missing symbol error, specifically in Ubuntu Xenial and Bionic Beaver environments on travis, but not Trusty. I believe it is either a rustdoc bug, or an issue with differences in system libs/utilities it uses. It relates specifically only to a symbol guarded by a cargo feature flag. I have [a set of crates](https://github.com/jnqnfe/pulse-binding-rust) for binding to the PulseAudio (PA) system libraries. The crates all have a set of features relating to system library version compatibility. The crates are compatible with PA versions 8-12. PA v12 added a single new function symbol, use of which is controlled via a cargo feature. PA did not add any new functions after v8 until v12. This issue has suddenly cropped up since travis has shifted their default environment over from Trusty to Xenial... I want to cut through a lot of the mess that resulted in my arriving here, issues with trying to use workspaces+features which I now know are not supported and such, you can read discussion [here](https://travis-ci.community/t/failure-due-to-system-libs-now-being-older/4712) if you *really* want to, but for simplicity, please just ignore the mess of travis and workspace changes, and look past the fact that in the below tests we're enabling PA v12 features on PA v4/8/11 systems; the point here is investigating the weirdness of the missing symbol error I've noticed. (And apologies if already reported: there were 90 pages of 'symbol' related issues and I gave up trying to look for an existing report.) The short version of what is relevant is that I believe that parts of my library that are not directly used in a test get optimised out of the tests, so there are no missing-symbol linking errors when the system lib is too old a version; that is why tests passed successfully on Trusty, which only had PA v4.x.
However, for some reason this one specific symbol does not get optimised out (I guess), resulting in a missing symbol error on Xenial/Bionic environments only. This is despite this symbol itself not being involved in any tests... Compare the following travis tests: Trusty (PA v4): https://travis-ci.org/jnqnfe/pulse-binding-rust/jobs/572936139 Xenial (PA v8): https://travis-ci.org/jnqnfe/pulse-binding-rust/jobs/572946063 Bionic (PA v11): https://travis-ci.org/jnqnfe/pulse-binding-rust/jobs/572936129 The only difference is the Ubuntu version selected in the travis config. Trusty passes, while the other two complain about the symbol. All three are using the same version of `rustc` and `cargo`... Only `rustdoc` tests are involved in the failure, with the entire set either being 'ignored' or 'failed'. It passes on Trusty though. edit: typo fixes
A-linkage,O-linux,T-rustdoc,T-compiler,C-bug,E-needs-mcve
low
Critical
481,820,893
rust
inline attribute on async fn doesn't work properly
[Playground link](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=3e8d2c145ac7edf44e6eb84ae65e1390) In the playground, an `async fn` is marked `#[inline(always)]`: ```rust #[inline(always)] pub async fn test() -> i32 { 12345 } ``` However, if you compile it in debug mode (where inlining only happens for `#[inline(always)]` functions), and search for `12345` in the generated assembly, you can see that it is not inlined. Indeed, there are multiple levels of function calls that are not inlined: `run_it` β†’ `GenFuture<T>::poll` β†’ `std::future::set_task_context` β†’ `GenFuture<T>::poll::{closure}` β†’ `playground::test::{{closure}}` That last closure is the generator that contains the actual body of `test`. `#[inline(always)]` *is* taking effect on the post-transformation function `test`, but all that does is initialize the generator struct. As long as async is implemented based on generators, this will be hard to fix. Even if the generator itself were marked `alwaysinline`, that wouldn't affect `GenFuture` or `set_task_context`, both of which are from `libstd`. Related to #62918, since if you want an async fn to be `#[inline(always)]`, you probably also want to get rid of the TLS usage by `set_task_context`.
I-slow,A-attributes,A-codegen,T-compiler,C-bug,A-async-await,AsyncAwait-Triaged,requires-nightly
low
Critical
481,825,955
pytorch
Installs empty directories under Python's sitelibdir
The FreeBSD ports framework complains: ``` Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/caffe2_protos.dir Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/caffe2_pybind11_state.dir/python Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/core/dispatch Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/core/op_registration Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/cpu Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/detail Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/mkl Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/mkldnn Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/quantized/cpu Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/sparse Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/quantized Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/__/aten/src/TH/vector Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/__/aten/src/THNN Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/__/test/cpp/jit Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/__/third_party/miniz-2.0.8 Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/__/torch/csrc/api/src/data/datasets Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/__/torch/csrc/api/src/data/samplers Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/__/torch/csrc/api/src/nn/modules Error: Orphaned: @dir 
%%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/__/torch/csrc/api/src/optim Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/__/torch/csrc/api/src/serialize Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/__/torch/csrc/autograd/functions Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/__/torch/csrc/autograd/generated Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/__/torch/csrc/distributed/rpc Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/fuser/cpu Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/generated Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/passes/utils Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/script Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/testing Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/__/torch/csrc/utils Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/contrib/aten Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/core/nomnigraph/Representations Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/core/nomnigraph/tests Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/db Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/distributed Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/mpi Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/observers Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/onnx Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/operators/experimental/c10/cpu Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/operators/rnn Error: Orphaned: @dir 
%%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/opt Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/perfkernels Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/predictor/emulator Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/quantization/server Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/queue Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/serialize Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/sgd Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/share/contrib/depthwise Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/transforms Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/utils/math Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/CMakeFiles/torch.dir/utils/threadpool Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/aten/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/aten/src/ATen/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/aten/src/ATen/cmake-exports Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/aten/src/ATen/core/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/aten/src/ATen/quantized/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/aten/src/ATen/test/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/aten/src/TH/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/aten/src/THNN/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/contrib/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/contrib/aten/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/contrib/gloo/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/contrib/nccl/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/contrib/opencl/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/contrib/prof/CMakeFiles Error: Orphaned: @dir 
%%PYTHON_SITELIBDIR%%/caffe2/contrib/shm_mutex/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/core/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/core/nomnigraph/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/db/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/distributed/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/ideep/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/image/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/lib_c10d/CMakeFiles/c10d.dir Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/mobile/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/mobile/contrib/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/mobile/contrib/ios/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/mpi/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/observers/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/onnx/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/onnx/torch_ops/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/operators/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/operators/rnn/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/opt/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx.dir Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx2.dir Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx512.dir Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/predictor/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/predictor/emulator/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/proto/CMakeFiles/Caffe2_PROTO.dir Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/python/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/quantization/CMakeFiles Error: Orphaned: @dir 
%%PYTHON_SITELIBDIR%%/caffe2/quantization/server/CMakeFiles/caffe2_dnnlowp_avx2_ops.dir Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/queue/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/serialize/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/sgd/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/share/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/share/contrib/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/share/contrib/depthwise/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/torch/CMakeFiles/torch_python.dir/csrc/api/src/python Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/torch/CMakeFiles/torch_python.dir/csrc/autograd/functions Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/torch/CMakeFiles/torch_python.dir/csrc/autograd/generated Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/torch/CMakeFiles/torch_python.dir/csrc/distributed/c10d Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/torch/CMakeFiles/torch_python.dir/csrc/distributed/rpc Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/torch/CMakeFiles/torch_python.dir/csrc/jit/passes/onnx Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/torch/CMakeFiles/torch_python.dir/csrc/jit/script Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/torch/CMakeFiles/torch_python.dir/csrc/multiprocessing Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/torch/CMakeFiles/torch_python.dir/csrc/nn Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/torch/CMakeFiles/torch_python.dir/csrc/onnx Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/torch/CMakeFiles/torch_python.dir/csrc/tensor Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/torch/CMakeFiles/torch_python.dir/csrc/utils Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/torch/lib/libshm/CMakeFiles/shm.dir Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/torch/lib/libshm/CMakeFiles/torch_shm_manager.dir Error: Orphaned: @dir 
%%PYTHON_SITELIBDIR%%/caffe2/transforms/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/utils/CMakeFiles Error: Orphaned: @dir %%PYTHON_SITELIBDIR%%/caffe2/video/CMakeFiles ``` There's no need for empty directories. In particular, installing directories called ```CMakeFiles``` is meaningless. ```1.0rc0-6216-geabfca357```
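A packaging sanity check for this can be sketched in a few lines of Python; the miniature tree below is hypothetical and only mimics the layout from the report:

```python
# Sketch of a packaging sanity check: list directories under an install
# root that contain nothing at all. The miniature tree built below is
# hypothetical and only mimics the layout from the report.
import os
import tempfile

def empty_dirs(root):
    """Return paths of directories under root with no files and no subdirectories."""
    found = []
    for dirpath, dirnames, filenames in os.walk(root):
        if not dirnames and not filenames:
            found.append(dirpath)
    return found

with tempfile.TemporaryDirectory() as tmp:
    # One real package file plus a leftover empty CMakeFiles directory.
    os.makedirs(os.path.join(tmp, "caffe2", "CMakeFiles", "torch.dir"))
    open(os.path.join(tmp, "caffe2", "__init__.py"), "w").close()
    rel = sorted(os.path.relpath(d, tmp) for d in empty_dirs(tmp))
print(rel)
```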
module: build,triaged
low
Critical
481,830,906
godot
Cannot load GDNative DLL from within the PCK file
**Godot version:** 3.1.1 **OS/device including version:** Linux cross compiling to Windows **Issue description:** GDNative libraries cannot be loaded from the .pck file when the pattern `*.dll` is included in the resources regex on the export tab. **Steps to reproduce:** **Minimal reproduction project:**
enhancement,platform:windows,confirmed,topic:gdextension
low
Critical