id: int64 (values 393k–2.82B)
repo: string (68 classes)
title: string (lengths 1–936)
body: string (lengths 0–256k)
labels: string (lengths 2–508)
priority: string (3 classes)
severity: string (3 classes)
453,846,074
scrcpy
Input (screen presses) via command line?
Hello! It's not listed as a feature, but can you tell me whether it's possible to register touches via the command line? By this I mean taps at specific locations, not just the home or back button, for example. I understand this to be a feature of ADB using the following command; however, I was curious whether scrcpy supports it, since the ADB approach feels a little sluggish: `adb.exe -s XXXXXXXX shell input tap 350 1000`. Alternatively, if that's not a feature and you have an idea of how I can speed this up, I'd be super intrigued to hear it. Many thanks
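If it helps, one common source of the sluggishness is the per-tap process launch of `adb.exe` rather than the tap itself. A sketch (not a scrcpy feature; the serial and coordinates are the placeholders from the command above) that batches several `input tap` calls into a single shell invocation:

```python
import subprocess

def batched_tap_command(serial, taps):
    """Build a single 'adb shell' invocation that performs several taps,
    instead of launching one adb process per tap."""
    script = "; ".join(f"input tap {x} {y}" for x, y in taps)
    return ["adb", "-s", serial, "shell", script]

cmd = batched_tap_command("XXXXXXXX", [(350, 1000), (350, 1200)])
print(cmd)
# ['adb', '-s', 'XXXXXXXX', 'shell', 'input tap 350 1000; input tap 350 1200']

# subprocess.run(cmd)  # uncomment on a machine with adb and a connected device
```

Keeping one long-lived `adb shell` process open and writing `input tap` lines to its stdin avoids the startup cost entirely, at the price of a little more plumbing.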
feature request
low
Major
453,848,520
youtube-dl
Feature: Add option to treat batch-file URLs as chunks and merge them
<!-- ###################################################################### WARNING! IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE ###################################################################### --> ## Checklist <!-- Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl: - First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.06.08. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED. - Search the bugtracker for similar feature requests: http://yt-dl.org/search-issues. DO NOT post duplicates. - Finally, put x into all relevant boxes (like this [x]) --> - [x] I'm reporting a feature request - [x] I've verified that I'm running youtube-dl version **2019.06.08** - [x] I've searched the bugtracker for similar feature requests including closed ones ## Description <!-- Provide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible. --> Sometimes it is possible to download the previous chunks of an m3u8 livestream by specifying a custom m3u8 file or by feeding all the chunks in a URL batch file. In this case I didn't want to start from the very beginning of the stream's rewind, which is 2 hours, but from a portion somewhere in the middle. Incidentally, the rewind doesn't even seem to be tied to the raw content files, so they may still be up after the web player's 2-hour limit. I tried the batch-file method with the --merge-output-format option, but it didn't merge: the chunks downloaded fine, they just need a manual concat, which I haven't looked into yet.
While figuring out this issue, I saw that the m3u8 chunklist file contained relative URLs all the way back to the 2-hour rewind limit, including the part I wanted. So I wouldn't have needed to manually generate all those full chunk URLs with LibreOffice Calc (which isn't so much hard as time-consuming); I could just use the custom m3u8, except youtube-dl doesn't support such a thing, because all the chunk links are relative... I even tried converting those chunks into full URLs: it didn't work when passed as a URL, and --batch-file just downloaded them the same way, without merging. Example URL: https://rtvslolive.akamaized.net/hls/live/584144-b/tv_slo1/slo1_720p/chunklist.m3u8
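For the manual concat step, one option is ffmpeg's concat demuxer. A sketch (the chunk filenames are hypothetical, and it assumes the .ts chunks were already downloaded locally, e.g. via --batch-file):

```python
import subprocess

# Hypothetical chunk files produced by the batch download
chunks = ["chunk_0001.ts", "chunk_0002.ts", "chunk_0003.ts"]

# The concat demuxer reads a list file with one "file '<name>'" line per chunk.
concat_list = "\n".join(f"file '{name}'" for name in chunks) + "\n"
with open("chunks.txt", "w") as f:
    f.write(concat_list)

# "-c copy" concatenates without re-encoding.
cmd = ["ffmpeg", "-f", "concat", "-safe", "0", "-i", "chunks.txt", "-c", "copy", "out.mp4"]
# subprocess.run(cmd)  # uncomment where ffmpeg is installed
print(concat_list.splitlines()[0])  # file 'chunk_0001.ts'
```

Since the chunks come from the same stream, stream-copy concatenation normally works; re-encoding is only needed if the chunk parameters differ.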
request
low
Critical
453,849,051
youtube-dl
Feature: Add support for a custom m3u8 chunklist from a local file
## Checklist - [x] I'm reporting a feature request - [x] I've verified that I'm running youtube-dl version **2019.06.08** - [x] I've searched the bugtracker for similar feature requests including closed ones ## Description Refer to #21340
request
low
Critical
453,876,619
neovim
Show search match count close to the word being searched
<!-- Before reporting: search existing issues and check the FAQ. --> - `nvim --version`: v0.4.0-984-g3dd31b2b6 - Vim (version: ) behaves differently? no - Operating system/version: fedora 30 - Terminal name/version: zsh - `$TERM`: xterm-256color ### Steps to reproduce using `nvim -u NORC` ``` nvim -u NORC # search for a word ``` ### Actual behaviour The search match count is shown at the far right ``` /word [1/20] ``` ### Expected behaviour The search match count is shown close to the search word (left) ``` [1/20] /word ``` or ``` /word [1/20] ``` I have a wide monitor; at first it was hard to notice this feature, since the word being searched is at the left and the current match count is at the far right.
enhancement,ui,ui-extensibility
low
Minor
453,881,044
youtube-dl
Feature: Add ability to force remux to a custom container even when not necessary
## Checklist - [x] I'm reporting a feature request - [x] I've verified that I'm running youtube-dl version **2019.06.08** - [x] I've searched the bugtracker for similar feature requests including closed ones ## Description It may be handy to support remuxing finished downloads with ffmpeg into a custom container format (e.g. MKV) even when not necessary, for the purpose of keeping a backup archive's filenames and formats consistent across the board (in case of a mix of MKVs, MP4s, etc.), as well as opening up possibilities for combining with other features such as the one described here: #21344
request
low
Critical
453,881,790
youtube-dl
Feature: Support merging subtitles + formats into a supported container
## Checklist - [x] I'm reporting a feature request - [x] I've verified that I'm running youtube-dl version **2019.06.08** - [x] I've searched the bugtracker for similar feature requests including closed ones ## Description There could be an additional subtitle option/command to pack the subtitles into the final file, such as an .mkv. I'm already almost exclusively downloading DASH/OPUS and merging into MKVs unless unavailable; the feature should properly support downloads which already use merging, so that it doesn't merge twice and avoids conflicts. It could be a proper subtitle-type option, or a merge-type option like "add this type of subs/formats when merging"; combined with, for example, a separate forced-merge option as described in #21343, it would then be possible to do this for everything, not just those downloads that need to be remuxed.
This kind of approach would make it more flexible and less conflict-prone when combining things and making sure ffmpeg is fed a proper merging config; it should be easier to implement in that regard too, IMO, unless you think it's not a big deal. It would be less convenient and less obvious that such a possibility exists, so the whole thing would be less new-user friendly, if that's a concern; if not, then of course no big deal. -- Also, not only multiple subtitle languages, but multiple subtitle FORMATS of the SAME language should be supported for merging as well. They do not look the same and have many differences, and not all players support all formats equally. For example, in VLC a particular TTML English YouTube subtitle has a non-transparent black background, which makes the subtitles easier to read but blocks the video; the same English language in VTT format has a fully transparent background, which reveals more of the video, but the subtitles may be harder to discern when the video behind them is of a similar color. Editing the subtitles for hundreds or thousands of files would be tedious; it should be easier to just download/merge both formats.
request
low
Critical
453,882,432
youtube-dl
Feature: Support downloading multiple subtitle formats of the same language at once
## Checklist - [x] I'm reporting a feature request - [x] I've verified that I'm running youtube-dl version **2019.06.08** - [x] I've searched the bugtracker for similar feature requests including closed ones ## Description I first thought this was a bug, but then I changed my mind based on some educated guessing; I may be wrong. I have manually checked that `ttml` downloads fine from the YT videos I'm downloading in this case. `--all-subs` doesn't seem to do it: it apparently applies only to "all languages" (in this case only English), and it picks the format it thinks is most appropriate, `vtt`. `--sub-format` only takes one valid format or a preference; it does not support comma-separated multiple values. These options should be renamed, as they feel too vague, and it's not clear whether `--all-subs` should be used with `--write-sub` or not, for lack of context.
It could be `--sub-formats` to reflect that it accepts commas, or a new option `--all-sub-formats`. Additionally, `--all-subs` should be renamed to `--all-sub-langs`, but it's already inconsistent, because I found out `--all-subs` doesn't need `--write-sub`: from the name it seems like an optional secondary option, but it is in fact a standalone option, which means it should be called `--write-all-sub-langs`. However, I thought that "write" meant baking the subs into the file with ffmpeg, or even into the video itself; maybe "get" or "save" would be a better term, but that's not a big deal. It should still be named appropriately so that it doesn't conflict with exactly that idea of merging/packing the subs into a supported container described here: #21344 Still, those terms would rather be "pack"/"merge", so it may not be an issue. -- Currently it's only practically possible to download other subtitle formats in a separate session with the `--skip-download` option.
request
low
Critical
453,904,170
go
cmd/go: accept main packages as dependencies in go.mod files
If this issue is a duplicate (other than #25922 and #30515): apologies ahead of time, and please feel free to close it. ### Summary Many Go programs today not only depend on other packages, but they depend on other programs themselves. In other words, a module (whether a program or a library) may depend on other `main` programs for various reasons, such as code generation, peer connections, rpc connections and more. When Go Modules first came out, the same exact question came up here: https://github.com/golang/go/issues/25922. That question revolved around programs that depended on *tools*, while this issue is talking more abstractly about programs that depend on *any* `main` program, regardless of whether it's a tool or not. The [answer](https://github.com/golang/go/issues/25922#issuecomment-402918061) from @rsc at the time was that it was appropriate to use a `tools.go` file with `// +build tools` to force go.mod to record `main` package requirements. The answer was appropriate for Go 1.11, when Modules was still highly experimental, but it might be worth reconsidering when Go Modules becomes the official dependency manager for all Go code out there. The reason is that `tools.go` is more of a workaround than first-class support. Having first-class support for `main` package dependencies would be more developer friendly than ignored-import-paths with an ignore build tag. Furthermore, a module should be able to depend on other Go code regardless of whether it's `package main` or `package <lib>`. It's also worth mentioning that other tools (such as https://github.com/myitcv/gobin) exist to make this workaround a little bit easier. But it's worth comparing how other languages have an external binary for dependency management, such as `node/npm`, `ruby/bundle`, `rust/cargo`, while Go only has `go`. It would be odd to have a whole new program just to manage `main` dependencies.
### Proposal #### I propose that Go provides first class support for modules that depend on `main` packages The proposal has two goals: 1. To have a user-friendly way to manage `main` package dependencies inside of a module. 2. Be able to execute a `main` program at that recorded version from `1`. _How_ to achieve that is left out of the issue description, but can definitely be discussed in comments. I'll start with one suggestion just as a thought experiment. Ultimately, what I would love to see is that if my coworker git-cloned my project that depended on other `main` packages such as `github.com/golang/protobuf/protoc-gen-go`, they would be able to get the precise version that I intended to use `protoc-gen-go` with. Thanks ### What version of Go are you using (`go version`)? <pre> $ go version go version go1.12 darwin/amd64 </pre> ### Does this issue reproduce with the latest release? Yes cc: @ianthehat
NeedsDecision,modules
medium
Major
453,906,105
rust
Tracking issue for `Result::into_ok`
I would like to propose adding a `unwrap_infallible` associated function to `core::result::Result`. The purpose is to convert `Result<T, core::convert::Infallible>` to a `T`, as the error case is impossible. The implementation would basically be: ```rust impl<T> Result<T, Infallible> { #[inline] pub fn unwrap_infallible(self) -> T { match self { Ok(value) => value, } } } ``` An example use-case I have is a wrapper type with generic value verification, like `Handle<T:Verify>`. Some verifications can fail, but some can not. When verification is infallible, a `Result<Handle<T>, Infallible>` will be returned. Since some can fail, the `Handle` type implements `TryFrom` and not `From`. Because of the blanket implementation of `TryFrom` for all types implementing `From`, I can't additionally add a `From` conversion for the infallible cases. This blanket implementation makes sense, as it allows an API to take a `T: TryFrom` and handle all possible conversions, even infallible ones. But for the API consumer it would be beneficial to be able to go from an infallible `Result<T, Infallible>` to a plain `T` without having to manually match or use `expect`. The latter is shorter and chainable, but has the disadvantage that it will still compile when the types are changed and the conversion is no longer infallible. It might be that there is a different solution to infallible conversions via `TryFrom` in the future, for example via specialization. I believe that having an `unwrap_infallible` would still be beneficial in many other cases where generically fallible actions can be infallible in certain situations. One example is when working with a library that is based on fallible operations with a user supplied error type, but where the specific implementation from the user is infallible. I'd be happy to work on a PR for this if it's acceptable to add, though I might require some guidance with regards to stability attributes, feature flags and so on. 
It's a bit hard to find information on that, and experimentation is costly as it takes me a while to complete a `./x.py test src/libcore` :)
T-libs-api,B-unstable,C-tracking-issue,A-result-option,Libs-Tracked,Libs-Small
medium
Critical
453,911,416
TypeScript
Return type annotations ignored with recursive closures using JSDoc
<!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨 Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Even if you think you've found a *bug*, please read the FAQ first, especially the Common "Bugs" That Aren't Bugs section! Please help us by doing the following steps before logging an issue: * Search: https://github.com/Microsoft/TypeScript/search?type=Issues * Read the FAQ: https://github.com/Microsoft/TypeScript/wiki/FAQ Please fill in the *entire* template below. --> <!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. --> **TypeScript Version:** 3.5.1 <!-- Search terms you tried before logging this (so others can find this issue more easily) --> **Search Terms:** - "implicit any type" - "circular reference" - "jsdoc" - "recursive" - "closure" **Code** I have a fairly complex use case in a project which uses vanilla JS with type annotations in JSDoc comments. The long and the short is that there is a function which returns a function, which may recursively call itself and will reassign some closure variables. Here is a silly example which gets the point across and demonstrates the same issue: ```js /** * @returns {function(): number} */ function circular() { let rand = Math.random(); return /** @type {function(): number} */ (function tryAgain() { if (rand < 0.5) { return rand; } rand = Math.random(); return tryAgain(); }); } ``` **Expected behavior:** TypeScript should know that the return type of `tryAgain` is a number. **Actual behavior:** When run with: ```bash tsc --allowJs --checkJs --noEmit --strict --target ES2017 *.js ``` The following error is thrown: ``` error TS7023: 'tryAgain' implicitly has return type 'any' because it does not have a return type annotation and is referenced directly or indirectly in one of its return expressions. ``` At the very least, this error seems pretty erroneous. 
The `tryAgain` function has _two_ return type annotations (I have tried both styles in an attempt to fix this). The larger issue is that I need some way to get TypeScript to compile this code without a massive refactor. **Playground Link:** None (code is JavaScript). **Related Issues:** This has a circular reference similar to [#26623](https://github.com/Microsoft/TypeScript/issues/26623). However for that issue the solution was to add an explicit return type annotation. In my case (perhaps because I am using JSDoc), TypeScript seems to be ignoring all explicit annotations.
Suggestion,Awaiting More Feedback
low
Critical
453,911,864
flutter
google_maps_flutter: Allow setting map feature listeners lazily
It is only possible to set `onTap` callbacks on `Marker/Circle/Polyline/Polygon` in the respective class's constructor. This prevents object creation in separate Isolates. It would be great if either the `onTap` fields were mutable or if `GoogleMap` hosted global `onTap[Marker|Circle|...]` callbacks.
p: maps,package,c: proposal,team-ecosystem,P3,triaged-ecosystem
low
Minor
453,927,966
youtube-dl
--reject-title exact match
<!-- ###################################################################### WARNING! IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE ###################################################################### --> ## Checklist <!-- Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl: - Look through the README (http://yt-dl.org/readme) and FAQ (http://yt-dl.org/faq) for similar questions - Search the bugtracker for similar questions: http://yt-dl.org/search-issues - Finally, put x into all relevant boxes (like this [x]) --> - [x] I'm asking a question - [x] I've looked through the README and FAQ for similar questions - [x] I've searched the bugtracker for similar questions including closed ones ## Question <!-- Ask your question in an arbitrary form. Please make sure it's worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. --> I'm trying to download videos from a certain channel that contain the word "First" in their title, but I want to avoid downloading videos that contain "Mythic+" in their title. The thing is, there are a lot of those "First" videos that contain the word "Mythic" and I can't download them because of my "Mythic+" filter. I'm using: `youtube-dl -f '(299/137/298/136)+(251/bestaudio)' -o '/Volumes/Oliver HDD/Method/%(autonumber)s - %(release_date)s - %(title)s.%(ext)s' --match-title 'First' --reject-title 'Mythic+' https://www.youtube.com/watch?v=RubXDO7TYVs` And I'm getting: "Method VS G'huun WORLD FIRST - Mythic Uldir" title matched reject pattern "Mythic+". Is there any way to do this? I'd appreciate any help!
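A note on the reject pattern: `--reject-title` is matched as a regular expression, where `+` means "one or more of the preceding character", so the pattern `Mythic+` matches any title containing `Mythic`. Escaping the plus (`--reject-title 'Mythic\+'`) should restrict it to the literal string `Mythic+`. A minimal demonstration of the difference (the titles here are just examples):

```python
import re

titles = [
    "Method VS G'huun WORLD FIRST - Mythic Uldir",  # contains "Mythic" but not "Mythic+"
    "World First +15 Mythic+ Dungeon",              # contains the literal "Mythic+"
]

# Unescaped: "+" quantifies the preceding "c", so "Mythic+" matches
# any title that contains "Mythic" at all.
unescaped = [bool(re.search('Mythic+', t)) for t in titles]   # [True, True]

# Escaped: "\+" matches a literal plus sign.
escaped = [bool(re.search(r'Mythic\+', t)) for t in titles]   # [False, True]
```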
question
low
Critical
453,928,310
flutter
[image_picker] GIF resize loses animation
Specifying maxWidth and maxHeight when picking a GIF will cause the animation to be lost.
platform-android,p: image_picker,package,has reproducible steps,P2,found in release: 3.10,found in release: 3.11,team-android,triaged-android
low
Major
453,939,789
go
x/tools/gopls: bad completion insertion after syntax error
In this code ```go package foo func foo() { foo(a ...interface{}) f<> } ``` Completing "f" to "foo" ends up inserting "ffoo" instead of "foo". This is because the syntax error on the previous line completely obscures the "f" *ast.Ident, so we don't detect it properly as the prefix. Perhaps if we are in a BadExpr we could "manually" detect the surrounding identifier so completion still works to some degree? /cc @stamblerre
help wanted,NeedsInvestigation,gopls
low
Critical
453,946,228
godot
Colliding objects (RigidBody2d) gain energy
<!-- Please search existing issues for potential duplicates before filing yours: https://github.com/godotengine/godot/issues?q=is%3Aissue --> **Godot version:** 3.1.1 <!-- Specify commit hash if non-official. --> **OS/device including version:** MacBook Pro (Retina 15", Mid 2015), 16 GB, AMD Radeon R9 M370X 2048 MB, Intel Iris Pro 1536 MB, OS: MacOS High Sierra 10.13.6 <!-- Specify GPU model and drivers if graphics-related. --> **Issue description:** See enclosed screenshot. With "elastic" collisions, the "energy" of the system appears to increase, and eventually, a ball disappears. Eventually, all balls except one disappear. ![screen_shot](https://user-images.githubusercontent.com/45950182/59952827-8e28b000-944b-11e9-9544-0deadc2028ba.jpg) I have several circular balls (`RigidBody2d`) enclosed by walls (`StaticBody2d`). They are given a random initial velocity, and `linear_damp`, `angular_damp`, and `friction` are all set to 0. <!-- What happened, and what was expected. --> Everything works fine for a while. The balls all collide with one another and the walls (like particles in an ideal gas). However, the "energy" (basically, the scalar speed) of the entire system slowly increases. After a while, a ball node completely disappears. The node is still in the list, however, and there are no noticeable error messages. Eventually, all the balls except one disappear. **Steps to reproduce:** Just run, and you will see this happen. Once running, click to add balls with random initial velocities - try it with, say, 10 balls. Then let things progress. You will see the energy increasing, and after about 15 minutes (on my machine), a ball will disappear. After a while, another will disappear, and eventually you will be left with a single ball, which seems to be fine as it bounces off the walls. **Minimal reproduction project:** [gas.zip](https://github.com/godotengine/godot/files/3316114/gas.zip) <!-- Recommended as it greatly speeds up debugging. Drag and drop a zip archive to upload it. -->
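For reference, a perfectly elastic collision should conserve kinetic energy exactly, so any sustained increase is solver error. A minimal 1D check in plain Python (independent of Godot's solver; the masses and velocities are made up) of the invariant the simulation should preserve:

```python
def elastic_collision_1d(m1, v1, m2, v2):
    """Post-collision velocities for a perfectly elastic 1D collision
    (standard conservation-of-momentum-and-energy solution)."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

def kinetic_energy(m, v):
    return 0.5 * m * v * v

m1, v1, m2, v2 = 1.0, 3.0, 2.0, -1.0
u1, u2 = elastic_collision_1d(m1, v1, m2, v2)

before = kinetic_energy(m1, v1) + kinetic_energy(m2, v2)
after = kinetic_energy(m1, u1) + kinetic_energy(m2, u2)
print(abs(after - before) < 1e-12)  # True: energy is conserved exactly
```

Logging the analogous total (sum of 0.5 * mass * linear_velocity.length_squared() over all balls) each frame in the reproduction project would quantify the drift.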
bug,confirmed,topic:physics
low
Critical
454,037,683
PowerToys
Disable Taskbar thumbnail preview
It would be really nice to have an option to turn off the live taskbar thumbnail previews. It's really annoying when using Remote Desktop in windowed mode or when using Parallels on a Mac. Often it overlaps another taskbar, and you have to wait for it to disappear.
Idea-Enhancement,Product-Tweak UI Design
low
Minor
454,040,989
go
cmd/compile: prove pass unable to eliminate bounds check when a variable is assigned from len
### What version of Go are you using (`go version`)? <pre> $ go version go version devel +98100c56da Mon Jun 3 01:37:58 2019 +0000 linux/amd64 </pre> ### What did you do? Consider - ```go func bce2(s string) { n := len(s) buf := make([]byte, n+1) for i := 0; i <= n; i++ { _ = buf[i] // bounds check } } ``` The compiler is unable to prove that `buf[i]` will be always in range. But it should be, because n is positive, and len(buf) = n+1. A hint of `_ = buf[n]` or alternatively, changing the bounds from `<= n` to `< len(buf)` fixes it. This actually came from a real-world code from my levenshtein library. See https://github.com/agnivade/levenshtein/commit/1e1f2aee4191dd89ec83158f03f6e4c9cc60bcd9#diff-12f7126b3ca34e44fe76e482d22fba93R46. ### What did you expect to see? No bounds check ### What did you see instead? Bounds check Apologies if this is already filed somewhere else. I did not see it in my search. @zdjones @rasky Also @mvdan (we had a conversation on this on slack)
Performance,NeedsInvestigation,compiler/runtime
low
Major
454,049,593
vue-element-admin
Browser window freezes
<!-- Note: In order to better solve your problem, please refer to the template to provide complete information and accurately describe the problem; issues with incomplete information will be closed. --> ## Bug report After building with vue-element-admin and deploying the files to a server, if any JS error occurs the whole browser window freezes and cannot even be closed; the browser must be force-quit. #### Steps to reproduce 1. npm run build:prod 2. Deploy to production 3. Visit the production URL; if a JS error occurs, the window freezes <!-- 1. [xxx] 2. [xxx] 3. [xxxx] --> #### Screenshot or Gif #### Link to minimal reproduction <!-- Please only use Codepen, JSFiddle, CodeSandbox or a github repo --> #### Other relevant information - Your OS: - Node.js version: - vue-element-admin version:
need repro :mag_right:
low
Critical
454,068,614
flutter
Refactor platform views to respect the Activity lifecycle on Android
Refactor platform views to respect the Activity lifecycle on Android. Up to the point of writing this ticket, the platform views API has been based on the old embedding, which always has an Activity available. The new embedding, and add-to-app use-cases in general, do not always have an Activity available. In fact, Activitys can come and go. This ticket is to make platform views compatible with the lifecycle realities of the new Android embedding and add-to-app use-cases. The expected steps are as follows: - [ ] Platform Views registers part of itself as an `ActivityAware` `FlutterPlugin` to be notified of `Activity`s coming and going. - [ ] When an `Activity` appears, the Platform Views system instantiates all platform Views that the app developer wants. When that `Activity` disappears, the Platform Views system destroys all existing platform Views that were previously instantiated. - [ ] The Platform Views system automatically saves platform View state upon `Activity` disconnection, and restores that state upon `Activity` connection. - [ ] The Platform Views system exposes Dart hooks that notify app developers about when a given platform View has been destroyed on the platform side, and when that platform View has been (re)created.
platform-android,engine,a: existing-apps,a: platform-views,P3,team-android,triaged-android
low
Major
454,068,839
opencv
cv2.resize fails on arrays with a large number of channels
##### System information (version) - opencv-python == 4.1.0.25 - numpy == 1.15.0 - Operating System == MacOS Mojave 10.14.5 ##### Detailed description Resizing data with many channels (586 in this case) fails; the code is below: ``` data = np.random.uniform(low=0.0, high=1.0, size=(128, 586, 586)) data = cv2.resize(data, (256, 586), interpolation=cv2.INTER_LINEAR) ``` ``` --------------------------------------------------------------------------- error Traceback (most recent call last) <ipython-input-36-4816150e3a28> in <module>() 1 data = np.random.uniform(low=0.0, high=1.0, size=(128, 586, 586)) ----> 2 data = cv2.resize(data, (256, 586), interpolation=cv2.INTER_LINEAR) error: OpenCV(4.1.0) /Users/travis/build/skvark/opencv-python/opencv/modules/imgproc/src/resize.cpp:3361: error: (-215:Assertion failed) !dsize.empty() in function 'resize' ```
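Note that `cv2.resize` interprets the last axis of a NumPy array as channels, so a `(128, 586, 586)` array is seen as a 128x586 image with 586 channels, and OpenCV Mats are capped at CV_CN_MAX (512) channels. A possible workaround sketch (channels-last layout and a chunk size of 512 are assumptions; the resize function is injected here so the chunking logic can be demonstrated without OpenCV, but in practice it would be a lambda around `cv2.resize`): split the channel axis into chunks and resize each chunk separately.

```python
import numpy as np

def resize_many_channels(img, resize_fn, max_channels=512):
    """Resize an (H, W, C) array whose channel count exceeds the
    per-call limit by resizing chunks of <= max_channels channels."""
    chunks = [
        resize_fn(img[:, :, i:i + max_channels])
        for i in range(0, img.shape[2], max_channels)
    ]
    return np.concatenate(chunks, axis=2)

# Demonstration with a stand-in resize function (nearest-neighbour by
# integer striding) so the chunking logic runs without OpenCV.
def halve(a):
    return a[::2, ::2]

data = np.random.uniform(size=(8, 8, 586))
out = resize_many_channels(data, halve)
print(out.shape)  # (4, 4, 586)
```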
category: python bindings,category: imgproc,RFC
low
Critical
454,072,725
flutter
[google_maps_flutter] ACCESS_FINE_LOCATION permission always return NOT GRANTED on android
Hi! I’m testing a Google_maps_flutter sample. ```console E[/GoogleMapController]()(25499): Cannot enable MyLocation layer as location permissions are not granted ``` Google maps are displayed fine, but my current location function does not work as the above errors occur. Of course, Manifest also declared! `<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>` I've tried the Google map app on simulator, but it works fine. https://github.com/flutter/plugins/pull/910 I think it's related to this issue. I'm using Google_maps_flutter version 0.5.16 The Android emulator is using the API 28 version. Let me know if anyone has a similar problem. 🙌 ``` [✓] Flutter (Channel stable, v1.5.4-hotfix.2, on Mac OS X 10.14.4 18E226, locale ko-KR) • Flutter version 1.5.4-hotfix.2 at /Users/Riky/Library/Flutter • Framework revision 7a4c33425d (6 weeks ago), 2019-04-29 11:05:24 -0700 • Engine revision 52c7a1e849 • Dart version 2.3.0 (build 2.3.0-dev.0.5 a1668566e5) [✓] Android toolchain - develop for Android devices (Android SDK version 28.0.3) • Android SDK at /Users/Riky/Library/Android/sdk • Android NDK at /Users/Riky/Library/Android/sdk/ndk-bundle • Platform android-28, build-tools 28.0.3 • ANDROID_HOME = /Users/Riky/Library/Android/sdk • Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1248-b01) • All Android licenses accepted. 
[✓] iOS toolchain - develop for iOS devices (Xcode 10.2) • Xcode at /Applications/Xcode.app/Contents/Developer • Xcode 10.2, Build version 10E125 • ios-deploy 1.9.4 • CocoaPods version 1.6.1 [✓] Android Studio (version 3.3) • Android Studio at /Applications/Android Studio.app/Contents • Flutter plugin version 31.3.1 • Dart plugin version 181.5656 • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1248-b01) [✓] IntelliJ IDEA Ultimate Edition (version 2018.2) • IntelliJ at /Applications/IntelliJ IDEA.app • Flutter plugin version 31.3.2 • Dart plugin version 182.3569.4 [✓] VS Code (version 1.35.0) • VS Code at /Applications/Visual Studio Code.app/Contents • Flutter extension version 3.1.0 [✓] Connected device (1 available) • Android SDK built for x86 64 • emulator-5554 • android-x64 • Android 9 (API 28) (emulator) • No issues found! ```
platform-android,customer: crowd,p: maps,package,has reproducible steps,P2,found in release: 1.22,found in release: 1.26,found in release: 2.0,found in release: 2.3,team-android,triaged-android
low
Critical
454,079,302
electron
downloadURL should have an optional destination argument
Handling the save path in the will-download event works, sometimes. But right now I'm having a problem when I have a custom file location picker to let the user pick a recent location. When I call downloadURL, the only way I see to use this custom path is by using some hacky variables and comparing dates to check if it's recent. A simple downloadURL(url, destination) would make things so much easier. v5.0.2
enhancement :sparkles:
low
Minor
454,104,061
kubernetes
Calculate oom_score_adj in a CPU-agnostic way, taking in consideration Pod Priority too
**What would you like to be added**:

I would like to start a conversation to understand if we may improve `GetContainerOOMScoreAdjust` to:

1. Calculate the `oom_score_adj` in a CPU-agnostic way
2. Take Pod Priority into account too

**Why is this needed**:

Currently, the `kubelet` sets an `oom_score_adj` value for each container based on the Pod QoS:

- `Guaranteed`: `-998`
- `BestEffort`: `1000`
- `Burstable`: `min(max(2, 1000 - (1000 * memoryRequestBytes) / machineMemoryCapacityBytes), 999)`

The QoS class depends on both the CPU and memory requests/limits, which means the `oom_score_adj` also depends on the CPU resources and not just on the memory resources, while the OOM score is a purely memory-management thing. Moreover, `oom_score_adj` is also Pod-priority agnostic (except that critical pods always get `-998`, regardless of their QoS class), which can cause higher-priority pods to get evicted before lower-priority pods under node memory pressure.
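The Burstable formula quoted above can be sketched in plain Python to make the clamping behavior concrete (an illustration of the behavior described here, not the actual kubelet code; the QoS-class strings and function name are assumptions for the sketch):

```python
def oom_score_adjust(qos_class, memory_request_bytes=0, machine_memory_capacity_bytes=1):
    """Illustrative mirror of the per-QoS oom_score_adj values described above."""
    if qos_class == "Guaranteed":
        return -998
    if qos_class == "BestEffort":
        return 1000
    # Burstable: a larger memory request yields a lower (safer) score,
    # clamped to the range [2, 999].
    score = 1000 - (1000 * memory_request_bytes) // machine_memory_capacity_bytes
    return min(max(2, score), 999)

# A pod requesting half the node's memory lands in the middle of the range.
print(oom_score_adjust("Burstable", 32 << 30, 64 << 30))  # 500
```

Note that nothing in the Burstable branch looks at CPU or Pod Priority, which is exactly the asymmetry raised here: the QoS class itself is derived from CPU requests/limits too, while the score that results is purely memory-based.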
priority/backlog,sig/node,kind/feature,triage/accepted
medium
Major
454,126,725
puppeteer
Unable to move mouse on elements using Pointer Lock API
<!-- STEP 1: Are you in the right place? - For general technical questions or "how to" guidance, please search StackOverflow for questions tagged "puppeteer" or create a new post. https://stackoverflow.com/questions/tagged/puppeteer - For issues or feature requests related to the DevTools Protocol (https://chromedevtools.github.io/devtools-protocol/), file an issue there: https://github.com/ChromeDevTools/devtools-protocol/issues/new. - Problem in Headless Chrome? File an issue against Chromium's issue tracker: https://bugs.chromium.org/p/chromium/issues/entry?components=Internals%3EHeadless&blocking=705916 For issues, feature requests, or setup troubles with Puppeteer, file an issue right here! --> ### Steps to reproduce **Tell us about your environment:** * Puppeteer version: all versions (tried on 1.12.2 and 1.17.0) * Platform / OS version: macOS 10.14.4 * URLs (if applicable): https://mdn.github.io/dom-examples/pointer-lock/ * Node.js version: 12.0.0 **What steps will reproduce the problem?** _Please include code that reproduces the issue._ ```javascript const browser = await puppeteer.launch(); const page = await browser.newPage(); await page.goto('https://mdn.github.io/dom-examples/pointer-lock/'); const el = await page.$('canvas'); const bbox = await el.boundingBox(); const x = bbox.x + bbox.width / 2; const y = bbox.y + bbox.height / 2; console.log(x, y); await page.waitFor(100); await page.mouse.move(x, y); await page.waitFor(100); await page.mouse.down(); await page.waitFor(100); await page.mouse.move(x + 100, y + 100); await page.waitFor(100); await page.mouse.up(); await page.waitFor(100); await page.screenshot({path: 'screenshot.png'}); await browser.close(); ``` **What is the expected result?** Element should receive mouse events. **What happens instead?** Nothing happens, seems like Puppeteer actions are ignored.
bug,confirmed,P3
low
Critical
454,171,515
TypeScript
Allow extending types referenced through interfaces
## Suggestion Allow things like the following: ```ts interface I extends HTMLElementTagNameMap['abbr'] {} ``` Currently, the following error message is given: "An interface can only extend an identifier/qualified-name with optional type arguments." Which I don't even understand. I believe it requires no further clarification or justification, but please let me know if that is the case...
Suggestion,In Discussion,Has Repro
medium
Critical
454,177,923
pytorch
RuntimeError: cublas runtime error
My env is : Collecting environment information... PyTorch version: 0.4.1 Is debug build: No CUDA used to build PyTorch: 9.0.176 OS: Ubuntu 16.04.5 LTS GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609 CMake version: version 3.14.0 Python version: 3.5 Is CUDA available: Yes CUDA runtime version: 10.0.130 GPU models and configuration: GPU 0: GeForce RTX 2080 Nvidia driver version: 410.48 cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.5.0 Versions of relevant libraries: [pip3] numpy==1.16.4 [pip3] torch==0.4.1 [pip3] torch-sparse==0.4.0 [pip3] torchvision==0.2.1 [conda] blas 1.0 mkl [conda] cuda100 1.0 0 pytorch [conda] cuda90 1.0 h6433d27_0 pytorch [conda] mkl 2018.0.3 1 [conda] mkl-service 1.1.2 py36h90e4bf4_5 [conda] mkl_fft 1.0.6 py36h7dd41cf_0 [conda] mkl_random 1.0.1 py36h4414c95_1 [conda] pytorch 1.1.0 py3.6_cuda10.0.130_cudnn7.5.1_0 pytorch [conda] torch 1.2.0a0+1252899 pypi_0 pypi [conda] torchvision 0.3.0 py36_cu10.0.130_1 pytorch My error is: /home/lab/.local/lib/python3.5/site-packages/torch/nn/functional.py:52: UserWarning: size_average and reduce args will be deprecated, please use reduction='none' instead. warnings.warn(warning.format(ret)) THCudaCheck FAIL file=/pytorch/aten/src/THC/THCGeneral.cpp line=663 error=11 : invalid argument /home/lab/.local/lib/python3.5/site-packages/torch/nn/modules/upsampling.py:225: UserWarning: nn.UpsamplingBilinear2d is deprecated. Use nn.functional.interpolate instead. warnings.warn("nn.UpsamplingBilinear2d is deprecated. Use nn.functional.interpolate instead.") /home/lab/.local/lib/python3.5/site-packages/torch/nn/modules/upsampling.py:122: UserWarning: nn.Upsampling is deprecated. Use nn.functional.interpolate instead. warnings.warn("nn.Upsampling is deprecated. 
Use nn.functional.interpolate instead.") Traceback (most recent call last): File "tools/demo.py", line 189, in <module> demo() File "tools/demo.py", line 175, in demo corner_pred = eval_net(seg_pred, vertex_pred).cpu().detach().numpy()[0] File "/home/lab606/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/home/lab/.local/lib/python3.5/site-packages/torch/nn/parallel/data_parallel.py", line 121, in forward return self.module(*inputs[0], **kwargs[0]) File "/home/lab/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "tools/demo.py", line 55, in forward return ransac_voting_layer_v3(mask, vertex_pred, 512, inlier_thresh=0.99) File "/home/lab/pvnet/lib/ransac_voting_gpu_layer/ransac_voting_gpu.py", line 592, in ransac_voting_layer_v3 ATA=torch.matmul(normal.permute(0,2,1),normal) # [vn,2,2] RuntimeError: cublas runtime error : the GPU program failed to execute at /pytorch/aten/src/THC/THCBlas.cu:411
triaged,module: cublas
low
Critical
454,223,922
flutter
[google_maps] CameraUpdate.newCameraPosition bearing, tilt & zoom always 0 if empty values
## Use case Currently if you call animateCamera method and give no values to bearing, tilt or zoom they will always default to 0. Now when it does this, the map will move (bearing, tilt & zoom) even when it's not wanted. ``` await controller.animateCamera( CameraUpdate.newCameraPosition( CameraPosition( target: latLng, zoom: 17.0, ), ), ); ``` ## Proposal - Add possibility to get current tilt, bearing & zoom via controller. - Modify current animateCamera implementation so it will accept empty values and use current map values instead. ## Additional info It seems that you can store current camera information via onCameraMove callback on GoogleMap widget.
p: maps,package,c: proposal,team-ecosystem,P3,triaged-ecosystem
low
Minor
454,287,438
TypeScript
Manually widen a type for conditional/mapped type
## Search Terms `manual widen` ## Suggestion I would like some keyword or other mechanism to refer to the widened version of a type, e.g. `widened 42` would be `number` and `widened 'foo'` would be `string`. Importantly, `widened 42 & { foo: 'bar' }` would be `number & { foo: 'bar' }`. ## Use Cases Our project uses a ton of [branded primitives](https://github.com/Microsoft/TypeScript/wiki/FAQ#can-i-make-a-type-alias-nominal), which are accomplished by casting a number or string to `number & Branding` or `string & Branding`. We use this for all sorts of things: we have `number & Pixels` to indicate that a number is in pixels (and to force numbers passed to a function expecting pixels to use a number branded as pixels), we use `string & Url` to indicate a string is a URL, and so on. We also have functions that operate on these types in a generic fashion, where we want to retain the branding. For example, ```ts declare function plus<N extends number>(a: N, b: N): N; ``` This `plus` function is supposed to make sure we are adding pixels to pixels (or whatever unit), and retain the fact that their sum is also a number of pixels. The problem comes in when we also have `const offset = 5 as 5 & Pixels;`. We define it as having that literal value for convenience (mostly, it shows up in Intellisense), and we have quite a few of these. More relevantly, the implementations of `times` and `dividedBy` (which I’m avoiding putting here as they are vastly more complex to cover canceling out units) can and should be able to take just plain numbers, but when I call `times(offset, 2)` it is inferred as `times<Pixels, 2>` and the return value retains the `2` even though obviously the runtime value is almost-certainly not going to be `2`. 
I asked [for a solution to this on Stack Overflow](https://stackoverflow.com/q/56497519/778430), and was informed the only solution was to manually recreate the type by covering all of the potential brandings—which I basically have done, except that in the end there needs to be a generic overload, because we have a lot of situations where these functions are called by functions/classes that are also generic, and so only have `N extends number`, so the only overload available is the final one—which just retains whatever `N` was, even if `N` was a literal. I want to use `widen N` instead. ## Examples ```ts const c = 5 as const; const w: widened typeof c = 3; function plus<N extends number>(a: N, b: N): widened N { return a + b; } class Pixels { private '__ in': 'pixel'; } const offset = 5 as 5 & Pixels; declare const width: Pixels; const offsetWidth = plus(width, offset); // Pixels, not 5 & Pixels ``` ## Checklist My suggestion meets these guidelines: * [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code * [x] This wouldn't change the runtime behavior of existing JavaScript code * [x] This could be implemented without emitting different JS based on the types of the expressions * [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.) * [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
Suggestion,Awaiting More Feedback
low
Minor
454,298,149
go
x/build/cmd/gopherbot: should not consider backport requests inside backport issues
A backport issue should never itself be backported, so gopherbot should reject requests to backport inside such issues. See https://github.com/golang/go/issues/32261#issuecomment-496157952 where it happened by accident and caused confusing backport-backport issues to be created. Also happened in https://github.com/golang/go/issues/34881#issuecomment-541459267.
help wanted,Builders,NeedsFix
low
Minor
454,352,330
pytorch
Strange latency overhead of F.conv2d
## Issue description

I'm trying to optimize the inference latency, but found that a strange overhead happens in a single conv layer. Even though it still fits the `y=kx+b` linear relationship, the `b` is extremely large. The profiling result goes as follows.

|[filter, filter, input_channels, output_channels]|Latency for executing 50 times (s)|
|---|---|
|[3, 3, 16, 8]|0.024658|
|[3, 3, 16, 16]|0.032011|
|[3, 3, 16, 32]|0.031948|
|[3, 3, 16, 64]|0.037025|
|[3, 3, 16, 128]|0.049538|
|[3, 3, 16, 256]|0.062251|
|[3, 3, 16, 512]|0.105888|

## Code example

```python
import datetime

import numpy as np
import torch
import torch.nn.functional as F

for i in range(7):
    shape = [3, 3, 16, 2 ** (i + 3)]
    kernel_value = np.random.rand(shape[0], shape[1], shape[2], shape[3]).astype(np.float32)
    kernel = torch.as_tensor(np.transpose(kernel_value, (3, 2, 0, 1)))
    input_value = np.random.rand(1, 16, 32, 32).astype(np.float32)
    x = torch.as_tensor(input_value)
    before = datetime.datetime.now()
    for j in range(100):
        if j == 50:  # reset the timer after 50 warm-up iterations
            before = datetime.datetime.now()
        tmp = F.conv2d(x, weight=kernel, bias=None, stride=1, padding=(3 - 1) // 2)
    after = datetime.datetime.now()
    interval = after - before
    print(str(shape) + "\t" + str(interval.total_seconds()))
```

## System Info
```
PyTorch version: 1.1.0
Is debug build: No
CUDA used to build PyTorch: None
OS: Mac OSX 10.14.5
GCC version: Could not collect
CMake version: version 3.14.0
Python version: 2.7
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Versions of relevant libraries:
[pip] numpy==1.14.5
[pip] torch==1.1.0
[conda] Could not collect
```
module: performance,module: cpu,triaged
low
Critical
454,371,233
TypeScript
Document --incremental and composite project APIs
Follow-up from #29978, suggested by @MLoughry at https://github.com/microsoft/TypeScript/issues/29978#issuecomment-499674541
Help Wanted,API,Docs,Fix Available
high
Critical
454,412,640
go
compress/gzip: provide user access to size trailer (when possible)
The current `gzip.Reader` reads and verifies the trailer crc and size at the end, but the user currently has no way of reading (to pre-allocate output resources) the size. How much appetite do we have for adding something like a `func ReadSize(io.ReadSeeker) (uint32, error)` utility to the `compress/gzip` package? For concreteness, a case that I really have in mind is [decompression in Shopify/sarama](https://github.com/Shopify/sarama/blob/94536b3e82d393e4b5dfa36475b40fc668e9486f/decompress.go), leading to dynamic `bytes.Buffer` reallocation growth under `ioutil.ReadAll`. Using that example, perhaps another / a better option would be something like `func Decode([]byte) ([]byte, error)` like both snappy and zstd provide? However there's some appeal to being able to do the classic "seek to the end, read the size, pre-allocate, seek back to 0 and start streaming" idiom...
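For a single-member gzip stream, the trailer's ISIZE field (the uncompressed length mod 2^32) occupies the final four bytes, little-endian. A rough Python sketch of the "seek to the end, read the size, pre-allocate, seek back" idiom mentioned above (the helper name is made up for illustration; note that ISIZE is unreliable for multi-member streams or payloads of 4 GiB and up):

```python
import gzip
import io
import struct

def read_gzip_isize(f):
    """Read the ISIZE trailer of a single-member gzip stream from a
    seekable file object, then rewind so the caller can stream-decompress.

    The member trailer is CRC32 then ISIZE; ISIZE is the final 4 bytes.
    """
    f.seek(-4, io.SEEK_END)
    (isize,) = struct.unpack("<I", f.read(4))
    f.seek(0)
    return isize

payload = b"hello gzip " * 1000
buf = io.BytesIO(gzip.compress(payload))
size = read_gzip_isize(buf)  # pre-allocate output buffers with this
```

The same trick is what a hypothetical `gzip.ReadSize(io.ReadSeeker)` in Go would do under the hood.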
NeedsInvestigation,FeatureRequest
low
Critical
454,445,787
rust
Lifetime mismatch from itself
I found myself encountering error[E0623], but the error message is not understandable and seems buggy: it says some variable's lifetime mismatches from itself.

Minimum reproducible:

```rust
#[derive(Debug)]
enum IntOrRef<'a> {
    I(u32),
    R(&'a u32),
}

#[derive(Debug)]
struct V<'a> {
    v: Vec<IntOrRef<'a>>,
}

fn f(v: &mut V) {
    match v.v.last().unwrap() {
        IntOrRef::I(i) => {v.v.push(IntOrRef::R(&i));}
        IntOrRef::R(_) => {}
    }
}

fn main() {
    let mut v = V { v: vec![IntOrRef::I(1)] };
    f(&mut v);
    dbg!(v);
}
```

Result:

```
error[E0623]: lifetime mismatch
  --> src/bin/temp.rs:14:37
   |
12 | fn f(v: &mut V) {
   |         ------
   |         |
   |         these two types are declared with different lifetimes...
13 |     match v.v.last().unwrap() {
14 |         IntOrRef::I(i) => {v.v.push(IntOrRef::R(&i));}
   |                                     ^^^^^^^^^^^^^^^ ...but data from `v` flows into `v` here

error: aborting due to previous error

For more information about this error, try `rustc --explain E0623`.
```

For me, this is not understandable, and I do not know how to fix it. In the real code, I want to copy a long-lived variable's short-lived reference to a new one, and failed. I wonder:

1. How should I annotate lifetimes to make this work?
2. What does it mean by "mismatching from itself"?

## Meta

`rustc --version --verbose`:

```
rustc 1.35.0 (3c235d560 2019-05-20)
binary: rustc
commit-hash: 3c235d5600393dfe6c36eeed34042efad8d4f26e
commit-date: 2019-05-20
host: x86_64-apple-darwin
release: 1.35.0
LLVM version: 8.0
```
C-enhancement,A-diagnostics,A-lifetimes,T-compiler,D-newcomer-roadblock
low
Critical
454,486,523
flutter
Odd animation when scrolling in TextFormField when it is in TabView and PageStorageKey
## Description Wrap a series of `TextFormField` objects inside of a `ListView` where the `ListView` has a `PageStorageKey`. The `TabView` may not actually be required to repro, but having it helped narrow the problem to the presence of the `PageStorageKey`. In both tabs, the tab loses the scroll state. But only Tab 1 that has the `PageStorageKey` has the issue. As you can see in the below recording, as the user continues the scroll process, the text starts oddly animating in from the left. A screen record of the issue can be found here: https://github.com/jpeiffer/pagestoragekey/blob/master/PageStorageKey.mp4?raw=true ## Repro Code ```dart import 'package:flutter/material.dart'; void main() => runApp(MyApp()); class MyApp extends StatelessWidget { @override Widget build(BuildContext context) { return MaterialApp( home: MyHomePage(), ); } } class MyHomePage extends StatelessWidget { MyHomePage(); @override Widget build(BuildContext context) { return DefaultTabController( length: 2, child: Scaffold( appBar: AppBar( title: Text('PageStorageKey'), bottom: TabBar( tabs: <Widget>[ Tab( child: Text('PageScrollKey'), ), Tab( child: Text('No PageScrollKey'), ), ], ), ), body: TabBarView( children: <Widget>[ ListView.builder( key: PageStorageKey('1'), itemCount: 1000, itemBuilder: (BuildContext context, int index) { return ListTile( title: TextFormField( initialValue: '1: Initial text $index', ), ); }, ), ListView.builder( itemCount: 1000, itemBuilder: (BuildContext context, int index) { return ListTile( title: TextFormField( initialValue: '2: Initial text $index', ), ); }, ), ], ), ), ); } } ``` ## Work Around I found adding a `PageStorageKey` to each `TextFormField` and giving it a unique id solved the animation and the doesn't-save-scroll-position problem. That feels like a very non-obvious solution though and doesn't seem like it was intended to be "the right way" to solve this. 
## Flutter Doctor ``` [✓] Flutter (Channel beta, v1.6.3, on Mac OS X 10.14.5 18F132, locale en-US) • Flutter version 1.6.3 at /Users/jpeiffer/flutter • Framework revision bc7bc94083 (3 weeks ago), 2019-05-23 10:29:07 -0700 • Engine revision 8dc3a4cde2 • Dart version 2.3.2 (build 2.3.2-dev.0.0 e3edfd36b2) [✓] Android toolchain - develop for Android devices (Android SDK version 28.0.3) • Android SDK at /Users/jpeiffer/Library/Android/sdk • Android NDK location not configured (optional; useful for native profiling support) • Platform android-28, build-tools 28.0.3 • ANDROID_HOME = /Users/jpeiffer/Library/Android/sdk • Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01) • All Android licenses accepted. [✓] iOS toolchain - develop for iOS devices (Xcode 10.2.1) • Xcode at /Applications/Xcode.app/Contents/Developer • Xcode 10.2.1, Build version 10E1001 • ios-deploy 1.9.4 • CocoaPods version 1.5.3 [✓] Android Studio (version 3.4) • Android Studio at /Applications/Android Studio.app/Contents • Flutter plugin version 35.3.1 • Dart plugin version 183.6270 • Java version OpenJDK Runtime Environment (build 1.8.0_152-release-1343-b01) [✓] VS Code (version 1.35.0) • VS Code at /Applications/Visual Studio Code.app/Contents • Flutter extension version 3.0.2 [✓] Connected device (2 available) • Android SDK built for x86 • emulator-5554 • android-x86 • Android 9 (API 28) (emulator) • macOS • macOS • darwin-x64 • Mac OS X 10.14.5 18F132 • No issues found! ```
framework,a: animation,f: material design,f: scrolling,has reproducible steps,P2,found in release: 3.3,found in release: 3.6,team-design,triaged-design
low
Major
454,486,974
flutter
Add ability to scroll through buttons in BottomNavigationBar
Hi, please add a **draggable** (scrollable) feature to the **Bottom Navigation Bar**, like TabBarView. It's very important for UX. Thanks!
c: new feature,framework,f: material design,P3,team-design,triaged-design
low
Minor
454,567,404
go
cmd/vet: copylock does not warn when "lock_type_var = *func_return_pointer_to_locktype()"
I am using Go 1.12.1. cmd/vet does not report any warning for the example code below:

```go
package main

type noCopy struct{}

func (*noCopy) Lock()              {}
func (*noCopy) Unlock()            {}
func (*noCopy) MuteUnusedWarning() {}

func returnNCP() *noCopy {
	var nc noCopy
	return &nc
}

func main() {
	var nc noCopy
	nc = *returnNCP()
	nc.MuteUnusedWarning()
}
```

I read the code of the analyzer, copylock.go:

```
func lockPathRhs(pass *analysis.Pass, x ast.Expr) typePath {
224     if _, ok := x.(*ast.CompositeLit); ok {
225         return nil
226     }
227     if _, ok := x.(*ast.CallExpr); ok {
228         // A call may return a zero value.
229         return nil
230     }
231     if star, ok := x.(*ast.StarExpr); ok {
232         if _, ok := star.X.(*ast.CallExpr); ok {
233             // A call may return a pointer to a zero value.
234             return nil
235         }
```

My confusion is why line 234 just returns nil without checking the return type of the function (BTW, in my example code, "return &nc" will pass the "checkCopyLocksReturnStmt" check, b/c the return type is a pointer). All of this results in cmd/vet staying silent. Could this case be caught by cmd/vet in the future?
NeedsInvestigation,Analysis
low
Major
454,589,440
rust
What is a trailing expression in a block exactly?
Is it determined syntactically or semantically? Before or after macro expansion? Answering these questions is necessary to specify expansion of macros (stable fn-like ones or unstable attribute ones) in expression and statement positions. The current implementation is sometimes inconsistent. Below I'll be dumping some code examples expanded using different expansion models in hope to come up with some rules that are both self-consistent and backward compatible. cc https://github.com/rust-lang/rust/issues/33953
A-frontend,A-parser,A-macros,T-lang
medium
Major
454,615,476
flutter
setPreferredOrientations([up, down]) doesn't actually allow down
<!-- Thank you for using Flutter! If you are looking for support, please check out our documentation or consider asking a question on Stack Overflow: * https://flutter.dev/ * https://api.flutter.dev/ * https://stackoverflow.com/questions/tagged/flutter?sort=frequent If you have found a bug or if our documentation doesn't have an answer to what you're looking for, then fill our the template below. Please read our guide to filing a bug first: https://flutter.dev/docs/resources/bug-reports --> ``` @override void initState() { super.initState(); SystemChrome.setPreferredOrientations([ DeviceOrientation.portraitUp, DeviceOrientation.portraitDown, ]); } ``` If I use this code. The screen remains in portraitUp but never portraitDown even if I rotate the phone.
platform-android,platform-ios,engine,a: layout,has reproducible steps,P2,found in release: 3.3,found in release: 3.7,team-engine,triaged-engine
low
Critical
454,695,247
vscode
Global regex search with "Not matching character" doesn't match newline
Issue Type: <b>Bug</b>

I was trying to search for

```
Promise\.all\(\[[^\]]*await[^\]]*\]\)
```

but I had no way to turn on multiline support so that `^\]` would match newlines too. So searching for the below would not be found.

```
return await Promise.all([
    await this.someFunction1(),
    await this.someFunction2(),
]);
```

I later put in the below and was happy to find that it suddenly worked, but knew that it would only find it if await was on the first line after the bracket, and also if there were no chars other than newline, space or tab.

```
Promise\.all\(\[[^\]]*[\n\r \t]await[^\]]*\]\)
```

But I then started to find others that actually SHOULDN'T have worked. It was at this point I realised that `^\]` was working as expected the first time, and was matching on newline chars. (Perfect, I removed my tab/space hack.) Only to find it stopped working again. It seems I need at least 1 `\n` in the regex to make it work. My quick fix was to add this at the very end, as such:

```
Promise\.all\(\[[^\]]*await[^\]]*\]\)\n{0}
```

But this is obviously a dirty hack and not an obvious one; it would be far better to test for `Not Matching` and turn multiline back on, similar to whatever test you are doing for searching for `\n` and turning on multiline.

VS Code version: Code 1.35.0 (553cfb2c2205db5f15f3ee8395bbd5cf066d357d, 2019-06-04T01:17:12.481Z)
OS version: Windows_NT x64 10.0.10240

<details>
<summary>System Info</summary>

|Item|Value|
|---|---|
|CPUs|Intel(R) Xeon(R) CPU E5-1620 v3 @ 3.50GHz (8 x 3492)|
|GPU Status|2d_canvas: enabled<br>checker_imaging: disabled_off<br>flash_3d: enabled<br>flash_stage3d: enabled<br>flash_stage3d_baseline: enabled<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>native_gpu_memory_buffers: disabled_software<br>rasterization: enabled<br>surface_synchronization: enabled_on<br>video_decode: enabled<br>webgl: enabled<br>webgl2: enabled|
|Load (avg)|undefined|
|Memory (System)|15.92GB (5.33GB free)|
|Process Argv||
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (19)</summary> Extension|Author (truncated)|Version ---|---|--- markdown-preview-github-styles|bie|0.1.6 npm-intellisense|chr|1.3.0 ssh|chr|0.0.4 vue-peek|dar|1.0.2 vscode-eslint|dba|1.9.0 vscode-ts-auto-return-type|ebr|1.0.1 tslint|eg2|1.0.43 RunOnSave|eme|0.0.18 prettier-vscode|esb|1.9.0 todo-tree|Gru|0.0.134 node-module-intellisense|lei|1.5.0 camelcasenavigation|map|1.0.1 rainbow-csv|mec|1.1.1 node-modules-resolve|nau|1.0.2 uuid-generator|net|0.0.4 vetur|oct|0.21.0 gitconfig|sid|2.0.0 open-in-browser|tec|2.0.0 sort-lines|Tyr|1.8.0 </details> <!-- generated by issue reporter -->
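As a side note, the behavior the reporter relies on — a negated character class such as `[^\]]` matching newline characters even without any multiline/dotall flag — is standard regex semantics. A quick Python check (illustrating the general rule, not VS Code's own Rust-based search engine):

```python
import re

text = """return await Promise.all([
    await this.someFunction1(),
    await this.someFunction2(),
]);"""

# A negated class matches any character except those listed, including
# '\n', so no multiline or dotall flag is needed here.
pattern = re.compile(r"Promise\.all\(\[[^\]]*await[^\]]*\]\)")
match = pattern.search(text)
```

If `[^\]]` did not match newlines, this search would fail on the multi-line sample above.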
feature-request,search
medium
Critical
454,741,351
kubernetes
Upgrade tests do not check pod instances and restart counts are identical for workload objects
**What happened**: An upgrade of only the API server should not result in pod recreations or container restarts. Today, this can accidentally be triggered by introducing new defaults to types within PodSpec (c.f. https://github.com/kubernetes/kubernetes/pull/69988, https://github.com/kubernetes/kubernetes/issues/69445, https://github.com/kubernetes/kubernetes/issues/78633). The workload upgrade tests do not currently check if pod instances and container restart counts are identical. **What you expected to happen**: At the end of the upgrade test setup step, record the pod instances for the workload object in question, their uids, container restart counts, and associated node names and versions. In the verification step post-upgrade, if the associated nodes still exist at the same versions, verify the same pod instances still exist with the same container restart counts. For bonus points, an unrelated update (e.g. adding an annotation to the workload object) should be performed, to verify that no defaults are added in the update path that would result in a spurious rollout of new pods. (note that test/e2e/upgrades tests are not currently running at all, see https://github.com/kubernetes/kubernetes/issues/94487) /sig testing /sig apps /priority important-soon
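A minimal sketch of the proposed verification step, in plain Python with made-up pod records rather than the real e2e framework types: snapshot uid → total restart count before the upgrade, then report any recreated pods or changed restart counts afterwards.

```python
def snapshot(pods):
    """Map pod uid -> total container restart count."""
    return {p["uid"]: sum(c["restartCount"] for c in p["containerStatuses"])
            for p in pods}

def verify_no_disruption(before, after_pods):
    """Return human-readable violations: recreated pods or container restarts."""
    after = snapshot(after_pods)
    violations = []
    for uid, restarts in before.items():
        if uid not in after:
            violations.append(f"pod {uid} was recreated")
        elif after[uid] != restarts:
            violations.append(f"pod {uid} restart count changed: "
                              f"{restarts} -> {after[uid]}")
    return violations
```

In the real test this check would apply only to pods whose node still exists at the same version, as described above, and node name/version would be recorded alongside the uid.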
kind/bug,priority/important-soon,sig/apps,sig/testing,lifecycle/frozen
medium
Major
454,741,527
go
cmd/compile: similar returns not optimized
### What version of Go are you using (`go version`)? <pre> $ go version go version devel +323212b Sun Jun 9 16:23:11 2019 +0000 linux/amd64 </pre> ### Does this issue reproduce with the latest release? Yes. ### What did you do? I compiled the following function: ```go func f1(x int, b bool) int { if b { return x } return x } ``` ### What did you expect to see? I expected it to be compiled to a move and a return, given that the return value is always the same. ### What did you see instead? Instead, it was compiled to: ```asm movblzx "".b+16(SP), AX testb AL, AL jeq f1_pc20 movq "".x+8(SP), AX movq AX, "".~r2+24(SP) ret f1_pc20: movq "".x+8(SP), AX movq AX, "".~r2+24(SP) ret ``` I think the compiler should recognize both cases to be the same. Note: if this function is called from another one, the optimization is applied. For example: ```go func f2(x int, b bool) int { return f1(x, b) } ``` is compiled to ```asm movq "".x+8(SP), AX movq AX, "".~r2+24(SP) ret ```
Performance,NeedsFix,compiler/runtime
low
Minor
454,774,533
flutter
[web] Flush handleBeginFrame microtasks before handleDrawFrame
Flutter for web deviates from Flutter mobile in that it does not flush microtasks scheduled by `handleBeginFrame` before calling `handleDrawFrame`. This is done so we do not split the request animation frame into two separate VM events; animation frame event must be synchronous. Relevant code: https://github.com/flutter/flutter_web/blob/3db7f7cf1da54a7a2a5e490b8587addd7d7e892f/packages/flutter_web_ui/lib/src/engine.dart#L150 Flutter Dart embedder controls the event loop, and for this particular case ensures that `handleBeginFrame` and `handleDrawFrame` are glued to each other and behave like events. ## Short-term solution 1. Run `handleBeginFrame` in a zone, capture all microtasks and flush them manually. 1. Schedule `handleDrawFrame` in a microtask. The second step is added for extra protection from code that reaches into `Zone.root`/`Zone.parent` and schedules microtasks outside the zone created in step 1. It's not a 100% solution though, because if root zone microtasks schedule more microtasks (e.g. by calling `await` multiple times), they will still be executed after `handleDrawFrame`. ## Long-term solution Dart compilers (cc @vsmenon) give us a way to fully control the event loop, such that users cannot bypass it via `Zone.root`.
c: crash,framework,c: API break,platform-web,P2,team-web,triaged-web
low
Major
454,816,902
vscode
The new application icon lacks resolution on HiDPI systems
- VSCode Version: 1.35.0 - OS Version: Win 10 Pro x64 1809 build 17763.503 Steps to Reproduce: 1. Pin VSCode to the Start menu on a HiDPI system (for instance at 3840×2160 resolution with 200% custom scaling); 2. Open the start menu and observe that the icon appears blurry due to insufficient resolution The VSCode and VSCode Insider icons appear blurry and suffer from upscaling artefacts: ![Untitled](https://user-images.githubusercontent.com/4053575/59293272-d7f3e800-8c4c-11e9-81a6-94c80c95d81a.png)
bug,windows,icon-brand
low
Major
454,836,027
flutter
The DraggableScrollableSheet to go all the way up/down when fling/dragged.
Hi, I want the list, when it's draggable, to go all the way up or down as I fling or drag. Now in the DraggableScrollableNotification I only get the currentExtent, which I believe is not enough data to implement that functionality. There is no way to set the currentExtent either. It would be nice if (possible) we could get all the details of the drag like we do in the GestureDetector. If there is a way this can be implemented with the current implementation, please help me out on this. Thanks, Sourabh
c: new feature,framework,f: material design,P3,team-framework,triaged-framework
low
Minor
454,842,408
pytorch
Batch Dataloader and Dataset
## 🚀 Feature A dataloader and dataset that operate at the batch level, rather than the item level, pulling batches from contiguous blocks of memory and avoiding random access patterns in the dataloader. ## Motivation Loading data item by item and collating into a batch is very inefficient, particularly in the case of tabular or text data where the items are small. This is compounded further when you want to use large batch sizes. By pre-shuffling the data each epoch (when required) we can grab each batch as a single read from contiguous memory. This is much faster and scales better with batch size, removing the necessity of multiprocessing, which adds complexity in the form of bus errors when not enough shared memory is available (https://github.com/pytorch/pytorch/issues/5040), CUDA init issues when forking (https://github.com/pytorch/pytorch/issues/4377), etc. This forking issue was one of my original motivations, as it solves the issue of using the dataloader in conjunction with RAPIDS or any other code that calls CUDA before the dataloader workers are forked. It should also solve the issue on Windows with the speed of dataloaders, at least for tabular and text data (https://github.com/pytorch/pytorch/issues/12831), as spawning is not necessary. Using the proposed method results in better GPU utilization and better throughput when training, in the tests on tabular data that I've run. With no multiprocessing I've measured a 5-15% improvement* in throughput over an 8-worker vanilla dataloader (more were tried but it maxed out at 8). I've also been able to increase batch sizes for tabular data into the 800K+ range with no loss of accuracy and get a 2x performance improvement over the best multiprocessing dataloader I could run without hitting the bus error issues that cropped up with large batch sizes. 
*depends on tensor and batch size ## Pitch I've created source for a batch dataloader and batch dataset modelled after their vanilla counterparts and would love to see it integrated into the PyTorch repo. Usage is similar, and I've tried to stick to the PyTorch variable naming and formatting. Code can be found here: https://github.com/rapidsai/dataloaders/tree/master/pytorch/batch_dataloader It should hopefully be ready to go; I've tested it with both base PyTorch and with Ignite, but more eyes on it would definitely be beneficial, particularly in use cases beyond tabular, like text or small images. It should be applicable to anyone who isn't doing large images or a lot of image augmentation. It's undergone an internal (NVIDIA) review by @ptrblck, who was immensely helpful in refining it, and @ngimel, who reviewed the codebase and had helpful suggestions regarding memory pinning. I'm happy to work with the team to create test cases similar to those for dataset and dataloader and would love feedback on it. ## Alternatives One possible solution to the CUDA init before fork issue is to spawn; however, as seen on Windows, this is significantly slower, and I had trouble getting it working. ## Additional context I'm also working on versions of this that work with larger-than-CPU-memory datasets and on a version that works in GPU memory doing a 0-copy transform of a RAPIDS cudf dataframe via dlpack.
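As a rough illustration of the core idea (plain Python lists standing in for tensors; `BatchDataset` here is a hypothetical sketch, not the API from the linked repo): shuffle once per epoch, then serve each batch as a single contiguous slice instead of collating individually loaded items.

```python
import random

# Hypothetical sketch: pre-shuffle once per epoch, then read each batch
# as one contiguous slice -- no per-item random access, no collate step.

class BatchDataset:
    def __init__(self, data, batch_size, shuffle=True):
        self.data = list(data)
        self.batch_size = batch_size
        self.shuffle = shuffle

    def __len__(self):
        # number of batches, counting the final partial batch
        return (len(self.data) + self.batch_size - 1) // self.batch_size

    def __iter__(self):
        if self.shuffle:
            random.shuffle(self.data)  # one shuffle per epoch
        for start in range(0, len(self.data), self.batch_size):
            yield self.data[start:start + self.batch_size]


batches = list(BatchDataset(range(10), batch_size=4, shuffle=False))
# -> [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

With real tensors, the slice would be one contiguous memory read, which is where the throughput gain described above comes from.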
feature,module: dataloader,triaged
low
Critical
454,846,961
TypeScript
Suggestion: extract to function expression
<!-- 🚨 STOP 🚨 𝗦𝗧𝗢𝗣 🚨 𝑺𝑻𝑶𝑷 🚨 Half of all issues filed here are duplicates, answered in the FAQ, or not appropriate for the bug tracker. Please read the FAQ first, especially the "Common Feature Requests" section. --> ## Search Terms <!-- List of keywords you searched for before creating this issue. Write them down here so that others can find this suggestion more easily --> function expression declaration extract method refactoring code fixes ## Suggestion "Extract to function" is very useful, but it only supports generating function declarations. Some people (such as me) prefer to use function expressions instead of declarations. For example: ``` ts const fn = () => { const foo = 1; const bar = foo + 2 } ``` If I select line 3 and extract to function, I get ``` ts const fn = () => { const foo = 1; const bar = newFunction(foo) } function newFunction(foo: number) { return foo + 2; } ``` I want: ``` ts const newFunction = (foo: number) => foo + 2; const fn = () => { const foo = 1; const bar = newFunction(foo) } ``` Some ideas: - An option to toggle between expressions/declarations - Separate refactorings for both expressions and declarations ## Use Cases <!-- What do you want to use this for? What shortcomings exist with current approaches? --> ## Examples <!-- Show how this would be used and what the behavior would be --> ## Checklist My suggestion meets these guidelines: * [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code * [x] This wouldn't change the runtime behavior of existing JavaScript code * [x] This could be implemented without emitting different JS based on the types of the expressions * [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.) * [x] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
Suggestion,Awaiting More Feedback
low
Critical
454,942,741
TypeScript
Performance Microbenchmark Infrastructure
(Ron to provide details on scenarios to support)
Infrastructure
low
Major
454,944,138
three.js
setProjectionFromUnion makes incorrect assumptions
The comment from `setProjectionFromUnion` states: > Assumes 2 cameras that are parallel and share an X-axis For Magic Leap devices, that is not the case and because of this, this function calculates incorrect values which results in the scene not rendering correctly. It's observable by virtually pinning content to a wall or whiteboard and then walking away. You will then see that the content will rotate or move from its location. [WebXR](https://www.w3.org/TR/webxr/#xrview-interface) specifies that projection matrices are not supposed to be decomposed: > It is strongly recommended that applications use this matrix without modification or decomposition. Failure to use the provided projection matrices when rendering may cause the presented frame to be distorted or badly aligned, resulting in varying degrees of user discomfort.
Bug
low
Critical
454,949,399
flutter
Add2app: When moving Flutter module the host project will no longer compile on iOS
When using "create module -t" for a Flutter project it creates a file <project>/.ios/Flutter/Generated.xcconfig and inside of it is an environment variable FLUTTER_APPLICATION_PATH which has the absolute path to the project. (example FLUTTER_APPLICATION_PATH=/Users/aaclarke/dev/NavFlutter/fluttermod=/Users/aaclarke/dev/NavFlutter/fluttermod). If you move your project to a different directory and compile your app, it will no longer compile, producing errors like: "The path /Users/aaclarke/dev/NavFlutter/fluttermod does not exist" There isn't an obvious way to fix this for users. We could create a tool that will refresh this xcconfig file, or we could store the path relative to the .xcodeproject file since that's less likely to change. When we can't find FLUTTER_APPLICATION_PATH we could also suggest the user refreshes the xcconfig.
platform-ios,tool,a: existing-apps,P3,team-ios,triaged-ios
low
Critical
454,957,594
rust
`impl Trait` with multiple lifetimes imposes a strange lifetime requirement
Consider the [following code](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=4973e923873462ad1d3630ffbd463cd6): ```rust pub struct Store; impl Store { fn scan<'a>(&'a self) -> Box<dyn Iterator<Item = u64> + 'a> { panic!() } } pub struct Transaction<'a> { kv: &'a Store, reads: Vec<u64>, } impl<'a> Transaction<'a> { pub fn scan(&mut self) -> impl Iterator<Item = u64> + 'a + '_ { let iter = self.kv.scan(); iter.map(move |k| { self.reads.push(k); k }) } } ``` Compiling with `#![feature(nll)]` yields the following error: ``` error: lifetime may not live long enough --> src/lib.rs:24:9 | 18 | impl<'a> Transaction<'a> { | -- lifetime `'a` defined here 19 | pub fn scan(&mut self) -> | - let's call the lifetime of this reference `'1` ... 24 | / iter.map(move |k| { 25 | | self.reads.push(k); 26 | | k 27 | | }) | |__________^ returning this value requires that `'1` must outlive `'a` error: aborting due to previous error ``` That is, it's requiring that the reference to `Transaction` outlive `'a`. Unfortunately, it doesn't explain at all why that requirement was imposed. Also, just returning `+ '_` complains: ``` error[E0700]: hidden type for `impl Trait` captures lifetime that does not appear in bounds --> src/lib.rs:20:9 | 20 | impl Iterator<Item = u64> + '_ | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | note: hidden type `std::iter::Map<std::boxed::Box<dyn std::iter::Iterator<Item = u64>>, [closure@src/lib.rs:24:18: 27:10 self:&mut Transaction<'a>]>` captures the lifetime 'a as defined on the impl at 18:6 --> src/lib.rs:18:6 | 18 | impl<'a> Transaction<'a> { | ^^ ``` This is odd to me because if we return `Box<dyn Iterator<Item = u64> + '_>`, everything works fine (and nobody complains about `'a`). Furthermore, the equivalent to the `impl Trait` using an explicit `existential type TransactionScan<'a, 'b>: Iterator<Item = u64>;` works fine as well. This may be related to/a dupe of #49431, but the errors produced are different so I'm not sure.
C-enhancement,A-diagnostics,A-lifetimes,T-compiler,A-NLL,A-impl-trait
low
Critical
454,970,402
godot
Issue with AnimationTree AutoAdvance property
**Godot version:** v3.1.1.stable.official **OS/device including version:** Linux 4.15.0-51-generic (Linux Mint 19.1) GPU: Radeon HD 8870M Driver: amdgpu Renderer: GLES2 **Issue description:** The auto_advance property of AnimationNodeStateMachineTransition ignores the "condition" property (as described [here](http://docs.godotengine.org/en/latest/classes/class_animationnodestatemachinetransition.html#class-animationnodestatemachinetransition-property-advance-condition)). **Steps to reproduce:** * Using a 3D model with some animations * Create a StateMachineTree with the simplest configuration. * Add transitions * Configure it to auto_advance * Add conditions * Change conditions using script **Images:** ![configuration](https://user-images.githubusercontent.com/1191611/59316623-bf5bf000-8c95-11e9-9998-b8ae7363a3cf.gif) ![script](https://user-images.githubusercontent.com/1191611/59316644-d39fed00-8c95-11e9-8873-414e6a3b829b.png) ![output](https://user-images.githubusercontent.com/1191611/59316648-ddc1eb80-8c95-11e9-878e-f3aac933f60f.png) My entire project is attached ([action_rpg_prototype.tar.gz](https://github.com/godotengine/godot/files/3279156/action_rpg_prototype.tar.gz)) to help debug. Check scene /characters/knight/knight.tscn and to play use /assets/manager/MainScene.tscn
bug,topic:core
low
Critical
455,024,860
terminal
Search is often broken while console process is running
# Environment ```none Windows build number: 10.0.18362.113 Windows Terminal version (if applicable): Any other software? ``` # Steps to reproduce Have a long-running process (e.g. msbuild), and Ctrl+F to find something. Often the match will be off by one line. # Expected behavior I can find the text properly # Actual behavior The text that gets selected when searching is not the right text
Product-Conhost,Help Wanted,Area-Interaction,Issue-Bug,Priority-3
low
Critical
455,029,085
flutter
Generate flavor Xcode build setting (like FLUTTER_FLAVOR=dev)
<!-- Thank you for using Flutter! If you are looking for support, please check out our documentation or consider asking a question on Stack Overflow: * https://flutter.dev/ * https://api.flutter.dev/ * https://stackoverflow.com/questions/tagged/flutter?sort=frequent If you have found a bug or if our documentation doesn't have an answer to what you're looking for, then fill out the template below. Please read our guide to filing a bug first: https://flutter.dev/docs/resources/bug-reports --> ## Use case <!-- Please tell us the problem you are running into that led to you wanting a new feature. Is your feature request related to a problem? Please give a clear and concise description of what the problem is. Describe alternative solutions you've considered. Is there a package on pub.dev/flutter that already solves this? --> I think adding flavor info to Generated.xcconfig is the best way to solve iOS flavors. ## Proposal <!-- Briefly but precisely describe what you would like Flutter to be able to do. Consider attaching images showing what you are imagining. Does this have to be provided by Flutter directly, or can it be provided by a package on pub.dev/flutter? If so, maybe consider implementing and publishing such a package rather than filing a bug. -->
c: new feature,platform-ios,tool,t: xcode,P2,team-ios,triaged-ios
low
Critical
455,030,286
pytorch
[FR] Diagonal Transform for Distributions
## 🚀 Feature Add a transform function in distributions that only transforms diagonal elements of matrix valued random variables, similar to the TransformDiagonal bijector in TensorFlow Probability. ## Motivation Important for reparametrizing random objects defined on the space of covariance matrices (e.g. Inverse Wishart distribution) to unconstrained space (first, take Cholesky, then transform the diagonal only as proposed here with log transform).
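A minimal sketch of the requested behavior, using plain Python lists in place of tensors (in the real feature this would presumably be a `torch.distributions.transforms.Transform` subclass; the helper name here is hypothetical):

```python
import math

# Hypothetical diagonal-only transform: apply fn to the diagonal entries
# of a matrix, leaving the off-diagonal entries untouched.

def transform_diagonal(matrix, fn):
    out = [row[:] for row in matrix]  # copy, don't mutate the input
    for i in range(min(len(out), len(out[0]))):
        out[i][i] = fn(out[i][i])
    return out

# e.g. log-transform the diagonal of a Cholesky factor to map it to
# unconstrained space, as described in the motivation above:
chol = [[math.e, 0.0], [0.5, 1.0]]
unconstrained = transform_diagonal(chol, math.log)
# diagonal mapped through log; off-diagonal entries unchanged
```

The inverse would apply `exp` on the diagonal, which is what makes the reparametrization bijective on the set of Cholesky factors with positive diagonal.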
module: distributions,feature,triaged
low
Minor
455,032,745
flutter
Custom ScrollPhysics breaks NestedScrollView
## Steps to Reproduce I have a small app here: https://github.com/maks/flutter-space that demonstrates the issue. It's basically just the example code from the [NestedScrollView documentation](https://api.flutter.dev/flutter/widgets/NestedScrollView-class.html) but using a custom Physics class that just uses the ScrollSpringSimulation to try to force the AppBar to always snap back to the fully expanded position. ## What I expect to happen: The AppBar always snaps back to the fully expanded position, using: ``` ScrollSpringSimulation(spring, position.pixels, target, 0, tolerance: tolerance); ``` ## What actually happens: When the user fling is low velocity or the scroll gesture is slow, the AppBar behaves as expected. BUT if there is a fast enough fling or rapid scroll gesture, it stops the processing of the physics simulation for some reason, leaving the AppBar "stuck" in a partially expanded state. Touching anywhere causes the simulation to be run again and then it correctly scrolls back to the fully expanded position. I've observed this on Android and iOS, and explicitly set the physics for the "inner" CustomScrollView to ClampingScrollPhysics just to demonstrate the issue more easily on iOS without running into other issues such as #33367, though that issue may be somewhat related to this one. ![sliverappbar-scroll-physics-issue](https://user-images.githubusercontent.com/71999/59327405-c5e97680-8d2c-11e9-83ea-4e0eacca9d98.gif) ## Logs ``` flutter doctor Doctor summary (to see all details, run flutter doctor -v): [✓] Flutter (Channel stable, v1.5.4-hotfix.2, on Linux, locale en_AU.UTF-8) [✓] Android toolchain - develop for Android devices (Android SDK version 28.0.3) [✓] Android Studio (version 3.4) [✓] Android Studio (version 3.3) [✓] VS Code (version 1.35.0) [✓] Connected device (1 available) • No issues found! ```
framework,a: animation,f: material design,f: scrolling,has reproducible steps,P2,found in release: 3.3,found in release: 3.7,team-design,triaged-design
low
Major
455,064,648
flutter
Dismissible ListTile Overflow
Dismissible list tiles will overflow when there are two lists in a row layout. Steps to reproduce: 1. Put two ListViews (wrapped in Expanded) next to each other in a row layout. 2. Wrap ListTiles in a Dismissible. 3. Now if you swipe a tile in the second list to the left, the tile's content will overflow its list and cover parts of the first list.
framework,f: material design,a: quality,has reproducible steps,P2,found in release: 2.5,found in release: 2.6,team-design,triaged-design
low
Major
455,073,500
create-react-app
add support for FBT
Facebook's [FBT translations library](https://github.com/facebookincubator/fbt) seems very promising. I was wondering if we could somehow add optional support for it in `create-react-app` projects. Right now, in order to use it, a project must be ejected to add the proper Babel configuration. If that's something you find reasonable to support, I'll be more than happy to write a PR for it, given the right guidance on how you would approach adding such support. I wonder if we can come up with something similar to the relay-macros solution. Thanks
tag: new feature
low
Major
455,078,720
puppeteer
Lower-level postData
`postData` only takes a string, and I think that is too high-level an abstraction for me. I would like to try posting binary data, preferably with ArrayBuffer, ArrayBufferView or, better yet, a byte stream, so I'm able to pipe a large file. ```js page.on('request', request => { const overrides = {} if (request.url === 'https://httpbin.org/post') { overrides.method = 'POST' overrides.postData = stream || uint8Array || string } request.continue(overrides) }) ``` It would be awesome if one could utilize `node-fetch`'s (or any other WHATWG fetch) Request class, which follows a spec, and have something that also follows the service worker request-interception naming convention ```js const fetch = require('node-fetch') const { Request, Headers } = fetch // Similar to service worker `evt.respondWith` (but for requests) // can pass in a promise that resolves to a request also (just like service worker respondWith) something.requestWith( new Request(body, { method, headers }) ) ```
feature,confirmed
low
Major
455,130,052
rust
unhelpful "expected item" error when putting let binding outside of function
````rust let s = String::new(); fn main() { println!("Hello world"); } ```` ```` Compiling playground v0.0.1 (/playground) error: expected item, found keyword `let` --> src/main.rs:1:1 | 1 | let s = String::new(); | ^^^ expected item ```` Could we have some suggestions about what kind of item should be put there? :) Also, there seems to be no error code, which is strange. `rustc 1.37.0-nightly (02564de47 2019-06-10)`
C-enhancement,A-diagnostics,A-parser,T-compiler,D-newcomer-roadblock
low
Critical
455,148,953
go
x/mobile: gomobile bind: add flag to disable package name prefix in iOS classes
<!-- Please answer these questions before submitting your issue. Thanks! --> ### What version of Go are you using (`go version`)? <pre> $ go version go version go1.12.5 darwin/amd64 </pre> ### Does this issue reproduce with the latest release? Yes ### What operating system and processor architecture are you using (`go env`)? <details><summary><code>go env</code> Output</summary><br><pre> $ go env GOARCH="amd64" GOBIN="" GOCACHE="/Users/ealymbaev/Library/Caches/go-build" GOEXE="" GOFLAGS="" GOHOSTARCH="amd64" GOHOSTOS="darwin" GOOS="darwin" GOPATH="/Users/ealymbaev/go" GOPROXY="" GORACE="" GOROOT="/usr/local/Cellar/go/1.12.5/libexec" GOTMPDIR="" GOTOOLDIR="/usr/local/Cellar/go/1.12.5/libexec/pkg/tool/darwin_amd64" GCCGO="gccgo" CC="clang" CXX="clang++" CGO_ENABLED="1" GOMOD="/Users/ealymbaev/projects/go-fee-rate-kit/go.mod" CGO_CFLAGS="-g -O2" CGO_CPPFLAGS="" CGO_CXXFLAGS="-g -O2" CGO_FFLAGS="-g -O2" CGO_LDFLAGS="-g -O2" PKG_CONFIG="pkg-config" GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/k6/bvkt3fss5y16k_y3vyftfq1h0000gp/T/go-build635284137=/tmp/go-build -gno-record-gcc-switches -fno-common" </pre></details> When binding an iOS framework with: ```gomobile bind -target ios some-package``` in the generated header files, all classes DO have the package name as a prefix. I tried using the `-prefix` flag, but it adds an additional prefix to the class names. I need a way to remove the package name prefix, so the class names remain exactly the same as in Go code.
NeedsInvestigation,FeatureRequest,mobile
low
Critical
455,208,774
TypeScript
Add new `sourceRootDir` setting
<!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. --> **TypeScript Version:** 3.5.1 <!-- Search terms you tried before logging this (so others can find this issue more easily) --> **Search Terms:** sourceRoot, sources, sourcemap **Code** 1. Create a module like: `src/nested/foo.ts` 2. Use the following `tsconfig.json` with tsc: ```json { "include": ["src"], "compilerOptions": { "baseUrl": ".", "outDir": "dist", "sourceMap": true, "sourceRoot": "src" } } ``` **Expected behavior:** Inside `nested/foo.js.map`, the `sourceRoot` should be `../src/` instead of `src/`. Otherwise, the paths in the `sources` array cannot be found. **Actual behavior:** The `sourceRoot` is `src/` regardless of nesting. **Playground Link:** <!-- A link to a TypeScript Playground "Share" link which demonstrates this behavior --> N/A **Related Issues:** <!-- Did you find other bugs that looked similar? --> None
Suggestion,Awaiting More Feedback
medium
Critical
455,221,789
pytorch
C++ module API footgun: assigning to parameter doesn't update `parameters()` list
In https://github.com/pytorch/pytorch/issues/21679#issue-455182954 I saw a user do this: ``` auto m = torch::nn::BatchNorm(torch::nn::BatchNormOptions(2)); m->weight = torch::from_blob(&data[0][0],{2}); m->bias = torch::from_blob(&data[0][2],{2}); m->running_mean = torch::from_blob(&data[0][4],{2}); m->running_var = torch::from_blob(&data[0][6],{2}); ``` This is bad; you shouldn't do it. The reason is that you have created a new tensor and stuck it on the parameter slot, but the *registered* parameter (stored in the parameters vector) isn't updated in this case. I didn't see anything documented against this.
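The footgun can be illustrated in a few lines of Python (hypothetical `Module`/`Tensor` classes, not the libtorch API). The registered parameter list holds a reference to the original tensor object, so rebinding the attribute leaves the registry stale; the safe pattern is to mutate the registered tensor in place (in C++ that would be something like `tensor.copy_()` under a no-grad guard, though the exact idiom depends on the API version).

```python
# Hypothetical sketch of why attribute reassignment desyncs parameters().

class Tensor:
    def __init__(self, data):
        self.data = data
    def copy_(self, other):
        # in-place update: mutates this object, so registered refs stay valid
        self.data = list(other.data)

class Module:
    def __init__(self):
        self._parameters = {}
    def register_parameter(self, name, tensor):
        self._parameters[name] = tensor
        setattr(self, name, tensor)
    def parameters(self):
        return list(self._parameters.values())

m = Module()
m.register_parameter("weight", Tensor([0.0, 0.0]))

# BAD: rebinds the attribute only; parameters() still holds the old tensor.
m.weight = Tensor([1.0, 2.0])
stale = m.parameters()[0].data        # still [0.0, 0.0]

# OK: mutate the registered tensor in place instead.
m.register_parameter("weight", Tensor([0.0, 0.0]))
m.weight.copy_(Tensor([1.0, 2.0]))
fresh = m.parameters()[0].data        # now [1.0, 2.0]
```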
module: docs,triaged
low
Minor
455,254,184
pytorch
The cuda problem in caffe2
Hi I install Caffe2 with Python3.6.8, cuda 9.0.176, cudnn 7 ,Driver Version: 384.130 OS ubuntu 17.10 Caffe2 install use: conda install pytorch-nightly cuda90 -c pytorch when I try to run maskrcnn demo as below the "CUDA driver version is insufficient for CUDA runtime version" error occurs . pls help me! Thanks a lot! python tools/infer_simple.py \ --cfg configs/12_2017_baselines/e2e_mask_rcnn_R-101-FPN_2x.yaml \ --output-dir /tmp/detectron-visualizations \ --image-ext jpg \ --wts https://dl.fbaipublicfiles.com/detectron/35861858/12_2017_baselines/e2e_mask_rcnn_R-101-FPN_2x.yaml.02_32_51.SgT4y1cO/output/train/coco_2014_train:coco_2014_valminusminival/generalized_rcnn/model_final.pkl \ demo THE ERROR DETAIL : ``` WARNING cnn.py: 25: [====DEPRECATE WARNING====]: you are creating an object from CNNModelHelper class which will be deprecated soon. Please use ModelHelper object with brew module. For more information, please refer to caffe2.ai and python/brew.py, python/brew_test.py for more information. 
INFO net.py: 60: Loading weights from: /tmp/detectron-download-cache/35861858/12_2017_baselines/e2e_mask_rcnn_R-101-FPN_2x.yaml.02_32_51.SgT4y1cO/output/train/coco_2014_train:coco_2014_valminusminival/generalized_rcnn/model_final.pkl INFO net.py: 96: conv1_w loaded from weights file into gpu_0/conv1_w: (64, 3, 7, 7) Traceback (most recent call last): File "/home/bruce/detectron/tools/infer_simple.py", line 185, in <module> main(args) File "/home/bruce/detectron/tools/infer_simple.py", line 135, in main model = infer_engine.initialize_model_from_cfg(args.weights) File "/home/bruce/detectron/detectron/core/test_engine.py", line 329, in initialize_model_from_cfg model, weights_file, gpu_id=gpu_id, File "/home/bruce/detectron/detectron/utils/net.py", line 112, in initialize_gpu_from_weights_file src_blobs[src_name].astype(np.float32, copy=False)) File "/home/bruce/anaconda3/envs/maskrcnn/lib/python3.6/site-packages/caffe2/python/workspace.py", line 352, in FeedBlob return _Workspace_feed_blob(ws, name, arr, device_option) File "/home/bruce/anaconda3/envs/maskrcnn/lib/python3.6/site-packages/caffe2/python/workspace.py", line 711, in _Workspace_feed_blob return ws.create_blob(name).feed(arr, device_option) File "/home/bruce/anaconda3/envs/maskrcnn/lib/python3.6/site-packages/caffe2/python/workspace.py", line 741, in _Blob_feed return blob._feed(arg, device_option) RuntimeError: [enforce fail at common_gpu.cc:98] error == cudaSuccess. 35 vs 0. 
Error at: /opt/conda/conda-bld/pytorch-nightly_1560316055483/work/caffe2/core/common_gpu.cc:98: CUDA driver version is insufficient for CUDA runtime version frame #0: c10::ThrowEnforceNotMet(char const*, int, char const*, std::string const&, void const*) + 0x59 (0x7f34ba1a08a9 in /home/bruce/anaconda3/envs/maskrcnn/lib/python3.6/site-packages/caffe2/python/../../torch/lib/libc10.so) frame #1: caffe2::CaffeCudaGetDevice() + 0x8f6 (0x7f346effedc6 in /home/bruce/anaconda3/envs/maskrcnn/lib/python3.6/site-packages/caffe2/python/../../torch/lib/libcaffe2_gpu.so) frame #2: <unknown function> + 0x2cf8165 (0x7f3470847165 in /home/bruce/anaconda3/envs/maskrcnn/lib/python3.6/site-packages/caffe2/python/../../torch/lib/libcaffe2_gpu.so) frame #3: <unknown function> + 0x68eb7 (0x7f34ba8c2eb7 in /home/bruce/anaconda3/envs/maskrcnn/lib/python3.6/site-packages/caffe2/python/caffe2_pybind11_state_gpu.cpython-36m-x86_64-linux-gnu.so) frame #4: <unknown function> + 0x6a1ab (0x7f34ba8c41ab in /home/bruce/anaconda3/envs/maskrcnn/lib/python3.6/site-packages/caffe2/python/caffe2_pybind11_state_gpu.cpython-36m-x86_64-linux-gnu.so) ```
caffe2
low
Critical
455,258,923
react-native
FlatList automatically scrolls after adding new items
FlatList automatically scrolls after the data changes and new items are added to the front or middle of the data list. It only works correctly when a new item is added to the end of the list. I checked the scroll offset and found that FlatList scrolls to keep the latest content Y offset. In other words, when the content size changes, the latest Y offset is no longer where the user was before, but FlatList still scrolls to it. React Native version: react-native: 0.59.4 System: OS: Windows 7 CPU: (2) x64 Intel(R) Pentium(R) CPU G4400 @ 3.30GHz Memory: 310.09 MB / 3.87 GB Binaries: Yarn: 1.15.2 - C:\Program Files (x86)\Yarn\bin\yarn.CMD npm: 6.9.0 - C:\Program Files\nodejs\npm.CMD IDEs: Android Studio: Version 3.2.0.0 AI-181.5540.7.32.5056338 **Describe what you expected to happen:** It shouldn't scroll to a new position when I add new items to the list; it should keep the latest position where the user was **Example code:** <FlatList inverted style={{flex: 1}} data={this.data} keyExtractor={(item, index) => item.id} renderItem={this.renderItem} ref={ref => this.flatList = ref} /> Code that adds new items to the list: this.data = [ ...newItems, ...this.data ];
Component: FlatList,Bug
high
Critical
455,270,629
rust
`impl_trait_in_bindings` and pick-constraint region bounds
I'm working on https://github.com/rust-lang/rust/issues/56238. In the process, I'm extending how `impl Trait` lifetime inference works -- in particular in scenarios involving multiple, unrelated lifetimes, such as `impl Trait<'a, 'b>`. The challenge here is that each region `'h` in the hidden type must be equal to `'a` or `'b`, but we can't readily express that relationship in terms of our usual "outlives relationships". The solver is thus extended with a "pick constraint", written `pick 'h from ['a, 'b]`, which expresses that `'h` must be equal to `'a` or `'b`. The current integration into the solver, however, requires that the regions involved are lifetime parameters. This is always true for `impl Trait` used at function boundaries, but it is not true for let bindings. The challenge is that if you have a program like: ```rust #![feature(impl_trait_in_bindings)] trait Foo<'a> { } impl Foo<'_> for &u32 { } fn main() { let _: impl Foo<'_> = &44; // let's call the region variable for `'_` `'1` } ``` then we would wind up with `pick '0 from ['1, 'static]`, where `'0` is the region variable in the hidden type (`&'0 u32`) and `'1` is the region variable in the bounds `Foo<'1>`. This is tricky because both `'0` and `'1` are being inferred -- so making them equal may have other repercussions. For the time being, I've chosen to include some assertions that this scenario never comes up. I'm tagging a FIXME in the code with this issue number. I was going to create some tests, but owing to the ICE https://github.com/rust-lang/rust/issues/60473 (not unrelated to this issue, actually), that proved difficult, so I'll just post comments in here instead.
T-lang,T-compiler,A-impl-trait,F-member_constraints,F-impl_trait_in_bindings,requires-nightly
low
Major
455,311,339
pytorch
Batched Conv2d for sequence data
## 🚀 Feature A module that allows for batched 2d convolutions on sequence data. Essentially, a module that takes as input a tensor of dimensions (batch_size, sequence, channel, H, W) and returns a tensor of dimensions (batch_size,sequence,channel,H',W'). Note: This function is **NOT** conv3d. The convolution is 2d, and is applied individually to each element of the sequence. ## Motivation Using conv2d on video data has its uses such as detecting objects in each frame. A batched implementation of this will considerably speed up processing for such applications. Current alternative is to loop the application of conv2d for the length of the sequence.
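The usual workaround folds the sequence dimension into the batch dimension before the 2d convolution and unfolds it afterwards. A sketch of the shape bookkeeping (pure Python; with real tensors this would be `x.view(b * s, c, h, w)`, then `conv2d`, then a second `view`):

```python
# Shape arithmetic for the fold/convolve/unfold workaround. The function
# names are illustrative, not a proposed API.

def conv2d_out_hw(h, w, kernel, stride=1, padding=0):
    """Output spatial size of a square-kernel 2d convolution."""
    oh = (h + 2 * padding - kernel) // stride + 1
    ow = (w + 2 * padding - kernel) // stride + 1
    return oh, ow

def batched_seq_conv_shape(shape, out_channels, kernel, stride=1, padding=0):
    b, s, c, h, w = shape
    folded = (b * s, c, h, w)            # fold sequence into batch
    oh, ow = conv2d_out_hw(h, w, kernel, stride, padding)
    return (b, s, out_channels, oh, ow), folded  # unfolded output shape

out_shape, folded = batched_seq_conv_shape((8, 16, 3, 32, 32),
                                           out_channels=64,
                                           kernel=3, padding=1)
# out_shape -> (8, 16, 64, 32, 32); folded -> (128, 3, 32, 32)
```

Since the conv is applied per frame, folding is mathematically identical to looping over the sequence, but runs as one large batched kernel launch instead of `s` small ones.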
module: convolution,triaged,module: batching,function request
low
Minor
455,337,187
PowerToys
copy/move to folder
A PowerToy right-click menu item to copy/move to a different folder. That would solve the problem for a lot of people who save files to the desktop and then wonder how to move them properly to a specific folder without opening an extra Explorer window, etc. Windows used to have it.
Idea-New PowerToy,Status-In progress,Product-File Explorer
low
Major
455,374,349
terminal
[host] If you resize in a fullscreen app and the cursor is past the right of a line, but above the bottom of the buffer, then we'll insert an extraneous newline with the cursor on it.
From MSFT:19938308 Prior to fixing MSFT:16861099, if you change the size of the font in TMUX, then we'll incorrectly do a resize operation (despite the fact that the buffer hasn't changed size). During that resize operation, we won't be able to find the cursor position, because the cursor is on a space character in the middle of the buffer. When we get to the end of the loop, we'll try and newline to where the cursor is, but because it's above the end of the buffer, we'll leave it in the wrong spot. If we fix that, then the buffer will still look wrong. This is because after we emitted the last line of text in the buffer, we must have called NewlineCursor, because the cursor is left at col 0 on the new last line of the buffer, and the status line will be on the second-last line of the buffer. If we fix the first part of this bug (being unable to find the cursor position), then the newline will still be there unfortunately. Whatever part of the resize operation that's responsible for newlining the cursor during the reflowing of the last line needs to not do that. The old easy repro case was to open tmux then change the font size. You'd see a new line below the status line. However, MSFT:16861099 fixes that particular repro case. You can't just resize tmux, because it will get SIGWINCH'd, then redraw it's status line in the right place. I'd need to write another test manually.
Product-Conhost,Help Wanted,Area-Output,Issue-Bug,Priority-3
low
Critical
455,375,973
godot
3D CollisionObject signals do not trigger when mouse is in "captured" mode
**Godot version:** 3.1.1 **OS/device including version:** Windows 10 Pro version 1809, Ryzen 5 2600X, GTX 660 **Issue description:** First off, I'm not sure if this behavior is intended. The [documentation](https://docs.godotengine.org/en/3.1/classes/class_input.html) says that mouse movement and button presses should still be handled when the mouse mode is set to "captured," but CollisionObject event signals only respond to the mouse when it's in the other three modes. This includes mouse_entered(), mouse_exited(), and input_event() signals, though I only included an input_event() signal in the minimum reproduction project. It is possible to work around this issue by ray casting from the camera, but that requires a bit more code and a tiny performance decrease. **Steps to reproduce:** 1. Set up a signal on a CollisionObject. 2. set_mouse_mode(Input.MOUSE_MODE_CAPTURED) 3. Attempt to trigger the signal. **Minimal reproduction project:** [Test.zip](https://github.com/godotengine/godot/files/3282746/Test.zip)
discussion,confirmed,topic:input
medium
Major
455,376,231
terminal
[Docs] SetScreenBufferInfo* Doesn't actually set all the members in a SCREENBUFFER_INFO* struct. It only uses a subset of them. Documentation should reflect this.
From MSFT:10210059
Product-Conhost,Issue-Docs,Area-CodeHealth
low
Minor
455,401,753
kubernetes
yaml may fail to parse spuriously
**What happened**: JSON with comments in it (i.e., YAML) will be "detected" as JSON because it matches `^\s+{.*` in the first 100 bytes, and will be parsed as JSON even though it is valid YAML instead. **What you expected to happen**: We should parse valid YAML as YAML. We should probably parse everything as YAML since it is a superset of JSON. **How to reproduce it (as minimally and precisely as possible)**: **Anything else we need to know?**: The code is here: https://github.com/kubernetes/kubernetes/blob/14e322cc827ca546eb80b721a825a43311a99437/pkg/util/yaml/decoder.go#L202 https://github.com/kubernetes/kubernetes/blob/14e322cc827ca546eb80b721a825a43311a99437/pkg/util/yaml/decoder.go#L305-L327 Thread in slack: https://kubernetes.slack.com/archives/C13J86Z63/p1560351860082800 **Environment**: n/a cc @smarterclayton @dims @liggitt @Katharine @chuckha /sig apimachinery
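A minimal, self-contained sketch of the failure mode described above. `looksLikeJSON` is a hypothetical stand-in for the prefix heuristic in `decoder.go` (the real code is more involved); the point is that a document that is valid YAML — a flow-style mapping followed by a comment — matches the JSON prefix check but is rejected by an actual JSON parser:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// looksLikeJSON mirrors the heuristic the issue points at: peek at the start
// of the stream and, if the first non-whitespace byte is '{', take the JSON
// path. (Hypothetical simplification of the decoder.go logic.)
func looksLikeJSON(data []byte) bool {
	trimmed := bytes.TrimLeft(data, " \t\r\n")
	return len(trimmed) > 0 && trimmed[0] == '{'
}

// parsesAsJSON reports whether encoding/json accepts the whole document.
func parsesAsJSON(data []byte) bool {
	var v interface{}
	return json.Unmarshal(data, &v) == nil
}

func main() {
	// Valid YAML: a flow mapping plus a comment. The comment makes it invalid
	// JSON, yet the prefix heuristic still routes it to the JSON code path.
	doc := []byte("{\"a\": 1} # a YAML comment\n")
	fmt.Println(looksLikeJSON(doc), parsesAsJSON(doc)) // prints: true false
}
```

Parsing everything as YAML, as suggested above, sidesteps this entirely because YAML is a superset of JSON.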
kind/bug,sig/api-machinery,lifecycle/frozen
medium
Critical
455,412,262
pytorch
(LLD 8.0.0) ld: error: can't create dynamic relocation R_X86_64_DTPOFF32 against symbol: ideep::utils::computation_cache
There's something wrong with `TORCH_STATIC` when building with MKL DNN. Steps to reproduce: 1. Apply this diff to turn on static build: ``` diff --git a/caffe2/CMakeLists.txt b/caffe2/CMakeLists.txt index 4b45b7bc8..b6e59c509 100644 --- a/caffe2/CMakeLists.txt +++ b/caffe2/CMakeLists.txt @@ -554,6 +554,7 @@ if (TORCH_STATIC) else() add_library(torch SHARED ${DUMMY_EMPTY_FILE}) endif() +add_library(torch_static STATIC ${DUMMY_EMPTY_FILE}) target_link_libraries(torch caffe2) ``` Run build with `USE_MKLDNN=1`: ``` [1/8] Linking CXX executable bin/FileStoreTest [2/8] Linking CXX executable bin/ProcessGroupMPITest [3/8] Linking CXX executable bin/ProcessGroupGlooTest [4/8] Linking CXX executable bin/torch_shm_manager [5/8] Linking CXX shared library lib/libcaffe2_detectron_ops.so FAILED: lib/libcaffe2_detectron_ops.so : && /private/home/ezyang/ccache/lib/c++ -fPIC -fvisibility-inlines-hidden -fopenmp -DUSE_FBGEMM -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Wno-stringop-overflow -O3 -rdynamic -shared -Wl,-soname,libcaffe2_detectron_ops.so -o lib/libcaffe2_detectron_ops.so modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/group_spatial_softmax_op.cc.o modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/ps_roi_pool_op.cc.o modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/roi_pool_f_op.cc.o modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/sample_as_op.cc.o 
modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/select_smooth_l1_loss_op.cc.o modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/sigmoid_cross_entropy_loss_op.cc.o modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/sigmoid_focal_loss_op.cc.o modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/smooth_l1_loss_op.cc.o modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/softmax_focal_loss_op.cc.o modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/spatial_narrow_as_op.cc.o modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/upsample_nearest_op.cc.o -L/scratch/ezyang/pytorch-tmp-env/lib -Wl,-rpath,/scratch/ezyang/pytorch-tmp/build/lib:/scratch/ezyang/pytorch-tmp-env/lib: lib/libcaffe2.so /usr/lib/gcc/x86_64-linux-gnu/7/libgomp.so -lpthread lib/libprotobuf.a lib/libc10.so -lpthread -lmkl_intel_lp64 -lmkl_gnu_thread -lmkl_core -fopenmp -lm /usr/lib/x86_64-linux-gnu/libdl.so lib/libmkldnn.a -lpthread && : ld: error: can't create dynamic relocation R_X86_64_DTPOFF32 against symbol: ideep::utils::computation_cache<ideep::tensor::reorder, 1024ul, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >::t_store()::t_store_ in readonly segment; recompile object files with -fPIC or pass '-Wl,-z,notext' to allow text relocations in the output >>> defined in modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o >>> referenced by batch_permutation_op.cc >>> modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o:(ideep::utils::computation_cache<ideep::tensor::reorder, 1024ul, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >::t_store() (.part.760)) ld: error: can't create dynamic relocation R_X86_64_DTPOFF32 against symbol: ideep::utils::computation_cache<ideep::tensor::reorder, 1024ul, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >::t_store()::t_store_ in readonly segment; recompile object files with -fPIC or pass 
'-Wl,-z,notext' to allow text relocations in the output >>> defined in modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o >>> referenced by batch_permutation_op.cc >>> modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o:(ideep::utils::computation_cache<ideep::tensor::reorder, 1024ul, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >::t_store() (.part.760)) ld: error: can't create dynamic relocation R_X86_64_DTPOFF32 against symbol: ideep::utils::computation_cache<ideep::tensor::reorder, 1024ul, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >::t_store()::t_store_ in readonly segment; recompile object files with -fPIC or pass '-Wl,-z,notext' to allow text relocations in the output >>> defined in modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o >>> referenced by batch_permutation_op.cc >>> modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o:(ideep::utils::computation_cache<ideep::tensor::reorder, 1024ul, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >::t_store() (.part.760)) ld: error: can't create dynamic relocation R_X86_64_DTPOFF32 against symbol: ideep::utils::computation_cache<ideep::tensor::reorder, 1024ul, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >::t_store()::t_store_ in readonly segment; recompile object files with -fPIC or pass '-Wl,-z,notext' to allow text relocations in the output >>> defined in modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o >>> referenced by batch_permutation_op.cc >>> modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o:(ideep::utils::computation_cache<ideep::tensor::reorder, 1024ul, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >::t_store() (.part.760)) ld: error: can't create dynamic 
relocation R_X86_64_DTPOFF32 against symbol: ideep::utils::computation_cache<ideep::tensor::reorder, 1024ul, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >::t_store()::t_store_ in readonly segment; recompile object files with -fPIC or pass '-Wl,-z,notext' to allow text relocations in the output >>> defined in modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o >>> referenced by batch_permutation_op.cc >>> modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o:(ideep::utils::computation_cache<ideep::tensor::reorder, 1024ul, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >::t_store() (.part.760)) ld: error: can't create dynamic relocation R_X86_64_DTPOFF32 against symbol: ideep::utils::computation_cache<ideep::tensor::reorder, 1024ul, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >::t_store()::t_store_ in readonly segment; recompile object files with -fPIC or pass '-Wl,-z,notext' to allow text relocations in the output >>> defined in modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o >>> referenced by batch_permutation_op.cc >>> modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o:(ideep::utils::computation_cache<ideep::tensor::reorder, 1024ul, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >::t_store() (.part.760)) ld: error: can't create dynamic relocation R_X86_64_DTPOFF32 against symbol: ideep::utils::computation_cache<ideep::tensor::reorder, 1024ul, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >::t_store()::t_store_ in readonly segment; recompile object files with -fPIC or pass '-Wl,-z,notext' to allow text relocations in the output >>> defined in modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o >>> referenced by batch_permutation_op.cc >>> 
modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o:(ideep::utils::computation_cache<ideep::tensor::reorder, 1024ul, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >::t_store() (.part.760)) ld: error: can't create dynamic relocation R_X86_64_DTPOFF32 against symbol: ideep::utils::computation_cache<ideep::tensor::reorder, 1024ul, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >::t_store()::t_store_ in readonly segment; recompile object files with -fPIC or pass '-Wl,-z,notext' to allow text relocations in the output >>> defined in modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o >>> referenced by batch_permutation_op.cc >>> modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o:(ideep::utils::computation_cache<ideep::tensor::reorder, 1024ul, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >::t_store() (.part.760)) ld: error: can't create dynamic relocation R_X86_64_DTPOFF32 against symbol: ideep::utils::computation_cache<ideep::tensor::reorder, 1024ul, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >::t_store()::t_store_ in readonly segment; recompile object files with -fPIC or pass '-Wl,-z,notext' to allow text relocations in the output >>> defined in modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o >>> referenced by batch_permutation_op.cc >>> modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o:(ideep::utils::computation_cache<ideep::tensor::reorder, 1024ul, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >::t_store() (.part.760)) ld: error: can't create dynamic relocation R_X86_64_DTPOFF32 against symbol: ideep::utils::computation_cache<ideep::tensor::reorder, 1024ul, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > 
>::t_store()::t_store_ in readonly segment; recompile object files with -fPIC or pass '-Wl,-z,notext' to allow text relocations in the output >>> defined in modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o >>> referenced by batch_permutation_op.cc >>> modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o:(ideep::utils::computation_cache<ideep::tensor::reorder, 1024ul, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >::t_store() (.part.760)) ld: error: can't create dynamic relocation R_X86_64_DTPOFF32 against symbol: ideep::utils::computation_cache<ideep::tensor::reorder, 1024ul, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >::t_store()::t_store_ in readonly segment; recompile object files with -fPIC or pass '-Wl,-z,notext' to allow text relocations in the output >>> defined in modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o >>> referenced by batch_permutation_op.cc >>> modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o:(ideep::utils::computation_cache<ideep::tensor::reorder, 1024ul, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >::t_store() (.part.760)) ld: error: can't create dynamic relocation R_X86_64_DTPOFF32 against symbol: ideep::utils::computation_cache<ideep::tensor::reorder, 1024ul, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >::t_store()::t_store_ in readonly segment; recompile object files with -fPIC or pass '-Wl,-z,notext' to allow text relocations in the output >>> defined in modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o >>> referenced by batch_permutation_op.cc >>> modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o:(ideep::utils::computation_cache<ideep::tensor::reorder, 1024ul, std::__cxx11::basic_string<char, std::char_traits<char>, 
std::allocator<char> > >::t_store() (.part.760)) ld: error: can't create dynamic relocation R_X86_64_DTPOFF32 against symbol: guard variable for ideep::utils::computation_cache<ideep::tensor::reorder, 1024ul, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >::t_store()::t_store_ in readonly segment; recompile object files with -fPIC or pass '-Wl,-z,notext' to allow text relocations in the output >>> defined in modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o >>> referenced by batch_permutation_op.cc >>> modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o:(ideep::utils::computation_cache<ideep::tensor::reorder, 1024ul, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >::t_store() (.part.760)) ld: error: can't create dynamic relocation R_X86_64_DTPOFF32 against symbol: guard variable for ideep::stream::default_stream()::s in readonly segment; recompile object files with -fPIC or pass '-Wl,-z,notext' to allow text relocations in the output >>> defined in modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o >>> referenced by batch_permutation_op.cc >>> modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o:(ideep::tensor::reorder::operator()(ideep::tensor const&, ideep::tensor const&)) ld: error: can't create dynamic relocation R_X86_64_DTPOFF32 against symbol: ideep::stream::default_stream()::s in readonly segment; recompile object files with -fPIC or pass '-Wl,-z,notext' to allow text relocations in the output >>> defined in modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o >>> referenced by batch_permutation_op.cc >>> modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o:(ideep::tensor::reorder::operator()(ideep::tensor const&, ideep::tensor const&)) ld: error: can't create dynamic relocation R_X86_64_DTPOFF32 against symbol: 
ideep::stream::default_stream()::s in readonly segment; recompile object files with -fPIC or pass '-Wl,-z,notext' to allow text relocations in the output >>> defined in modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o >>> referenced by batch_permutation_op.cc >>> modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o:(ideep::tensor::reorder::operator()(ideep::tensor const&, ideep::tensor const&)) ld: error: can't create dynamic relocation R_X86_64_DTPOFF32 against symbol: ideep::stream::default_stream()::s in readonly segment; recompile object files with -fPIC or pass '-Wl,-z,notext' to allow text relocations in the output >>> defined in modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o >>> referenced by batch_permutation_op.cc >>> modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o:(ideep::tensor::reorder::operator()(ideep::tensor const&, ideep::tensor const&)) ld: error: can't create dynamic relocation R_X86_64_DTPOFF32 against symbol: ideep::stream::default_stream()::s in readonly segment; recompile object files with -fPIC or pass '-Wl,-z,notext' to allow text relocations in the output >>> defined in modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o >>> referenced by batch_permutation_op.cc >>> modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o:(ideep::tensor::reorder::operator()(ideep::tensor const&, ideep::tensor const&)) ld: error: can't create dynamic relocation R_X86_64_DTPOFF32 against symbol: ideep::stream::default_stream()::s in readonly segment; recompile object files with -fPIC or pass '-Wl,-z,notext' to allow text relocations in the output >>> defined in modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o >>> referenced by batch_permutation_op.cc >>> 
modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o:(ideep::tensor::reorder::operator()(ideep::tensor const&, ideep::tensor const&)) ld: error: can't create dynamic relocation R_X86_64_DTPOFF32 against symbol: ideep::stream::default_stream()::s in readonly segment; recompile object files with -fPIC or pass '-Wl,-z,notext' to allow text relocations in the output >>> defined in modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o >>> referenced by batch_permutation_op.cc >>> modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/batch_permutation_op.cc.o:(ideep::tensor::reorder::operator()(ideep::tensor const&, ideep::tensor const&)) ld: error: too many errors emitted, stopping now (use -error-limit=0 to see all errors) collect2: error: ld returned 1 exit status [6/8] Linking CXX executable bin/TCPStoreTest [7/8] Linking CXX shared library lib/libtorch_python.so ninja: build stopped: subcommand failed. Building wheel torch-1.2.0a0+75faa72 -- Building version 1.2.0a0+75faa72 cmake --build . --target install --config Release -- -j 48 Traceback (most recent call last): File "setup.py", line 752, in <module> build_deps() File "setup.py", line 320, in build_deps build_dir='build') File "/scratch/ezyang/pytorch-tmp/tools/build_pytorch_libs.py", line 70, in build_caffe2 cmake.build(my_env) File "/scratch/ezyang/pytorch-tmp/tools/setup_helpers/cmake.py", line 287, in build self.run(build_args, my_env) File "/scratch/ezyang/pytorch-tmp/tools/setup_helpers/cmake.py", line 100, in run check_call(command, cwd=self._build_dir, env=env) File "/scratch/ezyang/pytorch-tmp-env/lib/python3.7/subprocess.py", line 341, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '48']' returned non-zero exit status 1. ``` If you turn off MKLDNN the problem goes away. 
I don't have time to investigate, so filing a bug here. cc @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh
module: build,triaged,module: static linking,module: mkldnn
low
Critical
455,420,093
flutter
ext.flutter.structuredErrors isn't compatible with code that manipulates FlutterError.onError multiple times
If you have code that temporarily adds a FlutterError.onError handler, then puts it back, and while that code has its handler hooked you temporarily turn on then off the inspector's support of structured errors, you'll lose the value of FlutterError.onError. There are other similar failures likely to happen here. cc @jacob314 @goderbauer
framework,f: inspector,P3,team-framework,triaged-framework
low
Critical
455,498,493
go
proposal: spec: improvements to raw strings
## Background This proposal was branched off of #32190, which was a proposed HEREDOC syntax for Go. It was concluded that HEREDOC was not the correct syntax for Go to use, however the proposal did point out a large problem that Go currently has: # Problem There is only one option to use raw strings, which is the backtick. The nature of how raw strings work means that raw strings themselves cannot contain backticks, meaning that the current workaround for including a backtick in a raw string is: ``` var str = `My backtick is `+"`"+` hard to use` ``` Raw strings are often used for storing large strings, such as strings containing other languages, or Go code itself. In many languages, the backtick has significant meaning. For instance: 1. SQL uses backticks to signify a string that represents an identifier, such as a database name or table name. While they are not required for all identifiers, they are required if the identifier contains invalid characters (spaces, commas, etc) or if the identifier matches a keyword. It also seems to be good practice in general to surround database and table names in backticks. * Example: ``SELECT * FROM `database`.`table` `` 2. Kotlin uses backticks in a similar fashion. * Example: ``fun `a method with spaces`() { ... }`` 3. JavaScript uses backticks to indicate format strings, which allow people to embed expressions inside of their strings. * Example: ``let str = `HELLO ${name.toLocaleUpperCase()}` `` Of course there are far more examples of languages where the backtick is a significant character in the language. This makes embedding these languages in Go very hard. # Proposed Solution If there were a fixed number of ways to declare raw strings, the problem would, no matter what, arise that you would be unable to put Go code inside of Go code without some kind of need to transform the code. This means that there needs to be a variable way to create raw strings.
This proposal highlights one brought up [here](https://github.com/golang/go/issues/32190#issuecomment-496755530). It essentially improves on the current way to declare raw strings, allowing the following syntax: ``` var stmt = SQL` SELECT `foo` FROM `bar` WHERE `baz` = "qux" `SQL var old = ` this, of course, still works ` var new = 고`this also works you can also use 고 AND `backticks` (separately) in the string! `고 ``` Essentially, raw strings can be prefixed with a delimiter, and the string is then terminated with a backtick followed by the same delimiter. Strings which are densely populated with words and backticks may make it hard to pick a word to use as the delimiter for the raw string, as the word may appear inside the string, which would end the string early and cause a syntax error. Allowing _any identifier_ to be used as a delimiter would allow non-ascii characters to be used as well, meaning that in special cases, when it's _really_ needed, one can use a non-ascii character as their delimiter. # Concerns @jimmyfrasche https://github.com/golang/go/issues/32190#issuecomment-497415469 > Implementation-wise, the problem with user-specified delimiters is that they have to be handled during lexing, which adds complexity to a simple, though still somewhat involved, stage and would need a lot of explanation in the language spec. I don't like the idea of complicating the language. I do not work with the internals of the language, so I am unsure of the magnitude of complication to the lexer that this change would bring. If it is too much, I don't think that it would at all be worth it, and maybe one of the alternatives below would be a better fit. @ianlancetaylor https://github.com/golang/go/issues/32190#issuecomment-501508497 > My only concern with \[syntax] is that it doesn't lead with the fact that it is a string. C++ (R"delim( string )delim")) and Rust (r#" string "#) and Swift (#" string "#) are more clear as to when a string is starting.
I share this sentiment. My response to this [here](https://github.com/golang/go/issues/32190#issuecomment-501505269) was that establishing a convention to use short, noticeable identifiers (i.e. `RAW`, `JS`, `SQL`, etc) helps with noticing where the string starts and ends. This could (possibly) be enforced by `golint`, but I'm not sure if that is a good idea or not. ## Other Alternatives brought up In #32190, there were several other alternatives that tried to achieve the same goal: ### Variable numbers of backticks Essentially, you could start the raw string with a certain number of backticks, and it would have to end with the same number of backticks. ``` ````` A raw string which can contain up to 4 ```` backticks in a row inside of it ````` ``` This solution still had problems though. Strings cannot start with an even number of backticks, because any even number of backticks could also be interpreted as an empty string, introducing ambiguities. It also causes developers a bit of fuss when trying to get it to work inside of markdown, as markdown uses multiple backticks in order to signify a block of code. Also, the strings could not start or end with backticks, which would be an unfortunate consequence. ### Variable number of backticks + no empty raw strings This one is a breaking change, however I think it is my favorite solution out of all of the alternatives. It's the exact same as the previous one, but Go also introduces a breaking change to disallow empty raw strings. There is no need for raw strings to be used to represent an empty string, since the normal `""` can do that, and is much more preferable. The _only_ code this would break is people who have used a raw string to define an empty string by doing something like ```x := `` ``` or ```funcCall(``, ...)```. It may be good to do some research on if empty raw strings are ever used in real code. This solution still has the issue of being annoying to use with markdown's code fences.
The argument was used that we shouldn't make language decisions based on other languages, however I personally do not like this argument. Sharing code is part of what a programmer does, and Markdown is a very widely used markup language that uses multiple backticks in a row to define a code fence. This feature may make it a bit difficult to share Go code over anything that uses Markdown (slack, github, discord, and other services). Despite making it difficult to share code via markdown-enabled chats, it is still easy to share code via something like `gist.github.com` or `play.golang.org`. If my original proposal proves to not work very well (doesn't feel Go-like, too difficult to implement, etc) I would love for this solution to be accepted in place. ### Variable number of backticks + a quote This proposal is actually pretty nice. It's similar to the previous proposal. Essentially, the starting delimiter is N backticks (N >= 2) followed by a quotation mark, and the ending delimiter is a quotation mark followed by the same number of backticks. Example: ``` s := ``"this is a `raw` "string" literal"`` fmt.Print(s) // prints: // this is a `raw` "string" literal ``` This syntax is actually very nice in my opinion. It fixes the "odd-number-only" ambiguity from the previous example, as well as fixing the Markdown issue (as code fences must occur on their own line). It also fixes the "strings starting/ending with backticks" issue. The only issue with this syntax is that it doesn't seem to work well with existing raw strings. I don't personally have data about how often this occurs, but I'd imagine that there are several times where raw strings are used to describe strings with quotes in them, making code like `` x := `"this is a string"` `` common. Newcomers to Go may see this and think that the `` `" `` is the delimiter to the raw string, when in reality the `` ` `` is the delimiter and the `"` is part of the string. However that critique may be a bit nitpicky.
I do like this syntax a lot. ### Choosing a symbol pair that nobody uses This alternative stated that Go should add another symbol to use to declare raw strings in Go. For instance, `⇶` to start the string and `⬱` to end the string. Go code is defined to be UTF8 so file formatting issues should not happen. Another proposed idea was `≡` (`U+2261 IDENTICAL TO`). This solution also has problems. What if our string has both backticks AND strange symbols (for instance if you were defining a list of mathematical symbols)? Or, what if you were trying to embed Go syntax inside of your strings? Also, the symbol is hard to type and not easy to find, so it may not be a good fit as a string delimiter. ### Variable number of a special character In https://github.com/golang/go/issues/32590#issuecomment-687854491, another solution that I quite like was brought up, using a variable number of special characters. They propose using `^`, and then the delimiters for the string become ``^` `` and `` `^``, where the number of `^` symbols is variable. They also created an implementation of it [here](https://github.com/golang/go/issues/32590#issuecomment-735034749). For example: ``` s := ^^` func main() { sql := ^`SELECT `foo` FROM `bar` WHERE `baz` = "qux"`^ fmt.Println(sql) } `^^ fmt.Print(s) // prints: // // func main() { // sql := ^`SELECT `foo` FROM `bar` WHERE `baz` = "qux"`^ // fmt.Println(sql) // } // ``` ### Other languages 1. C++ `R"delim(string)delim"` * In my opinion, I personally hate the asymmetry of prefix-strings, they look sloppy to me and seem too much like they were trying to hack in features, so I don't really like this solution. 2. Rust `r#"string"#` * Same issue that I had with C++: the asymmetry and "hackiness" of prefix-strings ruins it for me. Also, a fixed number of ways to define a string means that if one wants to put a Go raw string inside of a string (ie pattern matching for code generation), they will run into issues. 3. 
Swift `#"string"#` * Again, a fixed number of ways to define a string means it's hard to pattern-match Go raw strings for code generation. It's important that we have _some kind_ of variable delimiter, as that way if the string we are embedding somehow contains it, it is easy to change the string's delimiter in order to avoid the issue. The delimiter doesn't have to be an identifier like it is in this main proposal, it could also be varying the number of backticks like the one a few paragraphs up. # Conclusion Raw strings in Go are often used to be able to copy-paste text to be used as strings, or to embed code from other languages (such as JS, SQL, or even Go) into Go. However, if that text contains backticks, we need some way to make sure that those backticks do not terminate the string early. I believe that the way to do this is allowing an identifier to precede the string, and to make sure that the terminating backtick must be followed by the same identifier in order to terminate the string. ``` var markdown = MD` ### Thank you for reading :) `MD ```
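For reference, a runnable sketch of today's workarounds for the problem this proposal addresses (the table name `users` is just an illustrative example). Both spellings produce the identical string; neither lets the whole literal stay a single raw string:

```go
package main

import "fmt"

// concatWorkaround builds a query containing backticks by splicing an
// interpreted string holding the backtick into raw-string fragments.
func concatWorkaround() string {
	return `SELECT * FROM ` + "`" + `users` + "`"
}

// escapeWorkaround gives up on raw strings and spells the backtick with
// its \x60 escape inside an interpreted string instead.
func escapeWorkaround() string {
	return "SELECT * FROM \x60users\x60"
}

func main() {
	fmt.Println(concatWorkaround())                      // prints: SELECT * FROM `users`
	fmt.Println(concatWorkaround() == escapeWorkaround()) // prints: true
}
```

Either way, the literal is no longer copy-pasteable, which is exactly the property raw strings exist to provide.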
LanguageChange,Proposal,LanguageChangeReview
high
Critical
455,516,046
go
syscall/js: performance considerations
I was porting some frontend Go code to be compiled to WebAssembly instead of GopherJS, and noticed the performance was noticeably reduced. The Go code in question makes a lot of DOM manipulation calls and queries, so I decided to benchmark the performance of making calls from WebAssembly to the JavaScript APIs via `syscall/js`. I found it's approximately 10x slower than native JavaScript. Results of running a benchmark in Chrome 75.0.3770.80 on macOS 10.14.5: ``` 131.212518 ms/op - WebAssembly via syscall/js 61.850000 ms/op - GopherJS via syscall/js 12.040000 ms/op - GopherJS via github.com/gopherjs/gopherjs/js 11.320000 ms/op - native JavaScript ``` Here's the benchmark code I used, written to be self-contained: <details><summary>Source Code</summary> #### main.go ```Go package main import ( "fmt" "runtime" "syscall/js" "testing" "time" "honnef.co/go/js/dom/v2" ) var document = dom.GetWindow().Document().(dom.HTMLDocument) func main() { loaded := make(chan struct{}) switch readyState := document.ReadyState(); readyState { case "loading": document.AddEventListener("DOMContentLoaded", false, func(dom.Event) { close(loaded) }) case "interactive", "complete": close(loaded) default: panic(fmt.Errorf("internal error: unexpected document.ReadyState value: %v", readyState)) } <-loaded for i := 0; i < 10000; i++ { div := document.CreateElement("div") div.SetInnerHTML(fmt.Sprintf("foo <strong>bar</strong> baz %d", i)) document.Body().AppendChild(div) } time.Sleep(time.Second) runBench(BenchmarkGoSyscallJS, WasmOrGJS+" via syscall/js") if runtime.GOARCH == "js" { // GopherJS-only benchmark. 
runBench(BenchmarkGoGopherJS, "GopherJS via github.com/gopherjs/gopherjs/js") } runBench(BenchmarkNativeJavaScript, "native JavaScript") document.Body().Style().SetProperty("background-color", "lightgreen", "") } func runBench(f func(*testing.B), desc string) { r := testing.Benchmark(f) msPerOp := float64(r.T) * 1e-6 / float64(r.N) fmt.Printf("%f ms/op - %s\n", msPerOp, desc) } func BenchmarkGoSyscallJS(b *testing.B) { var total float64 for i := 0; i < b.N; i++ { total = 0 divs := js.Global().Get("document").Call("getElementsByTagName", "div") for j := 0; j < divs.Length(); j++ { total += divs.Index(j).Call("getBoundingClientRect").Get("top").Float() } } _ = total } func BenchmarkNativeJavaScript(b *testing.B) { js.Global().Set("NativeJavaScript", js.Global().Call("eval", nativeJavaScript)) b.ResetTimer() js.Global().Get("NativeJavaScript").Invoke(b.N) } const nativeJavaScript = `(function(N) { var i, j, total; for (i = 0; i < N; i++) { total = 0; var divs = document.getElementsByTagName("div"); for (j = 0; j < divs.length; j++) { total += divs[j].getBoundingClientRect().top; } } var _ = total; })` ``` #### wasm.go ```Go // +build wasm package main import "testing" const WasmOrGJS = "WebAssembly" func BenchmarkGoGopherJS(b *testing.B) {} ``` #### gopherjs.go ```Go // +build !wasm package main import ( "testing" "github.com/gopherjs/gopherjs/js" ) const WasmOrGJS = "GopherJS" func BenchmarkGoGopherJS(b *testing.B) { var total float64 for i := 0; i < b.N; i++ { total = 0 divs := js.Global.Get("document").Call("getElementsByTagName", "div") for j := 0; j < divs.Length(); j++ { total += divs.Index(j).Call("getBoundingClientRect").Get("top").Float() } } _ = total } ``` </details> I know `syscall/js` is documented as "Its current scope is only to allow tests to run, but not yet to provide a comprehensive API for users", but I wanted to open this issue to discuss the future. 
Performance is important for Go applications that need to make a lot of calls into the JavaScript world. What is the current state of `syscall/js` performance, and are there known opportunities to improve it? /cc @neelance @cherrymui @hajimehoshi
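One common mitigation while `syscall/js` call overhead remains high is to amortize the wasm/JS boundary: move whole loops to the JavaScript side (e.g. via one `eval`'d helper, as the native benchmark above already does) and cross the boundary once per batch instead of once per element. The sketch below is framework-free TypeScript that only models the boundary cost with a counter; `boundaryCall` is a hypothetical stand-in for a `syscall/js` call, not a real API:

```typescript
// Stand-in for the wasm <-> JS boundary: every call increments a counter so
// we can compare how many crossings each strategy needs.
let crossings = 0;
function boundaryCall<T>(fn: () => T): T {
  crossings++;
  return fn();
}

// Fake getBoundingClientRect().top values for 1000 divs.
const tops = Array.from({ length: 1000 }, (_, i) => i * 0.5);

// Naive strategy: one boundary crossing per element, which is what looping
// over divs.Index(j).Call(...) from Go does today.
function sumPerElement(): number {
  let total = 0;
  for (let j = 0; j < tops.length; j++) {
    total += boundaryCall(() => tops[j]);
  }
  return total;
}

// Batched strategy: push the whole loop to the JS side, cross the boundary once.
function sumBatched(): number {
  return boundaryCall(() => tops.reduce((a, b) => a + b, 0));
}

crossings = 0;
const perElementResult = sumPerElement();
const perElementCrossings = crossings; // 1000 crossings

crossings = 0;
const batchedResult = sumBatched();
const batchedCrossings = crossings; // 1 crossing
```

Both strategies compute the same sum; only the number of boundary crossings differs, which is where the 10x gap above comes from.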
Performance,NeedsInvestigation,arch-wasm
medium
Critical
455,525,195
angular
ngSubmit should trigger after Asynchronous Validation completes
# 🚀 feature request ### Relevant Package This feature request is for @angular/forms. Specifically, Reactive Forms. ### Description ``` **in Template** <form [formGroup]="myFormGroup" (ngSubmit)="mFG_submit()"></form> **In Component** mFG_submit() { if (!myFormGroup.valid) return; //logic that will run if and only if the entire form is valid after both sync and async validators have finished executing } ``` The problem with the above code is that if myFormGroup has any controls with asynchronous validators, then ngSubmit is triggered after the async validators have started executing, but before they have finished executing. As a result, when the 'if' gate inside mFG_submit() is hit, it returns immediately, because the form status is 'PENDING'. So even if the asynchronous validator passes, i.e. the control is valid, the submission logic is never executed. ### Describe the solution you'd like One solution is to check on every submit whether the form status is "PENDING", and if so listen to status changes and emit ngSubmit again the moment the status changes. 
I would like to have such behaviour provided by the framework. Currently, I use this directive: ```Typescript @Directive({ selector: 'form[formGroup]' }) export class ResubmitIfPendingDirective { constructor( private fgd: FormGroupDirective ) { this.subscriptions.add(this.resubmission_sub); } private subscriptions: Subscription = new Subscription(); private resubmission_sub: Subscription = new Subscription(); ngOnInit() { //listen to ngSubmit of the form this.subscriptions.add(this.fgd.ngSubmit.subscribe(() => { //if you are already subscribed to status changes, unsubscribe this.subscriptions.remove(this.resubmission_sub); this.resubmission_sub.unsubscribe(); //if your form is PENDING when submitted, subscribe to status changes if (this.fgd.control.pending) { this.resubmission_sub = this.fgd.control.statusChanges .pipe( //don't do anything if new emitted status is PENDING filter(() => !this.fgd.control.pending), //status no longer PENDING, time to resubmit //and stop observing statusChanges first() // above 2 pipes can be combined, I separated them for clarity // first(() => !this.fgd.control.pending) ).subscribe(() => this.fgd.ngSubmit.emit()); this.subscriptions.add(this.resubmission_sub) //since the validation has already been run, //and there has been no change in the values of the controls, //the status on this follow-up ngSubmit will be either //VALID or INVALID //therefore, the ngSubmit we emit will simply //unsubscribe resubmission_sub //and never re-enter this if() block //Thus, no infinite loop of submits } })); } ngOnDestroy() { //stop listening to ngSubmit and status changes this.subscriptions.unsubscribe(); } } ```
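The waiting logic of the directive can also be sketched framework-free. The code below is illustrative TypeScript, not Angular: `FakeControl` is a hypothetical stand-in for an `AbstractControl`'s status plus `statusChanges`, and `submitWhenValidated` is a made-up helper name.

```typescript
type Status = "PENDING" | "VALID" | "INVALID";

// Minimal stand-in for an AbstractControl: a current status plus listeners
// notified whenever the status changes (roughly what statusChanges emits).
class FakeControl {
  status: Status = "PENDING";
  private listeners: Array<(s: Status) => void> = [];
  onStatusChange(fn: (s: Status) => void): void { this.listeners.push(fn); }
  setStatus(s: Status): void { this.status = s; this.listeners.forEach(fn => fn(s)); }
}

// Run the submit logic now if validation is settled, otherwise defer it
// until the first non-PENDING status (mirrors the filter + first in the directive).
function submitWhenValidated(control: FakeControl, onValid: () => void): void {
  const check = (s: Status) => { if (s === "VALID") onValid(); };
  if (control.status !== "PENDING") { check(control.status); return; }
  let done = false;
  control.onStatusChange(s => {
    if (done || s === "PENDING") return;
    done = true; // react exactly once, like first()
    check(s);
  });
}

// Usage: the submit fires while an async validator is still running...
let submitted = false;
const control = new FakeControl();
submitWhenValidated(control, () => { submitted = true; });
// ...later the async validator resolves and the deferred submit runs:
control.setStatus("VALID");
```

The same shape works against the real `statusChanges` observable; this sketch just shows the "wait until not PENDING, then decide" control flow in isolation.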
type: bug/fix,help wanted,area: forms,forms: validators,P3
medium
Critical
455,535,802
nvm
nvm installation fails on Fedora 19 with "Unable to access nvm.git" error
<!-- Thank you for being interested in nvm! Please help us by filling out the following form if you‘re having trouble. If you have a feature request, or some other question, please feel free to clear out the form. Thanks! --> - Operating system and version: Fedora 19 running on Virtual Box - `nvm debug` output: nvm debug bash: nvm: command not found... <!-- do not delete the following blank line --> ```sh ``` </details> - `nvm ls` output: <details> <!-- do not delete the following blank line --> ```sh ``` </details> - How did you install `nvm`? (e.g. install script in readme, Homebrew): - What steps did you perform? sudo wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.33.2/install.sh | bash - What happened? nvm isntallation failed with the following error: => Downloading nvm from git to '/home/sukumad1/.nvm' => Cloning into '/home/sukumad1/.nvm'... fatal: unable to access 'https://github.com/creationix/nvm.git/': Peer reports incompatible or unsupported protocol version. Failed to clone nvm repo. Please report this! - What did you expect to happen? expected nvm to install correctly - Is there anything in any of your profile files (`.bashrc`, `.bash_profile`, `.zshrc`, etc) that modifies the `PATH`? <!-- if this does not apply, please delete this section --> - If you are having installation issues, or getting "N/A", what does `curl -I --compressed -v https://nodejs.org/dist/` print out? * About to connect() to nodejs.org port 443 (#0) * Trying 104.20.23.46... 
* Connected to nodejs.org (104.20.23.46) port 443 (#0) * Initializing NSS with certpath: sql:/etc/pki/nssdb * CAfile: /etc/pki/tls/certs/ca-bundle.crt CApath: none * SSL connection using TLS_RSA_WITH_AES_128_CBC_SHA * Server certificate: * subject: CN=*.nodejs.org,OU=PositiveSSL Wildcard,OU=Domain Control Validated * start date: Aug 14 00:00:00 2017 GMT * expire date: Nov 20 23:59:59 2019 GMT * common name: *.nodejs.org * issuer: CN=COMODO RSA Domain Validation Secure Server CA,O=COMODO CA Limited,L=Salford,ST=Greater Manchester,C=GB > HEAD /dist/ HTTP/1.1 > User-Agent: curl/7.29.0 > Host: nodejs.org > Accept: */* > Accept-Encoding: deflate, gzip > < HTTP/1.1 200 OK HTTP/1.1 200 OK < Date: Thu, 13 Jun 2019 05:00:28 GMT Date: Thu, 13 Jun 2019 05:00:28 GMT < Content-Type: text/html Content-Type: text/html < Connection: keep-alive Connection: keep-alive < Set-Cookie: __cfduid=d2f9d2d2a458eb04fbb99b1eac5b0d56c1560402028; expires=Fri, 12-Jun-20 05:00:28 GMT; path=/; domain=.nodejs.org; HttpOnly Set-Cookie: __cfduid=d2f9d2d2a458eb04fbb99b1eac5b0d56c1560402028; expires=Fri, 12-Jun-20 05:00:28 GMT; path=/; domain=.nodejs.org; HttpOnly < CF-Cache-Status: HIT CF-Cache-Status: HIT < Expires: Thu, 13 Jun 2019 09:00:28 GMT Expires: Thu, 13 Jun 2019 09:00:28 GMT < Cache-Control: public, max-age=14400 Cache-Control: public, max-age=14400 < Expect-CT: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct" Expect-CT: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct" < Vary: Accept-Encoding Vary: Accept-Encoding < Server: cloudflare Server: cloudflare < CF-RAY: 4e618045dd7dd732-SYD CF-RAY: 4e618045dd7dd732-SYD < Content-Encoding: gzip Content-Encoding: gzip < * Connection #0 to host nodejs.org left intact <!-- do not delete the following blank line --> ```sh ``` </details> Note: I have already updated by GIT and also ran the following: yum update -y nss curl libcurl Still not working
needs followup
low
Critical
455,539,829
flutter
Consider enabling semantics in tests and debug mode always
See https://github.com/flutter/flutter/issues/31139 We have asserts in various places (and want to add more) to guard against incorrect semantics constructions. Violating these asserts can result in crashes in the a11y bridge implementations at runtime in release mode. However, even adding more asserts could still result in developers not seeing the assertion errors in their apps unless they test in an environment where accessibility services are enabled - e.g. the iOS Simulator where we always turn them on, or an Android device with accessibility settings enabled. Unfortunately, it is very easy to test only on an Android device and find that everything appears to work when it does not. We should consider enabling semantics for all tests and in debug mode, so that developers can see these errors firing and be more likely to fix or at least report them before going to production. /cc @goderbauer @jonahwilliams @Hixie @tvolkert
a: tests,framework,a: accessibility,a: debugging,c: proposal,P3,team-framework,triaged-framework
low
Critical
455,578,447
TypeScript
wiki doc: @template reference re: jsdoc (and old usejsdoc.org reference)
**TypeScript Version:** N/A **Search Terms:** docs, template, wiki **Code** N/A **Expected behavior:** `@template` is not a recognized tag per https://jsdoc.app/ , even if it could be compatible as a custom tag. I think your docs at https://github.com/Microsoft/TypeScript/wiki/JSDoc-support-in-JavaScript should make clear that while the other items in the list of jsdoc tags are standard jsdoc, the particular tag `@template` is an exception and is not standard jsdoc. Also, the reference to "usejsdoc.org" should be changed to its new site: jsdoc.app. (See https://github.com/jsdoc/jsdoc/issues/1642 on the lack of a working redirect.) **Actual behavior:** 1. https://github.com/Microsoft/TypeScript/wiki/JSDoc-support-in-JavaScript seems to imply that `@template` is a valid, standard jsdoc tag, or at least doesn't clarify that it is a custom tag and not one with behaviors compatible with jsdoc. 2. The docs link to usejsdoc.org **Playground Link:** N/A **Related Issues:** None found. @sandersn
Help Wanted,Docs,PursuitFellowship
low
Critical
455,583,649
godot
Memory leak (~180 KB) in EditorHelpSearch::Runner
**Godot version:** 3.2.dev.custom_build.15425b450 **OS/device including version:** Ubuntu 19.04 **Issue description:** When I opened and search some things in editor help search, and also change viewport from 2D, 3D to Script Editor, then this errors in Valgrind shows: ``` Invalid read of size 8 at 0x2B33F7E: Tree::create_item(TreeItem*, int) (tree.cpp:3036) by 0x279CB9E: EditorHelpSearch::Runner::_create_class_item(TreeItem*, DocData::ClassDoc const*, bool) (editor_help_search.cpp:478) by 0x279C8A0: EditorHelpSearch::Runner::_create_class_hierarchy(EditorHelpSearch::Runner::ClassMatch const&) (editor_help_search.cpp:464) by 0x279C342: EditorHelpSearch::Runner::_phase_class_items() (editor_help_search.cpp:383) by 0x279B5CE: EditorHelpSearch::Runner::_slice() (editor_help_search.cpp:286) by 0x279E764: EditorHelpSearch::Runner::work(unsigned long) (editor_help_search.cpp:579) by 0x2799774: EditorHelpSearch::_notification(int) (editor_help_search.cpp:125) by 0x279F219: EditorHelpSearch::_notificationv(int, bool) (editor_help_search.h:42) by 0x3687F79: Object::notification(int, bool) (object.cpp:950) by 0x2907118: SceneTree::_notify_group_pause(StringName const&, int) (scene_tree.cpp:975) by 0x2904CFD: SceneTree::idle(float) (scene_tree.cpp:522) by 0x13C55F2: Main::iteration() (main.cpp:1919) Address 0x36457248 is 296 bytes inside a block of size 312 free'd at 0x584697B: free (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so) by 0x37C3642: Memory::free_static(void*, bool) (memory.cpp:181) by 0x1694759: void memdelete<TreeItem>(TreeItem*) (memory.h:122) by 0x2B2274A: TreeItem::clear_children() (tree.cpp:827) by 0x2B2286F: TreeItem::~TreeItem() (tree.cpp:847) by 0x1694748: void memdelete<TreeItem>(TreeItem*) (memory.h:120) by 0x2B2274A: TreeItem::clear_children() (tree.cpp:827) by 0x2B2286F: TreeItem::~TreeItem() (tree.cpp:847) by 0x1694748: void memdelete<TreeItem>(TreeItem*) (memory.h:120) by 0x2B2274A: TreeItem::clear_children() (tree.cpp:827) 
by 0x2B2286F: TreeItem::~TreeItem() (tree.cpp:847) by 0x1694748: void memdelete<TreeItem>(TreeItem*) (memory.h:120) Block was alloc'd at at 0x584574F: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so) by 0x37C3259: Memory::alloc_static(unsigned long, bool) (memory.cpp:85) by 0x37C3202: operator new(unsigned long, char const*) (memory.cpp:42) by 0x2B33EDA: Tree::create_item(TreeItem*, int) (tree.cpp:3031) by 0x279CB9E: EditorHelpSearch::Runner::_create_class_item(TreeItem*, DocData::ClassDoc const*, bool) (editor_help_search.cpp:478) by 0x279C8A0: EditorHelpSearch::Runner::_create_class_hierarchy(EditorHelpSearch::Runner::ClassMatch const&) (editor_help_search.cpp:464) by 0x279C877: EditorHelpSearch::Runner::_create_class_hierarchy(EditorHelpSearch::Runner::ClassMatch const&) (editor_help_search.cpp:460) by 0x279C342: EditorHelpSearch::Runner::_phase_class_items() (editor_help_search.cpp:383) by 0x279B5CE: EditorHelpSearch::Runner::_slice() (editor_help_search.cpp:286) by 0x279E764: EditorHelpSearch::Runner::work(unsigned long) (editor_help_search.cpp:579) by 0x2799774: EditorHelpSearch::_notification(int) (editor_help_search.cpp:125) by 0x279F219: EditorHelpSearch::_notificationv(int, bool) (editor_help_search.h:42) Invalid write of size 8 at 0x2B33FF4: Tree::create_item(TreeItem*, int) (tree.cpp:3051) by 0x279CB9E: EditorHelpSearch::Runner::_create_class_item(TreeItem*, DocData::ClassDoc const*, bool) (editor_help_search.cpp:478) by 0x279C8A0: EditorHelpSearch::Runner::_create_class_hierarchy(EditorHelpSearch::Runner::ClassMatch const&) (editor_help_search.cpp:464) by 0x279C342: EditorHelpSearch::Runner::_phase_class_items() (editor_help_search.cpp:383) by 0x279B5CE: EditorHelpSearch::Runner::_slice() (editor_help_search.cpp:286) by 0x279E764: EditorHelpSearch::Runner::work(unsigned long) (editor_help_search.cpp:579) by 0x2799774: EditorHelpSearch::_notification(int) (editor_help_search.cpp:125) by 0x279F219: 
EditorHelpSearch::_notificationv(int, bool) (editor_help_search.h:42) by 0x3687F79: Object::notification(int, bool) (object.cpp:950) by 0x2907118: SceneTree::_notify_group_pause(StringName const&, int) (scene_tree.cpp:975) by 0x2904CFD: SceneTree::idle(float) (scene_tree.cpp:522) by 0x13C55F2: Main::iteration() (main.cpp:1919) Address 0x36457248 is 296 bytes inside a block of size 312 free'd at 0x584697B: free (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so) by 0x37C3642: Memory::free_static(void*, bool) (memory.cpp:181) by 0x1694759: void memdelete<TreeItem>(TreeItem*) (memory.h:122) by 0x2B2274A: TreeItem::clear_children() (tree.cpp:827) by 0x2B2286F: TreeItem::~TreeItem() (tree.cpp:847) by 0x1694748: void memdelete<TreeItem>(TreeItem*) (memory.h:120) by 0x2B2274A: TreeItem::clear_children() (tree.cpp:827) by 0x2B2286F: TreeItem::~TreeItem() (tree.cpp:847) by 0x1694748: void memdelete<TreeItem>(TreeItem*) (memory.h:120) by 0x2B2274A: TreeItem::clear_children() (tree.cpp:827) by 0x2B2286F: TreeItem::~TreeItem() (tree.cpp:847) by 0x1694748: void memdelete<TreeItem>(TreeItem*) (memory.h:120) Block was alloc'd at at 0x584574F: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so) by 0x37C3259: Memory::alloc_static(unsigned long, bool) (memory.cpp:85) by 0x37C3202: operator new(unsigned long, char const*) (memory.cpp:42) by 0x2B33EDA: Tree::create_item(TreeItem*, int) (tree.cpp:3031) by 0x279CB9E: EditorHelpSearch::Runner::_create_class_item(TreeItem*, DocData::ClassDoc const*, bool) (editor_help_search.cpp:478) by 0x279C8A0: EditorHelpSearch::Runner::_create_class_hierarchy(EditorHelpSearch::Runner::ClassMatch const&) (editor_help_search.cpp:464) by 0x279C877: EditorHelpSearch::Runner::_create_class_hierarchy(EditorHelpSearch::Runner::ClassMatch const&) (editor_help_search.cpp:460) by 0x279C342: EditorHelpSearch::Runner::_phase_class_items() (editor_help_search.cpp:383) by 0x279B5CE: 
EditorHelpSearch::Runner::_slice() (editor_help_search.cpp:286) by 0x279E764: EditorHelpSearch::Runner::work(unsigned long) (editor_help_search.cpp:579) by 0x2799774: EditorHelpSearch::_notification(int) (editor_help_search.cpp:125) by 0x279F219: EditorHelpSearch::_notificationv(int, bool) (editor_help_search.h:42) Invalid read of size 8 at 0x2B33F7E: Tree::create_item(TreeItem*, int) (tree.cpp:3036) by 0x279CB9E: EditorHelpSearch::Runner::_create_class_item(TreeItem*, DocData::ClassDoc const*, bool) (editor_help_search.cpp:478) by 0x279C8A0: EditorHelpSearch::Runner::_create_class_hierarchy(EditorHelpSearch::Runner::ClassMatch const&) (editor_help_search.cpp:464) by 0x279C877: EditorHelpSearch::Runner::_create_class_hierarchy(EditorHelpSearch::Runner::ClassMatch const&) (editor_help_search.cpp:460) by 0x279C342: EditorHelpSearch::Runner::_phase_class_items() (editor_help_search.cpp:383) by 0x279B5CE: EditorHelpSearch::Runner::_slice() (editor_help_search.cpp:286) by 0x279E764: EditorHelpSearch::Runner::work(unsigned long) (editor_help_search.cpp:579) by 0x2799774: EditorHelpSearch::_notification(int) (editor_help_search.cpp:125) by 0x279F219: EditorHelpSearch::_notificationv(int, bool) (editor_help_search.h:42) by 0x3687F79: Object::notification(int, bool) (object.cpp:950) by 0x2907118: SceneTree::_notify_group_pause(StringName const&, int) (scene_tree.cpp:975) by 0x2904CFD: SceneTree::idle(float) (scene_tree.cpp:522) Address 0x3778de68 is 296 bytes inside a block of size 312 free'd at 0x584697B: free (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so) by 0x37C3642: Memory::free_static(void*, bool) (memory.cpp:181) by 0x1694759: void memdelete<TreeItem>(TreeItem*) (memory.h:122) by 0x2B2274A: TreeItem::clear_children() (tree.cpp:827) by 0x2B2286F: TreeItem::~TreeItem() (tree.cpp:847) by 0x1694748: void memdelete<TreeItem>(TreeItem*) (memory.h:120) by 0x2B2274A: TreeItem::clear_children() (tree.cpp:827) by 0x2B2286F: TreeItem::~TreeItem() 
(tree.cpp:847) by 0x1694748: void memdelete<TreeItem>(TreeItem*) (memory.h:120) by 0x2B2274A: TreeItem::clear_children() (tree.cpp:827) by 0x2B2286F: TreeItem::~TreeItem() (tree.cpp:847) by 0x1694748: void memdelete<TreeItem>(TreeItem*) (memory.h:120) Block was alloc'd at at 0x584574F: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so) by 0x37C3259: Memory::alloc_static(unsigned long, bool) (memory.cpp:85) by 0x37C3202: operator new(unsigned long, char const*) (memory.cpp:42) by 0x2B33EDA: Tree::create_item(TreeItem*, int) (tree.cpp:3031) by 0x279CB9E: EditorHelpSearch::Runner::_create_class_item(TreeItem*, DocData::ClassDoc const*, bool) (editor_help_search.cpp:478) by 0x279C8A0: EditorHelpSearch::Runner::_create_class_hierarchy(EditorHelpSearch::Runner::ClassMatch const&) (editor_help_search.cpp:464) by 0x279C877: EditorHelpSearch::Runner::_create_class_hierarchy(EditorHelpSearch::Runner::ClassMatch const&) (editor_help_search.cpp:460) by 0x279C877: EditorHelpSearch::Runner::_create_class_hierarchy(EditorHelpSearch::Runner::ClassMatch const&) (editor_help_search.cpp:460) by 0x279C342: EditorHelpSearch::Runner::_phase_class_items() (editor_help_search.cpp:383) by 0x279B5CE: EditorHelpSearch::Runner::_slice() (editor_help_search.cpp:286) by 0x279E764: EditorHelpSearch::Runner::work(unsigned long) (editor_help_search.cpp:579) by 0x2799774: EditorHelpSearch::_notification(int) (editor_help_search.cpp:125) Invalid write of size 8 at 0x2B33FF4: Tree::create_item(TreeItem*, int) (tree.cpp:3051) by 0x279CB9E: EditorHelpSearch::Runner::_create_class_item(TreeItem*, DocData::ClassDoc const*, bool) (editor_help_search.cpp:478) by 0x279C8A0: EditorHelpSearch::Runner::_create_class_hierarchy(EditorHelpSearch::Runner::ClassMatch const&) (editor_help_search.cpp:464) by 0x279C877: EditorHelpSearch::Runner::_create_class_hierarchy(EditorHelpSearch::Runner::ClassMatch const&) (editor_help_search.cpp:460) by 0x279C342: 
EditorHelpSearch::Runner::_phase_class_items() (editor_help_search.cpp:383) by 0x279B5CE: EditorHelpSearch::Runner::_slice() (editor_help_search.cpp:286) by 0x279E764: EditorHelpSearch::Runner::work(unsigned long) (editor_help_search.cpp:579) by 0x2799774: EditorHelpSearch::_notification(int) (editor_help_search.cpp:125) by 0x279F219: EditorHelpSearch::_notificationv(int, bool) (editor_help_search.h:42) by 0x3687F79: Object::notification(int, bool) (object.cpp:950) by 0x2907118: SceneTree::_notify_group_pause(StringName const&, int) (scene_tree.cpp:975) by 0x2904CFD: SceneTree::idle(float) (scene_tree.cpp:522) Address 0x3778de68 is 296 bytes inside a block of size 312 free'd at 0x584697B: free (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so) by 0x37C3642: Memory::free_static(void*, bool) (memory.cpp:181) by 0x1694759: void memdelete<TreeItem>(TreeItem*) (memory.h:122) by 0x2B2274A: TreeItem::clear_children() (tree.cpp:827) by 0x2B2286F: TreeItem::~TreeItem() (tree.cpp:847) by 0x1694748: void memdelete<TreeItem>(TreeItem*) (memory.h:120) by 0x2B2274A: TreeItem::clear_children() (tree.cpp:827) by 0x2B2286F: TreeItem::~TreeItem() (tree.cpp:847) by 0x1694748: void memdelete<TreeItem>(TreeItem*) (memory.h:120) by 0x2B2274A: TreeItem::clear_children() (tree.cpp:827) by 0x2B2286F: TreeItem::~TreeItem() (tree.cpp:847) by 0x1694748: void memdelete<TreeItem>(TreeItem*) (memory.h:120) Block was alloc'd at at 0x584574F: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so) by 0x37C3259: Memory::alloc_static(unsigned long, bool) (memory.cpp:85) by 0x37C3202: operator new(unsigned long, char const*) (memory.cpp:42) by 0x2B33EDA: Tree::create_item(TreeItem*, int) (tree.cpp:3031) by 0x279CB9E: EditorHelpSearch::Runner::_create_class_item(TreeItem*, DocData::ClassDoc const*, bool) (editor_help_search.cpp:478) by 0x279C8A0: EditorHelpSearch::Runner::_create_class_hierarchy(EditorHelpSearch::Runner::ClassMatch const&) 
(editor_help_search.cpp:464) by 0x279C877: EditorHelpSearch::Runner::_create_class_hierarchy(EditorHelpSearch::Runner::ClassMatch const&) (editor_help_search.cpp:460) by 0x279C877: EditorHelpSearch::Runner::_create_class_hierarchy(EditorHelpSearch::Runner::ClassMatch const&) (editor_help_search.cpp:460) by 0x279C342: EditorHelpSearch::Runner::_phase_class_items() (editor_help_search.cpp:383) by 0x279B5CE: EditorHelpSearch::Runner::_slice() (editor_help_search.cpp:286) by 0x279E764: EditorHelpSearch::Runner::work(unsigned long) (editor_help_search.cpp:579) by 0x2799774: EditorHelpSearch::_notification(int) (editor_help_search.cpp:125) Leaked instance: ImageTexture:2077 - Resource name: Path: Leaked instance: TreeItem:27291 Leaked instance: TreeItem:27330 Leaked instance: TreeItem:27195 Leaked instance: TreeItem:27275 Leaked instance: TreeItem:27337 Leaked instance: TreeItem:27237 Leaked instance: TreeItem:27329 Leaked instance: TreeItem:27226 Leaked instance: TreeItem:27338 Leaked instance: TreeItem:27230 Leaked instance: TreeItem:27219 Leaked instance: Image:2536 - Resource name: Path: Leaked instance: TreeItem:27241 Orphan StringName: ImageTexture Orphan StringName: TreeItem Orphan StringName: Image 10,672 bytes in 11 blocks are definitely lost in loss record 284 of 288 at 0x584574F: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so) by 0x37C3259: Memory::alloc_static(unsigned long, bool) (memory.cpp:85) by 0x13EAB18: PoolVector<unsigned char>::resize(int) (pool_vector.h:577) by 0x36440A1: Image::shrink_x2() (image.cpp:1393) by 0x18FB537: ImageLoaderSVG::_create_image(Ref<Image>, PoolVector<unsigned char> const*, float, bool, bool) (image_loader_svg.cpp:129) by 0x18FB809: ImageLoaderSVG::create_image_from_string(Ref<Image>, char const*, float, bool, bool) (image_loader_svg.cpp:144) by 0x20FB8BB: editor_generate_icon(int, bool, float, bool) (editor_themes.cpp:92) by 0x20FEB0E: editor_register_and_generate_icons(Ref<Theme>, bool, int, 
bool) (editor_themes.cpp:202) by 0x2102380: create_editor_theme(Ref<Theme>) (editor_themes.cpp:386) by 0x211C618: create_custom_theme(Ref<Theme>) (editor_themes.cpp:1180) by 0x202790A: EditorNode::EditorNode() (editor_node.cpp:5450) by 0x13C1FE4: Main::start() (main.cpp:1614) 178,688 (3,432 direct, 175,256 indirect) bytes in 11 blocks are definitely lost in loss record 288 of 288 at 0x584574F: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so) by 0x37C3259: Memory::alloc_static(unsigned long, bool) (memory.cpp:85) by 0x37C3202: operator new(unsigned long, char const*) (memory.cpp:42) by 0x2B33EDA: Tree::create_item(TreeItem*, int) (tree.cpp:3031) by 0x279CB9E: EditorHelpSearch::Runner::_create_class_item(TreeItem*, DocData::ClassDoc const*, bool) (editor_help_search.cpp:478) by 0x279C8A0: EditorHelpSearch::Runner::_create_class_hierarchy(EditorHelpSearch::Runner::ClassMatch const&) (editor_help_search.cpp:464) by 0x279C342: EditorHelpSearch::Runner::_phase_class_items() (editor_help_search.cpp:383) by 0x279B5CE: EditorHelpSearch::Runner::_slice() (editor_help_search.cpp:286) by 0x279E764: EditorHelpSearch::Runner::work(unsigned long) (editor_help_search.cpp:579) by 0x2799774: EditorHelpSearch::_notification(int) (editor_help_search.cpp:125) by 0x279F219: EditorHelpSearch::_notificationv(int, bool) (editor_help_search.h:42) by 0x3687F79: Object::notification(int, bool) (object.cpp:950) ``` I think that this is very similar issue as #29215 **Minimal project**: This should happens with empty project
bug,topic:editor,confirmed
low
Critical
455,605,406
vue
Regular slot and scoped slot with same name shouldn't be allowed
### Version 2.6.10 ### Reproduction link [https://codepen.io/lee88688/pen/jjPpBm?editors=1010](https://codepen.io/lee88688/pen/jjPpBm?editors=1010) ### Steps to reproduce As seen in the CodePen, the "hello" component has only one slot, but two div tags are rendered in the end. I had a look at Vue's code ([renderSlot](https://github.com/vuejs/vue/blob/530ca1b2db315fbd0e360807b2031d26665c5d3d/src/core/instance/render-helpers/render-slot.js#L8)), which seems to treat the normal slot as a scoped slot. Why is it like this? The following code is the render function of the hello component compiled by Vue.compile; _t is the renderSlot function. When a normal slot and a scoped slot with the same name are in the same component, the problem above is present. ```javascript (function anonymous() { with (this) { return _c('div', [_t("default"), _v(" "), _t("default", null, { "x": x })], 2) } } ) ``` ### What is expected? Render just one slot. ### What is actually happening? It renders the same slot twice. <!-- generated by vue-issues. DO NOT REMOVE -->
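For illustration, here is a hedged sketch of the deduplication this report asks for: when a regular slot and a scoped slot share a name, render that name only once (here the scoped slot wins). This is a plain TypeScript model of the desired behavior, not Vue's actual `renderSlot` implementation, and all names in it are made up:

```typescript
type SlotFn = (props?: Record<string, unknown>) => string;

// Regular slots are precompiled content; scoped slots are functions of props.
interface Slots {
  regular: Record<string, string>;
  scoped: Record<string, SlotFn>;
}

// Render each slot name at most once: a scoped slot shadows a regular slot
// with the same name instead of both being rendered.
function renderSlots(slots: Slots, props: Record<string, unknown>): string[] {
  const out: string[] = [];
  const names = new Set([
    ...Object.keys(slots.regular),
    ...Object.keys(slots.scoped),
  ]);
  for (const name of names) {
    const scoped = slots.scoped[name];
    out.push(scoped ? scoped(props) : slots.regular[name]);
  }
  return out;
}
```

With this rule the compiled render function above would emit one div for "default" instead of two.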
contribution welcome,feature request,warnings
low
Major
455,615,803
create-react-app
Using axios is not working in IE9-IE11
**### Environment** node version: 11.12.0, npm version: 6.7.0 create-react-app:3.0.1 This problem is specific to IE11 running on Windows 10 **index.js below:** ``` import 'react-app-polyfill/ie9'; import 'react-app-polyfill/ie11'; import 'react-app-polyfill/stable'; import 'es6-promise/auto' ``` **server.js below:** ``` export function getBlockLists(para){ let url = `/tk/block/query/blockList`; let result = myAxios.get(url, {params:para}) console.log(result,33398) return result; } ``` **component below:** ``` getLists_block(){ console.log(this.state.page,111) getBlockLists(this.state.page) .then(({data})=>{ console.log(data,222) }) .catch((res)=>{ console.log(res,333) }) } componentDidMount(){ this.getLists_block(); } ``` **But in IE11 the request is not sent; error below:** ![QQ20190613-1](https://user-images.githubusercontent.com/23735606/59418461-a5d4b880-8dfb-11e9-93e9-d4d7ae25deff.jpg)
issue: needs investigation
low
Critical
455,625,281
TypeScript
Separate directories for map files
Hi! I find it rather odd that the option mapRoot exists without a corresponding option that outputs the map files to said directory. I have to manually move them after transpiling (AND keep the same directory structure), which is annoying. Here is a SO question which is valid for my case as well: https://stackoverflow.com/questions/43052200/can-i-place-webpack-source-maps-and-source-code-files-in-seprate-folders I guess it's kinda related to: https://github.com/microsoft/TypeScript/issues/6723
Suggestion,Awaiting More Feedback
low
Minor
455,682,828
vue
Computed properties can have widely different performance characteristics on client and server (because they are not cached during SSR)
### Version [`v2.4.3`](https://github.com/vuejs/vue/releases/tag/v2.4.3) - current ([`v2.6.10`](https://github.com/vuejs/vue/releases/tag/v2.6.10)) ### Reproduction link [https://jsfiddle.net/anc6Lf23/2/](https://jsfiddle.net/anc6Lf23/2/) ### Steps to reproduce 1. Open the JSFiddle 2. You will see the time it takes to compute a value without caching of computed properties (~1 second) in the output 3. Set "CACHE = true" in the first line of the JS part to see the time it takes to compute the value with caching (~2 ms) This issue concerns SSR, but for simplicity I created the fiddle, which emulates the behavior of the server renderer: - `CACHE = true` – the behavior we usually have in the client - `CACHE = false` – the behavior during SSR ### What is expected? I would expect computed properties to have comparable performance characteristics on the server and client, so that I don't need to write custom code. I.e. I would expect computed properties to be cached during SSR. ### What is actually happening? Computed properties are not cached and therefore have drastically different performance characteristics in some cases. --- ### Description of the issue Since [computed properties are not cached during SSR](https://ssr.vuejs.org/guide/universal.html#component-lifecycle-hooks) some components unexpectedly take significantly longer to render. This is the case if the property is heavy to compute or if it is accessed a lot. I would usually expect a computed property to have constant time complexity (`O(1)`) no matter how often we access it. But on the server it suddenly becomes linear time complexity (`O(n)`). This is especially critical when the computed is accessed in a loop. When we have multiple computed properties relying on each other, each containing loops, this gets *really* bad. Then it has polynomial time with the exponent being the number of nested computed properties (E.g. 
`O(n^3)` for three levels of computes, like in the JSFiddle) ### Real world example I noticed this issue because our server renderer suddenly took multiple seconds (5-8 seconds) to respond after enabling a new component for SSR. Normally rendering the app on the server takes about 100ms. The affected component is part of a proprietary code base but it is similar to the one in the JSFiddle. You can see the component in production here: - Open: https://www.ikea.com/de/de/bereiche/wohnzimmer/ - Click on the "Serien" button - The flyout content is the affected component After finding out about this I also investigated other occurrences of this: In our Vuex store we did not have any nested getters with loops, as described above, however some of the getters are somewhat heavy to compute (~1ms) and accessed a lot in various templates. So I decided to introduce a very simple custom caching layer to our store. This sped up server rendering by about 20%. **This could also be a low-hanging fruit for optimizing SSR performance:** Based on analyzing our own app, I would roughly estimate that caching all computed properties could speed up server rendering by about 30% in an average Vue.js app. For me this issue was hard to understand because the affected code seemed harmless at first. ### Mitigation To mitigate this issue you can move access to computes out of loops: access them once, store them in a local variable and then use this variable in the loop. This is generally a good idea since any lookup of a property on a Vue.js VM has a small cost. However this is not possible if you have a loop inside your templates. 
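The hoisting mitigation described in the issue can be sketched in plain TypeScript; `heavyComputed` below is a hypothetical stand-in for a computed property whose body re-runs on every access, which is the uncached SSR behavior:

```typescript
// Hypothetical stand-in for a computed property that is re-evaluated on
// every access, as happens during SSR where computeds are not cached.
let computeCalls = 0;
function heavyComputed(): number[] {
  computeCalls++; // counts how often the "computed" body actually runs
  return Array.from({ length: 1000 }, (_, i) => i);
}

// Anti-pattern: accessing the computed inside a loop triggers O(n) re-evaluations.
function sumUnhoisted(): number {
  let sum = 0;
  for (let i = 0; i < 1000; i++) {
    sum += heavyComputed()[i];
  }
  return sum;
}

// Mitigation: read the computed once into a local variable, then loop over that.
function sumHoisted(): number {
  const values = heavyComputed(); // single access
  let sum = 0;
  for (const v of values) {
    sum += v;
  }
  return sum;
}
```

Both functions return the same result, but the unhoisted version re-runs the computed body once per iteration, which is where the polynomial blow-up comes from when such loops nest across several computed properties.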
### References - The code that controls this behavior: https://github.com/vuejs/vue/blob/0948d999f2fddf9f90991956493f976273c5da1f/src/core/instance/state.js#L208-L219 - The issue that lead to this behavior being introduced: vuejs/vuex#877 - The commit that introduced this behavior 06741f32 - Earlier occasion of someone stumbling over this: - nuxt/nuxt.js#2447 - https://forum.vuejs.org/t/ssr-performance-problems-due-to-lack-of-caching/24653/5 <!-- generated by vue-issues. DO NOT REMOVE -->
discussion,feat:ssr
low
Major
455,733,448
rust
`error[E0275]: overflow evaluating the requirement` from type inference
Example: https://play.rust-lang.org/?version=nightly&mode=debug&edition=2015&gist=03db32f00662d579ef0d91306b7c2ef5 If the trait type is listed explicitly instead of using type inference, then the error goes away. Likewise, if the recursive implementation for `Vec` is commented out, the error goes away. This seems like something which should work, but the compiler gets stuck looking at the implementations for `Vec`, when there is a readily available implementation for `&Bar` which would work just fine.
T-compiler,A-inference,C-bug
low
Critical
455,736,379
pytorch
Improve multithreaded random number generation (RNG)
We received some internal reports (for FB employees: https://fb.workplace.com/groups/329222650990087/permalink/438424466736571/) that multithreaded processing with RNG performs poorly. This is probably because the majority of our CPU code takes out a lock for the entirety of the kernel in question. @syed-ahmed do any of your upcoming plans for RNG fix this problem?
module: cpu,triaged,module: random,module: multithreading
low
Minor
455,751,301
flutter
Ability to disable Dismissible's clipper
Dismissible implements a [_DismissibleClipper](https://github.com/flutter/flutter/blob/f31fc1bd0faa9b706468cd4fed58b55e7d8eb509/packages/flutter/lib/src/widgets/dismissible.dart#L557) which prevents the background widget from displaying overflowing content beyond the current sliding state. This causes widgets that are not perfectly square (For example, Cards with rounded corners) to incorrectly hide the background. [Example video](https://i.imgur.com/q28z3oA.mp4) ## Steps to Reproduce Create the default example project Replace the two `Text` in the column by ```dart Container( height: 200, child: Dismissible( key: ValueKey("hi"), child: Card( child: Text("hi"), ), background: Card( color: Colors.red, ), ), ) ``` ``` [✓] Flutter (Channel master, v1.7.4-pre.74, on Linux, locale en_US.UTF-8) • Flutter version 1.7.4-pre.74 at /opt/flutter • Framework revision 75b5ceccd9 (20 hours ago), 2019-06-12 14:03:56 -0400 • Engine revision ab5c14b949 • Dart version 2.3.3 (build 2.3.3-dev.0.0 3166bbf24b) ```
c: new feature,framework,f: material design,c: proposal,has reproducible steps,P3,workaround available,team-design,triaged-design,found in release: 3.16,found in release: 3.18
low
Major
455,765,041
vscode
Breakpoint moves with copy line up/down action
**Version:** - VSCode Version: 1.35.0 - OS Version: Windows 10.0.17134.799 **Steps to Reproduce:** 1. Create a new code file (of any supported language). 2. Write a few lines of code. 3. Place a breakpoint on a line. 4. Copy the line using _Copy Line Up_ or _Copy Line Down_ actions. **Bug:** The breakpoint moves with the copied line. It should stay. **Does this issue occur when all extensions are disabled?** Yes
editor-contrib,under-discussion
low
Critical
455,770,894
youtube-dl
ProgramCode from Page Source as filename?
<!-- ###################################################################### WARNING! IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE ###################################################################### --> ## Checklist <!-- Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl: - First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2019.06.08. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED. - Search the bugtracker for similar feature requests: http://yt-dl.org/search-issues. DO NOT post duplicates. - Finally, put x into all relevant boxes (like this [x]) --> - [x] I'm reporting a feature request - [x] I've verified that I'm running youtube-dl version **2019.06.08** - [x] I've searched the bugtracker for similar feature requests including closed ones ## Description <!-- Provide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible. --> I'm downloading videos from UFC FIGHT PASS and they use a weird file label where there's a subtitle (for example the title of the PPV but not the 'UFC 2' part of the title), if I go on the webpage and click on View Page Source (on Firefox) and search for "UFC 2" I see that it's under ProgramCode. Is there a way this can be included in YouTube-DL so I can use in my file label? That way it can be (ProgramCode) %(title)s.%(ext)s Here's an example view-source:https://www.ufc.tv/video/no-way-out Search for ProgramCode and you'll see what I mean.
request
low
Critical
455,816,529
TypeScript
Add a new type PublicOf
## Suggestion Add the `PublicOf` type suggested in this [issue's comment](https://github.com/microsoft/TypeScript/issues/471#issuecomment-381842426) by @DanielRosenwasser ## Use Cases Implement a `class` public properties/methods without the need to implement private/protected ones. ## Examples ```typescript // type to add type PublicOf<T> = { [P in keyof T]: T[P] } ``` ```typescript // use case class C { private foo(); public bar(); } // No need to implement private method foo ! class D implements PublicOf<C> { public bar() {} } ``` ## Checklist My suggestion meets these guidelines: * [X] This wouldn't be a breaking change in existing TypeScript/JavaScript code * [X] This wouldn't change the runtime behavior of existing JavaScript code * [X] This could be implemented without emitting different JS based on the types of the expressions * [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, etc.) * [X] This feature would agree with the rest of [TypeScript's Design Goals](https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals).
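A runnable sketch of the proposed helper (the `PublicOf` name and the class shape are taken from the issue; the member bodies are illustrative):

```typescript
// The mapped type proposed in the issue: keyof T only yields public keys,
// so private/protected members are dropped from the resulting type.
type PublicOf<T> = { [P in keyof T]: T[P] };

class C {
  private secret = 42;                      // implementers need not redeclare this
  public bar(): string { return "bar"; }
}

// D only has to implement the public surface of C.
class D implements PublicOf<C> {
  public bar(): string { return "bar from D"; }
}
```

By contrast, `class D implements C` would be rejected today, because `D` cannot satisfy the private `secret` member declared in `C`.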
Suggestion,In Discussion
low
Minor
455,825,327
pytorch
symbol lookup error: libmkl_intel_lp64.so: undefined symbol: mkl_blas_dsyrk (binaries built with static linking -DBUILD_SHARED_LIBS=OFF fail due to dynamic linker problem)
Steps to reproduce: ``` mkdir build (cd build && cmake -DBUILD_CAFFE2_OPS=OFF -DBUILD_SHARED_LIBS=OFF -DUSE_CUDA=OFF -DUSE_MKLDNN=OFF -DPYTHON_EXECUTABLE=$(which python) -DBUILD_BINARY=ON .. && make -j) ``` Try to run one of the built binaries, e.g., `caffe2_benchmark`. It fails with: ``` build/bin/caffe2_benchmark: symbol lookup error: /scratch/ezyang/pytorch-tmp-env/lib/libmkl_intel_lp64.so: undefined symbol: mkl_blas_dsyrk ``` The cause of this problem is that the binaries are being linked with mkl, but they aren't being linked with `-Wl,--no-as-needed` and so the static library that contains `mkl_blas_dsyrk` is being pruned too. I made a little stub application like: ``` add_executable(example-app "example-app.cpp") target_include_directories(example-app PRIVATE ${ATen_CPU_INCLUDE}) target_link_libraries(example-app PRIVATE caffe2 -Wl,--no-as-needed caffe2::mkl ) set_property(TARGET example-app PROPERTY CXX_STANDARD 11) ``` and this solved the problem
module: build,triaged,module: static linking,module: third_party,has workaround
low
Critical
455,853,629
go
cmd/go: add a new flag to disable unzipping after go mod download
The `go mod download` command caches the module files into the `$GOPATH/pkg/mod/cache/download` by default and extracts them into the `$GOPATH/pkg/mod` after the downloads are complete. This is fine. But today I encountered a problem that reminds me that we may need to disable the unzipping operation at some point. --- So, I ran a program similar to the following using a non-root user: ```go package main import ( "io/ioutil" "log" "os" "os/exec" ) func main() { tempDir, err := ioutil.TempDir("", "foobar") if err != nil { log.Fatalf("failed to create temp dir: %v", err) } cmd := exec.Command("go", "mod", "download", "-json", "golang.org/x/text@latest") cmd.Env = append(os.Environ(), "GO111MODULE=on", "GOPATH="+tempDir) cmd.Dir = tempDir if err := cmd.Run(); err != nil { log.Fatalf("failed to download module: %v", err) } if err := os.RemoveAll(tempDir); err != nil { log.Fatalf("failed to remove temp dir: %v", err) } } ``` Then I got the following failure output: ```bash 2019/06/14 01:03:21 failed to remove temp dir: unlinkat /var/folders/j6/2330_fdx4tn9t9tx4y117w700000gn/T/foobar128162500/pkg/mod/golang.org/x/[email protected]/codereview.cfg: permission denied exit status 1 ``` Actually, I just need to download the module files and save them somewhere else (not in the `$GOPATH/pkg/mod/cache/download`), then clear the download history after the download is complete. But I got stuck in that `unlinkat` operation. In my scenario, I don't actually need the `go` command to extract the zip files I downloaded into the `$GOPATH/pkg/mod`, but the `go` command doesn't ask me if it can do that. This not only causes unnecessary computation, but also causes problems such as the inability to remove the unzipped files if you don't have enough permissions. So I think we should probably add a new flag like `-disable-unzip` to `go mod download` to disable the unzipping operation.
NeedsInvestigation,FeatureRequest,GoCommand,modules
low
Critical
455,869,584
pytorch
torch.save also saves docstrings into pickle for some reason
Steps to reproduce: 1. Run: ``` from torch import nn import torch torch.save(nn.Conv2d(1, 1, 1, 1), 'moo.pt') ``` 2. Open `moo.pt` in your editor Expected result: a bunch of unintelligible binary gobbeldygook Actual result: I see a docstring!!! ``` <80>^B<8a> lü<9c>Fù j¨P^Y.<80>^BMé^C.<80>^B}q^@(X^P^@^@^@protocol_versionq^AMé^CX^M^@^@^@little_endianq^B<88>X ^@^@^@type_sizesq^C}q^D(X^E^@^@^@shortq^EK^BX^C^@^@^@intq^FK^DX^D^@^@^@longq^GK^Duu.<80>^B(X^F^@^@^@moduleq^@ctorch.nn.modules.conv Conv2d q^AX7^@^@^@/data/users/ezyang/pytorch-tmp/torch/nn/modules/conv.pyq^BX¸^[^@^@class Conv2d(_ConvNd): r"""Applies a 2D convolution over an input signal composed of several input planes. In the simplest case, the output value of the layer with input size :math:`(N, C_{\text{in}}, H, W)` and output :math:`(N, C_{\text{out}}, H_{\text{out}}, W_{\text{out}})` can be precisely described as: ... ```
module: serialization,triaged
low
Major
455,912,586
rust
annotate-snippet emitter: Include suggestions in output
Part of https://github.com/rust-lang/rust/issues/59346 In order for the new `AnnotateSnippetEmitterWriter` to include suggestions in the output, we essentially have to pass `&db.suggestions` to `emit_messages_default` and deal with a couple of edge-cases. Relevant `FIXME`: https://github.com/rust-lang/rust/blob/57a3300c2538fd1044ce45d9ef3b82182acb57ae/src/librustc_errors/annotate_snippet_emitter_writer.rs#L35-L36 `emitter.rs` equivalent: https://github.com/rust-lang/rust/blob/0e4a56b4b04ea98bb16caada30cb2418dd06e250/src/librustc_errors/emitter.rs#L84-L115 * The tricky part is figuring out the first half of the conditional. We probably need it in the new emitter, too. Is it enough to just copy it over? Maybe extract that code so that the code is shared in both emitters? * Otherwise it's just passing `&db.suggestions` through * Should take into account the MAX_SUGGESTIONS value somewhere (add a UI test for this) <!-- TRIAGEBOT_START --> <!-- TRIAGEBOT_ASSIGN_START --> This issue has been assigned to @phansch via [this comment](https://github.com/rust-lang/rust/issues/61809#issuecomment-526804971). <!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"phansch"}$$TRIAGEBOT_ASSIGN_DATA_END --> <!-- TRIAGEBOT_ASSIGN_END --> <!-- TRIAGEBOT_END -->
C-enhancement,A-diagnostics,T-compiler,E-help-wanted
low
Critical
455,914,240
flutter
Semantics for Material Date picker emits unnecessary semantics nodes
This may not be a bug with DatePicker, but rather with the semantics compiler. Reproduction: With semantics enabled, open a DatePicker and dump the tree. You'll see several `SemanticsNode`s for the weekday header (each one is actually an `ExcludeSemantics`), and for each empty container for days of the week that are part of the previous month. The tree should not include these nodes - they do not affect anything meaningful and make for extra time in tree operations.
framework,f: material design,a: accessibility,P2,team-design,triaged-design
low
Critical
455,919,289
flutter
Fix semantics offset rect for inline widgets
In `Paragraph.assembleSemanticsNode`, we are not properly including the offset of the placeholder rect in the semantics rect for the placeholder. We should fix this, and fix the tests regarding this. See https://github.com/flutter/flutter/pull/34368
framework,a: accessibility,a: typography,P2,team-framework,triaged-framework
low
Minor
455,925,820
kubernetes
Remove the /cluster directory
For years we have publicly stated that the /cluster directory is deprecated and not maintained. However every cycle it is updated and there are bugs found+fixed by sig-cluster-lifecycle. I'd like to enumerate what needs to get done in order for us to wholesale remove the /cluster directory. /assign @dims @spiffxp @justinsb @timothysc /cc @liggitt @neolit123
area/test,sig/scalability,area/provider/gcp,sig/cluster-lifecycle,kind/feature,sig/testing,priority/important-longterm,lifecycle/frozen,sig/cloud-provider,area/code-organization,needs-triage
medium
Critical
455,942,525
TypeScript
Placeholder Type Declarations
# Background There are times when users need to express that a type *might* exist depending on the environment in which code will eventually be run. Typically, the intent is that if such a type can be manufactured, a library can support operations on that type. One common example of this might be the `Buffer` type built into Node.js. A library that operates in both browser and Node.js contexts can state that it handles a `Buffer` if given one, but the capabilities of `Buffer` aren't important to the declarations. ```ts export declare function printStuff(str: string): void; /** * NOTE: Only works in Node.js */ export declare function printStuff(buff: Buffer): void; ``` One technique to get around this is to "forward declare" `Buffer` with an empty interface in the global scope which can later be merged. ```ts declare global { interface Buffer {} } export declare function printStuff(str: string): void; /** * NOTE: Only works in Node.js */ export declare function printStuff(buff: Buffer): void; ``` For consuming implementations, a user might need to say that a type not only exists, but also supports some operations. To do so, it can add those members appropriately, and as long as they are identical, they will merge correctly. For example, imagine a library that can specially operate on HTML DOM nodes. ```ts function printStuff(node: HTMLElement) { console.log(node.innerText); } ``` A user might be running in Node.js or might be running with `"lib": ["dom"]`, so our implementation can forward-declare `HTMLElement`, while also declaring that it contains `innerText`. ```ts declare global { interface HTMLElement { innerText: string; } } export function printStuff(node: HTMLElement) { console.log(node.innerText); } ``` # Issues Using interface merging works okay, but it has some problems. ## Conflicts Interface merging doesn't always correctly resolve conflicts between declarations in two interfaces. 
For example, imagine two declarations of `Buffer` that merge, where a function that takes a `Buffer` expects it to have a `toString` property. If both versions of `toString` are declared as a method, the two appear as overloads which is slightly undesirable. ```ts declare global { interface Buffer { // We only need 'toString' toString(): string; } } export function printStuff(buff: Buffer) { console.log(buff.toString()); } //// // in @types/node/index.d.ts //// interface Buffer { toString(encoding?: string, start?: number, end?: number): string; } ``` Alternatively, if any declaration of `toString` is a simple property declaration, then all other declarations will be considered collisions which will cause errors. ```ts declare global { interface Buffer { toString(): string } } //// // in @types/node/index.d.ts //// interface Buffer { toString: (encoding?: string, start?: number, end?: number) => string; } ``` The former is somewhat undesirable, and the latter is unacceptable. ## Limited to Object Types Another problem with the trick of using interfaces for forward declarations is that it only works for classes and interfaces. It doesn't work for, say, type aliases of union types. It's important to consider this because it means that the forward-declaration-with-an-interface trick breaks as soon as you need to convert an interface to a union type. For example, we've been taking steps recently to convert `IteratorResult` to a union type. ## Structural Compatibility An empty interface declaration like ```ts interface Buffer {} ``` allows assignment from every type except for `unknown`, `null`, and `undefined`, because any other type is assignable to the empty object type (`{}`). # Proposal Proposed is a new construct intended to declare the existence of a type. ```ts exists type Foo; ``` A *placeholder type declaration* acts as a placeholder until a type implementation is available. 
It provides a type name in the current scope, even when the concrete implementation is unknown. When a non-placeholder declaration is available, all references to that type are resolved to an *implementation type*. The example given is relatively simple, but placeholder types can also support constraints and type parameters. ```ts // constraints exists type Foo extends { hello: string }; // type parameters exists type Foo<T>; // both! exists type Foo<T, U> extends { toString(): string }; ``` A formal grammar might appear as follows. > *PlaceholderTypeDeclaration* :: <br /> > &emsp;&emsp;`exists` <sub>[No *LineTerminator* here]</sub> `type` *BindingIdentifier* *TypeParameters*<sub>opt</sub> *Constraint*<sub>opt</sub> `;` <br /> ## Implementation Types A placeholder type can co-exist with what we might call an *implementation type* - a type declared using an interface, class, or type alias with the same name as the placeholder type. In the presence of an implementation type, a placeholder defers to that implementation. In other words, for all uses of a type name that references both a placeholder and an implementation, TypeScript will pretend the placeholder doesn't exist. ## Upper Bound Constraints A placeholder type is allowed to declare an upper bound, and uses the same syntax as any other type parameter constraint. ```ts exists type Bar extends { hello: string }; ``` This allows implementations to specify the bare-minimum of functionality on a type. ```ts exists type Greeting extends { hello: string; } function greet(msg: Greeting) { console.log(msg.hello); } ``` If a constraint isn't specified, then the upper bound is implicitly `unknown`. When an implementation type is present, the implementation is checked against its constraint to see whether it is compatible. If not, an implementation should issue an error. ```ts exists type Foo extends { hello: string }; // works! 
type Foo = { hello: string; world: number; }; exists type Bar extends { hello: string; } // error! type Bar = { hello: number; // <- wrong implementation of 'hello' world: number; } ``` ## Type Parameters A placeholder type can specify type parameters. These type parameters specify a minimum type argument count for consumers, and a minimum type parameter count for implementation types - and the two may be different! For example, it is perfectly valid to specify only type arguments which don't have defaults at use-sites of a placeholder type. ```ts exists type Bar<T, U = number>; // Acceptable to omit an argument for 'U'. function foo(x: Bar<string>) { // ... } ``` But an implementation type *must* declare all type parameters, even default-initialized ones. ```ts exists type Bar<T, U = number>; // Error! // The implementation of 'Bar' needs to define a type parameter for 'U', // and it must also have a default type argument of 'number'. interface Bar<T> { // ... } ``` Whenever multiple placeholder type or implementation type declarations exist, their type parameter names must be the same. Different instantiations of placeholders that have type parameters are only related when their type arguments are identical - so for the purposes of variance probing, type parameters are considered *invariant* unless an implementation is available. ## Relating Placeholder Types Because placeholder types are just type variables that recall their type arguments, relating placeholders appears to fall out from the existing relationship rules. The intent is * Two instantiations of the same placeholder type declaration are only related when their type arguments are identical. * A placeholder type is assignable to any type whose constraint is a subtype of the target. In effect, two rules in any of our type relationships should cover this: > * *S* and *T* are identical types. > * *S* is a type parameter and the constraint of *S* is \[\[related to]] *T*. 
## Merging Declarations Because different parts of an application may need to individually declare that a type exists, multiple placeholder types of the same name can be declared, and much like `interface` declarations, they can "merge" in their declarations. ```ts exists type Beetlejuice; exists type Beetlejuice; exists type Beetlejuice; ``` In the event that multiple placeholder types merge, every corresponding type parameter must be identical. On the other hand, placeholder constraints can all differ. ```ts interface Man { man: any } interface Bear { bear: any } interface Pig { pig: any } exists type ManBearPig extends Man; exists type ManBearPig extends Bear; exists type ManBearPig extends Pig; ``` When multiple placeholder types are declared, their constraints are implicitly intersected to a single upper-bound constraint. In our last example, `ManBearPig`'s upper bound is effectively `Man & Bear & Pig`. In our first example with `Beetlejuice`, the upper bound is `unknown & unknown & unknown` which is just `unknown`. # Prior Art C and C++ also support forward declarations of types, and these are typically used for opaque type handles. The core idea is that you can declare that a type exists, but can never directly hold a value of that type because its shape/size is never known. Instead, you can only deal with pointers to these forward declared types. ```c struct FileDescriptor; FileDescriptor* my_open(char* path); void my_close(FileDescriptor* fd); ``` This allows APIs to abstract away the shape of forward-declared types entirely, meaning that the size/shape can change. Because these can only be pointers, there isn't much you can do with them at all (unlike this implementation). Several other programming languages also support some concept of "opaque" or "existential" types, but are generally not used for the same purposes. Java has wildcards in generics, which are typically used to allow one to say only a bit about how a collection can be used (i.e. 
you can only write `Foo`s to some collection, or read `Bar`s, or you can do absolutely nothing with the elements themselves). Swift allows return types to be opaque in the return type by specifying that it is returning `some SuperType` (meaning some type variable that extends `SuperType`). # FAQ and Rationale ## Why can placeholder types have multiple declarations with different constraints? We have two "obvious" options. 1. Enforce that constraints are all identical to each other. 2. Allow upper-bound constraints to be additive. This is effectively like intersecting the constraints so that a given type implementation has to satisfy all placeholder declarations. I believe that additive constraints are the more desirable behavior for a user. The idea is that different parts of your application may need different capabilities, and given that `interface`s can already model this with interface merging, using intersections provides a similar mechanism. ## Are these just named bounded existential types? ~~In part, yes! When no implementation type exists, a placeholder type acts as a bounded existential type variable.~~ Sorry I'm not sure what you're talking about. Please move along and don't write blog posts about how TypeScript is adding bounded existential types. ## Can placeholder types escape scopes? ```ts function foo() { exists type Foo; return null as any as Foo; } ``` Maybe! It might be possible to disallow placeholder types from escaping their declaring scope. It might also be reasonable to say that a placeholder can only be declared in the top level of a module or the global scope. ## Do we need the `exists` keyword? !["Drop the `exists` - it's cleaner](https://user-images.githubusercontent.com/972891/59453489-dea56b00-8dc4-11e9-97f9-f6afc4aa58c2.png) Maybe we don't need the `exists` keyword - I am open to doing so, but wary that we are unnecessarily abusing the same syntax. 
I'd prefer to be explicit that this is a new concept with separate syntax, but if we did drop the `exists`, we would change the grammar to the following. > *PlaceholderTypeDeclaration* :: <br /> > &emsp;&emsp;`type` <sub>[No *LineTerminator* here]</sub> *BindingIdentifier* *TypeParameters*<sub>opt</sub> *Constraint*<sub>opt</sub> `;` <br />
Suggestion,In Discussion
high
Critical
455,955,433
flutter
Determine if iOS's rapid render upon app switch is vsync'ed
Determine if iOS's rapid render upon app switch is vsync'ed
team,platform-ios,engine,P2,team-ios,triaged-ios
low
Minor
455,962,437
terminal
If setting change requires restart, notify user
# Summary If a setting is changed that requires the Terminal to restart in order for the setting change to take effect, consider displaying a message to the user prompting them to restart their Terminal. Also, consider making it easier to identify settings that require a Terminal restart if changed, perhaps using an emoji/visual glyph of some kind? # Related This feature is related to #1252 suggesting that the Terminal should support restart/resume. # Proposed technical implementation details (optional) <!-- A clear and concise description of what you want to happen. -->
Area-Settings,Product-Terminal,Issue-Task
low
Minor