id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---|
97,439,818 | opencv | 2.4.10 Stitching, C++ - Unhandled exception | Transferred from http://code.opencv.org/issues/4236
```
|| M W on 2015-03-11 12:00
|| Priority: Normal
|| Affected: branch '2.4' (2.4-dev)
|| Category: stitching
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: x64 / Windows
```
## 2.4.10 Stitching, C++ - Unhandled exception
```
I'm trying to stitch three images together. To do this, I chose OpenCV 2.4.10 and Microsoft Visual C++ 2010 Express.
The images are 1500x1500px and in CV_8UC3 when read.
I'm building for 32-bit and have already got a few other things working with OpenCV, so I assume the project is set up correctly with paths etc.
The odd thing is that I get this error only sometimes, and only if I try to stitch more than two images.
Here is the error message:
<pre>
Unhandled exception at 0x5841dcaa in Stitching Test.exe: 0xC0000005: Access violation reading location 0x00000004.
</pre>
After that I am automatically taken to line 99 of "Chores.cpp" or line 189 of "TaskCollection.cpp", so I think that's the source of the error. (Path: C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\crt\src)
And here is the code:
<pre><code class="cpp">
#include <iostream>
//OPENCV
#include <opencv2\core\core.hpp>
#include <opencv2\highgui\highgui.hpp>
#include <opencv2\imgproc\imgproc.hpp>
//OPENCV STITCHING
#include <opencv2\stitching\stitcher.hpp>
using namespace std;
using namespace cv;
int main(){
Mat panoramaImage;
vector<Mat> inputImages;
inputImages.push_back(imread("../../V1.bmp"));
inputImages.push_back(imread("../../V2.bmp"));
inputImages.push_back(imread("../../V3.bmp"));
Stitcher stitcher = Stitcher::createDefault();
Stitcher::Status stitcherStatus = stitcher.stitch(inputImages, panoramaImage);
imshow("Stitching Result", panoramaImage);
waitKey(0);
return 0;
}
</code></pre>
```
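The sample never checks the value of `stitcherStatus`, so a failed stitch still reaches `imshow` with an empty `panoramaImage`. A minimal sketch of a guard, reusing the reporter's variable names (defensive hygiene, not a confirmed fix for the access violation):
```
// drop-in replacement for the tail of the reporter's main():
Stitcher::Status stitcherStatus = stitcher.stitch(inputImages, panoramaImage);
if (stitcherStatus != Stitcher::OK)
{
    cerr << "Stitching failed, status = " << int(stitcherStatus) << endl;
    return -1; // avoid imshow() on an empty panorama
}
imshow("Stitching Result", panoramaImage);
waitKey(0);
return 0;
```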
## History
##### M W on 2015-03-11 12:04
```
"Thread in questions":http://answers.opencv.org/question/57234/opencv-stitching-c-unhandled-exception/
```
| bug,auto-transferred,priority: normal,affected: 2.4,category: stitching | low | Critical |
97,439,950 | opencv | convexityDefects result is located on the hull but has a large depth | Transferred from http://code.opencv.org/issues/4242
```
|| Christoph Pacher on 2015-03-17 16:48
|| Priority: Normal
|| Affected: branch 'master' (3.0-dev)
|| Category: imgproc, video
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: x86 / Windows
```
## convexityDefects result is located on the hull but has a large depth
```
Hi,
Over here:
http://answers.opencv.org/question/57454/convexitydefects-computes-wrong-result/
I posted my experience with wrong results from convexityDefects along with some sample code.
I get wrong results with the hull ordered cw or ccw, so I am not sure if this is connected to Bugfix #3545.
Is this a bug?
Thanks
Christoph
```
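For reference, a minimal sketch (not the reporter's code) of the call sequence `convexityDefects` expects: the hull must be passed as indices, and the fourth component of each defect is a fixed-point depth that needs dividing by 256 to get pixels, a common source of surprisingly large reported depths:
```
#include <opencv2/imgproc/imgproc.hpp>
#include <cstdio>
#include <vector>

void printDefects(const std::vector<cv::Point>& contour)
{
    std::vector<int> hull;
    cv::convexHull(contour, hull, false, false); // returnPoints=false -> indices
    std::vector<cv::Vec4i> defects;
    cv::convexityDefects(contour, hull, defects);
    for (size_t i = 0; i < defects.size(); i++)
    {
        double depthPx = defects[i][3] / 256.0; // fixed-point depth -> pixels
        std::printf("defect %u: depth %.2f px\n", (unsigned)i, depthPx);
    }
}
```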
## History
##### Vadim Pisarevsky on 2015-04-27 12:41
```
- Category set to imgproc, video
```
| bug,auto-transferred,priority: normal,category: imgproc,category: video,affected: 3.4 | low | Critical |
97,440,150 | opencv | Hot colormap does not match Octave/Matlab | Transferred from http://code.opencv.org/issues/4254
```
|| Christopher Boyd on 2015-03-26 01:06
|| Priority: Low
|| Affected: branch 'master' (3.0-dev)
|| Category: imgproc, video
|| Tracker: Bug
|| Difficulty: Easy
|| PR:
|| Platform: Any / Any
```
## Hot colormap does not match Octave/Matlab
```
Octave updated the Hot colormap to match Matlab back in 2013.
I noticed this on Android with OCV 2.4.10, but confirmed that the issue exists in the master branch as well.
Octave moved the transition points from 2/5 and 4/5 to 3/8 and 6/8, respectively.
My Android code to replicate these colormaps with a LinearGradient:
// New octave:
return getDrawable(new int[] {0xFF000000, 0xFFFF0000, 0xFFFFFF00, 0xFFFFFFFF}, new float[] {0f, 3/8f, 6/8f, 1f});
// Current OCV:
return getDrawable(new int[] {0xFF000000, 0xFFFF0000, 0xFFFFFF00, 0xFFFFFFFF}, new float[] {0f, 2/5f, 4/5f, 1f});
Moreover, the current colormaps seem to simply be hard-coded values rather than functions.
Is there a reason for that?
```
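For illustration, a sketch (not OpenCV's implementation) of computing a 256-entry hot LUT as a function of the two transition points, so switching between the old 2/5, 4/5 and Octave's 3/8, 6/8 values is a parameter change rather than a re-tabulation:
```
#include <opencv2/core/core.hpp>
#include <algorithm>

cv::Mat makeHotLut(double t1 = 3.0 / 8.0, double t2 = 6.0 / 8.0)
{
    cv::Mat lut(256, 1, CV_8UC3);
    for (int i = 0; i < 256; i++)
    {
        double x = i / 255.0;
        double r = std::min(1.0, x / t1);                              // black -> red
        double g = std::min(1.0, std::max(0.0, (x - t1) / (t2 - t1))); // red -> yellow
        double b = std::min(1.0, std::max(0.0, (x - t2) / (1.0 - t2))); // yellow -> white
        lut.at<cv::Vec3b>(i) = cv::Vec3b(cv::saturate_cast<uchar>(b * 255),
                                         cv::saturate_cast<uchar>(g * 255),
                                         cv::saturate_cast<uchar>(r * 255)); // BGR order
    }
    return lut;
}
```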
## History
##### Christopher Boyd on 2015-03-26 01:12
```
Octave change log:
http://hg.savannah.gnu.org/hgweb/octave/rev/6964e6b92fc1
```
##### Vadim Pisarevsky on 2015-04-27 11:22
```
- Category set to imgproc, video
```
| bug,auto-transferred,category: imgproc,category: video,priority: low,affected: 3.4 | low | Critical |
97,440,337 | opencv | getTextSize return incorrect value | Transferred from http://code.opencv.org/issues/4261
```
|| philippe pichard on 2015-04-01 20:21
|| Priority: Low
|| Affected: branch '2.4' (2.4-dev)
|| Category: imgproc, video
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: x64 / Linux
```
## getTextSize return incorrect value
```
With the code sample of Bug #321.
In this example the text goes out of the left, top and bottom borders. (tested with v2.4.11)
Text = Some bugs left ! {:[)
FontFace = 6
FontScale = 3
Thickness = 3
Baseline = 28
```
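For context, a sketch of the placement math this report exercises, assuming the 2.4 API (fontFace 6 corresponds to FONT_HERSHEY_SCRIPT_SIMPLEX): the returned size covers the glyphs above the baseline, and the returned baseline value is the extra room needed below, so both must be accounted for when choosing the text origin:
```
#include <opencv2/core/core.hpp> // getTextSize/putText live here in 2.4
#include <string>

void drawAtTopLeft(cv::Mat& img, const std::string& text)
{
    int fontFace = 6, thickness = 3; // values from the report
    double fontScale = 3;
    int baseline = 0;
    cv::Size sz = cv::getTextSize(text, fontFace, fontScale, thickness, &baseline);
    // The origin is the bottom-left corner of the text, sitting on the baseline;
    // placing it at y = sz.height keeps the glyphs inside the top border.
    cv::putText(img, text, cv::Point(0, sz.height), fontFace, fontScale,
                cv::Scalar::all(255), thickness);
}
```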
## History
##### philippe pichard on 2015-04-01 20:22
```
Previous bug was #3217
```
##### Vadim Pisarevsky on 2015-04-27 11:20
```
- Priority changed from Normal to Low
- Category set to imgproc, video
```
| bug,auto-transferred,category: imgproc,category: video,priority: low,affected: 2.4 | low | Critical |
97,440,555 | opencv | VideoReader_GPU read ending before video complete | Transferred from http://code.opencv.org/issues/4268
```
|| Anthony Trujillo on 2015-04-07 17:50
|| Priority: Normal
|| Affected: 2.4.9 (latest release)
|| Category: gpu (cuda)
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: x64 / Windows
```
## VideoReader_GPU read ending before video complete
```
Hello,
I am using OpenCV 2.4.9 on Windows x64 and, for some odd reason, the read function in the gpu::VideoReader_GPU class is returning before the video is complete. I verified this with a frame count I created on the video. Also, I tested with multiple .avi files with the same result on each. What is interesting is that there is no error on the return. I was wondering if this had been fixed in 2.4.10. Below is the call stack.
> opencv_gpu249d.dll!cv::gpu::VideoReader_GPU::Impl::grab(cv::gpu::GpuMat & frame) Line 225 C++
opencv_gpu249d.dll!cv::gpu::VideoReader_GPU::read(cv::gpu::GpuMat & image) Line 352 + 0x17 bytes C++
```
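A sketch of the frame-count comparison the report describes, assuming the 2.4 gpu API ("video.avi" is a placeholder path):
```
#include <opencv2/gpu/gpu.hpp>
#include <iostream>

int main()
{
    cv::gpu::VideoReader_GPU reader("video.avi"); // placeholder file name
    cv::gpu::GpuMat frame;
    int frames = 0;
    while (reader.read(frame)) // reportedly returns false before the video ends
        ++frames;
    std::cout << "decoded " << frames << " frames" << std::endl;
    return 0;
}
```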
## History
##### Vadim Pisarevsky on 2015-04-27 11:18
```
- Category set to gpu (cuda)
```
| bug,auto-transferred,priority: normal,affected: 2.4,category: gpu/cuda (contrib) | low | Critical |
97,440,881 | opencv | add a way to compile opencv_contrib from an installed OpenCV | Transferred from http://code.opencv.org/issues/4280
```
|| Vincent Rabaud on 2015-04-15 05:27
|| Priority: Normal
|| Affected: None
|| Category: build/install
|| Tracker: Feature
|| Difficulty:
|| PR:
|| Platform: None / None
```
## add a way to compile opencv_contrib from an installed OpenCV
```
opencv_contrib can be built in the same build space as OpenCV, but only in that case.
Now, this is not convenient for package maintainers (me :) ): usually, one repository gives birth to one or several packages. Here, we want two repos to give one package.
Do you think there could be a way to install the proper OpenCV macros from the main OpenCV repo so that we can build opencv_contrib independently? Thx
```
## History
| auto-transferred,priority: normal,feature,category: build/install,RFC | low | Minor |
97,441,015 | opencv | Documentation issue (OpenCV on Eclipse) | Transferred from http://code.opencv.org/issues/4291
```
|| Patrice Gagnon on 2015-04-22 20:13
|| Priority: Normal
|| Affected: branch '2.4' (2.4-dev)
|| Category: documentation
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: x64 / Mac OSX
```
## Documentation issue (OpenCV on Eclipse)
```
Hi,
I just got source code according to this:
http://docs.opencv.org/doc/tutorials/introduction/desktop_java/java_dev_intro.html
As of this writing, git checkout 2.4 appears to be 2.4.11. It builds fine on OSX, thank you.
In the following document:
http://docs.opencv.org/doc/tutorials/introduction/java_eclipse/java_eclipse.html#java-eclipse
I can read: "Select External Folder... and browse to select the folder C:\OpenCV-2.4.6\build\java\x64. If you have a 32-bit system you need to select the x86 folder instead of x64."
My opencv/build directory doesn't contain a java sub-directory. The native library must be generated somewhere else, at least on Mac.
Please let me know where I can find the native libs!
Thanks.
```
## History
##### Vadim Pisarevsky on 2015-04-27 11:10
```
- Category set to documentation
```
| bug,auto-transferred,priority: normal,category: documentation,affected: 2.4 | low | Critical |
97,441,044 | opencv | Does cvtColor handle non-linearities in the sRGB color space? | Transferred from http://code.opencv.org/issues/4293
```
|| Ying Xiong on 2015-04-24 15:45
|| Priority: Normal
|| Affected: None
|| Category: imgproc, video
|| Tracker: Feature
|| Difficulty:
|| PR:
|| Platform: None / None
```
## Does cvtColor handle non-linearities in the sRGB color space?
```
I am wondering whether OpenCV has functions to handle the non-linearities in the sRGB color space.
Say I want to convert a JPEG image from the sRGB color space into the XYZ color space. As specified in this [Wiki page](https://en.wikipedia.org/wiki/SRGB#Specification_of_the_transformation), one needs to first undo the nonlinearities to convert to the linear RGB space, and then multiply with the 3x3 color transform matrix. However, I couldn't find any such discussions in the [cvtColor](http://docs.opencv.org/modules/imgproc/doc/miscellaneous_transformations.html#cvtcolor) documentation. Did I miss something?
Thanks a lot in advance!
```
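For reference, a sketch (not an OpenCV API) of the two-step sRGB to XYZ conversion the report describes, using the constants from the linked specification (D65 white point, components in [0,1]):
```
#include <cmath>

static double srgbToLinear(double c) // undo the sRGB non-linearity, c in [0,1]
{
    return (c <= 0.04045) ? c / 12.92 : std::pow((c + 0.055) / 1.055, 2.4);
}

static void srgbToXyz(double r, double g, double b,
                      double& X, double& Y, double& Z)
{
    r = srgbToLinear(r); g = srgbToLinear(g); b = srgbToLinear(b);
    // 3x3 linear-RGB -> XYZ matrix from the sRGB specification:
    X = 0.4124 * r + 0.3576 * g + 0.1805 * b;
    Y = 0.2126 * r + 0.7152 * g + 0.0722 * b;
    Z = 0.0193 * r + 0.1192 * g + 0.9505 * b;
}
```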
## History
##### Vadim Pisarevsky on 2015-05-25 11:14
```
- Category set to imgproc, video
```
| auto-transferred,priority: normal,feature,category: imgproc,category: video | low | Minor |
97,441,555 | opencv | Move cvSnakeImage to the 3.0 Release of OpenCV | Transferred from http://code.opencv.org/issues/4320
```
|| Alex Rothberg on 2015-05-07 07:07
|| Priority: Normal
|| Affected: None
|| Category: imgproc, video
|| Tracker: Feature
|| Difficulty:
|| PR:
|| Platform: None / None
```
## Move cvSnakeImage to the 3.0 Release of OpenCV
```
There was a PR with some sample code: https://github.com/Itseez/opencv/pull/3737
A blog demoing how to use the old cvSnakeImage function: http://eric-yuan.me/active-contour-snakes/
```
## History
##### Vadim Pisarevsky on 2015-05-25 11:15
```
- Category set to imgproc, video
```
| auto-transferred,priority: normal,feature,category: imgproc,category: video | low | Minor |
97,441,586 | opencv | Some Matlab bindings are not built | Transferred from http://code.opencv.org/issues/4323
```
|| Alex Levinshtein on 2015-05-07 19:36
|| Priority: Normal
|| Affected: branch 'master' (3.0-dev)
|| Category: matlab bindings
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: x64 / Windows
```
## Some Matlab bindings are not built
```
I have noticed that after building the Matlab bindings for OpenCV, mex files are generated and then compiled from the following directory: opencv\build\modules\matlab\src.
However, there is a subdirectory called "private" in that folder (i.e., opencv\build\modules\matlab\src\private) which contains many useful bindings such as the FAST corner detector. These mex files are not compiled when I build OpenCV. When I tried to manually move them from opencv\build\modules\matlab\src\private to opencv\build\modules\matlab\src, they started to compile, but with compilation errors. Any idea how to compile these files, or are they in a "private" folder because they are still buggy and not ready?
```
## History
| bug,auto-transferred,priority: normal,affected: 3.4,category: matlab bindings | low | Critical |
97,441,667 | opencv | VideoCapture leaking handles | Transferred from http://code.opencv.org/issues/4327
```
|| Jim Curry on 2015-05-09 17:17
|| Priority: Normal
|| Affected: branch 'master' (3.0-dev)
|| Category: highgui-camera
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: x64 / Windows
```
## VideoCapture leaking handles
```
I am using OpenCV VideoCapture on Win 7-64 with Python 2.7.9 to capture images from several webcams. The code works but leaks handles severely. I have reduced the code to the snippet below and still see the leak. I have tried OpenCV versions 2.4.10 and 3.0.0rc1 and a version from about a year ago. This was originally asked at http://answers.opencv.org/ and I was advised to bring it here. Any ideas on how to correct this would be appreciated. Thanks.
<pre>
import cv2
def webcam():
    n = 0
    while n < 4:
        cap = cv2.VideoCapture(n)
        cap.release()
        cv2.waitKey(1)
        n = n + 1
while True:
    webcam()
</pre>
```
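A C++ counterpart of the snippet (a sketch, relevant to the highgui-versus-bindings question raised below; the reporter later confirms the leak reproduces in C++):
```
#include <opencv2/highgui/highgui.hpp>

int main()
{
    for (;;)
    {
        for (int n = 0; n < 4; n++)
        {
            cv::VideoCapture cap(n); // open and immediately release, as in the Python repro
            cap.release();
            cv::waitKey(1);
        }
    }
    return 0;
}
```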
## History
##### Vadim Pisarevsky on 2015-05-14 18:35
```
Since you have the hardware configuration where you can reproduce the problem and we don't, I wonder if the problem exists in the C++ version as well. I mean, we need to figure out whether it's a highgui or a Python bindings problem.
- Category set to highgui-camera
```
##### Jim Curry on 2015-05-16 14:38
```
The issue does exist in C++ using both 2.4.10 and 3.0.0rc1.
```
##### Jim Curry on 2015-06-04 23:34
```
This issue is now confirmed in 2.4.11 and 3.0.0.
```
| bug,auto-transferred,priority: normal,category: videoio(camera),affected: 3.4 | low | Critical |
97,441,723 | opencv | CPU CascadeClassifier with HOG not working anymore with opencv 3.0.0.rc1 | Transferred from http://code.opencv.org/issues/4336
```
|| Valentino Proietti on 2015-05-14 09:47
|| Priority: Normal
|| Affected: branch 'master' (3.0-dev)
|| Category: objdetect
|| Tracker: Bug
|| Difficulty: Hard
|| PR:
|| Platform: x64 / Linux
```
## CPU CascadeClassifier with HOG not working anymore with opencv 3.0.0.rc1
```
Hi,
While moving from opencv 2.4.10 to opencv 3.0.0rc1 I found that the CPU CascadeClassifier is not working anymore with HOG.
Digging into the code I discovered that the HOGEvaluator is completely missing in 3.0.
This is the FeatureEvaluator::create() method from the *2.4* version:
<pre>
Ptr<FeatureEvaluator> FeatureEvaluator::create( int featureType )
{
return featureType == HAAR ? Ptr<FeatureEvaluator>(new HaarEvaluator) :
featureType == LBP ? Ptr<FeatureEvaluator>(new LBPEvaluator) :
featureType == HOG ? Ptr<FeatureEvaluator>(new HOGEvaluator) :
Ptr<FeatureEvaluator>();
}
</pre>
And this is the same method from the *3.0* version:
<pre>
Ptr<FeatureEvaluator> FeatureEvaluator::create( int featureType )
{
return featureType == HAAR ? Ptr<FeatureEvaluator>(new HaarEvaluator) :
featureType == LBP ? Ptr<FeatureEvaluator>(new LBPEvaluator) :
Ptr<FeatureEvaluator>();
}
</pre>
They are both extracted from the "cascadedetect.cpp" source file of the relative version.
The "opencv_traincascade" utility included in the 3.0.0rc1 version still permits us to use HOG for training but we don't know how to use the training results anymore.
Is it a bug or a decision?
If the latter: how are files produced by "opencv_traincascade" with HOG supposed to be used?
Thank you
Valentino
```
## History
##### Alex D on 2015-05-15 12:58
```
Valentino Proietti wrote:
> Hi,
> While moving from opencv 2.4.10 to opencv 3.0.0rc1 I found that the CPU CascadeClassifier is not working anymore with HOG.
> Digging into the code I discovered that the HOGEvaluator is completely missing in 3.0.
>
> This is the FeatureEvaluator::create() method from the *2.4* version:
> [...]
>
> And this is the same method from the *3.0* version:
> [...]
>
> They are both extracted from the "cascadedetect.cpp" source file of the relative version.
>
> The "opencv_traincascade" utility included in the 3.0.0rc1 version still permits us to use HOG for training but we don't know how to use the training results anymore.
> Is it a bug or a decision?
> If the latter: how are files produced by "opencv_traincascade" with HOG supposed to be used?
>
> Thank you
> Valentino
I have exactly the same problem.
```
##### Vadim Pisarevsky on 2015-05-18 14:30
```
- Category set to objdetect
```
##### Vadim Pisarevsky on 2015-05-28 14:24
```
We decided to drop the current HOG cascades in OpenCV 3.x. The implemented HOG features are quite weird - different from Dalal's interpretation of HOG, different from P. Dollar integral channel features. In xobjdetect we slowly grow superior ICF/ACF+WaldBoost-based detector, which is there already and will be improved during 2015.
- Difficulty set to Hard
- Assignee set to Vadim Pisarevsky
- Status changed from New to Cancelled
```
##### Valentino Proietti on 2015-05-29 09:56
```
I'm just wondering why this "design decision" was not highlighted in the release notes as it needed to be, but maybe it's just my fault.
Thank you very much indeed Vadim !!!
```
##### Steven Puttemans on 2015-06-10 09:36
```
Valentino Proietti wrote:
> I'm just wondering why this "design decision" was not highlighted in the release notes as it needed to be, but maybe it's just my fault.
>
> Thank you very much indeed Vadim !!!
Actually that is a very good point. It should be mentioned if functionality is dropped. We should then also remove the ability to train HOG features from the training interface. Will take a look at it later on! I will reopen this bug report simply so I can link the PR for the deletion of the functionality.
- Assignee changed from Vadim Pisarevsky to Steven Puttemans
- Status changed from Cancelled to Open
```
| bug,auto-transferred,priority: normal,category: objdetect,affected: 3.4,RFC | medium | Critical |
97,441,845 | opencv | Build error with mingw32 when TBB is enabled: MonitorFromRect not declared | Transferred from http://code.opencv.org/issues/4352
```
|| Miguel Munoz on 2015-05-20 22:09
|| Priority: Low
|| Affected: branch 'master' (3.0-dev)
|| Category: build/install
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: x86 / Windows
```
## Build error with mingw32 when TBB is enabled: MonitorFromRect not declared
```
Building OpenCV in Windows using mingw32 fails in file modules/highgui/src/window_w32.cpp when TBB is used. A possible fix is provided in an attached file.
The error message is:
error: 'MonitorFromRect' was not declared in this scope
Configuration details:
- OpenCV 3.0 RC1
- Windows 8.1
- TDM-GCC-32 with GCC 4.9.2, also tested with MinGW with GCC 4.8.1 (same problem)
- TBB version 4.3_20150424oss, compiled from sources with same GCC
The problem doesn't appear with TDM-GCC-64.
The problem seems to be caused by the TBB includes indirectly defining _WIN32_WINNT to 0x0400 (maybe by including windows.h?).
The problem seems to be fixed if, in the file highgui/src/precomp.hpp, the _WIN32_WINNT check and the include of windows.h are moved before the includes from opencv_core (in particular private.hpp, which in turn includes TBB).
See attached file.
Possibly related issues and posts:
http://answers.opencv.org/question/30589/with_win32ui-errors-compiling-opencv-248-with-mingw/
http://answers.opencv.org/question/54152/with_win32ui-with_tbb-errors-compiling-opencv-2410-with-mingw/
http://answers.opencv.org/question/29666/opencv-monitorfromrect-was-not-declared-in-this-scope/
http://stackoverflow.com/questions/21103042/error-while-building-opencv-monitorfromrect-was-not-declared-in-this-scope
```
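A sketch of the kind of workaround described above (illustrative only, not the attached patch): ensure _WIN32_WINNT is at least 0x0500, the minimum MonitorFromRect requires, before anything such as the TBB headers pulls in windows.h with an older value:
```
#if defined(WIN32) || defined(_WIN32)
  #if !defined(_WIN32_WINNT) || (_WIN32_WINNT < 0x0500)
    #undef  _WIN32_WINNT
    #define _WIN32_WINNT 0x0500 // MonitorFromRect needs Windows 2000 or later
  #endif
  #include <windows.h>
#endif
// ... only now include the opencv_core headers (private.hpp pulls in TBB) ...
```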
## History
##### Vadim Pisarevsky on 2015-05-21 19:28
```
thanks for the report! We do not support MinGW anymore, thus I am lowering the bug priority. You are welcome to submit the patch though, as described here:
http://code.opencv.org/projects/opencv/wiki/How_to_contribute
- Priority changed from Normal to Low
- Category set to build/install
```
| bug,auto-transferred,priority: low,category: build/install,affected: 3.4 | low | Critical |
97,442,078 | opencv | cv2.dft output changes between loop iterations for the same input array in complex output mode | Transferred from http://code.opencv.org/issues/4364
```
|| Roger Olivé on 2015-05-27 20:48
|| Priority: Normal
|| Affected: 2.4.9 (latest release)
|| Category: python bindings
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: Any / Any
```
## cv2.dft output changes between loop iterations for the same input array in complex output mode
```
I posted a question in OpenCV answers because I've been getting +changing outputs+ for DFT operations in Python over +the same input array+ under very specific conditions:
http://answers.opencv.org/question/62514/strange-output-when-using-cv2dft-inside-loops/
After some research, it seems to be a bug that appears when using cv2.dft under the following conditions:
* Calling cv2.dft to convert at least two different random arrays inside a loop. (haven't tried with more)
* One of the converted arrays is real and the other complex.
* flags=cv2.DFT_COMPLEX_OUTPUT specified at least for the DFT of the real input.
* The array returned by the second DFT is NOT explicitly assigned to a variable name.
* The problem doesn't seem to appear in C++. Looks specific to the Python bindings.
I only tried it on Linux x86_64 but I have a feeling this is not something HW and/or OS specific.
Compact code to reproduce the issue (courtesy of mshabunin from OpenCV answers):
<pre>
import cv2
import numpy as np
A = np.random.rand(16384)
X = np.random.rand(2, 16384)
for i in range(0, 5):
    Am = cv2.dft(A, flags=cv2.DFT_COMPLEX_OUTPUT)
    print("DFT min: %f" % np.min(Am))
    cv2.dft(X)
</pre>
```
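A C++ counterpart of the repro (a sketch; per the report, the output stays stable here, which points at the Python bindings rather than the core dft code):
```
#include <opencv2/core/core.hpp>
#include <iostream>

int main()
{
    cv::Mat A(1, 16384, CV_64F), X(2, 16384, CV_64F), Am, tmp;
    cv::randu(A, cv::Scalar::all(0), cv::Scalar::all(1));
    cv::randu(X, cv::Scalar::all(0), cv::Scalar::all(1));
    for (int i = 0; i < 5; i++)
    {
        cv::dft(A, Am, cv::DFT_COMPLEX_OUTPUT);
        double mn;
        cv::minMaxLoc(Am.reshape(1), &mn); // reshape: minMaxLoc wants 1 channel
        std::cout << "DFT min: " << mn << std::endl;
        cv::dft(X, tmp); // mirrors the unassigned cv2.dft(X) call
    }
    return 0;
}
```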
## History
| bug,auto-transferred,priority: normal,category: python bindings,affected: 2.4 | low | Critical |
97,442,223 | opencv | OpenCV-Java crashes JVM when calling Core.kmeans with NaN element | Transferred from http://code.opencv.org/issues/4371
```
|| Hendy Irawan on 2015-06-01 09:55
|| Priority: Normal
|| Affected: 2.4.9 (latest release)
|| Category: java bindings
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: x64 / Linux
```
## OpenCV-Java crashes JVM when calling Core.kmeans with NaN element
```
If any input to kmeans() contains a NaN element, the JVM will crash immediately (see attachment).
Although NaN is invalid input, it should throw a Java Exception with an appropriate error message; it would be even more helpful if it showed the coordinates of the affected element.
Affects both 2.4.8 and 2.4.11.
Adding -Xcheck:jni to JVM arguments doesn't help.
```
## History
| bug,auto-transferred,priority: normal,affected: 2.4,category: java bindings | low | Critical |
97,442,401 | opencv | opencv_contrib_world build fails | Transferred from http://code.opencv.org/issues/4391
```
|| Dmitry Budnikov on 2015-06-08 16:45
|| Priority: Normal
|| Affected: branch 'master' (3.0-dev)
|| Category: None
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: x64 / Windows
```
## opencv_contrib_world build fails
```
I'm trying to build the OpenCV library together with the opencv_contrib_world module. CMake generates the opencv_contrib_world project with multiple Object Libraries representing the contrib modules' source files. The linker generates the following errors:
Error 1 error LNK1181: cannot open input file '...\modules\bgsegm\opencv_bgsegm_object.dir\Release\bgfg_gaussmix.obj' ...\modules\contrib_world\LINK opencv_contrib_world
The missing object files actually exist in the following folder:
'...\modules\bgsegm\opencv_bgsegm.dir\Release\bgfg_gaussmix.obj'
I switched off the BG segmentation module and obtained similar errors for object files from other contrib modules.
```
## History
| bug,auto-transferred,priority: normal,category: build/install,affected: 3.4 | low | Critical |
97,442,460 | opencv | Core.multiply crashes JVM | Transferred from http://code.opencv.org/issues/4393
```
|| razvan chisu on 2015-06-09 15:26
|| Priority: High
|| Affected: branch 'master' (3.0-dev)
|| Category: core
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: x64 / Mac OSX
```
## Core.multiply crashes JVM
```
Core.multiply(Mat, Scalar, Mat) crashes the Java VM under Mac OS X, using Java 1.8.0_45 under Mac OS 10.10.3
(It does work under Windows 7 x64 with the same JVM version)
Same goes for Core.add(Mat, Scalar, Mat) and Core.divide(Mat, Scalar, Mat), but Core.subtract(Mat, Scalar, Mat) works fine.
Core.multiply(Mat, Mat, Mat) works fine, too.
```
## History
| bug,auto-transferred,category: core,affected: 3.4 | low | Critical |
97,442,536 | opencv | “R6025 pure virtual function call” with the pre-built staticlib and demo code bg_sub.cpp | Transferred from http://code.opencv.org/issues/4403
```
|| Chen ZHANG on 2015-06-12 16:33
|| Priority: High
|| Affected: branch 'master' (3.0-dev)
|| Category: None
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: x86 / Windows
```
## “R6025 pure virtual function call” with the pre-built staticlib and demo code bg_sub.cpp
```
I've asked the question on StackOverflow.com:
http://stackoverflow.com/questions/30807850/opencv3-0-0-backgroundsubtractorapply-leads-to-r6025-pure-virtual-function-ca
I'm running the "BackgroundSubtractor demo":https://github.com/Itseez/opencv/blob/master/samples/cpp/tutorial_code/video/bg_sub.cpp in the opencv3.0 gold release with VS2013, and the following error occurs when exiting the program:
!http://i.stack.imgur.com/iCHA3.png!
I was using the *staticlib*, and I found the error gone when I changed my linking library to the *dynamic* version, so it seems that something is wrong with the *staticlib*. I'm not sure how to track & debug the error; I hope someone could point it out and, better, fix it ;)
```
## History
| bug,auto-transferred,category: samples,affected: 3.4 | low | Critical |
97,442,715 | opencv | OpenCL-related warning when OpenCL is disabled | Transferred from http://code.opencv.org/issues/4409
```
|| Andreas Unterweger on 2015-06-15 09:18
|| Priority: Normal
|| Affected: branch 'master' (3.0-dev)
|| Category: None
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: x64 / Other
```
## OpenCL-related warning when OpenCL is disabled
```
Building a static version of OpenCV 3.0 on Ubuntu 14.04 with OpenCL disabled
<pre>
cmake -D WITH_OPENCL=OFF -D BUILD_SHARED_LIBS=OFF
</pre>
yields the following message when linking against the compiled version of OpenCV (path redacted):
<pre>
[Redacted]/lib/libopencv_core.a(ocl.cpp.o): In function `initOpenCLAndLoad': ocl.cpp:(.text.initOpenCLAndLoad+0x2b): warning: Using 'dlopen' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
</pre>
Why is there OpenCL-invoking code in the compiled library? Is WITH_OPENCL=OFF not enough or is this a bug?
```
## History
##### Alexander Alekhin on 2015-06-15 09:31
```
Hi, I believe this patch should resolve this issue: https://github.com/Itseez/opencv/pull/4102
But the main reason for this warning is in the linker flags of the final application or of the OpenCV build used with this app.
More info about this warning can be found here: http://stackoverflow.com/questions/2725255/create-statically-linked-binary-that-uses-getaddrinfo
```
##### Andreas Unterweger on 2015-06-15 09:36
```
Alexander Alekhin wrote:
> Hi, I believe this patch should resolve this issue: https://github.com/Itseez/opencv/pull/4102
Thanks, I'm looking forward to the next release including this patch.
```
| bug,auto-transferred,priority: normal,affected: 3.4,category: t-api | low | Critical |
97,442,921 | opencv | Missing symbols for cv::rectangle with two points, works fine when using a rect | Transferred from http://code.opencv.org/issues/4416
```
|| Marco Marchesi on 2015-06-16 14:57
|| Priority: Normal
|| Affected: branch 'master' (3.0-dev)
|| Category: samples
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: x64 / Mac OSX
```
## Missing symbols for cv::rectangle with two points, works fine when using a rect
```
Hi, testing the "kalman.cpp" sample with Xcode I get the following missing symbols:
Undefined symbols for architecture x86_64:
"cv::_InputArray::getMatVector(std::__1::vector<cv::Mat, std::__1::allocator<cv::Mat> >&) const", referenced from:
vtable for cv::_InputOutputArray in example.cpp.o
"cv::_InputArray::getUMatVector(std::__1::vector<cv::UMat, std::__1::allocator<cv::UMat> >&) const", referenced from:
vtable for cv::_InputOutputArray in example.cpp.o
ld: symbol(s) not found for architecture x86_64
It looks related to the issue closed in the topic "Missing symbols for cv::rectangle with two points, works fine when using a rect", which claimed that CUDA 7.0 and libc++ solved the problem.
I have built OpenCV 3.0 with CUDA 7.0 and the default compiler (Mac OS X 10.10). Why does the issue still appear?
Thanks
Marco
```
## History
| bug,auto-transferred,priority: normal,category: samples,affected: 3.4 | low | Critical |
97,442,996 | opencv | DescriptorExtractor.create method fails with 'Specified descriptor extractor type is not supported' | Transferred from http://code.opencv.org/issues/4419
```
|| Mark Grubb on 2015-06-17 00:59
|| Priority: Blocker
|| Affected: branch 'master' (3.0-dev)
|| Category: java bindings
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: x64 / Windows
```
## DescriptorExtractor.create method fails with 'Specified descriptor extractor type is not supported'
```
private static final DescriptorExtractor descriptorExtractor = DescriptorExtractor.create(DescriptorExtractor.BRIEF);
Caused by: CvException [org.opencv.core.CvException: cv::Exception: c:\builds\master_packslaveaddon-win64-vc12-static\opencv\modules\features2d\misc\java\src\cpp\features2d_manual.hpp:374: error: (-5) Specified descriptor extractor type is not supported. in function cv::javaDescriptorExtractor::create
]
```
## History
##### Philipp Hasper on 2015-06-22 06:21
```
BRIEF has been moved to the contrib repository (https://github.com/Itseez/opencv_contrib). Are you using the pre-built OpenCV or did you build it yourself? In the latter case, you should include the build of contrib as explained in the link.
```
##### be rak on 2015-06-22 08:27
```
unfortunately, it won't work even with the contrib repo on board.
none of the xfeatures2d classes are available in java atm.
if you take a close look at features2d_manual.hpp, you can see that an attempt was even made to pull them in, but it was then discarded.
i made a quick attempt with conditional blocks like #ifdef HAVE_OPENCV_XFEATURES2D, but that did not work either - the opencv2/xfeatures2d.hpp header is not available when compiling this.
imho, the whole features2d_manual.hpp factory approach should be removed in favour of ORB.create(), SIFT.create(), etc.
```
##### Mark Grubb on 2015-06-30 19:00
```
Philipp Hasper wrote:
> BRIEF has been moved to the contrib repository (https://github.com/Itseez/opencv_contrib). Are you using the pre-built OpenCV or did you build it yourself? In the latter case, you should include the build of contrib as explained in the link.
I am using pre-built.
It appears the build is faulty.
```
| bug,auto-transferred,affected: 3.4,category: java bindings | low | Critical |
97,443,051 | opencv | Cannot find header files when compiling matlab bindings | Transferred from http://code.opencv.org/issues/4424
```
|| Anup Parikh on 2015-06-22 03:34
|| Priority: Normal
|| Affected: branch 'master' (3.0-dev)
|| Category: matlab bindings
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: x64 / Windows
```
## Cannot find header files when compiling matlab bindings
```
I get this error when trying to compile the latest source from the master branch. I'm using MSVS 2013 Pro on Windows. I've copied the compiler output below; I'm not sure if anything else is needed. Curiously, absdiff is the first alphabetical file in the matlab module, so I don't know if that means all the rest of the files were also not compiled, or if it's just a coincidence, since the output says this is the only file that failed.
1>------ Build started: Project: opencv_matlab_sources, Configuration: Release x64 ------
2>------ Build started: Project: opencv_matlab, Configuration: Release x64 ------
2> Building Custom Rule C:/opencv/modules/matlab/CMakeLists.txt
2> CMake does not need to re-run because C:\opencv\build\modules\matlab\CMakeFiles\generate.stamp is up-to-date.
2> Compiling Matlab source files. This could take a while...
2> CMake Error at C:/opencv/modules/matlab/compile.cmake:47 (message):
2> Failed to compile absdiff: absdiff.cpp
2>
2> c:\opencv\modules\matlab\include\opencv2\matlab\mxarray.hpp(502) : warning
2> C4099: 'matlab::ArgumentParser::Variant' : type name first seen using
2> 'struct' now seen using 'class'
2>
2> c:\opencv\modules\matlab\include\opencv2\matlab\mxarray.hpp(488) : see declaration of 'matlab::ArgumentParser::Variant'
2>
2> c:\opencv\modules\matlab\include\opencv2\matlab\mxarray.hpp(565) : warning
2> C4267: '=' : conversion from 'size_t' to 'int', possible loss of data
2>
2> c:\opencv\modules\matlab\include\opencv2\matlab\mxarray.hpp(567) : warning
2> C4267: '=' : conversion from 'size_t' to 'int', possible loss of data
2>
2> c:\opencv\modules\matlab\include\opencv2\matlab\mxarray.hpp(570) : warning
2> C4267: '=' : conversion from 'size_t' to 'int', possible loss of data
2>
2> c:\opencv\modules\matlab\include\opencv2\matlab\mxarray.hpp(572) : warning
2> C4267: '=' : conversion from 'size_t' to 'int', possible loss of data
2>
2> C:/opencv/modules/core/include\opencv2/core/cvdef.h(59) : fatal error
2> C1083: Cannot open include file: 'opencv2/hal/defs.h': No such file or
2> directory
2>
2>
2>
2>
2>
2>
2>
2>C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V120\Microsoft.CppCommon.targets(170,5): error MSB6006: "cmd.exe" exited with code 1.
3>------ Build started: Project: opencv_test_matlab, Configuration: Release x64 ------
========== Build: 2 succeeded, 1 failed, 99 up-to-date, 0 skipped ==========
```
## History
| bug,auto-transferred,priority: normal,affected: 3.4,category: matlab bindings | low | Critical |
97,443,106 | opencv | Suspicious synchronization in modules/videoio/src/cap_openni.cpp | Transferred from http://code.opencv.org/issues/4425
```
|| Alexandr Konovalov on 2015-06-22 07:09
|| Priority: Low
|| Affected: branch 'master' (3.0-dev)
|| Category: None
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: Any / Any
```
## Suspicious synchronization in modules/videoio/src/cap_openni.cpp
```
In modules/videoio/src/cap_openni.cpp I see very suspicious code. As mtx is placed on the stack, the only effect of mtx.lock()/unlock() is a memory fence, not mutual exclusion. So this is either merely confusing to a code reader, or a sign of a synchronization bug.
class TBBApproximateSynchronizer: public ApproximateSynchronizerBase {
    ...
    virtual inline void pushDepthMetaData( xn::DepthMetaData& depthMetaData ) {
        ...
        tbb::mutex mtx;
        mtx.lock();
        if( depthQueue.try_push(depthPtr) == false )
        {
            if( approxSyncGrabber.getIsCircleBuffer() )
            {
                CV_Assert( depthQueue.try_pop(tmp) );
                CV_Assert( depthQueue.try_push(depthPtr) );
            }
        }
        mtx.unlock();
    }
}
```
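A sketch of the pattern the report implies is missing: for the lock to provide mutual exclusion, the mutex has to outlive the call and be shared by the contending threads, e.g. as a class member (class and method names here are illustrative):
```
#include <tbb/mutex.h>

class SharedQueue
{
    tbb::mutex mtx_; // one mutex shared by all callers, not a stack local

public:
    void push(/* ... */)
    {
        tbb::mutex::scoped_lock lock(mtx_); // real mutual exclusion
        // ... the try_push / try_pop circle-buffer logic would go here ...
    }
};
```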
## History
| bug,auto-transferred,priority: low,category: videoio,affected: 3.4 | low | Critical |
97,443,204 | opencv | StereoSGBM doesn't work with SADWindowSize greater than 11, whereas from Python the disparity map looks fine! | Transferred from http://code.opencv.org/issues/4430
```
|| Matteo Sacchi on 2015-06-23 16:58
|| Priority: High
|| Affected: branch 'master' (3.0-dev)
|| Category: None
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: x64 / Linux
```
## StereoSGBM doesn't work with SADWindowSize greater than 11, whereas from Python the disparity map looks fine!
```
The disparity map computed by StereoSGBM with a SADWindowSize > 11 contains a lot of artifacts. The same operation from Python works.
```
## History
| bug,auto-transferred,category: calib3d,affected: 3.4 | low | Critical |
97,443,240 | opencv | SURF_CUDA does not match SURF API and is out of date | Transferred from http://code.opencv.org/issues/4432
```
|| Andrew Hundt on 2015-06-24 01:04
|| Priority: High
|| Affected: branch 'master' (3.0-dev)
|| Category: nonfree
|| Tracker: Bug
|| Difficulty: Easy
|| PR:
|| Platform: Any / Any
```
## SURF_CUDA does not match SURF API and is out of date
```
API mismatch:
* SURF_CUDA in opencv_contrib
> * https://github.com/Itseez/opencv_contrib/blob/master/modules/xfeatures2d/include/opencv2/xfeatures2d/cuda.hpp#L86
* SURF in opencv_contrib
> * https://github.com/Itseez/opencv_contrib/blob/master/modules/xfeatures2d/include/opencv2/xfeatures2d/nonfree.hpp#L116
SURF_CUDA is still using the old OpenCV 2.x API rather than the 3.x API. For example, there is no SURF_CUDA::create function. It appears this fell through the cracks.
I created an opencv_contrib issue as well: https://github.com/Itseez/opencv_contrib/issues/280
```
## History
| bug,auto-transferred,category: nonfree,affected: 3.4 | low | Critical |
97,443,332 | opencv | Different result when finding the similarity between 2 pictures of 1 person in LBPH by changing the order of prediction | Transferred from http://code.opencv.org/issues/4437
```
|| Quang Huy Nguyen on 2015-06-25 09:34
|| Priority: High
|| Affected: 2.4.9 (latest release)
|| Category: None
|| Tracker: Bug
|| Difficulty: Easy
|| PR:
|| Platform: ARM / Android
```
## Different result when finding the similarity between 2 pictures of 1 person in LBPH by changing the order of prediction
```
I use the LBPH face recognizer to find the similarity between 2 pictures of 1 person (one picture taken in light conditions and the other taken in dark conditions).
I know that in LBPH, the number of pictures in the training set doesn't matter as it does for FisherFaces or EigenFaces.
So, I use pic 1 as the base image, and pic 2 as the training set. When I train (pic2), then predict (pic1), the result is 105.
Next, I use pic 2 as the base image, and pic 1 as the training set. When I train (pic1), then predict (pic2), the result is 85.
Meanwhile, I expect that both of the 2 experiments must produce the same result.
Digging further into the source code, I find that the LBPH predict function uses the function compareHist( InputArray _H1, InputArray _H2, int method ). In this function, LBPH passes CV_COMP_CHISQR as the third parameter, which is one of 4 methods of comparing 2 histograms (CV_COMP_CORREL, CV_COMP_CHISQR, CV_COMP_INTERSECT, CV_COMP_BHATTACHARYYA).
I think the problem is the formula of CV_COMP_CHISQR itself: http://docs.opencv.org/doc/tutorials/imgproc/histograms/histogram_comparison/histogram_comparison.html
In this formula, if I consider the HISTOGRAM of pic 1 as H1, and the HISTOGRAM of pic 2 as H2, the result of d(H1, H2) will be different from d(H2, H1).
So my suggestion is, in order to avoid this, can you guys repair the *compareHist* call in the *LBPH predict function* to use the CV_COMP_CORREL method. Thus, we can ensure that d(H1, H2) = d(H2, H1), which leads to predict(pic1) = predict(pic2).
Besides, I'm using the OpenCV Android SDK 2.4.11. It would be great if somebody could fix this and regenerate the jar file for the OpenCV Android SDK.
Thanks!
```
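A sketch of the asymmetry the reporter points at: CV_COMP_CHISQR computes sum((H1-H2)^2 / H1), so the denominator, and therefore the distance, changes when the arguments are swapped (2.4-style include shown):
```
#include <opencv2/imgproc/imgproc.hpp>
#include <cstdio>

void showAsymmetry(const cv::Mat& H1, const cv::Mat& H2)
{
    double d12 = cv::compareHist(H1, H2, CV_COMP_CHISQR);
    double d21 = cv::compareHist(H2, H1, CV_COMP_CHISQR);
    std::printf("d(H1,H2)=%f, d(H2,H1)=%f\n", d12, d21); // generally differ
}
```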
## History
##### be rak on 2015-06-27 11:14
```
>> I use LBPH face recognizer to find the similarity between 2 pictures of 1 person
this already sounds like a bad idea. opencv's facerecognizer classes are for face *identification* (nearest item in database), not for face *verification* (same/not-same)
for the latter, you will need a fundamentally different approach, like learning a threshold on difference images. for more elaborate schemes, look e.g. [here](http://vis-www.cs.umass.edu/lfw/results.html)
>> I know that in LBPH, the number of pics in training set doesn't matter as FisherFaces or EigenFaces.
this is clearly wrong. though LBPH does not build a global model, you still need a couple of images to improve the 1-nearest-neighbour search.
>> (one pics took in light condition and the other took in dark condition).
so, it's pretty obvious that, due to the sparsity of your trainset, it prefers a different person with the same lighting conditions. again, you need pose and lighting normalization preprocessing to eliminate this (and ofc. more images).
>> Meanwhile, I expect that the both of the 2 experiments must produce the same result.
a wrong assumption.
>> I thought the problem is the formula of CV_COMP_CHISQR itself
there you at least got a point, imho. CV_COMP_CHISQR is indeed asymmetric.
but if you do *real* tests for the identification task (like a 10-fold cross-validation) you will still find that it is more accurate than the more symmetric CV_COMP_CHISQR_ALT, and that your proposed CV_COMP_CORREL method fares worst of all.
```
| bug,auto-transferred,category: contrib,affected: 2.4 | low | Critical |
97,443,396 | opencv | kalman.cpp | Transferred from http://code.opencv.org/issues/4440
```
|| Volodymyr Ivanchenko on 2015-06-27 00:05
|| Priority: Normal
|| Affected: 2.4.9 (latest release)
|| Category: None
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: Any / Any
```
## kalman.cpp
```
Your kalman.cpp sample (not the module) seems to compare the wrong entities. In particular, you compare the measurement with the prediction while
you have to compare the measurement with the estimation. In other words, instead of using prediction = KF.statePre, use KF.statePost, which you
have to update BEFORE drawing the lines. In the current code, your comparison isn't indicative of the Kalman Filter's efficiency and only reflects
the fact that during the initialization of the filter you set the process noise level (1e-5) to a much smaller value than the measurement noise (1e-1).
Your help() function, however, describes the situation correctly: "The real and the estimated points are connected with yellow line segment",
so your intentions were right but the implementation was wrong.
```
## History
| bug,auto-transferred,priority: normal,category: samples,affected: 2.4 | low | Critical |
97,443,512 | opencv | medianBlur() performance drop on ARM between v2.4 and v3.0 | Transferred from http://code.opencv.org/issues/4446
```
|| Tom Mises on 2015-06-28 01:48
|| Priority: Normal
|| Affected: branch 'master' (3.0-dev)
|| Category: None
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: ARM / Mobile/Embedded Linux
```
## medianBlur() performance drop on ARM between v2.4 and v3.0
```
medianBlur() is significantly slower on ARM Cortex-A8 in OpenCV 3.0 than in OpenCV 2.4. The performance drop was at least 50%; I tested this a couple of weeks ago and do not remember exactly.
The 2.4 version was from the Gumstix repository. The 3.0 was compiled with the following flags: @-DENABLE_VFPV3=ON -DENABLE_NEON=ON -DSOFTFP=ON@.
After reviewing the code, I see the difference is that some NEON intrinsics were added in OpenCV 3.0. I am no expert in NEON; however, based on my research, when using intrinsics without a thorough understanding of NEON and thorough testing, it is easy to end up with code which actually performs worse than the compiler-optimized code. I suspect this is what happened here.
As a point of interest, I needed this for a bitmap and I ended up writing the straightforward function attached below. After compiling with @gcc -Ofast -mfpu=neon@, it is 20 times faster than medianBlur().
<pre>
#include <stdint.h>

/* Binary 5x5 majority filter: each output pixel is 1 if more than half of
   the 25 neighbours are set, which is the median for a 0/1 mask. */
void median5x5(uint8_t *mask_in, uint8_t *mask_out, uint16_t width, uint16_t height) {
    uint32_t first_idx = 2 * width + 2;
    uint32_t last_idx = width*height - 2 * width - 3;
    for(uint32_t i = first_idx; i < last_idx; i++) {
        uint8_t sum = 0;
        for(int8_t ver_offset = -2; ver_offset < 3; ver_offset++) {
            for(int8_t hor_offset = -2; hor_offset < 3; hor_offset++) {
                sum += *(mask_in + i + ver_offset*width + hor_offset);
            }
        }
        *(mask_out + i) = sum > 12;
    }
}
</pre>
```
## History
| bug,auto-transferred,priority: normal,category: imgproc,affected: 3.4 | low | Critical |
97,443,568 | opencv | findContours() is not for 8-connected neighbours but for 4-connected neighbours? | Transferred from http://code.opencv.org/issues/4448
```
|| thesby thesby on 2015-06-29 09:16
|| Priority: High
|| Affected: branch 'master' (3.0-dev)
|| Category: None
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: x86 / Windows
```
## findContours() is not for 8-connected neighbours but for 4-connected neighbours?
```
Thank you for reading.
I have reported the same bug #4444 against opencv 2.4.9, but I found that the bug exists in opencv3.0 too, so I want to report it again.
I drew a binary picture and set all pixels to 0. Then I set '1' on the diagonal pixels. If findContours really used 8-connected neighbours, it would find only one connected region. Actually, it does not; it finds many regions. Thus, I suspect that findContours() uses 4-connected neighbours, not 8-connected neighbours as the documentation depicts. In addition, I use CV_CHAIN_APPROX_NONE.
Here's my code:
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <vector>
using namespace std;
int main()
{
    using namespace cv;
    Scalar color{ 255 };
    cv::Mat mbin(700, 700, CV_8UC1);
    mbin.setTo(cv::Scalar::all(255));
    cv::rectangle(mbin, Point(10, 10), Point(200, 200), 0, -1);
    /*cv::rectangle(mbin, Point(201, 201), Point(400, 400), 0, -1);*/
    for (int i = 0; i < 700; ++i)
    {
        mbin.at<char>(i, i) = 0;
        mbin.at<char>(i, 699 - i) = 0;
    }
    cv::circle(mbin, cv::Point(350, 350), 100, 0, 1);
    //cv::circle(mbin, cv::Point(350, 100), 30, 0, -1);
    cv::imshow("mbin", mbin); cv::waitKey();
    vector<vector<cv::Point> > contours;
    vector<cv::Vec4i> hierarchy;
    findContours(mbin, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_NONE, cv::Point(0, 0));
    Mat drawing = Mat::zeros(mbin.size(), CV_8UC3);
    RNG rng(12345);
    for (size_t i = 0; i < contours.size(); i++)
    {
        Scalar color = Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255));
        drawContours(drawing, contours, (int)i, color, 2, 8, hierarchy, 0, Point());
    }
    /// Show in a window
    namedWindow("Contours", WINDOW_AUTOSIZE);
    imshow("Contours", drawing);
    waitKey();
    return 0;
}
I think this is very important since we often use this function. And I hope that findContours will provide an optional parameter to use 8-connected or 4-connected neighbour methods as users need.
Thank you very much.
```
## History
##### enpe enpe on 2015-07-12 19:42
```
Your description does not exactly match the example (you speak of only diagonals, your example draws a more elaborate pattern, cf. screenshots below).
A fully working version of your example can be found here: https://gist.github.com/89630565044c87198100.git
The black borders which separate the white areas in your mask are very thin. I am not sure if the "algorithm":http://docs.opencv.org/trunk/d0/de3/citelist.html#CITEREF_Suzuki85 of @findContours@ is designed for the case you describe. In my version of your example, if you set @lineWidth = 2@, the algorithm behaves as expected.
So, I am not sure the case you describe qualifies as a bug. Maybe it's rather a missing feature.
!00_mask.png!
!01_mask_magnified.png!
!02_contours.png!
!03_contours_magnified.png!
- File 01_mask_magnified.png added
- File 02_contours.png added
- File 03_contours_magnified.png added
- File 00_mask.png added
```
| bug,auto-transferred,category: imgproc,affected: 2.4,affected: 3.4 | low | Critical |
97,443,929 | opencv | calcHist should handle parallel operations for small images better | Transferred from http://code.opencv.org/issues/4465
```
|| Damon Maria on 2015-07-06 00:00
|| Priority: Normal
|| Affected: None
|| Category: core
|| Tracker: Feature
|| Difficulty:
|| PR:
|| Platform: None / None
```
## calcHist should handle parallel operations for small images better
```
At the moment calcHist always tries to split the image up by rows and perform the operation in parallel under TBB.
There are a couple of problems:
* If the height of the image is less than the number of threads TBB is using, then OpenCV sends a grainSize of 0 to TBB, which is invalid and violates an assertion in TBB
* For small (in height) images on machines with many cores, the overhead of TBB can far exceed the benefit of splitting up the operation, and may even cause stalls inside TBB (see #4353)
```
## History
##### Alexander Alekhin on 2015-07-15 10:01
```
The current implementation of calcHist contains a TBB-only optimization under "#if HAVE_TBB".
We need to verify the existing parallel optimization and check the possibility of migration to "parallel_for_".
P.S. We also need to check other code under HAVE_TBB.
- Status changed from New to Open
```
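A sketch of the `parallel_for_` shape mentioned above: `cv::parallel_for_` chooses the splitting itself, so small images degrade to a serial loop instead of hitting a zero grain size (the histogram body is elided):
```
#include <opencv2/core/core.hpp>

class RowHistBody : public cv::ParallelLoopBody
{
public:
    virtual void operator()(const cv::Range& rows) const
    {
        for (int y = rows.start; y < rows.end; y++)
        {
            // ... accumulate a per-stripe histogram for row y ...
        }
    }
};

// usage: cv::parallel_for_(cv::Range(0, img.rows), RowHistBody());
```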
| auto-transferred,priority: normal,feature,category: core | low | Minor |
97,443,954 | opencv | Error building OpenCV 3.0.0 on MacOS X | Transferred from http://code.opencv.org/issues/4466
```
|| freya chen on 2015-07-06 23:17
|| Priority: Normal
|| Affected: branch 'master' (3.0-dev)
|| Category: None
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: x64 / Mac OSX
```
## Error building OpenCV 3.0.0 on MacOS X
```
I keep getting this error when building the latest opencv-3.0.0 using 'make -j8'; what should I do?
[ 0%] [ 1%] Built target opencv_hal
Built target zlib
[ 5%] Built target libjpeg
[ 9%] Built target libjasper
[ 15%] Built target libwebp
[ 15%] Built target opencv_cudev
[ 16%] Built target libpng
[ 16%] Building NVCC (Device) object modules/core/CMakeFiles/cuda_compile.dir/src/cuda/cuda_compile_generated_gpu_mat.cu.o
[ 19%] Built target libtiff
[ 24%] Built target IlmImf
In file included from <built-in>:325:
In file included from <command line>:11:
In file included from /usr/local/cuda/include/cuda_runtime.h:104:
/usr/local/cuda/include/common_functions.h:65:10: fatal error: 'string.h' file not found
#include <string.h>
^
1 error generated.
CMake Error at cuda_compile_generated_gpu_mat.cu.o.cmake:206 (message):
Error generating
/Users/Freya/Documents/opencv-3.0.0/build/modules/core/CMakeFiles/cuda_compile.dir/src/cuda/./cuda_compile_generated_gpu_mat.cu.o
make[2]: *** [modules/core/CMakeFiles/cuda_compile.dir/src/cuda/cuda_compile_generated_gpu_mat.cu.o] Error 1
make[1]: *** [modules/core/CMakeFiles/opencv_core.dir/all] Error 2
make: *** [all] Error 2
```
## History
##### Alexander Alekhin on 2015-07-07 12:05
```
If CUDA optimizations are not critical then you can add the "-D WITH_CUDA=OFF" parameter to CMake (and clean up the build directory to build from scratch).
Refer to our MacOS builds (steps 5-6 contain build parameters): http://build.opencv.org/buildbot/builders/master-mac
If you want to use CUDA, please check your CUDA installation and try to build the CUDA examples.
If all is fine with the CUDA setup then please provide more info about CUDA (version) and the CMake logs.
```
| bug,auto-transferred,priority: normal,category: build/install,affected: 3.4 | low | Critical |
97,444,093 | opencv | Ignoring the flags of solvePnP and silently executing EPNP | Transferred from http://code.opencv.org/issues/4472
```
|| Philipp Hasper on 2015-07-08 08:42
|| Priority: Normal
|| Affected: branch 'master' (3.0-dev)
|| Category: calibration, 3d
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: Any / Any
```
## Ignoring the flags of solvePnP and silently executing EPNP
```
The way uPnP and DLS have been disabled in [1] is highly counter-intuitive:
Just executing EPnP regardless of the given flag [2] is confusing. If somebody wants to evaluate the OpenCV version of uPnP, let him do that (and not silently execute EPnP in the background). Imagine somebody wants to evaluate the available PnP solvers in OpenCV and finds out that it does not matter which one he chooses because the results are all the same.
So we have to either
a) remove the upnp and dls flags completely, or
b) leave them in and document the unstable behavior with a warning in the online documentation.
The latter option might lead to somebody improving the algorithms.
[1]: https://github.com/Itseez/opencv/pull/3828
[2]: https://github.com/vpisarev/opencv/blob/ca19ae8b5ae53afe3ca0084c55178b8d0796b28b/modules/calib3d/src/solvepnp.cpp#L66
```
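A fragment illustrating the confusion (obj, img, K, dist, rvec and tvec are placeholders): with [2] in place, all three calls below end up running EPnP internally, so their results coincide:
```
cv::solvePnP(obj, img, K, dist, rvec, tvec, false, cv::SOLVEPNP_EPNP);
cv::solvePnP(obj, img, K, dist, rvec, tvec, false, cv::SOLVEPNP_UPNP); // silently EPnP
cv::solvePnP(obj, img, K, dist, rvec, tvec, false, cv::SOLVEPNP_DLS);  // silently EPnP
```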
## History
| bug,auto-transferred,priority: normal,category: calib3d,affected: 3.4 | low | Critical |
97,444,200 | opencv | MSER slower in 3.0.0 than 2.4.X | Transferred from http://code.opencv.org/issues/4478
```
|| Huck Wach on 2015-07-09 23:52
|| Priority: Normal
|| Affected: branch 'master' (3.0-dev)
|| Category: None
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: x64 / Linux
```
## MSER slower in 3.0.0 than 2.4.X
```
The below simple code can be built with either 2.4.X or 3.0.0 by uncommenting or commenting the #define respectively. It simply opens an image file (I'll let you provide your own) and runs MSER on it. On both my Linux box and my Windows box I see that it is 3-4X slower in 3.0.0 than in 2.4.X. I have built both versions of OpenCV from source using CMake and stuck with the defaults... I have verified that the build settings are as close as they can be.
<pre>
#include "opencv2/highgui/highgui.hpp"
#include "opencv2\features2d\features2d.hpp"
#include "opencv2\imgproc\imgproc.hpp"
#include <iostream>
#include <chrono>
#include <ctime>
#define USE249
using namespace std;
int main(int argc, char** argv)
{
std::cout << "OpenCV version: "
<< CV_MAJOR_VERSION << "."
<< CV_MINOR_VERSION << "."
<< CV_SUBMINOR_VERSION
<< std::endl;
cv::Mat im = cv::imread("C:/example.jpg", 1);
if (im.empty())
{
cout << "Cannot open image!" << endl;
return -1;
}
cv::Mat gray;
cv::cvtColor(im, gray, cv::COLOR_BGR2GRAY);
int mnArea = 40 * 40;
int mxArea = im.rows*im.cols*0.4;
std::vector< std::vector< cv::Point > > ptblobs;
std::vector<cv::Rect> bboxes;
std::chrono::time_point<std::chrono::system_clock> start, end;
start = std::chrono::system_clock::now();
#ifndef USE249
cv::Ptr<cv::MSER> mser = cv::MSER::create(1, mnArea, mxArea, 0.25, 0.2);
mser->detectRegions(gray, ptblobs, bboxes);
#else
cv::MserFeatureDetector mser(1, mnArea, mxArea, 0.25, 0.2);
mser(gray, ptblobs);
#endif
end = std::chrono::system_clock::now();
std::chrono::duration<double> elapsed_seconds = end - start;
std::time_t end_time = std::chrono::system_clock::to_time_t(end);
std::cout << "finished computation at " << std::ctime(&end_time)
<< "elapsed time: " << elapsed_seconds.count() << "s\n";
cv::namedWindow("image", cv::WINDOW_NORMAL);
cv::imshow("image", im);
cv::waitKey(0);
return 0;
}
</pre>
```
## History
| bug,auto-transferred,priority: normal,category: features2d,affected: 3.4 | low | Critical |
97,444,220 | opencv | Opencv python module "cv" always reporting "Runtime error, module compiled against API version 9 but this version of numpy is 4" | Transferred from http://code.opencv.org/issues/4479
```
|| Kun Huang on 2015-07-10 03:01
|| Priority: Normal
|| Affected: branch 'master' (3.0-dev)
|| Category: None
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: x64 / Linux
```
## Opencv python module "cv" always reporting "Runtime error, module compiled against API version 9 but this version of numpy is 4"
```
I have compiled the opencv python module with numpy 1.9.0, and when I write code importing cv2, it reports this error.
I'm quite sure my numpy API version is 9, not 4. I recompiled opencv, and it reports this error again.
How can I fix this error?
```
## History
##### Alexander Alekhin on 2015-07-10 11:02
```
This message is emitted by the numpy development package. It looks like OpenCV uses the wrong numpy development files or Python uses the wrong numpy runtime library.
1) Could you check numpy version used in Python:
<pre>
import numpy
print(numpy.__version__)
print(numpy.__file__)
</pre>
2) Try to check used numpy development files:
<pre>
import cv2
print cv2.getBuildInformation()
</pre>
Find line with numpy and "include" dir.
a) Paths for "numpy.__file__" and include dir should be similar.
b) You can also check "numpy/_numpyconfig.h" file here for lines "NPY_ABI_VERSION"/"NPY_API_VERSION"
- Priority changed from High to Normal
```
| bug,auto-transferred,priority: normal,category: python bindings,affected: 3.4 | low | Critical |
97,444,453 | opencv | createShapeContextDistanceExtractor()->computeDistance throws exception for certain shapes | Transferred from http://code.opencv.org/issues/4490
```
|| Do Bi on 2015-07-21 13:15
|| Priority: Normal
|| Affected: branch 'master' (3.0-dev)
|| Category: None
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: x86 / Windows
```
## createShapeContextDistanceExtractor()->computeDistance throws exception for certain shapes
```
Using these two images
1) http://i.imgur.com/WKaFDl4.png
2) http://i.imgur.com/5ayU6qb.png
I get the following error
@OpenCV Error: Assertion failed (dims <= 2 && data && (unsigned)i0 < (unsigned)size.p[0] && (unsigned)(i1 * DataType<_Tp>::channels) < (unsigned)(size.p[1] * channels()) && ((((sizeof(size_t)<<28)|0x8442211) >> ((DataType<_Tp>::depth) & ((1<< 3) - 1))*4) & 15) == elemSize1()) in cv::Mat::at, file C:\builds\master_PackSlave-win32-vc12-static\opencv\modules\core\include\opencv2/core/mat.inl.hpp, line 894@
when running this code:
<pre>
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/shape/shape.hpp"
#include <math.h>
#include <iostream>
using namespace cv;
using namespace std;
vector<Point> GetContour(const Mat& img)
{
    cv::threshold(img, img, 128, 255, cv::THRESH_BINARY);
    vector<vector<Point>> conts;
    cv::findContours(img, conts, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);
    if (conts.empty())
        return vector<Point>();
    return conts.front();
}
int main()
{
    auto shape1 = GetContour(cv::imread("D:/shape1.png", cv::IMREAD_GRAYSCALE));
    auto shape2 = GetContour(cv::imread("D:/shape2.png", cv::IMREAD_GRAYSCALE));
    cout << createShapeContextDistanceExtractor()->computeDistance(shape1, shape2) << endl;
}
</pre>
in VC2013.
With other shapes or with createHausdorffDistanceExtractor instead of createShapeContextDistanceExtractor it does not crash.
```
## History
| bug,auto-transferred,priority: normal,affected: 3.4,category: shape | low | Critical |
97,444,511 | opencv | Build Failure on Mac OS X (GLX support) | Transferred from http://code.opencv.org/issues/4495
```
|| Hendi Saleh on 2015-07-22 03:56
|| Priority: Normal
|| Affected: branch 'master' (3.0-dev)
|| Category: highgui-gui
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: x64 / Windows
```
## Build Failure on Mac OS X (GLX support)
```
On Mac OS X 10.6.8 configured with OpenGL, build stops at modules/highgui/src/window_QT.cpp:3364 with a not-declared error of a GLX function.
<pre>
<builddir>/opencv/modules/highgui/src/window_QT.cpp: In member function ‘virtual void GlFuncTab_QT::generateBitmapFont(const std::string&, int, int, bool, bool, int, int, int) const’:
<builddir>/opencv/modules/highgui/src/window_QT.cpp:3364: error: ‘glXUseXFont’ was not declared in this scope
<builddir>/opencv/modules/highgui/src/window_QT.cpp:3355: warning: unused variable ‘cvFuncName’
make[2]: *** [modules/highgui/CMakeFiles/opencv_highgui.dir/src/window_QT.cpp.o] Error 1
make[1]: *** [modules/highgui/CMakeFiles/opencv_highgui.dir/all] Error 2
make: *** [all] Error 2
</pre>
This is, to the best of my knowledge, the only usage of a GLX function when building on Mac OS X. It isn't appropriate to use GLX on Mac OS X because Apple uses AGL's aglUseFont for the same purpose.
This error may be related to mismatched @#ifdef@s: at the top of window_QT.cpp, @#ifdef Q_WS_X11@ is used to guard the inclusion of GL/glx.h. However, at line 3364, @#ifndef Q_WS_WIN@ is used to guard the use of glXUseXFont. This allows a non-Windows machine possessing X11 headers but lacking GLX (i.e. Mac OS X) to reach the faulty call.
```
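One possible direction, sketched only (the argument names below are placeholders, not the actual window_QT.cpp locals): guard the call with the same macro that guards the header include.
<pre>
#ifdef Q_WS_X11
    /* Only compiled where GL/glx.h was actually included. */
    glXUseXFont(font, firstChar, charCount, listBase);
#else
    /* Platforms without GLX (e.g. Mac OS X) need a native path such as aglUseFont. */
#endif
</pre>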
## History
| bug,auto-transferred,priority: normal,category: highgui-gui,affected: 3.4 | low | Critical |
97,524,983 | go | x/mobile/exp/audio: audio is choppy on Moto X running Android 5.1 | Steps to reproduce:
1) go install golang.org/x/mobile/example/audio (for a Moto X device running Android 5.1)
2) Run the audio demo on the device
Expected result:
Audio sounds the same as when the audio demo is run on a desktop machine
Observed result:
Audio playback is very "choppy." Getting random pops in the middle and end of the audio sample as the gopher bounces off the walls.
| mobile | low | Minor |
97,542,530 | go | runtime: Additional Allocator Metrics in `runtime.MemStats` | Hi, I am wondering if it would be amenable to include several additional core metrics in `runtime.MemStats`, namely the following measures:
1. No. of spans pending being released by size class.
This helps server authors understand the discrepancy between reported
heap reservation/allocation versus process RSS.
2. No. of live spans (with active allocations contained therein) by size class.
Essential corollary for no. 1.
3. Measure of sum of span occupancy by size class.
This helps server and runtime authors understand level of span reuse
and divine potential problems with heap fragmentation viz-a-viz no. 2.
4. Sum of span allocations by size class (not of inner object allocations).
This helps server authors understand the aggregate throughput of
memory flow in realtime, a measure of efficiency that is useful to
capture when comparing releases and automated canary tests.
5. Sum of span releases by size class (not of inner object allocations).
Useful corollary for no. 4.
6. Summary statistics about age for spans by size classes: min, median, average, max.
Throughput in no. 4 and no. 5 is useful, but this takes the level of detail to a
deeper level.
7. Cumulative sums of individual allocations made for each span size class.
Useful throughput measure for individual allocations.
These would be inordinately beneficial in gross measurement of fleet memory allocator performance as well as offer server authors deeper telemetric insights into the lifecycles of their allocations. pprof is great for one-off diagnosis but not real-time analytics of the fleet.
I would be happy to come to a compromise on these, especially to enhance the language and requirements, as well as to possibly volunteer time on the implementation of these representations should we come to agreement.
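For context, a minimal Go sketch of what is readable today; `BySize` is the only per-size-class data currently exposed, and the proposed span-level fields above have no existing counterpart:

``` go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	var ms runtime.MemStats
	runtime.ReadMemStats(&ms)
	// Existing per-size-class telemetry: cumulative object mallocs/frees only.
	for _, c := range ms.BySize {
		fmt.Printf("size=%d mallocs=%d frees=%d\n", c.Size, c.Mallocs, c.Frees)
	}
}
```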
| compiler/runtime | low | Major |
97,579,074 | TypeScript | Handle default constructor when extending null | From this [comment](https://github.com/Microsoft/TypeScript/issues/3696#issuecomment-122143431) and [bug](https://bugs.ecmascript.org/show_bug.cgi?id=4429), we would like to provide appropriate handling for the following case:
``` typescript
class N extends null {}
var n = new N();
```
Depending on how the ES6 spec handles the above case (i.e. fix the call to `super` or just leave it as is), we will have to make appropriate changes to our emitted ES5 and ES6.
If the default constructor will not call `super` when extending null, in ES5 we should change our following emitted JS:
``` typescript
// When no constructor is defined:
_super && _super.apply(this, arguments);
// In the presence of an explicit 'super(arg1, arg2, ..., argN)' call:
_super && _super.call(this, arg1, arg2, ... argN);
```
If the default constructor will call `super`, we should give an error for not having an explicit constructor, and possibly leave the down-level emitted JS as is.
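For illustration, a hedged sketch of what a guarded ES5 emit could look like for `class N extends null {}` (the `__extends` helper shape is assumed; only the `_super &&` guard is the point here):

``` typescript
var N = (function (_super) {
    __extends(N, _super);
    function N() {
        // Skip the super call entirely when the base expression is null.
        _super && _super.apply(this, arguments);
    }
    return N;
})(null);
```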
| Bug | low | Critical |
97,717,144 | go | build: hardcoded gcc command lines do not allow using TDM64 gcc on 32-bit Windows | TDM64 is a bi-arch gcc distribution with an unusual configuration: the 32-bit compiler produces 64-bit object files / executables by default, and -m32 is required for 32-bit compilation.
See http://tdm-gcc.tdragon.net/quirks for details.
There are a few places in the Go source tree where the C compiler name and required switches (in particular -m32/-m64) are not obtained from environment variables or by running go env:
- on tip in src/runtime/syscall_windows_test.go
- initialization of cbDLLs variable
- C compiler run in function TestReturnAfterStackGrowInCallback
- test for C compiler availability in TestStdcallAndCDeclCallbacks and TestReturnAfterStackGrowInCallback
- on 1.4.2
- in src/make.bat
- in src/runtime/syscall_windows_test.go : initialization of cbDLLs variable
- in misc/cgo/testso/test.bat
As a result you simply can't build 1.4.2 using TDM64 on 32-bit Windows: the compiled dist.exe is 64-bit.
On tip, the runtime test fails because of a 64-bit test.dll:
```
--- FAIL: TestStdcallAndCDeclCallbacks (1.18s)
panic: Failed to load D:\opt\usr\src\go\tmp\TestCDeclCallback075906618\test.dll: %1 is not a valid Win32 application. [recovered]
panic: Failed to load D:\opt\usr\src\go\tmp\TestCDeclCallback075906618\test.dll: %1 is not a valid Win32 application.
goroutine 103859 [running]:
testing.tRunner.func1(0x10d774a0)
D:/opt/usr/src/go/jtip/src/testing/testing.go:449 +0x131
syscall.MustLoadDLL(0x111c9100, 0x39, 0x1120c8d0)
D:/opt/usr/src/go/jtip/src/syscall/dll_windows.go:62 +0x64
runtime_test.(*cbTest).run(0x10dbbf04, 0x10d774a0, 0x111c9100, 0x39)
D:/opt/usr/src/go/jtip/src/runtime/syscall_windows_test.go:399 +0x2d
runtime_test.TestStdcallAndCDeclCallbacks(0x10d774a0)
D:/opt/usr/src/go/jtip/src/runtime/syscall_windows_test.go:451 +0x406
testing.tRunner(0x10d774a0, 0x711084)
D:/opt/usr/src/go/jtip/src/testing/testing.go:455 +0x8f
created by testing.RunTests
D:/opt/usr/src/go/jtip/src/testing/testing.go:560 +0x6e9
goroutine 1 [chan receive]:
testing.RunTests(0x657f3c, 0x7108e0, 0xaa, 0xaa, 0x59a401)
D:/opt/usr/src/go/jtip/src/testing/testing.go:561 +0x71b
testing.(*M).Run(0x10db9f7c, 0x10d27fa0)
D:/opt/usr/src/go/jtip/src/testing/testing.go:493 +0x68
main.main()
runtime/_test/_testmain.go:880 +0x100
goroutine 102820 [chan receive]:
testing.(*T).Parallel(0x10d77b00)
D:/opt/usr/src/go/jtip/src/testing/testing.go:421 +0x65
runtime_test.TestStackGrowth(0x10d77b00)
D:/opt/usr/src/go/jtip/src/runtime/stack_test.go:74 +0x25
testing.tRunner(0x10d77b00, 0x710f04)
D:/opt/usr/src/go/jtip/src/testing/testing.go:455 +0x8f
created by testing.RunTests
D:/opt/usr/src/go/jtip/src/testing/testing.go:560 +0x6e9
goroutine 102821 [chan receive]:
testing.RunTests.func1(0x10d28380, 0x10d77b00)
D:/opt/usr/src/go/jtip/src/testing/testing.go:564 +0x3e
created by testing.RunTests
D:/opt/usr/src/go/jtip/src/testing/testing.go:565 +0x765
goroutine 102823 [chan receive]:
testing.RunTests.func1(0x10d28380, 0x10d77b60)
D:/opt/usr/src/go/jtip/src/testing/testing.go:564 +0x3e
created by testing.RunTests
D:/opt/usr/src/go/jtip/src/testing/testing.go:565 +0x765
goroutine 102822 [chan receive]:
testing.(*T).Parallel(0x10d77b60)
D:/opt/usr/src/go/jtip/src/testing/testing.go:421 +0x65
runtime_test.TestStackGrowthCallback(0x10d77b60)
D:/opt/usr/src/go/jtip/src/runtime/stack_test.go:157 +0x25
testing.tRunner(0x10d77b60, 0x710f10)
D:/opt/usr/src/go/jtip/src/testing/testing.go:455 +0x8f
created by testing.RunTests
D:/opt/usr/src/go/jtip/src/testing/testing.go:560 +0x6e9
```
| OS-Windows | low | Critical |
97,819,238 | TypeScript | Variable merging | ```
declare var test: (s: string) => void;
declare var test: {
(s: string) => void;
tests: any[];
}
```
At the moment the TS compiler shows the error: `Subsequent variable declarations must have the same type.`
| Suggestion,Needs Proposal | low | Critical |
97,930,139 | opencv | VideoCapture fails when compiled with FFMPEG > 1.2.12 (OpenCV 2.4.11) | I'm working with the Hollywood-2 video dataset; an example video `actioncliptrain00002.avi` is at my OneDrive: http://1drv.ms/1Mxprmh.
I do something like:
``` c++
#include <cstdio>
#include <opencv2/highgui/highgui.hpp>
int main()
{
cv::VideoCapture in("actioncliptrain00002.avi");
cv::Mat frame;
for(int i = 1; ; i++)
{
bool ok = in.read(frame);
printf("%d: ok = %s, rows = %d, cols = %d\n", i, ok ? "true" : "false", frame.rows, frame.cols);
if(!ok)
{
printf("Not ok. Breaking.\n");
break;
}
}
}
```
If compiled with ffmpeg 1.2.12, it outputs:
```
1: ok = true, rows = 224, cols = 528
2: ok = true, rows = 224, cols = 528
............
241: ok = true, rows = 224, cols = 528
242: ok = false, rows = 0, cols = 528
Not ok. Breaking.
```
If compiled with any ffmpeg >= 2.0.7 (including the latest 2.7.2), it outputs:
```
[mpeg4 @ 0x169ce40] header damaged
[mpeg4 @ 0x16ded60] Context scratch buffers could not be allocated due to unknown size.
<more ffmpeg errors just like above>
1: ok = false, rows = 0, cols = 0
Not ok. Breaking.
```
My OpenCV/ffmpeg installation script is here: http://pastebin.com/KD4YWpg5
Both ffmpeg 1.2.12 and 2.7.2 are able to do `ffmpeg -i actioncliptrain00002.avi dump/%06d.png` without any errors, which makes me think it's an OpenCV bug in the way it talks to ffmpeg. I have a stable reproducing setup and am happy to provide any further information or help, if needed.
The full Hollywood-2 video dataset is available at ftp://ftp.irisa.fr/local/vistas/actions/Hollywood2-actions.tar.gz
| bug,priority: low,category: videoio,affected: 2.4 | low | Critical |
97,953,395 | opencv | Bayer VNG Pixel Offset | It appears that the VNG Bayer demosaicing will create a vertical pixel offset in the output imagery. I have tested the output of all three available algorithms: linear, VNG, and EA. Only VNG appears to have this problem.
Source Bayer image

Result from COLOR_BAYER_GB2BGR_VNG

Difference showing vertical offset

| bug,category: imgproc,priority: low | low | Minor |
97,958,902 | youtube-dl | Restart download on error | I usually pipe youtube-dl output to mplayer like this: `./youtube-dl -q -o- https://www.youtube.com/watch?v=... | mplayer -` and sometimes I get this error:
```
ERROR: unable to download video data: [Errno 104] Connection reset by peer
```
After this the download is aborted, and I can only continue from the beginning.
It would be great if, on failure, youtube-dl would try to restart the download from the position where it failed before exiting with an error message.
| request | low | Critical |
97,989,192 | youtube-dl | Paper cut: Downloading audio - output has media-dependent strings | Just a minor paper cut:
So I downloaded some audio-only content and saw:
```
[soundcloud:user] irllink: Downloading track page 1
[soundcloud:user] irllink: Downloading track page 2
[soundcloud:user] irllink: End page received
[download] Downloading playlist: irllink
[soundcloud:user] playlist irllink: Collected 21 video ids (downloading 21 of them)
[download] Downloading video 1 of 21
[soundcloud] irllink/take-it-easy: Resolving id
[soundcloud] irllink/take-it-easy: Downloading info JSON
[soundcloud] 216455131: Downloading track url
[soundcloud] 216455131: Checking download video format URL
[soundcloud] 216455131: Checking http_mp3_128_url video format URL
[download] Destination: Take It Easy!-216455131.mp3
[download] 100% of 6.35MiB in 00:12
```
> "Collected 21 video ids"
and
> "Checking download video format URL"
So youtube-dl talks about videos when all it is handling is audio. I don't know if there is an easy way to make it site-dependent, but the easy way is probably to use a media-independent word, like "media":
> Collected 21 media ids
> "Checking download media format URL"
I am sure there are much better solutions. I would like to hear your ideas. :)
| request | low | Minor |
98,199,121 | javascript | `if` statements – one line vs. one expression | Until I started using ESLint with your configuration I saw nothing against the rules in this:
``` js
$ cat if.js
const [one, two] = [1, 2];
if (one !== two) throw new Error(
'One does not equal two'
);
```
But ESLint does:
``` sh
$ eslint if.js
if.js
3:0 error Expected { after 'if' condition curly
✖ 1 problem (1 error, 0 warnings)
```
Is this intended? In my opinion the pattern I’ve been using is explicit – and more readable than this:
``` js
if (one !== two) {
throw new Error(
'One does not equal two'
);
}
```
The parens form a visual brace-like block much like in #438.
| question,needs eslint rule change/addition | low | Critical |
98,254,006 | neovim | $NVIM, v:parent | It'd be super cool if Neovim could keep track of whether you're in a terminal or not, and when you _are_ in a terminal change the meaning of the terminal escape sequence (`<c-\><c-n>`) so that it exits from a terminal that you've spawned inside of a child Neovim instance. So, waist-deep in Neovims, you'd always be able to escape from the current Neovim terminal with a single escape; always just `<c-\><c-n>` instead of `<c-\><c-\><c-n>` or `<c-\><c-\><c-\><c-n>` or `<c-\><c-\><c-\><c-\><c-n>`.
This would match perfectly with the fact that you can already `-- INSERT --` your way into child Neovims with single strokes of `i`. Navigating in _both_ directions would become extremely easy.
The workflow improvement would be massive.
| enhancement,job-control,terminal,channels-rpc | low | Major |
98,374,595 | javascript | Consider allowing anonymous function expressions | At the moment [you forbid](https://github.com/airbnb/javascript/blob/ea093e0373cc4dfa07c69ef7c75d81bd06bdf0c2/packages/eslint-config-airbnb/.eslintrc#L140) unnamed function expressions like `function() {}`. That’s a good thing.
But when using the new function bind syntax with **[trine](https://github.com/jussi-kalliokoski/trine)**-style [libraries](https://github.com/roobie/mori-ext), you can’t use arrow functions because they inherit `this` from the outer scope.
So a good old anonymous function expression:
``` js
[my, collection]::map(function() {
return !!this;
});
```
…often feels more natural than inventing obscure names:
``` js
[my, collection]::map(function justCastToBoolean() {
return !!this;
});
```
| question,pull request wanted,needs eslint rule change/addition | low | Major |
98,415,865 | kubernetes | Make sure to warn the user that apiserver won't listen when etcd is screwed up | As seen in #12085.
| area/apiserver,priority/awaiting-more-evidence,sig/api-machinery,lifecycle/frozen | low | Major |
98,422,921 | rust | Correctly handle dllimport on Windows | Currently the compiler makes basically no attempt to correctly use `dllimport`. As a bit of a refresher, the Windows linker requires that symbols imported from a DLL be tagged with `dllimport`. This helps wire things up correctly at runtime and link time. To help us out, though, the linker will patch up a few cases where `dllimport` is missing where it would otherwise be required. If a function in another DLL is linked to without `dllimport`, then the linker will inject a local shim which adds a bit of indirection and runtime overhead but allows the crate to link correctly. For importing constants from other DLLs, however, the MSVC linker requires that dllimport is annotated correctly. MinGW linkers can sometimes work around it (see the description of [this commit](https://github.com/llvm/llvm-project/commit/eac1b05f1db91097eb37d975e07789ce039cf7f7)).
If we're targeting windows, then the compiler currently puts `dllimport` on all imported constants from external crates, regardless of whether it's actually being imported from another crate. We rely on the linker fixing up all imports of functions. This ends up meaning that some crates don't link correctly, however (see this comment: https://github.com/rust-lang/rust/issues/26591#issuecomment-123513631).
We should fix the compiler's handling of dllimport in a few ways:
- Functions should be tagged with `dllimport` where appropriate
- FFI functions should also be tagged with `dllimport` where appropriate
- Constants should not always be tagged with `dllimport` if they're not actually being imported from a DLL.
I currently have a few thoughts running around in my head for fixing this, but nothing seems plausible enough to push on.
EDIT: Updated as @mati865 requested [here](https://github.com/rust-lang/rust/issues/27438#issuecomment-665739234). | A-linkage,O-windows,P-low,T-lang,T-compiler,C-bug | high | Critical |
98,525,073 | three.js | Blend Trees | Unity & Unreal Engine use blend trees to blend between different clips in a realistic way. Details are here:
https://github.com/mrdoob/three.js/issues/6881#issuecomment-126839845
https://github.com/mrdoob/three.js/issues/6881#issuecomment-126890409
/ping @crobi
| Enhancement | low | Major |
98,548,611 | go | runtime: whole span reclaimer is expensive | Objects larger than 32K are handled by a large object allocator (largeAlloc) because they fall outside the size classes for which there are span caches. Currently, in order to control heap growth, the large object allocator sweeps until it reclaims the number of pages being allocated before performing the allocation (mHeap_Alloc_m -> mHeap_ReclaimList). This path is also taken when growing the heap.
However, this process can be quite expensive. It first tries sweeping large object spans. _If_ this succeeds, the process is fairly efficient; it may require looking at a large number of spans that don't have a free object, but it's not clear if it's possible to eliminate this linear constant. However, empirically, this often fails. This may simply be because earlier large allocations swept spans that were larger than they were allocating and they didn't keep track of this credit. In this case, it falls back to sweeping small object spans until it's able to free enough (potentially non-contiguous) pages that way. As a result of this, in the x/benchmarks garbage benchmark, the large object sweeper winds up sweeping ~30 times more bytes than it's trying to allocate.
I believe we can eliminate this entire mechanism if, during marking, the garbage collector also keeps a per span "super mark" that is set if any objects in the span are marked. At the point where we would set this, we already have the span and, assuming we're willing to dedicate a byte per span for this, it can be set with a blind (possibly even non-temporal) write. At the end of GC, after mark termination, we can immediately free these spans (both large object spans and small object spans with no reachable objects). It takes roughly 1ms per heap GB to walk the span list, so even assuming a little more overhead for adding free spans to span lists, it seems very likely this would take less time than we currently spend sweeping these spans. This is also trivially parallelizable, and we probably have to do this walk anyway (#11484). Additionally, this will reduce load on concurrent sweep and coalesce neighboring spans much earlier, making larger regions of memory available for large object allocation immediately.
This idea is based on various (mostly pairwise) discussions between myself, @RLH, @rsc, and @dvyukov.
| compiler/runtime | low | Minor |
98,552,683 | opencv | OpenCV 3.0: Python Stitcher hangs OSX | I installed opencv 3.0.0 with homebrew on a MacBook Pro (Intel) running OSX 10.10.4 Yosemite. When I run the following code:
``` python
import cv2

imgs = []
for i in range(1,6):
file = 'stitching_img/S%d.jpg'%i
print file
im = cv2.imread(file)
imgs.append(im)
st=cv2.createStitcher()
ret,pano = st.stitch(imgs)
print 'ret',ret
if ret == 0: cv2.imwrite('pano.png', pano)  # Stitcher::OK == 0
```
My entire computer locks up after the images are loaded. I originally did a `cv2.imshow(im)` to ensure the images were loaded correctly, and they were. I took it out thinking that showing an image was what locked up my system and forced a reboot; however, that didn't help and I still had to reboot.
| priority: low,affected: 3.4,platform: ios/osx,category: 3rdparty | low | Minor |
98,558,285 | youtube-dl | Join multiple audio streams into mkv | Following up on #5784. youtube-dl is currently not capable of downloading-and-merging multiple audio tracks (perhaps even video tracks). It already supports merging into MKV containers on some occasions (e.g. "Requested formats are incompatible for merge and will be merged into mkv."), and so it could just go the same route in case of multi-audio/multi-sub.
```
$ ./youtube-dl -f best+en+jp http://www.gdcvault.com/play/1014631/Classic-Game-Postmortem-PAC
[GDCVault] Classic-Game-Postmortem-PAC: Downloading webpage
[GDCVault] Classic-Game-Postmortem-PAC: Downloading XML
Traceback (most recent call last):
File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "./youtube-dl/__main__.py", line 19, in <module>
File "./youtube-dl/youtube_dl/__init__.py", line 410, in main
File "./youtube-dl/youtube_dl/__init__.py", line 400, in _real_main
File "./youtube-dl/youtube_dl/YoutubeDL.py", line 1504, in download
File "./youtube-dl/youtube_dl/YoutubeDL.py", line 667, in extract_info
File "./youtube-dl/youtube_dl/YoutubeDL.py", line 713, in process_ie_result
File "./youtube-dl/youtube_dl/YoutubeDL.py", line 1125, in process_video_result
ValueError: too many values to unpack
```
| request | low | Critical |
98,566,300 | youtube-dl | Download comments from Youtube video | Is there any way to download comments from YouTube videos? Could you add a new option, for example --write-comments, analogous to the existing option for writing the video description?
I am new on GitHub, and I am not a programmer, so excuse me if I do something wrong.
I found some information about this request; maybe it will be useful:
https://finnaarupnielsen.wordpress.com/2009/09/21/getting-comments-from-youtube-via-pythons-gda/
| request | low | Minor |
98,623,999 | go | x/crypto/ocsp: ParseResponse pitfalls | There are two pitfalls in the ParseResponse method of "x/crypto/ocsp". The first is that if you forget to pass in an issuer, the response will be parsed but signature verification will not be performed. The second is that some people might assume that when err == nil, Response.Status == Good. I would suggest an additional idiot-proof method VerifyResponse, which might look a little like this:
https://play.golang.org/p/eTkpQi_gDk
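A rough Go sketch of the suggested helper, mirroring the playground link (the function and its error messages are hypothetical, not part of x/crypto/ocsp):

``` go
package ocspx

import (
	"crypto/x509"
	"errors"

	"golang.org/x/crypto/ocsp"
)

// VerifyResponse refuses to skip signature verification (the issuer is
// required) and treats any status other than Good as an error.
func VerifyResponse(der []byte, issuer *x509.Certificate) (*ocsp.Response, error) {
	if issuer == nil {
		return nil, errors.New("ocsp: issuer certificate is required")
	}
	resp, err := ocsp.ParseResponse(der, issuer)
	if err != nil {
		return nil, err
	}
	if resp.Status != ocsp.Good {
		return nil, errors.New("ocsp: certificate status is not Good")
	}
	return resp, nil
}
```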
| NeedsInvestigation | low | Minor |
98,665,014 | go | runtime: reenable TestCgoCallbackGC on dragonfly | For example,
```
go test -v -run=TestCgoCallbackGC
=== RUN TestCgoCallbackGC
--- PASS: TestCgoCallbackGC (175.54s)
PASS
ok runtime 175.548s
```
Not sure of the reason, but it perhaps harms the dragonfly buildbot.
| OS-Dragonfly,NeedsFix,compiler/runtime | low | Major |
98,867,377 | nvm | Honor .node-version file | Forgive me if this has been discussed before, but I didn't find anything beyond https://github.com/creationix/nvm/issues/110#issuecomment-40525629.
It would be really nice to see the various Node version managing tools coalesce around a common "version config file", analogous to Ruby's `.ruby-version` for RVM and rbenv - `.node-version`. This would basically just be an alias for `.nvmrc`, since that functionality's already been added. Since `n` already supports `.node-version`, this would make it so that regardless of the Node version manager used, there could be some degree of guarantee that the correct version will be used.
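For illustration, a hedged shell sketch of the requested behavior, treating `.node-version` as a fallback alias for `.nvmrc` (this is not existing nvm code):

``` sh
# Sketch only: pick the version file, preferring .nvmrc, then .node-version.
if [ -f .nvmrc ]; then
  nvm use "$(cat .nvmrc)"
elif [ -f .node-version ]; then
  nvm use "$(cat .node-version)"
fi
```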
| feature requests | high | Critical |
99,071,136 | opencv | cv::gpu::HoughCircles returns a different result than cv::HoughCircles (Bugfix #3447) | I noticed a not-transferred, still-open issue... (btw, I bet the Assignee is my bad)
Should it be here? Is there any bug in the transfer method?
http://code.opencv.org/issues/3447
I will prepare an example image, and someone could explain the difference in measured diameters with it (maybe I will find time), because I do not know what kind of differences they could be... The first part of the task is clear to me, I think. If Łukasz shouldn't do it, I will try.
Regards.
| feature,priority: low,affected: 2.4,category: gpu/cuda (contrib) | low | Critical |
99,074,893 | go | x/mobile: crash in gobind interface call | <b>What version of Go are you using (go version)?</b>
go version go1.5beta2 linux/amd64
gomobile version +99196c8 Sat Aug 1 23:05:44 2015 +0000 (android); androidSDK=/usr/local/android-sdk-linux/platforms/android-22
<b>What operating system and processor architecture are you using?</b>
Development: Linux Mint 64bit / Windows 10 64bit on i7 (920)
Phone: Samsung Galaxy note 4 on ARMv7
<b>What did you do?</b>
Created a simple go library:

Compiled an .aar file: gomobile bind {folder path}
Added the .aar file to an Android Studio project.

<b>The red box indicates which method call crashes the application at runtime.</b>
<b>What did you expect to see?</b>
The instance of the object getting appended correctly to the Go slice, or to show a stack trace / error message.
<b>What did you see instead?</b>
Passing null as a parameter shows a null value exception.
However, the example above shuts the application down on my phone completely, with no exception or stack trace, even in debug mode.
Try-catch does not help in this situation.
| mobile | medium | Critical |
99,108,981 | kubernetes | Convert all component command-line flags to versioned configuration | Forked from #1627, #5472, #246, #12201, and other issues.
We configure Kubernetes components like apiserver and controller-manager with command-line flags currently. Every time we change these flags, we break everyone's turnup automation, at least for anything not in our repo: hosted products, other repos, blog posts, ...
OTOH, the scheduler and kubectl read configuration files. We should be treating all configuration similarly, as versioned APIs, with backward compatibility, deprecation policies, documentation, etc. How we distribute the configuration is a separate issue -- #1627.
cc @davidopp @thockin @dchen1107 @lavalamp
| priority/important-longterm,sig/architecture,lifecycle/frozen | medium | Critical |
99,244,929 | go | runtime: check indirect calls for nosplit | See https://go-review.googlesource.com/#/c/10362/.
Is it needed?
| NeedsInvestigation | low | Major |
99,321,697 | go | cmd/gofmt: Inconsistent space before open curly brace | Using go1.5beta3.
This is obviously subjective, so feel free to close.
When a struct definition is placed on a single line, gofmt removes the space between `struct` and `{`.
``` go
type Foo struct {
int
}
type Bar struct{ int }
```
When an anonymous struct is used and instantiated, it lacks a space before the `{` as well.
``` go
for _, x := range []struct {
b bool
n int
}{ // There is NO space between '}' and '{'
{false, 0},
{true, 2},
{false, 16},
} { // There IS a space between '}' and '{'
fmt.Println(x)
}
```
| NeedsDecision | low | Minor |
99,339,389 | go | x/tools/refactor/rename: fails if cgo build would fail | I have some CGO_CFLAGS and CGO_LDFLAGS settings I have to use to build a project I have, but gorename inside emacs doesn't have access to those.
gorename fails on this project (and ones that depend on it) because it cannot find the C headers. However, gorename doesn't support cgo and so doesn't really need to inspect those headers.
It would be nice for rename not to care about cgo, so that we didn't have to mess with cgo's env variables for each project (especially since some might, in rare cases, have conflicting needs).
| Tools,Refactoring | low | Minor |
99,352,008 | TypeScript | Some SVG types would benefit from stricter appendChild() overloads | ``` typescript
interface SVGFEComponentTransferElement {
appendChild(newChild: SVGFEFuncAElement): SVGFEFuncAElement;
appendChild(newChild: SVGFEFuncBElement): SVGFEFuncBElement;
appendChild(newChild: SVGFEFuncGElement): SVGFEFuncGElement;
appendChild(newChild: SVGFEFuncRElement): SVGFEFuncRElement;
appendChild(newChild: Comment): Comment;
}
interface SVGFEMergeElement {
appendChild(newChild: SVGFEMergeNodeElement): SVGFEMergeNodeElement;
appendChild(newChild: Comment): Comment;
}
...
```
to override the generic appendChild(Node) inherited from Node, and so on. This would make it an error to append a `<g>` to a `<feMerge>` for example.
| Suggestion,Help Wanted,Domain: lib.d.ts | low | Critical |
99,411,960 | opencv | opencv_test_gpu is failing | Based on the following topic http://answers.opencv.org/question/67953/opencv-2411-gpu-features I decided to do some tests with the GPU test suite.
System: Ubuntu Linux 14.04, 64-bit, with CUDA 7.0 support and the latest OpenCV 2.4 branch --> which built perfectly fine.
Configured the opencv_extra repo by
- cloning the repository into a folder
- performing a `git checkout 2.4`
- adding a system environment variable: `export OPENCV_TEST_DATA_PATH=/home/spu/Documents/github/opencv_extra/`
Then I ran the suite by executing `./opencv_test_gpu` and it starts running just fine. Here and there a test fails due to it not being able to find a specific image --> but fewer than before configuring the opencv_extra repo.
Finally it crashes on the following line
<pre>
[ OK ] GPU_ImgProc/CvtColor.Luv2LRGBA/22 (2 ms)
[ RUN ] GPU_ImgProc/CvtColor.Luv2LRGBA/23
[ OK ] GPU_ImgProc/CvtColor.Luv2LRGBA/23 (2 ms)
[ RUN ] GPU_ImgProc/CvtColor.RGBA2mRGBA/0
[ OK ] GPU_ImgProc/CvtColor.RGBA2mRGBA/0 (293 ms)
[ RUN ] GPU_ImgProc/CvtColor.RGBA2mRGBA/1
Segmentation fault (core dumped)
</pre>
As you can see: no error, no warning, just a simple core dump ... does anyone have a clue what could be wrong?
| bug,priority: normal,affected: 2.4,category: gpu/cuda (contrib) | low | Critical |
99,482,050 | youtube-dl | Use description info to automatically divide output | In some large videos, the uploader usually marks the parts of the video in the description, giving the minutes and seconds at which each part begins. Something like this:
TITLE VIDEO
blablablablablabla...
00:00 Intro
05:00 First Part
10:00 Second Part
etc. etc. etc.
It could be an option that reads a description like that (if it exists; if not, it does nothing different) and divides the output file, using each line with a timestamp in order. In the example case it would produce output videos like "Title Video 1 Intro" (00:00-05:00), "Title Video 2 First Part" (05:00-10:00), and "Title Video 3 Second Part" (10:00-end of video) from the whole video "Title Video". It could ignore the description lines without a time, use the rest of each timestamped line as the file name (the time is normally at the beginning of the line, but it could be detected in any position with a proper RegExp, erasing the time and copying the rest of the line), and number the output files consecutively. Of course, this needs some postprocessing after download.
It would be useful to have the parts of the video: as in my case, I have to download the whole video and edit it to cut the parts I need, and with this I could have the parts already prepared. It could also be useful for EPs and LPs uploaded entirely in one video, which could then be downloaded as separate tracks directly. And it could be useful not only on YouTube.
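A small standalone Python sketch of the timestamp parsing described above (illustrative only, not youtube-dl code):

``` python
import re

# Matches lines like "05:00 First Part" or "1:02:03 Some Chapter".
CHAPTER_RE = re.compile(r'^(?P<t>\d{1,2}:\d{2}(?::\d{2})?)\s+(?P<title>.+)$', re.M)

def parse_chapters(description):
    chapters = []
    for m in CHAPTER_RE.finditer(description):
        seconds = 0
        for part in m.group('t').split(':'):
            seconds = seconds * 60 + int(part)
        chapters.append((seconds, m.group('title').strip()))
    return chapters

print(parse_chapters("00:00 Intro\n05:00 First Part\n10:00 Second Part"))
# [(0, 'Intro'), (300, 'First Part'), (600, 'Second Part')]
```

Each entry's end time would be the next entry's start (or the end of the video for the last one), which gives the postprocessor its cut points.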
| request | low | Major |
99,516,790 | TypeScript | Negated types | Sometimes it is desired to forbid certain types from being considered.
Example: `JSON.stringify(function() {});` doesn't make sense, and the chance is high it's written like this by mistake. With negated types we could eliminate the chance of it.
``` typescript
type Serializable = any ~ Function; // or simply ~ Function
declare interface JSON {
stringify(value: Serializable): string;
}
```
Another example
``` typescript
export type NonIdentifierExpression = ts.Expression ~ ts.Identifier
```
| Suggestion,In Discussion | high | Critical |
99,662,451 | opencv | OpenCV + Python multiprocessing breaks on OSX | I'm trying to use OpenCV with Python's multiprocessing module, but it breaks on me even with very simple code. Here is an example:
``` python
import cv2
import multiprocessing
import glob
import numpy
def job(path):
image = cv2.imread(path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
return path
if __name__ == "__main__":
main_image = cv2.imread("./image.png")
main_image = cv2.cvtColor( main_image, cv2.COLOR_BGR2GRAY)
paths = glob.glob("./data/*")
pool = multiprocessing.Pool()
result = pool.map(job, paths)
print 'Finished'
for value in result:
print value
```
If I remove `main_image = cv2.cvtColor( main_image, cv2.COLOR_BGR2GRAY)` the script works, but with it in there it doesn't, even though that line shouldn't affect jobs processed by the pool.
All image paths lead to simple images, about ~20 of them in total.
Funnily enough, the code works fine if I create images in memory with `numpy` instead of reading them with `imread()`.
I'm guessing OpenCV uses some shared variables behind the scenes that aren't protected from race conditions.
My environment: Mac OS X 10.10, OpenCV 3.0.0, Python 2.7.
The last few lines of stack trace are:
```
Application Specific Information:
crashed on child side of fork pre-exec
Thread 0 Crashed:: Dispatch queue: com.apple.root.default-qos
0 libdispatch.dylib 0x00007fff8668913f dispatch_apply_f + 769
1 libopencv_core.3.0.dylib 0x000000010ccebd14 cv::parallel_for_(cv::Range const&, cv::ParallelLoopBody const&, double) + 152
2 libopencv_imgproc.3.0.dylib 0x000000010c8b782e void cv::CvtColorLoop<cv::RGB2Gray<unsigned char> >(cv::Mat const&, cv::Mat&, cv::RGB2Gray<unsigned char> const&) + 134
3 libopencv_imgproc.3.0.dylib 0x000000010c8b1fd4 cv::cvtColor(cv::_InputArray const&, cv::_OutputArray const&, int, int) + 23756
4 cv2.so 0x000000010c0ed439 pyopencv_cv_cvtColor(_object*, _object*, _object*) + 687
5 org.python.python 0x000000010bc64968 PyEval_EvalFrameEx + 19480
6 org.python.python 0x000000010bc5fa42 PyEval_EvalCodeEx + 1538
```
BTW - I got other OpenCV functions to crash when used with Python multiprocessing too; the above is just the smallest example I could produce that reflects the problem.
Also, I got the above algorithm (and much more complicated ones) to work in multithreaded C++ programs using the same OpenCV build on the same machine, so I guess the issue lies on the Python bindings side.
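A hedged, untested workaround sketch: if the crash comes from OpenCV's internal thread-pool state being inherited across fork(), disabling it before creating the pool may avoid the dispatch-queue crash (`cv2.setNumThreads` is a real binding; whether it helps here is an assumption):

``` python
import cv2
import multiprocessing

if __name__ == "__main__":
    cv2.setNumThreads(0)  # run cv::parallel_for_ bodies single-threaded
    pool = multiprocessing.Pool()
    # ... same job/map code as above ...
```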
| bug,priority: normal,category: python bindings,category: core,affected: 3.4 | high | Critical |
99,681,691 | TypeScript | TypeChecker#getConstantValue(node) doesn't properly resolve values | Courtesy of @ivogabe from [here](https://gitter.im/Microsoft/TypeScript?at=55c4bdfdcdd8bb455f2f8c7e).
> I'm working with the compiler api and I noticed that `TypeChecker#getConstantValue(node)` only returns a value if I first call `TypeChecker#getSymbolAtLocation(node)`. `node` is an ElementAccessExpression or a PropertyAccessExpression. Is that a bug or by design?
| Bug,Help Wanted,API | low | Critical |
99,694,027 | go | runtime: check for no preemption during write barrier | Check that write barrier routines cannot be preempted.
Marked Go1.5Maybe because we may have time to add the checks to a local copy
and at least verify that there aren't any missing go:nosplits that are needed
(or add the ones that are needed) in the release branch.
| compiler/runtime | low | Minor |
99,803,170 | rust | Inference fails when a type using a default also implements Deref. | The current code fails compilation:
``` rust
#![feature(default_type_parameter_fallback)]
#![allow(unused_variables)]
use std::ops::Deref;
#[derive(Clone, Debug)]
pub struct Foo<T: Clone + Default = ()> {
bar: T,
}
impl<T: Clone + Default> Foo<T> {
#[inline(always)]
pub fn new() -> Self {
Foo {
bar: Default::default(),
}
}
#[inline(always)]
pub fn with<U: Clone + Default>(self, value: U) -> Foo<U> {
Foo {
bar: value,
}
}
}
impl<T: Clone + Default> Deref for Foo<T> {
type Target = T;
#[inline(always)]
fn deref(&self) -> &T {
&self.bar
}
}
fn main() {
Foo::new().with(0u8);
}
```
With:
```
test.rs:37:13: 37:22 error: the type of this value must be known in this context
test.rs:37 Foo::new().with(0u8);
^~~~~~~~~
```
/cc @bluss @jroesch
| A-type-system,T-compiler,A-inference,C-bug,T-types | low | Critical |
99,843,682 | neovim | Alternate file lost when using :terminal | Hello.
If I open a terminal with `:terminal` and get out of it, the alternate file is lost.
I get the error `E23: No alternate file` when trying to access it.
Thanks in advance for fixing this issue.
| bug,terminal,has:repro | low | Critical |
99,903,709 | opencv | License compiler | Due to the many flags for third-party libraries, it is hard to scrape all the licenses together. It would be neat to have CMake generate a compiled license file based on the flags used.
E.g. the new third-party repository could have standardized, parseable license files.
| priority: normal,feature,category: infrastructure | low | Minor |
99,997,484 | go | x/tools/cmd/goyacc: add API to convert between state numbers and strings | I'd like to have an API to convert between state numbers (as defined by the external "lexer" constant) and their string representation.
There is an internal function that sort of does it - yyTokname - but there is a magical offset between what the yacc internal code calls the state number and the defined constant. And the magical constant differs between 1.4 and 1.5. For the other direction one has to go to the yyToknames table itself, again with a (slightly different) magic offset that differs between versions.
So the request is for a simple API to do this conversion (an alternative is to give some type to the constant, and then we can use the stringer tool).
| FeatureRequest,Tools | low | Minor |
100,015,265 | opencv | imagestorage.cpp NegReader function has code that does not make sense | Looking at the grabbing of the next image by the negative image reader, which can be found [here](https://github.com/Itseez/opencv/blob/master/apps/traincascade/imagestorage.cpp#L50), we find the following code snippet
<pre>
round += last / count;
round = round % (winSize.width * winSize.height);
last %= count;
_offset.x = std::min( (int)round % winSize.width, src.cols - winSize.width );
_offset.y = std::min( (int)round / winSize.width, src.rows - winSize.height );
</pre>
We can see in the header file that `last` is defined as an integer and `count` is defined as size_t, holding the number of available negative files. Because `last %= count;` keeps `last` smaller than `count`, the (unsigned) integer division `last / count` always equals 0, resulting in `round` always being 0. This means that `_offset.x` and `_offset.y` are also always 0, and thus this code seems to actually do nothing.
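A tiny standalone demonstration of the truncating division described above (illustrative only, not part of the original code):
<pre>
#include <cstdio>
#include <cstddef>

int main()
{
    int last = 5;
    std::size_t count = 10; /* number of negative files */
    /* `last` converts to size_t; the unsigned division truncates: 5 / 10 == 0,
       so `round` never advances. */
    std::printf("%zu\n", last / count); /* prints 0 */
    return 0;
}
</pre>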
@mdim since you implemented this code, could you explain the reason behind this?
@vpisarev are we correct to state that removing these calculations would have no influence at all?
In a way it seems logical that for each image you always start at `x=0` and `y=0`, because otherwise you simply ignore information of the negative sample windows.
| bug,priority: low,category: apps | low | Major |
100,072,671 | youtube-dl | Support PBS Kids | Would it be possible to support pbskids.org shows? For instance, http://pbskids.org/catinthehat/video/index.html
| site-support-request | low | Major |
100,145,359 | react | Include DevTools as Public API | The idea is to have a multi-version extension that allows you to attach a "debugger" to a running React instance. This debugger protocol injects intercepted functions into a particular version of React which exposes development hooks.
Effectively this: https://github.com/facebook/react-devtools/tree/devtools-next/backend/integration
This is not intended to expose stateful reflection APIs in production use since it will negatively affect performance.
| Type: Big Picture,React Core Team | low | Critical |
100,158,031 | react | Hibernating State (Not Necessarily Serialized) | Relay and others currently abuse some internals to get a persistent identity of a component. This is effectively used to restore the state of a component after it has been temporarily unmounted. It is also common to abuse Flux stores for this use case. Basically, since this capability doesn't currently exist, you're encouraged to use Flux for everything just in case you need this capability later on.
The purpose of this issue is to discuss a public API for hibernating the state of a component and then restoring it once the component remounts.
The use cases can be broken down into three scenarios:
- **List Item Out of View**: E.g. Infinite scrolling (such as "table views") where one row eventually needs to be reclaimed to save the memory used by the tree that is out of view.
- **Detail View**: In a master-detail view, clicking one item in the list switches the state of the detail view. When you click on the original view again.
- **Back/Forward Button**: You want to save a snapshot of the state when you navigate. The new view can then change the state and when you hit the back button to return to a previous state, you want to restore the original state of the subtree.
We would like to support this at least in a non-serialized form. You could imagine having an API that serializes this to JSON or some other data structure too but that's a potential follow up and not necessarily part of this.
One potential API:
``` js
class View {
state = { stateKeys: [{}, {}, {}] }
render() {
return <ChildView key={this.state.stateKeys[this.props.index]} />;
}
}
```
Basically, an object is used as a key. Unlike the normal key semantics, the state of the subtree is kept in memory indefinitely. We use a WeakMap to keep the state. If the object ever goes away, the GC will collect the state of that subtree. This solves all three use cases.
| Type: Big Picture,React Core Team | medium | Critical |
100,160,881 | TypeScript | Getting completions when writing out a dot in a parameter should suggest the splat operator | At present the completions returned seem arbitrary. Example:
``` ts
function(arg: string, ./**/): any {
}
```
Getting completions at the marker yields the typical alphabetical list of global things - worse yet, in VSCode, pressing `.` again auto-completes the first one (`.AbstractWorker.` isn't really a valid argument name)! Very frustrating.
| Suggestion,Help Wanted | low | Major |
100,162,479 | react | Externalize the State Tree (or alternatives) | React provides the notion of implicitly allowing a child component to store state (using the `setState` functionality). However, it is not just used for business logic state. It is also used to remember DOM state, or tiny ephemeral state such as scroll position, text selection etc. It is also used for temporary state such as memoization.
This is kind of a magic black box in React and the implementation details are largely hidden. People tend to reinvent the wheel because of it, and invent their own state management systems. E.g. using Flux.
There are still plenty of use cases for Flux, but not all state belongs in Flux stores.
Manually managing the adding/removing of state nodes for all of this becomes a huge burden. So, regardless, you're not going to keep doing this manually; you'll end up with your own system that does something similar. We need a convenient and standard way to handle this across components. This is not something that should be 100% in user space, because then components won't be able to integrate well with each other. Even if you think you're not using it because you're not calling setState, you are still relying on the capability being there.
It undermines the ecosystem and eventually everyone will reconverge on a single external state library anyway. We should just make sure that gets baked into React.
We designed the state tree so that the state tree data structure would be opaque so that we can optimize the internals in clever ways. It blocks many anti-patterns where external users breaks through the encapsulation boundaries to touch someone else's state. That's exactly the problem React's programming model tries to address.
However, unfortunately this state tree is opaque to end users. This means that a bunch of legitimate use cases are not available to external libraries, e.g. undo/redo, reclaiming memory, restoring state between sessions, debugging tools, hot reloading, moving state from the server to the client, and more.
We could provide a standard externalized state tree, e.g. using an immutable-js data structure. However, that might make clever optimizations and future features more difficult to adopt. It also isn't capable of fully encapsulating the true state of the tree, which may include DOM state; it may be OK to treat this state differently as a heuristic, but the API needs to account for it. It also doesn't allow us to enforce a certain level of encapsulation between components.
Another approach is to try to add support for more use cases to React, one by one, until an external state tree is no longer needed.
#4593 Debugger Hooks as Public API
#4594 Hibernating State (not the serialized form)
What else do we need?
Pinging some stake holders:
@leebyron @swannodette @gaearon @yungsters @ryanflorence
| Type: Big Picture,React Core Team | medium | Critical |
100,170,503 | java-design-patterns | External Configuration Store Pattern | **Description:**
The External Configuration Store design pattern involves storing configuration settings outside of the application, which allows for the modification of these settings without the need to redeploy the application. This pattern is particularly useful for cloud-based applications where environments can change dynamically, and configurations need to be updated frequently.
**Main elements of External Configuration Store pattern:**
1. **Configuration Source:** A centralized external source where configurations are stored. This can be a database, a cloud-based service, or a configuration server.
2. **Configuration Retrieval:** Mechanism to fetch the configuration from the external store. This can be done at application startup and periodically at runtime to ensure the latest configuration is always applied.
3. **Configuration Management:** Tools and interfaces to manage, update, and monitor the configurations in the external store.
4. **Fallback Mechanism:** In case the external configuration source is unavailable, the application should have a fallback mechanism, such as default settings or cached configurations (see the sketch below).
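Tying these elements together, a minimal Java sketch (all class and method names are invented for illustration and are not from this repository):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ExternalConfiguration {

  private final Map<String, String> cache = new ConcurrentHashMap<>();
  private final Map<String, String> defaults = Map.of("timeoutMs", "5000");

  // Retrieval with fallback: external store first, then cache, then defaults.
  public String get(String key) {
    try {
      String value = fetchFromExternalStore(key); // e.g. an HTTP call to a config server
      cache.put(key, value);                      // refresh the local fallback copy
      return value;
    } catch (Exception storeUnavailable) {
      return cache.getOrDefault(key, defaults.get(key));
    }
  }

  // Stub standing in for the real client (database, cloud service, config server).
  private String fetchFromExternalStore(String key) throws Exception {
    throw new Exception("external store unreachable");
  }
}
```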
**References:**
- [External Configuration Store - Microsoft Docs](https://docs.microsoft.com/en-us/azure/architecture/patterns/external-configuration-store)
- [12 Factor App - Store config in the environment](https://12factor.net/config)
- [GitHub Contribution Guidelines](https://github.com/iluwatar/java-design-patterns/wiki)
**Acceptance Criteria:**
1. Implement a mechanism to retrieve configurations from an external store (e.g., AWS SSM, Consul, etc.) and apply them to the application at startup and during runtime.
2. Create a configuration management interface to allow for easy updates and monitoring of configuration settings.
3. Ensure a fallback mechanism is in place to handle scenarios where the external configuration store is unreachable. This should include default settings or cached configurations. | info: help wanted,epic: pattern,type: feature | low | Major |
100,190,201 | go | net/rpc: lose cached Request on EOF | It looks like a (performance) bug at https://github.com/golang/go/blob/master/src/net/rpc/server.go#L578 - I think before `req = nil` this should be added:
``` go
server.freeRequest(req)
```
| NeedsInvestigation | low | Critical |
100,347,237 | opencv | More details on supported platform | I was trying to force opencl to use my nvidia gpu since the default was always using the intel version.
I found the page http://docs.opencv.org/modules/ocl/doc/introduction.html, which explained that I should set the environment variable OPENCV_OPENCL_DEVICE, but after reading it I wasn't exactly sure of the string that I should write. I guessed "NVIDIA", but I wasn't sure whether it was supposed to be in lower case or upper case, or whether it mattered.
Maybe showing the common options would help the next reader.
| bug,priority: low,category: documentation,category: ocl | low | Minor |
100,443,781 | java-design-patterns | Finite State Machine pattern | **Description:**
The Finite State Machine (FSM) design pattern is a behavioral design pattern used to model the behavior of a system that can be in one of a finite number of states. The system transitions from one state to another in response to external events or internal actions. This pattern is useful in scenarios where an object’s behavior is dependent on its state, and the state can change in response to events.
**Main elements of Finite State Machine pattern:**
1. **States:** Define the possible states the system can be in.
2. **Transitions:** Define the rules or conditions that trigger a change from one state to another.
3. **Events:** Actions or occurrences that cause state transitions.
4. **Context:** Maintains the current state of the system and facilitates state transitions.
5. **State Interface:** Defines the behavior associated with each state. Each state implements this interface to define its specific behavior (see the sketch below).
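Tying these elements together, a minimal Java sketch with three states (all names are invented for illustration):

```java
public class TurnstileFsm {

  // State "interface" realized by enum constants: each state owns its transitions.
  enum State {
    LOCKED {
      State on(String event) { return event.equals("coin") ? UNLOCKED : LOCKED; }
    },
    UNLOCKED {
      State on(String event) { return event.equals("push") ? LOCKED : UNLOCKED; }
    },
    BROKEN {
      State on(String event) { return BROKEN; } // terminal: ignores all events
    };

    abstract State on(String event);
  }

  private State current = State.LOCKED; // the context holds the current state

  void fire(String event) { current = current.on(event); }

  public static void main(String[] args) {
    TurnstileFsm fsm = new TurnstileFsm();
    fsm.fire("coin");
    fsm.fire("push");
    System.out.println(fsm.current); // LOCKED
  }
}
```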
**References:**
- [State Design Pattern - Refactoring Guru](https://refactoring.guru/design-patterns/state)
- [State Machines - Martin Fowler](https://martinfowler.com/articles/state-machines.html)
- [GitHub Contribution Guidelines](https://github.com/iluwatar/java-design-patterns/wiki)
**Acceptance Criteria:**
1. Implement the FSM pattern with at least three distinct states and the transitions between them.
2. Create a context class that maintains the current state and handles state transitions based on events.
3. Provide example scenarios demonstrating the FSM pattern, including state definitions, transitions, and event handling.
| info: help wanted,epic: pattern,type: feature | medium | Critical |
100,512,566 | opencv | videostab: motion-inpainting option crashes | When using the videostab's motion-inpaint option like in [this example](https://github.com/Itseez/opencv/blob/09b9b0fb9e9c9dd8c9e0d65705f8f19aa4c27f8a/samples/cpp/videostab.cpp#L509-L514), it crashes with an Error::StsNotImplemented "DensePyrLkOptFlowEstimatorGpu doesn't support errors calculation".
The callstack is as follows:
> [optical_flow.cpp#L134](https://github.com/Itseez/opencv/blob/09b9b0fb9e9c9dd8c9e0d65705f8f19aa4c27f8a/modules/videostab/src/optical_flow.cpp#L134)
> [inpainting.cpp#L401](https://github.com/Itseez/opencv/blob/09b9b0fb9e9c9dd8c9e0d65705f8f19aa4c27f8a/modules/videostab/src/inpainting.cpp#L401)
> [inpainting.cpp#L107](https://github.com/Itseez/opencv/blob/09b9b0fb9e9c9dd8c9e0d65705f8f19aa4c27f8a/modules/videostab/src/inpainting.cpp#L107)
> [stabilizer.cpp#L231-L232](https://github.com/Itseez/opencv/blob/09b9b0fb9e9c9dd8c9e0d65705f8f19aa4c27f8a/modules/videostab/src/stabilizer.cpp#L231-L232)
> ....
The problem seems to be that the OutputArray `flowMask_` is given but the `DensePyrLkOptFlowEstimatorGpu` rejects it. In general, the field `flowMask_` does not seem to be used.
**Steps to reproduce**:
Run the videostab with the following options
> videostab test.avi --motion-inpaint=yes --est-trim=no
(you can also leave out the --est-trim option - but the above command reaches the crashing statement much faster)
| bug,priority: normal,affected: 3.4,category: gpu/cuda (contrib),category: videostab | low | Critical |
100,535,407 | rust | macOS packages and Windows MSIs are not signed | This possibly applies to other platforms as well.
<img width="532" alt="screen shot 2015-08-12 at 14 35 56" src="https://cloud.githubusercontent.com/assets/47542/9224034/b2a452de-40ff-11e5-88c0-fd3b9f38d930.png">
Currently, the Rust installer comes up with this nice warning, making the user navigate to a settings pane and acknowledge it to really start the installer. Administrators can also decide to completely deactivate this.
I think at least the official installers of Rust should be signed using an Apple Developer Certificate.
| O-windows,O-macos,C-enhancement,P-medium,T-infra | medium | Critical |
100,595,518 | rust | Tracking issue for Ipv{4,6}Addr convenience methods | The below is a list of methods left to be stabilized under the `ip` feature. The path forward today is unclear; if you'd like to push through any method on here the libs team is interested in a PR with links to the associated RFCs or other official documentation. Let us know!
- [ ] [`IpAddr::is_documentation`](https://doc.rust-lang.org/nightly/std/net/enum.IpAddr.html#method.is_documentation)
- [ ] [`Ipv6Addr::is_documentation`](https://doc.rust-lang.org/nightly/std/net/struct.Ipv6Addr.html#method.is_documentation)
****
- [ ] [`IpAddr::is_global`](https://doc.rust-lang.org/nightly/std/net/enum.IpAddr.html#method.is_global)
- [ ] [`Ipv4Addr::is_global`](https://doc.rust-lang.org/nightly/std/net/struct.Ipv4Addr.html#method.is_global)
- [ ] [`Ipv6Addr::is_global`](https://doc.rust-lang.org/nightly/std/net/struct.Ipv6Addr.html#method.is_global)
****
- [ ] [`Ipv4Addr::is_shared`](https://doc.rust-lang.org/nightly/std/net/struct.Ipv4Addr.html#method.is_shared)
- ~~[`Ipv4Addr::is_ietf_protocol_assignment`](https://doc.rust-lang.org/1.55.0/std/net/struct.Ipv4Addr.html#method.is_ietf_protocol_assignment)~~ https://github.com/rust-lang/rust/pull/86439
- [ ] [`Ipv4Addr::is_benchmarking`](https://doc.rust-lang.org/nightly/std/net/struct.Ipv4Addr.html#method.is_benchmarking)
- [ ] [`Ipv4Addr::is_reserved`](https://doc.rust-lang.org/nightly/std/net/struct.Ipv4Addr.html#method.is_reserved)
****
- [ ] [`Ipv6Addr::is_unicast_global`](https://doc.rust-lang.org/nightly/std/net/struct.Ipv6Addr.html#method.is_unicast_global)
- [ ] [`Ipv6Addr::is_unicast_link_local`](https://doc.rust-lang.org/nightly/std/net/struct.Ipv6Addr.html#method.is_unicast_link_local)
- ~~[`Ipv6Addr::is_unicast_link_local_strict`](https://doc.rust-lang.org/1.53.0/std/net/struct.Ipv6Addr.html#method.is_unicast_link_local_strict)~~ https://github.com/rust-lang/rust/pull/85819
- ~~[`Ipv6Addr::is_unicast_site_local`](https://doc.rust-lang.org/1.54.0/std/net/struct.Ipv6Addr.html#method.is_unicast_site_local)~~ https://github.com/rust-lang/rust/pull/85820
- [ ] [`Ipv6Addr::is_unique_local`](https://doc.rust-lang.org/nightly/std/net/struct.Ipv6Addr.html#method.is_unique_local)
- [ ] [`Ipv6Addr::multicast_scope`](https://doc.rust-lang.org/nightly/std/net/struct.Ipv6Addr.html#method.multicast_scope)
****
- [ ] [`Ipv6Addr::is_ipv4_mapped`](https://doc.rust-lang.org/nightly/std/net/struct.Ipv6Addr.html#method.is_ipv4_mapped)
- [x] [`Ipv6Addr::to_ipv4_mapped`](https://doc.rust-lang.org/nightly/std/net/struct.Ipv6Addr.html#method.to_ipv4_mapped) https://github.com/rust-lang/rust/pull/96906
- [x] [`Ipv6Addr::to_canonical`](https://doc.rust-lang.org/nightly/std/net/struct.Ipv6Addr.html#method.to_canonical) https://github.com/rust-lang/rust/pull/115955
- [x] [`IpAddr::to_canonical`](https://doc.rust-lang.org/nightly/std/net/enum.IpAddr.html#method.to_canonical) https://github.com/rust-lang/rust/pull/115955
## Steps
- [ ] Implementation https://github.com/rust-lang/rust/pull/22015
- [ ] Implementation https://github.com/rust-lang/rust/pull/34694
- [ ] Stabilization attempt https://github.com/rust-lang/rust/pull/66584
- [ ] Stabilization attempt https://github.com/rust-lang/rust/pull/76098
- [ ] Stabilization PR(s)
Subsets of the listed methods can be stabilized, rather than attempting to stabilize everything at once.
## Unresolved Questions
- [ ] Differences between Rust and other languages https://github.com/rust-lang/rust/pull/76098#issuecomment-761234042
- [ ] More specific case of the above: does the IPv6 unicast interface do what we expect? https://github.com/rust-lang/rust/issues/85604
- [ ] Do the provided methods pass ipcheck? https://github.com/rust-lang/libs-team/tree/93b78eef2e0d455a3e69c05333cd8f276e4e95f1/tools/ipcheck. Last known run: https://github.com/rust-lang/rust/pull/76098#issuecomment-760651861
From @KodrAus in https://github.com/rust-lang/rust/pull/76098#issuecomment-760841554:
- [ ] Should we replace the `Ipv6Addr::is_unicast_*` methods with a `Ipv6Addr::unicast_scope` method that returns a `Ipv6UnicastScope` enum (https://github.com/rust-lang/rust/pull/76098#issuecomment-735019872)?
- [ ] Should we change the behavior of `Ipv6Addr::to_ipv4` to ignore deprecated IPv4-compatible addresses, or deprecate the whole method in favor of the more correct `Ipv6Addr::to_ipv4_mapped` method (https://github.com/rust-lang/rust/pull/76098#discussion_r530465408)?
- [ ] Are we ok with `Ipv6Addr::is_*` methods now properly considering mapped (non-deprecated) IPv4 addresses? I'd personally be comfortable considering the old behavior a bug.
- [ ] Are there any behavioral differences between other language implementations that we should investigate? (https://github.com/rust-lang/rust/pull/76098#issuecomment-760651861) | T-libs-api,B-unstable,C-tracking-issue,A-io,Libs-Tracked | high | Critical |
100,599,794 | rust | Tracking issue for string patterns | (Link to original RFC: https://github.com/rust-lang/rfcs/pull/528)
This is a tracking issue for the unstable `pattern` feature in the standard library. We have many APIs that can search a string generically with any number of pattern types (e.g. substrings, characters, closures), but implementing your own pattern (e.g. a regex) is not stable. It would be nice if these implementations could indeed be stable!
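For reference, the consumer side of this is already stable; only implementing the `Pattern` trait yourself requires the unstable feature. A minimal sketch of the three stable pattern kinds:

```rust
fn main() {
    let s = "hello world";
    assert_eq!(s.find('o'), Some(4)); // char pattern
    assert_eq!(s.find("world"), Some(6)); // &str (substring) pattern
    assert_eq!(s.find(|c: char| c.is_whitespace()), Some(5)); // closure pattern
}
```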
Some open questions are:
- Have these APIs been audited for naming and consistency?
- Are we sure these APIs are as conservative as they need to be?
- Are we sure that these APIs are as performant as they can be?
- Are we sure that these APIs can be used to implement all the necessary forms of searching?
cc @Kimundi
| T-libs-api,B-unstable,C-tracking-issue,A-str,Libs-Tracked | high | Critical |
100,603,186 | TypeScript | Hierarchy with getNavigationBarItems with modules | Calling getNavigationBarItems on a file with the following code:
```ts
module A {
class Hello {
}
}
module B {
}
```
will return 3 items (a module, a class, and a module):

I had expected to find the Hello class item as a childItem for the first module A item. Is this a bug or by design?
| Bug,Help Wanted,API | low | Critical |
100,668,365 | go | syscall: add bind/mount operations to SysProcAttr on Linux | It would be nice to be able to pass a list of bind mounts to ForkAndExec via SysProcAttr, which Go would perform after forking and before execing: https://golang.org/pkg/syscall/#SysProcAttr. Alternatively, it would be nice to be able to pass it a lambda that would make some syscalls (or raw syscalls) before it execs.
| Thinking,FeatureRequest,compiler/runtime | low | Major |
100,771,275 | thefuck | "history" rule runs very slow when bash history is huge | thefuck has become very slow over the last few versions. After setting `THEFUCK_DEBUG=true` I found the problem:
```
...
DEBUG: Trying rule: history; took: 0:00:07.860372
...
```
I have 100k+ lines in ~/.bash_history, which is probably the cause. Is there any way to speed up the rule, or do I have to clean up my bash history regularly?
Thanks!
| enhancement | low | Critical |
100,793,092 | rust | add pop() to HashSet etc.? | In Python I've often found the `set.pop()` method useful: it removes and returns a single arbitrary element of the set (or raises `KeyError` if empty). In Rust this should return an `Option<T>`, of course.
I haven't found such a method on `HashSet`; have I just not looked hard enough?
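For what it's worth, here is a minimal sketch of a stable workaround (the `pop` helper is mine, not part of the standard library): copy out an arbitrary element via the iterator, then `take` it out of the set.

```rust
use std::collections::HashSet;
use std::hash::Hash;

// Removes and returns an arbitrary element, or None if the set is empty.
// The Clone bound is the price of doing this on stable today.
fn pop<T: Hash + Eq + Clone>(set: &mut HashSet<T>) -> Option<T> {
    let elem = set.iter().next()?.clone();
    set.take(&elem)
}

fn main() {
    let mut s: HashSet<i32> = [1, 2, 3].into_iter().collect();
    while let Some(x) = pop(&mut s) {
        println!("popped {}", x);
    }
    assert!(s.is_empty());
}
```

A built-in method could remove the bucket entry directly and avoid the `Clone` bound, which is part of the motivation for having this on `HashSet` itself.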
| A-collections,T-libs-api,C-feature-accepted | medium | Critical |
100,806,045 | rust | FreeBSD i386 test failures | FreeBSD 32-bit has never passed the tests since the buildbot was set up.
| O-x86_64,O-freebsd,C-bug,O-x86_32 | low | Critical |
100,822,103 | rust | Tracking issue for crates that are compiler dependencies | This is a tracking issue for the unstable `rustc_private` feature of the standard distribution. It's pretty unfortunate that we have to explicitly prevent everyone from linking to compiler internals via stability attributes; perhaps it'd be better if we just didn't ship them at all.
Is there a better solution? Must we rely on instability forever?
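For context, a minimal sketch of what opting in looks like on nightly today (the crate name is just one example of a compiler-internal crate; it also requires the `rustc-dev` rustup component):

```rust
#![feature(rustc_private)] // hard error on stable and beta toolchains

// Compiler-internal crates come from the sysroot rather than Cargo,
// so they still need an explicit `extern crate`.
extern crate rustc_driver;

fn main() {
    // Anything touching compiler internals lives behind the gate above.
}
```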
| T-libs-api,B-unstable,C-tracking-issue,Libs-Tracked | medium | Critical |
100,950,039 | opencv | videostab: trim-ratio must not be greater than 0.44 | Contrary to the documentation in the [videostab sample](https://github.com/Itseez/opencv/blob/09b9b0fb9e9c9dd8c9e0d65705f8f19aa4c27f8a/samples/cpp/videostab.cpp#L118), which states that the valid range of trim-ratio is between 0 and 0.5:
- when trim-ratio >= 0.44, you get a "buffer smaller than minimum size" warning on the console (probably from FFmpeg), but the program seems fine
- when trim-ratio == 0.5, the program just stops with the message "processed frames: 0"
- when trim-ratio > 0.6, an assertion is thrown inside the OpenCV core (a given matrix roi has negative width). Maybe validating the parameter inside the videostab module would be better
**Steps to reproduce**:
Run the [videostab example](https://github.com/Itseez/opencv/blob/09b9b0fb9e9c9dd8c9e0d65705f8f19aa4c27f8a/samples/cpp/videostab.cpp#L509-L514)
> videostab test.avi --est-trim=no --trim-ratio=0.44
> videostab test.avi --est-trim=no --trim-ratio=0.5
> videostab test.avi --est-trim=no --trim-ratio=0.6
| bug,priority: normal,category: samples,affected: 3.4 | low | Minor |
101,011,423 | rust | False ambiguity due to overlap between higher-ranked and other where clause | The test at http://is.gd/xHYV2z was extracted from the `qcollect-traits` package on crates.io. It fails to compile due to a false ambiguity between the higher-ranked where-clause and the other one. This is particularly frustrating because the ambiguity occurs as part of projection and the higher-ranked where clause doesn't even list a binding for `Output`.
The correct fix is probably to prune one where clause or the other as part of selection, but this is a bit tricky with the current region setup due to https://github.com/rust-lang/rust/issues/21974.
``` rust
pub trait ImmutableSequenceTypes<'a, T: 'a> {
    type Output;
}

pub trait MutableSequenceTypes<'a, T: 'a>: ImmutableSequenceTypes<'a, T> {}

fn foo<'t, X, T: 't>()
    where X: for<'a> ImmutableSequenceTypes<'a, T>,
          X: ImmutableSequenceTypes<'t, T, Output = &'t T>
{
    // ERROR: normalizing this projection is reported as ambiguous,
    // even though only one of the two where-clauses binds `Output`.
    bar::<<X as ImmutableSequenceTypes<'t, T>>::Output>();
}

fn bar<A>() {}

fn main() {}
```
(Due to this bug, PR #27641 causes a regression in `qcollect-traits`, but the problem is pre-existing.)
| A-trait-system,T-compiler,C-bug,T-types,A-higher-ranked | low | Critical |
101,081,688 | youtube-dl | Support for Hudl | Would it be possible to support http://www.hudl.com ? It uses JSON for the video URLs, so hopefully it won't be too difficult :)
| site-support-request | low | Major |
101,098,596 | TypeScript | Consider exporting 'getOwnEmitOutputFilePath' | Courtesy of @ivogabe on [Gitter on August 14, 2015 5:26 AM](https://gitter.im/Microsoft/TypeScript?at=55cdde93057d8c9d3a6d55d2):
> Question about the API, is there an exported method that returns the output filename of a source file? It should follow the options `outDir` & `rootDir` of course.
| Suggestion,Help Wanted,API | low | Minor |
101,099,679 | nvm | Should export functions to the shell | NVM suffers from the following:
```
$ tail -n +1 entrypoint.sh childscript.sh
==> entrypoint.sh <==
#!/bin/bash
function inheritedfunction() {
echo "I am an inherited function"
}
exec ./childscript.sh
==> childscript.sh <==
#!/bin/bash
function localfunction() {
echo "I am a local function"
}
localfunction
inheritedfunction
$ ./entrypoint.sh
I am a local function
./childscript.sh: line 8: inheritedfunction: command not found
```
Essentially, if you `exec` a child script (either explicitly with `exec ./childscript` or implicitly by running `./childscript`), then NVM becomes unavailable in it.
`gvm` and `rbenv` solve this with a shell shim that modifies `$PATH`, making sure the binaries are real executables that actually live on the `$PATH`.
A short-term suggestion would be to `export -f` every function in nvm (it's a long list):
``` diff
+ export -f nvm
+ export -f nvm_add_iojs_prefix
+ export -f nvm_alias
+ export -f nvm_alias_path
+ export -f nvm_binary_available
+ export -f nvm_checksum
+ export -f nvm_download
+ export -f nvm_ensure_default_set
+ export -f nvm_ensure_version_installed
+ export -f nvm_ensure_version_prefix
+ export -f nvm_find_nvmrc
+ export -f nvm_find_up
+ export -f nvm_format_version
+ export -f nvm_get_arch
+ export -f nvm_get_latest
+ export -f nvm_get_os
+ export -f nvm_has
+ export -f nvm_has_system_iojs
+ export -f nvm_has_system_node
+ export -f nvm_install_iojs_binary
+ export -f nvm_install_node_binary
+ export -f nvm_install_node_source
+ export -f nvm_iojs_prefix
+ export -f nvm_is_alias
+ export -f nvm_is_iojs_version
+ export -f nvm_is_valid_version
+ export -f nvm_ls
+ export -f nvm_ls_current
+ export -f nvm_ls_remote
+ export -f nvm_ls_remote_iojs
+ export -f nvm_ls_remote_iojs_org
+ export -f nvm_match_version
+ export -f nvm_node_prefix
+ export -f nvm_normalize_version
+ export -f nvm_npm_global_modules
+ export -f nvm_num_version_groups
+ export -f nvm_prepend_path
+ export -f nvm_print_implicit_alias
+ export -f nvm_print_npm_version
+ export -f nvm_print_versions
+ export -f nvm_rc_version
+ export -f nvm_remote_version
+ export -f nvm_remote_versions
+ export -f nvm_resolve_alias
+ export -f nvm_resolve_local_alias
+ export -f nvm_strip_iojs_prefix
+ export -f nvm_strip_path
+ export -f nvm_supports_source_options
+ export -f nvm_tree_contains_path
+ export -f nvm_validate_implicit_alias
+ export -f nvm_version
+ export -f nvm_version_dir
+ export -f nvm_version_greater
+ export -f nvm_version_greater_than_or_equal_to
+ export -f nvm_version_path
+}
```
But with this, `childscript` is at least able to run nvm. Unfortunately it also means that the output of `$ env` becomes essentially useless, as `export -f` is a bashism that exports each function body as an environment variable with the same name as the function so that it can pass through `execv(3)`.
| needs followup | low | Major |