GUI and Desktop Applications | A_Id | Networking and APIs | Python Basics and Environment | Other | Database and SQL | Available Count | is_accepted | Q_Score | CreationDate | Users Score | AnswerCount | System Administration and DevOps | Title | Q_Id | Score | Tags | Answer | Question | Web Development | Data Science and Machine Learning | ViewCount
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 48,676,209 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-02-08T01:13:00.000 | 2 | 1 | 0 | Sampling from a multivariate probability density function in python | 48,675,954 | 0.379949 | python,random,statistics,probability,probability-density | There are a few different paths one can follow here.
(1) If P(x,y,z) factors as P(x,y,z) = P(x) P(y) P(z) (i.e., x, y, and z are independent) then you can sample each one separately.
(2) If P(x,y,z) has a more general factorization, you can sample the variables in turn, each conditional on the ones already drawn. E.g. if P(x,y,z) = P(z|x, y) P(y | x) P(x), then you can sample x, then y given x, then z given x and y.
(3) For some particular distributions, there are known ways to sample. E.g. for multivariate Gaussian, if x is a sample from a mean 0 and identity covariance Gaussian (i.e. just sample each x_i as N(0, 1)), then y = L x + m has mean m and covariance S = L L' where L is the lower-triangular Cholesky decomposition of S, which must be positive definite.
(4) For many multivariate distributions, none of the above apply, and a more complicated scheme such as Markov chain Monte Carlo is applied.
Maybe if you say more about the problem, more specific advice can be given. | I have a multivariate probability density function P(x,y,z), and I want to sample from it. Normally, I would use numpy.random.choice() for this sort of task, but this function only works for 1-dimensional probability densities. Is there an equivalent function for multivariate PDFs? | 0 | 1 | 965 |
0 | 48,683,822 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-02-08T10:47:00.000 | 0 | 3 | 0 | RGB in OpenCV. What does it mean? | 48,683,621 | 0 | python,opencv,image-processing | As long as you don't change the extension of the image file, the pixel values don't change, because they're stored in memory; your display or printer is just the way you choose to view the image, and you often don't get the same thing because it depends on the technology and the different filters applied to your image before it is displayed or printed. | Assume we are reading and loading an image using OpenCV from a specific location on our drive, and then we read some pixel values and colors; let's assume that this is a scanned image.
Usually if we open a scanned image we will notice some differences between the printed image (before scanning) and the image when we open it and see it on the display screen.
The question is:
The pixel color values that we get from OpenCV: are they in our display screen's color space, or do we get exactly the same colors that we have in the scanned image (printed version)? | 0 | 1 | 135
0 | 48,694,488 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-02-08T19:23:00.000 | 0 | 2 | 0 | Get 3D coordinates in OpenCV having X,Y and distance to object | 48,693,266 | 0 | python,c++,opencv | The "without calibration" bit dooms you, sorry.
Without knowing the focal length (or, equivalently, the field of view) you cannot "convert" a pixel into a ray.
Note that you can sometimes get an approximate calibration directly from the camera - for example, it might write a focal length for its lens into the EXIF header of the captured images. | I am trying to convert X,Y position of a tracked object in an image to 3D coordinates.
I got the distance to the object based on the size of the tracked object (A marker) but now I need to convert all of this to a 3D coordinate in the space. I have been reading a lot about this but all of the methods I found require a calibration matrix to achieve this.
In my case I don't need a lot of precision but I need this to work with multiple cameras without calibration. Is there a way to achieve what I'm trying to do? | 0 | 1 | 630 |
0 | 48,694,562 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-02-08T19:23:00.000 | 0 | 2 | 0 | Get 3D coordinates in OpenCV having X,Y and distance to object | 48,693,266 | 0 | python,c++,opencv | If you're using some sort of micro controller, it may be possible to point a sensor towards that object that's seen through the camera to get the distance.
You would most likely have to have a complex algorithm to get multiple cameras to work together to return the distance. If there's no calibration, there would be no way for those cameras to work together, as Francesco said. | I am trying to convert X,Y position of a tracked object in an image to 3D coordinates.
I got the distance to the object based on the size of the tracked object (A marker) but now I need to convert all of this to a 3D coordinate in the space. I have been reading a lot about this but all of the methods I found require a calibration matrix to achieve this.
In my case I don't need a lot of precision but I need this to work with multiple cameras without calibration. Is there a way to achieve what I'm trying to do? | 0 | 1 | 630 |
0 | 49,092,434 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-02-08T21:07:00.000 | 0 | 5 | 0 | pandas read csv ignore newline | 48,694,790 | 0 | python,pandas,biopython | There is no good way to do this.
BioPython alone seems to be sufficient, compared with a hybrid solution that iterates through a BioPython object and inserts the records into a dataframe. | I have a dataset (for compbio people out there, it's a FASTA) that is littered with newlines that don't act as a delimiter of the data.
Is there a way for pandas to ignore newlines when importing, using any of the pandas read functions?
sample data:
>ERR899297.10000174
TGTAATATTGCCTGTAGCGGGAGTTGTTGTCTCAGGATCAGCATTATATATCTCAATTGCATGAATCATCGTATTAATGC
TATCAAGATCAGCCGATTCT
every entry is delimited by the ">"
data is split by newlines (nominally limited to 80 chars per line, but not actually respected worldwide) | 0 | 1 | 9,964
0 | 48,696,674 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-02-08T23:36:00.000 | 0 | 2 | 0 | Matching genes in string in Python | 48,696,556 | 0 | python,regex,subset | For character handling at such a micro level, the query will end up being clunky with a high response time — if you're lucky enough to get a working one at all.
This is more of a scripting kind of operation. | I'm trying to match text strings (gene names) in a column from one file to text strings in a column of another, in order to create a subset of the second.
For simplicity, the first will look more or less like this:
hits = ["IL1", "NRC31", "AR", etc.]
However, the column of interest in the second df looks like this:
68 NFKBIL1;NFKBIL1;ATP6V1G2;NFKBIL1;NFKBIL1;NFKBI
236 BARHL2
272 ARPC2;ARPC2
324 MARCH5
...
11302 NFKBIL1;NFKBIL1;ATP6V1G2;NFKBIL1;NFKBIL1;NFKBI
426033 ABC1;IL1;XYZ2
...
425700 IL17D
426295 RAB3IL1
426474 IL15RA;IL15RA
I came up with:
df2[df2.UCSC_RefGene_Name.str.contains('|'.join(hits), na=False)]
But I need to match the gene IL1 if it falls in the middle of the string (e.g.row 426033 above) but not similar genes (e.g. row 426295 above).
How do I use regular expressions to say:
"Match the any of the strings in hits when they have ';' or 'a blankspace' at either the beginning or the end of the gene name, but not when they have other letters or numbers on either side (which indicate a different gene with a similar name)?
I also need to exclude any rows with NA in dataframe 2.
Yes, I know there are regex syntax documents, but there are too many moving parts here for me to make sense of them. | 0 | 1 | 271 |
0 | 48,697,994 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-02-09T02:41:00.000 | 1 | 1 | 0 | What is the normalization method so that there is no negative value? | 48,697,967 | 1.2 | python,normalization,image-preprocessing | Without more information about your source code and the packages you're using, this is really more of a data science question than a python question.
To answer your question, a more than satisfactory method in most circumstances is min-max scaling. Simply normalize each coordinate of your images between 0 and 1. Whether or not that is appropriate for your circumstance depends on what you intend to do next with that data. | I am trying to normalize an MR image.
There is a negative value in the MR image.
So the MR image was normalized using the Gaussian method, resulting in a negative area.
But I don't want to get a negative area.
My question:
What is the normalization method so that there is no negative value?
Thanks in advance | 0 | 1 | 424 |
0 | 48,707,014 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-02-09T05:38:00.000 | 0 | 1 | 0 | Extending a trendline in a lmfit plot | 48,699,357 | 1.2 | python,lmfit | More detail about what you are actually doing would be helpful. That is, vague questions can really only get vague answers.
Assuming you are doing curve fitting with lmfit's Model class, once you have your Model and a set of Parameters (say, after a fit has refined them to best match some data), you can use those to evaluate the Model (with the model.eval() method) for any values of the independent variable (typically called x). That allows evaluating on a finer grid or extending past the range of the data you actually used in the fit.
Of course, predicting past the end of the data range assumes that the model is valid outside the range of the data. It's hard to know when that assumption is correct, especially when you have no data ;). "It's tough to make predictions, especially about the future." | I have fitted a curve using lmfit but the trendline/curve is short. Please how do I extend the trendline/curve in both directions because the trendline/curve is hanging. Sample codes are warmly welcome my senior programmers. Thanks. | 0 | 1 | 213 |
0 | 49,782,296 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2018-02-09T07:15:00.000 | 14 | 1 | 0 | How to use RASA NLU with RASA CORE | 48,700,554 | 1.2 | python,rasa-nlu,rasa-core | RASA NLU is the natural language understanding piece, which is used for taking examples of natural language and translating them into "intents." For example: "yes", "yeah", "yep" and "for sure" would all be translated into the "yes" intent.
RASA CORE on the other hand is the engine that processes the flow of conversation after the intent of the user has already been determined. RASA CORE can use other natural language translators as well, so while it pairs very nicely with RASA NLU they don't both have to be used together.
As an example if you were using both:
User says "hey there" to RASA core bot
Rasa core bot calls RASA NLU to understand what "hey there" means
RASA NLU translates "hey there" into intent = hello (with 85% confidence)
Rasa core receives "hello" intent
Rasa core runs through its training examples to guess what it should do when it receives the "hello" intent
Rasa core predicts (with 92% confidence) that it should respond with the "utter_hello" template
Rasa core responds to user "Hi, I'm your friendly Rasa bot"
Hope this helps. | I am new to chatbot application and RASA as well, can anyone please help me to understand how should i use RASA NLU with RASA CORE. | 0 | 1 | 1,115 |
0 | 48,713,292 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2018-02-09T14:33:00.000 | -1 | 1 | 0 | multiplicative group using SymPy | 48,708,133 | -0.197375 | python,sympy | after reading more on the subject ,
Multiplicative group Z * p
In classical cyclic group gryptography we usually use multiplicative group Z p * , where p is prime.
Z p *
= { 1, 2, .... , p - 1} combined with multiplication of integers mod p
So it is simply
G2= [ i for i in range(1, n-1 )] #G2 multiplicativ Group of ordern | I'm trying to create a multiplicative group of order q.
This code generates an additive cyclic group of order 5
from sympy.combinatorics.generators import cyclic
list(cyclic(5))
[(4), (0 1 2 3 4), (0 2 4 1 3), (0 3 1 4 2), (0 4 3 2 1)]
Any help ? | 0 | 1 | 463 |
0 | 48,715,905 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-02-10T00:08:00.000 | 0 | 2 | 0 | support vector regression time series forecasting - python | 48,715,867 | 0 | python,scikit-learn,time-series,svm | given multi-variable regression, y =
Regression is a multi-dimensional separation which can be hard to visualize in one's head since it is not 3D.
The better question might be: which inputs are consequential to the output value y?
Since you have the code for loadavg in the kernel source, you can use the input parameters. | I have a dataset of peak load for a year. It's a simple two-column dataset with the date and load (kWh).
I want to train it on the first 9 months and then let it predict the next three months . I can't get my head around how to implement SVR. I understand my 'y' would be predicted value in kWh but what about my X values?
Can anyone help? | 0 | 1 | 2,181 |
0 | 48,735,246 | 0 | 0 | 0 | 0 | 13 | false | 305 | 2018-02-10T12:35:00.000 | 9 | 23 | 0 | Could not find a version that satisfies the requirement tensorflow | 48,720,833 | 1 | python,python-3.x,python-2.7,tensorflow,pip | Uninstalling Python and then reinstalling solved my issue and I was able to successfully install TensorFlow. | I installed the latest version of Python (3.6.4 64-bit) and the latest version of PyCharm (2017.3.3 64-bit). Then I installed some modules in PyCharm (Numpy, Pandas, etc), but when I tried installing Tensorflow it didn't install, and I got the error message:
Could not find a version that satisfies the requirement TensorFlow (from versions: )
No matching distribution found for TensorFlow.
Then I tried installing TensorFlow from the command prompt and I got the same error message.
I did however successfully install tflearn.
I also installed Python 2.7, but I got the same error message again. I googled the error and tried some of the things which were suggested to other people, but nothing worked (this included installing Flask).
How can I install Tensorflow? Thanks. | 0 | 1 | 663,737 |
0 | 51,831,928 | 0 | 0 | 0 | 0 | 13 | true | 305 | 2018-02-10T12:35:00.000 | 228 | 23 | 0 | Could not find a version that satisfies the requirement tensorflow | 48,720,833 | 1.2 | python,python-3.x,python-2.7,tensorflow,pip | As of October 2020:
Tensorflow only supports the 64-bit version of Python
Tensorflow only supports Python 3.5 to 3.8
So, if you're using an out-of-range version of Python (older or newer) or a 32-bit version, then you'll need to use a different version. | I installed the latest version of Python (3.6.4 64-bit) and the latest version of PyCharm (2017.3.3 64-bit). Then I installed some modules in PyCharm (Numpy, Pandas, etc), but when I tried installing Tensorflow it didn't install, and I got the error message:
Could not find a version that satisfies the requirement TensorFlow (from versions: )
No matching distribution found for TensorFlow.
Then I tried installing TensorFlow from the command prompt and I got the same error message.
I did however successfully install tflearn.
I also installed Python 2.7, but I got the same error message again. I googled the error and tried some of the things which were suggested to other people, but nothing worked (this included installing Flask).
How can I install Tensorflow? Thanks. | 0 | 1 | 663,737 |
0 | 65,537,792 | 0 | 0 | 0 | 0 | 13 | false | 305 | 2018-02-10T12:35:00.000 | 7 | 23 | 0 | Could not find a version that satisfies the requirement tensorflow | 48,720,833 | 1 | python,python-3.x,python-2.7,tensorflow,pip | (as of Jan 1st, 2021)
For Python versions from 3.9.x upwards there is no support for TensorFlow 2. If you are installing packages via pip with 3.9, you simply get a "package doesn't exist" message; it worked after reverting to the latest 3.8.x. Thought I would drop this here; I will update when 3.9.x is working with Tensorflow 2.x | I installed the latest version of Python (3.6.4 64-bit) and the latest version of PyCharm (2017.3.3 64-bit). Then I installed some modules in PyCharm (Numpy, Pandas, etc), but when I tried installing Tensorflow it didn't install, and I got the error message:
Could not find a version that satisfies the requirement TensorFlow (from versions: )
No matching distribution found for TensorFlow.
Then I tried installing TensorFlow from the command prompt and I got the same error message.
I did however successfully install tflearn.
I also installed Python 2.7, but I got the same error message again. I googled the error and tried some of the things which were suggested to other people, but nothing worked (this included installing Flask).
How can I install Tensorflow? Thanks. | 0 | 1 | 663,737 |
0 | 55,988,352 | 0 | 0 | 0 | 0 | 13 | false | 305 | 2018-02-10T12:35:00.000 | 71 | 23 | 0 | Could not find a version that satisfies the requirement tensorflow | 48,720,833 | 1 | python,python-3.x,python-2.7,tensorflow,pip | I installed it successfully by pip install https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.8.0-py3-none-any.whl | I installed the latest version of Python (3.6.4 64-bit) and the latest version of PyCharm (2017.3.3 64-bit). Then I installed some modules in PyCharm (Numpy, Pandas, etc), but when I tried installing Tensorflow it didn't install, and I got the error message:
Could not find a version that satisfies the requirement TensorFlow (from versions: )
No matching distribution found for TensorFlow.
Then I tried installing TensorFlow from the command prompt and I got the same error message.
I did however successfully install tflearn.
I also installed Python 2.7, but I got the same error message again. I googled the error and tried some of the things which were suggested to other people, but nothing worked (this included installing Flask).
How can I install Tensorflow? Thanks. | 0 | 1 | 663,737 |
0 | 59,836,416 | 0 | 0 | 0 | 0 | 13 | false | 305 | 2018-02-10T12:35:00.000 | 5 | 23 | 0 | Could not find a version that satisfies the requirement tensorflow | 48,720,833 | 0.043451 | python,python-3.x,python-2.7,tensorflow,pip | Looks like the problem is with Python 3.8. Use Python 3.7 instead. Steps I took to solve this.
Created a python 3.7 environment with conda
Installed rasa using pip install rasa within the environment.
Worked for me. | I installed the latest version of Python (3.6.4 64-bit) and the latest version of PyCharm (2017.3.3 64-bit). Then I installed some modules in PyCharm (Numpy, Pandas, etc), but when I tried installing Tensorflow it didn't install, and I got the error message:
Could not find a version that satisfies the requirement TensorFlow (from versions: )
No matching distribution found for TensorFlow.
Then I tried installing TensorFlow from the command prompt and I got the same error message.
I did however successfully install tflearn.
I also installed Python 2.7, but I got the same error message again. I googled the error and tried some of the things which were suggested to other people, but nothing worked (this included installing Flask).
How can I install Tensorflow? Thanks. | 0 | 1 | 663,737 |
0 | 69,111,030 | 0 | 0 | 0 | 0 | 13 | false | 305 | 2018-02-10T12:35:00.000 | 1 | 23 | 0 | Could not find a version that satisfies the requirement tensorflow | 48,720,833 | 0.008695 | python,python-3.x,python-2.7,tensorflow,pip | using pip install tensorflow --user did it for me | I installed the latest version of Python (3.6.4 64-bit) and the latest version of PyCharm (2017.3.3 64-bit). Then I installed some modules in PyCharm (Numpy, Pandas, etc), but when I tried installing Tensorflow it didn't install, and I got the error message:
Could not find a version that satisfies the requirement TensorFlow (from versions: )
No matching distribution found for TensorFlow.
Then I tried installing TensorFlow from the command prompt and I got the same error message.
I did however successfully install tflearn.
I also installed Python 2.7, but I got the same error message again. I googled the error and tried some of the things which were suggested to other people, but nothing worked (this included installing Flask).
How can I install Tensorflow? Thanks. | 0 | 1 | 663,737 |
0 | 49,432,863 | 0 | 0 | 0 | 0 | 13 | false | 305 | 2018-02-10T12:35:00.000 | 36 | 23 | 0 | Could not find a version that satisfies the requirement tensorflow | 48,720,833 | 1 | python,python-3.x,python-2.7,tensorflow,pip | I am giving the steps for Windows.
If you are using python-3
Upgrade pip to the latest version using py -m pip install --upgrade pip
Install package using py -m pip install <package-name>
If you are using python-2
Upgrade pip to the latest version using py -2 -m pip install --upgrade pip
Install package using py -2 -m pip install <package-name>
It worked for me | I installed the latest version of Python (3.6.4 64-bit) and the latest version of PyCharm (2017.3.3 64-bit). Then I installed some modules in PyCharm (Numpy, Pandas, etc), but when I tried installing Tensorflow it didn't install, and I got the error message:
Could not find a version that satisfies the requirement TensorFlow (from versions: )
No matching distribution found for TensorFlow.
Then I tried installing TensorFlow from the command prompt and I got the same error message.
I did however successfully install tflearn.
I also installed Python 2.7, but I got the same error message again. I googled the error and tried some of the things which were suggested to other people, but nothing worked (this included installing Flask).
How can I install Tensorflow? Thanks. | 0 | 1 | 663,737 |
0 | 53,488,421 | 0 | 0 | 0 | 0 | 13 | false | 305 | 2018-02-10T12:35:00.000 | 42 | 23 | 0 | Could not find a version that satisfies the requirement tensorflow | 48,720,833 | 1 | python,python-3.x,python-2.7,tensorflow,pip | if you are using anaconda, python 3.7 is installed by default, so you have to downgrade it to 3.6:
conda install python=3.6
then:
pip install tensorflow
it worked for me in Ubuntu. | I installed the latest version of Python (3.6.4 64-bit) and the latest version of PyCharm (2017.3.3 64-bit). Then I installed some modules in PyCharm (Numpy, Pandas, etc), but when I tried installing Tensorflow it didn't install, and I got the error message:
Could not find a version that satisfies the requirement TensorFlow (from versions: )
No matching distribution found for TensorFlow.
Then I tried installing TensorFlow from the command prompt and I got the same error message.
I did however successfully install tflearn.
I also installed Python 2.7, but I got the same error message again. I googled the error and tried some of the things which were suggested to other people, but nothing worked (this included installing Flask).
How can I install Tensorflow? Thanks. | 0 | 1 | 663,737 |
0 | 67,496,288 | 0 | 0 | 0 | 0 | 13 | false | 305 | 2018-02-10T12:35:00.000 | 0 | 23 | 0 | Could not find a version that satisfies the requirement tensorflow | 48,720,833 | 0 | python,python-3.x,python-2.7,tensorflow,pip | This issue also happens with other libraries such as matplotlib(which doesn't support Python > 3.9 for some functions) let's just use COLAB. | I installed the latest version of Python (3.6.4 64-bit) and the latest version of PyCharm (2017.3.3 64-bit). Then I installed some modules in PyCharm (Numpy, Pandas, etc), but when I tried installing Tensorflow it didn't install, and I got the error message:
Could not find a version that satisfies the requirement TensorFlow (from versions: )
No matching distribution found for TensorFlow.
Then I tried installing TensorFlow from the command prompt and I got the same error message.
I did however successfully install tflearn.
I also installed Python 2.7, but I got the same error message again. I googled the error and tried some of the things which were suggested to other people, but nothing worked (this included installing Flask).
How can I install Tensorflow? Thanks. | 0 | 1 | 663,737 |
0 | 60,302,029 | 0 | 0 | 0 | 0 | 13 | false | 305 | 2018-02-10T12:35:00.000 | 0 | 23 | 0 | Could not find a version that satisfies the requirement tensorflow | 48,720,833 | 0 | python,python-3.x,python-2.7,tensorflow,pip | use python version 3.6 or 3.7 but the important thing is you should install the python version of 64-bit. | I installed the latest version of Python (3.6.4 64-bit) and the latest version of PyCharm (2017.3.3 64-bit). Then I installed some modules in PyCharm (Numpy, Pandas, etc), but when I tried installing Tensorflow it didn't install, and I got the error message:
Could not find a version that satisfies the requirement TensorFlow (from versions: )
No matching distribution found for TensorFlow.
Then I tried installing TensorFlow from the command prompt and I got the same error message.
I did however successfully install tflearn.
I also installed Python 2.7, but I got the same error message again. I googled the error and tried some of the things which were suggested to other people, but nothing worked (this included installing Flask).
How can I install Tensorflow? Thanks. | 0 | 1 | 663,737 |
0 | 61,057,983 | 0 | 0 | 0 | 0 | 13 | false | 305 | 2018-02-10T12:35:00.000 | -2 | 23 | 0 | Could not find a version that satisfies the requirement tensorflow | 48,720,833 | -0.01739 | python,python-3.x,python-2.7,tensorflow,pip | I solved the same problem with python 3.7 by installing one by one all the packages required
Here are the steps:
Install the package
See the error message:
couldn't find a version that satisfies the requirement -- the name of the module required
Install the module required.
Very often, installation of the required module requires the installation of another module, and another module - a couple of the others and so on.
This way I installed more than 30 packages and it helped. Now I have tensorflow of the latest version in Python 3.7 and didn't have to downgrade the kernel. | I installed the latest version of Python (3.6.4 64-bit) and the latest version of PyCharm (2017.3.3 64-bit). Then I installed some modules in PyCharm (Numpy, Pandas, etc), but when I tried installing Tensorflow it didn't install, and I got the error message:
Could not find a version that satisfies the requirement TensorFlow (from versions: )
No matching distribution found for TensorFlow.
Then I tried installing TensorFlow from the command prompt and I got the same error message.
I did however successfully install tflearn.
I also installed Python 2.7, but I got the same error message again. I googled the error and tried some of the things which were suggested to other people, but nothing worked (this included installing Flask).
How can I install Tensorflow? Thanks. | 0 | 1 | 663,737 |
0 | 62,932,939 | 0 | 0 | 0 | 0 | 13 | false | 305 | 2018-02-10T12:35:00.000 | 3 | 23 | 0 | Could not find a version that satisfies the requirement tensorflow | 48,720,833 | 0.026081 | python,python-3.x,python-2.7,tensorflow,pip | For version TensorFlow 2.2:
Make sure you have python 3.8
try:
python --version
or
python3 --version
or
py --version
Upgrade the pip of the python which has version 3.8
try:
python3 -m pip install --upgrade pip
or
python -m pip install --upgrade pip
or
py -m pip install --upgrade pip
Install TensorFlow:
try:
python3 -m pip install TensorFlow
or python -m pip install TensorFlow
or py -m pip install TensorFlow
Make sure to run the file with the correct python:
try:
python3 file.py
or python file.py
or py file.py | I installed the latest version of Python (3.6.4 64-bit) and the latest version of PyCharm (2017.3.3 64-bit). Then I installed some modules in PyCharm (Numpy, Pandas, etc), but when I tried installing Tensorflow it didn't install, and I got the error message:
Could not find a version that satisfies the requirement TensorFlow (from versions: )
No matching distribution found for TensorFlow.
Then I tried installing TensorFlow from the command prompt and I got the same error message.
I did however successfully install tflearn.
I also installed Python 2.7, but I got the same error message again. I googled the error and tried some of the things which were suggested to other people, but nothing worked (this included installing Flask).
How can I install Tensorflow? Thanks. | 0 | 1 | 663,737 |
0 | 64,305,430 | 0 | 0 | 0 | 0 | 13 | false | 305 | 2018-02-10T12:35:00.000 | 0 | 23 | 0 | Could not find a version that satisfies the requirement tensorflow | 48,720,833 | 0 | python,python-3.x,python-2.7,tensorflow,pip | In case you are using Docker, make sure you have
FROM python:x.y.z
instead of
FROM python:x.y.z-alpine. | I installed the latest version of Python (3.6.4 64-bit) and the latest version of PyCharm (2017.3.3 64-bit). Then I installed some modules in PyCharm (Numpy, Pandas, etc), but when I tried installing Tensorflow it didn't install, and I got the error message:
Could not find a version that satisfies the requirement TensorFlow (from versions: )
No matching distribution found for TensorFlow.
Then I tried installing TensorFlow from the command prompt and I got the same error message.
I did however successfully install tflearn.
I also installed Python 2.7, but I got the same error message again. I googled the error and tried some of the things which were suggested to other people, but nothing worked (this included installing Flask).
How can I install Tensorflow? Thanks. | 0 | 1 | 663,737 |
0 | 48,729,205 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-02-11T07:03:00.000 | 0 | 3 | 1 | How do I install packages for python ML on ubuntu? | 48,729,174 | 0 | python,numpy,ubuntu,matplotlib,installation | You cannot install it using apt-get; you need to install pip first. After you install pip, just google how to install different packages using pip. | I am having problems trying to install the following packages on Ubuntu:
scipy
numpy
matplotlib
pandas
sklearn
When I execute the command:
sudo apt-get install python-numpy python-scipy python-matplotlib ipython ipython-notebook python-pandas python-sympy python-nose
I get the following message:
Reading package lists... Done
Building dependency tree
Reading state information... Done
build-essential is already the newest version (12.1ubuntu2).
You might want to run 'apt-get -f install' to correct these:
The following packages have unmet dependencies.
libatlas-dev : Depends: libblas-dev but it is not going to be installed
libatlas3-base : Depends: libgfortran3 (>= 4.6) but it is not going to be installed
Depends: libblas-common but it is not going to be installed
python-dev : Depends: libpython-dev (= 2.7.11-1) but it is not going to be installed
Depends: python2.7-dev (>= 2.7.11-1~) but it is not going to be installed
python-scipy : Depends: python-decorator but it is not going to be installed
Depends: libgfortran3 (>= 4.6) but it is not going to be installed
Recommends: python-imaging but it is not going to be installed
python-setuptools : Depends: python-pkg-resources (= 20.7.0-1) but it is not going to be installed
rstudio : Depends: libjpeg62 but it is not going to be installed
Depends: libgstreamer0.10-0 but it is not going to be installed
Depends: libgstreamer-plugins-base0.10-0 but it is not going to be installed
Recommends: r-base (>= 2.11.1) but it is not going to be installed
E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
So the packages are failing to install, but I really need these packages to begin a new project, how can I successfully install these packages? | 0 | 1 | 428 |
0 | 48,739,426 | 0 | 0 | 0 | 0 | 1 | true | 5 | 2018-02-11T11:44:00.000 | 4 | 4 | 0 | How to upgrade tensorflow with GPU on google colaboratory | 48,731,124 | 1.2 | python,tensorflow,google-colaboratory | Even if you install the GPU version with !pip install tensorflow-gpu==1.5.0, it will still fail to import because of the CUDA libraries. Currently I have not found a way to use the 1.5 version with GPU, so I would rather use 1.4.1 with GPU than 1.5 without GPU.
You can send them feedback (Home - Send Feedback) and hopefully, if enough people send something similar, they will update to the new GPU version. | Currently google colaboratory uses tensorflow 1.4.1. I want to upgrade it to the 1.5.0 version. Each time I execute the !pip install --upgrade tensorflow command, the notebook instance successfully upgrades the tensorflow version to 1.5.0. But after the upgrade operation the tensorflow instance only supports "CPU".
When I execute this command, it shows nothing:
from tensorflow.python.client import device_lib
device_lib.list_local_devices()
Is there another way of upgrading tensorflow, such as upgrading to the tensorflow-gpu package? Also, when will notebooks come with an upgraded tensorflow? | 0 | 1 | 15,191
0 | 51,146,080 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-02-12T03:01:00.000 | 0 | 1 | 0 | OOM when training on GPU external server | 48,738,979 | 0 | python-2.7,tensorflow,out-of-memory,gpu | You can try using model.fit_generator instead. | I am trying to train my deep learning code using Keras with tensorflow backend on a remote server with GPU. However, even the GPU server states OOM.
This was the output:
2018-02-09 14:19:28.918619: I tensorflow/core/common_runtime/bfc_allocator.cc:685] Stats:
Limit: 10658837300
InUse: 10314885120
MaxInUse: 10349312000
NumAllocs: 8762
MaxAllocSize: 1416551936
2018-02-09 14:19:28.918672: W tensorflow/core/common_runtime/bfc_allocator.cc:277] ************__********************************************************************************xxxxxx 2018-02-09 14:19:28.918745: W tensorflow/core/framework/op_kernel.cc:1182] Resource exhausted: OOM when allocating tensor of shape [13772,13772] and type float 2018-02-09 14:19:29.294784: E tensorflow/core/common_runtime/executor.cc:643] Executor failed to create kernel. Resource exhausted: OOM when allocating tensor of shape [13772,13772] and type float [[Node: training_4/RMSprop/zeros = Constdtype=DT_FLOAT, value=Tensor, _device="/job:localhost/replica:0/task:0/device:GPU:0"]]
Would there be any ways to resolve this issue? I tried adjusting for batch size, it initially worked when batch size with 100 but when i reduced it to 50, it showed this error. Afterwhich i tried batch size 100 but it displayed this same error again.
I tried to search on how to suspend training binary while running evaluation but did not get much.
Would greatly appreciate your help in this! Thank you!! | 0 | 1 | 102 |
0 | 49,017,753 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-02-12T18:28:00.000 | 0 | 1 | 0 | Import my python module to rstudio | 48,753,128 | 0 | python,rstudio,r-markdown,python-import | First I had to make a setup.py file for my project.
Activate the virtual environment corresponding to my project with source activate, then run python setup.py develop.
Now, I can import my own python library from R as I installed it in my environment. | I have developed few modules in python and I want to import them to rstudio RMarkdown file. However, I am not sure how I can do it.
For example, I can't do from code.extract_feat.cluster_blast import fill_df_by_blast as fill_df as I am used to do it in pycharm.
Any hint?
Thanks. | 0 | 1 | 378 |
0 | 49,515,251 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-02-12T19:52:00.000 | 0 | 1 | 0 | Uploading CSV - 'utf-8' codec can't decode byte 0x92 in position 16: invalid start byte | 48,754,469 | 0 | python-3.x | Try the below:
pd.read_csv("filepath",encoding='cp1252').
This one should work as it worked for me. | I have been trying to upload a csv file using pandas .read() function. But as you can see from my title this is what I get
"'utf-8' codec can't decode byte 0x92 in position 16: invalid start byte"
And it's weird because from the same folder I was able to upload a different csv file without problems.
Something that might have caused this is that the file was previously xlsx and I manually converted it to csv?
Please Help
Python3 | 0 | 1 | 603 |
0 | 48,777,636 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-02-13T00:58:00.000 | 0 | 1 | 0 | Running tensorflow in ipython | 48,757,970 | 0 | tensorflow,ipython | I think I figured out the problem. pip was pointing to /Library/Frameworks/Python.framework/Versions/3.4/bin/pip
My ipython was pointing to /opt/local/bin/ipython
I re-installed tensorflow within my virtual environment by calling /opt/local/bin/pip-2.7 install --upgrade tensorflow
Now I can use tensorflow within ipython. | tensorflow works using python in a virtualenv I created, but tensorflow doesn't work in the same virtualenv with ipython. This is the error I get:
Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository. It's also possible that there is a mismatch between the package name in setup.cfg and the argument given to pbr.version.VersionInfo. Project name mock was given, but was not able to be found.
I have tried installing ipython within the virtual environment. This is the message I get:
Requirement already satisfied: ipython in /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages
If I try to uninstall ipython within the virtual environment. I get this message:
Not uninstalling ipython at /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages
Any ideas on how to get this to work? I don't know how to force the install of ipython to be inside the virtual environment. I've tried deleting the virtual environment and making a new one from scratch, but I get the same error. | 0 | 1 | 213 |
0 | 48,761,331 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-02-13T04:35:00.000 | 0 | 1 | 0 | What does tensorflow nonmaximum suppression function's argument "score" do to this function? | 48,759,535 | 1.2 | python,tensorflow,computer-vision | The scores argument decides the sorting order. The method tf.image.non_max_suppression goes through (greedily, so all input entries are covered) input bounding boxes in order decided by this scores argument, selects only those bounding boxes from them which are not overlapping (more than iou_threshold) with boxes already selected.
NMS first look at bottom right coordinate and sort according to it and calculate IoU...
This is not correct, can you cite any resource which made you think this way? | I read the document about the function and I understood how NMS works. What I'm not clear about is the scores argument to this function. I think NMS first look at bottom right coordinate and sort according to it and calculate IoU then discard some boxes which have IoU greater than the threshold that you set. In this theory scores argument does absolutely nothing and the document doesn't tell much about scores arguments. I want to know how the argument affects the function. Thank you. | 0 | 1 | 537
0 | 48,762,415 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-02-13T07:00:00.000 | 1 | 1 | 0 | Image Classification using Tensorflow | 48,761,144 | 1.2 | python-3.x,tensorflow,computer-vision,softmax,sigmoid | Since you're doing single label classification, softmax is the best loss function for this, as it maps your final layer logit values to a probability distribution. Sigmoid is used when it's multilabel classification.
It's always better to use a momentum based optimizer compared to vanilla gradient descent. There's a bunch of such modified optimizers like Adam or RMSProp. Experiment with them to see what works best. Adam is probably going to give you the best performance.
You can add an extra label no_class, so your task will now be a 6+1 label classification. You can feed in some random images with no_class as the label. However the distribution of your random images must match the test image distribution, else it won't generalise. | I am doing transfer-learning/retraining using Tensorflow Inception V3 model. I have 6 labels. A given image can be one single type only, i.e, no multiple class detection is needed. I have three queries:
Which activation function is best for my case? Presently retrain.py file provided by tensorflow uses softmax? What are other methods available? (like sigmoid etc)
Which Optimiser function I should use? (GradientDescent, Adam.. etc)
I want to identify out-of-scope images, i.e. if users inputs a random image, my algorithm should say that it does not belong to the described classes. Presently with 6 classes, it gives one class as a sure output but I do not want that. What are possible solutions for this?
Also, what are the other parameters that we may tweak in tensorflow. My baseline accuracy is 94% and I am looking for something close to 99%. | 0 | 1 | 124 |
0 | 48,771,960 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-02-13T13:18:00.000 | 0 | 1 | 0 | estimation of subpixel values from images in Python | 48,767,750 | 0 | python,opencv,image-processing,python-imaging-library | There are two common methods:
bilinear interpolation,
bicubic interpolation.
These evaluate an intermediate value, based on the values at four or sixteen neighboring pixels, using weighting functions based on the fractional parts of the coordinates.
Look up these expressions.
From my experience, the bilinear quality is often sufficient. | I have an image and and am transforming it with a nonlinear spatial transformation. I have a written a function that, for every pixel (i, j) in the destination image array, returns a coordinate (y, x) in the source array.
The returned coordinate is a floating point value, meaning that it corresponds to a point that lies between the pixels in the source image.
Does anyone know if there an established method in PIL or opencv to interpolate the value of this subpixel, or should I roll my own? Thanks! | 0 | 1 | 1,342 |
0 | 48,769,576 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2018-02-13T14:31:00.000 | 0 | 1 | 0 | Why would I need stacks and queues for Depth First Search? | 48,769,149 | 0 | python,search,data-structures | I realized that I misread the assignment. It said:
"Important note: Make sure to use the Stack, Queue and PriorityQueue data structures provided to you in util.py! These data structure implementations have particular properties which are required for compatibility with the autograder."
I had misread it as saying that I need to use all of them, when it is really saying that if I want to use them I should use their version. | I'm working on a project from the Berkeley AI curriculum, and they require me to use stacks, queues, and priority queues in my Depth First Graph Search implementation. I stored my fringe in a priority queue and my already visited states in a set. What am I supposed to use stacks and queues for in this assignment?
I'm not a student at Berkeley and I'm just using their curriculum for an independent study in high school and I got permission from my instructor to ask this online, so this is not a case of cheating on homework. | 0 | 1 | 86 |
0 | 62,222,676 | 0 | 0 | 0 | 0 | 1 | false | 31 | 2018-02-13T15:46:00.000 | 36 | 2 | 0 | What is the difference between save a pandas dataframe to pickle and to csv? | 48,770,542 | 1 | python,pandas,csv,pickle | csv
✅human readable
✅cross platform
⛔slower
⛔more disk space
⛔doesn't preserve types in some cases
pickle
✅fast saving/loading
✅less disk space
⛔non human readable
⛔python only
Also take a look at parquet format (to_parquet, read_parquet)
✅fast saving/loading
✅less disk space than pickle
✅supported by many platforms
⛔non human readable | I am learning python pandas.
I see a tutorial which shows two ways to save a pandas dataframe.
pd.to_csv('sub.csv') and to open pd.read_csv('sub.csv')
pd.to_pickle('sub.pkl') and to open pd.read_pickle('sub.pkl')
The tutorial says to_pickle is to save the dataframe to disk. I am confused about this. Because when I use to_csv, I did see a csv file appears in the folder, which I assume is also save to disk right?
In general, why we want to save a dataframe using to_pickle rather than save it to csv or txt or other format? | 0 | 1 | 25,209 |
0 | 48,770,832 | 0 | 0 | 0 | 0 | 1 | true | 4 | 2018-02-13T15:59:00.000 | 4 | 2 | 0 | Why linspace was named like that in numpy? | 48,770,786 | 1.2 | python,numpy | A linear space. So in other words, from a straight line over an interval we take n samples. | I'm learning python and numpy. The docstring of numpy.linspace says
Return evenly spaced numbers over a specified interval.
Returns num evenly spaced samples, calculated over the interval
[start, stop].
So I guess the "space" part of linspace means "space". But what does "lin" stand for? | 0 | 1 | 672 |
0 | 48,787,920 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-02-14T12:33:00.000 | 0 | 1 | 0 | seed=1, TensorFlor- Xavier_initializer | 48,787,340 | 0 | python,tensorflow | It's to define the random seed. By this means, the weight values are always initialized by the same values.
From Wiki: A random seed is a number (or vector) used to initialize a pseudo-random number generator. | What does seed=1 is doing in the following code:
W3 = tf.get_variable("W3", [L3, L2], initializer = tf.contrib.layers.xavier_initializer(seed=1)) | 0 | 1 | 160 |
0 | 48,790,015 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-02-14T14:20:00.000 | 0 | 1 | 0 | scipy.optimize.least_squares - limit number of jacobian evaluations | 48,789,406 | 0 | python,optimization,scipy,least-squares | According to the help of scipy.optimize.least_squares, max_nfev is the number of function evaluations before the program exits :
max_nfev : None or int, optional
Maximum number of function evaluations before the termination.
If None (default), the value is chosen automatically:
Again, according to the help, there is no MaxIterations argument but you can define the tolerance in f (ftol) that is the function you want to minimize or x (xtol) the solution, before exiting the code.
You can also use scipy.optimize.minimize(). In it, you can define a maxiter argument which will be in the options dictionary.
If you do so, beware that the function you want to minimize must be your cost function, meaning that you will have to code your least square function.
I hope it will be clear and useful to you | I am trying to use scipy.optimize.least_squares(fun= my_fun, jac=my_jac, max_nfev= 1000) with two callable functions: my_fun and my_jac
Both functions, my_fun and my_jac, use external software to evaluate their values; this task is very time consuming, therefore I prefer to control the number of evaluations for both.
The trf method uses the my_fun function for evaluating whether the trust region is adequate, and the my_jac function to determine both the cost function and the jacobian matrix.
There is an input parameter max_nfev. Does this parameter count only the fun evaluations? Does it also consider the jac evaluations?
moreover, in matlab there are two parameters for the lsqnonlin function, MaxIterations and MaxFunctionEvaluations. does it exist in scipy.optimize.least_squares?
Thanks
Alon | 0 | 1 | 675 |
0 | 49,395,441 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-02-14T20:22:00.000 | 0 | 1 | 0 | Translating entire coordinates of array to new origin | 48,795,574 | 1.2 | python,arrays,numpy,coordinates,translation | My initial question was very misleading - my apologies for the confusion.
I've since solved the problem by translating my local array (data cube) within a global array.
To accomplish this, I needed to first plot my data within a larger array (such as a Mayavi scene, which I did). Then, within this scene, I moved my data (eg. using actors in Mayavi) to be centered at the global array's origin. Pretty simple actually - the point here being that my initial question was flawed; thank you all for the help and advice. | I have a 128-length (s) array cube with unique values held at each point inside. At the center of this cube is the meat of the data (representing an object), while on the inner borders of the cube, there are mostly zero values.
I need to shift this entire array such that the meat of the data is actually at the origin (0,0) instead of at (s/2, s/2, s/2)... such that my new coordinate origin is actually at (-s/2, -s/2, -s/2). What is the best way to tackle this problem?
Edit: Sorry for the lack of data - I'm using a .mrc file. This is all to circumvent a plotting issue in mayaVI using its contour3d method. Perhaps I should be finding a way to translate my plotted object (with mayaVI) instead of translating my raw data array? But aren't these two technically the same thing? | 0 | 1 | 297 |
0 | 54,771,885 | 0 | 0 | 0 | 0 | 1 | false | 8 | 2018-02-14T20:50:00.000 | 3 | 2 | 0 | [Tensorflow][Object detection] ValueError when try to train with --num_clones=2 | 48,795,950 | 0.291313 | python,tensorflow,object-detection | You don't mention which type of model you are training - if like me you were using the default model from the TensorFlow Object Detection API example (Faster-RCNN-Inception-V2) then num_clones should equal the batch_size. I was using a GPU, however; when I went from one clone to two, I saw a similar error, and setting batch_size: 2 in the training config file was the solution. | I wanted to train on multiple CPUs so I ran this command
C:\Users\solution\Desktop\Tensorflow\research>python
object_detection/train.py --logtostderr
--pipeline_config_path=C:\Users\solution\Desktop\Tensorflow\myFolder\power_drink.config --train_dir=C:\Users\solution\Desktop\Tensorflow\research\object_detection\train
--num_clones=2 --clone_on_cpu=True
and i got the following error
Traceback (most recent call last): File "object_detection/train.py",
line 169, in
tf.app.run() File "C:\Users\solution\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\platform\app.py",
line 124, in run
_sys.exit(main(argv)) File "object_detection/train.py", line 165, in main
worker_job_name, is_chief, FLAGS.train_dir) File "C:\Users\solution\Desktop\Tensorflow\research\object_detection\trainer.py",
line 246, in train
clones = model_deploy.create_clones(deploy_config, model_fn, [input_queue]) File
"C:\Users\solution\Desktop\Tensorflow\research\slim\deployment\model_deploy.py",
line 193, in create_clones
outputs = model_fn(*args, **kwargs) File "C:\Users\solution\Desktop\Tensorflow\research\object_detection\trainer.py",
line 158, in _create_losses
train_config.merge_multiple_label_boxes) ValueError: not enough values to unpack (expected 7, got 0)
If I set num_clones to 1 or omit it, it works normally.
I also tried setting --ps_tasks=1, which doesn't help.
Any advice would be appreciated. | 0 | 1 | 2,859
0 | 48,822,919 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-02-16T08:30:00.000 | 0 | 1 | 0 | Probability for correct Image Classification in Tensorflow | 48,822,796 | 1.2 | python,image-processing,tensorflow | Single label classification is not something Neural Networks can do "off-the-shelf".
How do you train it? With only data relevant to your target domain? Your model will only learn to output one.
You have two strategies:
you use the same strategy as in the "HotDog or Not HotDog app", you put the whole imagenet in two different folders, one with the class you want, the other one containing everything else.
You use the convnet as feature extractor and then use a second model like a One-Class SVM.
You have to understand that doing one class classification is not a simple and direct problem like binary classification could be. | I am using Tensorflow retraining model for Image Classification. I am doing single label classification.
I want to set a threshold for correct classification.
In other words, if the highest probability is less than a given threshold, I can say that the image is "unknown" i.e. if np.max(results) < 0.5 -> set label as "unknown".
So, is there any industry standard to set this threshold. I can set a random value say 60%, but is there any literature to back this threshold ?
Any links or references will be very helpful.
Thanks a lot. | 0 | 1 | 288 |
0 | 48,829,716 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2018-02-16T10:24:00.000 | 2 | 1 | 0 | create environment module to work with opencv-python on hpc nodes | 48,824,675 | 1.2 | python-2.7,opencv,hpc,torque,environment-modules | The Python module uses a system library (namely libSM.so.6 : library support for the freedesktop.org version of X) that is present on the head node, but not on the compute nodes (which is not very surprising)
You can either:
ask the administrators to have that library installed systemwide on the compute nodes through the package manager ;
or locate the file on the head node (probably in /usr/lib or /usr/lib64 or siblings), and copy it in /home/trig/privatemodules/venv_python275/lib/python2.7/site-packages/cv2/, where Python should find it. If Python still does not find it, run export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/trig/privatemodules/venv_python275/lib/python2.7/site-packages/cv2/ in your Torque script after you load the module.
or you can search for the source for libSM and compile it in your home directory | I have a task to train neural networks using tensorflow and opencv-python on HPC nodes via Torque.
I have made privatemodule with python virtualenv and installed tensorflow and opencv-python modules in it.
In the node I can load my python module.
But when I try to run training script I get following error:
Traceback (most recent call last):
File "tensornetwork/train_user_ind_single_subj2.py", line 16, in <module>
from reader_user_ind_single_subj import MyData
File "/home/trig/tensornetwork/reader_user_ind_single_subj.py", line 10, in <module>
import cv2
File "/home/trig/privatemodules/venv_python275/lib/python2.7/site-packages/cv2/__init__.py", line 4, in <module>
from .cv2 import *
ImportError: libSM.so.6: cannot open shared object file: No such file or directory
The training script can run on head node, but cant on compute node.
Can you suggest how to modify my module or add a new module to make training run on compute node using Torque. | 0 | 1 | 388 |
0 | 48,833,452 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-02-16T10:55:00.000 | 0 | 1 | 0 | Clustering Customers with Python (sklearn) | 48,825,248 | 0 | python,cluster-analysis,customer | Avoid comparing Silhouettes of different projections or scalings. Internal measures tend to be too sensitive.
Do not use tSNE for clustering (Google for the discussion on stats.SE, feel free to edit the link into this answer). It will cause false separation and false adjacency; it is a visualization technique.
PCA will scale down high variance axes, and scale up low variance directions. It is to be expected that this overall decreases the quality if the main axis is what you are interested in (and it is expected to help if it is not). But if PCA visualization shows only one big blob, then a Silhouette of 0.7 should not be possible. For such a high silhouette, the clusters should be separable in the PCA view. | I work at an ecommerce company and I'm responsible for clustering our customers based on their transactional behavior. I've never worked with clustering before, so I'm having a bit of a rough time.
1st) I've gathered data on customers and I've chosen 12 variables that specify very nicely how these customers behave. Each line of the dataset represents 1 user, where the columns are the 12 features I've chosen.
2nd) I've removed some outliers and built a correlation matrix in order to check of redundant variables. Turns out some of them are highly correlated ( > 0.8 correlation)
3rd) I used sklearn's RobustScaler on all 12 variables in order to make sure the variable's variability doesn't change much (StandardScaler did a poor job with my silhouette)
4th) I ran KMeans on the dataset and got a very good result for 2 clusters (silhouette of >70%)
5th) I tried doing a PCA after scaling / before clustering to reduce my dimension from 12 to 2 and, to my surprise, my silhouette started going to 30~40% and, when I plot the datapoints, it's just a big mass at the center of the graph.
My question is:
1) What's the difference between RobustScaler and StandardScaler on sklearn? When should I use each?
2) Should I do : Raw Data -> Cleaned Data -> Normalization -> PCA/TSNE -> Clustering ? Or Should PCA come before normalization?
3) Is a 12 -> 2 dimension reduction through PCA too extreme? That might be causing the horrible silhouette score.
Thank you very much! | 0 | 1 | 214 |
0 | 61,806,979 | 0 | 0 | 0 | 0 | 1 | false | 10 | 2018-02-17T09:52:00.000 | 3 | 7 | 0 | Heroku: deploying Deep Learning model | 48,840,025 | 0.085505 | python,tensorflow,heroku,keras,deep-learning | A lot of these answers are great for reducing slug size but if anyone still has problems with deploying a deep learning model to heroku it is important to note that for whatever reason tensorflow 2.0 is ~500MB whereas earlier versions are much smaller. Using an earlier version of tensorflow can greatly reduce your slug size. | I have developed a rest API using Flask to expose a Python Keras Deep Learning model (CNN for text classification). I have a very simple script that loads the model into memory and outputs class probabilities for a given text input. The API works perfectly locally.
However, when I git push heroku master, I get Compiled slug size: 588.2M is too large (max is 500M). The model is 83MB in size, which is quite small for a Deep Learning model. Notable dependencies include Keras and its tensorflow backend.
I know that you can use GBs of RAM and disk space on Heroku. But the bottleneck seems to be the slug size. Is there a way to circumvent this? Or is Heroku just not the right tool for deploying Deep Learning models? | 1 | 1 | 7,474 |
0 | 48,840,524 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-02-17T10:21:00.000 | 0 | 3 | 0 | numpy got installed in Python3.5 but not in Python3.6 | 48,840,282 | 0 | python,python-3.5,python-3.6 | Cannot comment since I don't the rep.
If your default python is 3.5 when you check python --version, the way to go would be to find the location of the python executable for the desired version (here 3.6).
cd to that folder and then run the command given by Mike. | I have both Python 3.5 and Python 3.6 on my laptop. I am using Ubuntu 16.04. I used pip3 to install numpy. It is working with Python3.5 but not with Python3.6. Please help. | 0 | 1 | 2,139 |
0 | 64,853,941 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2018-02-18T20:57:00.000 | 0 | 2 | 0 | What does np.polyfit do and return? | 48,856,497 | 0 | python,numpy | These are essentially the beta and the alpha values for the given data.
Here beta essentially corresponds to the slope (the degree of volatility, in finance terms) and alpha to the intercept. | I went through the docs but I'm not able to interpret the result correctly
In my code, I wanted to find a line that goes through 2 points (x1,y1), (x2,y2), so I've used
np.polyfit((x1,x2),(y1,y2),1)
since it's a degree-1 polynomial (a straight line)
It returns me [ -1.04 727.2 ]
Though my code (which is a much larger file) runs properly and does what it is intended to do, I want to understand what this call is returning.
I'm assuming polyfit returns a line (curved, straight, whatever) that goes through the points given to it, so how can a line be represented by the 2 values it returns? | 0 | 1 | 6,890
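A minimal sketch of what the return value means for degree 1: np.polyfit returns the coefficients [slope, intercept], not two points (the example points below are assumptions chosen to reproduce the values in the question):

import numpy as np

x1, y1 = 0.0, 727.2
x2, y2 = 100.0, 623.2
slope, intercept = np.polyfit((x1, x2), (y1, y2), 1)
line = np.poly1d([slope, intercept])          # y = slope * x + intercept
print(slope, intercept)                       # -1.04  727.2
print(line(x1), line(x2))                     # recovers y1 and y2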
0 | 48,877,243 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-02-19T06:47:00.000 | 1 | 2 | 0 | how to manually give weight to features using python in machine learning | 48,860,824 | 0.099668 | python,regression,jupyter-notebook,decision-tree | The whole point of using machine learning is to let it decide on its own how much weight should be given to which predictor based on its importance in predicting the label correctly.
It just doesn't make any sense trying to do this on your own and then also use machine learning. | I have a data set with a continuous label ranging from one to five and nine different features. I want to give a weight to each feature manually, because some of the features have very little dependency on the label, and I want to give more weight to those features which have more dependency on the label. How can I manually give weight to each feature? Will it be possible to give weight like this?
I went through different documentations I can only find how to give weight to the label. What I only find is eliminating features ranking features etc. But I wanted to give weight to each feature manually also I wanted to tune these weights (Sometimes the feature weight will be different for different scenario so I wanted to tune the weight according to that)
Will it be possible ? | 0 | 1 | 839 |
0 | 48,877,568 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-02-19T06:47:00.000 | 0 | 2 | 0 | how to manually give weight to features using python in machine learning | 48,860,824 | 0 | python,regression,jupyter-notebook,decision-tree | Don't assign weights manually, let the model learn the weights itself.
It will automatically decide which features are more important. | I have a data set with continuous label ranging from one to five with nine different features. So I wanted to give weight to each feature manually because some of the features have very less dependency on the label so I wanted to give more weight to those features which have more dependency on the label. How can I manually give weight to each feature? Will it be possible to give weight like this?
I went through different documentations I can only find how to give weight to the label. What I only find is eliminating features ranking features etc. But I wanted to give weight to each feature manually also I wanted to tune these weights (Sometimes the feature weight will be different for different scenario so I wanted to tune the weight according to that)
Will it be possible ? | 0 | 1 | 839 |
0 | 48,870,492 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2018-02-19T10:43:00.000 | 6 | 3 | 1 | Convert hdf5 to netcdf4 in bash, R, python or NCL? | 48,864,357 | 1.2 | python,r,hdf5,netcdf4,ncl | with netcdf-c library you can: $ nccopy in.h5 out.nc | Is there a quick and simple way to convert HDF5 files to netcdf(4) from the command line in bash? Alternatively a simple script that handle such a conversion automatically in R, NCL or python ? | 0 | 1 | 5,482 |
0 | 49,065,611 | 0 | 0 | 0 | 0 | 1 | false | 8 | 2018-02-20T07:57:00.000 | 2 | 2 | 0 | Classification: skewed data within a class | 48,880,273 | 0.197375 | python,tensorflow,neural-network,keras,multilabel-classification | You're on the right track.
Usually, you would either balance your data set before training, i.e. reducing the over-represented class or generate artificial (augmented) data for the under-represented class to boost its occurrence.
Reduce over-represented class
This one is simpler, you would just randomly pick as many samples as there are in the under-represented class, discard the rest and train with the new subset. The disadvantage of course is that you're losing some learning potential, depending on how complex (how many features) your task has.
Augment data
Depending on the kind of data you're working with, you can "augment" data. That just means that you take existing samples from your data and slightly modify them and use them as additional samples. This works very well with image data, sound data. You could flip/rotate, scale, add-noise, in-/decrease brightness, scale, crop etc.
The important thing here is that you stay within bounds of what could happen in the real world. If for example you want to recognize a "70mph speed limit" sign, well, flipping it doesn't make sense, you will never encounter an actual flipped 70mph sign. If you want to recognize a flower, flipping or rotating it is permissible. Same for sound, changing volume / frequency slightly won't matter much. But reversing the audio track changes its "meaning" and you won't have to recognize backwards spoken words in the real world.
Now if you have to augment tabular data like sales data, metadata, etc... that's much trickier as you have to be careful not to implicitly feed your own assumptions into the model. | I'm trying to build a multilabel-classifier to predict the probabilities of some input data being either 0 or 1. I'm using a neural network and Tensorflow + Keras (maybe a CNN later).
The problem is the following:
The data is highly skewed. There are a lot more negative examples than positive maybe 90:10. So my neural network nearly always outputs very low probabilities for positive examples. Using binary numbers it would predict 0 in most of the cases.
The performance is > 95% for nearly all classes, but this is due to the fact that it nearly always predicts zero...
Therefore the number of false negatives is very high.
Some suggestions how to fix this?
Here are the ideas I considered so far:
Punishing false negatives more with a customized loss function (my first attempt failed). Similar to class weighting positive examples inside a class more than negative ones. This is similar to class weights but within a class.
How would you implement this in Keras?
Oversampling positive examples by cloning them and then overfitting the neural network such that positive and negative examples are balanced.
Thanks in advance! | 0 | 1 | 1,027 |
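For the "How would you implement this in Keras?" part, a hedged minimal sketch (not taken from the answer above): Keras lets you weight classes directly in fit via class_weight; the 1:9 ratio and the tiny random dataset below are assumptions matching the stated 90:10 skew.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

x_train = np.random.rand(1000, 20)
y_train = (np.random.rand(1000) > 0.9).astype(int)   # roughly 10% positives

model = Sequential([Dense(64, activation='relu', input_dim=20),
                    Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# errors on class 1 (the rare positives) now cost 9x as much as errors on class 0
model.fit(x_train, y_train, epochs=10, batch_size=32,
          class_weight={0: 1.0, 1: 9.0})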
0 | 48,882,154 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-02-20T09:41:00.000 | 0 | 2 | 0 | import tensorflow with python 2.7.6 | 48,882,088 | 0 | python,tensorflow | Your computer seems to be incompatible with the library tensorflow.
Your computer needs to be able to execute FMA instructions, but it can't. | The Python terminal aborts with the following message:
/grid/common//pkgs/python/v2.7.6/bin/python
Python 2.7.6 (default, Jan 17 2014, 04:05:53)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-48)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
import tensorflow as tf
2018-02-20 01:40:11.268134: F tensorflow/core/platform/cpu_feature_guard.cc:36] The TensorFlow library was compiled to use FMA instructions, but these aren't available on your machine.
Abort | 0 | 1 | 230 |
0 | 48,882,437 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-02-20T09:41:00.000 | 0 | 2 | 0 | import tensorflow with python 2.7.6 | 48,882,088 | 0 | python,tensorflow | You need to compile TensorFlow on the same computer. | Python terminal getting abort with following msg:
/grid/common//pkgs/python/v2.7.6/bin/python
Python 2.7.6 (default, Jan 17 2014, 04:05:53)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-48)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
import tensorflow as tf
2018-02-20 01:40:11.268134: F tensorflow/core/platform/cpu_feature_guard.cc:36] The TensorFlow library was compiled to use FMA instructions, but these aren't available on your machine.
Abort | 0 | 1 | 230 |
0 | 48,884,112 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-02-20T11:08:00.000 | 0 | 1 | 0 | pytesseract - Read text from images with more accuracy | 48,883,888 | 0 | opencv,python-tesseract | Localize your detection by setting the rectangles where Tesseract has to look. You can then restrict according to rectangle which type of data is present at that place example: numerical,alphabets etc.You can also make a dictionary file for tesseract to improve accuracy(This can be used for detecting card holder name by listing common names in a file). If there is disturbance in the background then design a filter to remove it. Good Luck! | I am working on pytesseract. I want to read data from Driving License kind of thing. Presently i am converting .jpg image to binary(gray scale) format using opencv but i am not accurate result. How do you solve this? Is there any standard size of image? | 0 | 1 | 424 |
1 | 50,014,778 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-02-20T16:56:00.000 | 0 | 3 | 0 | detecting when the camera view is blocked (black frame) | 48,890,390 | 0 | python,opencv,camera,background-subtraction,opencv-contour | A possible cause for this error could be mild jitters in the frame that occur due to mild shaking of the camera
If your background subtraction algorithm isn't tolerant enough to low-value colour changes, then a tamper alert will be triggered even if you shake the camera a bit.
I would suggest using MOG2 for background subtraction | I'm trying to detect camera tampering (lens being blocked, resulting in a black frame). The approach I have taken so far is to apply background subtraction and then finding contours post thresholding the foreground mask. Next, I find the area of each contour and if the contour area is higher than a threshold value (say, larger than 3/4th of the camera frame area), then the camera is said to be tampered.
However, when trying this approach I get false tamper alerts even when the camera has a full, unobstructed view.
I am not sure how to go about this detection.
Any help shall be highly appreciated | 0 | 1 | 1,009 |
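A minimal sketch of the MOG2 suggestion above combined with the contour-area check from the question; the 0.75 area fraction, the subtractor parameters, and the camera index are assumptions.

import cv2

cap = cv2.VideoCapture(0)
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)[1]
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    frame_area = frame.shape[0] * frame.shape[1]
    if any(cv2.contourArea(c) > 0.75 * frame_area for c in contours):
        print("possible tamper")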
0 | 64,678,289 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2018-02-20T18:02:00.000 | -1 | 3 | 0 | One-hot-encoding with missing categories | 48,891,538 | -0.066568 | python,scikit-learn,one-hot-encoding | Basically first we need to apply fit_transform for the base data and next apply transform for the sample data, so sample data also will get the exact no.of columns w.r.t base data. | I have a dataset with a category column. In order to use linear regression, I 1-hot encode this column.
My set has 10 columns, including the category column. After dropping that column and appending the 1-hot encoded matrix, I end up with 14 columns (10 - 1 + 5).
So I train (fit) my LinearRegression model with a matrix of shape (n, 14).
After training it, I want to test it on a subset of the training set, so I take only the 5 first and put them through the same pipeline. But these 5 first only contain 3 of the categories. So after going through the pipeline, I'm only left with a matrix of shape (n, 13) because it's missing 2 categories.
How can I force the 1-hot encoder to use the 5 categories ?
I'm using LabelBinarizer from sklearn. | 0 | 1 | 2,335 |
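A minimal sketch of the fit-once-then-transform idea from the answer above, using LabelBinarizer; the category names are placeholders for the 5 real categories.

from sklearn.preprocessing import LabelBinarizer

all_categories = ['a', 'b', 'c', 'd', 'e']      # assumed names for the 5 categories
lb = LabelBinarizer()
lb.fit(all_categories)                          # fit on the full category set once

train_part = ['a', 'b', 'c', 'd', 'e', 'a']
test_part = ['a', 'b', 'c']                     # only 3 categories present
print(lb.transform(train_part).shape)           # (6, 5)
print(lb.transform(test_part).shape)            # (3, 5), still 5 columns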
0 | 49,717,503 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2018-02-20T23:31:00.000 | 0 | 1 | 0 | How to add directory to a python running inside virtualenv | 48,895,898 | 1.2 | python-3.x,tensorflow,virtualenv,python-3.4,virtualenvwrapper | I had faced a similar issue for the same hardware. If i am guessing right and you are following the same set of install instructions , install the. Whl for tensorflow without using sudo as using the sudo even from inside the virtual environment installs it in the place as seen by the root directory and not inside the vital environment. | I have installed tensorflow and opencv on odroid xu4. Tensorflow was installed using a .whl file for raspberry pi and it built successfully. Opencv was built successfully inside virtualenv environment. I can import opencv as import cv2 from inside virtual environment for python but not tensorflow. Tensorflow is getting imported from outside virtual environment even though .whl file for the same was run from inside the virtual environment. I have researched a lot regarding this and couldn't figure out a solution to make tensorflow work from inside virtualenv.
These are the things i know.
1) I know from where python3 is importing tensorflow when run outside the virtualenv
2) I know from where python3 is accessing all the packages from inside the virtualenv.
3) I am unable to import tensorflow from python inside the virtualenv
4) virtualenv was configured for python3.
5) importing OpenCV works fine from inside virtualenv.
Can someone please suggest how to make python3, when run inside the virtualenv, also look in the tensorflow directory that I know of? | 0 | 1 | 255
0 | 48,897,354 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-02-21T02:33:00.000 | 1 | 2 | 0 | Convert numpy array of a image into blocks | 48,897,331 | 0.099668 | python,numpy | Please provide your array structure.
You can use img_array.reshape(8, 8); for this to work the total number of elements must be 64. | I have a numpy array of an image. I want to convert this image into 8*8 blocks using Python. How should I do this? | 0 | 1 | 778
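A minimal sketch of splitting an image into non-overlapping 8x8 blocks (this assumes the height and width are multiples of 8; the 64x64 array is a placeholder):

import numpy as np

img = np.arange(64 * 64).reshape(64, 64)
h, w = img.shape
blocks = img.reshape(h // 8, 8, w // 8, 8).swapaxes(1, 2)
print(blocks.shape)      # (h//8, w//8, 8, 8): one 8x8 block per grid cell
print(blocks[0, 0])      # the top-left 8x8 block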
0 | 51,193,611 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-02-21T04:04:00.000 | 0 | 1 | 0 | catboost cv producing log files | 48,897,988 | 0 | python,catboost | Try setting training the parameter allow_writing_files to False. | A number of TSV files and json files are being created when I used the cross validation CV object. I cannot find any way to prevent CV from not producing these in the documentation and end up deleting them manually. These files are obviously coming from CV (I have checked) and are named after the folds or general results such as time remaining and test scores.
Anyone know of the argument to set to turn it off? | 0 | 1 | 317 |
0 | 48,899,306 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-02-21T06:13:00.000 | 1 | 1 | 0 | How to build a decoder using dynamic rnn in Tensorflow? | 48,899,234 | 1.2 | python,tensorflow,recurrent-neural-network,sequence-to-sequence,encoder-decoder | If for example, you are using Tensorflow's attention_decoder method, pass a parameter "loop_function" to your decoder. Google search for "extract_argmax_and_embed", that is your loop function. | I know how to build an encoder using dynamic rnn in Tensorflow, but my question is how can we use it for decoder?
Because in the decoder, at each time step we should feed in the prediction from the previous time step.
Thanks in advance! | 0 | 1 | 342 |
1 | 48,952,393 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-02-21T16:58:00.000 | 0 | 1 | 0 | PyOpenGL how to rotate a scene with the mouse | 48,911,436 | 1.2 | python,pygame,blender,pyopengl | Ok I think i have found what you should do
just for the people that have trouble with this like I did this is the way you should do it:
to rotate around a cube with the camera in opengl:
your x mouse value has to be added to the z rotator of your scene
and the cosine of your y mouse value has to be added to the x rotator
and then the sine of your y mouse value has to be subtracted from your y rotator
that should do it | I am trying to create a simple scene in 3d (in python) where you have a cube in front of you, and you are able to rotate it around with the mouse.
I understand that you should rotate the complete scene to mimic camera movement, but I can't figure out how to do this.
Just to clarify I want the camera (or scene) to move a bit like blender (the program).
Thanks in advance | 0 | 1 | 553 |
0 | 48,920,286 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2018-02-21T17:53:00.000 | 5 | 2 | 0 | Keras - how to set weights to a single layer | 48,912,449 | 1.2 | python,keras | Keras expects the layer weights to be a list of length 2. First element is the kernel weights and the second is the bias.
You can always call get_weights() on the layer to see the shapes of that layer's weights; set_weights() expects exactly the same structure. | I'm trying to set the weights of a hidden layer.
I'm assuming layers[0] is the inputs, and I want to set the weights of the first hidden layer so set the index to 1.
model.layers[1].set_weights(weights)
However, when I try this I get an error:
ValueError: You called `set_weights(weights)` on layer "dense_64" with a weight list of length 100, but the layer was expecting 2 weights. Provided weights: [ 1.0544554 1.27627635 1.05261064 1.10864937 ...
The hidden layer has 100 nodes.
As it is telling me that it expects two weights, is one the weight and one the bias? | 0 | 1 | 8,132 |
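A minimal sketch of what the answer describes: inspect the [kernel, bias] pair with get_weights() and pass a list of the same two shapes to set_weights(). The layer sizes here (input dim 30, 100 hidden units) are assumptions.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([Dense(100, activation='relu', input_dim=30),
                    Dense(1, activation='sigmoid')])

kernel, bias = model.layers[0].get_weights()
print(kernel.shape, bias.shape)                 # (30, 100) and (100,)

new_kernel = np.random.rand(30, 100)
new_bias = np.zeros(100)
model.layers[0].set_weights([new_kernel, new_bias])   # a list of length 2, not 100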
0 | 60,953,415 | 0 | 0 | 0 | 0 | 2 | false | 13 | 2018-02-22T10:29:00.000 | 0 | 4 | 0 | Choosing subset of farthest points in given set of points | 48,925,086 | 0 | python,algorithm,computational-geometry,dimensionality-reduction,multi-dimensional-scaling | Find the maximum extent of all points. Split into 7x7x7 voxels. For all points in a voxel find the point closest to its centre. Return these 7x7x7 points. Some voxels may contain no points, hopefully not too many. | Imagine you are given set S of n points in 3 dimensions. Distance between any 2 points is simple Euclidean distance. You want to chose subset Q of k points from this set such that they are farthest from each other. In other words there is no other subset Q’ of k points exists such that min of all pair wise distances in Q is less than that in Q’.
If n is approximately 16 million and k is about 300, how do we efficiently do this?
My guess is that this is NP-hard, so maybe we just want to focus on approximation. One idea I can think of is using multidimensional scaling to sort these points along a line and then use a version of binary search to get points that are furthest apart on this line. | 0 | 1 | 3,420
0 | 48,925,457 | 0 | 0 | 0 | 0 | 2 | false | 13 | 2018-02-22T10:29:00.000 | 1 | 4 | 0 | Choosing subset of farthest points in given set of points | 48,925,086 | 0.049958 | python,algorithm,computational-geometry,dimensionality-reduction,multi-dimensional-scaling | If you can afford to do ~ k*n distance calculations then you could
Find the center of the distribution of points.
Select the point furthest from the center (and remove it from the set of un-selected points).
Find the point furthest from all the currently selected points and select it.
Repeat step 3 until you end up with k points. | Imagine you are given a set S of n points in 3 dimensions. The distance between any 2 points is the simple Euclidean distance. You want to choose a subset Q of k points from this set such that they are farthest from each other. In other words, there is no other subset Q' of k points such that the minimum of all pairwise distances in Q is less than that in Q'.
If n is approximately 16 million and k is about 300, how do we efficiently do this?
My guess is that this is NP-hard, so maybe we just want to focus on approximation. One idea I can think of is using multidimensional scaling to sort these points along a line and then use a version of binary search to get points that are furthest apart on this line. | 0 | 1 | 3,420
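A minimal numpy sketch of the greedy scheme described in the answer above (farthest-point sampling); the random point cloud is a placeholder for the real data.

import numpy as np

def farthest_points(points, k):
    center = points.mean(axis=0)
    # start with the point farthest from the center
    idx = [int(np.argmax(np.linalg.norm(points - center, axis=1)))]
    min_dist = np.linalg.norm(points - points[idx[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(min_dist))           # farthest from all selected so far
        idx.append(nxt)
        min_dist = np.minimum(min_dist, np.linalg.norm(points - points[nxt], axis=1))
    return points[idx]

pts = np.random.rand(10000, 3)
print(farthest_points(pts, 300).shape)           # (300, 3)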
0 | 48,930,465 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-02-22T14:47:00.000 | 2 | 1 | 0 | Normalize 2D array given mean and std value | 48,930,303 | 0.379949 | python-3.x,numpy,scikit-learn,normalization | Normalization is: (X - Mean) / Deviation
So do just that: (2d_data - mean) / std | I have a dataset called 2d_data which has a dimension of (44500, 224, 224), where 44500 is the number of samples.
I would like to normalize this data set using the following mean and std values:
mean=0.485 and std=0.229
How can I do that?
Thank you | 0 | 1 | 716 |
0 | 48,954,577 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-02-22T18:31:00.000 | 1 | 1 | 0 | Converting NumPy floats to ints without loss of precision | 48,934,830 | 1.2 | python,numpy,opencv | "Part of our algorithm involves running a convex hull on some of the points in this space, but cv2.convexHull() requires an ndarray with dtype = int."
cv2.convexHull() also accepts a numpy array of float32 numbers.
Try using cv2.convexHull(numpy.array(a,dtype = 'float32')) where a is a list of dimension n*2 (n = no. of points). | I am working on a vision algorithm with OpenCV in Python. One of the components of it requires comparing points in color-space, where the x and y components are not integers. Our list of points is stored as ndarray with dtype = float64, and our numbers range from -10 to 10 give or take.
Part of our algorithm involves running a convex hull on some of the points in this space, but cv2.convexHull() requires an ndarray with dtype = int.
Given the narrow range of the values we are comparing, simple truncation causes us to lose ~60 bits of information. Is there any way to have numpy directly interpret the float array as an int array? Since the scale has no significance, I would like all 64 bits to be considered.
Is there any defined way to separate the exponent from the mantissa in a numpy float, without doing bitwise extraction for every element? | 0 | 1 | 563 |
0 | 48,936,596 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-02-22T20:22:00.000 | 1 | 3 | 0 | gradient boosting- features contribution | 48,936,542 | 0.066568 | python,scikit-learn | Use the feature_importances_ property. Very easy. | Is there a way in python by which I can get contribution of each feature in probability predicted by my gradient boosting classification model for each test observation. Can anyone give actual mathematics behind probability prediction in gradient boosting classification model and how can it be implemented in Python. | 0 | 1 | 1,153 |
0 | 48,942,976 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-02-23T07:09:00.000 | 0 | 1 | 0 | Calculate confidence score of document | 48,942,865 | 0 | python,machine-learning,deep-learning | How about adding up/taking the mean of your title scores(since they'd be on the same scale) and content scores for all the methods so now you'll have a single title score and single content score.
To get a single score for a document, you'll have to combine the title and content scores. To do that, you can take a weighted average (you'll have to decide the weights) or you can multiply these scores to get a single metric, although these may not be close to zero or one, as your requirement asks.
As an alternate method, you can create a dataset with the added/averaged up title scores and content scores and manually create the confidence score column with zeros and ones. Using this data you can build a logistic regression model to classify your documents with confidence scores of zeros and ones. This will give you the weights as well and more insight to what you are actually looking for | Using different methods, I am scoring documents & it's title. Now I want to aggregate all these scores into single score(confidence score). I want to use unsupervised method. I want confidence score in terms of probability or percentage.
Here , M= Method No, TS = document title score, CS = document content score
eg 1
Doc1 (expected confidence score close to 0)
M - TS - CS
1 - 0.03 - 0.004
2 - 0.054 - 0.06
3 - 0.09 - 0.12
Doc2 (expected confidence score close to 1)
M - TS - CS
1 - 0.50 - 0.63
2 - 0.74 - 0.90
3 - 0.615 - 0.833
Here my hypothesis is that the confidence score should be close to zero for document 1 and close to 1 for document 2.
It is also possible that all Documents will have lower scores for all the methods(eg 2), so the confidence scores should be close to zero for all documents.
eg.2
Doc1 (expected confidence score close to 0)
M - TS - CS
1 - 0.03 - 0.004
2 - 0.054 - 0.06
3 - 0.09 - 0.12
Doc2 (expected confidence score close to 0)
M - TS - CS
1 - 0.001 - 0.003
2 - 0.004 - 0.005
3 - 0.0021 - 0.013
Can anyone explain to me, or point me to some resource on, how to calculate such a confidence score? | 0 | 1 | 189
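A minimal sketch of the logistic-regression idea from the answer above; the tiny training table (mean TS and CS per document) and the manually assigned 0/1 labels are assumptions for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.058, 0.061],    # document expected near 0
              [0.618, 0.788],    # document expected near 1
              [0.002, 0.007]])   # document expected near 0
y = np.array([0, 1, 0])          # manually created confidence labels

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba(X)[:, 1])   # probability of class 1 used as the confidence score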
0 | 55,395,113 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-02-24T10:31:00.000 | 0 | 1 | 0 | N grams for Sentiment Analysis | 48,961,822 | 1.2 | python,nltk,sentiment-analysis,n-gram | Use textblob package. It offers a simple API to access its methods and perform basic NLP tasks. NLP is natural language processing. Which process your text by tokenization, noun extract, lemmatization, words inflection, NGRAMS etc. There also some other packages like spacy, nltk. But textblob will be better for beginners. | I am doing sentiment analysis on reviews of products from various retailers. I was wondering if there was an API that used n grams for sentiment analysis to classify a review as a positive or negative. I have a CSV file filled with reviews which I would like to run it in python and hence would like an API or a package rather than a tool.
Any direction towards this would be great.
Thanks | 0 | 1 | 469 |
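A minimal sketch using the TextBlob package suggested above to label a CSV of reviews; the file name, the 'review' column, and the zero polarity threshold are assumptions.

import pandas as pd
from textblob import TextBlob

reviews = pd.read_csv('reviews.csv')             # assumes a 'review' text column
polarity = reviews['review'].apply(lambda text: TextBlob(text).sentiment.polarity)
reviews['label'] = (polarity > 0).map({True: 'positive', False: 'negative'})
print(reviews[['review', 'label']].head())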
0 | 48,970,082 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2018-02-24T13:11:00.000 | -1 | 1 | 0 | using NLTK to find the related verbs to a specific noun | 48,963,243 | -0.197375 | python,nlp,nltk | Given a corpus of documents, you can apply part of speech tagging to get verb roots, nouns and mapping of those nouns to those verb roots. From there you should be able to deduce the most common 'relations' an 'entity' expresses, although you may want to describe your relations as something that occurs between two different entity types, and harvest more relations than just noun/verb root.
Just re-read this answer and there is definitely a better way to approach this, although not with NLTK. You should take a look at fasttext or another language vectorization library and then use euclidean distance or cosine similarity to find the words closest to University, and then filter by part of speech (verb in this case). | Is there any way to find the related verbs to a specific noun by using NLTK. For example for the word "University" I'd like to have the verbs "study" and "graduate" as an output. I mainly need this feature for relation extraction among some given entities. | 0 | 1 | 277 |
0 | 48,977,717 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-02-25T13:17:00.000 | 0 | 4 | 0 | Installing tensorflow on GPU | 48,973,883 | 0 | python,tensorflow,gpu | First of all, if you want to see a performance gain, you should have a better GPU, and second of all, Tensorflow uses CUDA, which is only for NVidia GPUs which have CUDA Capability of 3.0 or higher. I recommend you use some cloud service such as AWS or Google Cloud if you really want to do deep learning. | I've installed tensorflow CPU version. I'm using Windows 10 and I have AMD Radeon 8600M as my GPU. Can I install GPU version of tensorflow now? Will there be any problem? If not, where can I get instructions to install GPU version? | 0 | 1 | 691 |
0 | 48,974,256 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-02-25T13:17:00.000 | -1 | 4 | 0 | Installing tensorflow on GPU | 48,973,883 | -0.049958 | python,tensorflow,gpu | It depends on your graphic card, it has to be nvidia, and you have to install cuda version corresponding on your system and SO. Then, you have install cuDNN corresponding on the CUDA version you had installed
Steps:
Install NVIDIA 367 driver
Install CUDA 8.0
Install cuDNN 5.0
Reboot
Install tensorflow from source with bazel using the above configuration | I've installed tensorflow CPU version. I'm using Windows 10 and I have AMD Radeon 8600M as my GPU. Can I install GPU version of tensorflow now? Will there be any problem? If not, where can I get instructions to install GPU version? | 0 | 1 | 691 |
0 | 54,447,128 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-02-26T03:28:00.000 | 0 | 2 | 0 | what is the difference between tf.nn.convolution and tf.nn.conv2d? | 48,981,022 | 0 | python,tensorflow,machine-learning,neural-network,deep-learning | Functionally, dilations augument in tf.nn.conv2d is the same as dilations_rate in tf.nn.convolution as well as rate in tf.nn.atrous_conv2d.
They all represent the rate by which we upsample the filter values by inserting zeros across the height and width dimensions; it is the dilation factor for each input dimension, specifying the filter upsampling / input downsampling rate, otherwise known as atrous convolution.
The usage differs slightly.
Let rate k >= 1 represent the dilation rate,
in tf.nn.conv2d, the rate k is passed as list of ints [1, k, k,1] for [batch, rate_height, rate_width, channel].
in tf.nn.convolution, rate k is passed as a sequence of N ints as [k,k] for [rate_height, rate_width].
in tf.nn.atrous_conv2d, rate k is a positive int32, a single value for both height and width. This library is deprecated and exists only for backwards compatibility.
Hope it helps :) | I want to make dilated convolution on a feature. In tensorflow I found tf.nn.convolution and tf.nn.conv2d. But tf.nn.conv2d doesn't seem to support dilated convolution.
So I tried using tf.nn.convolution.
Do the 2 formulations below give the same result?
tf.nn.conv2d(x, w, strides=[1, 1, 2, 2], padding='SAME',data_format='NCHW')
tf.nn.convolution(x, w, strides=[1, 1, 2, 2], padding='SAME',data_format='NCHW') | 0 | 1 | 2,014 |
0 | 49,021,501 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-02-27T10:06:00.000 | 0 | 1 | 0 | find anomalies in records of categorical data | 49,006,013 | 0 | python,machine-learning,statistics,data-science | Take a look on nearest neighborhoods method and cluster analysis. Metric can be simple (like squared error) or even custom (with predefined weights for each category).
Nearest neighborhoods will answer the question 'how different is the current row from the other row' and cluster analysis will answer the question 'is it outlier or not'. Also some visualization may help (T-SNE). | I have a dataset with m observations and p categorical variables (nominal), each variable X1,X2...Xp has several different classes (possible values). Ultimately I am looking for a way to find anomalies i.e to identify rows for which the combination of values seems incorrect with respect to the data I saw so far. So far I was thinking about building a model to predict the value for each column and then build some metric to evaluate how different the actual row is from the predicted row. I would greatly appreciate any help! | 0 | 1 | 826 |
0 | 49,011,315 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-02-27T14:40:00.000 | 0 | 1 | 0 | ImportError: numpy.core.multiarray failed to import on windows | 49,011,268 | 0 | python,numpy | Numpy 1.8.1 is very out of date - you should upgrade to the latest version (1.14.1 as of writing) and that error will be resolved.
Out of interest, I've seen this question asked before - are you following a guide that is out of date or something? | I am using python 2.7 on windows 10 . I installed numpy-1.8.1-win32-superpack-python2.7 and extracted opencv-3.4.0-vc14_vc15.
I copied cv2.pyd from opencv\build\python\2.7\x86 and pasted to C:\Python27\Lib\site-packages.
I could import numpy without any error. While I run import cv2 it gives an error like
RuntimeError: module compiled against API version 0xa but this version of numpy is 0x9
Traceback (most recent call last):
File "", line 1, in
import cv2
ImportError: numpy.core.multiarray failed to import. | 0 | 1 | 4,894 |
0 | 49,017,149 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-02-27T20:06:00.000 | 1 | 5 | 0 | What is the best approach to let C# and Python communicate for this machine learning task? | 49,017,084 | 1.2 | c#,python,unity3d,tensorflow,machine-learning | You have a few options:
Subprocess
You can open the python script via the Unity's C# then send stdout and stdin data to and from the process. In the Python side it's as simple as input() and print(), and in the C# side it's basically reading and writing from a Stream object (as far as I remember)
UDP/TCP sockets
You can make your Python a UDP/TCP server (preferrably UDP if you have to transfer a lot of data, and it might be simpler to code). Then you create a C# client and send requests to the Python server. The python server will do the processing (AI magic, yayy!) then return the results to the Unity's C#. In C# you'd have to research the UdpClient class, and in Python, the socket module. | I'm developing a simple game for a university project using Unity. This game makes use of machine learning, so I need TensorFlow in order to build a Neural Network (NN) to accomplish certain actions in the game depending on the prediction of the NN.
In particular my learning approach is reinforcement learning. I need to monitor the states and the rewards in the environment [coded in C#], and pass them to the NN [coded in Python]. Then the prediction [from Python code] should be sent back to the environment [to C# code].
Sadly I'm quite confused on how to let C# and Python communicate. I'm reading a lot online but nothing really helped me. Can anybody clear my ideas? Thank you. | 0 | 1 | 2,512 |
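A minimal sketch of the UDP option described in the answer above: a small Python server that Unity's C# UdpClient could send requests to. The port and the "prediction" logic are placeholders, not part of the original answer.

import socket

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(('127.0.0.1', 5005))

while True:
    data, addr = server.recvfrom(4096)            # state/reward sent from the C# client
    state = data.decode('utf-8')
    prediction = str(len(state))                  # placeholder for the real NN prediction
    server.sendto(prediction.encode('utf-8'), addr)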
0 | 50,096,071 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2018-02-27T22:22:00.000 | 4 | 1 | 1 | TypeError: can't pickle memoryview objects when running basic add.delay(1,2) test | 49,018,923 | 0.664037 | python-3.x,celery,typeerror,pickle,memoryview | After uninstalling librabbitmq, the problem was resolved. | Trying to run the most basic test of add.delay(1,2) using celery 4.1.0 with Python 3.6.4 and getting the following error:
[2018-02-27 13:58:50,194: INFO/MainProcess] Received task: exb.tasks.test_tasks.add[52c3fb33-ce00-4165-ad18-15026eca55e9]
[2018-02-27 13:58:50,194: CRITICAL/MainProcess] Unrecoverable error: SystemError(' returned a result with an error set',)
Traceback (most recent call last):
  File "/opt/myapp/lib/python3.6/site-packages/kombu/messaging.py", line 624, in _receive_callback
    return on_m(message) if on_m else self.receive(decoded, message)
  File "/opt/myapp/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 570, in on_task_received
    callbacks,
  File "/opt/myapp/lib/python3.6/site-packages/celery/worker/strategy.py", line 145, in task_message_handler
    handle(req)
  File "/opt/myapp/lib/python3.6/site-packages/celery/worker/worker.py", line 221, in _process_task_sem
    return self._quick_acquire(self._process_task, req)
  File "/opt/myapp/lib/python3.6/site-packages/kombu/async/semaphore.py", line 62, in acquire
    callback(*partial_args, **partial_kwargs)
  File "/opt/myapp/lib/python3.6/site-packages/celery/worker/worker.py", line 226, in _process_task
    req.execute_using_pool(self.pool)
  File "/opt/myapp/lib/python3.6/site-packages/celery/worker/request.py", line 531, in execute_using_pool
    correlation_id=task_id,
  File "/opt/myapp/lib/python3.6/site-packages/celery/concurrency/base.py", line 155, in apply_async
    **options)
  File "/opt/myapp/lib/python3.6/site-packages/billiard/pool.py", line 1486, in apply_async
    self._quick_put((TASK, (result._job, None, func, args, kwds)))
  File "/opt/myapp/lib/python3.6/site-packages/celery/concurrency/asynpool.py", line 813, in send_job
    body = dumps(tup, protocol=protocol)
TypeError: can't pickle memoryview objects

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/myapp/lib/python3.6/site-packages/celery/worker/worker.py", line 203, in start
    self.blueprint.start(self)
  File "/opt/myapp/lib/python3.6/site-packages/celery/bootsteps.py", line 119, in start
    step.start(parent)
  File "/opt/myapp/lib/python3.6/site-packages/celery/bootsteps.py", line 370, in start
    return self.obj.start()
  File "/opt/myapp/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 320, in start
    blueprint.start(self)
  File "/opt/myapp/lib/python3.6/site-packages/celery/bootsteps.py", line 119, in start
    step.start(parent)
  File "/opt/myapp/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 596, in start
    c.loop(*c.loop_args())
  File "/opt/myapp/lib/python3.6/site-packages/celery/worker/loops.py", line 88, in asynloop
    next(loop)
  File "/opt/myapp/lib/python3.6/site-packages/kombu/async/hub.py", line 354, in create_loop
    cb(*cbargs)
  File "/opt/myapp/lib/python3.6/site-packages/kombu/transport/base.py", line 236, in on_readable
    reader(loop)
  File "/opt/myapp/lib/python3.6/site-packages/kombu/transport/base.py", line 218, in _read
    drain_events(timeout=0)
  File "/opt/myapp/lib/python3.6/site-packages/librabbitmq-2.0.0-py3.6-linux-x86_64.egg/librabbitmq/__init__.py", line 227, in drain_events
    self._basic_recv(timeout)
SystemError: returned a result with an error set
I cannot find any previous evidence of anyone hitting this error. I noticed from the celery site that only python 3.5 is mentioned as supported, is that the issue or is this something I am missing?
Any help would be much appreciated!
UPDATE: Tried with Python 3.5.5 and the problem persists. Tried with Django 4.0.2 and the problem persists.
UPDATE: Uninstalled librabbitmq and the problem stopped. This was seen after migration from Python 2.7.5, Django 1.7.7 to Python 3.6.4, Django 2.0.2. | 0 | 1 | 3,865 |
0 | 49,023,961 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-02-28T06:35:00.000 | 1 | 1 | 0 | set dimension of svd algorithm in python | 49,023,337 | 1.2 | python,numpy,scipy,svd | If A is a 3 x 5 matrix then it has rank at most 3. Therefore the SVD of A contains at most 3 singular values. Note that in your example above, the singular values are stored as a vector instead of a diagonal matrix. Trivially this means that you can pad your matrices with zeroes at the bottom. Since the full S matrix contains of 3 values on the diagonal followed by the rest 0's (in your case it would be 64x64 with 3 nonzero values), the bottom rows of V and the right rows of U don't interact at all and can be set to anything you want.
Keep in mind that this isn't the SVD of A anymore, but instead the condensed SVD of the matrix augmented with a lot of 0's. | SVD formula: A ≈ UΣV*
I use numpy.linalg.svd to run the SVD algorithm.
And I want to set the dimensions of the matrices.
For example: A=3*5 dimension, after running numpy.linalg.svd, U=3*3 dimension, Σ=3*1 dimension, V*=5*5 dimension.
I need to set specific dimensions like U=3*64 and V*=64*5. But it seems there is no optional dimension parameter that can be set in numpy.linalg.svd. | 0 | 1 | 646
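A minimal sketch of the zero-padding idea from the answer above, embedding the SVD of a 3x5 matrix into 3x64 / 64x5 factors; the random A is a placeholder.

import numpy as np

A = np.random.rand(3, 5)
U, s, Vt = np.linalg.svd(A, full_matrices=False)    # U: (3, 3), s: (3,), Vt: (3, 5)

U_big = np.zeros((3, 64));  U_big[:, :3] = U
S_big = np.zeros((64, 64)); S_big[:3, :3] = np.diag(s)
Vt_big = np.zeros((64, 5)); Vt_big[:3, :] = Vt

print(np.allclose(A, U_big.dot(S_big).dot(Vt_big)))  # True: the padded factors still reproduce A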
0 | 49,028,439 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-02-28T11:00:00.000 | -1 | 1 | 0 | Keep sklearnt model in memory to speed up prediction | 49,027,972 | -0.197375 | python,python-2.7,scikit-learn | Can't you store only the parameters of your SVM classifier with clf.get_params() instead of the whole object? | I have trained a SVM model with sklearn, I need to connect this to php. To do this I am using exec command to call in the console the python script, where I load the model with pickle and predict the results. The problem is that loading the model with pickle takes some time (a couple of seconds) and I would like it to be faster. Is there a way of having this model in memory so I don't need to load with pickle every time? | 0 | 1 | 186 |
0 | 51,527,999 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-02-28T20:37:00.000 | 2 | 1 | 0 | How to find markov blanket for a node? | 49,038,111 | 0.379949 | python,weka,markov,rweka,markov-models | Find all parents of the node
Find all children of the node
Find all parents of the children of the node
Together these give you the Markov blanket of the given node. | I want to do feature selection using the Markov blanket algorithm. I am wondering whether there is any API in Java/Weka or in Python to find the Markov blanket.
Consider that I have a dataset. The dataset has a number of variables and one target variable. I want to find the Markov blanket of the target variable.
Any information would be appreciated | 0 | 1 | 1,298 |
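A minimal sketch of the three steps listed above, assuming the network structure is held in a networkx DiGraph; the example edges are made up for illustration.

import networkx as nx

def markov_blanket(graph, node):
    parents = set(graph.predecessors(node))
    children = set(graph.successors(node))
    spouses = {p for c in children for p in graph.predecessors(c)}   # parents of the children
    return (parents | children | spouses) - {node}

g = nx.DiGraph([('A', 'T'), ('T', 'C'), ('B', 'C'), ('T', 'D')])
print(markov_blanket(g, 'T'))    # {'A', 'B', 'C', 'D'}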
0 | 49,043,299 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-03-01T05:28:00.000 | 1 | 2 | 0 | Custom Yaxis plot in matplotlib python | 49,043,162 | 0.099668 | python,matplotlib | This should work
matplotlib.pyplot.yticks(np.arange(start, stop+1, step)) | Let's say if I have Height = [3, 12, 5, 18, 45] and plot my graph then the yaxis will have ticks starting 0 up to 45 with an interval of 5, which means 0, 5, 10, 15, 20 and so on up to 45. Is there a way to define the interval gap (or the step). For example I want the yaxis to be 0, 15, 30, 45 for the same data set. | 0 | 1 | 40 |
0 | 49,045,428 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-03-01T08:08:00.000 | 0 | 1 | 0 | How to checkwhether an index in a tensorarray has been initialized? | 49,045,210 | 0 | tensorflow,python-3.5 | The only option as I see it is creating an initialization loop where every index is set to 0. This eliminates the problem but may not be an ideal way. | Is it in anyway possible to check whether an index in a TensorArray has been initialized?
As I understand TensorArrays can't be initialized with default values.
However I need a way to increment the number on that index which I try to do by reading it, adding one and then writing it to the same index.
If the index is not initialized however this will fail as it cannot read an uninitialized index.
So is there a way to check if it has been initialized and otherwise write a zero to initialize it? | 0 | 1 | 69 |
0 | 68,770,190 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-03-01T11:02:00.000 | 0 | 2 | 0 | Error indices[0] = 0 is not in [0, 0) while training an object-detection model with tensorflow | 49,048,262 | 0 | python,tensorflow,object-detection | I had the same issue using the centernet_mobilenetv2 model, but I just deleted the num_keypoints parameter in the pipeline.config file and then all was working fine. I don't know what is the problem with that parameter but I was able to run the training without it. | So I am currently attempting to train a custom object-detection model on tensorflow to recognize images of a raspberrypi2. Everything is already set up and running on my hardware,but due to limitations of my gpu I settled for the cloud. I have uploaded my data(train & test records ans csv-files) and my checkpoint model. That is what I get from the logs:
tensorflow:Restoring parameters from /mobilenet/model.ckpt
tensorflow:Starting Session.
tensorflow:Saving checkpoint to path training/model.ckpt
tensorflow:Starting Queues.
tensorflow:Error reported to Coordinator: <class tensorflow.python.framework.errors_impl.InvalidArgumentError'>,
indices[0] = 0 is not in [0, 0)
I also have a folder called images with the actual .jpg files and it is also on the cloud, but for some reason I must specify every directory with a preceding forward slash /, and that might be a problem, as I currently do not know whether some of the files are trying to import these images but could not find the path because of the missing /.
If any of you happens to share a solution I would be really thankful.
EDIT : I fixed it by downloading an older version of the models folder in tensorflow and the model started training, so note to the tf team. | 0 | 1 | 591 |
0 | 49,081,754 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-03-01T21:37:00.000 | 1 | 1 | 0 | Semantically weighted mean of word embeddings | 49,059,089 | 0.197375 | python,vector,semantics,word2vec,word-embedding | Actually averaging of word vectors can be done in two ways
Mean of word vectors without tf-idf weights.
Mean of word vectors multiplied by tf-idf weights.
This will solve your problem of word importance. | Given a list of word embedding vectors I'm trying to calculate an average word embedding where some words are more meaningful than others. In other words, I want to calculate a semantically weighted word embedding.
All the stuff I found is on just finding the mean vector (which is quite trivial of course) which represents the average meaning of the list OR some kind of weighted average of words for document representation, however that is not what I want.
For example, given word vectors for ['sunglasses', 'jeans', 'hats'] I would like to calculate such a vector which represents the semantics of those words BUT with 'sunglasses' having a bigger semantic impact. So, when comparing similarity, the word 'glasses' should be more similar to the list than 'pants'.
I hope the question is clear and thank you very much in advance! | 0 | 1 | 937 |
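A minimal sketch of the second option above (a weighted mean of word vectors); the embedding lookup and the weight values (for example tf-idf scores) are assumed to exist and are placeholders here.

import numpy as np

def weighted_embedding(words, vectors, weights):
    # vectors: dict word -> np.array, weights: dict word -> float (e.g. tf-idf)
    vecs = np.array([vectors[w] for w in words])
    wts = np.array([weights[w] for w in words])
    return np.average(vecs, axis=0, weights=wts)

vectors = {w: np.random.rand(50) for w in ['sunglasses', 'jeans', 'hats']}
weights = {'sunglasses': 3.0, 'jeans': 1.0, 'hats': 1.0}   # more weight on 'sunglasses'
print(weighted_embedding(['sunglasses', 'jeans', 'hats'], vectors, weights).shape)   # (50,)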
0 | 49,065,671 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-03-02T04:45:00.000 | 0 | 1 | 0 | Training a categorical classification example | 49,062,970 | 1.2 | python,machine-learning | For the first question, I would say, you don't need to convert it, but it would make the evaluation on the test set easier.
Your classifier will output one-hot encoded values, which you can convert back to strings and evaluate; however, I think having the test targets as 0-1s would help.
For the second one, you need to fit the standardscaler on the train set, and use (transform) that on the test set. | I am new to Machine Learning. I am currently solving a classification problem which has strings as its target. I have split the test and training sets and I have dealt with the string attributes by converting them by OneHotEncoder and also, I am using StandardScaler to scale the numerical features of the training set.
My question is for the test set, do I need to convert the test set targets which are still in string format such as I did with training set's string targets using the OneHotEncoder, or do I leave the test set alone as it is and the Classifier will do the job itself? Similarly for the numerical attributes do I have to use StandardScaler to scale the numerical attributes in the test set or the Classifier will do this itself once the training is done on the training set? | 0 | 1 | 86 |
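A minimal sketch of the fit-on-train, transform-on-test pattern described in the answer above; the random arrays stand in for the real training and test features.

import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.random.rand(100, 4)
X_test = np.random.rand(20, 4)

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)   # learn mean/std on the training set only
X_test_scaled = scaler.transform(X_test)         # reuse the same fitted transform on the test set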
0 | 49,074,441 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-03-02T17:23:00.000 | 0 | 1 | 0 | Read_CSV() with a non-constant file location | 49,074,246 | 0 | python,pandas,csv | The expression sorted(glob.glob("DailyDownload/*/*_YEH.csv"))[-1] will return one file from the most recent day's downloads. This might work for you if you are certain that only one file per day will be downloaded.
A better solution might be to grab all the files (glob.glob("DailyDownload/*/*_YEH.csv") and then somehow mark them as you process them. Perhaps store the list of processed files in a database? Or delete each file as you complete the processing? | Quick question here I'd like to use pandas read_csv to bring in a file for my python script but it is a daily drop and both the filename and file location changes each day...
My first thought is to get around this by prompting the user for the path? Or is there a more elegant solution that can be coded?
The filepath (with name) is something like this:
DailyDownload>20180301>[randomtags]_YEH.csv | 0 | 1 | 34 |
0 | 49,102,970 | 0 | 1 | 0 | 0 | 1 | true | 3 | 2018-03-03T02:18:00.000 | 1 | 3 | 0 | Error while importing TensorFlow: Illegal instruction (core dumped) | 49,079,990 | 1.2 | python,linux,tensorflow | Compiling tensorflow from source solved the problem, so it seems my system wasn't supported. | I installed tensorflow following the instructions on their site, but when I try to run import tensorflow as tf I get the following error: Illegal instruction (core dumped). I tried this with the CPU and GPU versions, using Virtualenv and "native" pip, but the same error occurs in every case.
The parameters of my PC:
OS : LinuxMint 18.3
CPU: AMD Athlon Dual Core 4450e
GPU: GTX 1050 Ti
I found that some people experienced this error when they compiled tensoflow from source and misconfigured some flags. Could it be that my CPU is too old and not supported? Is it possible that compiling from source solves this issue? | 0 | 1 | 3,188 |
0 | 49,102,029 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-03-04T13:56:00.000 | 0 | 1 | 0 | How can I find pairs of numbers in a matrix without using so many nested loops? | 49,096,206 | 1.2 | python,algorithm,performance | You may check if the algorithm outlined below are fast enough.
Sort the numbers in the 3D array that are in the given range and keep track of the indexes.
Now do a nested loop where the outer loop find candidates for the smallest number and the inner for the largest. The inner loop starts with the next number in the list and terminates as soon as you find numbers that correspond to none-overlapping sub-fields (first number that satisfy condition 2, all remaining numbers fails condition 3) or the difference are greater than the best pair of numbers already found (this and all remaining numbers fails condition 3). Update the information for the best candidate pair if appropriate when the inner loop terminates. | I have to write an algorithm that will find two numbers in a 3D array (nested lists) that are:
That are in a given range (min < num1, num2, < max)
Do not overlap
Are as close in value as possible (abs(num1 - num2) is minimal)
If there exist more pairs of numbers that satisfy 1), 2) and 3), pick the ones whose sum is maximal
Original data is a N x N field consisting of elementary squares that each have a single random number in them. The problem is to find two sub-fields whose sums satisfy the 4 conditions written. I calculate all possible sums and store them in the 3D array sums[i][j][k] with coordinates of starting point (i, j) and its size (k). I need to keep track of indexes to ensure that fields do not overlap.
Right now I am doing this using 6 nested for loops (one for each index, 3 indexes per number) and lots of if statements (to check that sums are in range and fields do not overlap) and then simply iterating over every possible combination which is really slow.
Is there a faster way to do it (maybe without so many loops)? Only standard libraries are allowed | 0 | 1 | 149 |
0 | 50,023,202 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-03-05T01:51:00.000 | 0 | 1 | 0 | Python Pandas , Date '9999dec31'd | 49,102,424 | 0 | python,pandas,sas | I have run into the same error, but with SQL server data, not SAS. I believe Python Pandas may be trying to store this as a pandas datetime, which stores its values in nanoseconds. 9999DEC31 has far too many nanoseconds, as you might expect, for it to handle. You could try reading it in as an integer of days since the SAS epoch, or a string, and use the datetime module (datetime.date) to store it. Or read in the year, month, day separately and recombine using class datetime.date(year, month, day). Whatever gives you the least amount of grief. I can confirm that datetime.date CAN handle 9999DEC31.
Because datetime.date is not a native Pandas class, your column would be stored by Pandas as dtype "object". But never fear, if you've done it right, every single element would be a datetime.date.
Please note: If you need to work with those datetime.dates in Pandas, you would have to use the methods and objects provided in datetime, which differ from the pandas.datetime.
I hope that helps. Let me know if you need more info. | I am doing something very simple but it seems that it does not work. I am importing a SAS table into pandas's dataframe. for the date column. I have NA which is actually using '9999dec31'd to represent it, which is 2936547 in numeric value. Python Pandas.read_sas() cant work with this value because it is too big. Any workaround?
Thank you, | 0 | 1 | 305 |
0 | 49,103,642 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-03-05T04:32:00.000 | 0 | 1 | 0 | Tensorflow: Combining Loss Functions in LSTM Model for Domain Adaptation | 49,103,531 | 0 | python,tensorflow,deep-learning,keras,lstm | From an implementation perspective, the short answer would be yes. However, I believe your question could be more specific, maybe what you mean is whether you could do it with tf.estimator? | Can any one please help me out?
I am working on my thesis. It's about predicting Parkinson's disease, and I want to build an LSTM model that adapts independently of patients. Currently I have implemented it using TensorFlow with my own loss function.
I am planning to introduce both labeled and unlabeled training data in every batch. I want to apply my own loss function to both the labeled and unlabeled training data, and also apply a cross-entropy loss only to the labeled training data. Can I do this in TensorFlow?
So my question is: can I have a combination of loss functions in a single model, trained on different sets of training data? | 0 | 1 | 297
0 | 50,983,852 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2018-03-05T05:32:00.000 | 0 | 2 | 0 | Compare frames from videos opencv python | 49,104,023 | 0 | python,opencv | I don't know why it doesn't work but to solve your problem I would suggest to implement a new function which returns true even if there is a small difference for each pixel color value.
Using the appropriate threshold, you should be able to exclude false negatives. | I've been trying to compare videos from frames taken from video using opencv videocapture() python!
Took the first frame from a video let's call it frame1 and when I saved the video and took the same first frame again let's call it frame2
Comparing frame 1 and frame 2 returns false. When I expected true.
I also saved the frame as an image in png(lossless format) and saved video and again same first frame. But they don't match? How to get the same frame everytime when dealing with videos opencv! Python | 0 | 1 | 1,712 |
0 | 49,108,001 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2018-03-05T08:42:00.000 | 2 | 1 | 0 | Building a dashboard in Dash | 49,106,413 | 0.379949 | python,plotly-dash | I have similar experience. A lot said python is more readable, while I agree, however, I don't find it as on par with R or Shiny in their respective fields yet. | I have used Shiny for R and specifically the Shinydashboard package to build easily navigatable dashboards in the past year or so. I have recently started using the Python, pandas, etc ecosystem for doing data analysis. I now want to build a dashboard with a number of inputs and outputs. I can get the functionality up running using Dash, but defining the layout and look of the app is really time consuming compared to using the default layout from the shinydashboard's package in R.
The convenience that Shiny and Shinydashboard provides is:
Easy layout of components because it is based on Bootstrap
A quite nice looking layout where skinning is built in.
A rich set of input components where the label/title of the input is bundled together with the input.
My question is now this:
Are there any extensions to Dash which provide the above functionality, or alternatively some good examples showing how to do the above? | 0 | 1 | 836
0 | 49,135,392 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-03-05T10:43:00.000 | 1 | 2 | 0 | How do I read/convert an HDF file containing a pandas dataframe written in Python 2.7 in Python 3.6? | 49,108,596 | 1.2 | python,python-3.x,python-2.7,pandas | Not exactly a solution but more of a workaround.
I simply read the files in their corresponding Python versions and saved them as CSV files, which can then be read by any version of Python. | I wrote a dataframe in Python 2.7 but now I need to open it in Python 3.6, and vice versa (I want to compare two dataframes written in both versions).
If I open a Python2.7-generated HDF file using pandas in Python 3.6, this is the error produced:
UnicodeDecodeError: 'ascii' codec can't decode byte 0xde in position 1: ordinal not in range(128)
If I open a Python3.6-generated HDF file using pandas in Python 2.7, this is the error:
ValueError: unsupported pickle protocol: 4
For both cases I simply saved the file by df.to_hdf.
Does anybody have a clue how to go about this? | 0 | 1 | 348 |
0 | 49,116,554 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-03-05T17:20:00.000 | 0 | 1 | 0 | randomly split DataFrame by group? | 49,116,070 | 0 | python-3.x,pandas,numpy,random,scikit-learn | Oh, there is an easy way !
create a list / array of unique group_ids
create a random mask for this list
and use the mask to split the file | I have a DataFrame where multiple rows share group_id values (very large number of groups).
Is there an elegant way to randomly split this data into training and test data in a way that the training and test sets do not share group_id?
The best process I can come up with right now is
- create mask from msk = np.random.rand()
- apply it to the DataFrame
- check test file for rows that share group_id with training set and move these rows to training set.
This is clearly inelegant and has multiple issues (including the possibility that the test data ends up empty). I feel like there must be a better way; is there?
Thanks | 0 | 1 | 324 |
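A sketch of the group-level mask from the answer above, using a toy DataFrame; the 80/20 ratio is an illustrative choice. For reference, scikit-learn's GroupShuffleSplit provides the same behaviour out of the box.

import numpy as np
import pandas as pd

df = pd.DataFrame({'group_id': [1, 1, 2, 2, 3, 3, 4, 4],
                   'value': range(8)})                # toy stand-in data

groups = df['group_id'].unique()                      # one entry per group
msk = np.random.rand(len(groups)) < 0.8               # random mask over groups, not rows
train_groups = set(groups[msk])

train = df[df['group_id'].isin(train_groups)]
test = df[~df['group_id'].isin(train_groups)]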
0 | 49,120,540 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-03-05T21:24:00.000 | 0 | 2 | 0 | Error in prediction using sknn.mlp | 49,119,718 | 0 | python,windows,pyspark | I don't work with Python on Windows, so this answer will be very vague, but maybe it will guide you in the right direction. Sometimes there are cross-platform errors due to one module still not being updated for the OS, frequently when another related module gets an update. I recall something happened to me with a django application which required somebody more familiar with Windows to fix it for me.
Maybe you could try with an environment using older versions of your modules until you find the culprit. | I use Anaconda on a Windows 10 laptop with Python 2.7 and Spark 2.1. Built a deep learning model using Sknn.mlp package. I have completed the model. When I try to predict using the predict function, it throws an error. I run the same code on my Mac and it works just fine. Wondering what is wrong with my windows packages.
'NoneType' object is not callable
I verified the input data. It is a numpy array and it does not have null values. Its dimensions are the same as the training data's, and all attributes are the same. Not sure what it can be. | 0 | 1 | 52 |
0 | 49,144,887 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-03-05T21:24:00.000 | 0 | 2 | 0 | Error in prediction using sknn.mlp | 49,119,718 | 0 | python,windows,pyspark | I finally solved the problem on windows. Here is the solution in case you face it.
The Theano package was faulty. I installed the latest version from GitHub, and then it threw another error, shown below:
RuntimeError: To use MKL 2018 with Theano you MUST set "MKL_THREADING_LAYER=GNU" in your environment.
To solve this, I created an environment variable named MKL_THREADING_LAYER under the user environment variables and set it to GNU. After resetting the kernel, it was working.
Hope it helps! | I use Anaconda on a Windows 10 laptop with Python 2.7 and Spark 2.1. Built a deep learning model using Sknn.mlp package. I have completed the model. When I try to predict using the predict function, it throws an error. I run the same code on my Mac and it works just fine. Wondering what is wrong with my windows packages.
'NoneType' object is not callable
I verified the input data. It is a numpy array and it does not have null values. Its dimensions are the same as the training data's, and all attributes are the same. Not sure what it can be. | 0 | 1 | 52 |
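A hedged sketch of setting the same variable from Python rather than through the Windows environment dialog; the only requirement is that it happens before Theano is imported.

import os
os.environ['MKL_THREADING_LAYER'] = 'GNU'  # must be set before the import below

import theano  # imported after the environment variable on purpose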
0 | 52,918,725 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-03-06T08:04:00.000 | 0 | 3 | 0 | Object center detection using Convnet is always returning center of image rather than center of object | 49,126,007 | 0 | python,computer-vision,deep-learning,keras,conv-neural-network | Since you haven't mentioned it in the details, the following suggestions (if you haven't implemented them already), could help:
1) Normalizing the input data (for example, if you are working with input images, use x_train = x_train/255 before feeding the input to the layer)
2) Try linear activation for the last output layer
3) Running the fit for more epochs, and experimenting with different batch sizes
Train_X is of size 150x256x256x3 and Train_y is of size 150x2 (150 here indicates the total number of images)
I understand 150 images is too small a dataset, but I am OK giving up some accuracy, so I trained a convnet on the data. Here is the architecture of the convnet I used:
Conv2D layer (filter size of 32)
Activation Relu
Conv2D layer (filter size of 64)
Activation Relu
Flattern layer
Dense(64) layer
Activation Relu
Dense(2)
Activation Softmax
model.compile(loss='mse', optimizer='sgd')
Observation: the trained model always returns the normalized center of the image, (0.5, 0.5), as the center of the 'object', even on the training data. I was hoping to get the center of the rectangular object rather than the center of the image when I run the predict function on train_X. Am I getting this output because of my conv layer selections? | 0 | 1 | 599 |
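A hedged Keras sketch of suggestions 1) and 2) from the answer above, reusing the layer sizes listed in the question. The kernel sizes and the toy stand-in data are assumptions, and the input size is reduced to 64x64 here only to keep the sketch light (the question uses 256x256).

import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D, Flatten, Dense

# Toy stand-in data; with real images, normalize with x_train = x_train / 255.0 (suggestion 1).
x_train = np.random.rand(8, 64, 64, 3).astype('float32')
y_train = np.random.rand(8, 2).astype('float32')        # normalized (x, y) centers in [0, 1]

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)),
    Conv2D(64, (3, 3), activation='relu'),
    Flatten(),
    Dense(64, activation='relu'),
    Dense(2, activation='linear'),                       # suggestion 2: linear output, not softmax
])
model.compile(loss='mse', optimizer='sgd')
model.fit(x_train, y_train, epochs=5, batch_size=4)      # suggestion 3: try more epochs / batch sizes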
0 | 49,131,939 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-03-06T12:30:00.000 | 0 | 1 | 0 | Do I need to provide sentences for training Spacy NER or are paragraphs fine? | 49,130,905 | 1.2 | nlp,python-3.5,spacy | Paragraphs should be fine. Could you give an example input data point? | I am trying to train a new Spacy model to recognize references to law articles. I start using a blank model, and train the ner pipe according to the example given in the documentation.
The performance of the trained model is really poor, even with several thousand input points. I am trying to figure out why.
One possible answer is that I am giving full paragraphs to train on, instead of sentences that are in the examples. Each of these paragraphs can have multiple references to law articles. Is this a possible issue?
It turns out I was making a huge mistake in my code. There is nothing wrong with paragraphs, as long as your code actually supplies them to spaCy. | 0 | 1 | 444 |
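For reference, a hedged sketch of what a paragraph-level training example looks like in spaCy 2.x's (text, annotations) format; the text, label name, and character offsets below are invented.

TRAIN_DATA = [
    (
        "The penalty is described in Article 12 of the Act.",
        {"entities": [(28, 38, "LAW_REF")]},   # (start_char, end_char, label)
    ),
    # ... more (text, annotations) pairs, one per paragraph
]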
0 | 49,181,090 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2018-03-07T12:34:00.000 | -2 | 3 | 0 | Stop contourf interpolating values | 49,152,116 | -0.132549 | python,matplotlib,matplotlib-basemap,contourf | Given that the question has not been updated to clarify the actual problem, I will simply answer the question as it is:
No, there is no way that contour would not interpolate because the whole concept of a contour plot is to interpolate the values. | I am trying to plot some 2D values in a Basemap with contourf (matplotlib).
However, contourf by default interpolates intermediate values and gives a smoother image of the data.
Is there any way to make contourf stop interpolating between values?
I have tried adding the keyword argument interpolation='nearest', but contourf does not use it.
Another option would be to use imshow, but some functionality of contourf does not work with imshow.
I am using python 3.6.3 and matplotlib 2.1.2 | 0 | 1 | 6,384 |
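Given that contourf always interpolates, a hedged sketch of the imshow route the question itself mentions, with made-up data; on a Basemap instance the corresponding plotting methods (e.g. pcolormesh) play the same non-interpolating role.

import numpy as np
import matplotlib.pyplot as plt

data = np.random.rand(10, 20)                                # made-up 2D field
plt.imshow(data, interpolation='nearest', origin='lower')    # one flat block per value, no smoothing
plt.colorbar()
plt.show()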
0 | 49,273,055 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-03-07T22:29:00.000 | 1 | 1 | 0 | How to make dynamic hierarchical TensorFlow neural net? | 49,162,276 | 1.2 | python,tensorflow,neural-network,artificial-intelligence | I don't know if and how this would work in TF. But specific "dynamic" deep learning libraries exist that might give a better fit to your use case: PyTorch, for example. | I'm investigating using TensorFlow for an experimental AI algorithm based on dynamic neural nets, allowing the system to scale (remove and add) layers and the width of layers. How should one go about this?
And the follow-up is that I also want to make the nets hierarchical so that they converge to two values (the classifier and an estimate of how sure it is). E.g., if there is a great deal of variance not explained by the neural net, it might give 0.4 out of 1 as the classifier but also a "sure" value indicating how good the neural net "feels" about the estimate. Compared to us humans: we can grasp a concept and also grade how confident we are. I also want the hierarchical structure to be dynamic, connecting subnets together, disconnecting them, and removing them entirely from the system.
My main question: Is this an experiment I should do in tensorflow?
I understand this is not a true technical question, but if you feel it is out of bounds, I hope you will edit it into a more objective question. | 0 | 1 | 195 |
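To make the PyTorch suggestion in the answer a bit more concrete, a hedged sketch of a net whose hidden-layer list can grow or shrink at runtime and that emits both a classifier value and a confidence value; all sizes and the overall design are illustrative assumptions, not a full solution to the question.

import torch
import torch.nn as nn

class GrowableNet(nn.Module):
    def __init__(self, width=32):
        super().__init__()
        self.hidden = nn.ModuleList([nn.Linear(width, width)])
        self.head_class = nn.Linear(width, 1)   # the classifier value
        self.head_conf = nn.Linear(width, 1)    # the "how sure am I" value

    def add_layer(self, width=32):
        self.hidden.append(nn.Linear(width, width))

    def remove_layer(self):
        if len(self.hidden) > 1:
            del self.hidden[-1]

    def forward(self, x):
        for layer in self.hidden:
            x = torch.relu(layer(x))
        return torch.sigmoid(self.head_class(x)), torch.sigmoid(self.head_conf(x))

net = GrowableNet()
net.add_layer()                                   # structure can change between training phases
out_class, out_conf = net(torch.rand(4, 32))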
0 | 49,189,473 | 0 | 0 | 0 | 0 | 2 | false | 3 | 2018-03-08T06:22:00.000 | 0 | 3 | 0 | How to find the optimal number of clusters using k-prototype in python | 49,166,657 | 0 | python,cluster-analysis | Most evaluation methods need a distance matrix.
They will then work with mixed data, as long as you have a distance function that helps solve your problem. But they will not be very scalable. | I am trying to cluster some big data using the k-prototypes algorithm. I am unable to use the k-means algorithm as I have both categorical and numeric data. Via the k-prototypes clustering method I have been able to create clusters if I define what k value I want.
How do I find the appropriate number of clusters for this?
Will the popular methods available (like the elbow method and the silhouette score method), which work with only numerical data, work for mixed data? | 0 | 1 | 8,033 |
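A hedged sketch of carrying the elbow idea over to k-prototypes with the kmodes package (assuming it is installed); the toy data, the categorical column index, and the k range are assumptions. A silhouette-style score would additionally need a mixed-data distance, as the answer notes.

import numpy as np
from kmodes.kprototypes import KPrototypes

# Toy mixed data: column 0 numeric, column 1 categorical.
data = np.array([[1.5, 'a'], [1.7, 'a'], [1.6, 'a'], [8.2, 'b'],
                 [8.0, 'b'], [8.1, 'b'], [3.3, 'c'], [3.1, 'c']], dtype=object)

costs = []
for k in range(2, 6):
    kp = KPrototypes(n_clusters=k, init='Huang')
    kp.fit_predict(data, categorical=[1])
    costs.append(kp.cost_)        # plot these against k and look for an "elbow"
print(costs)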