GUI and Desktop Applications | A_Id | Networking and APIs | Python Basics and Environment | Other | Database and SQL | Available Count | is_accepted | Q_Score | CreationDate | Users Score | AnswerCount | System Administration and DevOps | Title | Q_Id | Score | Tags | Answer | Question | Web Development | Data Science and Machine Learning | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 56,572,030 | 0 | 1 | 0 | 0 | 6 | false | 33 | 2019-02-15T19:21:00.000 | -2 | 11 | 0 | Numpy is installed but still getting error | 54,715,835 | -0.036348 | python,numpy,tensorflow | For me it was about the consoles used.
My Cygwin terminal: no. DOS: no. But in a console opened from Anaconda or Spyder, all commands (pip install, etc.) worked. | I am trying to run a Jupyter notebook and am getting the following error.
I am using Windows 7 with Anaconda Python 3.7.
ImportError: Something is wrong with the numpy installation. While importing we detected an older version of numpy in ['c:\users\paperspace\anaconda3\envs\tensorflow10\lib\site-packages\numpy']. One method of fixing this is to repeatedly uninstall numpy until none is found, then reinstall this version.
I have followed the steps mentioned in the error, but it is still not working. | 0 | 1 | 64,552 |
0 | 55,652,317 | 0 | 1 | 0 | 0 | 6 | false | 33 | 2019-02-15T19:21:00.000 | 22 | 11 | 0 | Numpy is installed but still getting error | 54,715,835 | 1 | python,numpy,tensorflow | Run
pip3 uninstall numpy
until you receive a message stating that there are no numpy files left to uninstall, and then you can freshly install numpy using
pip install numpy
and that will fix the issue. | I am trying to run a Jupyter notebook and am getting the following error.
I am using Windows 7 with Anaconda Python 3.7.
ImportError: Something is wrong with the numpy installation. While importing we detected an older version of numpy in ['c:\users\paperspace\anaconda3\envs\tensorflow10\lib\site-packages\numpy']. One method of fixing this is to repeatedly uninstall numpy until none is found, then reinstall this version.
I have followed the steps mentioned in the error, but it is still not working. | 0 | 1 | 64,552 |
0 | 56,306,185 | 0 | 1 | 0 | 0 | 6 | false | 33 | 2019-02-15T19:21:00.000 | 4 | 11 | 0 | Numpy is installed but still getting error | 54,715,835 | 0.072599 | python,numpy,tensorflow | This means you have a duplicated installation.
Try pip uninstall numpy or pip3 uninstall numpy then sudo apt-get install python3-numpy
(for Debian-based distributions). | I am trying to run a Jupyter notebook and am getting the following error.
I am using Windows 7 with Anaconda Python 3.7.
ImportError: Something is wrong with the numpy installation. While importing we detected an older version of numpy in ['c:\users\paperspace\anaconda3\envs\tensorflow10\lib\site-packages\numpy']. One method of fixing this is to repeatedly uninstall numpy until none is found, then reinstall this version.
I have followed the steps mentioned in the error, but it is still not working. | 0 | 1 | 64,552 |
0 | 70,216,694 | 0 | 1 | 0 | 0 | 6 | false | 33 | 2019-02-15T19:21:00.000 | 0 | 11 | 0 | Numpy is installed but still getting error | 54,715,835 | 0 | python,numpy,tensorflow | Your issue may be similar to mine below.
How to reproduce it
Started a virtual environment named "jupyter_ve" with numpy=1.21.0 where I launched jupyter.
Opened a notebook using a virtual environment named "project_ve" with numpy=1.20.0 where I launched my notebook kernel using "project_ve"
In the notebook, the import fails on from numba import njit, giving an error telling me to install numpy 1.20.0 or lower.
Solution
Make sure you install the matching numpy version in the virtual environment you used to start Jupyter.
In my case above, I had to install / downgrade numpy in the virtual environment from which I launched Jupyter. It doesn't matter that I had installed the notebook kernel from the other virtual environment. | I am trying to run a Jupyter notebook and am getting the following error.
I am using Windows 7 with Anaconda Python 3.7.
ImportError: Something is wrong with the numpy installation. While importing we detected an older version of numpy in ['c:\users\paperspace\anaconda3\envs\tensorflow10\lib\site-packages\numpy']. One method of fixing this is to repeatedly uninstall numpy until none is found, then reinstall this version.
I have followed the steps mentioned in the error, but it is still not working. | 0 | 1 | 64,552 |
0 | 55,599,428 | 0 | 1 | 0 | 0 | 6 | false | 33 | 2019-02-15T19:21:00.000 | 3 | 11 | 0 | Numpy is installed but still getting error | 54,715,835 | 0.054491 | python,numpy,tensorflow | Use conda update --all
This works. | I am trying to run a Jupyter notebook and am getting the following error.
I am using Windows 7 with Anaconda Python 3.7.
ImportError: Something is wrong with the numpy installation. While importing we detected an older version of numpy in ['c:\users\paperspace\anaconda3\envs\tensorflow10\lib\site-packages\numpy']. One method of fixing this is to repeatedly uninstall numpy until none is found, then reinstall this version.
I have followed the steps mentioned in the error, but it is still not working. | 0 | 1 | 64,552 |
0 | 70,205,626 | 0 | 0 | 0 | 0 | 1 | false | 71 | 2019-02-15T20:09:00.000 | 0 | 5 | 0 | How to do gradient clipping in pytorch? | 54,716,377 | 0 | python,machine-learning,deep-learning,pytorch,gradient-descent | Well, I ran into the same error. I tried to use the clip norm, but it didn't work.
I don't want to change the network or add regularizers, so I changed the optimizer to Adam, and it works.
Then I used the model pretrained with Adam to initialize the training and used SGD + momentum for fine-tuning. It is now working. | What is the correct way to perform gradient clipping in pytorch?
I have an exploding gradients problem. | 0 | 1 | 99,166 |
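For reference, the standard answer to the question above is norm-based clipping. A minimal PyTorch sketch, assuming a generic model, loss_fn, optimizer and data loader exist elsewhere:

```python
import torch

# model, loss_fn, optimizer and loader are assumed to exist elsewhere.
for inputs, targets in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    # Rescale all gradients so their global L2 norm is at most 1.0;
    # call this after backward() and before step().
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
```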
0 | 55,090,778 | 0 | 1 | 0 | 0 | 1 | true | 2 | 2019-02-16T15:09:00.000 | 0 | 1 | 0 | Visual studio can't see Anaconda's modules | 54,724,459 | 1.2 | python,python-3.x,visual-studio,numpy,anaconda | Resolved it myself
In the PATH variable
C:\ProgramData\Anaconda3
C:\ProgramData\Anaconda3\Library\mingw-w64\bin
C:\ProgramData\Anaconda3\Library\usr\bin
C:\ProgramData\Anaconda3\Library\bin
C:\ProgramData\Anaconda3\Scripts
should go first. | I have 2 Python environments (Python 3.6 from Visual Studio 2017 and Python 3.7 from Anaconda).
When I run a script, I get an error on import numpy:
Traceback (most recent call last): File
"C:\ProgramData\Anaconda3\lib\site-packages\numpy\core\__init__.py",
line 16, in <module>
from . import multiarray ImportError: DLL load failed: The specified module could not be found.
During handling of the above exception, another exception occurred:
Traceback (most recent call last): File
"D:\projects\UdacityNanoDegreeCourse\UdacityNanoDegreeCourse\LearningCurves\LearningCurves.py",
line 2, in <module>
import numpy as np File "C:\ProgramData\Anaconda3\lib\site-packages\numpy\__init__.py", line
142, in <module>
from . import add_newdocs File "C:\ProgramData\Anaconda3\lib\site-packages\numpy\add_newdocs.py",
line 13, in <module>
from numpy.lib import add_newdoc File "C:\ProgramData\Anaconda3\lib\site-packages\numpy\lib\__init__.py",
line 8, in <module>
from .type_check import * File "C:\ProgramData\Anaconda3\lib\site-packages\numpy\lib\type_check.py",
line 11, in <module>
import numpy.core.numeric as _nx File "C:\ProgramData\Anaconda3\lib\site-packages\numpy\core\__init__.py",
line 26, in <module>
raise ImportError(msg) ImportError: Importing the multiarray numpy extension module failed. Most likely you are trying to import a
failed build of numpy. If you're working with a numpy git repo, try
git clean -xdf (removes all files not under version control).
Otherwise reinstall numpy.
Original error was: DLL load failed: The specified module could not be
found.
As a temporary fix I run Visual Studio from Anaconda prompt.
How can I fix the environment so that I can run Visual Studio as usual, by double-clicking? Do I need to change PATH somehow?
Right click-> Activate Environment didn't help either. | 0 | 1 | 469 |
0 | 54,726,295 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-02-16T18:25:00.000 | 1 | 1 | 0 | Retraining or Continuing Training | 54,726,275 | 1.2 | python,tensorflow | If you load your model before the training instructions, then you are continuing to train the same model. Tensorflow will just load the pre-trained weights of your model and continue updating them during the new training session. | I've started experimenting with TensorFlow lately, and there is one thing I'm not quite sure I get right:
If i have a set of training data, train the model on it for N number of epochs and than use model.save("Multi_stage_test.model") to save the model.
Suppose I then run the program again with the same training data, with the previously trained model loaded via model=load_model("Multi_stage_test.model"). Am I continuing to train the model (epoch 1 of the new session is epoch N+1 for the model), or am I retraining the model from scratch (epoch 1 of the new session is epoch 1 for the model)? | 0 | 1 | 84 |
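A minimal sketch of the continuation behavior described in the answer, assuming a compiled Keras model and training arrays x_train, y_train already exist:

```python
from keras.models import load_model

# First session: train for N epochs, then persist weights + optimizer state.
model.fit(x_train, y_train, epochs=5)   # model, x_train, y_train assumed
model.save("Multi_stage_test.model")

# Later session: loading restores the trained weights, so this fit()
# effectively continues at epoch N+1, even though Keras prints the
# epoch counter starting from 1 again.
model = load_model("Multi_stage_test.model")
model.fit(x_train, y_train, epochs=1)
```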
0 | 54,730,825 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-02-17T05:40:00.000 | 0 | 1 | 0 | Automated feature generation for time series problems - Featuretools | 54,730,480 | 0 | python,featuretools | Featuretools can certainly help with this. Can you provide more information on the dataset and specific prediction problem so we can help? | I'm trying to use featuretools to generate features to help me predict the number of museum visits next month.
Can featuretools generate features for time series? Should I change the data so that the id is the month, or can featuretools do it automatically? | 0 | 1 | 468 |
0 | 54,782,462 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-02-17T21:13:00.000 | 1 | 1 | 0 | Is it possible to train model from multiple datasets for each class? | 54,737,739 | 1.2 | python,tensorflow,object-detection-api | The short answer is yes, it might be problematic, but with some effort you can make it possible.
If you have two urban datasets, and in one you only have annotations for traffic lights while in the second you only have annotations for cars, then each instance of a car in the first dataset will be learned as a false example, and each instance of a traffic light in the second dataset will be learned as a false example.
The two possible outcomes I can think of are:
The model will not converge, since it tries to learn opposite things.
The model will converge, but will be domain specific. This means that the model will only detect traffic lights on images from the domain of the first dataset, and cars on the second.
In fact I tried doing so myself in a different setup, and got this outcome.
In order to learn your objective of detecting traffic lights and cars no matter which dataset they come from, you'll need to modify your loss function. You need to tell the loss function which dataset each image comes from, and then only compute the loss on the corresponding classes (i.e., zero out the loss on the classes that do not correspond to it). So returning to our example, you only compute the loss and backpropagate for traffic lights on the first dataset, and for cars on the second.
For completeness I will add that if resources are available, then the better option is to annotate all the classes on all datasets in order to avoid the suggested modification, since by only backpropagating certain classes you do not get to use actual false examples for the other classes. | I'm pretty new to object detection. I'm using the TensorFlow object detection API and I'm now collecting datasets for my project,
and using model_main.py to train my model.
I have found and transformed two quite large datasets of cars and traffic lights with annotations. And made two tfrecords from them.
Now I want to train a pretrained model; however, I'm curious whether it will work. It is possible that an image, for example "001.jpg", will of course have some annotated bounding boxes of cars (it is from the car dataset), but if there is a traffic light as well it wouldn't be annotated -> will this lead to bad learning? (There can be many of these "problematic" images.) How should I improve this? Is there any workaround? (I really don't want to annotate the images again.)
If it's a stupid question I'm sorry; thanks for any response - some links about this problem would be best!
Thanks ! | 0 | 1 | 1,404 |
0 | 54,742,112 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-02-18T05:59:00.000 | 2 | 1 | 0 | Filtering and filling Nan with mode in Pandas | 54,741,230 | 0.379949 | python,pandas | df.loc[df["columnname"]==0,"columnname"]=np.nan
df["columnname"]=df.groupby("groupbycolumnname").columnname.transform(lambda x: x.fillna(x.mode())) | I have many qualitative variables in one column1 and corresponding qualitative values in another column which have blank values. I want to replace the blanks with the MODE value of each qualitative variable in column 1. For example if I have varibles such as Accountant, Engineer, Manager, etc. in column 1 and if I have Bachelor's degree as MODE for accountant, Master's for Engineer in column 2, I want to replace the coressponding blanks with Bachelor's and Master's Correctly. How can I achieve this in pandas? | 0 | 1 | 90 |
0 | 54,777,057 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-02-19T09:11:00.000 | 4 | 1 | 0 | How to use pretrained word2vec vectors in doc2vec model? | 54,762,478 | 1.2 | python,machine-learning,nlp,word2vec,doc2vec | You might think that Doc2Vec (aka the 'Paragraph Vector' algorithm of Mikolov/Le) requires word-vectors as a 1st step. That's a common belief, and perhaps somewhat intuitive, by analogy to how humans learn a new language: understand the smaller units before the larger, then compose the meaning of the larger from the smaller.
But that's a common misconception, and Doc2Vec doesn't do that.
One mode, pure PV-DBOW (dm=0 in gensim), doesn't use conventional per-word input vectors at all. And, this mode is often one of the fastest-training and best-performing options.
The other mode, PV-DM (dm=1 in gensim, the default) does make use of neighboring word-vectors, in combination with doc-vectors, in a manner analogous to word2vec's CBOW mode – but any word-vectors it needs will be trained up simultaneously with the doc-vectors. They are not trained first in a separate step, so there's not an easy splice-in point where you could provide word-vectors from elsewhere.
(You can mix skip-gram word-training into the PV-DBOW, with dbow_words=1 in gensim, but that will train word-vectors from scratch in an interleaved, shared-model process.)
To the extent you could pre-seed a model with word-vectors from elsewhere, it wouldn't necessarily improve results: it could easily send their quality sideways or worse. It might in some lucky well-managed cases speed model convergence, or be a way to enforce vector-space-compatibility with an earlier vector-set, but not without extra gotchas and caveats that aren't a part of the original algorithms, or well-described practices. | I am trying to implement doc2vec, but I am not sure what the input for the model should look like if I have pretrained word2vec vectors.
The problem is that I am not sure how to theoretically use pretrained word2vec vectors for doc2vec. I imagine that I could prefill the hidden layer with the vectors and fill the rest of the hidden layer with random numbers.
Another idea is to use the vectors as the input for each word instead of a one-hot encoding, but I am not sure if the output vectors for docs would make sense.
Thank you for your answer! | 0 | 1 | 1,101 |
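To make the two modes from the answer concrete, a minimal gensim sketch (the toy corpus and all parameters are purely illustrative):

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Hypothetical toy corpus; real corpora need far more documents.
corpus = [TaggedDocument(words=["numpy", "import", "fails"], tags=[0]),
          TaggedDocument(words=["pandas", "merge", "columns"], tags=[1])]

# Pure PV-DBOW: no per-word input vectors are used at all.
model_dbow = Doc2Vec(corpus, dm=0, vector_size=100, min_count=1, epochs=20)

# PV-DM: word-vectors are trained simultaneously with the doc-vectors.
model_dm = Doc2Vec(corpus, dm=1, vector_size=100, min_count=1, epochs=20)

vec = model_dbow.infer_vector(["numpy", "import"])
```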
0 | 54,767,990 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2019-02-19T12:19:00.000 | 0 | 1 | 0 | Python - Manual - libary installation Failure on Windows with Spacy, Thinc and msgpack-numpy python 3.7 | 54,766,199 | 0 | python,numpy | After many tries, I figured out the simple mistake I was making...
Collecting msgpack-numpy<0.4.4.0 (from thinc==6.12.0) does not mean I need 0.4.4.0; it means I need a version less than 0.4.4.0.
So I found msgpack_numpy-0.4.3.2-py2.py3-none-any.whl
It is confusing when a requirement names an upper-bound version that does not actually exist as a package.
I eventually was able to install spacy after many tries of different combinations of thinc, regex, spacy
regex==2018.01.10
msgpack-numpy 0.4.3.2
thinc-6.12.1
Successfully installed spacy-2.0.18 | All, I am pursuing a path of manual installation of Python libraries, one that, unfortunately, I cannot deviate from, and it has become challenging because some of the libraries are just not easily found on pypi.org. This is a Windows 10 setup using Anaconda for Python 3.7.
My goal is to install SPACY and I have tried this version: spacy-2.0.18-cp37-cp37m-win_amd64.whl
Which Requires Collecting thinc<6.13.0,>=6.12.1 (from spacy==2.0.18)
Now I can't seem to find 6.13.0, but I was able to find 6.12.1 and also found thinc-7.0.0.
so I installed thinc-7.0.0, but spacy does not accept that as satisfying the requirement (7.0.0 is above the <6.13.0 upper bound); I am not sure if I am interpreting it correctly.
So instead I install thinc-6.12.0-cp37-cp37m-win_amd64.whl
which fails because it is looking for: Collecting msgpack-numpy<0.4.4.0 (from thinc==6.12.0)
However msgpack-numpy<0.4.4.0 seems undiscoverable.
I have found msgpack_numpy-0.4.4.2-py2.py3-none-any.whl
I also Found msgpack_numpy-0.4.4-py2.py3-none-any.whl
neither of which will be accepted by thinc-6.12.0 as valid.
So have I chosen the wrong spacy version to start with for 3.7?
I tried this path in 3.6 and I think I was able to get it all to work. Is the Python 3.7 path just broken?
If someone knows the path and the location of the files needed to get spacy to work, that would be great. Unfortunately, I cannot just issue pip commands at this time.
best regards | 0 | 1 | 132 |
0 | 54,767,624 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-02-19T13:27:00.000 | 0 | 1 | 0 | data structures to recognize particular elements / shapes with similar patterns on the image in python | 54,767,421 | 1.2 | python,algorithm,image-processing,data-structures | If you're planning on using something like PyTorch later on (which would tie in to the sort of neural-network functionality you're pursuing), I'd become familiar with how NumPy operates, as it bridges pretty well into Torch data structures. If it helps, a lot of SciPy and Matplotlib functions work beautifully with Numpy structures right out of the box.
It's hard to tell exactly what you're looking for in these non-neural data structures; where are you worried about performance concerns and bottlenecks?
I'd recommend starting out with some PyTorch (or other deep learning framework) tutorials regarding image recognition and classification; it will get you closer to where you want to end up, and you'll be better able to make decisions about what your eventual program structure needs will be. | I am having trouble picking the right data structure / library. I lack experience in the area of image processing / pattern recognition. The aim is to build a simple prototype that learns to recognize particular shapes from construction plans. I would be grateful for any indication about the data structure, as I know it will be hard to switch later on during the project, and thus I am not entirely sure which one to pick.
The problem is, I plan to use a kind of neural network / algorithm later on, so the performance of processing the data structure may turn out to be my bottleneck.
I was thinking about NumPy / SciPy / PIL / MatPlotLib
I will be extremely grateful for the expertise of anyone who has tackled a similar problem. | 0 | 1 | 47 |
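As a small illustration of the NumPy-to-PyTorch bridging the answer mentions (a sketch; the array contents are arbitrary):

```python
import numpy as np
import torch

plan = np.zeros((64, 64), dtype=np.float32)  # e.g. a grayscale plan image

t = torch.from_numpy(plan)  # shares memory with the NumPy array (no copy)
back = t.numpy()            # and back again
```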
0 | 54,779,427 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2019-02-19T16:30:00.000 | 0 | 1 | 0 | What is a good AWS solution (DB, ETL, Batch Job) to store large historical trading data (with daily refresh) for machine learning analysis? | 54,770,900 | 0 | mysql,database,amazon-web-services,amazon-dynamodb,mysql-python | If you want to use AWS EMR, then the simplest solution is probably just to run a daily job that dumps data into a file in S3. However, if you want to use something a little more SQL-ey, you could load everything into Redshift.
If your goal is to make it available in some form to other people, then you should definitely put the data in S3. AWS has ETL and data migration tools that can move data from S3 to a variety of destinations, so the other people will not be restricted in their use of the data just because of it being stored in S3.
On top of that, S3 is the cheapest (warm) storage option available in AWS, and for all practical purposes, its throughput is unlimited. If you store the data in a SQL database, you significantly limit the rate at which the data can be retrieved. If you store the data in a NoSQL database, you may be able to support more traffic (maybe), but it will be at significant cost.
Just to further illustrate my point, I recently did an experiment to test certain properties of one of the S3 APIs, and part of my experiment involved uploading ~100GB of data to S3 from an EC2 instance. I was able to upload all of that data in just a few minutes, and it cost next to nothing.
The only thing you need to decide is the format of your data files. You should talk to some of the other people and find out if Json, CSV, or something else is preferred.
As for adding new data, I would set up a Lambda function that is triggered by a CloudWatch event. The Lambda function can get the data from your data source and put it into S3. The CloudWatch event trigger is cron based, so it’s easy enough to switch between hourly, daily, or whatever frequency meets your needs. | I want to build a machine learning system with a large amount of historical trading data for machine learning purposes (Python program).
The trading company has an API to grab its historical data and real-time data. The data volume is about 100 GB for historical data and about 200 MB for daily data.
Trading data is typical time series data like price, name, region, timeline, etc. The data can be retrieved as large files or stored in a relational DB.
So my question is, what is the best way to store this data on AWS, and what's the best way to add new data every day (e.g., through a cron job or ETL job)? Possible solutions include storing it in a relational database like MySQL, or in NoSQL databases like DynamoDB or Redis, or storing the data in a file system and reading it with the Python program directly. I just need to find a solution to persist the data in AWS so multiple teams can grab the data for research.
Also, since it's a research project, I don't want to spend too much time exploring new systems or emerging technologies. I know there are time-series databases like InfluxDB or the new Amazon Timestream. Considering the learning curve and deadline requirements, I am not inclined to learn and use them for now.
I'm familiar with MySQL. If really needed, I can pick up NoSQL, like Redis/DynamoDB.
Any advice? Many thanks! | 0 | 1 | 82 |
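A minimal sketch of the Lambda-to-S3 idea from the answer. The fetch_daily_trading_data() helper and the bucket name are hypothetical; the handler would be wired to a CloudWatch scheduled (cron) event:

```python
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # fetch_daily_trading_data() is hypothetical - it would call the
    # trading company's API and return the day's records.
    records = fetch_daily_trading_data()
    s3.put_object(
        Bucket="my-trading-data",  # hypothetical bucket name
        Key="daily/{}.json".format(event.get("time", "latest")),
        Body=json.dumps(records),
    )
```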
0 | 59,461,549 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-02-19T19:16:00.000 | 0 | 3 | 0 | Decimal class rounding in Pandas | 54,773,447 | 0 | python,pandas,decimal | To round your Decimal to 2 decimal places, for example:
round(df['yourSeries'].astype('float'), 2)
Or if you just want an int:
round(df['yourSeries'].astype('float')) | I have problems rounding Decimal() values inside a pandas DataFrame. The round() method does not work, and using quantize() doesn't either.
round() does nothing; I assume it is meant for float numbers
quantize() won't work because it is not a DataFrame function
Any thoughts? | 0 | 1 | 881 |
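If you want to keep Decimal objects rather than convert to float, a sketch using apply with quantize (the column name and values are illustrative):

```python
from decimal import Decimal, ROUND_HALF_UP
import pandas as pd

df = pd.DataFrame({"price": [Decimal("1.2345"), Decimal("2.7182")]})

# quantize() works per element, so apply it element-wise;
# Decimal("0.01") means "round to 2 decimal places".
df["price"] = df["price"].apply(
    lambda d: d.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))
```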
0 | 54,805,518 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-02-20T15:02:00.000 | 0 | 1 | 0 | Calculating object labelling consensus area | 54,789,340 | 0 | python,computational-geometry | IMO the simplest is to fill the shapes using polygon filling / circle filling (this is simple, you can roll your own) / path filling (from a seed). Then finding the area of overlap is an easy matter. | Scenario: four users are annotating images with one of four labels each. These are stored in a fairly complex format - either as polygons or as centre-radius circles. I'm interested in quantifying, for each class, the area of agreement between individual raters – in other words, I'm looking to get an m x n matrix, where M_i,j will be some metric, such as the IoU (intersection over union), between i's and j's ratings (with a 1 diagonal, obviously). There are two problems I'm facing.
One, I don't know what works best in Python for this. Shapely doesn't implement circles too well, for instance.
Two, is there a more efficient way for this than comparing it annotator-by-annotator? | 0 | 1 | 21 |
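A sketch of the rasterize-then-overlap idea from the answer, for two centre-radius circles on a pixel grid (pure NumPy; shapes and grid size are illustrative; polygons could be filled with matplotlib.path or skimage.draw in the same way):

```python
import numpy as np

def circle_mask(shape, cx, cy, r):
    # Boolean mask of the pixels inside a centre-radius circle.
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    return (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2

a = circle_mask((200, 200), 80, 100, 40)
b = circle_mask((200, 200), 110, 100, 40)

iou = np.logical_and(a, b).sum() / np.logical_or(a, b).sum()
```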
0 | 58,782,823 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2019-02-20T22:36:00.000 | 2 | 1 | 0 | Dataset with lots of zero value as missing value. What should I do? | 54,796,340 | 0.379949 | python,data-science,mining | Well there are a few options.
Take an average of the non-zero values and fill all the zeros with the average. This yields 'tacky' results and is not best practice: a few outliers can throw off the whole estimate.
Use the median of the non-zero values; also not a super option, but less likely to be thrown off by outliers (see the sketch after this row).
Binning would be: take the budgets and split the movies into a certain number of groups (say, budgets over or under a million), or take the average budget, divide it by the number of groups you want, and use the resulting intervals; if a movie falls in group 0 give it a zero, if group 1 a one, etc.
I think finding the actual budgets for the movies and replacing the bad itemized budgets with the real budgets would be a good option, depending on the analysis you are doing. You could take the median or average of each budget feature column as a percentage of each movie's budget, then fill in the zeros with that percentage of the budget. If the median value for the non-zero actor_pay column is budget/actor_pay = 60%, then filling a zeroed value in the actor_pay column with 60 percent of that movie's budget would be an option.
Hard option: create a function that takes the non-zero values of a movie's budget and attempts to interpolate the movie's budget based on the other movies' data in the table. This option is more like its own project, and the above options should really be tried first. | I am currently working on the IMDB 5000 movie dataset for a class project. The budget variable has a lot of zero values.
They are missing entries. I cannot drop them because they are 22% of my entire data.
What should I do in Python? Some suggested binning? Could you provide more details? | 0 | 1 | 1,215 |
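A short pandas sketch of the median-fill option mentioned in the list above, assuming a DataFrame df with a budget column where 0 marks a missing entry:

```python
import pandas as pd

# df assumed: the IMDB frame with a 'budget' column where 0 == missing.
# Median of the non-zero budgets only, so the zeros don't drag it down.
median_budget = df.loc[df["budget"] > 0, "budget"].median()
df["budget"] = df["budget"].replace(0, median_budget)
```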
0 | 54,799,272 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-02-21T02:43:00.000 | 0 | 2 | 0 | Matrix shapes difference between lectures | 54,798,437 | 0 | python,machine-learning,deep-learning | Both ways are fine, it only matters to be consistent, i.e., that all matrix operations are correct. Depending on the shape of the matrix you might have matrix*vector or vector_transposed*matrix, or some variety along these lines.
Playing around with different representations might actually help your understanding in the long run. So I would recommend following both lectures and appreciating the differences in how they represent the data. | I'm currently learning deep learning from two lectures.
In Coursera's lecture, they make a matrix X in shape of (number of features, number of samples), so that they stack the samples vertically. Otherwise, the other lecture stacks the samples horizontally, so that every row represents one sample.
What causes this difference, and which one should I follow? | 0 | 1 | 39 |
0 | 54,801,198 | 0 | 1 | 0 | 0 | 2 | true | 1 | 2019-02-21T06:09:00.000 | 3 | 2 | 0 | How to know if the Variable is Categorical or Numerical if all it contains is Digits? | 54,800,350 | 1.2 | python,pandas,dataframe,statistics,data-analysis | The short answer is: your knowledge of the problem domain / application domain will tell you.
There are some differences that you look for, but to apply these differences, you will still have to use some domain knowledge (sometimes common sense).
The following are some differences which will help you distinguish:
For categorical variables, the set of permitted values is usually fixed, and rarely changes, if at all. In contrast, for numeric variable, the set of values can change, for example, when you receive a new record for the same dataset.
Numeric variables can potentially have values that are not round integers. In your example, even though "distance from office" happens to have integer values, that could be purely incidental, or could have been a choice made by someone about how much numeric precision they want in the data (see the sketch after this row).
For categorical variables, it usually doesn't make sense to talk of averages. For example, there are 2 types of diabetes called Type 1, Type 2, but it just doesn't make sense to talk of an average of these types (Type 1.2357?).
Ask yourself this thumb-rule question: When I perform my data analysis, can I express my inferences in terms of specific values of this variable? How about ranges of this variable ("0 to 5 km", "5 to 10 km", etc.)? For example, can I report any inferences from my data analysis that say "Those whose distance from office is 123 are prone to be successful in their career"? That specific value sounds silly, right? In contrast, if it were a categorical variable such as Type 2 Diabetes, you can always make inferences in terms of the specific value. | I have a dataset which has several variables.
I want to determine how we can judge whether a variable is categorical or numerical, other than by the method of unique value counts; for instance, one of my variables, Disease Type, has 31 unique values, whereas another variable, Distance from Office, has 25 unique values, both in the form of numbers. | 0 | 1 | 1,458 |
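A small pandas sketch of the mechanical checks these heuristics build on (assumes a numeric column; the function name is hypothetical, and domain knowledge must still make the final call):

```python
import pandas as pd

def summarize(col: pd.Series) -> None:
    # Low cardinality plus integer-only values *suggests* categorical,
    # but this is only a hint, never a verdict.
    print("dtype:", col.dtype)
    print("unique values:", col.nunique())
    print("all integers:", bool((col.dropna() % 1 == 0).all()))
```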
0 | 54,801,115 | 0 | 1 | 0 | 0 | 2 | false | 1 | 2019-02-21T06:09:00.000 | 0 | 2 | 0 | How to know if the Variable is Categorical or Numerical if all it contains is Digits? | 54,800,350 | 0 | python,pandas,dataframe,statistics,data-analysis | <dataframename>.info() will give the total count of each variable, along with whether it is non-null, and its datatype (float64, object, int64, etc.). | I have a dataset which has several variables.
I want to determine how we can judge whether a variable is categorical or numerical, other than by the method of unique value counts; for instance, one of my variables, Disease Type, has 31 unique values, whereas another variable, Distance from Office, has 25 unique values, both in the form of numbers. | 0 | 1 | 1,458 |
0 | 54,821,940 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-02-22T06:24:00.000 | 2 | 2 | 0 | I want all numpy arrays to be forced to be 2 dimensional | 54,821,206 | 0.197375 | python,arrays,numpy | As noted above, the np.matrix class has semantics quite similar to a matlab array.
However, if your goal is to learn numpy as a marketable skill, I would strongly recommend you fully embrace the concept of an ndarray; while there is some historical truth to calling numpy a port of Matlab, it is a bit of an insult, as the ndarray is one of the most compelling conceptual improvements of numpy over Matlab, other than its price.
TLDR: you will have a hard time not getting your application tossed by me if you claim to know numpy but your code samples smell like ported Matlab in any way. | I am coming over from Matlab, and while everything was mostly ported over really well (the community has to be thanked for this; a Matlab license costs way over $1000), there is one thing that I cannot for the life of me figure out.
In Matlab, all arrays are 2D (until recently, when they gave you other options), such that when I define a scalar, array, or matrix, they are all considered 2D. This is pretty useful when doing matrix multiplication!
In Python, when using numpy, unfortunately I find myself having to use the reshape command quite frequently.
Is there any way I can globally set all arrays to have 2 dimensions unless stated otherwise?
Edit:
According to the numpy documentation, numpy.matrix may be removed in the future. What I want, in essence, is to have the output of any numpy operation have the function np.atleast_2d applied to it automatically. | 0 | 1 | 114 |
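There is no global switch for this, but a tiny sketch of the np.atleast_2d wrapping mentioned in the edit above (the helper name is hypothetical):

```python
import numpy as np

def as2d(x):
    # Promote scalars and 1-D arrays to 2-D, Matlab-style.
    return np.atleast_2d(np.asarray(x))

print(as2d(3).shape)          # (1, 1)
print(as2d([1, 2, 3]).shape)  # (1, 3)
```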
0 | 54,832,931 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-02-22T12:17:00.000 | 0 | 2 | 0 | OpenCV can't be found but can be imported | 54,826,979 | 1.2 | python,opencv,ubuntu | I found out that I hadn't run the uninstall in the build folder; problem solved! | I've installed opencv from source using a virtualenv; however, I faced some errors and needed to reinstall it. I tried removing all the files with sudo find / -name "opencv" -exec rm {} \; and checked if the package was removed with pkg-config --modversion opencv, and it said it could not be found, but when I open the terminal with python3 and enter import cv2 then print(cv2.__version__), the terminal returns 4.0.0. How can I completely remove opencv? I'm on Ubuntu 18.04 LTS. | 0 | 1 | 362 |
0 | 54,833,867 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2019-02-22T19:18:00.000 | 2 | 1 | 0 | Can I use a machine learning model as the objective function in an optimization problem? | 54,833,739 | 0.379949 | python,machine-learning,optimization,scikit-learn,scipy | Yes; a linear regression model is a straightforward linear function of coefficients (one of which is the "intercept" or "bias").
The problem you have now is that a more complex model isn't quite so simple. You need to load the model into an appropriate engine. To "call" the model, you feed that engine the input vector (the cognate of a list of arguments), and wait for the model to return the prediction.
You need to wrap this process in a function call, perhaps one that issues the model load and processing as external system / shell commands, and returns the results to your main program. Some applications are large enough that it makes sense to implement a full-bore data stream with listener and reporter to handle the throughput.
Does that get you moving? | I have a data set for which I use Sklearn Decision Tree regression machine learning package to build a model for prediction purposes. Subsequently, I am trying to utilize scipy.optimize package to solve for the minimized solution based on a given constraint.
However, I am not sure if I can take the decision tree model as the objective function for the optimization problem. What should be the approach in a situation like this? I have tried linear regression models such as LarsCV in the past and they worked just fine. But in a linear regression model, you can essentially extract the coefficients and interception point from the model. | 0 | 1 | 904 |
0 | 54,839,647 | 0 | 1 | 1 | 0 | 1 | true | 0 | 2019-02-23T08:18:00.000 | 0 | 1 | 0 | IronPython.Runtime.Exceptions.ImportException: 'No module named pandas' | 54,839,602 | 1.2 | c#,python,pandas | Sorry you cant access third party functionalities/packages (like Pandas here)
The reason:
Pandas / numpy rely on large parts of C code, which is a no-go for IronPython (.NET)!
If you need the power of pandas, I suggest you create a dialog (send/receive data via a file, or by calling an external Python program) between your IronPython project and a Python program that includes pandas. | I am running a C# project which uses the result of a Python file. I'm using IronPython to run the Python file in Visual Studio. But the error above comes up. Please help me. Many thanks! | 0 | 1 | 1,617 |
0 | 54,842,514 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-02-23T13:41:00.000 | 0 | 5 | 0 | Which loss function should I use in my LSTM and why? | 54,842,219 | 0 | python,python-3.x,tensorflow,keras | You'll want to use a logistic activation. This pushes each logit between 0 and 1, which represents the probability of that category.
Then use categorical cross entropy. This will not make your model a single class classifier since you are using the logistic activation rather than the softmax activation.
As a rule of thumb:
logistic activation pushes values between 0 and 1
softmax pushes values between 0 and 1 AND makes them a valid probability distribution (sum to 1)
cross entropy calculates the difference between distributions of any type. | I am trying to understand Keras and LSTMs step by step. Right now I am building an LSTM where the input is a sentence and the output is an array of five values which can each be 0 or 1.
Example:
Input sentence: 'I hate cookies'
Output example: [0,0,1,0,1]
For this, I am using keras library.
Now I am not sure which loss function I should use. Right now I just know two predefined loss functions a little bit better and both seem not to be good for my example:
Binary cross entropy: good if I have an output of just 0 or 1
Categorical cross entropy: Good if I have an output of an array with one 1 and all other values being 0.
Neither function seems to make sense for my example. What would you use, and why?
Edit
Another Question: Which Activation function would you use in Keras? | 0 | 1 | 7,333 |
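For reference, a minimal Keras sketch of a five-label output head. Note this pairs the sigmoid ('logistic') activation with binary_crossentropy, the common multi-label setup in which the cross entropy is applied per label independently; it is a slight variation on the exact wording of the answer above, and the shape constants are illustrative:

```python
from keras.models import Sequential
from keras.layers import LSTM, Dense

max_len, embed_dim = 20, 300  # illustrative sequence length / vector size

model = Sequential([
    LSTM(64, input_shape=(max_len, embed_dim)),
    # Five independent probabilities, one per label, e.g. [0, 0, 1, 0, 1].
    Dense(5, activation="sigmoid"),
])
# binary_crossentropy is applied to each of the five outputs independently.
model.compile(optimizer="adam", loss="binary_crossentropy")
```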
0 | 70,772,878 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-02-23T13:41:00.000 | 0 | 5 | 0 | Which loss function should I use in my LSTM and why? | 54,842,219 | 0 | python,python-3.x,tensorflow,keras | When it comes to regression problems in deep learning, mean squared error (MSE) is the most preferred loss function, but when it comes to categorical problems where you want your output to be 1 or 0, true or false, binary cross entropy is preferable. | I am trying to understand Keras and LSTMs step by step. Right now I am building an LSTM where the input is a sentence and the output is an array of five values which can each be 0 or 1.
Example:
Input sentence: 'I hate cookies'
Output example: [0,0,1,0,1]
For this, I am using keras library.
Now I am not sure which loss function I should use. Right now I just know two predefined loss functions a little bit better and both seem not to be good for my example:
Binary cross entropy: good if I have an output of just 0 or 1
Categorical cross entropy: Good if I have an output of an array with one 1 and all other values being 0.
Neither function seems to make sense for my example. What would you use, and why?
Edit
Another Question: Which Activation function would you use in Keras? | 0 | 1 | 7,333 |
0 | 54,928,019 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-02-23T14:52:00.000 | 0 | 1 | 0 | Read existing gurobi .lp-file and add constraints to it | 54,842,821 | 0 | python,gurobi | Without knowing your error message, I assume the problem is that you have not defined x.
The quickest way to fix this would probably be to reconstruct the variable name (that you defined earlier when building the model) from the SI values and then access the variables with getVarByName.
If this is slow because the model is big and you are accessing many variables, you could instead get the array of all variables with model.getVars(), then iterate over this and rebuild your multi-dimensional array (or tupledict) x by parsing the names of the variables. | I have a little problem using Gurobi in Python. I have a .lp file where my linear program is saved. To these constraints I want to add some additional constraints. Loading and optimizing works without any problems, but I just cannot add new constraints to my model. I do not know what I am doing wrong...
I hope there is someone, who finds my mistake!
Thanks!
My code looks like below (I made it a bit easier)
SI is a two-dimensional array containing the data for every variable.
from gurobipy import *
model = read("testdatei.lp")
for j in range(len(SI)):
model.addConstr(x[SI[j][0], SI[j][1], SI[j][2], SI[j][3], SI[j][4], SI[j][5]] == 1) | 0 | 1 | 392 |
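A sketch of the getVarByName approach from the answer, assuming the variables were written to the .lp file with a name pattern like x[a,b,c,d,e,f]; the exact pattern depends on how the model was originally built, so inspect the .lp file first:

```python
from gurobipy import read

model = read("testdatei.lp")
for row in SI:  # SI: the two-dimensional array from the question
    # Hypothetical name pattern - it must match the variable names
    # actually written in the .lp file, so check that file first.
    name = "x[{},{},{},{},{},{}]".format(*row[:6])
    var = model.getVarByName(name)  # returns None if the name is wrong
    model.addConstr(var == 1)
model.optimize()
```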
0 | 54,850,244 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-02-23T15:12:00.000 | 3 | 1 | 0 | How to run predict() on "precomputed" data for clustering in python | 54,842,990 | 0.53705 | python,cluster-analysis | Most clustering algorithms, including AP, have no well-defined way to "predict" on new data. K-means is one of the few cases simple enough to allow a "prediction" consistent with the initial clusters.
Now sklearn has this oddity of trying to squeeze everything into a supervised API. Clustering algorithms have a fit(X, y) method, but ignore y, and are supposed to have a predict method even though the algorithms don't have such a capability.
For affinity propagation, someone at some point decided to add a predict based on k-means: it always predicts the nearest center. Computing the mean is only possible with coordinate data, and hence the method fails with metric=precomputed.
If you want to replicate this behavior, compute the distances to all cluster centers and choose the argmin; that's all. You can't fit this into the sklearn API easily with "precomputed" metrics. You could require the user to pass a distance vector to all "training" examples for the precomputed metric, but only a few of them are needed...
In my opinion, I'd rather remove this method altogether:
It is not in any published research on affinity propagation that I know of
Affinity propagation is based on concepts of similarity ("affinity") not on distance or means
This predict will not return the same results as when the points were labeled by AP, because AP labels points using a "propagated responsibility" rather than the nearest "center". (The current sklearn implementation may be losing this information...)
Clustering methods don't have a consistent predict anyway - it's not a requirement to have this.
If you want to do this kind of prediction, just pass the cluster centers to a nearest neighbor classifier. That is what is re-implemented here, a hidden NN classifier. So you get more flexibility if you make prediction a second (classification) step.
Note that in clustering it is not common to do any test-train split, because you don't use the labels anyway, and you use only unsupervised evaluation methods, if any at all (because these have their own array of issues) - you cannot reliably do "hyperparameter optimization" here, but have to choose parameters based on experience and on humans looking at the data. | I have my own precomputed data for running AP or Kmeans in Python. However, when I go to run predict(), as I would like to run a train() and test() on the data to see if the clusterings have good accuracy on the class or clusters, Python tells me that predict() is not available for "precomputed" data.
Is there another way to run a train / test on clustered data in python? | 0 | 1 | 439 |
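A sketch of the argmin replacement described in the answer, assuming you can build a distance matrix from each new point to each exemplar chosen by affinity propagation:

```python
import numpy as np

# dist_to_exemplars: shape (n_new_points, n_exemplars); column j holds
# each new point's distance to the j-th exemplar chosen by AP, computed
# with the same precomputed metric that was used for fitting.
labels = np.argmin(dist_to_exemplars, axis=1)
```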
0 | 58,692,607 | 0 | 1 | 0 | 0 | 3 | false | 0 | 2019-02-23T16:30:00.000 | 1 | 6 | 0 | Pip Install Tensorflow Failed | 54,843,623 | 0.033321 | python,tensorflow,object-detection-api | I had the same issue on windows 10, I find out that tensorflow work with python x64 installation and the command I used is the follow:
pip install --upgrade tf | I am trying to install TensorFlow in Python using the pip command
pip install tensorflow
, but unfortunately, I received the following error:
Could not find a version that satisfies the requirement tensorflow (from versions: ) No matching distribution found for tensorflow
I have also tried to install tensorflow using the following command
pip install --upgrade
https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-0.12.0rc0-cp35-cp35m-win_amd64.whl, but was again faced with the following error:
tensorflow-0.12.0rc0-cp35-cp35m-win_amd64.whl is not a supported wheel on this platform. Additionally, I have checked the same commands against Python versions 3.5.x and 3.6.x, and obviously 3.7 as well, but those didn't work. | 0 | 1 | 10,204 |
0 | 54,843,717 | 0 | 1 | 0 | 0 | 3 | false | 0 | 2019-02-23T16:30:00.000 | 1 | 6 | 0 | Pip Install Tensorflow Failed | 54,843,623 | 0.033321 | python,tensorflow,object-detection-api | Are you sure that you are capitalizing/spelling properly? The command prompt is case-sensitive. The input I use is:
cd C:\path\to\the\directory\python\is\installed\in (cd, space, the path to the directory) then:
python -m pip install TensorFlow
It should work afterwards. | I am trying to install TensorFlow in Python using the pip command
pip install tensorflow
, but unfortunately, I received the following error:
Could not find a version that satisfies the requirement tensorflow (from versions: ) No matching distribution found for tensorflow
I have also tried to install tensorflow using the following command
pip install --upgrade
https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-0.12.0rc0-cp35-cp35m-win_amd64.whl, but was again faced with the following error:
tensorflow-0.12.0rc0-cp35-cp35m-win_amd64.whl is not a supported wheel on this platform. Additionally, I have checked the same commands against Python versions 3.5.x and 3.6.x, and obviously 3.7 as well, but those didn't work. | 0 | 1 | 10,204 |
0 | 54,843,877 | 0 | 1 | 0 | 0 | 3 | false | 0 | 2019-02-23T16:30:00.000 | 2 | 6 | 0 | Pip Install Tensorflow Failed | 54,843,623 | 0.066568 | python,tensorflow,object-detection-api | I think Tensorflow does not currently have support for Python 3.7 and if you have Python 3.7 currently installed this might be the cause of the error message Could not find a version that satisfies the requirement tensorflow (from versions: ) No matching distribution found for tensorflow
You can downgrade to Python 3.6.x and then install tensorflow using pip. | I am trying to install TensorFlow in Python using the pip command
pip install tensorflow
, but unfortunately, I received the following error:
Could not find a version that satisfies the requirement tensorflow (from versions: ) No matching distribution found for tensorflow
I have also tried to install tensorflow using the following command
pip install --upgrade
https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-0.12.0rc0-cp35-cp35m-win_amd64.whl, but was again faced with the following error:
tensorflow-0.12.0rc0-cp35-cp35m-win_amd64.whl is not a supported wheel on this platform. Additionally, I have checked the same commands against Python versions 3.5.x and 3.6.x, and obviously 3.7 as well, but those didn't work. | 0 | 1 | 10,204 |
0 | 54,846,520 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2019-02-23T21:20:00.000 | 1 | 1 | 0 | What is the input format for word2vec features in SVM classification task? | 54,846,314 | 1.2 | python,classification,svm,word2vec | Vector of many features
From the perspective of an SVM, each dimension of a word vector would be a separate numeric feature - each dimension in that vector represents a numeric metric representing something different.
The same applies for non-SVM classifiers. For example, if you'd have a neural network, and your input features were that word vector of length 300 and (for the sake of a crude example) a bit stating whether that word was capitalized, then you'd concatenate those things and would have 301 numbers as your input; you'd treat that feature just as each of the 300 dimensions. | I am doing a binary classification task using linear SVM in scikit learn. I use nominal features and word vectors. I obtained the word vectors using the pretrained Google word2vec, however, I am not sure how SVM can handle word vectors as a feature.
It seems that I need to "split" each vector into 300 separate features (= 300 vector dimensions), because I can't pass the vector as a whole to the SVM. But that doesn't seem right, as the vector should be treated as one feature.
What would be the correct way to represent a vector in this case? | 0 | 1 | 947 |
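A sketch of assembling the feature matrix the answer describes, assuming word_vecs is an (n_samples, 300) array of per-document word2vec features (e.g. averaged word vectors), nominal is an (n_samples, k) array of encoded nominal features, and y holds the binary labels:

```python
import numpy as np
from sklearn.svm import LinearSVC

# word_vecs: (n_samples, 300) word2vec features per document (assumed);
# nominal:   (n_samples, k) encoded nominal features (assumed).
# Each of the 300 dimensions simply becomes one numeric column.
X = np.hstack([word_vecs, nominal])
clf = LinearSVC().fit(X, y)  # y assumed: the binary labels
```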
0 | 54,921,745 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-02-24T05:49:00.000 | 0 | 1 | 0 | Why am I getting cv2 error in raspbianOS and not raspberrypi shell | 54,849,139 | 0 | python | Please try to run your code after executing the below command.
python -m pip install opencv-contrib-python | I recently installed and compiled openCV on my raspberrypi.
Now if I use the command import cv2 in my Raspbian OS (either in the Python shell or the IDE), I get the error
no module named cv2 found but the same command works in the raspberrypi shell.
How do I resolve this?
File "home/pi/Desktop/FR.py",line 2,in module
import cv2
File "/usr/lib/python3/dist-packages/thonny/backend.py",line
305,in_custom_import
module=self_original_import(*args,**kw) ImportError:no module named
'cv2' | 0 | 1 | 252 |
0 | 54,855,155 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2019-02-24T09:50:00.000 | 2 | 1 | 0 | Intent classification with large number of intent classes | 54,850,657 | 1.2 | python,tensorflow,nlp,text-classification | First, word2vec and GloVe are, almost, dead. You should probably consider using more recent embeddings like BERT or ELMo (both of which are sensitive to the context; in other words, you get different embeddings for the same word in a different context). Currently, BERT is my own preference since it's completely open-source and available (gpt-2 was released a couple of days ago which is apparently a little bit better. But, it's not completely available to the public).
Second, when you use BERT's pre-trained embeddings, your model has the advantage of seeing a massive amount of text (Google massive) and thus can be trained on small amounts of data, which will increase its performance drastically.
Finally, if you could classify your intents into some coarse-grained classes, you could train a classifier to specify which of these coarse-grained classes your instance belongs to. Then, for each coarse-grained class train another classifier to specify the fine-grained one. This hierarchical structure will probably improve the results. Also for the type of classifier, I believe a simple fully connected layer on top of BERT would suffice. | I am working on a data set of approximately 3000 questions and I want to perform intent classification. The data set is not labelled yet, but from the business perspective, there's a requirement of identifying approximately 80 various intent classes. Let's assume my training data has approximately equal number of each classes and is not majorly skewed towards some of the classes. I am intending to convert the text to word2vec or Glove and then feed into my classifier.
I am familiar with cases in which I have a smaller number of intent classes, such as 8 or 10 and the choice of machine learning classifiers such as SVM, naive bais or deeplearning (CNN or LSTM).
My question is that if you have had experience with such large number of intent classes before, and which of machine learning algorithm do you think will perform reasonably? do you think if i use deep learning frameworks, still large number of labels will cause poor performance given the above training data?
We need to start labelling the data and it is rather laborious to come up with 80 classes of labels and then realise that it is not performing well, so I want to ensure that I am making the right decision on how many classes of intent maximum I should consider and what machine learning algorithm do you suggest?
Thanks in advance... | 0 | 1 | 1,227 |
0 | 54,856,783 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-02-24T21:29:00.000 | 0 | 1 | 0 | How to revert keras model to previous epoch weights after train_on_batch nan update | 54,856,754 | 1.2 | python,tensorflow,keras,deep-learning,nan | Since you have not pasted your code and weights, I can't tell you much, but I suspect this problem may be due to dropout or regularisation. If you are using either of the two techniques, set the parameters or dropout percentage properly for your network; a high percentage in a small network will lead to this sort of problem, and the same goes for regularization.
And for reverting and saving models, use checkpoints. | I'm having trouble resetting my keras model to the weights it had in the previous epoch after I hit a train_on_batch update that makes some of the weights nans.
I have tried to save the model weights after each training step and then to load the "good" (non-nan) weights back into the keras model after a nan training update.
This seems to work fine - when I print the result of model.get_weights() after loading the old weights file into the model, the resulting weights contain no nans (and predict using them also gives a non-nan output).
However, now when I try to train_on_batch again, this time using a new batch, I get a nan update again immediately. I've tried with multiple randomly chosen batches and the nan update happens each time.
Is there something (maybe a parameter) that changes in the model or optimizer configuration when a nan train_on_batch update occurs that needs to be reset for training to continue once I change out the weights?
I would also like to avoid using model.save() and load_model() in the solution.
(keras 2.2.4, tensorflow 1.12.0)
Any thoughts are appreciated! | 0 | 1 | 896 |
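A sketch of the checkpoint-and-revert loop the question describes, using only in-memory weights (so it avoids model.save()/load_model(), per the question's constraint). Note that, as the question found, restoring weights alone may not stop further nan updates if the batches themselves produce huge gradients; clipping or a lower learning rate is usually also needed:

```python
import numpy as np

# model assumed compiled; batches assumed to yield (x_batch, y_batch).
good_weights = model.get_weights()
for x_batch, y_batch in batches:
    loss = model.train_on_batch(x_batch, y_batch)
    if np.isnan(loss).any():
        # Roll back to the last known-good weights.
        model.set_weights(good_weights)
    else:
        good_weights = model.get_weights()
```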
0 | 54,862,189 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-02-24T21:57:00.000 | 1 | 1 | 0 | Lucas Kanade: How to calculate distance between tracked points | 54,856,954 | 1.2 | python,opencv,computer-vision,computer-science | In new_pts, points keep the same index. But they may not be found; check the status array: if status[i] == 1, then new_pts[i] contains the new coordinates of old_pts[i].
For more robustness, you can compute the forward flow (goodFeaturesToTrack(frame1) -> LK flow) and the backward flow (goodFeaturesToTrack(frame2) -> LK flow), and keep only the points whose coordinates agree in both directions. | I'm using the lucas-kanade opencv implementation to track objects between frames. I want to be able to do the following two things:
Calculate the distance moved by each point between frames
Track bounding boxes for each object across frames
I have obtained the features to track using cv2.goodFeaturesToTrack(). I also add the bounding boxes of objects to the features to be tracked. Right now I am using the following to calculate distance between the points
np.sqrt(np.square(new_pts - old_pts).sum(axis=1).sum(axis=1)). I am not quite sure if this is the correct way to do this because the indices of the points might be different in the new_pts.
Is the assumption that every index in old_pts corresponds to the same feature in new_pts array correct?
Secondly, is there a way to track bounding boxes across frames using lucas kanade? | 0 | 1 | 241 |
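A sketch of using the status array as the answer describes, after cv2.calcOpticalFlowPyrLK (array shapes follow OpenCV's (N, 1, 2) convention for point arrays):

```python
import numpy as np

# From cv2.calcOpticalFlowPyrLK: old_pts, new_pts are (N, 1, 2) float32
# and status is (N, 1); both point arrays share the same indices.
ok = status.ravel() == 1
good_old = old_pts[ok].reshape(-1, 2)
good_new = new_pts[ok].reshape(-1, 2)

# Per-point displacement between the two frames, index-aligned.
dists = np.linalg.norm(good_new - good_old, axis=1)
```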
0 | 54,870,187 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2019-02-25T10:27:00.000 | 1 | 1 | 0 | Is there a way to impose a constraint in tensor flow, could I enforce some rule along the way? | 54,864,100 | 1.2 | python-3.x,tensorflow,constraints | If by "generated by TensorFlow" you mean generated by a neural network, I don't think it is possible to do that in general. You can't really guarantee that the output of a neural network never violates such hard constraints in general, especially at test time.
Here's what you could do:
Add a loss term, something like max(0, (a+b)/2 - 10) (see the sketch after this row). This will not guarantee that your constraint is not violated (the optimization of the NN is "best-effort"). This loss function is btw very similar to the hinge loss used in support vector machines.
Use an appropriate activation function. E.g. if you know your data must lie between [0, 1], use the sigmoid activation on the output.
"Project" the output back to the allowed range if it is outside of it.
While the last two options guarantee feasibility, it is not always possible to do that, or it is not clear how to do it and - even worse - how this will affect the learning. For example, if you see that (a+b)/2 >= 10, what will you do? Will you decrease b until the constraint is fulfilled, or trade off both a and b somehow? Sometimes it is possible to define the "closest feasible point" w.r.t. some metric, but not in general. | Is there some way to impose a constraint on the data generated by TensorFlow? For example, if my model produced two outputs, can you impose some sort of constraint on them; say, if a and b were the outputs, could you pre-enforce something like (a+b)/2 < 10 so the model wouldn't break this rule?
Thanks in advance | 0 | 1 | 236 |
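A sketch of the hinge-style penalty term from option 1 in the answer above, in TensorFlow; the penalty weight, the threshold, and the function signature are all illustrative:

```python
import tensorflow as tf

def constrained_loss(y_true, y_pred, base_loss, weight=1.0):
    a, b = y_pred[:, 0], y_pred[:, 1]
    # Zero when (a + b) / 2 <= 10; grows linearly once violated.
    penalty = tf.reduce_mean(tf.maximum(0.0, (a + b) / 2.0 - 10.0))
    return base_loss + weight * penalty
```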
0 | 54,865,791 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2019-02-25T11:49:00.000 | 0 | 2 | 0 | Python packages installing error in deep-learning environment | 54,865,526 | 0 | python,pip,deep-learning | terminal one as administrator if using Windows and use ubuntu before command use sudo. | When I am installing the new package in my deep-learning environment it gives me this error:
Could not install packages due to an EnvironmentError: [WinError 32]
The process cannot access the file because it is being used by another
process: Consider using the --user option or check the permissions.
Please help to resolve this | 0 | 1 | 170 |
0 | 54,871,249 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-02-25T16:55:00.000 | 1 | 2 | 0 | Can Image Recognition deal with classes, where the deciding quality is not directly visible? | 54,871,092 | 1.2 | python,keras,image-recognition | It could, and also could not. For a CNN to provide good results with no contextual input, there must be some form of correlation between the input and the output.
This could actually set you up for a cool experiment to check yourself: run this through some run-of-the-mill CNN, and if it evaluates well (through cross-validation) then you've probably shown the correlation exists.
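A minimal sketch of such an experiment, assuming Keras and the 5 age-rating classes from the question (all sizes are placeholders):
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, 3, activation='relu', input_shape=(128, 128, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(5, activation='softmax'),  # one probability per age-rating class
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])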
(Note: if the model does not test well, that is not evidence that the correlation isn't there; it's probably not likely, but not guaranteed.) | I have the following problem and am not quite sure if it is solvable by image recognition (and convolutional neural networks).
I have a dataset of 400k pictures divided into 5 classes. The pictures are screenshots of apps, which are put into the 5 classes depending on what age rating they received.
For example: I have 200k labeled as class 0, which means they are suitable all ages (according to the age rating); I have 50k pictures labeled as class 1 (suitable for children aged 6+) and so on.
With this data I want to train a neural network, that can tell me, which age rating a screenshot (and therefore the corresponding game) likely has.
Is this a problem, which is manageable by image recognition?
I've looked into examples (mostly Keras tutorials) for image recognition and all of them deal with problems, which are distinctly visible (like "does the image show a cat or a dog"). Browsing through my dataset I realized, that some of the pictures are pretty similar, although belonging to different classes.
Can a convolutional neural network (or any other type of image recognition algorithm) deal with classes, where the deciding factor is not directly visible? Is this just a problem of how deep the network is?
I'd be very thankful, if someone could point me in the general direction on where to look for further information. | 0 | 1 | 36 |
0 | 54,871,548 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2019-02-25T17:17:00.000 | 1 | 4 | 0 | How to standardise different date formats in pandas? | 54,871,446 | 0.049958 | python,pandas | Try removing the format parameter and setting infer_datetime_format=True in the arguments you pass to pd.to_datetime | I have a dataset in csv format which contains dates in a column. I have imported this dataset into python pandas, and this date column is shown as an object. I need to convert this column to datetime, but I have a problem. This date column has dates in two formats
1. 11/7/2013 11:51
2. 13-07-2013 08:33:16
I need to convert one format to the other one in order to have a standard date format in my python to do analysis. How can I do this?
There are many rows of dates in both these formats, so when I try to convert the second format to the first format using the below code
print(df['date'].apply(lambda x: pd.to_datetime(x, format='%d/%m/%Y
%H:%M')))
I get the below error
ValueError: time data '13-07-2013 08:33:16' does not match format
'%d/%m/%Y %H:%M' (match)
so what would be the best method to standardise this column in one format? | 0 | 1 | 2,154 |
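A sketch expanding on the answer above: parse each format separately with errors='coerce' and combine the results, assuming both formats are day-first:
import pandas as pd

slash = pd.to_datetime(df['date'], format='%d/%m/%Y %H:%M', errors='coerce')
dash = pd.to_datetime(df['date'], format='%d-%m-%Y %H:%M:%S', errors='coerce')
df['date'] = slash.fillna(dash)  # rows that fail one format are filled from the other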
0 | 68,398,101 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-02-25T18:23:00.000 | 0 | 2 | 0 | Plot a density histogram with Plotly | 54,872,488 | 0 | python,plotly | Try this:
go.Histogram(x=some_vec, histnorm="probability density") | I'm looking for a way to plot a density histogram with Plotly, like density=True with a numpy histogram. My variable is a continuous one from 0 to 20. I already have a count on the y-axis with bins. So I'm looking to replace these counts with a percentage (or density). | 0 | 1 | 207
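A fuller, runnable sketch of the one-liner above (the uniform data is a placeholder for the asker's variable):
import numpy as np
import plotly.graph_objects as go

values = np.random.uniform(0, 20, 500)  # placeholder for the continuous variable
fig = go.Figure(go.Histogram(x=values, histnorm="probability density"))
fig.show()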
0 | 54,880,674 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-02-26T06:07:00.000 | 1 | 3 | 0 | Is AUC a better metric than accuracy in case of imbalanced datasets in machine learning? If not, which is the best metric? | 54,879,340 | 0.066568 | python,machine-learning,artificial-intelligence,roc,auc | Neither are good for imbalanced datasets. Use the area under the precision recall curve instead. | Is AUC better at handling imbalanced data? In most cases, if I am dealing with imbalanced data, accuracy does not give a correct idea. Even though accuracy is high, the model has poor performance. If it's not AUC, which is the best measure to handle imbalanced data? | 0 | 1 | 1,508
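In scikit-learn, the metric the answer points to is average_precision_score, which summarizes the precision-recall curve; a sketch where y_true and y_scores are placeholders:
from sklearn.metrics import average_precision_score

ap = average_precision_score(y_true, y_scores)  # y_scores: predicted probabilities for the positive class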
0 | 55,558,787 | 0 | 0 | 1 | 0 | 2 | false | 0 | 2019-02-26T08:57:00.000 | 0 | 3 | 0 | how to increase fps for raspberry pi for object detection | 54,881,654 | 0 | python,opencv,raspberry-pi,object-detection,yolo | The Raspberry Pi does not have GPU processors, and because of that it is very hard for it to do image recognition at a high fps. | I'm having low fps for real-time object detection on my raspberry pi
I trained the yolo-darkflow object detection on my own data set using my laptop windows 10 .. when I tested the model for real-time detection on my laptop with webcam it worked fine with high fps
However when trying to test it on my raspberry pi, which runs on Raspbian OS, it gives very low fps rate that is about 0.3 , but when I only try to use the webcam without the yolo it works fine with fast frames.. also when I use Tensorflow API for object detection with webcam on pi it also works fine with high fps
can someone suggest me something please? is the reason related to the yolo models or opencv or python? how can I make the fps rate higher and faster for object detection with webcam? | 0 | 1 | 1,551
0 | 66,489,611 | 0 | 0 | 1 | 0 | 2 | false | 0 | 2019-02-26T08:57:00.000 | 0 | 3 | 0 | how to increase fps for raspberry pi for object detection | 54,881,654 | 0 | python,opencv,raspberry-pi,object-detection,yolo | My detector on raspberry pi without any accelerator can reach 5 FPS.
I used SSD MobileNet, and quantized it after training.
TensorFlow Lite supplies an object detection demo that can reach about 8 FPS on a Raspberry Pi 4. | I'm having low fps for real-time object detection on my raspberry pi
I trained the yolo-darkflow object detection on my own data set using my laptop windows 10 .. when I tested the model for real-time detection on my laptop with webcam it worked fine with high fps
However when trying to test it on my raspberry pi, which runs on Raspbian OS, it gives very low fps rate that is about 0.3 , but when I only try to use the webcam without the yolo it works fine with fast frames.. also when I use Tensorflow API for object detection with webcam on pi it also works fine with high fps
can someone suggest me something please? is the reason related to the yolo models or opencv or python? how can I make the fps rate higher and faster for object detection with webcam? | 0 | 1 | 1,551
0 | 55,735,532 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-02-26T18:14:00.000 | 0 | 2 | 0 | Running Facenet using OpenVINO | 54,891,713 | 0 | python,tensorflow,openvino | You can use Python APIs to integrate OV inference into your Python application. Please see inference_engine/samples/python_samples folder for existing Python samples. | I am stuck at a problem using OpenVINO. I am trying to run the facenet after converting model using OpenVINO toolkit but I am unable to use .npy and .pickle for complete face recognition. I am successful in converting .pb file to .bin and .xml file using OpenVino toolkit. | 0 | 1 | 636 |
0 | 55,158,313 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-02-26T18:14:00.000 | 0 | 2 | 0 | Running Facenet using OpenVINO | 54,891,713 | 0 | python,tensorflow,openvino | OpenVINO converts the model to an intermediate representation, which is compatible across multiple hardware targets. It can improve the performance of your model too. Since you have already mentioned that you were able to convert your model to IR format, the next phase is inference, for which you can use the .xml and .bin files.
Can you please state what exactly you intend to carry out with the .npy or .pickle files? | I am stuck at a problem using OpenVINO. I am trying to run the facenet after converting model using OpenVINO toolkit but I am unable to use .npy and .pickle for complete face recognition. I am successful in converting .pb file to .bin and .xml file using OpenVino toolkit. | 0 | 1 | 636
0 | 54,894,944 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2019-02-26T21:46:00.000 | 1 | 2 | 0 | Finding Largest Subset of Data where Average Matches Criteria | 54,894,635 | 1.2 | python,optimization,weighted-average | Start by translating the desired range to 0, just for convenience. I'll translate to the lower bound, although the midpoint is also a good choice.
This makes your data set [10, 1, -10, 20, -12]. The set sum is 9; you need it to be in the range 0 to upper_bound * len(data).
This gives you a tractable variation of the "target sum" problem: find a subset of the list that satisfies the sum constraint. In this case, the satisfying subsets include [10, 1, -10] (sum 1, within [0, 3]) and [10, -10] (sum 0, within [0, 2]). You can find this by enhancing the customary target-sum problems to include the changing sum: the "remaining amount" will include the change from the mean calculation.
Can you finish from there? | I'm trying to find the largest subset sum of a particular data set, where the average of a field in the data set matches predetermined criteria.
For example, say I have a people's weights (example below) and my goal is to find the largest weight total where the average weight of the resulting group is between 200 and 201 pounds.
210
201
190
220
188
Using the above, the largest sum of weights where the average weight is between 200 and 201 pounds is from persons 1, 2, and 3. The sum of their weights is 601, and the average weight between them is 200.3.
Is there a way to program something to do the above, other than brute force, preferably using python? I'm not even sure where to start researching this so any help or guidance is appreciated. | 0 | 1 | 35 |
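A minimal recursive sketch of the translation idea from the accepted answer; a real solver would memoize on (index, count, shifted sum), but this already shows the shifted feasibility check:
weights = [210, 201, 190, 220, 188]
lo, hi = 200, 201
shifted = [w - lo for w in weights]  # translate the lower bound to 0
best = {"sum": -1, "subset": ()}

def search(i, count, s, chosen):
    if i == len(weights):
        # feasible: non-empty and 0 <= shifted sum <= (hi - lo) * count
        if count and 0 <= s <= (hi - lo) * count and sum(chosen) > best["sum"]:
            best["sum"], best["subset"] = sum(chosen), tuple(chosen)
        return
    search(i + 1, count, s, chosen)                                  # skip weights[i]
    search(i + 1, count + 1, s + shifted[i], chosen + [weights[i]])  # take weights[i]

search(0, 0, 0, [])
print(best)  # {'sum': 601, 'subset': (210, 201, 190)}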
0 | 54,897,167 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2019-02-26T21:57:00.000 | 3 | 1 | 0 | Why should I use tf.data? | 54,894,799 | 1.2 | python,numpy,tensorflow,machine-learning | The tf.data module has specific tools which help in building an input pipeline for your ML model. An input pipeline takes in the raw data, processes it, and then feeds it to the model.
When should I use tf.data module?
The tf.data module is useful when you have a large dataset in the form of a file such as .csv or .tfrecord. tf.data.Dataset can perform shuffling and batching of samples efficiently. Useful for large datasets as well as small datasets. It could combine train and test datasets.
How can I create batches and iterate through them for training?
I think you can efficiently do this with NumPy and the np.reshape method. Pandas can read data files for you. Then, you just need a for ... in ... loop to get each batch and pass it to your model.
How can I feed NumPy data to a TensorFlow model?
There are two options to use tf.placeholder() or tf.data.Dataset.
The tf.data.Dataset is a much easier implementation. I recommend using it. It also has a good set of methods.
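A minimal sketch of feeding NumPy arrays through tf.data (random arrays as placeholders; eager iteration assumes TF 2.x or eager mode enabled):
import numpy as np
import tensorflow as tf

features = np.random.rand(1000, 8).astype(np.float32)
labels = np.random.rand(1000, 1).astype(np.float32)
dataset = tf.data.Dataset.from_tensor_slices((features, labels)).shuffle(1000).batch(32)
for batch_x, batch_y in dataset:
    pass  # feed batch_x, batch_y to the model here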
The tf.placeholder creates a placeholder tensor which feeds the data to a TensorFlow graph. This process would consume more time feeding in the data. | I'm learning tensorflow, and the tf.data API confuses me. It is apparently better when dealing with large datasets, but when using the dataset, it has to be converted back into a tensor. But why not just use a tensor in the first place? Why and when should we use tf.data?
Why isn't it possible to have tf.data return the entire dataset, instead of processing it through a for loop? When just minimizing a function of the dataset (using something like tf.losses.mean_squared_error), I usually input the data through a tensor or a numpy array, and I don't know how to input data through a for loop. How would I do this? | 0 | 1 | 1,075 |
0 | 54,920,367 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-02-27T17:56:00.000 | 1 | 1 | 0 | Does the Google News Word2Vec model take up storage every time you run it? | 54,911,712 | 0.197375 | python,nlp,gensim,word2vec,word-embedding | Just loading a model won't usually use any more disk storage. (An exception: if load or use needs addressable memory beyond your RAM, you may start using virtual memory, which might show up as less disk space depending on your OS. But, with these sorts of models, you want to avoid relying on any virtual memory, as basic most_similar() operations cycle through the full model, & will be very slow if they're reading from disk each time.)
Loading the model will use memory, then more when 1st doing most_similar(). (That requires unit-normalized vectors, which are calculated the 1st time needed then cached.)
But terminating a notebook should free that memory. (Note that closing a tab may not cleanly terminate a Jupyter notebook. If the notebook is still running at the notebook server, even with no browsers viewing it, it will still use/hold memory.) | This may seem like an odd question but I'm new to this so thought I'd ask anyway.
I want to use this Google News model over various different files on my laptop. This means I will be running this line over and over again in different Jupyter notebooks:
model=word2vec.KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin",binary=True)
Does this eat 1) Storage (I've noticed my storage filling up exponentially for no reason)
2) Less memory than it would otherwise if I close the previous notebook before running the next.
My storage has gone down by 50GB in one day and the only thing I have done on this computer is run the Google News model (I didn't do most_similar()). Restarting and closing notebooks hasn't helped and there aren't any big files on the laptop. Any ideas?
Thanks. | 0 | 1 | 149 |
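If memory pressure (and hence virtual memory) is the underlying issue in the record above, gensim's load_word2vec_format has a limit parameter that loads only the first N vectors; a sketch:
from gensim.models import KeyedVectors

model = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True, limit=500000)  # ~1/6 of the full 3M vectors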
0 | 54,914,559 | 0 | 1 | 0 | 0 | 1 | true | 4 | 2019-02-27T20:19:00.000 | 12 | 1 | 0 | what's the difference between "import keras" and "import tensorflow.keras" | 54,913,830 | 1.2 | python-3.x,tensorflow,keras | tensorflow.keras is a version of the Keras API implemented specifically for use with TensorFlow. It is a part of the TensorFlow repo, and from TF version 2.0 it will become the main high-level API, replacing tf.layers and slim.
The only reason to use standalone keras is to maintain framework-agnostic code, i.e. use it with another backend. | I was wondering, what's the difference between importing keras from tensorflow using import tensorflow.keras or just pip installing keras alone and importing it using import keras as both seemed to work nicely so far, the only difference I noticed is that i get Using TensorFlow backend. in the command line every time I execute the one using keras. | 0 | 1 | 1,437 |
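A minimal illustration of the two import styles compared in the record above:
import tensorflow as tf
from tensorflow import keras  # the Keras API bundled with TensorFlow
# versus the standalone package (installed separately with pip):
import keras                  # prints "Using TensorFlow backend." on import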
0 | 60,148,794 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2019-02-28T03:04:00.000 | 5 | 2 | 0 | Viewing Graph from saved .pbtxt file on Tensorboard | 54,917,785 | 0.462117 | python,tensorflow,tensorboard | Open TensorBoard and use the "Upload" button on the left; uploading the pbtxt file will directly open the graph in TensorBoard.
0 | 56,553,227 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-02-28T10:14:00.000 | 0 | 1 | 1 | Model Deployment On GCP Without Cloud Storage Bucket | 54,923,211 | 0 | python,google-cloud-ml | For model deployment you need a Google Cloud Storage using AI Platform.
Another option is to use AI platform training (local or in GCP), output the model (SavedModel format) to a local folder or Cloud Storage and from there using TF Serving in Compute Engine Instance. | I have developed a Tensorflow based machine learning model on local Machine. And I want to deploy it in Google Cloud Platform (Cloud ML Engine) for predictions.
The model reads input data from Google Bigquery and the output predictions has to be written in Google Bigquery only.
There are some data preparation scripts which has to be run before the model prediction is run. I have used Google Cloud Storage for model storage and used it for deployment, i have deployed it successfully.
But, instead of using Google Cloud Storage for saving a model (i.e. .pb or .pkl model file) can i store it on GCP VM (Or Local machine) and call it from Cloud ML Engine for prediction? Is it possible? or I have only a option to upload Model directory to a Cloud Storage bucket which i will use it for prediction?
Could you please help me on this. | 1 | 1 | 147 |
0 | 54,934,592 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-02-28T15:02:00.000 | 0 | 1 | 0 | Python: Image processing create wrinkled paper effect | 54,928,574 | 0 | python,image-processing | Instead of using transparency, assuming you have two images of the same dimensions, one bright with the wrinkled paper and one with a dark text on a white background, you could try to take for every pixel the minimum of the values of the corresponding pixels in the two images.
In this way you should succeed in merging the text (darker than the wrinkled paper) and the wrinkled paper (darker than the original text background). | Maybe it is a little bit hard to describe my problem. I am searching for an algorithm in Python, to create wrinkled paper effect on a white image with some text.
My first try was adding some real wrinkled paper image (with transparency) to the image with text. This looks nice, but has the side effect that the text is not really wrinkled.
So I am looking for a better solution, any ideas? Thanks | 0 | 1 | 408 |
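A minimal sketch of the pixel-wise minimum idea from the answer above, assuming two same-sized images with hypothetical file names:
import cv2
import numpy as np

paper = cv2.imread('wrinkled_paper.png')
text = cv2.imread('text_on_white.png')
merged = np.minimum(paper, text)  # keep the darker value at every pixel
cv2.imwrite('merged.png', merged)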
0 | 54,950,068 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-02-28T16:24:00.000 | 0 | 1 | 0 | Intersection of interpol1d objects | 54,930,134 | 0 | python,python-3.x,scipy | Use scipy.optimize.brentq for bracketed root-finding:
brentq(lambda x: interp1d(xx, yy)(x) - interp1d(xxx, yyy)(x), -1, 1) | I have 2 cumulative distributions that I want to find the intersection of. To get an underlying function, I used the scipy interpol1d function. What I’m trying to figure out now, is how to calculate their intersection. Not sure how I can do it. Tried fsolve, but I can’t find how to restrict the range in which to search for a solution (domain is limited). | 0 | 1 | 58 |
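A fuller sketch that restricts the bracket to the overlap of the two interpolation domains (xx/yy and xxx/yyy are the asker's sample arrays); brentq assumes exactly one sign change in the bracket:
import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import brentq

f, g = interp1d(xx, yy), interp1d(xxx, yyy)
lo, hi = max(np.min(xx), np.min(xxx)), min(np.max(xx), np.max(xxx))
x_cross = brentq(lambda x: f(x) - g(x), lo, hi)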
0 | 54,930,980 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-02-28T17:06:00.000 | 0 | 1 | 0 | model can predict new image | 54,930,833 | 0 | python-3.x,tensorflow,jupyter-notebook | By just looking at what you have posted you should replace image_resized.reshape(1,50,50,3) with image_resized.reshape(1,28,28,3) | ValueError Traceback (most recent call last) in () ----> 1 prediction = model.predict(image_resized.reshape(1,50,50,3)) 2 print('Prediction Score:\n',prediction[0]) ValueError: cannot reshape array of size 2352 into shape (1,50,50,3) | 0 | 1 | 30 |
0 | 54,935,427 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-02-28T21:02:00.000 | 0 | 1 | 0 | Tensorflow data pipeline: Slow with caching to disk - how to improve evaluation performance? | 54,934,207 | 0 | python,tensorflow,machine-learning,tensorflow-datasets | prefetch(1) means that there will be only one element prefetched, I think you may want to have it as big as the batch size or larger.
After the first cache you may try adding a second cache() call without providing a path, so it would cache in memory.
Maybe your HDD is just slow? ;)
Another idea: you could just manually write a compressed TFRecord after steps 1-4 and then read it with another dataset. A compressed file has lower I/O but causes higher CPU usage. | I've built a data pipeline. Pseudo code is as follows:
dataset ->
dataset = augment(dataset)
dataset = dataset.batch(35).prefetch(1)
dataset = set_from_generator(to_feed_dict(dataset)) # expensive op
dataset = Cache('/tmp', dataset)
dataset = dataset.unbatch()
dataset = dataset.shuffle(64).batch(256).prefetch(1)
to_feed_dict(dataset)
Steps 1 to 5 are required to generate the pretrained model outputs. I cache them as they do not change throughout epochs (pretrained model weights are not updated). Steps 5 to 8 prepare the dataset for training.
Different batch sizes have to be used, as the pretrained model inputs are of a much bigger dimensionality than the outputs.
The first epoch is slow, as it has to evaluate the pretrained model on every input item to generate templates and save them to the disk. Later epochs are faster, yet they're still quite slow - I suspect the bottleneck is reading the disk cache.
What could be improved in this data pipeline to reduce the issue?
Thank you! | 0 | 1 | 257 |
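A sketch of the in-memory second cache suggested above, slotted into the asker's pseudo-pipeline (dataset here stands for the output of the expensive steps 1-4):
import tensorflow as tf

dataset = dataset.cache('/tmp/precomputed')  # disk cache: survives across runs
dataset = dataset.cache()                    # no path: a second cache kept in RAM
dataset = dataset.unbatch().shuffle(64).batch(256)
dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)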
0 | 71,724,209 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2019-03-01T16:32:00.000 | 0 | 3 | 0 | What is the real number of CNN layers in yolov3? | 54,948,738 | 0 | python,conv-neural-network,object-detection,yolo | YOLO v3 has 107 layers in total, you should also count shortcut layers, route layers, upsample layers, and YOLO layers(32 in total). So, there are 75+32=107 layers in total. When you see indexes in shortcut or route layers, you will find that we count from 0. Therefore, yolo layers are in 82,94,106 layers. | I'm really confused with the architecture of yolov3. I've read the documentation and paper about it. Some people say that it has 103 convolutional layers, some others say that it has 53 layers. But when you count the convolutional layers in the .cfg file (after downloading it) it comes to about 75! ...What is missed here? What should I do to find it? This question is important for us because we need to cite this architecture in a paper and we need to know the exact size of the layers... | 0 | 1 | 5,589 |
0 | 54,958,225 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-03-01T18:33:00.000 | 0 | 1 | 0 | Reordering numpy 4D-array | 54,950,408 | 1.2 | python,numpy,matrix-multiplication,numpy-ndarray,array-broadcasting | If your matrix is called matrix, matrix.shape = (1,2,1,4) (as in my example above) does the trick. NumPy will automatically notice if your new shape is "out of bounds", and reorder the data correctly if it's not.
EDIT: You can also use newMatrix = numpy.reshape(matrix, (1,2,1,4)) to create a new matrix as a reshape of your first matrix. | I'm having troubles understanding how to manage and modify numpy matrices. I find it very difficult to "picture" the matrices in my head.
I have a (4x2x1x1) matrix which I want to make into a (1x2x1x4) matrix, such that I can apply matrix multiplication with another matrix which have the shape (3x2x1x1).
Thanks in advance! | 0 | 1 | 130 |
0 | 54,953,548 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-03-01T22:38:00.000 | 1 | 2 | 0 | Add pixel wise independent noise to image | 54,953,272 | 0.099668 | python,numpy,image-processing,scikit-learn,noise-generator | You have several options. If you want to take random samples with replacement, just use one of the numpy's builtin random modules (i.e., numpy.random.random). You could also use numpy.random.pareto for more dramatic/bursty noise. These methods generate independent samples.
If you have a distribution in the form of a set or array that you want to pull samples from without repetition (for instance you have an array [0.1, 0.3, 0.9] and want to generate noise with ONLY these values), you can use numpy.random.choice([0.1, 0.3, 0.9], size=n) to draw samples from your custom distribution; specify replace=False to draw without repetition. | My question is simple: I have an image and I want to add pixel wise independent noise to the image. The noise can be derived from any distribution such as Gaussian. What are the available modules in numpy/scikit-learn to do the same?
I do not have any code but I am learning about modules such as numpy.random.normal, etc. and I needed more clarification.
None of the modules explicitly say that if I draw samples from a distribution multiple times, the draws will be independent.
Thank you for suggestions. | 0 | 1 | 561 |
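A minimal NumPy sketch of the Gaussian option; draws from numpy.random.normal are i.i.d., so each pixel gets its own independent sample:
import numpy as np

img = np.random.rand(64, 64)                         # placeholder grayscale image in [0, 1]
noise = np.random.normal(0.0, 0.05, size=img.shape)  # one independent Gaussian draw per pixel
noisy = np.clip(img + noise, 0.0, 1.0)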
0 | 54,958,224 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-03-02T11:44:00.000 | 0 | 1 | 0 | Pandas - select lowest value to date | 54,958,144 | 0 | python,pandas | Without seeing your dataset it's hard to help you directly. The problem does boil down to the following. You need to select the range of data you want to work with (so select rows for the date range and columns for the user/speed).
That would look something like x = df.loc["2-4-2018":"2-4-2019", ['users', 'speed']] (a slice selects the whole label range, whereas a list would select only those two rows).
From there you could do a simple x['users'].min() for the value or x['users'].idxmin() for the index of the value.
I haven't played around with DataFrames for a while, but you're looking for how to slice DataFrames. | I'm new to Pandas.
I've got a dataframe where I want to group by user and then find their lowest score up until that date in their speed column.
So I can't just use df.groupby(['user'])['speed'].transform('min') as this would give the min of all values, not just from the current row to the first.
What can I use to get what I need? | 0 | 1 | 43 |
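For the running minimum the asker describes (min of all values from the first row up to the current one, per user), pandas' cumulative aggregation is a closer fit than transform('min'); a sketch assuming the frame is sorted by date:
df = df.sort_values('date')
df['min_speed_to_date'] = df.groupby('user')['speed'].cummin()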
0 | 54,962,810 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2019-03-02T17:22:00.000 | 0 | 1 | 0 | Why can python lists hold more data than numpy arrays? | 54,961,068 | 0 | python,arrays,numpy,memory,out-of-memory | Allocate the memory for a numpy array and never create a list in the first place.
memmap should not be necessary, as the original list fits in memory. | Sorry, I feel like this may be a basic question, but I did not find any "solutions" for this.
I am filling a python list with a lot of data, and finally want to convert it to a numpy.array for further processing.
However, when I call numpy.asarray(my_list), I get an out of memory error. Why does that happen? Is it because numpy.array objects are stored in consecutive memory blocks, and there is not enough memory space for that?
How do I best treat such great data volumes then? I guess numpy is definitely the way to go, so I am a bit curious, that I can handle such volumes with simple list objects but not with my current numpy approach.
Again, repeating my most important question: How can I best handle data, which fits into python lists (so I guess overall it somehow still fits in my memory), but cannot be converted to a numpy.array?
Thanks! | 0 | 1 | 85 |
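A sketch of the suggested approach: allocate the array once and fill it in place (stream_records and the sizes are hypothetical):
import numpy as np

n_rows, n_cols = 1_000_000, 10
data = np.empty((n_rows, n_cols), dtype=np.float64)  # one allocation, no Python list
for i, record in enumerate(stream_records()):
    data[i] = record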
0 | 54,964,640 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2019-03-02T23:40:00.000 | 0 | 2 | 0 | pandas read csv by taking samples | 54,964,173 | 0 | python,pandas,csv | Use skiprows with a complex selector:
Example would be
import random
import pandas as pd
num_rows, prob = 1000, 0.01  # total data rows in the file; fraction to keep
skip = set(random.sample(range(1, num_rows + 1), int((1.0 - prob) * num_rows)))
print(len(pd.read_csv(filename, usecols=[0], skiprows=lambda x: x in skip)))
There are also other ways of doing it by reading in chunks and then just skipping rows.
e.g. you could read with chunksize=100 and keep only the first row of each chunk, which is equivalent to skipping 99 of every 100 rows. | I have a large CSV file and I just want to take a 1% sample from it. Is there a good way to read the samples directly into pandas data frame without having to read the whole file and then discard 99% of the data? | 0 | 1 | 1,045
0 | 58,200,454 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2019-03-03T14:12:00.000 | 1 | 2 | 0 | How to add augmented images to original dataset using Pytorch? | 54,969,705 | 1.2 | python,pytorch | Why do you want it? Generally speaking, it is enough to increase the number of epochs over the dataset, and your model will see the original and the augmented version of every image at least once (assuming a relatively high number of epochs).
Explanation:
For instance, if your augmentation has a chance of 50% to be applied, after 100 epochs, for every sample you will get ~50 samples of the original image and ~50 augmented samples. So, increasing the dataset size is equivalent to adding epochs, but (maybe) less efficient in terms of memory (you need to store the images in memory to get high performance). | From my understanding, RandomHorizontalFlip etc. replace the image rather than adding new images to the dataset. How do I increase my dataset size by adding augmented images to the dataset using PyTorch?
I have gone through the links posted & haven't found a solution. I want to increase the data size by adding flipped/rotated images - but the post addresses the in-place processing of images.
Thanks. | 0 | 1 | 2,045 |
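If you really do want a physically larger dataset rather than more epochs, one common pattern is ConcatDataset; a sketch assuming ImageFolder data at a hypothetical root:
from torch.utils.data import ConcatDataset
from torchvision import datasets, transforms

base = datasets.ImageFolder(root, transform=transforms.ToTensor())
flipped = datasets.ImageFolder(root, transform=transforms.Compose(
    [transforms.RandomHorizontalFlip(p=1.0), transforms.ToTensor()]))
combined = ConcatDataset([base, flipped])  # twice the original size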
0 | 56,727,042 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-03-03T14:26:00.000 | 0 | 2 | 0 | Face Recognition with one shot learning | 54,969,832 | 0 | python,tensorflow,conv-neural-network | To my knowledge, a CNN needs lots of data to train the model, so we can't implement one-shot learning features on a plain CNN. | I'm a newbie at deep learning. I started with a face recognition example and I found that there are 2 types of models, based on the data used for pre-training.
1. One-shot learning with siamese network: Which we can use few data for train the model.
2. Convolutional neural network: Need numerous data for train the model.
Could we combine these methods, i.e. use one-shot learning with a CNN in TensorFlow? | 0 | 1 | 420
0 | 54,971,460 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-03-03T17:06:00.000 | 1 | 2 | 0 | different element data types within numpy array? | 54,971,408 | 0.099668 | python,numpy | From the docs:
dtype : data-type, optional
The desired data-type for the array. If not given, then the type will
be determined as the minimum type required to hold the objects in the
sequence. This argument can only be used to ‘upcast’ the array. For
downcasting, use the .astype(t) method.
So if you set dtype as float64, everything needs to be a float. You can mix types, but then you can't set it as a mismatching type. It will use a type that will fit all data, like a string for example in the case of array(['1', 'Foo', '3.123']). | Just like list in python where [1,"hello", {"python": 10}] it can have all different types within, can numpy array have this as well?
when numpyarray.dtype => dtype('float64') is it implying all elements are of type float? | 0 | 1 | 2,473 |
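A short illustration of both behaviours described in the docs excerpt above:
import numpy as np

a = np.array([1, "hello", {"python": 10}], dtype=object)
print(a.dtype)  # object: each element keeps its own Python type
b = np.array([1.0, 2, 3])
print(b.dtype)  # float64: everything upcast to the minimum common type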
0 | 54,974,116 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2019-03-03T20:40:00.000 | 1 | 2 | 0 | Keras: using weights when fitting | 54,973,519 | 0.099668 | python,tensorflow,keras | I think there might be some confusion with what "sample_weight" means in the context of model.fit. When you call model.fit, you are minimizing a loss function. That loss function measures the error between your model prediction and the true values. Some of the samples in your dataset might be more important to you, so you would weigh the loss function more heavily on those samples. So, the "sample_weights" are only used to weigh certain samples in your dataset during training to "better fit" those some samples relative to others. They are an optional argument to model.fit (default just weights each sample equal - what you should do if you don't have a good reason to do otherwise). And (hopefully my explanation was clear enough) do not make any sense in the context of model.predict. | I have a question about using weights in keras. I have some data as events and for each of them there is an associated weight. Therefore, when I do the training of my keras model I was using the sample_weight argument to pass that information.
I then notice that if I want to use the model.predict method there isn't an argument to pass the weights... and now I'm not sure if the type of weights I have are the ones I'm supposed to use in the fit method in the sample_weight.
My question is, what type of weights is the fit method supposed to recieve? Also, is it understood that the predict method doesn't need any weight for the data?
Thanks! | 0 | 1 | 1,419 |
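A minimal sketch of the distinction drawn in the answer above (x_train, y_train, important_idx and the model are placeholders):
import numpy as np

weights = np.ones(len(x_train))
weights[important_idx] = 5.0  # make the loss care 5x more about these samples
model.fit(x_train, y_train, sample_weight=weights, epochs=10)
preds = model.predict(x_test)  # no weights involved at prediction time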
0 | 54,983,029 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-03-04T11:07:00.000 | 2 | 1 | 0 | Tensorflow cannot import name 'gradients_util' | 54,981,982 | 0.379949 | python,tensorflow | You must use a version of the benchmarks compatible with your TensorFlow version.
Determine it using pip show tensorflow.
Then go to the cloned benchmarks repository and check out the required branch, for example git checkout cnn_tf_v1.13_compatible.
Previous comments suggested to use TensorFlow 2.0, but I don't think author of question needs it, as long as it's still unreleased and has a lot of API-breaking changes.
P.S. You should have said in your question that you are trying to launch benchmarks from tensorflow/benchmarks. | After doing
pip install tensorflow-gpu
I am trying to
from tensorflow.python.ops import gradients_util
but I get
cannot import name 'gradients_util'
error. How can I fix this? | 0 | 1 | 1,486 |
0 | 55,593,835 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-03-04T13:30:00.000 | 0 | 1 | 0 | After Pytorch Upgrade , my model is giving almost random output | 54,984,345 | 1.2 | python,machine-learning,pytorch,torchvision | It was because of the dropout layer. model.eval() disables the dropout layer. Pretty simple.
But now in the PyTorch upgrade, if Dropout is not defined specifically in the model's init function, it will not get disabled during eval.
At least this was the reason in my case. | I trained, tested and am still using a model in "Pytorch 0.4.1". It was, and is still, working fine (output is what it should be) if I use PyTorch 0.4.1.
But as I upgrade to version 1.0.1, every time I try to evaluate the same input image, I get a different output (it's a regression).
I tried to see what has been changed in those versions, but since I am not getting any errors or warnings, I am not sure what I should look for specifically.
PS: I checked the weights, they are also same when I load the model | 0 | 1 | 58 |
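A sketch of the distinction behind this: dropout registered as a submodule in __init__ respects model.eval(), while the functional form does not unless you pass the training flag:
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(128, 1)
        self.dropout = nn.Dropout(0.5)  # registered: model.eval() disables it

    def forward(self, x):
        # F.dropout(x, 0.5) here would stay active in eval mode unless
        # called as F.dropout(x, 0.5, training=self.training)
        return self.fc(self.dropout(x))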
0 | 55,231,544 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-03-05T00:20:00.000 | 0 | 1 | 0 | Typical Number of generations for NEAT (Neuro Evolution of Augmenting Topologies)? | 54,993,604 | 0 | python-3.x,es-hyperneat | There are a couple other hyperparameters to consider before deciding NEAT cannot produce a usable neural network for your problem. You will have to make sure that your population is also large enough. Obviously a larger dataset is more helpful, but that is limited. Finally, changes such as mutation rates, aggregation options, activation functions, and your fitness function will all affect the training process for each genome. Feel free to PM if you want suggestions on them. | Does anyone have an estimate of the number of generations one should search before concluding that the NEAT-algorithm is not able to reach the minima?
I am running NEAT on a very small dataset of cancer patients (~5K rows). And after 5000 generations, the concordance index for prediction of survival index is not improving.
Does anyone have any experience of how many generations should one try before you deem this as not efficient for the given problem? | 0 | 1 | 210 |
0 | 54,993,847 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-03-05T00:37:00.000 | 0 | 1 | 0 | Regression analysis for linear regression | 54,993,739 | 0 | python,machine-learning,sklearn-pandas | If your x values are categorical then it does not necessarily make much sense binding them to a uniform grid. Who's to say category A and B should be spaced apart the same as B and C. Assuming that they are will only lead to incorrect representation of your results.
As your choice of scale is the unknown, you would do better, in terms of visualisation, to set your uniform x grid as the day number and then see where the categories would fall on the y scale if given a linear relationship.
RMS Error doesn't come into it at all if you don't have quantitative data for x and y. | I have a regression model where my target variable (days) quantitative values ranges between 2 to 30. My RMSE is 2.5 and all the other X variables(nominal) are categorical and hence I have dummy encoded them.
I want to know what would be a good value of RMSE? I want to get something within 1-1.5 or even lesser but I am unaware what I should do to achieve the same.
Note# I have already tried feature selection and removing features will less importance.
Any ideas would be appreciated. | 0 | 1 | 53 |
0 | 55,007,697 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-03-05T04:09:00.000 | 0 | 2 | 0 | error UnicodeDecodeError: 'utf-8' codec when reading CSV | 54,995,240 | 0 | python-3.x | Thank you both. It seems to work. I read it successfully and until know, variables are responding fine to my calculations. Seems SOLVED. | I recently download the PISA 2012 Student database from PISA. I follow the instructions and successfully read it on SAS. Then I exported as CSV to read it in Python 3, using proc export, but I keep getting this error when trying to read it in python pandas: UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc1 in position 24: invalid start byte. What can I do?
pisa2012_Col=pd.read_csv('Pisasubset2012Col.csv') | 0 | 1 | 183 |
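Byte 0xc1 is not valid UTF-8; a common fix for SAS/Windows exports is to read with a Latin-1/Windows codepage (an assumption about the export encoding):
import pandas as pd

pisa2012_Col = pd.read_csv('Pisasubset2012Col.csv', encoding='latin-1')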
0 | 55,010,897 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-03-05T20:15:00.000 | 3 | 1 | 0 | Pandas `to_csv` String column got converted | 55,010,853 | 1.2 | python,pandas | CSV doesn't have data types. Excel has no way of knowing what you want, so it tries to interpret it. If you are using Excel, click the data tab and 'from csv' and you can specify dtypes on reading it.
Otherwise open the csv file in notepad and you'll see that the data is there. | I have a dataframe with a column of user ids converted from int to string
df['uid'] = df['uid'].astype(str)
However when I write to csv, the column got rounded to the nearest integer in format 1E+12 (the value is still correct when you select the cell).
But to_excel outputs the column correctly, can someone explain a bit?
Thank you! | 0 | 1 | 419 |
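If the file is read back with pandas rather than Excel, the column can be pinned to strings explicitly (out.csv is a placeholder name):
import pandas as pd

df2 = pd.read_csv('out.csv', dtype={'uid': str})  # keeps ids as strings, no 1E+12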
0 | 55,014,376 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-03-05T22:16:00.000 | 0 | 1 | 0 | Tf-idf for SO posts (where tag can only occur once ) | 55,012,475 | 0 | python,nlp,tf-idf | If you want to filter out tags that are so common, you can use conditional probability. eg: python is so common on posts taged pytorch, so P(python|pytorch) will be hight, likes:0.9. You can find a threshold to filter those tags.
Association rule learning is more suitable and more complex than the above. | Using the stackoverflow data dump, I am analyzing SO posts that are tagged with pytorch or keras. Specifically, I count how many times each co tag occurs (ie the tags that aren't pytorch in a pytorch tagged post).
I'd like to filter out the tags that are so common they've lost real meaning for my analysis (like the python tag).
I am looking into Tf-idf
TF represents the frequency of a word for each document. However, each co-tag can only occur once for a given post (ie you can't tag your post 'html' five times). So the tf for most words would be 1/5, and others less (because a post only has 4 tags, for instance). Is it still possible to do Tf-Idf given this context? | 0 | 1 | 29
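A minimal sketch of the conditional-probability filter from the first answer to the record above (the counters are hypothetical and filled from the data dump; the 0.8 threshold is a judgment call):
from collections import Counter

pair_counts = Counter()    # (anchor_tag, co_tag) -> co-occurrence count
anchor_counts = Counter()  # anchor_tag -> post count
# ... fill both from the data dump ...
p = pair_counts[('pytorch', 'python')] / anchor_counts['pytorch']
drop_python = p > 0.8  # too common to be informative for pytorch posts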
0 | 55,375,736 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-03-06T04:46:00.000 | -1 | 1 | 0 | nyoka package install conflict with keras/tensorflow version | 55,015,742 | -0.197375 | python-3.x,tensorflow,keras,pmml | Which version of Nyoka are you using? The issue was there prior to version 3.0.1. The latest version of nyoka (3.0.6) does not have tensorflow as a dependency. Could you try with the latest version? | I have installed tensorflow-gpu and keras on my gpu machine for deep learning training. The tensorflow version is 1.12. However, nyoka (pmml converter package of python) has conflict because of tensorflow dependencies. I think it uses tensorflow 1.2. Can there be any workaround for it? | 0 | 1 | 171 |
0 | 55,028,215 | 0 | 1 | 0 | 0 | 2 | true | 1 | 2019-03-06T16:39:00.000 | 5 | 4 | 0 | Why does 2- - - -1=3? | 55,028,071 | 1.2 | python | 2 - - - - 1 is the same as 2 - ( - ( - ( - 1))) what is the same as
2 - ( - (1)) = 2 + 1 = 3
As soon as number of minuses is even you actually do "+". | I recently ran into a bug using python (v3.6.8) and pandas (v0.23.4) where I was trying to subtract a date offset. However, I accidentally typed two -- signs and it ended up adding the date offset instead. I did some more experimenting and found that 2--1 will return 3. This makes sense since you could interpret that as 2-(-1), but you can go even farther and string a bunch of negatives together 2----1 will return 3. I also replicated this in R and it does the same thing. Can anyone help me understand what's happening here? | 0 | 1 | 115 |
0 | 55,028,501 | 0 | 1 | 0 | 0 | 2 | false | 1 | 2019-03-06T16:39:00.000 | 2 | 4 | 0 | Why does 2- - - -1=3? | 55,028,071 | 0.099668 | python | mathematically, it's correct. by why would a programming language allow that? maybe i just lack imagination, but i can't think of any reason why you would want to explicitly string together plus or minus signs. and if you did do that, it is likely a typo as in the original post. if it's done through variables, then it should definitely be allowed (ie, a = -1; 2 -a should be 3). some languages allow for i++ to increment i. and python allows i += 1 to increment i. not throwing a syntax error just seems confusing to me, even if it is mathematically correct. | I recently ran into a bug using python (v3.6.8) and pandas (v0.23.4) where I was trying to subtract a date offset. However, I accidentally typed two -- signs and it ended up adding the date offset instead. I did some more experimenting and found that 2--1 will return 3. This makes sense since you could interpret that as 2-(-1), but you can go even farther and string a bunch of negatives together 2----1 will return 3. I also replicated this in R and it does the same thing. Can anyone help me understand what's happening here? | 0 | 1 | 115 |
0 | 55,385,124 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-03-06T20:47:00.000 | 0 | 1 | 0 | Using non-square input matrix for convolutional autoencoder | 55,031,885 | 0 | python,keras,deep-learning,conv-neural-network,autoencoder | It is hard to tell what the issue is here since no sample code is provided. However, most probably your matrix dimensions are odd (for example 9x9) or become odd while pooling. To fix this issue you need to either pad your input to make the matrix dimensions even, or crop the decoder layers of your autoencoder to have a matching output size.
I have tried some models with Keras, but size of input in the encoder part of the model is always different than the size of output matrix in the decoder. So it is giving error. Can someone help me how to solve this problem? | 0 | 1 | 469 |
0 | 55,154,500 | 1 | 0 | 0 | 0 | 1 | true | 0 | 2019-03-07T00:56:00.000 | 0 | 1 | 0 | Google colab Transport endpoint is not connected | 55,034,469 | 1.2 | python,neural-network,jupyter-notebook,jupyter,google-colaboratory | In Google Colab, you can use their GPU service for up to 12 hours, after which it will halt your execution. If you run it for 3-4 hours and leave it idle, it will just stop displaying data continuously in your browser window, and refreshing the window will restore that connection.
In case you ran it for 34 hours, then it will definitely be terminated (hyphens matter). This is apparently done to discourage people from mining cryptocurrency on their platform. In case you have to run your training for more than 12 hours, all you need to do is enable checkpoints on your Google Drive, and then you can restart the training once a session is terminated. If you are good enough with the requests library in Python, you can automate it. | I am tuning hyperparameters for a neural network via gridsearch on google colab. I got a "transport endpoint is not connected" error after my code executed for 3-4 hours. I found out that this is because google colab doesn't want people to use the platform for a long time period (not quite sure though).
However, funnily, after the exception was thrown when I reopened the browser, the cell was still running. I am not sure what happens to the process once this exception is thrown.
Thank you | 0 | 1 | 4,543 |
0 | 56,431,054 | 0 | 0 | 0 | 0 | 3 | false | 0 | 2019-03-07T06:59:00.000 | 0 | 3 | 0 | TensorFlow: Sample Integers from Gumbel Softmax | 55,037,810 | 0 | python,tensorflow,tensorflow-probability | You can do a dot product of the relaxed one hot vector with a vector of [1 2 3 4 ... n]. The result is going to give you the desired scalar.
For instance if your one hot vector is [0 0 0 1], then dot([0 0 0 1],[1 2 3 4]) will give you 4 which is what you are looking for. | I am implementing a program to sample integers from a categorical distribution, where each integer is associated with a probability. I need to ensure that this program is differentiable, so that back propagation can be applied. I found tf.contrib.distributions.RelaxedOneHotCategorical which is very close to what I am trying to achieve.
However, the sample method of this class returns a one-hot vector, instead of an integer. How to write a program that is both differentiable and returns an integer/scalar instead of a vector? | 0 | 1 | 768 |
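A minimal sketch of the dot-product trick from the answer above (the relaxed sample values are placeholders):
import tensorflow as tf

relaxed = tf.constant([0.01, 0.02, 0.03, 0.94])  # e.g. a relaxed one-hot sample
indices = tf.range(1, 5, dtype=relaxed.dtype)    # [1, 2, 3, 4]
scalar = tf.reduce_sum(relaxed * indices)        # ~3.9, and differentiable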
0 | 67,752,253 | 0 | 0 | 0 | 0 | 3 | false | 0 | 2019-03-07T06:59:00.000 | 0 | 3 | 0 | TensorFlow: Sample Integers from Gumbel Softmax | 55,037,810 | 0 | python,tensorflow,tensorflow-probability | You can't get what you want in a differentiable manner because argmax isn't differentiable, which is why the Gumbel-Softmax distribution was created in the first place. This allows you, for instance, to use the outputs of a language model as inputs to a discriminator in a generative adversarial network because the activation approaches a one-hot vector as the temperature changes.
If you simply need to retrieve the maximal element at inference or testing time, you can use tf.math.argmax. But there's no way to do that in a differentiable manner. | I am implementing a program to sample integers from a categorical distribution, where each integer is associated with a probability. I need to ensure that this program is differentiable, so that back propagation can be applied. I found tf.contrib.distributions.RelaxedOneHotCategorical which is very close to what I am trying to achieve.
However, the sample method of this class returns a one-hot vector, instead of an integer. How to write a program that is both differentiable and returns an integer/scalar instead of a vector? | 0 | 1 | 768 |
0 | 55,052,713 | 0 | 0 | 0 | 0 | 3 | false | 0 | 2019-03-07T06:59:00.000 | 0 | 3 | 0 | TensorFlow: Sample Integers from Gumbel Softmax | 55,037,810 | 0 | python,tensorflow,tensorflow-probability | The reason that RelaxedOneHotCategorical is actually differentiable is connected to the fact that it returns a softmax vector of floats instead of the argmax int index. If all you want is the index of the maximal element, you might as well use Categorical. | I am implementing a program to sample integers from a categorical distribution, where each integer is associated with a probability. I need to ensure that this program is differentiable, so that back propagation can be applied. I found tf.contrib.distributions.RelaxedOneHotCategorical which is very close to what I am trying to achieve.
However, the sample method of this class returns a one-hot vector, instead of an integer. How to write a program that is both differentiable and returns an integer/scalar instead of a vector? | 0 | 1 | 768 |
0 | 55,039,173 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-03-07T08:21:00.000 | 1 | 2 | 0 | Python Select variables in multiple linear regression | 55,039,091 | 0.099668 | python,linear-regression | You are probably looking for a k-fold validation model.
The idea is to randomly select your features, and have a way to validate them against each other.
The idea is to train your model with your feature selection on (k-1) partitions of your data. And validate it against the last partition. You do it for each partition and take the average of your score (MAE / RMSE for instance)
Your score is an objective figure to compare your models, i.e., your feature selections. | I have a dependent variable y and 6 independent variables. I want to make a linear regression out of it. I use sklearn library to do it.
The problem is some of my independent variables have correlation more than 0.5. So I can't have them in my model at the same time
I searched through the internet but didn't find any solution to select the best set of independent variables for the linear regression and output the variables that had been selected. | 0 | 1 | 1,223
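A sketch of the k-fold comparison described above, with scikit-learn (X, y and the candidate subsets are placeholders):
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

for cols in candidate_feature_sets:  # e.g. subsets avoiding correlated pairs
    scores = cross_val_score(LinearRegression(), X[cols], y,
                             cv=5, scoring='neg_mean_absolute_error')
    print(cols, scores.mean())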
0 | 55,041,022 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-03-07T09:41:00.000 | 0 | 1 | 0 | Text analysis for unstructured data | 55,040,536 | 0 | python,classification,naivebayes | The Naive Bayes classifier is a supervised learning method and requires you to train it using labelled data in which you know the targets in advance. You can then use it on unlabelled data to predict future values but you can't train it on data with no target values.
It's hard to recommend a different method without knowing more about your task but it sounds like you want to look into unsupervised clustering algorithms. k-means is a relatively simple one to start with. | I have a Question
I have a large amount of unstructured text data, which I want to classify into different sectors.
I am using a Naive Bayes classifier for it.
Now, my question is: what should I pass in y? Because I don't have target values,
and as per the syntax I have to pass it.
mnb = MultinomialNB()
mnb.fit(X,y)
TypeError: fit() missing 1 required positional argument: 'y'
As I said, I don't have target values.
How can I do that?
Help will be appreciated | 0 | 1 | 68 |
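A minimal unsupervised sketch along the lines of the answer above: vectorize the raw text and cluster it, no y required (docs and the cluster count are placeholders):
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

X = TfidfVectorizer(stop_words='english').fit_transform(docs)
labels = KMeans(n_clusters=5).fit_predict(X)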
0 | 55,043,641 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-03-07T09:48:00.000 | 0 | 1 | 0 | How can I set the branch method used by glpk solver through pyomo? | 55,040,678 | 0 | python,pyomo,glpk | To use --first as branch method
import pyomo.environ as pe
opt_solver = pe.SolverFactory("glpk")
opt_solver.options["first"] = "" | glpsol.exe --help provides the following options:
Options specific to the MIP solver:
--nomip consider all integer variables as continuous (allows solving MIP as pure LP)
--first branch on first integer variable
--last branch on last integer variable
--mostf branch on most fractional variable
--drtom branch using heuristic by Driebeck and Tomlin
(default) | 0 | 1 | 215 |
0 | 55,050,551 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2019-03-07T18:21:00.000 | 2 | 1 | 0 | Multi-type storage array in Python | 55,050,445 | 1.2 | python,arrays,numpy,types,storage | Use np.dtype(object): np.array(board, dtype=np.dtype(object))
As for empty cells: just set them to None.
Edit: as some people have suggested, you might not need a numpy array at all. A list of lists or a dict with tuple indexes might solve your problem just fine and remove the overhead of using numpy. | I'm programming a game in python3.6. There are some pawns on the board which are instances of the class 'pawn'. There are also boulders on the board which are instances of the class 'boulder'. I just want to store these pawns and boulders in an array, like a numpy.array.
I have 2 problems :
There are different types of objects in the array, which is not possible with a numpy.array
How can I represent an empty cell on my board, because I can't use an object, which has not the same type as the others.
How can I solve these two problems ? Is there already an object which can represent grid, an array and which accepts different type ? | 0 | 1 | 861 |
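A minimal sketch combining both answers (Pawn and Boulder stand in for the asker's classes):
import numpy as np

board = np.full((8, 8), None, dtype=object)  # empty cells are None
board[0, 0] = Pawn()
board[3, 4] = Boulder()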
0 | 67,356,441 | 0 | 0 | 0 | 0 | 1 | false | 103 | 2019-03-07T18:58:00.000 | 2 | 4 | 0 | Can I run a Google Colab (free edition) script and then shut down my computer? | 55,050,988 | 0.099668 | python,google-colaboratory | use multiprocess in python make one other function and start while loop there! that while loop won't let it sleep! | Can I run a google colab (free edition) script and then shut down my computer?
I am training several deeplearning models with crossvalidation, and therefore I would like to know if I can close the window or the computer with the training running at the same time in the cloud. | 0 | 1 | 96,447 |
0 | 55,065,070 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-03-08T14:10:00.000 | 0 | 1 | 0 | comparing two columns in the same csv file | 55,064,953 | 0 | python,numpy-slicing | I don't understand the question, and would be very confusing for others too.
However, it seems like simple loop-and-condition code. Try using pandas with Python, the library for processing CSVs, in case this helps you. | I have a CSV file with two columns.
One has values like size XL, size L, size M and size S. In the other I only have XL and L.
What I want to do is that when my loop finds XL in the first column it overwrites that cell value with XL and when it doesn't find XL in the next cell it should just skip it.
In the next iteration, it should do the same with L. The file should look something like this after the loop: XL, L, size M, size S.
Can you assist me with python code to implement something like this? | 0 | 1 | 61 |
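A pandas sketch of the loop described above (file and column names are assumptions):
import pandas as pd

df = pd.read_csv('sizes.csv')
for label in ['XL', 'L']:
    df.loc[df['size'] == 'size ' + label, 'size'] = label
# result: XL, L, size M, size S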
0 | 55,085,727 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-03-10T07:57:00.000 | 0 | 1 | 0 | PySpark RDD SortByKey() Not Working Properly | 55,085,647 | 1.2 | python,python-2.7,apache-spark,pyspark | You are sorting strings - as shown by the u prefix on the output (for Unicode, not that it really matters). Therefore it is working as 10 comes before 2 when tested as a string.
Map all your values to integers before working with them and you should be fine. | I want to sort an RDD that I have, which contains a key range of 0-49995 such that (0, value), ... , (49995, value).
I want to sort it in ascending order, and I am using the SortByKey() function, but it seems like it is not working properly because this is the result that I am getting:
test0.sortByKey(True).take(5)
[(u'0', [u'38737', u'18591', u'27383', u'34211', u'337', u'352', u'1532', u'12143', u'12561', u'17880']), (u'1', [u'35621', u'44891', u'14150', u'15356', u'35630', u'13801', u'13889', u'14078', u'25228', u'13805']), (u'10', [u'83', u'18', u'38', u'89', u'3', u'11', u'29', u'41', u'53', u'55']), (u'100', [u'42704', u'122', u'125', u'128', u'131', u'2501', u'11200', u'12049', u'12576', u'18583']), (u'1000', [u'8671', u'955', u'1012', u'1020', u'1378', u'2413', u'7699', u'10276', u'12625', u'12667'])]
It started at key 0, 1, but then skipped to 10 and jumped to 100, then 1000. It should ascend from 0-5. Can someone please tell me what I am doing wrong here?
Thank you! | 0 | 1 | 493 |
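A one-line sketch of the suggested fix: cast the keys to int before sorting:
sorted_rdd = test0.map(lambda kv: (int(kv[0]), kv[1])).sortByKey(True)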
0 | 55,093,212 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2019-03-10T22:26:00.000 | 1 | 1 | 0 | While Loop in Python - Array Manipulation | 55,093,027 | 1.2 | python,python-3.x,numpy,for-loop | Just before for a_class,a_class_weight in zip(classes, class_weights):, you're initializing total_fold_array to [].
That loop executes for exactly as many times as there are elements in classes.
Each iteration of that loop appends a curr_fold_array to total_fold_array.
That is why, at the end of that loop, you have as many elements in total_fold_array, as there are in classes.
You've enclosed all of this in while count != 6:. That seems totally unnecessary -- I think that while loop will execute exactly once. You are returning from that function before the second iteration of that while loop can happen. My guess is that you introduced that while loop hoping that it would somehow limit the number of elements in total_fold_array to 5. But that's not going to happen, because, inside that while loop, the for loop grows total_fold_array to have 7 elements, and this happens in the very first iteration of the while loop. | I am working on the following Python codes.
I am hoping to accomplish the following:
Create a total_fold_array which will hold 5 items (folds)
For each fold, create an array of data from a larger dataset based off of the logic (which I know is correct) inside of my for...zip loop
To help you understand:
The classses and class_weights returns:
[0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0] and [0.14285714 0.14285714 0.14285714 0.14285714 0.14285714 0.14285714
0.14285714]
The while count !=6 is not working properly. In short, what I am trying to accomplish is to populate the total_fold_array with 5 individual folds, which each contain a number of rows from a dataset.
An example of the current_fold_array might be [A,B,C,D], so then ultimately, I have a total_fold_array which has 5 of those individual folds, which would look like [[A,B,C,D,],[A,B,B,C],[A,A,A,A],[B,C,D,D],[B,B,B,C]]
However, this loop does not do that. Instead, it creates total_fold_array with the length of whatever the length of classes is (in this case 7), instead of having 5 folds within.
My code is below:
I am currently getting a total_fold_array containing 7 items, when instead, it should contain 5. Each item within can have as many items as needed, but the total_fold_array should be 5 items long. I believe there is a logic bug in my code, and I am looking for some help. If I were to use a dataset with 5 classes, this works appropriately.
Please let me know if I need to make this clearer. | 0 | 1 | 910 |
0 | 55,095,505 | 0 | 0 | 0 | 0 | 1 | false | 9 | 2019-03-11T04:56:00.000 | 3 | 2 | 0 | In sklearn regression, is there a command to return residuals for all records? | 55,095,437 | 0.291313 | python,scikit-learn | One option is to fit() the model and then call predict(); the residual is simply the difference between the actual values and the predictions. | I know this is an elementary question, but I'm not a python programmer. I have an app that is using the sklearn kit to run regressions on a python server.
Is there a simple command which will return the predictions or the residuals for each and every data record in the sample? | 0 | 1 | 12,667 |
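A minimal, self-contained sketch of the approach from the answer (LinearRegression is just an example estimator; any fitted scikit-learn regressor works the same way, and the toy data stands in for the real sample):

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X = rng.random((100, 3))                                        # toy features
    y = X @ np.array([1.0, 2.0, 3.0]) + 0.1 * rng.standard_normal(100)

    model = LinearRegression().fit(X, y)
    predictions = model.predict(X)   # one prediction per record
    residuals = y - predictions      # one residual per record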
0 | 55,103,222 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2019-03-11T13:21:00.000 | 1 | 3 | 0 | How to save and load my neural network model after training along with weights in python? | 55,102,781 | 0.066568 | python,scikit-learn,pickle | So you write it yourself. You need a few simple steps:
In your code for the neural network, store the weights in a variable. This can be done simply by using self.weights. The weights are numpy ndarrays; for example, if the weights connect a layer with 10 neurons to a layer with 100 neurons, they form a 10 x 100 (or 100 x 10) ndarray.
Use numpy.save to save the ndarray.
The next time you use your network, use numpy.load to load the weights.
In the first initialization of your network, use the weights you've loaded.
Don't forget: if your network is already trained, the weights should be frozen. That can be done by setting the learning rate to zero. | I have trained a single-layer neural network model in Python (a simple model without Keras and TensorFlow).
How can I save it along with its weights after training in Python, and how can I load it later? | 0 | 1 | 10,072 |
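A small sketch of the steps from the answer; the network.weights attribute and the MyNetwork constructor are assumptions about your class, since the original code is not shown:

    import numpy as np

    # After training: persist every layer's weight matrix in a single file.
    np.savez('weights.npz', *network.weights)        # network.weights: list of ndarrays (assumed)

    # Later: reload the ndarrays in their original layer order.
    data = np.load('weights.npz')
    loaded = [data[f'arr_{i}'] for i in range(len(data.files))]
    network = MyNetwork(weights=loaded)              # hypothetical constructor taking initial weights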
0 | 55,223,274 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-03-12T08:57:00.000 | 0 | 1 | 0 | spaCy 2.0: Loading Training Data from excel file Custom NER Model issues | 55,117,481 | 0 | python,xlrd,spacy | Please indent your code by 4 spaces so it gets proper formatting on Stack Overflow, or press Ctrl+K and then paste the code. | I have made a custom NER model using spaCy by loading the training data from a text file in the prescribed format, and the model works fine. However, if I try to load the training data from an Excel file, we get the output model but it does not detect any entities (no output and also no error).
The model built from the training data in the text file works perfectly and gives proper outputs, but produces no results when the training data is loaded from the xlsx file.
There is no problem with the datatypes (they are the same in both cases).
Even if I write the same data into a text file and then load it, I face the same issue | 0 | 1 | 371 |
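Since the answer above only addresses post formatting, here is a hedged sketch of one common cause of "trains but finds no entities": entity offsets read from Excel arriving as strings or floats. The (text, start, end, label) column layout and the use of openpyxl are assumptions:

    import openpyxl  # xlrd (as tagged) reads similarly

    ws = openpyxl.load_workbook('train.xlsx').active   # hypothetical file name
    TRAIN_DATA = []
    for text, start, end, label in ws.iter_rows(min_row=2, values_only=True):
        # Cast offsets to int explicitly - non-integer offsets can fail silently in training.
        TRAIN_DATA.append((text, {'entities': [(int(start), int(end), str(label))]}))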
0 | 55,122,132 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-03-12T12:23:00.000 | 0 | 2 | 0 | tf.gradient acting like tfp.math.diag_jacobian | 55,121,377 | 0 | python,tensorflow,diagonal,gradient | I suppose I have found a solution:
my_grad = tf.gradients(tf.reduce_sum(loss), input)
This ensures that the cross dependencies i != j are ignored - it works really nicely and fast. | I am trying to calculate noise for input data using the gradient of the loss function with respect to the input data:
my_grad = tf.gradients(loss, input)
loss is an array of size (n x 1), where n is the number of datasets; input is an array of size (n x m), where m is the size of a single dataset.
I need my_grad to be of size (n x m) - so for each dataset the gradient is calculated. But by definition the gradients where i != j are zero - yet tf.gradients allocates a huge amount of memory and runs for pretty much forever...
A version which calculates the gradients only where i == j would be great - any idea how to get there? | 0 | 1 | 132 |
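A hedged sketch of why the answer's reduce_sum trick yields the per-row gradients (TF1-style graph mode, as in the question; the quadratic loss is a toy stand-in):

    import tensorflow as tf   # assumes TF 1.x, where tf.gradients is available

    inp = tf.placeholder(tf.float32, shape=(None, 3))            # (n x m) input
    loss = tf.reduce_sum(tf.square(inp), axis=1, keepdims=True)  # toy per-row loss, (n x 1)

    # Since loss[i] depends only on inp[i], the cross terms i != j vanish and the
    # gradient of the summed loss stacks the per-dataset gradients into (n x m).
    my_grad = tf.gradients(tf.reduce_sum(loss), inp)[0]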
0 | 55,127,590 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2019-03-12T16:58:00.000 | 0 | 2 | 0 | extract feature names from trained model | 55,126,903 | 1.2 | python,scikit-learn,xgboost | It's mandatory that the prediction dataset contain only those columns which are present in the training dataset. It also makes sense not to include extra columns, because the weights are learnt from your training dataset; including any column beyond it doesn't provide any value or improve your accuracy, since when you predict, all the model does is multiply its learnt weights with the new values. Make sure not to include any extra feature when predicting. | I have a pre-trained XGBoost model read from a pickle file. When I was trying to make predictions on a new dataset with some columns outside of the feature set of the model, I received the error message:
training data did not have the following fields: column1, column2,...
I am okay with excluding these columns that do not exist in the training data. Instead of hard-coding the column names (there are many), I would like to just find the intersection between the columns of the training and prediction datasets.
Is there a way I can extract the feature names from the trained model (apparently the model recorded the field names) without having to go back to my training dataset? | 0 | 1 | 1,723 |
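A short sketch of reading the feature names back from the trained booster. This assumes the model was trained on data that carried column names (otherwise feature_names can be None), and exact attributes can vary across xgboost versions; model.pkl and new_df are hypothetical names:

    import pickle

    with open('model.pkl', 'rb') as f:
        model = pickle.load(f)

    # sklearn-style wrappers expose the raw Booster via get_booster()
    booster = model.get_booster() if hasattr(model, 'get_booster') else model
    trained_features = booster.feature_names      # names recorded at training time
    X_new = new_df[trained_features]              # keep only the training columns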
0 | 55,143,748 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2019-03-13T01:30:00.000 | 1 | 1 | 0 | Dask on single OSX machine - is it parallel by default? | 55,133,058 | 1.2 | python,dask,dask-distributed | Yes, Dask is parallel by default.
Unless you specify otherwise, or create a distributed Client, execution will happen with the "threaded" scheduler, in a number of threads equal to your number of cores. Note, however, that because of the Python GIL (only one Python instruction executes at a time), you may not get as much parallelism as is available, depending on how good your specific tasks are at releasing the GIL. That is why you have a choice of schedulers.
Being on OSX, installing with pip: these make no difference. Using dataframes makes a difference in that it dictates the sorts of tasks you're likely running. Pandas is good at releasing the GIL for many operations. | I have installed Dask on OSX Mojave. Does it execute computations in parallel by default? Or do I need to change some settings?
I am using the DataFrame API. Does that make a difference to the answer?
I installed it with pip. Does that make a difference to the answer? | 0 | 1 | 136 |
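A small sketch illustrating the default threaded scheduler described in the answer, and how to opt into a different one (the toy data is illustrative):

    import pandas as pd
    import dask.dataframe as dd

    df = pd.DataFrame({'x': range(1_000_000)})
    ddf = dd.from_pandas(df, npartitions=8)

    ddf.x.mean().compute()                       # runs on the default threaded scheduler
    ddf.x.mean().compute(scheduler='processes')  # sidesteps the GIL at some serialization cost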
0 | 55,141,944 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2019-03-13T12:14:00.000 | 0 | 3 | 0 | How to multiply diagonal elements by each other using numpy? | 55,141,674 | 0 | python,numpy,matrix | You could use the identity matrix given by numpy.identity(n) and then multiply it by an n-dimensional vector. | For the purpose of this exercise, let's consider a matrix where the element m_{i, j} is given by the rule m_{i, j} = i*j if i == j else 0.
Is there an easy "numpy" way of calculating such a matrix without having to resort to if statements checking for the indices? | 0 | 1 | 1,790 |
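A one-line sketch of the identity-matrix idea from the answer above; since the rule only keeps i*j when i == j, the diagonal is simply i**2:

    import numpy as np
    n = 5
    m = np.identity(n) * np.arange(n) ** 2   # broadcasting zeroes out off-diagonal entries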
0 | 55,141,958 | 0 | 1 | 0 | 0 | 2 | true | 0 | 2019-03-13T12:14:00.000 | 1 | 3 | 0 | How to multiply diagonal elements by each other using numpy? | 55,141,674 | 1.2 | python,numpy,matrix | You can use the numpy function diag to construct a diagonal matrix if you give it the intended diagonal as a 1D array as input.
So you just need to create that diagonal, e.g. [i**2 for i in range(N)], with N the dimension of the matrix. | For the purpose of this exercise, let's consider a matrix where the element m_{i, j} is given by the rule m_{i, j} = i*j if i == j else 0.
Is there an easy "numpy" way of calculating such a matrix without having to resort to if statements checking for the indices? | 0 | 1 | 1,790 |
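The same matrix built directly with np.diag, per the answer above:

    import numpy as np
    n = 5
    m = np.diag(np.arange(n) ** 2)   # places i**2 on the diagonal, 0 elsewhere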
0 | 55,161,676 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2019-03-14T11:07:00.000 | -1 | 1 | 0 | what's the mean of input_length in K.ctc_batch_cost() | 55,160,939 | -0.197375 | python,tensorflow,keras | CTC loss for one example is calculated on a 2D array (T, C). C must be equal to the number of characters + 1 (for the blank character), and each row of the array holds the probability distribution over the characters at one time step. T is the number of time steps.
T should be at least 2 * max_string_length. All possible encodings of y_true of length T will be used in the negative log loss calculation.
input_length is usually the time dimension of the previous layer's output. | I have downloaded code for OCR using Keras, which applies a CRNN network and uses CTC loss as the loss function.
However, I'm really new to CTC loss and have trouble with the usage of K.ctc_batch_cost(), especially the meaning of input_length. In the Keras documentation,
Arguments of tf.keras.backend.ctc_batch_cost(
y_true,
y_pred,
input_length,
label_length
)
y_true: tensor (samples, max_string_length) containing the truth labels.
y_pred: tensor (samples, time_steps, num_categories) containing the prediction, or output of the softmax.
input_length: tensor (samples, 1) containing the sequence length for each batch item in y_pred.
label_length: tensor (samples, 1) containing the sequence length for each batch item in y_true.
However, my problem is just: what is the meaning of input_length? Is it the time dimension of the output of the LSTM? | 0 | 1 | 1,540 |
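A hedged sketch of how input_length is typically constructed, assuming y_true, y_pred, and the labels list come from your pipeline and that the recurrent stack emits a fixed number of time steps (all names illustrative):

    import numpy as np
    from keras import backend as K

    batch_size, time_steps = 32, 48                      # time_steps = temporal length of y_pred
    input_length = np.full((batch_size, 1), time_steps)  # valid time steps per sample
    label_length = np.array([[len(s)] for s in labels])  # labels: ground-truth strings (assumed)

    loss = K.ctc_batch_cost(y_true, y_pred, input_length, label_length)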
0 | 55,169,008 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-03-14T15:52:00.000 | 0 | 2 | 0 | Faster pytorch dataset file | 55,166,874 | 0 | python,machine-learning,dataset,pytorch,lmdb | One simple solution is to pre-process your dataset, saving multiple smaller crops of the original 3D volumes separately. This way you sacrifice some disk space for more efficient IO.
Note that you can make a trade-off with the crop size here: saving bigger crops than you need for input still allows you to do random-crop augmentation on the fly. If you save overlapping crops in the pre-processing step, you can ensure that all possible random crops of the original dataset can still be produced.
Alternatively you may try using a custom data loader that retains the full volumes for a few batches. Be careful: this might create some correlation between batches. Since many machine learning algorithms rely on i.i.d. samples (e.g. stochastic gradient descent), correlated batches can easily cause serious problems. | I have the following problem: I have many files of 3D volumes that I open to extract a bunch of numpy arrays.
I want to get those arrays randomly, i.e. in the worst case I open as many 3D volumes as numpy arrays I want to get, if all those arrays are in separate files.
The IO here isn't great: I open a big file only to get a small numpy array from it.
Any idea how I can store all these arrays so that the IO is better?
I can't pre-read all the arrays and save them all in one file, because that file would then be too big to fit in RAM.
I looked up LMDB but it all seems to be about Caffe.
Any idea how I can achieve this? | 0 | 1 | 1,082 |
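A minimal sketch of the pre-processing route from the answer: slice each volume into crops once and save each crop as its own small .npy file, so the Dataset's __getitem__ only ever loads one small file (the paths and crop size are assumptions):

    import numpy as np

    def save_crops(volume, crop_size=64, out_prefix='crops/vol0'):
        # Non-overlapping cubic crops; use a smaller stride for overlapping crops.
        d, h, w = volume.shape
        idx = 0
        for z in range(0, d - crop_size + 1, crop_size):
            for y in range(0, h - crop_size + 1, crop_size):
                for x in range(0, w - crop_size + 1, crop_size):
                    crop = volume[z:z + crop_size, y:y + crop_size, x:x + crop_size]
                    np.save(f'{out_prefix}_{idx}.npy', crop)  # assumes crops/ exists
                    idx += 1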