GUI and Desktop Applications | A_Id | Networking and APIs | Python Basics and Environment | Other | Database and SQL | Available Count | is_accepted | Q_Score | CreationDate | Users Score | AnswerCount | System Administration and DevOps | Title | Q_Id | Score | Tags | Answer | Question | Web Development | Data Science and Machine Learning | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 45,639,855 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2016-12-03T16:34:00.000 | 0 | 2 | 0 | how to define an issue with neural networks | 40,949,988 | 0 | python,machine-learning,computer-vision,neural-network | It is good that you have created your own program. I would suggest you keep experimenting with basic problems, such as MNIST, by adding more hidden layers, plotting the variation of loss with training iterations using different learning rates, etc.
In general, the learning rate should not be kept high initially when the network weights are random, and it is good practice to keep decreasing the learning rate over the training period. Plotting the values of the loss or error function w.r.t. training iterations will give you good insight regarding this. If the learning rate is very high, the loss will fluctuate and vary too much. If the learning rate is very small, the loss will decrease very slowly with training iterations. If you are interested, read about this in Andrew Ng's course or some blog.
About your question regarding the number of hidden layers and neurons, it is better to start experimenting with a lower number initially, such as 1 hidden layer and 30 neurons in your case. In your next experiment, you can have 2 hidden layers; however, keep track of the number of learning parameters (weights and biases) compared to the training samples you have, because a small number of training samples with a large number of network parameters can overfit your network.
After experimenting with small problems, you can try the same thing with some framework, let's say TensorFlow, after which you can attempt more challenging problems. | I have built a system where the neural network can change size (number and size of hidden layers, etc.). When training it with a learning rate of 0.5, 1 hidden layer of 4 neurons, 2 inputs and 1 output, it successfully learns the XOR and AND problems (binary inputs, etc.). It works really well.
When I then make the structure 784 inputs, 1 hidden layer of 30 neurons, and 10 outputs, and apply the MNIST digit set, where each input is a pixel value, I simply cannot get good results (no better than random!). My question is quite theory-based: if my code does seem to work with the other problems, should I assume I need to keep experimenting with different learning rates, hidden layers, etc. for this one? Or should I decide there's a more underlying problem?
How do I find the right combo of layers, learning rate, etc? How would you go about this?
Testing is also difficult as it takes about 2 hours to get to a point where it should have learnt... (on a mac)
No, I am not using TensorFlow, or other libraries, because I am challenging myself. Either way, it does work ...to a point!
Many thanks. And apologies for the slightly abstract question - but I know it's a problem many beginners have - so I hope it helps others too. | 0 | 1 | 24 |
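Following the first answer's advice on decreasing the learning rate over the training period, here is a minimal sketch of a step-decay schedule for a from-scratch network; the schedule constants are illustrative assumptions, not values from the answer:

```python
def decayed_learning_rate(initial_lr, epoch, drop=0.5, epochs_per_drop=10):
    """Halve the learning rate every `epochs_per_drop` epochs (step decay)."""
    return initial_lr * (drop ** (epoch // epochs_per_drop))

# Start relatively high while the weights are random, shrink as training proceeds.
for epoch in range(30):
    lr = decayed_learning_rate(0.5, epoch)
    # weights -= lr * gradient  # apply in your own update step
```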
0 | 40,950,086 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2016-12-03T16:34:00.000 | 0 | 2 | 0 | how to define an issue with neural networks | 40,949,988 | 0 | python,machine-learning,computer-vision,neural-network | A quick piece of advice may be to solve an intermediate task (e.g. use your own 5x5 ASCII "pictures" of digits), to have more neurons in the hidden layer, to reduce the data set for quicker simulation, and to compare your implementation to other custom implementations in your programming language. | I have built a system where the neural network can change size (number and size of hidden layers, etc.). When training it with a learning rate of 0.5, 1 hidden layer of 4 neurons, 2 inputs and 1 output, it successfully learns the XOR and AND problems (binary inputs, etc.). It works really well.
When I then make the structure 784 inputs, 1 hidden layer of 30 neurons, and 10 outputs, and apply the MNIST digit set, where each input is a pixel value, I simply cannot get good results (no better than random!). My question is quite theory-based: if my code does seem to work with the other problems, should I assume I need to keep experimenting with different learning rates, hidden layers, etc. for this one? Or should I decide there's a more underlying problem?
How do I find the right combo of layers, learning rate, etc? How would you go about this?
Testing is also difficult as it takes about 2 hours to get to a point where it should have learnt... (on a mac)
No, I am not using TensorFlow, or other libraries, because I am challenging myself. Either way, it does work ...to a point!
Many thanks. And apologies for the slightly abstract question - but I know it's a problem many beginners have - so I hope it helps others too. | 0 | 1 | 24 |
0 | 40,960,903 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-12-04T13:34:00.000 | 0 | 1 | 0 | In TensorFlow, does the embedding matrix remain unchanged? | 40,959,177 | 1.2 | python,tensorflow,deep-learning | The embedding matrix is similar to any other variable. If you set the trainable flag to True, it will be trained (see tf.Variable). | In TensorFlow, we may see code like this:
embeddings=tf.Variable(tf.random_uniform([vocabulary_size,embedding_size],-1.0,1.0))
embed=tf.nn.embedding_lookup(embeddings,train_inputs)
When TensorFlow is training, does the embedding matrix remain unchanged?
In a blog, it is said that the embedding matrix can be updated. I wonder how this works. Thanks a lot! | 0 | 1 | 94 |
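To make the answer concrete, here is a minimal sketch (TF 1.x style, matching the question's code; the sizes are illustrative assumptions) showing that the embedding matrix is an ordinary variable whose updates are controlled by the trainable flag:

```python
import tensorflow as tf

vocabulary_size, embedding_size = 10000, 64
# trainable=True (the default) means the optimizer updates this matrix via the
# gradients that flow back through embedding_lookup during training.
embeddings = tf.Variable(
    tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0),
    trainable=True)
train_inputs = tf.placeholder(tf.int32, shape=[None])
embed = tf.nn.embedding_lookup(embeddings, train_inputs)
```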
0 | 40,962,551 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-12-04T14:28:00.000 | 0 | 2 | 0 | Python Pandas - Only showing rows in DF for the MAX values of a column | 40,959,626 | 0 | python,pandas | If your index is unique and you are OK with returning one row (in the case of multiple rows having the same max value) then you can use the idxmax method.
df.loc[df['money'].idxmax()]
And if you want to add some flare you can highlight the max value in each column with:
df.loc[df['money'].idxmax()].style.highlight_max() | searched for this, but cannot find an answer.
Say I have a dataframe (apologies for formatting):
a Dave $400
a Dave $400
a Dave $400
b Fred $220
c James $150
c James $150
d Harry $50
And I want to filter the dataframe so it only shows the rows where the third column is the MAXIMUM value, could someone point me in the right direction?
i.e. it would only show Dave's rows
All I can find is ways of showing the rows where it's the maximum value for each separate index (the indexes being A, B, C etc.)
Thank you in advance | 0 | 1 | 327 |
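Since the asker wants every row holding the maximum (idxmax returns only one of them), here is a minimal sketch using a boolean filter; the column names are assumptions based on the example data:

```python
import pandas as pd

df = pd.DataFrame({'id': ['a', 'a', 'b', 'c'],
                   'name': ['Dave', 'Dave', 'Fred', 'James'],
                   'money': [400, 400, 220, 150]})

# Keep all rows whose 'money' equals the column-wide maximum.
top_rows = df[df['money'] == df['money'].max()]
print(top_rows)  # both of Dave's rows
```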
0 | 41,774,573 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-12-04T15:42:00.000 | -1 | 2 | 0 | Python Decision Tree Regressor Pruning | 40,960,357 | -0.099668 | python,tree,regression,cart | You can't; use MATLAB. I'm struggling with this at the moment. Using a Python-based home-cooked decision tree is also an option. However, there is no guarantee it will work properly (lots of places you can screw up). And you need to implement it with NumPy if you want any kind of reasonable run-time (also struggling with this now).
If you still have this problem, I do have a decision tree working with node knowledge and am implementing pruning this weekend...
If I get it to run fast and the code isn't too embarrassingly complicated, I will post a GitHub up here if you are still interested, in exchange for endorsements of ML'ing and Python/Numpy expertise on my LinkedIn. | I'm using scikit-learn to construct regression trees, using tree.DecisionTreeRegression().
I'm giving it 56 data samples and it constructs a tree with 56 nodes (pruning=0).
How can I implement some pruning to the tree? Any help is appreciated! | 0 | 1 | 1,207 |
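As a workaround within scikit-learn, here is a minimal sketch of pre-pruning via the constructor parameters (the specific values are illustrative assumptions):

```python
from sklearn.tree import DecisionTreeRegressor

# Pre-pruning: cap the depth and require a minimum number of samples per
# leaf so the tree cannot grow one node per training sample.
reg = DecisionTreeRegressor(max_depth=4, min_samples_leaf=3)
# reg.fit(X, y)
```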
0 | 40,977,897 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2016-12-05T14:46:00.000 | 0 | 1 | 0 | Access key value from JSON array of objects Python | 40,976,901 | 0 | python,json,csv | Turns out it was the json.dumps(); I should've read more into what it does! Thanks. | I've been researching the past few days on how to achieve this, to no avail.
I have a JSON file with a large array of json objects like so:
[{
"tweet": "@SHendersonFreep @realDonaldTrump watch your portfolios go to the Caribbean banks and on to Switzerland. Speculation without regulation",
"user": "DGregsonRN"
},{
"tweet": "RT @CodeAud: James Mattis Vs Iran.\n\"The appointment of Mattis by @realDonaldTrump got the Iranian military leaders' more attention\". https:\u2026",
"user": "American1765"
},{
"tweet": "@realDonaldTrump the oyou seem to be only fraud I see is you, and seem scared since you want to block the recount???hmm cheater",
"user": "tgg216"
},{
"tweet": "RT @realDonaldTrump: @Lord_Sugar Dopey Sugar--because it was open all season long--you can't play golf in the snow, you stupid ass.",
"user": "grepsalot"
},{
"tweet": "RT @Prayer4Chandler: @realDonaldTrump Hello Mr. President, would you be willing to meet Chairman #ManHeeLee of #HWPL to discuss the #PeaceT\u2026",
"user": "harrymalpoy1"
},{
"tweet": "RT @realDonaldTrump: Thank you Ohio! Together, we made history \u2013 and now, the real work begins. America will start winning again! #AmericaF\u2026",
"user": "trumpemall"
}]
And I am trying to access each key and value, and write them to a csv file. I believe using json.loads(json.dumps(file)) should work in normal json format, but because there is an array of objects, I can't seem to be able to access each individual one.
converter.py:
import json
import csv
f = open("tweets_load.json",'r')
y = json.loads(json.dumps(f.read(), separators=(',',':')))
t = csv.writer(open("test.csv", "wb+"))
# Write CSV Header, If you dont need that, remove this line
t.writerow(["tweet", "user"])
for x in y:
t.writerow([x[0],x[0]])
grab_tweets.py:
import tweepy
import json
def get_api(cfg):
auth = tweepy.OAuthHandler(cfg['consumer_key'], cfg['consumer_secret'])
auth.set_access_token(cfg['access_token'], cfg['access_token_secret'])
return tweepy.API(auth)
def main():
cfg = {
"consumer_key" : "xxx",
"consumer_secret" : "xxx",
"access_token" : "xxx",
"access_token_secret" : "xxx"
}
api = get_api(cfg)
json_ret = tweepy.Cursor(api.search, q="@realDonaldTrump",count="100").items(100)
restapi =""
for tweet in json_ret:
rest = json.dumps({'tweet' : tweet.text,'user' :str(tweet.user.screen_name)},sort_keys=True,indent=4,separators=(',',': '))
restapi = restapi+str(rest)+","
f = open("tweets.json",'a')
f.write(str(restapi))
f.close()
if __name__ == "__main__":
main()
The output so far is looking like:
tweet,user^M
{,{^M
"
","
"^M
, ^M
, ^M
, ^M
, ^M
"""",""""^M
t,t^M
w,w^M
e,e^M
e,e^M
t,t^M
"""",""""^M
:,:^M
, ^M
"""",""""^M
R,R^M
T,T^M
, ^M
@,@^M
r,r^M
e,e^M
a,a^M
l,l^M
D,D^M
o,o^M
n,n^M
a,a^M
l,l^M
What exactly am I doing wrong? | 0 | 1 | 1,675 |
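Building on the accepted fix (drop the json.dumps round-trip and parse the file directly), here is a minimal corrected sketch of converter.py using Python 3 file modes, assuming tweets_load.json holds a valid JSON array like the one shown:

```python
import csv
import json

with open("tweets_load.json") as f:
    tweets = json.load(f)  # parses the whole array into a list of dicts

with open("test.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["tweet", "user"])  # CSV header
    for item in tweets:
        writer.writerow([item["tweet"], item["user"]])
```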
0 | 40,995,578 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2016-12-06T12:20:00.000 | 0 | 1 | 0 | How can I update npz file in python? | 40,995,291 | 0 | python,numpy,hdf5 | accepted
locals().update(npzfile)
a # and/or b
In the IPython session, locals() is a large dictionary with the variables you've defined, the input history lines, and various outputs. update adds the dictionary values of npzfile to that larger one.
By the way, you can also load and save MATLAB .mat files. Use scipy.io.loadmat and savemat. It handles v4 (Level 1.0), v6 and v7 to 7.2 files. But you have the same issue - the result is a dictionary.
Octave has an expression form of the load command, that loads the data into a structure
S = load ("file", "options", "v1", "v2", ...) | I have a large data set that I want to save in an npz file. But because the file is too big for memory, I can't save it in an npz file all at once.
Now I want to insert data into the npz file iteratively.
How can I do this?
Is HDF5 better for this? | 0 | 1 | 528 |
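On the HDF5 question, here is a minimal sketch of appending batches to a resizable dataset with h5py; the file name, dataset name, and shapes are illustrative assumptions:

```python
import h5py
import numpy as np

with h5py.File("data.h5", "a") as f:
    if "samples" not in f:
        # Unlimited first axis so the dataset can grow batch by batch.
        f.create_dataset("samples", shape=(0, 128),
                         maxshape=(None, 128), chunks=True)
    batch = np.random.rand(1000, 128)         # one iteration's worth of data
    ds = f["samples"]
    ds.resize(ds.shape[0] + batch.shape[0], axis=0)
    ds[-batch.shape[0]:] = batch              # append without loading the rest
```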
0 | 41,197,119 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-12-06T14:33:00.000 | 1 | 1 | 0 | Not enough memory to read .mat result file into Python | 40,997,813 | 0.197375 | python,dymola | You can reduce the size of the simulation result file by using variable selections in Dymola. That will restrict the output to states, parameters, and the variables that match your selection criteria.
The new Dymola 2017 FD01 has a user interface for defining variable selections. | I have been having some issues trying to open a simulation result output file (.mat) in Python. Upon loading the file I am faced with the following error:
ValueError: Not enough bytes to read matrix 'description'; is this a
badly-formed file? Consider listing matrices with whosmat and
loading named matrices with variable_names kwarg to loadmat
Has anyone been successful in rectifying this error? I have heard there is a script DyMat which can manage mat files in Python but haven't had any luck with it so far.
Any suggestions would be greatly appreciated. | 0 | 1 | 374 |
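Following the hint in the error message itself, here is a minimal sketch that lists the matrices in the file and loads only the named ones (the file and variable names are illustrative):

```python
import scipy.io as sio

print(sio.whosmat("results.mat"))            # list variable names and shapes
data = sio.loadmat("results.mat",
                   variable_names=["time"])  # skip the malformed 'description'
```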
0 | 41,003,996 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-12-06T20:05:00.000 | 1 | 1 | 0 | Scikit-learn - Cross validates score and predictions at one go? | 41,003,897 | 0.197375 | python,scikit-learn,cross-validation | If you run cross_val_predict then you can check the metric on the result. It is not a waste of compute time because cross_val_predict doesn't compute scores itself.
This won't give you per-fold scores though, only the aggregated score (which is not necessarily bad). I think you can work around that by creating a KFold / ... instance explicitly and then using it to split the cross_val_predict result.
I can see there are cross_val_score and cross_val_predict functions in scikit-learn. However, I can't find a way to get both score and predictions at one go. Seems quite obvious as calling the functions above one after another is a waste of computing time. Is there a cross_val_score_predict function or similar? | 0 | 1 | 80 |
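A minimal sketch of the suggested approach: one cross_val_predict pass, scored afterwards; the dataset, model, and metric are illustrative assumptions:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_predict

X, y = load_iris(return_X_y=True)
preds = cross_val_predict(LogisticRegression(max_iter=1000), X, y, cv=8)
print(accuracy_score(y, preds))  # aggregated score, no second CV pass
```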
0 | 41,021,060 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2016-12-07T06:09:00.000 | 2 | 2 | 1 | For distributing calculation task, which is better celery or spark | 41,010,560 | 0.197375 | python,apache-spark,celery,distributed,jobs | Adding to the above answer, there are other areas also to identify.
Integration with the existing big data stack if you have.
Data pipeline for ingestion
You mentioned "backend for web application". I assume it's for read operations. The response times of any batch application might not be a good fit for a web application.
The choice of streaming can help you get the data into the cluster faster, but it will not guarantee the response times needed for a web app. You need to look at HBase and Solr (if you are searching).
Spark is undoubtedly better and faster than other batch frameworks. In streaming there may be a few others. As I mentioned above, you should consider the parameters on which your choice is made. | Problem: the calculation task can be parallelized easily, but a real-time response is needed.
There can be two approaches.
1. using Celery: runs job in parallel from scratch
2. using Spark: runs job in parallel with spark framework
I think Spark is better from a scalability perspective. But is Spark OK as the backend of a web application? | 1 | 1 | 3,004 |
0 | 41,012,633 | 0 | 0 | 0 | 0 | 2 | true | 2 | 2016-12-07T06:09:00.000 | 1 | 2 | 1 | For distributing calculation task, which is better celery or spark | 41,010,560 | 1.2 | python,apache-spark,celery,distributed,jobs | Celery is really a good technology for distributed streaming, and it supports Python, which is itself strong in computation and easy to write. Streaming applications in Celery support many features as well, with little overhead on the CPU.
Spark supports various programming languages: Java, Scala, Python. It is not pure streaming but micro-batch streaming, as per the Spark documentation.
If your task can only be fulfilled by streaming and you don't need SQL-like features, then Celery will be best. But if you need various features along with streaming, then Spark will be better. In that case, consider the scenario of how many batches of data per second your application will generate. | Problem: the calculation task can be parallelized easily, but a real-time response is needed.
There can be two approaches.
1. using Celery: runs job in parallel from scratch
2. using Spark: runs job in parallel with spark framework
I think Spark is better from a scalability perspective. But is Spark OK as the backend of a web application? | 1 | 1 | 3,004 |
0 | 41,116,628 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2016-12-08T17:35:00.000 | 2 | 2 | 1 | Add more Python libraries | 41,045,491 | 0.197375 | python,azure-data-lake,u-sql | Assuming the libs work with the deployed Python runtime, try to upload the libraries into a location in ADLS and then use DEPLOY RESOURCE "path to lib"; in your script. I haven't tried it, but it should work. | Is it or will it be possible to add more Python libraries than pandas, numpy and numexpr to Azure Data Lake Analytics? Specifically, we need to use xarray, matplotlib, Basemap, pyresample and SciPy when processing NetCDF files using U-SQL. | 0 | 1 | 360 |
0 | 41,071,087 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2016-12-09T11:32:00.000 | 2 | 1 | 0 | To what extent is it precise to evaluate Sympy enormous fractions (Rational)? | 41,059,520 | 0.379949 | python,python-3.x,sympy | The size of SymPy Rationals is limited only by your available memory. If you want an approximate but memory-bounded number, use a Float. You can convert a Rational into a Float with evalf. | I am using SymPy Rational in an algorithm and I am getting enormous fractions. The numerator and denominator grow up to 10000 digits.
I would like to stop the algorithm as soon as the fractions become too large to evaluate. So the question is, what is the maximum magnitude I can allow for sympy.Rational?
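A minimal sketch of the conversion the answer describes, turning a huge exact Rational into a bounded-precision Float with evalf (the numbers are illustrative):

```python
from sympy import Rational

r = Rational(10**5000 + 1, 3**4000)  # enormous exact fraction
approx = r.evalf(30)                 # 30 significant digits, bounded memory
```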
0 | 45,069,376 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2016-12-10T11:24:00.000 | 0 | 2 | 0 | opencv-python imshow giving errors in mac | 41,074,980 | 0 | python,macos,opencv | The fix that worked best for me was using matplotlib instead.
Otherwise you may have to remove all previous versions of OpenCV and reinstall from source! | I installed opencv-python using pip install on macOS. Now the cv2.imshow function gives the following error:
OpenCV Error: Unspecified error (The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script) in cvShowImage
How can I solve this issue? Why doesn't the pip check opencv dependencies? | 0 | 1 | 2,492 |
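A minimal sketch of the matplotlib workaround; note the BGR-to-RGB conversion, since OpenCV loads images in BGR order (the file name is an assumption):

```python
import cv2
import matplotlib.pyplot as plt

img = cv2.imread("photo.jpg")                     # BGR channel order
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))  # matplotlib expects RGB
plt.axis("off")
plt.show()
```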
0 | 41,079,325 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2016-12-10T18:26:00.000 | 0 | 2 | 0 | Title: SVC-Scikit Learn issue | 41,078,835 | 0 | python-2.7,error-handling,scikit-learn,svm | Since you cannot use sparse input with a model trained on dense data, either convert your sparse data to dense to match the trained model, or train on sparse data in the first place (recommended). Use SciPy to create a sparse matrix from a dense one. | I am getting this error in scikit-learn. Previously I worked with K-fold validation and never encountered an error. My data is sparse, and the training and testing sets are divided in the ratio 90:10.
ValueError: cannot use sparse input in 'SVC' trained on dense data
Is there any straightforward reason and solution for this? | 0 | 1 | 960 |
0 | 41,079,820 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2016-12-10T18:26:00.000 | 3 | 2 | 0 | Title: SVC-Scikit Learn issue | 41,078,835 | 1.2 | python-2.7,error-handling,scikit-learn,svm | This basically means that your testing set is not in the same format as your training set.
A code snippet would have been great, but make sure you are using the same array format for both sets. | I am getting this error in Scikit learn. Previously I worked with K validation, and never encountered error. My data is sparse and training and testing set is divided in the ratio 90:10
ValueError: cannot use sparse input in 'SVC' trained on dense data
Is there any straightforward reason and solution for this? | 0 | 1 | 960 |
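A minimal sketch of both conversions mentioned in the answers (the arrays are stand-ins for the asker's data):

```python
import numpy as np
from scipy import sparse

X_train = np.random.rand(10, 5)               # dense training data
X_test_sparse = sparse.csr_matrix(np.eye(5))  # sparse test data

X_test_dense = X_test_sparse.toarray()        # sparse -> dense: matches a dense-trained SVC
X_train_sparse = sparse.csr_matrix(X_train)   # dense -> sparse: retrain on sparse instead
```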
0 | 41,082,749 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-12-11T03:25:00.000 | 1 | 1 | 0 | Throw an exception while doing dataframe sum in Pandas | 41,082,675 | 1.2 | python,pandas,dataframe | You can fix the resulting DataFrame using df.replace({'FieldName': {'ErrorError': ''}}) | I have a dataframe in which one of the rows is filled with the string "Error".
I am trying to add the rows of 2 different dataframes. However, since I have the string in one of the rows, it is concatenating the 2 strings.
So I end up with a dataframe containing a row "ErrorError". I would prefer leaving this row empty rather than concatenating the strings.
Any idea how to do it ?
Thanks
kiran | 0 | 1 | 49 |
0 | 41,088,981 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2016-12-11T17:18:00.000 | 1 | 3 | 0 | Converting objects from CSV into datetime | 41,088,840 | 1.2 | python,csv,datetime,pandas,dataframe | I found that the problem was to do with missing values within the column. Using coerce=True so df["Date"] = pd.to_datetime(df["Date"], coerce=True) solves the problem. | I've got an imported csv file which has multiple columns with dates in the format "5 Jan 2001 10:20". (Note not zero-padded day)
If I do df.dtypes then it shows the columns as being objects rather than strings or datetimes. I need to be able to subtract 2 column values to work out the difference, so I'm trying to get them into a state where I can do that.
At the moment if I try the test subtraction at the end I get the error unsupported operand type(s) for -: 'str' and 'str'.
I've tried multiple methods but have run into a problem every way I've tried.
Any help would be appreciated. If I need to give any more information then I will. | 0 | 1 | 3,761 |
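For reference, a minimal sketch using the errors keyword, which replaced the coerce flag in later pandas versions:

```python
import pandas as pd

s = pd.Series(["5 Jan 2001 10:20", "No data"])  # non-zero-padded day, missing value
dates = pd.to_datetime(s, errors="coerce")      # unparseable entries become NaT
elapsed = dates.max() - dates.min()             # subtraction now works
```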
0 | 49,290,726 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2016-12-13T20:37:00.000 | 0 | 1 | 0 | No module named 'tools' while importing scikits.talkbox | 41,130,126 | 0 | python-3.x,machine-learning,signal-processing | I had the same problem; just run pip install tools | I just installed scikits.talkbox and tried using it in my program. But I get the following error:
'ImportError: No module named 'tools'
How do I solve this problem? | 0 | 1 | 1,498 |
0 | 41,137,195 | 0 | 0 | 0 | 0 | 1 | true | 14 | 2016-12-14T07:15:00.000 | 11 | 2 | 0 | What's the difference between dummy variable and one-hot encoding? | 41,136,853 | 1.2 | python,machine-learning | In fact, there is no difference in the effect of the two approaches (rather wordings) on your regression.
In either case, you have to make sure that one of your dummies is left out (i.e. serves as base assumption) to avoid perfect multicollinearity among the set.
For instance, if you want to take the weekday of an observation into account, you only use 6 (not 7) dummies, assuming the one left out to be the base variable. When using one-hot encoding, your weekday variable is present as a categorical value in one single column, effectively having the regression use the first of its values as the base. | I'm making features for a machine learning model. I'm confused about dummy variables and one-hot encoding. For instance, a categorical variable 'week' ranges over 1-7. When using one-hot encoding, week = 1 is encoded as 1000000, week = 2 as 0100000, and so on. But I can also make a dummy variable 'week_v', and in this way, I must set a
hidden variable as the base variable, and feature week_v = 1 is 100000, week_v = 2 is 010000, and so on, and
week_v = 7 does not appear. So what's the difference between them? I'm using a logistic model, and then I'll try GBDT. | 0 | 1 | 14,355 |
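A minimal pandas sketch showing both encodings; drop_first=True leaves one level out as the base, as the answer recommends:

```python
import pandas as pd

week = pd.Series([1, 2, 3, 2, 1], name="week")
one_hot = pd.get_dummies(week)                   # one 0/1 column per observed level
dummies = pd.get_dummies(week, drop_first=True)  # first level dropped as the base
```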
0 | 41,163,228 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2016-12-15T11:33:00.000 | -2 | 4 | 0 | Package installation of Keras in Anaconda? | 41,163,150 | -0.099668 | python,anaconda,python-3.5,packaging,keras | Navigate to the Anaconda installation folder/Scripts and install with the pip command | Python 3.5: I am trying to find the command to install the Keras deep learning package for Anaconda. The command conda install -c keras does not work; can anyone explain why it doesn't work? | 0 | 1 | 6,149 |
0 | 41,199,608 | 0 | 0 | 1 | 0 | 1 | true | 0 | 2016-12-16T13:35:00.000 | 0 | 1 | 0 | fastest Connect 4 win checking method | 41,185,646 | 1.2 | python,performance,python-3.x,artificial-intelligence | From your question, it's a bit unclear how your approaches would be implemented. But from the alpha-beta pruning, it seems as if you want to look at a lot of different game states, and in the recursion determine a "score" for each one.
One very important observation is that recursion ends once a 4-in-a-row has been found. That means that at the start of a recursion step, the game board does not have any 4-in-a-row instances.
Using this, we can intuitively see that the new piece placed in said recursion step must be a part of any 4-in-a-row instance created during the recursion step. This greatly reduces the search space for solutions from a total of 69 (21 vertical, 24 horizontal, 12+12 diagonals) 4-in-a-row positions to a maximum of 13 (3 vertical, 4 horizontal, 3+3 diagonal).
This should be the baseline for your second approach. It will require a maximum of 52 (13*4) checks for a naive implementation, or 25 (6+7+6+6) checks for a faster algorithm.
Now it's pretty hard to beat 25 boolean checks for this win-check I'd say, but I'm guessing that your #1 approach trades some extra memory-usage to enable less calculation per recursion step. The simplest way of doing this would be to store 8 integers (single byte is fine for this application) which represent the longest chains of same-color chips that can be found in any of the 8 directions.
Using this, a check for win can be reduced to 8 boolean checks and 4 additions. Simply get the chain lengths on opposite sides of the newly placed chip, check if they're the same color as the chip, and if they are, add their lengths and add 1 (for the newly placed chip).
From this calculation, it seems as if your #1 approach might be the most efficient. However, it has a much larger overhead of maintaining the data structure, and uses more memory, something that should be avoided unless you can pass by reference. Also (assuming that boolean checks and additions are similar in speed) the much harder approach only wins by a factor 2 even when ignoring the overhead.
I've made some simplifications, and some explanations maybe weren't crystal clear, but ask if you have any further questions. | I am trying to make an AI following the alpha-beta pruning method for tic-tac-toe. I need to make checking a win as fast as possible, as the AI will go through many different possible game states. Right now I have thought of 2 approaches, neither of which is very efficient.
Create a large tuple for scoring every possible 4 in a row win conditions, and loop through that.
Using for loops, check horizontally, vertically, diag facing left, and diag facing right. This seems like it would be much slower than #1.
How would someone recommend doing it? | 0 | 1 | 812 |
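A minimal sketch of the answer's second baseline: after a move, count same-colour chips outward from the new piece along the four direction pairs (the board representation is an assumption):

```python
def is_win(board, row, col):
    """board: 2-D list (e.g. 6x7); checks only the lines through (row, col)."""
    player = board[row][col]
    rows, cols = len(board), len(board[0])
    # Four direction pairs: horizontal, vertical, and the two diagonals.
    for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
        count = 1
        for sign in (1, -1):                    # walk both ways from the new piece
            r, c = row + sign * dr, col + sign * dc
            while 0 <= r < rows and 0 <= c < cols and board[r][c] == player:
                count += 1
                r, c = r + sign * dr, c + sign * dc
        if count >= 4:
            return True
    return False
```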
0 | 41,200,368 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-12-17T12:47:00.000 | 2 | 3 | 0 | Pandas replace method and object datatypes | 41,198,719 | 0.132549 | python,pandas,dataframe | If the rest of the data in your columns is numeric, then you should use df.apply(pd.to_numeric, errors='coerce') | I am using df = df.replace('No data', np.nan) on a csv file containing ‘No data’ instead of blank/null entries where there is no numeric data. Using the head() method I see that the replace method does replace the ‘No data’ entries with NaN. When I use df.info() it says that the datatype for each of the series is object.
When I open the csv file in Excel and manually edit the data using find and replace to change ‘No data’ to blank/null entries, although the dataframes look exactly the same when I use df.head(), when I use df.info() it says that the datatypes are floats.
I was wondering why this was and how I can make it so that the datatypes for my series are floats, without having to manually edit the csv files. | 0 | 1 | 1,782 |
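A minimal sketch applying the suggestion column-wise, so the 'No data' strings become NaN and the dtypes become float:

```python
import pandas as pd

df = pd.DataFrame({"a": ["1.5", "No data", "2.0"]})
df = df.apply(pd.to_numeric, errors="coerce")  # 'No data' -> NaN, dtype -> float64
```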
0 | 42,058,253 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2016-12-17T14:08:00.000 | 0 | 1 | 0 | Install opencv in anaconda3 | 41,199,408 | 0 | python,opencv,anaconda | I might be missing something, but I believe you are just missing setting up the environment variables.
Set Environment Variables
Right-click on "My Computer" (or "This PC" on Windows 8.1) -> left-click Properties -> left-click "Advanced" tab -> left-click "Environment Variables..." button.
Add a new User Variable to point to the OpenCV (either x86 for 32-bit system or x64 for 64-bit system.) I am currently on a 64-bit machine.
| 32-bit or 64 bit machine? | Variable | Value |
|---------------------------|--------------|--------------------------------------|
| 32-bit | OPENCV_DIR | C:\opencv\build\x86\vc12 |
| 64-bit | OPENCV_DIR | C:\opencv\build\x64\vc12 |
Append %OPENCV_DIR%\bin to the User Variable PATH.
For example, my PATH user variable looks like this...
Before:
C:\Users\Johnny\Anaconda;C:\Users\Johnny\Anaconda\Scripts
After:
C:\Users\Johnny\Anaconda;C:\Users\Johnny\Anaconda\Scripts;%OPENCV_DIR%\bin | Hello guys, I've just installed Anaconda3 on Windows 8.1 and OpenCV 2.4.13 and 3.1.0. I've copied the file c:/..../opencv/build/python/2.7/x64/cv2.pyd and pasted it to C:\Users.....\Anaconda3\Lib\site-packages. I've pasted it both for OpenCV 2.4.13 as cv2.pyd and for OpenCV 3.1.0 as cv2(3).pyd, in order to swap them when I want to use either one. My system is 64-bit and I use Jupyter Notebook. When I run the command import cv2 it gives me:
ImportError Traceback (most recent call last)
in ()
----> 1 import cv2
In anaconda3 i use python3.5
ImportError: DLL load failed: The specified module could not be found. | 0 | 1 | 1,034 |
0 | 41,252,942 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-12-19T02:55:00.000 | 1 | 1 | 0 | Why does GridSearchCV give different optimums on repeated runs? | 41,215,169 | 0.197375 | python,machine-learning,scikit-learn,cross-validation,grid-search | Try setting the random seed if you want to get the same result each time. | I am performing parameter selection using GridSearchCv (sklearn package in python) where the model is an Elastic Net with a Logistic loss (i.e a logistic regression with L1- and L2- norm regularization penalties). I am using SGDClassifier to implement this model. There are two parameters I am interested in searching the optimal values for: alpha (the constant that multiplies the regularization term) and l1_ratio (the Elastic Net mixing parameter). My data set has ~300,000 rows. I initialize the model as follows:
sgd_ela = SGDClassifier(alpha=0.00001, fit_intercept=True, l1_ratio=0.1,loss='log', penalty='elasticnet')
and the searching fxn. as follows:
GridSearchCV(estimator=sgd_ela, cv=8, param_grid=tune_para),
with tuning parameters:
tune_para = [{'l1_ratio': np.linspace(0.1,1,10).tolist(),'alpha':[0.00001, 0.0001, 0.001, 0.01, 0.1, 1]}].
I get the best_params (of alpha and l1_ratio) upon running the code. However, in repeated runs, I do not get the same set of best parameters. I am interested to know why this is the case and, if possible, how I can overcome it? | 0 | 1 | 907 |
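A minimal sketch of fixing the seed, per the answer; the other arguments are copied from the question and the seed value is arbitrary:

```python
from sklearn.linear_model import SGDClassifier

sgd_ela = SGDClassifier(alpha=0.00001, fit_intercept=True, l1_ratio=0.1,
                        loss='log', penalty='elasticnet',
                        random_state=42)  # fixed seed -> reproducible runs
```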
0 | 41,230,349 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2016-12-19T18:42:00.000 | 0 | 2 | 0 | can import tensorflow in python 3.4 but not in ipython notebook | 41,228,983 | 0 | python,ubuntu,tensorflow,pip,ipython | Each major version of python has its own site-packages directory. It seems that you have both python 3.4 and 3.5 and you have jupyter installed in 3.5 and tensorflow in 3.4. The easy solution is to install tensorflow in 3.5 as well. This should allow you to use it with the 3.5 notebook kernel. You could attempt to add 3.4 as a kernel, but I am not sure how to do that. | I have been running in circles trying to get tensorflow to work in a jupyter notebook. I installed it via pip on ubuntu and also tried a conda environment (but unless I'm mistaken, getting that to work with ipython is beyond my ability). Tensorflow works fine in python3.4, but not python 3.5, which is used when I load ipython. I'm not sure if this question makes any sense, but can I make it so that ipython uses only python 3.4? The reason I need to use ipython instead of going through the python shell is that I am trying to use the kadenzie tutorial.
Thank you.
Edit: I'm not sure how applicable this is to other people with my problem, but I solved it by changing my conda python version (conda install python=3.4.3), uninstalling ipython, and then reinstalling it. | 0 | 1 | 255 |
0 | 50,797,957 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2016-12-19T18:42:00.000 | 0 | 2 | 0 | can import tensorflow in python 3.4 but not in ipython notebook | 41,228,983 | 0 | python,ubuntu,tensorflow,pip,ipython | The best way to setup tensorflow with jupyter
1. Install Anaconda
2. Create an environment named "tensorflow"
3. Activate that environment with the following command in the command prompt:
activate tensorflow
then type conda install ipykernel
then when it is installed, paste the following command:
python -m ipykernel install --user --name myenv --display-name "Python[Tensorflow]"
Then run jupyter notebook in command prompt
After that, when you create a new notebook you will see two kernel types; just select the TensorFlow one. | I have been running in circles trying to get tensorflow to work in a jupyter notebook. I installed it via pip on ubuntu and also tried a conda environment (but unless I'm mistaken, getting that to work with ipython is beyond my ability). Tensorflow works fine in python3.4, but not python 3.5, which is used when I load ipython. I'm not sure if this question makes any sense, but can I make it so that ipython uses only python 3.4? The reason I need to use ipython instead of going through the python shell is that I am trying to use the kadenzie tutorial.
Thank you.
Edit: I'm not sure how applicable this is to other people with my problem, but I solved it by changing my conda python version (conda install python=3.4.3), uninstalling ipython, and then reinstalling it. | 0 | 1 | 255 |
0 | 60,222,616 | 0 | 1 | 0 | 0 | 1 | false | 86 | 2016-12-20T01:33:00.000 | 2 | 3 | 0 | Meaning of inter_op_parallelism_threads and intra_op_parallelism_threads | 41,233,635 | 0.132549 | python,parallel-processing,tensorflow,distributed-computing | Tensorflow 2.0 Compatible Answer: If we want to execute in Graph Mode of Tensorflow Version 2.0, the function in which we can configure inter_op_parallelism_threads and intra_op_parallelism_threads is
tf.compat.v1.ConfigProto. | Can somebody please explain the following TensorFlow terms
inter_op_parallelism_threads
intra_op_parallelism_threads
or, please, provide links to the right source of explanation.
I have conducted a few tests by changing the parameters, but the results have not been consistent enough to arrive at a conclusion. | 0 | 1 | 54,304 |
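A minimal sketch of setting both options via ConfigProto (TF 1.x / compat API; the thread counts are illustrative):

```python
import tensorflow as tf

config = tf.compat.v1.ConfigProto(
    intra_op_parallelism_threads=4,   # thread pool used inside a single op (e.g. matmul)
    inter_op_parallelism_threads=2)   # thread pool for ops that can run concurrently
sess = tf.compat.v1.Session(config=config)
```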
0 | 41,257,567 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-12-21T07:29:00.000 | 11 | 1 | 0 | OpenCV Error: Assertion failed (L.channels() == 1 && I.channels() == 1) in connectedComponents_sub1 | 41,257,336 | 1 | python,python-2.7,opencv,opencv3.1,connected-components | Let us analyze it:
Assertion failed (L.channels() == 1 && I.channels() == 1)
The images that you are passing to some function should be 1 channel (gray not color).
__extractPlantArea(plant_img)
That happened in your code exactly at the function called __extractPlantArea.
cv2.connectedComponentsWithStats
While you are calling the OpenCV function called connectedComponentsWithStats.
Conclusion:
Do not pass colorful (BGR) image to connectedComponentsWithStats | I got the following error in OpenCV (python) and have googled a lot but have not been able to resolve.
I would be grateful if anyone could provide me with some clue.
OpenCV Error: Assertion failed (L.channels() == 1 && I.channels() == 1)
in connectedComponents_sub1, file /home/snoopy/opencv-
3.1.0/modules/imgproc/src/connectedcomponents.cpp, line 341
Traceback (most recent call last):
File "test.py", line 30, in
plant = analyzeplant.analyzeSideView(plant)
File "/home/snoopy/Desktop/Leaf-201612/my-work-
editing/ripps/src/analyzePlant.py", line 229, in analyzeSideView
plant_img = self.__extractPlantArea(plant_img)
File "/home/snoopy/Desktop/Leaf-201612/my-work-
editing/ripps/src/analyzePlant.py", line 16, in __extractPlantArea
output = cv2.connectedComponentsWithStats(plant, 4, cv2.CV_32S)
cv2.error: /home/snoopy/opencv-
3.1.0/modules/imgproc/src/connectedcomponents.cpp:341: error: (-215) > L.channels() == 1 && I.channels() == 1 in function
connectedComponents_sub1 | 0 | 1 | 9,922 |
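A minimal sketch of the fix the answer describes: convert to a single-channel image (and threshold to a binary mask) before calling connectedComponentsWithStats (the file name is an assumption):

```python
import cv2

plant = cv2.imread("plant.png")                 # 3-channel BGR image
gray = cv2.cvtColor(plant, cv2.COLOR_BGR2GRAY)  # 1 channel, as the assertion demands
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary, 4, cv2.CV_32S)
```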
0 | 41,295,384 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-12-21T23:47:00.000 | 0 | 1 | 0 | TensorFlow while_loop parallelization TensorArray | 41,273,756 | 0 | python,tensorflow | You should probably get the parallel execution of the first 5 iterations and the second 5 iterations. I can say for sure if you provide a code sample. | I don't exactly understand how the while_loop parallelization works. Suppose I have a TensorArray having 10 Tensors all of same shape. Now suppose the computations in the loop body for the first 5 Tensors are independent of the computations in the remaining 5 Tensors. Would TensorFlow run these two in parallel? Also if I use a Tensor instead of a TensorArray and made the updates to it using scatter_update, would it pass the gradients properly during backprop? | 0 | 1 | 558 |
0 | 41,279,184 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-12-22T04:59:00.000 | 0 | 1 | 0 | Why NLTK uses regular expressions for word tokenization, but training for sentence tokenization? | 41,276,039 | 1.2 | python,nlp,nltk | I'm not sure if you can say that sentence splitting is harder than (word) tokenisation. But tokenisation depends on sentence splitting, so errors in sentence splitting will propagate to tokenisation. Therefore you'd want to have reliable sentence splitting, so that you don't have to make up for it in tokenisation. And it turns out that once you have good sentence splitting, tokenisation works pretty well with regexes.
Why is that? – One of the major ambiguities in tokenisation (in Latin script languages, at least) is the period ("."): It can be a full stop (thus a token of its own), an abbreviation mark (belonging to that abbreviation token), or something special (like part of a URL, a decimal fraction, ...). Once the sentence splitter has figured out the first case (full stops), the tokeniser can concentrate on the rest. And identifying stuff like URLs is exactly what you would use a regex for, isn't it?
The sentence splitter's main job, on the other hand, is to find abbreviations with a period. You can create a list for that by hand – or you can train it on a big text collection. The good thing is, it's unsupervised training – you just feed in the plain text, and the splitter collects abbreviations. The intuition is: If a token almost always appears with a period, then it's probably an abbreviation. | I am using NLTK in python. I understood that it uses regular expressions in its word tokenization functions, such as TreebankWordTokenizer.tokenize(), but it uses trained models (pickle files) for sentence tokenization. I don't understand why they don't use training for word tokenization? Does it imply that sentence tokenization is a harder task? | 0 | 1 | 198 |
0 | 54,832,398 | 0 | 0 | 1 | 0 | 1 | false | 2 | 2016-12-22T14:30:00.000 | 1 | 2 | 0 | Can't find "gen_training_ops" in the tensorflow GitHub | 41,285,440 | 0.099668 | python,optimization,tensorflow,deep-learning,jupyter-notebook | If you find it, you'll realize it just jumps to python/framework, where the actual update is just an assign operation that then gets grouped | I'm working on a new optimizer, and I managed to work out most of the process. The only thing I'm stuck on currently is finding gen_training_ops.
Apparently this file is crucial, because in both implementations of Gradient Descent, and Adagrad optimizers they use functions that are imported out of a wrapper file for gen_training_ops (training_ops.py in the python/training folder).
I can't find this file anywhere, so I suppose I don't understand something and am searching in the wrong place. Where can I find it? (Or specifically the implementations of apply_adagrad and apply_gradient_descent)
Thanks a lot :) | 0 | 1 | 719 |
0 | 41,299,177 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2016-12-23T09:23:00.000 | 0 | 3 | 0 | OpenCV VideoCapture device index / device number | 41,298,588 | 0 | python,c++,windows,opencv,usb | If you can differentiate the cameras by their serial number or device and vendor id, you can loop through all video devices before opening with opencv and search for the camera device you want to open. | I have a python environment (on Windows 10) that uses OpenCV VideoCapture class to connect to multiple usb cameras.
As far as I know, there is no way to identify a specific camera in OpenCV other than the device parameter in the VideoCapture class constructor / open method.
The problem is that the device parameter changes depending on how many cameras are actually connected and to which usb ports.
I want to be able to identify a specific camera and find its "device index" or "camera index" no matter how many cameras are connected and to which usb ports.
Can somebody please suggest a way to achieve that functionality? python code is preferable but C++ will also do. | 0 | 1 | 10,992 |
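OpenCV itself cannot read serial numbers, but here is a minimal sketch for probing which device indices are currently live; mapping an index to a specific physical camera still needs an external check (e.g. a known test frame or OS-level device info):

```python
import cv2

def list_camera_indices(max_index=10):
    """Return the device indices that can currently be opened."""
    available = []
    for i in range(max_index):
        cap = cv2.VideoCapture(i)
        if cap.isOpened():
            available.append(i)
            cap.release()
    return available

print(list_camera_indices())
```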
0 | 41,499,342 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-12-23T09:24:00.000 | 0 | 1 | 0 | Python for Data Analysis: Chp 2 Pg 38 "prop_cumsum" error | 41,298,599 | 0 | python,cumsum,prop | It seems that you invoked sort_index instead of sort_values. The by='prop' doesn't make sense in such a context (you sort the index by the index, not by columns in the data frame).
Also, in my early-release copy of the 2nd edition, this appears near the top of page 43. But since this is an early release, the page numbers may be fluid. | I'm working through this book and keep running into an error when I run "prop_cumsum":
> prop_cumsum = df.sort_index(by='prop', ascending=False).prop.cumsum()
/Users/anaconda/lib/python3.5/site-packages/ipykernel/main.py:1:
FutureWarning: by argument to sort_index is deprecated, pls use
.sort_values(by=...) if name == 'main':
--------------------------------------------------------------------------- KeyError Traceback (most recent call
last)
/Users/anaconda/lib/python3.5/site-packages/pandas/indexes/base.py in
get_loc(self, key, method, tolerance) 1944 try:
-> 1945 return self._engine.get_loc(key) 1946 except KeyError:
pandas/index.pyx in pandas.index.IndexEngine.get_loc
(pandas/index.c:4154)()
pandas/index.pyx in pandas.index.IndexEngine.get_loc
(pandas/index.c:4018)()
pandas/hashtable.pyx in pandas.hashtable.PyObjectHashTable.get_item
(pandas/hashtable.c:12368)()
pandas/hashtable.pyx in pandas.hashtable.PyObjectHashTable.get_item
(pandas/hashtable.c:12322)()
KeyError: 'prop'
During handling of the above exception, another exception occurred:
KeyError Traceback (most recent call
last) in ()
----> 1 prop_cumsum = df.sort_index(by='prop', ascending=False).prop.cumsum()
/Users/anaconda/lib/python3.5/site-packages/pandas/core/frame.py in
sort_index(self, axis, level, ascending, inplace, kind, na_position,
sort_remaining, by) 3237 raise ValueError("unable
to simultaneously sort by and level") 3238 return
self.sort_values(by, axis=axis, ascending=ascending,
-> 3239 inplace=inplace) 3240 3241 axis = self._get_axis_number(axis)
/Users/anaconda/lib/python3.5/site-packages/pandas/core/frame.py in
sort_values(self, by, axis, ascending, inplace, kind, na_position)
3149 3150 by = by[0]
-> 3151 k = self[by].values 3152 if k.ndim == 2: 3153
/Users/anaconda/lib/python3.5/site-packages/pandas/core/frame.py in
getitem(self, key) 1995 return self._getitem_multilevel(key) 1996 else:
-> 1997 return self._getitem_column(key) 1998 1999 def _getitem_column(self, key):
/Users/anaconda/lib/python3.5/site-packages/pandas/core/frame.py in
_getitem_column(self, key) 2002 # get column 2003 if self.columns.is_unique:
-> 2004 return self._get_item_cache(key) 2005 2006 # duplicate columns & possible reduce dimensionality
/Users/anaconda/lib/python3.5/site-packages/pandas/core/generic.py in
_get_item_cache(self, item) 1348 res = cache.get(item) 1349 if res is None:
-> 1350 values = self._data.get(item) 1351 res = self._box_item_values(item, values) 1352
cache[item] = res
/Users/anaconda/lib/python3.5/site-packages/pandas/core/internals.py
in get(self, item, fastpath) 3288 3289 if not
isnull(item):
-> 3290 loc = self.items.get_loc(item) 3291 else: 3292 indexer =
np.arange(len(self.items))[isnull(self.items)]
/Users/anaconda/lib/python3.5/site-packages/pandas/indexes/base.py in
get_loc(self, key, method, tolerance) 1945 return
self._engine.get_loc(key) 1946 except KeyError:
-> 1947 return self._engine.get_loc(self._maybe_cast_indexer(key)) 1948 1949
indexer = self.get_indexer([key], method=method, tolerance=tolerance)
pandas/index.pyx in pandas.index.IndexEngine.get_loc
(pandas/index.c:4154)()
pandas/index.pyx in pandas.index.IndexEngine.get_loc
(pandas/index.c:4018)()
pandas/hashtable.pyx in pandas.hashtable.PyObjectHashTable.get_item
(pandas/hashtable.c:12368)()
pandas/hashtable.pyx in pandas.hashtable.PyObjectHashTable.get_item
(pandas/hashtable.c:12322)()
KeyError: 'prop' | 0 | 1 | 187 |
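A minimal runnable sketch of the corrected line from the answer; the toy frame below stands in for the book's baby-names data:

```python
import pandas as pd

df = pd.DataFrame({'name': ['John', 'Mary', 'Anna'],
                   'prop': [0.08, 0.05, 0.02]})
prop_cumsum = df.sort_values(by='prop', ascending=False).prop.cumsum()
```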
0 | 50,601,700 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2016-12-23T09:57:00.000 | -1 | 2 | 0 | TensorFlow from Google - Data Security | 41,299,126 | 0 | python,machine-learning,tensorflow,deep-learning,google-developer-tools | Doesn't TF actually also use Google's models from the cloud? I'm pretty sure Google uses cloud data to provide better models for TF.
I'd recommend you stay away from it. Only by writing your models from scratch will you learn to do useful stuff with it long term. I can also recommend Weka for Java; it's open source, and you can look at the code of the models there and implement them yourself, adjusting for your needs. | Does anyone have any idea whether Google collects data that one supplies to TensorFlow? I mean, it is open source, but it falls under their licences. | 1 | 1 | 861 |
0 | 41,315,329 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2016-12-24T09:54:00.000 | 2 | 2 | 0 | Number of classes for inception network (Tensorflow) | 41,312,197 | 1.2 | python,computer-vision,tensorflow,conv-neural-network | Only two classes. "Not food" is your background class. If you were trying to detect food or dogs, you could have 3 classes: "food", "dog", "neither food nor dog". | I see that a background class is used as a bonus class. So is this class used when an image does not fall into any of the other classes? In my case, I have a binary problem and I want to understand if an image contains food or not. Do I need to use 2 classes + 1 background class = 3 classes, or only 2 classes? | 0 | 1 | 397 |
0 | 41,329,052 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2016-12-25T03:15:00.000 | 1 | 1 | 0 | Voice activated password implementation in python | 41,318,435 | 1.2 | python-2.7,numpy,scipy,voice-recognition,voice | It is not possible to compare two speech samples on a sample level (or in the time domain). Each part of the spoken words might vary in length, so they won't match up, and the levels of each part will also vary, and so on. Another problem is that the phase of the individual components that the sound signal consists of can change too, so that two signals that sound the same can look very different in the time domain. So likely the best solution is to move the signal into the frequency domain. One common way to do this is using the Fast Fourier Transform (FFT). You can look it up; there is a lot of material about this on the net, and good support for it in Python.
Then you could proceed like this:
Divide the sound sample into small segments of a few milliseconds.
Find the principal coefficients of the FFT of each segment.
Compare the sequences of some selected principal coefficients. | I want to record a word beforehand, and when the same password is spoken into the python script, the program should run if the spoken password matches the previously recorded file. I do not want to use speech recognition toolkits, as the password might not be a proper word but could be complete gibberish. I started by saving the previously recorded file and the newly spoken sound as numpy arrays. Now I need a way to determine if the two arrays are 'close' to each other. Can someone point me in the right direction for this? | 0 | 1 | 851 |
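A minimal NumPy sketch of the outlined steps; the segment length, number of coefficients, and distance threshold are illustrative assumptions that need tuning:

```python
import numpy as np

def spectral_signature(samples, seg_len=512, n_coeffs=16):
    """Magnitude of the first FFT coefficients for each short segment."""
    n_segs = len(samples) // seg_len
    segs = samples[:n_segs * seg_len].reshape(n_segs, seg_len)
    return np.abs(np.fft.rfft(segs, axis=1))[:, :n_coeffs]

def is_match(recorded, spoken, threshold=0.2):
    a, b = spectral_signature(recorded), spectral_signature(spoken)
    n = min(len(a), len(b))                 # crude alignment of unequal lengths
    return np.linalg.norm(a[:n] - b[:n]) / n < threshold
```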
0 | 65,705,673 | 0 | 0 | 0 | 0 | 1 | false | 13 | 2016-12-26T10:15:00.000 | 2 | 4 | 0 | LineSegmentDetector in Opencv 3 with Python | 41,329,665 | 0.099668 | python-2.7,opencv3.0 | The old implementation is not available.
Now it is available as follows:
fld = cv2.ximgproc.createFastLineDetector()
lines = fld.detect(image) | Can a sample implementation code or a pointer be provided for implementing LSD with opencv 3.0 and python? HoughLines and HoughLinesP are not giving desired results in python and want to test LSD in python but am not getting anywhere.
I have tried to do the following:
LSD=cv2.createLineSegmentDetector(0)
lines_std=LSD.detect(mixChl)
LSD.drawSegments(mask,lines_std)
However when I draw lines on the mask I get an error which is:
LSD.drawSegments(mask,lines_std) TypeError: lines is not a numerical tuple
Can someone please help me with this?
Thanks in advance. | 0 | 1 | 18,293 |
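Regarding the TypeError in the question: in the Python binding, detect appears to return a tuple (lines, widths, precisions, nfa), so a minimal sketch of the fix, under that assumption, passes only the lines array to drawSegments:

```python
import cv2

img = cv2.imread("scene.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # the detector expects one channel
lsd = cv2.createLineSegmentDetector(0)
lines, widths, prec, nfa = lsd.detect(gray)    # detect returns a 4-tuple
drawn = lsd.drawSegments(img, lines)           # pass the lines array, not the tuple
```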
0 | 41,346,692 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2016-12-27T11:07:00.000 | 0 | 2 | 0 | Why does the 'tensorflow' module import fail in Spyder and not in Jupyter Notebook and not in Python prompt? | 41,344,017 | 0 | python,ubuntu,anaconda,environment,spyder | (Posted on behalf of the OP).
It is solved: I reinstalled Spyder and it works properly now. Thank you. | I have not used Linux/Unix for more than a decade. Why does the 'tensorflow' module import fail in Spyder and not in Jupyter Notebook and not in the Python prompt?
SCENARIO:
[terminal] spyder
[spyder][IPython console] Type 'import tensorflow as tf' in the IPython console
CURRENT RESULT:
[spyder][IPython console] Error message: 'ImportError: No module named 'tensorflow''
ADDITIONAL INFORMATION:
OS: Ubuntu 14.04 (VMWare)
Python: Python 3.5.2 :: Anaconda custom (64-bit)
Install of TensorFlow:
[terminal] sudo -s
[terminal] conda create --name=IntroToTensorFlow python=3 anaconda
[terminal] source activate IntroToTensorFlow
[terminal] conda install -c conda-forge tensorflow
PATH = $PATH:/home/mo/anaconda3/envs/IntroToTensorFlow/bin
COMMENTS:
When I replay the following scenario, it works fine:
[terminal] sudo -s
[terminal] source activate IntroToTensorFlow
[terminal] python
[python] import tensorflow as tf
When I replay the tensorflow import in Jupyter Notebook, it works fine too
WHAT I HAVE DONE SO FAR:
I Googled it but I did not find a suitable anwser
I searched in the Stack Overflow questions | 0 | 1 | 1,193 |
0 | 66,783,030 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2016-12-27T13:25:00.000 | 0 | 1 | 0 | Feeding a seed value to solver in Python Logistic Regression | 41,346,055 | 0 | python,machine-learning,scikit-learn,logistic-regression | You can use the warm_start option (with solver not liblinear), and manually set coef_ and intercept_ prior to fitting.
warm_start : bool, default=False
When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. Useless for liblinear solver. | I am using scikit-learn's linear_model.LogisticRegression to perform multinomial logistic regress. I would like to initialize the solver's seed value, i.e. I want to give the solver its initial guess as the coefficients' values.
Does anyone know how to do that? I have looked online and sifted through the code too, but haven't found an answer.
Thanks! | 0 | 1 | 493 |
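A minimal sketch of one way to exercise this; whether warm_start picks up manually assigned coefficients like this should be verified against your scikit-learn version:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(solver='lbfgs', warm_start=True,
                         multi_class='multinomial', max_iter=200)
clf.fit(X, y)                          # first fit creates coef_ / intercept_
clf.coef_ = np.zeros_like(clf.coef_)   # manually seed the next solve
clf.intercept_ = np.zeros_like(clf.intercept_)
clf.fit(X, y)                          # warm_start reuses the seeded values
```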
0 | 41,402,234 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2016-12-28T10:39:00.000 | 5 | 1 | 0 | How to do a FIFO push-operation for rows on Pandas dataframe in Python? | 41,360,265 | 1.2 | python,pandas | @JohnGalt posted an answer to this in the comments. Thanks a lot. I just wanted to put the answer here in case people are looking for similar information in the future.
df = df.shift(1); df.loc[0] = new_row
df.shift(n) will shift the rows down n places, filling the first n rows with NaN and dropping the values of the last n rows. The number of rows of df will not change with df.shift.
I hope this is helpful. | I need to maintain a Pandas dataframe with 500 rows, and as the next row becomes available I want to push that new row in and throw out the oldest row from the dataframe. e.g. Let's say I maintain row 0 as newest, and row 500 as oldest. When I get a new data, I would push data to row 0, and it will shift row 0 to row 1, and so on until it pushes row 499 to row 500 (and row 500 gets deleted).
Is there a way to do such a FIFO operation on Pandas? Thanks guys! | 0 | 1 | 2,326 |
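A minimal runnable sketch of the FIFO push (note that shift returns a new frame, hence the assignment back):

```python
import pandas as pd

df = pd.DataFrame({'price': range(500)})  # 500-row rolling buffer
df = df.shift(1)      # rows move down one; the oldest values fall off the end
df.loc[0] = [999]     # newest observation enters at row 0; length stays 500
```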
0 | 41,366,334 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2016-12-28T11:25:00.000 | 1 | 1 | 0 | Understanding Spark MLlib ALS.trainImplicit input format | 41,361,080 | 1.2 | python,pyspark,collaborative-filtering | It is not necessary (for implicit) and shouldn't be done (for explicit), so in this case pass only the data you actually have. | I'm trying to make a recommender system based on purchase history using trainImplicit. My input is in the domain [1, +inf) (the sum of views and purchases).
So the element of my input RDD looks like this: [(user_id,item_id),rating] --> [(123,5564),6] - the user(id = 123) interacted with the item(id=5564) 6 times.
Should I add to my RDD elements such as [(user_id,item_id),rating] --> [(123,2222),0], meaning that the given user has never interacted with the given item, or does ALS.trainImplicit handle this implicitly? | 0 | 1 | 363 |
0 | 41,361,442 | 0 | 0 | 0 | 0 | 2 | true | 2 | 2016-12-28T11:29:00.000 | 2 | 3 | 0 | python pandas: create multiple empty dataframes | 41,361,151 | 1.2 | python,pandas | the constructor pd.DataFrame must be called like a function, i.e. followed by parentheses (). Right now you are referring to pd.Dataframes, which doesn't exist (also note the final 's').
the for x-construction you're using creates a sequence. In this form you can't assign it to the variable x. Instead, enclose everything right of the equal sign '=' in () or []
it's usually not a good idea to use the same variable x both at the left hand side and at the right hand side of the assignment, although it won't give you a language error (but possibly much confusion!).
to connect the names in dfnames to the dataframes, use e.g. a dict:
dataFrames = {name: pd.DataFrame() for name in dfnames}
dfnames = ['df0', 'df1', 'df2']
x = pd.Dataframes for x in dfnames
The above mentionned line returns error syntax.
What would be the correct way to create the dataframes? | 0 | 1 | 3,447 |
0 | 41,361,512 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2016-12-28T11:29:00.000 | 0 | 3 | 0 | python pandas: create multiple empty dataframes | 41,361,151 | 0 | python,pandas | You can't have many data frames within a single variable name, here you are trying to save all empty data frames in x. Plus, you are using wrong attribute name, it is pd.DataFrame and not pd.Dataframes.
I did this and it worked-
dfnames = ['df0', 'df1', 'df2']
dataframes = [pd.DataFrame() for _ in dfnames] | I am trying to create multiple empty pandas dataframes in the following way:
dfnames = ['df0', 'df1', 'df2']
x = pd.Dataframes for x in dfnames
The above mentionned line returns error syntax.
What would be the correct way to create the dataframes? | 0 | 1 | 3,447 |
0 | 41,371,381 | 0 | 0 | 0 | 0 | 1 | true | 9 | 2016-12-28T23:09:00.000 | 7 | 1 | 0 | Is it possible to merge multiple TensorFlow graphs into one? | 41,370,987 | 1.2 | python,machine-learning,tensorflow | I kicked this around with my local TF expert, and the brief answer is "no"; TF doesn't have a built-in facility for this. However, you could write custom endpoint layers (input and output) with synch operations from Python's process management, so that they'd maintain parallel processing of each input, and concatenate the outputs.
Rationale
I like the way this could be used to get greater accuracy with multiple features, where the features have little or no correlation. For instance, you could train two character recognition models: one to identify the digit, the other to discriminate between left- and right-handed writers.
This would also allow you to examine the internal kernels that evolved for each individual feature, without interdependence with other features: the double-loop of an '8' vs the general slant of right-handed writing.
I also expect that the models for individual features will converge measurably faster than one over-arching training session.
Finally, it's quite possible that the individual models could be used in mix-and-match feature sets. For instance, you could train another model to differentiate letters, while your previously-trained left/right flagger would still have a pretty good guess at the writer's handedness. | I have two models trained with TensorFlow in Python, exported to binary files named export1.meta and export2.meta. Each file will generate only one output when fed an input, say output1 and output2.
My question is if it is possible to merge two graphs into one big graph so that it will generate output1 and output2 together in one execution.
Any comment will be helpful. Thanks in advance! | 0 | 1 | 2,821 |
0 | 41,383,430 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-12-29T13:25:00.000 | 0 | 1 | 0 | installing sklearn version 0.18.1 in Apache web server | 41,380,710 | 0 | python,apache,flask,scikit-learn | Use anaconda. It will save you so much time with these annoying dependency issues. | I am trying to install the latest version (0.18.1) of sklearn for use in a web app
I am hosting my webapp with apache web server and flask
I have tried sudo apt-get -y install python3-sklearn and this works but installs an older version of sklearn (0.17)
I have also tried pip3 and easy_install and these complete the install but are not picked up by flask or apache.
I get the following error log on my apache server
[Thu Dec 29 13:07:45.505294 2016] [wsgi:error] [pid 31371:tid 140414290982656] [remote 90.201.35.82:25030] from sklearn.gaussian_process import GaussianProcessRegressor
[Thu Dec 29 13:07:45.505315 2016] [wsgi:error] [pid 31371:tid 140414290982656] [remote 90.201.35.82:25030] ImportError: cannot import name 'GaussianProcessRegressor'
This is because I am trying to access some features of sklearn which are not present in 0.17 but are there in 0.18.1
Any ideas? | 1 | 1 | 362 |
0 | 41,382,785 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2016-12-29T15:34:00.000 | 2 | 3 | 0 | Numpy Array Change indices | 41,382,736 | 1.2 | python,numpy | There are two candidate functions here, np.reshape(x, shape) and np.transpose(x, axes); note, though, that reshape only reinterprets the shape without reordering the underlying data, so for moving an axis you want transpose.
For pictures I propose np.transpose(x, axes), which can be applied using
X_train = np.transpose(X_train, (3,0,1,2)) | I have a numpy array with 32 x 32 x 3 pictures with X_train.shape: (32, 32, 3, 73257). However, I would like to have the following array-shape: (73257, 32, 32, 3).
How can I accomplish this? | 0 | 1 | 3,420 |
0 | 41,387,325 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2016-12-29T20:04:00.000 | 0 | 1 | 0 | Split Dartboard into Polygons | 41,386,463 | 0 | python-2.7,opencv | Not sure what the issue is. Normally, x and y coordinates (of the dart) will be given relative to the top-left corner of the image, so you will need to subtract the coordinates of the dartboard's centre (roughly the board radius in each direction, if the board fills the image) to get your coordinates relative to the centre of the dartboard.
There are 20 segments on a dartboard, so each segment will subtend 360/20 or 18 degrees around the centre. You can get the angle from the vertical using the inverse tangent of x/y (atan2) and test which segment it falls in, and therefore which number it corresponds to.
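A minimal sketch of that angle-to-number lookup (the clockwise number order is the standard board layout; the coordinate convention - centre origin, y pointing up - is an assumption to match to your setup):
import math

SEGMENTS = [20, 1, 18, 4, 13, 6, 10, 15, 2, 17, 3, 19, 7, 16, 8, 11, 14, 9, 12, 5]

def segment_number(x, y):
    angle = math.degrees(math.atan2(x, y)) % 360  # 0 degrees is straight up, increasing clockwise
    return SEGMENTS[int(((angle + 9) % 360) // 18)]  # each 18-degree wedge is centred on its number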
The distance from the centre will be sqrt(x^2 + y^2), and you can test whether that falls within the ring radii corresponding to a treble or a double. | I'm looking for a way to split a dartboard image into polygons so that given an x,y coordinate I can find out which zone the dart fell within. I have found a working python script to detect if the coordinate falls within a polygon (stored as a list of x,y pairs), but I am lost as to how to generate the polygons as a list of points. I'm open to creating the "shape map", if that's the correct term, in whatever way necessary to get it done; I just don't know the correct technology or method to do this.
Any advice is welcome! | 0 | 1 | 109 |
0 | 41,408,698 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-12-30T23:39:00.000 | 1 | 1 | 0 | Detect object in an image using openCV python on a raspberry pi | 41,404,053 | 0.197375 | python,opencv,image-processing,raspberry-pi | Well, I can suggest a way of doing this: use some kind of object detection coupled with a machine learning algorithm. First train your program to recognize the closed box - take, say, 10 pictures of the closed box (just an example) and train on those, so the program can detect when the box is closed. When the box is not closed (i.e. open, missing, or something else), you can then code your program to fire off a signal or whatever it is you are trying to do. So the first step is to write code for object detection; there are numerous ways of doing this alone, like Haar classification or Support Vector Machines. Once you have trained your program to look for the closed box, you can run it to predict what's happening in every frame of the camera feed. Hope this answered your question! Cheers! | I have a small project that I am tinkering with. I have a small box and I have attached my camera on top of it. I want to get a notification if anything is added or removed from it.
My original logic was to constantly take images and compare them to see the difference, but that approach is unreliable: even two images of the same scene show differences when compared, and I do not know why.
Can anyone suggest me any other way to achieve this? | 0 | 1 | 619 |
0 | 41,404,825 | 0 | 0 | 0 | 0 | 2 | true | 12 | 2016-12-31T02:08:00.000 | 12 | 2 | 0 | statsmodels add_constant for OLS intercept, what is this actually doing? | 41,404,817 | 1.2 | python,linear-regression,statsmodels | It doesn't add a constant to your values, it adds a constant term to the linear equation it is fitting. In the single-predictor case, it's the difference between fitting a line y = mx to your data vs fitting y = mx + b. | Reviewing linear regressions via statsmodels OLS fit I see you have to use add_constant to add a constant '1' to all your points in the independent variable(s) before fitting. However my only understanding of intercepts in this context would be the value of y for our line when our x equals 0, so I'm not clear what purpose always just injecting a '1' here serves. What is this constant actually telling the OLS fit?
0 | 43,397,319 | 0 | 0 | 0 | 0 | 2 | false | 12 | 2016-12-31T02:08:00.000 | 7 | 2 | 0 | statsmodels add_constant for OLS intercept, what is this actually doing? | 41,404,817 | 1 | python,linear-regression,statsmodels | sm.add_constant in statsmodels plays the same role as sklearn's fit_intercept parameter in LinearRegression(). If you don't call sm.add_constant, or use LinearRegression(fit_intercept=False), then both statsmodels and sklearn assume that b=0 in y = mx + b, and they'll fit the model using b=0 instead of estimating what b should be from your data. | Reviewing linear regressions via statsmodels OLS fit I see you have to use add_constant to add a constant '1' to all your points in the independent variable(s) before fitting. However my only understanding of intercepts in this context would be the value of y for our line when our x equals 0, so I'm not clear what purpose always just injecting a '1' here serves. What is this constant actually telling the OLS fit?
0 | 62,550,444 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-12-31T07:17:00.000 | 1 | 2 | 0 | pandas "cumulative" rolling_corr | 41,406,339 | 0.099668 | python,pandas,rolling-computation | Just use a rolling correlation with a window at least as long as the series and min_periods=1, e.g. s1.rolling(window=len(s1), min_periods=1).corr(s2). | Is there any built-in pandas' method to find the cumulative correlation between two pandas series?
What it should do is effectively fix the left side of the window in pandas.rolling_corr(data, window) so that the width of the window increases and eventually the window includes all data points. | 0 | 1 | 506
0 | 45,947,287 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-12-31T09:53:00.000 | 0 | 2 | 0 | Does TensorFlow execute entire computation graph with sess.run()? | 41,407,241 | 0 | python,machine-learning,tensorflow | Since the Python code of TF only sets up the graph, which is actually executed by the native implementations of all ops, your variables need to be initialized in this underlying environment. This happens by running two ops - for global and local variable initialization:
session.run([tf.global_variables_initializer(), tf.local_variables_initializer()])
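A minimal sketch of the usual pattern (TF 1.x API):
import tensorflow as tf

a = tf.Variable(1.0)
b = a * 2.0
with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), tf.local_variables_initializer()])
    print(sess.run(b))  # variables must be initialized before any op that reads them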
On the original question: as far as I know, sess.run does not compute the entire graph; it computes only the subgraph needed to produce the requested fetches, and you only need to feed the placeholders that this subgraph depends on. | For example, when we compute a variable c as result = sess.run(c), does TF only compute the inputs required for computing c, or update all the variables of the complete computational graph?
Also, I don't seem to be able to do this:
c = c*a*b
as I am stuck with an uninitialized-variable error even after initializing c as tf.Variable(tf.constant(1)). Any suggestions? | 0 | 1 | 1,412
0 | 41,417,067 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-01-01T15:44:00.000 | 0 | 1 | 0 | Cluster two features in Python | 41,416,652 | 0 | python,machine-learning,scikit-learn,cluster-analysis | You can use scikit-learn's Affinity Propagation or Mean Shift implementations for clustering. Those algorithms will output a number of clusters and centers. Using the Y seems to be a different question, because you can't plot multi-dimensional points on a plane unless you bring in some other libraries. | I have two sparse scipy matrices, title and paragraph, whose dimensions are (284,183) and (284,4195) respectively. Each row of both matrices is the features from one instance of my dataset. I wish to cluster these without a predefined number of clusters and then plot them.
I also have an array Y, of shape (284,1), that relates to each row. One class is represented by 0, the other by 1. I would like to color the points using this. How can I do this using Python? | 0 | 1 | 156
0 | 41,447,225 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2017-01-03T15:38:00.000 | 0 | 1 | 0 | Scipy.optimize.minimize using a design vector x that contains integers only | 41,447,048 | 0 | python,optimization,scipy,integer,minimum | That is actually a much harder problem mathematically; the same algorithm will not be capable of it. Integer programming is NP-hard in general. Maybe check out pyglpk, and look into mixed-integer programming. | I'd like to minimize some objective function f(x1,x2,x3) in Python. It's quite a simple function, but the problem is that the design vector x=[x1,x2,x3] contains integers only.
So for example I'd like to get the result:
"f is minimum for x=[1, 3, 2]" and not:
"f is minimum for x=[1.12, 3.36, 2.24]" since this would not make any sense for my problem.
Is there any way to rig scipy.minimize to solve this kind of problem? Or is there any other Python library capable of doing this? | 0 | 1 | 267 |
0 | 59,636,728 | 0 | 1 | 0 | 0 | 1 | false | 10 | 2017-01-03T15:56:00.000 | 2 | 6 | 0 | Easy way to add thousand separator to numbers in Python pandas DataFrame | 41,447,383 | 0.066568 | python,pandas,number-formatting,separator | If you want "." as the thousand separator and "," as the decimal separator, this will work:
Data = pd.read_excel(path)
Data[my_numbers] = Data[my_numbers].map('{:,.2f}'.format).str.replace(",", "~").str.replace(".", ",").str.replace("~", ".")
If you want three decimals instead of two, change "2f" --> "3f":
Data[my_numbers] = Data[my_numbers].map('{:,.3f}'.format).str.replace(",", "~").str.replace(".", ",").str.replace("~", ".") | Assuming that I have a pandas dataframe and I want to add thousand separators to all the numbers (integer and float), what is an easy and quick way to do it? | 0 | 1 | 20,270 |
0 | 41,730,024 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-01-04T09:02:00.000 | 0 | 2 | 0 | Python/Pybrain: How can I fix weights of a neural network during training? | 41,459,860 | 0 | python,python-2.7,neural-network,pybrain | I am struggling with a similar problem.
So far I am using the net._setParameters method to reset the weights after each training step, but there should be a better answer.
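A sketch of that workaround, assuming mask is a boolean numpy array marking the connection weights you want pinned to zero:
params = net.params.copy()   # current flat weight vector
params[mask] = 0.0           # re-zero the frozen connections
net._setParameters(params)   # push the masked weights back into the network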
It might help in the meantime; I am waiting for a better answer as well :-) | I am quite new to neural networks and trying to use pybrain to build and train a network.
I am building my network manually with full connections between all layers (input, two hidden layers, output) and then set some weights to zero using _SetParameters as I don't want connections between some specific nodes.
My problem is that the weights that are zero at the beginning are adapted in the same way as all other weights and are therefore no longer zero after training the network via backprop. How can I force the "zero weights" to stay zero throughout the whole process?
Thanks a lot for your answers.
Fiona | 0 | 1 | 562 |
0 | 41,472,883 | 0 | 0 | 0 | 0 | 1 | false | 7 | 2017-01-04T19:34:00.000 | 5 | 2 | 0 | How Can I Write Charts to Python DocX Document | 41,471,887 | 0.462117 | python,excel,matplotlib,python-docx | The general approach that's currently supported is to export the chart from matplotlib or wherever as an image, and then add the image to the Word document.
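A minimal sketch of that approach (assumes matplotlib and python-docx are installed; file names are placeholders):
import matplotlib
matplotlib.use('Agg')            # render without a display
import matplotlib.pyplot as plt
from docx import Document
from docx.shared import Inches

plt.bar(['a', 'b', 'c'], [3, 1, 2])
plt.savefig('chart.png', dpi=150, bbox_inches='tight')
plt.close()

doc = Document()
doc.add_picture('chart.png', width=Inches(6))
doc.save('report.docx')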
While Word allows "MS Office-native" charts to be created and embedded, that functionality is not in python-docx yet. | python beginner here with a simple question. Been using Python-Docx to generate some reports in word from Python data (generated from excel sheets). So far so good, but would like to add a couple of charts to the word document based on the data in question. I've looked at pandas and matplotlib and all seem like they would work great for what I need (just a few bar charts, nothing crazy). But can anyone tell me if it is possible to create the chart in python and have it output to the word document via docx? | 0 | 1 | 10,764 |
0 | 41,486,968 | 0 | 1 | 0 | 0 | 2 | true | 0 | 2017-01-05T10:35:00.000 | 0 | 2 | 0 | Does NLTK return different results on each run? | 41,482,733 | 1.2 | python,python-2.7,nltk | Neither modifies its logic or computation in any iterative loop.
In NLTK, tokenization is rule-based by default, using regular expressions to split a sentence into tokens.
POS tagging by default uses a trained model for English, and will therefore give the same POS tag per token for a given trained model. If that model is retrained, the output can change.
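For instance, repeated runs produce identical output (assuming the punkt tokenizer and the default tagger models have been downloaded via nltk.download):
import nltk

tokens = nltk.word_tokenize("The quick brown fox jumps over the lazy dog.")
print(nltk.pos_tag(tokens))  # the same list of (token, tag) tuples on every run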
Therefore the basic answer to your question is no. | Does Python's NLTK toolkit return different results for each iteration of:
1) tokenization
2) POS tagging?
I am using NLTK to tag a large text file. The tokenized list of tuples has a different size every time. Why is this? | 0 | 1 | 110 |
0 | 41,487,255 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2017-01-05T10:35:00.000 | 0 | 2 | 0 | Does NLTK return different results on each run? | 41,482,733 | 0 | python,python-2.7,nltk | Both the tagger and the tokenizer are deterministic. While it's possible that iterating over a Python dictionary would return results in a different order in each execution of the program, this will not affect tokenization -- and hence the number of tokens (tagged or not) should not vary. Something else is wrong with your code. | Does Python's NLTK toolkit return different results for each iteration of:
1) tokenization
2) POS tagging?
I am using NLTK to tag a large text file. The tokenized list of tuples has a different size every time. Why is this? | 0 | 1 | 110 |
0 | 41,497,154 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-01-05T14:38:00.000 | 0 | 2 | 0 | What is the matter when I installing spark-python on CentOS | 41,487,708 | 0 | python,apache-spark,cloudera | You are installing spark Python 1.6 which depends on Python 2.6
I think the current stable version is 2.x and the package for that is pyspark. Try installing that. It might require Python 3.0 but thats easy enough to install.
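e.g. (a sketch - the exact Python package name depends on your CentOS release and enabled repos):
sudo yum install -y python36   # or build a recent Python from source
pip3 install pyspark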
You'll probably need to reinstall the other spark packages as well to make sure they are the right version. | I have a problem installing spark-python on CentOS.
When I installed it using yum install spark-python, I get the following error message.
Error: Package: spark-python-1.6.0+cdh5.9.0+229-1.cdh5.9.0.p0.30.el5.noarch (cloudera-cdh5)
Requires: python26
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
I already installed other spark packages (spark-master, spark-worker...) but it only occurred installing spark-python.
Can anyone help me? | 0 | 1 | 141 |
0 | 41,511,835 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2017-01-06T18:03:00.000 | 2 | 2 | 0 | How to change Bokeh favicon to another image | 41,511,597 | 0.197375 | python,bokeh | As of Bokeh 0.12.4 it is only possible to remove it, not change it, directly from the python library. This can be done by setting the property logo=None on a plot.toolbar. | Bokeh plots include a Bokeh favicon in the upper right of most plots. Is it possible to replace this icon with another icon? If so, how? | 0 | 1 | 716 |
0 | 61,167,164 | 0 | 0 | 0 | 0 | 1 | false | 27 | 2017-01-07T05:58:00.000 | 7 | 5 | 0 | What is the difference between resize and reshape when using arrays in NumPy? | 41,518,351 | 1 | python,numpy | reshape() is able to change the shape only (i.e. the meta info), not the number of elements.
If the array has five elements, we may use e.g. reshape(5, ), reshape(1, 5),
reshape(1, 5, 1), but not reshape(2, 3).
reshape() in general doesn't modify the data itself, only the meta info about it;
the .reshape() method (of ndarray) returns the reshaped array, keeping the original array untouched.
resize() is able to change both the shape and the number of elements.
So for an array with five elements we may use resize(5, 1), but also resize(2, 2) or resize(7, 9).
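A quick illustration (note that the np.resize function truncates or repeats the data to fit, while the ndarray .resize method pads with zeros when enlarging):
import numpy as np

a = np.arange(5)
b = a.reshape(1, 5)           # same five elements, new shape; a is untouched
c = np.resize(a, (2, 2))      # new array [[0, 1], [2, 3]]: data truncated to fit
a.resize(7, refcheck=False)   # in place: a is now [0 1 2 3 4 0 0]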
The .resize() method (of ndarray) returns None, changing only the original array (an in-place change). | I have just started using NumPy. What is the difference between resize and reshape for arrays?
0 | 41,527,710 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-01-07T21:19:00.000 | 3 | 2 | 0 | How to determine if an image is dark? | 41,526,677 | 1.2 | python,python-2.7,opencv,image-processing | To determine if an image is dark, simply calculate the average intensity and judge it.
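e.g. (a sketch; the threshold is an arbitrary example to tune on your own images):
import cv2

gray = cv2.imread('face.jpg', cv2.IMREAD_GRAYSCALE)
if gray.mean() < 60:              # low average intensity: treat as dark
    gray = cv2.equalizeHist(gray)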
The problem for recognition, though, is not that the image is dark, but that it has low contrast. A bright image with the same contrast would yield the same bad results.
Histogram equalization is a method used to improve images for human vision. Humans have difficulty distinguishing between very similar intensity values - a problem that a computer does not have, unless your algorithm is somehow made to mimic human vision with all its flaws.
A low contrast image bears little information. There is no image enhancement algorithm in the world that will add any further information.
I won't get into too much detail about image characterization. You'll find plenty of resources online or in text books.
A simple measure would be to calculate the standard deviation of the image regions you are interested in. | I have some images I'm using for face recognition.
Some of the images are very dark.
I don't want to use histogram equalisation on all the images, only on the dark ones.
How can I determine if an image is dark?
I'm using opencv in python.
I would like to understand the theory and the implementation.
Thanks | 0 | 1 | 2,224 |
0 | 41,543,451 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-01-08T15:45:00.000 | 1 | 4 | 0 | How do I safely preallocate an integer matrix as an index matrix in numpy | 41,534,489 | 0.049958 | python,numpy | If you really really want to catch errors that way, initialize your indices with NaN.
IXS=np.full((r,c),np.nan, dtype=int)
That will always raise an IndexError. | I want to preallocate an integer matrix to store indices generated in iterations. In MATLAB this can be obtained by IXS = zeros(r,c) before for loops, where r and c are number of rows and columns. Thus all indices in subsequent for loops can be assigned into IXS to avoid dynamic assignment. If I accidentally select a 0 in my codes, for example, a wrong way to pick up these indices to select elements from a matrix, error can arise.
But in numpy, 0 or other negative values can also be used as indices. For example, suppose I preallocate IXS as IXS=np.zeros([r,c],dtype=int) in numpy. In a for loop, the submatrix specified by the indices previously assigned into IXS can be obtained by X(:,IXS(IXS~=0)) in MATLAB, but the first row/column may be lost if I perform the selection the same way in numpy.
Further, in a large program with operations on large matrices, preallocation is important for speeding up the computation, and in MATLAB it is easy to locate an error raised by wrong indexing when a 0 is selected. In numpy, if I select an array by, for example, X[:,IXS[:n]] with a wrong n, no error occurs. I have to spend a lot of time checking where the error is. Worse, if the final results are not obviously strange, I may overlook this bug. This happens in my program all the time, so I have to debug my code again and again.
I wonder is there a safe way to preallocate such index matrix in numpy? | 0 | 1 | 620 |
0 | 41,564,872 | 0 | 1 | 0 | 0 | 1 | true | 2 | 2017-01-09T21:12:00.000 | 0 | 1 | 0 | Is there an easy way to solve a system of linear equations over Z/2Z in Python? | 41,557,022 | 1.2 | python-3.x,math,matrix | I'd use Sage if this were a quick hack, and maybe consider using something optimized for GF(2) if the matrices are really big, to ensure that only one bit is used for each entry and that addition of several elements can be accomplished using a single XOR operation. One benefit of working over a finite field is that you don't have to worry about numeric stability, so naive Gauss–Jordan would sound like a good approach. | I'm practicing programming and I would like to know what is the easiest way to solve linear system of equations over the field Z/2Z? I found a problem where I managed to reduce the problem to solve a system of about 2200 linear equations over Z/2Z but I'm not sure what is the easiest way to write a solver for the equations. Is there simpler solution that use nested lists to represent a matrix and then manually write the Gauss–Jordan algorithm? | 0 | 1 | 612 |
0 | 41,577,386 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2017-01-10T03:46:00.000 | 0 | 3 | 0 | Numpy not found after installation | 41,560,796 | 0 | python,numpy,python-3.5 | WinPython comes in two sizes, and the smallest ("Zero") size doesn't include numpy. | I just installed numpy on my PC (running Windows 10, running Python 3.5.2) using WinPython, but when I try to import it in IDLE with: import numpy I get the ImportError: Traceback (most recent call last):
File "C:\Users\MY_USERNAME\Desktop\DATA\dataScience1.py", line 1, in <module>
import numpy
ImportError: No module named 'numpy'.
Did I possibly install it incorrectly, or do I need to do something else before it can be used? | 0 | 1 | 3,858 |
0 | 44,357,542 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-01-10T18:15:00.000 | 0 | 1 | 0 | Large graph processing on Hadoop | 41,575,620 | 0 | python,hadoop,graph,random-walk,bigdata | My understanding is, you need to process large graphs which are stored on file systems. There are various distributed graph processing frameworks like Pregel, Pregel+, GraphX, GPS(Stanford), Mizan, PowerGraph etc.
It is worth taking a look at these frameworks. I would suggest coding in C or C++ with OpenMPI, which can help achieve better efficiency.
Frameworks in Java are not very memory efficient. I am not sure about the Python APIs of these frameworks.
It is worth taking a look at blogs and papers which give a comparative analysis of these frameworks before deciding on implementing them. | I am working on a project that involves a RandomWalk on a large graph(too big to fit in memory). I coded it in Python using networkx but soon, the graph became too big to fit in memory, and so I realised that I needed to switch to a distributed system. So, I understand the following:
I will need to use a graph database as such(Titan, neo4j, etc)
A graph processing framework such as Apache Giraph on hadoop/ graphx on spark.
Firstly, are there enough APIs to allow me to continue to code in Python, or should I switch to Java?
Secondly, I couldn't find exact documentation on how I can write my custom traversal function (in either Giraph or GraphX) in order to implement the random walk algorithm. | 0 | 1 | 480
0 | 41,626,482 | 0 | 1 | 0 | 0 | 1 | false | 29 | 2017-01-11T08:55:00.000 | 1 | 7 | 0 | OpenCV - Saving images to a particular folder of choice | 41,586,429 | 1 | python,opencv,image-processing | Thank you everyone. Your ways are perfect. I would like to share another way I used to fix the problem: I used the function os.chdir(path) to change the working directory to path, after which I saved the image normally. | I'm learning OpenCV and Python. I captured some images from my webcam and saved them. But they are being saved by default into the local folder. I want to save them to another folder from direct path. How do I fix it?
0 | 41,591,155 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-01-11T11:23:00.000 | 2 | 1 | 0 | use Phantom 2 for real time image processing | 41,589,611 | 0.379949 | python-2.7,image-processing,raspberry-pi,phantom-types | I would be surprised if an off-the-shelf multicopter had enough processing power on board to do any reasonable image processing. It wouldn't make sense for the manufacturer.
But I guess it has some video or streaming capabilities, or can be equipped with such. Then you can process the data on a remote computer, given that you are in transmission range.
If you have to process on a remote device, it doesn't make much sense to demand real-time processing. What for? The multicopter can't do anything useful with real-time results, and for mapping or inspection purposes delay doesn't matter.
In general your question cannot be answered as no one can tell you if any hardware is capable of real-time processing without knowing how much there is to process.
To answer the rest of your questions:
You can connect a Raspberry Pi to the Phantom.
You can use Python 2.7 and OpenCV to write image processing code.
The fact that you ask things like this makes me think that you might not be up to the job, so unless you have a team of talented people I guess it will take you years to come out with a usable and robust solution. | I have a project to detect the ripeness of a specific fruit. I will use a Phantom 2 with the autopilot feature to fly through fruit trees and capture images, and then I want to do real-time image processing.
I searched a lot but didn't find answers to the following questions.
Can I use the Phantom 2 for real-time image processing? Can I connect a
Raspberry Pi to the Phantom, and what do I need? Can I use Python 2.7 + the
OpenCV lib to write image processing code? | 0 | 1 | 91
0 | 56,167,288 | 0 | 0 | 0 | 0 | 2 | false | 15 | 2017-01-11T12:23:00.000 | 2 | 4 | 0 | Change data type of a specific column of a pandas dataframe | 41,590,884 | 0.099668 | python,pandas | To simply change one column, here is what you can do:
df['column_name'] = df['column_name'].apply(int)  # assign the converted column back
You can replace int with the desired datatype, e.g. np.int64, str, or 'category'.
For multiple datatype changes, I would recommend the following:
df = pd.read_csv(data, dtype={'Col_A': str, 'Col_B': np.int64}) | I want to sort a dataframe with many columns by a specific column, but first I need to change type from object to int. How to change the data type of this specific column while keeping the original column positions?
0 | 41,591,077 | 0 | 0 | 0 | 0 | 2 | false | 15 | 2017-01-11T12:23:00.000 | 26 | 4 | 0 | Change data type of a specific column of a pandas dataframe | 41,590,884 | 1 | python,pandas | df['colname'] = df['colname'].astype(int) works when changing from float values to int, at least. | I want to sort a dataframe with many columns by a specific column, but first I need to change type from object to int. How to change the data type of this specific column while keeping the original column positions?
0 | 41,604,196 | 0 | 0 | 1 | 0 | 1 | false | 0 | 2017-01-11T19:34:00.000 | 0 | 1 | 0 | Translation from Camera Coordinates System to Robotic-Arm Coordinates System | 41,599,283 | 0 | python,opencv,coordinates,robotics,coordinate-transformation | Define a 2D coordinate system on the board, create a mapping from image coordinates (2D) to board coordinates (2D), and also create a mapping from board to robot coordinates (3D). Usually the robot controller has a function to define your own coordinate frame (the board). | I am new to robotics and I am working on a project where I need to pass the coordinates from the camera to the robot.
So the robot is just an arm, it is then stable in a fixed position. I do not even need the 'z' axis because the board or the table where everything is going on have always the same 'z' coordinates.
The webcam as well is always in a fixed position, it is not part of the robot and it does not move.
The problem I am having is in the conversion from 2D camera coordinates to a 3D robotic arm coordinates (2D is enough because as stated before the 'z' axis is not needed as is always in a fixed position).
I'd like to know which is the best approach for this kind of problem, so I can start researching.
I've found a lot of information on the web but it has left me quite confused; I would really appreciate it if someone could point me in the right direction.
I don't know if this information are useful but I am using OpenCV3.2 with Python
Thank you in advance | 0 | 1 | 1,200 |
0 | 42,087,865 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2017-01-12T04:53:00.000 | 0 | 2 | 0 | Updating the supported tags for pip | 41,605,355 | 0 | python,python-3.x,tensorflow,pip | I had the same error when I ran this command. The problem was that my installed version of Python was 32-bit (x86) while the TensorFlow wheels are built for 64-bit (x64). I reinstalled Python as the x64 version and it works now! I hope this works for you too! | I'm trying to install Tensorflow, and received the following error.
tensorflow-0.12.1-cp35-cp35m-win_amd64.whl is not a supported wheel on this platform.
By reading through other questions, I think I've traced the issue to the cp35 tag not being supported by the version of pip I have installed. What's odd is that I believe I installed python 3.5 and the latest version of pip (9.0.1), but have the following supported tags:
[('cp27', 'cp27m', 'win_amd64'), ('cp27', 'none', 'win_amd64'), ('py2', 'none', 'win_amd64'), ('cp27', 'none', 'any'), ('cp2', 'none', 'any'), ('py27', 'none', 'any'), ('py2', 'none', 'any'), ('py26', 'none', 'any'), ('py25', 'none', 'any'), ('py24', 'none', 'any'), ('py23', 'none', 'any'), ('py22', 'none', 'any'), ('py21', 'none', 'any'), ('py20', 'none', 'any')]
How can I go about modifying the supported tags, or is that even the right approach? | 0 | 1 | 2,301 |
0 | 41,620,561 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-01-12T17:28:00.000 | 1 | 1 | 0 | Can I apply Cross Validation in a Linear Regression model? | 41,619,431 | 1.2 | python,scikit-learn,linear-regression | Yes, using cross validation will give you a better estimate of your model performance.
Splitting randomly (as cross-validation does) will, however, not work for time series and/or all distributions of data.
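A minimal sketch with scikit-learn (assuming X and y hold your features and target):
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

scores = cross_val_score(LinearRegression(), X, y, cv=5, scoring='r2')
print(scores.mean(), scores.std())  # average R-squared and its spread across folds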
The "final model" will not be better only your estimate on model performance. | I have a dataset with a total of 58 samples. The dataset has two columns "measured signals" and "people_in_area". Due to it, I am trying to train a Linear Regression model using Scikit-learn. For the moment, I splited 75% of my dataset for training and 25% for testing. However, depending on the order in which the data was before the split, I obtain different R-squared values.
I think that as the dataset is small, depending on the order in which the data was before being split, different values would be kept as x_test and y_test. Due to it, I am thinking of using "Cross-Validation" on my Linear Regression model to divide the test and train data randomly several times, training it more and, also, being able to test more, obtaining in this way more reliable results. Is this a correct approach? | 0 | 1 | 1,054
0 | 46,266,094 | 0 | 0 | 0 | 0 | 1 | false | 24 | 2017-01-13T04:08:00.000 | 7 | 4 | 0 | Download data from a jupyter server | 41,627,247 | 1 | python,download,ipython,jupyter-notebook,jupyter | The download option did not appear for me.
The solution was to open the file (which could not be read correctly, as it was a binary file) and to download it from the notebook's text editor. | I'm using IPython notebook by connecting to a server.
I don't know how to download a thing (a data frame, a .csv file, ... for example) programmatically to my local computer, because I can't specifically declare a path like C://user//... It will be downloaded to their machine, not mine | 0 | 1 | 45,980
0 | 41,672,695 | 0 | 0 | 1 | 0 | 1 | true | 1 | 2017-01-13T17:01:00.000 | 1 | 1 | 0 | Globally load R libraries in Snakemake | 41,639,782 | 1.2 | python,r,snakemake | I'm afraid not. This has performance reasons on (a) local systems (circumventing the Python GIL) and (b) cluster systems (scheduling to separate nodes).
Even if there were a solution on local machines, it would need to ensure that no sessions are shared between parallel jobs. If you really need to save that time, I suggest merging those scripts. | I'm currently building my NGS pipeline using Snakemake and have an issue regarding the loading of R libraries. Several of the scripts that my rules call require the loading of R libraries. As I found no way of loading them globally, they are loaded inside the R scripts, which of course is redundant computing time when I'm running the same set of rules on several individual input files.
Is there a way to keep one R session for the execution of several rules and load all required libraries beforehand?
Cheers,
zuup | 0 | 1 | 239 |
0 | 41,763,164 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-01-16T01:09:00.000 | 1 | 1 | 0 | TensorFlow: how to determine if we want to break the training dataset into batches | 41,668,158 | 0.197375 | python,python-3.x,tensorflow,deep-learning,data-science | Generally Deep Learning algorithms are ran on GPUs which has limited memory and thus a limited number of input data samples (in the algorithm commonly defined as batch size) could be loaded at a time.
In general larger batch size reduces the overall computation time (as the internal matrix multiplications are done in a parallel manner in GPU, thus with large batch sizes the time gets saved in reading/writing gradients and possibly some other operations output).
Another probable benefit of large batch size is:
In multi-class classification problems, if the number of classes are large, a
larger batch size makes algorithm generalize better(technically avoids over-fitting) over the different classes (while doing this a standard technique is to keep uniform distribution of classes in a batch).
While deciding batch size there are some other factors which comes into play are: learning rate and the type of Optimization method.
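A minimal sketch of minibatching (pure numpy; X and y are the training arrays, and train_step is a hypothetical placeholder for one optimizer update on a batch):
import numpy as np

batch_size = 32
idx = np.random.permutation(len(X))    # shuffle once per epoch
for start in range(0, len(X), batch_size):
    batch = idx[start:start + batch_size]
    train_step(X[batch], y[batch])     # hypothetical: one gradient update per batch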
I hope this answers your question to a certain extent! | I am learning TensorFlow (as well as general deep learning). I am wondering when do we need to break the input training data into batches? And how do we determine the batch size? Is there a rule of thumb? Thanks!
0 | 50,435,429 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-01-16T14:57:00.000 | 0 | 2 | 0 | Is there some way can accomplish stochastic gradient descent not from scratch | 41,679,182 | 0 | python,optimization,machine-learning,tensorflow,deep-learning | Both Theano and TensorFlow have built-in automatic differentiation, so you only need to form the loss. | For a standard machine learning problem, e.g. image classification on MNIST, the loss function is fixed, therefore the optimization process can be accomplished simply by calling functions and feeding the input into them. There is no need to derive gradients and code the descent procedure by hand.
But now I'm confused when I meet some complicated formulation. Say we are solving a semi-supervised problem, and the loss function has two parts: Ls + lambda * Lu. The first part is a normal classification formulation, e.g. cross-entropy loss, and the second part varies. In my situation, Lu is a matrix factorization loss, which specifically is: Lu = MF(D, C * W). And the total loss function can be written as:
L = \sum log p(yi|xi) + MF(D, C * W)
= \sum log p(yi|Wi) + MF(D, C * W)
= \sum log p(yi|T * Wi + b) + MF(D, C * W)
The parameters are W, C, T and b. The first part is a classification loss, and the input xi is a row of W, i.e. Wi, a vector of size (d, 1). The label yi can be a one-hot vector of size (c, 1), so the parameters T and b map the input to the label size. The second part is a matrix factorization loss.
Now I'm confused about optimizing this function using SGD. It could be solved by writing down the formulation, deriving the gradients, and then implementing a training procedure from scratch, but I'm wondering if there is a simpler way. It's easy to use a deep learning tool like TensorFlow or Keras to train a classification model: all you need to do is build a network and feed the data.
So similarly, is there a tool that can automatically compute gradients after I define the loss function? Deriving gradients and implementing them from scratch is really annoying. Both the classification loss and the matrix factorization loss are very common, so I think the combination should be achievable. | 0 | 1 | 115
0 | 41,684,697 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-01-16T18:22:00.000 | 0 | 2 | 0 | network analysis: how to create nodes and edges files from csv | 41,682,737 | 0 | python,r,social-networking | Decide how you want the graph to represent the data. From what you've described, one approach would be to have nodes in your graph represent people, and edges represent grants. In that case, create a pairwise list of people who are on the same grant. Edges are undirected by default in igraph, so you just need each pair once. | I have a two-mode (grant X person) network in csv format. I would like to create a person X person projection of this network and calculate some network measures (including centrality measures of closeness and betweenness, etc.).
What would be my first step? I am guessing creating 2 separate files for Nodes and Edges and run the analysis in R using igraph package?!
Here is a super simplified version of my data (my_data.csv).
Grant, Person
A , 1
A , 2
B , 2
B , 3 | 0 | 1 | 1,405 |
0 | 41,753,582 | 0 | 1 | 0 | 0 | 1 | false | 3 | 2017-01-17T04:52:00.000 | 1 | 2 | 1 | When using qsub to submit jobs, how can I include my locally installed python packages? | 41,689,297 | 0.099668 | python,cluster-computing,pbs,qsub,supercomputers | If you are using pbs professional then try to export PYTHONPATH in your environment and then submit job using "-V" option with qsub. This will make qsub take all of your environment variables and export it for the job.
Else, try setting it using the option "-v" (note the lowercase v) and put your environment variable key/value pair with that option, like qsub -v HOME=/home/user job.sh | I have an account on a supercomputing cluster where I've installed some packages using e.g. "pip install --user keras".
When using qsub to submit jobs to the queue, I try to make sure the system can see my local packages by setting "export PYTHONPATH=$PYTHONPATH:[$HOME]/.local/lib/python2.7/site-packages/keras" in the script.
However, the resulting log file still complains that there is no package called keras. How can I make sure the system finds my packages? | 0 | 1 | 2,181 |
0 | 41,700,079 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2017-01-17T14:38:00.000 | 0 | 2 | 0 | Initialize an empty list of the shape/structure of a given list without numpy | 41,699,897 | 0 | python,list | You can use: B=[[None]*m]*n
It creates a list of n rows of m columns of None. | Given a list A with n rows each having m columns each.
Is there a one-liner to create an empty list B with the same structure (n rows, each with m components)?
Numpy arrays can be created/reshaped. Does the Python built-in list type support such an operation?
0 | 49,909,264 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2017-01-18T09:36:00.000 | 1 | 3 | 0 | SciKit One-class SVM classifier training time increases exponentially with size of training data | 41,715,835 | 0.066568 | python,scikit-learn,svm | Hope I'm not too late. OCSVM, and SVM in general, is resource hungry, and the relationship between training time and data size is quadratic (the numbers you show follow this). If you can, see if Isolation Forest or Local Outlier Factor work for you; but if you're considering applying it to a lengthier dataset, I would suggest creating a manual anomaly-detection model that closely resembles these off-the-shelf solutions. That way you should be able to work either in parallel or with threads. | I am using the Python SciKit OneClass SVM classifier to detect outliers in lines of text. The text is converted to numerical features first using bag of words and TF-IDF.
When I train (fit) the classifier running on my computer, the time seems to increase exponentially with the number of items in the training set:
Number of items in training data and training time taken:
10K: 1 sec, 15K: 2 sec, 20K: 8 sec, 25k: 12 sec, 30K: 16 sec, 45K: 44 sec.
Is there anything I can do to reduce the time taken for training, and avoid it becoming too long when the training data size increases to a couple of hundred thousand items? | 0 | 1 | 2,038
0 | 41,732,675 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2017-01-18T11:54:00.000 | 1 | 1 | 0 | Extract the features from Doc2Vec in Python | 41,718,767 | 0.197375 | python,doc2vec | Yes, if words is a list of word strings, preprocessed/tokenized the same way the training data was fed to the model during training. | For a small project I need to extract the features obtained from a Doc2Vec object in gensim.
I have used vector = model.infer_vector(words); is it correct? | 0 | 1 | 439
0 | 41,740,148 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-01-18T17:40:00.000 | 0 | 1 | 0 | How tf-idf is relevant in calculating sentence vectors | 41,725,993 | 0 | python,machine-learning | Any aggregative operation on the word vectors can give you a sentence vector.
You should consider what you want your representation to mean and choose the operation accordingly.
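For the tf-idf-weighted average the question mentions, a minimal sketch (word_vecs mapping word to vector and tfidf mapping word to weight are assumed to already exist):
import numpy as np

def sentence_vector(words, word_vecs, tfidf):
    vecs = [tfidf[w] * word_vecs[w] for w in words if w in word_vecs]
    return np.mean(vecs, axis=0)  # average of the tf-idf-weighted word vectors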
Possible operations are summing the vectors, averaging them, concatenating, etc. | I am interested in finding sentence vectors using word vectors. I read that by multiplying each word's vector by its tf-idf weight and taking the average we can get a vector for the whole sentence.
Now I want to know how these tf-idf weights help us get sentence vectors, i.e. how tf-idf and the sentence vector are related? | 0 | 1 | 878
0 | 41,792,826 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-01-19T15:02:00.000 | 1 | 1 | 0 | Tensorflow Inception v3 retraining - attach text/labels to individual images | 41,745,022 | 0.197375 | python,machine-learning,tensorflow,neural-network,deep-learning | You have 3 main options - multiply your classes, multi-label learning or training several models.
The first option is the most straightforward - instead of having teachers who belong to John and teachers who belong to Jane, you can have teachers whose class is Teachers_John and teachers whose class is Teachers_Jane, and learn to classify into those categories as you would any other set of categories, or use something like hierarchical softmax.
The second option is to have a set of categories that includes Teachers as well as John and Jane - now your target is not to correctly predict the one most accurate class (Teachers) but several (Teachers and John).
Your last option is to create a hierarchy of models, where the first learns to differentiate between John and Jane and the others classify the inner classes for each of them. | I am using the Inception v3 model to retrain on my own dataset. I have a few folders which represent the classes and contain the images for each class. What I would like to do is to 'attach' some text ids to these images, so that when they are retrained and used to run classification/similarity-detection those ids are retrieved too (basically it's image similarity detection).
For instance, Image X is of class 'Teachers' and it belongs to John. When I retrain the model and run a classification on the new model, I would like to get the Teachers class, but in addition I would like to know whom it belongs to (John).
Any ideas on how to go about it?
Regards | 0 | 1 | 596 |
0 | 41,749,141 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2017-01-19T18:07:00.000 | 1 | 2 | 0 | Finding cube root of a number less than 1 using binary search | 41,748,751 | 1.2 | python,algorithm,binary-search | Yes, that one statement of your instructor's is a flaw. For 0 < x < 1, the root will lie between x and 1. This is true for any exponent in the range (0, 1), i.e. any root of index greater than 1.
You can reflect the statement to the negative side, since this is an odd root. The cube root of -1 <= x <= 0 will be in the range [-1, x]. For x < -1, your range is [x, -1]. It's the mirror-image of the positive cases. I'm not at all clear why the instructor made that asymmetric partitioning. | I am doing the MIT6.00.1x course on edX and the professor says that "If x<1, search space is 0 to x but cube root is greater than x and less than 1".
There are two cases :
1. The number x is between 0 and 1
2. The number x is less than 0 (negative)
In both the cases, the cube root of x will lie between x and 1. I understood that. But what about the search space? Will the initial search space lie between 0 and x? I think it is not that. I think the bold text as cited from the lecture is a flaw! Please enlighten me on this. | 0 | 1 | 1,193 |
0 | 44,792,766 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2017-01-19T18:07:00.000 | 1 | 2 | 0 | Finding cube root of a number less than 1 using binary search | 41,748,751 | 0.099668 | python,algorithm,binary-search | I think I know the problem you're talking about. The only reason she put that is that she deals with the absolute difference:
while abs(guess**3 - cube) >= epsilon
However, the code will need another line to deal with negative cubes altogether, which will be something along the lines of:
if cube<0: guess = -guess
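Putting both fixes together, a minimal sketch in the spirit of the course's bisection code:
def cube_root(x, epsilon=0.01):
    cube = abs(x)
    low, high = 0.0, max(cube, 1.0)   # widen to 1 so cubes below 1 still bracket the root
    guess = (low + high) / 2.0
    while abs(guess**3 - cube) >= epsilon:
        if guess**3 < cube:
            low = guess
        else:
            high = guess
        guess = (low + high) / 2.0
    if x < 0:
        guess = -guess   # reflect back for negative inputs
    return guess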
I hope this helps. | I am doing MIT6.00.1x course on edX and the professor tells that "If x<1, search space is 0 to x but cube root is greater than x and less than 1".
There are two cases :
1. The number x is between 0 and 1
2. The number x is less than 0 (negative)
In both the cases, the cube root of x will lie between x and 1. I understood that. But what about the search space? Will the initial search space lie between 0 and x? I think it is not that. I think the bold text as cited from the lecture is a flaw! Please enlighten me on this. | 0 | 1 | 1,193 |
0 | 41,749,570 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2017-01-19T18:47:00.000 | 0 | 1 | 0 | Memory error with np array when making document term matrix in python 2.7 | 41,749,448 | 0 | python | I assume you are using 32-bit Python. 32-bit Python limits your program's RAM to 2 GB (all 32-bit programs have this as a hard limit); some of this is taken up by Python overhead, more by your program. Normal Python objects do not need contiguous memory and will map disparate regions of memory.
numpy arrays require contiguous memory, which is much harder to allocate. Additionally, np.array(a) + 1 creates a second array and must again allocate a huge contiguous block (as do most operations).
Some possible solutions that come to mind:
use 64-bit Python: this will give you orders of magnitude more RAM to work with; you are unlikely to encounter a memory error with this unless you have a really big array (so big that numpy is probably not the right solution)
use multiprocessing to create a new process with a new 2 GB limit that just does the numpy processing
use a different solution than numpy (i.e. a database) | I am using matrix = np.array(docTermMatrix) to make a DTM, but it sometimes runs into memory errors at this line. How can I prevent this from happening? | 0 | 1 | 84
0 | 41,767,302 | 0 | 0 | 0 | 0 | 1 | true | 5 | 2017-01-19T21:46:00.000 | 6 | 1 | 0 | NEAT-Python not finding Visualize.py | 41,752,291 | 1.2 | python,importerror,iterm2,neat,virtual-environment | I think you could simply copying the visualize.py into the same directory as the script you are running.
If you wanted it in your lib/site-packages directory so you could import it with the neat module:
copy visualize.py into lib/site-packages/neat/ and modify __init__.py to add the line import neat.visualize as visualize. Delete the __pycache__ directory. Make sure you have modules installed: Numpy, GraphViz, and Matplotlib. When you've done the above, you should be able to import neat and access neat.visualize.
I don't recommend doing this though for several reasons:
Say you wanted to update your neat module. Your visualize.py file is technically not part of the module. So it wouldn't be updated along with your neat module.
the visualize.py file seems to be written in the context of the examples as opposed to being for general use with the module, so contextually, it doesn't belong there.
At some point in the future, you might also forget that this wasn't part of the module, while your code acts as if it were part of the API. So your code will break on some other neat installation. | So recently I found out about the NEAT algorithm and wanted to give it a try using NEAT-Python (not sure if this is even the correct source :| ). So I created my virtual environment, activated it, and installed neat-python using pip in the VE. When I then tried to run one of the examples from their GitHub page it threw an error like this:
ImportError: No module named visualize
So I checked my source files, and actually the neat-python doesn't include the visualize.py script, however it is in their GitHub repository. I then tried to add it myself by downloading just the visualize.oy script dragging it inside my VE and adding it to all the textfiles the NEAT brought with it, like the installed-filex.txt etc. However it still threw the same error.
I'm still fairly new to VE and GitHub so please don't be too hard on me :] thanks in advance.
-Jorge | 0 | 1 | 8,034 |
0 | 47,515,380 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2017-01-20T01:52:00.000 | 0 | 1 | 0 | Tableau: How to automate publishing dashboard to Tableau server | 41,754,825 | 0 | python,powershell,scripting,server,tableau-api | Getting data from Excel to Tableau Server:
1. Set up the UNC path so it is accessible from your server. If you do this, you can then set up an extract refresh to read in the UNC path at the frequency desired.
2. Create an extract with the Tableau SDK.
3. Use the Tableau SDK to read in the CSV file and generate an extract file.
In our experience, #2 is not very fast. The Tableau SDK seems very slow when generating the extract, and then the extract has to be pushed to the server. I would recommend transferring the file to a location accessible to the server. Even a daily file copy to a shared drive on the server could be used if you're struggling with UNC paths. (Tableau does support UNC paths; you just have to be sure to use them rather than a mapped drive in your setup.)
It can be transferred as a file and then pushed (which may be fastest) or it can be pushed remotely.
As far as scheduling two steps (python and data extract refresh), I use a poor man's solution myself, where I update a csv file at one point (task scheduler or cron are some of the tools which could be used) and then set up the extract schedule at a slightly later point in time. While it does not have the linkage of running the python script and then causing the extract refresh (surely there is a tabcmd for this), it works just fine for my purposes to put 30 minutes in between, as my processes are reliable and the app is not mission critical. | I used python scripting to do a series of complex queries from 3 different RDS's, and then exported the data into a CSV file. I am now trying to find a way to automate publishing a dashboard that uses this data into Tableau server on a weekly basis, such that when I run my python code, it will generate new data, and subsequently, the dashboard on Tableau server will be updated as well.
I already tried several options, including using the full UNC path to the csv file as the live connection, but Tableau server had trouble reading this path. Now I'm thinking about just creating a powershell script that can be run weekly that calls the python script to create the dataset and then refreshes tableau desktop, then finally re-publishes/overwrites the dashboard to tableau server.
Any ideas on how to proceed with this? | 0 | 1 | 1,415 |
0 | 41,767,039 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2017-01-20T04:07:00.000 | 0 | 2 | 0 | Seaborn pairplot not showing KDE | 41,755,950 | 0 | python,matplotlib,seaborn | Looks like the problem was with statsmodels (which seaborn uses to do KDE). I reinstalled statsmodels and that cleared up the problem. | After upgrading to matplotlib 2.0 I have a hard time getting seaborn to plot a pairplot. For example...
sns.pairplot(df.dropna(), diag_kind='kde') returns the following error: TypeError: slice indices must be integers or None or have an __index__ method. My data doesn't have any NaNs in it. In fact, removing the kde option allows the function to run.
Any idea what is happening? | 0 | 1 | 1,051 |
0 | 49,675,309 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2017-01-20T21:01:00.000 | 0 | 3 | 0 | Does Fortify support Python, Scala, and Apache Spark? | 41,772,263 | 0 | python,scala,apache-spark,fortify | Fortify supports Python scanning. Since Python is not compiled, you can feed the source code directly to Fortify; it will detect the language, scan it, and give you the result. | Does Fortify support Python, Scala, and Apache Spark? If it does, how do I scan code in these languages using Fortify?
We need to have a compiler to scan C++ code using Fortify. This can be done using Microsoft Visual Studio.
Similarly, do we need some plugin to scan Python, Scala, and Spark code? | 0 | 1 | 7,798
0 | 60,477,227 | 0 | 0 | 0 | 0 | 2 | false | 22 | 2017-01-21T07:26:00.000 | 0 | 4 | 0 | In-place sort_values in pandas what does it exactly mean? | 41,776,801 | 0 | python,sorting,pandas,in-place | "inplace=True" is more like a physical sort, while "inplace=False" is more like a logical sort. A physical sort means the data set as saved on the computer is sorted based on some keys; a logical sort means the data set is still saved in its original order (as it was input/imported), and the sort works only on an index over it. A data set can have one or multiple logical indexes, but its physical order is unique. | Maybe a very naive question, but I am stuck on this: pandas.Series has a method sort_values and there is an option to do it "in place" or not. I have Googled it for a while, but I am not very clear about it. It seems that this thing is assumed to be perfectly known to everybody but me. Could anyone give me an illustrative explanation of how these two options differ from each other, for dummies...?
Thank you for any assistance. | 0 | 1 | 21,797 |
0 | 71,012,398 | 0 | 0 | 0 | 0 | 2 | false | 22 | 2017-01-21T07:26:00.000 | 0 | 4 | 0 | In-place sort_values in pandas what does it exactly mean? | 41,776,801 | 0 | python,sorting,pandas,in-place | inplace = True changes the actual list itself while sorting.
inplace = False will return a new sorted list without changing the original.
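A quick illustration with a Series:
import pandas as pd

s = pd.Series([3, 1, 2])
out = s.sort_values()              # returns a sorted copy; s is unchanged
ret = s.sort_values(inplace=True)  # sorts s itself; ret is None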
By default, inplace is set to False if unspecified. | Maybe a very naive question, but I am stuck on this: pandas.Series has a method sort_values and there is an option to do it "in place" or not. I have Googled it for a while, but I am not very clear about it. It seems that this thing is assumed to be perfectly known to everybody but me. Could anyone give me an illustrative explanation of how these two options differ from each other, for dummies...?
Thank you for any assistance. | 0 | 1 | 21,797 |
0 | 56,411,030 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2017-01-21T13:25:00.000 | 0 | 2 | 0 | AttributeError: module 'theano' has no attribute 'tests' | 41,779,922 | 0 | python-3.x,deep-learning,theano-cuda | For the latest version of Theano (1.0.4):
import theano generates an error if the nose package is not installed.
Install it via pip or conda: pip install nose / conda install nose | I am trying to use Theano with the GPU on my Ubuntu machine, but each time after it runs successfully once, it gives me an error like this the next time I try to run it. No idea why; could anyone help me?
import theano
Traceback (most recent call last):
File "", line 1, in
File "/home/sirius/anaconda3/lib/python3.5/site-packages/theano/init.py", line 95, in
if hasattr(theano.tests, "TheanoNoseTester"):
AttributeError: module 'theano' has no attribute 'tests' | 0 | 1 | 1,036 |
0 | 50,360,800 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2017-01-21T13:25:00.000 | 0 | 2 | 0 | AttributeError: module 'theano' has no attribute 'tests' | 41,779,922 | 0 | python-3.x,deep-learning,theano-cuda | I met the same problem. I just fixed it with conda install nose | I am trying to use Theano with the GPU on my Ubuntu machine, but after it runs successfully once, it gives me an error like this the next time I try to run it. No idea why; could anyone help me?
import theano
Traceback (most recent call last):
File "", line 1, in
File "/home/sirius/anaconda3/lib/python3.5/site-packages/theano/init.py", line 95, in
if hasattr(theano.tests, "TheanoNoseTester"):
AttributeError: module 'theano' has no attribute 'tests' | 0 | 1 | 1,036 |
0 | 41,796,793 | 0 | 1 | 0 | 0 | 1 | true | 33 | 2017-01-21T18:36:00.000 | 41 | 9 | 0 | How do I convert timestamp to datetime.date in pandas dataframe? | 41,783,003 | 1.2 | python,date,datetime,pandas | I got some help from a colleague.
This appears to solve the problem posted above:
pd.to_datetime(df['mydates']).apply(lambda x: x.date()) | I need to merge 2 pandas dataframes together on dates, but they currently have different date types. One is Timestamp (imported from Excel) and the other is datetime.date.
Any advice?
I've tried pd.to_datetime().date, but this only works on a single item (e.g. df.ix[0,0]); it won't let me apply it to the entire series (e.g. df['mydates']) or the dataframe. | 0 | 1 | 92,271
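A minimal sketch of the merge problem and the fix; df1, df2 and the column name mydates are hypothetical stand-ins, and .dt.date is equivalent to the .apply(lambda x: x.date()) form above:

import datetime
import pandas as pd

# df1 carries pandas Timestamps (as when imported from Excel)
df1 = pd.DataFrame({'mydates': pd.to_datetime(['2017-01-21', '2017-01-22']), 'x': [1, 2]})
# df2 carries plain datetime.date objects
df2 = pd.DataFrame({'mydates': [datetime.date(2017, 1, 21), datetime.date(2017, 1, 22)], 'y': [10, 20]})

# Normalise df1 to datetime.date so both key columns share a type, then merge
df1['mydates'] = pd.to_datetime(df1['mydates']).dt.date
print(df1.merge(df2, on='mydates'))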
0 | 47,182,528 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2017-01-22T08:39:00.000 | 1 | 1 | 0 | How to calculate ctc probability for given input and expected output? | 41,788,924 | 0.197375 | python,c++,tensorflow | According to the Graves paper [1], the loss for a batch is defined as -sum(log(p(z|x))) over all samples (x,z) in the batch.
If you use a batch size of 1, you get -log(p(z|x)), that is, the negative log-probability of seeing the labelling z given the input x. This can be obtained with the ctc_loss function from TensorFlow.
You can also implement the relevant parts of the Forward-Backward Algorithm described in Section 4.1 of the paper [1] yourself.
For small input sequences it is feasible to use a naive implementation that constructs the paths shown in Figure 3 and then sums over all those paths in the RNN output.
I did this for a sequence of length 16 and for a sequence of length 100. For the former, the naive approach was sufficient, while for the latter the dynamic programming approach presented in the paper was needed.
[1] Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks | I'm doing my first TensorFlow project.
I need to get the CTC probability (not the CTC loss) for a given input and my expected sequences.
Is there any API or way to do it in Python or C++?
I prefer the Python side, but the C++ side is also okay. | 0 | 1 | 495
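A rough TensorFlow 1.x sketch of the batch-size-1 idea from the answer; the shapes T and C are hypothetical placeholders, and by TensorFlow's convention the CTC blank is the last class index:

import tensorflow as tf

T, C = 50, 28                                   # hypothetical: time steps, number of classes
logits = tf.placeholder(tf.float32, [T, 1, C])  # time-major, batch size 1, unscaled logits
labels = tf.sparse_placeholder(tf.int32)        # the expected labelling z as a SparseTensor
seq_len = tf.placeholder(tf.int32, [1])

# With a single sample, ctc_loss yields -log p(z|x) for that (x, z) pair
neg_log_prob = tf.nn.ctc_loss(labels, logits, seq_len)
prob = tf.exp(-neg_log_prob)                    # p(z|x)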
0 | 41,855,351 | 0 | 0 | 0 | 0 | 1 | true | 5 | 2017-01-22T11:37:00.000 | 1 | 1 | 0 | Python (Pandas) : When to use replace vs. map vs. transform? | 41,790,392 | 1.2 | python,pandas | As far as I understand, replace is used when working on missing or unwanted values, transform is used when doing groupby operations, and map is used to change the values of a Series or index. | I'm trying to clearly understand for which type of data transformation the following functions in pandas should be used:
replace
map
transform
Can anybody provide some clear examples so I can better understand them?
Many thanks :) | 0 | 1 | 2,423 |
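A small runnable illustration of the three, matching the answer's framing (toy data only):

import numpy as np
import pandas as pd

# replace: swap specific values, e.g. a -999 sentinel for a real missing value
s = pd.Series([1, -999, 3])
print(s.replace(-999, np.nan))

# map: element-wise mapping of a Series via a dict, Series or function
codes = pd.Series(['a', 'b', 'a'])
print(codes.map({'a': 1, 'b': 2}))

# transform: per-group computation whose result is aligned to the original index
df = pd.DataFrame({'grp': ['x', 'x', 'y'], 'val': [1, 2, 3]})
print(df.groupby('grp')['val'].transform('mean'))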