GUI and Desktop Applications | A_Id | Networking and APIs | Python Basics and Environment | Other | Database and SQL | Available Count | is_accepted | Q_Score | CreationDate | Users Score | AnswerCount | System Administration and DevOps | Title | Q_Id | Score | Tags | Answer | Question | Web Development | Data Science and Machine Learning | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 39,280,341 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-09-01T20:21:00.000 | 0 | 2 | 0 | Pandas - how to remove spaces in each column in a dataframe? | 39,280,278 | 0 | python,pandas | data[c] does not return a value, it returns a series (a whole column of data).
You can apply the strip operation to an entire column with df.apply, or use the Series.str string methods directly; a sketch follows this row. | I'm trying to remove spaces, apostrophes, and double quotes in each column's data using this for loop
for c in data.columns:
data[c] = data[c].str.strip().replace(',', '').replace('\'', '').replace('\"', '').strip()
but I keep getting this error:
AttributeError: 'Series' object has no attribute 'strip'
data is the data frame and was obtained from an excel file
xl = pd.ExcelFile('test.xlsx');
data = xl.parse(sheetname='Sheet1')
Am I missing something? I added the str but that didn't help. Is there a better way to do this?
I don't want to use the column labels, like so data['column label'], because the text can be different. I would like to iterate each column and remove the characters mentioned above.
incoming data:
id city country
1 Ontario Canada
2 Calgary ' Canada'
3 'Vancouver Canada
desired output:
id city country
1 Ontario Canada
2 Calgary Canada
3 Vancouver Canada | 0 | 1 | 5,363 |
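A minimal sketch of the per-column cleanup, assuming string-typed columns (data values are taken from the question's example). The key difference from the question's loop is that every operation goes through the .str accessor, so it acts on the strings inside the Series rather than on the Series object itself:

```python
import pandas as pd

data = pd.DataFrame({'city': [' Ontario', 'Calgary', "'Vancouver"],
                     'country': ['Canada', " Canada'", 'Canada']})

for c in data.columns:
    # .str routes each call to the strings inside the column; the bare
    # .strip()/.replace() in the question hit the Series object instead
    data[c] = (data[c].str.strip()
                      .str.replace("'", "")
                      .str.replace('"', ''))
print(data)
```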
0 | 39,290,812 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-09-01T21:30:00.000 | 0 | 2 | 0 | Solving matrix equation A B = C. with B(n* 1) and C(n *1) | 39,281,149 | 0 | python,matrix,equation-solving | If you're solving for the matrix, there is an infinite number of solutions (assuming that B is nonzero). Here's one of the possible solutions:
Choose a nonzero element of B, say Bi. Now construct a matrix A such that the ith column is C / Bi, and the other columns are zero.
It should be easy to verify that multiplying this matrix by B gives C. | I am trying to solve a matrix equation such as A.B = C. The A is the unknown matrix and I must find it.
I have B(n*1) and C(n*1), so A must be n*n.
I used the B.T * A.T = C.T method (numpy.linalg.solve(B.T, C.T)).
But it produces an error:
LinAlgError: Last 2 dimensions of the array must be square.
So the problem is that B isn't square. | 0 | 1 | 339 |
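A small numpy sketch of the construction described in the answer (the B and C values are made up):

```python
import numpy as np

B = np.array([[2.0], [0.0], [1.0]])    # n x 1
C = np.array([[4.0], [6.0], [8.0]])    # n x 1

i = np.flatnonzero(B[:, 0])[0]         # index of a nonzero element of B
A = np.zeros((len(B), len(B)))
A[:, i] = C[:, 0] / B[i, 0]            # i-th column is C / Bi, rest zero

print(np.allclose(A @ B, C))           # True
```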
0 | 39,334,690 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-09-05T14:36:00.000 | 0 | 1 | 0 | Failure to import sknn.mlp / Theano | 39,332,901 | 0 | python,installation,attributes,scikit-learn,theano | Apparently it was caused by some issue with Visual Studio. The import worked when I reinstalled VS and restarted the computer.
Thanks @super_cr7 for the prompt reply! | I'm trying to use scikit-learn's neural network module in iPython... running Python 3.5 on a Win10, 64-bit machine.
When I try to import from sknn.mlp import Classifier, Layer , I get back the following AttributeError: module 'theano' has no attribute 'gof' ...
The command line highlighted for the error is class DisconnectedType(theano.gof.type.Type), within theano\gradient.py
Theano version is 0.8.2, everything installed via pip.
Any lights on what may be causing this and how to fix it? | 0 | 1 | 331 |
0 | 39,395,872 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2016-09-08T04:42:00.000 | 1 | 1 | 0 | Saving Python list containing TensorFlow SparseTensors to file for later access? | 39,382,725 | 0.197375 | python,json,tensorflow | A Tensor in TensorFlow is a node in the graph which, when run, will produce a tensor. So you can't save the SparseTensor directly because it's not a value (you can serialize the graph). If you do evaluate the SparseTensor, you get a SparseTensorValue object back, which can be serialized as it's just a tuple.
Thanks in advance | 0 | 1 | 70 |
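A sketch of the answer's approach under the TF 1.x Session API of the era; the file name is illustrative, and only the evaluated SparseTensorValue (a plain named tuple) is pickled:

```python
import pickle
import tensorflow as tf  # TF 1.x-style graph/Session API assumed

sp = tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1.0, 2.0], dense_shape=[3, 4])
with tf.Session() as sess:
    sp_value = sess.run(sp)         # SparseTensorValue: (indices, values, dense_shape)

with open('sparse_list.pkl', 'wb') as f:
    pickle.dump([sp_value], f)      # a list of these pickles just fine

# in a later session, rebuild the symbolic tensor from the saved value
with open('sparse_list.pkl', 'rb') as f:
    restored = tf.SparseTensor(*pickle.load(f)[0])
```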
0 | 44,253,561 | 0 | 0 | 0 | 0 | 2 | false | 149 | 2016-09-08T06:03:00.000 | 21 | 10 | 0 | Show distinct column values in pyspark dataframe | 39,383,557 | 1 | python,apache-spark,pyspark,apache-spark-sql | You can use df.dropDuplicates(['col1','col2']) to get only distinct rows based on the columns in the list. | With pyspark dataframe, how do you do the equivalent of Pandas df['col'].unique().
I want to list out all the unique values in a pyspark dataframe column.
Not the SQL type way (registertemplate then SQL query for distinct values).
Also I don't need groupby then countDistinct, instead I want to check distinct VALUES in that column. | 0 | 1 | 344,799 |
0 | 60,578,769 | 0 | 0 | 0 | 0 | 2 | false | 149 | 2016-09-08T06:03:00.000 | 1 | 10 | 0 | Show distinct column values in pyspark dataframe | 39,383,557 | 0.019997 | python,apache-spark,pyspark,apache-spark-sql | If you want to select ALL(columns) data as distinct frrom a DataFrame (df), then
df.select('*').distinct().show(10,truncate=False) | With pyspark dataframe, how do you do the equivalent of Pandas df['col'].unique().
I want to list out all the unique values in a pyspark dataframe column.
Not the SQL type way (registertemplate then SQL query for distinct values).
Also I don't need groupby then countDistinct, instead I want to check distinct VALUES in that column. | 0 | 1 | 344,799 |
0 | 52,673,944 | 0 | 0 | 0 | 0 | 1 | false | 14 | 2016-09-09T13:35:00.000 | 20 | 4 | 0 | pandas.read_html not support decimal comma | 39,412,829 | 1 | python,pandas,decimal,xlm | This did not start working for me until I used both decimal=',' and thousands='.'
Pandas version: 0.23.4
So try to use both decimal and thousands:
i.e.:
pd.read_html(io="http://example.com", decimal=',', thousands='.')
Before, I would only use decimal=',' and the number columns would be saved as type str with the comma simply omitted (weird behaviour). For example, 0,7 would become "07" and 1,9 would become "19".
It is still being saved in the dataframe as type str but at least I don't have to manually put in the dots. The numbers are correctly displayed; 0,7 -> "0.7" | I was reading an xlm file using pandas.read_html and works almost perfect, the problem is that the file has commas as decimal separators instead of dots (the default in read_html).
I could easily replace the commas by dots in one file, but i have almost 200 files with that configuration.
with pandas.read_csv you can define the decimal separator, but I don't know why in pandas.read_html you can only define the thousands separator.
Any guidance in this matter? Is there another way to automate the comma/dot replacement before the file is opened by pandas?
thanks in advance! | 1 | 1 | 4,626 |
0 | 39,433,909 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2016-09-11T05:16:00.000 | 1 | 1 | 0 | Find when integral of an interpolated function is equal to a specific value (python) | 39,433,108 | 1.2 | python,scipy | One simple way is to use the CubicSpline class instead. Then it's CubicSpline(x, y).antiderivative().solve(0.05*M) or thereabouts. | I have arrays t_array and dMdt_array of x and y points. Let's call M = trapz(dMdt_array, t_array). I want to find at what value of t the integral of dM/dt vs t is equal to a certain value -- say 0.05*M. In python, is there a nice way to do this?
I was thinking something like F = interp1d(t_array, dMdt_array). Then some kind of root find for where the integral of F is equal to 0.05*M. Can I do this in python? | 0 | 1 | 113 |
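A runnable sketch of the accepted suggestion, with made-up data standing in for t_array and dMdt_array:

```python
import numpy as np
from scipy.interpolate import CubicSpline

t_array = np.linspace(0.0, 10.0, 200)     # hypothetical data
dMdt_array = np.exp(-t_array)

spline = CubicSpline(t_array, dMdt_array)
F = spline.antiderivative()               # running integral, zero at t_array[0]
M = F(t_array[-1])                        # total integral (close to trapz here)
roots = F.solve(0.05 * M)                 # t values where the integral equals 0.05*M
print(roots[(roots >= t_array[0]) & (roots <= t_array[-1])])
```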
0 | 39,446,832 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-09-12T01:34:00.000 | 0 | 1 | 0 | Training a neural network with two groups of spacial coordinates per observation? | 39,442,327 | 0 | python,machine-learning,scikit-learn,neural-network,regression | If I got this right, you are basically trying to implement classification variables into your input, and this is basically done by adding an input variable for each possible class (in your case "group 1" and "group 2") that holds binary values (1 if the sample belongs to the group, 0 if it doesn't). Wheather or not you would want to retain the actual coordinates depends on wheather you would like your network to process actual spatial data, or simply base it's output on the group that the sample belongs to. As I don't have much experience with the particular module you're using, I am unable to provide actual code, but I hope this helps. | I'm trying to predict an output (regression) where multiple groups have spacial (x,y) coordinates. I've been using scikit-learn's neural network packages (MLPClassifier and MLPRegressor), which I know can be trained with spacial data by inputting a 1-D array per observation (ex. the MNIST dataset).
I'm trying to figure out the best way to tell the model that group 1 has this set of spatial coordinates AND group 2 has a different set of spatial coordinates, and that combination yielded a result. Would it make more sense to input a single array where a group 1 location is represented by 1 and a group 2 location is represented by -1? Or to create an array for group 1 and group 2 and append them? Still pretty new to neural nets - hopefully this question makes sense. | 0 | 1 | 244 |
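An illustrative encoding along the lines of the answer, appending a one-hot group indicator to the flattened coordinates (all names, shapes, and values here are made up):

```python
import numpy as np

def encode(coords, group, n_groups=2):
    """Flatten an observation's (x, y) coordinates and tag it with its group."""
    onehot = np.zeros(n_groups)
    onehot[group] = 1.0
    return np.concatenate([np.ravel(coords), onehot])

print(encode([(0.3, 0.7), (0.1, 0.9)], group=0))  # [0.3 0.7 0.1 0.9 1.  0. ]
```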
0 | 39,447,667 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2016-09-12T08:34:00.000 | 2 | 1 | 0 | Python: Binary image segmentation | 39,446,262 | 1.2 | python-2.7,image-segmentation,binary-image | To my mind, this is exactly what can be done using scipy.ndimage.measurements.label and scipy.ndimage.measurements.find_objects
You have to specify what "touching" means. If it means edge-sharing, then the default structure of ndimage.measurements.label is the one you need so you just need to pass your array. If touching means also corner sharing, you will find the right structure in the docstring.
find_objects can then yield a list of slices for the objects. | Is there a easy way to implement the segmentation of a binary image in python?
My 2d-"images" are numpy arrays. The used values are 1.0 and 0.0. I would require a list of all objects with the value 1.0. Every black pixel is a pixel of an object. An object may contain many touching pixels with the value 1.0.
I can use numpy and also scipy.
I already tried to iterate over all pixels and create sets of pixels and fill the new pixel in old sets (or create a new set). Unfortunately the implementation was poor, extremely buggy and also very slow.
Hopefully something like this already exists, or there is an easy way to do this?
Thank you very much | 0 | 1 | 1,440 |
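A small sketch of the labelling approach from the answer (the test array is made up; the default structure means edge-sharing connectivity):

```python
import numpy as np
from scipy import ndimage

img = np.array([[1., 1., 0., 0.],
                [0., 1., 0., 1.],
                [0., 0., 0., 1.]])

labeled, n = ndimage.label(img)               # default: edge-sharing connectivity
objects = [np.argwhere(labeled == k + 1) for k in range(n)]
for obj in objects:
    print(obj.tolist())                       # pixel coordinates of one object each
```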
0 | 39,451,022 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-09-12T11:53:00.000 | 0 | 1 | 0 | Python 2.7 opencv Yuv/ YPbPr | 39,449,728 | 1.2 | python-2.7,opencv,yuv | You should take care of how YUV is arranged in memory. There are various formats involved. The most common being YUV NV12 and NV21. In general, data is stored as unsigned bytes. While the range of Y is from 0~255, it is -128~127 for U and V. As both U and V approach 0, you have less saturation and approach grayscale. In the case of both NV12 and NV21, it is cols * rows of Y followed by 0.5 * cols * rows of U and V. Both NV12 and NV21 is a semi-planar format, so U and V are interleaved. The former starts with U and the latter with V. In the case of a planar format, there is no interleaving involved. | So I've heard that the YUV and YPbPr colour system is essentially the same.
When I convert BGR to YUV, presumably with the cv2.COLOR_BGR2YUV OpenCV flag, what are the ranges of the values returned for Y, U and V? Because on Colorizer.org the values seem to be decimals, but I haven't seen OpenCV spit out any decimal places before.
So basically what I'm asking (in a very general, but hopefully easily answerable way)
What does YUV look like in an array? (ranges and such comparable to the Colorizers.org) | 0 | 1 | 220 |
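A quick check one can run; for 8-bit input OpenCV returns uint8, with the chroma planes stored offset by 128 (the signed -128..127 view shifted into 0..255), which is why no decimals ever appear:

```python
import cv2
import numpy as np

bgr = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)   # stand-in image
yuv = cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV)

print(yuv.dtype)                      # uint8: everything lives in 0..255
y, u, v = cv2.split(yuv)
print(u.min(), u.max())               # chroma is offset by +128 (128 = neutral)
print(u.astype(np.int16) - 128)       # the signed -128..127 view of U
```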
0 | 39,481,787 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-09-13T09:34:00.000 | 2 | 2 | 0 | Use of Scaler with LassoCV, RidgeCV | 39,466,671 | 0.197375 | python,machine-learning,scikit-learn | I got the answer through the scikit-learn mailing list so here it is:
'There is no way to use the "efficient" EstimatorCV objects with pipelines.
This is an API bug and there's an open issue and maybe even a PR for that.'
Many thanks to Andreas Mueller for the answer. | I would like to use scikit-learn LassoCV/RidgeCV while applying a 'StandardScaler' on each fold training set. I do not want to apply the scaler before the cross-validation to avoid leakage but I cannot figure out how I am supposed to do that with LassoCV/RidgeCV.
Is there a way to do this? Or should I create a pipeline with Lasso/Ridge and 'manually' search for the hyperparameters (using GridSearchCV, for instance)?
Many thanks. | 0 | 1 | 726 |
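A sketch of the pipeline-plus-grid-search route the question proposes, so the scaler is re-fit on each training fold and leakage is avoided (the data and alpha grid are arbitrary):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=200, n_features=20, noise=1.0, random_state=0)

pipe = Pipeline([('scale', StandardScaler()), ('lasso', Lasso(max_iter=10000))])
search = GridSearchCV(pipe, {'lasso__alpha': np.logspace(-3, 1, 20)}, cv=5)
search.fit(X, y)            # scaling happens inside each CV fold
print(search.best_params_)
```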
0 | 39,477,667 | 0 | 1 | 0 | 0 | 2 | false | 1 | 2016-09-13T18:47:00.000 | 1 | 4 | 1 | Error "mach-o, but wrong architecture" after installing anaconda on mac | 39,477,023 | 0.049958 | python,macos,python-2.7 | You are mixing 32-bit and 64-bit versions of Python.
Probably you installed a 64-bit Python version on a 32-bit computer.
Go ahead and uninstall Python, then reinstall it with the right configuration. | I am getting an architecture error while importing any package. I understand my Python might not be compatible, but I can't figure out why.
Current Python Version - 2.7.10
MyMachine:desktop *********$ python pythonmath.py
Traceback (most recent call last):
File "pythonmath.py", line 1, in
import math
ImportError: dlopen(/Users/*********/anaconda/lib/python2.7/lib-dynload/math.so, 2): no suitable image found. Did find:
/Users/**********/anaconda/lib/python2.7/lib-dynload/math.so: mach-o, but wrong architecture | 0 | 1 | 7,799 |
0 | 70,210,511 | 0 | 1 | 0 | 0 | 2 | false | 1 | 2016-09-13T18:47:00.000 | 3 | 4 | 1 | Error "mach-o, but wrong architecture" after installing anaconda on mac | 39,477,023 | 0.148885 | python,macos,python-2.7 | Below steps resolved this problem for me.
Quit the terminal.
Go to Finder => Apps
Right Click on Terminal
Get Info
Check the checkbox Open using Rosetta
Now, open the terminal and try again.
PS: Rosetta allows a Mac with the M1 architecture to run apps built for Intel-chip Macs. Most of the time, this chip compatibility is the reason behind such architecture problems, so 'Open using Rosetta' makes the terminal use Rosetta by default for these applications. | I am getting an architecture error while importing any package. I understand my Python might not be compatible, but I can't figure out why.
Current Python Version - 2.7.10
MyMachine:desktop *********$ python pythonmath.py
Traceback (most recent call last):
File "pythonmath.py", line 1, in
import math
ImportError: dlopen(/Users/*********/anaconda/lib/python2.7/lib-dynload/math.so, 2): no suitable image found. Did find:
/Users/**********/anaconda/lib/python2.7/lib-dynload/math.so: mach-o, but wrong architecture | 0 | 1 | 7,799 |
0 | 46,030,298 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-09-14T13:06:00.000 | 0 | 1 | 0 | Matplotlib imshow() | 39,491,258 | 0 | python,matplotlib | Without more detail on your specific problem, it's hard to guess what is the best way to represent your data. I am going to give an example, hopefully it is relevant.
Suppose we are collecting height and weight of a group of people. Maybe the index of the person is your first dimension, and the height and weight depends on who it is. Then one way to represent this data is use height and weight as the x and y axes, and plot each person as a dot in that two dimensional space.
In this example, the person index doesn't really have much meaning, thus no color is needed. | I am stuck with Python and matplotlib imshow(). The aim is to show a two-dimensional color map which represents three dimensions.
My x-axis is represented by an array 'TG' (93 entries). My y-axis is a set of arrays dependent on my 'TG'. To be precise, we have 93 different arrays with a length of 340. My z-axis is also a set of arrays dependent on my 'TG', equally sized as y (93x340).
Basically what I have is a set of two-dimensional measurements which I want to plot in color dependent on a third array. Is there a clever way to do that? I was trying to find out on my own first, but all I found addresses the most common case with just a z-plane (a two-dimensional plot). So I have two matrices of the order of (93x340) and one array (93). Do you know any helpful advice? | 0 | 1 | 343 |
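One hedged way to render such data, scattering the (TG, y) points and coloring them by z; random arrays stand in for the real 93x340 measurements:

```python
import numpy as np
import matplotlib.pyplot as plt

TG = np.linspace(0.0, 1.0, 93)               # stand-in for the 93-entry array
Y = np.cumsum(np.random.rand(93, 340), 1)    # 93 measured curves of length 340
Z = np.random.rand(93, 340)                  # third dimension, mapped to color

X = np.repeat(TG[:, None], Y.shape[1], 1)    # broadcast TG to match Y and Z
sc = plt.scatter(X.ravel(), Y.ravel(), c=Z.ravel(), s=4, cmap='viridis')
plt.colorbar(sc, label='z value')
plt.show()
```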
0 | 39,493,833 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2016-09-14T14:59:00.000 | 3 | 2 | 0 | Python how does == work for float/double? | 39,493,732 | 1.2 | python,pandas,floating-point | The same string representation will become the same float representation when put through the same parse routine. The float inaccuracy issue occurs either when mathematical operations are performed on the values or when high-precision representations are used, but equality on low-precision values is no reason to worry. | I know using == for float is generally not safe. But does it work for the below scenario?
Read from csv file A.csv, save first half of the data to csv file B.csv without doing anything.
Read from both A.csv and B.csv. Use == to check if data match everywhere in the first half.
These are all done with Pandas. The columns in A.csv have types datetime, string, and float. Obviously == works for datetime and string, so if == works for float as well in this case, it saves a lot of work.
It seems to be working for all my tests, but can I assume it will work all the time? | 0 | 1 | 717 |
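A small demonstration of the answer's core point: identical text, fed through the same parser, yields bit-identical doubles, so == is safe as long as no arithmetic happens in between (the CSV content is illustrative):

```python
import pandas as pd
from io import StringIO

csv_text = "x\n0.1\n2.5000000000000004\n"
a = pd.read_csv(StringIO(csv_text))
b = pd.read_csv(StringIO(csv_text))   # same bytes, same parse routine

print((a['x'] == b['x']).all())       # True: equality holds exactly
```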
0 | 39,517,206 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2016-09-15T17:29:00.000 | 0 | 3 | 0 | Finding closest value in a dictionary | 39,517,040 | 0 | python,python-2.7,loops,dictionary,iteration | Since the values for a given a are strictly increasing with successive i values, you can do a binary search for the value that is closest to your target.
While it's certainly possible to write your own binary search code on your dictionary, I suspect you'd have an easier time with a different data structure. If you used nested lists (with a as the index to the outer list, and i as the index to an inner list), you could use the bisect module to search the inner list efficiently. | I have a dictionary, T, with keys in the form k,i with an associated value that is a real number (float). Let's suppose I choose a particular key a,b from the dictionary T with corresponding value V1—what's the most efficient way to find the closest value to V1 for a key that has the form a+1,i, where i is an integer that ranges from 0 to n? (k,a, and b are also integers.) To add one condition on the values of items in T, as i increases in the key, the value associated to T[a+1,i] is strictly increasing (i.e. T[a+1,i+1] > T[a+1,i]).
I was planning to simply run a while loop that starts from i = 0 and compares the valueT[a+1,i] to V1. To be more clear, the loop would simply stop at the point at which np.abs(T[a+1,i] - V1) < np.abs(T[a+1,i+1] - V1), as I would know the item associated to T[a+1,i] is the closest to T[a,b] = V1. But given the strictly increasing condition I have imposed, is there a more efficient method than running a while loop that iterates over the dictionary elements? i will go from 0 to n where n could be an integer in the millions. Also, this process would be repeated frequently so efficiency is key. | 0 | 1 | 2,461 |
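A sketch of the nested-list-plus-bisect idea from the answer (the data is made up; the inner lists must be sorted, which the strictly increasing condition guarantees):

```python
from bisect import bisect_left

def closest(sorted_vals, target):
    """Value in a sorted list closest to target, via O(log n) binary search."""
    pos = bisect_left(sorted_vals, target)
    if pos == 0:
        return sorted_vals[0]
    if pos == len(sorted_vals):
        return sorted_vals[-1]
    lo, hi = sorted_vals[pos - 1], sorted_vals[pos]
    return hi if hi - target < target - lo else lo

T = [[0.5, 1.1, 2.3, 4.0],        # T[a][i], strictly increasing in i
     [0.2, 0.9, 3.1, 5.5]]
a, b = 0, 2
v1 = T[a][b]                      # value at key (a, b)
print(closest(T[a + 1], v1))      # closest T[a+1][i] to v1 -> 3.1
```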
0 | 62,178,663 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2016-09-15T21:18:00.000 | 0 | 4 | 0 | How to measure the memory footprint of importing pandas? | 39,520,532 | 0 | python,pandas | After introducing pandas to my script and loaded dataframe with 0.8MB data, ran the script and surprised to see the memory usage got increased from 13MB to 49MB. I suspected my existing script has some memory leak and I used memory profiler to check what is consuming much memory and finally the culprit is pandas. Just import statement which loads the library into memory is taking around 30MB. Importing only specific item like (from pandas import Dataframe) didn't help much in saving memory.
Just import pandas takes around 30MB memory
Once import is done, memory of dataframe object can be checked by using print(df.memory_usage(deep=True)) which depends on the data loaded to dataframe | I am running Python on a low memory system.
I want to know whether or not importing pandas will increase memory usage significantly.
At present I just want to import pandas so that I can use the date_range function. | 0 | 1 | 493 |
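One hedged way to measure the import cost yourself, using the memory_profiler package the answer mentions; its line-by-line report shows the increment caused by the import line:

```python
from memory_profiler import profile

@profile
def load_pandas():
    import pandas as pd          # the line whose memory increment we care about
    return pd.date_range('2016-01-01', periods=3)

if __name__ == '__main__':
    print(load_pandas())
```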
0 | 39,520,649 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2016-09-15T21:18:00.000 | 3 | 4 | 0 | How to measure the memory footprint of importing pandas? | 39,520,532 | 0.148885 | python,pandas | You may also want to use a Memory Profiler to get an idea of how much memory is allocated to your Pandas objects. There are several Python Memory Profilers you can use (a simple Google search can give you an idea). PySizer is one that I used a while ago. | I am running Python on a low memory system.
I want to know whether or not importing pandas will increase memory usage significantly.
At present I just want to import pandas so that I can use the date_range function. | 0 | 1 | 493 |
0 | 39,530,209 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-09-16T06:41:00.000 | 0 | 2 | 1 | Configure Spark for a given cluster | 39,525,214 | 0 | java,python,scala,apache-spark,pyspark | Your question is unclear. If the data are on your local machine, you should first copy them to the cluster's HDFS filesystem. Spark can work in three modes (are you using YARN or Mesos?): cluster, client, and standalone. What you are looking for is client mode or cluster mode; if you want to start the application from your local machine, use client mode. If you have SSH access, you are free to use both.
The simplest way is to copy your code directly onto the cluster, if it is properly configured, and then start the application with the ./spark-submit script, providing the class to use as an argument. It works with Python scripts and Java/Scala classes (I only use Python, so I don't know the details). | I have to send some applications in Python to an Apache Spark cluster. I am given a cluster manager and some worker nodes with the addresses to send the applications to.
My question is, how do I set up and configure Spark on my local computer to send those requests, with the data to be processed, to the cluster?
I am working on Ubuntu 16.xx and have already installed Java and Scala. I have searched the internet, but most of what I found is how to build the cluster, or old advice that is out of date. | 0 | 1 | 35 |
0 | 39,576,294 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-09-16T11:10:00.000 | 0 | 1 | 0 | Is it possible to create a polynomial through Numpy's C API? | 39,530,054 | 1.2 | python,c++,numpy,swig | Numpy's polynomial package is largely a collection of functions that can accept array-like objects as the polynomial. Therefore, it is sufficient to convert to a normal ndarray, where the value at index n is the coefficient for the term with exponent n. | I'm using SWIG to wrap a C++ library with its own polynomial type. I'd like to create a typemap to automatically convert that to a numpy polynomial. However, browsing the docs for the numpy C API, I'm not seeing anything that would allow me to do this, only numpy arrays. Is it possible to typemap to a polynomial? | 0 | 1 | 42 |
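A small illustration of the convention the answer describes: index n of the plain array holds the coefficient of x**n, and numpy.polynomial accepts either the array or its Polynomial wrapper:

```python
import numpy as np

coeffs = np.array([2.0, 0.0, 3.0])                    # represents 2 + 3*x**2
p = np.polynomial.Polynomial(coeffs)                  # optional convenience class
print(p(1.5))                                         # 8.75
print(np.polynomial.polynomial.polyval(1.5, coeffs))  # same, straight from the array
```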
0 | 39,538,477 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2016-09-16T18:52:00.000 | 0 | 3 | 0 | Given an acyclic directed graph, return a collection of collections of nodes "at the same level"? | 39,538,363 | 0 | python,algorithm,graph,graph-theory,networkx | Why would bfs not solve it? A bfs algorithm is breadth traversal algorithm, i.e. it traverses the tree level wise. This also means, all nodes at same level are traversed at once, which is your desired output.
As pointed out in comment, this will however, assume a starting point in the graph. | Firstly I am not sure what such an algorithm is called, which is the primary problem - so first part of the question is what is this algorithm called?
Basically I have a DiGraph() into which I insert the nodes [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] and the edges ([1,3],[2,3],[3,5],[4,5],[5,7],[6,7],[7,8],[7,9],[7,10])
From this I'm wondering if it's possible to get a collection as follows: [[1, 2, 4, 6], [3], [5], [7], [8, 9, 10]]
EDIT: Let me add some constraints if it helps.
- There are no cycles, this is guaranteed
- There is no one start point for the graph
What I'm trying to do is to collect the nodes at the same level such that their processing can be parallelized, but within the outer collection, the processing is serial.
EDIT2: So clearly I hadn't thought about this enough, so the easiest way to describe "level" is interms of deepest predecessor, all nodes that have the same depth of predecessors. So the first entry in the above list are all nodes that have 0 as the deepest predecessor, the second has one, third has two and so on. Within each list, the order of the siblings is irrelevant as they will be parallelly processed. | 0 | 1 | 661 |
0 | 39,809,660 | 0 | 1 | 0 | 0 | 1 | false | 17 | 2016-09-16T22:06:00.000 | 10 | 2 | 0 | Google Cloud Vision - Numbers and Numerals OCR | 39,540,741 | 1 | python,ocr,google-cloud-platform,google-cloud-vision,text-recognition | I am unable to tell you why this works; perhaps it has to do with how the language is read, o vs 0, l vs 1, etc. But whenever I use OCR and am specifically looking for numbers, I have read that you should set the detection language to "Korean". It works exceptionally well for me and has improved the accuracy greatly.
The goal is to at least output the correct numbers consecutively, doesn't matter if the results are sprinkled with other junk. Is there a way to help the program recognize numbers better, for example limit the results to a specific format, or to numbers only? | 0 | 1 | 5,635 |
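A sketch of that hint combined with a regex filter for the XXX-XXX format. The language-hints call shape assumes a recent google-cloud-vision client, so treat the exact API here as illustrative:

```python
import re
from google.cloud import vision  # assumes a recent google-cloud-vision client

client = vision.ImageAnnotatorClient()
with open('frame.bmp', 'rb') as f:                 # hypothetical input image
    image = vision.Image(content=f.read())

response = client.text_detection(
    image=image,
    image_context={'language_hints': ['ko']})      # the "Korean" trick from the answer

text = response.text_annotations[0].description if response.text_annotations else ''
print(re.findall(r'\d{3}-\d{3}', text))            # keep only the XXX-XXX matches
```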
0 | 39,543,370 | 0 | 1 | 0 | 0 | 1 | true | 3 | 2016-09-17T00:17:00.000 | 1 | 1 | 0 | Setting default histtype in matplotlib? | 39,541,655 | 1.2 | python,matplotlib,data-analysis | Thank you for prompting me to look at this, as I much prefer 'step' style histograms too! I solved this problem by going into the matplotlib source code. I use anaconda, so it was located in anaconda/lib/python2.7/site-packages/matplotlib.
To change the histogram style I edited two of the files. Assuming that the current directory is matplotlib/, then open up axes/_axes.py and locate the hist() function there (it's on line 5690 on my machine, matplotlib version 1.5.1). You should see the histtype argument there. Change this to 'step'.
Now open up pyplot.py and again locate the hist() function and make the same change to the histtype argument (line 2943 in version 1.5.1 and on my machine). There is a comment about not editing this function, but I only found this to be an issue when I didn't also edit axes/_axes.py as well.
This worked for me! Another alternative would be just to write a wrapper around hist() yourself that changes the default argument. | Is there a way to configure the default argument for histtype of matplotlib's hist() function? The default behavior is to make bar-chart type histograms, which I basically never want to look at, since it is horrible for comparing multiple distributions that have significant overlap.
In case it's somehow relevant, the default behavior I would like to attain is to have histtype='step'. | 0 | 1 | 602 |
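A non-invasive alternative to editing the source, along the lines of the wrapper the answer mentions at the end: a partial that defaults histtype to 'step' (a sketch, not matplotlib's own API):

```python
import functools
import matplotlib.pyplot as plt

hist_step = functools.partial(plt.hist, histtype='step')  # new default, still overridable

hist_step([1, 2, 2, 3, 3, 3], bins=3)
hist_step([2, 3, 3, 4, 4, 4], bins=3)   # overlapping distributions stay readable
plt.show()
```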
0 | 39,577,466 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-09-18T07:09:00.000 | 0 | 1 | 0 | Can I safely do inference from another thread while training the network? | 39,555,060 | 0 | python,multithreading,thread-safety,locking,tensorflow | Do your inference calls need to be on an up-to-date version of the graph? If you don't mind some delay, you could make a copy of the graph by calling sess.graph.as_graph_def on the training thread, and then create a new session on the inference thread using that graph_def periodically. | I have several threads that either update the weights of my network or run inference on it. I use the use_locking parameter for the optimizer to prevent concurrent updates of the weights.
Inference should always use a recent, and importantly, consistent, version of the weights. In other words, I want to prevent using a weight matrix for inference for which some of the elements are already updated but others are not.
Is this guaranteed? If not, how can I ensure this? There doesn't seem to be a tf.Lock or similar. | 0 | 1 | 67 |
1 | 39,557,406 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2016-09-18T07:38:00.000 | 2 | 1 | 0 | Calling a C++ CUDA device function from a Python kernel | 39,555,235 | 1.2 | python,cuda,cython,numba,pycuda | As far as I am aware, this isn't possible in either language. Neither exposes the necessary toolchain controls for separate compilation or APIs to do runtime linking of device code. | I'm working on a project that involves creating CUDA kernels in Python. Numba works quite well (what these guys have accomplished is quite incredible), and so does PyCUDA.
My problem is that I want to call a C device function from my Python generated kernel. I couldn't find a way to accomplish this. Numba can call CFFI modules but only in CPU code. In PyCUDA I can add my C device functions to the SourceModule, but I couldn't figure out how to include functions that already exist in another library.
Is there a way to accomplish this? | 0 | 1 | 604 |
0 | 39,563,394 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-09-18T21:40:00.000 | 0 | 1 | 0 | predict external dataset with models from random forest | 39,562,939 | 0 | python | First of all, you should not save the result of cross validation. Cross validation is not a training method, it is an evaluation scheme. You should build a single model on your whole dataset and use it to predict.
If, for some reason, you can no longer train your model, you can still use these 5 models by averaging their predictions (a random forest is itself a simple averaging ensemble of trees); however, going back and retraining should give you better results. | I used joblib.dump in Python to save models from 5-fold cross-validation modelling using random forest. As a result I have 5 models for each dataset saved as: MDL_1.pkl, MDL_2.pkl, MDL_3.pkl, MDL_4.pkl, MDL_5.pkl. Now I want to use these models for prediction on an external dataset using predict_proba, where the final prediction for each line in my external dataset is an average over the 5 models. What is the best way to proceed?
Thank you for your help | 0 | 1 | 43 |
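A sketch of the averaging fallback, assuming the five pickles from the question exist and the external data has been prepared like the training data (the loader line and file name are illustrative):

```python
import numpy as np
import joblib

models = [joblib.load('MDL_%d.pkl' % i) for i in range(1, 6)]   # the five fold models

X_external = np.loadtxt('external.csv', delimiter=',')          # hypothetical external set
probas = np.mean([m.predict_proba(X_external) for m in models], axis=0)
labels = models[0].classes_[probas.argmax(axis=1)]              # final averaged prediction
```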
0 | 57,812,535 | 0 | 0 | 0 | 0 | 1 | false | 34 | 2016-09-19T06:39:00.000 | 5 | 2 | 0 | Writing Dask partitions into single file | 39,566,809 | 0.462117 | python,dask | you can convert your dask dataframe to a pandas dataframe with the compute function and then use the to_csv. something like this:
df_dask.compute().to_csv('csv_path_file.csv') | New to dask,I have a 1GB CSV file when I read it in dask dataframe it creates around 50 partitions after my changes in the file when I write, it creates as many files as partitions.
Is there a way to write all partitions to single CSV file and is there a way access partitions?
Thank you. | 0 | 1 | 14,834 |
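Two hedged variants: the answer's collect-then-write route (assumes the result fits in memory), plus the single_file flag, which is an assumption — it exists only in Dask releases much newer than this post:

```python
import dask.dataframe as dd

df = dd.read_csv('big_input.csv')               # hypothetical 1 GB input

df.compute().to_csv('out.csv', index=False)     # option A: via pandas, in memory

df.to_csv('out_single.csv', single_file=True)   # option B: newer Dask, one file directly
```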
0 | 39,581,743 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2016-09-19T13:35:00.000 | 0 | 1 | 0 | Is there any K-means++ implementation outside of scikit-learn for Python 2.7? | 39,574,567 | 0 | python-2.7,scipy,scikit-learn,k-means | So, the situation as of today is: there is no distributed Python implementation of KMeans++ other than in scikit-learn. That situation may change if a good implementation finds its way into scipy. | I have nothing against scikit-learn, but I had to install anaconda to get it, which is a bit obtrusive. | 0 | 1 | 72 |
0 | 39,583,123 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-09-19T22:21:00.000 | 1 | 1 | 0 | regarding the tensor shape is (?,?,?,1) | 39,582,974 | 1.2 | python,tensorflow | This output means that TensorFlow's shape inference has only been able to infer a partial shape for the mask tensor. It has been able to infer (i) that mask is a 4-D tensor, and (ii) its last dimension is 1; but it does not know statically the shape of the first three dimensions.
If you want to get the actual shape of the tensor, the main approaches are:
Compute mask_val = sess.run(mask) and print mask_val.shape.
Create a symbolic mask_shape = tf.shape(mask) tensor, compute mask_shape_val = sess.run(mask_shape) and print mask_shape_val.
Shapes usually have unknown components if the shape depends on the data, or if the tensor is itself a function of some tensor(s) with a partially known shape. If you believe that the shape of the mask should be static, you can trace the source of the uncertainty by (recursively) looking at the inputs of the operation(s) that compute mask and finding out where the shape becomes partially known. | While debugging TensorFlow code, I would like to output the shape of a tensor, say, print("mask's shape is: ", mask.get_shape()). However, the corresponding output is: mask's shape is (?,?,?,1). How should this kind of output be interpreted, and is there any way to know the exact values of the first three dimensions of this tensor? | 0 | 1 | 569 |
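A minimal TF 1.x-style illustration of both options; a placeholder with unknown dimensions stands in for whatever op produces mask:

```python
import numpy as np
import tensorflow as tf  # TF 1.x graph/Session API assumed

x = tf.placeholder(tf.float32, shape=[None, None, None, 1])
mask = tf.identity(x)                     # stand-in for the op that makes `mask`
print(mask.get_shape())                   # (?, ?, ?, 1): static, partially known

mask_shape = tf.shape(mask)               # dynamic shape, known only at run time
with tf.Session() as sess:
    feed = {x: np.zeros((2, 3, 4, 1), np.float32)}
    print(sess.run(mask_shape, feed))     # [2 3 4 1]
```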
0 | 39,605,142 | 0 | 0 | 0 | 0 | 1 | false | 9 | 2016-09-20T22:51:00.000 | 6 | 2 | 0 | What does (n,) mean in the context of numpy and vectors? | 39,604,918 | 1 | python,numpy,machine-learning,neural-network | (n,) is a tuple of length 1, whose only element is n. (The syntax isn't (n) because that's just n instead of making a tuple.)
If an array has shape (n,), that means it's a 1-dimensional array with a length of n along its only dimension. It's not a row vector or a column vector; it doesn't have rows or columns. It's just a vector. | I've tried searching StackOverflow, googling, and even using symbolhound to do character searches, but was unable to find an answer. Specifically, I'm confused about Ch. 1 of Nielsen's Neural Networks and Deep Learning, where he says "It is assumed that the input a is an (n, 1) Numpy ndarray, not a (n,) vector."
At first I thought (n,) referred to the orientation of the array - so it might refer to a one-column vector as opposed to a vector with only one row. But then I don't see why we need (n,) and (n, 1) both - they seem to say the same thing. I know I'm misunderstanding something but am unsure.
For reference a refers to a vector of activations that will be input to a given layer of a neural network, before being transformed by the weights and biases to produce the output vector of activations for the next layer.
EDIT: This question equivocates between a "one-column vector" (there's no such thing) and a "one-column matrix" (does actually exist). Same for "one-row vector" and "one-row matrix".
A vector is only a list of numbers, or (equivalently) a list of scalar transformations on the basis vectors of a vector space. A vector might look like a matrix when we write it out, if it only has one row (or one column). Confusingly, we will sometimes refer to a "vector of activations" but actually mean "a single-row matrix of activation values transposed so that it is a single-column."
Be aware that in neither case are we discussing a one-dimensional vector, which would be a vector defined by only one number (unless, trivially, n==1, in which case the concept of a "column" or "row" distinction would be meaningless). | 0 | 1 | 2,722 |
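A quick numpy demonstration of why the book insists on (n, 1): the two shapes behave very differently under broadcasting and transposition:

```python
import numpy as np

a = np.zeros(5)          # shape (5,): 1-D, neither row nor column
b = np.zeros((5, 1))     # shape (5, 1): 2-D, one column

print(a.shape, b.shape)  # (5,) (5, 1)
print((a + b).shape)     # (5, 5): broadcasting treats them very differently
print(a.T.shape)         # (5,): transposing a 1-D array does nothing
```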
0 | 39,607,825 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2016-09-21T04:42:00.000 | 0 | 4 | 0 | Number of shortest paths | 39,607,721 | 0 | python,algorithm,chess | Try something. Draw boards of the following sizes: 1x1, 2x2, 3x3, 4x4, and a few odd ones like 2x4 and 3x4. Starting with the smallest board and working to the largest, start at the bottom left corner and write a 0, then find all moves from zero and write a 1, find all moves from 1 and write a 2, etc. Do this until there are no more possible moves.
After doing this for all 6 boards, you should have noticed a pattern: Some squares couldn't be moved to until you got a larger board, but once a square was "discovered" (ie could be reached), the number of minimum moves to that square was constant for all boards not smaller than the board on which it was first discovered. (Smaller means less than n OR less than x, not less than (n * x) )
This tells something powerful, anecdotally. All squares have a number associated with them that must be discovered. This number is a property of the square, NOT the board, and is NOT dependent on size/shape of the board. It is always true. However, if the square cannot be reached, then obviously the number is not applicable.
So you need to find the number of every square on a 200x200 board, and you need a way to see if a board is a subset of another board to determine if a square is reachable.
Remember, in these programming challenges, some questions that are really hard can be solved in O(1) time by using lookup tables. I'm not saying this one can, but keep that trick in mind. For this one, pre-calculating the 200x200 board numbers and saving them in an array could save a lot of time, whether it is done only once on first run or run before submission and then the results are hard coded in.
If the problem needs move sequences rather than number of moves, the idea is the same: save move sequences with the numbers. | Here is the problem:
Given the input n = 4, x = 5, we must imagine a chessboard that is 4 squares across (x-axis) and 5 squares tall (y-axis). (This input changes, all the way up to n = 200, x = 200.)
Then, we are asked to determine the minimum shortest path from the bottom left square on the board to the top right square on the board for the Knight (the Knight can move 2 spaces on one axis, then 1 space on the other axis).
My current ideas:
Use a 2d array to store all the possible moves, perform breadth-first
search (BFS) on the 2d array to find the shortest path.
Floyd-Warshall shortest path algorithm.
Create an adjacency list and perform BFS on that (but I think this would be inefficient).
To be honest though I don't really have a solid grasp on the logic.
Can anyone help me with pseudocode, Python code, or even just a logical walk-through of the problem? | 0 | 1 | 3,995 |
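A sketch of idea 1 from the question's list: plain BFS over board squares, the standard approach for unweighted shortest paths (board sizes taken from the example input):

```python
from collections import deque

def knight_min_moves(n, x):
    """Fewest knight moves from (0, 0) to (n-1, x-1) on an n-by-x board, or -1."""
    moves = [(1, 2), (2, 1), (-1, 2), (-2, 1),
             (1, -2), (2, -1), (-1, -2), (-2, -1)]
    start, goal = (0, 0), (n - 1, x - 1)
    dist = {start: 0}
    queue = deque([start])
    while queue:
        cx, cy = queue.popleft()
        if (cx, cy) == goal:
            return dist[(cx, cy)]
        for dx, dy in moves:
            nxt = (cx + dx, cy + dy)
            if 0 <= nxt[0] < n and 0 <= nxt[1] < x and nxt not in dist:
                dist[nxt] = dist[(cx, cy)] + 1
                queue.append(nxt)
    return -1   # goal unreachable (possible on very small boards)

print(knight_min_moves(4, 5))   # the n = 4, x = 5 board from the question
```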
0 | 39,608,395 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2016-09-21T04:42:00.000 | 0 | 4 | 0 | Number of shortest paths | 39,607,721 | 0 | python,algorithm,chess | My approach to this question would be backtracking as the number of squares in the x-axis and y-axis are different.
Note: Backtracking algorithms can be slow for certain cases and fast for the other
Create a 2-d array for the chessboard. You know the starting index and the final index. To reach the final index you need to keep close to the diagonal joining the two indexes.
From the starting index, see all the indexes that the knight can travel to, choose the index which is closest to the diagonal indexes, and keep on traversing; if there is no way to travel any further, backtrack one step and move to the next location available from there.
PS: This is a bit similar to the well-known Knight's Tour problem, in which, choosing any starting point, you have to find a path on which the knight covers all squares. I have coded this as a Java GUI application; I can send you the link if you want any help.
Hope this helps!! | Here is the problem:
Given the input n = 4, x = 5, we must imagine a chessboard that is 4 squares across (x-axis) and 5 squares tall (y-axis). (This input changes, all the way up to n = 200, x = 200.)
Then, we are asked to determine the minimum shortest path from the bottom left square on the board to the top right square on the board for the Knight (the Knight can move 2 spaces on one axis, then 1 space on the other axis).
My current ideas:
Use a 2d array to store all the possible moves, perform breadth-first
search (BFS) on the 2d array to find the shortest path.
Floyd-Warshall shortest path algorithm.
Create an adjacency list and perform BFS on that (but I think this would be inefficient).
To be honest though I don't really have a solid grasp on the logic.
Can anyone help me with pseudocode, Python code, or even just a logical walk-through of the problem? | 0 | 1 | 3,995 |
0 | 39,613,813 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2016-09-21T10:09:00.000 | 0 | 1 | 0 | N-grams - not in memory | 39,613,555 | 1.2 | python,n-gram,language-model | Sounds like you need to store the intermediate frequency counts on disk rather than in memory. Luckily most databases can do this, and Python can talk to most databases. | I have 3 million abstracts and I would like to extract 4-grams from them. I want to build a language model, so I need to find the frequencies of these 4-grams.
My problem is that I can't extract all these 4-grams in memory. How can I implement a system that can estimate all frequencies for these 4-grams? | 0 | 1 | 83 |
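A hedged sketch of the on-disk counting the answer suggests, batching counts in a Counter and flushing them to SQLite (file names are illustrative; the upsert syntax assumes SQLite >= 3.24):

```python
import sqlite3
from collections import Counter

conn = sqlite3.connect('ngrams.db')           # counts live on disk, not in RAM
conn.execute('CREATE TABLE IF NOT EXISTS grams (gram TEXT PRIMARY KEY, freq INTEGER)')

def four_grams(tokens):
    return (' '.join(tokens[i:i + 4]) for i in range(len(tokens) - 3))

def flush(counts):
    # upsert syntax assumes SQLite >= 3.24
    conn.executemany(
        'INSERT INTO grams (gram, freq) VALUES (?, ?) '
        'ON CONFLICT(gram) DO UPDATE SET freq = freq + excluded.freq',
        counts.items())
    conn.commit()

counts = Counter()
for abstract in open('abstracts.txt'):        # hypothetical one-abstract-per-line file
    counts.update(four_grams(abstract.split()))
    if len(counts) > 1000000:                 # flush to keep memory bounded
        flush(counts)
        counts.clear()
flush(counts)
```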
0 | 39,621,200 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-09-21T14:08:00.000 | 0 | 1 | 0 | Deprecated Scikit-learn module prevents joblib from loading it | 39,618,985 | 0 | python,scikit-learn,joblib | After reverting to scikit-learn 0.16.x, I just needed to install OpenBLAS for Ubuntu. It appears the problem was more a feature of the operating system than of Python. | I have a Hidden Markov Model that has been pickled with joblib using the sklearn.hmm module. Apparently, in version 0.17.x this module has been deprecated and moved to hmmlearn. I am unable to load the model and I get the following error:
ImportError: No module named 'sklearn.hmm'
I have tried to revert back to version 0.16.x but still cannot load the model. I get the following error:
ImportError: libopenblas.so.0: cannot open shared object file: No such file or directory
I do not have access to the source code to recreate the model and re-pickle it
I am running Python 3.5
Has anyone else experienced this problem and have you found a solution? Does anyone know if scikit-learn has any way to guarantee persistence since the deprecation? | 0 | 1 | 576 |
0 | 39,620,443 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2016-09-21T15:03:00.000 | 1 | 2 | 0 | sklearn Logistic Regression with n_jobs=-1 doesn't actually parallelize | 39,620,185 | 0.099668 | python,python-2.7,parallel-processing,scikit-learn,logistic-regression | the parallel process backend also depends on the solver method. if you want to utilize multi core, the multiprocessing backend is needed.
but solver like 'sag' can only use threading backend.
and also mostly, it can be blocked due to a lot of pre-processing. | I'm trying to train a huge dataset with sklearn's logistic regression.
I've set the parameter n_jobs=-1 (also have tried n_jobs = 5, 10, ...), but when I open htop, I can see that it still uses only one core.
Does it mean that logistic regression just ignores the n_jobs parameter?
How can I fix this? I really need this process to become parallelized...
P.S. I am using sklearn 0.17.1 | 0 | 1 | 2,100 |
0 | 39,640,835 | 0 | 1 | 0 | 0 | 2 | false | 2 | 2016-09-21T23:01:00.000 | 2 | 2 | 0 | Create Folder with Numpy Savetxt | 39,627,787 | 0.197375 | python,numpy | Actually, to create all intermediate directories if needed, use os.makedirs(path, exist_ok=True). If they already exist, the command will not throw an error. | I'm trying to loop over many arrays and create files stored in different folders.
Is there a way to have np.savetxt create the folders I need as well?
Thanks | 0 | 1 | 6,937 |
0 | 39,628,096 | 0 | 1 | 0 | 0 | 2 | true | 2 | 2016-09-21T23:01:00.000 | 3 | 2 | 0 | Create Folder with Numpy Savetxt | 39,627,787 | 1.2 | python,numpy | savetxt just does an open(filename, 'w'). filename can include a directory as part of the path name, but you'll have to first create the directory with something like os.mkdir. In other words, use the standard Python directory and file functions. | I'm trying to loop over many arrays and create files stored in different folders.
Is there a way to have np.savetxt create the folders I need as well?
Thanks | 0 | 1 | 6,937 |
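A small sketch combining both answers (the output layout is illustrative):

```python
import os
import numpy as np

out_path = os.path.join('results', 'run_01', 'data.txt')   # hypothetical layout
os.makedirs(os.path.dirname(out_path), exist_ok=True)      # make folders, then write
np.savetxt(out_path, np.random.rand(4, 3))
```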
0 | 39,664,586 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-09-22T15:34:00.000 | 0 | 2 | 0 | Is it possible to group tensorflow FLAGS by type and generate a string from them? | 39,643,256 | 0 | python,machine-learning,computer-vision,tensorflow | I'm guessing that you're wanting to automatically store the hyper-parameters as part of the file name in order to organize your experiments better? Unfortunately there isn't a good way to do this with TensorFlow, but you can look at some of the high-level frameworks built on top of it to see if they offer something similar. | Is it possible to group tensorflow FLAGS by type?
E.g.
Some flags are system related (e.g. # of threads) while others are model hyperparams.
Then, is it possible to use the model hyperparams FLAGS, in order to generate a string? (the string will be used to identify the model filename)
Thanks | 0 | 1 | 79 |
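One hand-rolled workaround for the FLAGS question, sketched with the TF 1.x tf.app.flags API: keep an explicit list of which flag names are hyperparameters and build the model string from it (flag names and values are made up):

```python
import tensorflow as tf  # TF 1.x tf.app.flags API assumed

flags = tf.app.flags
flags.DEFINE_integer('num_threads', 4, 'system: worker threads')
flags.DEFINE_float('lr', 0.01, 'hparam: learning rate')
flags.DEFINE_integer('batch_size', 32, 'hparam: batch size')
FLAGS = flags.FLAGS

HPARAMS = ['lr', 'batch_size']          # manual grouping: only these name the model
model_id = '_'.join('%s-%s' % (name, getattr(FLAGS, name)) for name in HPARAMS)
print(model_id)                          # e.g. lr-0.01_batch_size-32
```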
0 | 39,650,350 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2016-09-22T23:24:00.000 | 3 | 1 | 0 | Convert a numpy array into an array of signs with 0 as positive | 39,650,312 | 1.2 | python,numpy | If x is the array, you could use 2*(x >= 0) - 1.
x >= 0 will be an array of boolean values (i.e. False and True), but when you do arithmetic with it, it is effectively cast to an array of 0s and 1s.
You could also do np.sign(x) + (x == 0). (Note that np.sign(x) returns floating point values, even when x is an integer array.) | I have a large numpy array with positive data, negative data and 0s. I want to convert it to an array with the signs of the current values such that 0 is considered positive. If I use numpy.sign it returns 0 if the current value is 0 but I want something that returns 1 instead. Is there an easy way to do this? | 0 | 1 | 2,087 |
0 | 39,652,742 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-09-23T04:16:00.000 | 0 | 2 | 1 | How to know which .whl module is suitable for my system with so many? | 39,652,553 | 0 | python,python-wheel,python-install | You don't have to know. Use pip - it will select the most specific wheel available. | We have so many versions of wheels.
How could we know which version should be installed into my system?
I remember there is a certain command which could check my system environment.
Or are there any other ways?
---------------------Example Below this line -----------
scikit_learn-0.17.1-cp27-cp27m-win32.whl
scikit_learn-0.17.1-cp27-cp27m-win_amd64.whl
scikit_learn-0.17.1-cp34-cp34m-win32.whl
scikit_learn-0.17.1-cp34-cp34m-win_amd64.whl
scikit_learn-0.17.1-cp35-cp35m-win32.whl
scikit_learn-0.17.1-cp35-cp35m-win_amd64.whl
scikit_learn-0.18rc2-cp27-cp27m-win32.whl
scikit_learn-0.18rc2-cp27-cp27m-win_amd64.whl
scikit_learn-0.18rc2-cp34-cp34m-win32.whl
scikit_learn-0.18rc2-cp34-cp34m-win_amd64.whl
scikit_learn-0.18rc2-cp35-cp35m-win32.whl
scikit_learn-0.18rc2-cp35-cp35m-win_amd64.whl | 0 | 1 | 1,458 |
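If you do want to decode the wheel tags by hand, a stdlib check of the two parts that matter in these file names — the interpreter version (cpXY) and the bitness (win32 vs win_amd64):

```python
import platform
import struct

print(platform.python_version())        # e.g. 3.5.2 -> matches cp35 wheels
print(struct.calcsize('P') * 8)         # 64 -> win_amd64 wheels, 32 -> win32
```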
0 | 39,668,864 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2016-09-23T19:24:00.000 | 1 | 2 | 0 | homography and image scaling in opencv | 39,668,174 | 0.099668 | python,opencv,coordinate-transformation,homography | The way I see it, the problem is that homography applies a perspective projection which is a non linear transformation (it is linear only while homogeneous coordinates are being used) that cannot be represented as a normal transformation matrix. Multiplying such perspective projection matrix with some other transformations therefore produces undesirable results.
You can try multiplying your original matrix H element wise with:
S = [1,1,scale ; 1,1,scale ; 1/scale, 1/scale, 1]
H_full_size = S * H
where scale is for example 2, if you decreased the size of original image by 2. | I am calculating an homography between two images img1 and img2 (the images contain mostly one planar object, so the homography works well between them) using standard methods in OpenCV in python. Namely, I compute point matches between the images using sift and then call cv2.findHomography.
To make the computation faster I scale down the two images into small1 and small2 and perform the calculations on these smaller copies, so I calculate the homography matrix H, which maps small1 into small2.
However, at the end, I would like to use calculate the homography matrix to project one full-size image img1 onto the other the full-size image img2.
I thought I could simply transform the homography matrix H in the following way H_full_size = A * H * A_inverse where A is the matrix representing the scaling from img1 to small1 and A_inverse is its inverse.
However, that does not work. If I apply cv2.warpPerspective to the scaled down image small1 with H, everything goes as expected and the result (largely) overlaps with small2. If I apply cv2.warpPerspective to the full size image img1 with H_full_size the result does not map to img2.
However, if I project the point matches (detected on the scaled down images) using A (using something like projected_pts = cv2.perspectiveTransform(pts, A)) and then I calculate H_full_size from these, everything works fine.
Any idea what I could be doing wrong here? | 0 | 1 | 2,276 |
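A numpy sketch of the answer's element-wise fix; note it is algebraically identical to A · H · inv(A) with A = diag(scale, scale, 1), so H below is only a placeholder for the homography estimated on the small images:

```python
import numpy as np

scale = 2.0                                  # small images were 1/scale of full size
H = np.eye(3)                                # placeholder: homography small1 -> small2

S = np.array([[1.0,         1.0,         scale],
              [1.0,         1.0,         scale],
              [1.0 / scale, 1.0 / scale, 1.0]])
H_full = H * S                               # element-wise product from the answer

A = np.diag([scale, scale, 1.0])             # same thing as A @ H @ inv(A)
print(np.allclose(H_full, A @ H @ np.linalg.inv(A)))   # True
```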
0 | 39,671,924 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2016-09-24T01:34:00.000 | 0 | 3 | 0 | Is my understanding of Hashsets correct?(Python) | 39,671,661 | 0 | python,algorithm,data-structures | The lookup time wouldn't be O(n) because not all items need to be searched, it also depends on the number of buckets. More buckets would decrease the probability of a collision and reduce the chain length.
The number of buckets can be kept as a constant factor of the number of entries by resizing the hash table as needed. Along with a hash function that evenly distributes the values, this keeps the expected chain length bounded, giving constant time lookups.
The hash tables used by hashmaps and hashsets are the same except they store different values. A hashset will contain references to a single value, and a hashmap will contain references to a key and a value. Hashsets can be implemented by delegating to a hashmap where the keys and values are the same. | I'm teaching myself data structures through this python book and I'd appreciate if someone can correct me if I'm wrong since a hash set seems to be extremely similar to a hash map.
Implementation:
A Hashset is a list [] or array where each index points to the head of a linkedlist
So some hash(some_item) --> key, and then list[key] and then add to the head of a LinkedList. This occurs in O(1) time
When removing a value from the linkedlist, in python we replace it with a placeholder because hashsets are not allowed to have Null/None values, correct?
When the list[] gets over a certain % of load/fullness, we copy it over to another list
Regarding Time Complexity Confusion:
So one question is, why is average search/access O(1) if there can be a list of N items in the linked list at a given index?
Wouldn't the average case be that the search item is in the middle of its indexed linked list, so it should be O(n/2) -> O(n)?
Also, when removing an item, if we are replacing it with a placeholder value, isn't this considered a waste of memory if the placeholder is never used?
And finally, what is the difference between this and a HashMap other than HashMaps can have nulls? And HashMaps are key/value while Hashsets are just value? | 0 | 1 | 1,427 |
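For concreteness, a toy separate-chaining hash set matching the description in the question (plain Python lists stand in for the linked lists; real implementations differ):

```python
class ChainedHashSet:
    """Minimal separate-chaining hash set, for illustration only."""
    def __init__(self, buckets=8):
        self.table = [[] for _ in range(buckets)]
        self.size = 0

    def _bucket(self, item):
        return self.table[hash(item) % len(self.table)]

    def add(self, item):
        bucket = self._bucket(item)
        if item not in bucket:
            bucket.append(item)
            self.size += 1
            if self.size > 0.75 * len(self.table):   # resizing keeps chains short
                self._grow()

    def __contains__(self, item):
        return item in self._bucket(item)

    def _grow(self):
        items = [i for b in self.table for i in b]
        self.table = [[] for _ in range(2 * len(self.table))]
        self.size = 0
        for i in items:
            self.add(i)

s = ChainedHashSet()
for word in ['a', 'b', 'a', 'c']:
    s.add(word)
print('a' in s, 'z' in s, s.size)   # True False 3
```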
0 | 39,671,749 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2016-09-24T01:34:00.000 | 0 | 3 | 0 | Is my understanding of Hashsets correct?(Python) | 39,671,661 | 0 | python,algorithm,data-structures | For your first question - why is the average time complexity of a lookup O(1)? - this statement is in general only true if you have a good hash function. An ideal hash function is one that causes a nice spread on its elements. In particular, hash functions are usually chosen so that the probability that any two elements collide is low. Under this assumption, it's possible to formally prove that the expected number of elements to check is O(1). If you search online for "universal family of hash functions," you'll probably find some good proofs of this result.
As for using placeholders - there are several different ways to implement a hash table. The approach you're using is called "closed addressing" or "hashing with chaining," and in that approach there's little reason to use placeholders. However, other hashing strategies exist as well. One common family of approaches is called "open addressing" (the most famous of which is linear probing hashing), and in those setups placeholder elements are necessary to avoid false negative lookups. Searching online for more details on this will likely give you a good explanation about why.
As for how this differs from HashMap, the HashMap is just one possible implementation of a map abstraction backed by a hash table. Java's HashMap does support nulls, while other approaches don't. | I'm teaching myself data structures through this python book and I'd appreciate if someone can correct me if I'm wrong since a hash set seems to be extremely similar to a hash map.
Implementation:
A Hashset is a list [] or array where each index points to the head of a linkedlist
So some hash(some_item) --> key, and then list[key] and then add to the head of a LinkedList. This occurs in O(1) time
When removing a value from the linkedlist, in python we replace it with a placeholder because hashsets are not allowed to have Null/None values, correct?
When the list[] gets over a certain % of load/fullness, we copy it over to another list
Regarding Time Complexity Confusion:
So one question is, why is average search/access O(1) if there can be a list of N items in the linked list at a given index?
Wouldn't the average case be that the search item is in the middle of its indexed linked list, so it should be O(n/2) -> O(n)?
Also, when removing an item, if we are replacing it with a placeholder value, isn't this considered a waste of memory if the placeholder is never used?
And finally, what is the difference between this and a HashMap other than HashMaps can have nulls? And HashMaps are key/value while Hashsets are just value? | 0 | 1 | 1,427 |
0 | 39,672,721 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-09-24T04:07:00.000 | 0 | 1 | 0 | Organizing records to classes | 39,672,376 | 0 | python,class | I would say yes. Basically I want to:
Take the unique set of data
Filter it so that just a subset is considered (filter parameters can be time of recording for example)
Use a genetic algorithm on the filtered data to match a target on average.
Step 3 is irrelevant to the post, I just wanted to give the big picture in order to make my question more clear. | I'm planning to develop a genetic algorithm for a series of acceleration records in a search to find optimum match with a target.
At this point my data is array-like with a unique ID column, X,Y,Z component info in the second, time in the third etc...
That being said each record has several "attributes". Do you think it would be beneficial to create a (records) class considering the fact I will want to to do a semi-complicated process with it as a next step?
Thanks | 0 | 1 | 17 |
0 | 54,791,471 | 0 | 0 | 0 | 0 | 2 | false | 206 | 2016-09-25T21:12:00.000 | 1 | 9 | 0 | Ordering of batch normalization and dropout? | 39,691,902 | 0.022219 | python,neural-network,tensorflow,conv-neural-network | The correct order is: Conv > Normalization > Activation > Dropout > Pooling | The original question was in regard to TensorFlow implementations specifically. However, the answers are for implementations in general. This general answer is also the correct answer for TensorFlow.
When using batch normalization and dropout in TensorFlow (specifically using the contrib.layers) do I need to be worried about the ordering?
It seems possible that if I use dropout followed immediately by batch normalization there might be trouble. For example, if the shift in the batch normalization trains to the larger scale numbers of the training outputs, but then that same shift is applied to the smaller (due to the compensation for having more outputs) scale numbers without dropout during testing, then that shift may be off. Does the TensorFlow batch normalization layer automatically compensate for this? Or does this not happen for some reason I'm missing?
Also, are there other pitfalls to look out for in when using these two together? For example, assuming I'm using them in the correct order in regards to the above (assuming there is a correct order), could there be trouble with using both batch normalization and dropout on multiple successive layers? I don't immediately see a problem with that, but I might be missing something.
Thank you much!
UPDATE:
An experimental test seems to suggest that ordering does matter. I ran the same network twice with only the batch norm and dropout reversed. When the dropout is before the batch norm, validation loss seems to be going up as training loss is going down. They're both going down in the other case. But in my case the movements are slow, so things may change after more training and it's just a single test. A more definitive and informed answer would still be appreciated. | 0 | 1 | 130,653
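For reference, the Conv > Normalization > Activation > Dropout > Pooling ordering from the answer above looks like this as a sketch (Keras 2 layer names assumed; the filter count, dropout rate and input shape are arbitrary illustration values):

from keras.models import Sequential
from keras.layers import Conv2D, BatchNormalization, Activation, Dropout, MaxPooling2D

model = Sequential([
    Conv2D(32, (3, 3), input_shape=(28, 28, 1)),  # Conv
    BatchNormalization(),                         # Normalization
    Activation('relu'),                           # Activation
    Dropout(0.25),                                # Dropout
    MaxPooling2D(pool_size=(2, 2)),               # Pooling
])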
0 | 63,051,525 | 0 | 0 | 0 | 0 | 2 | false | 206 | 2016-09-25T21:12:00.000 | 0 | 9 | 0 | Ordering of batch normalization and dropout? | 39,691,902 | 0 | python,neural-network,tensorflow,conv-neural-network | Conv/FC - BN - Sigmoid/tanh - Dropout.
If the activation function is ReLU or similar, the order of normalization and dropout depends on your task | The original question was in regard to TensorFlow implementations specifically. However, the answers are for implementations in general. This general answer is also the correct answer for TensorFlow.
When using batch normalization and dropout in TensorFlow (specifically using the contrib.layers) do I need to be worried about the ordering?
It seems possible that if I use dropout followed immediately by batch normalization there might be trouble. For example, if the shift in the batch normalization trains to the larger scale numbers of the training outputs, but then that same shift is applied to the smaller (due to the compensation for having more outputs) scale numbers without dropout during testing, then that shift may be off. Does the TensorFlow batch normalization layer automatically compensate for this? Or does this not happen for some reason I'm missing?
Also, are there other pitfalls to look out for in when using these two together? For example, assuming I'm using them in the correct order in regards to the above (assuming there is a correct order), could there be trouble with using both batch normalization and dropout on multiple successive layers? I don't immediately see a problem with that, but I might be missing something.
Thank you much!
UPDATE:
An experimental test seems to suggest that ordering does matter. I ran the same network twice with only the batch norm and dropout reversed. When the dropout is before the batch norm, validation loss seems to be going up as training loss is going down. They're both going down in the other case. But in my case the movements are slow, so things may change after more training and it's just a single test. A more definitive and informed answer would still be appreciated. | 0 | 1 | 130,653
0 | 39,714,032 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2016-09-26T23:18:00.000 | 0 | 2 | 0 | Need some clarification on Kruskals and Union-Find | 39,713,798 | 0 | python,graph,kruskals-algorithm | Actually the running time of the algorithm is O(E log(V)).
The key to its performance lies in your point 4, more specifically, the implementation of determining, for a light edge e = (a, b), whether 'a' and 'b' belong to the same set and, if not, performing the union of their respective sets.
For more clarification on the topic I recommend the book "Introduction to Algorithms", from MIT Press, ISBN 0-262-03293-7, page 561 (for the general topic of MSTs) and page 568 (for Kruskal's algorithm).
As it states, and I quote:
"The running time of Kruskal’s algorithm for a graph G = (V, E) depends
on the implementation of the disjoint-set data structure. We shall assume the
disjoint-set-forest implementation of Section 21.3 with the union-by-rank and
path-compression heuristics, since it is the asymptotically fastest implementation
known."
A few lines later, with some simple time-complexity calculus, it proves the running time to be O(E log(V)). | Please help me fill any gaps in my knowledge(teaching myself):
So far I understand that, given a graph of N vertices and edges, we want to form an MST that will have N-1 edges
We order the edges by their weight
We create a set of subsets where each vertex is given its own subset. So if we have {A,B,C,D} as our initial set of vertices, we now have {{A}, {B}, {C}, {D}}
We also create a set A that will hold the answer
We go down the list of ordered edges. We look at its vertices, so V1 and V2. If they are in separate subsets, we can join the two subsets, and add the edge into the set A that holds our edges. If they are in the same subset, we go to the next option (because it would form a cycle)
We continue this pattern until we reach the end of the Edge's list or we reach the Number of vertices - 1 for the length of our set A.
If the above assertions are true, my following questions regard the implementation:
If we use a list [] to hold the subsets of the set that contains each vertex:
subsets = [[1], [2], [3], [4], [5], [6], [7]]
and processing each edge requires looking up two subsets,
so for the edge (6, 7)
the result would be
my_path = [(6, 7)]  # holds all chosen edges
subsets = [[1], [2], [3], [4], [5], [6, 7]]
wouldn't finding the right subset inside subsets take too long for the whole algorithm to stay O(n log(n))?
Is there a better approach, or am I doing this correctly? | 0 | 1 | 329
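Addressing the efficiency worry directly: with a disjoint-set forest you never search a list of subsets; find() follows parent pointers and, with path compression and union by rank, is effectively O(1) amortized, so sorting the edges dominates and the total stays O(E log V). A sketch (vertices assumed labeled 0..N-1):

def kruskal(num_vertices, edges):          # edges: iterable of (weight, u, v)
    parent = list(range(num_vertices))
    rank = [0] * num_vertices

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression (halving)
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra == rb:
            return False                   # same subset -> edge makes a cycle
        if rank[ra] < rank[rb]:
            ra, rb = rb, ra
        parent[rb] = ra                    # union by rank
        if rank[ra] == rank[rb]:
            rank[ra] += 1
        return True

    mst = []
    for w, u, v in sorted(edges):          # O(E log E) = O(E log V)
        if union(u, v):
            mst.append((u, v, w))
            if len(mst) == num_vertices - 1:
                break
    return mst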
0 | 39,716,115 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2016-09-27T03:22:00.000 | 0 | 2 | 0 | How to put an overlay on a video | 39,715,472 | 0 | python,opencv,video,overlay | What you need are 2 Mat objects- one to stream the camera (say Mat_cam), and the other to hold the overlay (Mat_overlay).
When you draw on your main window, save the line and Rect objects on Mat_overlay, and make sure that it is not affected by the streaming video
When the next frame is received, Mat_cam will be updated and it'll have the next video frame, but Mat_overlay will be the same, since it will not be cleared/refreshed with every 'for' loop iteration. Adding Mat_overlay and Mat_cam using Weighted addition will give you the desired result. | I am currently working in Python and using OpenCV's videocapture and cv.imshow to show a video. I am trying to put an overlay on this video so I can draw on it using cv.line, cv.rectangle, etc. Each time the frame changes it clears the image that was drawn so I am hoping if I was to put an overlay of some sort on top of this that it would allow me to draw multiple images on the video without clearing. Any advice? Thanks ahead! | 0 | 1 | 1,160 |
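A rough Python/OpenCV sketch of that two-image idea, assuming a camera on index 0 (the blend weights are illustrative): the overlay is drawn once and blended onto every fresh frame, so drawings survive frame changes.

import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ret, frame = cap.read()                    # assumes a camera is present
overlay = np.zeros_like(frame)             # the persistent "Mat_overlay"

cv2.line(overlay, (10, 10), (200, 200), (0, 255, 0), 2)
cv2.rectangle(overlay, (50, 50), (150, 120), (0, 0, 255), 2)

while True:
    ret, frame = cap.read()                # fresh "Mat_cam" every iteration
    if not ret:
        break
    blended = cv2.addWeighted(frame, 1.0, overlay, 0.7, 0)
    cv2.imshow('video', blended)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()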
0 | 39,721,387 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2016-09-27T03:22:00.000 | 0 | 2 | 0 | How to put an overlay on a video | 39,715,472 | 0 | python,opencv,video,overlay | I am not sure that I have understood your question properly.What I got from your question is that you want the overlay to remain on your frame, streamed from Videocapture, for that one simple solution is to declare your "Mat_cam"(camera streaming variable) outside the loop that is used to capture frames so that "Mat_cam" variable will not be freed every time you loop through it. | I am currently working in Python and using OpenCV's videocapture and cv.imshow to show a video. I am trying to put an overlay on this video so I can draw on it using cv.line, cv.rectangle, etc. Each time the frame changes it clears the image that was drawn so I am hoping if I was to put an overlay of some sort on top of this that it would allow me to draw multiple images on the video without clearing. Any advice? Thanks ahead! | 0 | 1 | 1,160 |
0 | 39,743,987 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-09-28T05:27:00.000 | 0 | 2 | 0 | How do you get a probability of all classes to predict without building a classifier for each single class? | 39,738,703 | 0 | python,machine-learning,scikit-learn | Random forests do indeed give P(Y/x) for multiple classes. In most cases
P(Y/x) can be taken as:
P(Y/x)= the number of trees which vote for the class/Total Number of trees.
However, you can play around with this. For example, in one case the highest class has 260 votes, the 2nd class 230 votes and the other 5 classes 10 votes each; in another case class 1 has 260 votes and the other classes have 40 votes each. You might feel more confident in your prediction in the 2nd case compared to the 1st, so you can come up with a confidence metric according to your use case. | Given a classification problem, sometimes we do not just predict a class, but need to return the probability that it is a class.
i.e. P(y=0|x), P(y=1|x), P(y=2|x), ..., P(y=C|x)
This should be done without building a new classifier for each of y=0, y=1, y=2, ..., y=C, since training C classifiers (let's say C=100) can be quite slow.
What can be done to do this? What classifiers naturally can give all probabilities easily (one I know is using neural network with 100 out nodes)? But if I use traditional random forests, I can't do that, right? I use the Python Scikit-Learn library. | 0 | 1 | 1,945 |
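In scikit-learn, which the question mentions, a single multiclass model already exposes all class probabilities through predict_proba; for a random forest this is essentially the vote fraction described above. A minimal sketch on synthetic data:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_classes=3, n_informative=5)
clf = RandomForestClassifier(n_estimators=100).fit(X, y)

proba = clf.predict_proba(X[:5])   # shape (5, 3): one column per class
# proba[i, c] approximates P(y = clf.classes_[c] | x_i)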
0 | 39,747,938 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-09-28T12:42:00.000 | 5 | 2 | 0 | The efficient way of Array transformation by using numpy | 39,747,900 | 0.462117 | python,numpy | Just numpy.transpose(U) or U.T. | How to change the ARRAY U(Nz,Ny, Nx) to U(Nx,Ny, Nz) by using numpy? thanks | 0 | 1 | 79 |
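Concretely, for a 3-D array numpy.transpose reverses all axes by default, which is exactly the (Nz, Ny, Nx) -> (Nx, Ny, Nz) change asked about (the sizes below are illustrative):

import numpy as np

U = np.zeros((4, 5, 6))        # (Nz, Ny, Nx)
V = np.transpose(U)            # same as U.T or U.transpose(2, 1, 0)
print(V.shape)                 # (6, 5, 4), i.e. (Nx, Ny, Nz)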
0 | 47,186,550 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2016-09-28T16:13:00.000 | 1 | 2 | 0 | MPlot3D Image Manipulation in IPython | 39,752,700 | 0.099668 | python,ipython,spyder,mplot3d | I initially faced same issue
Everything seemed to be alright, but I couldn't rotate the picture.
After toggling between graphical and automatic in
Tools > preferences > IPython console > Graphics > Graphics backend > Backend: ....
I could rotate the image | I am developing a Python program that involves displaying X-Y-Z Trajectories in 3D space. I'm using the Spyder IDE that naturally comes with Anaconda, and I've been running my scripts in IPython Consoles.
So I've been able to generate the 3D plot successfully and use pyplot.show() to display it on the IPython Console. However, when displayed in IPython, only one angle of the graph is shown. And I've read that MPlot3D can be used to create interactive plots. Am I correct in believing that I should be able to rotate and zoom the 3D graph? Or does IPython and/or the Spyder IDE not support this feature? Should I work on rotating the plot image within the script? How do I interact with this plot? | 0 | 1 | 472 |
0 | 46,331,190 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2016-09-28T16:13:00.000 | 4 | 2 | 0 | MPlot3D Image Manipulation in IPython | 39,752,700 | 0.379949 | python,ipython,spyder,mplot3d | Yes, you can rotate and interact with Mplot3d plots in Spyder, you just have to change the setting so that plots appear in a separate window, rather than in the IPython console. Just change the inline setting to automatic:
Tools > preferences > IPython console > Graphics > Graphics backend > Backend: Automatic
Then click Apply, close Spyder, and restart. | I am developing a Python program that involves displaying X-Y-Z Trajectories in 3D space. I'm using the Spyder IDE that naturally comes with Anaconda, and I've been running my scripts in IPython Consoles.
So I've been able to generate the 3D plot successfully and use pyplot.show() to display it on the IPython Console. However, when displayed in IPython, only one angle of the graph is shown. And I've read that MPlot3D can be used to create interactive plots. Am I correct in believing that I should be able to rotate and zoom the 3D graph? Or does IPython and/or the Spyder IDE not support this feature? Should I work on rotating the plot image within the script? How do I interact with this plot? | 0 | 1 | 472 |
0 | 39,778,818 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2016-09-29T00:44:00.000 | 0 | 2 | 0 | manually building installing python packages in linux so they are recognized | 39,759,680 | 0 | python | I think I figured it out. Apparently SLES 11.4 does not include the development headers for numpy 1.8 in the default install from their SDK.
And of course they don't offer matplotlib along with a bunch of common python packages.
The python packages per the SLES SDK are the system default are located under/usr/lib64/python2.6/site-packages/ and it is under here I see numpy version 1.8. So using the YAST software manager if you choose various python packages this is where they are located.
To this point without having the PYTHONPATH environment variable set I can launch python, type import numpy, and for the most part use it. But if I try to build matplotlib 0.99.1 it responds that it cannot find the header files for numpy version 1.8, so it knows numpy 1.8 is installed but the development package needs to be installed.
Assuming by development headers they mean .h files,
If I search under /usr/lib64/python2.6/site-packages I have no .h files for anything!
I just downloaded the source for numpy-1.8.0.tar.gz and easily did a python setup.py build followed by python setup.py install, and noticed it installed under /usr/local/lib64/python2.6/site-packages/
Without the PYTHONPATH environment variable set, if i try to build matplotlib I still get the error about header files not found.
But in my bash shell, as root, after I do export PYTHONPATH=/usr/local/lib64/python2.6/site-packages I can successfully do the build and install of matplotlib 0.99.1, which also installs under /usr/local/lib64/python2.6/site-packages
Notes:
I also just did a successful build & install of numpy-1.11, and that got thrown in under /usr/local/lib64/python2.6/site-packages. However, when I then try to build matplotlib 0.99.1 with PYTHONPATH set, it reports outright that numpy is not installed and that version 1.1 or greater is needed. So here it seems this older version of matplotlib needs a certain version range of numpy, and the latest numpy 1.11 is not compatible.
And the only other environment variable i have which is set by the system is PYTHONSTARTUP which points to the file /etc/pythonstart. | My system is SLES 11.4 having python 2.6.9.
I know little about python and have not found where to download rpm's that give me needed python packages.
I acquired numpy 1.4 and 1.11 and I believe did a successful python setup.py build followed by python setup.py install on numpy.
Going from memory I think this installed under /usr/local/lib64/python2.6/...
Next I tried building & installing matplotlib (which requires numpy) and when I do python setup.py build it politely responds with cannot find numpy. So my questions are
Do I need to set some kind of Python-related environment variable, something along the lines of LD_LIBRARY_PATH or PATH?
As I get more involved with using python installing packages that I have to build from source I need to understand where things currently are per the default install of python, where new things should go, and where the core settings for python are to know how and where to recognize new packages. | 0 | 1 | 44 |
0 | 39,813,792 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-09-29T20:27:00.000 | 0 | 2 | 0 | how to export data to unix system location using python | 39,779,412 | 1.2 | python,unix | I have found the solution. It might be because I am using Spyder from Anaconda. As long as I use "/" instead of "\", Python can recognize the location. | I am trying to write the file to my company's project folder, which is a Unix system, and the location is /department/projects/data/. So I used the following code
df.to_csv("/department/projects/data/Test.txt", sep='\t', header = 0)
The error message shows it cannot find the locations. how to specify the file location in Unix using python? | 0 | 1 | 43 |
0 | 39,817,575 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2016-10-02T13:33:00.000 | 1 | 1 | 0 | Python sorted plot | 39,817,545 | 1.2 | python,sorting,plot,bar-chart,seaborn | Do you save the changes by pd.sort_values? If not, probably you have to add the inplace keyword:
mydf.sort_values(['myValueField'], ascending=False, inplace=True) | I want to use seaborn to perform a sns.barplot where the values are ordered e.g. in ascending order.
In case the order parameter of seaborn is set the plot seems to duplicate the labels for all non-NaN labels.
Trying to pre-sort the values like mydf.sort_values(['myValueField'], ascending=False) does not change the result as seaborn does not seem to interpret it. | 0 | 1 | 144 |
0 | 39,833,244 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2016-10-03T13:23:00.000 | 4 | 2 | 0 | Convert Pandas Dataframe Date Index and Column to Numpy Array | 39,832,735 | 1.2 | python,arrays,pandas,numpy,dataframe | If A is dataframe and col the column:
import pandas as pd
output = pd.np.column_stack((A.index.values, A.col.values)) | How can I convert 1 column and the index of a Pandas dataframe with several columns to a Numpy array with the dates lining up with the correct column value from the dataframe?
There are a few issues here with data type and its driving my nuts trying to get both the index and the column out and into the one array!!
Help would be much appreciated! | 0 | 1 | 4,044 |
0 | 39,856,855 | 0 | 0 | 0 | 0 | 1 | false | 12 | 2016-10-04T05:30:00.000 | 13 | 4 | 1 | "'CXXABI_1.3.8' not found" in tensorflow-gpu - install from source | 39,844,772 | 1 | python,tensorflow | I solved this problem by copying the libstdc++.so.6 file which contains version CXXABI_1.3.8.
Try run the following search command first:
$ strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep CXXABI_1.3.8
If it returns CXXABI_1.3.8. Then you can do the copying.
$ cp /usr/lib/x86_64-linux-gnu/libstdc++.so.6 /home/jj/anaconda2/bin/../lib/libstdc++.so.6 | I have re-installed Anaconda2.
And I got the following error when 'python -c 'import tensorflow''
ImportError: /home/jj/anaconda2/bin/../lib/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by /home/jj/anaconda2/lib/python2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so)
environment
CUDA8.0
cuDNN 5.1
gcc 5.4.1
tensorflow r0.10
Anaconda2 : 4.2
the following is in bashrc file
export PATH="/home/jj/anaconda2/bin:$PATH"
export CUDA_HOME=/usr/local/cuda-8.0
export PATH=/usr/local/cuda-8.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}} | 0 | 1 | 24,320 |
0 | 39,858,096 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-10-04T15:29:00.000 | 1 | 1 | 0 | How to visualize DNNs dependent of the output class in TensorFlow? | 39,856,291 | 0.197375 | python-2.7,tensorflow,deep-learning | The "basic" version of this is straightforward. You use the same graph as for training the network, but instead of optimizing w.r.t. the parameters of the network, you optimize w.r.t the input (which has to be a variable with the shape of your input image). Your optimization target is the negative (because you want to maximize, but TF optimizers minimize) logit of your target class. You want to run it with a couple of different initial values for the image.
There are also a few related techniques; if you search for DeepDream and adversarial examples you should find a lot of literature. | In TensorFlow it is pretty straight forward to visualize filters and activation layers given a single input.
But I'm more interested in the opposite way: feeding a class (as one-hot vector) to the output layer and see something like the optimal input image for that specific class.
Is there a way to do so or to run the graph reversed?
Background: I'm using Googles Inception V3 with 15 classes and I've trained the network already with a large amount of data up to a good precision. Now I'm interested in understanding why and how the model distinguishes the different classes. | 0 | 1 | 95 |
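A bare-bones sketch of that "optimize w.r.t. the input" idea in TensorFlow 1.x style; build_network is a hypothetical stand-in for whatever graph produces your logits, and in practice you would restore your trained weights before running the steps rather than initializing everything:

import tensorflow as tf

target_class = 7                                          # illustrative
image = tf.Variable(tf.random_normal([1, 299, 299, 3]))   # the input is a Variable
logits = build_network(image)                             # hypothetical: your trained model
loss = -logits[0, target_class]                           # minimize the negative logit

step = tf.train.AdamOptimizer(0.05).minimize(loss, var_list=[image])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # restore trained weights after this
    for _ in range(500):
        sess.run(step)                            # only the image gets updated
    result = sess.run(image)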
0 | 39,892,009 | 0 | 0 | 0 | 1 | 1 | false | 2 | 2016-10-05T07:45:00.000 | 0 | 1 | 0 | DBF Table Join without using Arcpy? | 39,868,163 | 0 | python-2.7,dbf,arcpy,pyshp | Not exactly a programmatical solution for my problem but a practical one:
My shapefile is always static, only the attributes of the features will change. So I copy my original shapefile (only the basic files with endings .shp, .shx, .prj) to my output folder and rename it to the name I want.
Then I create my CSV-File with all calculations and convert it to DBF and save it with the name of my new shapefile to the output folder too. ArcGIS will now load the shapefile along with my own DBF file and I don't even need to do any tablejoin at all!
Now my program runs through in only 50 seconds!
I am still interested in more solutions for the table join problem, maybe I will encounter that problem again in the future where the shapefile is NOT always static. I did not really understand Nan's solution, I am still at "advanced beginner" level in Python :)
Cheers | I have created a rather large CSV file (63000 rows and around 40 columns) and I want to join it with an ESRI Shapefile.
I have used ArcPy but the whole process takes 30! minutes. If I make the join with the original (small) CSV file, join it with the Shapefile and then make my calculations with ArcPy and continuously add new fields and calculate the stuff, it takes 20 minutes. I am looking for a faster solution and found there are other Python modules such as PySHP or DBFPy, but I have not found any way for joining tables, hoping that could go faster.
My goal is already to get away from ArcPy as much as I can and preferable only use Python, so preferably no PostgreSQL and alikes either.
Does anybody have a solution for that? Thanks a lot! | 0 | 1 | 377 |
0 | 39,883,453 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2016-10-05T19:56:00.000 | 2 | 1 | 0 | How would you store a pyramidal image representation in Python? | 39,882,632 | 1.2 | python,numpy,image-processing | You could just use a list of numpy arrays. Assuming a scale factor of two, for the i,jth pixel at scale n:
The indices of its "parent" pixel at scale n-1 will be (i//2, j//2)
Its "child" pixels at scale n+1 can be indexed by (slice(2*i, 2*(i+1)), slice(2*j, 2*(j+1))) | Suppose I have N images which are a multiresolution representation of a single image (the Nth image being the coarsest one). If my finest scale is a 16x16 image, the next scale is a 8x8 image and so on.
How should I store such data to fastly be able to access, at a given scale and for a given pixel, its unique parent in the next coarser scale and its children at each finest scale? | 0 | 1 | 46 |
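A sketch of that list-of-arrays layout with the parent/child index rules above (scale factor 2; sizes are illustrative, with pyramid[0] the coarsest level so the formulas apply directly):

import numpy as np

pyramid = [np.random.rand(2 ** (n + 1), 2 ** (n + 1)) for n in range(4)]
# pyramid[0] is 2x2 (coarsest) ... pyramid[3] is 16x16 (finest)

def parent(n, i, j):
    # pixel (i, j) at scale n has its unique parent at the coarser scale n-1
    return pyramid[n - 1][i // 2, j // 2]

def children(n, i, j):
    # the 2x2 block of child pixels at the finer scale n+1
    return pyramid[n + 1][2 * i:2 * (i + 1), 2 * j:2 * (j + 1)]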
0 | 40,723,826 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-10-07T02:27:00.000 | 1 | 1 | 0 | Is it possible to use label spreading scikit algorithm on edgelist? | 39,908,430 | 1.2 | python,scikit-learn | To use Label Spreading you should follow these steps:
1. create a vector of labels (y), where all the unlabeled instances are set to -1.
2. fit the model using your feature data (X) and y.
3. create predict_entropies vector using stats.distributions.entropy(yourmodelname.label_distributions_.T)
4. create an uncertainty index by sorting the predict_entropies vector.
5. send the samples of lowest certainty for label query.
I hope this framework will help you get started. | I have a network edgelist and I want to use the Label Spreading/Label Propagation algorithm from scikit-learn. I have a set of nodes that are labeled and want to spread the labels on the unlabeled portion of the network. I can generate the adjacency matrix or confusion matrix if needed.
Can someone point me in the right direction using scikit? The documentation seems so limited in what I can do with it.
Thank you in advance. | 0 | 1 | 620 |
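A sketch of those five steps with scikit-learn's LabelSpreading, on synthetic feature data rather than the asker's edge list (with a graph you would derive features or an adjacency-based kernel first); -1 marks the unlabeled samples:

import numpy as np
from scipy import stats
from sklearn.datasets import make_classification
from sklearn.semi_supervised import LabelSpreading

X, y_true = make_classification(n_samples=200)
y = np.copy(y_true)
y[50:] = -1                                 # step 1: unlabeled -> -1

model = LabelSpreading(kernel='knn', n_neighbors=7)
model.fit(X, y)                             # step 2

entropies = stats.distributions.entropy(model.label_distributions_.T)  # step 3
uncertainty_index = np.argsort(entropies)[::-1]                        # step 4
query = uncertainty_index[:10]              # step 5: least certain samples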
0 | 39,926,150 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-10-07T20:29:00.000 | 0 | 3 | 0 | NLP Code Mixed : Code Switching | 39,925,325 | 0 | python-3.x,machine-learning,nlp,stanford-nlp,opennlp | I think easy solution is remove navbar-inverse class and place this css.
.navbar {
background-color: blue;
} | I am engaged in a competition where we have to build a system using a given data set. I am trying to learn how such work proceeds in linguistics research.
The main goal of this task is to identify the sentence-level sentiment polarity of the code-mixed dataset of Indian language pairs. Each of the sentences is annotated with language information as well as polarity at the sentence level.
Anyone interested in participating with me?
If anyone can help me with it, that would be great.
Please reach out to me as soon as possible. | 0 | 1 | 165
0 | 50,913,862 | 0 | 0 | 0 | 0 | 2 | false | 29 | 2016-10-08T09:46:00.000 | 4 | 4 | 0 | Cannot import keras after installation | 39,930,952 | 0.197375 | python,ubuntu,tensorflow,anaconda,keras | I had pip referring by default to pip3, which made me download the libs for python3. On the contrary I launched the shell as python (which opened python 2) and the library wasn't installed there obviously.
Once I matched the names (pip3 -> python3, pip -> python 2), it all worked. | I'm trying to setup keras deep learning library for Python3.5 on Ubuntu 16.04 LTS and use Tensorflow as a backend. I have Python2.7 and Python3.5 installed. I have installed Anaconda and with help of it Tensorflow, numpy, scipy, pyyaml. Afterwards I have installed keras with command
sudo python setup.py install
Although I can see /usr/local/lib/python3.5/dist-packages/Keras-1.1.0-py3.5.egg directory, I cannot use keras library. When I try to import it in python it says
ImportError: No module named 'keras'
I have tried to install keras usingpip3, but got the same result.
What am I doing wrong? Any Ideas? | 0 | 1 | 129,794 |
0 | 55,900,347 | 0 | 0 | 0 | 0 | 2 | false | 29 | 2016-10-08T09:46:00.000 | 0 | 4 | 0 | Cannot import keras after installation | 39,930,952 | 0 | python,ubuntu,tensorflow,anaconda,keras | Firstly checked the list of installed Python packages by:
pip list | grep -i keras
If keras is shown, upgrade it with:
pip install keras --upgrade --log ./pip-keras.log
Now check the log; if any pending dependencies are present, they will affect your installation, so remove those dependencies and then install it again. | I'm trying to setup keras deep learning library for Python3.5 on Ubuntu 16.04 LTS and use Tensorflow as a backend. I have Python2.7 and Python3.5 installed. I have installed Anaconda and with help of it Tensorflow, numpy, scipy, pyyaml. Afterwards I have installed keras with command
sudo python setup.py install
Although I can see /usr/local/lib/python3.5/dist-packages/Keras-1.1.0-py3.5.egg directory, I cannot use keras library. When I try to import it in python it says
ImportError: No module named 'keras'
I have tried to install keras usingpip3, but got the same result.
What am I doing wrong? Any Ideas? | 0 | 1 | 129,794 |
0 | 41,820,410 | 0 | 0 | 1 | 0 | 1 | false | 1 | 2016-10-08T20:19:00.000 | 0 | 1 | 0 | Feature detection for embedded platform OpenCV | 39,936,967 | 0 | python,algorithm,opencv,raspberry-pi,computer-vision | I have done similar project in my Masters Degree.
I had used Raspberry Pi 3 because it is faster than Pi 2 and has more resources for image processing.
I had used KNN algorithm in OpenCV for Number Detection. It was fast and had good efficiency.
The main advantage of the KNN algorithm is that it is very lightweight. | I'm trying to do object recognition in an embedded environment, and for this I'm using Raspberry Pi (Specifically version 2).
I'm using OpenCV Library and as of now I'm using feature detection algorithms contained in OpenCV.
So far I've tried different approaches:
I tried different keypoint extraction and description algorithms: SIFT, SURF, ORB. SIFT and SURF are too heavy and ORB is not so good.
Then I tried using different algorithms for keypoint extraction and then description. The first approach was to use the FAST algorithm to extract keypoints and then ORB or SURF for description; the results were not good and not rotation invariant, so then I tried mixing the others.
I now am to the point where I get the best results time permitting using ORB for keypoint extraction and SURF for description. But it is still really slow.
So do you have any suggestions or new ideas to obtain better results? Am I missing something?
As additional information, I'm using Python 3.5 with OpenCV 3.1 | 0 | 1 | 406 |
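A hedged sketch of the ORB-keypoints-plus-SURF-descriptors combination the asker settled on (SURF lives in the opencv_contrib xfeatures2d module, so availability depends on the build; the file name is a placeholder):

import cv2

img = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
keypoints = orb.detect(img, None)                       # fast keypoint extraction

surf = cv2.xfeatures2d.SURF_create()
keypoints, descriptors = surf.compute(img, keypoints)   # describe the ORB keypoints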
0 | 39,973,094 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2016-10-10T11:53:00.000 | 0 | 2 | 0 | store multiple images efficiently in Python data structure | 39,957,657 | 0 | python,image,algorithm,opencv,image-processing | I used several lists and list.append() for storing the image.
For finding the white regions in the black & white images I used cv2.findNonZero(). | I have several images (their number might increase over time) and their corresponding annotated images - let's call them image masks.
I want to convert the original images to Grayscale and the annotated masks to Binary images (B&W) and then save the gray scale values in a Pandas DataFrame/CSV file based on the B&W pixel coordinates.
So that means a lot of switching back and forth the original image and the binary images.
I don't want to read every time the images from file because it might be very time consuming.
Any suggestion which data structure should be used for storing several types of images in Python? | 0 | 1 | 783 |
0 | 39,973,408 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2016-10-10T11:53:00.000 | 1 | 2 | 0 | store multiple images efficiently in Python data structure | 39,957,657 | 1.2 | python,image,algorithm,opencv,image-processing | PIL and Pillow are only marginally useful for this type of work.
The basic algorithm used for "finding and counting" objects like you are trying to do goes something like this:
1. Conversion to grayscale
2. Thresholding (either automatically via the Otsu method, or similar, or by manually setting the threshold values)
3. Contour detection
4. Masking and object counting based on your contours.
A Mat of integers (Mat1i) would be the data structure that fits this scenario. | I have several images (their number might increase over time) and their corresponding annotated images - let's call them image masks.
I want to convert the original images to Grayscale and the annotated masks to Binary images (B&W) and then save the gray scale values in a Pandas DataFrame/CSV file based on the B&W pixel coordinates.
So that means a lot of switching back and forth the original image and the binary images.
I don't want to read every time the images from file because it might be very time consuming.
Any suggestion which data structure should be used for storing several types of images in Python? | 0 | 1 | 783 |
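The grayscale -> threshold -> contours pipeline from the second answer, sketched with OpenCV (the file name is a placeholder; Otsu's method picks the threshold automatically, and the [-2] index keeps the contour list across OpenCV versions):

import cv2

img = cv2.imread('annotated_mask.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)                    # 1. grayscale
_, binary = cv2.threshold(gray, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # 2. threshold
contours = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)[-2]        # 3. contours
print('objects found:', len(contours))                          # 4. counting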
0 | 39,971,118 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2016-10-11T04:29:00.000 | 0 | 3 | 0 | Python: Plot a sparse matrix | 39,970,515 | 0 | python,matplotlib | It seems to me heatmap is the best candidate for this type of plot. imshow() will return u a colored matrix with color scale legend.
I don't get your stretched-ellipses problem; shouldn't it be a colored square for each data point?
You can try a log color scale if it is sparse. Also plot the 12 classes separately to analyze whether there are any inter-class differences. | I have a sparse matrix X, shape (6000, 300). I'd like something like a scatterplot which has a dot where the X(i, j) != 0, and blank space otherwise. I don't know how many nonzero entries there are in each row of X. X[0] has 15 nonzero entries, X[1] has 3, etc. The maximum number of nonzero entries in a row is 16.
Attempts:
plt.imshow(X) results in a tall, skinny graph because of the shape of X. Using plt.imshow(X, aspect='auto') will stretch out the graph horizontally, but the dots get stretched out to become ellipses, and the plot becomes hard to read.
ax.spy suffers from the same problem.
bokeh seems promising, but really taxes my jupyter kernel.
Bonus:
The nonzero entries of X are positive real numbers. If there was some way to reflect their magnitude, that would be great as well (e.g. colour intensity, transparency, or across a colour bar).
Every 500 rows of X belong to the same class. That's 12 classes * 500 observations (rows) per class = 6000 rows. E.g. X[:500] are from class A, X[500:1000] are from class B, etc. Would be nice to colour-code the dots by class. For the moment I'll settle for manually including horizontal lines every 500 rows to delineate between classes. | 0 | 1 | 2,877 |
0 | 40,127,976 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2016-10-11T04:29:00.000 | 0 | 3 | 0 | Python: Plot a sparse matrix | 39,970,515 | 0 | python,matplotlib | plt.matshow also turned out to be a feasible solution. I could also plot a heatmap with colorbars and all that. | I have a sparse matrix X, shape (6000, 300). I'd like something like a scatterplot which has a dot where the X(i, j) != 0, and blank space otherwise. I don't know how many nonzero entries there are in each row of X. X[0] has 15 nonzero entries, X[1] has 3, etc. The maximum number of nonzero entries in a row is 16.
Attempts:
plt.imshow(X) results in a tall, skinny graph because of the shape of X. Using plt.imshow(X, aspect='auto') will stretch out the graph horizontally, but the dots get stretched out to become ellipses, and the plot becomes hard to read.
ax.spy suffers from the same problem.
bokeh seems promising, but really taxes my jupyter kernel.
Bonus:
The nonzero entries of X are positive real numbers. If there was some way to reflect their magnitude, that would be great as well (e.g. colour intensity, transparency, or across a colour bar).
Every 500 rows of X belong to the same class. That's 12 classes * 500 observations (rows) per class = 6000 rows. E.g. X[:500] are from class A, X[500:1000] are from class B, etc. Would be nice to colour-code the dots by class. For the moment I'll settle for manually including horizontal lines every 500 rows to delineate between classes. | 0 | 1 | 2,877 |
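For the bonus points in the question, one sketch that gives dots, magnitude and per-class colour at once (the matrix here is synthetic; real data would come from your own X in COO format):

import numpy as np
import matplotlib.pyplot as plt
from scipy import sparse

X = sparse.random(6000, 300, density=0.03, format='coo')

plt.figure(figsize=(10, 4))
plt.scatter(X.col, X.row, s=2 + 20 * X.data,          # size reflects magnitude
            c=X.row // 500, cmap='tab20', alpha=0.7)  # colour per 500-row class
for k in range(1, 12):
    plt.axhline(k * 500, color='grey', lw=0.5)        # class boundaries
plt.gca().invert_yaxis()                              # row 0 at the top
plt.colorbar(label='class index')
plt.show()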
0 | 39,972,738 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2016-10-11T07:25:00.000 | 0 | 1 | 0 | How to update pandas when python is installed as part of ArcGIS10.4, or another solution | 39,972,261 | 0 | python,pandas,upgrade,arcmap | I reinstalled python again directly from python.org and then installed pandas which seems to work.
I guess this might stop the ArcMap version of python working properly but since I'm not using python with ArcMap at the moment it's not a big problem. | I recently installed ArcGIS10.4 and now when I run python 2.7 programs using Idle (for purposes unrelated to ArcGIS) it uses the version of python attached to ArcGIS.
One of the programs I wrote needs an updated version of the pandas module. When I try to update the pandas module in this verion of python (by opening command prompt as an administrator, moving to C:\Python27\ArcGIS10.4\Scripts and using the command pip install --upgrade pandas) the files download ok but there is an access error message when PIP tries to upgrade. I have tried restarting the computer in case something was open. The error message is quite long and I can't cut and paste from command prompt but it finishes with
" Permission denied: 'C:\Python27\ArcGIS10.4\Lib\site-packages\numpy\core\multiarray.pyd' "
I've tried the command to reinstall pandas completely which also gave an error message. I've tried installing miniconda in the hope that I could get a second version of python working and then use that version instead of the version attached to ArcMap. However I don't know how to direct Idle to choose the newly installed version.
So overall I don't mind having 2 versions of python if someone could tell me how to choose which one runs or if there's some way to update the ArcMap version that would be even better. I don't really want to uninstall ArcMap at the moment.
Any help is appreciated! Thanks! | 0 | 1 | 212 |
0 | 44,608,692 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-10-12T15:14:00.000 | 1 | 2 | 0 | fit_transform with the training data and transform with the testing | 40,002,232 | 0.099668 | python,scikit-learn | If you use fit only on the training and transform on the test data, you won't get the correct result.
When using fit_transform on the training data, it means that the machine is learning from the parameters in the feature space and also transforming (scaling) the training data. On the other hand, you should only use transform on the test data to scale it according to the parameters learned from the training data. | As the title says, I am using fit_transform with the CountVectorizer on the training data .. and then I am using tranform only with the testing data ... will this gives me the same as using fit only on the training and tranform only on the testing data ? | 0 | 1 | 2,610 |
0 | 40,025,421 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-10-12T17:01:00.000 | 0 | 1 | 0 | Putting a Regression Line When Using Pandas scatter_matrix | 40,004,334 | 0 | python-2.7,pandas | I think this is a misleading question/thought process.
If you think of data in strictly 2 dimension then a regression line on a scatter plot makes sense. But let's say you have 5 dimensions of data you are plotting in your scatter matrix. In this case the regression for each pair of dimensions is not an accurate representation of the global regression.
I would be wary presenting that to anyone as I can easily see where it could create confusion.
That being said if you don't care about a regression across all of your dimensions then you could write your own function to do this. A quick walk through of steps may be:
1. Identify number of dimensions N
2. Create figure
3. Double for loop over N; the first will walk down rows, the second will walk across columns
4. At each point add subplot, calculate regression (if not kde/hist position), plot scatter cloud and regression line or kde/hist | I'm using scatter_matrix for correlation visualization and calculating correlation values using corr(). Is it possible to have the scatter_matrix visualization draw the regression line in the scatter plots? | 0 | 1 | 674 |
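A sketch of that double-loop recipe using numpy.polyfit for the per-pair regression line (the DataFrame is synthetic; histograms sit on the diagonal, as in scatter_matrix):

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame(np.random.randn(100, 3), columns=list('abc'))
n = len(df.columns)
fig, axes = plt.subplots(n, n, figsize=(8, 8))

for i, yi in enumerate(df.columns):          # walk down rows
    for j, xj in enumerate(df.columns):      # walk across columns
        ax = axes[i, j]
        if i == j:
            ax.hist(df[xj], bins=15)         # diagonal: histogram
        else:
            ax.scatter(df[xj], df[yi], s=8)
            m, b = np.polyfit(df[xj], df[yi], 1)
            xs = np.array([df[xj].min(), df[xj].max()])
            ax.plot(xs, m * xs + b, 'r-')    # pairwise regression line
plt.show()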
0 | 40,007,828 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2016-10-12T20:27:00.000 | 0 | 1 | 0 | Digital Image Processing via Python | 40,007,759 | 1.2 | python,image-processing | You can look into these libraries: dlib, PIL (Pillow), OpenCV and scikit-image. These are image processing libraries for Python.
Hope it helps. | I am starting a new project with a friend of mine, we want to design a system that would alert the driver if the car is diverting from its original path and its dangerous.
So in a nutshell, we have to design a real-time algorithm that would take pictures from the camera and process them. All of this will be done in Python.
I was wondering if anyone has any advice for us, or could maybe point out some things that we have to consider.
Cheers ! | 0 | 1 | 124 |
0 | 57,003,384 | 0 | 0 | 0 | 0 | 2 | false | 30 | 2016-10-13T03:36:00.000 | 1 | 7 | 0 | NLTK vs Stanford NLP | 40,011,896 | 0.028564 | python,nlp,nltk,stanford-nlp | NLTK can be used for the learning phase to and perform natural language process from scratch and basic level.
Standford NLP gives you high-level flexibility to done task very fast and easiest way.
If you want fast and production use, can go for Standford NLP. | I have recently started to use NLTK toolkit for creating few solutions using Python.
I hear a lot of community activity regarding using Stanford NLP.
Can anyone tell me the difference between NLTK and Stanford NLP? Are they two different libraries? I know that NLTK has an interface to Stanford NLP but can anyone throw some light on few basic differences or even more in detail.
Can Stanford NLP be used using Python? | 0 | 1 | 15,755 |
0 | 50,968,392 | 0 | 0 | 0 | 0 | 2 | false | 30 | 2016-10-13T03:36:00.000 | 1 | 7 | 0 | NLTK vs Stanford NLP | 40,011,896 | 0.028564 | python,nlp,nltk,stanford-nlp | I would add to this answer that if you are looking to parse date/time events StanfordCoreNLP contains SuTime which is the best datetime parser available. The support for arbitrary texts like 'Next Monday afternoon' is not present in any other package. | I have recently started to use NLTK toolkit for creating few solutions using Python.
I hear a lot of community activity regarding using Stanford NLP.
Can anyone tell me the difference between NLTK and Stanford NLP? Are they two different libraries? I know that NLTK has an interface to Stanford NLP but can anyone throw some light on few basic differences or even more in detail.
Can Stanford NLP be used using Python? | 0 | 1 | 15,755 |
0 | 40,021,035 | 0 | 0 | 0 | 1 | 1 | true | 0 | 2016-10-13T12:17:00.000 | 2 | 1 | 0 | Add a library in Spark in Bluemix & connect MongoDB , Spark together | 40,020,767 | 1.2 | python,apache-spark,ibm-cloud,ibm-cloud-plugin | In a Python notebook:
!pip install <package>
and then
import <package> | 1) I have Spark on Bluemix platform, how do I add a library there ?
I can see the preloaded libraries but cant add a library that I want.
Any command line argument that will install a library?
pip install --package is not working there
2) I have Spark and Mongo DB running, but I am not able to connect both of them.
con ='mongodb://admin:ITCW....ssl=true'
ssl1 ="LS0tLS ....."
client = MongoClient(con,ssl=True)
db = client.mongo11
collection = db.mongo11
ff=db.sammy.find()
Error I am getting is :
SSL handshake failed: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590) | 0 | 1 | 106 |
0 | 40,046,686 | 0 | 0 | 1 | 0 | 1 | true | 1 | 2016-10-14T03:12:00.000 | 2 | 1 | 0 | Using compression library to estimate information complexity of an english sentence? | 40,034,334 | 1.2 | python,scala,compression,information-theory | All this is going to do is tell you whether the words in the sentence, and maybe phrases in the sentence, are in the dictionary you supplied. I don't see how that's complexity. More like grade level. And there are better tools for that. Anyway, I'll answer your question.
Yes, you can preset the zlib compressor with a dictionary. It is simply up to 32K bytes of text. You don't need to run zlib on the dictionary or "freeze a state" -- you simply start compressing the new data, but permit it to look back at the dictionary for matching strings. However 32K isn't very much. That's as far back as zlib's deflate format will look, and you can't load much of the English language in 32K bytes.
LZMA2 also allows for a preset dictionary, but it can be much larger, up to 4 GB. There is a Python binding for the LZMA2 library, but you may need to extend it to provide the dictionary preset function. | I'm trying to write an algorithm that can work out the 'unexpectedness' or 'information complexity' of a sentence. More specifically I'm trying to sort a set of sentences so the least complex come first.
My thought was that I could use a compression library, like zlib?, 'pre train' it on a large corpus of text in the same language (call this the 'Corpus') and then append to that corpus of text the different sentences.
That is I could define the complexity measure for a sentence to be how many more bytes it requires to compress the whole corpus with that sentence appended, versus the whole corpus with a different sentence appended. (The fewer extra bytes, the more predictable or 'expected' that sentence is, and therefore the lower the complexity). Does that make sense?
The problem is with trying to find the right library that will let me do this, preferably from python.
I could do this by literally appending sentences to a large corpus and asking a compression library to compress the whole shebang, but if possible, I'd much rather halt the processing of the compression library at the end of the corpus, take a snapshot of the relevant compression 'state', and then, with all that 'state' available try to compress the final sentence. (I would then roll back to the snapshot of the state at the end of the corpus and try a different sentence).
Can anyone help me with a compression library that might be suitable for this need? (Something that lets me 'freeze' its state after 'pre training'.)
I'd prefer to use a library I can call from Python, or Scala. Even better if it is pure python (or pure scala) | 0 | 1 | 101 |
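A hedged sketch of the preset-dictionary idea from the answer, using zlib's zdict argument (Python 3.3+); since deflate only looks back 32K, only the tail of the corpus acts as the dictionary, and the corpus file name is a placeholder:

import zlib

corpus = open('corpus.txt', 'rb').read()
zdict = corpus[-32768:]                        # the last 32K of the corpus

def complexity(sentence):
    comp = zlib.compressobj(9, zlib.DEFLATED, 15, 9,
                            zlib.Z_DEFAULT_STRATEGY, zdict)
    data = comp.compress(sentence.encode('utf-8')) + comp.flush()
    return len(data)                           # fewer bytes = more "expected"

sentences = ['the cat sat on the mat', 'zyxgrawlix futzes quom']
sentences.sort(key=complexity)                 # least complex first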
0 | 40,049,831 | 0 | 0 | 0 | 0 | 1 | true | 7 | 2016-10-14T17:36:00.000 | 3 | 1 | 0 | Importing PMML models into Python (Scikit-learn) | 40,048,987 | 1.2 | python,r,scikit-learn,pmml | You can't connect different specialized representations (such as R and Scikit-Learn native data structures) over a generalized representation (such as PMML). You may have better luck trying to translate R data structures to Scikit-Learn data structures directly.
XGBoost is really an exception to the above rule, because its R and Scikit-Learn implementations are just thin wrappers around the native XGBoost library. Inside a trained R XGBoost object there's a blob raw, which is the model in its native XGBoost representation. Save it to a file, and load in Python using the xgb.Booster.load_model(fname) method.
If you know that you need to deploy the XGBoost model in Scikit-Learn, then why do you train it in R? | There seem to be a few options for exporting PMML models out of scikit-learn, such as sklearn2pmml, but a lot less information going in the other direction. My case is an XGboost model previously built in R, and saved to PMML using r2pmml, that I would like to use in Python. Scikit normally uses pickle to save/load models, but is it also possible to import models into scikit-learn using PMML? | 0 | 1 | 2,878
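A hedged sketch of loading the raw booster saved from R into Python, per the answer above (the model file name is a placeholder and X stands in for your feature matrix):

import numpy as np
import xgboost as xgb

booster = xgb.Booster()
booster.load_model('xgb_from_r.model')       # the raw blob saved out of R
X = np.random.rand(5, 10)                    # placeholder features
preds = booster.predict(xgb.DMatrix(X))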
0 | 40,065,428 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-10-15T23:49:00.000 | 0 | 1 | 0 | Tensorflow: Converting a tensor [B, T, S] to list of B tensors shaped [T, S] | 40,065,396 | 1.2 | python,tensorflow,deep-learning,lstm | It sounds like you want tf.unpack() | Tensorflow: Converting a tensor [B, T, S] to list of B tensors shaped [T, S]. Where B, T, S are whole positive numbers ...
How can I convert this? I can't do eval because no session is running at the time I want to do this. | 0 | 1 | 76 |
0 | 40,087,542 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-10-17T11:36:00.000 | 2 | 1 | 0 | Which scipy function should I use to interpolate from a rectangular grid to regularly spaced rectangular grid in 2D? | 40,085,367 | 1.2 | python,numpy,scipy,grid | RectBivariateSpline
Imagine your grid as a canyon, where the high values are peaks and the low values are valleys. The bivariate spline is going to try to fit a thin sheet over that canyon to interpolate. This will still work on irregularly spaced input, as long as the x and y array you supply are also irregularly spaced, and everything still lies on a rectangular grid.
RegularGridInterpolator
Same canyon, but now we'll linearly interpolate the surrounding gridpoints to interpolate. We'll assume the input data is regularly spaced to save some computation. It sounds like this won't work for you.
Now What?
Both of these map 2D to 1D. It sounds like you have an input space with irregularly spaced but still rectangularly gridded sample points, and an output space with regularly spaced sample points. You might just try LinearNDInterpolator; since you're in 2D it won't be that much more expensive.
If you're trying to interpolate a mapping between two 2D things, you'll want to do two interpolations, one that interpolates (x1, y1) -> x2 and one that interpolates (x1, y1) -> y2.
Vstacking the output of those will give you an array of points in your output space.
I don't know of a more efficient method in scipy for taking advantage of the expected structure of the interpolation output, given an irregular grid input. | I pretty new to python, and I'm looking for the most efficient pythonic way to interpolate from a grid to another one.
The original grid is a structured grid (the terms regular or rectangular grid are also used), and the spacing is not uniform.
The new grid is a regularly spaced grid. Both grids are 2D. For now it's ok using a simple interpolation method like linear interpolation, pheraps in the future I could try something different like bicubic.
I'm aware that there are methods to interpolate from an irregular grid to a regular one, however given the regular structure of my original grid, more efficient methods should be available.
After searching in the scipy docs, I have found 2 methods that seem to fit my case: scipy.interpolate.RegularGridInterpolator and scipy.interpolate.RectBivariateSpline.
I don't understand the difference between the two functions, which one should I use?
Is the difference purely in the interpolation methods? Also, while the non-uniform spacing of the original grid is explicitly mentioned in RegularGridInterpolator, RectBivariateSpline doc is silent about it.
Thanks,
Andrea | 0 | 1 | 401 |
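A sketch of going from a non-uniform rectangular grid onto a regular one with RectBivariateSpline; kx=ky=1 gives linear behaviour, and the grid values here are synthetic:

import numpy as np
from scipy.interpolate import RectBivariateSpline

x = np.array([0.0, 0.5, 1.5, 3.0])            # non-uniform but sorted
y = np.array([0.0, 1.0, 2.5, 4.0, 5.0])
z = np.random.rand(len(x), len(y))            # values on the x-by-y grid

spline = RectBivariateSpline(x, y, z, kx=1, ky=1)

x_new = np.linspace(0.0, 3.0, 50)             # regular output grid
y_new = np.linspace(0.0, 5.0, 60)
z_new = spline(x_new, y_new)                  # shape (50, 60)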
0 | 40,091,765 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2016-10-17T16:01:00.000 | 2 | 2 | 0 | Check failed: error == cudaSuccess (2 vs. 0) out of memory | 40,090,892 | 0.197375 | python,gpu,caffe | This happens when you run out of memory in the GPU. Are you sure you stopped the first script properly? Check the running processes on your system (ps -A in ubuntu) and see if the python script is still running. Kill it if it is. You should also check the memory being used in your GPU (nvidia-smi). | I am trying to run a neural network with pycaffe on gpu.
This works when I call the script for the first time.
When I run the same script for the second time, CUDA throws the error in the title.
Batch size is 1, image size at this moment is 243x322, the gpu has 8gb RAM.
I guess I am missing a command that resets the memory?
Thank you very much!
EDIT:
Maybe I should clarify a few things: I am running caffe on windows.
When I call the script with python script.py, the process terminates and the GPU memory gets freed, so this works.
With ipython, which I use for debugging, the GPU memory indeed does not get freed (after one pass, 6 of the 8 GB are in use; thanks for the nvidia-smi suggestion!)
So, what I am looking for is a command I can call from within python, along the lines of:
run network
process image output
free gpu memory | 0 | 1 | 2,965 |
0 | 40,099,554 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-10-18T03:26:00.000 | 1 | 1 | 0 | how to install previous version of xarray | 40,099,001 | 0.197375 | python-xarray | Use "conda install xarray==0.8.0" if you're using anaconda, or "pip install xarray==0.8.0" otherwise. | I am reading other's pickle file that may have data type based on xarray. Now I cannot read in the pickle file with the error "No module named core.dataset".
I guess this maybe a xarray issue. My collaborator asked me to change my version to his version and try again.
My version is 0.8.2, and his version 0.8.0. So how can I change back to his version?
Thanks! | 0 | 1 | 999 |
0 | 40,109,662 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-10-18T13:16:00.000 | 0 | 1 | 0 | How do I link python 3.4.3 to opencv? | 40,109,379 | 0 | python,python-2.7,python-3.x,opencv,numpy | You can try:
Download the OpenCV module
Copy the ./opencv/build/python/3.4/x64/cv2.pyd file
To the python installation directory path: ./Python34/Lib/site-packages.
I hope this helps | So I have OpenCV on my computer all sorted out, I can use it in C/C++ and the Python 2.7.* that came with my OS.
My computer runs on Linux Deepin and whilst I usually use OpenCV on C++, I need to use Python 3.4.3 for some OpenCV tasks.
Problem is, I've installed python 3.4.3 now but whenever I try to run an OpenCV program on it, it doesn't recognize numpy or cv2, the modules I need for OpenCV. I've already built and installed OpenCV and I'd rather not do it again
Is there some way I can link my new Python 3.4.3 environment to numpy and the opencv I already built so I can use OpenCV on Python 3.4.3?
Thanks in advance | 0 | 1 | 1,213 |
0 | 40,116,907 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-10-18T13:56:00.000 | 0 | 2 | 0 | Python, scipy, curve_fit, bounds: How can I constraint param by two intervals? | 40,110,260 | 1.2 | python,scipy,curve-fitting | No, least_squares (hence curve_fit) only supports box constraints. | I`m using scipy.optimize.curve_fit for fitting a sigmoidal curve to data. I need to bound one of parameters from [-3, 0.5] and [0.5, 3.0]
I tried fitting the curve without bounds; then, if the parameter came out lower than zero, I fit once more with bounds [-3, 0.5], and otherwise with [0.5, 3.0].
Is it possible, to bound function curve_fit with two intervals? | 0 | 1 | 569 |
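Given that only box constraints are supported, one way to make the asker's two-fit workaround systematic is to fit once per interval and keep the fit with the smaller residual; a sketch assuming a one-parameter sigmoid for brevity (each fit needs a starting point inside its box):

import numpy as np
from scipy.optimize import curve_fit

def f(x, k):                                   # one-parameter sigmoid
    return 1.0 / (1.0 + np.exp(-k * x))

xdata = np.linspace(-5, 5, 50)
ydata = f(xdata, 1.2) + 0.05 * np.random.randn(50)

best = None
for lo, hi in [(-3.0, 0.5), (0.5, 3.0)]:       # the two allowed intervals
    try:
        popt, _ = curve_fit(f, xdata, ydata,
                            p0=[(lo + hi) / 2.0], bounds=(lo, hi))
    except RuntimeError:                       # no convergence in this box
        continue
    resid = np.sum((f(xdata, *popt) - ydata) ** 2)
    if best is None or resid < best[0]:
        best = (resid, popt)
print(best[1])                                 # parameter from the better box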
0 | 51,822,131 | 0 | 0 | 0 | 0 | 1 | false | 20 | 2016-10-18T19:11:00.000 | 17 | 4 | 0 | xgboost sklearn wrapper value 0for Parameter num_class should be greater equal to 1 | 40,116,215 | 1 | python,scikit-learn,xgboost | In my case, the same error was thrown during a regular fit call. The root of the issue was that the objective was manually set to multi:softmax, but there were only 2 classes. Changing it to binary:logistic solved the problem. | I am trying to use the XGBClassifier wrapper provided by sklearn for a multiclass problem. My classes are [0, 1, 2], the objective that I use is multi:softmax. When I am trying to fit the classifier I get
xgboost.core.XGBoostError: value 0for Parameter num_class should be greater equal to 1
If I try to set the num_class parameter the I get the error
got an unexpected keyword argument 'num_class'
Sklearn is setting this parameter automatically so I am not supposed to pass that argument. But why do I get the first error? | 0 | 1 | 14,175 |
0 | 45,497,131 | 0 | 0 | 0 | 0 | 1 | false | 10 | 2016-10-19T10:28:00.000 | 1 | 4 | 1 | How to install openCV 2.4.13 for Python 2.7 on Ubuntu 16.04? | 40,128,751 | 0.049958 | python-2.7,opencv,ubuntu | sudo apt-get install build-essential cmake git pkg-config
sudo apt-get install libjpeg8-dev libtiff4-dev libjasper-dev libpng12-dev
sudo apt-get install libgtk2.0-dev
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
sudo apt-get install libatlas-base-dev gfortran
sudo apt-get install python2.7-dev
sudo pip install numpy
sudo apt-get install python-opencv
Then you can have a try:
$ python
Python 2.7.6 (default, Oct 26 2016, 20:30:19)
[GCC 4.8.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv
>>> import cv2 | I have tried a lot of online posts to install opencv but they are not working for Ubuntu 16.04. May anyone please give me the steps to install openCV 2.4.13 on it? | 0 | 1 | 29,525 |
0 | 41,708,567 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2016-10-19T23:10:00.000 | 5 | 1 | 0 | Add text to scatter point using python gmplot | 40,142,959 | 0.761594 | python | Just looking for the answer to this myself. gmplot was updated to June 2016 to include a hovertext functionality for the marker method, but unfortunately this isn't available for the scatter method. The enthusiastic user will find that the scatter method simply calls the marker method over and over, and could modify the scatter method itself to accept a title or range of titles.
If like myself you are using an older version, make sure to run
pip install --upgrade gmplot
and to place a marker with hovertext (mouse hovering over pin without clicking)
gmap=gmplot.GoogleMapPlotter("Seattle")
gmap.marker(47.61028142523736, -122.34147349538826, title="A street corner in Seattle")
st="testmap.html"
gmap.draw(st) | I plotted some points on google maps using gmplot's scatter method (python). I want to add some text to the points so when someone clicks on those points they can see the text.
I am unable to find any documentation or example that shows how to do this.
Any pointers are appreciated. | 0 | 1 | 8,147 |
0 | 40,163,636 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2016-10-20T00:36:00.000 | 4 | 1 | 0 | fuzzy match between 2 columns (Python) | 40,143,675 | 0.664037 | python,python-3.x,pandas,fuzzywuzzy | Thanks everyone for your inputs. I have solved my problem! The link that "agg3l" provided was helpful. The "TypeError" I saw was because either the "url_entrance" or "company_name" has some floating types in certain rows. I converted both columns to string using the following scripts, re-ran the fuzz.ratio script and got it to work!
df_combo['url_entrance']=df_combo['url_entrance'].astype(str)
df_combo['company_name']=df_combo['company_name'].astype(str) | I have a pandas dataframe called "df_combo" which contains columns "worker_id", "url_entrance", "company_name". I am trying to produce an output column that would tell me if the URLs in "url_entrance" column contains any word in "company_name" column. Even a close match like fuzzywuzzy would work.
For example, if the URL is "www.grandhotelseattle.com" and the "company_name" is "Hotel Prestige Seattle", then the fuzz ratio might be somewhere 70-80.
I have tried the following script:
>>>fuzz.ratio(df_combo['url_entrance'],df_combo['company_name'])
but it returns only 1 number, which is the overall fuzz ratio for the whole column. I would like to have a fuzz ratio for every row and store those ratios in a new column. | 0 | 1 | 3,169
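Putting the asker's astype(str) fix together with a row-wise apply yields one ratio per row; df_combo and its column names come from the question, while the match_score column name is an assumption:
from fuzzywuzzy import fuzz

df_combo['url_entrance'] = df_combo['url_entrance'].astype(str)
df_combo['company_name'] = df_combo['company_name'].astype(str)
df_combo['match_score'] = df_combo.apply(
    lambda row: fuzz.ratio(row['url_entrance'], row['company_name']),
    axis=1)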
0 | 40,695,844 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2016-10-21T00:03:00.000 | 2 | 1 | 0 | spyder cant load tensorflow | 40,166,386 | 0.379949 | python-2.7,ubuntu,tensorflow,spyder | Enter the environment
source activate tensorflow
install spyder
conda install spyder
Run spyder
spyder
| I built and installed TensorFlow on my Ubuntu 16.04 machine with GPU support. From the command line I can easily activate the tensorflow environment, but when I try to run the code through Spyder it shows this: "No module named tensorflow.examples.tutorials.mnist"
How can I run my Python code from Spyder with TensorFlow? | 0 | 1 | 771
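Once Spyder has been launched from inside the activated environment, a quick sanity check that it sees the right interpreter — the mnist import path is the one from the question and applies to TensorFlow 1.x:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data  # the import that failed before
print(tf.__version__)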
0 | 40,188,461 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2016-10-22T04:16:00.000 | 3 | 1 | 0 | Difference between numpy.zeros(n) and numpy.zeros(n,1) | 40,188,251 | 0.53705 | python,numpy | The first argument indicates the shape of the array. A scalar argument implies a "flat" array (vector), whereas a tuple argument is interpreted as the dimensions of a tensor. So if the argument is the tuple (m,n), numpy.zeros will return a matrix with m rows and n columns. In your case, it is returning a matrix with n rows and 1 column.
Although your two cases are equivalent in some sense, linear algebra routines that require a vector as input will likely expect something like the first form. | What is the difference between
numpy.zeros(n)
and
numpy.zeros(n,1)?
The output for the first statement is
[0 0 ..... n times]
whereas the second one is
([0]
[0]
.... n rows) | 0 | 1 | 1,550 |
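A concrete comparison of the two forms. Note that the multi-dimensional shape must be passed as a tuple — numpy.zeros((n, 1)) — since in numpy.zeros(n, 1) the second positional argument would be interpreted as a dtype and raise an error:
import numpy as np

a = np.zeros(3)       # shape (3,):   [0. 0. 0.]
b = np.zeros((3, 1))  # shape (3, 1): a column of three zeros
print(a.shape, b.shape)  # (3,) (3, 1)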
0 | 65,651,852 | 0 | 0 | 0 | 0 | 1 | false | 11 | 2016-10-22T14:38:00.000 | 0 | 8 | 0 | How to check if a CSV has a header using Python? | 40,193,388 | 0 | python,python-2.7,csv | I think the best way to check this is to simply read the first line from the file and match it against your expected string, rather than pulling in any library. | I have a CSV file and I want to check if the first row has only strings in it (i.e. a header). I'm trying to avoid using any extras like pandas etc. I'm thinking I'll use an if statement like if row[0] is a string print this is a CSV but I don't really know how to do that :-S any suggestions? | 0 | 1 | 24,553
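A stdlib-only sketch of the "read the first line and inspect it" approach the answer suggests; here "string" is interpreted as "not parseable as a number", which is an assumption about the data:
import csv

def has_header(path):
    # Heuristic: a first row whose fields all fail numeric parsing is a header.
    with open(path) as f:
        first_row = next(csv.reader(f))
    def is_text(field):
        try:
            float(field)
            return False
        except ValueError:
            return True
    return all(is_text(field) for field in first_row)
The standard library also ships csv.Sniffer().has_header(sample), which applies a similar heuristic to a sample of the file.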
0 | 40,236,052 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-10-24T18:22:00.000 | 0 | 1 | 0 | Python machine learning sklearn.linear_model vs custom code | 40,224,973 | 0 | python,machine-learning | I recommend you use the functions provided by sklearn or another ML library (I like TensorFlow) as much as possible. That's because it's very difficult to match the performance of these libraries: their numerical routines run as optimized low-level code, while the code most users write executes entirely inside the Python interpreter.
Moreover, Python's built-in data structures are not very efficient for numerical work; a plain list, for example, stores references to boxed objects rather than a contiguous block of numbers. The ML libraries do their computations through NumPy to get better performance. | I am new to machine learning and Python. I am trying to understand when to use the functions in sklearn.linear_model (LinearRegression and LogisticRegression) and when to implement my own code for the same. Any suggestions or references will be highly appreciated.
regards
Souvik | 0 | 1 | 72 |
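A small, hedged illustration of that performance gap — a pure-Python dot product versus NumPy's compiled one (absolute timings vary by machine; the vectorized call is typically orders of magnitude faster):
import timeit
import numpy as np

x = np.random.rand(100000)
y = np.random.rand(100000)

loop_time = timeit.timeit(lambda: sum(a * b for a, b in zip(x, y)), number=10)
vec_time = timeit.timeit(lambda: np.dot(x, y), number=10)
print(loop_time, vec_time)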
0 | 48,947,334 | 0 | 0 | 0 | 0 | 1 | false | 7 | 2016-10-25T03:25:00.000 | 6 | 1 | 0 | CountVectorizer and Out-Of-Vocabulary (OOV) tokens? | 40,230,865 | 1 | python,scikit-learn | There is no built-in way in scikit-learn to do this; you need to write some additional code. However, you could use the vocabulary_ attribute of CountVectorizer to achieve it.
Cache the current vocabulary
Call fit_transform
Compute the diff with the new vocabulary and the cached vocabulary | Right now I'm using CountVectorizer to extract features. However, I need to count words not seen during fitting.
During transforming, the default behavior of CountVectorizer is to ignore words that were not observed during fitting. But I need to keep a count of how many times this happens!
How can I do this?
Thanks! | 0 | 1 | 1,618 |
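A sketch of the three-step recipe from the answer on toy documents (the example texts are invented):
from sklearn.feature_extraction.text import CountVectorizer

cv = CountVectorizer()
cv.fit(["the cat sat", "the dog ran"])
cached_vocab = set(cv.vocabulary_)        # step 1: cache the vocabulary

cv.fit_transform(["the cat flew home"])   # step 2: refit on the new text
oov = set(cv.vocabulary_) - cached_vocab  # step 3: diff -> {'flew', 'home'}
print(len(oov), oov)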
0 | 40,231,181 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-10-25T03:56:00.000 | 0 | 5 | 0 | add a dummy dimension for a multi-dimensional array | 40,231,128 | 0 | python,numpy,scipy | Sure, no problem. Use 'reshape'. Assuming A1 is a numpy array
A1 = A1.reshape([1,255,255,3])
This will reshape your matrix.
If A1 isn't a numpy array then use
A1 = numpy.array(A1).reshape([1,255,255,3]) | There is an nd array A with shape [100,255,255,3], which corresponds to 100 images of 255*255 pixels. I would like to iterate over this multi-dimensional array so that each iteration yields one image. This is what I do: A1 = A[i,:,:,:]. The resulting A1 has shape [255,255,3]. However, I would like to enforce that it has the shape [1,255,255,3]. How can I do it? | 0 | 1 | 1,297
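Besides reshape, a couple of equivalent idioms keep or restore the leading dimension (shapes follow the question):
import numpy as np

A = np.zeros((100, 255, 255, 3))
A1 = A[0:1]                        # slicing keeps the axis: (1, 255, 255, 3)
A2 = A[0][np.newaxis, ...]         # index, then add the axis back
A3 = np.expand_dims(A[0], axis=0)  # same result, spelled out
print(A1.shape, A2.shape, A3.shape)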
0 | 40,236,282 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-10-25T08:58:00.000 | 2 | 1 | 0 | Python: Blur specific region in an image | 40,235,643 | 1.2 | python,opencv,image-processing,scikit-image | What was the result in the first case? It sounds like a good approach. What did you expect, and what did you get?
You can also try something like this:
Either create a copy of a whole image or just slightly bigger ROI (to include samples that will be used for blurring)
Apply blur on the created image
Apply masks on two images (from original image take everything except ROI and from blurred image take ROI)
Add two masked images
If you want a smoother transition, make sure the masks aren't binary. You can soften them with another blur (blur one mask and create the second one as mask2 = 1 - mask1; that way you can be sure the weights always add up to one). | I'm trying to blur around specific regions in a 2D image (the data is an array of size m x n).
The points are specified by an m x n mask. cv2 and scikit are available.
I tried:
Simply applying blur filters to the masked image. But that isn't working.
Extracting the points to blur by np.nan the rest, blurring and reassembling. Also not working, because the blur obviously needs the surrounding points to work correctly.
Any ideas?
Cheers | 0 | 1 | 1,655 |
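A hedged sketch of the masked-blend recipe above using OpenCV — the file names, kernel size, and the assumption that the mask arrives as an 8-bit image are all illustrative:
import cv2
import numpy as np

img = cv2.imread('input.png').astype(np.float32)
mask = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
mask = cv2.GaussianBlur(mask, (21, 21), 0)  # soften so the weights aren't binary
mask = mask[..., np.newaxis]                # broadcast over the color channels

blurred = cv2.GaussianBlur(img, (21, 21), 0)
out = mask * blurred + (1.0 - mask) * img   # mask2 = 1 - mask1
cv2.imwrite('output.png', out.astype(np.uint8))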
0 | 40,268,763 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-10-26T14:17:00.000 | 1 | 1 | 0 | Generate random sparse matrix filled with values greater than 1 python | 40,264,741 | 0.197375 | python-3.x,scipy,sparse-matrix | sparse.rand calls sparse.random, and random adds an optional data_rvs argument.
I haven't used data_rvs. It can probably emulate the dense randint, but the definition is more complicated.
Another option is to generate the random floats and then convert them with a bit of math to the desired integers. You have to be a little careful since some operations will produce a Sparse Efficiency warning. You want operations that will change the nonzero values without touching the zeros.
(I suspect the data_rvs parameter was added in a newer SciPy version, but I don't see an indication in the docs). | The method available in python scipy, sps.rand(), generates a sparse matrix of random values in the range (0,1). How can we generate discrete random values greater than 1, like 2, 3, etc.? Any method in scipy or numpy? | 0 | 1 | 258
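A sketch of the data_rvs route — the callable must return n samples; the integer range and matrix size here are arbitrary choices:
import numpy as np
from scipy import sparse

rvs = lambda n: np.random.randint(2, 10, size=n)  # integers in [2, 10)
m = sparse.random(5, 5, density=0.3, data_rvs=rvs)
print(m.toarray())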
0 | 44,125,243 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2016-10-26T15:30:00.000 | 0 | 1 | 0 | Java heap size error when running Spark from Python | 40,266,372 | 1.2 | java,python,apache-spark,pyspark | This is because the maximum heap size you're setting (128M) is smaller than the initial heap size the JVM is asked to allocate. Check the _JAVA_OPTIONS parameter that you're passing or setting in the configuration file. Also note that changes to spark.driver.memory won't have any effect, because the Worker actually lies within the driver JVM process that is started when spark-shell launches, and the default memory used for that is 512M.
This creates a conflict as spark tries to initialize a heap size equal to 512M, but the maximum allowed limit set by you is only 128M.
You can set the minimum heap size through the --driver-java-options command line option or in your default properties file | I'm trying to run a Python script with the pyspark library.
I create a SparkConf() object using the following commands:
conf = SparkConf().setAppName('test').setMaster(<spark-URL>)
When I run the script, that line runs into an error:
Picked up _JAVA_OPTIONS: -Xmx128m
Picked up _JAVA_OPTIONS: -Xmx128m
Error occurred during initialization of VM Initial heap size set to a larger value than the maximum heap size.
I tried to fix the problem by setting the configuration property spark.driver.memory to various values, but nothing changed.
What is the problem and how can I fix it?
Thanks | 0 | 1 | 1,509 |
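Hedged examples of the two fixes the answer points at — clearing the environment override, or passing explicit heap options to the driver (the 512m figure mirrors the default driver memory mentioned above):
# Option 1: remove the environment override that caps the heap at 128M
unset _JAVA_OPTIONS

# Option 2: pass explicit heap settings to the driver JVM
pyspark --driver-java-options "-Xms512m -Xmx512m"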
0 | 47,528,448 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2016-10-27T10:38:00.000 | 0 | 1 | 0 | what's the equivalent of the python zip function in dataflow? | 40,282,480 | 1.2 | python,google-cloud-dataflow | As jkff pointed out in the above comment, the code is indeed correct and the procedure is the recommended way of programming a tensorflow algorithm. The DoFn applied to each element was the bottleneck. | I'm using the python apache_beam version of dataflow. I have about 300 files with an array of 4 million entries each. The whole thing is about 5Gb, stored on a gs bucket.
I can easily produce a PCollection of the arrays {x_1, ... x_n} by reading each file, but the operation I now need to perform is like the python zip function: I want a PCollection ranging from 0 to n-1, where each element i contains the array of all the x_i across the files. I tried yielding (i, element) for each element and then running GroupByKey, but that is much too slow and inefficient (it won't run at all locally because of memory constraints, and it took 24 hours on the cloud, whereas I'm sure I can at least load all of the dataset if I want).
How do I restructure the pipeline to do this cleanly? | 0 | 1 | 283 |