GUI and Desktop Applications | A_Id | Networking and APIs | Python Basics and Environment | Other | Database and SQL | Available Count | is_accepted | Q_Score | CreationDate | Users Score | AnswerCount | System Administration and DevOps | Title | Q_Id | Score | Tags | Answer | Question | Web Development | Data Science and Machine Learning | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 56,175,288 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-05-16T18:21:00.000 | 0 | 1 | 1 | How to scale Kafka stream processing dynamically? | 56,174,516 | 0 | java,python,apache-kafka,kafka-consumer-api | If you have N partitions, then you can have up to N consumers within the same consumer group each of which reading from a single partition. When you have less consumers than partitions, then some of the consumers will read from more than one partition. Also, if you have more consumers than partitions then some of the consumers will be inactive and will receive no messages at all.
Therefore, if you want to kick off 20 consumers, you need to increase the number of partitions of the topic to at least 20; otherwise, 10 of your consumers will be inactive.
With regards to the duplicates that you've mentioned, if all of your consumers belong to the same group, then each message will be consumed only once.
To summarise,
Increase the number of partitions of your topic to 20.
Create the mechanism that will be creating and killing consumers based on peak/off-peak hours and make sure that when you kick off a consumer, it belongs to the existing consumer group so that the messages are consumed only once. | I have a fixed number of partitions of a topic. Producers produce data at varying rates in different hours of the day.
I want to add consumers dynamically based on hours of the day for the processing so that I can process records as fast as I can.
For example I have 10 partitions of a topic. I want to deploy 5 consumers for non peak hours and 20 consumers for peak hours.
My problem is that when I will have 20 consumers, each consumer will be receiving duplicate records, which I want to avoid. I want to process unique records only to speed-up records processing.
Is there any mechanism to do this? | 0 | 1 | 210 |
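A minimal sketch of the consumer side of that setup, assuming the kafka-python client and hypothetical topic, group and broker names: every worker started with the same group_id joins the same consumer group, so partitions are divided between the workers and no record is delivered twice within the group.

```python
from kafka import KafkaConsumer

# Hypothetical names; each peak-hour worker you start uses the same group_id,
# so Kafka assigns each one a subset of the partitions.
consumer = KafkaConsumer(
    "my-topic",
    group_id="my-processing-group",
    bootstrap_servers=["localhost:9092"],
    auto_offset_reset="earliest",
)

for record in consumer:
    print(record.value)  # placeholder for your own processing logic
```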
0 | 57,013,231 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-05-16T19:49:00.000 | 0 | 2 | 0 | Rearrange powerpoint slides automatically using python-pptx | 56,175,678 | 0 | python,powerpoint,python-pptx | Would it be feasible - if all we're doing is reordering - to read the XML and rewrite it with the slide elements permuted?
Further - for the "delete" case - is it feasible to simply delete a slide element in the XML? (I realise this could leave dangling objects such as images in the file.)
The process of extracting the XML and rewriting it to a copy of the file is probably not too onerous. | We typically use powerpoint to facilitate our experiments. We use "sections" in powerpoint to keep groups of slides together for each experimental task. Moving the sections to counterbalance the task order of the experiment has been a lot of work!
I thought we might be able to predefine a counterbalance order (using a string of numbers representing the order of the sections) in a CSV which could be read from python. Then using python-pptx move the sections and save the file for each order. The problem I am having is understanding how to read sections from the python-pptx. If anyone has a better solution than python please let me know.
Thanks for your help! | 0 | 1 | 2,247 |
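A sketch of the XML-rewriting idea described in the answer. It relies on python-pptx internals (the private _sldIdLst element list, which may change between versions), and the file name and ordering list are assumptions for illustration:

```python
from pptx import Presentation

prs = Presentation("experiment.pptx")   # hypothetical file name
new_order = [2, 0, 1, 3]                # e.g. read from your counterbalance-order CSV

# python-pptx has no public slide-reordering API, so this permutes the
# underlying <p:sldId> elements directly (private API, use at your own risk).
sld_id_lst = prs.slides._sldIdLst
slide_ids = list(sld_id_lst)
for sld_id in slide_ids:
    sld_id_lst.remove(sld_id)
for idx in new_order:
    sld_id_lst.append(slide_ids[idx])

prs.save("experiment_reordered.pptx")
```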
0 | 56,178,539 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-05-16T20:10:00.000 | 1 | 1 | 0 | Is there a way to assign a maximum number of clusters using DBSCAN? | 56,175,928 | 1.2 | python,cluster-analysis,dbscan | Not with DBSCAN itself. Connected components are connected components, there is no ambiguity at this point.
You could write your own rules to extract the X most significant clusters from an OPTICS reachability plot though. OPTICS is the variable-density generalisation of DBSCAN. | If I am trying to cluster my data using DBSCAN, is there a way to assign a maximum number of clusters? I know I can set the minimum distance between points to be considered a cluster, but my data changes case by case and I would prefer to not allow more than 4 clusters. Any suggestions? | 0 | 1 | 281 |
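One hedged way to act on that suggestion with scikit-learn's OPTICS (available since 0.21): fit once, then grow eps until the DBSCAN-style extraction yields at most 4 clusters. X and the starting parameters are placeholders.

```python
from sklearn.cluster import OPTICS, cluster_optics_dbscan

opt = OPTICS(min_samples=5).fit(X)  # X is your data matrix

eps = 0.1
while True:
    labels = cluster_optics_dbscan(
        reachability=opt.reachability_,
        core_distances=opt.core_distances_,
        ordering=opt.ordering_,
        eps=eps,
    )
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    if n_clusters <= 4:
        break
    eps *= 1.5  # loosen the density threshold until clusters merge
```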
0 | 56,179,295 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2019-05-16T20:52:00.000 | 1 | 2 | 0 | Pytorch argsort ordered, with duplicate elements in the tensor | 56,176,439 | 0.099668 | python,sorting,machine-learning,pytorch,tensor | Here is one way:
sort the numpy array using numpy.argsort()
convert the result into tensor using torch.from_numpy()
import torch
import numpy as np
A = [0,1,2,3,0,0,1,1,2,2,3,3]
x = np.array(A)
y = torch.from_numpy(np.argsort(x, kind='mergesort'))
print(y) | I have a vector A = [0,1,2,3,0,0,1,1,2,2,3,3]. I need to sort it in an increasing manner such that it is listed in an ordered fashion and from that extract the argsort. To better explain this, I need to sort A such that it returns B = [0,4,5,1,6,7,2,8,9,3,10,11]. However, when I use PyTorch's torch.argsort(A) it returns B = [4,5,0,1,6,7,2,8,9,3,10,11].
I'm assuming the algorithm that does so cannot be controlled on my end. Is there any way to approach this without introducing for loops? Such operations are part of my NN model and will cause performance issues if not done efficiently. Thanks! | 0 | 1 | 2,541 |
0 | 56,195,251 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-16T22:21:00.000 | 1 | 1 | 0 | DataParallel multi-gpu RuntimeError: chunk expects at least a 1-dimensional tensor | 56,177,305 | 0.197375 | python,pytorch,multi-gpu | To identify the problem, you should check the shape of your input data for each mini-batch. The documentation says, nn.DataParallel splits the input tensor in dim0 and sends each chunk to the specified GPUs. From the error message, it seems you are trying to pass a 0-dimensional tensor.
One possible reason can be if you have a mini-batch with n examples and you are running your program on more than n GPUs, then you will get this error.
Let's consider the following scenario.
Total training examples = 161, Batch size = 80, total mini-batches = 3
Number of GPUs specified for DataParallel = 3
Now, in the above scenario, in the 3rd mini-batch, there will be 1 example. So, it is not possible to send chunks to all the specific GPUs and you will receive the error message. So, please check if you are not a victim of this issue. | I am trying to run my model on multiple gpus using DataParallel by setting model = nn.DataParallel(model).cuda(), but everytime getting this error -
RuntimeError: chunk expects at least a 1-dimensional tensor (chunk at
/pytorch/aten/src/ATen/native/TensorShape.cpp:184).
My code is correct. Does anyone know what's wrong?
I have tried setting device_ids=[0,1] parameter and also CUDA_VISIBLE_DEVICES on the terminal. Also tried different batch sizes. | 0 | 1 | 1,864 |
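A quick sanity check you could drop in front of the forward pass, assuming `inputs` is the batch tensor fed to the DataParallel-wrapped model; it catches the 0-dimensional case and the "fewer examples than GPUs" case described in the answer:

```python
import torch

def check_batch(inputs):
    # DataParallel chunks along dim 0, so the input must be at least 1-D
    assert inputs.dim() >= 1, "got a 0-dimensional tensor; check your DataLoader/collate_fn"
    n_gpus = torch.cuda.device_count()
    if inputs.size(0) < n_gpus:
        print(f"warning: batch of {inputs.size(0)} examples on {n_gpus} GPUs "
              f"(e.g. a small last batch); consider drop_last=True in the DataLoader")
```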
0 | 56,182,355 | 0 | 0 | 0 | 0 | 1 | true | 5 | 2019-05-17T07:14:00.000 | 2 | 1 | 0 | How to predict different data via neural network, which is trained on the data with 36x60 size? | 56,181,395 | 1.2 | python-3.x,opencv,keras,neural-network,data-science | Neural networks (insofar as I've encountered) have a fixed input shape, freedom permitted only to batch size. This (probably) goes for every amazing neural network you've ever seen. Don't be too afraid of reshaping your image with off-the-shelf sampling to the network's expected input size. Robust computer-vision networks are generally trained on augmented data; randomly scaled, skewed, and otherwise transformed in order to---among other things---broaden the network's ability to handle this unavoidable scaling situation.
There are caveats, of course. An input for prediction should be as similar to the dataset it was trained on as possible, which is to say that a model should be applied to the data for which it was designed. For example, consider an object detection network made for satellite applications. If that same network is then applied to drone imagery, the relative size of objects may be substantially larger than the objects for which the network (specifically its anchor-box sizes) was designed.
Tl;dr: Assuming you're using the right network for the job, don't be afraid to scale your images/frames to fit the network's inputs. | I was training a neural network with images of an eye that are shaped 36x60. So I can only predict the result using a 36x60 image? But in my application I have a video stream, this stream is divided into frames, for each frame 68 points of landmarks are predicted. In the eye range, I can select the eye point, and using the 'boundingrect' function from OpenCV, it is very easy to get a cropped image. But this image has no form 36x60. What is the correct way to get 36x60 data that can be used for forecasting? Or how to use a neural network for data of another form? | 0 | 1 | 81 |
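In practice that scaling step is a one-liner; a sketch with OpenCV, assuming `eye_crop` is the region returned by the boundingRect-based crop and `model` is the trained network (note that cv2.resize takes (width, height)):

```python
import cv2
import numpy as np

# eye_crop: cropped eye region of arbitrary shape
resized = cv2.resize(eye_crop, (60, 36), interpolation=cv2.INTER_AREA)  # -> 36x60
x = resized.astype(np.float32) / 255.0      # use the same normalisation as in training
x = x.reshape(1, 36, 60, 1)                 # batch and channel dims are assumptions
prediction = model.predict(x)
```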
0 | 56,188,894 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-17T14:46:00.000 | 0 | 1 | 0 | Order data set using pandas dataframe based on lowest value inside | 56,188,801 | 0 | python,pandas,dataframe | Order first by pass then do it by date. This way you will be sure to have your df the way you want it | I have a dataset that I would like to order by date but second order with 'pass' value lowest inside of highest. The reason I don't have any code is because, I just have no idea where to begin.
dataframe input:
index date pass
0 11/14/2014 1
1 3/13/2015 1
2 3/20/2015 1
3 5/1/2015 2
4 5/1/2015 1
5 5/22/2015 3
6 5/22/2015 1
7 5/22/2015 2
8 9/25/2015 1
9 9/25/2015 2
10 9/25/2015 3
11 12/4/2015 2
12 12/4/2015 1
13 2/12/2016 2
14 2/12/2016 1
15 5/27/2016 1
16 6/10/2016 1
17 9/23/2016 1
18 12/23/2016 1
19 11/24/2017 1
20 12/29/2017 1
21 1/26/2018 2
22 1/26/2018 1
23 2/9/2018 1
24 3/16/2018 1
25 4/6/2018 2
26 4/6/2018 1
27 6/15/2018 1
28 6/15/2018 2
29 10/26/2018 1
30 11/30/2018 1
31 12/21/2018 1
** Expected Output **
index date pass
0 11/14/2014 1
1 3/13/2015 1
2 3/20/2015 1
3 5/1/2015 2
4 5/1/2015 1
5 5/22/2015 3
6 5/22/2015 2
7 5/22/2015 1
8 9/25/2015 3
9 9/25/2015 2
10 9/25/2015 1
11 12/4/2015 2
12 12/4/2015 1
13 2/12/2016 2
14 2/12/2016 1
15 5/27/2016 1
16 6/10/2016 1
17 9/23/2016 1
18 12/23/2016 1
19 11/24/2017 1
20 12/29/2017 1
21 1/26/2018 1
22 1/26/2018 2
23 2/9/2018 1
24 3/16/2018 1
25 4/6/2018 1
26 4/6/2018 2
27 6/15/2018 1
28 6/15/2018 2
29 10/26/2018 1
30 11/30/2018 1
31 12/21/2018 1
I have spaced out the results that would change: index 5,6,7 and 21,22 and 25,26. So all the bigger pass numbers should be inside the lower pass number if the dates are the same.
So if you look at INDEX 5,6,7 the pass for it is changed to 3,2,1 and if you look at INDEX 25,26 the pass is changed to 1,2. Hope you understand. | 0 | 1 | 52 |
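A hedged sketch of the sort described in the answer, assuming the goal is dates ascending with the larger pass values first within each date (flip the second `ascending` flag if you want the opposite nesting):

```python
import pandas as pd

# df is the input dataframe from the question
df['date'] = pd.to_datetime(df['date'])
out = (df.sort_values(['date', 'pass'], ascending=[True, False], kind='mergesort')
         .reset_index(drop=True))
```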
0 | 56,202,144 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-17T14:56:00.000 | 0 | 2 | 0 | User word2vec model output in larger kmeans project | 56,188,976 | 0 | python,cluster-analysis,k-means,word2vec,unsupervised-learning | There are two common approaches.
Taking the average of all words. That is easy, but the resulting vectors tend to be, well, average. They are not similar to the keywords of the document, but rather similar to the most average and least informative words... My experiences with this approach are pretty disappointing, despite this being the most mentioned approach.
par2vec/doc2vec. You add a "word" for each user to all it's contexts, in addition to the neighbor words, during training. This way you get a "predictive" vector for each paragraph/document/user the same way you get a word in the first word2vec. These are supposedly more informative but require much more effort to train - you can't download a pretrained model because they are computed during training. | I am attempting a rather large unsupervised learning project and am not sure how to properly utilize word2vec. We're trying to cluster groups of customers based on some stats about them and what actions they take on our website. Someone recommended I use word2vec and treat each action a user takes as a word in a "sentence". The reason this step is necessary is because a single customer can create multiple rows in the database (roughly same stats, but new row for each action on the website in chronological order). In order to perform kmeans on this data we need to get that down to one row per customer ID. Hence the previous idea to collapse down the actions as words in a sentence "describing the user's actions"
My question is I've come across countless tutorials and resources online that show you how to use word2vec (combined with kmeans) to cluster words on their own, but none of them show how to use the word2vec output as part of a larger kmeans model. I need to be able to use the word2vec model along side other values about the customer. How should I go about this? I'm using python for the clustering if you want to be specific with coding examples, but I could also just be missing something super obvious and high level. It seems the word2vec outputs vectors, but kmeans needs straight numbers to work, no? Any guidance is appreciated. | 0 | 1 | 71 |
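A sketch of the first (averaging) approach with gensim, using a toy list of customer action "sentences"; the resulting fixed-length vector per customer can be concatenated with the other numeric stats before running k-means:

```python
import numpy as np
from gensim.models import Word2Vec

# one "sentence" of action ids per customer (toy example)
sentences = [["view_a", "add_b", "buy_b"], ["view_c", "view_a"], ["buy_b"]]
w2v = Word2Vec(sentences, vector_size=16, min_count=1)  # older gensim calls this `size`

def customer_vector(actions):
    vecs = [w2v.wv[a] for a in actions if a in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

embeddings = np.vstack([customer_vector(s) for s in sentences])
# np.hstack([embeddings, other_numeric_features]) can then be fed to KMeans
```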
0 | 56,192,030 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-17T18:10:00.000 | 0 | 1 | 0 | Conversion from pixel to general Metric(mm, in) | 56,191,574 | 0 | python,opencv,image-processing | The image formation process implies taking a 2D projection of the real, 3D world, through a lens. In this process, a lot of information is lost (e.g. the third dimension), and the transformation is dependent on lens properties (e.g. focal distance).
The transformation between the distance in pixels and the physical distance depends on the depth (distance between the camera and the object) and the lens. The complex, but more general way, is to estimate the depth (there are specialized algorithms which can do this under certain conditions, but require multiple cameras/perspectives) or use a depth camera which can measure the depth. Once the depth is known, after taking into account the effects of the lens projection, an estimation can be made.
You do not give much information about your setup, but the transformation can be measured experimentally. You simply take a picture of an object of known dimensions and you determine the physical dimension of one pixel (e.g. if the object is 10x10 cm and in the picture it spans 100x100 px, then 1 px corresponds to 1 mm). This is strongly dependent on the distance from the camera to the object.
An approach a bit more automated is to use a certain pattern (e.g. checkerboard) of known dimensions. It can be automatically detected in the image and the same transformation can be performed. | I am using openCV to process an image and use houghcircles to detect the circles in the image under test, and also calculating the distance between their centers using euclidean distance.
Since this would be in pixels, I need the absolute distances in mm or inches, can anyone let me know how this can be done
Thanks in advance. | 0 | 1 | 1,149 |
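The experimental calibration described above reduces to simple arithmetic; a tiny example with assumed numbers:

```python
# Calibration shot: an object known to be 100 mm wide spans 400 px at the working distance
mm_per_px = 100.0 / 400.0  # 0.25 mm per pixel

# Example circle centres (in pixels), e.g. as returned by HoughCircles
(x1, y1), (x2, y2) = (120, 80), (360, 80)
distance_px = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
distance_mm = distance_px * mm_per_px  # 240 px * 0.25 = 60 mm
print(distance_mm)
```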
0 | 56,211,743 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-05-19T20:18:00.000 | 2 | 2 | 0 | How to reduce the number of features in text classification? | 56,211,670 | 1.2 | python,nlp,text-classification,naivebayes,countvectorizer | You can set the parameter max_features to 5000 for instance, It might help with overfitting. You could also tinker with max_df (for instance set it to 0.95) | I'm doing dialect text classification and I'm using countVectorizer with naive bayes. The number of features are too many, I have collected 20k tweets with 4 dialects. every dialect have 5000 tweets. And the total number of features are 43K. I was thinking maybe that's why I could be having overfitting. Because the accuracy has dropped a lot when I tested on new data. So how can I fix the number of features to avoid overfitting the data? | 0 | 1 | 471 |
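A minimal sketch of that suggestion, with the parameter values from the answer treated as starting points to tune; the training lists are placeholders for your own tweets and dialect labels:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

model = make_pipeline(
    CountVectorizer(max_features=5000, max_df=0.95),  # cap vocabulary, drop near-ubiquitous terms
    MultinomialNB(),
)
model.fit(train_tweets, train_labels)  # placeholders for your own data
```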
0 | 56,238,216 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-05-20T07:23:00.000 | 0 | 2 | 0 | Comparing 2 huge (5-6 GB) csv files and count the number of matching and unmatched no. of rows | 56,216,081 | 0 | python,python-3.x,python-2.7 | I hope this algorithm work
Create a hash of every line in both files.
Now build a set of those hashes for each file.
Take the difference and intersection of those sets. | There are 2 huge (5-6 GB each) csv files. Now the objective is to compare both these files: how many rows are matching and how many rows are not matching?
Lets say file1.csv contains 5 similar lines, we need to count it as 1 but not 5.
Similarly, for file2.csv if there are redundant data, we need to count it as 1.
I expect the output to display the number of rows that are matching and the no. of rows that are different. | 0 | 1 | 196 |
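A sketch of that hash-set approach: it streams each file line by line (so the 5-6 GB files never need to fit in memory at once, although the two hash sets do) and repeated rows are counted once automatically because sets keep a single copy:

```python
import hashlib

def line_hashes(path):
    hashes = set()
    with open(path, 'rb') as f:
        for line in f:
            hashes.add(hashlib.md5(line.rstrip(b'\r\n')).hexdigest())
    return hashes

a = line_hashes('file1.csv')
b = line_hashes('file2.csv')
print('matching rows:', len(a & b))
print('only in file1:', len(a - b))
print('only in file2:', len(b - a))
```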
0 | 56,221,192 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-05-20T12:05:00.000 | 1 | 1 | 0 | TensorFlow: Is it possible to map a function to a dataset using a for-loop? | 56,220,696 | 1.2 | python,tensorflow,tensor,map-function | No, not exactly.
A Dataset is inherently lazily evaluated and cannot be assigned to in that way - conceptually try to think of it as a pipeline rather than a variable: each value is read, passed through any map() operations, batch() ops, etc and surfaced to the model as needed. To "assign" a value would be to write it to disk in the .tfrecord file and just isn't likely to ever be supported (these files are specifically designed to be fast-read not random-accessed).
You could, instead, use TensorFlow to do your pre-processing and use TFRecordWriter to write to a NEW tfrecord with the expensive pre-processing completed, then use this new dataset as the input to your model. If you have the disk space available this might well be your best option. | I have a tf.data.TFRecordDataset and a (computationally expensive) function, which I want to map to it. I use TensorFlow 1.12 and eager execution, and the function uses NumPy ndarray interpretations of the tensors in my dataset using EagerTensor.numpy(). However, code inside functions that are given to tf.Dataset.map() is not executed eagerly, which is why the .numpy() conversion doesn't work there and .map() is not an option anymore. Is it possible to for-loop through a dataset and modify the examples in it? Simply assigning to them doesn't seem to work. | 0 | 1 | 582 |
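A rough sketch of that "pre-process once, write a new tfrecord" idea; `expensive_preprocess` and the feature layout are placeholders, and the writer class is tf.io.TFRecordWriter in newer releases (tf.python_io.TFRecordWriter on TF 1.12):

```python
import tensorflow as tf

def expensive_preprocess(arr):
    # placeholder for the numpy-based work currently done in .map()
    return arr

writer = tf.io.TFRecordWriter("preprocessed.tfrecord")
for example in dataset:  # eager iteration over the original dataset
    arr = expensive_preprocess(example.numpy())
    feature = {"data": tf.train.Feature(
        float_list=tf.train.FloatList(value=arr.ravel().tolist()))}
    proto = tf.train.Example(features=tf.train.Features(feature=feature))
    writer.write(proto.SerializeToString())
writer.close()
```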
0 | 56,222,077 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-20T13:10:00.000 | 0 | 1 | 0 | Differences between sklearn.model_selection.KFold and sklearn.model_selection.cross_validate with 'cv' parameter? | 56,221,694 | 0 | python,scikit-learn | I believe that KFold will simply carve your training data into 10 splits.
cross_validate, however, will also carve the data into 10 splits (with the cv=10 parameter) but it will also actually perform the cross-validation. In other words, it will run your model 10x and you will be able to report on the performance of your model, which KFold does not do.
In other words, KFold is one small step in cross-validation. | Can I use cross_validate in sklearn with cv=10 instead of using KFold with n_splits=10? Do they work the same? | 0 | 1 | 171 |
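The relationship as a small sketch: the KFold object only defines the splits, while cross_validate consumes them and actually fits and scores the model.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_validate

X, y = make_classification(n_samples=200, random_state=0)
cv = KFold(n_splits=10, shuffle=True, random_state=0)       # only defines the splits
results = cross_validate(LogisticRegression(max_iter=1000), X, y, cv=cv)  # runs the 10 fits
print(results['test_score'].mean())
# cross_validate(..., cv=10) builds a default splitter for you instead
# (stratified folds for classifiers)
```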
0 | 56,226,834 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-05-20T17:25:00.000 | 0 | 1 | 0 | How to make multiple y axes zoomable individually | 56,225,582 | 1.2 | python,bokeh | Bokeh does not support this, twin axes are always linked to maintain their original relative scale. | I have a bokeh plot with multiple y axes. I want to be able to zoom in one y axis while having the other one's displayed range stay the same. Is this possible in bokeh, and if it is, how can I accomplish that? | 0 | 1 | 27 |
0 | 56,228,359 | 0 | 0 | 0 | 1 | 1 | false | 1 | 2019-05-20T20:37:00.000 | 1 | 3 | 0 | Create a csv file that Excel will not mutate the data of when opening | 56,227,867 | 0.066568 | python,excel,python-3.x,string,csv | Have you tried expressly formatting the relevant column(s) to 'str' before exporting?
df['column_ex'] = df['column_ex'].astype('str')
df.to_csv('df_ex.csv')
Another workaround may be to open the Excel program (not the file), go to the Data menu, then Import from Text. Excel's import utility will give you options to define each column's data type. I believe LibreOffice defaults to keeping the leading 0s but Excel doesn't. | I am programmatically creating csv files using Python. Many end users open and interact with those files using Excel. The problem is that Excel by default mutates many of the string values within the file. For example, Excel converts 0123 > 123.
The values being written to the csv are correct and display correctly if I open them with some other program, such as Notepad. If I open a file with Excel, save it, then open it with Notepad, the file now contains incorrect values.
I know that there are ways for an end user to change their Excel settings to disable this behavior, but asking every single user to do so is not possible for my situation.
Is there a way to generate a csv file using Python that a default copy of Excel will NOT mutate the values of?
Edit: Although these files are often opened in Excel, they are not only opened in Excel and must be output as .csv, not .xlsx. | 0 | 1 | 190 |
0 | 56,241,130 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-21T09:29:00.000 | 0 | 1 | 0 | How to build a resnet with Keras that trains and predicts the subclass from the main class? | 56,235,267 | 0 | python,keras,conv-neural-network,hierarchical,deep-residual-networks | The easiest way to do so would be to train multiple classifiers and build a hierarchical system by yourself.
One classifier detecting class A, B etc. After that make a new prediction for subclasses.
If you want only one single classifier:
What about simply dropping the first hierarchy level (the parent classes) and predicting the subclasses directly? That should also be quite easy. If you really want a model where the hierarchy is learned, take a look at Hierarchical Multi-Label Classification Networks. | I would like to implement a hierarchical resnet architecture. However, I could not find any solution for this. For example, my data structure is like:
class A
Subclass 1
Subclass 2
....
class B
subclass 6
........
So i would like to train and predict the main class and then the subclass of the chosen/predicted mainclass. Can someone provide a simple example how to do this with generators? | 0 | 1 | 286 |
0 | 70,215,508 | 0 | 0 | 0 | 0 | 3 | false | 40 | 2019-05-21T13:23:00.000 | 5 | 12 | 0 | Could not find a version that satisfies the requirement torch>=1.0.0? | 56,239,310 | 0.083141 | python,torch | I finally managed to solve this problem thanks to John Red' comment and serg06 answer. Here's what I've done :
Install Python 3.7.9 and not newer.
BUT make sure to install 64bits python
Every other combination failed for me. | Could not find a version that satisfies the requirement torch>=1.0.0
No matching distribution found for torch>=1.0.0 (from stanfordnlp) | 0 | 1 | 95,229 |
0 | 63,728,120 | 0 | 0 | 0 | 0 | 3 | false | 40 | 2019-05-21T13:23:00.000 | 0 | 12 | 0 | Could not find a version that satisfies the requirement torch>=1.0.0? | 56,239,310 | 0 | python,torch | I tried every possible command for Windows, but nothing worked. I also tried using Pycharm package installation, everything throws the same error.
Finally installed Pytorch using Anaconda. | Could not find a version that satisfies the requirement torch>=1.0.0
No matching distribution found for torch>=1.0.0 (from stanfordnlp) | 0 | 1 | 95,229 |
0 | 71,016,393 | 0 | 0 | 0 | 0 | 3 | false | 40 | 2019-05-21T13:23:00.000 | 0 | 12 | 0 | Could not find a version that satisfies the requirement torch>=1.0.0? | 56,239,310 | 0 | python,torch | I want to pip install " torch>=1.4.0, torchvision>=0.5.0 ", but in a conda env with python=3.0, this is not right.
I tried create a new conda env with python=3.7, and pip install " torch>=1.4.0, torchvision>=0.5.0 " again, it is ok. | Could not find a version that satisfies the requirement torch>=1.0.0
No matching distribution found for torch>=1.0.0 (from stanfordnlp) | 0 | 1 | 95,229 |
0 | 56,327,491 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2019-05-21T14:26:00.000 | 0 | 2 | 0 | Can I import data from On-Premises SQL Server Database to Azure Machine Learning virtual machine? | 56,240,481 | 0 | python,sql,azure,jupyter-notebook,azure-machine-learning-service | You can always push the data to a supported source using a data movement/orchestration service. Remember that all Azure services are not going to have every source option like Power BI, Logic Apps or Data Factory...this is why data orchestration/movement services exist. | On the limited Azure Machine Learning Studio, one can import data from an On-Premises SQL Server Database.
What about the ability to do the exact same thing on a python jupyter notebook on a virtual machine from the Azure Machine Learning Services workspace ?
It does not seem possible from what I've found in the documentation.
Data sources would be limited in Azure ML Services : "Currently, the list of supported Azure storage services that can be registered as datastores are Azure Blob Container, Azure File Share, Azure Data Lake, Azure Data Lake Gen2, Azure SQL Database, Azure PostgreSQL, and Databricks File System"
Thank you in advance for your assistance | 0 | 1 | 1,048 |
0 | 60,404,903 | 0 | 1 | 0 | 0 | 1 | true | 3 | 2019-05-22T08:17:00.000 | 5 | 1 | 0 | Python3 numpy import error PyCapsule_Import could not import module "datetime" | 56,252,250 | 1.2 | python-3.x,numpy,pip | In my case I had this problem, because my script was called math.py, which caused module import problems. Make sure your own python files do not share name with some of common module names. After I renamed my script to something else, I could run script normally. | I am trying to import numpy with python3 on MacOS mojave. I am getting this error. I don't know if it has something to do with a virtual environment or something like that.
Error:
PyCapsule_Import could not import module "datetime"
I have tried reinstalling python3 and reinstalling numpy | 0 | 1 | 2,527 |
0 | 56,254,221 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2019-05-22T08:37:00.000 | 0 | 1 | 0 | Zipping the files in S3 | 56,252,619 | 0 | python,amazon-web-services,amazon-s3,databricks | Amazon S3 does not have a zip/compress function.
You will need to download the files, zip them on an Amazon EC2 instance or your own computer, then upload the result. | I am having some text files in S3 location. I am trying to compress and zip each text files in it. I was able to zip and compress it in Jupyter notebook by selecting the file from my local. While trying the same code in S3, its throwing error as file is missing. Could someone please help | 0 | 1 | 40 |
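A sketch of that download, zip, upload round trip with boto3; the bucket and key names are placeholders:

```python
import boto3
import zipfile

s3 = boto3.client('s3')
bucket, key = 'my-bucket', 'folder/file.txt'  # placeholders

s3.download_file(bucket, key, '/tmp/file.txt')
with zipfile.ZipFile('/tmp/file.zip', 'w', zipfile.ZIP_DEFLATED) as zf:
    zf.write('/tmp/file.txt', arcname='file.txt')
s3.upload_file('/tmp/file.zip', bucket, key + '.zip')
```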
0 | 56,291,641 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2019-05-22T12:38:00.000 | 1 | 1 | 0 | writing a pyspark dataframe to AWS - s3 from EC2 instance using pyspark code the time taken to complete write operation is longer than usual time | 56,256,999 | 0.197375 | python,amazon-web-services,amazon-s3,amazon-ec2,pyspark | It seems an issue with the cloud environment. Four things coming to my mind, which you may check:
Spark version: For some older version of spark, one gets S3 issues.
Data size being written in S3, and also the format of data while storing
Memory/Computation issue: The memory or CPU might be getting utilized to maximum levels.
Temporary memory storage issue- Spark stores some intermediate data in temporary storage, and that might be getting full.
So, with more details, the solution may become clearer. | When we are writing a pyspark dataframe to s3 from an EC2 instance using pyspark code, the time taken to complete the write operation is longer than usual. Earlier it used to take 30 min to complete the write operation for 1000 records, but now it is taking more than an hour. Also, after completion of the write operation the context switch to the next lines of code is taking a longer time (20-30 min). We are not sure whether this is an AWS-s3 issue or because of lazy computation in Pyspark. Could anybody throw some light on this question.
Thanking in advance | 1 | 1 | 75 |
0 | 56,260,405 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-05-22T15:23:00.000 | 1 | 3 | 0 | Load tensorflow model without importing tensorflow | 56,260,192 | 0.066568 | python,tensorflow | Pretty much, unless you brought tensorflow and all of it's files with your application. Other than that, no, you cannot import tensorflow or have any tensorflow dependent modules or code. | Is it possible to train a tensorflow model, then export it as something accessible without tensorflow? I want to apply some machine learning to a school project in which the code is submitted on an online portal - it doesn’t have tensorflow installed though, only standard libraries. I am able to upload additional files, but any tensorflow file would require tensorflow to make sense of... Will I have to write my ML code from scratch? | 0 | 1 | 735 |
0 | 56,278,851 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-05-23T15:41:00.000 | 5 | 1 | 0 | What is Difference Between Flatten() and Dense() Layers in Convolutional Neural Network? | 56,278,769 | 0.761594 | python,machine-learning,neural-network,deep-learning,conv-neural-network | Flatten as the name implies, converts your multidimensional matrices (Batch.Size x Img.W x Img.H x Kernel.Size) to a nice single 2-dimensional matrix: (Batch.Size x (Img.W x Img.H x Kernel.Size)). During backpropagation it also converts back your delta of size (Batch.Size x (Img.W x Img.H x Kernel.Size)) to the original (Batch.Size x Img.W x Img.H x Kernel.Size).
Dense layer is of course the standard fully connected layer. | I Have Serious Doubt Between Them. Can Anyone Please Elaborate With Examples and Some Ideas. | 0 | 1 | 2,581 |
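A small Keras sketch that makes the difference concrete: Flatten only reshapes (no weights), while the Dense layer that follows is the fully connected layer with trainable weights. The input shape is just an example.

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),  # -> (26, 26, 32)
    layers.Flatten(),                        # -> (21632,)  reshape only, 0 parameters
    layers.Dense(10, activation='softmax'),  # fully connected: 21632*10 + 10 weights
])
model.summary()
```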
0 | 56,299,140 | 0 | 0 | 0 | 0 | 1 | false | 11 | 2019-05-24T20:15:00.000 | 4 | 2 | 0 | How to implement neural network pruning? | 56,299,034 | 0.379949 | python,tensorflow,optimization,deep-learning,inference | If you add a mask, then only a subset of your weights will contribute to the computation, hence your model will be pruned. For instance, autoregressive models use a mask to mask out the weights that refer to future data so that the output at time step t only depends on time steps 0, 1, ..., t-1.
In your case, since you have a simple fully connected layer, it is better to use dropout. It randomly turns off some neurons at each iteration step so it reduces the computation complexity. However, the main reason dropout was invented is to tackle overfitting: by having some neurons turned off randomly, you reduce neurons' co-dependencies, i.e. you avoid that some neurons rely on others. Moreover, at each iteration, your model will be different (different number of active neurons and different connections between them), hence your final model can be interpreted as an ensamble (collection) of several diifferent models, each specialized (we hope) in the understanding of a specific subset of the input space. | I trained a model in keras and I'm thinking of pruning my fully connected network. I'm little bit lost on how to prune the layers.
Author of 'Learning both Weights and Connections for Efficient
Neural Networks', say that they add a mask to threshold weights of a layer. I can try to do the same and fine tune the trained model. But, how does it reduce model size and # of computations? | 0 | 1 | 2,526 |
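A minimal sketch of the magnitude-threshold masking idea for a Keras Dense layer (an illustration, not the paper's exact procedure). Note that zeroing weights alone does not shrink the stored model or the computation; the savings only appear once the weights are kept in a sparse format or the pruned units are removed, which is the point raised in the question.

```python
import numpy as np

def prune_dense_layer(layer, threshold=0.05):
    """Zero out small-magnitude weights; the model can then be fine-tuned."""
    w, b = layer.get_weights()
    mask = np.abs(w) > threshold
    layer.set_weights([w * mask, b])
    return mask  # keep the mask to re-apply after fine-tuning steps

mask = prune_dense_layer(model.get_layer('dense_1'))  # 'dense_1' is a placeholder name
```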
0 | 56,303,121 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-25T07:28:00.000 | 0 | 1 | 0 | Why Head() function showing semicolon separated data in my jupyter note book? | 56,302,744 | 0 | python-2.x | When I opened the csv file, each row was shown as single cell. Noticed that the delimiter was ;(semicolon ). I have changed the delimiter to ,(comma) and then the each value in the csv file was displayed in each cell.
Now, the head() method is displaying the results in table structure as expected :)
Is there any limitation on using a semicolon as a delimiter in a csv file? | I read the csv file using the pd.read_csv() method. On displaying it, the data is still shown semicolon-separated. I expected a table structure:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import random
import os
df=pd.read_csv("E:\Python\data_full.csv")
df.head()
Actual result :
56;"housemaid";"married";"basic.4y";"no";"no";...
1
57;"services";"married";"high.school";"unknown...
2
37;"services";"married";"high.school";"no";"ye...
3
40;"admin.";"married";"basic.6y";"no";"no";"no...
4
56;"services";"married";"high.school";"no";"no...
In [ ]:
| 0 | 1 | 83 |
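Rather than editing the file itself, the usual fix is to tell pandas about the delimiter; a sketch assuming the file really is semicolon-separated:

```python
import pandas as pd

df = pd.read_csv(r"E:\Python\data_full.csv", sep=';')
df.head()  # columns now split correctly instead of one semicolon-joined column
```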
0 | 57,820,210 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-05-25T07:31:00.000 | 0 | 1 | 0 | Colab connection always broken | 56,302,763 | 0 | python | Colab restarts on inactivity in 2-3 hours; you need to reconnect it. To avoid this, simply show some activity on the runtime. | I run CNN code in a Colab notebook, and it takes a long time. However, the connection always breaks after it runs for 2 or 3 hours and cannot be reconnected. I was told the Colab virtual machine would break the connection after 12 hours without any operations, so how can I avoid restarting my code after the connection is broken, or is there any easier way to control Colab? | 0 | 1 | 86 |
0 | 56,305,461 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-25T10:52:00.000 | 0 | 1 | 0 | I have many tiff files of neurons. I was wondering if there is a way to read the strength of light where neurons are and import that data into a file | 56,304,143 | 0 | python-3.x,image,graph,tiff | You can read your file as an image and convert it to a black and white image. then if you can specify which pixels are located at each neurons in image you can check pixel's values to check if the neuron is on or off.
Anyway, I suggest searching through the image processing packages for Python; the solution to your problem should be easy to find.
And in case you cannot specify which pixels are located at each neuron in the image, maybe searching for "image processing dilation and erosion" would help.
Good luck! | I am currently doing research at a university and wanted to create custom code that would be able to analyze hours worth of images of neurons and determine if the neurons are on or off. I want to write the code myself and was just wondering where I can get started. For example, what kinds of things can I import into python to read light in tiff files and what kind of things do I need to import to extract that data into a table or graph?
Some clarification:
The neurons are always in the same place in all the images.
It is a black and white image.
The images are hours worth of neuron activity, where an image is taken every second or so.
I do not want someone to write the whole code for me, but for people to please tell me what kind of things are necessary to read light in tiff files and export that data into some sort of table or graph.
Thank you!
I am not allowed to show a photo as an example, but I am allowed to describe what it looks like. The background is all black. There are white dots on the images scattered all around. Some are neurons and I am able to locate where they are. In all of the photos, the location of the neurons are the same and some of the neurons are lit up and others are dark. Which neurons are lit and which are dark change from photo to photo.
I do not have any code done yet, as I am here seeking for a place to start in this project.
The expected output of the code looking at the neurons and seeing which ones are lit up is data in the form of a table, graph, or even a text file if a table or graph is impossible. The table would include either a 1 or 0, 1 indicating that the neuron is lit up, and 0, indicating the neuron is dark. For the graph, it would have a number representing a neuron (1, 2, 3 4, ...) on the y-axis and on the x-axis would be a 1 or 0. Each column would be one image and you would see how the activity of each neuron would change over time. I am open to any other form of data. | 0 | 1 | 34 |
0 | 56,315,167 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-05-26T02:09:00.000 | 2 | 1 | 0 | Comparing results of neural net on two subsets of features | 56,310,122 | 1.2 | python,neural-network,lstm,data-science,feature-extraction | Since you have mentioned that using the different feature extraction methods, you are only getting slightly different feature sets, so the results are also similar. Also since your LSTM model is then also getting almost similar RMSE values, the models are able to generalize well and learn similarly and extract important information from all the datasets.
The best model depends on your future data, the computation time and load of different methods and how well they will last in production. Setting a seed is not really a good idea in neural nets. The basic idea is that your model should be able to reach the optimal weights no matter how they start. If your models are always getting similar results, in most cases, it is a good thing. | I am running a LSTM model on a multivariate time series data set with 24 features. I have ran feature extraction using a few different methods (variance testing, random forest extraction, and Extra Tree Classifier). Different methods have resulted in a slightly different subset of features. I now want to test my LSTM model on all subsets to see which gives the best results.
My problem is that the test/train RMSE scores for my 3 models are all very similar, and every time I run my model I get slightly different answers. This question is coming from a person who is naive and still learning the intricacies of neural nets, so please help me understand: in a case like this, how do you go about determining which model is best? Can you do seeding for neural nets? Or some type of averaging over a certain amount of trials? | 0 | 1 | 20 |
0 | 56,332,236 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-27T20:37:00.000 | 0 | 1 | 0 | I have repeated date values in my x axis. How do i create a different row with a single average of those values? | 56,332,208 | 0 | python,datetime,plot | Assuming you're using pandas:
df.groupby('DATE')['PRICE'].mean() | I have been working with a dataset which contains information about houses that have been sold on a particular market. There are two columns, 'price' and 'date'.
I would like to make a line plot to show how the prices of this market have changed over time.
The problem is, I see that some houses have been sold on the same date but with different prices.
So ideally I would need to get the mean/average price of the houses sold on each date before plotting.
So for example, if I had something like this:
DATE / PRICE
02/05/2015 / $100
02/05/2015 / $200
I would need to get a new row with the following average:
DATE / PRICE
02/05/2015 / $150
I just havent been able to figure it out yet. I would appreciate anyone who could guide me in this matter. Thanks in advance. | 0 | 1 | 14 |
0 | 56,371,107 | 0 | 0 | 0 | 1 | 1 | true | 0 | 2019-05-27T23:00:00.000 | 0 | 1 | 0 | Setting legend entries manually | 56,333,244 | 1.2 | python,excel,openpyxl | You cannot do that. You need to set the rows when creating the plots. That will create the titles for your charts | I am using openpyxl to create charts. For some reason, I do not want to insert row names when adding data. So, I want to edit the legend entries manually. I am wondering if anyone know how to do this.
More specifically
class openpyxl.chart.legend.Legend(legendPos='r', legendEntry=(),
layout=None, overlay=None, spPr=None, txPr=None, extLst=None). I want to edit the legendEntry field | 0 | 1 | 621 |
0 | 56,335,423 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2019-05-28T00:51:00.000 | 0 | 1 | 0 | Time series prediction: need help using series with different periods of days | 56,333,832 | 0 | python,statistics,time-series,prediction | Based on what I understand after reading your question, I would approach this problem in the following way.
For each day, find how far out the event is from that day. The max
value for this number is 46 in 2016, 77 in 2017 etc. Scale this value
by the max day.
Use the above variable, along with day of the month, day of the week
etc as extraneous variable
Additionally, use lag information from ticket sales. You can try one
day lag, one week lag etc.
You would be able to generate all this data from the sale start until
end.
Use the generated variables as predictor for each day and use ticket
sales as target variable and generate a machine learning model
instead of forecasting.
Use the machine learning model along with generated variables to predict future sales. | There's this event that my organization runs, and we have the ticket sales historic data from 2016, 2017, 2018. This data contains the quantity of tickets selled by day, considering all the sales period.
To the 2019 edition of this event, I was asked to make a prediction of the quantity of tickets selled by day, considering all the sales period, sort of to guide us through this period, meaning we would have the information if we are above or below the expected sales average.
The problem is that the historic data has a different size of sales period in days:
In 2016, the total sales period was 46 days.
In 2017, 77 days.
In 2018, 113 days.
In 2019 we are planning 85 days. So how do I adjust those historical data, in a logical/statistical way, so I could use them as inputs to a statistical predictive model (such as an ARIMA model)?
Also, I'm planning to do this on Python, so if you have any suggestions about that, I would love to hear them too!
Thank you! | 0 | 1 | 101 |
0 | 56,335,363 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2019-05-28T04:57:00.000 | 1 | 1 | 0 | Why does PyTorch gather function require index argument to be of type LongTensor? | 56,335,215 | 1.2 | python,pytorch | By default all indices in pytorch are represented as long tensors - allowing for indexing very large tensors beyond just 4GB elements (maximal value of "regular" int). | I'm writing some code in PyTorch and I came across the gather function. Checking the documentation I saw that the index argument takes in a LongTensor, why is that? Why does it need to take in a LongTensor instead of another type such as IntTensor? What are the benefits? | 0 | 1 | 39 |
0 | 70,738,720 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2019-05-28T13:45:00.000 | 0 | 3 | 0 | How to pass different set of data to train and test without splitting a dataframe. (python)? | 56,343,657 | 0 | python,scikit-learn,linear-regression,data-science,training-data | Please, @skillsmuggler, what about X_train and X_test? How can I define them? Because when I try to do that it says NameError: name 'X_train' is not defined | I have gone through multiple questions that help divide your dataframe into train and test, with scikit, without etc.
But my question is I have 2 different csvs ( 2 different dataframes from different years). I want to use one as train and other as test?
How to do so for LinearRegression / any model? | 0 | 1 | 1,436 |
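A hedged sketch of using one year's file for training and the other for testing; the file names, feature columns and target column are placeholders:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

train = pd.read_csv("year_2018.csv")  # hypothetical file names
test = pd.read_csv("year_2019.csv")

features, target = ["feat1", "feat2"], "target"  # hypothetical column names
model = LinearRegression().fit(train[features], train[target])

pred = model.predict(test[features])
print("R^2 on the held-out year:", r2_score(test[target], pred))
```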
0 | 56,390,413 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-05-28T14:35:00.000 | -1 | 2 | 0 | Gaussian Mixture Models for pixel clustering | 56,344,581 | 1.2 | python-3.x,scikit-learn,cluster-analysis,gmm | Its not clustering if you use labeled training data!
You can, however, use the labeling function of GMM clustering easily.
For this, compute the prior probabilities, mean and covariance matrices, and invert them. Then classify each pixel of the new image by the maximum probability density (weighted by prior probabilities) using the multivariate Gaussians from the training data. | I have a small set of aerial images where different terrains visible in the image have been labelled by human experts. For example, an image may contain vegetation, river, rocky mountains, farmland etc. Each image may have one or more of these labelled regions. Using this small labeled dataset, I would like to fit a gaussian mixture model for each of the known terrain types. After this is complete, I would have N number of GMMs for each N types of terrains that I might encounter in an image.
Now, given a new image, I would like to determine for each pixel, which terrain it belongs to by assigning the pixel to the most probable GMM.
Is this the correct line of thought ? And if yes, how can I go about clustering an image using GMMs | 0 | 1 | 353 |
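A sketch of that per-terrain GMM idea with scikit-learn; the number of mixture components, the priors and the pixel arrays are assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# training_pixels: dict mapping terrain name -> (n_pixels, n_features) array from labelled regions
gmms, priors = {}, {}
total = sum(len(p) for p in training_pixels.values())
for name, pixels in training_pixels.items():
    gmms[name] = GaussianMixture(n_components=5).fit(pixels)
    priors[name] = len(pixels) / total

# classify every pixel of a new image (reshaped to (n_pixels, n_features))
names = list(gmms)
log_scores = np.stack([gmms[n].score_samples(new_pixels) + np.log(priors[n]) for n in names])
labels = np.array(names)[log_scores.argmax(axis=0)]
```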
0 | 58,932,030 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-29T07:31:00.000 | 1 | 2 | 0 | Neural Network Prediction Interval | 56,355,244 | 0.099668 | python,machine-learning,neural-network,prediction,confidence-interval | One approach is to calculate the residuals for the validation set; they will have a distribution. Calculate the mean and variance of that residual distribution and, if you are looking for a 95% interval, add +/- 2 sigma to your prediction; that should be your prediction interval. | I created a neural network in Python for a regression problem. I would like to have a prediction interval for each value. How would I go about approaching this since neural networks are nonlinear? | 0 | 1 | 416 |
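A sketch of that residual-based interval, assuming a held-out validation set and an (approximately) normal residual distribution; the model and data names are placeholders:

```python
import numpy as np

resid = y_val - model.predict(X_val).ravel()  # validation residuals
mu, sigma = resid.mean(), resid.std()

pred = model.predict(X_test).ravel()
lower = pred + mu - 2 * sigma  # ~95% interval under normality
upper = pred + mu + 2 * sigma
```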
0 | 56,373,034 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-05-29T07:51:00.000 | 0 | 2 | 0 | Best Clustering Algorithm for High Dimensional Vectors | 56,355,551 | 0 | python,machine-learning,cluster-analysis | 45 dimensions is not particularly high. It's at best "medium" dimensionality, so most algorithms could work.
Usually it's not so much a matter of the number of dimensions, but rather how well they are preprocessed. With bad preprocessing, 2 dimensions can be a problem if the signal in one attribute is drowned by the noise in the other.
There is no automatic way to get this right, otherwise it would be in all the libraries. Scaling can help, but can also harm. It's up to the user to prepare the data and choose parameters (such as distance functions and algorithms) to achieve the desired effect, because there is no computable equation for "desirable". | I am attempting to use some sort of clustering method on a set of datapoint vectors which have 45 dimensions. I'm fairly new to clustering data points and was wondering if anyone could point out appropriate methods to utilize? | 0 | 1 | 323 |
0 | 56,358,796 | 0 | 0 | 1 | 0 | 1 | true | 1 | 2019-05-29T10:18:00.000 | 2 | 1 | 0 | How to generate all posible binary nxm matrices, where the sum of each row is 1 | 56,358,243 | 1.2 | python,r,matlab,matrix | TLDR: No.
Let's look at the simplest example: 2 DCs. Your possible rows will be:
(1,0)
(0,1)
Now you want to construct all possible 2x50 matrices. Their number is 2^50 (2 possible rows in 50 rows). It is equal to:
1125899906842624
We suppose that each matrix stores 100 bytes. That all 2x50 matrices will store:
(2**50) * 100 / 1024 / 1024 / 1024 / 1024 = 102400 terabytes of data.
And to process all of them (in the most optimistic results for normal computers) will spend time equal to:
(2**50) / 10**9 / 60 / 60 = 312 hours.
And 10x50 will be even more... | I'm working on an assignment where i have to assign 1 up to 10 distribution centers to all US states. I have made a model in excel to calculate all the costs, and clearly the goal of the assignment is to find the cheapest way. I have 50 rows (for each state) and 10 columns (for all possible DC locations). My model is based on this matrix, and if I change the matrix, the costs will instantly display. The only constraint is that each state is supplied by exactly 1 DC.
It's clear I can't make all possible combinations by hand. I have tried to translate my model into an optimization program (AIMMS), but it'll take a lot of time, which I have already put into the Excel model. I was thinking that if I had all possible matrices (generated in R, Matlab, or Python, I don't care which one) I could loop over my spreadsheet with them and let a program read the cost, to determine the best choice. It's theoretically possible to supply all states by 1 DC, and at most 10, so every possible 1x50, 2x50, 3x50 ... 10x50 matrix is needed to determine the best one.
So again in short, is it possible to generate every nxm binary matrix with a sum total of 1 on each row in preferably R, or otherwise in Matlab or Python? | 0 | 1 | 149 |
0 | 56,368,801 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-05-29T13:17:00.000 | 0 | 1 | 0 | Which are the possible ways to retrain a model made in IBM Watson Knowledge Studio? | 56,361,575 | 1.2 | python-3.x,ibm-watson,watson-nlu,watson-knowledge-studio | IBM Watson Knowledge Studio does not support online training to retrain the existing model with new data. To adapt to new data, you need to train a brand new model with both the new data and the existing data. | I am working on a knowledge based chatbot creation on ibm watson and I have trained my custom model on ibm watson knowledge studio for agricultural database. Now if someone asked about any information that is not available in our dataset then how can we retrained that model/improve the model with that new data ?
I am using knowledge graph to retrieve the information. Knowledge graph is made in Neo4J.
If any new data comes in, or the admin wants to retrain the model with new data, then the model should be retrained without going to Knowledge Studio, similar to the feedback system that Rasa and IBM Watson Assistant provide. | 0 | 1 | 161 |
0 | 68,811,985 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2019-05-30T05:38:00.000 | 0 | 6 | 0 | How do you produce a random 0 or 1 with random.rand | 56,372,240 | 0 | python,numpy | Is there a reason to specifically use np.random.rand? This function outputs a float as noted in the question and previous answers, and you would need thresholding to obtain an int.
scipy.stats.bernoulli(p).rvs() directly outputs a 1 with probability p and 0 with probability 1-p. | I'm trying to produce a 0 or 1 with numpy's random.rand.
np.random.rand() produces a random float between 0 and 1 but not just a 0 or a 1.
Thank you. | 0 | 1 | 6,029 |
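Both options from the answers in one small sketch (the bernoulli distribution object needs .rvs() to actually draw samples):

```python
import numpy as np
from scipy.stats import bernoulli

draws = bernoulli.rvs(p=0.5, size=10)  # array of 0s and 1s
single = np.random.randint(0, 2)       # plain NumPy alternative: a single 0 or 1
```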
0 | 56,383,864 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-05-30T15:14:00.000 | 1 | 1 | 0 | Cluster analysis of large dataset containing only categorical variables | 56,380,999 | 1.2 | python,cluster-analysis,large-data | Instead of clustering, what you should likely be using is frequent pattern mining.
One-hot encoding variables often does more harm than good. Either use a well-chosen distance for such data (could be as simple as Hamming or Jaccard on some data sets) with a suitable clustering algorithm (e.g., hierarchical, DBSCAN, but not k-means). Alternatively, try k-modes. But most likely, frequent itemsets is the more meaningful analysis on such data. | I have been given the task of clustering our customers based on products they bought together. My data contains 500,000 rows related to each customer and 8,000 variables (product ids). Each variable is a one hot encoded vector that shows if a customer bought that product or not.
I have tried to reduce the dimensions of my data with MCA (multiple correspondence algorithm) and then use k-means and dbscan for cluster analysis, but my results were not satisfying.
What are some proper algorithms for cluster analysis of large datasets with high dimensions and their python implementation? | 0 | 1 | 452 |
0 | 56,382,933 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2019-05-30T17:06:00.000 | 0 | 1 | 0 | Is it possible to run ipython with some packages already imported? | 56,382,641 | 0 | python,numpy,ipython | In the case of Mac add a script like load_numpy.py under ~/.ipython/profile_default/startup/ directory and add the import statements you need to that script. Every time you run ipython, all the scripts in the startup directory will be executed first and so the imports will be there. in case of Ubuntu add the file to ~/.config/ipython/profile_default/startup/ | Is it possible to run ipython with some packages already imported?
almost every time when I run ipython I do import numpy as np, is it possible to automate this process? i.e. just after I run ipython I want to be able to write something like np.array([0,1]). Is it possible? | 0 | 1 | 50 |
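The startup script itself is ordinary Python; a minimal example of what you might put in it (the file name is arbitrary, and scripts in that folder run in lexical order):

```python
# ~/.ipython/profile_default/startup/00-imports.py
import numpy as np
import pandas as pd  # add whatever else you always use
```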
0 | 62,975,319 | 0 | 1 | 0 | 0 | 1 | false | 3 | 2019-05-30T18:01:00.000 | 1 | 4 | 0 | How can I generate a requirements.txt file for a package not available on my development platform? | 56,383,379 | 0.049958 | python,pip | You could run pip-compile-multi in a Docker container. That way you'd be running it under Linux, and you could do that on your Mac or other dev machines. As a one-liner, it might look something like this:
docker run --rm --mount type=bind,src=$(pwd),dst=/code -w /code python:3.8 bash -c "pip install pip-compile-multi && pip-compile-multi"
I haven't used pip-compile-multi, so I'm not exactly sure how you call it. Maybe you'd need to add some arguments to that command. Depending on how complicated your setup is, you could consider writing a Dockerfile and simplifying the one-liner a bit. | I'm trying to generate requirements/dev.txt and prod.txt files for my python project. I'm using pip-compile-multi to generate them from base.in dev.in and prod.in files. Everything works great until I add tensorflow-gpu==2.0.0a0 into the prod.in file. I get this error when I do: RuntimeError: Failed to pip-compile requirements/prod.in.
I believe this is because tensorflow-gpu is only available on Linux, and my dev machine is a Mac. (If I run pip install tensorflow-gpu==2.0.0a0 I am told there is no distribution for my platform.) Is there a way I can generate a requirements.txt file for pip for a package that is not available on my platform? To be clear, my goal is to generate a requirements.txt file using something like pip-compile-multi (because that will version dependencies) that will only install on Linux, but I want to be able to actually generate the file on any platform. | 0 | 1 | 663 |
0 | 56,388,545 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-05-30T20:56:00.000 | 1 | 2 | 0 | what follows after clustering | 56,385,518 | 0.099668 | python,deep-learning,cluster-analysis,k-means,sklearn-pandas | since clustering is unsupervised, there isn't an objective way to evaluate it. Typically, you just observe and see if there is some features for a certain cluster. | I am trying to cluster images based on their similarities with SIFT and Affinity Propagation, I did the clustering but I just don't want to visualize the results. How can I test with a random image from the obtained labels? Or maybe there's more to it?
Other than data visualization, I just don't know what follows after clustering. How do I verify the 'clustering' | 0 | 1 | 38 |
0 | 56,388,587 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-05-30T20:56:00.000 | 0 | 2 | 0 | what follows after clustering | 56,385,518 | 0 | python,deep-learning,cluster-analysis,k-means,sklearn-pandas | If you have ground-truth cluster labels, you can measure Jacquad-Index or something in that line to get an error score. Then, you can tweak your distance measure or parameters etc. to minimize the error score.
You can also do some clustering in order to group your data as the divide step in divide-and-conquer algorithms/applications. | I am trying to cluster images based on their similarities with SIFT and Affinity Propagation, I did the clustering but I just don't want to visualize the results. How can I test with a random image from the obtained labels? Or maybe there's more to it?
Other than data visualization, I just don't know what follows after clustering. How do I verify the 'clustering' | 0 | 1 | 38 |
0 | 60,986,471 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-31T03:19:00.000 | 4 | 1 | 0 | ValueError: ('Could not interpret initializer identifier:', 0.2) | 56,388,245 | 0.664037 | tensorflow,keras,python-3.5 | you should change it to X = layers.Dense(neurons, activation=activation, kernel_initializer=keras.initializers.Constant(weight_init))(X) | Traceback (most recent call last): File
"AutoFC_AlexNet_randomsearch_CalTech101_v2.py", line 112, in
X = layers.Dense(neurons, activation=activation, kernel_initializer=weight_init)(X) File
"/home/shabbeer/NAS/lib/python3.5/site-packages/keras/legacy/interfaces.py",
line 91, in wrapper
return func(*args, **kwargs) File "/home/shabbeer/NAS/lib/python3.5/site-packages/keras/layers/core.py",
line 824, in init
self.kernel_initializer = initializers.get(kernel_initializer) File
"/home/shabbeer/NAS/lib/python3.5/site-packages/keras/initializers.py",
line 503, in get
identifier) ValueError: ('Could not interpret initializer identifier:', 0.2)
I am getting the above error when running the code using tensorflow-gpu version 1.4.0 and keras version 2.1.3 | 0 | 1 | 3,852 |
0 | 56,556,679 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-05-31T08:28:00.000 | 1 | 1 | 0 | How to fix '_pickle.UnpicklingError: invalid load key, '<' ' error in Pytorch | 56,391,392 | 0.197375 | python-3.x,pytorch | The reason about the problem is that the previous download was not finished. So when I deleted the original file and re-downloaded it, the problem was solved. | The problem I encountered when I ran the official code of maskrcnn-benchmark for facebookresearch,which was wrong when loading the pre-training model.
The code runs on a remote server at the school and the graphics card is an NVIDIA P100.
checkpointer = DetectronCheckpointer(
cfg, model, optimizer, scheduler, output_dir, save_to_disk)
extra_checkpoint_data = checkpointer.load(cfg.MODEL.WEIGHT)
arguments.update(extra_checkpoint_data)
I expect to run the code correctly and understand why this is the problem. | 0 | 1 | 7,056 |
0 | 56,402,618 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-05-31T22:41:00.000 | 2 | 1 | 0 | Is it feasible to run a Support Vector Machine Kernel on a device with <= 1 MB RAM and <= 10 MB ROM? | 56,402,429 | 1.2 | python,c,performance,memory-management,svm | If you're that strapped for space, you'll probably want to skip scikit and simply implement the math yourself. That way, you can cycle through the data in structures of your own choosing. Memory requirements depend on the class of SVM you're using; a two-class linear SVM can be done with a single pass through the data, considering only one observation at a time as you accumulate sum-of-products, so your command logic would take far more space than the data requirements.
If you need to keep the entire data set in memory for multiple passes, that's "only" 5000*10*8 bytes for floats, or 400k of your 1Mb, which might be enough room to do your manipulations. Also consider a slow training process, re-reading the data on each pass, as this reduces the 400k to a triviality at the cost of wall-clock time.
All of this is under your control if you look up a usable SVM implementation and alter the I/O portions as needed.
Does that help? | Some preliminary testing shows that a project I'm working on could potentially benefit from the use of a Support-Vector-Machine to solve a tricky problem. The concern that I have is that there will be major memory constraints. Prototyping and testing is being done in python with scikit-learn. The final version will be custom written in C. The model would be pre-trained and only the decision function would be stored on the final product. There would be <= 10 training features, and <= 5000 training data-points. I've been reading mixed things regarding SVM memory, and I know the default sklearn memory cache is 200 MB. (Much larger than what I have available) Is this feasible? I know there are multiple different types of SVM kernel and that the kernel's can also be custom written. What kernel types could this potentially work with, if any? | 0 | 1 | 158 |
0 | 56,404,614 | 0 | 0 | 0 | 0 | 2 | false | 5 | 2019-06-01T06:29:00.000 | 3 | 6 | 0 | How to generate a random sample of points from a 3-D ellipsoid using Python? | 56,404,399 | 0.099668 | python,math,random,geometry,ellipse | Consider using Monte-Carlo simulation: generate a random 3D point; check if the point is inside the ellipsoid; if it is, keep it. Repeat until you get 1,000 points.
P.S. Since the OP changed their question, this answer is no longer valid. | I am trying to sample around 1000 points from a 3-D ellipsoid, uniformly. Is there some way to code it such that we can get points starting from the equation of the ellipsoid?
I want points on the surface of the ellipsoid. | 0 | 1 | 2,545 |
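A minimal NumPy sketch (not part of the record above) of the rejection-sampling idea from that answer; the semi-axes a, b, c are assumed example values:
import numpy as np

def sample_inside_ellipsoid(n, a=3.0, b=2.0, c=1.0, rng=None):
    # Rejection sampling: draw uniformly in the bounding box and keep points
    # satisfying x^2/a^2 + y^2/b^2 + z^2/c^2 <= 1.
    rng = np.random.default_rng() if rng is None else rng
    points = []
    while len(points) < n:
        x, y, z = rng.uniform(-a, a), rng.uniform(-b, b), rng.uniform(-c, c)
        if (x / a) ** 2 + (y / b) ** 2 + (z / c) ** 2 <= 1.0:
            points.append((x, y, z))
    return np.array(points)

samples = sample_inside_ellipsoid(1000)
This samples the interior uniformly; as the answer itself notes, it does not address the edited question about sampling the surface.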
0 | 72,498,276 | 0 | 0 | 0 | 0 | 2 | false | 5 | 2019-06-01T06:29:00.000 | 0 | 6 | 0 | How to generate a random sample of points from a 3-D ellipsoid using Python? | 56,404,399 | 0 | python,math,random,geometry,ellipse | One way of doing this whch generalises for any shape or surface is to convert the surface to a voxel representation at arbitrarily high resolution (the higher the resolution the better but also the slower). Then you can easily select the voxels randomly however you want, and then you can select a point on the surface within the voxel using the parametric equation. The voxel selection should be completely unbiased, and the selection of the point within the voxel will suffer the same biases that come from using the parametric equation but if there are enough voxels then the size of these biases will be very small.
You need a high quality cube intersection code but with something like an ellipsoid that can be optimised quite easily. I'd suggest stepping through the bounding box subdivided into voxels. A quick distance check will eliminate most cubes and you can do a proper intersection check for the ones where an intersection is possible. For the point within the cube I'd be tempted to do something simple like a random XYZ distance from the centre and then cast a ray from the centre of the ellipsoid and the selected point is where the ray intersects the surface. As I said above, it will be biased but with small voxels, the bias will probably be small enough.
There are libraries that do convex shape intersection very efficiently and cube/ellipsoid will be one of the options. They will be highly optimised but I think the distance culling would probably be worth doing by hand whatever. And you will need a library that differentiates between a surface intersection and one object being totally inside the other.
And if you know your ellipsoid is aligned to an axis then you can do the voxel/edge intersection very easily as a stack of 2D square-intersects-ellipse problems with the set of squares to be tested defined as those that are adjacent to those in the layer above. That might be quicker.
One of the things that makes this approach more manageable is that you do not need to write all the code for edge cases (it is a lot of work to get around issues with floating point inaccuracies that can lead to missing or doubled voxels at the intersection). That's because these will be very rare so they won't affect your sampling.
It might even be quicker to simply find all the voxels inside the ellipse and then throw away all the voxels with 6 neighbours... Lots of options. It all depends how important performance is. This will be much slower than the other suggestions but if you want ~1000 points then ~100,000 voxels feels about the minimum for the surface, so you probably need ~1,000,000 voxels in your bounding box. However even testing 1,000,000 intersections is pretty fast on modern computers. | I am trying to sample around 1000 points from a 3-D ellipsoid, uniformly. Is there some way to code it such that we can get points starting from the equation of the ellipsoid?
I want points on the surface of the ellipsoid. | 0 | 1 | 2,545 |
0 | 56,410,719 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-06-01T11:32:00.000 | 0 | 1 | 0 | How to choose a split variables for continous features for decision tree | 56,406,338 | 0 | python,machine-learning,artificial-intelligence,decision-tree,machine-learning-model | A decision tree works by calculating entropy and information gain to determine the most important feature. Indeed, 8000 rows is not too much for a decision tree. More generally, a random forest is similar to a decision tree but works as an ensemble; you can review and try it. Also, the slowness may be caused by something else. | I am currently implementing a decision tree algorithm. If I have data with continuous features, how do I decide on a splitting point? I came across a few resources which say to choose the midpoints between every two points, but considering I have 8000 rows of data this would be very time consuming. The output/feature label is categorical data. Is there any approach by which I can perform this operation more quickly? | 0 | 1 | 89
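An illustrative sketch (not part of the record above) of choosing a split threshold for one continuous feature by information gain, using midpoints between consecutive sorted unique values as candidates:
import numpy as np

def entropy(labels):
    # Shannon entropy of a categorical label array.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_split(feature, labels):
    # Candidate thresholds: midpoints between consecutive sorted unique values.
    values = np.unique(feature)
    candidates = (values[:-1] + values[1:]) / 2.0
    parent = entropy(labels)
    best_gain, best_t = -1.0, None
    for t in candidates:
        left, right = labels[feature <= t], labels[feature > t]
        child = (len(left) * entropy(left) + len(right) * entropy(right)) / len(labels)
        gain = parent - child
        if gain > best_gain:
            best_gain, best_t = gain, t
    return best_t, best_gain
With 8000 rows there are at most 7999 candidate thresholds per feature, which is still cheap; it gets much faster if the per-candidate counts are accumulated in a single sorted pass instead of the O(n) masking shown here.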
0 | 56,409,739 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-06-01T12:48:00.000 | 0 | 1 | 0 | Removing white space at the beginning of values in multiple columns | 56,406,907 | 0 | python,pandas,strip | You could try using
df['Name'].replace(" ", "")
this would delete all whitespaces though. | I found a solution to this:
df['Name']=df['Name'].str.lstrip()
df['Parent']=df['Parent'].str.lstrip()
I have this DataFrame df (there is a white space at the left of "A" and "C" in the second row (which doesn't show well here). I would like to remove that space.
Mark Name Parent age
10 A C 1
12 A C 2
13 B D 3
I tried
df['Name'].str.lstrip()
df['Parent'].str.lstrip()
then tried
df.to_excel('test.xlsx')
but the result in excel didn't remove the white spaces
I then tried defining another variable
x=df['Name'].str.lstrip
x.to_excel('test.xlsx')
that worked fine in Excel, but this is a new dataFrame, and only had the x column
I then tried repeating the same for 'Parent', and to play around with joining multiple dataframes to the original dataframe, but I still couldn't get it to work, and that seems too convoluted anyway
Finally, even if my first attempt had worked, I would like to be able to replace the white spaces in one go, without having to do a separate call for each column name | 0 | 1 | 70 |
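A short sketch (not part of the original question) of stripping several columns in one go and writing the result back before exporting; the column names follow the example above:
import pandas as pd

df = pd.DataFrame({'Mark': [10, 12, 13],
                   'Name': [' A', ' A', 'B'],
                   'Parent': [' C', ' C', 'D'],
                   'age': [1, 2, 3]})

# Strip leading whitespace from both text columns at once and assign back in place.
cols = ['Name', 'Parent']
df[cols] = df[cols].apply(lambda s: s.str.lstrip())

df.to_excel('test.xlsx', index=False)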
0 | 56,407,510 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-06-01T13:01:00.000 | 0 | 1 | 0 | Interpretation of Training, Testing (Dev) and Validation Score in Machine Learning | 56,406,988 | 1.2 | python,validation,machine-learning,scikit-learn,data-science | As you explained in the comments, your test set is the set you used to tune your parameters and the validation set is the set that your model didn't use for training.
Considering that, it's natural that your Validation scores are lower than other scores.
When you're training a machine learning model, you show the training set to your model; that's why your model gets the best scores on the training set, i.e. samples it has already seen and knows the answer for.
You use the validation set to tune your parameters (e.g. degree of complexity in regression and so on), so your parameters are fine-tuned for your validation set but your model has not been trained on it. (For this you used the term test set and, to be fair, the terms are sometimes used that way.)
Finally, you have the lowest score on your test set, which is natural since the parameters are not tuned for the test set and the model has never seen those samples before.
If there is a huge gap between your training and test results, your model might be overfitting, and there are ways to avoid that.
Hope this helped ;) | I have trained a Machine Learning model using sklearn and looked at different scores for the training, testing (dev) and validation set.
Here are the scores:
Accuracy on Train: 94.5468%
Accuracy on Test: 74.4646%
Accuracy on Validation: 65.6548%
Precision on Train: 96.7002%
Precision on Test: 85.2289%
Precision on Validation: 79.7178%
F1-Score on Train: 96.9761%
F1-Score on Test: 85.6203%
F1-Score on Validation: 79.6747%
I am having some problems with the interpretation of the scores. Is it normal that the model has a much worse result on the validation set?
Do you have thoughts on those results? | 0 | 1 | 330 |
0 | 56,411,523 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-06-01T17:31:00.000 | 1 | 1 | 0 | Create a new vector model in gensim | 56,408,959 | 1.2 | python,vector,gensim,word2vec | Word-vectors are generally only comparable to each other if they were trained together.
So, if you want to have vectors for all of 'new', 'york', and 'new_york', you should prepare a corpus which includes them all, in a variety of uses, and train a Word2Vec model from that. | I already trained a word2vec model with gensim library. For example, my model contains vectors for 2 words: "new" and "york". However, I also want to train a vector for the word "new york", so I transform "new york" into "new_york" and train a new vector model. Finally, I want to combine 3 vectors: vector of the word "new", "york" and "new_york" into one vector representation for the word "new york".
How can I save the new vector value to the model?
I tried to assign the new vector to the model, but gensim does not allow assigning a new value to the vector model. | 0 | 1 | 103
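An illustrative sketch (not from the original answer) of combining the three vectors by averaging with NumPy; model is assumed to be the trained gensim Word2Vec model from the question, and the combined vector is kept in a plain dict rather than written back into the model:
import numpy as np

vec_new = model.wv['new']
vec_york = model.wv['york']
vec_new_york = model.wv['new_york']

# One simple combination: the normalised average of the three vectors.
combined = (vec_new + vec_york + vec_new_york) / 3.0
combined = combined / np.linalg.norm(combined)

extra_vectors = {'new york': combined}
Recent gensim releases expose methods such as KeyedVectors.add_vector for inserting a new key, but the exact API depends on your gensim version, so check it before relying on it.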
0 | 56,416,556 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2019-06-02T11:32:00.000 | 0 | 3 | 0 | Keras How To Resume Training With Adam Optimizer | 56,414,605 | 0 | python,tensorflow,machine-learning,keras | What about model.load('saved.h5'). It should also load the optimizer if you save it with model.save() though. | My model requires to run many epochs in order to get decent result, and it takes few hours using v100 on Google Cloud.
Since I'm on a preemptible instance, it kicks me off in the middle of training. I would like to be able to resume from where it left off.
In my custom CallBack, I run self.model.save(...) in on_epoch_end. Also it stops the training if the score hasn't improved in last 50 epochs.
Here are the steps I tried:
I ran model.fit until the early stops kicked in after epoch 250 (best score was at epoch 200)
I loaded the model saved after 100th epoch.
I ran model.fit with initial_epoch=100. (It starts with Epoch 101.)
However, it takes while to catch up with the first run. Also the accuracy score of each epoch gets kind of close to the first run, but it's lower. Finally the early stop kicked in at like 300, and the final score is lower than the first run. Only way I can get the same final score is to create the model from scratch and run fit from the epoch 1.
I also tried to utilize float(K.get_value(self.model.optimizer.lr)) and K.set_value(self.model.optimizer.lr, new_lr).
However, self.model.optimizer.lr always returned the same number. I assume it's because the adam optimizer calculates the real lr from the initial lr that I set with Adam(lr=1e-4).
I'm wondering what's the right approach to resume training using Adam optimizer? | 0 | 1 | 6,918 |
0 | 56,464,549 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-06-02T16:35:00.000 | 0 | 1 | 0 | Is it possible to model a min-max-problem using pyomo | 56,416,916 | 0 | python,pyomo | I think yes, but unless you find a clever way to reformulate your model, it might not be very efficent.
You could solve all the possibilities of max(g_m(x)), then select the solution with the lowest objective function value.
I fear that the max operation is not something you can add to a minimization model, since it is not a mathematical operation but a solver operation. This operation is at the problem level. Keep in mind that when solving a model, Pyomo requires only one sense of optimization (min or max) as an argument, thus making it unable to understand a min-max sense. Even if it did, how could it know what to maximize or minimize? This is why I suggest you break your problem in two, unless you work on its formulation. | Is it possible to formulate a min-max optimization problem of the following form in pyomo:
min(max(g_m(x)) s.t. L
where g_m are nonlinear functions (actually constraints of another model) and L is a set of linear constraints?
How would I create the expression for the objective function of the model?
The problem is that using max() on a list of constraint objects returns only the constraint possessing the maximum value at a given point. | 0 | 1 | 966
0 | 56,421,074 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-06-02T20:10:00.000 | 0 | 1 | 0 | Multiple inputs for Dijkstra's algorithm | 56,418,597 | 0 | python,algorithm,routing,navigation,dijkstra | Is there a function through which the two matrices are related?
If yes, then use that function to derive a new weight matrix and run the algorithm on it.
If no, then try running the algorithm with the first matrix and then with the second (and vice versa), and choose the result whose cost best matches your requirement. | The inputs to Dijkstra's algorithm are a directed and weighted graph, generally represented by an adjacency (distance) matrix and a start node.
I have two different distance matrices to be used as inputs, representing two different infrastructures (e.g., roads and cycle ways). Any ideas how to modify Dijkstra's algorithm to use these two inputs? I want to implement it in Python.
Thanks! | 0 | 1 | 130 |
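A minimal sketch (not part of the original answer) of the first option: combine the two adjacency matrices with a weighting function and run Dijkstra on the result; the 0.5 weights, road_matrix and cycle_matrix are assumed stand-ins:
import heapq
import numpy as np

def combine(road, cycle, w_road=0.5, w_cycle=0.5):
    # Element-wise weighted sum of the two distance matrices.
    return w_road * np.asarray(road, dtype=float) + w_cycle * np.asarray(cycle, dtype=float)

def dijkstra(adj, start):
    # adj[i][j] is the edge weight from i to j; use float('inf') for "no edge".
    n = len(adj)
    dist = [float('inf')] * n
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v in range(n):
            w = adj[u][v]
            if w != float('inf') and d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

dist_from_0 = dijkstra(combine(road_matrix, cycle_matrix), start=0)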
0 | 62,840,501 | 0 | 0 | 0 | 0 | 2 | false | 38 | 2019-06-04T08:28:00.000 | 1 | 5 | 0 | Pipenv stuck "⠋ Locking..." | 56,440,090 | 0.039979 | python,pip,pipenv | try doing pipenv --rm - removes virtual environment
then pipenv shell - this will again initiate virtual env
then pipenv install installs all the packages again
worked for me | Why is my pipenv stuck in the "Locking..." stage when installing [numpy|opencv|pandas]?
When running pipenv install pandas or pipenv update it hangs for a really long time with a message and loading screen that says it's still locking. Why? What do I need to do? | 0 | 1 | 24,826 |
0 | 71,402,278 | 0 | 0 | 0 | 0 | 2 | false | 38 | 2019-06-04T08:28:00.000 | 1 | 5 | 0 | Pipenv stuck "⠋ Locking..." | 56,440,090 | 0.039979 | python,pip,pipenv | I had this happen to me just now. Pipenv got stuck locking forever, 20+ minutes with no end in sight, and pipenv --rm didn't help.
In the end, the problem was that I had run pipenv install "boto3~=1.21.14" to upgrade boto3 from boto3 = "==1.17.105". But I had other conflicting requirements (in my case, botocore = "==1.20.105" and s3transfer = "==0.4.2") which are boto3 dependencies. The new version of boto3 required newer versions of these two packages, but the == requirements didn't allow that. Pipenv didn't explain this, and just spun "Locking…" forever.
So if you run into this, I would advise you to carefully look at your Pipenv packages, see if there are any obvious conflicts, and loosen or remove package requirements where possible.
In my case, I was able to just remove the s3transfer and botocore packages from the list entirely, and rely on boto3 to fetch the necessary versions. | Why is my pipenv stuck in the "Locking..." stage when installing [numpy|opencv|pandas]?
When running pipenv install pandas or pipenv update it hangs for a really long time with a message and loading screen that says it's still locking. Why? What do I need to do? | 0 | 1 | 24,826 |
0 | 56,446,962 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-06-04T12:14:00.000 | 0 | 2 | 0 | google colab GPU processing become very slow after keras and tensorflow upgrade | 56,443,694 | 0 | python,tensorflow,keras,google-colaboratory | You can reset your backend using the Runtime -> Reset all runtimes... menu item. (This is much faster than kill -9 -1, which will take some time to reconnect.) | I've upgrade my tensorflow and keras with this code:
!pip install tf-nightly-gpu-2.0-preview
Now every epoch of model training costs 22 min, whereas it was 17 sec before this upgrade!
I did downgrade tensorflow and keras but it did not help! | 0 | 1 | 1,597
0 | 56,461,379 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-06-05T07:50:00.000 | 0 | 1 | 0 | How to specify integration limits in scipy.integrate.trapz or simps? | 56,456,248 | 0 | python,python-3.x,scipy,numerical-integration | but at what value does it start x
It doesn't matter for integration. You just specify the values of y and tell scipy how far the xs are apart. Whether they start at -5 or +26 doesn't influence the value of the integral. | I understand that in scipy.integrate.trapz(y, x=None, dx=1.0, axis=-1) or simps, the min and max values of x (if specified) are taken to be the limits of the integral, but what happens when x=None? It has dx to figure out the spacing in the x values but at what value does it start x?
I tried it with and without x, from which I understand that it starts x values from 0.0 when x is not specified. | 0 | 1 | 608 |
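A small check (not part of the original post) illustrating the point made in the answer: with dx only, the result is identical no matter where the x values would start:
import numpy as np
from scipy.integrate import trapz

y = np.array([0.0, 1.0, 4.0, 9.0, 16.0])

a = trapz(y, dx=0.5)                             # no x given, only the spacing
b = trapz(y, x=-5.0 + 0.5 * np.arange(len(y)))   # same spacing, arbitrary start
print(a, b)                                      # both print 11.0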
0 | 61,623,908 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2019-06-05T15:11:00.000 | 2 | 1 | 0 | DataLoader num_workers vs torch.set_num_threads | 56,463,317 | 1.2 | python,machine-learning,pytorch | The num_workers for the DataLoader specifies how many parallel workers to use to load the data and run all the transformations. If you are loading large images or have expensive transformations then you can be in situation where GPU is fast to process your data and your DataLoader is too slow to continuously feed the GPU. In that case setting higher number of workers helps. I typically increase this number until my epoch step is fast enough. Also, a side tip: if you are using docker, usually you want to set shm to 1X to 2X number of workers in GB for large dataset like ImageNet.
The torch.set_num_threads specifies how many threads to use for parallelizing CPU-bound tensor operations. If you are using GPU for most of your tensor operations then this setting doesn't matter too much. However, if you have tensors that you keep on cpu and you are doing lot of operations on them then you might benefit from setting this. Pytorch docs, unfortunately, don't specify which operations will benefit from this so see your CPU utilization and adjust this number until you can max it out. | Is there a difference between the parallelization that takes place between these two options? I’m assuming num_workers is solely concerned with the parallelizing the data loading. But is setting torch.set_num_threads for training in general? Trying to understand the difference between these options. Thanks! | 0 | 1 | 1,236 |
0 | 56,469,920 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-06-05T20:58:00.000 | 0 | 1 | 0 | Keras prints out result of every batch in a single epoch, why is that? | 56,467,912 | 0 | python,keras | That looks like an interaction with a notebook/kernel environment.
You may prefer the results if you change verbose=1 to verbose=2. | As described in Keras documentation, the verbose=1 asks the keras to print out results in a progress bar. But sometimes keras prints out the results of every batch, which makes a very messy printout report (see below). I wonder why is that? I mean, the only setup is the parameter of verbose, isn't it?
My code is simple:
history = model.fit(X_shuffle, y_scores_one_hot,
validation_split=0.2, verbose = 1,
epochs = 100, batch_size = 50)
Wrong printout:
Epoch 1/100
5750/8107 [====================>.........] - ETA: 5:03 - loss: 1.3690 - acc: 0.520 - ETA: 1:42 - loss: 1.3600 - acc: 0.533 - ETA: 1:02 - loss: 1.3994 - acc: 0.500 - ETA: 39s - loss: 1.4173 - acc: 0.482 - ETA: 29s - loss: 1.4189 - acc: 0.47 - ETA: 23s - loss: 1.4320 - acc: 0.46 - ETA: 19s - loss: 1.4432 - acc: 0.46 - ETA: 16s - loss: 1.4373 - acc: 0.46 - ETA: 14s - loss: 1.4318 - acc: 0.46 - ETA: 12s - loss: 1.4322 - acc: 0.46 - ETA: 11s - loss: 1.4314 - acc: 0.46 - ETA: 10s - loss: 1.4342 - acc: 0.46 - ETA: 10s - loss: 1.4386 - acc: 0.45 - ETA: 9s - loss: 1.4399 - acc: 0.4557 - ETA: 8s - loss: 1.4373 - acc: 0.458 - ETA: 7s - loss: 1.4418 - acc: 0.453 - ETA: 7s - loss: 1.4419 - acc: 0.454 - ETA: 6s - loss: 1.4435 - acc: 0.453 - ETA: 6s - loss: 1.4421 - acc: 0.453 - ETA: 6s - loss: 1.4439 - acc: 0.451 - ETA: 5s - loss: 1.4437 - acc: 0.452 - ETA: 5s - loss: 1.4388 - acc: 0.456 - ETA: 5s - loss: 1.4430 - acc: 0.453 - ETA: 4s - loss: 1.4440 - acc: 0.452 - ETA: 4s - loss: 1.4428 - acc: 0.452 - ETA: 4s - loss: 1.4469 - acc: 0.449 - ETA: 4s - loss: 1.4471 - acc: 0.450 - ETA: 3s - loss: 1.4517 - acc: 0.445 - ETA: 3s - loss: 1.4489
I expected something like:
Epoch 1/100
3009/3009 [==============================] - 30s 10ms/step - loss: 1.5875 - acc: 0.2795 - val_loss: 1.5542 - val_acc: 0.4130
Epoch 2/100
3009/3009 [==============================] - 27s 9ms/step - loss: 1.5049 - acc: 0.4403 - val_loss: 1.4963 - val_acc: 0.4130 | 0 | 1 | 738 |
1 | 56,469,292 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-06-05T23:27:00.000 | 0 | 1 | 0 | How to use python with qr scanner devices? | 56,469,264 | 0 | python,input,qr-code,barcode-scanner | Typically a barcode scanner automatically outputs to the screen, just like a keyboard (except really quickly), and there is an end of line character at the end (like and enter).
Using a Python script, all you need to do is start the script, connect a scanner, scan something, and read the input (STDIN) of the script. If you build a script that is always receiving input and storing or processing it, you can do whatever you please with the data.
A QR code is read in the same way that a barcode scanner works, immediately outputting the encoded data as text. Just collect this using the STDIN of a Python script and you're good to go! | I want to create a program that can read and store the data from a QR scanning device, but I don't know how to get the input from the barcode scanner as an image or save it in a variable to read it afterwards with OpenCV | 0 | 1 | 601
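A minimal sketch (not from the original answer) of a script that keeps reading whatever the scanner "types", one code per line:
import sys

scanned = []
for line in sys.stdin:        # the scanner terminates each code with an Enter keypress
    code = line.strip()
    if not code:
        continue
    scanned.append(code)
    print('got code:', code)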
0 | 56,472,750 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-06-06T06:24:00.000 | 1 | 2 | 0 | What are the available estimators which we can use as estimator in onevsrest classifier? | 56,471,908 | 0.099668 | python-3.x,scikit-learn | The following can be used for classification problems:
Logistic Regression
SVM
RandomForest Classifier
Neural Networks | I want to know briefly about all the available estimators like logisticregression or multinomial regression or SVMs which can be used for classification problems.
These are the three I know. Are there any others like these? And, relatively, how long do they run and how accurate can they get compared to these? | 0 | 1 | 26
0 | 56,472,341 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-06-06T06:51:00.000 | 0 | 1 | 0 | Getting error while trying to fit model - The kernel appears to have died. It will restart automatically | 56,472,233 | 1.2 | python,tensorflow,jupyter | There could be many reasons for the Kernel dying, the most common one I encounter is because I have ran out of memory.
If you are training a particularly large model try temporarily reducing it and bringing the batch_size down to 1
(I don't think the warning message is related - this is just giving advance warning of a future change) | I am trying to fit a model using keras but I get the following error -
WARNING:tensorflow:From /anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Epoch 1/100
classifier.fit(X_train,y_train,epochs=100,batch_size=10) | 0 | 1 | 359 |
0 | 56,473,975 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-06-06T08:34:00.000 | 2 | 2 | 0 | conv net save weight and new test set | 56,473,760 | 0.197375 | python,machine-learning,keras,conv-neural-network | yes, for fair evaluation no sample in the test set should be seen during training | i'm using conv net for image classification.
There is something I dont understand theoretically
For training I split my data 60%train/20%validation/20% test
I save weight when metric on validation set is the best (I have same performance on training and validation set).
Now, I do a new split. Some data from training set will be on test set. I load the weight and I classify new test set.
Since the weights have been computed on a part of the new test set, do we agree that this is a bad procedure and that I should retrain my model with my new training/validation set? | 0 | 1 | 26
0 | 56,474,029 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-06-06T08:34:00.000 | 1 | 2 | 0 | conv net save weight and new test set | 56,473,760 | 0.099668 | python,machine-learning,keras,conv-neural-network | The all purpose of having a test set is that the model must never see it until the very last moment.
So if your model trained on some of the data in your test set, it becomes useless and the results it will gives you will have no meaning.
So basicly:
1.Train on your train set
2.Validate on your validation set
3.Repeat 1 and 2 until you are happy with the results
4.At the very end, finally test your model on the test set | i'm using conv net for image classification.
There is something I dont understand theoretically
For training I split my data 60%train/20%validation/20% test
I save weight when metric on validation set is the best (I have same performance on training and validation set).
Now, I do a new split. Some data from training set will be on test set. I load the weight and I classify new test set.
Since the weights have been computed on a part of the new test set, do we agree that this is a bad procedure and that I should retrain my model with my new training/validation set? | 0 | 1 | 26
0 | 56,577,125 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-06-06T09:28:00.000 | 0 | 1 | 1 | Cassandra write throttling with multiple clients | 56,474,650 | 1.2 | python,cassandra,datastax-python-driver | The solution I came up with was to make both data producers write to the same queue.
To meet the requirement that the low-priority bulk data doesn't interfere with the high-priority live data, I made the producer of the low-priority data check the queue length and then add a record to the queue only if the queue length is below a suitable threshold (in my case 5 messages).
The result is that no live data message can have more than 5 bulk data messages in front of it in the queue. If messages start backing up on the queue then the bulk data producer stops queuing more data until the queue length falls below the threshold.
I also split the bulk data into many small messages so that they are relatively quick to process by the consumer.
There are three disadvantages of this approach:
There is no visibility of how many queued messages are low priority and how many are high priority. However we know that there can't be more than 5 low priority messages.
The producer of low-priority messages has to poll the queue to get the current length, which generates a small extra load on the queue server.
The threshold isn't applied strictly because there is a race between the two producers from checking the queue length to queuing a message. It's not serious because the low-priority producer queues only a single message when it loses the race and next time it will know the queue is too long and wait. | I have two clients (separate docker containers) both writing to a Cassandra cluster.
The first is writing real-time data, which is ingested at a rate that the cluster can handle, albeit with little spare capacity. This is regarded as high-priority data and we don't want to drop any. The ingestion rate varies quite a lot from minute to minute. Sometimes data backs up in the queue from which the client reads and at other times the client has cleared the queue and is (briefly) waiting for more data.
The second is a bulk data dump from an online store. We want to write it to Cassandra as fast as possible at a rate that soaks up whatever spare capacity there is after the real-time data is written, but without causing the cluster to start issuing timeouts.
Using the DataStax Python driver and keeping the two clients separate (i.e. they shouldn't have to know about or interact with each other), how can I throttle writes from the second client such that it maximises write throughput subject to the constraint of not impacting the write throughput of the first client? | 0 | 1 | 246 |
0 | 56,476,394 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-06-06T11:10:00.000 | 1 | 1 | 0 | Is Keras Sequential fit the same as several train_on_batch calls? | 56,476,357 | 1.2 | python,tensorflow,keras | If I do model.fit(x, y, epochs=5) is this the same as
for i in range(5) model.train_on_batch(x, y)?
Yes.
Your understanding is correct.
There are a few more bells and whistles to .fit() (we can, for example, artificially control the number of batches to consider an epoch rather than exhausting the whole dataset) but, fundamentally, you are correct. | Just confused as to the differences between keras.sequential train_on_batch and fit. Is the only difference that, with train_on_batch, you automatically pass over the data only once whereas with fit you specify this with the no. of epochs?
If I do model.fit(x, y, epochs=5) is this the same as
for i in range(5)
model.train_on_batch(x, y)? | 0 | 1 | 49 |
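A sketch (not part of the original post) of the two forms side by side, assuming model is already compiled and x, y hold a single batch:
# Option 1: let Keras iterate (one update per epoch because the batch is the whole data).
model.fit(x, y, epochs=5, batch_size=len(x))

# Option 2: an equivalent manual loop over the same single batch.
for epoch in range(5):
    loss = model.train_on_batch(x, y)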
0 | 56,484,661 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-06-06T12:14:00.000 | 0 | 1 | 0 | Scipy spline interpolation: Determine array length of vector of knots / B-spline coefficients in tck before actual computation | 56,477,397 | 1.2 | python,arrays,scipy,spline | Short answer: no, not easily. Dierckx Fortran library, which splrep wraps, uses some fairly non-trivial logic for determining the knot vector, and it's all baked into the Fortran code. So, the only way is to carefully trace the latter. It's available from netlib, also scipy/interpolate/fitpack | Is it somehow possible to determine the array length of the arrays in the tck tuple returned by scipy.interpolate.splprep before computing the values?
I have to fit a spline interpolation to noisy data with 5 million data points (or less, can be varying).
My observation is that the interpolation at an array length of ~ 90 is pretty good, while it takes a long time to compute the interpolation for higher array lengths (it sometimes also directly jumps from ~ 90 to ~ 1000 while making s one step smaller and the interpolation also becomes noisy) and it is not appropriate enough, if the array length is far less (<50)...
Actually, this array length depends on the smoothing factor s provided to the splprep function, but for different measurement data, s varies a lot to get a consistent array length of around 90. E.g. for data1 s has a value of around 1000 to get len(cfk[0]) equals to 90, for data2 s has a value of around 100 to get len(cfk[0]) equals to 90 at same lengths of data1 and data2. It might be dependent on the noise of the data...
I have thought about a loop where s starts at some point and decreases through the loop while len(cfk[0]) is constantly being checked - but this takes ages, especially if len(cfk[0]) gets closer to 90.
Therefore, it would be useful to somehow know the smoothing factor to get the desired array length before computing the cfk tuple. | 0 | 1 | 161 |
0 | 56,499,818 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-06-06T12:39:00.000 | 0 | 1 | 0 | Why a convolution neural network model which works well on new test images fails on video stream? | 56,477,793 | 0 | python,image-processing,video-streaming,conv-neural-network,video-processing | Assuming the neural network works well on images, it should work the same on frames of a video stream. In the end, a video stream is a sequence of images.
The problem is not that it doesn't work on a video stream; it simply does not work on the kind of images that appear in the video stream.
It is hard to exactly find the problem, since the question does not have enough detail. However, some considerations are:
Obviously, there is a problem with the network's ability to generalize. Was the testing performed well? For example, is there a train-validation split of the data?
Does the training error and the validation error indicate any possible issues, such as overfitting?
Is the data used to train the model similar enough to the video frames?
Is the training dataset large enough? Data augmentation might help if there is not enough data. | I have implemented a convolutional neural network by transfer learning using VGG19 to classify 5 different traffic signs. It works well with new test images, but when I apply the model upon video streaming it doesn't classify them correctly. | 0 | 1 | 30 |
0 | 56,480,077 | 0 | 1 | 0 | 0 | 3 | false | 2 | 2019-06-06T14:39:00.000 | 1 | 3 | 0 | Difference between tf.constant and tf.Variable (trainable= False) | 56,479,870 | 0.066568 | python,tensorflow | If you declare something with tf.constant() you won't be able to change the value in future. But, tf.Variable() let's you change the variable in future. You can assign some other value to it. If it is not trainable, then the gradient won't flow through it. | I came across some code where tf.Variable(... trainable=False) was used and I wondered whether there was any difference between using tf.constant(...) and tf.Variable(with the trainable argument set to False)
It seems a bit redundant to have the trainable argument option available with tf.constant is available. | 0 | 1 | 739 |
0 | 56,480,023 | 0 | 1 | 0 | 0 | 3 | false | 2 | 2019-06-06T14:39:00.000 | 1 | 3 | 0 | Difference between tf.constant and tf.Variable (trainable= False) | 56,479,870 | 0.066568 | python,tensorflow | There may be other differences, but one that comes to mind is that, for some TF graphs, you want a variable to be trainable sometimes and frozen other times. For example, for transfer learning with convnets you want to freeze layers closer to the inputs and only train layers closer to the output. It would be inconvenient, I suppose, if you had to swap out all the tf.Variable layers for tf.constant layers. | I came across some code where tf.Variable(... trainable=False) was used and I wondered whether there was any difference between using tf.constant(...) and tf.Variable(with the trainable argument set to False)
It seems a bit redundant to have the trainable argument option available with tf.constant is available. | 0 | 1 | 739 |
0 | 56,480,281 | 0 | 1 | 0 | 0 | 3 | true | 2 | 2019-06-06T14:39:00.000 | 2 | 3 | 0 | Difference between tf.constant and tf.Variable (trainable= False) | 56,479,870 | 1.2 | python,tensorflow | Few reasons I can tell you off the top of my head:
If you declare a tf.Variable, you can change its value later on if you want to. On the other hand, tf.constant is immutable, meaning that once you define it you can't change its value.
Let's assume that you have a neural network with multiple weight matrices; for the first few epochs you want to train only the last layer, while keeping all the rest frozen. After that, for the last few epochs, you want to fine-tune the whole model. If the first layers are defined as tf.constant, you can't do that. | I came across some code where tf.Variable(... trainable=False) was used and I wondered whether there was any difference between using tf.constant(...) and tf.Variable(with the trainable argument set to False)
It seems a bit redundant to have the trainable argument option available with tf.constant is available. | 0 | 1 | 739 |
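A small illustration (not from the original answers), assuming TensorFlow 2.x eager execution:
import tensorflow as tf

c = tf.constant(3.0)                    # immutable: no way to change it later
v = tf.Variable(3.0, trainable=False)   # mutable, but excluded from trainable variables

v.assign(4.0)                           # fine: the value can still be changed later
# c has no assign method; to "change" it you have to build a new tensor.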
0 | 56,482,246 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-06-06T17:09:00.000 | 0 | 1 | 0 | Is there an efficient python implementation of spectral clustering for large, dense matrices? | 56,482,181 | 0 | python,bigdata | I'd recommend performing PCA to project the data to a lower dimensionality , and then utilize mini batch k-means | Currently I'm using the spectral clustering method from sklearn for my dense 7000x7000 matrix which performs very slowly and exceeds an execution time of 6 hours. Is there a faster implementation of spectral clustering in python? | 0 | 1 | 244 |
0 | 56,495,157 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-06-07T13:15:00.000 | 0 | 1 | 0 | How to train a model with data with only one label | 56,495,100 | 0 | python,machine-learning,supervised-learning | Sounds to me like you need to shuffle that. The dataset you have has inherent information coded in the structure of the data (Player 1 always wins). You have no way to recreate this information at runtime.
What you want is a dataset where the order of the player information is not important, and a label 0/1 determining whether player 1 or player 2 will win. | I am trying to build a model to predict the outcome (win or lose) of a tennis match, as an exercise. I am using Python, Pandas and scikit-learn.
The dataset I have has the two players ID and the result of the match, among other quantities.
In my case, because of the way the database is organized, Player1 is always the winner and Player2 the loser. So, if I have to label the data, it will always be the same label (1, for instance).
What do you think is better:
to try to train a model with a single-valued training set (for instance a 1-label SVM)
to randomly shuffle the data, in order to place some of the Player2 entries as Player1 and vice versa, and so have another label (0, for instance)?
Thanks a lot! | 0 | 1 | 460 |
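A short sketch (not part of the original answer) of the shuffling idea with pandas/NumPy; df and the column names player1/player2 are assumed stand-ins:
import numpy as np

# df has one row per match, with player1 always the winner.
swap = np.random.rand(len(df)) < 0.5

p1 = df['player1'].where(~swap, df['player2'])
p2 = df['player2'].where(~swap, df['player1'])

df_shuffled = df.assign(player1=p1, player2=p2)
df_shuffled['player1_wins'] = (~swap).astype(int)   # 1 where player1 is still the winner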
0 | 57,731,033 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2019-06-07T14:37:00.000 | 1 | 2 | 0 | partitionBy taking too long while saving a dataset on S3 using Pyspark | 56,496,387 | 0.099668 | python,apache-spark,amazon-s3,pyspark,amazon-emr | Use version 2 of the FileOutputCommiter
.set("mapreduce.fileoutputcommitter.algorithm.version", "2") | I am trying to save a dataset using partitionBy on S3 using PySpark. I am partitioning on a date column. The Spark job is taking more than an hour to execute. If I run the code without partitionBy it just takes 3-4 minutes.
Could somebody help me fine-tune the partitionBy? | 0 | 1 | 1,717
0 | 59,047,316 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-06-07T21:11:00.000 | 0 | 1 | 0 | Tensorflow no_grad concept | 56,501,260 | 0 | python,tensorflow,pytorch | If you don't want to train certain Variables in TensorFlow you can achieve this behaviour by adding trainable=False to Variables. | I know with pytorch you can turn off training by calling eval() on your model.
Also you can set requires_grad=False.
How can you ensure that a TensorFlow element is not modified during training? | 0 | 1 | 670 |
0 | 56,501,994 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-06-07T22:45:00.000 | 0 | 2 | 0 | Pandas is replacing rows with FALSE and TRUE with False and True | 56,501,959 | 0 | python,pandas | I think the output dataframe of read_csv already convert the columns to boolean values. You can verify it by calling df.info(). If you want to keep the columns as string values you need to pass a dict to the dtype parameter to specify it explicitly. | using pd.read_csv("my.csv"), I have certain rows that appear as either TRUE or FALSE. read_csv is changing these rows in the dataframe as "True" and "False". Is there any way to keep case sensitivity when reading a CSV for true and false values? | 0 | 1 | 162 |
0 | 56,651,186 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-06-08T04:00:00.000 | 0 | 1 | 0 | One gpu uses more memory than others during training | 56,503,303 | 0 | python,memory,gpu,pytorch,multi-gpu | DataParallel splits the batch and sends each split to a different GPU, each GPU has a copy of the model, then the forward pass is computed independently and then the outputs of each GPU are collected back to one GPU instead of computing loss independently in each GPU.
If you want to mitigate this issue you can include the loss computation in the DataParallel module.
If doing this is still an issue, then you might want model parallelism instead of data parallelism: move different parts of your model to different GPUs using .cuda(gpu_id). This is useful when the weights of your model are pretty large. | I use multigpu to train a model with pytorch. One gpu uses more memory than others, causing "out-of-memory". Why would one gpu use more memory? Is it possible to make the usage more balanced? Is there other ways to reduce memory usage? (Deleting variables that will not be used anymore...?) The batch size is already 1. Thanks. | 0 | 1 | 186 |
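A sketch (not from the original answer) of the first suggestion: compute the loss inside the module so DataParallel also spreads that work and memory across the GPUs; model, criterion, inputs and targets are stand-ins:
import torch.nn as nn

class ModelWithLoss(nn.Module):
    # Wraps a model so that the loss is computed inside forward().
    def __init__(self, model, criterion):
        super().__init__()
        self.model = model
        self.criterion = criterion

    def forward(self, inputs, targets):
        outputs = self.model(inputs)
        return self.criterion(outputs, targets)

wrapped = nn.DataParallel(ModelWithLoss(model, nn.CrossEntropyLoss()).cuda())
loss = wrapped(inputs, targets).mean()   # one loss value per GPU, averaged here
loss.backward()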
0 | 56,506,873 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-06-08T13:22:00.000 | -5 | 2 | 0 | Does Opencv allow you to compute the 3x4 perspective transformation using 6 points? | 56,506,815 | -1 | python,opencv,computational-geometry | Yes of course there is. Just look for the computeThreeByFourMatrix() function in the OpenCV library documentation. It is all there | I want to compute a 3x4 matrix transformation, in homogeneous coordinates, that transforms 3d world points to 2d image points. My problem is that in the documentation and tutorials of the function getPerspectiveTransformation the default matrices are either 3x3 for perspective or 2x3 in affine transformations.
Is there any built in function to compute the 3x4 matrix?
I have read the tutorials, some books on computer vision and I know the matrix that transforms 3d to 2d is 3x4. It is necessary to input 6 points to get this matrix and in the examples I find for opencv they are using 4, so I guess this is a 2d to 2d transformation not what I need.
cv2.getPerspectiveTransform
cv2.getAffineTransform
I used these functions but it didn't work, I used cv2.circle function to plot what was supposed to be a straigh line on the image and I got curved lines. Obviously I was not using the proper matrix transformation. | 0 | 1 | 304 |
0 | 56,517,433 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-06-08T17:25:00.000 | 0 | 1 | 0 | Creating train,test data for Word2Vec model | 56,508,631 | 0 | python,gensim,word2vec | Word2Vec is considered an 'unsupervised' algorithm, so at least during its training, it is not typical to hold back any 'test' data for later evaluation.
A Word2Vec model is usually then evaluated on how well it helps some other process - such as the analogy-solving highlighted by the original paper. In gensim, the method evaluate_word_analogies() can repeat that process. But note: word-vectors that perform best on word-analogies may not be best for other purposes, like classification or info-retrieval. It's always best to evaluate & tune your word-vectors in a repeatable way that's related to your actual underlying use.
(If you're using the Word2Vec model's outputs - word-vectors specific to your domain – as part of a larger system, where some steps should be evaluated with held-back data, the decision of whether to train the Word2Vec component on all data could go either way, depending on other considerations.) | I am trying to create a W2V model and then generate train and test data to be used for my model. My question is how can I generate test data after I am done with creating a W2V model with my train data. | 0 | 1 | 1,283
0 | 56,509,709 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-06-08T19:04:00.000 | 0 | 1 | 0 | How can I stop networkx to change the source and the target node? | 56,509,345 | 0 | python,pandas,networkx | If you mean the order has changed, check out nx.OrderedGraph | I make a Graph (not Digraph) from a data frame (Huge network) with networkx.
I used this code to create my graph:
nx.from_pandas_edgelist(R,source='A',target='B',create_using=nx.Graph())
However, in the output, when I check the edge list, the source and target nodes have been changed based on the sort, and I don't know how to keep them the way they were in the dataframe (I need the source and target nodes to stay as they were in the dataframe). | 0 | 1 | 309
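A one-line sketch of the suggestion above, assuming a networkx version that still provides OrderedGraph (it was deprecated and later removed, so check your version):
G = nx.from_pandas_edgelist(R, source='A', target='B', create_using=nx.OrderedGraph())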
0 | 56,521,630 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-06-08T19:22:00.000 | 2 | 1 | 0 | After installing Tensorflow 2.0 in a python 3.7.1 env, do I need to install Keras, or does Keras come bundled with TF2.0? | 56,509,459 | 0.379949 | python-3.x,tensorflow2.0,tf.keras | In Tensorflow 2.0 there is strong integration between TensorFlow and the Keras API specification (TF ships its own Keras implementation, that respects the Keras standard), therefore you don't have to install Keras separately since Keras already comes with TF in the tf.keras package. | I need to use Tensorflow 2.0(TF2.0) and Keras but I don't know if it's necessary to install both seperately or just TF2.0 (assuming TF2.0 has Keras bundled inside it). If I need to install TF2.0 only, will installing in a Python 3.7.1 be acceptable?
This is for Ubuntu 16.04 64 bit. | 0 | 1 | 176 |
0 | 56,524,739 | 0 | 1 | 0 | 0 | 1 | true | 2 | 2019-06-09T23:02:00.000 | 0 | 1 | 0 | What's the purpose of TensorFlow specific data types? | 56,518,982 | 1.2 | python,tensorflow | In short because TensorFlow is not executed by the python interpreter (at least not in general).
Python provides but one possible API to interact with TensorFlow. The core of TensorFlow itself is compiled (written mostly in C++) where python datatypes are not available. Also, (despite recent advances allowing eager execution) we need to be able to first create an execution graph, then to pass data through it. That execution graph needs to be aware of datatypes but is language agnostic | For example, Why use tf.int32? Why not just use Python builtin integers? | 0 | 1 | 41 |
0 | 56,520,260 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-06-09T23:29:00.000 | 0 | 1 | 0 | Can we limit the value of a response variable in any machine learning algorithm? | 56,519,098 | 0 | python,machine-learning | It depends on the meaning of your response variable, considering you are using linear regression. But in a general function y=f(x), you can add a Softmax function y=Softmax(f(x)) to make sure y in (0, 1). If you replace Softmax with sigmoid and use it for regression, then you get a Logistic regression, they you can limit the response value to (0, 1). | I am working on a problem in which my response variable is a relative power whose value cannot go beyond 100%. When I use linear regression or any other machine-learning algorithms, the predicted value goes beyond 100% and I want to limit that to 100%. Is there any way we can achieve that? | 0 | 1 | 36 |
0 | 56,543,927 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-06-10T03:37:00.000 | 1 | 2 | 0 | How to Identify Each Components from Audio Signal? | 56,520,227 | 0.099668 | python,machine-learning,signal-processing | Well...
If your shaft is rotating at, say 1200 RPM or 20 Hz, then all the significant sound produced by that rotation should be at harmonics of 20Hz.
If the turbine has 3 perfect blades, however, then it will be in exactly the same configuration 3 times for every rotation, so all of the sound produced by the rotation should be confined to multiples of 60 Hz.
Energy at the other harmonics of 20 Hz -- 20, 40, 80, 100, etc. -- that is above the noise floor would generally result from differences between the blades.
This of course ignores noise from other sources that are also synchronized to the shaft, which can mess up the analysis. | I have some audio files recorded from wind turbines, and I'm trying to do anomaly detection. The general idea is if a blade has a fault (e.g. cracking), the sound of this blade will differ with other two blades, so we can basically find a way to extract each blade's sound signal and compare the similarity / distance between them, if one of this signals has a significant difference, we can say the turbine is going to fail.
I only have some faulty samples, labels are lacking.
However, there seems to be no one doing this kind of work, and I ran into lots of trouble while attempting it.
I've tried using stft to convert the signal to power spectrum, and some spikes show. How to identify each blade from the raw data? (Some related work use AutoEncoders to detect anomaly from audio, but in this task we want to use some similarity-based method.)
Does anyone have a good idea? Is there any related work / paper to recommend? | 0 | 1 | 48
0 | 56,520,450 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-06-10T04:05:00.000 | 3 | 2 | 0 | Different values between pandas df.size and len(df.to_dict("records")) | 56,520,381 | 0.291313 | python,python-3.x,pandas | size displays the total number of values, while len displays the length (number of rows) of the DataFrame.
E.g. if you have a 3*2 DataFrame (3 rows and 2 columns),
size will be "6" and len will be "3". | Why might the values between df.size and len(df.to_dict("records")) be different? I find the value of df.size=58151429 while my len(df.to_dict("records"))=2528323 which is quite a big difference. Why can that be? | 0 | 1 | 696
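A tiny check (not part of the original post) of the 3x2 example from the answer above:
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})   # 3 rows x 2 columns

print(df.size)                      # 6  -> rows * columns
print(len(df))                      # 3  -> number of rows
print(len(df.to_dict('records')))   # 3  -> one dict per row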
0 | 56,526,165 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-06-10T11:57:00.000 | 0 | 2 | 0 | How can I calculate the coordinates of vertices of an zebra crossing area from the coordinates of vertices of zebra stripe? | 56,525,947 | 0 | python,computer-vision,computational-geometry | As you told, you know the coordinates of each individual stripe of the zebra crossing. So now you can determine the first and last stripes by looking at max and min coordinates of all vertices(By considering a reference axis from which you can measure distance). then you know coordinates of terminal stripes and hence you can make the outline by considering the coordinates and hence making a larger rectangle from those four coordinates determining the whole zebra crossing. | I am doing a zebra crossing detection problem, and now I've already known the vertices of each zebra stripe, as a list of points. How can I efficiently calculate the coordinates of the vertices of the outline rectangle containing those zebra stripes?
I am doing it in 3D
I've been thinking about this question for days, and cannot figure out a solution other than brute force...
That's a different problem from finding the bounding box of a given list of points. For this task, the return would be four of those zebra stripes' vertices. I just need to find them out.
Any help or pointers would be valuable!
UPDATE: I finally sorted those zebra crossings by orientation and found the terminal zebra stripes easily. The rest of the work is trivial. | 0 | 1 | 85
0 | 56,526,776 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2019-06-10T11:57:00.000 | 0 | 2 | 0 | How can I calculate the coordinates of vertices of an zebra crossing area from the coordinates of vertices of zebra stripe? | 56,525,947 | 1.2 | python,computer-vision,computational-geometry | From what you say, it seems that you have the 3D coordinates of the outline of a rectangle. I will assume Cartesian coordinates and undistorted geometry.
The points belong to a plane, which you can determine by 3D plane fitting. Then by an orthogonal change of variables, you can project the points onto that plane.
For reasonably good accuracy, you can
find the centroid of the points;
find the point the most distant from the centroid;
split the point set by means of the line from the centroid to that point;
on both halves, find the most distant points from the centroid;
the line that joins them allows you to further split in four quadrants;
in every quadrant, apply line fitting to find the edges.
If what you are after is the bounding box of several stripes, you can proceed as above to find the directions of the sides. Then apply a change of coordinates to bring those sides axis-aligned. Finding the bounding box is now straightforward.
Undo the transforms to obtain the 3D coordinates of the rectangle. | I am doing a zebra crossing detection problem, and now I've already known the vertices of each zebra stripe, as a list of points. How can I efficiently calculate the coordinates of the vertices of the outline rectangle containing those zebra stripes?
I am doing it in 3D
I've been thinking about this question for days, and cannot figure out a solution other than brute force...
That's a different problem from finding the bounding box of a given list of points. For this task, the return would be four of those zebra stripes' vertices. I just need to find them out.
Any help or pointers would be valuable!
UPDATE: I finally sorted those zebra crossings by orientation and found the terminal zebra stripes easily. The rest of the work is trivial. | 0 | 1 | 85
0 | 56,526,556 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-06-10T12:31:00.000 | 0 | 2 | 0 | Packaging tensorflow models as wheel files | 56,526,497 | 0 | python,tensorflow,keras,setup.py,python-wheel | You can send only the frozen inference graph in .pb format. | I have created my tensorflow model which will act as a serve. Code will be hosted on client's local server. I don't want to give them my code but give them a wheel file. But after following python's package distribution my tensorflow files become corrupted. | 1 | 1 | 120 |
0 | 56,531,971 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-06-10T18:18:00.000 | 0 | 1 | 0 | tensorflow dataset cache cross validation | 56,531,580 | 0 | python-3.x,tensorflow,tensorflow-datasets | Answering my own question, to do this, I can create a pipeline for each file, cache each pipeline on disk, put them into deque, then use tf.data.experimental.sample_from_datasets. | I have a very expensive data pipeline. I want to use tf.data.Dataset.cache to cache the first epoch dataset to disk. Then speed up the process. The reason I'm doing this instead of saving the dataset into tfrecords is
1) I change many parameters doing the processing every time, it is more convenient for me to cache it on the fly
2) I'm doing cross-validation so I don't know which files to process
I have a naive solution - to create a pipeline for each fold of the training files, but that takes a lot of space to cache (I'm doing 10 fold) that's equivalent of 1TB in total.
Is there any other way to do this more efficiently both in space and in time? | 0 | 1 | 218 |
0 | 56,533,007 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2019-06-10T19:39:00.000 | 0 | 2 | 0 | Python point cloud data to surface fit/function | 56,532,588 | 0 | python,python-3.x,mesh,point-clouds,surface | I don't know if creating a single function for the entire surface is the correct approach?
I guess this depends on your data. Let's assume the base form of your surface is spherical. Then you can model it as such.
If your surface is more complex then a sphere you might can still model the neighborhood of (x,y) as such. Maybe you could even consider your surface as plain in the near neighborhood of (x,y). | I have unstructured (taken in no regular order) point cloud data (x,y,z) for a surface. This surface has bulges (+z) and depressions (-z) scattered around in an irregular fashion. I would like to generate some surface that is a function of the original data points and then be able to input a specific (x,y) and get the surface roughness value from it (z value). How would I go about doing this?
I've looked at scipy's interpolation functions, but I don't know if creating a single function for the entire surface is the correct approach? Is there a technical name for what I am trying to do? I would appreciate any suggestions/direction. | 0 | 1 | 847 |
0 | 56,534,865 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2019-06-10T19:39:00.000 | -1 | 2 | 0 | Python point cloud data to surface fit/function | 56,532,588 | -0.099668 | python,python-3.x,mesh,point-clouds,surface | What you are trying to do, can be called surface fitting, or two-dimensional curve fitting. You would be able to find lots of available algorithms by searching for those terms. Now, the choice of the particular algorithm/method should be dictated:
by the origin of your data (there are specialized algorithms or variations of more common ones that are tailored for certain application areas)
by the future use of your data (depending on what you are going to do with it, maybe you need to be able to calculate derivatives easily, etc)
It is not easy to represent complicated data (especially noisy data) using a single function. Thus there is a lot of research about it. However, in a lot of applications curve fitting is very successful and very widely used. | I have unstructured (taken in no regular order) point cloud data (x,y,z) for a surface. This surface has bulges (+z) and depressions (-z) scattered around in an irregular fashion. I would like to generate some surface that is a function of the original data points and then be able to input a specific (x,y) and get the surface roughness value from it (z value). How would I go about doing this?
I've looked at scipy's interpolation functions, but I don't know if creating a single function for the entire surface is the correct approach? Is there a technical name for what I am trying to do? I would appreciate any suggestions/direction. | 0 | 1 | 847 |
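As one concrete illustration of the scipy interpolation route mentioned in the question (not necessarily the best model for noisy roughness data), scattered (x, y, z) points can be interpolated and then queried at an arbitrary (x, y); the data below is synthetic.

```python
import numpy as np
from scipy.interpolate import griddata

# Synthetic stand-in for the measured point cloud: x, y, z each of shape (N,)
rng = np.random.default_rng(0)
x, y = rng.uniform(0.0, 10.0, size=(2, 500))
z = np.sin(x) * np.cos(y) + 0.05 * rng.standard_normal(500)

points = np.column_stack([x, y])

def surface_z(xq, yq):
    # Interpolated z value of the fitted surface at the query point(s)
    return griddata(points, z, (np.atleast_1d(xq), np.atleast_1d(yq)), method="cubic")

print(surface_z(3.2, 7.1))
```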
0 | 56,695,079 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-06-11T02:31:00.000 | 0 | 1 | 0 | How to load NTU rgbd dataset? | 56,535,700 | 0 | python,machine-learning | The overall size of the dataset is 1.3 TB and this size will decrease after processing the data and converting it into numpy arrays or something else.
But I do not think you will work on the entire dataset; which part of the dataset do you want to work on? | We are working on early action prediction, but we are unable to understand the dataset itself. The NTU RGB+D dataset is 1.3 TB; my laptop's hard disk is 931 GB.
First problem: how to deal with such a big dataset?
Second problem: how to understand the dataset?
Third problem: how to load the dataset?
Thanks for the help | 0 | 1 | 128 |
0 | 56,546,713 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-06-11T14:52:00.000 | 1 | 1 | 0 | Statsmodels.api doesn't import | 56,546,431 | 1.2 | python-3.x,statsmodels | From the error it looks as though there is not a function called factorial within the misc directory of the scipy package.
Have you tried opening up the __init__.py file specified in the error and looking through the misc directory to find the factorial function? | That's it. It installs, I can import statsmodels, but statsmodels.api doesn't import.
I've tried installing with pip and conda, both give me version 0.9.0 and everything is fine.
I've installed all the dependencies, statsmodels works, but statsmodels.api doesn't.
import statsmodels.api
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\user\AppData\Local\Programs\Python\Python37-32\lib\site-packages\statsmodels\api.py", line 16, in <module>
    from .discrete.discrete_model import (Poisson, Logit, Probit,
  File "C:\Users\user\AppData\Local\Programs\Python\Python37-32\lib\site-packages\statsmodels\discrete\discrete_model.py", line 45, in <module>
    from statsmodels.distributions import genpoisson_p
  File "C:\Users\user\AppData\Local\Programs\Python\Python37-32\lib\site-packages\statsmodels\distributions\__init__.py", line 2, in <module>
    from .edgeworth import ExpandedNormal
  File "C:\Users\user\AppData\Local\Programs\Python\Python37-32\lib\site-packages\statsmodels\distributions\edgeworth.py", line 7, in <module>
    from scipy.misc import factorial
ImportError: cannot import name 'factorial' from 'scipy.misc' (C:\Users\user\AppData\Local\Programs\Python\Python37-32\lib\site-packages\scipy\misc\__init__.py) | 0 | 1 | 178
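A hedged note on that traceback, beyond what the accepted answer says: in recent scipy releases factorial lives in scipy.special rather than scipy.misc, which is the import statsmodels 0.9.0 attempts; a quick check looks like this.

```python
import scipy
print(scipy.__version__)              # on scipy >= 1.3 the old location no longer exists

from scipy.special import factorial   # current home of factorial
# from scipy.misc import factorial    # the import statsmodels 0.9.0 tries, raising the error above
```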
0 | 56,548,032 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-06-11T16:14:00.000 | 0 | 4 | 0 | Graphing multiple csv lists into one graph in python | 56,547,899 | 0 | python,pandas,csv,matplotlib,graph | Read the first file and create a list of lists in which each list is filled with two columns of that file. Then read the other files one by one and append their y column to the corresponding index of this list. | I have 5 csv files that I am trying to put into one graph in python. In the first column of each csv file, all of the numbers are the same, and I want to treat these as the x values for each csv file in the graph. However, there are two more columns in each csv file (to make 3 columns total), but I just want to graph the second column as the 'y-values' for each csv file on the same graph, and ideally get 5 different lines, one for each file. Does anyone have any ideas on how I could do this?
I have already uploaded my files to the variable file_list | 0 | 1 | 932 |
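A small sketch of that idea with pandas and matplotlib, assuming each file in file_list has no header row, with the shared x values in the first column and the desired y values in the second; the file names are illustrative.

```python
import pandas as pd
import matplotlib.pyplot as plt

file_list = ["a.csv", "b.csv", "c.csv", "d.csv", "e.csv"]   # illustrative names

for path in file_list:
    df = pd.read_csv(path, header=None)
    plt.plot(df[0], df[1], label=path)   # column 0 as shared x, column 1 as this file's y

plt.legend()
plt.show()
```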
0 | 56,561,928 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-06-12T11:39:00.000 | 1 | 2 | 0 | Is there any way in OCR/tesseract/OpenCV for extracting text from a particular region of an image? | 56,561,357 | 0.099668 | python,artificial-intelligence,ocr,tesseract,text-extraction | It looks like you are new to this, so let me give you a quick walkthrough of the terms used in your question.
OCR is optical character recognition, a concept.
Tesseract is a specialized library that handles OCR.
OpenCV is an image-processing library that helps with object detection and recognition.
Yes, you can extract the text from an image if it is more than 300 dpi by using the Tesseract library,
but before that you should train the Tesseract model on that font if the font of the text is very new or unknown to the system.
Also keep in mind that if you box-image the text before calling Tesseract, it will work more accurately.
Certain terms (box image, dpi) may raise questions, but these are pivotal concepts for your work.
My suggestion: if you want to extract the digits from the image, go step by step.
Process the image by enhancing its quality.
Detect the region which you want to extract.
Find the contour and area.
Pass it to the box-image editor and tune the parameters.
Finally, give it to Tesseract. | I'm setting up a new invoice extraction method using AI. I am able to recognize "Total"/"Company Details" from invoice images, but I need help with extracting data from that particular recognized region of the invoice image by specifying an area in the image (Xmin, Xmax, Ymin, Ymax). | 0 | 1 | 950
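A minimal sketch of extracting text from one detected region with OpenCV and pytesseract, assuming the detector already supplies pixel bounds (Xmin, Xmax, Ymin, Ymax) and that Tesseract and pytesseract are installed; the file name and bounds are illustrative.

```python
import cv2
import pytesseract

img = cv2.imread("invoice.png")
xmin, xmax, ymin, ymax = 100, 400, 50, 120       # illustrative bounds from the detector

roi = img[ymin:ymax, xmin:xmax]                   # crop the recognized region
roi = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)       # simple preprocessing before OCR
text = pytesseract.image_to_string(roi, config="--psm 6")   # treat crop as a single text block
print(text)
```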