Pyodbc - The specified DSN contains an architecture mismatch between the Driver and Application
Question: I'm trying to connect to an MS Access database (.accdb file) via Python.
I used pyodbc to make the connection:
import pyodbc
conn = pyodbc.connect("DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\\test_db.accdb")
However, I got the following error:
('IM002', '[IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified (0) (SQLDriverConnect)')
I went to the ODBC Data Source Administrator and when I tried to configure or
remove the Driver I got the message:
Errors Found:
The specified DSN contains an architecture mismatch between the Driver and Application
I found that this error is caused by an incompatibility between the Windows
version (Windows 7, 64-bit) and the Microsoft Access version (Office 2010,
32-bit). I tried to reinstall the driver several times, with both the 32-bit
and 64-bit versions, but the problem wasn't solved. Could you please help me
solve this problem? Thank you in advance.
Answer: You have to make sure the Python version matches the ODBC driver version:
32-bit with 32-bit, 64-bit with 64-bit.
It looks like you have 64-bit Python / pyodbc and 32-bit MS Access.
What you'll need to do is install the 32-bit Python version, and then install
`pyodbc`.
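To quickly check which bitness the interpreter you are running has, a standard-library one-liner (works on both Python 2 and 3):
    import struct
    print(struct.calcsize("P") * 8)  # pointer size in bits: prints 32 or 64
The same check inside the 32-bit installation should then print 32, matching the 32-bit Access driver.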
Good luck!
|
Exceptions when reading tutorial CSV file in the Cloudera VM
Question: I'm trying to do a Spark tutorial that comes with the Cloudera Virtual
Machine, but even though I'm using the correct line-ending encoding, I cannot
execute the scripts because I get tons of errors. The tutorial is part of the
Coursera [Introduction to Big Data Analytics](https://www.coursera.org/learn/bigdata-analytics/) course.
The assignment [can be found here](http://matthias-heise.eu/programming_assignment_%20dataframe.pdf).
So here's what I did. Install the IPython shell (if not yet done):
sudo easy_install ipython==1.2.1
Open/Start the shell (either with 1.2.0 or 1.4.0):
PYSPARK_DRIVER_PYTHON=ipython pyspark --packages com.databricks:spark-csv_2.10:1.2.0
Set the record delimiter to Windows-style line endings. This is because the file
is Windows-encoded and the course says to do so. If you don't do this, you'll
get other errors.
sc._jsc.hadoopConfiguration().set('textinputformat.record.delimiter','\r\n')
Trying to load the CSV file:
yelp_df = sqlCtx.load(source='com.databricks.spark.csv',header = 'true',inferSchema = 'true',path = 'file:///usr/lib/hue/apps/search/examples/collections/solr_configs_yelp_demo/index_data.csv')
But I get a very long list of errors, which starts like this:
Py4JJavaError: An error occurred while calling o23.load.: java.lang.RuntimeException:
Unable to instantiate
org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient at
org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:472)
The full error message [can be seen here](http://matthias-heise.eu/error_programming_assignment_%20dataframe.txt). And this is the
/etc/hive/conf/hive-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<!-- Hive Configuration can either be stored in this file or in the hadoop configuration files -->
<!-- that are implied by Hadoop setup variables. -->
<!-- Aside from Hadoop setup variables - this file is provided as a convenience so that Hive -->
<!-- users do not have to edit hadoop configuration files (that may be managed as a centralized -->
<!-- resource). -->
<!-- Hive Execution Parameters -->
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://127.0.0.1/metastore?createDatabaseIfNotExist=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>cloudera</value>
</property>
<property>
<name>hive.hwi.war.file</name>
<value>/usr/lib/hive/lib/hive-hwi-0.8.1-cdh4.0.0.jar</value>
<description>This is the WAR file with the jsp content for Hive Web Interface</description>
</property>
<property>
<name>datanucleus.fixedDatastore</name>
<value>true</value>
</property>
<property>
<name>datanucleus.autoCreateSchema</name>
<value>false</value>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://127.0.0.1:9083</value>
<description>IP address (or fully-qualified domain name) and port of the metastore host</description>
</property>
</configuration>
Any help or ideas on how to solve this? I guess it's a pretty common error, but I
couldn't find any solution yet.
One more thing: is there a way to dump such long error messages into a
separate log file?
Answer: It seems there are two problems. First, the Hive metastore was offline
on some occasions. Second, the schema cannot be inferred. Therefore I created a
schema manually and added it as an argument when loading the CSV file. Anyway,
I would love to understand whether this somehow works with inferSchema='true'.
Here's my version with the manually defined schema. First, make sure Hive is
started:
sudo service hive-metastore restart
Then, have a look at the first part of the CSV file to understand its
structure. I used this command line:
head /usr/lib/hue/apps/search/examples/collections/solr_configs_yelp_demo/index_data.csv
Now, open the Python shell (see above for how to do that), then define the
schema:
from pyspark.sql.types import *
schema = StructType([
StructField("business_id", StringType(), True),
StructField("cool", IntegerType(), True),
StructField("date", StringType(), True),
StructField("funny", IntegerType(), True),
StructField("id", StringType(), True),
StructField("stars", IntegerType(), True),
StructField("text", StringType(), True),
StructField("type", StringType(), True),
StructField("useful", IntegerType(), True),
StructField("user_id", StringType(), True),
StructField("name", StringType(), True),
StructField("full_address", StringType(), True),
StructField("latitude", DoubleType(), True),
StructField("longitude", DoubleType(), True),
StructField("neighborhood", StringType(), True),
StructField("open", StringType(), True),
StructField("review_count", IntegerType(), True),
StructField("state", StringType(), True)])
Then load the CSV file, specifying the schema. Note that there is no need to
set the Windows line endings:
yelp_df = sqlCtx.load(source='com.databricks.spark.csv',
header = 'true',
schema = schema,
path = 'file:///usr/lib/hue/apps/search/examples/collections/solr_configs_yelp_demo/index_data.csv')
Then test the result with any method executed on the dataset. I tried getting
the count, which worked perfectly.
yelp_df.count()
Thanks to the help of @yaron, we figured out how to load the CSV with
inferSchema. First, you must set up the hive-metastore correctly:
sudo cp /etc/hive/conf.dist/hive-site.xml /usr/lib/spark/conf/
Then, start the Python shell and DO NOT change the line endings to Windows
encoding. Keep in mind that this setting persists for the whole session.
So, if you changed it to Windows style before, you need to reset it to '\n'.
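A sketch of that reset, mirroring the set call from the question:
    sc._jsc.hadoopConfiguration().set('textinputformat.record.delimiter', '\n')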
Then load the CSV file with inferSchema set to true:
yelp_df = sqlCtx.load(source='com.databricks.spark.csv',
header = 'true',
inferSchema = 'true',
path = 'file:///usr/lib/hue/apps/search/examples/collections/solr_configs_yelp_demo/index_data.csv')
|
How to determine the cause for "BUS-Error"
Question: I'm working on a Variscite board with a Yocto distribution and Python 2.7.3.
I sometimes get a **Bus error** message from the Python interpreter.
My program normally runs for at least some hours or days before the error occurs,
but once I get it, I get it immediately when I try to restart my program.
I have to reboot before the system works again.
My program uses only a serial port, a bit of USB communication and some TCP
sockets.
I can switch to another hardware and get the same problems.
I also ran the Python self-test with
`python -c "from test import testall"`
and I get errors for these two tests:
> test_getattr (test.test_builtin.BuiltinTest) ... ERROR test_nameprep
> (test.test_codecs.NameprepTest) ... ERROR
The self-test always stops at
> test_callback_register_double
> (ctypes.test.test_callbacks.SampleCallbacksTestCase) ... Segmentation fault
But when the system has been running for some hours, the self-test stops earlier at
> ctypes.macholib.dyld Bus error
I checked the RAM with memtester; it seems to be okay.
How can I find the cause of these problems?
Answer: Bus errors are generally caused by applications trying to access memory
that the hardware cannot physically address. In your case there is also a
segmentation fault, which may be caused by dereferencing a bad pointer or
something similar that ends up accessing a memory address that is not
physically addressable. I'd start by root-causing the segmentation fault first,
as the bus error is likely a secondary symptom.
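If the segmentation fault is reproducible, one way to root-cause it is to run the interpreter under gdb and grab a native backtrace at the moment of the crash (a sketch; gdb and Python debug symbols may need to be added to your Yocto image):
    gdb --args python -c "from test import testall"
    (gdb) run
    ... wait for the crash, then:
    (gdb) bt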
|
Returning two randomly chosen groups from a list in python
Question: How would I write code in Python that splits a list of 9 people as evenly
as possible into 2 cars, assigning people to each car randomly?
Essentially I'm looking for a result similar to this:
Car 1: Person8, Person2, Person4, Person7
Car 2: Person5, Person1, Person3, Person6, Person9
Answer: Just shuffle the whole list, then split that list into two chunks, one
with 4 people and one with the remainder:
import random
people = ['foo', 'bar', 'baz', 'eggs', 'ham', 'spam', 'eric', 'john', 'terry']
random.shuffle(people)
car1, car2 = people[:4], people[4:]
If you can't shuffle the list of people in place, use `random.sample()` instead:
people = ['foo', 'bar', 'baz', 'eggs', 'ham', 'spam', 'eric', 'john', 'terry']
shuffled = random.sample(people, len(people))
car1, car2 = shuffled[:4], shuffled[4:]
Demo of the latter approach:
>>> import random
>>> people = ['foo', 'bar', 'baz', 'eggs', 'ham', 'spam', 'eric', 'john', 'terry']
>>> shuffled = random.sample(people, len(people))
>>> shuffled[:4], shuffled[4:]
(['bar', 'baz', 'terry', 'ham'], ['spam', 'eric', 'foo', 'john', 'eggs'])
|
Why does Python's base64.b64decode() ignore gibberish at end of string?
Question: I have a long token that I'm decoding using Python's `base64.b64decode()`
method.
It works. But as you can see below, it returns the same result even if I
insert gibberish characters at the end. Why? Shouldn't these two strings
produce two different decoded results?
>>> import base64
>>> token = "Ti6VXtqWYb8WVuR6m/bnDKyVqS96pvRGH9SqTsC7w1E4ZlcjCK8SDQFWRa2b0q96pAflgZTmao+CeEk9cJFVUq0MgBCBoPMUEdTLwT7AhyAa1xOQf8b9C63+DH3v2L+PqJMPSTPfWRXL5WeOPR+gFJBrAm/658phg6vzhBMNS6wgyiiLqfWUOpWyAlcMRrKu5Yq7mXaloxxFQm6HEVcvrjDVGSdsCHRB0Osby8PttEel5oqFkYq85LfNobE9VaR6Onzowru1lHnTdfEqUT5qabXaw9j9rapT4+in2N1WQt1t+XzBn1xxGLT903FOZQxkf2X7R9sGrhLXzSnBAW5q18T8ZJBsxsq3OryCgKfPEJ3x+uj0LCnoogX/gucVcZDp19HIdvelOQsD5de85U800LCDQFKatd/+VBhh4oRrnefD+6l4WRzjg1h5J2ZNgjUhCtIu6r63zFq5ef7nG60JxdTYPOT1njGfEUNAuNuBW97i98ZfhmiPOZMaINPoEFHJQRG1nMwAYCwcytn053n+7D5Dz6MZxrWwAX3/VS9fT6SduFVQ6X4HJA/+FIH8epcqAkU6M6UVm7sfQwHV/vflVAkGNQFevNwA2+u6erInPTWqL9usz4IU47ekp68xk1BBAYEqE0AKeXaZZVpYJ8CJmbAcdxvMD9+Pchi9lk6ZomzxxLKWEPGcPjFobM8bRDEVbmfP+vYfWwovy/tOo9tqkqc0sAvS5RGp9Q0SBAfBQ9c8TXuwqrDBc0OPG5TTEQQ42Cd9Ky9K2ZHldQkXOc/H0vIWBo2m5aJABvVWambd0oEzGmQHrNYzQxNSKgWSLoh7w8HrUzn9skJQGzU/igt6EOdp617ToBD5G936ByF7Rft+FGKB3jiFeEvke0Fbh3wrsr0xqP9JxL/tr8P2x29hRQauigY2MYwrt0nilET/x88="
>>> base64.b64decode("%sXXXXXXBlahBlahBlah" % (token)) == base64.b64decode(token)
True
If I put the gibberish characters at the beginning, it fails:
>>> base64.b64decode("XXXXXXBlahBlahBlah%s" % (token)) == base64.b64decode(token)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/Cellar/python/2.7.8_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/base64.py", line 76, in b64decode
raise TypeError(msg)
TypeError: Incorrect padding
Answer: [The CPython implementation](https://hg.python.org/cpython/file/2.7/Modules/binascii.c#l403)
stops interpreting when it sees a pad (`=`):
if (this_ch == BASE64_PAD) {
if ( (quad_pos < 2) ||
((quad_pos == 2) &&
(binascii_find_valid(ascii_data, ascii_len, 1)
!= BASE64_PAD)) )
{
continue;
}
else {
/* A pad sequence means no more input.
** We've already interpreted the data
** from the quad at this point.
*/
leftbits = 0;
break;
}
}
An experiment with base64-encoded strings, without and with padding:
>>> base64.decodestring('YWJj') # without a padding
'abc'
>>> base64.decodestring('YWJj' + 'XXX')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/base64.py", line 328, in decodestring
return binascii.a2b_base64(s)
binascii.Error: Incorrect padding
>>> base64.decodestring('YWI=') # with a padding
'ab'
>>> base64.decodestring('YWI=XXX')
'ab'
|
Python FTP Upload calling variable
Question: I'm uploading a file to the FTP server. The actual settings for the upload
are correct, but it isn't uploading the correct filename: the file is stored on
the server with the literal name `filename` instead of the actual capture name.
#!/usr/bin/python
#
# Lightweight Motion Detection using python picamera libraries
# based on code from raspberry pi forum by user utpalc
# modified by Claude Pageau for this working example
# ------------------------------------------------------------
# original code on github https://github.com/pageauc/picamera-motion
# This is sample code that can be used for further development
verbose = True
if verbose:
print "Loading python libraries ....."
else:
print "verbose output has been disabled verbose=False"
import os
import picamera
import picamera.array
import datetime
import time
import ftplib
from PIL import Image
from PIL import ImageFont
from PIL import ImageDraw
from fractions import Fraction
#Constants
SECONDS2MICRO = 1000000 # Constant for converting Shutter Speed in Seconds to Microseconds
# User Customizable Settings
imageDir = "images"
imagePath = "/home/pi/pimotion/" + imageDir
imageNamePrefix = 'capture-' # Prefix for all image file names. Eg front-
imageWidth = 1980
imageHeight = 1080
imageVFlip = False # Flip image Vertically
imageHFlip = False # Flip image Horizontally
imagePreview = False
numberSequence = False
threshold = 10 # How Much pixel changes
sensitivity = 100 # How many pixels change
nightISO = 800
nightShutSpeed = 6 * SECONDS2MICRO # seconds times conversion to microseconds constant
# Advanced Settings not normally changed
testWidth = 100
testHeight = 75
def checkImagePath(imagedir):
# Find the path of this python script and set some global variables
mypath=os.path.abspath(__file__)
baseDir=mypath[0:mypath.rfind("/")+1]
baseFileName=mypath[mypath.rfind("/")+1:mypath.rfind(".")]
# Setup imagePath and create folder if it Does Not Exist.
imagePath = baseDir + imagedir # Where to save the images
# if imagePath does not exist create the folder
if not os.path.isdir(imagePath):
if verbose:
print "%s - Image Storage folder not found." % (progName)
print "%s - Creating image storage folder %s " % (progName, imagePath)
os.makedirs(imagePath)
return imagePath
def takeDayImage(imageWidth, imageHeight, filename):
if verbose:
print "takeDayImage - Working ....."
with picamera.PiCamera() as camera:
camera.resolution = (imageWidth, imageHeight)
# camera.rotation = cameraRotate #Note use imageVFlip and imageHFlip variables
if imagePreview:
camera.start_preview()
camera.vflip = imageVFlip
camera.hflip = imageHFlip
# Day Automatic Mode
camera.exposure_mode = 'auto'
camera.awb_mode = 'auto'
camera.capture(filename)
sftp = ftplib.FTP('ftpdomainname','myftpusername','myftppassword') # Connect
fp = open(filename) # file to send
sftp.storbinary('STOR filename', fp) # Send the file
fp.close() # Close file and FTP
sftp.quit()
if verbose:
print "takeDayImage - Captured %s" % (filename)
return filename
def takeNightImage(imageWidth, imageHeight, filename):
if verbose:
print "takeNightImage - Working ....."
with picamera.PiCamera() as camera:
camera.resolution = (imageWidth, imageHeight)
if imagePreview:
camera.start_preview()
camera.vflip = imageVFlip
camera.hflip = imageHFlip
# Night time low light settings have long exposure times
# Settings for Low Light Conditions
# Set a frame rate of 1/6 fps, then set shutter
# speed to 6s and ISO to approx 800 per nightISO variable
camera.framerate = Fraction(1, 6)
camera.shutter_speed = nightShutSpeed
camera.exposure_mode = 'off'
camera.iso = nightISO
# Give the camera a good long time to measure AWB
# (you may wish to use fixed AWB instead)
time.sleep(10)
camera.capture(filename)
if verbose:
print "checkNightMode - Captured %s" % (filename)
return filename
def takeMotionImage(width, height, daymode):
with picamera.PiCamera() as camera:
time.sleep(1)
camera.resolution = (width, height)
with picamera.array.PiRGBArray(camera) as stream:
if daymode:
camera.exposure_mode = 'auto'
camera.awb_mode = 'auto'
else:
# Take Low Light image
# Set a framerate of 1/6 fps, then set shutter
# speed to 6s and ISO to 800
camera.framerate = Fraction(1, 6)
camera.shutter_speed = nightShutSpeed
camera.exposure_mode = 'off'
camera.iso = nightISO
# Give the camera a good long time to measure AWB
# (you may wish to use fixed AWB instead)
time.sleep( 10 )
camera.capture(stream, format='rgb')
return stream.array
def scanIfDay(width, height, daymode):
data1 = takeMotionImage(width, height, daymode)
while not motionFound:
data2 = takeMotionImage(width, height, daymode)
pCnt = 0L;
diffCount = 0L;
for w in range(0, width):
for h in range(0, height):
# get the diff of the pixel. Conversion to int
# is required to avoid unsigned short overflow.
diff = abs(int(data1[h][w][1]) - int(data2[h][w][1]))
if diff > threshold:
diffCount += 1
if diffCount > sensitivity:
break; #break outer loop.
if diffCount > sensitivity:
motionFound = True
else:
# print "Sum of all pixels=", pxCnt
data2 = data1
return motionFound
def scanMotion(width, height, daymode):
motionFound = False
data1 = takeMotionImage(width, height, daymode)
while not motionFound:
data2 = takeMotionImage(width, height, daymode)
diffCount = 0L;
for w in range(0, width):
for h in range(0, height):
# get the diff of the pixel. Conversion to int
# is required to avoid unsigned short overflow.
diff = abs(int(data1[h][w][1]) - int(data2[h][w][1]))
if diff > threshold:
diffCount += 1
if diffCount > sensitivity:
break; #break outer loop.
if diffCount > sensitivity:
motionFound = True
else:
data2 = data1
return motionFound
def getFileName(imagePath, imageNamePrefix, currentCount):
rightNow = datetime.datetime.now()
if numberSequence :
filename = imagePath + "/" + imageNamePrefix + str(currentCount) + ".jpg"
else:
filename = "%s/%s%04d%02d%02d-%02d%02d%02d.jpg" % ( imagePath, imageNamePrefix ,rightNow.year, rightNow.month, rightNow.day, rightNow.hour, rightNow.minute, rightNow.second)
return filename
def motionDetection():
print "Scanning for Motion threshold=%i sensitivity=%i ......" % (threshold, sensitivity)
isDay = True
currentCount= 1000
while True:
if scanMotion(testWidth, testHeight, isDay):
filename = getFileName(imagePath, imageNamePrefix, currentCount)
if numberSequence:
currentCount += 1
if isDay:
takeDayImage( imageWidth, imageHeight, filename )
else:
takeNightImage( imageWidth, imageHeight, filename )
if __name__ == '__main__':
try:
motionDetection()
finally:
print ""
print "+++++++++++++++"
print "Exiting Program"
print "+++++++++++++++"
print ""
Answer: Instead of `'STOR filename'`, use the actual name of the file:
`sftp.storbinary('STOR ' + filename, fp)`
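Note that `filename`, as built by `getFileName()`, includes the local directory path, and the image should be opened in binary mode so its bytes are not mangled. A sketch of the corrected upload (assuming `os` is imported):
    fp = open(filename, 'rb')  # binary mode for image data
    sftp.storbinary('STOR ' + os.path.basename(filename), fp)  # store under the bare file name
    fp.close()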
|
from EC2 Spark Python how to access S3 file
Question: I have an S3 file which I am trying to access through Python code. I am
submitting my code on an EC2 instance via spark-submit. After starting the
master and slave, I do the submission with the following:
./spark-submit --py-files /home/usr/spark-1.5.0/sbin/test_1.py
I get the following error: urllib2.HTTPError: HTTP Error 403: Forbidden
In test_1.py, I access the S3 file using the following:
import pandas as pd
import numpy as np
import boto
from boto.s3.connection import S3Connection
AWS_KEY = 'XXXXXXDDDDDD'
AWS_SECRET = 'pweqory83743rywiuedq'
aws_connection = S3Connection(AWS_KEY, AWS_SECRET)
bucket = aws_connection.get_bucket('BKT')
for file_key in bucket.list():
print file_key.name
df = pd.read_csv('https://BKT.s3.amazonaws.com/test_1.csv')
The above code works well on my local machine. However, it is not working on
the EC2 instance.
Please let me know if anyone has a solution.
Answer: You cannot access the file using that link because files are private by
default in S3. You can change the permissions, or you can try this:
import pandas as pd
import StringIO
from boto.s3.connection import S3Connection
AWS_KEY = 'XXXXXXDDDDDD'
AWS_SECRET = 'pweqory83743rywiuedq'
aws_connection = S3Connection(AWS_KEY, AWS_SECRET)
bucket = aws_connection.get_bucket('BKT')
fileName = "test_1.csv"
# Saving the file locally and read it.
with open(fileName, 'w+') as writer:
bucket.get_key(fileName).get_file(writer)
with open(fileName, 'r') as reader:
reader = pd.read_csv(reader)
# Without saving the file locally.
content = bucket.get_key(fileName).get_contents_as_string()
reader = pd.read_csv(StringIO.StringIO(content))
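Alternatively, if you want a URL that `pd.read_csv` can fetch directly, boto can generate a temporary pre-signed URL for the private key (a sketch, assuming boto 2.x):
    # Pre-signed URL that embeds temporary credentials; valid for 300 seconds.
    url = bucket.get_key(fileName).generate_url(expires_in=300)
    reader = pd.read_csv(url)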
|
how to subclass google app engine ndb property to support python subclassed objects
Question: From this article <http://stackoverflow.com/a/32107024/5258689>
I have a `dict` subclass that allows attribute-style access to keys (i.e.
`d.key` instead of `d['key']`), as follows:
class Permissions(dict):
"""
Example:
m = Permissions({'first_name': 'Eduardo'}, last_name='Pool', age=24, sports=['Soccer'])
"""
def __init__(self, *args, **kwargs):
super(Permissions, self).__init__(*args, **kwargs)
for arg in args:
if isinstance(arg, dict):
for k, v in arg.iteritems():
self[k] = v
if kwargs:
for k, v in kwargs.iteritems():
self[k] = v
def __getattr__(self, attr):
return self.get(attr)
def __setattr__(self, key, value):
self.__setitem__(key, value)
def __setitem__(self, key, value):
super(Permissions, self).__setitem__(key, value)
self.__dict__.update({key: value})
def __delattr__(self, item):
self.__delitem__(item)
def __delitem__(self, key):
super(Permissions, self).__delitem__(key)
del self.__dict__[key]
**My question is**: how do I create my own **PermissionsProperty()**? Or which
property should I extend to create it?
I want to use this property in my subclassed User object, with a school name
as key and a permissions dict as value (a user can have permissions in
multiple schools):
from webapp2_extras.appengine.auth.models import User as webapp2User
class User(webapp2User):
permissions = PermissionsProperty()
u = User(permissions=Permissions({"school1": {"teacher": True}}))
then I check for user's permissions like:
if user.permissions[someschool].teacher:
#do stuff.....
#or
if user.permissions.someschool.teacher:
#do stuff.....
I've tried to follow this doc
<https://cloud.google.com/appengine/docs/python/ndb/subclassprop> without
success!
So is it even possible? And if so, how? Thank you.
Answer: App Engine's ndb package doesn't support saving dictionaries directly, but
json can be saved in a `JsonProperty`, and dictionaries are easily encoded as
json, so the simplest implementation is a subclass of `JsonProperty` that
returns a `Permissions` instance when accessed.
class PermissionsProperty(ndb.JsonProperty):
def _to_base_type(self, value):
return dict(value)
def _from_base_type(self, value):
return Permissions(value)
This implementation is incomplete though, because JsonProperty will accept
values that aren't Permissions instances, so you need to add a `_validate`
method to ensure that what you're saving is the right type of object.
class PermissionsProperty(ndb.JsonProperty):
def _to_base_type(self, value):
return dict(value)
def _from_base_type(self, value):
return Permissions(value)
def _validate(self, value):
if not isinstance(value, Permissions):
raise TypeError('Expected Permissions instance, got %r' % (value,))
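A minimal usage sketch, assuming the `Permissions` class from the question and a hypothetical `Account` model (the webapp2 `User` subclass should work the same way):
    from google.appengine.ext import ndb

    class Account(ndb.Model):
        permissions = PermissionsProperty()

    account = Account(permissions=Permissions({"school1": {"teacher": True}}))
    fetched = account.put().get()
    # _from_base_type() rebuilds a Permissions instance, so dot access works
    # on the top level; the nested value is a plain dict here.
    if fetched.permissions.school1["teacher"]:
        pass  # do stuff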
|
An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe
Question: I am new to Spark and facing an error while converting a .csv file to a
DataFrame. I am using the pyspark_csv module for the conversion, but it gives
an error. Here is the stack trace; can anyone give me suggestions for
resolving this error?
---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
<ipython-input-16-67fe725a8e27> in <module>()
----> 1 data_df = pycsv.csvToDataFrame(sqlCtx, data_body, sep=",", columns=data_header.split('\t')).cache()
/usr/spark-1.5.0/python/pyspark_csv.py in csvToDataFrame(sqlCtx, rdd, columns, sep, parseDate)
51 rdd_sql = rdd_array.zipWithIndex().filter(
52 lambda r_i: r_i[1] > 0).keys()
---> 53 column_types = evaluateType(rdd_sql, parseDate)
54
55 def toSqlRow(row):
/usr/spark-1.5.0/python/pyspark_csv.py in evaluateType(rdd_sql, parseDate)
177 def evaluateType(rdd_sql, parseDate):
178 if parseDate:
--> 179 return rdd_sql.map(getRowType).reduce(reduceTypes)
180 else:
181 return rdd_sql.map(getRowTypeNoDate).reduce(reduceTypes)
/usr/spark-1.5.0/python/pyspark/rdd.py in reduce(self, f)
797 yield reduce(f, iterator, initial)
798
--> 799 vals = self.mapPartitions(func).collect()
800 if vals:
801 return reduce(f, vals)
/usr/spark-1.5.0/python/pyspark/rdd.py in collect(self)
771 """
772 with SCCallSiteSync(self.context) as css:
--> 773 port = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
774 return list(_load_from_socket(port, self._jrdd_deserializer))
775
/usr/spark-1.5.0/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py in __call__(self, *args)
536 answer = self.gateway_client.send_command(command)
537 return_value = get_return_value(answer, self.gateway_client,
--> 538 self.target_id, self.name)
539
540 for temp_arg in temp_args:
/usr/spark-1.5.0/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
298 raise Py4JJavaError(
299 'An error occurred while calling {0}{1}{2}.\n'.
--> 300 format(target_id, '.', name), value)
301 else:
302 raise Py4JError(
Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 10.0 failed 1 times, most recent failure: Lost task 0.0 in stage 10.0 (TID 20, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/usr/spark-1.5.0/python/lib/pyspark.zip/pyspark/worker.py", line 111, in main
process()
File "/usr/spark-1.5.0/python/lib/pyspark.zip/pyspark/worker.py", line 106, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/usr/spark-1.5.0/python/lib/pyspark.zip/pyspark/serializers.py", line 263, in dump_stream
vs = list(itertools.islice(iterator, batch))
File "/usr/spark-1.5.0/python/pyspark/rdd.py", line 797, in func
yield reduce(f, iterator, initial)
File "/tmp/spark-d85b88bf-e4a4-46b8-8b51-eaf0f03e48ab/userFiles-40f9eb34-4efa-4ffb-aaf5-ebcb24a4ecb9/pyspark_csv.py", line 160, in reduceTypes
b_type = b[col]
IndexError: list index out of range
at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:138)
at org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:179)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:97)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1280)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1268)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1267)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1267)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1493)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1455)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1444)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1813)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1826)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1839)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1910)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:905)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:306)
at org.apache.spark.rdd.RDD.collect(RDD.scala:904)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:373)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
at py4j.Gateway.invoke(Gateway.java:259)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:207)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/usr/spark-1.5.0/python/lib/pyspark.zip/pyspark/worker.py", line 111, in main
process()
File "/usr/spark-1.5.0/python/lib/pyspark.zip/pyspark/worker.py", line 106, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/usr/spark-1.5.0/python/lib/pyspark.zip/pyspark/serializers.py", line 263, in dump_stream
vs = list(itertools.islice(iterator, batch))
File "/usr/spark-1.5.0/python/pyspark/rdd.py", line 797, in func
yield reduce(f, iterator, initial)
File "/tmp/spark-d85b88bf-e4a4-46b8-8b51-eaf0f03e48ab/userFiles-40f9eb34-4efa-4ffb-aaf5-ebcb24a4ecb9/pyspark_csv.py", line 160, in reduceTypes
b_type = b[col]
IndexError: list index out of range
at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:138)
at org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:179)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:97)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
... 1 more
Here is my code; the last statement gives this error while converting from CSV
to DataFrame:
import findspark
findspark.init()
findspark.find()
import pyspark
sc=pyspark.SparkContext(appName="myAppName")
sqlCtx = pyspark.SQLContext
#csv to dataframe
sc.addPyFile('/usr/spark-1.5.0/python/pyspark_csv.py')
import pyspark_csv as pycsv
def skip_header(idx, iterator):
if(idx == 0):
next(iterator)
return iterator
data=sc.textFile('gdeltdata/20160427.CSV')
data_header = data.first()
data_body = data.mapPartitionsWithIndex(skip_header)
data_df = pycsv.csvToDataFrame(sqlCtx, data_body, sep=",", columns=data_header.split('\t'))
Answer: I can't actually comment, but I would have to guess that you're trying to
reference an index that doesn't exist on a sequence that DOES exist. This
would be the same as doing the following:
    string = 'hello'
    new_char = string[6]
This would try to find the 7th letter of a 5-letter string, which raises an
analogous error:
    IndexError: string index out of range
Since I can't see exactly which row causes the error, this is all I'm able to
provide regarding your question.
|
Executemany on pyodbc only return result from last parameter
Question: I have a problem when I try to use pyodbc's executemany function. I have an
Oracle database and I want to extract data for multiple days.
I cannot use BETWEEN in my request, because the database is not indexed on the
date field and that takes forever. I want to query each day individually and
process the answers. I cannot thread this part, so I wanted to use
`executemany` to get the rows more quickly.
The problem is that when I use `executemany` I only get the result for the
last argument asked.
Here is my code:
import pyodbc
conn = pyodbc.connect('DRIVER={Oracle in instantclient_11_2};DBQ=dbname;UID=uid;PWD=pwd')
cursor = conn.cursor()
query = "SELECT date FROM table WHERE date = TO_DATE(?, 'DD/MM/YYYY')"
query_args = (
('29/04/2016',),
('28/04/2016',),
)
cursor.executemany(query, query_args)
rows = cursor.fetchall()
In rows, I only find rows with `(datetime.datetime(2016, 4, 28, 0, 0), )`,
always the last argument.
I am using Python 2.7.9 from WinPython on an Oracle database with an 11.0.2
client. Apart from this query, every other query is perfectly fine.
I cannot use the `IN ()` syntax, for 2 reasons:
* I want to limit operations on the database side and do most things on the script side (I've tried; it's way too long)
* I might have more than 1000 different dates in the request.
(Right now I'm using IN() OR IN() OR IN()..., but if anyone finds something
better, that would be wonderful!)
Am I doing something wrong ?
Thanks for helping.
Answer: With `executemany`, the statement is executed once per parameter set, but
only the result set of the last execution is kept, which is why you only see
the last date. If you want rows for multiple dates, either use an `IN` clause,
which requires modifying query_args a bit:
    "SELECT date FROM table WHERE date in (TO_DATE(?, 'DD/MM/YYYY'), TO_DATE(?, 'DD/MM/YYYY'))"
    query_args = (
        ('29/04/2016','28/04/2016'),
    )
or loop through each date argument, accumulating the rows:
    rows = []
    for query_arg in query_args:
        cursor.execute(query, query_arg)
        rows.extend(cursor.fetchall())
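If you end up with more than 1000 dates, note that Oracle rejects `IN` lists longer than 1000 entries (ORA-01795). A sketch that builds the placeholder list dynamically and queries in chunks (`all_dates` is a hypothetical list of 'DD/MM/YYYY' strings):
    rows = []
    chunk_size = 1000  # Oracle's upper limit on IN-list entries
    for i in range(0, len(all_dates), chunk_size):
        chunk = all_dates[i:i + chunk_size]
        placeholders = ", ".join("TO_DATE(?, 'DD/MM/YYYY')" for _ in chunk)
        cursor.execute("SELECT date FROM table WHERE date IN (%s)" % placeholders, chunk)
        rows.extend(cursor.fetchall())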
|
Unable to install threading with Anaconda 64-bit
Question: When I pip install or conda install "threading" I get an error saying it
cannot be found; I am having a similar problem with Queue. Does Anaconda only
fetch 64-bit libraries? I am trying to work through Parallel Programming with
Python.
How do I install this library correctly?
Is any other information needed?
Answer: Have you tried `import threading` and `import Queue` in your code? They are
both standard-library modules in Python, so there should be no need for an install.
|
Different behaviour of matplotlib in the interpreter and in a script
Question: Running the following code inside the Python interpreter displays a figure
with random values:
>>>fig = plt.figure();ax1 = fig.add_subplot(111);plt.ion();ax1 = ax1.imshow(np.random.rand(256,256))
while running the following script as a file does not display any
output/figure:
import numpy as np
import matplotlib.pyplot as plt
import time
fig = plt.figure()
ax1 = fig.add_subplot(111)
plt.ion()
ax1 =ax1.imshow(np.random.rand(256,256))
What is the reason for the difference in behaviour?
Answer: I suspect what is going on is that
    matplotlib.rcParams['interactive'] == True
and this is set in your `.matplotlibrc` file.
That means `plt.show` is non-blocking (so that you get a figure you can
interact with _and_ a command prompt you can type more code at). However,
in the case of a script the (implicit) `plt.show` does not block, so the script
exits, taking the figure with it.
I suggest setting the `interactive` rcparam to `False` and then either
explicitly setting it to true in the REPL or (the preferred method) using
`IPython` and the `%matplotlib` magic.
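With interactive mode off, a sketch of the script that keeps the window open is simply to end with an explicit, blocking `show`:
    import numpy as np
    import matplotlib.pyplot as plt

    fig = plt.figure()
    ax1 = fig.add_subplot(111)
    ax1.imshow(np.random.rand(256, 256))
    plt.show()  # blocks until the window is closed, so the script doesn't exit early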
|
Embed matplotlib figure into IPython HTML
Question: I want to dynamically write and display HTML with a code cell in Jupyter
Notebook. The objective is to generate the HTML to display table, div, img
tags in some way I choose. I want to capture img data and place it where I
want in this auto generated HTML.
So far I've figured out that I can do the following:
from IPython.core.display import HTML
HTML("<h1>Hello</h1>")
and get:
# Hello
That's great. However, I want to be able to do this:
HTML("<h1>Hello</h1><hr/><img src='somestring'/>")
and get something similar to a **Hello** with a horizontal line and an image
below it, where the image is the same one as below.
import pandas as pd
import numpy as np
np.random.seed(314)
df = pd.DataFrame(np.random.randn(1000, 2), columns=['x', 'y'])
df.plot.scatter(0, 1)
[](http://i.stack.imgur.com/AY9Gx.png)
The result should look like this:
[](http://i.stack.imgur.com/Xl1rP.png)
### Question
What do I replace `'somestring'` with in order to implement this? And more to
the point, how do I get it via Python?
I would have imagined there was an attribute on a figure object that would
hold a serialized version of the image, but I can't find it.
Answer: Let's say you have base64-encoded image data:
img_data =
"iVBORw0KGgoAAAANSUhEUgAAAIAAAACACAYAAADDPmHLAAAb2ElEQVR42u1dB3wU5bY/m+xuOklIARIgdKQqeunk2kClSRNsKD9UVFR4ei8PBFTKu1f8Xd8PeCpeBCPlonRBmggiXaogYBIJJQkppPdNts68cybZzZaZrbNJNsyByexO3++c73/Kd843MpZlQaJ7l+RiXUiGRMK0ZMkSWXJysqy5NVSvXr1MPWXRokUs/lzTPtaHe5FMpGeXTZkyxQ8byb+8vNwfya+6uloWGxsLtPaVxggODjY1RkFBgcX20NBQNjc3F+Li4pji4mJWo9Ew+Jnt2bMnu337dgshMQqILwiGGAIgw15PjFcEBAQEMgwThEuAVquVI/kkEqAAE4O5dd0mRqfTsfjd4OfnZ8Dfp8ffZkDS48IEBQWxuI2hz6WlpWyHDh0YOgeRkDUKxeLFi9mmiBYeCwAy3w9XysrKylC9Xh+Fkh+NbRGODRWIDYIrP18TAmoTP2Q2g7+Fwd/E4HcGf4ce9+nwsxY/a3GfBn8nrXUkFLhdT4JB3/FcHQlHRESEHlGDwY5hMCIGCUZTEghPBYDr/QiJwfg5BnvC4926dZtHKoA6Ut31fUoAUGUFIJq1IEYRM3GtwaUCEaAE9+Wo1eo0ZG4B7lPh9hr8rRqjYNCxKAzVtB2PUdN3hUKhxc9aPJ8ERxcVFaXH9uIEAtGCIYRoTJXhsQCg7ld06dIlDH9QW2yMyTNnzlyAEGja72vwj8yCsrIyqKqqAmQUlJSUADIKampqAJkPiHQsfVYqlWxgYCCpgCrcfxOPv4pokYNMrkIkqMK2oHU1flfRGr+rcOGEA7dpSHAqKip0aCcRsjBoSxhSUlJYQoaGFAQxEECBPz4CJbwjNspzKAD/hQLg016AsU1obd0+aNtAVlYWpKamcoKBzITo6GgSHBYNR0alUumwPfJQcK7hsel4Sin27kpcyglJaMFzKvG6lUa0QEFSE0qgsalDlWEgZNi2bRvTEKjgsQDMnj1bGRYWFoHw2AUNo+ffQvJ1AXDg7gL2aE4wCC3u3LkDFy5cADIau3btCt27d+cQJDs7m/Yx2Mv1KBTliBxpuL6BKJGJjCehKMVrkMtUhp8rSCBw4dQK2g6kTvRoRBpIRXgTFUSJA2DvN+p6v+YeOCE+kBDQgsyDTp06QUJCAiCj4ejRo3Dz5k0YNmwY9OnTB3r37u2HxytROGLy8/Nj0tPTB+Nag51FhUsm9vQzKBB38FpFeK0ivHwJfi7D7ZXYmapjYmLUqIZ0iAb6OptEdESQg0QeCwMaetCyZUsYN24cIJPh2LFjFC+AAQMGcPsR4jkhad++PQlEEC0oCNG///57n8LCQhUanWm4nMbtmXg8BSAKUX2UoEooQ+GpwuvVoH2gnTx5soE8EzGFQBQVgD8wEh+4CzbEC6gB3mzOKsAZoSB1QGhANsKTTz7JIYXRnjC3K4yfc3Jy4OrVq+qioqIKVB9XEE2OI6OzccnDc8njKEG1U0nqITw8nDwTRiy1ICGAF2wE9Pth+PDh8Ouvv8KBAwdg1KhRgJAuKABt27aF+Pj4QPwciHbD8HPnzg1C6E9FAdqP6jUDr5mDh+ejEJArWonIoEEB0IuhEiQB8JIQkFoYMmQIt963bx+MHTvWQgjMBcB8G6EnqoswNCL7owD1RG8iGZdduP8WoQIKQD6ibSkaoDWoEvSeqgRJALxIxHyjHXD8+HEYMWKEIPOtt7dr145iLKF3794dcPr06R5oK1xEQfgWhYjC7RRmL27durUKkUDnCRL4SWzyLlGvf+ihh7j4QWZmJhc34FvITuDbhqpBhj29xSOPPPLXNm3azMOe3xu3J+A6Cq8dgqpCgULgts0lCUADIcHIkSPh7NmznCAICYG9BeMB8tGjR3dFe2EhdvZ+eNn26EJGoj0QiMEjf3ejrpIANJBNEBISAgMHDoQTJ064JQC0oGtJaNAa7YT52PsHIhK0RpsgDLcraDheZp6kINkATYsIzilKePnyZQqc0ViCXYMQo4acyqCwM6EGRR2NKqVz584R2Pv/hvvJMCzHMQpVZGQk5x5KAtDEhQAHzuDatWvQv39/CwGg2AGFlW/dusWFkmk7MpWgH9D3Bxxp5c6nfVeuXJGhELXEkPFk/J6LAlCMKFCDtgBJCSsJQBMlgvEHH3wQtm7dyqkDYjJa94B+PzfyiFlH0KNHD+jYsSMXS6DjjWFnI+G4C6AxSJFHGbqJT+DA00a8ToC76lwSgEbwCoi5ZBBmZGRw8E69/IknnuCMRaO+d4QkJEgXL16U47GUgSXHkUg/FCbJBvAFFEhMTIRTp07B4MGDuSggMdS6pzsyKkmIWrRoUUqpavjdH9FDRmgiCYAPeAQE4RMnTjQx3t3rkL4nyx8NRLfjAJIANJIQkCoQ41Keptx5TQDIhVm4cCHn8rhKmDcHX375peD+Dz/8ENLS0uzqWbKgaUiWhmGHDh0KZmlqEjWEANTlz7l1LulHe0S+MV3fHlFOHx1HFjZZ3agvYe7cudC3b1+J6w0hAJQgQQYPGSaUQkW9kqxcoz9rJIxkcShhXOgcDG+KDrkkjB988AGHBJ999pnEeW8LAKYzwa5du0zfiQGUYbty5Uq4ffs2t40YTulTU6dO5WCfAh6uGkWYScNF2Ohc821U2UMoQNk4RreKrk0ZO6tWrYK3335b8Jo7zmfAxbQcfvWEeZx/f+5xh66aEJ25VQI/nEnl3Rci08Ks0f0AAzy+LwDWRAEMWgiKzYl0NIY23b4uhkThvffeg/vuu493P6VnrVixwsINI8GkoVmsYeA953JmKWQbInn3USx/1sRETrDcoRt55YLXLr59Fcb2yoAHHnig+QlAYxEOo8KgQYMAB0tM6EIDM8uXL4fVq1fbt2MqSqDq2k8WKilGn2+hwlylPm1CYPf2LZZM6DAAlJFtmpcKaEpEvfW1116DpKQk0zaKwjkibXUFvP/C45wQiUX9OsXC9yvnW2yb891FyNM1TtvcM8PB1tY/JWz6EpWodFBWo5cQwF2i6Ju17UBGIg3AuENkaG7ZsoUzZCnfj2wZIcKULm4YmCqIJkyYwIVxXaGj14sh6WwejRlDQsVl+OfslyQBcJUw5dpGANxlPhEZs4cOHeKuc/DgQS7xk4/IBSa3k+IW5IE888wzLt/ryLVs/KsgNwZOXS/gruWuEXrPqgAKCpkT9UZPYwvkSRBR8oaQTUE9nxhGgkI1Au6EgGOCWMDJBzApBFEr77ZYYeR7SwB++OEHi+9Ux+cpPf/88yY0+Pzzz3mP2b9/P7em2UVeffVVt+4zY0RvaJl/DspObYJZ44eIKgDye4X5WFpl+p6Xlwevv/66x9el4BPVBlJgi8b3qZebM4einIQARBTcEYpVcJlBqCoYA39GV7BSDl/Mf03yAlwhKtakqt033ngDvvnmG9N2Sr3CWnzADFtR7mPs1TT4ZJwryEg0pkE2AKkIinYKEcG7Xq/Dtb7B28nnEYB6FsX4jcEZSp4UGoGknka9f8eOHaLdHyuAufENmjRi06
ZNJrVAtGfPHhPiPPvss4LXIN1u0PlxaykO4CIR44nhZGjRIsR8rLDhrH5SB55Y/3xEVcFEJATGcQ5iOhmeJHSUAGrvntTzSX3glFOSAIhFBPU0+ETGF6HCV199Bd99951df92EFAxbOwmEwTmGkG9PRGMdRmOQqoOJaFDKkfFHut+AKsAgqQD3AjI0GQPpYE6icbAH6+q4gR4y0CgpxNURRpzsCxkiA72TDCGfnCKNWOIN58+fr/XdjxwxqSQa8bRHBur9OkayAdwhgk4a2hWysN0hrkeiKsE54pw+h8YacK4Ezh6gyB/lQVDG76RJk5y4H6kAGScIkgA0ASLG1zLE+REaUi3GnIT169dza5r4Ydq0aU4IAAZ59KxdFaBSU7KMDIKU4rJMqg3kg2TGUKeTXTPKXnjhBW6N079wayrwsB6DEPYChN3AQ5duw/D5m+DhOevgoxVJkgB4HQEIknV6lyGZQsPGah4q4Zo+fbqT9zPY9QK2HvkNdLiP4kRbDp93K9FWEgCXVEAtAjB61xqaXFKju0cC4GywyWAw2PUCIoP8TFPTacprcywlAWgAFaB30SqnVHWKQBJDaY4gSoJ12uawY3TOm/oUtNFlg+7WKXhr4sMuDydLRqDLKsBglyG8sQPsnZRsSkRVvo7SzWxsAC4UzK8CYiJCYP+apb7lBVCyxcmTJ7lxeFooA4d6B1nGppvjwAnNooUzYnIjauTLU3Yw+dRUMNmYNkAtJDOgRv+8qkZby2RcFDgPdqCVJU6xiJ07d3JDwhT3p3Aw1fzxkVZnAI2VrjcwjTdhuNcEgBjtqBeQsUQRO+sqn2XLlnFBlUZTAaSTEQEUYVGw+MdsYH/czHGfmxiy8DqcXLeEM9rIxaNt5kYZ6f45c+YIXvvLPWdg4y8pTQbtJBUgGJjRc32em8CBhmtrcR6K8/M4I4yElwSFgj30neoe6NgNGzbY1f2uupY+KwBUrPHuu+9y8OgqOUq7Xrp0KVy6dMn0XQhu3aVPX31cuDxt+Dhuehdy99asWcMN/lCvp9oGCj07Cju/M24QdFYKJaTGcrGDZoMAjz76qFeui3Pucou3iNLFxo8f79RxrqaWkQXvzLUbiiQ38B4nSQAkAZDoXibJC7BDyw8kQ2GZCkwzr9V5Ax0jFTDjqX5uVwhLAuAjdPb6XVCzCpMLyLEf/x86eACeHdK5Qcu4fVoATt8uh0q1/bBqgNwPHu0W6bVn+OXPIqjR1vrgprhb3SSNT/aO5SJ81sSa+f9G5hvjApIKcIE2nboJlWyQw+POHT8M78+Y4pVnWHfsOqhlgbXMs2AswJ9Ht8Ocd2fxnGWcydOM+cBCc3rddoMoMWensj9TFABVKpVXnoEx9lybXs1y0Tz+54b6c4zMb2YvW29SVkxYXFf4x9od3pJCYUi3d46J6fVh4eYkBU3OjL2ubw2FRcXe4H8dlBsZWQ/pgufUCUk988GEGpIAeIlCYtrB0qQfvIQAZnrcGUg3f52LGfMlG8DLlBvYGTKzc8Tlvz1Id4AabDNlfpMVgKCIGFiStF98BDDT+xa92p4SMEMMo1HISirA+15DZXRvuJaaJq4nYq73ndDn9WaD8bjmxfwmIQBlWdd5XUZlcAtYtvmoqErAXO+bIN0+AFhAv+QFeIFiylOwPRkz5tcbXvo2/eDU+cuiegHmTDR5Bna9gHrXUbIBvEDRGCBsUXWnPvWKrdfX/spAWL77nLgIUNerWWcgnTVLCbOOIUgCIB40z31mEDA6rU2Qhv7IE/4Ce38+KQ4ECEG6fdgwsxvqhchZUml0kFFQAclZxZBXqsIKH6ZJCUDjjgbWNXDH1i2hleEsFPi3s4rQsTjkKoc1R1Jh7PBEzxEArKDfCUi3Tgp1hBqZhZVw6Pc7cOTybShWaaF2/Kke2eizQsZArzbB8OKI/tC/a2uu6POeEwDrRlz43MPw9uY/ONg3979pHdThQdj4/Y/w8sSRnscBLHo1OIgD8I0d8B97Ni0Pvj50FW4WqMwEBXjP1+JLPi5lV8Olb47j7H9aePmvXWH66MH3jgrg079U/dI1oNTSWDO5aX6w5UKuZzVxZnF/a2PTfuwAeMYO6s+hoo41h5Nhwbdn6plvZctYqzVzW0TLymHt8XQYvzAJcoormr8AWBt75jT/+cfAoK608NmNDR6ScD98sWm3x4jDWvVOp7wAnhgCURXW7M/++hhsPpkGnGwKDTiZXYc3rIzrfE0ATFqyBc7/caO5CgDrEFJDg5TQL1JtE6Qxnnfgukpw6Nal+5v1TodxAJ6xA6Pl8M8dFyAlu8wW0XgSSMyRzcIWMUchRRC88++f4cr1jGYmAHYh0ZL++7nHQK8qtbHU6XNofHf4ZO029+1NnvuzjmwA3uFgnHzyYhbq/XwbZKGS8uIbFyH98HooOrkB/P/YCcq0H6Hq0i7I//0w6FRl9WFlYG3OlwWEwIyVeyEjO6+ZGYG8kGh7GKVmPdZBCcfybY01+vdrvpxLGgnFlz64fH8Z3/3tuXXWzK89N6RVAmw8edOSebjcvXwY2mgyYO60F7H4Y4HN21EIvX7EiaXX7joKBSHdwT8wlN/TCAyDaYvWwPGkj5oRAghAIh/NHD8UDJWFvJAaHNsBFq3a7JYTKAzJTngOZoITFBVfW9Fbt4/RaeDW3pUwZ/xDcPzIT/Dyyy/bMJ8Lb2NJ2binn4Z961bAwlGdQFuYIagWa8LawadrtzQPAWB506qELXA/rK+b0DfK0lI308fJNS1dTxphwen7W5xjoTJYyzQx+oiTSaQfXA3ff73cpfmHx416Arb/z3TQVxTwqiVaNp/Nwd9Z1AwEAMDGvXMUUZv65ABgK/LMjLV65gVFtoaFq7a66XpaQbodT8DaUrdwH+u2ZZ/ZDUn/WggDBgxwuV06tY+HFa+P4CaKtu4k9N8vJAIWfLq2OagAnkY3xdT5G59iY9MTO/H4z7XnZWHUMP1OtptegJPpXTyWurkQGBD6+8XKuPcAuEuJ/ftCfIDaRjiNz3YGU+r1Xpw/sIEQQNj/tQfCY4b0BnlFLq//rAyLhA9Wf++iGuKBdBcCVtYxhPyrRzGd/B2P22f+tKd4kYmWgNhOsGn7Dz6OAHxpVU5m1swefb/g+SWhXeBK8nXn1RDf/R0khFiqLHNEYHD8Io97BbynNKBnR4iASt5OQsvGfSeagREoFPxwQMP6doZgVQ7v+YqgUFiy/kf34hHWYwJCwQOBGEZx2gX428xXRGujWZMSbZGp7tlyynVey0RqMBuAZVkBSHVM708ews2gxXe+OqoHHD/7m1NCyHt/V8cCjHZD8S1RJ3p4+q/98F3wBl5kkkfE231buk+oAJvIngupVX06x3Nv7OSDZD9FAPxr6wmnej/w3J916rltYwjtYyNFrw4OkTO8nSIgPAaOnz7n4wgAIFCa5dwlPnzpcWBp5k4+SG7TG3YdPOZ8PAIsz7evAfhjGG1jwkVvpshg4cmlzl39s3kEgtxhPlEHTBppLy/hhWSZzA9WHbjshP73tDSsds1g8KdT21ait1NcVJjgvsKSch9GA
LCK6bNOND4PLZk+EhhtDS8kK+N7w9db9rhxf8dp4dYxBF1lGSS0byd6G3WIE55wqrJG68MIALYJGI4an4+iw0Ogd7iaP6yMf7/9NcNO0gj//R1Gg3liEOrKIq/MUtatvfDU8tUavQ8jgM0YvPul1h9NHw36miresHJA3H2wPGmrsA3Id38nh4PN3TOdqtwrAhATKawCanxZAPj0rnm1jUuWcqAChrWT84eVcb0nuZQ/aYQ3LYt1ujTMXIgVWLTi6nuInKGSyhrhJzFofRwBQKA0yw2a99JTwFBiBU9YOQhDp0tXbeJXQyw/pNs3Xm1jCEqcQ5hmBBebsguFDb3gALkPI4BF9M0SUt0hShoZ0yeKN6xL6xNZtXP38nkB7paGmXsziuBwyMgUXwDScwoF94UGKnwYAWzG4J1ofAf01qRHgFUV847UBbSMgwX/9x9bNcR7f+dKw6wDRzcyc0VvpayCUsF9LYIDfBwBeEa6wIMSK0oaeXFIB5uwrhFtrpQFQX5BoVOQ7ih2wBdDSM8VP1GjoLxGUBTbRof5uhcgUJrlAb00cjD4qwp5I3zKFjEwb+UmW+bb3N+RF8AfQ7hbUiV6M1Wo+aeS15QVQP9+fX1bBfCOdIlQbfvWyL6C2b63DdFwK/2OlSFqdX8n4gB8MYRCJlTUEbrTyVmgZ/k9i+qCDBg4cKCP2wCCkOrZlccMux8C1QVWkcG6GrygcJj3+RZLV5QH0u17AfwxDEVsN1i/dbdoLbR2/3nh56jM516F20y8AHBO/7pAcycP5fLq+NKqCpRt4dK1FMH728UgFngrlo3XWbP3rCjPn5JZCFcziwUbr1dciFfiDg3sBQiVZnlOw+7vCpGGIl5j018ZBB98tVv4/qz9ugB7cwtpWnSAoyfPePz8SQeF8xkq7vwBM15+3mucaQQvwGo4VyRaNO0JLkWbL8hTFdYZNAZ7pWHOqAHbGILMXwEfr9vn0XNnFpTDsWvCMQXD3T9gzJgxvi0AjqplxaDemDQSLy/jtfT9/OXAyBQCuQguloZZxRCKAtvBZ5v2uvXM5So1vPPFPkEg1JTlw9RRidzr9XwbAawa3aLqVkT6x4yxmDqm4y3gEPIUWNbRWIBwDIN7jRzmIqw7lQVJu35x6VkLy1Xw1mf7sCRcJXjvqqsH4L333vUqb5pkaZi7lNA6CrqH1fAWcNje3/GIpFAdQU1Jbq2g1ZEMEWbVzzfhxY++huKKGrvPqMeXUe45ex3GL9oMqdklgscVp5yGTxbMhpYtW3qVNfKGQQDWvdIsN+jjNyfCxGWYGCJX2hibnpWGgWmtLs6BoZ0j4LcSy+ZLLWZgxPyNkBAhh7HDHoD42AiIbhGM8whouTePpN4phJ8u3oBqrf2JLrQVRdAztIJ7A6m3qYGqg1nB0iyxKQqTRvq38YMLBayAC2cF6U5MFs1Xx/Dp36fB+LmroEIRY3WWDDLLDPDFvt/cen5tFaJCyh7YeHBvg3CmEUrDnJ2m1X1a+sYEYDQq4RwAZyd/dlDHsPXjmcCUZon23LrqctD9vhMO7f0eYmJimo8A1GfX2pZmeYOCA5Uwonu43RwAd0rDrKlVdEv4acUsCC7902N7pjz9CshTdsPhvTshLi4OGoqafGmYuzT/lacxj6qcd25gZ0vD6oVAmFq1ioUTm/4XBrYoger8dNchv7IEco6uh1eGxsGF08e9+kbURrMBxneTw7oN/zHTkmzdGqDnSO+8XpaSRhaMuQ9WrlrtkLFd+4/nPealv0TBth07bbZHKzUWL4imApF/L5sPqamp8NWGLXDw0m2AiARQhkaCPMhyGJfFF1Nrq0qhMisZ/EvTYcLwIfDm7vVei/U3CQGYMmYEtzQ0jXw8kVvcpRlTn+EWZ6lHjx6w8pMlnHDRy6dTUlIgKycHM30KoKCoFCJClJAQ1wriu8fBQ6+/CcOGDWv0dw9K7w30AtHATWJiIrc0dZJeHXuPkyQAkgBIJAmAOHrPsyxPidzyr+vavXEFAF0ieggKcBsknjSosUkCYMD5B5ng4GC2UQQgKioKvR6WwQfR4aI2SqbEngbp/QYkjU6nM1RVVbnV5h67gbm5uSwOWepRCFTo05bfvXsXJCFoGMLp4xhMFinDtQYFgenVq5fLbS7zNBw7ZcoU/4SEhEB8gIiAgIBueL0huERgmTaVsvhDbcBPIrG6PRJ2NELbalwyNRrNCVznh4eHVy1evFjHushQjxGgTup0xcXFVIyXhvZAHkJSCAqEUkbpMhKJrveRDP7+/mps56qgoKBybHt1dna2W/aXxwiADyRDyaNeLkc9pAgNDVXgAylQSv3QOJF6v3dQgMGORnpfhypAh71ft2jRIoM7alcmxoicUQh69uwp+/nnn/1wOFOGQiAx34tExjd5XsnJyey2bdsYd20umZhDsrK66oXm9nrVJqwOTHaBu9f4fyVgzJGpmA/3AAAAAElFTkSuQmCC"
then to have it rendered inside an IPython cell you simply do:
from IPython.core.display import Image
Image(data=img_data)
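To get such a base64 string out of a live matplotlib figure, a minimal sketch is to save the figure into an in-memory buffer and encode that (reusing `df` and `HTML` from the question; the `data:` URI prefix tells the browser how to decode the `src`):
    import base64
    from io import BytesIO

    ax = df.plot.scatter(0, 1)
    buf = BytesIO()
    ax.figure.savefig(buf, format='png')   # serialize the figure as PNG bytes
    img_data = base64.b64encode(buf.getvalue())
    HTML("<h1>Hello</h1><hr/><img src='data:image/png;base64,%s'/>" % img_data)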
|
decoding json dictionary with python
Question: I've got a script written to fetch data from an API and return it, but now I
need to parse that data. This is an example of what the JSON data looks like,
with some of the dictionary values I am looking to pull:
{'results': [{'icpsr_id': 21133,
'twitter_id': 'RepToddYoung',
'thomas_id': '02019',
'term_end': '2017-01-03',
'office': '1007 Longworth House Office Building',
'gender': 'M',
'phone': '202-225-5315',
* * *
This is the code I have written to pull and parse the JSON data. Could anyone
tell me what is wrong with it? I still get the full value of the 'results'
dictionary back, as if the code has done nothing; it isn't reduced to
**_ONLY_** the 'twitter_id' and 'office' fields.
import requests
import json
def call():
payload = {'apikey':'my_apikey', 'zip':'74120'}
bas_url = 'http://openstates.org/api/v1//legislators/?state=ok'
r = requests.get(bas_url, params = payload)
grab = r.json()
return grab
jsonResponse=json.loads(decoded_response)
jsonData = jsonResponse["results"]
for item in jsonData:
chamber = item.get("twitter_id")
last_name = item.get("office")
Answer: It sounds like you want something like this:
def call():
payload = {'apikey':'my_apikey', 'zip':'74120'}
bas_url = 'http://openstates.org/api/v1//legislators/?state=ok'
r = requests.get(bas_url, params = payload)
grab = r.json()
jsonData = grab["results"]
return [{key: value for key, value in result.items() if key in ("twitter_id", "office")} for result in jsonData]
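For readability, the one-line comprehension at the end is equivalent to this explicit loop inside `call()`:
    filtered = []
    for result in jsonData:
        entry = {}
        for key, value in result.items():
            if key in ("twitter_id", "office"):
                entry[key] = value
        filtered.append(entry)
    return filtered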
|
python: import global variable from parent directory
Question:
applications/
app.py
extensions.py
controllers/
__init__.py
inner.py
app.py
import inner
from extensions import aaa
inner.test()
extensions.py
import os
aaa = os.system
__init__.py
from inner import *
inner.py
from extensions import aaa
def test():
aaa('pwd')
My project structure and code are described above, and the program starts
from app.py.
Why does this work? How is `aaa` imported in inner.py?
Why can we directly import from extensions.py, which is located in the parent
directory?
Answer: You aren't importing from the "parent directory"; you're importing from
`applications/`. When you run `app.py`, Python puts the directory containing
that script (`applications/`) at the front of `sys.path`, so `from extensions
import aaa` resolves there from any module in the program. That `applications/`
happens to be the parent directory of `controllers/` is a coincidence.
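A quick check from any module in the program:
    import sys
    print(sys.path[0])  # the directory of the script you launched, e.g. .../applications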
|
Python: parsing texts in a .txt file
Question: I have a text file like this.
1 firm A Manhattan (company name) 25,000
SK Ventures 25,000
AEA investors 10,000
2 firm B Tencent collaboration 16,000
id TechVentures 4,000
3 firm C xxx 625
(and so on)
I want to make a matrix form and put each item into the matrix. For example,
the first row of matrix would be like:
[[1,Firm A,Manhattan,25,000],['','',SK Ventures,25,000],['','',AEA
investors,10,000]]
or,
[[1,'',''],[Firm A,'',''],[Manhattan,SK Ventures,AEA
Investors],[25,000,25,000,10,000]]
To do so, I want to parse the text of each line of the file. For example, from
the first line I can create [1, firm A, Manhattan, 25,000]. However, I can't
figure out exactly how to do it. Every field starts at the same position but
ends at a different position. Is there any good way to do this?
Thank you.
Answer: From what you've given as data*, the parsing changes depending on whether a
line starts with a number or a space, and the data can be separated as
(numbers)(spaces)(letters with 1 space)(spaces)(letters with 1
space)(spaces)(numbers+commas)
or
(spaces)(letters with 1 space)(spaces)(numbers+commas)
That's what the two regexes below look for, and they build a dictionary with
indexes from the leading numbers, each having a firm name and a list of
company and value pairs.
I can't really tell what your matrix arrangement is.
import re
data = {}
f = open('data.txt')
for line in f:
if re.match('^\d', line):
matches = re.findall('^(\d+)\s+((\S\s|\s\S|\S)+)\s\s+((\S\s|\s\S|\S)+)\s\s+([0-9,]+)', line)
idx, firm, x, company, y, value = matches[0]
data[idx] = {}
data[idx]['firm'] = firm.strip()
data[idx]['company'] = [(company.strip(), value)]
else:
matches = re.findall('\s+((\S\s|\s\S|\S)+)\s\s+([0-9,]+)', line)
company, x, value = matches[0]
data[idx]['company'].append((company.strip(), value))
import pprint
pprint.pprint(data)
->
{'1': {'company': [('Manhattan (company name)', '25,000'),
('SK Ventures', '25,000'),
('AEA investors', '10,000')],
'firm': 'firm A'},
'2': {'company': [('Tencent collaboration', '16,000'),
('id TechVentures', '4,000')],
'firm': 'firm B'},
'3': {'company': [('xxx', '625')],
'firm': 'firm C'}
}
* This works on your example, but it may not work on your real data very well. YMMV.
|
Python Finite State Machine Issues (Skipping the Processing?)
Question: I'm creating a Python program for a finite state machine, without it being
object-oriented. However, my processing phase is off; it doesn't even seem to
run through the triple for loop I created (I checked by attempting to print
CurrentState). Any help would be appreciated.
import sys
try:
Sfile = open("states.txt","r")
except IOError:
print "Could not open file", states.txt
os.kill()
States = []
ReadLine = Sfile.readline()
while ReadLine != "":
SN, SS, AS = ReadLine.split(",")
States.append((SN, bool(int(SS)), bool(int(AS))))
ReadLine = Sfile.readline()
print States, "\n"
Sfile.close()
try:
Tfile = open("transistions.txt","r")
except IOError:
print "Could not open file", transitions.txt
os.kill()
Transitions = []
ReadLine = Tfile.readline()
while ReadLine != "":
ReadLine = ReadLine.rstrip()
CS, IN, NS = ReadLine.split(",")
Transitions.append((CS, IN, NS))
ReadLine = Tfile.readline()
print Transitions
Tfile.close()
try:
Strfile = open("strings2.txt","r")
except IOError:
print "Could not open file", strings2.txt
os.kill()
Strings = []
ReadLine = Strfile.readline()
while ReadLine != "":
Readline = ReadLine.rstrip()
Strings.append(Readline)
ReadLine = Strfile.readline()
print Strings, '\n'
Strfile.close()
CurrentState = ''
Start = ''
RejectState= ''
AcceptState= ''
for S in Strings:
if S != '':
for C in S:
for (CS, IN, NS) in Transitions:
if CS == CurrentState and IN == C:
CurrentState =NS
break
for (SN, SS, AS) in States:
if SN == CurrentState and SS ==C:
CurrentState = NS
if NS == AS:
NS = AcceptState
print "String", AcceptState, "is accepted"
break
else:
NS = RejectState
print "String", RejectState, "is rejected"
break
Here are my different text files.
strings2.txt
01010
1001
010
transitions.txt
Start,0,State1
State1,1,State2
State2,0,State3
states.txt
State1,1,0
State2,0,1
State3,1,0
Answer: I copied your code and data, and was able to get this code (slightly modified
from yours) working using python2.7:
import sys
import os
try:
Sfile = open("states.txt","r")
except IOError:
print "Could not open file", "states.txt"
sys.exit()
States = []
ReadLine = Sfile.readline()
while ReadLine != "":
SN, SS, AS = ReadLine.split(",")
States.append((SN, bool(int(SS)), bool(int(AS))))
ReadLine = Sfile.readline()
print "States:\n", States, "\n"
Sfile.close()
try:
Tfile = open("transitions.txt","r")
except IOError:
print "Could not open file", "transitions.txt"
sys.exit()
Transitions = []
ReadLine = Tfile.readline()
while ReadLine != "":
ReadLine = ReadLine.rstrip()
CS, IN, NS = ReadLine.split(",")
Transitions.append((CS, IN, NS))
ReadLine = Tfile.readline()
print "Transitions:\n", Transitions, "\n"
Tfile.close()
try:
Strfile = open("strings2.txt","r")
except IOError:
print "Could not open file", strings2.txt
sys.exit()
Strings = []
ReadLine = Strfile.readline()
while ReadLine != "":
Readline = ReadLine.rstrip()
Strings.append(Readline)
ReadLine = Strfile.readline()
print "Strings:\n", '\n'.join(Strings), '\n'
Strfile.close()
CurrentState = ''
Start = ''
RejectState= ''
AcceptState= ''
for S in Strings:
if S != '':
print "String:", S
for C in S:
print "Char:", C
for (CS, IN, NS) in Transitions:
if CS == CurrentState and IN == C:
CurrentState =NS
break
for (SN, SS, AS) in States:
if SN == CurrentState and SS ==C:
CurrentState = NS
if NS == AS:
NS = AcceptState
print "String", AcceptState, "is accepted"
else:
NS = RejectState
print "String", RejectState, "is rejected"
Here is the output I got:
$ python2.7 test.py
States:
[('State1', True, False), ('State2', False, True), ('State3', True, False)]
Transitions:
[('Start', '0', 'State1'), ('State1', '1', 'State2'), ('State2', '0', 'State3')]
Strings:
01010
1001
010
String: 01010
Char: 0
Char: 1
Char: 0
Char: 1
Char: 0
String is rejected
String: 1001
Char: 1
Char: 0
Char: 0
Char: 1
String is rejected
String: 010
Char: 0
Char: 1
Char: 0
String is rejected
|
What is the default User-Agent of PyQt WebKit QWebView and how do I get it?
Question: I am new to Python and developing a GUI in PyQt which has a web browser. I
want to show the User-Agent sent with each URL but haven't found a way. My code
is:
class Manager(QNetworkAccessManager):
def __init__(self, table):
QNetworkAccessManager.__init__(self)
self.finished.connect(self._finished)
self.table = table
def _finished(self, reply):
headers = reply.rawHeaderPairs()
headers = {str(k):str(v) for k,v in headers}
content_type = headers.get("Content-Type")
# some code like "print headers.get("User-Agent")"
url = reply.url().toString()
status = reply.attribute(QNetworkRequest.HttpStatusCodeAttribute)
status, ok = status.toInt()
self.table.update([url, str(status), content_type])
Presently, the above code is showing only the URL, status and content type, but along with this I also want to display the user agent. Does someone have any idea?
Answer: A `User-Agent` is something which gets sent to the server. This information is not sent from the server.
To set a user agent you can do the following with your `Manager` class for
example:
from PyQt4.QtNetwork import QNetworkAccessManager, QNetworkRequest
manager = Manager()
request = QNetworkRequest(QUrl("http://www.google.com/"))
request.setRawHeader("User-Agent", "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:40.0) Gecko/20100101 Firefox/40.1")
manager.get(request)
And modify your `def _finished(self, reply):` method to get the request with
the `User-Agent`:
def _finished(self, reply):
print reply.request().rawHeader("User-Agent")
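As for the _default_ user agent of QtWebKit: if no header is set explicitly, it comes from `QWebPage.userAgentForUrl()`, which you can query directly. A quick sketch (it assumes a `QApplication` already exists; `userAgentForUrl` is a protected method, so depending on the PyQt build you may need to call it from a `QWebPage` subclass instead):

from PyQt4.QtCore import QUrl
from PyQt4.QtWebKit import QWebPage

# Prints the default QtWebKit User-Agent string
print QWebPage().userAgentForUrl(QUrl("http://www.google.com/"))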
|
Linear Regression with 3 input vectors and 4 output vectors?
Question: Task:
As an example, we have 3 input vectors:
foo = [1, 2, 3, 4, 5, 6]
bar = [50, 60, 70, 80, 90, 100]
spam = [-10, -20, -30, -40, -50, -60]
Also, we have 4 output vectors that have linear dependency from input vectors:
foofoo = [1, 1, 2, 2, 3, 3]
barbar = [4, 4, 5, 5, 6, 6]
spamspam = [7, 7, 8, 8, 9, 9]
hamham = [10, 10, 11, 11, 12, 12]
How to use Linear Regression at this data in Python?
Answer: You can use [OLS (Ordinary Least Squares
model)](http://statsmodels.sourceforge.net/devel/generated/statsmodels.regression.linear_model.OLS.html)
as done [here](http://stackoverflow.com/a/14971531/1771479):
#imports
import numpy as np
import statsmodels.api as sm
#generate the input matrix
X=[foo,bar,spam]
#turn it into a numpy array
X = np.array(X).T
#add a constant column
X=sm.add_constant(X)
This gives the input matrix `X`:
array([[ 1., 1., 50., -10.],
[ 1., 2., 60., -20.],
[ 1., 3., 70., -30.],
[ 1., 4., 80., -40.],
[ 1., 5., 90., -50.],
[ 1., 6., 100., -60.]])
And now you can fit each desired output vector:
resFoo = sm.OLS(endog=foofoo, exog=X).fit()
resBar = sm.OLS(endog=barbar, exog=X).fit()
resSpam = sm.OLS(endog=spamspam, exog=X).fit()
resham = sm.OLS(endog=hamham, exog=X).fit()
The
[result](http://statsmodels.sourceforge.net/devel/generated/statsmodels.regression.linear_model.RegressionResults.html#statsmodels.regression.linear_model.RegressionResults)
gives you the coefficients (for the constant, and the three columns foo, bar,
and spam):
>>> resFoo.params
array([-0.00063323, 0.0035345 , 0.01001583, -0.035345 ])
You can now check it with the input:
>>> np.matrix(X)*np.matrix(resFoo.params).T
matrix([[ 0.85714286],
[ 1.31428571],
[ 1.77142857],
[ 2.22857143],
[ 2.68571429],
[ 3.14285714]])
Which is close to the desired output of `foofoo`.
* * *
See this question for different ways to do the regression: [Multivariate
linear regression in
Python](http://stackoverflow.com/questions/11479064/multivariate-linear-
regression-in-python)
|
Sort a big number of pdf files by the common part in their names with Python 3.5.1
Question: I need to sort a huge (ca. 20000) number of pdf files by the most common part in their names. The structure of each filename is pretty much the same:
`XXX_1500004898_CommonPART.pdf` (some files are delimited with "`_`" and some with "`-`")
This is the code I used for it:
files = []
for root, dirnames, files in os.walk(r'C:PATH/TO/FILES'):
for file in fnmatch.filter(files, '*0000*.pdf'):
print (file)
files.append(os.path.join(root, file))
time.sleep(2)
sorted_files = sorted(files, key=lambda x: str(x.split('-')[2]))
But when I run it, the only thing I get is a traceback:
Traceback (most recent call last):
File "C:\PATH\Sorting.py", line 14, in <module>
sorted_files = sorted(files, key=lambda x: str(x.split('-')[2]))
File "C:\PATH\Sorting.py", line 14, in <lambda>
sorted_files = sorted(files, key=lambda x: str(x.split('-')[2]))
IndexError: list index out of range
I'm new to Python, so I may seem inexperienced; I also still have no clue how to tell Python to create folders named after these common parts and move the files there.
Can you please help me with this issue?
Thanks a lot!
UPDATED CODE:
files_result = []
for root, dirnames, files in os.walk(r'C:\PATH\TESTT'):
for file in fnmatch.filter(files, '*0000*.pdf'):
print (file)
files_result.append(os.path.join(root, file))
time.sleep(2)
sorted_files = sorted(file.replace("_", "-").split("-")[2] for file in files_result if (file.count("-")+file.count("_") == 2))
print (sorted_files)
and this is the result:
['ALOISE emma.pdf', 'ALOISEEMMA.pdf', 'ARETEIA.pdf', 'ASSEL.pdf', 'AVV.BELLOMI.pdf', 'BRACI E ABBRACCI.pdf', 'CERRATA D..pdf', 'CERRATA REFRIGERAZIONE.pdf', etc.....]
* * *
Here are some typical filenames:
ANI-150000000106SD_approvato.pdf
ANI-1500000006-CENTROCHIRURGIAAMBULATORIALEsrl_approvato.pdf
ANI-1500000007-EUROMED ECOLOGICA_APPROVATO.pdf
ANI-1500000008-TELECOM_APPROVATO.pdf
ANI-1500000009-TELECOM_APPROVATO.pdf
ANI-15000000100-ALOISE EMMA_approvato.pdf
ANI-15000000101-centro.chirurgia.ambulatoriale_approvato.pdf
ANI-15000000102-TELECOM_APPROVATO.pdf
ANI-15000000103-MCLINK_APPROVATO.pdf
ani-15000000104-idrafer.pdf
ANI-15000000105EUROMEDECOLOGICA_approvata.pdf
ANI-15000000107LAGSERVICE.pdf
ANI-15000000109TCHR_approvato.pdf
ANI-1500000011-COOPSERVICEn9117011288 approvate (2).pdf
ANI-1500000011-COOPSERVICEn°9117011288.pdf
ANI-15000000110-TELECOM_APPROVATO.pdf
ANI-15000000113-SECURLAB_approvato.pdf
ANI-15000000114-SECURLAB_approvato.pdf
ANI-15000000115-COOPSERVICE_approvato.pdf
ANI-15000000116-COOPSERVICE_approvato.pdf
ANI-15000000117-REPOWER_approvato.pdf
ANI-15000000118-CECCHINIlaura_approvato.pdf
ANI-15000000119-DESENA_approvato.pdf
ANI-1500000012-TCHRSERCICES.R.L._approvato (1).pdf
ANI-15000000121-ALOISE_approvato.pdf
ANI-15000000122-LAGSERVICE.pdf
ANI-15000000123-SECURLAB_approvata.pdf
ANI-15000000125-QUERZOLA_approvato.pdf
ANI-15000000129-TC HR_apprpvato.pdf
ANI-1500000013-TAV_approvato.pdf
ANI-15000000130-LAGSERVICE.pdf
ANI-15000000131EUROMEDecologica_approvato.pdf
ANI-15000000132-LAV.pdf
ANI-15000000133-REPOWER.pdf
ANI-15000000134-MCLINK.pdf
ANI-15000000135-COOPSERVICE_approvato.pdf
ANI-15000000136-COOPSERVICE_approvato.pdf
ANI-15000000138-TCHR._approvatopdf.pdf
ANI-15000000139-ALOISEEMMA.pdf
ANI-1500000014-OFFICEDEPOT_approvato.pdf
ANI-15000000140_TELECOM.pdf
ANI-15000000141-CHIRURGIAAMBULATORIALE_approvato.pdf
ANI-15000000142-LAG.pdf
ANI-15000000143-LAG.pdf
ANI-15000000145-TELECOM.pdf
ANI-15000000146-LAG.pdf
ANI-15000000147-WERFEN.pdf
ani-15000000148-enigas.pdf
ANI-15000000153TCHR_approvato.pdf
ANI-15000000154-ASSEL.pdf
ANI-15000000155-DIGIUSEPPEgiancarlo.pdf
ANI-15000000156-SD.pdf
ANI-15000000157-SAS.pdf
ani-15000000158-energeticSOURCE.pdf
ANI-15000000159-chirurgia ambulatoriale.pdf
ANI-1500000016-THEMIX_approvato.pdf
ANI-15000000160-CERRATA REFRIGERAZIONE.pdf
ANI-15000000162-ALOISE emma.pdf
ANI-1500000017-ASSEL_approvato.pdf
ANI-1500000018-QUERZOLA_approvato.pdf
ANI-1500000019-BDO_approvato.pdf
ANI-1500000020-THEMIXfatt_ approvato.134.pdf
ANI-1500000021-SECURLAB_approvato.pdf
ANI-1500000022-LYRECO+DDT_approvato.pdf
ANI-1500000023-COOPSERVICE approvato (1).pdf
ANI-1500000024-REPOWER135812_approvato.pdf
ANI-1500000025-DR.BRANDIMARTE-fatt.35_approvato (1).pdf
ANI-1500000026-D.SSA AMBRUZZI_approvato.pdf
ANI-1500000027-COOPSERVICE9117034433 approvato (1).pdf
ANI-1500000031-TAVf.314_approvato.pdf
ANI-1500000032-d.ALOISEmaggio2015_approvato.pdf
ANI-1500000033-CENTROchirurgiaAMBULATORIALEf201500306_approvato.pdf
ANI-1500000034-WINDf.7407817176_approvato.pdf
ANI-1500000035-avv.BELLOMI.pdf
ANI-1500000038-TOPCARf._approvato.pdf
ANI-1500000039-TCHRf.000544_approvato.pdf
ANI-1500000040-THEMIX_approvato.pdf
ANI-1500000041-DESENA_approvato.pdf
ANI-1500000042-TCHRSERVICESf.000565_approvato.pdf
ANI-1500000043-QUERZOLAf.109_approvato.pdf
ANI-1500000047-TELEPASS.pdf
ANI-1500000049-WIND_approvato.pdf
ANI-1500000051-MCLINKf.109493_approvato.pdf
ANI-1500000052-MCLINKf.88508_approvato.pdf
ANI-1500000053-OFFICEDEPOT_approvato.pdf
ANI-1500000054-COOPSERVICEapprovatof 9117037004.pdf
ANI-1500000055-COOPSERVICEf 9117039325approvato.pdf
ANI-1500000056-SD_approvato.pdf
ANI-1500000057-REPOWER_approvato.pdf
ANI-1500000058-MCLINK_approvato.pdf
ANI-1500000059-LAG.pdf
ANI-1500000059WERFEN_approvato.pdf
ANI-1500000060WERFEN_approvato.pdf
ANI-1500000063-CENTROCHIRURGIAAMBULATORIALE_approvato.pdf
ANI-1500000064-dott.ALOISEemma_approvato.pdf
ANI-1500000066-MERCURI_approvato.pdf
ANI-1500000067-QUERZOLA_approvato.pdf
ANI-1500000070-TIM_approvato.pdf
ANI-1500000071LIFEBRAIN.pdf
ANI-1500000072-TC HR_approvato.pdf
ANI-1500000073-LAVAGGIO E GOMMISTA_approvato.pdf
ANI-1500000075-THEMIX_approvato.pdf
ANI-1500000076-EUROMEDecologica_approvato.pdf
ANI-1500000077-REPOWER_approvato.pdf
ANI-1500000078-SAS_approvato.pdf
ANI-1500000079-LAGSERVICE.pdf
ANI-1500000080-COOPSERVICE appr.pdf
ANI-1500000081-COOPSERVICE appr.pdf
ANI-1500000083-TAV_approvato.pdf
ANI-1500000084-aloise emma_approvato.pdf
ANI-1500000085-centro.chirurgia.ambulatoriale_approvato.pdf
ANI-1500000088-lagSERVICE.pdf
ANI-1500000089-FARMACIACAMERUCCI.pdf
ANI-1500000091-LAGservice.pdf
ANI-1500000092-ASSEL_approvata.pdf
ANI-1500000093-COOPSERVICE_approvato.pdf
ANI-1500000095-TCHR_approvato.pdf
ANI-1500000097-SAS (2)_approvato.pdf
ANI-1500000099-REPOWER_approvato.pdf
ARE-1500000001SAS_approvato.pdf
ARE-1500000002ACEA_approvato.pdf
ARE-1500000004VERGARI_approvato.pdf
ARE-1500000005PINTO_approvato.pdf
ARE-1500000006COSMOPOL_approvato.pdf
ARE-1500000007LAGSERVICE.pdf
ARE-1500000009 OFFICE DEPOT_ARETEIA.pdf
ARE-1500000010 SERVIZI ABITAZIONE_aqpprovato.pdf
ARE-1500000011 TELECOM_approvato.pdf
ARE-1500000012 TELECOM_approvato.pdf
ARE-1500000013 THEMIX_approvato.pdf
ARE-1500000014 QUERZOLA_approvato.pdf
ARE-1500000015 DA.CA. ESTINTORI_approvato.pdf
ARE-1500000016 COOPSERVICE approvato.pdf
ARE-1500000017-SAS.pdf
ARE-1500000017-SAS_approvato.pdf
ARE-1500000018-DR.BRANDIMARTE_approvato.pdf
ARE-1500000019-COOPSERVICE approvato.pdf
ARE-1500000020-BRACI E ABBRACCI.pdf
ARE-1500000021-COSMOPOL_approvato.pdf
ARE-1500000023-SAS_approvato.pdf
ARE-1500000024-MESCHINI_approvato.pdf
ARE-1500000025-VERGARI_approvato.pdf
ARE-1500000026-AVV.BELLOMI.pdf
ARE-1500000027-PINTO_approvato.pdf
ARE-1500000032-DA.CA_approvato.pdf
ARE-1500000033-SERVIZI ABITAZIONE_approvato.pdf
ARE-1500000034-QUERZOLA_approvato.pdf
ARE-1500000035-CERRATA D_approvato..pdf
ARE-1500000036-SECURLAB_approvata.pdf
ARE-1500000037-COSMOPOL_approvato.pdf
ARE-1500000038-OFFICE DEPOT_approvato.pdf
ARE-1500000039-MONIGEST_approvato.pdf
ARE-1500000040-MONIGEST_approvato.pdf
ARE-1500000041-COOPSERVICE approvato.pdf
ARE-1500000042-COOPSERVICE approvato.pdf
ARE-1500000043-SECURLAB_APPROVATO.pdf
ARE-1500000044-MESCHINI_APPROVATO.pdf
ARE-1500000045-ACEA_approvato.pdf
ARE-1500000047-PINTO_approvato.pdf
ARE-1500000050-VERGARI_approvato.pdf
ARE-1500000052-QUERZOLA_approvato.pdf
ARE-1500000053-CONTI ROSELLA_approvato.pdf.pdf
ARE-1500000057-DE SENA_approvato.pdf
ARE-1500000058-SERVIZI ABITAZIONE_approvato.pdf
ARE-1500000059-SECURLAB_approvato.pdf
ARE_1500000048_TELECOM_approvato.pdf
ARE_1500000049_TELECOM_approvato.pdf
ARE_1500000144_CERRATA D..pdf
BIO_1500000048_GIROLAMO LUCIANA_APPROVATO.pdf
BIO_1500000049_SPORTELLI MARIO_APPROVATO20150505_10081133.pdf
BIO_1500000050_LEGROTTAGLIE BENEDETTO_APPROVATO.pdf
BIO_1500000051_ANTIFORTUNISTICA MERIDIONALE_APPROVATO.pdf
BIO_1500000052_SAIL_APPROVATO.pdf
BIO_1500000053_SAIL_APPROVATO.pdf
BIO_1500000056_PRONTO UFFICIO_APPROVATO.pdf
BIO_1500000057_H3G SPA_APPROVATO.pdf
BIO_1500000060_RITELLA BENEDETTA_APPROVATO.pdf
BIO_1500000061_POSTA 7_APPROVATO.pdf
BIO_1500000062_POSTASETTESAS_APPROVATO.pdf
BIO_1500000063_PIGNATELLI_APPROVATO.pdf
BIO_1500000064_DIALINE SRL_APPROVATO.pdf
BIO_1500000065_L2 SRL SRL_APPROVATO.pdf
BIO_1500000066_FARMACIA TREROTOLI_APPROVATO.pdf
BIO_1500000067_FARMACIA TREROTOLI_APPROVATO.pdf
BIO_1500000068_BIOGROUP_APPROVATO.pdf
BIO_1500000069_VITO RINALDI_APPROVATO.pdf
BIO_1500000070_EUROCOMPUTERS_APPROVATO.pdf
BIO_1500000071_SERVIZI DIAGNOSTICI_APPROVATO.pdf
BIO_1500000072_SERVIZI DIAGNOSTICI_APPROVATO.pdf
BIO_1500000073_SERVIZI DIAGNOSTICI_APPROVATO.pdf
Answer: You use the same name for your result array and os.walk (`files`). Here is
your code with corrected variable names:
import os
import fnmatch
files_result = []
for root, dirnames, files in os.walk(r'C:\PATH\TESTT'):
for f in fnmatch.filter(files, '*0000*.pdf'):
print(f)
files_result.append(os.path.join(root, f))
#sorted_files = sorted(files_result, key=lambda x: x.split('-')[1])
sorted_files = sorted(files_result, key=lambda x: x.replace("_", "-").split('-')[1]) # as Byte Commander suggested; note files_result, not files
print(sorted_files)
And, as Byte Commander suggested, replacing the underscores with hyphens first means a single `split('-')` handles both delimiters.
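For the second part of the question (creating folders named after the common parts and moving the files there), here is a sketch building on `files_result`; the destination root is a hypothetical path:

import os
import shutil

target_root = r'C:\PATH\SORTED'  # hypothetical destination

for path in files_result:
    name = os.path.basename(path)
    parts = os.path.splitext(name)[0].replace('_', '-').split('-')
    if len(parts) < 3:
        continue  # name doesn't match the PREFIX-NUMBER-COMMONPART pattern
    folder = os.path.join(target_root, parts[2])
    if not os.path.isdir(folder):
        os.makedirs(folder)
    shutil.move(path, os.path.join(folder, name))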
|
Search neighboring points using k-d trees
Question: I have a bunch of coordinates contained in three `numpy` arrays: `xarr`,
`yarr` and `zarr` (corresponding positions in each array belong to the same
point - i.e the first point is at `xarr[0]`, `yarr[0]`, `zarr[0]`). Given
another point in space at `P(x,y,z)` I would like to find all the points that
are within a distance _r_ of `P(x,y,z)`.
My current (and very inefficient) method of doing this is to simply iterate
through and calculate the distance to each point and see if it is within _r_
of `P(x,y,z)`.
However, I'd like to use SciPy's k-d tree algorithm to do this, but I'm not
really sure how to start implementing it (I'm very new to Python). I'd really
appreciate it if someone could briefly outline some code that demonstrates
_how to set up_ a k-d tree given data in the format I've got.
I know of [SciPy documentation of its k-d tree
implementation](http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.spatial.KDTree.html#scipy.spatial.KDTree),
I've looked over it but I'm still confused on how to create the tree given the
data in the format I have (`np.mgrid` and `ravel()` were called and I don't
quite understand why).
Thanks!
Answer: Here's an explanation of the example provided in the scipy docs :
from scipy import spatial
x, y = np.mgrid[0:4, 0:4]
`np.mgrid` creates a mesh grid x,y from 0 to 4. Since you already have your
x,y,z coordinates, you are going to skip this step.
points = zip(x.ravel(), y.ravel())
points = zip(xarr.ravel(), yarr.ravel(), zarr.ravel()) #in your case
points = zip(xarr, yarr, zarr) # if x,y,z are already 1-d
`zip` creates a list of tuples containing each x,y point pair (associate
together the coordinates for each point). `ravel` flattens the x, y mesh grid
(converts a n-d array to 1-d) so that `zip` can be used. In your case you will
only use `ravel` if `xarr`, `yarr`, `zarr` are not already 1-d.
tree = spatial.KDTree(points)
Index the points to provide rapid neighbour lookup.
tree.query_ball_point([2, 0], 1)
Look up points within `r=1` of the point `[2,0]`
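Putting it together for your data layout, a minimal sketch (assuming `xarr`, `yarr`, `zarr` are 1-d arrays and `x`, `y`, `z`, `r` describe `P` and the search radius):

import numpy as np
from scipy import spatial

points = np.column_stack((xarr, yarr, zarr))  # one (x, y, z) row per point
tree = spatial.cKDTree(points)                # cKDTree is the faster C variant
idx = tree.query_ball_point([x, y, z], r)     # indices of points within r of P
neighbours = points[idx]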
Hope this helps.
|
How to use autobahn.ws with django?
Question: I need websockets in my project. I found the cross-platform solution [autobahn.ws](http://autobahn.ws), but only a tutorial for pure Python is available. How can I use Autobahn as a chat server in a Django project?
Answer: Simply add the following bit of code to the Python script where you set up your websocket.
if __name__ == '__main__': #pragma nocover
# Setup environ
sys.path.append(os.getcwd())
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp.settings")
import django
django.setup()
Now your code that creates a web socket can make use of django models and
other features just as if it was a view.
|
'AttributeError: 'module' object has no attribute 'file'' when using oauth2client with Google Calendar
Question: I'm using the example script for google calendars python api
(<https://developers.google.com/google-
apps/calendar/quickstart/python#step_3_set_up_the_sample>) to try and print
out all the calendar events. However I am getting an error of:
AttributeError: 'module' object has no attribute 'file'
from the line
store = oauth2client.file.Storage(credential_path)
I can find no references to such an error in the docs. Has anyone else come
across this before?
Cheers, Jack
Answer: The original sample
<https://developers.google.com/google-apps/calendar/quickstart/python#step_3_set_up_the_sample>
is missing an import entry:
from oauth2client import file
Add this line and try running your script again.
|
Login to web portal using Python
Question: So I have followed many guides on how to log in to a portal using Python with urllib2 and parse it using BeautifulSoup.
I am trying to login to this [webportal](http://academia.srmuniv.ac.in) which
has its login form way nested
[there](http://%20https://academia.srmuniv.ac.in/accounts/signin?_sh=false&hideidp=true&portal=10002227248&client_portal=true&servicename=ZohoCreator&serviceurl=https://academia.srmuniv.ac.in/)
I looked at the from tag in the source and found this
<form id="signinForm" action="/accounts/signin.ac" name="signinform" method="post" novalidate="true" autocomplete="off">
but the link `https://academia.srmuniv.ac.in/accounts/signin.ac` is invalid.
Can someone help me with this.
**EDIT**
Code used:
from bs4 import BeautifulSoup
import urllib2
import requests

payload = {'username': 'some_username', 'password': 'some_password'}
r = requests.get("academia.srmuniv.ac.in/accounts/signin.ac", params=payload)
data = r.text
soup = BeautifulSoup(data)
print soup.prettify()
Answer: As said in the comments, you need to understand the difference between a `GET` and a `POST` request.
GET is a basic HTTP request where parameters are passed in the URL. It is very limited in size and security. POST is when the parameters are in the body of the request.
In your case, you need a POST request, so you need to use
`requests.post("https://academia.srmuniv.ac.in/accounts/signin.ac", data=payload)`
Also, you need to use a session so that the login persists.
Here is a small code to do that:
s = requests.Session()
r = s.post('your address', payload)
Lastly, about the parameters, you need to pass an Email and a Password (look at the id in the input tags), and the address is relative to an iframe, but I am not sure what the correct address to give is...
So it should give you:
import requests
from bs4 import BeautifulSoup
s = requests.Session()
payload = {'Email' : youremail, 'Password' : yourpwd}
r = s.post('https://academia.srmuniv.ac.in/accounts/signin.ac', payload)
data = r.text
soup = BeautifulSoup(data)
print soup.prettify()
And now to access any page in the web site, you just need to use your session
and get or post:
r2 = s.get('some other page behind the login page')
r3 = s.post('some other page with a post method', payload)
|
Does Rust have an equivalent of Python's threading.Timer?
Question: I'm looking for a timer which uses threads, not plain `time.sleep`:
from threading import Timer
def x():
print "hello"
t = Timer(2.0, x)
t.start()
t = Timer(2.0, x)
t.start()
Answer: You can use the [timer](https://crates.io/crates/timer) crate
extern crate timer;
extern crate chrono;
use timer::Timer;
use chrono::Duration;
use std::thread;
fn x() {
println!("hello");
}
fn main() {
let timer = Timer::new();
let guard = timer.schedule_repeating(Duration::seconds(2), x);
// give some time so we can see hello printed
// you can execute any code here
thread::sleep(::std::time::Duration::new(10, 0));
// stop repeating
drop(guard);
}
|
Problems with fastcgi when installing Django on Justhost
Question: I am following the following tutorial at <http://flailingmonkey.com/install-
django-justhost/> to install Django on my Justhost web server. So far I have
managed to install Django and Python on my Justhost shared web server.
However, I am now stuck when trying to configure my new site. Every time I run
the command: `python mysite.fcgi` I keep getting the following error message:
Traceback (most recent call last):
File "mysite.fcgi", line 9, in <module>
from django.core.servers.fastcgi import runfastcgi
ImportError: No module named fastcgi
**Content of mysite.fcgi**
#!/home4/xxxxx/python/bin/python
import sys, os
# Where /home/your_username is the path to your home directory
sys.path.insert(0, "/home4/xxxxx/python")
sys.path.insert(13, "/home4/xxxxx/public_html/django-project/admin")
os.environ['DJANGO_SETTINGS_MODULE'] = 'admin.settings'
from django.core.servers.fastcgi import runfastcgi
runfastcgi(method="threaded", daemonize="false")
How do I fix it?
Answer: I had the exact same issue. Here's how to solve it.
Load up your SSH client:
cd ~/
pip install django==1.8.7
pip install flup==1.0.2
and you should be good
|
Python - Help needed with parsing file. Is there a way to ignore EOF chars?
Question: I have a binary file that I am trying to extract strings from and I am having
quite the time doing so. :(
My current strategy is to read in the file using Python (using one of the
following functions: read(), readline(), or readlines()). Next, I parse
through the line (char by char) and look for the special character 'ô', which
in **most cases** directly follows the strings I want! Lastly, I parse
backwards from the special char recording all chars that I have identified as
being "valid."
At the end of the day, I want the front time stamp and the next 3 strings
within the line.
Results:
In the input example line #1 the "read" functions won't read through the entire line (shown in the output image). I believe this is because the function is interpreting a byte in the binary as an EOF char and then stops reading.
In line #2 of the example, there are times in which the "special char" shows
up, however it is not after a string I want to extract. :(
Is there a better way to parse this data? If not, is there way to solve issue
seen in example line #1?
Examples of input data and the resulting output data when I just print the
lines as read. As you can see, it does not read through the entire line when
using `readlines()` [](http://i.stack.imgur.com/hTb11.png)
My string extraction algorithm, which is not very robust. [](http://i.stack.imgur.com/nGLLf.png)
FYI, efficiency is not necessarily important.
Answer: Why use Python? Use `strings` and pipe it through `head`, e.g.
strings /bin/ls | head -3
and see what you get. You can get a `strings` binary for Windows too.
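If you would rather stay in Python, a rough equivalent of `strings` is a regular expression over the raw bytes; reading the file in binary mode (`'rb'`) also avoids the EOF-character problem from example line #1. A minimal sketch, assuming an ASCII file named `data.bin` and a minimum run of 4 printable characters:

import re

with open('data.bin', 'rb') as f:
    data = f.read()  # binary mode: no EOF/newline interpretation

# [ -~] matches any printable ASCII character
for s in re.findall(r'[ -~]{4,}', data):
    print s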
|
PyQt4 in Mac OS X for Python 3 doesn't seem to work
Question: I installed PyQT4 in Mac OS X El Capitan for Python 3 using the instructions
given in [this answer](https://stackoverflow.com/a/21714509/1815288). The
instructions gave the following commands:
brew install python3
brew install qt
brew install sip --with-python3
brew install pyqt --with-python3
which I run with no problems at all. I then added this line to my `.bashrc`
file:
PYTHONPATH=/usr/local/lib/python3.3/site-packages:$PYTHONPATH
I verified Python 3 was running correctly. I also correctly evaluated the
following code within Python 3:
import PyQt4
Now, when I try to run this simple program, nothing happens:
import sys
from PyQt4 import QtGui
app = QtGui.QApplication(sys.argv)
window = QtGui.QWidget()
window.show()
After running it, no window opens and nothing happens. I get no error or
warning messages. Any ideas of what's going on in here?
Answer: This is correct. When you run your code, nothing is supposed to happen, and
the application should immediately exit without any errors. Your example
translated to C++ will behave identically, too.
Perhaps you wished to spin the event loop? `app.exec_()` will do that.
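For reference, a minimal sketch of the example extended to keep the window open until it is closed:

import sys
from PyQt4 import QtGui

app = QtGui.QApplication(sys.argv)
window = QtGui.QWidget()
window.show()
sys.exit(app.exec_())  # spin the event loop until the window is closed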
|
Recursively rename file extensions
Question: I am having a difficult time creating a Python script that will rename file extensions in a folder and continue to do so in subdirectories. Here is the script I have thus far; it can only rename files in the top directory:
#!/usr/bin/python
# Usage: python rename_file_extensions.py
import os
import sys
for filename in os.listdir ("C:\\Users\\username\\Desktop\\test\\"): # parse through file list in the folder "test"
if filename.find(".jpg") > 0: # if an .jpg is found
newfilename = filename.replace(".jpg","jpeg") # convert .jpg to jpeg
os.rename(filename, newfilename) # rename the file
Answer: You can process the directory like this:
import os
def process_directory(root):
    for item in os.listdir(root):
        path = os.path.join(root, item)  # listdir returns bare names
        if os.path.isdir(path):
            print("is directory", path)
            process_directory(path)
        else:
            print(path)
            #Do stuff
process_directory(os.getcwd())
Although, this isn't really necessary. Simply use `os.walk` which will iterate
through all top-level and nested directories / files, for example:
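A minimal sketch of the same rename done with `os.walk` (assuming the goal is still `.jpg` to `.jpeg` under the original folder):

import os

for root, dirs, files in os.walk("C:\\Users\\username\\Desktop\\test\\"):
    for filename in files:
        if filename.endswith(".jpg"):
            newfilename = filename[:-len(".jpg")] + ".jpeg"
            os.rename(os.path.join(root, filename),
                      os.path.join(root, newfilename))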
|
Removing randomly generated file extensions from .jpg files using python
Question: I recently recovered a folder that I had accidentally deleted. It has .jpg and
.tar.gz files. However, all of the files now have some sort of hash extension
appended to them and it is different for every file. There are more than 600
files in the folders. So example names would be:
IMG001.jpg.3454637876876978068
IMG002.jpg.2345447786787689769
IMG003.jpg.3454356457657757876
and
folder1.tar.gz.45645756765876
folder2.tar.gz.53464575678588
folder3.tar.gz.42345435647567
I would like to have a script that could go in turn (maybe I can specify
extension or it can have two iterations, one through the .jpg files and the
other through the .tar.gz) and clean up the last part of the file name
starting from the . right before the number. So the final file names would end
in .jpg and .tar.gz
What I have so far in python:
import os
def scandirs(path):
for root, dirs, files in os.walk(path):
for currentFile in files:
os.path.splitext(currentFile)
scandirs('C:\Users\ad\pics')
Obviously it doesn't work. I would appreciate any help. I would also consider
using a bash script, but I do not know how to do that.
Answer: `shutil.move(os.path.join(root, currentFile), os.path.join(root, os.path.splitext(currentFile)[0]))`
at least I think ...
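Building on that idea, here is a fuller sketch of `scandirs`, assuming every recovered name ends in a purely numeric extension that should be stripped:

import os

def scandirs(path):
    for root, dirs, files in os.walk(path):
        for currentFile in files:
            base, ext = os.path.splitext(currentFile)
            if ext[1:].isdigit():  # only strip the numeric suffixes
                os.rename(os.path.join(root, currentFile),
                          os.path.join(root, base))

scandirs(r'C:\Users\ad\pics')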
|
How to exit a main program from the Multiprocessing Process in python
Question: I am spawning 3 processes using multiprocessing.Process and waiting for them to complete. If one of them fails then I want to stop all other processes and also the main program. But when I use sys.exit, the execution stops only that process and not the main program. Here is a snippet of the code:
from multiprocessing import Process

proc1 = Process(target=function1)
proc2 = Process(target=function2)
proc3 = Process(target=function3)
proc1.start()
proc2.start()
proc3.start()
proc1.join()
proc2.join()
proc3.join()
. . .
I am running some tasks in functions 1, 2 and 3. I have a condition in each function to check the return code of the task, and if the return code is not success then I would like to stop proc1, proc2 and proc3 and stop execution of the main program. When I execute sys.exit inside the function it just exits that process and not the main program.
Answer: For this to work you need to have communication between the worker processes
and the main process. Probably the simplest way is to use
`multiprocessing.Event`.
_Before_ starting the processes, create a pair of `multiprocessing.Event`.
Give them meaningful names like `stop_main` and `stop_workers`. For
portability, one should give add these `Event`s to the arguments given for the
`Process` target.
A worker process should call `stop_main.set()` when it wants the main program
to exit. A worker process should also call `stop_workers.is_set()` regularly
and exit when this returns `True`.
After the main process starts all the workers it should keep polling
`stop_main.is_set()`. When that returns `True` it should call
`stop_workers.set()`, `join` the workers and exit.
**Updated:**
Edited to make it shorter and hopefully make it work on ms-windows.
An example:
import multiprocessing as mp
import time
def worker(num, sw, sm):
if num == 5:
print('This is worker', num)
time.sleep(1)
print('Worker', num, 'signalling main program to quit')
sm.set()
while not sw.is_set():
print('This is worker', num)
time.sleep(0.7)
else:
print('Worker', num, 'signing off..')
if __name__ == '__main__':
stop_worker = mp.Event()
stop_main = mp.Event()
workers = [mp.Process(target=worker, args=(n, stop_worker, stop_main))
for n in range(1, 6)]
for w in workers:
w.start()
while not stop_main.is_set():
time.sleep(1)
print('MAIN: Received stop event')
print('MAIN: Sending stop event to workers')
stop_worker.set()
for c, w in enumerate(workers, start=1):
w.join()
print('worker', c, 'joined')
It runs like this:
>
> This is worker 1
> This is worker 2
> This is worker 3
> This is worker 4
> This is worker 5
> This is worker 2
> This is worker 3
> This is worker 1
> This is worker 4
> Worker 5 signalling main program to quit
> This is worker 5
> This is worker 2
> This is worker 3
> This is worker 1
> This is worker 4
> This is worker 5
> MAIN: Received stop event
> MAIN: Sending stop event to workers
> Worker 3 signing off..
> Worker 1 signing off..
> Worker 2 signing off..
> worker 1 joined
> worker 2 joined
> worker 3 joined
> Worker 4 signing off..
> worker 4 joined
> Worker 5 signing off..
> worker 5 joined
>
|
How to handle exceptions in a Django migration?
Question: How do I catch an exception in a Django migration?
I have a migration that, because of various legacy reasons, I expect to fail
sometimes. I want to be able to catch that error and run some error handling
code in that case.
Specifically, I'm renaming a table, and sometimes the destination table
already exists and I want to merge the contents of the old and new tables,
then delete the old one.
I'm running Django 1.7 ( :( ) and we're planning on upgrading to 1.8 but it
hasn't happened yet.
My migration is:
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
('main', '0007_migration_name'),
]
operations = [
migrations.AlterModelTable(
name='table_name',
table='LegacyTableName',
),
]
When I run this, I get
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File ".../django/core/management/__init__.py", line 385, in execute_from_command_line
utility.execute()
File ".../django/core/management/__init__.py", line 377, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File ".../django/core/management/base.py", line 288, in run_from_argv
self.execute(*args, **options.__dict__)
File ".../django/core/management/base.py", line 338, in execute
output = self.handle(*args, **options)
File ".../django/core/management/commands/migrate.py", line 161, in handle
executor.migrate(targets, plan, fake=options.get("fake", False))
File ".../django/db/migrations/executor.py", line 68, in migrate
self.apply_migration(migration, fake=fake)
File ".../django/db/migrations/executor.py", line 102, in apply_migration
migration.apply(project_state, schema_editor)
File ".../django/db/migrations/migration.py", line 108, in apply
operation.database_forwards(self.app_label, schema_editor, project_state, new_state)
File ".../django/db/migrations/operations/models.py", line 236, in database_forwards
new_model._meta.db_table,
File ".../django/db/backends/schema.py", line 350, in alter_db_table
"new_table": self.quote_name(new_db_table),
File ".../django/db/backends/schema.py", line 111, in execute
cursor.execute(sql, params)
File ".../django/db/backends/utils.py", line 81, in execute
return super(CursorDebugWrapper, self).execute(sql, params)
File ".../django/db/backends/utils.py", line 65, in execute
return self.cursor.execute(sql, params)
File ".../django/db/utils.py", line 94, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File ".../django/db/backends/utils.py", line 65, in execute
return self.cursor.execute(sql, params)
File ".../django/db/backends/mysql/base.py", line 129, in execute
return self.cursor.execute(query, args)
File ".../MySQLdb/cursors.py", line 226, in execute
self.errorhandler(self, exc, value)
File ".../MySQLdb/connections.py", line 36, in defaulterrorhandler
raise errorvalue
django.db.utils.OperationalError: (1050, "Table 'LegacyTableName' already exists")
All that's provided in the migration itself is the `operations` list, and
there doesn't seem to be an optional error-handling parameter in [the
docs](https://docs.djangoproject.com/en/1.9/ref/migration-
operations/#altermodeltable).
How do I catch the OperationalError so I can run some Python to merge the
tables?
Answer: The problem with trying to catch database exceptions in Python is that they
may not be specific enough - e.g., `OperationalError` could arise for various
reasons (only one of which is that the table name has already been changed).
I would suggest that rather than trying to catch exceptions you write your own
migration function that does whatever checks/modifications are necessary. See
the [documentation on
`RunPython`](https://docs.djangoproject.com/en/1.9/ref/migration-
operations/#runpython).
> This is generally the operation you would use to create data migrations, run
> custom data updates and alterations, and anything else you need access to an
> ORM and/or Python code for.
In your case you would write a function that checks whether the table exists
and performs some actions for either case.
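A minimal sketch of such a function (untested; `merge_tables` is a hypothetical helper standing in for your merge logic, `main_table_name` is Django's default table name for your model, and the `RENAME TABLE` statement is MySQL syntax, matching the traceback above):

from django.db import migrations

def forwards(apps, schema_editor):
    tables = schema_editor.connection.introspection.table_names()
    if 'LegacyTableName' in tables:
        # Destination exists: merge the old rows into it, then drop
        # the old table (merge_tables is your own logic).
        merge_tables(schema_editor, 'main_table_name', 'LegacyTableName')
    else:
        # Destination does not exist: plain rename is safe.
        schema_editor.execute('RENAME TABLE main_table_name TO LegacyTableName')

class Migration(migrations.Migration):
    dependencies = [('main', '0007_migration_name')]
    operations = [migrations.RunPython(forwards)]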
There are some database-specific issues to be aware of when writing these
functions, e.g., :
> on PostgreSQL, for example, you should avoid combining schema changes and
> RunPython operations in the same migration or you may hit errors.
|
I'm writing a fuel conversion program and it's not working :(
Question: I am a novice Python code writer and I am starting small with a fuel conversion program. The program asks for your name and then converts a miles per gallon rate or a kilometers per litre rate. Currently, the program runs fine until it gets to the convert-to-MPG line. Then, once you press y, it does nothing. Funny thing is, no syntax error has been returned. Please help, as I cannot find anything on it :(
import time
y = 'y', 'yes', 'yep', 'yea', 'ye'
n = 'n', 'no', 'nup', 'nay'
name = str(input("Hey, User, whats your name? "))
time.sleep(1.5)
print("Alright", name, "Welcome the the *gravynet* Fuel Efficiency Converter!")
time.sleep(1.5)
str(input("Would you like to convert the fuel efficiency of your motor vehcile? (Miles Per Gallon) (y/n): "))
if y is True:
miles = int(input("How far did you travel (in miles): "))
galls = int(input("How much fuel did you consume (in gallons): "))
mpgc = (galls/miles)
print("The MPG Rate is: ", int(mpgc))
time.sleep(2)
print("test print")
if y is (not True):
input(str("Would you like to convert KPL instead? (y/n): "))
time.sleep(1.5)
if y is True:
kilometers = int(input("How far did you travel (in kilometers): "))
litres = int(input("How much fuel did you consume (in litres): "))
kplc = ( litres / kilometers )
print("The KPL Rate is: ", int(kplc))
time.sleep(3)
exit()
if y is not True:
print("No worries")
time.sleep(1.5)
print("Thanks", name, "for using *gravynet* Fuel Efficiency Coverter")
time.sleep(1.5)
print("Have a good day!")
time.sleep(1.5)
exit()
else :
print("Sorry, invalid response. Try again")
exit()
elif not y:
print("Please use y/n to answer" )
time.sleep(2)
elif not n:
print("Please use y/n to answer" )
time.sleep(2)
Sorry if you think that is bad, but I just started Python and I need some help :)
Answer: Severely trimmed down and indentation fixed (I think....)
`if y is True` and similarly `if y is not True` make no sense here.
Also, speaking of `is`: `is` and `==` may work as equivalent expressions
sometimes for checking for "equality", but not necessarily. `==` checks for
equality whereas `is` checks for object identity. You should use `==` for
checking for equality between two objects. Except for `None` in which case
it's generally preferred to use `is` instead of `==` for this.
You're converting to `str` in a bunch of places unnecessarily. They're already
strings.
In your mpg conversion you already have a floating point number (possibly an int). There's no need to convert to an int here. Suppose mpg is < 1. Then `int` casting will make this return zero.
Your math is also backwards: miles _per_ gallon means miles divided by gallons. Similarly, kilometers _per_ litre.
name = input("Hey, User, whats your name? ")
print("Alright", name, "Welcome the the *gravynet* Fuel Efficiency Converter!")
mpg = input("Would you like to convert the fuel efficiency of your motor vehcile? (Miles Per Gallon) (y/n): ")
if mpg in y:
miles = int(input("How far did you travel (in miles): "))
galls = int(input("How much fuel did you consume (in gallons): "))
mpgc = miles / galls
print("The MPG Rate is: ", mpgc)
else:
kpl = input("Would you like to convert KPL instead? (y/n): ")
if kpl in y:
kilometers = int(input("How far did you travel (in kilometers): "))
litres = int(input("How much fuel did you consume (in litres): "))
kplc = kilometers / litres
print("The KPL Rate is: ", kplc)
else:
print("No worries")
print("Thanks", name, "for using *gravynet* Fuel Efficiency Coverter")
print("Have a good day!")
|
Multivalue dict to list of individual dicts
Question: Having the following _dict_ structure
>>> d = {
'email': ['e_val1', 'e_val2', 'e_val3', 'e_val4', 'e_val5'],
'id' : ['i_val1', 'i_val2', 'i_val3', 'i_val4'],
'ref' : ['r_val1', 'r_val2', 'r_val3', 'r_val4']
}
what would be an effective way to get the following _list_ of individual
dicts?
>>> l = [
{'email': 'e_val1', 'id': 'i_val1', 'ref': 'r_val1'},
{'email': 'e_val2', 'id': 'i_val2', 'ref': 'r_val2'},
{'email': 'e_val3', 'id': 'i_val3', 'ref': 'r_val3'},
{'email': 'e_val4', 'id': 'i_val4', 'ref': 'r_val4'},
{'email': 'e_val5', 'id': None, 'ref': None}
]
So far, i tried:
def split(d):
l, longest = [], False
for k, v in d.items():
longest = max(longest, len(v))
for pointer in range(longest):
r = {}
for k, v in d.items():
try:
r[k] = v[pointer]
except IndexError:
# current list is shorter than longest
r[k] = None
l.append(r)
return l
which shortly after became
from itertools import izip_longest
def split(d):
"""
With Python < 2.7,
- itertools.izip_longest(*d.values())
might be substituted by map with None:
- map(None, *d.values())
"""
_zipper = lambda keys: lambda v: dict(zip(keys, v))
lmb = _zipper(d.keys())
return map(lmb,
itertools.izip_longest(*d.values()))
Assuming Python 2.7.x, in terms of performance, what would be a better way?
>>> from timeit import timeit
>>> # with map
>>> timeit(setup="""
... d={'email': ['e_val1', 'e_val2', 'e_val3', 'e_val4', 'e_val5'],
... 'id': ['i_val1', 'i_val2', 'i_val3', 'i_val4'],
... 'ref': ['r_val1', 'r_val2', 'r_val3', 'i_val4']};
... _zipper=lambda keys: lambda v: dict(zip(keys, v))""",
... stmt="""
... lmb=_zipper(d.keys());
... map(lmb, map(None, *d.values()))""")
16.14903998374939
>>> # with itertools.izip_longest
>>> timeit(setup="""
... d={'email': ['e_val1', 'e_val2', 'e_val3', 'e_val4', 'e_val5'],
... 'id': ['i_val1', 'i_val2', 'i_val3', 'i_val4'],
... 'ref': ['r_val1', 'r_val2', 'r_val3', 'i_val4']};
... _zipper=lambda keys: lambda v: dict(zip(keys, v))""",
... stmt="""
... lmb=_zipper(d.keys());
... map(lmb, izip_longest(*d.values()))""")
18.98265790939331
_P.S. For those curious, the initial dict is a Django multi-value `QueryDict`, containing many `<input>` values with the same names._
Answer: Using `itertools.zip_longest` (called `izip_longest` on Python 2.7) and a list comprehension:
[{'email': i, 'id': j, 'ref': k} for (i, j, k) in itertools.zip_longest(d.get('email'), d.get('id'), d.get('ref'))]
**Example:**
>>> d
{'ref': ['r_val1', 'r_val2', 'r_val3', 'r_val4'], 'id': ['i_val1', 'i_val2', 'i_val3', 'i_val4'], 'email': ['e_val1', 'e_val2', 'e_val3', 'e_val4', 'e_val5']}
>>> [{'email': i, 'id': j, 'ref': k} for (i, j, k) in itertools.zip_longest(d.get('email'), d.get('id'), d.get('ref'))]
[{'ref': 'r_val1', 'id': 'i_val1', 'email': 'e_val1'}, {'ref': 'r_val2', 'id': 'i_val2', 'email': 'e_val2'}, {'ref': 'r_val3', 'id': 'i_val3', 'email': 'e_val3'}, {'ref': 'r_val4', 'id': 'i_val4', 'email': 'e_val4'}, {'ref': None, 'id': None, 'email': 'e_val5'}]
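A generic variant for Python 2.7 that does not hardcode the keys (using `izip_longest`, the Python 2 name) could look like this:

from itertools import izip_longest

keys = list(d)
# missing values are filled with None, matching the desired output
l = [dict(zip(keys, values)) for values in izip_longest(*(d[k] for k in keys))]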
|
I can't click a button type="submit" with Python mechanize
Question: I have this button:
<input class="bi bj bk bl" type="submit" name="add_photo_done" value="معاينة">
but I can't click on it. I tried this code:
self.br.submit("add_photo_done")
but it gives me the following error:
self.br.submit("add_photo_done")
File "/usr/local/lib/python2.7/dist-packages/mechanize/_mechanize.py", line 541, in submit
return self.open(self.click(*args, **kwds))
File "/usr/local/lib/python2.7/dist-packages/mechanize/_mechanize.py", line 203, in open
return self._mech_open(url, data, timeout=timeout)
File "/usr/local/lib/python2.7/dist-packages/mechanize/_mechanize.py", line 230, in _mech_open
response = UserAgentBase.open(self, request, data)
File "/usr/local/lib/python2.7/dist-packages/mechanize/_opener.py", line 193, in open
response = urlopen(self, req, data)
File "/usr/local/lib/python2.7/dist-packages/mechanize/_urllib2_fork.py", line 344, in _open
'_open', req)
File "/usr/local/lib/python2.7/dist-packages/mechanize/_urllib2_fork.py", line 332, in _call_chain
result = func(*args)
File "/usr/local/lib/python2.7/dist-packages/mechanize/_urllib2_fork.py", line 1170, in https_open
return self.do_open(conn_factory, req)
File "/usr/local/lib/python2.7/dist-packages/mechanize/_urllib2_fork.py", line 1115, in do_open
h.request(req.get_method(), req.get_selector(), req.data, headers)
File "/usr/lib/python2.7/httplib.py", line 1052, in request
self._send_request(method, url, body, headers)
File "/usr/lib/python2.7/httplib.py", line 1092, in _send_request
self.endheaders(body)
File "/usr/lib/python2.7/httplib.py", line 1048, in endheaders
self._send_output(message_body)
File "/usr/lib/python2.7/httplib.py", line 890, in _send_output
msg += message_body
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 252: ordinal not in range(128)
> I don't know where the problem is even though everything looks fine
Answer: At the beginning of your code, set the encoding:
import sys
reload(sys)
sys.setdefaultencoding('utf8')
|
Why is nested `if` in Python much slower than parallel `and`?
Question: I'm answering a question on an online judge. A section of the solution looks
like this:
if j > 0 and i < m and B[j-1] > A[i]:
imin = i + 1
elif i > 0 and j < n and A[i-1] > B[j]:
imax = i - 1
It passes the judge without issue.
However, if I change it to
if j > 0 and i < m:
if B[j-1] > A[i]:
imin = i + 1
elif i > 0 and j < n:
if A[i-1] > B[j]:
imax = i - 1
The judge immediately tells me I've exceeded time limit, even on a very simple
test case.
I believe the two pieces of code to be logically equivalent (Of course I could
have been wrong here. Please correct me if that's the case.). It surprised me
how much difference it makes by just changing parallel `and` to nested `if`.
Is my assumption right? If that's the case, why did that happen and how much
difference does it make?
(Sorry I am not able to provide the exact time for the program to run, since
the online judge doesn't tell how much it took to run the test case. The whole
function is available at [here](https://leetcode.com/discuss/15790/share-my-o-
log-min-m-n-solution-with-explanation) and the question is
[here](https://leetcode.com/problems/median-of-two-sorted-arrays/). It's about
finding the median of two sorted arrays put together. The test case that
failed included `[1], [1]` and `[1,1], [1,1]`)
The whole function:
def median(A, B):
m, n = len(A), len(B)
if m > n:
A, B, m, n = B, A, n, m
if n == 0:
raise ValueError
imin, imax, half_len = 0, m, (m + n + 1) / 2
while imin <= imax:
i = (imin + imax) / 2
j = half_len - i
if j > 0 and i < m and B[j-1] > A[i]:
# i is too small, must increase it
imin = i + 1
elif i > 0 and j < n and A[i-1] > B[j]:
# i is too big, must decrease it
imax = i - 1
else:
# i is perfect
if i == 0: max_of_left = B[j-1]
elif j == 0: max_of_left = A[i-1]
else: max_of_left = max(A[i-1], B[j-1])
if (m + n) % 2 == 1:
return max_of_left
if i == m: min_of_right = B[j]
elif j == n: min_of_right = A[i]
else: min_of_right = min(A[i], B[j])
return (max_of_left + min_of_right) / 2.0
Answer: Nesting your `if` inside is neither faster nor slower; to _Python_ the first `if` test compiles to **exactly** the same bytecode, if taken in isolation:
>>> import dis
>>> dis.dis(compile('''\
... if j > 0 and i < m and B[j-1] > A[i]:
... pass
... ''', '', 'exec'))
1 0 LOAD_NAME 0 (j)
3 LOAD_CONST 0 (0)
6 COMPARE_OP 4 (>)
9 POP_JUMP_IF_FALSE 48
12 LOAD_NAME 1 (i)
15 LOAD_NAME 2 (m)
18 COMPARE_OP 0 (<)
21 POP_JUMP_IF_FALSE 48
24 LOAD_NAME 3 (B)
27 LOAD_NAME 0 (j)
30 LOAD_CONST 1 (1)
33 BINARY_SUBTRACT
34 BINARY_SUBSCR
35 LOAD_NAME 4 (A)
38 LOAD_NAME 1 (i)
41 BINARY_SUBSCR
42 COMPARE_OP 4 (>)
45 POP_JUMP_IF_FALSE 48
2 >> 48 LOAD_CONST 2 (None)
51 RETURN_VALUE
>>> dis.dis(compile('''\
... if j > 0 and i < m:
... if B[j-1] > A[i]:
... pass
... ''', '', 'exec'))
1 0 LOAD_NAME 0 (j)
3 LOAD_CONST 0 (0)
6 COMPARE_OP 4 (>)
9 POP_JUMP_IF_FALSE 48
12 LOAD_NAME 1 (i)
15 LOAD_NAME 2 (m)
18 COMPARE_OP 0 (<)
21 POP_JUMP_IF_FALSE 48
2 24 LOAD_NAME 3 (B)
27 LOAD_NAME 0 (j)
30 LOAD_CONST 1 (1)
33 BINARY_SUBTRACT
34 BINARY_SUBSCR
35 LOAD_NAME 4 (A)
38 LOAD_NAME 1 (i)
41 BINARY_SUBSCR
42 COMPARE_OP 4 (>)
45 POP_JUMP_IF_FALSE 48
3 >> 48 LOAD_CONST 2 (None)
51 RETURN_VALUE
Only the line numbers differ in the above disassemblies.
However, you assume that the `elif` branch is still equivalent. It is not; because you moved a test _out_ of the first `if` condition, the `elif` will now be tested _less_ often, depending on `B[j-1] > A[i]`; e.g. if `j > 0 and i < m` is True, but `B[j-1] > A[i]` is False, your first version moves on to the `elif` test, but your second version skips it altogether: `i > 0 and j < n` is _never tested_!
Taking the `dis.dis()` output for your full `if..elif` tests, and removing
everything but the comparisons and jumps, you get:
6 COMPARE_OP 4 (>)
9 POP_JUMP_IF_FALSE 51
18 COMPARE_OP 0 (<)
21 POP_JUMP_IF_FALSE 51
42 COMPARE_OP 4 (>)
45 POP_JUMP_IF_FALSE 51
48 JUMP_FORWARD 48 (to 99)
57 COMPARE_OP 4 (>)
60 POP_JUMP_IF_FALSE 99
69 COMPARE_OP 0 (<)
72 POP_JUMP_IF_FALSE 99
93 COMPARE_OP 4 (>)
96 POP_JUMP_IF_FALSE 99
>> 99 LOAD_CONST 2 (None)
102 RETURN_VALUE
for your initial version, but moving the `and` sections into separate, nested
`if` tests you get:
6 COMPARE_OP 4 (>)
9 POP_JUMP_IF_FALSE 51
18 COMPARE_OP 0 (<)
21 POP_JUMP_IF_FALSE 51
42 COMPARE_OP 4 (>)
45 POP_JUMP_IF_FALSE 99
48 JUMP_FORWARD 48 (to 99)
57 COMPARE_OP 4 (>)
60 POP_JUMP_IF_FALSE 99
69 COMPARE_OP 0 (<)
72 POP_JUMP_IF_FALSE 99
93 COMPARE_OP 4 (>)
96 POP_JUMP_IF_FALSE 99
>> 99 LOAD_CONST 2 (None)
102 RETURN_VALUE
Note the `POP_JUMP_IF_FALSE` opcode at index 45. One jumps to the end (99),
the other jumps to the `elif` branch (at index 51)!
This is certainly a bug in your code: for example, with `A = B = [1]` the loop reaches `i = 0, j = 1`, where `j > 0 and i < m` holds but `B[j-1] > A[i]` does not. The nested version then skips both the `elif` and the `else`, never updates `imin` or `imax`, and spins forever, so the judge times out.
|
Problems in using python tkinter
Question: Initially, running the code, blinking will start row-wise. What my software should do is: if the user gives the input "1" in the last-row text area, the blinking should start column-wise.
Again, if the user gives the input "1", the letter should be selected and displayed in the top text area, and the entire process should start again.
I am not able to control the while loop when the user gives the input in the last-row text area.
I am a beginner in Python Tkinter and I am not able to do exactly what I want.
Thanking you in advance.
# your code goes here
import Tkinter
from Tkinter import *
import tkMessageBox
top = Tkinter.Tk()
content=0
def helloCallBack1():
tkMessageBox.showinfo( "Hello Python", "Hello World")
L1 = Label(top, text="Your Text Appears Here")
L1.grid(columnspan=10)
E1 = Entry(top, bd =5,width=40)
E1.grid(columnspan=10)
a1 = Tkinter.Button(top, text ="WATER",width="10", command = helloCallBack1)
a1.grid(row=4,column=0)
B = Tkinter.Button(top, text ="B", command = helloCallBack1)
B.grid(row=4,column=1)
C = Tkinter.Button(top, text ="C",command = helloCallBack1)
C.grid(row=4,column=2)
D = Tkinter.Button(top, text ="D", command = helloCallBack1)
D.grid(row=4,column=3)
E = Tkinter.Button(top, text ="E", command = helloCallBack1)
E.grid(row=4,column=4)
F = Tkinter.Button(top, text ="F", command = helloCallBack1)
F.grid(row=4,column=5)
row1 = Tkinter.Button(top, text =" ", command = helloCallBack1)
row1.grid(row=4,column=6)
a1 = Tkinter.Button(top, text ="ALARM",width="10",bg="red", command = helloCallBack1)
a1.grid(row=5,column=0)
H = Tkinter.Button(top, text ="H", command = helloCallBack1)
H.grid(row=5,column=1)
I = Tkinter.Button(top, text ="I", command = helloCallBack1)
I.grid(row=5,column=2)
J = Tkinter.Button(top, text ="J", command = helloCallBack1)
J.grid(row=5,column=3)
K = Tkinter.Button(top, text ="K", command = helloCallBack1)
K.grid(row=5,column=4)
L = Tkinter.Button(top, text ="L", command = helloCallBack1)
L.grid(row=5,column=5)
row2 = Tkinter.Button(top, text =" ", command = helloCallBack1)
row2.grid(row=5,column=6)
a1 = Tkinter.Button(top, text ="FOOD",width="10", command = helloCallBack1)
a1.grid(row=6,column=0)
N = Tkinter.Button(top, text ="N", command = helloCallBack1)
N.grid(row=6,column=1)
O = Tkinter.Button(top, text ="O",command = helloCallBack1)
O.grid(row=6,column=2)
P = Tkinter.Button(top, text ="P", command = helloCallBack1)
P.grid(row=6,column=3)
Q = Tkinter.Button(top, text ="Q",command = helloCallBack1)
Q.grid(row=6,column=4)
R = Tkinter.Button(top, text ="R", command = helloCallBack1)
R.grid(row=6,column=5)
row3 = Tkinter.Button(top, text =" ", command = helloCallBack1)
row3.grid(row=6,column=6)
a4 = Tkinter.Button(top, text ="BACKSPACE",width="10", command = helloCallBack1)
a4.grid(row=7,column=0)
S = Tkinter.Button(top, text ="S", command = helloCallBack1)
S.grid(row=7,column=1)
T = Tkinter.Button(top, text ="T", command = helloCallBack1)
T.grid(row=7,column=2)
U = Tkinter.Button(top, text ="U", command = helloCallBack1)
U.grid(row=7,column=3)
V = Tkinter.Button(top, text ="V", command = helloCallBack1)
V.grid(row=7,column=4)
W = Tkinter.Button(top, text ="W", command = helloCallBack1)
W.grid(row=7,column=5)
row4 = Tkinter.Button(top, text =" ", command = helloCallBack1)
row4.grid(row=7,column=6)
L2 = Label(top, text="Press 1 when you want to select")
L2.grid(columnspan=10)
E2 = Entry(top, bd =5,width=40)
E2.grid(columnspan=10)
content = E2.get()
content=0;
i=0;j=0;
while(i<30):
row1.after(4000*j+1000*i, lambda: row1.config(fg="red",bg="black"))
row1.after(4000*j+1000*(i+1), lambda: row1.config(fg="grey",bg=top["bg"]))
row2.after(4000*j+1000*(i+1), lambda: row2.config(fg="red",bg="black"))
row2.after(4000*j+1000*(i+2), lambda: row2.config(fg="grey",bg=top["bg"]))
row3.after(4000*j+1000*(i+2), lambda: row3.config(fg="red",bg="black"))
row3.after(4000*j+1000*(i+3), lambda: row3.config(fg="grey",bg=top["bg"]))
row4.after(4000*j+1000*(i+3), lambda: row4.config(fg="red",bg="black"))
row4.after(4000*j+1000*(i+4), lambda: row4.config(fg="grey",bg=top["bg"]))
content=E2.get()
if content==1:#this is not working
break
i=i+1
j=j+1
top.mainloop()
Answer: The problem is that your while loop runs in the blink of an eye, and you can't input anything meanwhile. Because of the `after` calls the blinking persists, but that does not mean you are still in your while loop. The program exited that loop long before you input anything into the box.
What I would do is bind the entry box to a key (like Return) and, when the key is pressed, check the content of the entry box; if it is 1 then stop the blinking.
Also, you can just bind this whole thing to the `1` key and avoid the Entry widget stuff altogether.
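A rough sketch of that binding approach (hypothetical handler names; it is meant to replace the scheduling loop at the end of your script, before `top.mainloop()`):

selected = [False]

def on_select(event):
    if E2.get().strip() == "1":  # get() returns a string, not an int
        selected[0] = True
        E2.delete(0, END)

E2.bind('<Return>', on_select)

def blink(buttons, index=0):
    if selected[0]:
        return  # stop cycling once the user has selected
    for b in buttons:
        b.config(fg="grey", bg=top["bg"])
    buttons[index].config(fg="red", bg="black")
    top.after(1000, blink, buttons, (index + 1) % len(buttons))

blink([row1, row2, row3, row4])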
|
Paramiko finishes the process before reading all output
Question: I'm trying to make a real-time SSH library, but as usual I'm getting stuck on things. I have taken this code from [Long-running ssh commands in python paramiko module (and how to end them)](http://stackoverflow.com/questions/760978/long-running-ssh-commands-in-python-paramiko-module-and-how-to-end-them), but this code doesn't print the whole output.
I guess that when the while loop exits on channel.exit_status_ready() the channel still has data to read. I've been trying to fix this, but my fix did not work on all inputs.
How can I make this work so it prints the output of all kinds of commands?
import paramiko
import select
client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect('host.example.com')
channel = client.get_transport().open_session()
channel.exec_command("cd / && ./test.sh")
while True:
if channel.exit_status_ready():
break
rl, wl, xl = select.select([channel], [], [], 0.0)
if len(rl) > 0:
print channel.recv(1024)
test.sh:
echo 1
wait 1
echo 2
wait 1
echo 3
Output:
1
2
Process finished with exit code 0
Thanks.
Answer: I couldn't reproduce the problem with your command, but I can reproduce it with a command like `cat some_big_file.txt`.
So it looks like you are right in your hypothesis. The exit status can be ready before you read all the stuff from your `channel`. It's not clear if you really need to use `select`. If not, I would rewrite the loop:
while True:
buf = channel.recv(1024)
if not buf:
break
print buf
Such a loop will keep reading the channel while it has data in it. If you really want to use `select`, you can put the above loop just after your loop; it will read and print the remaining data.
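Put together, it could look something like this (a sketch; the 0.5 second `select` timeout is an arbitrary choice):

# Poll while the command is running...
while not channel.exit_status_ready():
    rl, wl, xl = select.select([channel], [], [], 0.5)
    if rl:
        print channel.recv(1024)

# ...then drain whatever is still buffered after exit.
while True:
    buf = channel.recv(1024)
    if not buf:
        break
    print buf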
|
Python RegEx String Parsing with inconsistent data
Question: I have a string that I need to extract values out of. The problem is the
string is inconsistent. Here's an example of the script that has the string
within it.
import re
RAW_Data = "Name Multiple Words Zero Row* (78.59/0) Name Multiple Words2* (96/24.56) Name Multiple Words3* (0/32.45) Name Multiple Words4* (96/12.58) Name Multiple Words5* (96/0) Name Multiple Words Zero Row6* (0) Name Multiple Words7* (96/95.57) Name Multiple Words Zero Row8* (0) Name Multiple Words9*"
First_Num = re.findall(r'\((.*?)\/*', RAW_Data)
Seg_Length = re.findall(r'\/(.*?)\)', RAW_Data)
#WithinParenthesis = re.findall(r'\((.*?)\)', RAW_Data) #This works correctly
print First_Num
print Seg_Length
del RAW_Data
What I need to get out of the string are all values within the parenthesis.
However, I need some logic that will handle the absence of the "/" between the
numbers. Basically if the "/" doesn't exist make both values for First_Num and
Seg_Length equal to "0". I hope this makes sense.
Answer: Use a simple regex and add some programming logic:
import re
rx = r'\(([^)]+)\)'
string = """Name Multiple Words Zero Row* (78.59/0) Name Multiple Words2* (96/24.56) Name Multiple Words3* (0/32.45) Name Multiple Words4* (96/12.58) Name Multiple Words5* (96/0) Name Multiple Words Zero Row6* (0) Name Multiple Words7* (96/95.57) Name Multiple Words Zero Row8* (0) Name Multiple Words9*"""
for match in re.finditer(rx, string):
parts = match.group(1).split('/')
First_Num = parts[0]
try:
Seg_Length = parts[1]
except IndexError:
Seg_Length = None
print "First_Num, Seg_Length: ", First_Num, Seg_Length
You might get along with a regex-only solution (e.g. with a conditional regex), but this approach is more likely to still be understood in three months. See a demo on [ideone.com](http://ideone.com/vAAETC).
|
Pickle data with a persistent_id to a binary object (dumps and loads)
Question: A first question I asked was [_how to load a pickle object and resolve certain references_](http://stackoverflow.com/questions/37026745/how-to-load-a-pickle-object-and-resolve-certain-references). The next problem I'm facing is that I cannot call [`dumps`](https://docs.python.org/3/library/pickle.html#pickle.dumps) or [`loads`](https://docs.python.org/3/library/pickle.html#pickle.loads) to convert objects to and from a binary representation.
`ContextAwareUnpickler`. How can I use these to convert an object to and back
from its binary representation? As far as I know this only works for files.
import pickle
class ContextAwarePickler(pickle.Pickler):
def persistent_id(self, obj):
# if this is a context, return the key
if isinstance(obj, Context):
return ("Context", context.key)
# pickle as usual
return None
class ContextAwareUnpickler(pickle.Unpickler):
def recover_context(self, key_id):
...
def persistent_load(self, pid):
type_tag, key_id = pid
if type_tag == "Context":
return self.recover_context(key_id)
else:
raise pickle.UnpicklingError("unsupported persistent object")
Answer: Your solution is similar to the one in `dill` (I'm the author) -- but not as
robust.
<https://github.com/uqfoundation/dill/blob/cccbea9b715e16b742288e1e5a21a687a4d4081b/dill/temp.py#L169>
(code snippet reproduced below)
def loadIO(buffer, **kwds):
"""load an object that was stored with dill.temp.dumpIO
buffer: buffer object
>>> dumpfile = dill.temp.dumpIO([1, 2, 3, 4, 5])
>>> dill.temp.loadIO(dumpfile)
[1, 2, 3, 4, 5]
"""
import dill as pickle
if PY3:
from io import BytesIO as StringIO
else:
from StringIO import StringIO
value = getattr(buffer, 'getvalue', buffer) # value or buffer.getvalue
if value != buffer: value = value() # buffer.getvalue()
return pickle.load(StringIO(value))
def dumpIO(object, **kwds):
"""dill.dump of object to a buffer.
Loads with "dill.temp.loadIO". Returns the buffer object.
>>> dumpfile = dill.temp.dumpIO([1, 2, 3, 4, 5])
>>> dill.temp.loadIO(dumpfile)
[1, 2, 3, 4, 5]
"""
import dill as pickle
if PY3:
from io import BytesIO as StringIO
else:
from StringIO import StringIO
file = StringIO()
pickle.dump(object, file)
file.flush()
return file
Note that you may want to be careful about things like flushing the buffer
on `dump`, as `dill` does.
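To answer the question directly: `pickle.Pickler` and `pickle.Unpickler` accept any file-like object, so an in-memory `io.BytesIO` buffer gives you `dumps`/`loads` behaviour with your subclasses. A minimal sketch, assuming the `ContextAwarePickler` and `ContextAwareUnpickler` classes from the question:
import io
def context_dumps(obj):
    buffer = io.BytesIO()
    ContextAwarePickler(buffer).dump(obj)
    return buffer.getvalue()  # the pickled bytes
def context_loads(data):
    return ContextAwareUnpickler(io.BytesIO(data)).load()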
|
imp.load_source() throwing "No Module Named" Error Python 2.7
Question: I'm currently using Python 2.7, and I'm trying to load a file like this:
myPlt = imp.load_source('SourceFile', 'path/to/SourceFile.py')
However, SourceFile.py imports module OtherModule, which is in the same
directory as SourceFile. The package structure looks like this:
/path
.../to
...SourceFile.py
...OtherModule.py
...__init__.py
When I run the load_source, I get the error "ImportError: No module named
OtherModule"
Is my load_source call incorrect? Is there an alternate way I should go about
importing SourceFile?
Answer: The `load_source` call itself is fine; the problem is that when SourceFile.py executes `import OtherModule`, Python looks the module up on `sys.path`, which does not include `/path/to`. Add that directory to `sys.path` before loading:
import sys
import imp
sys.path.insert(0, '/path/to')
myPlt = imp.load_source('SourceFile', '/path/to/SourceFile.py')
|
matplotlib runs but does not generate a graph
Question: I am trying to complete <http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_gui/py_image_display/py_image_display.html#using-matplotlib>. It runs but does not display anything:
import numpy as np
import cv2
from matplotlib import pyplot as plt
img = cv2.imread('messi5.jpg',0)
plt.imshow(img, cmap = 'gray', interpolation = 'bicubic')
plt.xticks([]), plt.yticks([]) # to hide tick values on X and Y axis
plt.show()
(I am using a Raspberry Pi and followed this tutorial to install OpenCV:
<http://www.pyimagesearch.com/2015/10/26/how-to-install-opencv-3-on-raspbian-jessie/>; subsequently I pip installed matplotlib.)
If I replace plt.show with plt.savefig it works. What is wrong?
* * *
After adding `import matplotlib; matplotlib.use('TkAgg')` and `import Tkinter`
(or `tkinter`), I get:
(cv) pi@raspberrypi:~/Desktop $ python tst4.py
Traceback (most recent call last):
File "tst4.py", line 5, in <module>
from matplotlib import pyplot as plt
File "/home/pi/.virtualenvs/cv/lib/python3.4/site- packages/matplotlib/pyplot.py", line 114, in <module>
_backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup()
File "/home/pi/.virtualenvs/cv/lib/python3.4/site- packages/matplotlib/backends/__init__.py", line 32, in pylab_setup
globals(),locals(),[backend_name],0)
File "/home/pi/.virtualenvs/cv/lib/python3.4/site- packages/matplotlib/backends/backend_tkagg.py", line 13, in <module>
import matplotlib.backends.tkagg as tkagg
File "/home/pi/.virtualenvs/cv/lib/python3.4/site- packages/matplotlib/backends/tkagg.py", line 9, in <module>
from matplotlib.backends import _tkagg
ImportError: cannot import name '_tkagg'
Answer: I've run into this issue myself. The problem is that the matplotlib
backend is not properly set within the virtual environment. It took me a
lot of trial and error, but you first need to install a few dependencies:
`$ sudo apt-get install tcl-dev tk-dev python-tk python3-tk`
And then _manually_ install matplotlib from source rather than using pip:
$ workon your_env_name
$ pip uninstall matplotlib
$ git clone https://github.com/matplotlib/matplotlib.git
$ cd matplotlib
$ python setup.py install
This should take care of the problem.
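Once reinstalled, a quick sanity check is to print the active backend from inside the virtual environment; it should report `TkAgg`:
$ python -c "import matplotlib; print(matplotlib.get_backend())"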
I detail my full experience and more details to the solution [on this
page](http://www.pyimagesearch.com/2015/08/24/resolved-matplotlib-figures-not-
showing-up-or-displaying/).
|
Tricky filling holes in an image
Question: I need to fill holes in images using python. This is the image with objects
that I managed to get - they are really edges of objects I want, so I need to
fill them. [](http://i.stack.imgur.com/IpauJ.png)
It seemed very straightforward using `ndimage.binary_fill_holes(A)`, but the
problem is that it produces this (manually filled with red colour):
[](http://i.stack.imgur.com/koNd2.png)
But I need this:
[](http://i.stack.imgur.com/kO9x0.png)
Any way this can be solved?
This is the first image without the axes if you want to give it a try:
[](http://i.stack.imgur.com/9JiQB.png)
Answer: I think I have found a solution. It is a little bit lengthy since I ran out of
time, but maybe it helps. I have coded it for this problem only, but it should
be easy to generalize it for many images.
Some naming conventions first:
* I define "first level regions" as compact regions which are enclosed by the background. Such first level regions may consist of different subregions.
* A first level region which consists of more than one subregion is called a critical region.
My basic idea is to compare the lengths of the contours of two subregions
which are part of one critical region. However, **I do not compare their
complete contour length, but only the segment which is close to the
background**. The one with the shorter contour segment close to the background
is considered a hole.
I'll start with the result images first.
Some overview of what we are talking about, vizualizing the naming conventions
above:
[](http://i.stack.imgur.com/DLwtM.png)
The two subregions of the critical region. The two border segments of each of
the regions which are close to the background are marked in different colours
(very thin, blue and dark red, but visible). These segments are obviously not
perfect ("thin" areas cause errors), but sufficient to compare their length:
[](http://i.stack.imgur.com/9QfrG.png)
The final result. In case that you want to have the hole "closed", let me
know, you just have to assign the original black contours to the regions
instead of to the background ([EDIT] I have included three marked lines of
code which assign the borders to the regions, as you wished):
[](http://i.stack.imgur.com/aBSVN.png)
Code is attached here. I have used the OpenCV contour function which is pretty
straigthforward, and some masking techniques. The code is legthy due to its
visualizations, sorry for its limited readability, but there seems to be no
two line solution to this problem.
Some final remarks: I first tried to do a matching of contours using sets of
points, which would avoid loops and allow the use of set.intersection to
determine the two contour segments close to the background, but since your
black lines are rather thick, the contours are slightly mismatched. I tried
skeletonization of contours, but that opened another can of worms, so I went
with a dumb approach: looping and calculating the distance between contour
points. There may be a nicer way to do that part, but it works.
I also considered using the [Shapely](https://pypi.python.org/pypi/Shapely
"Shapely") module, there might be ways gaining some advantage from it, but I
did not find any, so I dropped it again.
import numpy as np
import scipy.ndimage as ndimage
from matplotlib import pyplot as plt
import cv2
img= ndimage.imread('image.png')
# Label the different original regions
labels, n_regions = ndimage.label(img)
print "Original number of regions found: ", n_regions
# count the number of pixels in each region
ulabels, sizes = np.unique(labels, return_counts=True)
print sizes
# Delete all regions with size < 2 and relabel
mask_size = sizes < 2
remove_pixel = mask_size[labels]
labels[remove_pixel] = 0
labels, n_regions = ndimage.label(labels) #,s)
print "Number of regions found (region size >1): ", n_regions
# count the number of pixels in each region
ulabels, sizes = np.unique(labels, return_counts=True)
print ulabels
print sizes
# Determine large "first level" regions
first_level_regions=np.where(labels ==1, 0, 1)
labeled_first_level_regions, n_fl_regions = ndimage.label(first_level_regions)
print "Number of first level regions found: ", n_fl_regions
# Plot regions and first level regions
fig = plt.figure()
a=fig.add_subplot(2,3,1)
a.set_title('All regions')
plt.imshow(labels, cmap='Paired', vmin=0, vmax=n_regions)
plt.xticks([]), plt.yticks([]), plt.colorbar()
a=fig.add_subplot(2,3,2)
a.set_title('First level regions')
plt.imshow(labeled_first_level_regions, cmap='Paired', vmin=0, vmax=n_fl_regions)
plt.xticks([]), plt.yticks([]), plt.colorbar()
for region_label in range(1,n_fl_regions):
mask= labeled_first_level_regions!=region_label
result = np.copy(labels)
result[mask]=0
subregions = np.unique(result).tolist()[1:]
print region_label, ": ", subregions
if len(subregions) >1:
print " Element 4 is a critical element: ", region_label
print " Subregions: ", subregions
#Critical first level region
crit_first_level_region=np.ones(labels.shape)
crit_first_level_region[mask]=0
a=fig.add_subplot(2,3,4)
a.set_title('Crit. first level region')
plt.imshow(crit_first_level_region, cmap='Paired', vmin=0, vmax=n_regions)
plt.xticks([]), plt.yticks([])
#Critical Region Contour
im = np.array(crit_first_level_region * 255, dtype = np.uint8)
_, contours0, hierarchy = cv2.findContours( im.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
crit_reg_contour = [contours0[0].flatten().tolist()[i:i+2] for i in range(0, len(contours0[0].flatten().tolist()), 2)]
print crit_reg_contour
print len(crit_reg_contour)
#First Subregion
mask2= labels!=subregions[1]
first_subreg=np.ones(labels.shape)
first_subreg[mask2]=0
a=fig.add_subplot(2,3,5)
a.set_title('First subregion: '+str(subregions[0]))
plt.imshow(first_subreg, cmap='Paired', vmin=0, vmax=n_regions)
plt.xticks([]), plt.yticks([])
#First Subregion Contour
im = np.array(first_subreg * 255, dtype = np.uint8)
_, contours0, hierarchy = cv2.findContours( im.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
first_sub_contour = [contours0[0].flatten().tolist()[i:i+2] for i in range(0, len(contours0[0].flatten().tolist()), 2)]
print first_sub_contour
print len(first_sub_contour)
#Second Subregion
mask3= labels!=subregions[0]
second_subreg=np.ones(labels.shape)
second_subreg[mask3]=0
a=fig.add_subplot(2,3,6)
a.set_title('Second subregion: '+str(subregions[1]))
plt.imshow(second_subreg, cmap='Paired', vmin=0, vmax=n_regions)
plt.xticks([]), plt.yticks([])
#Second Subregion Contour
im = np.array(second_subreg * 255, dtype = np.uint8)
_, contours0, hierarchy = cv2.findContours( im.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
second_sub_contour = [contours0[0].flatten().tolist()[i:i+2] for i in range(0, len(contours0[0].flatten().tolist()), 2)]
print second_sub_contour
print len(second_sub_contour)
maxdist=6
print "Points in first subregion close to first level contour:"
close_1=[]
for p1 in first_sub_contour:
for p2 in crit_reg_contour:
if (abs(p1[0]-p2[0])+abs(p1[1]-p2[1]))<maxdist:
close_1.append(p1)
break
print close_1
print len(close_1)
print "Points in second subregion close to first level contour:"
close_2=[]
for p1 in second_sub_contour:
for p2 in crit_reg_contour:
if (abs(p1[0]-p2[0])+abs(p1[1]-p2[1]))<maxdist:
close_2.append(p1)
break
print close_2
print len(close_2)
for p in close_1:
result[p[1],p[0]]=1
for p in close_2:
result[p[1],p[0]]=2
if len(close_1)>len(close_2):
print "first subregion is considered a hole:", subregions[0]
hole=subregions[0]
else:
print "second subregion is considered a hole:", subregions[1]
hole=subregions[1]
#Plot Critical region with subregions
a=fig.add_subplot(2,3,3)
a.set_title('Critical first level region with subregions')
plt.imshow(result, cmap='Paired', vmin=0, vmax=n_regions)
plt.xticks([]), plt.yticks([])
result2=result.copy()
#Plot result
fig2 = plt.figure()
a=fig2.add_subplot(1,1,1)
a.set_title('Critical first level region with subregions and bordering contour segments')
plt.imshow(result2, cmap='flag', vmin=0, vmax=n_regions)
plt.xticks([]), plt.yticks([])
#Plot result
mask_hole=np.where(labels ==hole, True, False)
labels[mask_hole]=1
labels=np.where(labels > 1, 2, 1)
# [Edit] Next two lines include black borders into final result
mask_borders=np.where(img ==0, True, False)
labels[mask_borders]=2
fig3 = plt.figure()
a=fig3.add_subplot(1,1,1)
a.set_title('Final result')
plt.imshow(labels, cmap='flag', vmin=0, vmax=n_regions)
plt.xticks([]), plt.yticks([])
plt.show()
|
re.search in Python yields "none" when using a file as the input
Question: I am very new to stack overflow and to programming in general, and appreciate
any help you can provide!
I want to use the re.search function in Python to find text in a file (this is
just practice, because ultimately I want to use re.sub and regular expressions
to find/replace). But I can't get it to work!
For example, if I enter the following code into Python:
import re
SearchStr = 'world'
Result = re.search(SearchStr,"Hello world")
print Result
I get the following output: `<_sre.SRE_Match object at 0x106875e00>` Great!
But then I made a file called "python_test.txt" which contains the text "Hello
world", and I ran the following script:
import re
InFileName = 'python_test.txt'
InFile = open(InFileName, 'r')
SearchStr = 'world'
Result = re.search(SearchStr,InFile)
print Result
I get the following output: `None`
If I replace the last three lines with `InFile.read()`, I get `'Hello
world.\n'` as the output, so I think my script is reading the file just fine.
I have also tried using `InFile.read()` instead of `InFile` in my `re.search`
terms; that didn't work either. Why doesn't my script find "world" in my file?
Answer: `re.search` expects a string as the second argument, not a file object. Try:
Result = re.search(SearchStr, InFile.read())
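For completeness, a minimal working version of the script reads the file contents into a string first and then searches that string (note that a file object can only be `read()` once per open, so store the result):
import re
with open('python_test.txt', 'r') as InFile:
    Contents = InFile.read()
Result = re.search('world', Contents)
if Result:
    print Result.group()  # prints 'world'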
|
My Spark app has a read time out when reading from Cassandra and I don't know how to solve this
Question: My Spark app has a read timeout when reading from Cassandra and I don't know
how to solve it. Every time it reaches the part of my code mentioned below, it
hits a read timeout. I tried to change the structure of my code but this still
did not resolve the issue.
#coding = utf-8
import json
from pyspark import SparkContext, SparkConf
from pyspark.streaming import StreamingContext
from pyspark.sql import SQLContext, Row
from pyspark.streaming.kafka import KafkaUtils
from datetime import datetime, timedelta
def read_json(x):
try:
y = json.loads(x)
except:
y = 0
return y
def TransformInData(x):
try:
body = json.loads(x['body'])
return (body['articles'])
except:
return 0
def partition_key(source,id):
return source+chr(ord('A') + int(id[-2:]) % 26)
def articleStoreToCassandra(rdd,rdd_axes,source,time_interval,update_list,schedules_rdd):
rdd_article = rdd.map(lambda x:Row(id=x[1][0],source=x[1][5],thumbnail=x[1][1],title=x[1][2],url=x[1][3],created_at=x[1][4],last_crawled=datetime.now(),category=x[1][6],channel=x[1][7],genre=x[1][8]))
rdd_article_by_created_at = rdd.map(lambda x:Row(source=x[1][5],created_at=x[1][4],article=x[1][0]))
rdd_article_by_url = rdd.map(lambda x:Row(url=x[1][3],article=x[1][0]))
if rdd_article.count()>0:
result_rdd_article = sqlContext.createDataFrame(rdd_article)
result_rdd_article.write.format("org.apache.spark.sql.cassandra").options(table="articles", keyspace = source).save(mode ="append")
if rdd_article_by_created_at.count()>0:
result_rdd_article_by_created_at = sqlContext.createDataFrame(rdd_article_by_created_at)
result_rdd_article_by_created_at.write.format("org.apache.spark.sql.cassandra").options(table="article_by_created_at", keyspace = source).save(mode ="append")
if rdd_article_by_url.count()>0:
result_rdd_article_by_url = sqlContext.createDataFrame(rdd_article_by_url)
result_rdd_article_by_url.write.format("org.apache.spark.sql.cassandra").options(table="article_by_url", keyspace = source).save(mode ="append")
This part of my code has the problem and is connected to the error message
below
rdd_schedule = rdd.map(lambda x:(partition_key(x[1][5],x[1]
[0]),x[1][0])).subtract(schedules_rdd).map(lambda x:Row(source=x[0],type='article',scheduled_for=datetime.now().replace(second=0, microsecond=0)+timedelta(minutes=time_interval),id=x[1]))
I attached the error message below which is probably related to datastax.
if rdd_schedule.count()>0:
result_rdd_schedule = sqlContext.createDataFrame(rdd_schedule)
result_rdd_schedule.write.format("org.apache.spark.sql.cassandra").options(table="schedules", keyspace = source).save(mode ="append")
def zhihuArticleTransform(rdd):
rdd_cassandra =rdd.map(lambda x:(x[0],(x[0],x[1]['thumbnail'], x[1]['title'], x[1]['url'], datetime.fromtimestamp(float(x[1]['created_at'])),'zhihu', x[1]['category'] if x[1]['category'] else '', x[1]['channel'],''))) \
.subtract(zhihu_articles)
articleStoreToCassandra(rdd_cassandra,rdd_cassandra,'zhihu',5,[],zhihu_schedules)
conf = SparkConf().setAppName('allstreaming')
conf.set('spark.cassandra.input.consistency.level','QUORUM')
sc = SparkContext(conf=conf)
ssc = StreamingContext(sc,30)
sqlContext = SQLContext(sc)
start = 0
partition = 0
kafkaParams = {"metadata.broker.list": "localhost"}
"""
zhihustreaming
"""
zhihu_articles = sc.cassandraTable('keyspace','articles').map(lambda x:(x.id,(x.id,x.thumbnail,x.title,x.url,x.created_at+timedelta(hours=8),x.source,x.category,x.channel)))
zhihu_schedules=sqlContext.read.format('org.apache.spark.sql.cassandra').options(keyspace="keyspace", table="schedules").load().map(lambda x:(x.source,x.id))
zhihu_topic = 'articles'
zhihu_article_stream = KafkaUtils.createDirectStream(ssc, [zhihu_topic], kafkaParams)
zhihu_article_join_stream=zhihu_article_stream.map(lambda x:read_json(x[1])).filter(lambda x: x!=0).map(lambda x:TransformInData(x)).filter(lambda x: x!=0).flatMap(lambda x:(a for a in x)).map(lambda x:(x['id'].encode("utf-8") ,x))
zhihu_article_join_stream.transform(zhihuArticleTransform).pprint()
ssc.start() # Start the computation ssc.awaitTermination()
ssc.awaitTermination()
This is my error message:
[Stage 67:===================================================> (12 + 1) / 13]WARN 2016-05-04 09:18:36,943 org.apache.spark.scheduler.TaskSetManager: Lost task 7.0 in stage 67.0 (TID 231, 10.47.182.142): java.io.IOException: Exception during execution of SELECT "source", "type", "scheduled_for", "id" FROM "zhihu"."schedules" WHERE token("source", "type") > ? AND token("source", "type") <= ? ALLOW FILTERING: Cassandra timeout during read query at consistency QUORUM (3 responses were required but only 0 replica responded)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.com$datastax$spark$connector$rdd$CassandraTableScanRDD$$fetchTokenRange(CassandraTableScanRDD.scala:215)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$13.apply(CassandraTableScanRDD.scala:229)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD$$anonfun$13.apply(CassandraTableScanRDD.scala:229)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
at com.datastax.spark.connector.util.CountingIterator.hasNext(CountingIterator.scala:12)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$GroupedIterator.fill(Iterator.scala:966)
at scala.collection.Iterator$GroupedIterator.hasNext(Iterator.scala:972)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:425)
at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$3.apply(PythonRDD.scala:248)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1652)
at org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:208)
Caused by: com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout during read query at consistency QUORUM (3 responses were required but only 0 replica responded)
at com.datastax.driver.core.exceptions.ReadTimeoutException.copy(ReadTimeoutException.java:69)
at com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:269)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:183)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:52)
at sun.reflect.GeneratedMethodAccessor199.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.datastax.spark.connector.cql.SessionProxy.invoke(SessionProxy.scala:33)
at com.sun.proxy.$Proxy8.execute(Unknown Source)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.com$datastax$spark$connector$rdd$CassandraTableScanRDD$$fetchTokenRange(CassandraTableScanRDD.scala:207)
... 14 more
Caused by: com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout during read query at consistency QUORUM (3 responses were required but only 0 replica responded)
at com.datastax.driver.core.exceptions.ReadTimeoutException.copy(ReadTimeoutException.java:69)
at com.datastax.driver.core.Responses$Error.asException(Responses.java:99)
at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:118)
at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:183)
at com.datastax.driver.core.RequestHandler.access$2300(RequestHandler.java:45)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.setFinalResult(RequestHandler.java:748)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:587)
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:991)
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:913)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:307)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:293)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:307)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:293)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:307)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:293)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:276)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:263)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:307)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:293)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:840)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:830)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:348)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:264)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout during read query at consistency QUORUM (3 responses were required but only 0 replica responded)
at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:60)
at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:37)
at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:213)
at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:204)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
... 12 more
[Stage 67:===================================================> (12 + 1) / 13]
Thanks for your help!
Answer: You have to create a `ReadConf` object and then increase the read timeout for
reading data. Likewise, using `WriteConf` you can increase the write timeout.
The Cassandra driver defaults to a timeout of only a few seconds for reads and
writes, so change that.
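With the DataFrame API used in the question, the same timeouts can also be raised through connector configuration keys on the `SparkConf`. A sketch, assuming the `spark.cassandra.read.timeout_ms` and `spark.cassandra.connection.timeout_ms` keys of your spark-cassandra-connector version (check its reference documentation, since key names vary between versions):
conf = SparkConf().setAppName('allstreaming')
conf.set('spark.cassandra.input.consistency.level', 'QUORUM')
# timeout values are in milliseconds
conf.set('spark.cassandra.read.timeout_ms', '120000')
conf.set('spark.cassandra.connection.timeout_ms', '30000')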
|
Stuck on PHP to Python Translation
Question: I'm attempting to translate some PHP code to Python and I'm stuck on the 4th
line in the following code (included for context):
$table = array();
for ($i = 0; $i < strlen($text); $i++) {
$char = substr($text, $i, $look_forward);
if (!isset($table[$char])) $table[$char] = array();
}
If `array()` is used to create an array in PHP, what is `$table[$char] =
array()` doing? Creating a new array inside an existing array? Or is it
extending the array?
What is this accomplishing? What would be the Python equivalent to this?
`if (!isset($table[$char])) $table[$char] = array();`
Answer: It seems to me you should use a different data structure than `list` for the
`table` variable; a `dict` fits the purpose well.
I've just made a quick try to mimic your PHP code in Python:
table = {} # use dictionary instead of list here
for char in text:
if char not in table:
table[char] = []
# do your stuff with table[char]
pass
Also, I suggest you look into
<https://docs.python.org/3/library/collections.html#collections.defaultdict>
With the class the code could be rewritten in the following way:
import collections
table = collections.defaultdict(list)
for char in text:
# do your stuff with table[char], empty list is created by default
pass
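One detail: the PHP `substr($text, $i, $look_forward)` extracts a window of `$look_forward` characters, not a single character, so a closer translation slices the string. A sketch, assuming `look_forward` is defined elsewhere in your script:
import collections
table = collections.defaultdict(list)
for i in range(len(text)):
    chunk = text[i:i + look_forward]  # PHP: substr($text, $i, $look_forward)
    table[chunk]  # an empty list is created on first access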
|
Parsing an XML file with tags that may or may not exist
Question: I'm trying to parse an XML file that depends on a tag which may or may not
exist.
How can I avoid the IndexError without using an exception handler?
Python script:
#!/usr/bin/python3
from xml.dom import minidom
doc = minidom.parse("Data.xml")
persons = doc.getElementsByTagName("person")
for person in persons:
print(person.getElementsByTagName("phone")[0].firstChild.data)
Data.xml :
<?xml version="1.0" encoding="UTF-8"?>
<obo>
<Persons>
<person>
<id>XA123</id>
<first_name>Adam</first_name>
<last_name>John</last_name>
<phone>01-12322222</phone>
</person>
<person>
<id>XA7777</id>
<first_name>Anna</first_name>
<last_name>Watson</last_name>
<relationship>
<type>Friends</type>
<to>XA123</to>
</relationship>
<!--<phone>01-12322222</phone>-->
</person>
</Persons>
</obo>
and I get an IndexError:
01-12322222
Traceback (most recent call last):
File "XML->Neo4j-try.py", line 29, in <module>
print(person.getElementsByTagName("phone")[0].firstChild.data)
IndexError: list index out of range
Answer: First, you need to check whether the current person has phone data, and proceed
further only if it has. Also, it is slightly better to store the result of
`getElementsByTagName()` in a variable to avoid doing the same query
repeatedly, especially when the actual XML has a lot more content in each
`person` element:
for person in persons:
phones = person.getElementsByTagName("phone")
if phones:
print(phones[0].firstChild.data)
|
Python read file into memory for repeated FTP copy
Question: I need to read a local file and copy it to a remote location with FTP. I copy
the same file, file.txt, to the remote location repeatedly, hundreds of times, with
different names like f1.txt, f2.txt, ... f1000.txt. Now, is it necessary to always
open, read, and close my local file.txt for every single FTP copy, or is there a
way to store the contents in a variable and use that each time, avoiding the file
open/close calls? file.txt is a small file of 6KB. Below is the code I am using
for i in range(1,101):
fname = 'file'+ str(i) +'.txt'
fp = open('file.txt', 'rb')
ftp.storbinary('STOR ' + fname, fp)
fp.close()
I tried reading into a string variable to replace fp, but ftp.storbinary
requires its second argument to have a read() method. Please suggest a better
way to avoid the file open/close, or let me know if it offers no performance
improvement at all. I am using Python 2.7.10 on Windows 7.
Answer: Simply open it before the loop, and close it after:
fp = open('file.txt', 'rb')
for i in range(1,101):
fname = 'file'+ str(i) +'.txt'
fp.seek(0)
ftp.storbinary('STOR ' + fname, fp)
fp.close()
**Update** Make sure you add `fp.seek(0)` before the call to `ftp.storbinary`,
otherwise the `read` call will exhaust the file in the first iteration as
noted by @eryksun.
**Update 2** depending on the size of the file it will probably be faster to
use `BytesIO`. This way the file content is saved in memory but will still be
a file-like object (ie it will have a `read` method).
from io import BytesIO
with open('file.txt', 'rb') as f:
output = BytesIO()
output.write(f.read())
for i in range(1, 101):
fname = 'file' + str(i) + '.txt'
output.seek(0)
    ftp.storbinary('STOR ' + fname, output)  # pass the BytesIO buffer, not fp
|
Tweepy Python library "media_ids parameter is invalid" and "Tweet must not have more than 4 mediaIds" when submitting status update. Codes 44 and 324
Question: I have some pretty simple code for uploading images to Twitter via the Tweepy
library and then posting a status update using the returned media ids. I've
seen a lot of questions on this topic here but none that have solved my
problem. Code is as follows.
import tweepy
from configparser import SafeConfigParser
config = SafeConfigParser()
config.read('/var/www/config.ini')
CONSUMER_KEY = config.get('twitter', 'CONSUMER_KEY')
CONSUMER_SECRET = config.get('twitter', 'CONSUMER_SECRET')
ACCESS_KEY = config.get('twitter', 'ACCESS_KEY')
ACCESS_SECRET = config.get('twitter', 'ACCESS_SECRET')
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
api = tweepy.API(auth)
file = open('/var/www/photo1.jpeg', 'rb')
r1 = api.media_upload(filename='/var/www/photo1.jpeg', file=file)
print(r1)
print(r1.media_id_string)
file = open('/var/www/photo2.jpeg', 'rb')
r2 = api.media_upload(filename='/var/www/photo2.jpeg', file=file)
print(r2)
print(r2.media_id_string)
media_ids = r1.media_id_string + ', ' + r2.media_id_string
print(media_ids)
api.update_status(media_ids=media_ids, status="Test Tweet")
When executing this script I get the following error at the last line
Traceback (most recent call last):
File "test2.py", line 26, in <module>
api.update_status(media_ids=media_ids, status="Test Tweet")
File "/usr/local/lib/python3.4/dist-packages/tweepy/api.py", line 194, in update_status
)(post_data=post_data, *args, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/tweepy/binder.py", line 245, in _call
return method.execute()
File "/usr/local/lib/python3.4/dist-packages/tweepy/binder.py", line 229, in execute
raise TweepError(error_msg, resp, api_code=api_error_code)
tweepy.error.TweepError: [{'message': 'media_ids parameter is invalid.', 'code': 44}]
The 2 media upload requests return the following objects:
Media(media_id=728190961679929344, size=879715, expires_after_secs=86400,
media_id_string='728190961679929344', _api=<tweepy.api.API object at
0x7ffaf4d8fda0>, image={'h': 4000, 'w': 5000, 'image_type': 'image/jpeg'})
and
Media(media_id=728190987122532353, size=17489, expires_after_secs=86400,
media_id_string='728190987122532353', _api=<tweepy.api.API object at
0x7ffaf4d8fda0>, image={'h': 369, 'w': 640, 'image_type': 'image/jpeg'})
from which I extract the media ids of `728190961679929344` and
`728190987122532353` as strings through the media_id_string variable and
combine them into a single string separated by commas i.e.
`728190961679929344, 728190987122532353`. I've tried with and without the
space, in single and double quotations, singularly quoted and quoting the
entire string but nothing works.
If instead I try to update with just a single image id, as in the following:
import tweepy
from configparser import SafeConfigParser
config = SafeConfigParser()
config.read('/var/www/config.ini')
CONSUMER_KEY = config.get('twitter', 'CONSUMER_KEY')
CONSUMER_SECRET = config.get('twitter', 'CONSUMER_SECRET')
ACCESS_KEY = config.get('twitter', 'ACCESS_KEY')
ACCESS_SECRET = config.get('twitter', 'ACCESS_SECRET')
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
api = tweepy.API(auth)
file = open('/var/www/photo1.jpeg', 'rb')
r1 = api.media_upload(filename='/var/www/photo1.jpeg', file=file)
print(r1)
print(r1.media_id_string)
file = open('/var/www/photo2.jpeg', 'rb')
r2 = api.media_upload(filename='/var/www/photo2.jpeg', file=file)
print(r2)
print(r2.media_id_string)
media_ids = r1.media_id_string + ', ' + r2.media_id_string
print(media_ids)
api.update_status(media_ids=r1.media_id_string, status="Test Tweet")
I get the following error again at the last line
Traceback (most recent call last):
File "test2.py", line 26, in <module>
api.update_status(media_ids=r1.media_id_string, status="Test Tweet")
File "/usr/local/lib/python3.4/dist-packages/tweepy/api.py", line 194, in update_status
)(post_data=post_data, *args, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/tweepy/binder.py", line 245, in _call
return method.execute()
File "/usr/local/lib/python3.4/dist-packages/tweepy/binder.py", line 229, in execute
raise TweepError(error_msg, resp, api_code=api_error_code)
tweepy.error.TweepError: [{'message': 'Tweet must not have more than 4 mediaIds.', 'code': 324}]
Clearly I only have one media id, so the error makes no sense. I assume I'm
formatting the request incorrectly, but I've tried a range of different formats
and none seem to work.
Any ideas? I'm out of them.
Thanks in advance.
Answer: It turns out `media_ids` should not be formatted as a comma-separated string but
as a list of strings. This differs from the Twitter API documentation, so Tweepy
must build the request parameter from the list before wrapping it. Here is my
code, firstly for multiple images:
import tweepy
from configparser import SafeConfigParser
config = SafeConfigParser()
config.read('/var/www/config.ini')
CONSUMER_KEY = config.get('twitter', 'CONSUMER_KEY')
CONSUMER_SECRET = config.get('twitter', 'CONSUMER_SECRET')
ACCESS_KEY = config.get('twitter', 'ACCESS_KEY')
ACCESS_SECRET = config.get('twitter', 'ACCESS_SECRET')
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
api = tweepy.API(auth)
file = open('/var/www/photo1.jpeg', 'rb')
r1 = api.media_upload(filename='/var/www/photo1.jpeg', file=file)
print(r1)
print(r1.media_id_string)
file = open('/var/www/photo2.jpeg', 'rb')
r2 = api.media_upload(filename='/var/www/photo2.jpeg', file=file)
print(r2)
print(r2.media_id_string)
media_ids = [r1.media_id_string, r2.media_id_string]
print(media_ids)
api.update_status(media_ids=media_ids, status="Test Tweet")
and then for a single image:
import tweepy
from configparser import SafeConfigParser
config = SafeConfigParser()
config.read('/var/www/config.ini')
CONSUMER_KEY = config.get('twitter', 'CONSUMER_KEY')
CONSUMER_SECRET = config.get('twitter', 'CONSUMER_SECRET')
ACCESS_KEY = config.get('twitter', 'ACCESS_KEY')
ACCESS_SECRET = config.get('twitter', 'ACCESS_SECRET')
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
api = tweepy.API(auth)
file = open('/var/www/photo1.jpeg', 'rb')
r1 = api.media_upload(filename='/var/www/photo1.jpeg', file=file)
print(r1)
print(r1.media_id_string)
file = open('/var/www/photo2.jpeg', 'rb')
r2 = api.media_upload(filename='/var/www/photo2.jpeg', file=file)
print(r2)
print(r2.media_id_string)
media_ids = [r1.media_id_string]
print(media_ids)
api.update_status(media_ids=media_ids, status="Test Tweet")
|
Why is python ctypes class descriptor called when object is not being destroyed?
Question:
>>> from ctypes import *
>>> class A(Structure):
... _fields_ = [('a', c_int)]
... def __del__(self):
... print("destructor called")
...
>>> a = (A * 10)()
>>> a[0]
<__main__.A object at 0x7f93038cdd08>
>>> a[0]
destructor called
<__main__.A object at 0x7f93038cde18>
>>> a[0]
destructor called
<__main__.A object at 0x7f93038cdd08>
>>> a[0]
destructor called
<__main__.A object at 0x7f93038cde18>
Why is the destructor being called here ? Why is the address of the object
different each time ? Why doesn't python crash with a double free error ?
Answer: `a` is a _proxy object_ , representing an array of C structs, in the Python
world. Each time you index into this object, `ctypes` creates a _new instance_
of the `A` class for you, to proxy the contained C structs.
Now, because you don't store any references to this new object, it is also
garbage collected as soon as its `repr()` value has been echoed in the
interpreter.
You could store the object produced by `a[0]` in a new variable:
>>> foo = a[0]
>>> foo
<__main__.A object at 0x11061ea60>
This object will always be distinct from any others you create by indexing
position 0 of `a`, but they all represent the same C struct because they
reference the same address:
>>> bar = a[0]
>>> foo is bar
False
>>> addressof(foo) == addressof(bar)
True
If you delete this reference the object is destructed again (provided you did
not create more references to the Python proxy object of course):
>>> del foo
destructor called
|
1D Random Walk from Matlab to Python
Question: I have a Matlab code that generates a 1D random walk.
%% probability to move up or down
prob = [0.05, 0.95];
start = 2; %% start with 2
positions(1) = start;
for i=2:1000
rr = rand(1);
down = rr<prob(1) & positions(i-1)>1;
up = rr>prob(2) & positions(i-1)<4;
positions(i) = positions(i-1)-down + up;
end
figure(1), clf
plot(positions)
This gives me the plot below [1D Random Walk with
Matlab](http://i.stack.imgur.com/URx9o.png)
I need to try to translate this in Python and I have came up with this (using
numpy):
import random
import numpy as np
import matplotlib.pyplot as plt
prob = [0.05, 0.95] ##probability to move up or down
N = 100 ##length of walk
def randomWalk(N):
positions=np.zeros(N)
start = 2 ##Start at 2
positions[0] = start
for i in range(1,100):
rr = random.randint(0,1)
if rr<prob[0] and positions[i-1]>1:
start -= 1
elif rr>prob[1] and positions[i-1]<4:
start += 1
positions[i] = start
return positions
plt.plot(randomWalk(N))
plt.show()
It looks fairly close to what I want (see figure below):[1D Random Walk with
Python](http://i.stack.imgur.com/Q476o.png)
But I wonder if they are really equivalent, because they do seem different:
The Python code seems spikier than the Matlab one.
What is missing in my Python code to achieve the perfect stepwise
increase/decrease (similar to the Matlab code)? Maybe it needs an "else" that
tells it to stay the same unless the two conditions are met. How do I
implement that?
Answer: You are doing a bunch of things differently.
For one, you are using `rand` in MATLAB, which returns a random float between
0 and 1. In Python, you are using `randint(0, 1)`, which returns a random
integer, either 0 or 1 (both ends are inclusive for `random.randint`). So
`rr < prob[0]` is true exactly when `rr` is 0, and `rr > prob[1]` is true exactly
when `rr` is 1, which means the walk tries to move at every single step; that is
why your plot is spikier. You want `random.random()`, which returns a random
float between 0 and 1.
Next, you are computing `down` _and_ `up` in MATLAB, but `down` _or_ `up` in
Python. For your particular case of probabilities these end up having the same
result, but they are syntactically different. You can use an almost identical
syntax to MATLAB in Python in this case.
Finally, you are calculating a lot more samples for MATLAB than Python (about
a factor of 10 more).
Here is a direct port of your MATLAB code to Python. The result for me is
pretty much the same as your MATLAB example (with different random numbers, of
course):
import random
import matplotlib.pyplot as plt
prob = [0.05, 0.95] # Probability to move up or down
start = 2 #Start at 2
positions = [start]
for _ in range(1, 1000):
rr = random.random()
down = rr < prob[0] and positions[-1] > 1
up = rr > prob[1] and positions[-1] < 4
positions.append(positions[-1] - down + up)
plt.plot(positions)
plt.show()
If speed is an issue you can probably speed this up by using
`np.random.random(1000)` to generate the random numbers up-front, and do the
probability comparisons up-front as well in a vectorized manner.
So something like this:
import random
import numpy as np
import matplotlib.pyplot as plt
prob = [0.05, 0.95] # Probability to move up or down
start = 2 #Start at 2
positions = [start]
rr = np.random.random(1000)
downp = rr < prob[0]
upp = rr > prob[1]
for idownp, iupp in zip(downp, upp):
down = idownp and positions[-1] > 1
up = iupp and positions[-1] < 4
positions.append(positions[-1] - down + up)
plt.plot(positions)
plt.show()
Edit: To explain a bit more about the second example, basically what I am
doing is pre-computing whether the probability is below the first threshold or
above the second for every step ahead of time. This is much faster than
computing a random sample and doing the comparison at each step of the loop.
Then I am using `zip` to combine those two random sequences into one sequence
where each element is the pair of corresponding elements from the two
sequences. This is assuming python 3, if you are using python 2 you should use
`itertools.izip` instead of `zip`.
So it is roughly equivalent to this:
import random
import numpy as np
import matplotlib.pyplot as plt
prob = [0.05, 0.95] # Probability to move up or down
start = 2 #Start at 2
positions = [start]
rr = np.random.random(1000)
downp = rr < prob[0]
upp = rr > prob[1]
for i in range(len(rr)):
idownp = downp[i]
iupp = upp[i]
down = idownp and positions[-1] > 1
up = iupp and positions[-1] < 4
positions.append(positions[-1] - down + up)
plt.plot(positions)
plt.show()
In python, it is generally preferred to iterate over values, rather than
indexes. There is pretty much never a situation where you need to iterate over
an index. If you find yourself doing something like `for i in
range(len(foo)):`, or something equivalent to that, you are almost certainly
doing something wrong. You should either iterate over `foo` directly, or if
you need the index for something else you can use something like `for i, ifoo
in enumerate(foo):`, which gets you both the elements of foo and their
indexes.
Iterating over indexes is common in MATLAB because of various limitations in
the MATLAB language. It is technically possible to do something similar to
what I did in that Python example in MATLAB, but in MATLAB it requires a lot
of boilerplate to be safe and will be extremely slow in most cases. In Python,
however, it is the fastest and cleanest approach.
|
How to parse through complex season and episode formatting in Python Pandas
Question: I'm trying to clean up some data and am struggling to do so in Python/Pandas.
I have a series of data with TV Show Titles. I would like to do the following:
1. check if there are integers at the end of the string
2. if there is only one integer, return everything before that part of the string
3. if there are multiple parts of the string that are integers, return everything up to and including the first integer
So here are my inputs:
Brooklyn 99 103
Hit The Floor 110
Outputs:
Brooklyn 99
Hit The Floor
As a separate function (or functions), I would like to remove any additional
season/episode formatting and any strings after it:
Inputs
Hot in Cleveland s6 ep03
Mutt & Stuff #111
LHH ATL 08/31a HD
LHH ATL 04/04 Check
Esther With Hot Chicks Ep. 1
Suspect 2/24
Suspect 2/24 HD
Output
Hot in Cleveland
Mutt & Stuff
LHH ATL
LHH ATL
Esther With Hot Chicks
Suspect
Suspect
I've written a function like so:
def digit(value):
return value.isdigit()
def another(value):
li = value.split(" ")
x = len(filter(digit, value))
ind = li.index( str(filter(digit, li)[0]) )
try:
if x > 1:
return " ".join(li[:ind+1])
else:
value.str.replace(r'(\D+).*', r'\1').str.replace(r'\s+.$', '').str.strip()
except:
return value.str.replace(r'(\D+).*', r'\1').str.replace(r'\s+.$', '').str.strip()
data["LongTitleAdjusted"] = data["Long Title"].apply(another)
data["LongTitleAdjusted"]
but I am getting this error:
AttributeError Traceback (most recent call last)
<ipython-input-49-3526b96a8f5a> in <module>()
15 return value.str.replace(r'(\D+).*', r'\1').str.replace(r'\s+.$', '').str.strip()
16
---> 17 data["LongTitleAdjusted"] = data["Long Title"].apply(another)
18 data["LongTitleAdjusted"]
C:\Users\lehmank\AppData\Local\Continuum\Anaconda2\lib\site- packages\pandas\core\series.pyc in apply(self, func, convert_dtype, args, **kwds)
2167 values = lib.map_infer(values, lib.Timestamp)
2168
-> 2169 mapped = lib.map_infer(values, f, convert=convert_dtype)
2170 if len(mapped) and isinstance(mapped[0], Series):
2171 from pandas.core.frame import DataFrame
pandas\src\inference.pyx in pandas.lib.map_infer (pandas\lib.c:62578)()
<ipython-input-49-3526b96a8f5a> in another(value)
13 value.str.replace(r'(\D+).*', r'\1').str.replace(r'\s+.$', '').str.strip()
14 except:
---> 15 return value.str.replace(r'(\D+).*', r'\1').str.replace(r'\s+.$', '').str.strip()
16
17 data["LongTitleAdjusted"] = data["Long Title"].apply(another)
AttributeError: 'unicode' object has no attribute 'str'
Answer: This will do the trick with your sample data set:
df['title'].str.replace(r'(\D+).*', r'\1').str.replace(r'\s+.$', '').str.strip()
but it would also convert `Brooklyn 99` to `Brooklyn`
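If that regex is too greedy for titles like `Brooklyn 99`, a plain-Python helper implementing the question's first set of rules might look like this (a sketch, assuming whitespace-delimited tokens, where a token counts as an integer token if it contains any digit):
def strip_episode(title):
    tokens = title.split()
    # indexes of tokens that contain at least one digit
    digit_idx = [i for i, t in enumerate(tokens) if any(c.isdigit() for c in t)]
    if not digit_idx:
        return title
    if len(digit_idx) == 1:
        # a single integer token: drop it and everything after
        return ' '.join(tokens[:digit_idx[0]])
    # several integer tokens: keep everything through the first one
    return ' '.join(tokens[:digit_idx[0] + 1])
df['title'].apply(strip_episode)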
|
Python: How can I tell if my python has SSL?
Question: How can I tell if my source-built python has SSL enabled? either
* after running configure, but before compiling (best).
* after compiling, when I can run the python.
Context:
* a script that populates a bare linux box.
* Prerequisite is to install openssl, so that Python can do https.
* trying to detect if this prerequisite is not met.
Answer: If all you want to do is figure out if `openssl` is installed, you can parse
the output of `openssl version`:
$ openssl version
OpenSSL 1.0.2g-fips 1 Mar 2016
You can get [all sorts of
information](https://www.openssl.org/docs/manmaster/apps/version.html) from
`version`, for example, the directory where its stored:
$ openssl version -d
OPENSSLDIR: "/usr/lib/ssl"
As far as Python goes, I'm not sure how you can tell before running configure
(maybe check the contents of `config.log`?) but once Python is installed;
simply parse the output of `ssl.OPENSSL_VERSION`, like this:
$ python -c "import ssl; print(ssl.OPENSSL_VERSION)"
OpenSSL 1.0.2g-fips 1 Mar 2016
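If the build lacks SSL entirely, the `ssl` import itself fails, so the simplest yes/no check for a provisioning script is the exit code of:
$ python -c "import ssl"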
For even more information, have a play with the `sysconfig` module, for
example:
$ python -c "import sysconfig; print(sysconfig.get_config_var('CONFIG_ARGS'))"
'--enable-shared' '--prefix=/usr' '--enable-ipv6' '--enable-unicode=ucs4' '--with-dbmliborder=bdb:gdbm' '--with-system-expat' '--with-computed-gotos' '--with-system-ffi' '--with-fpectl' 'CC=x86_64-linux-gnu-gcc' 'CFLAGS=-Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security ' 'LDFLAGS=-Wl,-Bsymbolic-functions -Wl,-z,relro'
|
how to plot two very large lists in python
Question: I've two lists:
time_str = ['06:03' '06:03' '06:04' ..., '19:58' '19:59' '19:59']
value = ['3.25' '3.09' '2.63' ..., '2.47' '2.57' '2.40']
I tried the code below but got an error:
plt.plot(time_str,value)
plt.xlabel('Time')
plt.show()
> ValueError: invalid literal for float(): 06:00
How can I plot time_str on the x axis and value on the y axis? time_str has
values for every minute; maybe we can show ticks every 15 minutes on the x axis.
I tried several ways but couldn't get the line plot right. Can anyone suggest
something?
Edit: After some trials, I have this code, yet the axis labels don't come out
properly (it looks as though Python just scribbled something):
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.xaxis.set_major_locator(md.MinuteLocator(interval=15))
ax.xaxis.set_major_formatter(md.DateFormatter('%H:%M'))
plt.plot(y)
plt.xticks(range(len(x)), x)
plt.show()
[](http://i.stack.imgur.com/Edpsz.png)
Answer: You can plot every ith value using numpy's array slicing and indexing
(<http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#basic-
slicing-and-indexing>)
#only plot every 15th value
plt.plot(time_str[::15], value[::15])
**In response to question update**
The value error is not associated with the size of the list. You need to parse
your strings into datetime objects. Something like:
from datetime import datetime
times = [datetime.strptime(time, '%H:%M') for time in time_str]
**In response to comment**
You will have to adjust this to your needs, but using masks would be the
simplest way to get every 15 minutes:
import matplotlib.pyplot as plt
from datetime import datetime
import numpy as np
time_str = ['06:03', '06:18', '06:28', '19:33', '19:03', '19:59']
value = np.array(['3.25', '3.09', '2.63', '2.47', '2.57', '2.40'])
times = np.array([datetime.strptime(time, '%H:%M') for time in time_str])
time_deltas = np.array([(time - times[0]).total_seconds()/60. for time in times])
plt_times = times[time_deltas%15==0]
plt_values = value[time_deltas%15==0]
plt.plot_date(plt_times, plt_values)
plt.show()
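To additionally force tick marks every 15 minutes, independent of which points exist, you can combine `plot_date` with the locator and formatter the question's edit already imports; a sketch:
import matplotlib.dates as md
ax = plt.gca()
ax.xaxis.set_major_locator(md.MinuteLocator(interval=15))
ax.xaxis.set_major_formatter(md.DateFormatter('%H:%M'))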
|
Is there something wrong with splitting a command line with split(' ')?
Question: In most [answers](http://stackoverflow.com/questions/89228/calling-an-
external-command-in-python) (I found) to the question _"How to run an external
command"_ I see something along the lines
> If you want to run `ls -l` you need to use `subprocess.call(["ls", "-l"])`
What I normally do when I know what I will be running is call
`subprocess.call('ls -l'.split(' '))` to keep the command line visually in one
piece (it is usually a variable).
Is there anything inherently wrong with using `split()` as opposed to building
the list manually (again, when the command is known)? Or are these answers
crafted to explicitly show that a list is needed?
I tried to find some drawbacks (multiple spaces, escaped spaces, ...) but I do
not see where that approach could go wrong.
Note: this question is specifically about the robustness of splitting on
spaces, not the security issues or other (very important) considerations as
such.
Answer: Observe that this works:
>>> import subprocess
>>> subprocess.call(['ls', '-l', "my file"])
-rw-rw---- 1 john john 0 May 5 10:46 my file
0
But this does not:
>>> subprocess.call('ls -l "my file"'.split(' '))
ls: cannot access "my: No such file or directory
ls: cannot access file": No such file or directory
2
And this does work:
>>> import shlex
>>> shlex.split('ls -l "my file"')
['ls', '-l', 'my file']
>>> subprocess.call(shlex.split('ls -l "my file"'))
-rw-rw---- 1 john john 0 May 5 10:46 my file
0
### Recommendation
In Python philosophy, [explicit is better than
implicit](https://www.python.org/dev/peps/pep-0020/). Thus, of those three
forms, use this one:
subprocess.call(['ls', '-l', 'my file'])
This avoids all preprocessing and shows you clearly and unambiguously and
_explicitly_ what will be executed and what its arguments are.
|
Creating a Python command line application
Question: So I wrote a Python 3 library, which serves as an application 'backend'. Now I
can sit down with the interpreter, import the source file(s), and hack around
using the lib - I know how to do this.
But I would also like to build a command line 'frontend' application using the
library. My library defines a few objects which have high-level commands,
which should be visible by the application. Such commands may return some data
structures and the high-level commands would print them nicely. In other
words, the command line app would be a thin wrapper around the lib, passing
her input to the library commands, and presenting results to the user.
The best example of what I'm trying to achieve would probably be Mercurial
SCM, as it is written in Python and the 'hg' command does what I'm looking for
- for instance, 'hg commit -m message' will find the code responsible for the
'commit' command implementation, pass the arguments from the user and do its
work. On the way back, it might get some results and print them out nicely.
Is there a general way of doing it in Python? Like exposing
classes/methods/functions as 'high level' commands with an annotation? Does
anybody know of any tutorials?
Answer: You can do this with
[`argparse`](https://docs.python.org/3/howto/argparse.html). For example here
is the start of my [`deploy`](https://github.com/rsmith-nl/deploy) script.
def main(argv):
"""
Entry point for the deploy script.
Arguments:
argv: All command line arguments save the name of the script.
"""
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument('-v', '--verbose', action='store_true',
help='also report if files are the same')
parser.add_argument('-V', '--version', action='version',
version=__version__)
parser.add_argument('command', choices=['check', 'diff', 'install'])
fname = '.'.join(['filelist', pwd.getpwuid(os.getuid())[0]])
args = parser.parse_args(argv)
It uses an argument with choices to pick a function. You could define a
dictionary mapping choices to functions;
cmds = {'check': do_check, 'diff': do_diff, 'install': do_install}
fn = cmds[args.command]
If you make sure that all the dict keys are in the command choices, you don't
need to catch `KeyError`.
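Putting the pattern together end to end, here is a minimal runnable sketch (the `do_*` bodies are placeholders, not part of the real deploy script):
import argparse
def do_check(args):
    print('checking (verbose={})'.format(args.verbose))
def do_diff(args):
    print('diffing')
def do_install(args):
    print('installing')
def main(argv=None):
    parser = argparse.ArgumentParser(description='deploy files')
    parser.add_argument('-v', '--verbose', action='store_true')
    parser.add_argument('command', choices=['check', 'diff', 'install'])
    args = parser.parse_args(argv)
    cmds = {'check': do_check, 'diff': do_diff, 'install': do_install}
    cmds[args.command](args)  # dispatch to the chosen function
if __name__ == '__main__':
    main()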
|
LibreOffice problems drawing hash marks in shape using Python
Question: I am trying to create a hash mark pattern inside a shape using LibreOffice 5
on Windows 10 using the Python 3.3 that came with LibreOffice. Two thirds of
the code is similar to this
[post](http://stackoverflow.com/questions/36852645/create-flowchart-in-
libreoffice-using-python) with additional questions about creating hash marks
at the end of the code listing.
This is the Python code I tried.
import sys
print(sys.version)
import socket
import uno
# get the uno component context from the PyUNO runtime
localContext = uno.getComponentContext()
# create the UnoUrlResolver
resolver = localContext.ServiceManager.createInstanceWithContext("com.sun.star.bridge.UnoUrlResolver", localContext )
# connect to the running office
ctx = resolver.resolve( "uno:socket,host=localhost,port=2002;urp;StarOffice.ComponentContext" )
smgr = ctx.ServiceManager
# get the central desktop object
desktop = smgr.createInstanceWithContext( "com.sun.star.frame.Desktop",ctx)
model = desktop.getCurrentComponent()
# Create the shape
def create_shape(document, x, y, width, height, shapeType):
shape = model.createInstance(shapeType)
aPoint = uno.createUnoStruct("com.sun.star.awt.Point")
aPoint.X, aPoint.Y = x, y
aSize = uno.createUnoStruct("com.sun.star.awt.Size")
aSize.Width, aSize.Height = width, height
shape.setPosition(aPoint)
shape.setSize(aSize)
return shape
def formatShape(shape):
shape.setPropertyValue("FillColor", int("FFFFFF", 16)) # blue
shape.setPropertyValue("LineColor", int("000000", 16)) # black
aHatch = uno.createUnoStruct("com.sun.star.drawing.Hatch")
#HatchStyle = uno.createUnoStruct("com.sun.star.drawing.HatchStyle")
#aHatch.Style=HatchStyle.DOUBLE;
aHatch.Color=0x00ff00
aHatch.Distance=100
aHatch.Angle=450
shape.setPropertyValue("FillHatch", aHatch)
shape.setPropertyValue("FillStyle", "FillStyle.DOUBLE")
shape = create_shape(model, 0, 0, 10000, 10000, "com.sun.star.drawing.RectangleShape")
formatShape(shape)
drawPage.add(shape)
This code should set a double crosshatch pattern inside the rectangle, but no
pattern shows up inside the rectangle.
aHatch = uno.createUnoStruct("com.sun.star.drawing.Hatch")
#HatchStyle = uno.createUnoStruct("com.sun.star.drawing.HatchStyle")
#aHatch.Style=HatchStyle.DOUBLE;
aHatch.Color=0x00ff00
aHatch.Distance=100
aHatch.Angle=450
shape.setPropertyValue("FillHatch", aHatch)
shape.setPropertyValue("FillStyle", "FillStyle.DOUBLE")
The line to set the hatch style pattern fails with the following error:
uno.RuntimeException: pyuno.getClass: com.sun.star.drawing.HatchStyle is a ENUM, expected EXCEPTION,
Here are some links to
[Java](https://svn.apache.org/repos/asf/openoffice/trunk/test/testuno/source/fvt/uno/sd/shape/ShapeProperties.java)
and
[BASIC](https://wiki.openoffice.org/wiki/Documentation/BASIC_Guide/Structure_of_Drawings)
examples I used for reference.
Answer: > HatchStyle = uno.createUnoStruct("com.sun.star.drawing.HatchStyle")
This fails because
[HatchStyle](http://api.libreoffice.org/docs/idl/ref/namespacecom_1_1sun_1_1star_1_1drawing.html#a021284aa8478781ba1b958b81da7b608)
is an
[Enum](https://wiki.openoffice.org/wiki/Python/Transfer_from_Basic_to_Python#Enum),
not a
[Struct](https://wiki.openoffice.org/wiki/Python/Transfer_from_Basic_to_Python#Struct_Instance).
To use the HatchStyle enum, follow one of the three ways in the python example
from the Enum link.
> shape.setPropertyValue("FillStyle", "FillStyle.DOUBLE")
It looks like you are confusing "FillStyle.HATCH" and "HatchStyle.DOUBLE" from
the example. This is what the code should be in Python:
from com.sun.star.drawing.FillStyle import HATCH
shape.setPropertyValue("FillStyle", HATCH)
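The same import pattern should work for the hatch style enum from your commented-out lines (a sketch following the FillStyle example above; `DOUBLE` is one of the `HatchStyle` enum members):
from com.sun.star.drawing.HatchStyle import DOUBLE
aHatch.Style = DOUBLE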
This seems to be missing as well:
drawPage = model.getDrawPages().getByIndex(0)
|
Changing rectangle color on click in Python using Tk
Question: I'm trying to get a Tk rectangle created on a canvas to change its color when
clicked. Right now, no color change happens when the rectangle is clicked.
What do I need to be doing differently?
This is in Python3.5, by the way.
from tkinter import *
def set_color(id):
global alive, colors
alive = not alive
col = colors[alive]
canvas.itemconfigure(id, fill=col)
root = Tk()
canvas = Canvas(root)
canvas.grid(column=1, row=1, sticky=(N, S, E, W))
alive = False
colors = {True: "green", False: "red"}
id = canvas.create_rectangle((1, 1, 60, 60), fill="red")
canvas.tag_bind(id, "<ButtonPress-1>", set_color)
root.mainloop()
Answer: `tag_bind` passes an event to the function, so "id" is overwritten and now
contains the event object. So you can change from
def set_color(id):
## to
def set_color(event=None):
and it will work because there is only one object/id to deal with in this
program. `event=None` assigns a default value when no event is sent to the
function (for example, from a Button's command callback), so it will work for
all callers.
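If the canvas later holds more than one rectangle, a sketch of recovering the clicked item from the event itself (it relies on Tk's "current" tag, which refers to the item under the pointer):

def set_color(event=None):
    global alive
    alive = not alive
    # "current" tags the canvas item under the mouse pointer
    item = event.widget.find_withtag("current")[0]
    event.widget.itemconfigure(item, fill=colors[alive])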
|
wxPython: Switching between multiple panels with a button
Question: I would like to have two (I will add more later) panels that occupy the same
space within the frame and for them to be shown/hidden when the respective
button is pressed on the toolbar, "mListPanel" should be the default.
Currently the settings panel is shown when the application is launched and the
buttons don't do anything. I've searched and tried lots of stuff for hours and
still can't get it to work. I apologise if it's something simple, I've only
started learning python today.
This is what the code looks like now:
import wx
class mListPanel(wx.Panel):
def __init__(self, parent):
wx.Panel.__init__(self, parent=parent)
#wx.StaticText(self, -1, label='Search:')#, pos=(10, 3))
#wx.TextCtrl(self, pos=(10, 10), size=(250, 50))
class settingsPanel(wx.Panel):
def __init__(self, parent):
wx.Panel.__init__(self, parent=parent)
class bifr(wx.Frame):
def __init__(self):
wx.Frame.__init__(self, None, wx.ID_ANY, "Title")
self.listPanel = mListPanel(self)
self.optPanel = settingsPanel(self)
menuBar = wx.MenuBar()
fileButton = wx.Menu()
importItem = wx.Menu()
fileButton.AppendMenu(wx.ID_ADD, 'Add M', importItem)
importItem.Append(wx.ID_ANY, 'Import from computer')
importItem.Append(wx.ID_ANY, 'Import from the internet')
exitItem = fileButton.Append(wx.ID_EXIT, 'Exit')
menuBar.Append(fileButton, 'File')
self.SetMenuBar(menuBar)
self.Bind(wx.EVT_MENU, self.Quit, exitItem)
toolBar = self.CreateToolBar()
homeToolButton = toolBar.AddLabelTool(wx.ID_ANY, 'Home', wx.Bitmap('icons/home_icon&32.png'))
importLocalToolButton = toolBar.AddLabelTool(wx.ID_ANY, 'Import from computer', wx.Bitmap('icons/comp_icon&32.png'))
importToolButton = toolBar.AddLabelTool(wx.ID_ANY, 'Import from the internet', wx.Bitmap('icons/arrow_bottom_icon&32.png'))
settingsToolButton = toolBar.AddLabelTool(wx.ID_ANY, 'settings', wx.Bitmap('icons/wrench_plus_2_icon&32.png'))
toolBar.Realize()
self.Bind(wx.EVT_TOOL, self.switchPanels(), settingsToolButton)
self.Bind(wx.EVT_TOOL, self.switchPanels(), homeToolButton)
self.Layout()
def switchPanels(self):
if self.optPanel.IsShown():
self.optPanel.Hide()
self.listPanel.Show()
self.SetTitle("Home")
elif self.listPanel.IsShown():
self.listPanel.Hide()
self.optPanel.Show()
self.SetTitle("Settings")
else:
self.SetTitle("Error")
self.Layout()
def Quit(self, e):
self.Close()
if __name__ == "__main__":
app = wx.App(False)
frame = bifr()
frame.Show()
app.MainLoop()
Answer: First off, I would highly suggest that you learn about [wxpython
sizers](http://wiki.wxpython.org/UsingSizers) and get a good understanding of
them (they're really not that hard to understand) as soon as possible before
delving deeper into wxpython, just a friendly tip :).
As for your example, a few things: when you're not using sizers, you have to
give a size and position for every window or else they just won't show, so you'd
have to change your panel classes to something like this (again, this is only
for demonstration; you should be doing this with wx sizers, not position
and size):
class mListPanel(wx.Panel):
def __init__(self, parent):
wx.Panel.__init__(self, parent=parent,pos=(0,100),size=(500,500))
class settingsPanel(wx.Panel):
def __init__(self, parent):
        wx.Panel.__init__(self, parent=parent, pos=(0,200), size=(1000,1000))
further more, when binding an event it should look like this:
self.Bind(wx.EVT_TOOL, self.switchPanels, settingsToolButton)
self.Bind(wx.EVT_TOOL, self.switchPanels, homeToolButton)
notice how I've written only the name of the function without the added (), as
an event is passed to it; you can't pass your own parameters to a function
bound to an event (unless you do it with the syntax
`lambda e: FooEventHandler(parameters)`)
and the event handler (function) should look like this:
def switchPanels(self, event):
if self.optPanel.IsShown():
self.optPanel.Hide()
self.listPanel.Show()
self.SetTitle("Home")
elif self.listPanel.IsShown():
self.listPanel.Hide()
self.optPanel.Show()
self.SetTitle("Settings")
else:
self.SetTitle("Error")
self.Layout()
there should always be a second parameter next to self in functions that are
bound to events, as the event object is passed there, and you can find its
associated methods and parameters in the documentation (in this example it is
the wx.EVT_TOOL).
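For reference, a minimal sizer-based sketch of the same layout (this would go in `bifr.__init__`; both panels share one slot and are toggled with Show/Hide):

sizer = wx.BoxSizer(wx.VERTICAL)
sizer.Add(self.listPanel, 1, wx.EXPAND)
sizer.Add(self.optPanel, 1, wx.EXPAND)
self.SetSizer(sizer)
self.optPanel.Hide()  # start on the list panel
self.Layout()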
|
Can't access Bluemix object store from my Notebook
Question: I'm trying to read a couple of JSON files from my Bluemix object store into a
Jupyter notebook using Python. I've followed the examples I've found, but I'm
still getting a "No such file or directory" error.
Here is the code that should authenticate the object store and identify the
files:
# Set up Spark
from pyspark import SparkContext
from pyspark import SparkConf
if('config' not in globals()):
config = SparkConf().setAppName('warehousing_sql').setMaster('local')
if('sc' not in globals()):
sc= SparkContext(conf=config)
# Set the Hadoop configuration.
def set_hadoop_config(name, credentials):
prefix = "fs.swift.service." + name
hconf = sc._jsc.hadoopConfiguration()
hconf.set(prefix + ".auth.url", credentials['auth_url']+'/v3/auth/tokens')
hconf.set(prefix + ".auth.endpoint.prefix", "endpoints")
hconf.set(prefix + ".tenant", credentials['project_id'])
hconf.set(prefix + ".username", credentials['user_id'])
hconf.set(prefix + ".password", credentials['password'])
hconf.setInt(prefix + ".http.port", 8080)
hconf.set(prefix + ".region", credentials['region'])
hconf.setBoolean(prefix + ".public", True)
# Data Sources (generated by Insert to code)
credentials = {
'auth_url':'https://identity.open.softlayer.com',
'project':'***',
'project_id':'****',
'region':'dallas',
'user_id':'****',
'domain_id':'****',
'domain_name':'****',
'username':'****',
'password':"""****""",
'filename':'Warehousing-data.json',
'container':'notebooks',
'tenantId':'****'
}
set_hadoop_config('spark', credentials)
# The data files should now be accessible through URLs of the form
# swift://notebooks.spark/filename.json
Here is the calling code:
...
resource_path= "swift://notebooks.spark/"
Warehousing_data_json = "Warehousing-data.json"
Warehousing_sales_data_nominal_scenario_json = "Warehousing-sales_data-nominal_scenario.json"
...
Here is the error: IOError: [Errno 2] No such file or directory:
'swift://notebooks.spark/Warehousing-data.json'
I'm sorry if this seems like a novice question (I admit I am one), but I
think it's ridiculously complicated to set this up, and really bad form to rely
on an undocumented method SparkContext._jsc.hadoopConfiguration().
* * *
Added in response to Hobert's and Sven's comments:
Thanks Hobert. I don’t understand your comment about the definition for
"swift://notebooks**.spark**/" Unless I misunderstand the logic of the sample
I followed (which is essentially identical to what Sven shows in his
response), this path results from the call to sc._jsc.hadoopConfiguration(),
but it’s hard to know what this call actually does, since the
HadoopConfiguration class is not documented.
I also do not understand the alternatives to “use/add that definition for the
Hadoop configuration” or “alternatively, … use swift client inside of Spark to
access the JSON.” I suppose I would prefer the latter since I make no other
use of Hadoop in my notebook. Please point me to a more detailed explanation
of these alternatives.
Thanks Sven. You are correct that I did not show the actual reading of the
JSON files. The reading actually occurs within a method that is part of the
API for
[DOcplexcloud](https://developer.ibm.com/docloud/documentation/docloud/python-
api/). Here is the relevant code in my notebook:
resource_path= "swift://notebooks.spark/"
Warehousing_data_json = "Warehousing-data.json"
Warehousing_sales_data_nominal_scenario_json = "Warehousing-sales_data-nominal_scenario.json"
resp = client.execute(input= [{'name': "warehousing.mod",
'file': StringIO(warehousing_data_dotmod + warehousing_inputs + warehousing_dotmod + warehousing_outputs)},
{'name': Warehousing_data_json,
'filename': resource_path + Warehousing_data_json},
{'name': Warehousing_sales_data_nominal_scenario_json,
'filename': resource_path + Warehousing_sales_data_nominal_scenario_json}],
output= "results.json",
load_solution= True,
log= "solver.log",
gzip= True,
waittime= 300,
delete_on_completion= True)
Here is the stack trace:
IOError Traceback (most recent call last)
<ipython-input-8-67cf709788b3> in <module>()
29 gzip= True,
30 waittime= 300,
---> 31 delete_on_completion= True)
32
33 result = WarehousingResult(json.loads(resp.solution.decode("utf-8")))
/gpfs/fs01/user/sbf1-4c17d3407da8d0-a7ea98a5cc6d/.local/lib/python2.7/site-packages/docloud/job.pyc in execute(self, input, output, load_solution, log, delete_on_completion, timeout, waittime, gzip, parameters)
496 # submit job
497 jobid = self.submit(input=input, timeout=timeout, gzip=gzip,
--> 498 parameters=parameters)
499 response = None
500 completed = False
/gpfs/fs01/user/sbf1-4c17d3407da8d0-a7ea98a5cc6d/.local/lib/python2.7/site-packages/docloud/job.pyc in submit(self, input, timeout, gzip, parameters)
436 gzip=gzip,
437 timeout=timeout,
--> 438 parameters=parameters)
439 # run model
440 self.execute_job(jobid, timeout=timeout)
/gpfs/fs01/user/sbf1-4c17d3407da8d0-a7ea98a5cc6d/.local/lib/python2.7/site-packages/docloud/job.pyc in create_job(self, **kwargs)
620 self.upload_job_attachment(job_id,
621 attid=inp.name,
--> 622 data=inp.get_data(),
623 gzip=gzip)
624 return job_id
/gpfs/fs01/user/sbf1-4c17d3407da8d0-a7ea98a5cc6d/.local/lib/python2.7/site-packages/docloud/job.pyc in get_data(self)
110 data = self.data
111 if self.filename is not None:
--> 112 with open(self.filename, "rb") as f:
113 data = f.read()
114 if self.file is not None:
IOError: [Errno 2] No such file or directory: 'swift://notebooks.spark/Warehousing-data.json'
This notebook works just fine when I run it locally and resource_path is a
path on my own machine.
Sven, your code seems pretty much identical to what I have, and it follows
closely the sample I copied, so I do not understand why yours works and mine
doesn’t.
I have verified that the files are present on my Instance_objectstore.
Therefore it seems that swift://notebooks.spark/ does not point to this
objectstore. How that would happen has been a mystery to me from the start.
Again, the HadoopConfiguration class is not documented, so it is not possible
to know how it makes the association between the URL and the objectstore.
Answer: The error message you get `IOError: [Errno 2] No such file or directory:
'swift://notebooks.spark/Warehousing-data.json'` means that at that path there
is no such file. I think the setup of the Hadoop configuration was successful
otherwise you would get a different error message complaining about missing
credentials settings.
I have tested the following code in a Python notebook on Bluemix and it worked
for me. I took the sample code from the latest sample notebooks showing how to
load data from Bluemix Object Storage V3.
Method for setting the Hadoop configuration:
def set_hadoop_config(credentials):
"""This function sets the Hadoop configuration with given credentials,
so it is possible to access data using SparkContext"""
prefix = "fs.swift.service." + credentials['name']
hconf = sc._jsc.hadoopConfiguration()
hconf.set(prefix + ".auth.url", credentials['auth_url']+'/v3/auth/tokens')
hconf.set(prefix + ".auth.endpoint.prefix", "endpoints")
hconf.set(prefix + ".tenant", credentials['project_id'])
hconf.set(prefix + ".username", credentials['user_id'])
hconf.set(prefix + ".password", credentials['password'])
hconf.setInt(prefix + ".http.port", 8080)
hconf.set(prefix + ".region", credentials['region'])
hconf.setBoolean(prefix + ".public", True)
Insert credentials for the associated Bluemix Object Storage V3:
credentials_1 = {
'auth_url':'https://identity.open.softlayer.com',
'project':'***',
'project_id':'***',
'region':'dallas',
'user_id':'***',
'domain_id':'***',
'domain_name':'***',
'username':'***',
'password':"""***""",
'filename':'people.json',
'container':'notebooks',
'tenantId':'***'
}
Set the Hadoop configuration with the given credentials:
credentials_1['name'] = 'spark'
set_hadoop_config(credentials_1)
Read the JSON file using `sc.textFile()` into an `RDD` and print out the first 3 rows:
data_rdd = sc.textFile("swift://" + credentials_1['container'] + "." + credentials_1['name'] + "/" + credentials_1['filename'])
data_rdd.take(3)
Output:
[u'{"name":"Michael"}',
u'{"name":"Andy", "age":30}',
u'{"name":"Justin", "age":19}']
Read the JSON file using `sqlContext.read.json()` into a `DataFrame` and output
the first 3 rows:
data_df = sqlContext.read.json("swift://" + credentials_1['container'] + "." + credentials_1['name'] + "/" + credentials_1['filename'])
data_df.take(3)
Output:
[Row(age=None, name=u'Michael'),
Row(age=30, name=u'Andy'),
Row(age=19, name=u'Justin')]
|
ValueError: array length does not match index length
Question: I am practicing for contests like kaggle and I have been trying to use XGBoost
and am trying to get myself familiar with python 3rd party libraries like
pandas and numpy.
I have been reviewing scripts from this particular competition called the
Santander Customer Satisfaction Classification and I have been modifying
different forked scripts in order to experiment on them.
Here is one modified script through which I am trying to implement XGBoost:
import pandas as pd
from sklearn import cross_validation as cv
import xgboost as xgb
df_train = pd.read_csv("/Users/pavan7vasan/Desktop/Machine_Learning/Project Datasets/Santander_Customer_Satisfaction/train.csv")
df_test = pd.read_csv("/Users/pavan7vasan/Desktop/Machine_Learning/Project Datasets/Santander_Customer_Satisfaction/test.csv")
df_train = df_train.replace(-999999,2)
id_test = df_test['ID']
y_train = df_train['TARGET'].values
X_train = df_train.drop(['ID','TARGET'], axis=1).values
X_test = df_test.drop(['ID'], axis=1).values
X_train, X_test, y_train, y_test = cv.train_test_split(X_train, y_train, random_state=1301, test_size=0.4)
clf = xgb.XGBClassifier(objective='binary:logistic',
missing=9999999999,
max_depth = 7,
n_estimators=200,
learning_rate=0.1,
nthread=4,
subsample=1.0,
colsample_bytree=0.5,
min_child_weight = 3,
reg_alpha=0.01,
seed=7)
clf.fit(X_train, y_train, early_stopping_rounds=50, eval_metric="auc", eval_set=[(X_train, y_train), (X_test, y_test)])
y_pred = clf.predict_proba(X_test)
print("Cross validating and checking the score...")
scores = cv.cross_val_score(clf, X_train, y_train)
'''
test = []
result = []
for each in id_test:
test.append(each)
for each in y_pred[:,1]:
result.append(each)
print len(test)
print len(result)
'''
submission = pd.DataFrame({"ID":id_test, "TARGET":y_pred[:,1]})
#submission = pd.DataFrame({"ID":test, "TARGET":result})
submission.to_csv("submission_XGB_Pavan.csv", index=False)
Here is the stacktrace :
Traceback (most recent call last):
File "/Users/pavan7vasan/Documents/workspace/Machine_Learning_Project/Kaggle/XG_Boost.py", line 45, in <module>
submission = pd.DataFrame({"ID":id_test, "TARGET":y_pred[:,1]})
File "/anaconda/lib/python2.7/site-packages/pandas/core/frame.py", line 214, in __init__
mgr = self._init_dict(data, index, columns, dtype=dtype)
File "/anaconda/lib/python2.7/site-packages/pandas/core/frame.py", line 341, in _init_dict
dtype=dtype)
File "/anaconda/lib/python2.7/site-packages/pandas/core/frame.py", line 4798, in _arrays_to_mgr
index = extract_index(arrays)
File "/anaconda/lib/python2.7/site-packages/pandas/core/frame.py", line 4856, in extract_index
raise ValueError(msg)
ValueError: array length 30408 does not match index length 75818
I have tried solutions based on my searches, but I am not able to figure out
what the mistake is. Where have I gone wrong? Please let me know.
Answer: The problem is that you are defining `X_test` twice, as @maxymoo mentioned. First
you defined it as
X_test = df_test.drop(['ID'], axis=1).values
And then you redefine it with:
X_train, X_test, y_train, y_test = cv.train_test_split(X_train, y_train, random_state=1301, test_size=0.4)
Which means `X_test` now has a size equal to `0.4*len(X_train)`. Then after:
y_pred = clf.predict_proba(X_test)
you've got predictions for that held-out part of `X_train`, and you are trying
to create a dataframe from them together with the initial `id_test`, which has
the length of the original `X_test`.
You could use names like `X_fit` and `X_eval` in `train_test_split` so you do
not hide the initial `X_train` and `X_test`. As written, your `cross_val_score`
also runs on the reduced `X_train`, so you will not get the right answer and
your `cv` estimate will not line up with the public/private score.
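A minimal sketch of the renamed split (the names `X_fit`/`X_eval` are just suggestions; the rest of the script stays unchanged):

X_fit, X_eval, y_fit, y_eval = cv.train_test_split(
    X_train, y_train, random_state=1301, test_size=0.4)

clf.fit(X_fit, y_fit, early_stopping_rounds=50, eval_metric="auc",
        eval_set=[(X_fit, y_fit), (X_eval, y_eval)])

# predict on the full, untouched test set so lengths match id_test
y_pred = clf.predict_proba(X_test)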
|
Strip Punctuation From String in Python
Question: I'm working with documents, and I need to have the words isolated without
punctuation. I know how to use string.split(" ") to make each word just the
letters, but the punctuation baffles me.
Answer: Here is an example using a regex; the result is ['this', 'is', 'a',
'string', 'with', 'punctuation']:
s = " ,this ?is a string! with punctuation. "
import re
pattern = re.compile(r'\w+')
result = pattern.findall(s)
print(result)
|
I can't run a simple code using pyaudio - [Errno -9996] Invalid output device (no default output device)
Question: (I'm new at python)
I'm trying to run a simple code about pyaudio. I just copied and pasted a code
that I found on the pyaudio web site.
I get this error:
OSError Traceback (most recent call last)
<ipython-input-7-3fc52ceecbf3> in <module>()
15 channels=wf.getnchannels(),
16 rate=wf.getframerate(),
---> 17 output=True)
18
19 # read data
/home/gustavolg/anaconda3/lib/python3.5/site-packages/pyaudio.py in open(self, *args, **kwargs)
748 """
749
--> 750 stream = Stream(self, *args, **kwargs)
751 self._streams.add(stream)
752 return stream
/home/gustavolg/anaconda3/lib/python3.5/site-packages/pyaudio.py in __init__(self, PA_manager, rate, channels, format, input, output, input_device_index, output_device_index, frames_per_buffer, start, input_host_api_specific_stream_info, output_host_api_specific_stream_info, stream_callback)
439
440 # calling pa.open returns a stream object
--> 441 self._stream = pa.open(**arguments)
442
443 self._input_latency = self._stream.inputLatency
OSError: [Errno -9996] Invalid output device (no default output device)
I cannot figure out how to solve this error. I don't know if this has
something to do with the audio driver or if the code needs an output
declaration, i.e., whether I have to select an output.
The code:
import pyaudio
import wave
import sys
CHUNK = 1024
wf = wave.open("/home/gustavolg/anaconda3/aPython/file.wav", 'rb')
# instantiate PyAudio (1)
p = pyaudio.PyAudio()
# open stream (2)
stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
channels=wf.getnchannels(),
rate=wf.getframerate(),
output=True)
# read data
data = wf.readframes(CHUNK)
# play stream (3)
while len(data) > 0:
stream.write(data)
data = wf.readframes(CHUNK)
# stop stream (4)
stream.stop_stream()
stream.close()
# close PyAudio (5)
p.terminate()
I'm using python3 on Jupyter notebook.
Answer: check the following steps:
>>> import pyaudio
>>> pa = pyaudio.PyAudio()
>>> pa.get_default_output_device_info()
{'defaultLowOutputLatency': 0.008707482993197279, 'maxOutputChannels': 32, 'hostApi': 0, 'defaultSampleRate': 44100.0, 'defaultHighOutputLatency': 0.034829931972789115, 'name': 'default', 'index': 15, 'maxInputChannels': 32, 'defaultHighInputLatency': 0.034829931972789115, 'defaultLowInputLatency': 0.008707482993197279, 'structVersion': 2}
>>> pyaudio.pa.__file__
'/root/.virtualenvs/py3k/lib/python3.4/site-packages/_portaudio.cpython-34m.so'
>>>
make sure you have a default output device (that is what the error above
complains about); if not, you can [refer to
here](http://stackoverflow.com/questions/4672155/pyaudio-ioerror-no-default-
input-device-available?rq=1)
I hope it's useful for you!
|
Python: print function hang when printing global list of objects
Question: I'm currently writing a Python Telegram bot which is used to monitor Raspi IOs
and send messages to a channel. So basically it has a function that will
update a logging variable `llog`.
This function (`logUpdate`), as it's named, will remove entries that are more
than 5 mins old. In it, I tried to check the content of the global variable.
Upon printing, it just hangs.
This doesn't seem to block any other functionalities of the bot because I can
still call out other bot commands.
I don't think it's the bot. It must be some kind of data access problems.
I attach some code snippet below:
#!usr/bin/python
##
### RF Security bot start script
##
##
### Imports
##
import telegram as tg
import telegram.ext as tgExt
import RPi.GPIO as gpio
import time
from datetime import datetime as dt
##
### Common variables
##
NULLSENSOR = 0
PRESSENSOR = 1
MAGSENSOR = 2
sensDict = {NULLSENSOR:"No sensor",
PRESSENSOR:"Pressure sensor",
MAGSENSOR:"Magnetic sensor"}
# Event class
class ev(object):
timestamp = 0
sType = NULLSENSOR
def __init__(self, ts=0, st=NULLSENSOR):
self.timestamp = ts
self.sType = st
def toString(self):
if(sType == PRESSENSOR):
return str("-> @"+timestamp.strftime('%c')+
": Pressure sensor triggered\n")
elif(sType == MAGSENSOR):
return str("-> @"+timestamp.strftime('%c')+
": Magnetic sensor triggered\n")
else:
return ""
# Report log
llog = [] # Data log
lmutex = True # Log mutex for writing
##
### Hardware configuration
##
# GPIO callbacks
def pressureCallback(channel):
global llog
global lmutex
global trigCntGlobal
global trigCntPress
ep = ev(ts=dt.now(), st=PRESSENSOR)
print("---> Pressure sensor triggered at "+
ep.timestamp.strftime("%c"))
rfSecuBot.sendMessage('@channel', "Pressure sensor "+
"triggered.")
while(not lmutex):
pass
lmutex = False
llog.insert(0, ep)
trigCntGlobal = trigCntGlobal + 1
trigCntPress = trigCntPress + 1
lmutex = True
def magneticCallback(channel):
global llog
global lmutex
global trigCntGlobal
global trigCntMag
global rfSecuBot
    em = ev(ts=dt.now(), st=MAGSENSOR)  # magnetic sensor event
print("---> Magnetic sensor triggered at "+
em.timestamp.strftime("%c"))
rfSecuBot.sendMessage('@channel', "Magnetic sensor "+
"triggered.")
while(not lmutex):
pass
lmutex = False
llog.insert(0, em)
trigCntGlobal = trigCntGlobal + 1
trigCntMag = trigCntMag + 1
lmutex = True
# Periodic logging function
def logUpdate():
global llog
global lmutex
updTime = dt.now()
print("---> Updating log\n")
while(not lmutex):
pass
lmutex = False
for i in llog: ########### STUCK HERE
print(i.toString()) ###########
# Check log timestamps
for i in llog:
if((updTime - i.timestamp).total_seconds() > 300):
llog.remove(i)
for i in llog: ########### WAS STUCK HERE
print(i.toString()) ########### TOO
lmutex = True
print("---> Log updated\n")
# Formatting function
def logFormat():
global llog
global lmutex
logUpdate() # Asynchronous call to logUpdate to make sure
# that the log has been updated at the time
# of formatting
while(not lmutex):
pass
lmutex = False
flog = []
cnt = 0
for i in llog:
if(cnt < 10):
flog.append(i.toString())
cnt = cnt + 1
else:
break
lmutex = True
print("----> Formatted string:")
print(flog+"\n")
return flog
def listFormat():
global llog
global lmutex
logUpdate() # Asynchronous call to logUpdate to make sure
# that the log has been updated at the time
# of formatting
while(not lmutex):
pass
lmutex = False
flog = []
flog.append(" Sensors \n")
dLen = len(sensDict.keys())
if(dLen <= 1):
flog.append(sensDict.get(NULLSENSOR))
else:
sdItr = sensDict.iterkeys()
st = sdItr.next() # Had to add extra var
while(dLen > 1):
st = sdItr.next()
trigCnt = 0
for i in llog:
if(i.sType == st):
trigCnt = trigCnt + 1
if(trigCnt < 1):
pass
else:
flog.append("-> "+st+"\n")
flog.append(" No. of times tripped: "+
trigCnt+"\n")
lmutex = True
print("----> Formatted string:")
print(flog+"\n")
return flog
##
### Software configuration
##
def blist(bot, update):
print("--> List command received\n")
listString = "List of sensor trips in the last 5 minutes:\n"
listString = listString+listFormat()
print("> "+listString+"\n")
bot.sendMessage('@channel', listString)
def log(bot, update):
print("--> Log command received\n")
logString = "Log of last 10 occurrences:\n"
logString = logString+logFormat()
print("> "+logString+"\n")
bot.sendMessage('@channel', logString)
rfSecuBotUpd.start_polling(poll_interval=1.0,clean=True)
while True:
try:
time.sleep(1.1)
except KeyboardInterrupt:
print("\n--> Ctrl+C key hit\n")
gpio.cleanup()
rfSecuBotUpd.stop()
rfSecuBot = 0
quit()
break
## Callback registration and handlers are inserted afterwards
# Just in case...
print("--> Bot exiting\n")
gpio.cleanup()
rfSecuBotUpd.stop()
rfsecuBot = 0
print("\n\n\t *** EOF[] *** \t\n\n")
quit()
# EOF []
P.S. I think someone might suggest a 'class' version of this. Think it'll
work?
Answer: In the `toString` function, I forgot to put `self` in front of the should-be
members `sType` and `timestamp`:
def toString(self):
if(sType == PRESSENSOR):
return str("-> @"+timestamp.strftime('%c')+
": Pressure sensor triggered\n")
elif(sType == MAGSENSOR):
return str("-> @"+timestamp.strftime('%c')+
": Magnetic sensor triggered\n")
else:
return ""
Which is why the value returned was always an empty string.
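With the attribute references qualified, the method reads:

def toString(self):
    if self.sType == PRESSENSOR:
        return str("-> @" + self.timestamp.strftime('%c') +
                   ": Pressure sensor triggered\n")
    elif self.sType == MAGSENSOR:
        return str("-> @" + self.timestamp.strftime('%c') +
                   ": Magnetic sensor triggered\n")
    else:
        return ""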
Note to self: check your variables!!!
On that note, that kind of explained why it didn't seem to block the thread.
|
Using Spark and Python in same IDE
Question: I am using Spyder(Anaconda) on Mac for Python development. I also have
installed PySpark on my machine, which I use from the terminal. Is it possible
to use both of them in Spyder, or somehow manage to import the spark context
into my python 2.7?
Answer: Yes, it is possible. Just install

pip install findspark

then run

import findspark
findspark.init()

<http://stackoverflow.com/a/34763240>

then try to import pyspark; if it works, good, or else add pyspark to the
PYTHONPATH and try again:
# Add the PySpark classes to the Python path:
export PYTHONPATH=$SPARK_HOME/python/:$PYTHONPATH
|
python pandas groupby and subtract columns from different groups
Question: I have a dataframe df1
pid stat h1 h2 h3 h4 h5 h6 ... h20
1 a 3.2 3.5 6.2 7.1 1.2 2.3 ... 3.2
1 b 3.3 1.5 4.2 7.7 4.2 3.5 ... 8.4
1 a 3.1 3.8 2.2 1.1 6.2 5.3 ... 9.2
1 b 3.7 1.2 8.2 4.7 3.2 8.5 ... 2.4
: : : : : : : : : :
2 a 2.2 3.8 6.2 7.3 1.3 4.3 ... 3.2
2 b 4.3 1.3 4.2 5.7 2.2 3.1 ... 2.4
2 a 2.1 3.7 2.4 1.6 6.4 9.3 ... 9.6
2 b 3.8 1.3 8.7 3.7 7.2 8.3 ... 9.4
: : : : : : : : : :
3 a 2.2 3.8 6.2 7.3 1.3 4.3 ... 3.2
3 b 4.3 1.3 4.2 5.7 2.2 3.1 ... 2.4
3 a 2.1 3.7 2.4 1.6 6.4 9.3 ... 9.6
3 b 3.8 1.3 8.7 3.7 7.2 8.3 ... 9.4
: : : : : : : : : :
I would like to obtain groups indexed on `pid` and `stat` and then subtract
`h` values of group1 from `h` values of group2 for a final `dataframe`
(`df2`). This final dataframe needs to be reindexed with numbers starting from
`0:len(groups)` Repeat it iteratively for all permutations of pid like 1-2,
1-3, 1-4, 2-1, 2-3 ... etc. I need to perform other calculations on the
final dataframe `df2` (the values in the `df2` below are not exact
subtractions, just a representation):
pid(string) stat h1p1-h1p2 h2p1-h2p2 h3p1-h3p2 h4p1-h4p2 h5p1-h5p2 h6p1-h6p2 ... h20p1-h2p2
1-2 a 3.2 3.5 6.2 7.1 1.2 2.3 ... 3.2
1-2 b 3.3 1.5 4.2 7.7 4.2 3.5 ... 8.4
1-2 a 3.1 3.8 2.2 1.1 6.2 5.3 ... 9.2
1-2 b 3.7 1.2 8.2 4.7 3.2 8.5 ... 2.4
1-3 ....
I looked at options of;
for (pid, stat), group in df1.groupby(['pid', 'stat']):
print('pid = %s Stat = %s' %(pid, stat))
print group
this gives me the groups, but I am not sure how to access the dataframes from
this for loop and use them for subtracting from other groups. Also
df_grouped = df.groupby(['pid', 'stat']).groups()
still not sure how to access the new dataframe of groups and perform
operations. I would like to know, if this can be done using groupby or if
there is any better approach. Thanks in advance!
Answer: I implemented a generator and ignored the `stat` column because it makes no
difference in any group according to your sample. Please tell me if I got that
wrong.
import pandas as pd
from itertools import permutations
def subtract_group(df, col):
pid = df['pid'].unique()
# select piece with pid == i
segment = lambda df, i: df[df['pid'] == i].reset_index()[col]
for x, y in permutations(pid, 2):
result_df = pd.DataFrame(segment(df, x) - segment(df, y))
# rename columns
result_df.columns=["%sp%d-%sp%d" % (c, x, c, y) for c in col]
# insert pid column
result_df.insert(0, 'pid', '-'.join([str(x), str(y)]))
yield result_df
You can test it with:
# column name in your case
columns = ['h' + str(i+1) for i in range(20)]
print next(subtract_group(df1, columns))
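To materialize every pairwise difference into a single reindexed frame, something like this should work (`pd.concat` accepts the generator directly, and `ignore_index=True` renumbers the rows from 0 as required):

df2 = pd.concat(subtract_group(df1, columns), ignore_index=True)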
Hope it helps.
|
plotting arrays in python upto a particular element
Question: I have a data file like this:
0.001 5.515e-01 1.056e+00 1.384e-01 1.273e+01 -1.808e-01 1.255e+01
0.002 2.335e-02 -1.100e-03 -8.850e-03 1.273e+01 -3.176e-01 1.241e+01
0.003 2.335e-02 -1.100e-03 -8.850e-03 1.273e+01 -3.177e-01 1.241e+01
0.004 2.335e-02 -1.101e-03 -8.851e-03 1.273e+01 -3.177e-01 1.241e+01
0.005 2.335e-02 -1.101e-03 -8.851e-03 1.273e+01 -3.177e-01 1.241e+01
0.006 2.335e-02 -1.102e-03 -8.851e-03 1.273e+01 -3.177e-01 1.241e+01
0.007 2.335e-02 -1.102e-03 -8.852e-03 1.273e+01 -3.177e-01 1.241e+01
... ... ... ... ... ... ...
where the 1st column is time, the last one is total energy, 2nd last is
potential energy and 3rd last is kinetic energy. Now I want to plot these
energies as functions of time, but I do not want to plot the whole array in one
go.
Rather, I wish to choose a time and plot the energies up to that time, then
choose another time and plot the energies up to that time (always starting
from t=0). The code I have written for that is given below:
from pylab import*
from numpy import*
data=loadtxt('500.txt')
t=data[:,0]
KE=data[:,-3]
PE=data[:,-2]
TE=data[:,-1]
t=0
while t<100:
ke=KE[:t]
time=t[:t]
plot(time,ke)
picname=temp+'e.png'
savefig(picname)
show()
t=t+40
But it returns `File "energyprofile.py", line 14, in <module> time=t[:t]
TypeError: 'int' object has no attribute '__getitem__'`. How can I get round
this problem?
Answer: The commas in the slicing are fine: `loadtxt` returns a 2D NumPy array, and
`data[:,0]` is the correct way to select a column (the comma-free [slicing
notation for python](https://docs.python.org/2/tutorial/introduction.html)
applies to plain lists). The actual problem is the line `t=0`: it rebinds the
name `t` from the time array to a plain integer, so `time=t[:t]` later tries
to slice an `int`, which raises the `TypeError`. Use a different name for the
loop counter (note also that `temp` in the `picname` line is undefined), for
example:

i = 0
while i < 100:
    ke = KE[:i]
    time = t[:i]
    plot(time, ke)
    picname = str(i) + 'e.png'
    savefig(picname)
    show()
    i = i + 40

Keep in mind that `KE[:i]` slices by row index, not by the time value in the
first column.
|
CMake Error "NumPy import failure" when compiling Boost.Numpy
Question: Here is what I installed as described [here](https://github.com/mitmul/ssai-
cnn):
1. Python 3.5 (Anaconda3 2.4.3)
Chainer 1.5.0.2
Cython 0.23.4
NumPy 1.10.1
tqdm
2. OpenCV 3.0.0
3. lmdb 0.87
4. Boost 1.59.0
Next I want to compile and install Boost.NumPy. In the beginning, NumPy module
could not be found. After some search, I found NumPy-related files in
`~/anaconda3/lib/python3.5/site-packages/numpy/core/include/numpy` instead of
something like `/usr/lib`, `/usr/local/lib`, etc. Therefore, in
`/Boost.NumPy/CMakeList.txt`, I added this line:
set(NUMPY_INCLUDE_DIRS, /home/graphics/anaconda3/lib/python3.5/site-packages)
But NumPy still could not be found. An error occurred as I ran `cmake
-DPYTHON_LIBRARY=$HOME/anaconda3/lib/libpython3.5m.so ../` to generate the
makefile for Boost.NumPy. Here is the error:
graphics@gubuntu:~/usr/Boost.NumPy/build$ sudo cmake -DPYTHON_LIBRARY=$HOME/anaconda3/lib/libpython3.5m.so ../
-- The C compiler identification is GNU 4.9.2
-- The CXX compiler identification is GNU 4.9.2
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Found PythonInterp: /usr/bin/python3.5 (found suitable version "3.5.1", minimum required is "3.5")
-- Found PythonInterp: /usr/bin/python3.5 (found version "3.5.1")
-- Found PythonLibs: /home/graphics/anaconda3/lib/libpython3.5m.so
CMake Error at libs/numpy/cmake/FindNumPy.cmake:61 (message):
NumPy import failure:
Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError: No module named 'numpy'
Call Stack (most recent call first):
CMakeLists.txt:30 (find_package)
-- Configuring incomplete, errors occurred!
I have tried to replace `NUMPY_INCLUDE_DIRS` with some other directories, but
nothing works. What should I write to the `CMakelists.txt` to tell cmake where
to find NumPy module and import it?
Thanks in advance!
* * *
Other files which might be needed to find out what went wrong:
Answer: Finally it works! But I don't know why...:(
What I did:
1. I reinstalled numpy to /usr/lib/python3.5/site-packages (previously, I installed it to ~/anaconda3/lib/python3.4/site-packages)
1.1 I also added ~/anaconda3/lib/python3.4/site-packages/numpy/include to $PYTHONPATH and $PATH
2. I ran these commands in Python:
>>>import numpy
And I found it returns no error!
3. I removed previously compiled files in directory build, and rebuilt. Finally it worked
Hope this helps someone else.
|
How do I get variables from my main module in Python?
Question: I am a Structural Engineer by trade and I am trying to automate the creation
of 3D models using scripts.
So far I have created three modules; the GUI module using PyQt4, a main module
that controls the signals from the GUI, and an export module which "should"
pull the variables from main module and generate a script that can be read by
my analysis program.
So far I can't pull the variables from the main module when clicking the
export menu in the GUI because the variable names are not defined.
If I import the main module into the export module to get the variables, I get
errors with the Ui_MainWindow.
I have tried to simplify what I am doing below.
**main.py module**
import sys
from PyQt4 import QtGui, QtCore
from gui import Ui_MainWindow
from export import newFile
class Main(QtGui.QMainWindow):
def __init__(self):
super(Main, self).__init__()
self.ui = Ui_MainWindow()
self.ui.setupUi(self)
self.setName()
self.ui.actionExport.triggered.connect(self.exportName)
def exportName(self):
self.exportStaad = newFile().createNewFile()
def setName(self):
self.ui.tbo_Name.textChanged.connect(self.name_Changed)
def name_Changed(self):
someName = self.ui.tbo_Name.text()
print('Name = ' + someName)
app = QtGui.QApplication(sys.argv)
form = Main()
form.show()
app.exec_()
gui.py
# -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'gui.ui'
#
# Created by: PyQt4 UI code generator 4.11.4
#
# WARNING! All changes made in this file will be lost!
from PyQt4 import QtCore, QtGui
try:
_fromUtf8 = QtCore.QString.fromUtf8
except AttributeError:
def _fromUtf8(s):
return s
try:
_encoding = QtGui.QApplication.UnicodeUTF8
def _translate(context, text, disambig):
return QtGui.QApplication.translate(context, text, disambig, _encoding)
except AttributeError:
def _translate(context, text, disambig):
return QtGui.QApplication.translate(context, text, disambig)
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName(_fromUtf8("MainWindow"))
MainWindow.resize(800, 600)
self.centralwidget = QtGui.QWidget(MainWindow)
self.centralwidget.setObjectName(_fromUtf8("centralwidget"))
self.tbo_Name = QtGui.QLineEdit(self.centralwidget)
self.tbo_Name.setGeometry(QtCore.QRect(80, 60, 150, 20))
self.tbo_Name.setObjectName(_fromUtf8("tbo_Name"))
self.lab_Name = QtGui.QLabel(self.centralwidget)
self.lab_Name.setGeometry(QtCore.QRect(30, 60, 40, 20))
self.lab_Name.setObjectName(_fromUtf8("lab_Name"))
MainWindow.setCentralWidget(self.centralwidget)
self.menubar = QtGui.QMenuBar(MainWindow)
self.menubar.setGeometry(QtCore.QRect(0, 0, 800, 21))
self.menubar.setObjectName(_fromUtf8("menubar"))
self.menuFile = QtGui.QMenu(self.menubar)
self.menuFile.setObjectName(_fromUtf8("menuFile"))
MainWindow.setMenuBar(self.menubar)
self.statusbar = QtGui.QStatusBar(MainWindow)
self.statusbar.setObjectName(_fromUtf8("statusbar"))
MainWindow.setStatusBar(self.statusbar)
self.actionExport = QtGui.QAction(MainWindow)
self.actionExport.setObjectName(_fromUtf8("actionExport"))
self.menuFile.addAction(self.actionExport)
self.menubar.addAction(self.menuFile.menuAction())
self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
def retranslateUi(self, MainWindow):
MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow", None))
self.lab_Name.setText(_translate("MainWindow", "Name:", None))
self.menuFile.setTitle(_translate("MainWindow", "File", None))
self.actionExport.setText(_translate("MainWindow", "Export", None))
if __name__ == "__main__":
import sys
app = QtGui.QApplication(sys.argv)
MainWindow = QtGui.QMainWindow()
ui = Ui_MainWindow()
ui.setupUi(MainWindow)
MainWindow.show()
sys.exit(app.exec_())
export.py
import sys
from PyQt4 import QtGui, QtCore
from os import path
import math
class newFile():
def createNewFile(dest):
'''
Creates file
'''
name = QtGui.QFileDialog.getSaveFileName ()
f = open(name, 'w')
f.write('Hello' + someName)
f.close
Answer: The method `createNewFile(dest)` inside the class `newFile` uses the undefined
variable `someName` in **f.write('Hello' + someName)**. This causes the error,
as `someName` is not defined anywhere in the class. Define a variable (or pass
one in) before you use it.
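One way to make the name available is to pass it in explicitly rather than pulling it from the other module. A sketch (the parameter name `someName` is just illustrative; note that `f.close` also needs parentheses to actually close the file):

# export.py
class newFile():
    def createNewFile(self, someName):
        name = QtGui.QFileDialog.getSaveFileName()
        f = open(str(name), 'w')  # QFileDialog returns a QString in PyQt4
        f.write('Hello' + someName)
        f.close()

# main.py, inside Main
def exportName(self):
    self.exportStaad = newFile().createNewFile(self.ui.tbo_Name.text())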
|
Clear output and rewrite it in python
Question: Hi I'm trying to make a tic tac toe game in python and I've run into a
problem. 
As you can see in the picture, it rewrites the playing board after your input.
What I want it to do is clear the output and then rewrite the board, so
instead of just printing new boards all the time it only clears the current
board and rewrites it. I've searched on "clear output" etc., but all I find is
this kind of code:
import os
clear = lambda : os.system('cls')
or
import os
def clear():
os.system( 'cls' )
Using the clear function above doesn't work for me. It only returns this
symbol:[](http://i.stack.imgur.com/2iFLF.png)
I am currently writing my code in PyCharm and, just to make it clear, I want
to keep it in PyCharm.
Answer: Avoid the lambda and define a function for clear, because it is easier to read:

def clear():
    os.system('cls')

then you can clear the window with:

clear()

Note that `os.system('cls')` only works in a real Windows console; PyCharm's
run window is not a terminal (and on Linux/OS X the command is `clear`), which
is why you get the stray symbol instead of a cleared screen.
|
List of lists (not just list) in Python
Question: I want to make a list of lists in python.
My code is below.
import csv
f = open('agGDPpct.csv','r')
inputfile = csv.DictReader(f)
list = []
next(f) ##Skip first line (column headers)
for line in f:
array = line.rstrip().split(",")
list.append(array[1])
list.append(array[0])
list.append(array[53])
list.append(array[54])
list.append(array[55])
list.append(array[56])
list.append(array[57])
print list
I'm pulling only select columns from every row. My code pops this all into one
list, as such:
['ABW', 'Aruba', '0.506252445', '0.498384331', '0.512418427', '', '', 'AND', 'Andorra', '', '', '', '', '', 'AFG', 'Afghanistan', '30.20560247', '27.09154001', '24.50744042', '24.60324707', '23.96716227'...]
But what I want is a list in which each row is its own list:
`[[a,b,c][d,e,f][g,h,i]...]` Any tips?
Answer: You are almost there: build each row's fields into a list of their own before
appending. One catch: `csv.DictReader` yields dictionaries keyed by the header
names, so positional indexes like `line[53]` need a plain `csv.reader`,
skipping the header row yourself. Try this:

import csv

with open('agGDPpct.csv', 'r') as f:
    reader = csv.reader(f)
    next(reader)  # skip the column headers
    rows = []
    for line in reader:
        rows.append([line[1], line[0], line[53], line[54],
                     line[55], line[56], line[57]])
print rows
|
How do I remove transparency from a histogram created using Seaborn in python?
Question: I'm creating histograms using seaborn in python and want to customize the
colors. The default settings create transparent histograms, and I would like
mine to be solid. How do I remove the transparency?
I've tried creating a color palette and setting desaturation to 0, but this
hasn't changed the saturation of the resulting histogram.
Example:
# In[1]:
import seaborn as sns
import matplotlib.pyplot as plt
get_ipython().magic('matplotlib inline')
# In[2]:
iris = sns.load_dataset("iris")
# In[3]:
myColors = ['#115e67','#f4633a','#ffd757','#4da2e8','#cfe5e5']
sns.palplot(sns.color_palette(myColors))
# In[4]:
sns.set_palette(palette=myColors,desat=0)
# In[5]:
sns.set(style="white")
# In[6]:
sns.despine()
# In[7]:
plt.title('Distribution of Petal Length')
sns.distplot(iris.petal_length, axlabel = 'Petal Length')
[Distribution of petal length](http://i.stack.imgur.com/D5j7P.png)
Answer: `distplot` forwards `hist_kws` straight to matplotlib's `hist`, so set `alpha` to 1 there:

sns.distplot(iris.petal_length, axlabel = 'Petal Length', hist_kws=dict(alpha=1))
|
First python app on C9 with error
Question: After doing the Python courses and reading some books, I decided to make an
app. Since that seemed overwhelming, I researched and found this
<http://sebsauvage.net/python/gui/> which I'm replicating on Cloud9.io, and got
here:
import Tkinter
class simpleapp_tk(Tkinter.Tk):
def __init__(self,parent):
Tkinter.Tk.__init__(self,parent)
self.parent = parent
self.initialize()
def initialize(self):
pass
if __name__== '__main__':
app = simpleapp_tk(None)
app.title('FirstApp')
app.mainloop()
All well and fine, but now they say we can run it and see an empty window,
which when I run gives me this:
Traceback (most recent call last):
File "/home/ubuntu/workspace/Calculator/Calc.py", line 22, in <module>
app().mainloop()
File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 2537, in __init__
Widget.__init__(self, master, 'frame', cnf, {}, extra)
File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 2049, in __init__
BaseWidget._setup(self, master, cnf)
File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 2024, in _setup
_default_root = Tk()
File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 1767, in __init__
self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use)
_tkinter.TclError: no display name and no $DISPLAY environment variable
Process exited with code: 1
Any ideas on what's wrong or how to fix it? thanks
Answer: You're trying to run a GUI app on Cloud9, which has no desktop environment.
You'll want to look into web frameworks if you're going to run on a cloud
provider.
Flask is a good, simple one.
Alternatively, if you like books and you're interested in Django, you might
check out [Hello Web App](https://hellowebapp.com/).
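For a taste of the difference, a minimal Flask sketch that runs on Cloud9 (it assumes Cloud9's convention of exposing the bind address through the `IP` and `PORT` environment variables):

import os
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'FirstApp'

if __name__ == '__main__':
    # Cloud9 provides the host/port to bind to via env vars
    app.run(host=os.environ.get('IP', '0.0.0.0'),
            port=int(os.environ.get('PORT', 8080)))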
|
Get all td content inside tbody of tr in python using lxml
Question: I am getting the header values of the html table below using lxml, but when I
try to parse the contents of the td's inside the tr's in tbody using xpath, it
gives me empty values because the data is generated dynamically. Below is my
python code with the output I am getting. How can I get the values?
<table id="datatabl" class="display compact cell-border dataTable no-footer" role="grid" aria-describedby="datatabl_info">
<thead>
<tr role="row">
<th class="dweek sorting_desc" tabindex="0" aria-controls="datatabl" rowspan="1" colspan="1" style="width: 106px;" aria-label="Week: activate to sort column ascending" aria-sort="descending">Week</th>
<th class="dnone sorting" tabindex="0" aria-controls="datatabl" rowspan="1" colspan="1" style="width: 100px;" aria-label="None: activate to sort column ascending">None</th>
</tr>
</thead>
<tbody>
<tr class="odd" role="row">
<td class="sorting_1">2016-05-03</td>
<td>4.27</td>
<td>21.04</td>
</tr>
<tr class="even" role="row">
<td class="sorting_1">2016-04-26</td>
<td>4.24</td>
<td>95.76</td>
<td>21.04</td>
</tr>
</tbody>
My Python code
from lxml import etree
import urllib
web = urllib.urlopen("http://droughtmonitor.unl.edu/MapsAndData/DataTables.aspx")
s = web.read()
html = etree.HTML(s)
## Get all 'tr'
tr_nodes = html.xpath('//table[@id="datatabl"]/thead')
print tr_nodes
## 'th' is inside first 'tr'
header = [i[0].text for i in tr_nodes[0].xpath("tr")]
print header
## tbody
tr_nodes_content = html.xpath('//table[@id="datatabl"]/tbody')
print tr_nodes_content
td_content = [[td[0].text for td in tr.xpath('td')] for tr in tr_nodes_content[0]]
print td_content
output in terminal:
[<Element thead at 0xb6b250ac>]
['Week']
[<Element tbody at 0xb6ad20cc>]
[]
Answer: The data is dynamically loaded from the
`http://droughtmonitor.unl.edu/Ajax.aspx/ReturnTabularDM` endpoint. One option
would be to try to mimic that request and get the data from the JSON response.
Or, you can stay on a high-level and solve it via
[`selenium`](http://selenium-python.readthedocs.io/):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Firefox()
driver.maximize_window()
wait = WebDriverWait(driver, 10)
url = 'http://droughtmonitor.unl.edu/MapsAndData/DataTables.aspx'
driver.get(url)
# wait for the table to load
wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "table#datatabl tr[role=row]")))
rows = driver.find_elements_by_css_selector("table#datatabl tr[role=row]")[1:]
for row in rows:
cells = row.find_elements_by_tag_name("td")
print(cells[2].text)
driver.close()
Prints the contents of the D0-D4 column:
33.89
39.64
39.28
39.20
...
36.74
38.45
43.61
|
Having trouble with scipy.optimize.leastsq
Question: I am new to optimization and having trouble using the least squares
minimization. Here is the code I have tried so far:
def func(tpl, x):
    return 1. / exp(x / 360. * tpl)

def errfunc(tpl, x, y):
    return func(tpl, x) - y

# x-axis
xdata = np.array([181, 274])

# minimize sum(y - func(x))**2
ydata = np.array([0.992198836646864, 0.992996067735572])

# initial guesses
tplInitial1 = (0.031, 0.032)
popt, pcov = leastsq(errfunc, tplInitial1[:], args=(xdata, ydata))
print popt
I was hoping to get [0.032359, 0.03071] returned by the minimize function, but
I am getting "only length-1 arrays can be converted to Python scalars". Any
help is appreciated. Thank you.
Answer: I suspect you are using `math.exp` instead of `numpy.exp` (i.e. the scalar
version instead of the array version). Try using `from numpy import exp`.
|
Cannot find “Grammar.txt” in lib2to3
Question: I am trying to get NetworkX running under IronPython on my machine. From other
sources I think other people have made this work.
(<https://networkx.github.io/documentation/networkx-1.10/reference/news.html>)
I am running IronPython 2.7 2.7.5.0 on .NET 4.0.30319.42000 in VisualStudio
2015 Community Edition.
The problem is that when I
import NetworkX as nx
I get this exception:
Traceback (most recent call last):
File "C:\SourceModules\CodeKatas\IronPythonExperiment\ProveIronPython\ProveIronPython\ProveIronPython.py", line 1, in <module>
File "C:\Program Files (x86)\IronPython 2.7\lib\site-packages\networkx\__init__.py", line 87, in <module>
File "C:\Program Files (x86)\IronPython 2.7\lib\site-packages\networkx\readwrite\__init__.py", line 14, in <module>
File "C:\Program Files (x86)\IronPython 2.7\lib\site-packages\networkx\readwrite\gml.py", line 46, in <module>
File "C:\Program Files (x86)\IronPython 2.7\Lib\lib2to3\refactor.py", line 27, in <module>
File "C:\Program Files (x86)\IronPython 2.7\Lib\lib2to3\fixer_util.py", line 9, in <module>
File "C:\Program Files (x86)\IronPython 2.7\Lib\lib2to3\pygram.py", line 32, in <module>
File "C:\Program Files (x86)\IronPython 2.7\Lib\lib2to3\pgen2\driver.py", line 121, in load_grammar
File "C:\Program Files (x86)\IronPython 2.7\Lib\lib2to3\pgen2\pgen.py", line 385, in generate_grammar
File "C:\Program Files (x86)\IronPython 2.7\Lib\lib2to3\pgen2\pgen.py", line 15, in __init__
IOError: [Errno 2] Could not find file 'C:\Program Files (x86)\IronPython 2.7\Lib\lib2to3\Grammar.txt'.: C:\Program Files (x86)\IronPython 2.7\Lib\lib2to3\Grammar.txt
The bottom line seems to be that NetworkX wants Grammar.txt to be in the
lib2to3 directory of my IronPython installation.
I have tried several things, but no success. Some are too dumb to admit to in
public, but I did try
* running from command line: (ipy myExecutable.py)
* pip installing another package (BeautifulSoup), but that package installed and instantiated with no problems.
* I also looked at [Cannot find "Grammar.txt" in python-sphinx](http://stackoverflow.com/questions/11649565/cannot-find-grammar-txt-in-python-sphinx) , but it did not seem to have any explanation that helped my specific case.
**My Question:** How can I resolve this problem with 'import NetworkX' raising
this exception?
Answer: A lib2to3 import snuck into networkx-1.10 and networkx-1.11, which is the
latest release. Try the development release from the github site (that will
soon be networkx-2.0): <https://github.com/networkx/networkx/archive/master.zip>.
The lib2to3 library import has been removed since the networkx-1.11 release.
|
How can I import PCL into Python, on Ubuntu?
Question: So my situation is as follows: I am on Ubuntu 14.04, and I am very simply
trying to use PCL (the Point Cloud Library) in Python 2.7.x.
I followed the instructions here
(<http://pointclouds.org/downloads/linux.html>); however, in Python, if I
now do
> import pcl
I still get the error:
> ImportError: No module named pcl
I am not sure what else to do - there do not seem to be any more leads I can
follow... thanks.
Answer: You can try [python-pcl](https://github.com/strawlab/python-pcl). It is a
python binding and supports operations on PointXYZ.
|
How can I get histogram number with pairs by using python code?
Question: I want to get histogram counts as number(count) pairs using Python. For
example, for the input 11 2 34 21 the output should be 11(1) 2(1) 34(1) 21(1).
Answer: First, let's create a list of numbers (I have added some repeats to make it
more interesting):
>>> v = ( 11, 2, 34, 21, 2, 2 )
Next, let's create a Counter instance:
>>> from collections import Counter
>>> ctr = Counter(v)
Now, let's get the counts that you wanted:
>>> dict(ctr)
{2: 3, 11: 1, 34: 1, 21: 1}
If you prefer the parenthesized format that you show in the question, then we
need to do some formatting:
>>> ' '.join('{}({})'.format(x, ctr[x]) for x in ctr)
'2(3) 11(1) 34(1) 21(1)'
You can read more about the Counter class in the [python
docs](https://docs.python.org/2/library/collections.html#collections.Counter).
|
How to pack python flask_socketio app with pyinstaller
Question: I tried official demo code:
#test.py
from flask import Flask, render_template
from flask_socketio import SocketIO
app = Flask(__name__)
app.config['SECRET_KEY'] = 'secret!'
socketio = SocketIO(app)
if __name__ == '__main__':
socketio.run(app)
it runs well, but when packed with:
pyinstaller --onefile test.py
and then run test.exe, I got:
Z:\test\dist>test2.exe
Traceback (most recent call last):
File "<string>", line 6, in <module>
File "site-packages\flask_socketio\__init__.py", line 119, in __init__
File "site-packages\flask_socketio\__init__.py", line 144, in init_app
File "site-packages\socketio\server.py", line 72, in __init__
File "site-packages\engineio\server.py", line 100, in __init__
ValueError: Invalid async_mode specified
test2 returned -1
is there anything I am missing?
Answer: Add 'engineio.async_gevent' to hiddenimports in the spec file. You may refer to:
<https://github.com/miguelgrinberg/python-socketio/issues/35>
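In the spec file that pyinstaller generated next to `test.py`, that would look roughly like this excerpt (only the `hiddenimports` argument changes; rebuild afterwards with `pyinstaller test.spec`):

# test.spec (excerpt); the rest of the generated file stays as-is
a = Analysis(
    ['test.py'],
    hiddenimports=['engineio.async_gevent'],
)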
|
Loop doesn't work, 3-lines python code
Question: _this question is about blender, python scripting_
I'm completely new in this, so please excuse me for any stupid/newbie
question/comment.
I made it simple (3 lines code) to make it easy addressing the problem.
what I need is a code that adds a new uv map for each object within loop
function.
But this code instead is adding multiple new UV maps to only one object.
import bpy
for x in bpy.context.selected_objects:
bpy.ops.mesh.uv_texture_add()
What am I doing wrong here?
Thanks
Answer: Similar to what Sambler said, I always use:
for active in bpy.context.selected_objects:
bpy.context.scene.objects.active = active
...
These two lines I use more than any other when programming for Blender (except
`import bpy` of course).
I think I first learned this here if you'd like a good intro on how this
works:
<https://cgcookiemarkets.com/2014/12/11/writing-first-blender-script/>
In the article he uses:
# Create a list of all the selected objects
selected = bpy.context.selected_objects
# Iterate through all selected objects
for obj in selected:
bpy.context.scene.objects.active = obj
...
His comments explain it pretty well, but I will take it a step further. As you
know, Blender lacks built-in multi-object editing, [so you have _selected_
objects and one _active_
object](https://www.blender.org/manual/editors/3dview/selecting.html). The
_active_ object is what you can and will edit if you try to set its values
from python or Blender's gui itself. So although we are writing it slightly
differently each time, the effect is the same. We loop over all _selected_
objects with the `for active in bpy.context.selected_objects`, then we **set**
the active object to be the next one in the loop that iterates over **all**
the objects that are selected with `bpy.context.scene.objects.active =
active`. As a result, whatever we do in the loop gets done once for every
object in the selection _and_ any operation we do _on_ the object in question
gets done _on all of the objects_. What would happen if we only used the first
line and put our code in the `for` loop?
for active in bpy.context.selected_objects:
...
Whatever we do in the loop gets done once for every object in the selection
_but_ any operation we do _on_ the object in question gets done _on only the
active object, but as many times as there are selected objects_. This is why
we need to set the active object from within the loop.
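Applied to the original three-line script, the loop becomes:

import bpy

for obj in bpy.context.selected_objects:
    bpy.context.scene.objects.active = obj  # make this object active
    bpy.ops.mesh.uv_texture_add()           # the operator acts on the active object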
|
Access a hidden library function in Python?
Question: So when I was coding, I came across this:
from hidden_lib import train_classifier
Out of curiosity, is there a way to access the function using the terminal and
see what's inside there?
Answer: You can use "inspect" library to do that, but it will work only if you have
the source code of the "hidden_lib" somewhere on your machine:
>>> import hidden_lib
>>> import inspect
>>> print inspect.getsource(hidden_lib.train_classifier)
Otherwise library will throw the exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python27\lib\inspect.py", line 701, in getsource
lines, lnum = getsourcelines(object)
File "C:\Python27\lib\inspect.py", line 690, in getsourcelines
lines, lnum = findsource(object)
File "C:\Python27\lib\inspect.py", line 529, in findsource
raise IOError('source code not available')
IOError: source code not available
In such a case you need to decompile the .pyc file first. To do that, go to:
https://github.com/wibiti/uncompyle2
then download the package, go to the package folder and install it:
C:\package_location> C:\Python27\python.exe setup.py install
Now you can easily find the location of the library by typing [1]:
>>> hidden_lib.__file__
Then go to the pointed directory and unpyc the file:
>C:\Python27\python.exe C:\Python27\Scripts\uncompyle2 -o C:\path_pointed_by_[1]\hidden_lib.py C:\path_pointed_by_[1]\hidden_lib.pyc
Sources should be decompiled successfully:
# 2016.05.07 17:47:36 Central European Daylight Time
+++ okay decompyling hidden_lib.pyc
# decompiled 1 files: 1 okay, 0 failed, 0 verify faile
# 2016.05.07 17:47:36 Central European Daylight Time
And now you can display the sources of functions exposed by hidden_lib in the
way I described at the beginning of the post. If you are using IPython you can
also use the built-in function help(hidden_lib.train_classifier) to do exactly
the same.
IMPORTANT NOTE: the uncompyle2 library (that I used) works only with Python 2.7;
if you want to do the same for Python 3.x you need to find another similar
library.
|
Put python long integers into memory without any space between them
Question: I want to put many large long integers into memory without any space between
them. How to do that with python 2.7 code in linux?
The large long integers all use the same number of bits. There is totally
about 4 gb of data. Leaving spaces of a few bits to make each long integer
uses multiples of 8 bits in memory is ok. I want to do bitwise operation on
them later.
So far, I am using a python list. But I am not sure if that leaves no space in
memory between the integers. Can ctypes help?
Thank you.
The old code uses bitarray (<https://pypi.python.org/pypi/bitarray/0.8.1>)
import bitarray
data = bitarray.bitarray()
with open('data.bin', 'rb') as f:
data.fromfile(f)
result = data[:750000] & data[750000:750000*2]
This works, and the bitarray has no gaps in memory. But the bitarray
bitwise AND is about 6 times slower on this computer than native Python's
bitwise operation on long integers. Slicing the bitarray in the old code
and accessing an element of the list in the newer code take roughly the same
amount of time.
Newer code:
import cPickle as pickle
with open('data.pickle', 'rb') as f:
data = pickle.load(f)
# data is a list of python's (long) integers
result = data[0] & data[1]
Numpy: In the above code, result = data[0] & data[1] creates a new long
integer. Numpy has an out option for numpy.bitwise_and, which would avoid
creating a new numpy array. However, numpy's bool array seems to use one byte per
bool instead of one bit per bool. While converting the bool array into a
numpy.uint8 array avoids this problem, counting the number of set bits is then
too slow.
Python's native array module can't handle the large long integers:
import array
xstr = ''
for i in xrange(750000):
xstr += '1'
x = int(xstr, 2)
ar = array.array('l',[x,x,x])
# OverflowError: Python int too large to convert to C long
Answer: You can use the [array](https://docs.python.org/2/library/array.html) module,
for example:
import array
ar = array.array('l', [25L, 26L, 27L])
ar[1] # 26L
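That said, as the question itself demonstrates, array raises OverflowError for integers wider than a C long. If the goal is gap-free storage plus fast bitwise operations, one alternative sketch (an assumption on my part, not part of the answer above) keeps the bits packed in numpy uint64 words:
import numpy as np
# read the raw bytes as packed 64-bit words -- no per-element padding
data = np.fromfile('data.bin', dtype=np.uint64)
half = len(data) // 2
# elementwise AND of the two halves, written into a preallocated buffer
out = np.empty(half, dtype=np.uint64)
np.bitwise_and(data[:half], data[half:2 * half], out=out)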
|
Python: How to suppress an error in a user-created function until the function is called
Question: I have imported a user-created file into my main code, but an error in the imported
file is displayed immediately. How can I suppress that error and display it
only when the function is called?
Importing file and its function :
import userValidation
NameString = input("Enter your name : ")
I have called `user_validation` function later in the code :
user_validation(names)
`user_validation()` has some error which I know about, and it is displayed just
after the code starts running.
I want to suppress the error until user_validation is called. How can I
do that?
Answer: Use exception handling appropriately.
try:
#code with exception
except:
#handle it here
In the `except` part you may use `pass` to just move on if no action is
required or use `raise` to handle it in the calling function.
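Applied to the question, a minimal sketch (assuming the error is raised at import time, i.e. while the module body of `userValidation` runs) is to defer the import until the function is actually needed:
def user_validation(names):
    try:
        # import only when called, so a broken module doesn't
        # crash the program at start-up
        import userValidation
    except Exception as error:
        print('userValidation could not be loaded: %s' % error)
        return None
    return userValidation.user_validation(names)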
|
A single string in single quotes with PyYAML
Question: When I edit a YAML file in Python with PyYAML, all of my string values are
saved back to the original file without quotes.
one: valueOne
two: valueTwo
three: valueThree
I wanted one of those strings to be surrounded with single quotes:
one: valueOne
two: valueTwo
three: 'valueThree'
Changing the `default_style` parameter in `yaml_dump` affects whole file,
which is not desired. I thought about adding single quotes to the beginning
and end of a string that I want to be surrounded with:
valueThreeVariable = "'" + valueThreeVariable + "'"
However, this ends up with a dumped YAML looking like this:
one: valueOne
two: valueTwo
three: '''valueThree'''
I have tried escaping the single quote in various ways, using unicode or raw
strings, all to no avail. How can I make only one of my YAML values to be a
string surrounded with single quotes?
Answer: You can graft such functionality onto PyYAML, but it is not easy. The value in
the mapping for `three` has to be an instance of a class different from a
normal string, otherwise the YAML dumper doesn't know that it has to do
something special to dump that instance as a string with quotes. On
loading, scalars with single quotes need to be created as instances of this
class. And apart from that, you probably don't want the keys of your
`dict`/`mapping` scrambled, as PyYAML does by default.
I do something similar to the above in my PyYAML derivative
[ruamel.yaml](https://pypi.python.org/pypi/ruamel.yaml/) for block style
scalars:
import ruamel.yaml
yaml_str = """\
one: valueOne
two: valueTwo
three: |-
valueThree
"""
data = ruamel.yaml.round_trip_load(yaml_str)
assert ruamel.yaml.round_trip_dump(data) == yaml_str
doesn't throw an assertion error.
* * *
To start with the dumper, you can "convert" the `valueThree` string:
import ruamel.yaml
from ruamel.yaml.scalarstring import ScalarString
yaml_str = """\
one: valueOne
two: valueTwo
three: 'valueThree'
"""
class SingleQuotedScalarString(ScalarString):
def __new__(cls, value):
return ScalarString.__new__(cls, value)
data = ruamel.yaml.round_trip_load(yaml_str)
data['three'] = SingleQuotedScalarString(data['three'])
but this cannot be dumped, as the dumper doesn't know about the
`SingleQuotedScalarString`. You can solve that in different ways; the
following extends `ruamel.yaml`'s `RoundTripRepresenter` class:
from ruamel.yaml.representer import RoundTripRepresenter
import sys
def _represent_single_quoted_scalarstring(self, data):
tag = None
style = "'"
if sys.version_info < (3,) and not isinstance(data, unicode):
data = unicode(data, 'ascii')
tag = u'tag:yaml.org,2002:str'
return self.represent_scalar(tag, data, style=style)
RoundTripRepresenter.add_representer(
SingleQuotedScalarString,
_represent_single_quoted_scalarstring)
assert ruamel.yaml.round_trip_dump(data) == yaml_str
Once again this doesn't throw an error. The above can in principle be done with
PyYAML and `safe_load`/`safe_dump`, but you would need to write code to
preserve the key ordering, as well as some of the base functionality. (Apart
from that, PyYAML only supports the older YAML 1.1 standard, not the YAML 1.2
standard from 2009.)
To get the loading to work without using the explicit `data['three'] =
SingleQuotedScalarString(data['three'])` conversion, you can add the following
before the call to `ruamel.yaml.round_trip_load()`:
from ruamel.yaml.constructor import RoundTripConstructor, ConstructorError
from ruamel.yaml.scalarstring import PreservedScalarString
from ruamel.yaml.nodes import ScalarNode
from ruamel.yaml.compat import text_type
def _construct_scalar(self, node):
if not isinstance(node, ScalarNode):
raise ConstructorError(
None, None,
"expected a scalar node, but found %s" % node.id,
node.start_mark)
if node.style == '|' and isinstance(node.value, text_type):
return PreservedScalarString(node.value)
elif node.style == "'" and isinstance(node.value, text_type):
return SingleQuotedScalarString(node.value)
return node.value
RoundTripConstructor.construct_scalar = _construct_scalar
There are different ways to do the above, including subclassing the
`RoundTripConstructor` class, but the actual method to change is small and can
easily be patched.
* * *
Combining all of the above and cleaning up a bit you get:
import ruamel.yaml
from ruamel.yaml.scalarstring import ScalarString, PreservedScalarString
from ruamel.yaml.representer import RoundTripRepresenter
from ruamel.yaml.constructor import RoundTripConstructor, ConstructorError
from ruamel.yaml.nodes import ScalarNode
from ruamel.yaml.compat import text_type, PY2
class SingleQuotedScalarString(ScalarString):
def __new__(cls, value):
return ScalarString.__new__(cls, value)
def _construct_scalar(self, node):
if not isinstance(node, ScalarNode):
raise ConstructorError(
None, None,
"expected a scalar node, but found %s" % node.id,
node.start_mark)
if node.style == '|' and isinstance(node.value, text_type):
return PreservedScalarString(node.value)
elif node.style == "'" and isinstance(node.value, text_type):
return SingleQuotedScalarString(node.value)
return node.value
RoundTripConstructor.construct_scalar = _construct_scalar
def _represent_single_quoted_scalarstring(self, data):
tag = None
style = "'"
if PY2 and not isinstance(data, unicode):
data = unicode(data, 'ascii')
tag = u'tag:yaml.org,2002:str'
return self.represent_scalar(tag, data, style=style)
RoundTripRepresenter.add_representer(
SingleQuotedScalarString,
_represent_single_quoted_scalarstring)
yaml_str = """\
one: valueOne
two: valueTwo
three: 'valueThree'
"""
data = ruamel.yaml.round_trip_load(yaml_str)
assert ruamel.yaml.round_trip_dump(data) == yaml_str
Which still runs without assertion error, i.e. with dump output equalling
input. As indicated you can do this in PyYAML, but it requires considerably
more coding.
|
Wand Rounded Edges on Images
Question: I've been scratching my head for a few days on how to complete the task of
making the edges rounded on an image taken from picamera using python-wand. I
have it setup now to where it grabs the image and composites it over the
banner/background image with the following:
img = Image(filename=Picture)
img.resize(1200, 800)
bimg = Image(filename=Background)
bimg.composite(img, left=300, top=200)
bimg.save(filename=BPicture)
Any help is appreciated!
Answer: You can use [`wand.drawing.Drawing.rectangle`](http://docs.wand-
py.org/en/0.4.2/wand/drawing.html#wand.drawing.Drawing.rectangle) to generate
rounded corners, and overlay it with composite channels.
from wand.image import Image
from wand.color import Color
from wand.drawing import Drawing
with Image(filename='rose:') as img:
img.resize(240, 160)
with Image(width=img.width,
height=img.height,
background=Color("white")) as mask:
with Drawing() as ctx:
ctx.fill_color = Color("black")
ctx.rectangle(left=0,
top=0,
width=mask.width,
height=mask.height,
radius=mask.width*0.1) # 10% rounding?
ctx(mask)
img.composite_channel('all_channels', mask, 'screen')
img.save(filename='/tmp/out.png')
[](http://i.stack.imgur.com/ikaSR.png)
Now if I understand your question, you can apply the same technique, but
composite `Picture` in the drawing context.
with Image(filename='rose:') as img:
img.resize(240, 160)
with Image(img) as nimg:
nimg.negate() # For fun, let's negate the image for the background
with Drawing() as ctx:
ctx.fill_color = Color("black")
ctx.rectangle(left=0,
top=0,
width=nimg.width,
height=nimg.height,
radius=nimg.width*0.3) # 30% rounding?
ctx.composite('screen', 0, 0, nimg.width, nimg.height, img)
ctx(nimg)
nimg.save(filename='/tmp/out2.png')
[](http://i.stack.imgur.com/KiimT.png)
|
How to use python to convert a float number to fixed point with predefined number of bits
Question: I have float32 numbers (let's say positive numbers) in numpy format. I want
to convert them to fixed-point numbers with a predefined number of bits to
reduce precision.
For example, the number 3.1415926 becomes 3.25 in MATLAB by using the function
num2fixpt. The command is num2fixpt(3.1415926, sfix(5), 2^(1 + 2-5),
'Nearest','on'), which says 3 bits for the integer part and 2 bits for the
fractional part.
Can I do the same thing using Python?
Answer: You can do it if you understand how IEEE floating point notation works.
Basically you'll need to convert to a Python long, apply bitwise operators, then
convert back. For example:
import time,struct,math
long2bits = lambda L: ("".join([str(int(1 << i & L > 0)) for i in range(64)]))[::-1]
double2long = lambda d: struct.unpack("Q",struct.pack("d",d))[0]
double2bits = lambda d: long2bits(double2long(d))
long2double = lambda L: struct.unpack('d',struct.pack('Q',L))[0]
bits2double = lambda b: long2double(bits2long(b))
bits2long=lambda z:sum([bool(z[i] == '1')*2**(len(z)-i-1) for i in range(len(z))[::-1]])
>>> pi = 3.1415926
>>> double2bits(pi)
'0100000000001001001000011111101101001101000100101101100001001010'
>>> bits2long('1111111111111111000000000000000000000000000000000000000000000000')
18446462598732840960L
>>> double2long(pi)
4614256656431372362
>>> long2double(double2long(pi) & 18446462598732840960L)
3.125
>>>
def rshift(x,n=1):
while n > 0:
x = 9223372036854775808L | (x >> 1)
n -= 1
return x
>>> L = bits2long('1'*12 + '0'*52)
>>> L
18442240474082181120L
>>> long2double(rshift(L,0) & double2long(pi))
2.0
>>> long2double(rshift(L,1) & double2long(pi))
3.0
>>> long2double(rshift(L,4) & double2long(pi))
3.125
>>> long2double(rshift(L,7) & double2long(pi))
3.140625
This will only truncate the number of bits though, not round them. The rshift
function is necessary because Python's right-shift operator fills the empty
leftmost bit with a zero. See a description of IEEE floating point
[here](https://en.wikipedia.org/wiki/Double-precision_floating-point_format).
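If exact bit-level truncation of the IEEE mantissa is not required, a much simpler sketch rounds to the nearest multiple of 2**-frac_bits, which reproduces the 3.25 result from the question (integer-bit saturation, which num2fixpt also performs, is ignored here):
def to_fixed_point(value, frac_bits):
    # round to the nearest value representable with frac_bits fractional bits
    scale = 2 ** frac_bits
    return round(value * scale) / scale
print(to_fixed_point(3.1415926, 2))  # 3.25, matching num2fixpt above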
|
how to update contents of file in python
Question:
def update():
global mylist
i = j = 0
mylist[:]= []
key = input("enter student's tp")
myf = open("data.txt","r+")
ml = myf.readlines()
#print(ml[1])
for line in ml:
words = line.split()
mylist.append(words)
print(mylist)
l = len(mylist)
w = len(words)
print(w)
print(l)
for i in range(l):
for j in range(w):
print(mylist[i][j])
## if(key == mylist[i][j]):
## print("found at ",i,j)
## del mylist[i][j]
## mylist[i].insert((j+1), "xxx")
Below is the error:
> print(mylist[i][j])
>
> IndexError: list index out of range
I am trying to update contents in a file. I am saving the file as a list of
lines, and each line is then saved as another list of words. So "mylist" is a
2D list, but it is giving me an IndexError.
Answer: Your `l` variable is the length of the _last_ line list. Others could be
shorter.
A better idiom is to use a `for` loop to iterate over a list. But there is an
even better way.
It appears you want to replace a "tp" (whatever that is) with the string `xxx`
everywhere. A quicker way to do that would be to use regular expressions.
import re
with open('data.txt') as myf:
    myd = myf.read()
# escape the key in case it contains regex metacharacters
newd = re.sub(re.escape(key), 'xxx', myd)
with open('newdata.txt', 'w') as newf:
    newf.write(newd)
|
has_permission() missing 1 required positional argument: 'view'
Question: I am working on a project for learning purposes with the following config: Python
3.4.4, django==1.9.1, djangorestframework==3.3.3, OS: Windows 8.1
In the project I have a model Post, for which I have created permissions.py:
from rest_framework import permissions
class IsAuthorOfPost(permissions.BasePermission):
def has_permission(self, request, view):
return True
def has_object_permission(self, request, view, post):
if request.user:
return post.author == request.user
return False
> views.py:
from rest_framework import permissions, viewsets
from rest_framework.response import Response
from posts.models import Post
from posts.permissions import IsAuthorOfPost
from posts.serializers import PostSerializer
class PostViewSet(viewsets.ModelViewSet):
queryset = Post.objects.order_by('-created_at')
serializer_class = PostSerializer
def get_permissions(self):
if self.request.method in permissions.SAFE_METHODS:
return (permissions.AllowAny(),)
return (permissions.IsAuthenticated, IsAuthorOfPost(),)
def perform_create(self, serializer):
instance = serializer.save(author=self.request.user)
return super(PostViewSet, self).perform_create(serializer)
class AccountPostViewSet(viewsets.ModelViewSet):
queryset = Post.objects.select_related('author').all()
serializer_class = PostSerializer
def list(self, request, account_username=None):
queryset = self.queryset.filter(author__username=account_username)
serializer = self.serializer_class(queryset, many=True)
return Response(serializer.data)
> serializers.py:
from rest_framework import serializers
from authentication.serializers import AccountSerializer
from posts.models import Post
class PostSerializer(serializers.ModelSerializer):
author = AccountSerializer(read_only=True, required=False)
class Meta:
model = Post
fields = ('id', 'author', 'content', 'created_at', 'updated_at')
read_only_fields = ('id', 'created_at', 'updated_at')
def get_validation_exclusions(self, *args, **kwargs):
exclusions = super(PostSerializer, self).get_validation_exclusions()
return exclusions + ['author']
> urls.py
from django.conf.urls import url, include
from django.contrib import admin
from rest_framework.routers import DefaultRouter
from rest_framework_nested import routers
from djangular.views import IndexView
from authentication.views import AccountViewSet, LoginView, LogoutView
from posts.views import PostViewSet, AccountPostViewSet
router = routers.SimpleRouter()
router.register(r'accounts', AccountViewSet)
router.register(r'posts', PostViewSet)
account_router = routers.NestedSimpleRouter(
router, r'accounts', lookup='account'
)
account_router.register(r'posts', AccountPostViewSet)
urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'^api/v1/', include(router.urls)),
url(r'^api/v1/', include(account_router.urls)),
url(r'^api/v1/auth/login/$', LoginView.as_view(), name='login'),
url(r'^api/v1/auth/logout/$', LogoutView.as_view(), name='logout'),
url('^.*$', IndexView.as_view(), name='index'),
]
[localhost:8000/api/v1/posts/](http://localhost:8000/api/v1/posts/)
> **Error** :
TypeError at /api/v1/posts/
has_permission() missing 1 required positional argument: 'view'
Request Method: GET
Request URL: http://localhost:8000/api/v1/posts/
Django Version: 1.9.1
Exception Type: TypeError
Exception Value:
has_permission() missing 1 required positional argument: 'view'
Exception Location: C:\Users\Devansh\Envs\19\lib\site-packages\rest_framework\views.py in check_permissions, line 318
Python Executable: C:\Users\Devansh\Envs\19\Scripts\python.exe
Python Version: 3.4.4
Python Path:
['D:\\djangular-app',
'C:\\Windows\\SYSTEM32\\python34.zip',
'C:\\Users\\Devansh\\Envs\\19\\DLLs',
'C:\\Users\\Devansh\\Envs\\19\\lib',
'C:\\Users\\Devansh\\Envs\\19\\Scripts',
'c:\\python34\\Lib',
'c:\\python34\\DLLs',
'C:\\Users\\Devansh\\Envs\\19',
'C:\\Users\\Devansh\\Envs\\19\\lib\\site-packages']
> **Traceback**
Traceback (most recent call last):
File "C:\Users\Devansh\Envs\19\lib\site-packages\django\core\handlers\ba
, line 174, in get_response
response = self.process_exception_by_middleware(e, request)
File "C:\Users\Devansh\Envs\19\lib\site-packages\django\core\handlers\ba
, line 172, in get_response
response = response.render()
File "C:\Users\Devansh\Envs\19\lib\site-packages\django\template\respons
line 160, in render
self.content = self.rendered_content
File "C:\Users\Devansh\Envs\19\lib\site-packages\rest_framework\response
line 71, in rendered_content
ret = renderer.render(self.data, media_type, context)
File "C:\Users\Devansh\Envs\19\lib\site-packages\rest_framework\renderer
line 676, in render
context = self.get_context(data, accepted_media_type, renderer_context
File "C:\Users\Devansh\Envs\19\lib\site-packages\rest_framework\renderer
line 618, in get_context
raw_data_post_form = self.get_raw_data_form(data, view, 'POST', reques
File "C:\Users\Devansh\Envs\19\lib\site-packages\rest_framework\renderer
line 521, in get_raw_data_form
if not self.show_form_for_method(view, method, request, instance):
File "C:\Users\Devansh\Envs\19\lib\site-packages\rest_framework\renderer
line 417, in show_form_for_method
view.check_permissions(request)
File "C:\Users\Devansh\Envs\19\lib\site-packages\rest_framework\views.py
e 318, in check_permissions
if not permission.has_permission(request, self):
TypeError: has_permission() missing 1 required positional argument: 'view'
Answer: You are missing a class instantiation for `permissions.IsAuthenticated`:
def get_permissions(self):
if self.request.method in permissions.SAFE_METHODS:
return (permissions.AllowAny(),)
return (permissions.IsAuthenticated, IsAuthorOfPost(),)
# ^^^
The error message comes from calling the instance method `has_permission` on the
`IsAuthenticated` class itself rather than on an instance. Thus `request` gets
mapped to `self`, `view` to `request`, and `view` itself is then missing.
Changing `get_permissions()` to
def get_permissions(self):
if self.request.method in permissions.SAFE_METHODS:
return (permissions.AllowAny(),)
return (permissions.IsAuthenticated(), IsAuthorOfPost(),)
# ^^
should solve the problem.
As a side note: Your `get_permissions()` code takes an active role in deciding
authorization. It would be better to move this functionality into the
permissions themselves to make the code better follow the single
responsibility principle.
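A sketch of what that refactoring could look like (the class name here is hypothetical); the view would then declare `permission_classes = (IsAuthorOrReadOnly,)` instead of overriding `get_permissions()`:
from rest_framework import permissions
class IsAuthorOrReadOnly(permissions.BasePermission):
    # safe methods are open to anyone; writes require an
    # authenticated user who authored the object
    def has_permission(self, request, view):
        if request.method in permissions.SAFE_METHODS:
            return True
        return request.user and request.user.is_authenticated()
    def has_object_permission(self, request, view, obj):
        if request.method in permissions.SAFE_METHODS:
            return True
        return obj.author == request.user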
|
Adding a frame over a label in Python
Question: I want to create a frame over the label and have been trying lots of code, but it
is not working out. I am also trying to move the checkbuttons to the left side of
the screen with no frame. Can anyone help me? Thank you
I got far as this [](http://i.stack.imgur.com/JKWNM.png)
But I want to make it look like this with the frame
[](http://i.stack.imgur.com/5W55M.png)
show_status = Label(dashboard, bd = 5, text = 'Even', fg = 'black',
font = ('Arial', 70), width = 8)
def update_dashboard():
three_buttons = Label(dashboard, relief = 'groove')
Alpha_button = Checkbutton(three_buttons, text = 'Alpha',
variable = alpa_1,
command = update_dashboard)
Beta_button = Checkbutton(three_buttons, text = 'Beta',
variable = beta_2,
command = update_dashboard)
Gamma_button = Checkbutton(three_buttons, text = 'Gamma',
variable = gemma_3,
command = update_dashboard)
Alpha_button.grid(row = 1, column = 0, sticky = 'w')
Beta_button.grid(row = 1, column = 2, sticky = 'w')
Gamma_button.grid(row = 1, column = 4, sticky = 'w')
margin = 5 # pixels
show_status.grid(padx = margin, pady = margin, row = 1,
column = 1, columnspan = 2,)
three_buttons.grid(row = 4, column = 2, sticky = W)
dashboard.mainloop()
Answer: You can use a Frame or a Canvas and draw the rest of the widgets on it. Let us
use the Frame by relying on the [grid layout
manager](http://effbot.org/tkinterbook/grid.htm).
To get the effect you are looking for, you simply need to span the label
over the 3 columns of the checkbutton widgets using the
[`columnspan`](http://effbot.org/tkinterbook/grid.htm#Tkinter.Grid.grid-
method) option.
# Full program
Here is a simple solution using the object oriented concepts:
'''
Created on May 8, 2016
@author: Billal Begueradj
'''
import Tkinter as Tk
class Begueradj(Tk.Frame):
'''
Display a Label spanning over 3 columns of checkbuttons
'''
def __init__(self, parent):
'''Initialize the GUI
'''
Tk.Frame.__init__(self, parent)
self.parent=parent
self.initialize_user_interface()
def initialize_user_interface(self):
"""Draw the GUI
"""
self.parent.title("Billal BEGUERADJ")
self.parent.grid_rowconfigure(0,weight=1)
self.parent.grid_columnconfigure(0,weight=1)
self.parent.config(background="lavender")
# Create a Frame on which other elements will be attached to
self.frame = Tk.Frame(self.parent, width = 500, height = 207)
self.frame.pack(fill=Tk.X, padx=5, pady=5)
# Create the checkbuttons and position them on the second row of the grid
self.alpha_button = Tk.Checkbutton(self.frame, text = 'Alpha', font = ('Arial', 20))
self.alpha_button.grid(row = 1, column = 0)
self.beta_button = Tk.Checkbutton(self.frame, text = 'Beta', font = ('Arial', 20))
self.beta_button.grid(row = 1, column = 1)
self.gamma_button = Tk.Checkbutton(self.frame, text = 'Gamma', font = ('Arial', 20))
self.gamma_button.grid(row = 1, column = 2)
# Create the Label widget on the first row of the grid and span it over the 3 checkbuttons above
self.label = Tk.Label(self.frame, text = 'Even', bd = 5, fg = 'black', font = ('Arial', 70), width = 8, relief = 'groove')
self.label.grid(row = 0, columnspan = 3)
# Main method
def main():
root=Tk.Tk()
d=Begueradj(root)
root.mainloop()
# Main program
if __name__=="__main__":
main()
# Demo
Here is a screenshot of the running program:
[](http://i.stack.imgur.com/0uMkK.png)
|
Creating a timeout function in Python with multiprocessing
Question: I'm trying to create a timeout function in Python 2.7.11 (on Windows) with the
multiprocessing library.
My basic goal is to return one value if the function times out and the actual
value if it doesn't timeout.
My approach is the following:
from multiprocessing import Process, Manager
def timeoutFunction(puzzleFileName, timeLimit):
manager = Manager()
returnVal = manager.list()
# Create worker function
def solveProblem(return_val):
return_val[:] = doSomeWork(puzzleFileName) # doSomeWork() returns list
p = Process(target=solveProblem, args=[returnVal])
p.start()
p.join(timeLimit)
if p.is_alive():
p.terminate()
returnVal = ['Timeout']
return returnVal
And I call the function like this:
if __name__ == '__main__':
print timeoutFunction('example.txt', 600)
Unfortunately this doesn't work and I receive some sort of EOF error in
pickle.py
Can anyone see what I'm doing wrong?
Thanks in advance,
Alexander
**Edit:** doSomeWork() is not an actual function. Just a filler for some other
work I do. That work is not done in parallel and does not use any shared
variables. I'm only trying to run a single function and have it possibly
timeout.
Answer: You can use the [Pebble](https://pypi.python.org/pypi/Pebble) library for this.
from pebble.process import concurrent
from pebble import TimeoutError
TIMEOUT_IN_SECONDS = 5
def function(foo, bar=0):
return foo + bar
task = concurrent(target=function, args=[1], kwargs={'bar': 1}, timeout=TIMEOUT_IN_SECONDS)
try:
results = task.get() # blocks until results are ready
except TimeoutError:
results = 'timeout'
The [documentation](http://pythonhosted.org/Pebble/#concurrent-functions) has
more complete examples.
The library will terminate the function if it times out, so you don't need to
worry about IO or CPU being wasted.
EDIT:
If you're doing an assignment, you can still look at
[its](https://github.com/noxdafox/pebble) implementation.
Short example:
from multiprocessing import Pipe, Process
def worker(pipe, function, args, kwargs):
    try:
        results = function(*args, **kwargs)
    except Exception as error:
        results = error
    pipe.send(results)
# Pipe() returns a pair of connection objects
recv_end, send_end = Pipe(duplex=False)
process = Process(target=worker, args=(send_end, function, args, kwargs))
process.start()
if recv_end.poll(timeout=5):  # True as soon as a result is ready
    results = recv_end.recv()
else:  # nothing arrived within the timeout
    process.terminate()
    process.join()
    results = 'timeout'
Pebble provides a neat API, takes care of corner cases and uses more robust
mechanisms. Yet this is what it does under the hood.
|
Less noisy graph and extra humps in python
Question: Here is the data file:
<https://jsfiddle.net/83ygso6u/>
Sorry for posting it in jsfiddle... didn't know where else to host it.
Anyway the second column should be ignored.
Here is the code and graph:
import pylab as plb
import math
from pylab import *
import matplotlib.pyplot as plt
data = plb.loadtxt('title_of_datafile.txt')
x = data[:,0]*1000
y= data[:,2]
plt.figure()
plt.title('Some_Title',fontsize=35, y=1.05)
plt.xlabel('Frequency (Hz)',fontsize=30)
plt.ylabel('dBu',fontsize=30)
plt.plot(x,y,'k-', label='Data')
plt.xticks(fontsize = 25, y=-0.008)
plt.yticks(fontsize = 25, x=-0.008)
plt.show()
[](http://i.stack.imgur.com/coia2.png)
So you can see this signal is quite noisy, but it does have two distinct peaks
at around 4500 Hz and 5500 Hz.
I have been searching around the net and haven't really come across anything
that will help me.
How can I extract these peaks and/or clean up the signal in python?
Answer: Well I managed to find a solution. Here is the script with the resulting plot.
Script:
import pylab as plb
import math
from pylab import *
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
from scipy import signal
import peakutils
from peakutils.plot import plot as pplot
data = plb.loadtxt('data_file_name')
x = data[:,0]*1000
y= data[:,2]
y1 = sp.signal.medfilt(y,431) # remove noise to the signal
indexes = peakutils.indexes(y1, thres=0.00005, min_dist=1400) #determine peaks
x_new = x[indexes]
plt.figure()
plt.subplot(1,2,1)
plt.title('some_title_1',fontsize=35, y=1.05)
plt.xlabel('Frequency (Hz)',fontsize=30)
plt.ylabel('Signal (dBu)',fontsize=30)
plt.plot(x,y,'r-', label='Raw Data')
plt.plot(x,y1,'b-', label='Cleaned up Signal')
plt.plot(x_new[3:6],y1[indexes][3:6],'k^',markersize=10, label='Peaks')
plt.xticks(fontsize = 25, y=-0.008)
plt.yticks(fontsize = 25, x=-0.008)
plt.legend(loc=1,prop={'size':30})
plt.subplot(1,2,2)
for i,j in zip(x_new[3:6], y1[indexes][3:6]):
plt.annotate(str(i)+ " Hz",xy=(i,j+0.5),fontsize=15)
plt.title('some_title_2',fontsize=35, y=1.05)
plt.xlabel('Frequency (Hz)',fontsize=30)
plt.ylabel('Signal (dBu)',fontsize=30)
plt.plot(x,y,'r-', label='Data')
plt.plot(x,y1,'b-')
plt.plot(x_new[3:6],y1[indexes][3:6],'k^',markersize=10)
plt.xticks(fontsize = 25, y=-0.008)
plt.yticks(fontsize = 25, x=-0.008)
plt.xlim([3000, 6000])
plt.ylim([-90, -75])
plt.subplots_adjust(hspace = 0.6)
plt.show()
[](http://i.stack.imgur.com/iEILd.png)
|
Extract non-empty values from the regex array output in python
Question: I have a column of type numpy.ndarray which looks like:
col
['','','5','']
['','8']
['6','','']
['7']
[]
['5']
I want the output like this:
col
5
8
6
7
0
5
How can I do this in Python? Any help is highly appreciated.
Answer: To convert the data to numeric values you could use:
import numpy as np
import pandas as pd
data = list(map(np.array, [ ['','','5',''], ['','8'], ['6','',''], ['7'], [], ['5']]))
df = pd.DataFrame({'col': data})
df['col'] = pd.to_numeric(df['col'].str.join('')).fillna(0).astype(int)
print(df)
yields
col
0 5
1 8
2 6
3 7
4 0
5 5
* * *
To convert the data to strings use:
df['col'] = df['col'].str.join('').replace('', '0')
The result looks the same, but the dtype of the column is `object` since the
values are strings.
* * *
If there is more than one number in some rows and you wish to pick the
largest, then you'll have to loop through each item in each row, convert each
string to a numeric value and take the max:
import numpy as np
import pandas as pd
data = list(map(np.array, [ ['','','5','6'], ['','8'], ['6','',''], ['7'], [], ['5']]))
df = pd.DataFrame({'col': data})
df['col'] = [max([int(xi) if xi else 0 for xi in x] or [0]) for x in df['col']]
print(df)
yields
col
0 6 # <-- note ['','','5','6'] was converted to 6
1 8
2 6
3 7
4 0
5 5
* * *
For versions of pandas prior to 0.17, you could use `df.convert_objects`
instead:
import numpy as np
import pandas as pd
data = list(map(np.array, [ ['','','5',''], ['','8'], ['6','',''], ['7'], [], ['5']]))
df = pd.DataFrame({'col': data})
df['col'] = df['col'].str.join('').replace('', '0')
df = df.convert_objects(convert_numeric=True)
|
Why is the output of my subprocess not being printed?
Question: Here is my best attempt at asynchronously reading stdin/stdout from a
subprocess and printing it from Python:
import asyncio
import subprocess
from asyncio.subprocess import STDOUT, PIPE, DEVNULL
async def start_stream():
return await asyncio.create_subprocess_shell(
'watch ls /proc',
stdout=PIPE,
stderr=PIPE,
limit=1024
)
async def spawn():
ev_proc = await start_stream()
while True:
stdout, stderr = await ev_proc.communicate()
print(stdout, stderr)
if __name__ == '__main__':
loop = asyncio.get_event_loop()
loop.run_until_complete(spawn())
Why is the print function not outputting anything?
Answer: Your `watch` process never terminates and `communicate()` waits for the
process to terminate, therefore `stdout` never arrives in your script.
<https://docs.python.org/3/library/asyncio-subprocess.html>
> coroutine communicate(input=None)
>
> Interact with process: Send data to stdin. Read data from stdout and stderr,
> until end-of-file is reached. **Wait for process to terminate.**
Try the following code, which was inspired by
<http://stackoverflow.com/a/24435988/2776376>. It uses `pipe_data_received`,
and the `len(text.strip()) > 16` check is simply there to avoid printing empty
or near-empty lines.
> SubprocessProtocol.pipe_data_received(fd, data)
>
> Called when the child process writes data into its stdout or stderr pipe. fd
> is the integer file descriptor of the pipe. data is a non-empty bytes object
> containing the data.
import asyncio
class SubprocessProtocol(asyncio.SubprocessProtocol):
def pipe_data_received(self, fd, data):
if fd == 1:
text = data.decode()
if len(text.strip()) > 16:
print(text.strip())
def process_exited(self):
loop.stop()
loop = asyncio.get_event_loop()
ls = loop.run_until_complete(loop.subprocess_exec(
SubprocessProtocol, 'watch', 'ls', '/proc'))
loop.run_forever()
|
Python Gtk.MessageDialog Hides Parent Window
Question: I am working on a Gtk3 app written in Python. The main window for my app is
set up as follows:
#!/bin/python
import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk as Gtk
## OTHER IMPORTS
class MainGui(Gtk.Window):
def __init__(self):
Gtk.Window.__init__(self, title="APP TITLE")
# Defaults
self.set_default_size(600, 500)
## OTHER CODE
# Setup the Window
self.connect("destroy", self.on_close)
self.show_all()
## OTHER CODE
def on_close(self, widget):
if self.editor.get_document().get_has_changes():
save_dialog = Gtk.MessageDialog(self, 0,
Gtk.MessageType.QUESTION,
Gtk.ButtonsType.YES_NO,
"Save changes?")
response = save_dialog.run()
## REST OF DIALOG HANDELING
The problem I'm having is related to the save dialog. The app displays the
dialog just fine, but it hides my main window, which is not the desired
effect. I've tried searching around for a solution, but can't seem to figure
out what I'm doing wrong. Any help would be greatly appreciated!
Answer: Shortly after posting this I realized that the reason things weren't working
was a bone-headed mistake. I was hooking up my on_close method using
this:
self.connect("destroy", self.on_close)
It turns out I should be doing it this way:
self.connect("delete-event", self.on_close)
Now things work great.
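For reference, a minimal sketch of the corrected hookup; note that a "delete-event" handler receives an extra event argument, and its return value decides whether the close proceeds:
class MainGui(Gtk.Window):
    def __init__(self):
        Gtk.Window.__init__(self, title='APP TITLE')
        # 'delete-event' fires before the window is torn down, so the
        # dialog appears over a still-visible parent
        self.connect('delete-event', self.on_close)
        self.show_all()
    def on_close(self, widget, event):
        # ... show the save dialog as in the question ...
        return False  # False lets the close proceed; True cancels it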
|
Python Flask timed queuing script
Question: I just started using Flask, and I'm creating a web application that does two
main things server side: Accessing another online API (which I can only send
so many requests to per second) and sending page requests to a user connecting
to the server.
When a user connects to my Flask server, it will send the user's browser a
page, then an AJAX script on that page will populate the page with data (this
is done for UI performance). This data comes from another API (the League of
Legends API), but there is a rate limit set on the number of calls I can make
per second, so I must make a queuing script.
Currently, I plan on using a `time.sleep()` function after every call, but I'm
worried that this will prevent the server from doing anything else. I still
want the server to be responding to page requests while the API calls are
being delayed.
**For this, should I use multiprocessing, or does Flask have something built
in to handle this? Or should I install a specific plugin for this?**
Thanks!
Answer: I think the recommended way of doing this is by using an asynchronous task
queue like **[celery](http://www.celeryproject.org/)**.
Using it is very simple: you just need to add the @app.task decorator to
functions you need to run in the background:
from celery import Celery
app = Celery('tasks', broker='amqp://guest@localhost//')
@app.task
def add(x, y):
return x + y
result = add.delay(2, 2)
It has many features and functionalities and it'll do the job for you. You can
refer to the doc for more information.
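Since the question is specifically about staying under an external API's request limit, note that celery tasks also accept a rate_limit option (reusing the app object from the snippet above; the task name, the call inside it, and the '10/s' value are only illustrative):
@app.task(rate_limit='10/s')  # at most 10 executions per second, per worker
def fetch_from_api(resource_id):
    # hypothetical call against the rate-limited API
    return call_league_api(resource_id)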
|
pyinstaller and Tkinter
Question: I've built a Python (2.7) app that uses Tkinter and am trying to build a
Windows7 .exe using Pyinstaller (3.2). The app works fine in Windows if I run
it as `python myapp.py`, but once compiled into a pyinstaller distributable, I
get this error message:
ImportError: No module named Tkinter
Just to be sure, the top of myapp.py contains:
from copy import deepcopy
import cPickle as pickle
import Tkinter as tk
from PIL import ImageTk
Checking the distribution directory, I see tk85.dll, tcl85.dll and two
directories that seem pertinent, tcl/ and tk/.
I've found many references to secondary Tkinter dependencies, such as
matplotlib which imports Tkinter itself, but I've not found any details of a
direct dependency like this.
Any ideas how to get this one working?
Answer: Have you checked: <https://github.com/pyinstaller/pyinstaller/issues/1877> (or
other issues)? <https://github.com/pyinstaller/pyinstaller/wiki/If-Things-Go-
Wrong>
quote from issue 1877 "It looks like the hook-_tkinter.py is not able to
handle custom compiled Tk." Possible workaround: "Thanks, after installed
tkinter, tix, tcl-devel and tk-devel using yum installation, It's now work
fine. "
Otherwise, Py2exe is also an option for creating a .exe file, and I have used
it plenty of times with tkinter with no issues.
|
How do I determine the window with the active keyboard focus using ScriptingBridge (or AppleScript)?
Question: From all the API documentation I can find, it seems that the right thing to do
is to check the "frontmost" window as returned by System Events or the
accessibility API, like so (example in Python here, but this is the same in
ObjC or swift or ruby or whatever):
#!/usr/bin/env python
from ScriptingBridge import SBApplication
events = SBApplication.applicationWithBundleIdentifier_(
"com.apple.systemevents")
for proc in events.applicationProcesses():
if proc.frontmost():
print(proc.name())
The value I get back from this is the same as from
`NSWorkspace.sharedWorkspace().frontmostApplication()`. And it's _usually_
correct. Except when a prompt dialog, especially one from the system, is
_actually_ what has the keyboard focus. For example, if Messages.app wants a
password to my Jabber account, or if my iCloud password changes; these dialogs
appear to be coming from the `UserNotificationCenter` process, which does not
report itself as the frontmost application somehow, even though it definitely
has keyboard focus.
Answer: "**UserNotificationCenter** " and "**UserNotificationCenter** " are background
applications (the `NSUIElement` key is 1 in the info.plist).
`proc.frontmost()` is always **false** on process which is in background (no
menu and not in the Dock).
And `NSWorkspace.sharedWorkspace().frontmostApplication()` doesn't work on
background application.
* * *
To get the active application, use the `activeApplication` method from the
`NSWorkspace` class
Here's the AppleScript:
set pyScript to "from AppKit import NSWorkspace
activeApp = NSWorkspace.sharedWorkspace().activeApplication()
print activeApp['NSApplicationName'].encode('utf-8')
print activeApp['NSApplicationProcessIdentifier']"
set r to do shell script "/usr/bin/python -c " & quoted form of pyScript
set {localizedAppName, procID} to paragraphs of r -- procID is the unix id
* * *
**Update** with a non-deprecated method:
set pyScript to "from AppKit import NSWorkspace
for app in NSWorkspace.sharedWorkspace().runningApplications():
if app.isActive():
print app.localizedName().encode('utf-8')
print app.processIdentifier()
break"
set r to do shell script "/usr/bin/python -c " & quoted form of pyScript
set {localizedAppName, procID} to paragraphs of r -- procID is the unix id
* * *
To get the front window from a process ID, use the `procID` variable, like
this:
tell application "System Events"
tell (first process whose unix id = procID)
log (get properties) -- properties of this process
tell window 1 to if exists then log (get properties) -- properties of the front window of this process
end tell
end tell
|
py2neo - Unable to fetch Data from a remote server
Question: I am using the py2neo package to query my database, which is located on a server
machine.
**My code snippet:**
from py2neo import Graph,authenticate
import time
from py2neo.packages.httpstream import http
http.socket_timeout = 9999
def dbConnect():
graph = Graph("http://192.xxx.xxx.xxx:7473/root/neo4j.graphdb")
print(graph)
#execute a cypher query
cypher(graph)
return
def cypher(graph):
start_time = time.time()
result = graph.cypher.execute("MATCH (n) RETURN COUNT(n)")
print(time.time() - start_time)
return
if __name__ == '__main__':
dbConnect()
Unable to fetch data from the machine; instead it returns an error.
Error Message:
<Graph uri=u'http://192.168.204.146:7473/root/neo4j.graphdb/'>
Traceback (most recent call last):
File "D:\Innominds\Collective[I]\Dev\Graph\Cypher_VS_Api.py", line 30, in <module>
dbConnect()
File "D:\Innominds\Collective[I]\Dev\Graph\Cypher_VS_Api.py", line 19, in dbConnect
cypher()
File "D:\Innominds\Collective[I]\Dev\Graph\Cypher_VS_Api.py", line 25, in cypher
result = graph.cypher.execute("MATCH (n) RETURN COUNT(n)")
File "C:\Python27\lib\site-packages\py2neo\core.py", line 661, in cypher
metadata = self.resource.metadata
File "C:\Python27\lib\site-packages\py2neo\core.py", line 213, in metadata
self.get()
File "C:\Python27\lib\site-packages\py2neo\core.py", line 258, in get
response = self.__base.get(headers=headers, redirect_limit=redirect_limit, **kwargs)
File "C:\Python27\lib\site-packages\py2neo\packages\httpstream\http.py", line 966, in get
return self.__get_or_head("GET", if_modified_since, headers, redirect_limit, **kwargs)
File "C:\Python27\lib\site-packages\py2neo\packages\httpstream\http.py", line 943, in __get_or_head
return rq.submit(redirect_limit=redirect_limit, **kwargs)
File "C:\Python27\lib\site-packages\py2neo\packages\httpstream\http.py", line 433, in submit
http, rs = submit(self.method, uri, self.body, self.headers)
File "C:\Python27\lib\site-packages\py2neo\packages\httpstream\http.py", line 325, in submit
response = send("peer closed connection")
File "C:\Python27\lib\site-packages\py2neo\packages\httpstream\http.py", line 318, in send
return http.getresponse(**getresponse_args)
File "C:\Python27\lib\httplib.py", line 1074, in getresponse
response.begin()
File "C:\Python27\lib\httplib.py", line 415, in begin
version, status, reason = self._read_status()
File "C:\Python27\lib\httplib.py", line 379, in _read_status
raise BadStatusLine(line)
httplib.BadStatusLine: ''
Observe that the first line in the error message is just a print statement in the
code, which prints the graph object to the console. The http import is
something I found by googling.
What settings and changes are needed in order to access the graph database on
the server machine from my local machine?
Answer: First you should check if your server is accessible and if you can open the
web interface in a browser.
You connect via `http` but use the standard `https` port `7473`, and the URL
looks wrong.
http://192.xxx.xxx.xxx:7473/root/neo4j.graphdb
You should try to connect with `http` to `7474` or `https` to `7473`. And the
graph URL should look like `http://server:port/db/data`. Try:
http://192.xxx.xxx.xxx:7474/db/data
https://192.xxx.xxx.xxx:7473/db/data
Also you don't use authentication. Have you disabled it on the server?
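If authentication is enabled, here is a sketch in the py2neo 2.x style (the question imports authenticate but never calls it; the credentials below are placeholders):
from py2neo import Graph, authenticate
# register credentials for this host:port before creating the Graph
authenticate('192.xxx.xxx.xxx:7474', 'neo4j', 'your-password')
graph = Graph('http://192.xxx.xxx.xxx:7474/db/data/')
print(graph.cypher.execute('MATCH (n) RETURN COUNT(n)'))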
|
How to manually populate Many-To-Many fields through JSON fixture in Django
Question: I have a JSON fixture and I want to populate a many-to-many field from my JSON
fixture, but it seems Django wants just one pk, while I need to pass in many
integers representing pks for the related objects.
Is there any way to go about this?
I have used a `raw_id = ['reference(my table name)']` in the `ModelAdmin` so
that pks can be used to reference the related fields.
The error message is
> File "/usr/local/lib/python2.7/dist-
> packages/django/core/serializers/python.py", line 142, in Deserializer raise
> base.DeserializationError.WithData(e, d['model'], d.get('pk'), pk)
> django.core.serializers.base.DeserializationError: Problem installing
> fixture '/home/user/Desktop/File/data/file.json':
>
> `[u"',' value must be an integer."]: (kjv.verse:pk=1) field_value was ','`
Answer: You can use a `JSONField()` on your Django model:
from django.contrib.postgres.fields import JSONField
raw_id = JSONField(primary_key=True, db_index=True, null=True)
So your database will store something like `{raw_id: [1, 2, 3, 4]}`
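For comparison, Django's native fixture format already accepts a many-to-many field as a plain JSON list of primary keys, and the error in the question (`field_value was ','`) suggests the pks were written as one comma-separated string instead of a list. A sketch with hypothetical field names:
[
  {
    "model": "kjv.verse",
    "pk": 1,
    "fields": {
      "text": "In the beginning...",
      "reference": [1, 2, 3]
    }
  }
]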
|
Should I use Pickle or cPickle?
Question: Python has both the `pickle` and `cPickle` modules for serialization.
`cPickle` has an obvious advantage over `pickle`: speed. Does `pickle` have
any advantages? If not, should I simply use `cPickle` always?
Answer: The **pickle** module implements an algorithm for turning an arbitrary
**Python** object into a series of bytes. This process is also called
“serializing” the object. The byte stream representing the object can then be
transmitted or stored, and later reconstructed to create a new object with the
same characteristics.
The **cPickle** module implements the same algorithm, in **C** instead of
Python. It is many times faster than the Python implementation, but does not
allow the user to subclass from Pickle. If subclassing is not important for
your use, you probably want to use cPickle.
[Source](https://pymotw.com/2/pickle/) of above information.
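A common idiom gets the speed of cPickle where available while keeping the code portable:
try:
    import cPickle as pickle  # C implementation, many times faster
except ImportError:
    import pickle  # pure-Python fallback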
|