Python: Loop acting on several files and writing new ones
Question: I have the following code, which takes the file "University2.csv" and writes
new csv files "Hours.csv", "Hours - Stacked.csv" and "Days.csv".
Now I want the code to be able to loop and run on several files
(University3.csv, University4.csv, etc.) and produce for each of them
"Hours3.csv", "Hours - Stacked3.csv", "Days3.csv", "Hours4.csv", etc.
Here is the code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
#Importing the csv file into df
df = pd.read_csv('university2.csv', sep=";", skiprows=1)
#Changing datetime
df['YYYY-MO-DD HH-MI-SS_SSS'] = pd.to_datetime(df['YYYY-MO-DD HH-MI-SS_SSS'],
format='%Y-%m-%d %H:%M:%S:%f')
#Set index from column
df = df.set_index('YYYY-MO-DD HH-MI-SS_SSS')
#Add Magnetic Magnitude Column
df['magnetic_mag'] = np.sqrt(df['MAGNETIC FIELD X (μT)']**2 + df['MAGNETIC FIELD Y (μT)']**2 + df['MAGNETIC FIELD Z (μT)']**2)
#Copy interesting values
df2 = df[[ 'ATMOSPHERIC PRESSURE (hPa)',
'TEMPERATURE (C)', 'magnetic_mag']].copy()
#Hourly Average and Standard Deviation for interesting values
df3 = df2.resample('H').agg(['mean','std'])
df3.columns = [' '.join(col) for col in df3.columns]
#Daily Average and Standard Deviation for interesting values
df4 = df2.resample('D').agg(['mean','std'])
df4.columns = [' '.join(col) for col in df4.columns]
#Write to new csv
df3.to_csv('Hours.csv', index=True)
df4.to_csv('Days.csv', index=True)
#New csv with stacked hour averages
df5 = pd.read_csv('Hours.csv')
df5['YYYY-MO-DD HH-MI-SS_SSS'] = pd.to_datetime(df5['YYYY-MO-DD HH-MI-SS_SSS'])
hour = pd.to_timedelta(df5['YYYY-MO-DD HH-MI-SS_SSS'].dt.hour, unit='H')
df6 = df5.groupby(hour).mean()
df6.to_csv('Hours - stacked.csv', index=True)
Can anyone help?
Thank you!
Answer: The following code should do the trick.
It runs a for loop over an index (idx) that takes the values 3, 4 and 5.
It builds the file names from that index, e.g.
uni_name = "university" + str(idx) + ".csv"
Here is the code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
for idx in 3, 4, 5:
    #Importing the csv file into df
    uni_name = "university" + str(idx) + ".csv"
    df = pd.read_csv(uni_name, sep=";", skiprows=1)
    #Changing datetime
    df['YYYY-MO-DD HH-MI-SS_SSS'] = pd.to_datetime(df['YYYY-MO-DD HH-MI-SS_SSS'],
                                                   format='%Y-%m-%d %H:%M:%S:%f')
    #Set index from column
    df = df.set_index('YYYY-MO-DD HH-MI-SS_SSS')
    #Add Magnetic Magnitude Column
    df['magnetic_mag'] = np.sqrt(df['MAGNETIC FIELD X (μT)']**2 + df['MAGNETIC FIELD Y (μT)']**2 + df['MAGNETIC FIELD Z (μT)']**2)
    #Copy interesting values
    df2 = df[['ATMOSPHERIC PRESSURE (hPa)',
              'TEMPERATURE (C)', 'magnetic_mag']].copy()
    #Hourly Average and Standard Deviation for interesting values
    df3 = df2.resample('H').agg(['mean','std'])
    df3.columns = [' '.join(col) for col in df3.columns]
    #Daily Average and Standard Deviation for interesting values
    df4 = df2.resample('D').agg(['mean','std'])
    df4.columns = [' '.join(col) for col in df4.columns]
    #Write to new csv
    hours = "Hours" + str(idx) + ".csv"
    days = "Days" + str(idx) + ".csv"
    df3.to_csv(hours, index=True)
    df4.to_csv(days, index=True)
    #New csv with stacked hour averages (read back the file written for this idx)
    df5 = pd.read_csv(hours)
    df5['YYYY-MO-DD HH-MI-SS_SSS'] = pd.to_datetime(df5['YYYY-MO-DD HH-MI-SS_SSS'])
    hour = pd.to_timedelta(df5['YYYY-MO-DD HH-MI-SS_SSS'].dt.hour, unit='H')
    df6 = df5.groupby(hour).mean()
    hours_st = "Hours - stacked" + str(idx) + ".csv"
    df6.to_csv(hours_st, index=True)
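If the file numbers are not known in advance, a variant (just a sketch, assuming the `universityN.csv` naming convention) could discover the files with `glob` instead of hard-coding the indices:
import glob
import re
for uni_name in sorted(glob.glob("university*.csv")):
    idx = re.search(r"\d+", uni_name).group()   # e.g. "3" from "university3.csv"
    # ...then run the same processing as above, writing "Hours" + idx + ".csv" etc.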
|
Python Tkinter on Debian Beaglebone: lost font styling when changed directory name
Question: I have installed non-system fonts onto BeagleBone Black (Debian Jessie) and
have been using them in a GUI created using python 2.7 script via Tkinter and
tkFont. When I changed the name of the directory my file was stored in, these
fonts stopped appearing in my python script GUI!
I installed the fonts into /usr/shared/fonts and they are still there, of
course, but somehow I lost the connection to the fonts from my script.
I ran `fc-cache -fv` and rebooted. I ran a short script with
list( tkFont.families() )
in it, and the fonts I want to use appear in the list.
It is still displaying the system font in the GUI.
How can it be? Here is my code:
#!/usr/bin/python
import time
import threading
import Queue
import Tkinter as tk
import tkFont
try:
import alsaaudio as aa
import audioop
import Adafruit_BBIO.GPIO as GPIO
debug = False
except ImportError:
# To enable simple testing on systems without alsa/gpio
import mock
aa = mock.MagicMock()
aa.PCM().read.return_value = (1, '')
audioop = mock.MagicMock()
audioop.max.return_value = 5000
GPIO = mock.MagicMock()
import random
GPIO.input.side_effect = lambda *a: random.randint(0, 5000) == 0
debug = True
# layout ########################################################
BACKGROUND_COLOR = '#000000'
TEXTBOX_WIDTH = 1920
# vertical alignment of text in percentages from top
V_ALIGN_L1 = .16
V_ALIGN_L2 = .28
V_ALIGN_HEADER = .52
V_ALIGN_SCORE = .68
V_ALIGN_TILT = .50
V_ALIGN_AGAIN = .68
# type ##########################################################
TYPEFACE_L1 = 'Avenir Next Demi Bold'
TYPEFACE_L2 = 'Avenir Next Bold'
TYPEFACE_HEADER = 'Avenir Next Bold'
TYPEFACE_SCORE = 'Avenir Next Demi Bold'
TYPEFACE_TILT = 'Avenir Next Bold'
TYPEFACE_AGAIN = 'Avenir Next Bold'
WEIGHT_L1 = tkFont.NORMAL
WEIGHT_L2 = tkFont.BOLD
WEIGHT_HEADER = tkFont.BOLD
WEIGHT_SCORE = tkFont.NORMAL
WEIGHT_TILT = tkFont.BOLD
WEIGHT_AGAIN = tkFont.BOLD
FONT_SIZE_L1 = 56
FONT_SIZE_L2 = 56
FONT_SIZE_HEADER = 76
FONT_SIZE_SCORE = 168
FONT_SIZE_TILT = 114
FONT_SIZE_AGAIN = 76
LINE_HEIGHT_L1 = -5
LINE_HEIGHT_L2 = -5
LINE_HEIGHT_HEADER = -10
LINE_HEIGHT_SCORE = -20
LINE_HEIGHT_TILT = -20
LINE_HEIGHT_AGAIN = -1
TEXT_COLOR_HIGHLIGHT = '#FFFFFF'
TEXT_COLOR_BODY = '#92C73D'
# text ###########################################################
L1 = 'Try to beat your own score.'
L2 = 'The lowest score wins!'
HEADER_MESSAGE = 'Your Score:'
TILT_MESSAGE = 'Too loud!'
TRY_AGAIN = 'Start again!'
# audio collection configuration ##################################
BUTTON_PIN = 'P8_12'
DEVICE = 'hw:1' # hardware sound card index
CHANNELS = 2
SAMPLE_RATE = 44100 # Hz
PERIOD = 256 # Frames
FORMAT = aa.PCM_FORMAT_S16_LE # Sound format
NOISE_THRESHOLD = 0 # to eliminate small noises, scale of 0 - 7
TILT_THRESHOLD = 100.0 # upper limit of score before tilt state
SCALAR = 4680 # normalizes score, found by trial and error
UPDATE_TIME = 100 # ms
# start script ###################################################
class Display(object):
def __init__(self, parent, queue, stop_event):
self.parent = parent
self.queue = queue
self.stop_event = stop_event
self.tilt_event = threading.Event()
self._geom = '200x200+0+0'
parent.geometry("{0}x{1}+0+0".format(
parent.winfo_screenwidth(), parent.winfo_screenheight()))
parent.overrideredirect(1)
parent.title(TITLE)
parent.configure(background=BACKGROUND_COLOR)
self.create_text()
self.process_queue()
self.audio_thread = threading.Thread(target=self.setup_audio)
self.audio_thread.start()
def __delete__(self, instance):
instance.stop_event.set()
def create_text(self):
message_kwargs = dict(
bg=BACKGROUND_COLOR,
width=TEXTBOX_WIDTH,
justify='c',
)
self.message_L1 = tk.Message(
self.parent,
text=L1,
fg=TEXT_COLOR_HIGHLIGHT,
font=(TYPEFACE_HEADER, FONT_SIZE_L1, WEIGHT_L1),
pady=LINE_HEIGHT_L1,
**message_kwargs)
self.message_L2 = tk.Message(
self.parent,
text=L2,
fg=TEXT_COLOR_HIGHLIGHT,
font=(TYPEFACE_HEADER, FONT_SIZE_L2, WEIGHT_L2),
pady=LINE_HEIGHT_L2,
**message_kwargs)
self.message_score_header = tk.Message(
self.parent,
text=HEADER_MESSAGE,
fg=TEXT_COLOR_BODY,
font=(TYPEFACE_HEADER, FONT_SIZE_HEADER, WEIGHT_HEADER),
pady=LINE_HEIGHT_HEADER,
**message_kwargs)
self.message_score = tk.Message(
self.parent,
text='0.0',
fg=TEXT_COLOR_HIGHLIGHT,
font=(TYPEFACE_SCORE, FONT_SIZE_SCORE, WEIGHT_SCORE),
pady=LINE_HEIGHT_SCORE,
**message_kwargs)
self.message_L1.place(relx=.5, rely=V_ALIGN_L1, anchor='c')
self.message_L2.place(relx=.5, rely=V_ALIGN_L2, anchor='c')
self.message_score_header.place(relx=V_ALIGN_HEADER, rely=.5, anchor='c')
self.message_score.place(relx=.5, rely=V_ALIGN_SCORE, anchor='c')
def process_queue(self):
text = None
while not self.queue.empty():
text = self.queue.get_nowait()
if text:
self.message_score_header.configure(text=HEADER_MESSAGE)
self.message_score.configure(text=text)
elif self.tilt_event.is_set():
self.message_L1.configure(text="")
self.message_L2.configure(text="")
self.message_score_header.configure(text=TILT_MESSAGE, fg=TEXT_COLOR_HIGHLIGHT, font=(TYPEFACE_TILT, FONT_SIZE_TILT, WEIGHT_TILT), pady=LINE_HEIGHT_TILT)
self.message_score.configure(text=TRY_AGAIN, fg=TEXT_COLOR_BODY, font=(TYPEFACE_AGAIN, FONT_SIZE_AGAIN, WEIGHT_AGAIN), pady=LINE_HEIGHT_AGAIN)
self.message_score.place(relx=.75, rely=V_ALIGN_AGAIN, anchor='c')
self.parent.after(UPDATE_TIME, self.process_queue)
def setup_audio(self):
data_in = aa.PCM(aa.PCM_CAPTURE, aa.PCM_NONBLOCK, DEVICE)
data_in.setchannels(2)
data_in.setrate(SAMPLE_RATE)
data_in.setformat(FORMAT)
data_in.setperiodsize(PERIOD)
score = 0
running = False
while not self.stop_event.is_set():
# Sleep a very short time to prevent the thread from locking up
time.sleep(0.001)
if GPIO.input(BUTTON_PIN):
self.tilt_event.clear()
score = 0
if not running:
self.message_L1.configure(text=L1)
self.message_L2.configure(text=L2)
self.message_score_header.configure(text=HEADER_MESSAGE, fg=TEXT_COLOR_BODY, font=(TYPEFACE_HEADER, FONT_SIZE_HEADER, WEIGHT_HEADER), pady=LINE_HEIGHT_HEADER)
self.message_score.configure(text='0.0', fg=TEXT_COLOR_HIGHLIGHT, font=(TYPEFACE_SCORE, FONT_SIZE_SCORE, WEIGHT_SCORE), pady=LINE_HEIGHT_SCORE)
self.message_score.place(relx=.5, rely=V_ALIGN_SCORE, anchor='c')
running = True
self.queue.put('0.0')
elif not running:
# Not running yet, keep waiting
continue
# Read data from device
l, data = data_in.read()
if l and not self.tilt_event.is_set():
# catch frame error
try:
max = audioop.max(data, CHANNELS)
scaled_max = max // SCALAR
if scaled_max <= NOISE_THRESHOLD:
# Too quiet, ignore
continue
score += scaled_max / 10.0
if score > TILT_THRESHOLD:
self.tilt_event.set()
running = False
else:
self.queue.put(str(score))
except audioop.error, e:
if e.message != "not a whole number of frames":
raise e
def main():
GPIO.setup(BUTTON_PIN, GPIO.IN)
stop_event = threading.Event()
window = None
try:
root = tk.Tk()
queue = Queue.Queue()
window = Display(root, queue, stop_event)
# Force the window to the foreground
root.attributes('-topmost', True)
if debug:
root.maxsize(1920, 1200)
root.mainloop()
finally:
stop_event.set()
if window:
window.audio_thread.join()
del window
if __name__ == '__main__':
main()
There are no error messages when I run the script.
**EDIT:** It is also worth mentioning that the font size and weight are
working, just not the typeface.
Answer: I think you are missing the `Font()` construct:
self.message_score.configure(
text=TRY_AGAIN,
fg=TEXT_COLOR_BODY,
font=(TYPEFACE_AGAIN, FONT_SIZE_AGAIN, WEIGHT_AGAIN),
pady=LINE_HEIGHT_AGAIN)
Instead it should probably be:
self.message_score.configure(
text=TRY_AGAIN,
fg=TEXT_COLOR_BODY,
font=tkFont.Font(TYPEFACE_AGAIN, FONT_SIZE_AGAIN, WEIGHT_AGAIN),
pady=LINE_HEIGHT_AGAIN)
The full version:
#!/usr/bin/python
import time
import threading
import Queue
import Tkinter as tk
import tkFont
try:
import alsaaudio as aa
import audioop
import Adafruit_BBIO.GPIO as GPIO
debug = False
except ImportError:
# To enable simple testing on systems without alsa/gpio
import mock
aa = mock.MagicMock()
aa.PCM().read.return_value = (1, '')
audioop = mock.MagicMock()
audioop.max.return_value = 5000
GPIO = mock.MagicMock()
import random
GPIO.input.side_effect = lambda *a: random.randint(0, 5000) == 0
debug = True
# layout ########################################################
BACKGROUND_COLOR = '#000000'
TEXTBOX_WIDTH = 1920
# vertical alignment of text in percentages from top
V_ALIGN_L1 = .16
V_ALIGN_L2 = .28
V_ALIGN_HEADER = .52
V_ALIGN_SCORE = .68
V_ALIGN_TILT = .50
V_ALIGN_AGAIN = .68
# type ##########################################################
TYPEFACE_L1 = 'Avenir Next Demi Bold'
TYPEFACE_L2 = 'Avenir Next Bold'
TYPEFACE_HEADER = 'Avenir Next Bold'
TYPEFACE_SCORE = 'Avenir Next Demi Bold'
TYPEFACE_TILT = 'Avenir Next Bold'
TYPEFACE_AGAIN = 'Avenir Next Bold'
WEIGHT_L1 = tkFont.NORMAL
WEIGHT_L2 = tkFont.BOLD
WEIGHT_HEADER = tkFont.BOLD
WEIGHT_SCORE = tkFont.NORMAL
WEIGHT_TILT = tkFont.BOLD
WEIGHT_AGAIN = tkFont.BOLD
FONT_SIZE_L1 = 56
FONT_SIZE_L2 = 56
FONT_SIZE_HEADER = 76
FONT_SIZE_SCORE = 168
FONT_SIZE_TILT = 114
FONT_SIZE_AGAIN = 76
LINE_HEIGHT_L1 = -5
LINE_HEIGHT_L2 = -5
LINE_HEIGHT_HEADER = -10
LINE_HEIGHT_SCORE = -20
LINE_HEIGHT_TILT = -20
LINE_HEIGHT_AGAIN = -1
TEXT_COLOR_HIGHLIGHT = '#FFFFFF'
TEXT_COLOR_BODY = '#92C73D'
# text ###########################################################
L1 = 'Try to beat your own score.'
L2 = 'The lowest score wins!'
HEADER_MESSAGE = 'Your Score:'
TILT_MESSAGE = 'Too loud!'
TRY_AGAIN = 'Start again!'
# audio collection configuration ##################################
BUTTON_PIN = 'P8_12'
DEVICE = 'hw:1' # hardware sound card index
CHANNELS = 2
SAMPLE_RATE = 44100 # Hz
PERIOD = 256 # Frames
FORMAT = aa.PCM_FORMAT_S16_LE # Sound format
NOISE_THRESHOLD = 0 # to eliminate small noises, scale of 0 - 7
TILT_THRESHOLD = 100.0 # upper limit of score before tilt state
SCALAR = 4680 # normalizes score, found by trial and error
UPDATE_TIME = 100 # ms
# start script ###################################################
class Display(object):
def __init__(self, parent, queue, stop_event):
self.parent = parent
self.queue = queue
self.stop_event = stop_event
self.tilt_event = threading.Event()
self._geom = '200x200+0+0'
parent.geometry("{0}x{1}+0+0".format(
parent.winfo_screenwidth(), parent.winfo_screenheight()))
parent.overrideredirect(1)
parent.title(TITLE)
parent.configure(background=BACKGROUND_COLOR)
self.create_text()
self.process_queue()
self.audio_thread = threading.Thread(target=self.setup_audio)
self.audio_thread.start()
def __delete__(self, instance):
instance.stop_event.set()
def create_text(self):
message_kwargs = dict(
bg=BACKGROUND_COLOR,
width=TEXTBOX_WIDTH,
justify='c',
)
self.message_L1 = tk.Message(
self.parent,
text=L1,
fg=TEXT_COLOR_HIGHLIGHT,
font=tkFont.Font(TYPEFACE_HEADER, FONT_SIZE_L1, WEIGHT_L1),
pady=LINE_HEIGHT_L1,
**message_kwargs)
self.message_L2 = tk.Message(
self.parent,
text=L2,
fg=TEXT_COLOR_HIGHLIGHT,
font=tkFont.Font(TYPEFACE_HEADER, FONT_SIZE_L2, WEIGHT_L2),
pady=LINE_HEIGHT_L2,
**message_kwargs)
self.message_score_header = tk.Message(
self.parent,
text=HEADER_MESSAGE,
fg=TEXT_COLOR_BODY,
font=tkFont.Font(TYPEFACE_HEADER, FONT_SIZE_HEADER, WEIGHT_HEADER),
pady=LINE_HEIGHT_HEADER,
**message_kwargs)
self.message_score = tk.Message(
self.parent,
text='0.0',
fg=TEXT_COLOR_HIGHLIGHT,
font=tkFont.Font(TYPEFACE_SCORE, FONT_SIZE_SCORE, WEIGHT_SCORE),
pady=LINE_HEIGHT_SCORE,
**message_kwargs)
self.message_L1.place(relx=.5, rely=V_ALIGN_L1, anchor='c')
self.message_L2.place(relx=.5, rely=V_ALIGN_L2, anchor='c')
self.message_score_header.place(relx=V_ALIGN_HEADER, rely=.5, anchor='c')
self.message_score.place(relx=.5, rely=V_ALIGN_SCORE, anchor='c')
def process_queue(self):
text = None
while not self.queue.empty():
text = self.queue.get_nowait()
if text:
self.message_score_header.configure(text=HEADER_MESSAGE)
self.message_score.configure(text=text)
elif self.tilt_event.is_set():
self.message_L1.configure(text="")
self.message_L2.configure(text="")
self.message_score_header.configure(
text=TILT_MESSAGE,
fg=TEXT_COLOR_HIGHLIGHT,
font=tkFont.Font(
TYPEFACE_TILT,
FONT_SIZE_TILT,
WEIGHT_TILT,
),
pady=LINE_HEIGHT_TILT)
self.message_score.configure(
text=TRY_AGAIN,
fg=TEXT_COLOR_BODY,
font=tkFont.Font(
TYPEFACE_AGAIN,
FONT_SIZE_AGAIN,
WEIGHT_AGAIN,
),
pady=LINE_HEIGHT_AGAIN)
self.message_score.place(relx=.75, rely=V_ALIGN_AGAIN, anchor='c')
self.parent.after(UPDATE_TIME, self.process_queue)
def setup_audio(self):
data_in = aa.PCM(aa.PCM_CAPTURE, aa.PCM_NONBLOCK, DEVICE)
data_in.setchannels(2)
data_in.setrate(SAMPLE_RATE)
data_in.setformat(FORMAT)
data_in.setperiodsize(PERIOD)
score = 0
running = False
while not self.stop_event.is_set():
# Sleep a very short time to prevent the thread from locking up
time.sleep(0.001)
if GPIO.input(BUTTON_PIN):
self.tilt_event.clear()
score = 0
if not running:
self.message_L1.configure(text=L1)
self.message_L2.configure(text=L2)
self.message_score_header.configure(
text=HEADER_MESSAGE,
fg=TEXT_COLOR_BODY,
font=tkFont.Font(
TYPEFACE_HEADER,
FONT_SIZE_HEADER,
WEIGHT_HEADER,
),
pady=LINE_HEIGHT_HEADER)
self.message_score.configure(
text='0.0',
fg=TEXT_COLOR_HIGHLIGHT,
font=tkFont.Font(
TYPEFACE_SCORE,
FONT_SIZE_SCORE,
WEIGHT_SCORE,
),
pady=LINE_HEIGHT_SCORE)
self.message_score.place(
relx=.5,
rely=V_ALIGN_SCORE,
anchor='c')
running = True
self.queue.put('0.0')
elif not running:
# Not running yet, keep waiting
continue
# Read data from device
l, data = data_in.read()
if l and not self.tilt_event.is_set():
# catch frame error
try:
max = audioop.max(data, CHANNELS)
scaled_max = max // SCALAR
if scaled_max <= NOISE_THRESHOLD:
# Too quiet, ignore
continue
score += scaled_max / 10.0
if score > TILT_THRESHOLD:
self.tilt_event.set()
running = False
else:
self.queue.put(str(score))
except audioop.error, e:
if e.message != "not a whole number of frames":
raise e
def main():
GPIO.setup(BUTTON_PIN, GPIO.IN)
stop_event = threading.Event()
window = None
try:
root = tk.Tk()
queue = Queue.Queue()
window = Display(root, queue, stop_event)
# Force the window to the foreground
root.attributes('-topmost', True)
if debug:
root.maxsize(1920, 1200)
root.mainloop()
finally:
stop_event.set()
if window:
window.audio_thread.join()
del window
if __name__ == '__main__':
main()
|
HTML: parameter in javascript function
Question: Can I pass a string parameter to a JS function from within HTML? Like this:
<form name="form3" action="mat.py" method="get" onsubmit="return validation(param1,param2)"/>
I should also say that I'm working in Python, so my code looks like this (there are
just two ' quotes, so I don't think they should cause problems):
print'<form name="form3" action="mat.py" method="get" onsubmit="return validation(param1,param2)"/>'
I included my JS in another file.
Thank you, Clément.
Answer: There are _three layers_ to this statement:
print'<form name="form3" action="mat.py" method="get" onsubmit="return validation(param1,param2)"/>'
There's
1. The Python layer, the whole HTML thing is one big string in `'` quotes.
2. The HTML layer, which is putting the attribute values in `"` quotes.
3. The JavaScript layer, where you don't currently have any quotes.
It's important to keep track of what's happening at each level.
So, you have a couple of options:
1. Outputting JavaScript code inside HTML attributes is why JavaScript originally had two kinds of quotes (it now has three), both `"` and `'`. So you can use `\` in your Python code to put `'` around the params:
print'<form name="form3" action="mat.py" method="get" onsubmit="return validation(\'param1\', \'param2\')"/>'
That outputs this:
<form name="form3" action="mat.py" method="get" onsubmit="return validation('param1', 'param2')"/>'
Note we're using `'` around the JavaScript strings, so as not to end the HTML
attribute early.
It also works if we use `'` for the HTML attribute quotes and `"` in the
JavaScript, since HTML also allows both kinds of quotes:
print'<form name=\'form3\' action=\'mat.py\' method=\'get\' onsubmit=\'return validation("param1", "param2")\'/>'
which outputs
<form name='form3' action='mat.py' method='get' onsubmit='return validation("param1", "param2")'/>
2. Remember that the content of HTML attributes **is HTML** , so you can use `"`, the named character entity for `"`:
print'<form name="form3" action="mat.py" method="get" onsubmit="return validation("param1", "param2")"/>'
which outputs
<form name="form3" action="mat.py" method="get" onsubmit="return validation("param1", "param2")"/>
Not very readable though. :-)
But your best option is _don't use `onxyz` attribute handlers at all_; use
modern techniques (such as `addEventListener`) for hooking up events.
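As a small Python-side illustration of option 1 (a sketch; `param1`/`param2` here are hypothetical values substituted at print time, not part of the original answer):
param1, param2 = "foo", "bar"   # hypothetical values
print('<form name="form3" action="mat.py" method="get" '
      'onsubmit="return validation(\'%s\', \'%s\')"/>' % (param1, param2))
which outputs the same double-quoted HTML attribute with single-quoted JavaScript strings as above.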
|
Slicing a String after certain key words are mentioned into a list
Question: I am new to Python and I am stuck on a problem. What I'm trying to do: I
have a string containing a conversation between two people:
str = " dylankid: *random words* senpai: *random words* dylankid: *random words* senpai: *random words*"
I want to create 2 lists from the string using dylankid and senpai as names :
dylankid = [ ]
senpai = [ ]
and here is where I am struggling: inside the dylankid list I want to place all
the words that come after 'dylankid' in the string but before the next
'dylankid' or 'senpai'; the same goes for the senpai list, so it would look something
like this
dylankid = ["random words", "random words", "random words"]
senpai = ["random words", "random words", "random words"]
dylankid containing all the messages from dylankid and vice versa.
I have looked into slicing it and using `split()` and `re.compile()`, but I
can't figure out a way to specify where to start slicing and where to stop.
Hopefully it was clear enough, any help would be appreciated :)
Answer: The following code will create a dict where the keys are the persons and the values
are lists of messages:
from collections import defaultdict
import re
PATTERN = '''
\s* # Any amount of space
(dylankid|senpai) # Capture person
:\s # Colon and single space
(.*?) # Capture everything, non-greedy
(?=\sdylankid:|\ssenpai:|$) # Until we find following person or end of string
'''
s = " dylankid: *random words* senpai: *random words* dylankid: *random words* senpai: *random words*"
res = defaultdict(list)
for person, message in re.findall(PATTERN, s, re.VERBOSE):
    res[person].append(message)
print res['dylankid']
print res['senpai']
It will produce the following output:
['*random words*', '*random words*']
['*random words*', '*random words*']
|
How to copy a cropped image onto the original one, given the coordinates of the center of the crop
Question: I'm cropping an image like this:
self.rst = self.img_color[self.param_a_y:self.param_b_y,
self.param_a_x:self.param_b_x:, ]
How do I copy this image back into the original one? The data I have available
are the coordinates in the original image, which give the center of the crop.
It seems there's no `copy_to()` function for Python.
Answer: I failed to get copy_to() working myself a few days ago, but came up with a
different solution: you can use masks for this task.
I have an example at hand which shows how to create a mask from a defined
colour range using inRange. With that mask, you create two partial images
(=masks), one for the old content and one for the new content; the unused
area in both images is black. Finally, a simple bitwise_or combines both
images.
This works for arbitrary shapes, so you can easily adapt this to rectangular
ROIs.
import cv2
import numpy as np
img = cv2.imread('image.png')
rows,cols,bands = img.shape
print rows,cols,bands
# Create image with new colour for replacement
new_colour_image= np.zeros((rows,cols,3), np.uint8)
new_colour_image[:,:]= (255,0,0)
# Define range of color to be exchanged (in this case only one single color, but could be range of colours)
lower_limit = np.array([0,0,0])
upper_limit = np.array([0,0,0])
# Generate mask for the pixels to be exchanged
new_colour_mask = cv2.inRange(img, lower_limit, upper_limit)
# Generate mask for the pixels to be kept
old_image_mask=cv2.bitwise_not(new_colour_mask)
# Part of the image which is kept
img2 = cv2.bitwise_and(img, img, mask=old_image_mask)
# Part of the image which is replaced
new_colour_image = cv2.bitwise_and(new_colour_image, new_colour_image, mask=new_colour_mask)
#Combination of the two parts
result=cv2.bitwise_or(img2, new_colour_image)
cv2.imshow('image',img)
cv2.imshow('mask',new_colour_mask)
cv2.imshow('r',result)
cv2.waitKey(0)
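For the rectangular crop in the question, plain NumPy slice assignment is also enough to paste the patch back; a minimal sketch (the coordinate names mirror the question and the values are made up):
import numpy as np
img = np.zeros((100, 100, 3), np.uint8)          # stands in for self.img_color
a_y, b_y, a_x, b_x = 10, 60, 20, 80              # hypothetical crop bounds
crop = img[a_y:b_y, a_x:b_x].copy()              # same slicing as in the question
# ... modify crop here ...
img[a_y:b_y, a_x:b_x] = crop                     # copy the crop back into the original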
|
Getting "TypeError: unsupported operand type(s) for -: 'list' and 'list'"
Question: Hi I know there are a few people that had this issue but none of the solutions
I've seen are helping. I'm taking a set of data, reading the file then
creating arrays from the data to input into this equation: `Dist = 10 **
((app_m - abs_M + 5.) /5)` where app_m and abs_M are the arrays from the
data.
I'm using Python 2.7 and only just learning so if things can be explained as
simply as possible that would be great
Answer: You cannot use `list` - `list`.
You can just change your code like this:
import numpy as np
Dist = 10 ** ((np.array(app_m) - np.array(abs_M) + 5.) /5)
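For example, a quick check with made-up numbers (app_m and abs_M stand in for the arrays read from the file):
import numpy as np
app_m = [10.2, 11.5, 9.8]    # hypothetical apparent magnitudes
abs_M = [2.0, 1.5, 3.0]      # hypothetical absolute magnitudes
Dist = 10 ** ((np.array(app_m) - np.array(abs_M) + 5.) / 5)
print(Dist)                  # element-wise distances, no TypeError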
|
Python list comparing characters and counting them
Question: I have a little question about how to check and compare two or more characters
in the list in Python.
For example, I have the string "cdcdccddd". I made a list from this string to make
comparing the characters easier. The needed output is: c: 1, d: 1, c: 1, d: 1,
c: 2, d: 3. So it is counting the characters: if the first is not the same as the
second, the counter is 1; if the second is the same as the third, the counter goes
up by 1 and the third needs to be checked against the fourth, and so on.
I got so far this algorithm:
text = "cdcdccddd"
l = []
l = list(text)
print list(text)
for n in range(0,len(l)):
    le = len(l[n])
    if l[n] == l[n+1]:
        le += 1
        if l[n+1] == l[n+2]:
            le += 1
        print l[n], ':' , le
    else:
        print l[n], ':', le
but it's not working correctly, because it counts the first and second element, but
not the second and third. It gives this output:
c : 1
d : 1
c : 1
d : 1
c : 2
c : 1
d : 3
How to make this algorithm better?
Thank you!
Answer: You can use
[itertools.groupby](https://docs.python.org/2/library/itertools.html#itertools.groupby):
from itertools import groupby
s = "cdcdccddd"
print([(k, sum(1 for _ in v)) for k,v in groupby(s)])
[('c', 1), ('d', 1), ('c', 1), ('d', 1), ('c', 2), ('d', 3)]
Consecutive chars will be grouped together, so each `k` is the char of that
group, calling `sum(1 for _ in v)` gives us the length of each group so we end
up with `(char, len(group))` pairs.
If we run it in ipython and call list on each v it should be really clear what
is happening:
In [3]: from itertools import groupby
In [4]: s = "cdcdccddd"
In [5]: [(k, list(v)) for k,v in groupby(s)]
Out[5]:
[('c', ['c']),
('d', ['d']),
('c', ['c']),
('d', ['d']),
('c', ['c', 'c']),
('d', ['d', 'd', 'd'])]
We can also roll our own pretty easily:
def my_groupby(s):
    # create an iterator
    it = iter(s)
    # set consec_count to one and pull first char from s
    consec_count, prev = 1, next(it)
    # iterate over the rest of the string
    for ele in it:
        # if last and current char are different
        # yield previous char, consec_count and reset
        if prev != ele:
            yield prev, consec_count
            consec_count = 0
            prev = ele
        consec_count += 1
    yield ele, consec_count
Which gives us the same:
In [8]: list(my_groupby(s))
Out[8]: [('c', 1), ('d', 1), ('c', 1), ('d', 1), ('c', 2), ('d', 3)]
|
Bigger color-palette in matplotlib for SciPy's dendrogram (Python)
Question: I'm trying to **expand** my `color_palette` in either `matplotlib` or
`seaborn` for use in `scipy`'s **dendrogram** so it colors each cluster
differently.
Currently, the `color_palette` only has a few colors so multiple clusters are
getting mapped to the same color. I know there's like 16 million `RGB` colors,
so...
**How can I use more colors from that huge palette in this type of figure?**
#!/usr/bin/python
from __future__ import print_function
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import colorsys
from scipy.cluster.hierarchy import dendrogram,linkage,fcluster
from scipy.spatial import distance
np.random.seed(0) #43984
#Dims
n,m = 10,1000
#DataFrame: rows = Samples, cols = Attributes
attributes = ["a" + str(j) for j in range(m)]
DF_data = pd.DataFrame(np.random.randn(n, m),#
columns = attributes)
A_dist = distance.cdist(DF_data.as_matrix().T, DF_data.as_matrix().T)
DF_dist = pd.DataFrame(A_dist, index = attributes, columns = attributes)
#Linkage Matrix
Z = linkage(squareform(DF_dist.as_matrix()),method="average") #metric="euclidead" necessary since the input is a dissimilarity measure?
#Create dendrogram
D_dendro = dendrogram(
Z,
labels=DF_dist.index,
no_plot=True,
color_threshold=3.5,
count_sort = "ascending",
#link_color_func=lambda k: colors[k]
)
#Display dendrogram
def plotTree(D_dendro):
    fig,ax = plt.subplots(figsize=(25, 10))
    icoord = np.array( D_dendro['icoord'] )
    dcoord = np.array( D_dendro['dcoord'] )
    color_list = np.array( D_dendro['color_list'] )
    x_min, x_max = icoord.min(), icoord.max()
    y_min, y_max = dcoord.min(), dcoord.max()
    for xs, ys, color in zip(icoord, dcoord, color_list):
        plt.plot(xs, ys, color)
    plt.xlim( x_min-10, x_max + 0.1*abs(x_max) )
    plt.ylim( y_min, y_max + 0.1*abs(y_max) )
    plt.title("Dendrogram", fontsize=30)
    plt.xlabel("Clusters", fontsize=25)
    plt.ylabel("Distance", fontsize=25)
    plt.yticks(fontsize = 20)
    plt.show()
    return(fig,ax)
fig,ax = plotTree(D_dendro) #wrapper I made
#Dims
print(
len(set(D_dendro["color_list"])), "^ # of colors from dendrogram",
len(D_dendro["ivl"]), "^ # of labels",sep="\n")
# 7
# ^ # of colors from dendrogram
# 1000
# ^ # of labels
[](http://i.stack.imgur.com/wwJax.png)
Answer: Most matplotlib colormaps will give you a color given a value between 0 and 1.
For example,
import matplotlib.pyplot as plt
import numpy as np
print [plt.cm.Greens(i) for i in np.linspace(0, 1, 5)]
will print
[(0.9686274528503418, 0.98823529481887817, 0.96078431606292725, 1.0),
(0.77922338878407194, 0.91323337695177864, 0.75180316742728737, 1.0),
(0.45176470875740049, 0.76708959481295413, 0.46120723030146432, 1.0),
(0.13402538141783546, 0.54232989970375511, 0.26828144368003398, 1.0),
(0.0, 0.26666668057441711, 0.10588235408067703, 1.0)]
So you no longer need to be restricted to values provided to you. Just choose
a colormap, and get a color from that colormap depending upon some fraction.
For example, in your code, you could consider,
for xs, ys in zip(icoord, dcoord):
    color = plt.cm.Spectral( ys/6.0 )
    plt.plot(xs, ys, color)
or something to that effect. I am unsure how exactly you want to display your
colors, but I am sure you can modify your code very easily for achieving any
color combinations you want ...
Another thing you can try is
N = len(D_dendro["color_list"])
colorList = [ plt.cm.Spectral( float(i)/(N-1) ) for i in range(N)]
and pass on that `colorList`.
Play around a bit ...
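Another option, as a sketch (not part of the original answer): SciPy's `set_link_color_palette` accepts a list of matplotlib colour strings, so a palette of any size built from a colormap can be handed to `dendrogram` directly:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import rgb2hex
from scipy.cluster.hierarchy import dendrogram, linkage, set_link_color_palette
X = np.random.randn(50, 4)                      # toy data
Z = linkage(X, method="average")
n_colors = 20                                   # as many colours as clusters expected
palette = [rgb2hex(plt.cm.Spectral(float(i) / (n_colors - 1))[:3]) for i in range(n_colors)]
set_link_color_palette(palette)
D_dendro = dendrogram(Z, color_threshold=0.5 * Z[:, 2].max(), no_plot=True)
print(len(set(D_dendro["color_list"])))         # now more than the default handful of colours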
|
Module ImportError using PySpark
Question: I have a pyspark job (spark 1.6.1, python 2.7). The basic structure is:
spark_jobs/
__init__.py
luigi_workflow.py
my_pyspark/
__init__.py
__main__.py
spark_job.py
stuff/
__init__.py
module1.py
module2.py
analytics/
__init__.py
analytics.py
In my `spark_job.py` I have:
from dir1.module1 import func1
from dir1.module2 import func2
from analytics.analytics import some_analytics_func
...
func1(...)
func2(...)
some_analytics_func(...)
...
When I launch the spark job, `func1` and `func2` execute perfectly, but then I
get:
`ImportError: No module named analytics.analytics`
This has been driving me absolutely insane. Any help would be appreciated.
Note: I'm launching with a wrapper around `spark-submit` and designating the
path with `python -m spark_jobs.my_pyspark`
Answer: I don't understand where `dir1` is coming from? Shouldn't it be `from
my_pyspark.stuff.module1 import func1`? Have you tried this before `from
my_pyspark.analytics.analytics import some_analytics_func`? Since you are
using Luigi, you can also try to build the package through
[setup.py](https://docs.python.org/2/install/).
Hope this helps! I had this problem before, but it can be solved.
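A sketch of the `setup.py` route mentioned above (the metadata values are placeholders, not from the question):
from setuptools import setup, find_packages
setup(
    name="my_pyspark",
    version="0.1",
    packages=find_packages(),   # picks up the sub-packages via their __init__.py files
)
Installing that package (or zipping it and passing the archive to `spark-submit --py-files`) makes absolute imports like `from my_pyspark.analytics.analytics import some_analytics_func` resolvable on the executors as well.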
|
Converting python code to cython
Question: I have a python program that uses OpenCV. The program runs as expected as it
is at the moment. Now I would like to use Cython to compile my python code to
C code. I am doing this instead of re-writing the entire program in C because
I would still like other python programs to be able to `import my_program`.
I have never used Cython before but have just read a few blog posts about it.
Can someone please tell me what I should be prepared for and how much of an
uphill task it would be. My current Python program is ~200 LoC.
Answer: Based on your comments you're looking to run your existing code "as is" to
avoid providing the source, rather than make any significant changes to use
Cython-specific features. With that in mind I'd expect it to just work without
any major effort. One easy alternative to consider would be to just provide
pyc bytecode files.
A list of minor gotchas that I know of (in rough order of importance) follows.
A few others are listed [in the
documentation](http://docs.cython.org/src/userguide/limitations.html). Most of
these are fairly minor so you'd be unlucky to meet them.
1. You will likely have to recompile your module for every platform, 32bit and 64bit, every (major, e.g. 3.4, 3.5) version of Python used, and possibly on Windows with multiple different compilers.
2. You can't use `__file__` at the module level. This is sometimes becomes an issue when trying to find the path of static resources stored in the same place as your code.
3. A few things that try to do clever things by inspecting the stack (to see what variables are defined in the functions that called them) break, for example [some of sympy](http://stackoverflow.com/questions/36191146/lambdify-works-with-python-but-throws-an-exception-with-cython/36199057) and possibly some shortcuts to string formatting ([see for example](http://stackoverflow.com/questions/13312240/is-a-string-formatter-that-pulls-variables-from-its-calling-scope-bad-practice) for some recipes that might use this idea)
4. Anything that looks at the bytecode of functions (since it isn't generated by Cython). Numba is probably the most commonly used example in numerical python, but I know of at least one (unmaintained) [MATLAB/Python wrapper](http://ompc.juricap.com/) that inspects the bytecode of the calling function to try to work out the number of arguments being returned.
5. [You must have an `__init__.py` file to make a folder into a module - it won't recognise a compiled `__init__.so` file on its own](http://stackoverflow.com/questions/28261147/cython-package-with-init-pyx-possible/32067984#32067984).
6. [String concatenation can go through a fast path in Python that Cython doesn't manage](http://stackoverflow.com/questions/35787022/cython-string-concatenation-is-super-slow-what-else-does-it-do-poorly). You should not being doing this too much in your code anyway, but you may see large performance differences if you rely on it.
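As a minimal sketch of the compile step itself (assuming the module is called `my_program.py`; not specific to OpenCV):
from distutils.core import setup
from Cython.Build import cythonize
setup(ext_modules=cythonize("my_program.py"))
Build it with `python setup.py build_ext --inplace`; other Python code can still `import my_program`, now from the compiled extension module.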
|
Python 2.7.1 import MySQLdb working via cmd but no in a .py file
Question: Okay, I have Python 2.7.1 installed on a 32-bit Windows machine.
The problem is, when I try to import the MySQLdb module via Python in cmd,
Python recognizes the module fine,
but when I try the same script in a Python file I get: ImportError: No module
named 'MySQLdb'
Has anyone faced the same issue?
Just to make it clearer, via cmd:
C:\Users\Desktop>python
Python 2.7.11 (v2.7.11:6d1b6a68f775, Dec 5 2015, 20:32:19) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import MySQLdb
Via a .py file:
C:\Users\Desktop>TesteDB12.py
Traceback (most recent call last):
File "C:\Users\fs0222\Desktop\TesteDB12.py", line 1, in <module>
import MySQLdb
ImportError: No module named 'MySQLdb'
By the way, I had Python 3.5 on my machine before, but I already changed my
environment variables to fit Python 2.7, flushed the cache, restarted my
machine, and tried the Python MySQLdb installers (.exe, .msi), and still the same
problem.
A couple of months ago I was able to use this module normally.
Any help? Thanks a lot...
Answer: Specify the path of the Python interpreter (python.exe) in the first line of your
Python code. It may be as follows:
#!C:\Python26\python.exe
import MySQLdb
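A quick diagnostic sketch (not from the original answer): put this at the top of the failing .py file to see which interpreter the Windows file association actually launches, since it may still point at the old 3.5 install:
import sys
print sys.executable   # which python.exe is actually running this file
print sys.version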
|
How to `pip install` a package that has Git dependencies?
Question: I have a private library called `some-library` _(actual names have been
changed)_ with a setup file looking somewhat like this:
setup(
name='some-library',
# Omitted some less important stuff here...
install_requires=[
'some-git-dependency',
'another-git-dependency',
],
dependency_links=[
'git+ssh://[email protected]/my-organization/some-git-dependency.git#egg=some-git-dependency',
'git+ssh://[email protected]/my-organization/another-git-dependency.git#egg=another-git-dependency',
],
)
All of these Git dependencies _may_ be private, so [installation via
HTTP](http://stackoverflow.com/a/14928126/4101697) is not an option. I can use
`python setup.py install` and `python setup.py develop` in `some-library`'s
root directory without problems.
However, installing over Git doesn't work:
pip install -vvv -e 'git+ssh://[email protected]/my-organization/[email protected]#egg=some-library'
The command fails when it looks for `some-git-dependency`, mistakenly assumes
it needs to get the dependency from PyPI and then fails after concluding it's
not on PyPI. My first guess was to try re-running the command with
`--process-dependency-links`, but then this happened:
Cannot look at git URL git+ssh://[email protected]/my-organization/some-git-dependency.git#egg=some-git-dependency
Could not find a version that satisfies the requirement some-git-dependency (from some-library) (from versions: )
Why is it producing this vague error? What's the proper way to `pip install` a
package with Git dependencies that might be private?
Answer: This should work for private repositories as well:
dependency_links = [
'git+ssh://[email protected]/my-organization/some-git-dependency.git@master#egg=some-git-dependency',
'git+ssh://[email protected]/my-organization/another-git-dependency.git@master#egg=another-git-dependency'
],
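As an alternative sketch (not part of the original answer), the private VCS dependencies can also be listed in a requirements file and installed before the package itself, which avoids `dependency_links` entirely:
git+ssh://[email protected]/my-organization/some-git-dependency.git@master#egg=some-git-dependency
git+ssh://[email protected]/my-organization/another-git-dependency.git@master#egg=another-git-dependency
followed by `pip install -r requirements.txt` and then installing `some-library` itself.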
|
python treeview column "stretch=False" not working
Question: I want to disable column resizing, but "stretch = False" is not working and I don't
know why. My Python version is 3.4.3.
from tkinter import *
from tkinter import ttk
def main():
    gMaster = Tk()
    w = ttk.Treeview(gMaster, show="headings", columns=('Column1', 'Column2'))
    w.heading('#1', text='Column1', anchor=W)
    w.heading('#2', text='Column2', anchor=W)
    w.column('#1', minwidth = 70, width = 70, stretch = False)
    w.column('#2', minwidth = 70, width = 70, stretch = False)
    w.grid(row = 0, column = 0)
    mainloop()
if __name__ == "__main__":
    main()
Answer: Try adding this before mainloop():
gMaster.resizable(0,0)
You don't need stretch = False
|
Convert escaped utf-8 string to utf in python 3
Question: I have a py3 string that includes escaped utf-8 sequences, such as
"Company\\\ffffffc2\\\ffffffae", which I would like to convert to the correct
utf-8 string (which in this example would be "Company®", since the escaped
sequence is c2 ae). I've tried
print (bytes("Company\\\\ffffffc2\\\\ffffffae".replace(
"\\\\ffffff", "\\x"), "ascii").decode("utf-8"))
result: Company\xc2\xae
print (bytes("Company\\\\ffffffc2\\\\ffffffae".replace (
"\\\\ffffff", "\\x"), "ascii").decode("unicode_escape"))
result: Company®
(wrong, since the characters are treated separately, but they should be treated
together).
If I do
print (b"Company\xc2\xae".decode("utf-8"))
It gives the correct result. Company®
How can I achieve that programmatically (i.e. starting from a py3 str)?
Answer: A simple solution is:
import ast
test_in = "Company\\\\ffffffc2\\\\ffffffae"
test_out = ast.literal_eval("b'''" + test_in.replace('\\\\ffffff','\\x') + "'''").decode('utf-8')
print(test_out)
However it will fail if there is a triple quote `'''` in the input string
itself.
* * *
Following code does not have this problem, but it is not as simple as the
first one.
In the first step the string is split on a regular expression. The odd items
are ascii parts, e.g. `"Company"`; each even item corresponds to one escaped
utf8 code, e.g. `"\\\\ffffffc2"`. Each substring is converted to bytes
according to its meaning in the input string. Finally all parts are joined
together and decoded from bytes to a string.
import re
REGEXP = re.compile(r'(\\\\ffffff[0-9a-f]{2})', flags=re.I)
def convert(estr):
    def split(estr):
        for i, substr in enumerate(REGEXP.split(estr)):
            if i % 2:
                yield bytes.fromhex(substr[-2:])
            elif substr:
                yield bytes(substr, 'ascii')
    return b''.join(split(estr)).decode('utf-8')
test_in = "Company\\\\ffffffc2\\\\ffffffae"
print(convert(test_in))
The code could be optimized. Ascii parts do not need encode/decode and
consecutive hex codes should be concatenated.
|
Which library to import in Python to read data from an Excel file, for automation testing using Selenium?
Question: Which library should I import in Python to read data from an Excel file? I want to
store different `xpaths` in an Excel file for automation testing using Selenium.
Answer: The [xlrd](https://pypi.python.org/pypi/xlrd) library is what you are looking
for to read excel files. And to write, you can use
[xlwt](https://pypi.python.org/pypi/xlwt).
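A minimal `xlrd` sketch (the file name `xpaths.xls` and the assumption that the locators sit in column A of the first sheet are hypothetical):
import xlrd
book = xlrd.open_workbook("xpaths.xls")
sheet = book.sheet_by_index(0)
xpaths = [sheet.cell_value(row, 0) for row in range(sheet.nrows)]
print(xpaths)   # list of xpath strings ready to hand to Selenium locators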
|
prevent the sub windows to open multiple times
Question: I am creating an application using wxPython. I have a simple
problem for which I can't really find the solution on the internet.
I have a main user interface with a menubar which contains a menu item called "new
file". By clicking "new file", a new window appears asking the user to
fill in the necessary information.
The problem is that by clicking the menu item ("new file") multiple times, the
application opens multiple windows.
How can I prevent this?
Answer: The following code creates a new sub frame if one doesn't exist already. If
it does exist already, it uses the existing sub frame.
Note the code is tested with the latest wxPython Phoenix and Classic.
import wx
from wx.lib import sized_controls
class MultiMessageFrame(sized_controls.SizedFrame):
    def __init__(self, *args, **kwargs):
        super(MultiMessageFrame, self).__init__(*args, **kwargs)
        pane = self.GetContentsPane()
        text_ctrl = wx.TextCtrl(
            pane, style=wx.TE_READONLY | wx.TE_CENTRE | wx.TE_MULTILINE)
        text_ctrl.SetSizerProps(proportion=1, expand=True)
        text_ctrl.SetBackgroundColour('White')
        self.text_ctrl = text_ctrl
        pane_btns = sized_controls.SizedPanel(pane)
        pane_btns.SetSizerType('horizontal')
        pane_btns.SetSizerProps(align='center')
        button_ok = wx.Button(pane_btns, wx.ID_OK)
        button_ok.Bind(wx.EVT_BUTTON, self.on_button_ok)
    def append_msg(self, title_text, msg_text):
        self.SetTitle(title_text)
        self.text_ctrl.AppendText(msg_text)
    def on_button_ok(self, event):
        self.Close()
class MainFrame(sized_controls.SizedFrame):
    def __init__(self, *args, **kwargs):
        super(MainFrame, self).__init__(*args, **kwargs)
        self.SetInitialSize((800, 600))
        self.CreateStatusBar()
        menubar = wx.MenuBar()
        self.SetMenuBar(menubar)
        menu_file = wx.Menu()
        menu_file.Append(
            wx.ID_NEW, 'Show msg', 'Add a new message to message frame')
        menubar.Append(menu_file, '&File')
        self.Bind(wx.EVT_MENU, self.on_new, id=wx.ID_NEW)
        self.count = 1
        self.multi_message_frame = None
    def on_new(self, event):
        title_text = 'MultiMessageFrame already exists'
        if not self.multi_message_frame:
            title_text = 'Newly created MultiMessageFrame'
            self.multi_message_frame = MultiMessageFrame(
                self, style=wx.DEFAULT_FRAME_STYLE | wx.FRAME_FLOAT_ON_PARENT)
            self.multi_message_frame.Bind(
                wx.EVT_CLOSE, self.on_multi_message_frame_close)
            self.multi_message_frame.Center()
            self.multi_message_frame.Show()
        self.multi_message_frame.append_msg(
            title_text, 'message no.{}\n'.format(self.count))
        self.count += 1
    def on_multi_message_frame_close(self, event):
        self.multi_message_frame = None
        event.Skip()
if __name__ == '__main__':
    app = wx.App(False)
    main_frame = MainFrame(None)
    main_frame.Show()
    app.MainLoop()
|
Python: issue with building mock function
Question: I'm writing unit tests to validate my project's functionality. I need to
replace some of the functions with mock functions and I thought I would use the
Python mock library. The implementation I used doesn't seem to work properly,
though, and I don't understand what I'm doing wrong. Here is a simplified
scenario:
_root/connector.py_
from ftp_utils.py import *
def main():
    config = yaml.safe_load("vendor_sftp.yaml")
    downloaded_files = []
    downloaded_files = get_files(config)
    for f in downloaded_files:
        #do something
_root/utils/ftp_utils.py_
import os
import sys
import pysftp
def get_files(config):
    sftp = pysftp.Connection(config['host'], username=config['username'])
    sftp.chdir(config['remote_dir'])
    down_files = sftp.listdir()
    if down_files is not None:
        for f in down_files:
            sftp.get(f, os.path.join(config['local_dir'], f), preserve_mtime=True)
    return down_files
_root/tests/connector_tester.py_
import unittest
import mock
import ftp_utils
import connector
def get_mock_files():
    return ['digital_spend.csv', 'tv_spend.csv']
class ConnectorTester(unittest.TestCase):
    @mock.patch('ftp_utils.get_files', side_effect=get_mock_files)
    def test_main_process(self, get_mock_files_function):
        # I want to use a mock version of the get_files function
        connector.main()
When I debug my test I expect that the get_files function called inside the
main of connector.py is get_mock_files(), but instead it is
ftp_utils.get_files(). What am I doing wrong here? What should I change in my
code to properly call the get_mock_files() mock?
Thanks, Alessio
Answer: I think there are several problems with your scenario:
* `connector.py` cannot import from `ftp_utils.py` that way
* nor can `connector_tester.py`
* as a habit, it is better to have your testing files under the form `test_xxx.py`
* to use `unittest` with patching, see [this example](http://www.voidspace.org.uk/python/mock/patch.html#patch-methods-start-and-stop)
In general, try to provide working minimal examples so that it is easier for
everyone to run your code.
I modified your example rather heavily to make it work, but basically, the
problem is that you patch `'ftp_utils.get_files'` while it is not the
reference that is actually called inside `connector.main()`; what is called
is rather `'connector.get_files'`.
Here is the modified example's directory:
test_connector.py
ftp_utils.py
connector.py
test_connector.py:
import unittest
import sys
import mock
import connector
def get_mock_files(*args, **kwargs):
    return ['digital_spend.csv', 'tv_spend.csv']
class ConnectorTester(unittest.TestCase):
    def setUp(self):
        self.patcher = mock.patch('connector.get_files', side_effect=get_mock_files)
        self.patcher.start()
    def test_main_process(self):
        # I want to use a mock version of the get_files function
        connector.main()
suite = unittest.TestLoader().loadTestsFromTestCase(ConnectorTester)
if __name__ == "__main__":
    unittest.main()
**NB:** what is called when running `connector.main()` is
`'connector.get_files'`
connector.py:
from ftp_utils import *
def main():
    config = None
    downloaded_files = []
    downloaded_files = get_files(config)
    for f in downloaded_files:
        print(f)
connector/ftp_utils.py unchanged.
|
How to upload a picture to woocommerce with python/django POST request
Question: I have created a WooCommerce web page and I am trying to use Django/Python
synchronized with my page. From the documentation [woocommerce post
request](https://woothemes.github.io/woocommerce-rest-api-docs/?python#create-a-product):
data = {
"product": {
"title": "Sample of Title through POST",
"type": "simple",
"regular_price": "21.99",
"description": "Long description from Post Request",
"short_description": "Short description from Post Request",
"categories": [
9,
14
],
"images": [
{
"src": "http://example.com/wp-content/uploads/2015/01/premium-quality-front.jpg",
"position": 0
},
{
"src": "http://example.com/wp-content/uploads/2015/01/premium-quality-back.jpg",
"position": 1
}
]
}
}
print (wcapi.post("products", data).json())
I am using the [Python wrapper for the WooCommerce REST
API](https://pypi.python.org/pypi/WooCommerce) and it seems to be working with
the GET requests, but I cannot find a way to make it work with the POST request.
I constantly get this error:
TypeError: <open file 'test.jpg', mode 'rb' at 0x104b4ced0> is not JSON serializable
I have been searching all over the web for possible solutions but I cannot
find one. Does anyone know the correct way to upload an image
from the local directory to the web page? I have tried to reformat the path
from an absolute path to a URL path, but it did not work.
Complete code:
import pprint
import urllib
import os.path
import urlparse
from woocommerce import API
def path2url(path):
    return urlparse.urljoin(
        'file:', urllib.pathname2url(path))
wcapi = API(
    url='',              # Your store URL
    consumer_key='',     # Your consumer key
    consumer_secret='',  # Your consumer secret
    version='v3'         # WooCommerce API version
)
# Get request
pprint.pprint(wcapi.get("products").json())
# Post request
data = {
"product": {
"title": "Sample of Title through POST",
"type": "simple",
"regular_price": "21.99",
"description": "Long description from Post Request",
"short_description": "Short description from Post Request",
"categories": [
9,
14
],
"images": [
{
"src": open('test.jpg', 'rb'),
"position": 0
},
{
"src": open('test.jpg', 'rb'),
"position": 1
}
]
}
}
print (wcapi.post("products", data).json())
**Update:** I have tried to use the exact path for the images from my localhost,
e.g. `http://localhost:8888/wordpress/wp-content/uploads/2016/04/test.jpg`,
where in the browser it works fine, I can see the picture. When I use this
path in the post request, it produces the same error. I also tried to use a
relative path, e.g. `file:///Users/tinyOS/Sites/wordpress/wp-content/uploads/2016/04/test.jpg`,
still the same error code.
Answer: So I managed to find the solution to my problem. Just in case someone else in
the future might need it:
In order to be able to upload a picture to WooCommerce you need to have a
valid URL path (e.g. `http://localhost:8888/wordpress/wp-content/uploads/2016/04/test.jpg`).
To get that URL, you first need to upload the file to WordPress,
and then, as a second step, retrieve the path and add it to the
secondary POST request with all the data of the product that you want to post.
The tool for Python is [python-wordpress-xmlrpc](https://python-wordpress-xmlrpc.readthedocs.org/en/latest/examples/media.html). I also found the manual,
which contains more analytical examples that I found more useful than just
the documentation: [python-wordpress-xmlrpc, Documentation, Release
2.3](https://media.readthedocs.org/pdf/python-wordpress-xmlrpc/latest/python-wordpress-xmlrpc.pdf).
The example below demonstrates the process to upload the image. The code is
taken from the manual:
from wordpress_xmlrpc import Client, WordPressPost
from wordpress_xmlrpc.compat import xmlrpc_client
from wordpress_xmlrpc.methods import media, posts
client = Client('http://mysite.wordpress.com/xmlrpc.php', 'username', 'password')
# set to the path to your file
filename = '/path/to/my/picture.jpg'
# prepare metadata
data = {
'name': 'picture.jpg',
'type': 'image/jpeg', # mimetype
}
# read the binary file and let the XMLRPC library encode it into base64
with open(filename, 'rb') as img:
    data['bits'] = xmlrpc_client.Binary(img.read())
response = client.call(media.UploadFile(data))
# response == {
# 'id': 6,
# 'file': 'picture.jpg'
# 'url': 'http://www.example.com/wp-content/uploads/2012/04/16/picture.jpg',
# 'type': 'image/jpeg',
# }
attachment_id = response['id']
As a second step you can create a function that posts all the information to
your WooCommerce store. Sample code taken from [Create a Product,
WooCommerce 2.1, the REST API](https://woothemes.github.io/woocommerce-rest-api-docs/?python#create-a-product). You simply need to create a dictionary
with all the data:
data = {
"product": {
"title": "Premium Quality",
"type": "simple",
"regular_price": "21.99",
"description": "Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Vestibulum tortor quam, feugiat vitae, ultricies eget, tempor sit amet, ante. Donec eu libero sit amet quam egestas semper. Aenean ultricies mi vitae est. Mauris placerat eleifend leo.",
"short_description": "Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas.",
"categories": [
9,
14
],
"images": [
{
"src": "http://example.com/wp-content/uploads/2015/01/premium-quality-front.jpg",
"position": 0
},
{
"src": "http://example.com/wp-content/uploads/2015/01/premium-quality-back.jpg",
"position": 1
}
]
}
}
print(wcapi.post("products", data).json())
The `src:` needs to be replaced with the URL retrieved from the upload request,
and voilà. Very simple if you know which tools to use, complicated if you do
not.
I hope this helps.
|
Bind to pgcrypto from python
Question: I'd like to call some pgcrypto functions from Python, namely
[px_crypt](http://doxygen.postgresql.org/px-crypt_8c.html#a6e88d87094f37fecc56c0abfb42d1fc3). I can't seem to figure out
the right object files to link against.
Here's my code:
#include <Python.h>
#include "postgres.h"
#include "pgcrypto/px-crypt.h"
static PyObject*
pgcrypt(PyObject* self, PyObject* args)
{
    const char* key;
    const char* setting;
    if (!PyArg_ParseTuple(args, "ss", &key, &setting))
        return NULL;
    return Py_BuildValue("s", px_crypt(key, setting, "", 0));
}
static PyMethodDef PgCryptMethods[] =
{
    {"pgcrypt", pgcrypt, METH_VARARGS, "Call pgcrypto's crypt"},
    {NULL, NULL, 0, NULL}
};
PyMODINIT_FUNC
initpypgcrypto(void)
{
    (void) Py_InitModule("pypgcrypto", PgCryptMethods);
}
and gcc commands and output:
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/home/ionut/github/postgres/contrib/ -I/usr/include/postgresql/9.4/server/ -I/usr/include/python2.7 -c pypgcrypto.c -o build/temp.linux-x86_64-2.7/pypgcrypto.o
x86_64-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-z,relro -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wl,-z,relro -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security build/temp.linux-x86_64-2.7/pypgcrypto.o /usr/lib/postgresql/9.4/lib/pgcrypto.so -lpgport -lpq -o build/lib.linux-x86_64-2.7/pypgcrypto.so
Error is:
python -c "import pypgcrypto; print pypgcrypto.pgcrypt('foo', 'bar')"
Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError: /usr/lib/postgresql/9.4/lib/pgcrypto.so: undefined symbol: InterruptPending
Answer: From one of your comments I got this...
> I want to replicate pgcrypto's behavior in order to be able to generate
> password hashes that match the ones already in my database.
You can use Python to do this already. I don't know what algorithm you're
using, nor should I; here are a few different methods using Python to generate
the exact same hash as PostgreSQL's pgcrypto.
**Crypt**
=# select crypt('12345678', gen_salt('xdes')), md5('test');
crypt | md5
----------------------+----------------------------------
_J9..b8FIoskMdlHvKjk | 098f6bcd4621d373cade4e832627b4f6
Here's the Python to check the password...
#!/usr/bin/env python
import crypt
from hmac import compare_digest as compare_hash
def login():
    hash_ = '_J9..OtC82a6snTAAqWg'
    print(compare_hash(crypt.crypt('123456789', hash_), hash_))
    #return True
if __name__ == '__main__':
    login()
**MD5**
For md5 you can use `passlib`'s md5_crypt as follows...
=# select crypt('12345678', gen_salt('md5')), md5('test');
crypt | md5
------------------------------------+----------------------------------
$1$UUVXoPbO$JMA7yhrKvaZcKqoFoi9jl. | 098f6bcd4621d373cade4e832627b4f6
Python would look something like...
#!/usr/bin/env python
from passlib.hash import md5_crypt
def login():
    hash_ = '$1$kOFl2EuX$QhhnPMAdx2/j2Tsk15nfQ0'
    print(md5_crypt.verify("12345678", hash_))
if __name__ == '__main__':
    login()
**Blowfish**
select crypt('12345678', gen_salt('bf')), md5('test');
crypt | md5
--------------------------------------------------------------+----------------------------------
$2a$06$HLZUXMgqFhi/sl1D697il.lN8OMQFBWR2VBuZ5nTCd59jvGLU9pQ2 | 098f6bcd4621d373cade4e832627b4f6
Python code...
#!/usr/bin/env python
from passlib.hash import md5_crypt
from passlib.hash import bcrypt
def blowfish():
    hash_ = '$2a$06$HLZUXMgqFhi/sl1D697il.lN8OMQFBWR2VBuZ5nTCd59jvGLU9pQ2'
    print(bcrypt.verify("12345678", hash_))
if __name__ == '__main__':
    blowfish()
|
Using regex, best way to get all punctuations from a line in Python?
Question: I tried something like this but it's a bit long:
punct_tab = []
for line in f:
    tab = line.split()
    for word in tab:
        if re.search(r",", word) != 0:
            punct_tab.append(',')
        if re.search(r".", word) != 0:
            punct_tab.append('.')
        # .... etc
Do you have a better idea?
Thank you!
Answer: You can use
[`string.punctuation`](https://docs.python.org/2/library/string.html#string.punctuation):
>>> import string
>>>
>>> line = "Hello, world!"
>>>
>>> punctuation = set(string.punctuation)
>>> print([c for c in line if c in punctuation])
[',', '!']
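If you specifically want a regex for this (as the question asks), a minimal sketch that builds a character class from `string.punctuation` -- `re.escape` keeps every punctuation character safe inside the brackets:

    >>> import re
    >>> import string
    >>>
    >>> line = "Hello, world!"
    >>> print(re.findall('[%s]' % re.escape(string.punctuation), line))
    [',', '!']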
|
Spaces in directory path python
Question: I'm a noob at coding Python and I've run into something that no amount of
Googling is helping me with. I'm trying to write a simple Directory listing
tool and I cannot seem to deal with Spaces in the directory name in OSX. My
code is as follows:
def listdir_nohidden(path):
import os
for f in os.listdir(path):
if not f.startswith('.'):
yield f
def MACListDirNoExt():
import os
MACu = PCu = os.environ['USER']
MACDIR = '/Users/'+MACu+'/Desktop//'
while True:
PATH = raw_input("What is the PATH you would like to list?")
if os.path.exists(PATH):
break
else:
print "That PATH cannot be found or does not exist."
NAME = raw_input ("What would you like to name your file?")
DIR = listdir_nohidden(PATH)
DIR = [os.path.splitext(x)[0] for x in DIR]
f = open(''+MACDIR+NAME+'.txt', "w")
for file in DIR:
f.write(str(file) + "\n")
f.close()
print "The file %s.txt has been written to your Desktop" % (NAME)
raw_input ("Press Enter to exit")
For ease of trouble shooting though I think this could essentially be boiled
down to:
import os
PATH = raw_input("What is the PATH you would like to list")
os.listdir(PATH)
When supplying a directory path that contains spaces /Volumes/Disk/this is a
folder it returns
_"No such file or Directory: '/Volumes/Disk/this\\\ is\\\ a\\\ folder/'_
It looks like its escaping the escape...?
Answer: Check the value returned from raw_input() for occurrences of '\\\' and replace
them with ''.
a = a.replace('\\', '')
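A minimal sketch of that fix, assuming Python 2 on OSX as in the question:

    import os

    path = raw_input("What is the PATH you would like to list? ")
    # shell-style escapes such as "this\ is\ a\ folder" become plain spaces again
    path = path.replace('\\', '')
    print os.listdir(path)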
|
Very Large number Calculations with No Loss in Accuracy?
Question: Very Large number Calculations with No Loss in Accuracy ?
Given a 1700 digit number, we want to store the value and perform two
functions on it with NO loss of accuracy; it's ok if the calculation takes longer, but
better if faster.
Where `x` = a 1700 digit long numeric value
The two calcs to be computed will be:
`X` * (up to a four digit value)
then we take the modulus of this resultant by 400:
( x % 400 )
If we can't multiply [ `X` * (up to a four digit value) ] and then take the
modulus due to processing bottlenecks or ceilings - then can this be done where
we first take the modulus of the original `x` = 1700 digits and then multiply
this by the four digit value and then take the modulus of this after? Ideally
I'd prefer to be able to do the first scenario.
Constraints I'm aware of regarding this to date:
Firstly, I'm only running on a WinXP 32-bit system and not able to upgrade
currently.
Secondly, I've become aware of a lot of issues, bugs, and errors with
Python, sympy, etc. in properly handling very large number calcs. These
problems seem to arise out of data loss through the use of floats and related.
Details on a number of different approaches can be viewed here ;
<https://groups.google.com/forum/#!topic/sympy/eUfW6C_nHdI>
<https://groups.google.com/forum/#!topic/sympy/hgoQ74iZLkk>
My system will not properly handle "float128" floats, although I've been told
by one person this would be able to handle such a computation - the problem is
that float128 is rarely an actual 128-bit float and certainly
not on my system. Also, due to internal processing peculiarities, it seems that
most floats will lose data on these kinds of computations. If I understand
correctly, one of the best candidates for getting the most accurate values
returned involves the use of arbitrary precision and representing the inputs
as strings and not just straight numeric values? Also, ideally, I'd like the
formula to be able to handle rationals without accuracy loss. So "x" starts
off as a whole number, but when I multiply it by the four digit value, I'd like
that value to be any numeric value such as an integer, whole number or
rational like "2243.0456".
Structure of one of the methods I've been experimenting with:
from sympy import mpmath
mpmath.mp.dps = 1700
x = (mpmath.mpf" INSERT 1700 DIGIT NUMBER HERE"
(x % 400)
An example with live data ;
from sympy import mpmath
mpmath.mp.dps = 1700
x = (mpmath.mpf"4224837741562986738552195234618134569391350587851527986076117152972791626026988760293885754068768475423919991676816860701478996539715076968649431668262941552499272851934021744703799728797962346859481772141964720120813934781420732260156446701740408591264289745960985811289070246238359268267313892549883722768575435935465369820850766441187744058828599331364172396647692768693734233545999439071435129082764340446292057962343360114463696515950803159895238667237356887294549618489296157716384494295159851060500050371940523385701946860964162569067371175357615144192344763876540813882107379891591055307476597279137714860430053785074855035948744902479909111840444834198237419177418965268614345042634655648237818899253116247916585686713243193074635608527160273611309051938762676520507404815180792793701259216609316118483835216791263172902470123821111779223204735647931377027227055312940934756325611832463728974558417085791096461266371917752574370345933533929245534623041989305973992490523694190318284666464757159324866096861573704540654160644711274766759520501013633999706244117691235878123489694261724158073725644897527727473450037615295487637338687848351441331386946416003718795419822246935787682977520303924734875834943985619000970655639767984458204513958680501990182471695393372003272654902387493955849775308922901631024199011283441050881608686856746206012270890984260424834329551281249797545775091226433669036680463406283858413423722935297859778786945935751468048494081427689669730664660260908636113264573712854536295005312934569838992758429422872122606102877623867968067833225444280667381025371705347744037508121975424674439904984528128036994803804742198422695627759844248"
(x % 400)
But I have no idea if accurate results are being returned with this; I would
love to hear anyone's suggestions.
Answer: [Fractions](https://docs.python.org/3/library/fractions.html) can grow to a
very large amount. Although less efficient, they might do what you want.
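A minimal sketch of that idea -- note the large integer below is a stand-in, not your actual 1700-digit value. Python's plain `int` is already arbitrary precision, and `Fraction` keeps a rational multiplier like 2243.0456 exact, so nothing is ever converted to a float:

    from fractions import Fraction

    x = 10**1699 + 7            # stand-in for the 1700 digit integer
    k = Fraction("2243.0456")   # exact rational multiplier, no float rounding

    result = (x * k) % 400      # exact end-to-end
    print(result)               # a Fraction; use int(result) only if it is integral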
|
python kernel crashes on mouse hover over Tkinter window
Question: I want to plot a graph in a Jupyter notebook. When I use the following code
%pylab inline
import numpy as np
x=np.linspace(0,10,40)
plt.plot(x,x**2)
plt.show()
everything works fine, but if I change `%pylab inline` to `%pylab tk` or
`%pylab qt`, an interactive graph in a separate window is shown, and when I hover
the mouse over the window the Python kernel crashes. Does anyone have an idea how to
solve this problem and plot graphs in separate windows?
I use Windows 7, Python 3.5.1 from Anaconda 2.4.1 (64-bit) distribution.
Answer: If you want matplotlib interactive, i.e. the plots open in a separate window,
you will want to execute the first cell of your notebook with the following
magic:
%matplotlib
This should load an interactive backend for your system
If you want to work inline:
%matplotlib inline
Then you can run your code, but please, do not use `pylab`, use `numpy` and
`matplotlib.pyplot` instead; this will keep your namespaces tidy.
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0,10,40)
plt.plot(x, x**2)
plt.show()
To change backends during a session, you may have to restart your kernel in
`jupyter` for the new backend setting to take effect.
|
Convert Time to printable or localtime format
Question: In my python code I get start and end time some thing like:
end = int(time.time())
start = end - 1800
Now start and end variables holds values like 1460420758 and 1460422558.
I am trying to convert it in a meaningful format like :
Mon Apr 11 17:50:25 PDT 2016
But am unable to do so, I tried:
time.strftime("%a %b %d %H:%M:%S %Y", time.gmtime(start))
Gives me
Tue Apr 12 00:25:58 2016
Not only the timezone but also the H:M:S are wrong.
As date returns me the below information:
$ date
Mon Apr 11 18:06:27 PDT 2016
How to correct it?
Answer: This one involves utilizing datetime to create the format you wish with the
strftime method.
What's important is that the time information you get 'MUST' be UTC in order
to do this. Otherwise, you're doomed D:
I'm using timedelta to 'add' hours to the time. It will also increment the
date, too. I would still recommend using the module I shared above to handle
time zones.
import time
# import datetime so you could play with time
import datetime
print int(time.time())
date = time.gmtime(1460420758)
# Transform time into datetime
new_date = datetime.datetime(*date[:6])
new_date = new_date + datetime.timedelta(hours=8)
# Utilize datetime's strftime and manipulate it to what you want
print new_date.strftime('%a %b %d %X PDT %Y')
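Alternatively -- a sketch assuming you actually want the machine's local timezone, which is what the `date` command prints -- `time.localtime` plus the `%Z` directive avoids hard-coding the 8 hour offset:

    import time

    end = int(time.time())
    start = end - 1800
    # %Z is filled in from the local timezone settings (e.g. PDT)
    print time.strftime("%a %b %d %H:%M:%S %Z %Y", time.localtime(start))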
|
Change the style or background of a cell in Dominate table (Python)
Question: Here's a sample from my csv file (imagine that the xxxx.img are actually
<http://my.website.me/xxxx.img>)
LHS_itemname,LHS_img, LHS_color, RHS_itemname, RHS_img, RHS_color
backpack, bck.img, blue , lunchbox, lch.img, blue
backpack, bck.img, green , lunchbox, lch.img, blue
I want to display this csv as an HTML table where each image url can be
grabbed from the web using the web url and displayed inside the table. And if
the LHS_color is the same as the RHS_color, I want that row in the table to
have a grey background.
Here's what I have so far using the `dominate` package in Python:
import os
import os.path
import sys
import csv
import urllib
import re
import glob
import numpy as np
from dominate import document
from dominate.tags import *
import dominate
Set names for the input csv and output html (call them inFileName, and
outFileName)
f = open(inFileName, 'rb') # Path to csv file
reader = csv.reader(f)
header = ['LHS_itemname','LHS_img', 'LHS_color', 'RHS_itemname', 'RHS_img', 'RHS_color']
with document(title='ItemsBoughtTogether') as doc:
h1('ItemsBoughtTogether', align = 'Center')
with table(border='1').add(tbody()):
l = thead().add(tr())
for col in header:
print col
l += td(p(b(str(col))))
l = thead().add(tr())
for row in reader:
l = tr()
l += td(p(row[0], ALIGN='Center'))
l += td(p(row[1], ALIGN='Center'))
l += td(div(img(src=row[2]), _class='photo', ALIGN='Center')) # img LHS
l += td(p(row[3], ALIGN='Center'))
l += td(p(row[4], ALIGN='Center'))
l += td(div(img(src=row[6]), _class='photo', ALIGN='Center')) # img RHS
if row[2] == row[5]: {background-color:'grey'}
This last `if` statement is what I don't know how to put in syntactically. I'm
having a hard time finding dominate examples with html tables in general, so
if anyone has good resources for that, please comment.
Answer: I've never used dominate, but it's generally preferable to use style sheets
for css attributes (like background colour). I would just include an external
style sheet here, and give this row a certain class if it satisfies your
criteria.
eg. style.css:
.grey_background {
background-color: grey;
}
add in a link (after the `with document(title...` line:
with doc.head:
link(rel='stylesheet', href='style.css')
finally, add the class - instead of: `l = tr()`, do something like:
l = tr(_class='grey_background') if row[2] == row[5] else tr()
**Edit: Alternatively, for an inline style**
Since it seems to support keywords, the following should work:
l = tr(style="background-color: grey") if row[2] == row[5] else tr()
|
How to create a Text Node with lxml?
Question: I'm using lxml and python to manipulate xml files. I want to create a text
node with no tags preferably, instead of creating a new `Element` and then
append a text to it. How can I do that?
I could find an equivalent of this in `xml.dom.minidom` package of python
called `createTextNode`, so I was wondering if lxml supports same
functionality or not?
Answer: Looks like `lxml` doesn't provide a special API to create a text node. You can
simply set the `text` property of a parent element to create or modify the text node
in that element, for example:
>>> from lxml import etree
>>> raw = '''<root><foo/></root>'''
>>> root = etree.fromstring(raw)
>>> root.text = 'bar'
>>> etree.tostring(root)
'<root>bar<foo/></root>'
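Relatedly, text that sits _after_ a child element is stored on that child's `tail` property, so continuing the example above:

    >>> root[0].tail = 'baz'
    >>> etree.tostring(root)
    '<root>bar<foo/>baz</root>'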
|
Python encoding issue in script if string not hard-coded
Question: I have an encoding issue with strings I get from an external source. This
source sends the strings encoded to me and I can decode them only if they are
part of the script's code. I've looked at several threads here and even some
recommended tutorials (such as [this](https://wiki.python.org/moin/PrintFails)
one) but came up empty.
For example, if I run this:
python -c 'print "gro\303\237e"'
I get:
große
Which is the correct result.
But If I use it in a script, such as:
import sys
print sys.argv[1]
and call it like `test.py "gro\303\237e"`, I get:
gro\303\237e
I intend to write the correct string to syslog, but I can't seem to get this
to work.
Some data on my system:
- Python 2.7.10
- CentOS Linux
- LANG=en_US.UTF-8
- LC_CTYPE=UTF-8
I will appreciate any help, please let me know if you need more information.
Thanks!
Answer: This will work:
import sys
import ast
print ast.literal_eval('b"%s"' % sys.argv[1]).decode("utf-8")
But please read about
[literal_eval](https://docs.python.org/2/library/ast.html#ast.literal_eval)
first to make sure it suits your needs (I think it should be safe to use but
you should read and make sure).
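If you'd rather avoid `literal_eval`, a sketch assuming Python 2 and a UTF-8 terminal: the `string_escape` codec interprets the backslash escapes, leaving the raw UTF-8 bytes, which then print as `große`:

    import sys

    print sys.argv[1].decode('string_escape')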
|
Why does using multiprocessing with pandas apply lead to such a dramatic speedup?
Question: Suppose I have a pandas dataframe and a function I'd like to apply to each
row. I can call `df.apply(apply_fn, axis=1)`, which should take time linear in
the size of `df`. Or I can split `df` and use `pool.map` to call my function
on each piece, and then concatenate the results.
I was expecting the speedup factor from using `pool.map` to be roughly equal
to the number of processes in the pool (new_execution_time =
original_execution_time/N if using N processors -- and that's assuming zero
overhead).
Instead, in this toy example, time falls to around 2% (0.005272 / 0.230757)
when using 4 processors. I was expecting 25% at best. What is going on and
what am I not understanding?
import numpy as np
from multiprocessing import Pool
import pandas as pd
import pdb
import time
n = 1000
variables = {"hello":np.arange(n), "there":np.random.randn(n)}
df = pd.DataFrame(variables)
def apply_fn(series):
return pd.Series({"col_5":5, "col_88":88,
"sum_hello_there":series["hello"] + series["there"]})
def call_apply_fn(df):
return df.apply(apply_fn, axis=1)
n_processes = 4 # My machine has 4 CPUs
pool = Pool(processes=n_processes)
t0 = time.process_time()
new_df = df.apply(apply_fn, axis=1)
t1 = time.process_time()
df_split = np.array_split(df, n_processes)
pool_results = pool.map(call_apply_fn, df_split)
new_df2 = pd.concat(pool_results)
t2 = time.process_time()
new_df3 = df.apply(apply_fn, axis=1) # Try df.apply a second time
t3 = time.process_time()
print("identical results: %s" % np.all(np.isclose(new_df, new_df2))) # True
print("t1 - t0 = %f" % (t1 - t0)) # I got 0.230757
print("t2 - t1 = %f" % (t2 - t1)) # I got 0.005272
print("t3 - t2 = %f" % (t3 - t2)) # I got 0.229413
I saved the code above and ran it using `python3 my_filename.py`.
PS I realize that in this toy example `new_df` can be created in a much more
straightforward way, without using apply. I'm interested in applying similar
code with a more complex `apply_fn` that doesn't just add columns.
Answer: **Edit** (My previous answer was actually wrong.)
`time.process_time()`
([doc](https://docs.python.org/3.5/library/time.html#time.process_time))
measures time only in the current process (and doesn't include sleeping time).
So the time spent in child processes is not taken into account.
I ran your code with `time.time()`, which measures real-world time (showing no
speedup at all) and with a more reliable `timeit.timeit` (about 50% speedup).
I have 4 cores.
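For reference, a minimal sketch of that wall-clock measurement, reusing `pool`, `df_split` and `call_apply_fn` from the question's script; unlike `time.process_time()`, this does include the work done in the pool's child processes:

    import time

    t0 = time.time()
    new_df2 = pd.concat(pool.map(call_apply_fn, df_split))
    print("wall time: %f" % (time.time() - t0))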
|
How to get original favorite count, and each user's follower count, from Twitter streaming API in Python
Question: I'm attempting to extract individual pieces of data from the public stream of
tweets for two tracked keywords, using the Python package
[TwitterAPI](https://github.com/geduldig/TwitterAPI/blob/master/README.rst).
I would ideally like to get the original favorite count for the
`retweeted_status` object (not for the user's `status` wrapper) but am having
difficulty doing so, since both `print(retweeted_status['favorite_count'])`
and `print(status['favorite_count'])` always return zero.
Failing that, I would like to be able to get the follower count of each user
in the stream. I can see an entity called 'friends_count' in the full json
returned from each tweet when I run `print(item)`, but if I run
`print(user['friends_count'])` I get the following error:
Traceback (most recent call last):
File "twitter.py", line 145, in <module>
friends()
File "twitter.py", line 110, in favourites
print(user['friends_count'])
KeyError: 'friends_count'
This is what my full code looks like at the moment:
import sys
sys.path.append('/Library/Python/2.6/site-packages')
from TwitterAPI import TwitterAPI
import string
OAUTH_SECRET = "foo"
OAUTH_TOKEN = "foo"
CONSUMER_KEY = "foo"
CONSUMER_SECRET = "foo"
def friends():
TRACK_TERM = 'hello'
api = TwitterAPI(CONSUMER_KEY, CONSUMER_SECRET, OAUTH_TOKEN, OAUTH_SECRET)
f = api.request('statuses/filter', {'track': TRACK_TERM})
for user in f:
print(user['friends_count'])
def favorite():
TRACK_TERM = 'kanye'
api = TwitterAPI(CONSUMER_KEY, CONSUMER_SECRET, OAUTH_TOKEN, OAUTH_SECRET)
h = api.request('statuses/filter', {'track': TRACK_TERM})
for retweeted_item in h:
print(retweeted_item['favorite_count'])
if __name__ == '__main__':
try:
friends()
favorite()
except KeyboardInterrupt:
print '\nGoodbye!'
Any advice or information would be much appreciated - I assume I have made a
mistake somewhere in my syntax (I am a Python beginner!) which is throwing
KeyErrors but haven't been able to work out what it is from either the
documentation for the TwitterAPI package or the Twitter API itself after
hours of searching.
EDIT: this is what the streaming API returns for a single user's post when I
run `for user in f: print(user)` (I don't know how to make it more
readable/wrap the text on Stack Overflow, sorry) - you can see both
'friends_count' and 'followers_count' return a number but I don't know how to
print them out individually without it just resulting in a KeyError.
{u'contributors': None, u'truncated': False, u'text': u'Hearing Kanye spit on a Drake beat is just really a lot for me!!!! I was not prepared!!', u'is_quote_status': False, u'in_reply_to_status_id': None, u'id': 719940912453853184, u'favorite_count': 0, u'source': u'<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>', u'retweeted': False, u'coordinates': None, u'timestamp_ms': u'1460482264041', u'entities': {u'user_mentions': [], u'symbols': [], u'hashtags': [], u'urls': []}, u'in_reply_to_screen_name': None, u'id_str': u'719940912453853184', u'retweet_count': 0, u'in_reply_to_user_id': None, u'favorited': False, u'user': {u'follow_request_sent': None, u'profile_use_background_image': True, u'default_profile_image': False, u'id': 247986350, u'verified': False, u'profile_image_url_https': u'https://pbs.twimg.com/profile_images/715358123108601856/KM-OCY2D_normal.jpg', u'profile_sidebar_fill_color': u'DDEEF6', u'profile_text_color': u'333333', u'followers_count': 277, u'profile_sidebar_border_color': u'FFFFFF', u'id_str': u'247986350', u'profile_background_color': u'C0DEED', u'listed_count': 1, u'profile_background_image_url_https': u'https://pbs.twimg.com/profile_background_images/695740599/089d0a4e4385f2ac9cad05498169e606.jpeg', u'utc_offset': -25200, u'statuses_count': 6024, u'description': u'this is my part, nobody else speak', u'friends_count': 298, u'location': u'las vegas', u'profile_link_color': u'FFCC4D', u'profile_image_url': u'http://pbs.twimg.com/profile_images/715358123108601856/KM-OCY2D_normal.jpg', u'following': None, u'geo_enabled': True, u'profile_banner_url': u'https://pbs.twimg.com/profile_banners/247986350/1454553801', u'profile_background_image_url': u'http://pbs.twimg.com/profile_background_images/695740599/089d0a4e4385f2ac9cad05498169e606.jpeg', u'name': u'princess laser tag', u'lang': u'en', u'profile_background_tile': True, u'favourites_count': 9925, u'screen_name': u'hannahinloafers', u'notifications': None, u'url': u'http://eecummingsandgoings.tumblr.com', u'created_at': u'Sun Feb 06 00:49:24 +0000 2011', u'contributors_enabled': False, u'time_zone': u'Pacific Time (US & Canada)', u'protected': False, u'default_profile': False, u'is_translator': False}, u'geo': None, u'in_reply_to_user_id_str': None, u'lang': u'en', u'created_at': u'Tue Apr 12 17:31:04 +0000 2016', u'filter_level': u'low', u'in_reply_to_status_id_str': None, u'place': None}
Answer: I've solved it, and think it was an issue with me not understanding how to
retrieve JSON from nested dictionaries. This worked:
if 'retweeted_status' in item:
item2 = item['retweeted_status']
print(item2['favorite_count'])
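The `KeyError` in `friends()` has the same cause: `friends_count` and `followers_count` sit inside the nested `'user'` object of each tweet (visible in the dump above), not at the top level, so something like this should work:

    for user in f:
        # each streamed item is a full tweet; the counts live under its 'user' key
        print(user['user']['friends_count'], user['user']['followers_count'])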
|
How to use a list in other function?
Question: I have a list like this `cs_id["CS_A1","CS_b7",...]` in a function. At the end
of the function the list is filled with 80 values. How can I use this list
(and values) in another function? Here I want to use the list `cs_id[]` from
function unzip in function `changecs`. (By the way, the second function isn't
ready yet.)
## Update
I still don't get it... I don't know why.
Here is my full code...maybe someone can help.
**maker.py**
#!/usr/bin/python
import getopt
import sys
import functions as func
ifile = ''
ofile = ''
instances = 0
def main(argv):
try:
opts, args = getopt.getopt(argv, "hi:o:n:d", ["help", "ifile=", "ofile=", "NumberOfInstances="])
except getopt.GetoptError:
func.usage()
sys.exit(2)
for opt, arg in opts:
if opt in ("-h", "--help"):
func.usage()
sys.exit()
elif opt in '-d':
global _debug
_debug = 1
elif opt in ("-i", "--ifile"):
global ifile
ifile = arg
elif opt in ("-o", "--ofile"):
global ofile
ofile = arg
elif opt in ("-n", "--NumberOfInstances"):
global instances
instances = int(arg)
func.unzip(ifile, instances)
func.changecs()
if __name__ == "__main__":
main(sys.argv[1:])
**functions.py**
import os
import zipfile
import sys
import string
import random
# printing usage of warmaker.py
def usage():
print "How to use warmaker.py"
print 'Usage: ' + sys.argv[0] + ' -i <inputfile> -o <outputfile> -n <NumberOfInstances>'
# creating random IDs for CS instance e.g. CS_AE, CS_3B etc.
def id_generator(size=2, chars=string.ascii_uppercase + string.digits):
return ''.join(random.choice(chars) for _ in range(size))
# unzip the reference warfile and build n instances
def unzip(ifile, instances,):
newinstance = ifile
cs_id = []
for i in xrange(instances):
cs_id.append('CS_' + id_generator())
i += 1
print 'Searching for reference file ' + newinstance
if os.path.isfile(newinstance): # check if file exists
print 'Found ' + newinstance
else:
print newinstance + ' not fonund. Try again.'
sys.exit()
print 'Building ' + str(instances) + ' instances... '
for c in xrange(instances):
extract = zipfile.ZipFile(newinstance)
extract.extractall(cs_id[c])
extract.close()
print cs_id[c] + ' done'
c += 1
return cs_id
#def create_war_file():
def changecs(cs_id):
n = 0
for item in cs_id:
cspath = cs_id[n] + '/KGSAdmin_CS/conf/contentserver/contentserver-conf.txt'
if os.path.isfile(cspath):
print 'contentserver-conf.txt found'
else:
print 'File not found. Try again.'
sys.exit()
n += 1
#f = open(cspath)
#row = f.read()
Answer: Two ways.
### 1/ Return the list in unzip
def unzip(ifile, instances):
# No need for this global
# global cs_id
cs_id = []
# Do stuff
# [...]
# Return the list
return cs_id
In this case you can call unzip and get the complete list as return value:
def changecs(instances):
    # The following line is equivalent to
    #     cs_id = unzip(ifile, instances)
    #     for c in cs_id:
    for c in unzip(ifile, instances):
        cspath = c + '/abc/myfile.txt'
### 2/ Pass it as a parameter and modify it in unzip.
def unzip(ifile, instances, cs_id):
# Do stuff
# [...]
In this case you can pass unzip the empty list and let it modify it in place:
def changecs(instances):
    cs_id = []
    unzip(ifile, instances, cs_id)
    for c in cs_id:
        cspath = c + '/abc/myfile.txt'
I prefer the first approach. No need to provide unzip with an empty list. The
second approach is more suited if you have to call unzip on an existing non-
empty list.
## Edit:
Since your edit, `unzip` returns `cs_id` and `changecs` uses it as an input.
def unzip(ifile, instances,):
[...]
return cs_id
def changecs(cs_id):
[....]
But you call them like this:
func.unzip(ifile, instances)
func.changecs() # This should trigger an Exception since changecs expects a positional argument
You should call them like this:
variable = func.unzip(ifile, instances)
func.changecs(variable)
or just
func.changecs(func.unzip(ifile, instances))
|
Collapse information according to certain column of a line
Question: For the matrix as below
A 20 200
A 10 150
B 60 200
B 80 300
C 90 400
C 30 300
My purpose is trying to: for each category (labelled as A,B,C..in the 1st
column), I'd like to find the minimum as well as maximum numbers (as biggest
range). So expect to see:
A 10 200
B 60 300
C 30 400
So how could I do using Python?
Answer: I would start by:
maxs, mins = {}, {}
for line in fd:
    category, small, big = line.split()
    small, big = int(small), int(big)  # compare numerically, not lexicographically
    if category not in maxs or big > maxs[category]:
        maxs[category] = big
    if category not in mins or small < mins[category]:
        mins[category] = small

# final printing
for category in maxs:
    print(category, mins[category], maxs[category], sep='\t')
This returns dicts that can be merged using `{c: (mins[c], maxs[c]) for c in
maxs}`.
This code assumes that an iterable of lines is named `fd`. It could be an opened
file containing the matrix in raw text.
If the order is important, a good solution is to use an
[OrderedDict](https://docs.python.org/3/library/collections.html#collections.OrderedDict)
instead of the regular dict for `mins` and `maxs`.
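For a quick test, `fd` can simply be the sample rows as a list of strings before running the loop above:

    fd = [
        "A 20 200",
        "A 10 150",
        "B 60 200",
        "B 80 300",
        "C 90 400",
        "C 30 300",
    ]
    # the loop then prints (tab-separated): A 10 200, B 60 300, C 30 400
    # (dict ordering may vary on older Python versions)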
|
Accessing GET Form data from - Javascript Form in Django
Question: I'm having trouble with Django in terms of getting data from a Javascript
form. Here is my Javascript code...
function save() {
var form = document.createElement("form");
console.log(form);
form.setAttribute('method', 'get');
form.setAttribute('action', '/quiz_score/');
document.body.appendChild(form);
var i = document.createElement("input");
i.setAttribute('name', 'Score');
i.setAttribute('value', ""+score);
form.appendChild(i);
var i = document.createElement("input");
i.setAttribute('name', 'csrfmiddlewaretoken');
i.setAttribute('value', '{{ csrf_token }}');
form.appendChild(i);
form.submit();
}
I know using GET isn't ideal however I couldn't get POST working, it simply
wouldn't redirect to the target page.
Here is my Django Class and function...
class QuizScoreView(TemplateView):
template_name = "quiz_score.html"
def quiz_score(self, request):
# Quiz.objects.create(username= ,score= )
print("Score: "+request.body)
I am simply trying to get the score variable so I can use it in python.
Please comment if you need any more details and I will add them to the
question below.
Answer: I got it to work using the following HTML/JavaScript:
<html><body>
<button onclick="save();">click me</button>
<script>
function save() {
var form = document.createElement("form");
console.log(form);
form.setAttribute('method', 'get');
form.setAttribute('action', '/quiz_score/');
document.body.appendChild(form);
var i = document.createElement("input");
i.setAttribute('name', 'Score');
i.setAttribute('value', "+score");
form.appendChild(i);
var i = document.createElement("input");
i.setAttribute('name', 'csrfmiddlewaretoken');
i.setAttribute('value', '{{ csrf_token }}');
form.appendChild(i);
form.submit();
}
</script>
</body></html>
View:
from django.shortcuts import render
def quiz_score(request):
context = {'score': request.GET['Score']}
return render(request, 'quiz_score.html', context=context)
urls.py:
url(r'^quiz_score/$', quiz_score)
I noticed in your JavaScript you have `i.setAttribute('value', ""+score);`.
Maybe that's supposed to be `i.setAttribute('value', "+score");` or something
similar?
I went with a straight function view. You have an interesting mix of
TemplateView and function-based view. If you wanted to use a TemplateView, you
could do something like:
from django.views.generic import TemplateView
class QuizScoreView(TemplateView):
template_name = 'quiz_score.html'
def get(self, request, *args, **kwargs):
context = self.get_context_data(**kwargs)
context['Score'] = request.GET['Score']
return self.render_to_response(context)
urls.py:
url(r'^quiz_score/$', QuizScoreView.as_view())
Hope that helps!
|
What is "backlog" in TCP connections?
Question: Below, you see a python program that acts as a server listening for connection
requests to port _9999_ :
# server.py
import socket
import time
# create a socket object
serversocket = socket.socket(
socket.AF_INET, socket.SOCK_STREAM)
# get local machine name
host = socket.gethostname()
port = 9999
# bind to the port
serversocket.bind((host, port))
# queue up to 5 requests
serversocket.listen(5)
while True:
# establish a connection
clientsocket,addr = serversocket.accept()
print("Got a connection from %s" % str(addr))
currentTime = time.ctime(time.time()) + "\r\n"
clientsocket.send(currentTime.encode('ascii'))
clientsocket.close()
The question is: what is the function of the parameter of the `socket.listen()`
method (i.e. `5`)?
Based on the tutorials around the internet:
> The backlog argument specifies the maximum number of queued connections and
> should be at least 0; the maximum value is system-dependent (usually 5), the
> minimum value is forced to 0.
But:
1. What is these _queued connections_?
2. Does it make any change for clients requests? (I mean is the server that is running with `socket.listen(5)` different from the server that is running with `socket.listen(1)` in accepting connection requests or in receiving data?)
3. Why the minimum value is zero? Shouldn't it be at least `1`?
4. Which value is preferred?
5. Is this `backlog` defined for TCP connections only or we have it for UDP and other protocols too?
Answer: **NOTE: These answers are framed without any Python background, but the
questions are language-agnostic anyway.**
> What is these queued connections?
In simple words, **_BACKLOG_** is equal to how many pending connections the
queue will hold.
When multiple clients would like to connect to the server, the server holds
the incoming requests in a queue. The clients are arranged in that queue, and the
server processes their requests one by one as the queue advances.
These connections are called queued connections.
> Does it make any change for clients requests? (I mean is the server that is
> running with socket.listen(5) different from the server that is running with
> socket.listen(1) in accepting connection requests or in receiving data?)
Yes, both cases are different. The first case would allow up to 5 clients to be
queued; whereas in the case of backlog=1, only 1 connection can
be held in the queue, so further connection requests get dropped!
> Why the minimum value is zero? Shouldn't it be at least 1?
I have no idea about Python, but, [as per this
source](http://pubs.opengroup.org/onlinepubs/009695399/functions/listen.html),
in C, a backlog argument of 0 may allow the socket to accept connections, in
which case the length of the listen queue may be set to an implementation-
defined minimum value.
> Which value is preferred?
This question has no well-defined answer. I'd say this depends on the nature
of your application, as well as the hardware and software configuration.
Again, as per the source, the backlog is silently limited to
between 1 and 5, inclusive (again, as per C).
> Is this backlog defined for TCP connections only or we have it for UDP and
> other protocols too?
NO. Please note that there's no need to listen() or accept() for unconnected
datagram sockets (UDP). This is one of the perks of using unconnected datagram
sockets!
But do keep in mind that there are TCP-based datagram socket
implementations (called TCPDatagramSocket) too, which do have a backlog parameter.
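To make this concrete, a minimal sketch of the server from the question with the smallest useful backlog -- a second client that connects while one connection is still waiting to be accept()ed may be refused instead of queued:

    import socket

    serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    serversocket.bind((socket.gethostname(), 9999))
    serversocket.listen(1)   # hold at most 1 pending connection in the queue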
|
Python vector field of ODE of three variables
Question: I am trying to plot a vector field of an ODE model with three variables. I
would like to average the vectors along the third axis, and present the vector
field together with the information of the standard deviation of their values.
The ODE system is:
a = 1.
b1 = 0.1
b2 = 0.11
c1 = 1.5
c2 = 1.6
d = 0.75
def dudt(a,b1,b2,u,v1,v2):
return a*u - b1*u*v1 - b2*u*v2
def dv1dt(d,c1,b1,u,v1):
return -c1*v1 + d*b1*u*v1
def dv2dt(d,c2,b2,u,v2):
return -c2*v2 + d*b2*u*v2
The function that I am currently using is:
import numpy as np
import matplotlib.pyplot as plt
def plotVF(mS=None, density= 20,color='k'):
mB1 = np.linspace(0,1.1,int(density))
mB2 = np.linspace(0,1.1,int(density))
if mS==None:
mS = np.linspace(0,1.1,int(density))
B1,B2,S = np.meshgrid(mB1,mB2,mS)
average=True
else:
B1,B2 = np.meshgrid(mB1,mB2)
S = mS
average=False
DB1 = dv1dt(d,c1,b1,S,B1)
DB2 = dv2dt(d,c2,b2,S,B2)
DS = dudt(a,b1,b2,S,B1,B2)
if average:
print "Averaging"
DB1std = np.std(DB1,axis=2)
DB2std = np.std(DB2,axis=2)
DB1 = np.mean(DB1,axis=2)
DB2 = np.mean(DB2,axis=2)
DS = np.mean(DS,axis=2)
vecstd = np.hypot(DB1std,DB2std)
plt.imshow(vecstd)
plt.colorbar()
B1,B2 = np.meshgrid(mB1,mB2)
M = (np.hypot(DB1, DB2, DS))
M[ M == 0] = 1.
DB1=DB1/M
DB2=DB2/M
DS=DS/M
print B1.shape,B2.shape,DB1.shape,DB2.shape
plt.quiver(B1, B2, DB1, DB2, pivot='mid', color=color)
plt.xlim(0,1.1), plt.ylim(0,1.1)
plt.grid('on')
plt.show()
It gives me that the standard deviation along the third axis is zero, which
does not make sense.
[](http://i.stack.imgur.com/0HSyY.png)
Does anyone have an idea what I am doing wrong?
Answer: Your code is almost perfectly fine. There's only one problem: you're plotting
the colormap with a vanilla call to `plt.imshow`.
As its name suggests, `imshow` is used for plotting images. As such, it by
default doesn't expect coordinate inputs, just a single array containing the
pixel data. This implies that a simple call to `imshow` will have axes limits
corresponding to the number of pixels in your image -- in your case the
dimensions of your 2d data arrays. If you take a look at the image directly
created by `imshow`, you'll see that the limits go up to `x,y=20`. Later you
set new limits according to your actual underlying mesh, truncating your plot
to the first 2 data points.
The solution is to explicitly tell `plt.imshow()` where you want your plot to
reside in coordinate space:
plt.imshow(vecstd, extent=[B1.min(),B1.max(),B2.min(),B2.max()], origin='lower')
The first keyword argument `extent` gives the `x` and `y` limits into which
the data should be plotted. Note the important second keyword argument,
`origin`. By default `imshow` plots things "upside down" in order to _not_
plot actual images upside down. When you're using `imshow` to plot stuff
defined with Cartesian coordinates, you have to tell it that the origin of the
coordinate system should not be the _upper_ left corner of the figure (as for
images), bur rather the _lower_ left corner (as for regular plots).
|
What is a good way to make several attempts to handle one exception in Python?
Question: Let's say I have a preferred solution to handle an exception.
But the exception still occurs sometimes and I have to fall back to a second,
less preferred, solution.
I use code like the following; it might not look cool. If there is a better
practice, please advise.
try:
print(undefined_varible)
except NameError:
try:
print("1st attempt to handle NameError exception")
except NameError:
print("2nd attempt to handle NameError exception")
UPDATE:
here is a piece of my real code
import re
def find_phone(text):
try:
pattern = re.compile(r"(?:\+?(\d{1,3}))?[-. (]*(\d{3})[-. )]*(\d{3})[-. ]*(\d{4})(?: *x(\d+))?")
#search for a phone number
mo = pattern.search(text)
phone = mo.group()
print (phone)
except AttributeError:
try:
#don't want to use this pattern always because it's too wide and not perfect
pattern = re.compile(r"(?:\+?(\d{1,3}))?[-. (]*(\d{1,3})[-. )]*(\d{1,3})[-. ]*(\d{1,4})([-. ]*(\d{1,4}))?([-. ]*(\d{1,4}))?(?: *x(\d{1,4}))?")
mo = pattern.search(text)
phone = mo.group()
print (phone)
except AttributeError:
print("phone number has not been found")
find_phone("here is the weird formatted phone number: +33 6 29 37 24 57")
Answer: Suppose you have several ways to check for a phone number. One idea is to have
a counter in your function and through an `if` statement choose which one to
try
import re

def find_phone(text):
    number_of_cases = 2
    i = 0
    while i < number_of_cases:
        try:
            if i == 0:
                pattern = re.compile(r"(?:\+?(\d{1,3}))?[-. (]*(\d{3})[-. )]*(\d{3})[-. ]*(\d{4})(?: *x(\d+))?")
                # search for a phone number
                mo = pattern.search(text)
                phone = mo.group()
                print(phone)
            elif i == 1:
                # don't want to use this pattern always because it's too wide and not perfect
                pattern = re.compile(r"(?:\+?(\d{1,3}))?[-. (]*(\d{1,3})[-. )]*(\d{1,3})[-. ]*(\d{1,4})([-. ]*(\d{1,4}))?([-. ]*(\d{1,4}))?(?: *x(\d{1,4}))?")
                mo = pattern.search(text)
                phone = mo.group()
                print(phone)
        except AttributeError:
            i += 1
        else:
            break
Another thought would be to check whether you have actually found a phone number, as sketched below.
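A minimal sketch of that second idea, reusing the two patterns from the question and checking the match object directly instead of catching `AttributeError`; the wide pattern only runs when the strict one fails:

    import re

    PATTERNS = [
        # strict pattern first
        re.compile(r"(?:\+?(\d{1,3}))?[-. (]*(\d{3})[-. )]*(\d{3})[-. ]*(\d{4})(?: *x(\d+))?"),
        # wider fallback pattern
        re.compile(r"(?:\+?(\d{1,3}))?[-. (]*(\d{1,3})[-. )]*(\d{1,3})[-. ]*(\d{1,4})([-. ]*(\d{1,4}))?([-. ]*(\d{1,4}))?(?: *x(\d{1,4}))?"),
    ]

    def find_phone(text):
        for pattern in PATTERNS:
            mo = pattern.search(text)
            if mo:   # no exception handling needed
                print(mo.group())
                return
        print("phone number has not been found")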
|
BeautifulSoup: Get all product links from specific category
Question: I want to get all the product links from specific category by using
BeautifulSoup in Python.
I have tried the following but don't get a result:
import lxml
import urllib2
from bs4 import BeautifulSoup
html=urllib2.urlopen("http://www.bedbathandbeyond.com/store/category/bedding/bedding/quilts-coverlets/12018/1-96?pagSortOpt=DEFAULT-0&view;=grid")
br= BeautifulSoup(html.read(),'lxml')
for links in br.findAll('a', class_='prodImg'):
print links['href']
Answer: You use urllib2 wrong.
import lxml
import urllib2
from bs4 import BeautifulSoup
#create a http request
req=urllib2.Request("http://www.bedbathandbeyond.com/store/category/bedding/bedding/quilts-coverlets/12018/1-96?pagSortOpt=DEFAULT-0&view=grid")
# send the request
response = urllib2.urlopen(req)
# read the content of the response
html = response.read()
br= BeautifulSoup(html,'lxml')
for links in br.findAll('a', class_='prodImg'):
print links['href']
|
OrientDB: text searching using gremlin
Question: I am using OrientDB and the gremlin console that comes with.
I am trying to search a pattern in text property. I have Email vertices with
ebodyText property. The problem is that the result of querying with SQL like
command and Gremlin language is quite different.
If I use SQL like query such as:
`select count(*) from Email where eBodyText like '%Syria%'`
it returns 24.
But if I query in gremlin console such as:
`g.V.has('eBodyText').filter{it.eBodyText.matches('.*Syria.*')}.count()`
it returns none.
Same queries with a different keyword 'memo' returns 161 by SQL but 20 by
gremlin.
Why does it behave like this? Is there a problem with the syntax of the gremlin
command? Is there a better way to search text in gremlin?
I guess there might be a problem with how properties are set in the upload script,
which uses the Python driver 'pyorient'. [Python script used to upload the
dataset](https://github.com/romanegloo/cs505_proj2/blob/master/scripts/importCsv.py)
Thanks for your help.
[](http://i.stack.imgur.com/YjaIh.png)
[](http://i.stack.imgur.com/jkZHg.png)
Answer: I tried with 2.1.15 and I had no problem.
These are the records.
[](http://i.stack.imgur.com/Neaoj.png)
[](http://i.stack.imgur.com/ry8Jg.png)
[](http://i.stack.imgur.com/IY4KG.png)
[](http://i.stack.imgur.com/RHBOu.png)
**EDITED**
I added some vertexes to my DB and now the `count()` is 11
**QUERY:**
g.V.has('eBodyText').filter{it.eBodyText.contains('Syria')}.count()
**OUTPUT:**
==>11
Hope it helps.
|
python autopy problems/confusion
Question: So I'm trying to make a bot script that, when a certain hex color is on a
certain pixel, will execute some code to move the mouse, click, etc. I
have it to where it takes a screenshot every second to the same png file and
updates the png file's picture. I have the hex color for the pixel coords printed to
the console so I can see if it's updating or not. It never updates; it just
stays the same. I've tried writing this script many ways and sadly I only have
one version to show you, but hopefully you will understand what I was trying to
accomplish. I'm on Python 2.7, by the way. Thank you all for your time!
import autopy
from time import sleep
color_grabber = hex(autopy.bitmap.Bitmap.open("screen1.png").get_color(292,115))
def color_checker():
global color_grabber
color_grabber = color_grabber
return
def mouse_move_click():
autopy.mouse.smooth_move(433,320)
autopy.mouse.click()
def screen_grab():
autopy.bitmap.capture_screen().save("screen1.png")
def the_ifs(mouse_move_click):
if color_checker == "0xffcb05":
mouse_move_click()
while 1==1:
sleep(1)
screen_grab()
color_checker()
the_ifs(mouse_move_click)
print color_grabber
Answer:
from autopy.mouse import LEFT_BUTTON
autopy.mouse.click(LEFT_BUTTON)
autopy.mouse.toggle(True, LEFT_BUTTON)
autopy.mouse.toggle(False, LEFT_BUTTON)
I've seen this done in other people's code, but I don't understand why you would
want to use the up and down toggles after the click. In fact, when I tested on
Windows 7, click() worked, but it didn't feel quite right -- it behaved more like
a button-down than a full click.
|
Python regex findall to read line in .csv file
Question: I have a .csv file (or could happily be a .txt file) with some records in it:
JB74XYZ Kerry Katona 44 Mansion_House LV10YFB
WL67IAM William Iam 34 The_Voice_Street LN44HJU
etc etc
I have used python to open and read the file, then regex findall (and
attempted a similar regex rule) to identify a match:
import re
from re import findall
reg = "JB74XYZ"
with open("RegDD.txt","r")as file:
data=file.read()
search=findall(reg,data)
print (search)
which gives the resulting output:
['JB74XYZ']
I have tested this out, and it seems I have the regex findall working, in that
it is correctly identifying a 'match' and returning it.
1. My question is, how do I get the remaining content of the 'matched' lines to be returned as well? (eventually I will get this written into a new file, but for now I just want to have the matching line printed).
I have explored python dictionaries as one way of indexing things, but I hit a
wall and got no further than the regex returning a positive result.
2. I guess from this a second question might be: am I choosing the wrong approach altogether?
I hope I have been specific enough, first question here, and I have spent
hours (not minutes) looking for specific solutions, and trying out a few
ideas. I'm guessing that this is not an especially tricky concept, but I could
do with a few hints if possible.
Answer: A better way to handle this would be to use Python's
[csv](https://docs.python.org/library/csv.html) module. From the looks of your
CSV, I'm guessing it's tab-delimited so I'm running off of that assumption.
import csv
match = "JB74XYZ"
matched_row = None
with open("RegDD.txt", "r") as file:
# Read file as a CSV delimited by tabs.
reader = csv.reader(file, delimiter='\t')
for row in reader:
# Check the first (0-th) column.
if row[0] == match:
# Found the row we were looking for.
matched_row = row
break
print(matched_row)
This should then output the following from `matched_row`:
['JB74XYZ', 'Kerry', 'Katona', '44', 'Mansion_House', 'LV10YFB']
|
Python function such as max() doesn't work in pyspark application
Question: The Python function max(3,6) works under the pyspark shell. But if it is put in an
application and submitted, it throws an error: TypeError: _() takes exactly 1
argument (2 given)
Answer: It looks like you have an import conflict in your application most likely due
to wildcard import from `pyspark.sql.functions`:
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/__ / .__/\_,_/_/ /_/\_\ version 1.6.1
/_/
Using Python version 2.7.10 (default, Oct 19 2015 18:04:42)
SparkContext available as sc, HiveContext available as sqlContext.
In [1]: max(1, 2)
Out[1]: 2
In [2]: from pyspark.sql.functions import max
In [3]: max(1, 2)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-3-bb133f5d83e9> in <module>()
----> 1 max(1, 2)
TypeError: _() takes exactly 1 argument (2 given)
Unless you work in a relatively limited scope, it is best to either prefix:
from pyspark.sql import functions as sqlf
max(1, 2)
## 2
sqlf.max("foo")
## Column<max(foo)>
or alias:
from pyspark.sql.functions import max as max_
max(1, 2)
## 2
max_("foo")
## Column<max(foo)>
|
Python Turtle - Click Events
Question: I'm currently making a program in python's Turtle Graphics. Here is my code in
case you need it
import turtle
turtle.ht()
width = 800
height = 800
turtle.screensize(width, height)
##Definitions
def text(text, size, color, pos1, pos2):
turtle.penup()
turtle.goto(pos1, pos2)
turtle.color(color)
turtle.begin_fill()
turtle.write(text, font=('Arial', size, 'normal'))
turtle.end_fill()
##Screen
turtle.bgcolor('purple')
text('This is an example', 20, 'orange', 100, 100)
turtle.done()
I want to have click events. So, where the text `'This is an example'` is
written, I want to be able to click that and have it print something to the console
or change the background. How do I do this?
**EDIT:**
I don't want to install anything like pygame, it has to be made in Turtle
Answer: Use the onscreenclick method to get the position then act on it in your
mainloop (to print or whatever).
import turtle as t

def getPos(x, y):
    # the callback receives the x/y canvas coordinates of the click
    print(x, y)

def main():
    t.onscreenclick(getPos)
    t.mainloop()
main()
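And if you specifically want a click on the text to do something, the callback above could be swapped for a rough hit test -- the bounds here are a guess around where the example text was written, not an exact measurement:

    def getPos(x, y):
        # hypothetical bounds around the text drawn near (100, 100)
        if 100 <= x <= 320 and 90 <= y <= 140:
            print('text clicked')
            t.bgcolor('orange')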
Also see : [Python 3.0 using
turtle.onclick](http://stackoverflow.com/questions/15893236/python-3-0-using-
turtle-onclick) Also see : [Turtle in python- Trying to get the turtle to move
to the mouse click position and print its
coordinates](http://stackoverflow.com/questions/17864085/turtle-in-python-
trying-to-get-the-turtle-to-move-to-the-mouse-click-position-a)
|
Scrapy and xpath to crawl my site and export URLs - what am I doing wrong?
Question: I'm trying to set up a basic Scrapy to crawl my website and extract all the
page URLs of my site. I would think this would be fairly easy.
Here's my items.py, copied from the tutorial:
from scrapy.item import Item, Field
class Website(Item):
name = Field()
description = Field()
url = Field()
Here's my Spider, named example.py from the tutorial.
from scrapy.spiders import Spider
from scrapy.selector import Selector
from cspenn.items import Website
class DmozSpider(Spider):
name = "cspenn"
allowed_domains = ["christopherspenn.com"]
start_urls = ["http://www.christopherspenn.com/"]
def parse(self, response):
sel = Selector(response)
sites = sel.xpath('//a')
items = []
for site in sites:
item = Website()
item['name'] = site.xpath('a/text()').extract()
item['url'] = site.xpath('a/@href').extract()
item['description'] = site.xpath('text()').re('-\s[^\n]*\\r')
items.append(item)
return items
What I get in return from the bot is:
scrapy crawl cspenn
2016-04-13 13:15:25 [scrapy] INFO: Scrapy 1.0.5 started (bot: cspenn)
2016-04-13 13:15:25 [scrapy] INFO: Optional features available: ssl, http11, boto
2016-04-13 13:15:25 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'cspenn.spiders', 'SPIDER_MODULES': ['cspenn.spiders'], 'BOT_NAME': 'cspenn'}
2016-04-13 13:15:25 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
2016-04-13 13:15:26 [boto] DEBUG: Retrieving credentials from metadata server.
2016-04-13 13:15:27 [boto] ERROR: Caught exception reading instance data
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/boto/utils.py", line 210, in retry_url
r = opener.open(req, timeout=timeout)
File "/usr/lib/python2.7/urllib2.py", line 431, in open
response = self._open(req, data)
File "/usr/lib/python2.7/urllib2.py", line 449, in _open
'_open', req)
File "/usr/lib/python2.7/urllib2.py", line 409, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 1227, in http_open
return self.do_open(httplib.HTTPConnection, req)
File "/usr/lib/python2.7/urllib2.py", line 1197, in do_open
raise URLError(err)
URLError: <urlopen error timed out>
2016-04-13 13:15:27 [boto] ERROR: Unable to read instance data, giving up
2016-04-13 13:15:27 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2016-04-13 13:15:27 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2016-04-13 13:15:27 [scrapy] INFO: Enabled item pipelines:
2016-04-13 13:15:27 [scrapy] INFO: Spider opened
2016-04-13 13:15:27 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-04-13 13:15:27 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-04-13 13:15:27 [scrapy] DEBUG: Crawled (200) <GET http://www.christopherspenn.com/> (referer: None)
2016-04-13 13:15:27 [scrapy] DEBUG: Scraped from <200 http://www.christopherspenn.com/>
{'description': [], 'name': [], 'url': []}
2016-04-13 13:15:27 [scrapy] DEBUG: Scraped from <200 http://www.christopherspenn.com/>
{'description': [], 'name': [], 'url': []}
2016-04-13 13:15:27 [scrapy] DEBUG: Scraped from <200 http://www.christopherspenn.com/>
{'description': [], 'name': [], 'url': []}
    [... the same two lines repeat for every remaining link on the page, each item scraped empty ...]
2016-04-13 13:15:27 [scrapy] DEBUG: Scraped from <200 http://www.christopherspenn.com/>
{'description': [], 'name': [], 'url': []}
2016-04-13 13:15:27 [scrapy] DEBUG: Scraped from <200 http://www.christopherspenn.com/>
{'description': [], 'name': [], 'url': []}
2016-04-13 13:15:27 [scrapy] DEBUG: Scraped from <200 http://www.christopherspenn.com/>
{'description': [], 'name': [], 'url': []}
2016-04-13 13:15:27 [scrapy] DEBUG: Scraped from <200 http://www.christopherspenn.com/>
{'description': [], 'name': [], 'url': []}
2016-04-13 13:15:27 [scrapy] DEBUG: Scraped from <200 http://www.christopherspenn.com/>
{'description': [], 'name': [], 'url': []}
2016-04-13 13:15:27 [scrapy] DEBUG: Scraped from <200 http://www.christopherspenn.com/>
{'description': [], 'name': [], 'url': []}
2016-04-13 13:15:27 [scrapy] DEBUG: Scraped from <200 http://www.christopherspenn.com/>
{'description': [], 'name': [], 'url': []}
2016-04-13 13:15:27 [scrapy] INFO: Closing spider (finished)
2016-04-13 13:15:27 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 222,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 14302,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2016, 4, 13, 17, 15, 27, 262789),
'item_scraped_count': 93,
'log_count/DEBUG': 96,
'log_count/ERROR': 2,
'log_count/INFO': 7,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2016, 4, 13, 17, 15, 27, 77084)}
2016-04-13 13:15:27 [scrapy] INFO: Spider closed (finished)
What am I doing wrong? I followed the tutorial almost exactly. The desired
output is a CSV file of title, page URL, and description.
Answer: You are not making context-specific xpaths correctly. You already have the `a`
in the context, inside the `site` variable, no need to prepend `a` to the
XPath expressions inside the loop:
sel = Selector(response)
sites = sel.xpath('//a')
for site in sites:
item = Website()
item['name'] = site.xpath('text()').extract()
item['url'] = site.xpath('@href').extract()
item['description'] = site.xpath('text()').re('-\s[^\n]*\\r')
yield item
And, since you have the empty descriptions in the output as well, I suspect
the regular expression needs to be tweaked too. This, though, depends on what
exactly you are trying to extract from the link texts.
|
Delete an element in a JSON object
Question: I am trying to loop through a list of objects deleting an element from each
object. Each object is a new line. I am trying to then save the new file as is
without the element contained within the objects. I know this is probably a
simple task but I cannot seem to get this to work. I would be grateful if
somebody could offer a hand. Thanks.
{
"business_id": "fNGIbpazjTRdXgwRY_NIXA",
"full_address": "1201 Washington Ave\nCarnegie, PA 15106",
"hours": {
"Monday": {
"close": "23:00",
"open": "11:00"
},
"Tuesday": {
"close": "23:00",
"open": "11:00"
},
"Friday": {
"close": "23:00",
"open": "11:00"
},
"Wednesday": {
"close": "23:00",
"open": "11:00"
},
"Thursday": {
"close": "23:00",
"open": "11:00"
},
"Saturday": {
"close": "23:00",
"open": "11:00"
}
},
"open": true,
"categories": ["Bars", "American (Traditional)", "Nightlife", "Lounges", "Restaurants"],
"city": "Carnegie",
"review_count": 7,
"name": "Rocky's Lounge",
"neighborhoods": [],
"longitude": -80.0849416,
"state": "PA",
"stars": 4.0,
"latitude": 40.3964688,
"attributes": {
"Alcohol": "full_bar",
"Noise Level": "average",
"Music": {
"dj": false
},
"Attire": "casual",
"Ambience": {
"romantic": false,
"intimate": false,
"touristy": false,
"hipster": false,
"divey": false,
"classy": false,
"trendy": false,
"upscale": false,
"casual": false
},
"Good for Kids": true,
"Wheelchair Accessible": true,
"Good For Dancing": false,
"Delivery": false,
"Dogs Allowed": false,
"Coat Check": false,
"Smoking": "no",
"Accepts Credit Cards": true,
"Take-out": true,
"Price Range": 1,
"Outdoor Seating": false,
"Takes Reservations": false,
"Waiter Service": true,
"Wi-Fi": "free",
"Caters": false,
"Good For": {
"dessert": false,
"latenight": false,
"lunch": false,
"dinner": false,
"brunch": false,
"breakfast": false
},
"Parking": {
"garage": false,
"street": false,
"validated": false,
"lot": true,
"valet": false
},
"Has TV": true,
"Good For Groups": true
},
"type": "business"
}
I need to remove the information contained within the hours element; however, the information is not always the same. Some objects contain all the days and some only contain information for one or two days. The code I've tried to use is Python code that I found while searching throughout the day for something to use with my problem. I am not very skilled with Python. Any help would be appreciated.
import json
with open('data.json') as data_file:
data = json.load(data_file)
for element in data:
del element['hours']
Sorry, just to add: the error I am getting when running the code is `TypeError: 'unicode' object does not support item deletion`.
Answer: Let's assume you want to overwrite the same file:
import json
with open('data.json', 'r') as data_file:
data = json.load(data_file)
for element in data:
element.pop('hours', None)
with open('data.json', 'w') as data_file:
data = json.dump(data, data_file)
`dict.pop(<key>, not_found=None)` is probably what you were looking for, if I
understood your requirements. Because it will remove the `hours` key if
present and will not fail if not present.
However I am not sure I understand why it makes a difference to you whether
the hours key contains some days or not, because you just want to get rid of
the whole key / value pair, right?
Now, if you really want to use `del` instead of `pop`, here is how you could
make your code work:
import json
with open('data.json') as data_file:
data = json.load(data_file)
for element in data:
if 'hours' in element:
del element['hours']
with open('data.json', 'w') as data_file:
data = json.dump(data, data_file)
**EDIT** So, as you can see, I added the code to write the data back to the
file. If you want to write it to another file, just change the filename in the
second open statement.
I had to change the indentation, as you might have noticed, so that the file
has been closed during the data cleanup phase and can be overwritten at the
end.
`with` is what is called a context manager, whatever it provides (here the
data_file file descriptor) is available **ONLY** within that context. It means
that as soon as the indentation of the `with` block ends, the file gets closed
and the context ends, along with the file descriptor which becomes invalid /
obsolete.
Without doing this, you wouldn't be able to open the file in write mode and
get a new file descriptor to write into.
I hope it's clear enough...
**SECOND EDIT**
This time, it seems clear that you need to do this:
with open('dest_file.json', 'w') as dest_file:
with open('source_file.json', 'r') as source_file:
for line in source_file:
element = json.loads(line.strip())
if 'hours' in element:
del element['hours']
                dest_file.write(json.dumps(element) + "\n")  # keep one object per line
|
Write simultaneously to float array with python multiprocessing
Question: I coded a matrix multiplier a while ago; in an attempt to make it faster I tried to make it threaded, just to discover that threads run on the same process. I later discovered the multiprocessing library, which I have implemented in the code below. Now I don't know how to merge the work done by the spawned processes, since the result is not in shared memory.
How can I merge the distributed calculations into the "final_multi" variable?
Here's my code:
#!/usr/bin/env python
import numpy as np
from multiprocessing import Process, Array
T=64
v1 = np.empty([T,T], dtype=np.float32)
v2 = np.empty_like(v1)
final_multi = np.empty_like(v1)
#shared = Array('f', final_multi) This doesnt work
def calclinea(mat1, mat2, fil, col):
escalar = 0
for vl in range(T):
escalar += mat1[fil,vl]*mat2[vl,col]
return escalar
def mulshared(vec1, vec2, froY, toY, froX, toX):
global final_multi
for y in range(froY,toY):
for x in range(froX, toX):
final_multi[x,y] = calclinea(vec1,vec2,x,y)
#shared[x,y] = calclinea(vec1,vec2,x,y)
def main():
for r in range(T): ### Allocate host memory
for c in range(T):
v1[r,c] = r
v2[r,c] = c+2
final_multi[r,c] = 0
#p1 =Process(target=mulshared, args=(v1,v1,0,(T*1/4 -1),0,T))
#p2 =Process(target=mulshared, args=(v1,v1,(T*1/4),(T*2/4 -1),0,T))
#p3 =Process(target=mulshared, args=(v1,v1,(T*2/4),(T*3/4 -1),0,T))
p4 =Process(target=mulshared, args=(v1,v1,T*3/4,T*4/4,0,T)) #All four processes to demo distribution of data, only 4th is initialized so result can be seen, p1 result is all zeros so..
p4.start()
p4.join()
print "\nfinal_multi\n", final_multi
main()
I know this is an inefficient way of doing matrix multiplication; I just want to learn how multiprocessing works. Thanks in advance.
Answer: You can use the [sharedmem](https://github.com/rainwoodman/sharedmem) module,
it's an enhanced version of the multiprocessing module that comes with Python.
It offers a nice and easy way to share memory between processes.
    import sharedmem as shmem
    out_matrix = shmem.empty((400, 400))
    def do_work(x):
        # each worker fills its own band of 100 rows in the shared matrix
        out_matrix[100*x:100*(x+1), :] = x
    def main():
        with shmem.MapReduce(np=4) as pool:
            pool.map(do_work, range(4))
    if __name__ == '__main__':
        main()
In this minimal example, the output matrix will be filled by four workers in
parallel.
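If adding a third-party dependency is not an option, the same effect can be obtained with the standard library alone. Here is a minimal sketch (not from the original answer, and assuming a fork-based start method as on Linux) that backs the result matrix with a `multiprocessing.Array`, which is what the commented-out `shared = Array('f', final_multi)` line in the question was reaching for:

    import numpy as np
    from multiprocessing import Process, Array

    T = 64
    # A flat, lock-free shared buffer; numpy views it as a T x T matrix.
    shared = Array('f', T * T, lock=False)
    final_multi = np.frombuffer(shared, dtype=np.float32).reshape(T, T)

    def mulshared(vec1, vec2, fro_y, to_y):
        # Each worker fills its own slice of columns of the shared matrix.
        for y in range(fro_y, to_y):
            for x in range(T):
                final_multi[x, y] = sum(vec1[x, k] * vec2[k, y] for k in range(T))

    if __name__ == '__main__':
        v1 = np.arange(T, dtype=np.float32).repeat(T).reshape(T, T)  # v1[r, c] = r
        procs = [Process(target=mulshared, args=(v1, v1, i * T // 4, (i + 1) * T // 4))
                 for i in range(4)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        print final_multi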
|
How to make this Battleship game more user friendly in terms of values?
Question: I have a Battleship game set up in Python; however, the grid I set up ranges between 0 and 4, meaning the first row and column of the battleship will be (0,0). I don't want this, as a typical user will likely count from 1, so they'll enter (1,1) or (1,2); the value 0 won't be a value they'd think to enter. How can I make my program reflect that, so that (1,1) is the first column and row, not the second? As it stands, the user can only enter a value between 0 and 4; 5 is treated as an invalid value and the program says it's not on the grid.
So the only possible combinations are these:
Row: 0, 1, 2, 3, 4, Column: 0, 1, 2, 3, 4
I want it to be:
Row: 1, 2, 3, 4, 5 Column 1, 2, 3, 4, 5
Here is my code:
import random
Battleship_Board = []
for x in range(0,5):
Battleship_Board.append(["O"] * 5)
def print_Battleship_Board(Battleship_Board):
for row in Battleship_Board:
print (" ".join(row))
print ("Let's play a game of Battleships!")
print_Battleship_Board(Battleship_Board)
def Random_Battleship_Board_Row(Battleship_Board):
return random.randint(0, len(Battleship_Board)-1)
def Random_Battleship_Board_Column(Battleship_Board):
return random.randint(0, len(Battleship_Board[0])-1)
Battleship_Board_Row = Random_Battleship_Board_Row(Battleship_Board)
Battleship_Board_Column = Random_Battleship_Board_Column(Battleship_Board)
print (Battleship_Board_Row)
print (Battleship_Board_Column)
for turn in range(5):
Guess_Battleship_Board_Row = int(input("Guess the X coordinate:"))
Guess_Battleship_Board_Column = int(input("Guess the Y coordinate:"))
if Guess_Battleship_Board_Row == Battleship_Board_Row and Guess_Battleship_Board_Column == Battleship_Board_Column:
print ("You sunk the battleship!")
print ("My ship was here: [" + str(Battleship_Board_Row) + "][" + str(Battleship_Board_Column) + "]")
break
else:
if turn + 1 == 5:
Battleship_Board[Guess_Battleship_Board_Row][Guess_Battleship_Board_Column] = "X"
print_Battleship_Board(Battleship_Board)
print ("Game Over")
print ("My ship was here: [" + str(Battleship_Board_Row) + "][" + str(Battleship_Board_Column) + "]")
if (Guess_Battleship_Board_Row < 0 or Guess_Battleship_Board_Row > 4) or (Guess_Battleship_Board_Column < 0 or Guess_Battleship_Board_Column > 4):
print ("The inserted value is not on the grid.")
elif(Battleship_Board[Guess_Battleship_Board_Row ][Guess_Battleship_Board_Column] == "X"):
print ("You already inserted this combination")
else:
print ("You missed my battleship")
Battleship_Board[Guess_Battleship_Board_Row][Guess_Battleship_Board_Column] = "X"
print ("Number of turns:", turn + 1,"out of 5")
print_Battleship_Board(Battleship_Board)
Answer: You can just subtract one from the user's guess, and also add a note to say
that the numbers are not zero-based. Remember to check for valid input!
Guess_Battleship_Board_Row = int(input("Guess the X coordinate:")) - 1
Guess_Battleship_Board_Column = int(input("Guess the Y coordinate:")) - 1
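For the input-validation part, here is a small sketch (assuming the 5x5 board from the question, with the hypothetical helper name `ask_coordinate`) that keeps asking until it gets a number between 1 and 5 and then converts it to the 0-based index the board uses internally:

    def ask_coordinate(prompt):
        while True:
            try:
                value = int(input(prompt))
            except ValueError:
                print("Please enter a whole number between 1 and 5.")
                continue
            if 1 <= value <= 5:
                return value - 1  # convert the 1-based guess to a 0-based index
            print("The inserted value is not on the grid.")

    Guess_Battleship_Board_Row = ask_coordinate("Guess the X coordinate (1-5):")
    Guess_Battleship_Board_Column = ask_coordinate("Guess the Y coordinate (1-5):")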
|
How can I subtract two values which I have got from a .txt file
Question: So far I have managed to print out certain parts of the `.txt` file in Python; however, I cannot figure out how to subtract the amount paid from the total amount and then add up the outstanding value from each row.
import csv
FILE_NAME = "paintingJobs.txt" #I use this so that the file can be used easier
COL_HEADERS = ['Number', 'Date', 'ID', 'Total', 'Status', 'Paid']
NUM_COLS = len(COL_HEADERS)#This will insure that the header of each column fits into the length of the data
# read file once to determine maximum width of data in columns
with open(FILE_NAME) as f:
reader = csv.reader(f, delimiter=',')
# determine the maximum width of the data in each column
max_col_widths = [len(col_header) for col_header in COL_HEADERS]
for columns in reader:
for i, col in enumerate(columns):
if "A" in columns and int(columns[5]) < int(columns[3]):
max_col_widths[i] = max(max_col_widths[i], len(repr(col)))
# add 1 to each for commas
max_col_widths = [col_width+1 for col_width in max_col_widths]
# read file second time to display its contents with the headers
with open(FILE_NAME) as f:
reader = csv.reader(f, delimiter=',')
# display justified column headers
print(' ' + ' '.join(col_header.ljust(max_col_widths[i])
for i, col_header in enumerate(COL_HEADERS)))
# display justified column data
for columns in reader:
if "A" in columns and int(columns[5]) < int(columns[3]):
            print(columns)
This is the result so far:
Number Date ID Total Status Paid
['E5345', '22/09/2015', 'C106', '815', 'A', '400']
['E5348', '23/09/2015', 'C109', '370', 'A', '200']
['E5349', '25/09/2015', 'C110', '480', 'A', '250']
['E5353', '28/09/2015', 'C114', '272', 'A', '200']
['E5355', '29/09/2015', 'C116', '530', 'A', '450']
['E5363', '01/10/2015', 'C124', '930', 'A', '500']
['E5364', '02/10/2015', 'C125', '915', 'A', '800']
['E5367', '03/10/2015', 'C128', '427', 'A', '350']
['E5373', '10/10/2015', 'C134', '1023', 'A', '550']
What I want to do is add a new column which is the difference between the total and the paid amount.
Answer: It looks like the data is being stored as strings. Have you tried changing
them to integers? You would do it like this. Suppose we have:
x="1"
y="2"
You can convert them to integers like this.
x=int(x)
    y=int(y)
Then you should have no problem adding them.
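Putting that together, here is a minimal sketch (assuming the column layout shown above, i.e. Total in column 3 and Paid in column 5) that prints each qualifying row with an extra Outstanding value and sums the outstanding amounts:

    import csv

    total_outstanding = 0
    with open("paintingJobs.txt") as f:
        for row in csv.reader(f, delimiter=','):
            if "A" in row and int(row[5]) < int(row[3]):
                outstanding = int(row[3]) - int(row[5])  # Total minus Paid
                total_outstanding += outstanding
                print(row + [str(outstanding)])
    print("Total outstanding:", total_outstanding)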
|
is this Python's pass by reference' behavior?
Question: I thought Python assignment statements were 'pass by value'. For example
b=0
a=b
b=1
print(a) #prints 0
    print(b) #prints 1
However, I am confused by a different behavior when dealing with other kinds
of data. From this tutorial [on openCV](https://pythonprogramming.net/image-
arithmetics-logic-python-opencv-tutorial/) I modified the code slightly to
show two images. The code below takes this image:
[](http://i.stack.imgur.com/Obn1w.png)
and adds it into this image [](http://i.stack.imgur.com/T4Ldd.png)
and repeats the process, adding this image
[](http://i.stack.imgur.com/LCAh8.png)
onto the same base image.
import cv2
import numpy as np
# Load two images
img1 = cv2.imread('3D-Matplotlib.png')
#img1a = img1
img1a = cv2.imread('3D-Matplotlib.png')
img2 = cv2.imread('mainlogo.png')
img3 = cv2.imread('helloo.png')
# I want to put logo on top-left corner, So I create a ROI
rows,cols,channels = img2.shape
roi = img1[20:rows+20, 20:cols+20]
rows3,cols3,channels3 = img3.shape
roi3 = img1[50:rows3+50, 50:cols3+50 ]
# Now create a mask of logo
img2gray = cv2.cvtColor(img2,cv2.COLOR_BGR2GRAY)
# add a threshold
ret, mask = cv2.threshold(img2gray, 220, 255, cv2.THRESH_BINARY_INV)
#anything crossing over 220 is thelower limit
#binary threshold is 0 or 1
#anything> 220 goes to 255
#anything below 220 goes to 0-> black
#and create its inverse mask
mask_inv = cv2.bitwise_not(mask)
#do same for img3
img3gray = cv2.cvtColor(img3,cv2.COLOR_BGR2GRAY)
ret3, mask3 = cv2.threshold(img3gray, 140, 255, cv2.THRESH_BINARY_INV)
mask_inv3 = cv2.bitwise_not(mask3)
# take the ROI of the plot, and throw the mask over it
img1_bg = cv2.bitwise_and(roi,roi,mask = mask_inv)
# Take only region of logo from logo image.
img2_fg = cv2.bitwise_and(img2,img2,mask = mask)
#do the same with the other mask
img3_bg = cv2.bitwise_and(roi3,roi3,mask = mask_inv3)
img3_fg = cv2.bitwise_and(img3,img3,mask = mask3)
#
dst = cv2.add(img1_bg,img2_fg)
dst3 = cv2.add(img3_bg,img3_fg)
img1[0:rows, 0:cols ] = dst
img1a[50:rows3+50, 50:cols3+50 ] = dst3
cv2.imshow('r1',img1)
cv2.imshow('r3',img1a)
cv2.waitKey(0)
cv2.destroyAllWindows()
In the above posted code, I get [](http://i.stack.imgur.com/qnTw3.png)
If I comment out line 7 and uncomment line 8, I would expect the same result
if it was pass by value. But I get something else
[](http://i.stack.imgur.com/F5kBB.png).
Both images are the same. Obviously, the manipulations onto img1 are 'carried
over' to img1a because img1a is set to be equal to img1. If the assignment
statement was 'pass by value' (as I would expect from python), then img1 and
img1a should be different. But since they are the same, I conclude that img1
is a ptr that was passed to img1a. Thus, if I try to print img1a, I get the
same data as I would from printing img1.
So maybe images are passed by reference? What other data types in Python
behave this way? Arrays? Dictionaries? Or am I totally wrong and confused.
Answer: Like in Java, everything in Python is passed and assigned by value. All values
(every expression and variable) in Python are references (pointers to
objects), and assigning one variable to another make the second variable point
to the same object as the first.
When you say you are making "manipulations onto img1", what you are doing is
you are calling methods on the object pointed to by `img1` (using the
subscript or slice syntax, e.g. `img1[...] = dst`, is still implicitly calling
methods on the object pointed to by `img1`) that are mutating that object.
Those changes can be seen through any other object pointer that happens to be
pointing to that object. That has nothing to do with passing or assigning.
Calling methods is not assigning or passing. If all you did with `img1` was
simply _assign_ various things to the variable `img1` (i.e. `img1 =
something`), you would indeed see that such statements never have an effect on
what object `img1a` points to. That is what assigning by value means.
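A short illustration of the distinction (not specific to images; any mutable object such as a list or a numpy array behaves the same way):

    a = [1, 2, 3]
    b = a           # b now points to the same list object as a
    b.append(4)     # mutates the shared object
    print(a)        # [1, 2, 3, 4], the change is visible through a as well

    b = [9, 9]      # rebinds the name b to a brand-new object
    print(a)        # [1, 2, 3, 4], a is unaffected by the rebinding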
|
Python: Reading a global variable inside a function creator
Question: So, I want to create a function creator that reads a global variable every
time it's called, and not just when it's created. That's not the case, since
Python replaces my var reference with its current value.
import operator
space = []
pp = 0 #this is a global var
def op(op,*tval,over = 0, doSave = True):
def operation():
val = list(tval)
val.append(space[pp]) # This apparently becomes val.append(space[0])
temp = op(*val)
if doSave:
space[pp] = temp # This apparently becomes space[0] = temp
return operation
increase = op(operator.add, 1, doSave = True)
Increase only works with space[0] and even if I change pp, it only modifies
the first value of space.
Answer:
return operation
should be indented.
You should have
global space
to change `space`.
|
Multi dimensional dictionary in python
Question:
#!/usr/bin/python
import sys
from collections import defaultdict
from collections import Counter
new_dic_defaultdict = defaultdict(dict)
#new_dic_defaultdict = defaultdict(int)
file="SMSCDR_POSTPAID_150901235000_10.84.0.29_AS.log"
fp=open(file,"r")
str = fp.readlines();
for line in str:
array = line.split('|')
system_id = array[55]
Origin = array[12]
TIME = array[0]
message_id = array[62]
new_dic_defaultdict[Origin][system_id] = new_dic_defaultdict[Origin].setdefault(system_id, 0) + 1;
print new_dic_defaultdict
Here I can write a two-dimensional dictionary. If I add a third dimension it's not working.
Ex: new_dic_defaultdict[Origin][system_id][message_id] = new_dic_defaultdict[Origin][system_id].setdefault(message_id, 0) + 1;
How can I solve this?
Answer: You can use defaultdict like this:
new_dic_defaultdict = defaultdict(lambda: defaultdict(int))
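If a third level keyed by `message_id` is needed, as in the question, the same pattern just nests one level deeper. A short sketch with hypothetical key values:

    from collections import defaultdict

    counts = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))
    counts["origin1"]["system_a"]["msg_42"] += 1   # no setdefault needed
    counts["origin1"]["system_a"]["msg_42"] += 1
    print counts["origin1"]["system_a"]["msg_42"]  # 2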
|
move up the files from subdirectory in root directory
Question: I have the following folder hierarchy:
----Folder
------Subfolders
-----------Sub-sub-folder
--------------Files
So I have multiple subfolders, and in every Subfolder I have one Sub-sub-
folder that contains multiple files, that I want to move up from Sub-sub-
folder to Subfolder
The easiest (and dirty) way of doing this is to just copy the files from the Sub-sub-folder to the Subfolder and then delete the Sub-sub-folder. But if you know a better, more pythonic way of doing this, please let me know.
Answer:
    import os
    import shutil

    root = 'Folder'
    for subfolder in os.listdir(root):                     # each Subfolder
        sub_path = os.path.join(root, subfolder)
        for subsub in os.listdir(sub_path):                # each Sub-sub-folder inside it
            subsub_path = os.path.join(sub_path, subsub)
            if not os.path.isdir(subsub_path):
                continue
            for fname in os.listdir(subsub_path):          # move every file up one level
                shutil.move(os.path.join(subsub_path, fname), sub_path)
|
python multiprocessing using multiple arguments
Question: I can use multiprocessing to easily set up parallel calls to "func" like this:
import multiprocessing
def func(tup):
(a, b) = tup
return str(a+b)
pool = multiprocessing.Pool()
tups = [ (1,2), (3,4), (5,6), (7,8)]
results = pool.imap(func, tups)
print ", ".join(results)
Giving result:
3, 7, 11, 15
The problem is that my actual function "func" is more complicated than the
example here, so I don't want to call it with a single "tup" argument. I want
multiple arguments ~~and also keyword arguments~~. What I want to do is
something like below, but the "*" unpacking inside a list doesn't work ~~(and
doesn't support keywords either)~~ :
import multiprocessing
def func(a, b):
return str(a+b)
pool = multiprocessing.Pool()
tups = [ *(1,2), *(3,4), *(5,6), *(7,8)]
results = pool.imap(func, tups)
print ", ".join(results)
So... is there a way to get all the power of python function calls, while
doing parallel processing?
Answer: Can't you just use dicts or objects?
import multiprocessing
def func(a):
print(str(a['left'] + a['right']))
pool = multiprocessing.Pool()
i1 = {'left': 2, 'right': 5}
i2 = {'left': 3, 'right': 4}
    list(pool.imap(func, [i1, i2]))  # consume the results so the tasks finish before the pool goes away
This way you won't have the keywords defined in the method definition, but you will at least be able to reference the keywords within the method body. The same goes for the function input: instead of dealing with a tuple, this is much more readable.
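If Python 3.3 or newer is available, `multiprocessing.Pool.starmap` unpacks each tuple into positional arguments, which matches the original intent directly:

    import multiprocessing

    def func(a, b):
        return str(a + b)

    if __name__ == "__main__":
        pool = multiprocessing.Pool()
        tups = [(1, 2), (3, 4), (5, 6), (7, 8)]
        results = pool.starmap(func, tups)
        print(", ".join(results))  # 3, 7, 11, 15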
|
python3 - No module named 'html5lib'
Question: I'm running a python3 program that requires `html5lib` but I receive the error
`No module named 'html5lib'`.
Here are two session of terminal:
sam@pc ~ $ python
Python 2.7.9 (default, Mar 1 2015, 12:57:24)
[GCC 4.9.2] on linux2
>>> import html5lib
>>> html5lib.__file__
'/usr/local/lib/python2.7/dist-packages/html5lib/__init__.pyc'
>>> quit()
sam@pc ~ $ python3
Python 3.4.2 (default, Oct 8 2014, 10:45:20)
[GCC 4.9.1] on linux
>>> import html5lib
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named 'html5lib'
>>>
Where can be the problem?
Answer: It seems you have the module only for Python 2. You most probably need to install it for Python 3. Usually you would use pip3 for that:
pip3 install html5lib
You can check your installed modules using:
pip freeze (or pip3 freeze)
I strongly recommend you to use [virtualenv](http://docs.python-
guide.org/en/latest/dev/virtualenvs/) for development. So you can separate the
different python versions and libraries/Modules by project.
use:
pip3 install virtualenv
You can then easily create "environments" using (simple version)
virtualenv projectname --python=PYTHON_EXE_TO_USE
This creates a directory projectname. You just switch into that dir and do a
    Scripts\activate (on linux/unix: source bin/activate)
And boom. You have an isolated environment with the given python.exe and no
installed modules at all. You also have an isolated pip for that project.
Really helps a lot.
To end working in that project do a:
Scripts\deactivate (on linux: deactivate)
Thats it.
One more thing ;) You can also do a
pip freeze > requirements.txt
to save all needed dependencies for a project in a file. Whenever you need to
restart from scratch in a new virtualenv you cabn simply do a:
pip install -r requirements.txt
This installs all needed modules for you. Add a _-U_ to get the newest
version.
|
How to list the names of PyPI packages corresponding to imports in a script?
Question: Is there a way to list the **PyPi package** names which correspond to modules
being imported in a script?
For instance to import the module
[`scapy3k`](https://github.com/phaethon/scapy) (this is its name) I need to
use
import scapy.all
but the actual package to install is `scapy-python3`. The latter is what I am
looking to extract from what I will find in the `import` statement (I do not
care about its name - `scapy3k` in that case).
There are other examples (which escape me right now) of packages which have a
`pip install` name completely different from what is being used in the
`import` afterwards.
Answer: The name listed on pypi is the
[name](https://packaging.python.org/en/latest/distributing/#setup-name)
defined in the distribution's setup.py / setup.cfg file. There is no
requirement that this name relates to the name of the package that will be
installed. So there is no 100% reliable way to obtain the name of a
distribution on pypi, given only the name of the package that it installs (the
use case identified in the OP's comment).
|
Display SQLite output in TK python
Question: I'm trying to get a row from my db to display in a Tk text widget if its flag is 1, and remove it from the display if it is 0.
The code I have so far shows the row for one card. When I scan a second time I get the following error:
SQLite objects created in a thread can be used in that same thread.The object was created in thread id 6740 and this is thread id 6320
<traceback object at 0x02AAC418>
<class 'sqlite3.ProgrammingError'>
Traceback (most recent call last):
File "C:\rfid\main2.py", line 66, in <module>
cardmonitor.addObserver( cardobserver )
File "C:\Python27\lib\site-packages\smartcard\CardMonitoring.py", line 105, in addObserver
observer.update(self, (self.rmthread.cards, []))
File "C:\rfid\main2.py", line 56, in update
a(tag)
File "C:\rfid\main2.py", line 25, in a
root.mainloop()
File "C:\Python27\lib\lib-tk\Tkinter.py", line 1017, in mainloop
self.tk.mainloop(n)
KeyboardInterrupt
Main code below
import sqlite3 as db
import os
from prettytable import from_db_cursor
from smartcard.scard import *
from smartcard.util import toHexString
from prettytable import from_db_cursor
from smartcard.CardMonitoring import CardMonitor, CardObserver
import time
from Tkinter import Tk, BOTH, INSERT, Text
def main(tag):
q = "SELECT * FROM CARDS WHERE TAG=?"
up = "UPDATE CARDS SET FLAG = (CASE WHEN FLAG=0 THEN 1 ELSE 0 END) WHERE TAG=?"
id = "SELECT * FROM CARDS WHERE TAG=?"
cursor.execute(q, (tag,))
cursor.execute(up, (tag,))
conn.commit()
for row in cursor.execute(id, (tag,)):
print row [1] + row[2] #debugging to console
r1 = str(row[1])
r2 = str(row[2])
msg = str(r1 + r2)
text_widget = Text(root, font='times 40 bold', bg='Green')
text_widget.pack(fill=BOTH, expand=0)
text_widget.tag_configure('tag-center', wrap='word', justify='center')
text_widget.insert(INSERT, msg, 'tag-center')
root.mainloop()
class printobserver( CardObserver ):
def update( self, observable, (addedcards, removedcards) ):
previousIdString = ""
idString = ""
for card in addedcards:
if addedcards:
hresult, hcontext = SCardEstablishContext(SCARD_SCOPE_USER)
assert hresult==SCARD_S_SUCCESS
hresult, readers = SCardListReaders(hcontext, [])
assert len(readers)>0
reader = readers[0]
hresult, hcard, dwActiveProtocol = SCardConnect(
hcontext,
reader,
SCARD_SHARE_SHARED,
SCARD_PROTOCOL_T0 | SCARD_PROTOCOL_T1)
hresult, response = SCardTransmit(hcard,dwActiveProtocol,[0xFF,0xCA,0x00,0x00,0x04])
v = toHexString(response, format=0)
tag = str(v)
main(tag)
conn = db.connect('cards3.db')
root = Tk()
while True:
cursor = conn.cursor()
cardmonitor = CardMonitor()
cardobserver = printobserver()
cardmonitor.addObserver( cardobserver )
cardmonitor.deleteObserver( cardobserver )
time.sleep( 2 )
**update:** From the answers below I have now tried the following. I moved conn.cursor into the class, but I get the same problem with a different error: `cursor is not defined`.
import sqlite3 as db
import os
from prettytable import from_db_cursor
from smartcard.scard import *
from smartcard.util import toHexString
from prettytable import from_db_cursor
from smartcard.CardMonitoring import CardMonitor, CardObserver
import time
from Tkinter import Tk, BOTH, INSERT, Text
def main(tag):
q = "SELECT * FROM CARDS WHERE TAG=?"
up = "UPDATE CARDS SET FLAG = (CASE WHEN FLAG=0 THEN 1 ELSE 0 END) WHERE TAG=?"
id = "SELECT * FROM CARDS WHERE TAG=?"
cursor.execute(q, (tag,))
cursor.execute(up, (tag,))
conn.commit()
for row in cursor.execute(id, (tag,)):
print row [1] + " has been checked " + ('in' if row[2] else 'out')
r1 = str(row[1])
r2 = str(row[2])
mseg = str(r1 + r2)
text_widget = Text(root, font='times 40 bold', bg='Green')
text_widget.pack(fill=BOTH, expand=0)
text_widget.tag_configure('tag-center', wrap='word', justify='center')
text_widget.insert(INSERT, r1 + r2, 'tag-center')
root.mainloop()
class printobserver( CardObserver ):
cursor = conn.cursor()
def update( self, observable, (addedcards, removedcards) ):
previousIdString = ""
idString = ""
for card in addedcards:
if addedcards:
hresult, hcontext = SCardEstablishContext(SCARD_SCOPE_USER)
assert hresult==SCARD_S_SUCCESS
hresult, readers = SCardListReaders(hcontext, [])
assert len(readers)>0
reader = readers[0]
hresult, hcard, dwActiveProtocol = SCardConnect(
hcontext,
reader,
SCARD_SHARE_SHARED,
SCARD_PROTOCOL_T0 | SCARD_PROTOCOL_T1)
hresult, response = SCardTransmit(hcard,dwActiveProtocol,[0xFF,0xCA,0x00,0x00,0x04])
v = toHexString(response, format=0)
tag = str(v)
main(tag)
conn = db.connect('cards3.db')
root = Tk()
while True:
cardmonitor = CardMonitor()
cardobserver = printobserver()
cardmonitor.addObserver( cardobserver )
cardmonitor.deleteObserver( cardobserver )
time.sleep( 2 )
I have also tried putting it in main and update, but I still get the same error:
def main(tag):
cursor = conn.cursor
q = "SELECT * FROM CARDS WHERE TAG=?"
up = "UPDATE CARDS SET FLAG = (CASE WHEN FLAG=0 THEN 1 ELSE 0 END) WHERE TAG=?"
id = "SELECT * FROM CARDS WHERE TAG=?"
cursor.execute(q, (tag,))
cursor.execute(up, (tag,))
conn.commit()
for row in cursor.execute(id, (tag,)):
print row [1] + " has been checked " + ('in' if row[2] else 'out')
r1 = str(row[1])
r2 = str(row[2])
mseg = str(r1 + r2)
text_widget = Text(root, font='times 40 bold', bg='Green')
text_widget.pack(fill=BOTH, expand=0)
text_widget.tag_configure('tag-center', wrap='word', justify='center')
text_widget.insert(INSERT, r1 + r2, 'tag-center')
root.mainloop()
def update( self, observable, (addedcards, removedcards) ):
previousIdString = ""
idString = ""
for card in addedcards:
if addedcards:
hresult, hcontext = SCardEstablishContext(SCARD_SCOPE_USER)
assert hresult==SCARD_S_SUCCESS
hresult, readers = SCardListReaders(hcontext, [])
assert len(readers)>0
reader = readers[0]
hresult, hcard, dwActiveProtocol = SCardConnect(
hcontext,
reader,
SCARD_SHARE_SHARED,
SCARD_PROTOCOL_T0 | SCARD_PROTOCOL_T1)
hresult, response = SCardTransmit(hcard,dwActiveProtocol,[0xFF,0xCA,0x00,0x00,0x04])
v = toHexString(response, format=0)
tag = str(v)
main(tag)
cursor = conn.cursor
If I remove all the Tk stuff and put `cursor = conn.cursor` above `while True:`, I can keep scanning cards with no issues:
def main(tag):
cursor = conn.cursor
q = "SELECT * FROM CARDS WHERE TAG=?"
up = "UPDATE CARDS SET FLAG = (CASE WHEN FLAG=0 THEN 1 ELSE 0 END) WHERE TAG=?"
id = "SELECT * FROM CARDS WHERE TAG=?"
cursor.execute(q, (tag,))
cursor.execute(up, (tag,))
conn.commit()
for row in cursor.execute(id, (tag,)):
print row [1] + " has been checked " + ('in' if row[2] else 'out')
Answer: `SQLite objects created in a thread can be used in that same thread.The object
was created in thread id 6740 and this is thread id 6320`
`class printobserver(CardObserver)` has to be creating a new thread, `sqlite3`
does not support much concurrency. However, it does across different
processes. Notice this snip from your code:
**printobserver**
main(tag)
**main**
q = "SELECT * FROM CARDS WHERE TAG=?"
up = "UPDATE CARDS SET FLAG = (CASE WHEN FLAG=0 THEN 1 ELSE 0 END) WHERE TAG=?"
id = "SELECT * FROM CARDS WHERE TAG=?"
cursor.execute(q, (tag,))
cursor.execute(up, (tag,))
conn.commit()
**global namespace`while-loop`**
cardobserver = printobserver()
cardmonitor.addObserver( cardobserver )
cardmonitor.deleteObserver( cardobserver )
Since you are calling `main()` from your `printobserver` object (clearly this object is creating a new thread each time), and `cursor` was created in the main thread in the global namespace, calling `cursor` within `main()`, which now executes in a new thread, produces this error.
Since you are not using the `cursor` externally from `main()`, I recommend
connecting to the database at the top of `main()`, initializing the `cursor`,
then doing whatever you need to do, and disconnecting from the database at the
bottom of main. Alternatively, you can do this within the `printobserver`
object, as the cursor initialization will still be within the same thread as
`main()`.
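A minimal sketch of that suggestion (assuming the same table layout used above): open the connection at the top of `main()`, so every sqlite3 object lives in the thread that actually uses it, and close it before returning:

    import sqlite3 as db

    def main(tag):
        conn = db.connect('cards3.db')   # created in the thread that calls main()
        cursor = conn.cursor()
        cursor.execute("UPDATE CARDS SET FLAG = (CASE WHEN FLAG=0 THEN 1 ELSE 0 END) WHERE TAG=?", (tag,))
        conn.commit()
        for row in cursor.execute("SELECT * FROM CARDS WHERE TAG=?", (tag,)):
            print row[1] + " has been checked " + ('in' if row[2] else 'out')
        conn.close()                     # released before the thread moves on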
|
Sending csv file using Requests.PUT in python [400 Client error: Bad Request]
Question: I am trying to send a csv file using the Requests module, but I keep getting a "400
Client Error: BAD REQUEST for url" error. According to the specification that
I have, here is an example that was given for curl; `curl -X PUT -H "Content-
Disposition: attachment;filename=ABC.csv" -H "Content-Type: application/csv"
-T ABC.csv http://.../api/dss/sites/1/vardefs`
Below is my python code;
import requests
filepath = 'C:\...\ABC.csv'
with open(filepath) as WA:
mydata = WA.read()
response = requests.put('http://...../api/dss/sites/1/vardefs',
data=mydata,
headers = {'content-type':'application/csv', 'Content-Disposition': 'attachment;filename=Cu_temp.csv'},
params={'file': filepath}
)
response.raise_for_status()
Any idea on what I am doing wrong?
Answer: From 'requests' docs:
> data -- (optional) Dictionary, bytes, or file-like object to send in the
> body of the Request.
Try sending `WA` directly (without reading first) for a streaming upload
instead. Also, it is always recommended to open files in 'rb' (read binary)
mode when uploading with requests.
(Edit in response to a comment)
Something like this:
import requests
filepath = 'C:\...\ABC.csv'
with open(filepath, 'rb') as WA:
response = requests.put('http://...../api/dss/sites/1/vardefs',
data=WA,
headers = {
'content-type':'application/csv',
'Content-Disposition': 'attachment;filename=Cu_temp.csv'
})
Does it work this time?
|
How to hide SDL library debug messages in Python?
Question: I am trying to write a simple python app, which will detect a 2-axis joystick
axis movements and call other functions when one axis has been moved to an
endpoint. I don't do programming regularly, I'm doing sysadmin tasks.
Using the pygame library this would be easy, but when I call the get_axis()
method, I am getting an output on the console. Example :
import pygame
pygame.joystick.init()
stick = pygame.joystick.Joystick(0)
stick.init()
pygame.init()
while True:
for e in pygame.event.get():
x=stick.get_axis(0)
y=stick.get_axis(1)
print x,y
And on the console I got :
SDL_JoystickGetAxis value:258:
SDL_JoystickGetAxis value:258:
0.00787353515625 0.00787353515625
I will be running the script in text mode, not for gaming purposes, so in my
case the output is flooded with useless stuff. Although similar questions were already posted, in my opinion none of them offers a real solution. The cause of this seems to be the fact that the SDL library was compiled with debugging turned on. How can I disable the SDL library console output? I don't want to suppress stdout/stderr entirely, as other posts suggest.
Answer: As another answer explained, unless you recompile the SDL source yourself, SDL is going to try to write to stdout because a debug feature was left on. I suggest writing a wrapper function for get_axis that first turns off stdout, calls SDL, turns stdout back on, and then returns the values. Something like:
    import sys
    import os
    import pygame

    def quiet_get_axis(joystick):
        """Returns two floats representing joystick x,y values."""
        devnull = open(os.devnull, 'w')   # a real file object, not just the path string
        sys.stdout = devnull
        sys.stderr = devnull
        x = joystick.get_axis(0)
        y = joystick.get_axis(1)
        sys.stdout = sys.__stdout__
        sys.stderr = sys.__stderr__
        devnull.close()
        return x, y

    pygame.init()                         # initialise pygame (and its joystick module) first
    stick = pygame.joystick.Joystick(0)
    stick.init()
    while True:
        for e in pygame.event.get():
            x, y = quiet_get_axis(stick)
            print x, y
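Note that the SDL messages come from C code writing to the process-level file descriptor, which reassigning Python's `sys.stdout` object may not intercept. Here is a sketch (assuming a POSIX-style system and a hypothetical helper name `silenced`) that silences file descriptor 1 around the call instead:

    import os

    def silenced(call, *args):
        saved_fd = os.dup(1)                          # keep a copy of the real stdout
        devnull_fd = os.open(os.devnull, os.O_WRONLY)
        os.dup2(devnull_fd, 1)                        # point fd 1 at /dev/null
        try:
            return call(*args)
        finally:
            os.dup2(saved_fd, 1)                      # restore stdout
            os.close(devnull_fd)
            os.close(saved_fd)

    # usage: x = silenced(stick.get_axis, 0)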
|
How to split each line from file using python?
Question: I am trying to split the contents of a file. The file has many lines and we don't know how many. As an example, I have this data in the file:
7:1_8:35_2016-04-14
8:1_9:35_2016-04-15
9:1_10:35_2016-04-16
Using Python I want to loop over each line and split it like this:
for line in iter(file):
task =line.split("_")
first_time=task[0] #8:1
second_time=task[1] #9:35
date=task[2] #2016-04-15
But this will give me task[0] as the first line, task[1] as the second line, and so on. How can I read only one line at a time, split its contents to do something with them, and then do the same with the other lines?
Update to my question, here is the full code:
with open('onlyOnce.txt', 'r') as fp:
for f_time, sec_time, dte in filter(None, reader(fp, delimiter="_")):
check_stime=f_time.split(":")
Stask_hour=check_stime[0]
Stask_minutes=check_stime[1]
check_etime=sec_time.split(":")
Etask_hour=check_etime[0]
Etask_minutes=check_etime[1]
#check every minute if current information = desired information
now = datetime.now()
now_time = now.time()
date_now = now.date()
if (time(Stask_hour,Stask_minutes) <= now_time <= time(Etask_hour,Etask_minutes) and date_now == dte):
print("this line in range time: "+ f_time)
else:
print("")
fp.close()
My aim with this code is to check the current time against each line, and when the current time is within the range given by a line, do something; it is like making a schedule or an alarm.
Error is:
Traceback (most recent call last):
File "<encoding error>", line 148, in <module>
TypeError: 'module' object is not callable
Okay, the final update is:
from datetime import datetime,time
from csv import reader
with open('onlyOnce.txt', 'r') as fp:
for f_time, sec_time, dte in filter(None, reader(fp, delimiter="_")):
check_stime=f_time.split(":")
Stask_hour=check_stime[0]
Stask_minutes=check_stime[1]
check_etime=sec_time.split(":")
Etask_hour=check_etime[0]
Etask_minutes=check_etime[1]
#check every minute if current information = desired information
now = datetime.now()
now_time = now.time()
date_now = now.date()
if time(int(Stask_hour),int(Stask_minutes)) <= now_time <= time(int(Etask_hour),int(Etask_minutes) and dte == date_now):
print("this line in range time: "+ f_time)
else:
print("")
fp.close()
But I want to ask a stupid question :/ When I check this logic, it will not print "yes"!! But the date is equal to 2016-04-14, so why is it not correct?? O.o I'm confused:
if('2016-04-14' == datetime.now().date() ):
print("yes")
Thanks to everyone who helped me: Padraic Cunningham and others.
Answer: Use a [csv reader](https://docs.python.org/2/library/csv.html#csv.reader)
passing a [file object](https://docs.python.org/2/library/stdtypes.html#file-
objects) and use `_` as the delimiter:
from csv import reader
    with open("infile") as f:
# loop over reader getting a row at a time
for f_time, sec_time, dte in reader(f, delimiter="_"):
print(f_time, sec_time, dte )
Which will give you something output like:
In [2]: from csv import reader
In [3]: from StringIO import StringIO
In [4]: for f,s,d in reader(StringIO(s), delimiter="_"):
...: print(f,s,d)
...:
('7:1', '8:35', '2016-04-14')
('8:1', '9:35', '2016-04-15')
('9:1', '10:35', '2016-04-16')
Since you have empty lines we need to filter those out:
    with open("infile") as f:
for f_time, sec_time, dte in filter(None, reader(f, delimiter="_")):
print(f_time, sec_time, dte )
So now empty rows will be removed:
In [5]: s = """7:1_8:35_2016-04-14
...: 8:1_9:35_2016-04-15
...:
...: 9:1_10:35_2016-04-16"""
In [6]: from csv import reader
In [7]: from StringIO import StringIO
In [8]: for f,s,d in filter(None, reader(StringIO(s), delimiter="_")):
...: print(f,s,d)
...:
('7:1', '8:35', '2016-04-14')
('8:1', '9:35', '2016-04-15')
('9:1', '10:35', '2016-04-16')
If you want to compare the current date and the hour and minute against the
current time:
from datetime import datetime
from csv import reader
with open('onlyOnce.txt', 'r') as fp:
for f_time, sec_time, dte in filter(None, reader(fp, delimiter="_")):
check_stime = f_time.split(":")
stask_hour= int(check_stime[0])
stask_minutes = int(check_stime[1])
check_etime = sec_time.split(":")
etask_hour = int(check_etime[0])
etask_minutes = int(check_etime[1])
# check every minute if current information = desired information
now = datetime.now()
hour_min_sec = now.hour, now.minute, now.second
            if now.strftime("%Y-%m-%d") == dte and (stask_hour, stask_minutes, 0) <= hour_min_sec <= (etask_hour, etask_minutes, 0):
print("this line in range time: " + f_time)
else:
print("")
Or a simpler way may be to just parse the times:
from datetime import datetime
from csv import reader
with open('onlyOnce.txt', 'r') as fp:
for f_time, sec_time, dte in filter(None, reader(fp, delimiter="_")):
            check_stime = datetime.strptime(f_time, "%H:%M").time()
            check_etime = datetime.strptime(sec_time, "%H:%M").time()
# check every minute if current information = desired information
now = datetime.now()
            if now.strftime("%Y-%m-%d") == dte and check_stime <= now.time() <= check_etime:
print("this line in range time: " + f_time)
else:
print("")
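Regarding the side question at the end: `'2016-04-14'` is a string while `datetime.now().date()` is a `date` object, so the `==` comparison is always `False`. Compare like with like, for example:

    from datetime import datetime

    today = datetime.now().date()
    print('2016-04-14' == str(today))                                    # string vs string
    print(datetime.strptime('2016-04-14', "%Y-%m-%d").date() == today)   # date vs date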
|
Plotting data from CSV in python
Question: I have CSV files in the following format in a folder. They also have an additional column which I don't care about:
Date Price
20150101 1
20160102 3
I want to iterate through all the files in the folder, create a graph with date on the x-axis and price on the y-axis, and save each image on its own page of a PDF file.
I am fairly new to Python and have tried a few things from Google but they didn't work.
Thanks in advance.
Answer: You can iterate over the list of files and save each plot as you go using
usecols to specify the column to use:
import pandas as pd
import matplotlib.pyplot as plt
import os
pth = "path_to"
for fle in os.listdir(pth):
df = pd.read_csv(os.path.join(pth, fle),usecols=(0, 1))
df.plot()
plt.savefig("{}.png".format(fle))
You can use [PdfPages](http://matplotlib.org/faq/howto_faq.html#save-multiple-
plots-to-one-pdf-file):
from matplotlib.backends.backend_pdf import PdfPages
import matplotlib.pyplot as plt
import pandas as pd
    import os
with PdfPages('multipage_pdf.pdf') as pdf:
pth = "path_to"
for fle in os.listdir(pth):
            df = pd.read_csv(os.path.join(pth, fle), usecols=(0, 1))
df.plot()
pdf.savefig()
plt.close()
|
Python3 Portscanner can't solve the socket problem
Question: When I run this code I am getting this socket error:
> [WinError 10038] An operation was attempted on something that is not a
> socket
but even if I delete the `s.close()` it gives me wrong results.
It is a port scanner that tries to connect to all ports on the server I want to scan, and the ones I get a connection from are stored in a list. But for some reason it is giving me wrong results. Can someone please help me?
import socket
import threading
def scan_for_open_ports():
#Creating variables
OpenPorts = []
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
host = input('Host to scan: ')
global port
global OpenPorts
port = 1
#Scanning
for i in range(65534):
try:
s.connect((host, port))
s.shutdown(2)
OpenPorts.append(port)
print(str(port) + 'is open.')
s.close()
port += 1
except socket.error as msg:
print(msg)
s.close()
show_user()
def show_user():
#Giving the user results
print('------Open porst-----\n')
print(OpenPorts)
Answer: That's because you're closing your socket inside the loop with `s.close()` and not opening it again, so you then try to connect with a socket that's already closed. A socket object cannot be reused once its connection has been made or closed, so create a fresh socket for each port and close it when you're done with it at the end of each iteration. I also amended your code to make `OpenPorts` global and to remove the unnecessary `port` variable you define and increment inside your for loop:
    import socket

    OpenPorts = []

    def scan_for_open_ports():
        host = input('Host to scan: ')
        # Scanning: a fresh socket is created for every port attempt
        for port in range(1, 65534):
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(1)  # avoid hanging on filtered ports
            try:
                s.connect((host, port))
                OpenPorts.append(port)
                print(str(port) + ' is open.')
            except socket.error as msg:
                print(msg)
            finally:
                s.close()
        show_user()

    def show_user():
        # Giving the user results
        print('------Open ports-----\n')
        print(OpenPorts)

    scan_for_open_ports()
|
How do you use dask + distributed for NFS files?
Question: Working from [Matthew Rocklin's
post](http://matthewrocklin.com/blog/work/2016/02/22/dask-distributed-part-2)
on distributed data frames with Dask, I'm trying to distribute some summary
statistics calculations across my cluster. Setting up the cluster with
`dcluster ...` works fine. Inside a notebook,
import dask.dataframe as dd
from distributed import Executor, progress
e = Executor('...:8786')
df = dd.read_csv(...)
The file I'm reading is on an NFS mount that all the worker machines have
access to. At this point I can look at `df.head()` for example and everything
looks correct. From the blog post, I think I should be able to do this:
df_future = e.persist(df)
progress(df_future)
# ... wait for everything to load ...
df_future.head()
But that's an error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-26-8d59adace8bf> in <module>()
----> 1 fraudf.head()
/work/analytics2/analytics/python/envs/analytics/lib/python3.5/site-packages/dask/dataframe/core.py in head(self, n, compute)
358
359 if compute:
--> 360 result = result.compute()
361 return result
362
/work/analytics2/analytics/python/envs/analytics/lib/python3.5/site-packages/dask/base.py in compute(self, **kwargs)
35
36 def compute(self, **kwargs):
---> 37 return compute(self, **kwargs)[0]
38
39 @classmethod
/work/analytics2/analytics/python/envs/analytics/lib/python3.5/site-packages/dask/base.py in compute(*args, **kwargs)
108 for opt, val in groups.items()])
109 keys = [var._keys() for var in variables]
--> 110 results = get(dsk, keys, **kwargs)
111
112 results_iter = iter(results)
/work/analytics2/analytics/python/envs/analytics/lib/python3.5/site-packages/dask/threaded.py in get(dsk, result, cache, num_workers, **kwargs)
55 results = get_async(pool.apply_async, len(pool._pool), dsk, result,
56 cache=cache, queue=queue, get_id=_thread_get_id,
---> 57 **kwargs)
58
59 return results
/work/analytics2/analytics/python/envs/analytics/lib/python3.5/site-packages/dask/async.py in get_async(apply_async, num_workers, dsk, result, cache, queue, get_id, raise_on_exception, rerun_exceptions_locally, callbacks, **kwargs)
479 _execute_task(task, data) # Re-execute locally
480 else:
--> 481 raise(remote_exception(res, tb))
482 state['cache'][key] = res
483 finish_task(dsk, key, state, results, keyorder.get)
AttributeError: 'Future' object has no attribute 'head'
Traceback
---------
File "/work/analytics2/analytics/python/envs/analytics/lib/python3.5/site-packages/dask/async.py", line 264, in execute_task
result = _execute_task(task, data)
File "/work/analytics2/analytics/python/envs/analytics/lib/python3.5/site-packages/dask/async.py", line 246, in _execute_task
return func(*args2)
File "/work/analytics2/analytics/python/envs/analytics/lib/python3.5/site-packages/dask/dataframe/core.py", line 354, in <lambda>
dsk = {(name, 0): (lambda x, n: x.head(n=n), (self._name, 0), n)}
What's the right approach to distributing a data frame when it comes from a
normal file system instead of HDFS?
Answer: Dask is trying to use the single-machine scheduler, which is the default if
you create a dataframe using the normal dask library. Switch the default to
use your cluster with the following lines:
import dask
dask.set_options(get=e.get)
|
Traversing from one node in xml to another using Python
Question: I am very new to XML with Python and I have the following XML string that I
get as a response from a network device:
'<Response MajorVersion="1" MinorVersion="0"><Get><Configuration><OSPF MajorVersion="19" MinorVersion="2"><ProcessTable><Process><Naming><ProcessName>1</ProcessName></Naming><DefaultVRF><AreaTable><Area><Naming><AreaID>0</AreaID></Naming><Running>true</Running><NameScopeTable><NameScope><Naming><InterfaceName>Loopback0</InterfaceName></Naming><Running>true</Running><Cost>1000</Cost></NameScope><NameScope><Naming><InterfaceName>Loopback1</InterfaceName></Naming><Running>true</Running><Cost>1</Cost></NameScope><NameScope><Naming><InterfaceName>GigabitEthernet0/0/0/0</InterfaceName></Naming><Running>true</Running><Cost>1</Cost></NameScope></NameScopeTable></Area></AreaTable></DefaultVRF><Start>true</Start></Process></ProcessTable></OSPF></Configuration></Get><ResultSummary ErrorCount="0" /></Response>'
I have the following code to retrieve the interface information along with the
interface cost associated with it. However I would also like to get the
'AreaID' tag associated with each interface as part of my dictionary. Unable
to navigate the tree correctly to retrieve the AreaID tag value:
for node in x.iter('NameScope'):
int_name = str(node.find('Naming/InterfaceName').text)
d[int_name] = {}
d[int_name]['cost'] = str(node.find('Cost').text)
This code gives the following output when 'd' is printed:
{'GigabitEthernet0/0/0/0': {'cost': '1'},
'Loopback0': {'cost': '1000'},
'Loopback1': {'cost': '1'}}
I want something like this in the output:
{'GigabitEthernet0/0/0/0': {'cost': '1', 'area': 0},
'Loopback0': {'cost': '1000', 'area': 0},
'Loopback1': {'cost': '1', 'area': 0}}
Any suggestions or modifications to my code will be really appreciated!
Answer: I would use the [`preceding`](https://developer.mozilla.org/en-
US/docs/Web/XPath/Axes/preceding) notation:
node.xpath(".//preceding::AreaID")[0].text
Complete code I am executing:
from lxml import etree as ET
x = ET.parse("input.xml")
d = {}
for node in x.iter('NameScope'):
int_name = str(node.find('Naming/InterfaceName').text)
d[int_name] = {
'cost': str(node.find('Cost').text),
'area': node.xpath(".//preceding::AreaID")[0].text
}
print(d)
Prints:
{
'Loopback0': {'cost': '1000', 'area': '0'},
'Loopback1': {'cost': '1', 'area': '0'},
'GigabitEthernet0/0/0/0': {'cost': '1', 'area': '0'}
}
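An alternative sketch (not part of the original answer) that reads the `AreaID` once per `<Area>` element and then walks its `NameScope` children, avoiding the `preceding::` axis entirely:

    from lxml import etree as ET

    x = ET.parse("input.xml")
    d = {}
    for area in x.iter('Area'):
        area_id = area.findtext('Naming/AreaID')          # read the area number once
        for node in area.iter('NameScope'):
            name = node.findtext('Naming/InterfaceName')
            d[name] = {'cost': node.findtext('Cost'), 'area': area_id}
    print(d)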
|
How to make a phrase in Python that is input by a user not case sensitive
Question: I am trying to fix my code so that when the user enters a phrase, for instance **cat**, but in the directory they are telling the script to search the word is spelled **Cat**, **CAt**, **CAT**, or **cAT**, it will still report that the phrase is in the directory and where it is. I have attached my code below:
import os
phrase_to_look_for = input("What phrase would you like to look for?")
name_of_directory = input("What is the name of your directory?")
for subdir, dirs, files in os.walk(name_of_directory, phrase_to_look_for):
for file in files:
file_path = os.path.join(name_of_directory, subdir, file)
file_to_search = open(file_path,'r')
try:
contents_of_file = file_to_search.read()
except:
contents_of_file = ''
if phrase_to_look_for in contents_of_file:
print("The word is in the file: " + file)
else:
print("The phrase is not in the file.")
Thanks for your help in advance.
Answer: Simply `lower` the file contents and the users input:
phrase_to_look_for = input("What phrase would you like to look for?").lower()
and
contents_of_file = file_to_search.read().lower()
This will make everything lowercase, effectively making your search case
insensitive. I should point out that your empty `except` should be what you
actually want to be handling, namely `except IOError:`
|
Multiplication of floating point numbers gives different results in Numpy and R
Question: I am doing data analysis in Python (Numpy) and R. My data is a vector 795067 X
3 and computing the mean, median, standard deviation, and IQR on this data
yields different results depending on whether I use Numpy or R. I crosschecked
the values and it looks like R gives the "correct" value.
Median:
Numpy:14.948499999999999
R: 14.9632
Mean:
Numpy: 13.097945407088607
R: 13.10936
Standard Deviation:
Numpy: 7.3927612774052083
R: 7.390328
IQR:
Numpy:12.358700000000002
R: 12.3468
Max and min of the data are the same on both platforms. I ran a quick test to
better understand what is going on here.
* Multiplying 1.2*1.2 in Numpy gives 1.44 (same with R).
* Multiplying 1.22*1.22 gives 1.4884 in Numpy and the same with R.
* However, multiplying 1.222*1.222 in Numpy gives 1.4932839999999998 which is clearly wrong! Doing the multiplication in R gives the correct answer of 1.49324.
* Multiplying 1.2222*1.2222 in Numpy gives 1.4937728399999999 and 1.493773 in R. Once more, R is correct.
In Numpy, the numbers are float64 datatype and they are double in R. What is
going on here? Why are Numpy and R giving different results? I know R uses
IEEE754 double-precision but I don't know what precision Numpy uses. How can I
change Numpy to give me the "correct" answer?
Answer: # Python
The `print` statement/function in Python displays a rounded, shorter representation of
floats; the calculations themselves are done in the precision specified. Python/numpy
uses double-precision floats by default (at least on my 64-bit machine):
import numpy
single = numpy.float32(1.222) * numpy.float32(1.222)
double = numpy.float64(1.222) * numpy.float64(1.222)
pyfloat = 1.222 * 1.222
print single, double, pyfloat
# 1.49328 1.493284 1.493284
print "%.16f, %.16f, %.16f"%(single, double, pyfloat)
# 1.4932839870452881, 1.4932839999999998, 1.4932839999999998
In an interactive Python/iPython shell, the shell prints double-precision
results when printing the results of statements:
>>> 1.222 * 1.222
1.4932839999999998
In [1]: 1.222 * 1.222
Out[1]: 1.4932839999999998
# R
It looks like R is doing the same as Python when using `print` and `sprintf`:
print(1.222 * 1.222)
# 1.493284
sprintf("%.16f", 1.222 * 1.222)
# "1.4932839999999998"
Unlike the interactive Python shells, the interactive R shell also rounds the output
(as `print` does) when displaying the results of statements:
> 1.222 * 1.222
[1] 1.493284
# Differences between Python and R
The differences in your results could result from using single-precision
values in numpy. Calculations with a lot of additions/subtractions will
ultimately make the problem surface:
In [1]: import numpy
In [2]: a = numpy.float32(1.222)
In [3]: a*6
Out[3]: 7.3320000171661377
In [4]: a+a+a+a+a+a
Out[4]: 7.3320003
As suggested in the comments to your actual question, make sure to use double-
precision floats in your numpy calculations.
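If you are not sure which dtype your array actually holds, checking and promoting it is a one-liner; a small sketch with a hypothetical input file:
    import numpy as np

    data = np.loadtxt("data.txt", dtype=np.float32)  # hypothetical single-precision input
    print(data.dtype)                                # float32
    data = data.astype(np.float64)                   # promote to double precision
    print(data.mean(), np.median(data), data.std())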
|
Python: How to output the FASTA header or chromosome index figure according to the location?
Question: I have code which moves a window of size 5 across a sequence from
left to right. The file is in FASTA format, with a header such as >chromosome
followed by its coordinates. I would like each output header to show the
coordinates of that exact window. Can anyone help me?
**Code**
from Bio import SeqIO
with open("test1_out.fasta","w") as f:
for seq_record in SeqIO.parse("test1.fasta", "fasta"):
for i in range(len(seq_record.seq) - 4) :
f.write(">" + str(seq_record.id) + "\n")
f.write(str(seq_record.seq[i:i+5]) + "\n")
**test1.fasta**
>chr1:1-8
ATCGCGTC
>chr2:1-10
ATTTTCGCGA
**Actual output**
>chr1:1-8
ATCGC
>chr1:1-8
TCGCG
>chr1:1-8
CGCGT
>chr1:1-8
GCGTC
>chr2:1-10
ATTTT
>chr2:1-10
TTTTC
>chr2:1-10
TTTCG
>chr2:1-10
TTCGC
>chr2:1-10
TCGCG
>chr2:1-10
CGCGA
**Desired output**
>chr1:1-5
ATCGC
>chr1:2-6
TCGCG
>chr1:3-7
CGCGT
>chr1:4-8
GCGTC
>chr2:1-5
ATTTT
>chr2:2-6
TTTTC
>chr2:3-7
TTTCG
>chr2:4-8
TTCGC
>chr2:5-9
TCGCG
>chr2:6-10
CGCGA
Answer: You just have to modify the heading writing:
seq_name = seq_record.id.split(":")[0] # Get the "chr1"
for i in range(len(seq_record.seq) - 4):
seq_coords = "{}-{}".format(i + 1, i + 5) # Make the coordinates
f.write(">" + seq_name + ":" + seq_coords + "\n") # Print them both
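" + seq_name">
For completeness, the whole loop from the question with the modified header writing might look like this:
    from Bio import SeqIO

    with open("test1_out.fasta", "w") as f:
        for seq_record in SeqIO.parse("test1.fasta", "fasta"):
            seq_name = seq_record.id.split(":")[0]           # e.g. "chr1"
            for i in range(len(seq_record.seq) - 4):
                seq_coords = "{}-{}".format(i + 1, i + 5)    # 1-based window coordinates
                f.write(">" + seq_name + ":" + seq_coords + "\n")
                f.write(str(seq_record.seq[i:i + 5]) + "\n")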
|
how to do 'knife ec2 server create' from python script
Question: I am trying to convert my ant script to python. The ant script runs knife ec2
server create command. What is the best practice to run knife ec2 server
create from Python?
BTW, is python the right scripting technology for automation?
Answer: I'm not familiar with a Python interface for knife, but I see no reason for
this not to work:
import sh
    sh.knife.ec2.server.create(r='role[x]', I='ami-xxxxxxx', f='t2.micro',
                               **{'aws-access-key-id': ACCESS_KEY,
                                  'aws-secret-access-key': SECRET_KEY})  # dashed option names can't be plain keyword arguments
You'll need to `pip install sh` for it, so you can also use subprocess for it
instead if you don't want extra dependencies:
import subprocess
subprocess.Popen('knife ec2 server create ...', shell=True)
If I were you, I'd write a small client for running knife commands more
comfortably, because if you use knife, this is probably not the only command
you're going to run. `sh` would be ideal for that.
Regarding your second question, you should open another issue for it.
|
Python multiprocessing refuses to loop
Question: I've recently discovered Multiprocessing for Python, so I'm playing around
with it a little bit and I ran into a wall.
This is the script I'm working with:
import multiprocessing, time
def p_def():
print "running p"
time.sleep(5)
def v_def():
print "running v"
time.sleep(5)
p = multiprocessing.Process(target=p_def)
v = multiprocessing.Process(target=v_def)
while True:
time.sleep(0.25)
p.start()
while p.is_alive() == True:
time.sleep(0.5)
print "waiting for it to finish"
All works well when I run the code: it starts up p_def, but when it
finishes and wants to run it again, it hits a snag and outputs this
error:
running p
waiting for it to finish
waiting for it to finish
waiting for it to finish
waiting for it to finish
waiting for it to finish
waiting for it to finish
waiting for it to finish
waiting for it to finish
waiting for it to finish
waiting for it to finish
Traceback (most recent call last):
File "proc.py", line 19, in <module>
p.start()
File "/usr/lib/python2.7/multiprocessing/process.py", line 120, in start
assert self._popen is None, 'cannot start a process twice'
AssertionError: cannot start a process twice
This seems a little odd to me. My understanding says it should be able to run
it again, although, what I see online is people saying it cannot run the same
thing twice.
What's the truth and how can I make it run twice?
Answer: You need to recreate the object `p`; a process can be started only once.
(<https://docs.python.org/2/library/multiprocessing.html#multiprocessing.Process.start>)
import multiprocessing, time
def p_def():
print "running p"
time.sleep(5)
def v_def():
print "running v"
time.sleep(5)
v = multiprocessing.Process(target=v_def)
while True:
time.sleep(0.25)
p = multiprocessing.Process(target=p_def)
p.start()
while p.is_alive() == True:
time.sleep(0.5)
print "waiting for it to finish"
|
How to create variables from an CSV file in Python
Question: I am an absolute beginner at coding, and I have a problem to solve. First, I have a
CSV file looking like this, for example:
text.csv:
> jan1,A
> jan2,B
> jan3,C
> jan4,A
> jan5,B
> jan6,C
Now I want to import this "data" from the CSV in a Python programm, so that
the variables are made directly from the CSV file:
jan1=A
jan2=B
...
Please note that the `A` should not be imported as a string; it's a
variable. When I import the CSV with the CSV reader, all the data is imported as
strings?
Answer: It sounds like you want to "cross the boundary" between data and code by
turning data _into_ code. This is generally discouraged because it can be
dangerous (what if someone added a command to your csv file that wiped your
computer?).
Instead, just split the lines in the file by comma and save the first part,
the variable name, as a dictionary key and make the second part the value:
csv_dict = {}
with open(csv_file, "r") as f:
for line in f:
key_val = line.strip().split(",")
csv_dict[key_val[0]] = key_val[1]
You can now access the keys/values via dictionary lookup:
>>> csv_dict["jan5"]
'B'
>>> csv_dict["jan4"]
'A'
    >>> my_variable = csv_dict["jan4"]
>>> my_variable
'A'
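The same idea works with the `csv` module the question mentions; a minimal sketch, assuming the file is called text.csv:
    import csv

    csv_dict = {}
    with open("text.csv", "r") as f:
        for row in csv.reader(f):
            if row:                       # skip blank lines
                csv_dict[row[0]] = row[1]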
|
Parsing xml file in python which contains multifasta BLAST result
Question: I'm trying to parse an XML file which contains a multi-FASTA BLAST result - here is
the
[link](https://drive.google.com/file/d/0B9-yqnpWUqL3eEhHWEkxc2ZVcnM/view?usp=sharing)
\- it's around 400kB in size. The program should return four sequence names. Each
result should be the first hit (i.e. the best alignment) after the corresponding
`<Iteration_iter-num>n</Iteration_iter-num>` tag, where n = 1, 2, 3, ...
Like this:
< Iteration_iter-num >1< /Iteration_iter-num >
****Alignment****
sequence: gi|171864|gb|AAC04946.1| Yal011wp [Saccharomyces cerevisiae]
< Iteration_iter-num >2< /Iteration_iter-num >
****Alignment****
sequence: gi|330443384|ref|NP_009392.2|
< Iteration_iter-num >3< /Iteration_iter-num >
****Alignment****
sequence: gi|6319310|ref|NP_009393.1|
< Iteration_iter-num >4< /Iteration_iter-num >
****Alignment****
sequence: gi|6319312|ref|NP_009395.1|
But in result my program returns this:
<Iteration_iter-num>1</Iteration_iter-num>
****Alignment****
sequence: gi|171864|gb|AAC04946.1| Yal011wp [Saccharomyces cerevisiae]
<Iteration_iter-num>2</Iteration_iter-num>
****Alignment****
sequence: gi|171864|gb|AAC04946.1| Yal011wp [Saccharomyces cerevisiae]
<Iteration_iter-num>3</Iteration_iter-num>
****Alignment****
sequence: gi|171864|gb|AAC04946.1| Yal011wp [Saccharomyces cerevisiae]
<Iteration_iter-num>4</Iteration_iter-num>
****Alignment****
sequence: gi|171864|gb|AAC04946.1| Yal011wp [Saccharomyces cerevisiae]
How to get to another BLASTA result from this xml file?
Here is my code:
from Bio.Blast import NCBIXML
from bs4 import BeautifulSoup
result = open ("BLAST_left.xml", "r")
records = NCBIXML.parse(result)
item = next(records)
file = open("BLAST_left.xml")
page = file.read()
soup = BeautifulSoup(page, "xml")
num_xml_array = soup.find_all('Iteration_iter-num')
i = 0
for records in records:
for itemm in num_xml_array:
print (itemm)
for alignment in item.alignments:
for hsp in alignment.hsps:
print("\n\n****Alignment****")
print("sequence:", alignment.title)
break
itemm = num_xml_array[i+1]
break
// I know my English isn't perfect, but I really don't know what to do and I
don't have anyone else to ask, so I chose you :)
Answer: I think Biopython is a better option here to parse the XML, no neet to use
BeautifulSoup:
from Bio.Blast import NCBIXML
result = open("BLAST_left.xml", "r")
records = NCBIXML.parse(result)
for i, record in enumerate(records):
for align in record.alignments:
print("Iteration {}".format(i))
print(align.hit_id)
break # Breaking here gives you only the best HSP.
|
flask-RESTful : why do I get an AssertionError when parsing an argument with the wrong type?
Question: I'm using flask-RESTful for the first time. In the [docs](http://flask-
restful.readthedocs.org/en/0.3.5/quickstart.html#argument-parsing) it says :
> Using the reqparse module also gives you sane error messages for free. If an
> argument fails to pass validation, Flask-RESTful will respond with a 400 Bad
> Request and a response highlighting the error.
However in my case, I get an exception AssertionError instead.
Here's the code:
#! /usr/bin/python
# -*- coding: UTF-8 -*-
from flask import Flask
from flask.ext.restful import Api, Resource, reqparse
class Test(Resource):
@staticmethod
def put(id):
parser = reqparse.RequestParser()
parser.add_argument('arg1')
parser.add_argument('arg2', type=int, help='helptext')
args = parser.parse_args()
return args, 200
app = Flask(__name__)
api = Api(app)
api.add_resource(Test, '/v1.0/test/<int:id>', endpoint='test')
if __name__ == '__main__':
app.run(host='0.0.0.0', port=5001, debug=True)
When I test this with right values, it works:
$ curl -i -H "Accept: application/json" -X PUT --data "arg1=ABC&arg2=1" http://192.0.0.7:5001/v1.0/test/1
HTTP/1.0 200 OK
Content-Type: application/json
Content-Length: 38
Server: Werkzeug/0.8.3 Python/2.6.6
Date: Fri, 15 Apr 2016 11:59:48 GMT
{
"arg1": "ABC",
"arg2": 1
}
However if I put a wrong value for arg2, instead of getting a status code 400
with an error message I get an exception:
curl -i -H "Accept: application/json" -X PUT --data "arg1=ABC&arg2=A" http://192.0.0.7:5001/v1.0/test/1
HTTP/1.0 500 INTERNAL SERVER ERROR
Content-Type: text/html; charset=utf-8
Connection: close
Server: Werkzeug/0.8.3 Python/2.6.6
Date: Fri, 15 Apr 2016 12:04:25 GMT
    (The response body is the full Werkzeug debugger HTML page; the traceback it contains, shown twice in the page, is:)
    Traceback (most recent call last):
      File "/usr/lib/python2.6/site-packages/flask/app.py", line 1701, in __call__
        return self.wsgi_app(environ, start_response)
      File "/usr/lib/python2.6/site-packages/flask/app.py", line 1689, in wsgi_app
        response = self.make_response(self.handle_exception(e))
      File "/usr/lib/python2.6/site-packages/flask_restful/__init__.py", line 271, in error_router
        return original_handler(e)
      File "/usr/lib/python2.6/site-packages/flask_restful/__init__.py", line 268, in error_router
        return self.handle_error(e)
      File "/usr/lib/python2.6/site-packages/flask/app.py", line 1687, in wsgi_app
        response = self.full_dispatch_request()
      File "/usr/lib/python2.6/site-packages/flask/app.py", line 1360, in full_dispatch_request
        rv = self.handle_user_exception(e)
      File "/usr/lib/python2.6/site-packages/flask_restful/__init__.py", line 271, in error_router
        return original_handler(e)
      File "/usr/lib/python2.6/site-packages/flask/app.py", line 1246, in handle_user_exception
        assert exc_value is e
    AssertionError
This is all running on a Centos 6.5 with
* Python 2.6.6
* Flask (0.9)
* Flask-RESTful (0.3.5)
EDIT: If I run the server with `debug=False`, I get:
$ curl -i -H "Accept: application/json" -X PUT --data "arg1=ABC&arg2=A" http://192.0.0.7:5001/v1.0/test/1
HTTP/1.0 500 INTERNAL SERVER ERROR
Content-Type: application/json
Content-Length: 37
Server: Werkzeug/0.8.3 Python/2.6.6
Date: Fri, 15 Apr 2016 12:50:37 GMT
{"message": "Internal Server Error"}
Answer: Upgrading Flask to 0.10.1 made the problem disappear.
|
ajax request python array list
Question: I am making ajax call and fetching details in python and saving it in mongodb.
**Scenario:** I tried `request.POST.getlist('arrayList[]')`
> _Works:_ if array contains values inside it. Eg: ['abcd', '1234']
>
> **_Doesn't work:_** if array contains arrays inside it. Eg: [[arr1], [arr2]]
> -> this returns []
**How to retrieve outer array and inner arrays?**
Answer: I have found my own solution. I will go through everything once again.
**Scenario:** _I am saving data from html in mongodb through python (django).
There is a scenario where I have a single array and this array can have n
number of smaller arrays inside and the smaller arrays can again have n
numbers of smaller arrays and so on. I thought of doing it in my ajax request
but I was new to python and wanted to learn the python way._
I have attached image for reference as you can observer what I was talking
about
[](http://i.stack.imgur.com/efnqf.png)
**What I did:** I stored all the arrays as required, and in Python I
did this:
from querystring_parser import parser
def index(request):
if request.method=="POST":
urlEncoded = parser.parse(request.POST.urlencode())
if length < noOfActionCards:
appendJsonKey = fullappendJsonKey[length]['']
appendShowUi = fullappendShowUi[length]['']
appendHintText = fullappendHintText[length]['']
.........
.........
dropDownValues = fulldropDownValues[length]
            while i < len(innerArray[length]):
                # and so on, iterating here...
and then saving it on mongodb.
|
Debugging a c-extension in python
Question: I run [bayesopt](http://rmcantin.bitbucket.org/html/) with python bindings. So
I have a `bayesopt.so` that I import from python (a C-extension).
When I run it, it core dumps. I want to load this core dump in gdb to see what
the issue is. How can I do this? Or get information on it?
I tried to load gdb on it, but of course it asks for a binary which I don't
have since it's a `.so`.
Answer: You want to run gdb on python, ie: `gdb -ex r --args python myscript.py`.
There are some helpful tips in the python wiki:
<https://wiki.python.org/moin/DebuggingWithGdb>
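If you already have a core file and want to inspect it directly, you can instead point gdb at the Python interpreter binary rather than the `.so` (the extension is loaded into the python process, so python is the executable gdb needs), e.g. `gdb $(which python) /path/to/core`, where the core path is whatever file your system produced.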
|
Django - Creating form for editing multiple instance of model
Question: Note: Django/Python beginner, hope this question is clear
I need to create a form where multiple instances of a model can be edited at
once in a single form, and be submitted at the same time.
For instance, I have two models, Invite and Guest, where multiple Guests can
be associated with a single Invite. I need a single form where I'm able to
edit particular details of all Guests attached to the invite, submit at the
same time, and save to the database.
I've seen a few suggestions about using [crispy-forms](http://django-crispy-
forms.readthedocs.org/en/latest/), but haven't managed to get it working.
I've created a form that provides certain inputs:
from django import forms
from app.models import Guest
class ExtraForm(forms.ModelForm):
diet = forms.CharField(max_length=128, required=False)
transport = forms.BooleanField(initial=False)
# An inline class to provide additional information on the form.
class Meta:
# Provide an association between the ModelForm and a model
model = Guest
fields = ('diet', 'transport')
My view consists of:
def extra_view(request, code):
invite = get_invite(code)
# Get the context from the request.
context = RequestContext(request)
# Get just guests labelled as attending
guests_attending = invite.guest_set.filter(attending=True)
if request.method == 'POST':
form = ExtraForm(request.POST)
print(form.data)
# Have we been provided with a valid form?
if form.is_valid():
# Save the new category to the database.
# form.save(commit=True)
print(form)
return render(request, 'weddingapp/confirm.html', {
'invite': invite,
})
else:
# The supplied form contained errors - just print them to the terminal for now
print form.errors
else:
# # If the request was not a POST, display the form to enter details.
GuestForm = ExtraForm()
return render_to_response('weddingapp/extra.html',
{'GuestForm': GuestForm, 'invite': invite, 'guests_attending': guests_attending}, context)
And finally, my form:
<form id="extra_form" method="post" action="{% url 'weddingapp:extra' invite.code %}">
{% csrf_token %}
{% for guest in guests_attending %}
<fieldset class="form-group">
<h3>Form for {{ guest.guest_name }}</h3>
{% for field in GuestForm.visible_fields %}
{{ field.errors }}
<div>
{{ field.help_text }}
{{ field }}
</div>
{% endfor %}
</fieldset>
{% endfor %}
{{ form.management_form }}
<table>
{% for form in form %}
{{ form }}
{% endfor %}
</table>
<input type="submit" name="submit" value="Submit"/>
</form>
Any advice?
Answer: You need to use a `FormSet`, in particular a
[ModelFormSet](https://docs.djangoproject.com/es/1.9/topics/forms/modelforms/#django.forms.models.BaseModelFormSet):
...
GuestFormSet = modelformset_factory(Guest, form=ExtraForm)
in your view you can use it as a normal form:
formset = GuestFormSet(data=request.POST)
if formset.is_valid():
formset.save()
and in your template:
<form method="post" action="">
{{ formset.management_form }}
<table>
{% for form in formset %}
{{ form }}
{% endfor %}
</table>
</form>
_tip_ : you can avoid this boilerplate
if request.method == 'POST':
form = ExtraForm(request.POST)
print(form.data)
# Have we been provided with a valid form?
if form.is_valid():
with a simple shortcut:
form = ExtraForm(data=request.POST or None)
if form.is_valid():
...
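Putting it together, a minimal sketch of the view might look like this (names such as `get_invite` and `ExtraForm` are taken from the question; the template then loops over `formset` as shown above):
    from django.forms import modelformset_factory
    from django.shortcuts import render
    from app.models import Guest

    GuestFormSet = modelformset_factory(Guest, form=ExtraForm, extra=0)

    def extra_view(request, code):
        invite = get_invite(code)
        guests_attending = invite.guest_set.filter(attending=True)

        formset = GuestFormSet(data=request.POST or None, queryset=guests_attending)
        if request.method == 'POST' and formset.is_valid():
            formset.save()
            return render(request, 'weddingapp/confirm.html', {'invite': invite})

        return render(request, 'weddingapp/extra.html',
                      {'formset': formset, 'invite': invite,
                       'guests_attending': guests_attending})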
|
Create a subclass object with initialized parent object
Question: I have a BaseEntity class, which defines a bunch (a lot) of non-required
properties and has most of the functionality. I extend this class in two others,
which have some extra methods, as well as initialize one required property.
class BaseEntity(object):
def __init__(self, request_url):
self.clearAllFilters()
super(BaseEntity, self).__init__(request_url=request_url)
@property
def filter1(self):
return self.filter1
@filter1.setter
def filter1(self, some_value):
self.filter1 = some_value
...
def clearAllFilters(self):
self.filter1 = None
self.filter2 = None
...
def someCommonAction1(self):
...
class DefinedEntity1(BaseEntity):
def __init__(self):
super(BaseEntity, self).__init__(request_url="someUrl1")
def foo():
...
class DefinedEntity2(BaseEntity):
def __init__(self):
super(ConsensusSequenceApi, self).__init__(request_url="someUrl2")
def bar(self):
...
What I would like is to initialize a BaseEntity object once, with all the
filters specified, and then use it to create each of the DefinedEntities, i.e.
baseObject = BaseEntity(None)
baseObject.filter1 = "boo"
baseObject.filter2 = "moo"
entity1 = baseObject.create(DefinedEntity1)
Looking for pythonic ideas, since I've just switched from a statically typed
language and am still trying to grasp the power of Python.
Answer: One way to do it:
import copy
class A(object):
def __init__(self, sth, blah):
self.sth = sth
self.blah = blah
def do_sth(self):
print(self.sth, self.blah)
class B(A):
def __init__(self, param):
self.param = param
def do_sth(self):
print(self.param, self.sth, self.blah)
a = A("one", "two")
almost_b = copy.deepcopy(a)
almost_b.__class__ = B
B.__init__(almost_b, "three")
almost_b.do_sth() # it would print "three one two"
Keep in mind that Python is an extremely open language with a lot of possibilities for
dynamic modification, and it is better not to abuse them. From a clean-code
point of view, I would just use a plain old call to the superclass constructor.
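For the classes in the question, that plain-superconstructor approach could look roughly like this (a sketch: the filters are passed in and set on the instance, rather than copied from a pre-built BaseEntity):
    class DefinedEntity1(BaseEntity):
        def __init__(self, **filters):
            super(DefinedEntity1, self).__init__(request_url="someUrl1")
            for name, value in filters.items():
                setattr(self, name, value)

    entity1 = DefinedEntity1(filter1="boo", filter2="moo")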
|
Click will abort further execution because Python 3 was configured to use ASCII as encoding for the environment
Question: I downloaded Quokka Python/Flask CMS to a CentOS7 server. Everything works
fine with command
sudo python3 manage.py runserver --host 0.0.0.0 --port 80
Then I create a file /etc/init.d/quokkacms. The file contains following code
start() {
echo -n "Starting quokkacms: "
python3 /var/www/quokka/manage.py runserver --host 0.0.0.0 --port 80
touch /var/lock/subsys/quokkacms
return 0
}
stop() {
echo -n "Shutting down quokkacms: "
rm -f /var/lock/subsys/quokkacms
return 0
}
case "$1" in
start)
start
;;
stop)
stop
;;
status)
;;
restart)
stop
start
;;
*)
echo "Usage: quokkacms {start|stop|status|restart}"
exit 1
;;
esac
exit $?
But I get error when running `sudo service quokkacms start`
> RuntimeError: Click will abort further execution because Python 3 was
> configured to use ASCII as encoding for the environment. Either switch to
> Python 2 or consult <http://click.pocoo.org/python3/> for
> mitigation steps.
It seems to me that it is the bash script. How come I get different results?
Also I followed instructions in the link in the error message but still had no
luck.
[update] I had already tried the solution provided by Click before I posted
this question. Check the results below (run as root):
[root@webserver quokka]# python3
Python 3.4.3 (default, Jan 26 2016, 02:25:35)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-4)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import locale
>>> import codecs
>>> print(locale.getpreferredencoding())
UTF-8
>>> print(codecs.lookup(locale.getpreferredencoding()).name)
utf-8
>>> locale.getdefaultlocale()
('en_US', 'UTF-8')
>>> locale.CODESET
14
>>>
Answer: The `export` lines belong in the shell script, not the Python code. At the top of your /etc/init.d/quokkacms script (or inside its start() function, right before the python3 command), put
export LC_ALL=en_US.utf-8
export LANG=en_US.utf-8
|
Can't edit a URL with python
Question: I am new to python and just wanted to know if this is possible: I have scraped
a url using `urllib` and want to edit different pages.
**Example** : `http://test.com/All/0.html`
I want the `0.html` to become `50.html` and then `100.html` and so on ...
Answer:
found_url = 'http://test.com/All/0.html'
base_url = 'http://test.com/All/'
for page_number in range(0,1050,50):
url_to_fetch = "{0}{1}.html".format(base_url,page_number)
That should give you URLs from `0.html` to `1000.html`
If you want to use `urlparse`(as suggested in comments to your question):
import urlparse
found_url = 'http://test.com/All/0.html'
parsed_url = urlparse.urlparse(found_url)
path_parts = parsed_url.path.split("/")
for page_number in range(0,1050,50):
new_path = "{0}/{1}.html".format("/".join(path_parts[:-1]), page_number)
parsed_url = parsed_url._replace(path= new_path)
print parsed_url.geturl()
Executing this script would give you the following:
http://test.com/All/0.html
http://test.com/All/50.html
http://test.com/All/100.html
http://test.com/All/150.html
http://test.com/All/200.html
http://test.com/All/250.html
http://test.com/All/300.html
http://test.com/All/350.html
http://test.com/All/400.html
http://test.com/All/450.html
http://test.com/All/500.html
http://test.com/All/550.html
http://test.com/All/600.html
http://test.com/All/650.html
http://test.com/All/700.html
http://test.com/All/750.html
http://test.com/All/800.html
http://test.com/All/850.html
http://test.com/All/900.html
http://test.com/All/950.html
http://test.com/All/1000.html
Instead of printing in the for loop you can use the value of
parsed_url.geturl() as per your need. As mentioned, if you want to fetch the
content of the page you can use python `requests` module in the following
manner:
    import urlparse
    import requests
found_url = 'http://test.com/All/0.html'
parsed_url = urlparse.urlparse(found_url)
path_parts = parsed_url.path.split("/")
for page_number in range(0,1050,50):
new_path = "{0}/{1}.html".format("/".join(path_parts[:-1]), page_number)
parsed_url = parsed_url._replace(path= new_path)
# print parsed_url.geturl()
url = parsed_url.geturl()
try:
r = requests.get(url)
if r.status_code == 200:
with open(str(page_number)+'.html', 'w') as f:
f.write(r.content)
except Exception as e:
print "Error scraping - " + url
print e
This fetches the content from `http://test.com/All/0.html` till
`http://test.com/All/1000.html` and saves the content of each URL into its own
file. The file name on disk would be the file name in URL - `0.html` to
`1000.html`
Depending on the performance of the site you are trying to scrape from you
might experience considerable time delays in running the script. If
performance is of importance, you can consider using
[grequests](https://github.com/kennethreitz/grequests)
|
How to use compile_commands.json with clang python bindings?
Question: I have the following script that attempts to print out all the AST nodes in a
given C++ file. This works fine when using it on a simple file with trivial
includes (header file in the same directory, etc).
#!/usr/bin/env python
from argparse import ArgumentParser, FileType
from clang import cindex
def node_info(node):
return {'kind': node.kind,
'usr': node.get_usr(),
'spelling': node.spelling,
'location': node.location,
'file': node.location.file.name,
'extent.start': node.extent.start,
'extent.end': node.extent.end,
'is_definition': node.is_definition()
}
def get_nodes_in_file(node, filename, ls=None):
ls = ls if ls is not None else []
for n in node.get_children():
if n.location.file is not None and n.location.file.name == filename:
ls.append(n)
get_nodes_in_file(n, filename, ls)
return ls
def main():
arg_parser = ArgumentParser()
arg_parser.add_argument('source_file', type=FileType('r+'),
help='C++ source file to parse.')
arg_parser.add_argument('compilation_database', type=FileType('r+'),
help='The compile_commands.json to use to parse the source file.')
args = arg_parser.parse_args()
compilation_database_path = args.compilation_database.name
source_file_path = args.source_file.name
clang_args = ['-x', 'c++', '-std=c++11', '-p', compilation_database_path]
index = cindex.Index.create()
translation_unit = index.parse(source_file_path, clang_args)
file_nodes = get_nodes_in_file(translation_unit.cursor, source_file_path)
print [p.spelling for p in file_nodes]
if __name__ == '__main__':
main()
However, I get a `clang.cindex.TranslationUnitLoadError: Error parsing
translation unit.` when I run the script and provide a valid C++ file that has
a compile_commands.json file in its parent directory. This code runs and
builds fine using CMake with clang, but I can't seem to figure out how to pass
the argument for pointing to the compile_commands.json correctly.
I also had difficulty finding this option in the clang documentation and could
not get `-ast-dump` to work. However, clang-check works fine by just passing
the file path!
Answer: Your own accepted answer is incorrect. `libclang` [does support compilation
databases](http://clang.llvm.org/doxygen/group__COMPILATIONDB.html) and [so
does cindex.py](https://github.com/llvm-
mirror/clang/blob/4ab9d6e02b29c24ca44638cc61b52cde2df4a888/bindings/python/clang/cindex.py#L2748),
the libclang python binding.
The main source of confusion might be that the compilation flags that libclang
knows/uses are only a subset of all arguments that can be passed to the clang
frontend. The compilation database is supported but does not work
automatically: it must be loaded and queried manually. Something like this
should work:
#!/usr/bin/env python
from argparse import ArgumentParser, FileType
from clang import cindex
compilation_database_path = args.compilation_database.name
source_file_path = args.source_file.name
index = cindex.Index.create()
# Step 1: load the compilation database
    compdb = cindex.CompilationDatabase.fromDirectory(compilation_database_path)
# Step 2: query compilation flags
try:
file_args = compdb.getCompileCommands(source_file_path)
translation_unit = index.parse(source_file_path, file_args)
file_nodes = get_nodes_in_file(translation_unit.cursor, source_file_path)
print [p.spelling for p in file_nodes]
    except cindex.CompilationDatabaseError:
print 'Could not load compilation flags for', source_file_path
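Note that `getCompileCommands()` returns CompileCommand objects rather than a plain list of flags, so if `index.parse` rejects `file_args`, a common workaround is to pull the argument list out of the first command and drop the compiler executable and the trailing file name (exactly which entries to drop depends on how your compile_commands.json was generated):
    cmds = compdb.getCompileCommands(source_file_path)
    file_args = list(cmds[0].arguments)[1:-1]
    translation_unit = index.parse(source_file_path, file_args)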
|
Convert table using python pandas
Question: I have a table like this:
vstid vstrseq date page timespent
1 1 1/1/16 a 20.00
1 1 1/1/16 b 3.00
1 1 1/1/16 c 131.00
1 1 1/1/16 d .000
1 1 1/1/16 a 3.00
I want this like:
A B date a b c d
1 1 1/1/16 23 3 131 0
How can I get it done in python? Any suggestions?
Answer: You could use pandas' [`pivot table`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.pivot_table.html) for this:
import pandas as pd
import numpy as np
df = pd.DataFrame({
"vstid": [1]*5,
"vstrseq": [1]*5,
"date": ["1/1/16"]*5,
"page": ["a", "b", "c", "d", "a"],
"timespent": [20.00, 3.00, 131.00, 0.000, 3.00]
})
table = df.pivot_table(index=["vstid", "vstrseq", "date"], values="timespent", columns="page", aggfunc=np.sum).reset_index()
print table.to_string(index=False)
which outputs
vstid vstrseq date a b c d
1 1 1/1/16 23 3 131 0
|
pip install produces OSError: [Errno 13] Permission denied:
Question: I'm wanting to install ten packages via pip in virtualenv.
I possibly used `sudo` improperly in my haste to get it "working" as suggested
by <http://stackoverflow.com/a/27939356/1063287>, ie I installed virtualenv
with sudo:
`sudo virtualenv --no-site-packages ENV`
I did this because without sudo I got this:
me@my-comp:/var/www/html$ virtualenv --no-site-packages ENV
Running virtualenv with interpreter /usr/bin/python2
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/virtualenv.py", line 2364, in <module>
main()
File "/usr/lib/python3/dist-packages/virtualenv.py", line 719, in main
symlink=options.symlink)
File "/usr/lib/python3/dist-packages/virtualenv.py", line 942, in create_environment
site_packages=site_packages, clear=clear, symlink=symlink))
File "/usr/lib/python3/dist-packages/virtualenv.py", line 1144, in install_python
mkdir(lib_dir)
File "/usr/lib/python3/dist-packages/virtualenv.py", line 324, in mkdir
os.makedirs(path)
File "/usr/lib/python2.7/os.py", line 150, in makedirs
makedirs(head, mode)
File "/usr/lib/python2.7/os.py", line 150, in makedirs
makedirs(head, mode)
File "/usr/lib/python2.7/os.py", line 157, in makedirs
mkdir(name, mode)
OSError: [Errno 13] Permission denied: '/var/www/html/ENV'
In `Ubuntu 16.04`, however, I cannot find "Disk Utility" to test the solution offered
there.
Trying to `pip install lxml` results in this final error:
Command "/var/www/html/ENV/bin/python2 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-jcCDbh/lxml/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-_oNugl-record/install-record.txt --single-version-externally-managed --compile --install-headers /var/www/html/ENV/include/site/python2.7/lxml" failed with error code 1 in /tmp/pip-build-jcCDbh/lxml/
Whilst two other examples are below:
**pip install bottle:**
(ENV) me@my-comp:/var/www/html/ENV$ pip install bottle
Collecting bottle
Installing collected packages: bottle
Exception:
Traceback (most recent call last):
File "/var/www/html/ENV/local/lib/python2.7/site-packages/pip/basecommand.py", line 209, in main
status = self.run(options, args)
File "/var/www/html/ENV/local/lib/python2.7/site-packages/pip/commands/install.py", line 335, in run
prefix=options.prefix_path,
File "/var/www/html/ENV/local/lib/python2.7/site-packages/pip/req/req_set.py", line 732, in install
**kwargs
File "/var/www/html/ENV/local/lib/python2.7/site-packages/pip/req/req_install.py", line 835, in install
self.move_wheel_files(self.source_dir, root=root, prefix=prefix)
File "/var/www/html/ENV/local/lib/python2.7/site-packages/pip/req/req_install.py", line 1030, in move_wheel_files
isolated=self.isolated,
File "/var/www/html/ENV/local/lib/python2.7/site-packages/pip/wheel.py", line 344, in move_wheel_files
clobber(source, lib_dir, True)
File "/var/www/html/ENV/local/lib/python2.7/site-packages/pip/wheel.py", line 322, in clobber
shutil.copyfile(srcfile, destfile)
File "/usr/lib/python2.7/shutil.py", line 83, in copyfile
with open(dst, 'wb') as fdst:
IOError: [Errno 13] Permission denied: '/var/www/html/ENV/lib/python2.7/site-packages/bottle.pyc'
**pip install requests:**
(ENV) me@my-comp:/var/www/html/ENV$ pip install requests
Collecting requests
Using cached requests-2.9.1-py2.py3-none-any.whl
Installing collected packages: requests
Exception:
Traceback (most recent call last):
File "/var/www/html/ENV/local/lib/python2.7/site-packages/pip/basecommand.py", line 209, in main
status = self.run(options, args)
File "/var/www/html/ENV/local/lib/python2.7/site-packages/pip/commands/install.py", line 335, in run
prefix=options.prefix_path,
File "/var/www/html/ENV/local/lib/python2.7/site-packages/pip/req/req_set.py", line 732, in install
**kwargs
File "/var/www/html/ENV/local/lib/python2.7/site-packages/pip/req/req_install.py", line 835, in install
self.move_wheel_files(self.source_dir, root=root, prefix=prefix)
File "/var/www/html/ENV/local/lib/python2.7/site-packages/pip/req/req_install.py", line 1030, in move_wheel_files
isolated=self.isolated,
File "/var/www/html/ENV/local/lib/python2.7/site-packages/pip/wheel.py", line 344, in move_wheel_files
clobber(source, lib_dir, True)
File "/var/www/html/ENV/local/lib/python2.7/site-packages/pip/wheel.py", line 315, in clobber
ensure_dir(destdir)
File "/var/www/html/ENV/local/lib/python2.7/site-packages/pip/utils/__init__.py", line 83, in ensure_dir
os.makedirs(path)
File "/var/www/html/ENV/lib/python2.7/os.py", line 157, in makedirs
mkdir(name, mode)
OSError: [Errno 13] Permission denied: '/var/www/html/ENV/lib/python2.7/site-packages/requests-2.9.1.dist-info'
If I use `sudo pip install bottle`, I get:
`sudo: pip: command not found`
**Update:**
I ran this suggestion:
`$sudo chown -R $(whoami) /var/www/html/ENV`
and can now pip install `bottle`, `requests`, `pymongo`, `beautifulsoup4`,
`Beaker`, `pycrypto` and `tldextract`. However, `lxml` and `pillow` are
failing.
**lxml fail:**
Failed building wheel for lxml
Command "/var/www/html/ENV/bin/python2 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-yHLQQe/lxml/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-hLznuQ-record/install-record.txt --single-version-externally-managed --compile --install-headers /var/www/html/ENV/include/site/python2.7/lxml" failed with error code 1 in /tmp/pip-build-yHLQQe/lxml/
**pillow fail:**
Failed building wheel for pillow
Command "/var/www/html/ENV/bin/python2 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-IkuM34/pillow/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-60McJh-record/install-record.txt --single-version-externally-managed --compile --install-headers /var/www/html/ENV/include/site/python2.7/pillow" failed with error code 1 in /tmp/pip-build-IkuM34/pillow/
I have tried the suggestion here:
<http://stackoverflow.com/a/6504860/1063287>
for troubleshooting these remaining errors and `libxml2-dev`, `libxslt1-dev`
and `python2.7-dev` are already installed.
**Update 2:**
Installed `zlib1g-dev` as per:
<http://stackoverflow.com/a/19289133/1063287>
and can install `lxml` now.
Still can't install `pillow`.
**Update 3:**
Installed `libjpeg8-dev` as per:
<http://stackoverflow.com/a/33582789/1063287>
and can now install `pillow`.
Answer: Have you installed pip?
Try installing pip: first make sure Python is installed with
    sudo apt-get install python
then download get-pip.py from <https://pip.pypa.io/en/stable/installing/> and run
'python get-pip.py'. This will install pip.
Then, for the permission-denied issue, use
$sudo chown -R $(whoami) /var/www/html/ENV
|
Merging 2 lists in Python
Question: With my current script
from lxml import html
import requests
from bs4 import BeautifulSoup
import re
import csv
import itertools
r = requests.get("http://www.mediamarkt.be/mcs/productlist/_128-tot-150-cm-51-tot-59-,98952,501091.html?langId=-17")
soup = BeautifulSoup((r.content),'lxml')
links = soup.find_all("h2")
g_data = soup.find_all("div", {"class": "price small"})
for item in g_data:
prijs =[item.text.encode("utf-8") for item in g_data]
for link in links:
if "TV" in link.text:
product = [link.text.encode("utf-8").strip() for link in links if "TV" in link.text]
for item in itertools.chain(prijs + product):
print item
I'm getting a list with all the "prijs" values first and all the "product" values below
them, for example: prijs, prijs, prijs, product, product, product.
I would like to get the following result instead:
Prijs Product
Prijs Product
Prijs Product
Thank you
Answer: The nature of the problem seems to have little to do with your actual code, so
in order to make this question useful for future readers I am going to give
you a general answer using example lists.
Don't concatenate your two lists. Generate a list of pairs with `zip`, then
flatten the result.
>>> lst1 = ['a1', 'a2', 'a3']
>>> lst2 = ['b1', 'b2', 'b3']
>>> [x for pair in zip(lst1, lst2) for x in pair]
['a1', 'b1', 'a2', 'b2', 'a3', 'b3']
The flattening looks a little bit nicer with `itertools.chain`.
>>> list(chain.from_iterable(zip(lst1, lst2)))
['a1', 'b1', 'a2', 'b2', 'a3', 'b3']
Alternatively, with unpacking:
>>> list(chain(*zip(lst1, lst2)))
['a1', 'b1', 'a2', 'b2', 'a3', 'b3']
Since you are using Python 2, all of the above could be made more memory
efficient by using `itertools.izip` instead of `zip` (the former returns an
iterator, the latter a list).
|
Convert list of tuples w/ length 5 to dictionary in Python
Question: what if I have a tuple list like this:
list = [('Ana', 'Lisbon', 42195, '10-18', 2224),
('Eva', 'New York', 42195, '06-13', 2319),
('Ana', 'Tokyo', 42195, '02-22', 2403),
('Eva', 'Sao Paulo', 21098, '04-12', 1182),
('Ana', 'Sao Paulo', 21098, '04-12', 1096),
('Dulce', 'Tokyo', 42195, '02-22', 2449),
('Ana', 'Boston', 42195, '04-20', 2187)]
How can I convert this to a dictionary like this one?
dict = {'Ana': [('Ana', 'Lisboa', 42195, '10-18', 2224),('Ana', 'Toquio',42195, '02-22', 2403),
('Ana', 'Sao Paulo', 21098, '04-12', 1096),('Ana', 'Boston', 42195, '04-20', 2187)],
'Dulce': [('Dulce', 'Toquio', 42195, '02-22', 2449)],
'Eva': [('Eva', 'Nova Iorque', 42195, '06-13', 2319),
('Eva', 'Sao Paulo', 21098, '04-12', 1182)]}
Answer: You can just loop through the list (named `list1` below, to avoid shadowing the built-in `list`) like this:
from collections import defaultdict
combined = defaultdict(list)
for i in list1:
combined[i[0]].append(i)
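With the data from the question (the list of tuples being the `list1` the loop iterates over), the resulting defaultdict can then be used like a normal dictionary:
    >>> combined['Dulce']
    [('Dulce', 'Tokyo', 42195, '02-22', 2449)]
    >>> combined['Eva'][0]
    ('Eva', 'New York', 42195, '06-13', 2319)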
|
Printing lists in column format in Python
Question: I'm setting up a game of solitaire and I'm trying to figure out some ways that
I could print each list of cards in column format. Any ideas on how I could go
about doing this with the following lists?
[6♦]
[2♣, 6♠, A♣, 7♣, J♣, XX]
[4♥, 2♥, 4♠, 8♣, 5♦, XX, XX]
[5♠, 3♦, A♠, 10♦, 3♠, XX, XX, XX]
[7♥, 10♣, 10♥, 2♦, J♠, XX, XX, XX, XX]
[8♦, 3♣, 7♦, 9♥, K♠, XX, XX, XX, XX, XX]
[7♠, Q♠, 9♠, A♦, 3♥, XX, XX, XX, XX, XX, XX]
Answer: Taking some guesswork on what you have available in your code and what you
want to do, I would say that you should print an element from each list on a
row and then move to the next list.
# -*- coding: utf-8 -*-
from itertools import izip_longest
L1 = [u'6♦']
L2 = [u'2♣', u'6♠', u'A♣', u'7♣', u'J♣', u'XX']
L3 = [u'4♥', u'2♥', u'4♠', u'8♣', u'5♦', u'XX', u'XX']
for a,b,c in izip_longest(L1, L2, L3, fillvalue=' '):
print u'{0}\t{1}\t{2}'.format(a,b,c)
With few changes, you should get what you are looking for. However for more
serious terminal game UI, you should consider using [python
curses](https://docs.python.org/2/howto/curses.html).
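To print all seven piles at once you can unpack the list of piles into `izip_longest`; a sketch assuming L4 to L7 are the remaining lists from the question, defined the same way:
    piles = [L1, L2, L3, L4, L5, L6, L7]
    for row in izip_longest(*piles, fillvalue=u'  '):
        print u'\t'.join(row)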
|
Python paho-MQTT connection with azure IoT-Hub
Question: I am trying to connect with Azure IoT-Hub with MQTT and send and receive
messages.
I am following the official documentation given
[here](https://azure.microsoft.com/en-in/documentation/articles/iot-hub-mqtt-
support/)
But it always gets disconnected with result code 1, and it never even reaches
the on_connect function. However, if I try to publish outside the callbacks (the
commented-out line after the connect call), it does reach the on_publish method.
The device I am using here is a simulated device I created in the Azure IoT
Suite.
Here is the code I am using
from paho.mqtt import client as mqtt
def on_connect(client, userdata, flags, rc):
print "Connected with result code: %s" % rc
client.subscribe("devices/MyTestDevice02/messages/devicebound/#")
client.publish("devices/MyTestDevice02/messages/events", "Hello World!")
def on_disconnect(client, userdata, rc):
print "Disconnected with result code: %s" % rc
def on_message(client, userdata, msg):
print " - ".join((msg.topic, str(msg.payload)))
client.publish("devices/MyTestDevice02/messages/events", "REPLY", qos=1)
def on_publish(client, userdata, mid):
print "Sent message"
client = mqtt.Client("MyTestDevice02", mqtt.MQTTv311)
client.on_connect = on_connect
client.on_disconnect = on_disconnect
client.on_message = on_message
client.on_publish = on_publish
client.username_pw_set(username="USERNAME.azure-devices.net/MyTestDevice02",
password="SharedAccessSignature=SharedAccessSignature sr=USERNAME.azure-devices.net%2fdevices%2fMyTestDevice02&sig=xxxxxx5rRr7c%3d&se=1492318301")
client.tls_insecure_set(True) # You can also set the proper certificate using client.tls_set()
client.connect("USERNAME.azure-devices.net", port=8883)
#client.publish("devices/MyTestDevice02/messages/events", "Hello World!")
client.loop_forever()
Any help is appreciated. I don't want to use the SDK, which is why I am
trying to publish directly.
Answer: You've not enabled TLS - you'll need to use `tls_set()` to pass the CA
certificates to trust. Using `tls_insecure_set()` on its own does nothing.
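A minimal sketch of that call (the CA bundle path is an assumption; use any bundle that contains the root CA your IoT Hub's certificate chains to):
    import ssl

    client.tls_set(ca_certs="/etc/ssl/certs/ca-certificates.crt",  # assumed path to a CA bundle
                   cert_reqs=ssl.CERT_REQUIRED)
    client.connect("USERNAME.azure-devices.net", port=8883)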
|
Remove punctuation in sentiment analysis in python
Question: I have the following code I made. It works great but problems arise when I add
sentences with commas, full-stops etc. I've researched and can see strip() as
a potential option to fix it? I can't see where to add it and have tried but
just error after error!
Thanks
sent_analysis = {"beer": 10, "wine":13,"spirit": 11,"cider":16,"shot":16}
def sentiment_analysis(dic, text):
split_text = text.split()
result = 0.00
for i in split_text:
if i in dic:
result+= dic[i]
return result
print sentiment_analysis(sent_analysis,"the beer, wine and cider were great")
print sentiment_analysis(sent_analysis,"the beer and the wine were great")
Answer: Regular expressions can be used to remove all non-alphanumeric characters
from a string. In the code below, `[^\w\s]` matches anything that is not (as indicated
by the `^`) a-z, A-Z, 0-9, underscore, or whitespace, and `re.sub` removes those
characters. The return statement then iterates through the split string, keeps the
words found in the dictionary, and returns the sum of their scores.
[Regex \s](http://www.w3schools.com/jsref/jsref_regexp_whitespace.asp)
[Regex \w](http://www.w3schools.com/jsref/jsref_regexp_wordchar.asp)
import re
sent_analysis = {"beer": 10, "wine":13,"spirit": 11,"cider":16,"shot":16}
def sentiment_analysis(dic, text):
result = 0.00
s = re.sub(r'[^\w\s]','',text)
return sum([dic[x] for x in s.split() if x in dic])
print(sentiment_analysis(sent_analysis,"the beer,% wine &*and cider @were great"))
Output: 39
This will account for most punctuation, as indicated by the many different
ones added in the example string.
|
Python: what's the difference - abs and operator.abs
Question: In python what is the difference between :
`abs(a)` and `operator.abs(a)`
They appear to be exactly the same and work alike. If they are the same, why
were two separate functions made that do the same thing?
If either one has some specific functionality, please explain it.
Answer: There is no difference. The documentation even says so:
>>> import operator
>>> print(operator.abs.__doc__)
abs(a) -- Same as abs(a).
It is implemented as a wrapper just so the documentation can be updated:
from builtins import abs as _abs
# ...
def abs(a):
"Same as abs(a)."
return _abs(a)
(Note, the above Python implementation is only used if the [C module
itself](https://hg.python.org/cpython/file/v3.5.1/Modules/_operator.c#l78)
can't be loaded).
It is there _purely_ to complement the other (mathematical) operators; e.g. if
you wanted to do dynamic operator lookups on that module you don't have to
special-case `abs()`.
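For example, a dispatch table built from `operator` can treat `abs` like any other operation (a small illustrative sketch):
    import operator

    ops = {'add': operator.add, 'mul': operator.mul, 'abs': operator.abs}
    print(ops['add'](2, 3))   # 5
    print(ops['abs'](-7))     # 7 -- no special case needed for the unary abs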
|
I succesfully installed scikit-flow by using pip but somehow it doesn't work when I import and use it
Question: I already installed scikit-learn the other day, and the code which I tried to
execute is as follows.
import skflow
from sklearn import datasets, metrics
iris = datasets.load_iris()
classifier = skflow.TensorFlowLinearClassifier(n_classes=3)
classifier.fit(iris.data, iris.target)
score = metrics.accuracy_score(classifier.predict(iris.data), iris.target)
print("Accuracy: %f" % score)
The code above is one I found on a GitHub Gist page, and the result was:
    ImportError                               Traceback (most recent call last)
    <ipython-input> in <module>()
    ----> 1 import skflow
          2 from sklearn import datasets, metrics
          3
          4 iris = datasets.load_iris()
          5 classifier = skflow.TensorFlowLinearClassifier(n_classes=3)

    /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/skflow/__init__.py in <module>()
         16 import pkg_resources as pkg_rs
         17 import numpy as np
    ---> 18 import tensorflow as tf
         19
         20 from skflow.io import *

    /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/__init__.py in <module>()
         21 from __future__ import print_function
         22
    ---> 23 from tensorflow.python import *

    /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/__init__.py in <module>()
         43 _default_dlopen_flags = sys.getdlopenflags()
         44 sys.setdlopenflags(_default_dlopen_flags | ctypes.RTLD_GLOBAL)
    ---> 45 from tensorflow.python import pywrap_tensorflow
         46 sys.setdlopenflags(_default_dlopen_flags)
         47

    /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow.py in <module>()
         26     fp.close()
         27     return _mod
    ---> 28 _pywrap_tensorflow = swig_import_helper()
         29 del swig_import_helper
         30 else:

    /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow.py in swig_import_helper()
         22     if fp is not None:
         23         try:
    ---> 24             _mod = imp.load_module('_pywrap_tensorflow', fp, pathname, description)
         25         finally:
         26             fp.close()

    ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so, 10): no suitable image found. Did find:
        /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so: mach-o, but wrong architecture
I started coding very recently, so I can't handle this problem at all. What
does 'wrong architecture' mean here? I hope someone can answer this.
Answer: Looks like you have the wrong version of tensorflow installed. "Wrong
architecture" in this case probably means that you installed the Linux version of
TF on your OS X machine.
I would recommend uninstalling both tensorflow and skflow and then running
this command (for OS X with Python2.7):
sudo easy_install --upgrade six
sudo pip install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.8.0rc0-py2-none-any.whl
Skflow is now part of the TensorFlow, so you can use by importing `import
tensorflow.contrib.learn as skflow` instead of `import skflow`.
|
Matplotlib animation inside your own PyQt4 GUI
Question: I'm writing software in Python. I need to embed a Matplotlib time-animation
into a self-made GUI. Here are some more details about them:
### 1\. The GUI
The GUI is written in Python as well, using the PyQt4 library. My GUI is not
very different from the common GUIs you can find on the net. I just subclass
**QtGui.QMainWindow** and add some buttons, a layout, ...
### 2\. The animation
The Matplotlib animation is based on the **animation.TimedAnimation** class.
Here is the code for the animation:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.lines import Line2D
import matplotlib.animation as animation
class CustomGraph(animation.TimedAnimation):
def __init__(self):
self.n = np.linspace(0, 1000, 1001)
self.y = 1.5 + np.sin(self.n/20)
#self.y = np.zeros(self.n.size)
# The window
self.fig = plt.figure()
ax1 = self.fig.add_subplot(1, 2, 1)
self.mngr = plt.get_current_fig_manager()
self.mngr.window.setGeometry(50,100,2000, 800)
# ax1 settings
ax1.set_xlabel('time')
ax1.set_ylabel('raw data')
self.line1 = Line2D([], [], color='blue')
ax1.add_line(self.line1)
ax1.set_xlim(0, 1000)
ax1.set_ylim(0, 4)
animation.TimedAnimation.__init__(self, self.fig, interval=20, blit=True)
def _draw_frame(self, framedata):
i = framedata
print(i)
self.line1.set_data(self.n[ 0 : i ], self.y[ 0 : i ])
self._drawn_artists = [self.line1]
def new_frame_seq(self):
return iter(range(self.n.size))
def _init_draw(self):
lines = [self.line1]
for l in lines:
l.set_data([], [])
def showMyAnimation(self):
plt.show()
''' End Class '''
if __name__== '__main__':
print("Define myGraph")
myGraph = CustomGraph()
myGraph.showMyAnimation()
This code produces a simple animation:
[](http://i.stack.imgur.com/oHfhf.png)
The animation itself works fine. Run the code, the animation pops up in a
small window and it starts running. But how do I embed the animation in my own
self-made GUI?
### 3\. Embedding the animation in a self-made GUI
I have done some research to find out. Here are some things I tried. I have
added the following code to the python file. Note that this added code is
actually an extra class definition:
from PyQt4 import QtGui
from PyQt4 import QtCore
from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg as FigureCanvas
class CustomFigCanvas(FigureCanvas):
def __init__(self):
self.myGraph = CustomGraph()
FigureCanvas.__init__(self, self.myGraph.fig)
What I try to do here is embedding the **CustomGraph()** object - which is
essentially my animation - into a **FigureCanvas**.
I wrote my GUI in another python file (but still in the same folder). Normally
I can add Widgets to my GUI. I believe that an object from the class
**CustomFigCanvas(..)** is a Widget through inheritance. So this is what I try
in my GUI:
..
myFigCanvas = CustomFigCanvas()
self.myLayout.addWidget(myFigCanvas)
..
It works to some extent. I get indeed a figure displayed in my GUI. But the
figure is empty. The animation doesn't run:
[](http://i.stack.imgur.com/kUTNN.png)
And there is even another strange phenomenon going on. My GUI displays this
empty figure, but I get simultaneously a regular Matplotlib popup window with
my animation figure in it. Also this animation is not running.
There is clearly something that I'm missing here. Please help me to figure out
this problem. I appreciate very much all help.
Answer: I think I found the solution. All credit goes to Mr. Harrison who made the
Python tutorial website <https://pythonprogramming.net>. He helped me out.
So here is what I did. Two major changes:
### 1\. Structural change
I previously had two classes: **CustomGraph(TimedAnimation)** and
**CustomFigCanvas(FigureCanvas)**. Now I have only one left, but it inherits
from both TimedAnimation and FigureCanvas: **CustomFigCanvas(TimedAnimation,
FigureCanvas)**
### 2\. Change in making the figure object
This is how I made the figure previously:
self.fig = plt.figure()
With `plt` coming from the import statement `import matplotlib.pyplot as plt`.
This way of making the figure apparently causes trouble when you want
to embed it into your own GUI. So there is a better way to do it:
self.fig = Figure(figsize=(5,5), dpi=100)
And now it works!
Here is the complete code:
import numpy as np
from matplotlib.figure import Figure
from matplotlib.animation import TimedAnimation
from matplotlib.lines import Line2D
from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg as FigureCanvas
class CustomFigCanvas(FigureCanvas, TimedAnimation):
def __init__(self):
# The data
self.n = np.linspace(0, 1000, 1001)
self.y = 1.5 + np.sin(self.n/20)
# The window
self.fig = Figure(figsize=(5,5), dpi=100)
ax1 = self.fig.add_subplot(111)
# ax1 settings
ax1.set_xlabel('time')
ax1.set_ylabel('raw data')
self.line1 = Line2D([], [], color='blue')
ax1.add_line(self.line1)
ax1.set_xlim(0, 1000)
ax1.set_ylim(0, 4)
FigureCanvas.__init__(self, self.fig)
TimedAnimation.__init__(self, self.fig, interval = 20, blit = True)
def _draw_frame(self, framedata):
i = framedata
print(i)
self.line1.set_data(self.n[ 0 : i ], self.y[ 0 : i ])
self._drawn_artists = [self.line1]
def new_frame_seq(self):
return iter(range(self.n.size))
def _init_draw(self):
lines = [self.line1]
for l in lines:
l.set_data([], [])
''' End Class '''
That's the code to make the animation in matplotlib. Now you can easily embed
it into your own Qt GUI:
..
myFigCanvas = CustomFigCanvas()
self.myLayout.addWidget(myFigCanvas)
..
It seems to work pretty fine. Thank you Mr. Harrison!
**EDIT:**
I came back to this question after many months. Here is the complete code.
Just copy-paste it into a fresh `.py` file, and run it:
###################################################################
# #
# PLOTTING A LIVE GRAPH #
# ---------------------------- #
# EMBED A MATPLOTLIB ANIMATION INSIDE YOUR #
# OWN GUI! #
# #
###################################################################
import sys
import os
from PyQt4 import QtGui
from PyQt4 import QtCore
import functools
import numpy as np
import random as rd
import matplotlib
matplotlib.use("Qt4Agg")
from matplotlib.figure import Figure
from matplotlib.animation import TimedAnimation
from matplotlib.lines import Line2D
from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg as FigureCanvas
import time
import threading
def setCustomSize(x, width, height):
sizePolicy = QtGui.QSizePolicy(QtGui.QSizePolicy.Fixed, QtGui.QSizePolicy.Fixed)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(x.sizePolicy().hasHeightForWidth())
x.setSizePolicy(sizePolicy)
x.setMinimumSize(QtCore.QSize(width, height))
x.setMaximumSize(QtCore.QSize(width, height))
''''''
class CustomMainWindow(QtGui.QMainWindow):
def __init__(self):
super(CustomMainWindow, self).__init__()
# Define the geometry of the main window
self.setGeometry(300, 300, 800, 400)
self.setWindowTitle("my first window")
# Create FRAME_A
self.FRAME_A = QtGui.QFrame(self)
self.FRAME_A.setStyleSheet("QWidget { background-color: %s }" % QtGui.QColor(210,210,235,255).name())
self.LAYOUT_A = QtGui.QGridLayout()
self.FRAME_A.setLayout(self.LAYOUT_A)
self.setCentralWidget(self.FRAME_A)
# Place the zoom button
self.zoomBtn = QtGui.QPushButton(text = 'zoom')
setCustomSize(self.zoomBtn, 100, 50)
self.zoomBtn.clicked.connect(self.zoomBtnAction)
self.LAYOUT_A.addWidget(self.zoomBtn, *(0,0))
# Place the matplotlib figure
self.myFig = CustomFigCanvas()
self.LAYOUT_A.addWidget(self.myFig, *(0,1))
# Add the callbackfunc to ..
myDataLoop = threading.Thread(name = 'myDataLoop', target = dataSendLoop, daemon = True, args = (self.addData_callbackFunc,))
myDataLoop.start()
self.show()
''''''
def zoomBtnAction(self):
print("zoom in")
self.myFig.zoomIn(5)
''''''
def addData_callbackFunc(self, value):
# print("Add data: " + str(value))
self.myFig.addData(value)
''' End Class '''
class CustomFigCanvas(FigureCanvas, TimedAnimation):
def __init__(self):
self.addedData = []
print(matplotlib.__version__)
# The data
self.xlim = 200
self.n = np.linspace(0, self.xlim - 1, self.xlim)
a = []
b = []
a.append(2.0)
a.append(4.0)
a.append(2.0)
b.append(4.0)
b.append(3.0)
b.append(4.0)
self.y = (self.n * 0.0) + 50
# The window
self.fig = Figure(figsize=(5,5), dpi=100)
self.ax1 = self.fig.add_subplot(111)
# self.ax1 settings
self.ax1.set_xlabel('time')
self.ax1.set_ylabel('raw data')
self.line1 = Line2D([], [], color='blue')
self.line1_tail = Line2D([], [], color='red', linewidth=2)
self.line1_head = Line2D([], [], color='red', marker='o', markeredgecolor='r')
self.ax1.add_line(self.line1)
self.ax1.add_line(self.line1_tail)
self.ax1.add_line(self.line1_head)
self.ax1.set_xlim(0, self.xlim - 1)
self.ax1.set_ylim(0, 100)
FigureCanvas.__init__(self, self.fig)
TimedAnimation.__init__(self, self.fig, interval = 50, blit = True)
def new_frame_seq(self):
return iter(range(self.n.size))
def _init_draw(self):
lines = [self.line1, self.line1_tail, self.line1_head]
for l in lines:
l.set_data([], [])
def addData(self, value):
self.addedData.append(value)
def zoomIn(self, value):
bottom = self.ax1.get_ylim()[0]
top = self.ax1.get_ylim()[1]
bottom += value
top -= value
self.ax1.set_ylim(bottom,top)
self.draw()
def _step(self, *args):
# Extends the _step() method for the TimedAnimation class.
try:
TimedAnimation._step(self, *args)
except Exception as e:
self.abc = getattr(self, 'abc', 0) + 1  # 'abc' is never set in __init__, so guard the counter
print(str(self.abc))
TimedAnimation._stop(self)
pass
def _draw_frame(self, framedata):
margin = 2
while(len(self.addedData) > 0):
self.y = np.roll(self.y, -1)
self.y[-1] = self.addedData[0]
del(self.addedData[0])
self.line1.set_data(self.n[ 0 : self.n.size - margin ], self.y[ 0 : self.n.size - margin ])
self.line1_tail.set_data(np.append(self.n[-10:-1 - margin], self.n[-1 - margin]), np.append(self.y[-10:-1 - margin], self.y[-1 - margin]))
self.line1_head.set_data(self.n[-1 - margin], self.y[-1 - margin])
self._drawn_artists = [self.line1, self.line1_tail, self.line1_head]
''' End Class '''
# You need to setup a signal slot mechanism, to
# send data to your GUI in a thread-safe way.
# Believe me, if you don't do this right, things
# go very very wrong..
class Communicate(QtCore.QObject):
data_signal = QtCore.pyqtSignal(float)
''' End Class '''
def dataSendLoop(addData_callbackFunc):
# Setup the signal-slot mechanism.
mySrc = Communicate()
mySrc.data_signal.connect(addData_callbackFunc)
# Simulate some data
n = np.linspace(0, 499, 500)
y = 50 + 25*(np.sin(n / 8.3)) + 10*(np.sin(n / 7.5)) - 5*(np.sin(n / 1.5))
i = 0
while(True):
if(i > 499):
i = 0
time.sleep(0.1)
mySrc.data_signal.emit(y[i]) # <- Here you emit a signal!
i += 1
###
###
if __name__== '__main__':
app = QtGui.QApplication(sys.argv)
QtGui.QApplication.setStyle(QtGui.QStyleFactory.create('Plastique'))
myGUI = CustomMainWindow()
sys.exit(app.exec_())
''''''
|
Writing more data to a file than reading?
Question: I am currently experimenting with how Python 3 handles bytes when reading and
writing data, and I have come across a particularly troubling problem whose source I
can't seem to find. I am basically reading bytes out of a JPEG file, converting them
to integers using `ord()`, then turning the bytes back into their original characters
with `chr(character).encode('utf-8')` and writing them back into a JPEG file. No
issue, right? Well, when I try opening the resulting JPEG file, I get a Windows 8.1
notification saying it cannot open the photo. When I check the two files against each
other, one is 5.04MB and the other is 7.63MB, which has me awfully confused.
def __main__():
operating_file = open('photo.jpg', 'rb')
while True:
data_chunk = operating_file.read(64*1024)
if len(data_chunk) == 0:
print('COMPLETE')
break
else:
new_operation = open('newFile.txt', 'ab')
for character in list(data_chunk):
new_operation.write(chr(character).encode('utf-8'))
if __name__ == '__main__':
__main__()
This is the exact code I am using, any ideas on what is happening and how I
can fix it?
NOTE: I am assuming that the list of numbers that `list(data_chunk)` provides
is the equivalent to `ord()`.
Answer: Here is a simple example you might wish to play with:
import sys
f = open('gash.txt', 'rb')
stuff=f.read() # stuff refers to a bytes object
f.close()
print(stuff)
f2 = open('gash2.txt', 'wb')
for i in stuff:
f2.write(i.to_bytes(1, sys.byteorder))
f2.close()
As you can see, the bytes object is iterable, but in the `for` loop we get
back an `int` in `i`. To convert that to a byte I use `int.to_bytes()` method.
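As a side note on the size difference in the question: in UTF-8, every code point
above 127 encodes to more than one byte, so round-tripping raw JPEG bytes through
`chr(...).encode('utf-8')` inflates the file:

    >>> len(chr(0x2F).encode('utf-8'))   # values below 128 stay one byte
    1
    >>> len(chr(0xFF).encode('utf-8'))   # values 128-255 become two bytes
    2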
|
How to convert an array of dimension 3 * 200 * 120 into a 1 * 600 * 120 in python?
Question: I have an array like this:
`[ array([[2,3,4,5,6,10]]) array([[7,3,9,1,2,3]]) array([[3,7,34,345,22,1]])
]`
I would like to convert the above array as follows:
`[[2 3 4 5 6 10] [7 3 9 1 2 3] [ 3 7 34 345 22 1]]`
Answer: Use
[`np.vstack`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.vstack.html):
import numpy as np
a = [ np.array([[2,3,4,5,6,10]]), np.array([[7,3,9,1,2,3]]), np.array([[3,7,34,345,22,1]]) ]
np.vstack(a)
# array([[ 2, 3, 4, 5, 6, 10],
# [ 7, 3, 9, 1, 2, 3],
# [ 3, 7, 34, 345, 22, 1]])
As @imaluengo pointed out in the comments: If you want to have a 3d array
you'd need to add another _empty_ dimension to your array:
res = np.vstack(a)
res3d = res[None, ...] # option 1 - ellipsis
res3d = res[None, :, :] # option 2
res3d = np.expand_dims(res, 0) # option 3 - using np.expand_dims
Your output looked like a list so you could use `.tolist()` afterwards - but
you would discard the advantages of numpy arrays.
|
Issue with Images in GUI python
Question: I am using Python 2.7 and for some reason it doesn't recognize some of the
modules. I want to display an image with Tkinter and it just doesn't work.
from Tkinter import *
import ImageTk
root = Tk()
frame = Frame(root)
frame.pack()
canvas = Canvas(frame, bg="black", width=500, height=500)
canvas.pack()
photoimage = ImageTk.PhotoImage(file="Logo.png")
canvas.create_image(150, 150, image=photoimage)
root.mainloop()
The error is:
C:\Python27\python.exe D:/Users/user-pc/Desktop/Appland/Project.py
Traceback (most recent call last):
File "D:/Users/user-pc/Desktop/Appland/Project.py", line 2, in <module>
import ImageTk
ImportError: No module named ImageTk
Process finished with exit code 1
Answer: `ImageTk` is a part of the `PIL` module.
You need to use `from PIL import ImageTk`
You'll also want to save a reference to your image. Here's one example.
photoimage = ImageTk.PhotoImage(file="Logo.png")
root.image = photoimage
canvas_image = canvas.create_image(150, 150, image=root.image)
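Putting both fixes together with the question's script, a minimal sketch (it assumes
Pillow is installed and that `Logo.png` is in the working directory):

    from Tkinter import *
    from PIL import ImageTk

    root = Tk()
    frame = Frame(root)
    frame.pack()
    canvas = Canvas(frame, bg="black", width=500, height=500)
    canvas.pack()

    photoimage = ImageTk.PhotoImage(file="Logo.png")
    root.image = photoimage  # keep a reference so the image isn't garbage-collected
    canvas.create_image(150, 150, image=root.image)

    root.mainloop()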
|
Graphing a colored grid in python
Question: I am trying to create a 2D plot in python where the horizontal axis is split
into a number of intervals or columns and the color of each column varies
along the vertical axis.
The color of each interval depends on the value of a periodic function of
time. For simplicity, let's say these values range between 0 and 1. Then
values closer to 1 should be dark red and values close to 0 should be dark
blue for example (the actual colors don't really matter).
Here is an example of what the plot should look like:
[](http://i.stack.imgur.com/XxWUL.png)
Is there a way to do this in Python using matplotlib?
Answer: This is really just displaying an image. You can do this with `imshow`.
import matplotlib.pyplot as plt
import numpy as np
# Just some example data (random)
data = np.random.rand(10,5)
rows,cols = data.shape
plt.imshow(data, interpolation='nearest',
extent=[0.5, 0.5+cols, 0.5, 0.5+rows],
cmap='bwr')
[](http://i.stack.imgur.com/hLAQm.png)
|
Unable to use Stanford NER in python module
Question: I want to use the Python Stanford NER module but keep getting an error. I searched
the internet but found nothing. Here is the basic usage with the error.
import ner
tagger = ner.HttpNER(host='localhost', port=8080)
tagger.get_entities("University of California is located in California, United States")
Error
Traceback (most recent call last):
File "<pyshell#3>", line 1, in <module>
tagger.get_entities("University of California is located in California, United States")
File "C:\Python27\lib\site-packages\ner\client.py", line 81, in get_entities
tagged_text = self.tag_text(text)
File "C:\Python27\lib\site-packages\ner\client.py", line 165, in tag_text
c.request('POST', self.location, params, headers)
File "C:\Python27\lib\httplib.py", line 1057, in request
self._send_request(method, url, body, headers)
File "C:\Python27\lib\httplib.py", line 1097, in _send_request
self.endheaders(body)
File "C:\Python27\lib\httplib.py", line 1053, in endheaders
self._send_output(message_body)
File "C:\Python27\lib\httplib.py", line 897, in _send_output
self.send(msg)
File "C:\Python27\lib\httplib.py", line 859, in send
self.connect()
File "C:\Python27\lib\httplib.py", line 836, in connect
self.timeout, self.source_address)
File "C:\Python27\lib\socket.py", line 575, in create_connection
raise err
error: [Errno 10061] No connection could be made because the target machine actively refused it
Using windows 10 with latest Java installed
Answer: * The Python Stanford NER module is a wrapper for the Stanford NER that allows you to run python commands to use the NER service.
* The NER service is a separate entity to the Python module. It is a Java program. To access this service, via python, or any other way, you first need to start the service.
* Details on how to start the Java Program/service can be found here - <http://nlp.stanford.edu/software/CRF-NER.shtml>
* The NER comes with a `.bat` file for windows and a `.sh` file for unix/linux. I think these files start the `GUI`
* To start the service without the `GUI` you should run a command similar to this:
`java -mx600m -cp stanford-ner.jar edu.stanford.nlp.ie.crf.CRFClassifier
-loadClassifier classifiers/english.all.3class.distsim.crf.ser.gz`
This runs the NER jar, sets the memory, and sets the classifier you want to
use. (I think you'll have to be in the Stanford NER directory to run this.)
* Once the NER program is running then you will be able to run your python code and query the NER.
|
Error to cloning project with puppet on vagrant
Question: I am trying to install django and clone a github project with a puppet script.
I am using modules as follows:
* files
  * (empty directory)
* manifests
  * nodes.pp
  * web.pp
* modules
  * django
    * manifests
      * init.pp
    * files
      * (empty directory)
  * git
    * manifests
      * init.pp
    * files
      * (empty directory)
  * postgres
Within the **web.pp** file I have:
import ' nodes.pp '
In **nodes.pp** file I have:
node default {
include postgres
include git
include django
}
In **init.pp** file within the Manifests folder that is inside the git folder
I have the following code:
class git{
include git::install
}
class git::install{
package { 'git:':
ensure => present
}
}
define git::clone ( $path, $dir){
exec { "clone-$name-$path":
command => "/usr/bin/git clone [email protected]:$name $path/$dir",
creates => "$path/$dir",
require => [Class["git"], File[$path]],
}
}
In **init.pp** file within the Manifests folder that is inside the django
folder I have the following code:
class django{
include django::install, django::clone, django::environment
}
class django::install {
package { [ "python", "python-dev", "python-virtualenv", "python-pip",
"python-psycopg2", "python-imaging"]:
ensure => present,
}
}
class django::clone {
git::clone { 'My GitHub repository name':
path => '/home/vagrant/',
dir => 'django',
}
}
define django::virtualenv( $path ){
exec { "create-ve-$path":
command => "/usr/bin/virtualenv -q $name",
cwd => $path,
creates => "$path/$name",
require => [Class["django::install"]],
}
}
class django::environment {
django::virtualenv{ 've':
path => '/usr/local/app',
}
}
To run the scripts puppet I use the command:
sudo puppet apply --modulepath=/vagrant/modules /vagrant/manifests/web.pp
and run this command I get the following **error** :
Could not find dependency File[/home/vagrant/] for
Exec[clone-My GitHub repository name-/home/vagrant/] at
/vagrant/modules/git/manifests/init.pp:16
Note: where is the name 'My GitHub repository name', I put the name of my
github repository correctly.
What is wrong and how do I solve this problem?
Answer: In your `git::clone` define, have you made sure to declare the file resource for
`$path`?
you should have:
file { $path: ensure => directory }
You can't _require_ a resource that you haven't explicitly declared.
|
When I import collections in my Python file, I can't access Ordered Dictionary?
Question: [missing ordered dictionary in
collections](http://i.stack.imgur.com/5x7NC.jpg)
As the title says, I can't access the ordered dictionary. I have searched everywhere
but found no solution. Please help.
Answer: You need to look for `collections.OrderedDict`, not `collections.ordereddict`.
Case matters, as your IDE appears to be case-sensitive.
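For example:

    from collections import OrderedDict

    d = OrderedDict()
    d['first'] = 1
    d['second'] = 2
    print(d)  # OrderedDict([('first', 1), ('second', 2)])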
|
Python MINIDOM Object How to get only the element name from DOM Object
Question: I have a python DOM object output, I need to get only the "Elements" from it.
Example:
[<DOM Text node "u'\n\t\t\t'">, <DOM Element: StartTime at 0x397af30>, <DOM Text node "u'\n\t\t\t'">, <DOM Element: EndTime at 0x397afd0>, <DOM Text node "u'\n\t\t'">]
I need output like
StartTime
EndTime
Can you please assist? Thanks
Answer: > _"I have a python DOM object output, I need to get only the "Elements" from
> it."_
You can filter your list item where `nodeType ==
xml.dom.minidom.Node.ELEMENT_NODE`. For example, assuming that your _'DOM
object output'_ stored in a variable named `output`, you can do as follow :
from xml.dom import minidom
.....
.....
result = [item for item in output if item.nodeType == minidom.Node.ELEMENT_NODE]
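If you then only need the element names themselves (as in the expected output), each
node exposes them via `nodeName` (or `tagName` for element nodes); continuing from the
snippet above:

    for item in result:
        print(item.nodeName)  # prints StartTime, then EndTime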
|
Python Selenium Firefox driver - Disable Images
Question: Previously I used the code below, but it doesn't work anymore since the Firefox
update.
from selenium.webdriver.firefox.firefox_profile import FirefoxProfile
firefoxProfile = FirefoxProfile()
firefoxProfile.set_preference('permissions.default.image', 2)
I also tried the one below; it seems to work, but is there a way to disable
images without an add-on or third-party tools?
from selenium import webdriver
firefox_profile = webdriver.FirefoxProfile()
firefox_profile.add_extension(folder_xpi_file_saved_in + "\\quickjava-2.0.6-fx.xpi")
firefox_profile.set_preference("thatoneguydotnet.QuickJava.startupStatus.Images", 2) ## Turns images off
Answer: Have you tried updating your selenium after the Firefox update?
eg :
sudo -H pip install --upgrade selenium
|
None type object attribute error Python
Question: Made a text game in python 2.7 following LPTHW by Zed Shaw. It consisted of
importing different files into one and calling it. The game is working but at
the end it gives me an attribute error.
Traceback (most recent call last):
File "Main.py", line 15, in <module> a_game.play()
File "C:\mystuff\Escape\Game_Engine.py", line 17, in play next_scene_name = current_scene.enter() #enter the current scene calling 'enter' function.
AttributeError: 'NoneType' object has no attribute 'enter'
**My code:** 1\. Rooms.py
#scenes module with all the scenes of the game and navigation.
from sys import exit
from random import randint
#Abstract Base class for scenes
class Scene(object):
#abstract class method enter to enter scenes.
def enter(self):
print "this scene not fully configured yet , implement enter()"
exit(1)
#Inheriting from the Scene class
class Death(Scene):
#create a list of ways to mock when you die.
ways = ["You deserve to die if you are so dumb!",
"Action without logic brings Death!",
"Such a loser! you die!" ,
"My grandma plays better than you!",
"My pet monkey plays this game better!"
]
#using the enter method from abstract class Scene
def enter(self):
print Death.ways[randint(0,len(self.ways)-1)]
exit(1)
#Opening scene with decisions
class Entrance(Scene):
def enter(self):
print "Welcome to your new mission ETHAN HUNT!"
print "Your mission if you choose to accept it is to sneak in the Rogue AI unit - Matrix and steal the nuclear codes , "
print "Then you have to place your bomb in the server room, "
print "and make your way through the roof for the waiting chopper to pick you up."
print "Save the world from nuclear destruction!"
print "Caution: the AI master unit - The Brain is said to be most intelligent virtual entity in the world!"
print "You have to defeat him in a math problem."
print "Do you accept the mission Ethan?"
print "Enter cool music! Ding ding ding ding ding dang ding dang..."
print "Enter yes or no."
choice = raw_input("> ")
if choice == "yes":
print "Brilliant Ethan! The world depends on you!"
print "Now you are outside the entrance: what will you do?"
print "You have 2 options: 1.sneak 2. shoot "
action = raw_input("> ")
if action == "sneak":
print "Well done! You are through the entrance on the way to control room."
print "No one suspects you!"
return 'control_room'
elif action == "shoot":
print "Was a dumb move! The guards overpower you and shoot you to bits!"
return 'death'
else:
print "Invalid input"
return 'entrance'
elif choice == "no":
print "Coward! We choose to terminate you instead!"
return 'death'
else:
print "Invalid input! Does Not Compute!"
return 'entrance'
#Next scene with decisions
class Control_Room(Scene):
def enter(self):
print "Now you are in the control room , your chance to proceed undetected!"
print "you find the control room guards , you inform them of some fire mishap outside!"
print "they leave to check the emergency.You meanwhile disable the CCTVs."
print "you sneak out. But the guards notice you."
print "They raise an alarm and quiz you!"
print "You have 2 options : 1.shoot 2.joke"
print "what you gonna do?"
action = raw_input("> ")
if action == "shoot":
print "you raise an alarm! dumbass move!"
print "they easily call other guards and shoot you to death!"
return 'death'
elif action == "joke":
print "you joke with the guards and tickle their funny bone."
print "they dont suspect you no more"
print "proceed on the mission!"
return 'AI_vault'
else:
print "invalid input"
return 'control_room'
#Next scene
class AI_Vault(Scene):
def enter(self):
print "welcome to the home of the Brain!"
print "Now you have to solve 3 math problems to get through and destruct me!"
print "I cant handle anyone else being more brainy than me!"
print "If you answer all 3 , The Brain would be forced to self destruct."
print "Here is your first problem!"
print "what is: 4x4+4x4+4-4x4 "
action = raw_input("> ")
if action == "20":
print "The Brain is furious , you got it right!"
print "I am sure you cant answer this one though , you miserable Human!"
print "Now to the next!"
print '''
Imagine you're on a game show, and you're given the choice of three doors:
Behind one door is a million dollars,
and behind the other two, nothing. You pick door #1,
and the host, who knows what's behind the doors, opens another door, say #3,
and it has nothing behind it. He then says to you,
"Do you want to stick with your choice or switch?
What will increase your probability to win? stick to first choice or switch? '''
action = raw_input("> ")
if action == "switch":
print "The Brain's muscles are red with rage! "
print "If you get this one right , he will self destruct with all the insult!!!"
else:
print "Now you die!"
return 'death'
print "what is the answer for: 6 / 2(1+2) ?"
action = raw_input("> ")
if action == "9":
print "Correct! You are a Math genius , that I never thought I would meet!"
print "The brain lets out a painful groan....and he self destructs!"
print "You just destroyed the Brain! Good going!"
return 'server_room'
else:
print "Wrong! Now you die!"
return 'death'
else:
print "Wrong! Now you die!"
return 'death'
#server scene with use of random module
class Server_Room(Scene):
def enter(self):
print "Now you are in the server room , retrieve the key nuclear codes"
print "You locate the case containing the nuclear codes. "
print "You have to guess the keycode to open the case containing nuclear codes"
print "After you guess , plant the bomb and escape to the roof."
print "You have 5 guesses to guess the 2 digit keycode for the container"
print "The digits can only be between 1 and 3. Goodluck!"
code = "%d%d" % (randint(1,3),randint(1,3))
guess = raw_input("## ")
chances = 0
while guess != code and chances < 4:
print "BZZZEEDDDD! Wrong!"
guess = raw_input("[keypad]## ")
chances += 1
if guess == code:
print "the container clicks open and you retrieve the nuclear codes. awesome!"
print "now you plant the bomb!"
print "the bomb starts ticking and its time to escape!"
print " you escape to the Roof where the chopper awaits."
return 'roof'
else:
print "the lock buzzes and the codes in papyrus roll melt away."
print "you despair and wait for the guards to discover you."
print "they capture you and put you through a dog's death!"
return 'death'
#Final scene with a question
class Roof(Scene):
def enter(self):
print "You escape to the rooftop using the AC ducts in the server room. "
print "You reach the rooftop."
print "Theres benjy waiting in the chopper hovering above."
print "Now there is another challenge: Benjy needs to know its really you, Ethan."
print "So you have to answer a random Science question."
print "If you solve it , he throws the rope to you , if not he shoots you."
print "Which Scientist discovered Oxygen Gas?"
guess = raw_input("> ")
chances = 0
while guess!= "Priestley" and chances < 4:
print "wrong!"
guess = raw_input("> ")
chances += 1
if guess == "Priestley":
print "correct! Hop in Ethan!"
print "you escape with the rope he throws and escape from the building bad ass style!"
print "As you fly away the city skyline , you enjoy the fireworks of the building. "
print "Mission accomplished!!!!! Well done Ethan! That was hard!"
return 'finished'
else:
print "We know rogue agents when we see one! you infiltrating bastard! now you die!"
print "Benjy shoots you and you die on rooftop!"
return 'death'
2. Room_maps.py
# Maps module, defining the 2 main methods of the class and a dictionary for
# all the scenes in it with key-value pairing.
import Rooms
import Game_Engine
#create class map with a dictionary for scene reference.
class Map(object):
scenes = {
'entrance' : Rooms.Entrance(),
'control_room' : Rooms.Control_Room(),
'AI_vault' : Rooms.AI_Vault(),
'server_room' : Rooms.Server_Room(),
'roof' : Rooms.Roof(),
'death' : Rooms.Death()
}
#constructor with start_scene as argument.
def __init__(self,start_scene):
self.start_scene = start_scene
#function to retrieve scenes from dictionary
def next_scene(self,scene_name):
val = Map.scenes.get(scene_name)
return val
#using the next_scene function to display opening scene
def opening_scene(self):
return self.next_scene(self.start_scene)
3. Game_Engine.py
# Game engine module which runs the game with the method play and using the
# map methods to get from one scene to another.
class Engine(object):
#constructor with scene_map as argument
def __init__(self,scene_map):
self.scene_map = scene_map
#function to enter opening scene
def play(self):
current_scene = self.scene_map.opening_scene()
while True:
print "<<<<<<<<<< MI-Escape from Rogue AI >>>>>>>>>>>>>> "
next_scene_name = current_scene.enter() #enter the current scene calling 'enter' function.
current_scene = self.scene_map.next_scene(next_scene_name) #to enter the next scene calling map function
4. Main.py (the file I call in PowerShell.)
# The Main module to call other modules
import Room_maps
import Rooms
import Game_Engine
# instances of Map , Engine created
a_map = Room_maps.Map('entrance')
# instance of Game Engine created.
a_game = Game_Engine.Engine(a_map)
# calling the play function from Engine to start game.
a_game.play()
Why am I getting that error? What can I do to fix it?
Thank you.
Answer: There's no room called 'finished'. When you get to the end it won't find it.
Easily fixed in the main loop.
Change play to something like:
def play(self):
current_scene = self.scene_map.opening_scene()
while current_scene is not None:
print "<<<<<<<<<< MI-Escape from Rogue AI >>>>>>>>>>>>>> "
next_scene_name = current_scene.enter() #enter the current scene calling 'enter' function.
current_scene = self.scene_map.next_scene(next_scene_name) #to enter the next scene calling map function
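Alternatively (just another option, not required by the fix above), you could register a
real 'finished' scene so that `next_scene()` never returns `None`; a minimal sketch in the
style of the other Rooms classes:

    # In Rooms.py
    class Finished(Scene):
        def enter(self):
            print "Mission accomplished - thanks for playing!"
            exit(0)

    # In Room_maps.py, add the new scene to the dictionary:
    #   'finished' : Rooms.Finished(),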
|
Importing resource file to PyQt code?
Question: I have read the Qt documentation and a lot of questions more or less similar to this one,
but I still haven't figured out how to do it.
I'm not entirely sure how to import a resource file into Python code so that the pixmap
appears without any issues.
* * *
I have all files in the same directory. I created a .qrc file and compiled it with:
`rcc -binary resources.qrc -o res.rcc` to make the resource file.
I imported res_rcc but the pixmap on the label was still not shown:
`import res_rcc`
* * *
This is what I had in my .qrc file:
<RCC>
<qresource prefix="newPrefix">
<file>download.jpeg</file>
</qresource>
</RCC>
# Question:
How can I import resource files in the PyQt code? **|** If pixmaps are in the
same directory as the .qrc resource files, do I still need to specify the full path?
Answer: For PyQt you have to use pyrcc4, which is the equivalent of rcc for Python.
pyrcc4 -o resources.py resources.qrc
This generates the resources.py module that needs to be imported in the python
code in order to make the resources available.
import resources
To use the resource in your code you have to use the ":/" prefix:
Example
from PyQt4.QtCore import *
from PyQt4.QtGui import *
import resources
pixmap = QPixmap(":/newPrefix/download.jpeg")
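To actually display it, set the pixmap on a widget as usual, for example on a label:

    label = QLabel()
    label.setPixmap(pixmap)
    label.show()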
See [The PyQt4 Resource
System](http://pyqt.sourceforge.net/Docs/PyQt4/resources.html) and [The Qt
Resource System](http://doc.qt.io/qt-5/resources.html)
|
How to debug "pika.exceptions.AuthenticationError: EXTERNAL" error when establishing TLS connection to RabbitMQ?
Question: I have a RabbitMQ 3.6.1 server on Ubuntu 14.04 running properly. I tried to
configure an SSL listener according to [official
documentation](https://www.rabbitmq.com/ssl.html). No problems during the
startup.
However when trying to establish a connection, I get the following error on
Python/pika side (full transcript below):
pika.exceptions.AuthenticationError: EXTERNAL
What does `EXTERNAL` mean here? How to debug / get further details of the
error?
* * *
Course of actions (to test I used a Vagrant box and a local connection):
1. RabbitMQ starts SSL Listener on port 5671 (per `/var/log/rabbitmq/[email protected]`):
started SSL Listener on [::]:5671
2. I execute the `pika.BlockingConnection` on the client side.
3. On the server side I can see an incoming connection:
=INFO REPORT==== 17-Apr-2016::17:07:15 ===
accepting AMQP connection <0.2788.0> (127.0.0.1:48404 -> 127.0.0.1:5671)
4. Client fails with:
pika.exceptions.AuthenticationError: EXTERNAL
5. Server timeouts:
=ERROR REPORT==== 17-Apr-2016::17:07:25 ===
closing AMQP connection <0.2788.0> (127.0.0.1:48404 -> 127.0.0.1:5671):
{handshake_timeout,frame_header}
* * *
Full transcript of the client side:
>>> import pika, ssl
>>> from pika.credentials import ExternalCredentials
>>> ssl_options = ({"ca_certs": "/etc/rabbitmq/certs/testca/cacert.pem",
... "certfile": "/etc/rabbitmq/certs/client/cert.pem",
... "keyfile": "/etc/rabbitmq/certs/client/key.pem",
... "cert_reqs": ssl.CERT_REQUIRED,
... "server_side": False})
>>> host = "localhost"
>>> connection = pika.BlockingConnection(
... pika.ConnectionParameters(
... host, 5671, credentials=ExternalCredentials(),
... ssl=True, ssl_options=ssl_options))
Traceback (most recent call last):
File "<stdin>", line 4, in <module>
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 339, in __init__
self._process_io_for_connection_setup()
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 374, in _process_io_for_connection_setup
self._open_error_result.is_ready)
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 410, in _flush_output
self._impl.ioloop.poll()
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/select_connection.py", line 602, in poll
self._process_fd_events(fd_event_map, write_only)
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/select_connection.py", line 443, in _process_fd_events
handler(fileno, events, write_only=write_only)
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/base_connection.py", line 364, in _handle_events
self._handle_read()
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/base_connection.py", line 415, in _handle_read
self._on_data_available(data)
File "/usr/local/lib/python2.7/dist-packages/pika/connection.py", line 1347, in _on_data_available
self._process_frame(frame_value)
File "/usr/local/lib/python2.7/dist-packages/pika/connection.py", line 1414, in _process_frame
if self._process_callbacks(frame_value):
File "/usr/local/lib/python2.7/dist-packages/pika/connection.py", line 1384, in _process_callbacks
frame_value) # Args
File "/usr/local/lib/python2.7/dist-packages/pika/callback.py", line 60, in wrapper
return function(*tuple(args), **kwargs)
File "/usr/local/lib/python2.7/dist-packages/pika/callback.py", line 92, in wrapper
return function(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/pika/callback.py", line 236, in process
callback(*args, **keywords)
File "/usr/local/lib/python2.7/dist-packages/pika/connection.py", line 1298, in _on_connection_start
self._send_connection_start_ok(*self._get_credentials(method_frame))
File "/usr/local/lib/python2.7/dist-packages/pika/connection.py", line 1077, in _get_credentials
raise exceptions.AuthenticationError(self.params.credentials.TYPE)
pika.exceptions.AuthenticationError: EXTERNAL
>>>
Answer: The Python / pika code in the question is correct.
The error:
> pika.exceptions.AuthenticationError: EXTERNAL
is reported when client certificate authorisation is not enabled on the
RabbitMQ server side. The word `EXTERNAL` in the error refers to the
authentication mechanism as [described
here](https://github.com/rabbitmq/rabbitmq-auth-mechanism-
ssl/blob/rabbitmq_v3_6_1/README.md).
To enable:
rabbitmq-plugins enable rabbitmq_auth_mechanism_ssl
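Note that enabling the plugin alone may not be enough: the `EXTERNAL` mechanism also has
to be listed in the server's `auth_mechanisms` configuration entry, as described in the
plugin README linked above; a broker restart may be required for the change to take effect.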
|
searching three different words using regex in python
Question: I am trying to search three different words in the below output
+---------------------+---------------------------+------------------+------------+-----------+------------+----------+-------------------+----------------+
| radius-server | address | secret | auth-port | acc-port | max-retry | timeout | nas-ip-local | max-out-trans |
+---------------------+---------------------------+------------------+------------+-----------+------------+----------+-------------------+----------------+
| rad_11 | 127.0.0.1 | testing123 | 9812 | 9813 | 5 | 10 | disable | 200 |
+---------------------+---------------------------+------------------+------------+-----------+------------+----------+-------------------+----------------+
They are `rad_11`, `127.0.0.1` and `testing123`. Can someone help me out?
I have tried `re.search ('rad_11' '127.0.0.1' 'testing123', output)`.
Answer: You can clear all unnecessary symbols and parse the string:
import re
new_string = re.sub('\+[\-]*|\n', '', a).strip(' |').replace('||', '|')
names_values = map(lambda x: x.strip(' |\n'), filter(bool, new_string.split(' | ')))
count_of_values = len(names_values)/2
names, values = names_values[:count_of_values], names_values[count_of_values:]
print dict(zip(names, values))
>>> {'max-out-trans': '200', 'nas-ip-local': 'disable', 'address': '127.0.0.1',
'radius-server': 'rad_11', 'secret': 'testing123', 'acc-port': '9813',
'timeout': '10', 'auth-port': '9812', 'max-retry': '5'}
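If the goal is only to confirm that those three exact values appear together on one row
(rather than parsing the whole table), a single pattern over the raw output works too; a
minimal sketch assuming the row layout shown above, with `output` being the table text
from the question:

    import re

    match = re.search(r'rad_11\s*\|\s*127\.0\.0\.1\s*\|\s*testing123', output)
    if match:
        print('all three values found on the same row')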
|
Python regex not greedy enough, multiple groups
Question: When trying to do some regexp matching in python, I stumbled over an oddity. I
wanted to match decimal numbers on the form xxx.yyy and divide them into three
groups for further processing. I ran something like the following snippet.
#!/usr/bin/env python3
import re
matches = re.search("a=(\d+)(\.?)(\d+?)", "var k = 2;var a; a=46")
print(matches.group(1))
Print returns 4, whereas 46 would be the expected result. Why would that be?
Python documentation states that the regexp + and * are greedy, but that does
not seem to be the case here. The reason seems to be that the last digit ends
up in the last group. I need to at least match the first and the last group. I
could skip the middle group if I use the last to distinguish between decimal
and non-decimal numbers.
It does however seem to work if the number matched is a decimal.
#!/usr/bin/env python3
import re
matches = re.search("a=(\d+)(\.?)(\d+?)", "var k = 2;var a; a=46.3")
print(matches.group(1))
Prints 46. I would be delighted if you could help me solve this conundrum.
Thank you.
Answer: It should be
matches = re.search("a=(\d+(?:\.\d+)?)", "var k = 2;var a; a=46")
**[Ideone Demo](http://ideone.com/n8PARJ)**
**Reason**
Your regex is
(\d+)(\.?)(\d+?)
Your `\.?` makes the dot optional, which means the `.` and the following `\d+?` are
independent of each other. The greedy `(\d+)` first grabs all the digits, but the lazy
`(\d+?)` still needs at least one digit to succeed, so the engine backtracks until
`(\d+)` is left with `4` and the final `6` lands in the last captured group.
This picture will make it more clear:
[](http://i.stack.imgur.com/I6nRx.png)
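For a quick check, the corrected pattern against both inputs from the question:

    import re

    print(re.search(r"a=(\d+(?:\.\d+)?)", "var k = 2;var a; a=46").group(1))    # 46
    print(re.search(r"a=(\d+(?:\.\d+)?)", "var k = 2;var a; a=46.3").group(1))  # 46.3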
|