Subprocess module in Python to cmd.exe

Question: I have some code here trying to open up `cmd.exe` from Python and input some lines for the command to use. Here it is:

    PDF = "myPDF"
    output = "my output TIF"

    def my_subprocess(command, c='C:\here'):
        process = subprocess.Popen(command, stdout=subprocess.PIPE, shell=True, cwd=c)
        communicate = process.communicate()[0].strip()

    my_subprocess('"cmd.exe" && "C:\\here\\myinfamous.bat" && "C:\\my directory and lines telling cmd to do stuff"' + PDF + " " + output)

When run with the rest of my script, the command prompt does not even open up and there seems to be no output or errors at all. My thought is that it has not even run the `cmd.exe` command, so none of this code is going in to create the final output. Is there something I am not doing properly? Thank you.

Answer: You need to replace `subprocess.Popen` with `subprocess.call`. Here is working code on Windows 8 that opens a text file using Notepad. The first field is the command itself and the second field is the argument. You can modify these and test with your files.

    import subprocess
    subprocess.call(['C:\\Windows\\System32\\Notepad.exe', 'C:\\openThisfile.txt'])
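As a rough sketch of how the question's batch-file call could be adapted to `subprocess.call` — the `.bat` path and the `PDF`/`output` values are taken from the question, and whether the batch file accepts them as positional arguments is an assumption:

    import subprocess

    PDF = "myPDF"
    output = "my output TIF"

    # Call the batch file directly and pass its arguments as list items;
    # no "cmd.exe &&" chaining is needed. If this fails on your setup,
    # try adding shell=True.
    returncode = subprocess.call(['C:\\here\\myinfamous.bat', PDF, output], cwd='C:\\here')
    print(returncode)  # 0 normally means the batch file exited without error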
Cache Proxy Server Returning 404 with www.google.com Question: I have a homework assignment which involves implementing a proxy cache server in Python for web pages. Here is my implementation of it from socket import * import sys def main(): #Create a server socket, bind it to a port and start listening tcpSerSock = socket(AF_INET, SOCK_STREAM) #Initializing socket tcpSerSock.bind(("", 8030)) #Binding socket to port tcpSerSock.listen(5) #Listening for page requests while True: #Start receiving data from the client print 'Ready to serve...' tcpCliSock, addr = tcpSerSock.accept() print 'Received a connection from:', addr message = tcpCliSock.recv(1024) print message #Extract the filename from the given message filename = "" try: filename = message.split()[1].partition("/")[2].replace("/", "") except: continue fileExist = False try: #Check whether the file exists in the cache f = open(filename, "r") outputdata = f.readlines() fileExist = True #ProxyServer finds a cache hit and generates a response message tcpCliSock.send("HTTP/1.0 200 OK\r\n") tcpCliSock.send("Content-Type:text/html\r\n") for data in outputdata: tcpCliSock.send(data) print 'Read from cache' except IOError: #Error handling for file not found in cache if fileExist == False: c = socket(AF_INET, SOCK_STREAM) #Create a socket on the proxyserver try: srv = getaddrinfo(filename, 80) c.connect((filename, 80)) #https://docs.python.org/2/library/socket.html # Create a temporary file on this socket and ask port 80 for # the file requested by the client fileobj = c.makefile('r', 0) fileobj.write("GET " + "http://" + filename + " HTTP/1.0\r\n") # Read the response into buffer buffr = fileobj.readlines() # Create a new file in the cache for the requested file. # Also send the response in the buffer to client socket and the # corresponding file in the cache tmpFile = open(filename,"wb") for data in buffr: tmpFile.write(data) tcpCliSock.send(data) except: print "Illegal request" else: #File not found print "404: File Not Found" tcpCliSock.close() #Close the client and the server sockets main() I configured my browsers to use my proxy server like so [![enter image description here](http://i.stack.imgur.com/o2FjV.jpg)](http://i.stack.imgur.com/o2FjV.jpg) But my problem when I run it is that no matter what web page I try to access it returns a 404 error with the initial connection and then a connection reset error with subsequent connections. I have no idea why so any help would be greatly appreciated, thanks! Answer: There are quite a number of issues with your code. Your URL parser is quite cumbersome. Instead of the line filename = message.split()[1].partition("/")[2].replace("/", "") I would use import re parsed_url = re.match(r'GET\s+http://(([^/]+)(.*))\sHTTP/1.*$', message) local_path = parsed_url.group(3) host_name = parsed_url.group(2) filename = parsed_url.group(1) If you catch an exception there, you should probably throw an error because it is a request your proxy doesn't understand (e.g. a POST). When you assemble your request to the destination server, you then use fileobj.write("GET {object} HTTP/1.0\n".format(object=local_path)) fileobj.write("Host: {host}\n\n".format(host=host_name)) You should also include some of the header lines from the original request because they can make a major difference to the returned content. Furthermore, you currently cache the entire response with all header lines, so you should not add your own when serving from cache. 
What you have doesn't work, anyway, because there is no guarantee that you will get a 200 and `text/html` content. You should check the response code and only cache if you did indeed get a 200.
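A minimal sketch of that status check, in the same Python 2 style as the question (`buffr`, `tmpFile` and `tcpCliSock` are the names from the question's code; the status-line parsing is simplified):

    # buffr[0] is the status line, e.g. "HTTP/1.0 200 OK\r\n"
    status_fields = buffr[0].split()
    if len(status_fields) >= 2 and status_fields[1] == "200":
        # 200: forward the response and cache it
        tmpFile = open(filename, "wb")
        for data in buffr:
            tmpFile.write(data)
            tcpCliSock.send(data)
        tmpFile.close()
    else:
        # anything else: forward the response without caching
        for data in buffr:
            tcpCliSock.send(data)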
Python header unicode to dict Question: I have a MySQL database with a python request header saved in one of the columns and it looks something like this: { '_': '/Users/user/.virtualenvs/squadraft/bin/python', 'wsgi.multiprocess': False, 'RUN_MAIN': 'true', 'rvm_version': '1.26.11 (latest)', 'HTTP_ACCEPT_LANGUAGE': 'en-US,en;q=0.8,fr;q=0.6', 'PIP_RESPECT_VIRTUALENV': 'true', 'SERVER_SOFTWARE': 'WSGIServer/0.1 Python/2.7.5', 'rvm_path': '/Users/user/.rvm', 'TERM_PROGRAM_VERSION': '326', 'RUBY_VERSION': 'ruby-2.0.0-p643', 'REQUEST_METHOD': 'POST', 'LOGNAME': 'user', 'USER': 'user', 'HTTP_ORIGIN': 'chrome-extension://fhbjgbiflinjbdggehcddcbncdddomop', 'PATH': '/Users/user/.virtualenvs/squadraft/bin:/Users/user/.rvm/gems/ruby-2.0.0-p643/bin:/Users/user/.rvm/gems/ruby-2.0.0-p643@global/bin:/Users/user/.rvm/rubies/ruby-2.0.0-p643/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/local/sbin:/Users/user/.rvm/bin:/Users/user/.rvm/bin', 'QUERY_STRING': '', 'HOME': '/Users/user', 'VIRTUALENVWRAPPER_SCRIPT': '/usr/local/bin/virtualenvwrapper.sh', 'HTTP_CONNECTION': 'keep-alive', 'TERM_PROGRAM': 'Apple_Terminal', 'LANG': 'en_US.UTF-8', 'TERM': 'xterm-256color', 'SHELL': '/bin/bash', 'TZ': 'UTC', 'SERVER_NAME': '1.0.0.127.in-addr.arpa', 'VERSIONER_PYTHON_VERSION': '2.7', 'SHLVL': '1', '_system_name': 'OSX', 'wsgi.url_scheme': 'http', 'VIRTUALENVWRAPPER_VIRTUALENV_ARGS': '--no-site-packages', 'SERVER_PORT': '8000', 'CONTENT_LENGTH': '38414', 'SERVER_PROTOCOL': 'HTTP/1.1', '_system_arch': 'x86_64', 'GEM_PATH': '/Users/user/.rvm/gems/ruby-2.0.0-p643:/Users/user/.rvm/gems/ruby-2.0.0-p643@global', 'rvm_bin_path': '/Users/user/.rvm/bin', 'WORKON_HOME': '/Users/user/.virtualenvs', 'TERM_SESSION_ID': '8CFC3FA5-6F56-49B7-AFAB-A807AFFA1D83', 'VERSIONER_PYTHON_PREFER_32_BIT': 'no', 'CONTENT_TYPE': 'application/xml', 'rvm_prefix': '/Users/user', 'HTTP_POSTMAN_TOKEN': 'c084799e-820d-7f70-4b7e-7597f1abab6f', 'Apple_PubSub_Socket_Render': '/tmp/launch-g9J1iQ/Render', 'SSH_AUTH_SOCK': '/tmp/launch-JlZPBH/Listeners', 'VIRTUAL_ENV': '/Users/user/.virtualenvs/squadraft', 'LC_CTYPE': 'UTF-8', 'IRBRC': '/Users/user/.rvm/rubies/ruby-2.0.0-p643/.irbrc', 'MY_RUBY_HOME': '/Users/user/.rvm/rubies/ruby-2.0.0-p643', 'HTTP_CACHE_CONTROL': 'no-cache', 'VIRTUALENVWRAPPER_WORKON_CD': '1', 'PS1': '(squadraft) \h:\W \u\$ ', 'SCRIPT_NAME': u'', 'wsgi.multithread': True, 'LC_ALL': 'en_US.UTF-8', '_system_type': 'Darwin', 'TMPDIR': '/var/folders/5c/vhcdkw8n6xz2n_ywlkvt2q_r0000gn/T/', 'HTTP_ACCEPT': '*/*', 'VIRTUALENVWRAPPER_PROJECT_FILENAME': '.project', 'wsgi.version': (1, 0), 'VIRTUALENVWRAPPER_HOOK_DIR': '/Users/user/.virtualenvs', 'GATEWAY_INTERFACE': 'CGI/1.1', 'wsgi.run_once': False, 'REMOTE_HOST': '', 'OLDPWD': '/Users/user/Development/rotas', 'REMOTE_ADDR': '127.0.0.1', 'HTTP_USER_AGENT': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.116 Safari/537.36', 'wsgi.input': <socket._fileobject object at 0x112519cd0>, 'wsgi.errors': <open file '<stderr>', mode 'w' at 0x10f4891e0>, 'HTTP_ACCEPT_ENCODING': 'gzip, deflate', '__CF_USER_TEXT_ENCODING': '0x1F5:0:0', 'PWD': '/Users/user/Development/squadraft', 'PIP_VIRTUALENV_BASE': '/Users/user/.virtualenvs', 'DJANGO_SETTINGS_MODULE': 'fantasysport.settings.local', '_system_version': '10.9', 'HTTP_HOST': '127.0.0.1:8000', 'wsgi.file_wrapper': <class wsgiref.util.FileWrapper at 0x110994258>, '__CHECKFIX1436934': '1', 'GEM_HOME': '/Users/user/.rvm/gems/ruby-2.0.0-p643', 'PATH_INFO': u'/client/collect-data/' } Note accessing it through 
my model query I get it as Unicode. I would like to convert this into a dictionary so that I can access the underlying properties. Here are some of the things I have tried so far. import json json.loads(header_data) # This fails with a 'ValueError: Expecting property name # enclosed in double quotes: line 1 column 2 (char 1)' # Also import ast ast.literal_eval(header_data) # The above fails with a 'File "<unknown>", line 1' error More information: print(type(header_data)) <type 'unicode'> print(header_data) {'_': '/Users/user/.virtualenvs/squadraft/bin/python', 'wsgi.multiprocess': False, 'RUN_MAIN': 'true', 'rvm_version': '1.26.11 (latest)', 'HTTP_ACCEPT_LANGUAGE': 'en-US,en;q=0.8,fr;q=0.6', 'PIP_RESPECT_VIRTUALENV': 'true', 'SERVER_SOFTWARE': 'WSGIServer/0.1 Python/2.7.5', 'rvm_path': '/Users/user/.rvm', 'TERM_PROGRAM_VERSION': '326', 'RUBY_VERSION': 'ruby-2.0.0-p643', 'REQUEST_METHOD': 'POST', 'LOGNAME': 'user', 'USER': 'user', 'HTTP_ORIGIN': 'chrome-extension://fhbjgbiflinjbdggehcddcbncdddomop', 'PATH': '/Users/user/.virtualenvs/squadraft/bin:/Users/user/.rvm/gems/ruby-2.0.0-p643/bin:/Users/user/.rvm/gems/ruby-2.0.0-p643@global/bin:/Users/user/.rvm/rubies/ruby-2.0.0-p643/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/local/sbin:/Users/user/.rvm/bin:/Users/user/.rvm/bin', 'QUERY_STRING': '', 'HOME': '/Users/user', 'VIRTUALENVWRAPPER_SCRIPT': '/usr/local/bin/virtualenvwrapper.sh', 'HTTP_CONNECTION': 'keep-alive', 'TERM_PROGRAM': 'Apple_Terminal', 'LANG': 'en_US.UTF-8', 'TERM': 'xterm-256color', 'SHELL': '/bin/bash', 'TZ': 'UTC', 'SERVER_NAME': '1.0.0.127.in-addr.arpa', 'VERSIONER_PYTHON_VERSION': '2.7', 'SHLVL': '1', '_system_name': 'OSX', 'wsgi.url_scheme': 'http', 'VIRTUALENVWRAPPER_VIRTUALENV_ARGS': '--no-site-packages', 'SERVER_PORT': '8000', 'CONTENT_LENGTH': '38414', 'SERVER_PROTOCOL': 'HTTP/1.1', '_system_arch': 'x86_64', 'GEM_PATH': '/Users/user/.rvm/gems/ruby-2.0.0-p643:/Users/user/.rvm/gems/ruby-2.0.0-p643@global', 'rvm_bin_path': '/Users/user/.rvm/bin', 'WORKON_HOME': '/Users/user/.virtualenvs', 'TERM_SESSION_ID': '8CFC3FA5-6F56-49B7-AFAB-A807AFFA1D83', 'VERSIONER_PYTHON_PREFER_32_BIT': 'no', 'CONTENT_TYPE': 'application/xml', 'rvm_prefix': '/Users/user', 'HTTP_POSTMAN_TOKEN': 'c084799e-820d-7f70-4b7e-7597f1abab6f', 'Apple_PubSub_Socket_Render': '/tmp/launch-g9J1iQ/Render', 'SSH_AUTH_SOCK': '/tmp/launch-JlZPBH/Listeners', 'VIRTUAL_ENV': '/Users/user/.virtualenvs/squadraft', 'LC_CTYPE': 'UTF-8', 'IRBRC': '/Users/user/.rvm/rubies/ruby-2.0.0-p643/.irbrc', 'MY_RUBY_HOME': '/Users/user/.rvm/rubies/ruby-2.0.0-p643', 'HTTP_CACHE_CONTROL': 'no-cache', 'VIRTUALENVWRAPPER_WORKON_CD': '1', 'PS1': '(squadraft) \h:\W \u\$ ', 'SCRIPT_NAME': u'', 'wsgi.multithread': True, 'LC_ALL': 'en_US.UTF-8', '_system_type': 'Darwin', 'TMPDIR': '/var/folders/5c/vhcdkw8n6xz2n_ywlkvt2q_r0000gn/T/', 'HTTP_ACCEPT': '*/*', 'VIRTUALENVWRAPPER_PROJECT_FILENAME': '.project', 'wsgi.version': (1, 0), 'VIRTUALENVWRAPPER_HOOK_DIR': '/Users/user/.virtualenvs', 'GATEWAY_INTERFACE': 'CGI/1.1', 'wsgi.run_once': False, 'REMOTE_HOST': '', 'OLDPWD': '/Users/user/Development/rotas', 'REMOTE_ADDR': '127.0.0.1', 'HTTP_USER_AGENT': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.116 Safari/537.36', 'wsgi.input': <socket._fileobject object at 0x112519cd0>, 'wsgi.errors': <open file '<stderr>', mode 'w' at 0x10f4891e0>, 'HTTP_ACCEPT_ENCODING': 'gzip, deflate', '__CF_USER_TEXT_ENCODING': '0x1F5:0:0', 'PWD': '/Users/user/Development/squadraft', 'PIP_VIRTUALENV_BASE': 
'/Users/user/.virtualenvs', 'DJANGO_SETTINGS_MODULE': 'fantasysport.settings.local', '_system_version': '10.9', 'HTTP_HOST': '127.0.0.1:8000', 'wsgi.file_wrapper': <class wsgiref.util.FileWrapper at 0x110994258>, '__CHECKFIX1436934': '1', 'GEM_HOME': '/Users/user/.rvm/gems/ruby-2.0.0-p643', 'PATH_INFO': u'/opta/collect-data/'} Answer: As had already been pointed out in a deleted answer, your problem are the entries 'wsgi.input': <socket._fileobject object at 0x112519cd0> 'wsgi.errors': <open file '<stderr>', mode 'w' at 0x10f4891e0> 'wsgi.file_wrapper': <class wsgiref.util.FileWrapper at 0x110994258> Remove these and ast.literal_eval(header_data) should work.
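A sketch of how that cleanup could be automated before calling `ast.literal_eval` — the regex is an assumption about how the reprs are stored and may need adjusting; it targets exactly the `'key': <...>` entries listed above:

    import ast
    import re

    # Drop entries whose value is an unparseable repr like
    # 'wsgi.input': <socket._fileobject object at 0x112519cd0>,
    cleaned = re.sub(r"'[^']+':\s*<.*?>(?=\s*[,}])\s*,?", "", header_data)
    headers = ast.literal_eval(cleaned)
    print(headers['REQUEST_METHOD'])  # 'POST'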
Background Timer in PyQt Python Application

Question: I am building an application in python using PyQt4. I want to add a background timer to my application so that when the timer runs out, the text editor in my application stops receiving input and freezes up. I have tried the following method so far:

    def main():
        app = QtGui.QApplication(sys.argv)
        main = Main()
        main.show()

        start = time.time()
        # this loop is for the time interval within which the text area of
        # the editor will accept inputs.
        while time.time() - start < 120:
            pass

        # this will set the text area to a read-only state afterwards and
        # it will not accept any more inputs.
        main.text.setReadOnly(True)

        sys.exit(app.exec_())

But whenever I run this the application stops responding. I have also tried running the timer through a thread, but in that case the application also stops responding. How can I implement this functionality? P.S. - I am a beginner in PyQt.

    class Main(QtGui.QMainWindow):

        def __init__(self, parent=None):
            QtGui.QMainWindow.__init__(self, parent)
            self.init_ui()

        def init_ui(self):
            self.text = QtGui.QTextEdit(self)
            self.setCentralWidget(self.text)
            self.statusbar = self.statusBar()
            self.setGeometry(100, 100, 1080, 720)
            self.setWindowTitle("Text Editor")

This is the watered-down version of the main class code. Hope this helps.

Answer: Your application is not responding because you are blocking the event handler. It is processed by `app.exec_()`, a call that checks for events and handles them in an infinite loop. Use QTimer with a **signal** and **slot**. [Signals and Slots (wikipedia)](https://en.wikipedia.org/wiki/Signals_and_slots) For using signals and slots in pyqt see [this](http://pyqt.sourceforge.net/Docs/PyQt4/new_style_signals_slots.html).

The timer is initialized with a duration and a signal-slot connection. After `app.exec_()` is called, the event loop waits for the `input_timer` signal `timeout`. When it is raised, the event loop calls the lambda function that disables the textbox, because it is connected to the signal.

    def main():
        app = QtGui.QApplication(sys.argv)
        main = Main()
        main.show()

        input_timer = QtCore.QTimer()  # parentless timer; keep a reference so it is not garbage-collected
        input_timer.timeout.connect(lambda: main.text.setReadOnly(True))
        input_timer.setSingleShot(True)
        input_timer.start(2000)

        sys.exit(app.exec_())

This will disable the textbox after 2000 milliseconds. Adjust as you need. I would move the timer and behaviour into the `Main` class:

    import sys

    from PyQt4 import QtCore, QtGui

    class Main(QtGui.QMainWindow):

        def __init__(self, parent=None):
            QtGui.QMainWindow.__init__(self, parent)
            self.init_ui()
            self.disable_edit_text_timer = QtCore.QTimer(self)
            self.disable_edit_text_timer.timeout.connect(self.disable_edit_text)
            self.disable_edit_text_timer.setSingleShot(True)
            self.disable_edit_text_timer.start(2000)

        def init_ui(self):
            self.text = QtGui.QTextEdit(self)
            self.setCentralWidget(self.text)
            self.statusbar = self.statusBar()
            self.setGeometry(100, 100, 1080, 720)
            self.setWindowTitle("Text Editor")

        def disable_edit_text(self):
            self.text.setReadOnly(True)

    def main():
        app = QtGui.QApplication(sys.argv)
        main = Main()
        main.show()
        sys.exit(app.exec_())

    if __name__ == "__main__":
        main()
pyGame image scale does not work as expected Question: I am new into Python and pyGame and i have a problem with scaling an image. I want to zoom an image in pygame. The pygame documentation claims that > pygame.transform.scale() should scale to a new resolution. But in my example below it does not work - it crops the image instead of resizing it!? What am i doing wrong? #!/usr/bin/env python3 # coding: utf-8 import pygame from pygame.locals import * # Define some colors BLACK = (0, 0, 0) pygame.init() # Set the width and height of the screen [width, height] screen = pygame.display.set_mode((1920, 1080)) pic = pygame.image.load('test.jpg').convert() pic_position_and_size = pic.get_rect() # Loop until the user clicks the close button. done = False # Clear event queue pygame.event.clear() # -------- Main Program Loop ----------- while not done: for event in pygame.event.get(): if event.type == QUIT: done = True elif event.type == KEYDOWN: if event.key == K_ESCAPE: done = True # background in black screen.fill(BLACK) # Copy image to screen: screen.blit(pic, pic_position_and_size) # Update the screen with what we've drawn. pygame.display.flip() pygame.display.update() pygame.time.delay(10) # stop the program for 1/100 second # decreases size by 1 pixel in x and y axis pic_position_and_size = pic_position_and_size.inflate(-1, -1) # scales the image pic = pygame.transform.scale(pic, pic_position_and_size.size) # Close the window and quit. pygame.quit() Answer: `pygame.transform.scale()` does not work very well for your case. If you shrink a `Surface` by such a small amount, the algorithm just crops the last column and row of pixels. If you now repeat this process over and over again with the same `Surface`, you get the strange behaviour you see. A better approach would be to keep a copy of your original `Surface` around, and use that for creating the scaled image. Also, using `smoothscale` instead of `scale` may also lead to a better effect; it's up to you if you want to use it. Here's a "fixed" version of your code: #!/usr/bin/env python3 # coding: utf-8 import pygame from pygame.locals import * # Define some colors BLACK = (0, 0, 0) pygame.init() # Set the width and height of the screen [width, height] screen = pygame.display.set_mode((1920, 1080)) org_pic = pygame.image.load('test.jpg').convert() pic_position_and_size = org_pic.get_rect() pic = pygame.transform.scale(org_pic, pic_position_and_size.size) # Loop until the user clicks the close button. done = False # Clear event queue pygame.event.clear() # -------- Main Program Loop ----------- while not done: for event in pygame.event.get(): if event.type == QUIT: done = True elif event.type == KEYDOWN: if event.key == K_ESCAPE: done = True # background in black screen.fill(BLACK) # Copy image to screen: screen.blit(pic, (0,0)) # Update the screen with what we've drawn. pygame.display.flip() pygame.display.update() pygame.time.delay(10) # stop the program for 1/100 second # decreases size by 1 pixel in x and y axis pic_position_and_size = pic_position_and_size.inflate(-1, -1) # scales the image pic = pygame.transform.smoothscale(org_pic, pic_position_and_size.size) # Close the window and quit. pygame.quit()
Distributed Programming on Google Cloud Engine using Python (mpi4py)

Question: I want to do distributed programming with python using the mpi4py package. For testing reasons, I set up a 5-node cluster via Google container engine, and changed my code accordingly. But now, what are my next steps? How do I get my code running and working on all 5 VMs? I tried to just ssh-connect into one VM from my cluster and run the code, but it was obvious that the code was not getting distributed, but instead stayed on the same machine :( [see example below].

# Code:

    from mpi4py import MPI

    size = MPI.COMM_WORLD.Get_size()
    rank = MPI.COMM_WORLD.Get_rank()
    name = MPI.Get_processor_name()

    print("Hello, World! I am process/rank {} of {} on {}.\n".format(rank, size, name))

# Output:

> **mpiexec -n 5 python 5_test.py**
>
>> Hello, World! I am process/rank 0 of 5 on gke-cluster-1-000000cd-node-mgff.
>>
>> Hello, World! I am process/rank 1 of 5 on gke-cluster-1-000000cd-node-mgff.
>>
>> Hello, World! I am process/rank 2 of 5 on gke-cluster-1-000000cd-node-mgff.
>>
>> Hello, World! I am process/rank 3 of 5 on gke-cluster-1-000000cd-node-mgff.
>>
>> Hello, World! I am process/rank 4 of 5 on gke-cluster-1-000000cd-node-mgff.

Answer: So, I figured out what I got wrong, and I think I should post the answer for someone who might have a similar question. Turns out, I should have read the documentation of mpi4py better :D

The command `mpirun -np 5 python 5_test.py` is for running the program on a single multi-core host in different processes. However, I wanted to distribute the task across various hosts. Therefore I needed the command `mpirun --hostfile <hostfile> python 5_test.py`, where `<hostfile>` must be a file looking like this:

    -- hostfile --
    host1 slots=4
    host2 slots=4
    host3 slots=4
    --------------

Useful link: <https://github.com/jbornschein/mpi4py-examples>
UnicodeDecodeError from sound file

Question: I'm trying to make a speech recogniser in Python using the Google speech API. I've been using and adapting the code from [here](http://stackoverflow.com/a/19828908/4961615) (converted to Python 3). I'm using an audio file on my computer that's been converted from mp3 to flac 16000 Hz (as specified in the original code) using an online converter. When running the code I get this error:

    $ python3 speech_api.py 02-29-2016_00-12_msg1.flac
    Traceback (most recent call last):
      File "speech_api.py", line 12, in <module>
        data = f.read()
      File "/usr/lib/python3.4/codecs.py", line 319, in decode
        (result, consumed) = self._buffer_decode(data, self.errors, final)
    UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 9: invalid start byte

This is my code. (I'm sure there are also still things that don't work in Python 3, as I've been trying to adapt it and am new to `urllib`...)

    #!/usr/bin/python
    import sys
    from urllib.request import urlopen
    import json

    try:
        filename = sys.argv[1]
    except IndexError:
        print('Usage: transcribe.py <file>')
        sys.exit(1)

    with open(filename) as f:
        data = f.read()

    req = urllib.request('https://www.google.com/intl/en/chrome/demos/speech.html',
                         data=data,
                         headers={'Content-type': 'audio/x-flac; rate=16000'})

    try:
        ret = urllib.urlopen(req)
    except urllib.URLError:
        print("Error Transcribing Voicemail")
        sys.exit(1)

    resp = ret.read()
    text = json.loads(resp)['hypotheses'][0]['utterance']
    print(text)

Any ideas what I could do?

Answer: You need to open the file in _binary mode_ :

    open(filename, 'rb')

Note the `'b'`, or the file will be treated as text and decoded to Unicode.
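For reference, a sketch of the full corrected flow on Python 3 — the endpoint URL, headers and response handling are taken unchanged from the question's code, so whether that endpoint still answers as the original snippet assumed is a separate issue:

    #!/usr/bin/python3
    import sys
    import json
    from urllib.request import Request, urlopen
    from urllib.error import URLError

    try:
        filename = sys.argv[1]
    except IndexError:
        print('Usage: transcribe.py <file>')
        sys.exit(1)

    with open(filename, 'rb') as f:  # binary mode: no text decoding
        data = f.read()

    req = Request('https://www.google.com/intl/en/chrome/demos/speech.html',
                  data=data,
                  headers={'Content-type': 'audio/x-flac; rate=16000'})
    try:
        resp = urlopen(req).read()
    except URLError:
        print("Error Transcribing Voicemail")
        sys.exit(1)

    text = json.loads(resp.decode('utf-8'))['hypotheses'][0]['utterance']
    print(text)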
import numpy on python 3.5 on mac osx

Question: I tried to follow the advice about typing the following into MacPorts:

    cd /opt/local/bin
    sudo ./port install py35-numpy

and apparently it worked, but Python still doesn't find any module named `numpy`... Thanks for the help.

Answer: If you're interested in painlessly using Python with NumPy on Linux, Mac, or Windows, I'd highly suggest using the [miniconda](http://conda.pydata.org/miniconda.html) distribution. Once it's installed, just open a terminal and run

    $ conda install python=3.5 numpy

and you'll be ready to go. It also makes the rest of the scientific Python ecosystem similarly easy to install.
django redirect no reverse match error

Question: Here is my main project timecapture/urls.py content:

    from django.conf.urls import url, include
    from django.contrib import admin
    from django.core.urlresolvers import reverse_lazy
    from . import views

    urlpatterns = [
        url(r'^admin/', admin.site.urls),
        url(r'^$', views.index, name='index'),
        url(r'^login/$', views.auth_login, name='auth_login'),
        url(r'^logout/$', views.auth_logout, name='auth_logout'),
        url(r'^timesheet/', include('timesheet.urls'), name='timesheet')
    ]

And here is an app inside the main project, timesheet/urls.py:

    from django.conf.urls import url, include
    from django.contrib import admin
    from django.http import HttpResponse
    from . import views

    urlpatterns = [
        url(r'^$', views.index),
    ]

I am unable to redirect to the 'timesheet' url. I am using the following command:

    return redirect('timesheet')

But this is working:

    return redirect('/timesheet/')

The exact error is:

    django.core.urlresolvers.NoReverseMatch: Reverse for 'timesheet' with arguments '()' and keyword arguments '{}' not found. 0 pattern(s) tried: []

Btw, I am using the latest Django 1.9.2 with Python 3.4.

Answer: You don't have a URL named "timesheet". You've got an include with that name, but not the views. Remove `name='timesheet'` from the include in the main urls.py, and add it instead to the index url in timesheet/urls.py.
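Concretely, the fix looks like this (a sketch using the code from the question):

    # timecapture/urls.py -- no name on the include
    url(r'^timesheet/', include('timesheet.urls')),

    # timesheet/urls.py -- the name goes on the view
    urlpatterns = [
        url(r'^$', views.index, name='timesheet'),
    ]

After that, `redirect('timesheet')` (and `reverse('timesheet')`) will resolve.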
Python Numpy Poisson Distribution

Question: So basically I am generating a gaussian, for the sake of completeness, that's my implementation:

    from numpy import *
    x = linspace(0, 1, 1000)
    y = exp(-(x - 0.5)**2 / (2.0 * (0.1 / (2 * sqrt(2 * log(2))))**2))

with peak at `0.5` and `fwhm=0.1`. So far, so uninteresting. In the next step I calculate the poisson distribution of my set of data using numpy's `random.poisson` implementation:

    poi = random.poisson(lam=y)

I'm having two major problems.

1. A specialty of poisson is that the variance equals the expectation value, but comparing the outputs of `mean()` and `var()` confuses me, as the outputs are not equal.
2. When plotting this, the poisson dist. takes integer values _only_ and the max. value is around 7, sometimes 6, whilst my old function y has its max. at 1.

As far as I understand, the poisson function should give me sort of a 'fit' of my actual function y. How come the max. values are not equal? Sorry for my mathematical incorrectness; actually I'm doing this to emulate poisson-distributed noise, but I guess you understand 'fit' in this context. Thanks in advance.

EDIT: 3rd question: What's the `size` variable used for in this context? I've seen different types of usage, but in the end they did not give me different results, yet failed when I chose it wrong...

EDIT2: OK, from the answer I got I think that I was not clear enough (although it already helped me correct some other stupid errors I made, thanks for that!). What I want to do is apply poisson (white) noise to the function y. As described by MSeifert in the post below, I now use the expectation value as lam. But this only gives me the noise. I guess I have some understanding problems on the level of how this noise is applied (and maybe it's more physics-related?!).

Answer: First of all, I'll write this answer assuming you `import numpy as np` because it clearly distinguishes `numpy` functions from the builtins or those of the `math` and `random` packages of python.

I think it is not necessary to answer your specific questions because your basic assumption is wrong: yes, poisson statistics has a mean that equals the variance, but that assumes you use a **constant** `lam`. But you don't. You input the y-values of your gaussian, so you cannot expect them to be constant (they are, by your definition, gaussian!).

Use `np.random.poisson(lam=0.5)` to get one random value from a poisson distribution. But be careful, since the poisson distribution is not even approximately identical to your gaussian distribution: you are in the "low-mean" regime where the two differ significantly, see for example the [Wikipedia article about Poisson distribution](https://en.wikipedia.org/wiki/Poisson_distribution). Also, you are creating random numbers, so you shouldn't really plot them but plot a `np.histogram` of them, since statistical distributions are all about probability density functions (see [Probability density function](https://en.wikipedia.org/wiki/Probability_density_function)).

I already mentioned that you create a poisson distribution with a constant `lam`, so now it is time to talk about the `size`: you create random numbers, so to approximate the real poisson distribution you need to draw a lot of them. That is where the `size` comes in: `np.random.poisson(lam=0.5, size=10000)`, for example, creates an array of 10000 elements, each drawn from a poissonian probability density function with a mean value of `0.5`.
And if you haven't read it in the Wikipedia article mentioned before: the poisson distribution yields, by definition, only non-negative (>= 0) integers as results. So I guess what you wanted to do is create a gaussian and a poisson distribution containing 1000 values:

    gaussian = np.random.normal(0.5, 2*np.sqrt(2*np.log(2)), 1000)
    poisson = np.random.poisson(0.5, 1000)

and then, to plot them, plot the histograms:

    import matplotlib.pyplot as plt
    plt.hist(gaussian)
    plt.hist(poisson)
    plt.show()

or use [`np.histogram`](http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.histogram.html) instead. To get statistics from your random samples you can still use `np.var` and `np.mean` on the gaussian and poisson samples. And this time (at least on my sample run) they give good results:

    print(np.mean(gaussian))
    0.653517935138
    print(np.var(gaussian))
    5.4848398775
    print(np.mean(poisson))
    0.477
    print(np.var(poisson))
    0.463471

Notice how the gaussian values are almost exactly what we defined as parameters. On the other hand, the poisson mean and var are almost equal. You can increase the precision of the mean and var by increasing the `size` above.

# Why the poisson distribution doesn't approximate your original signal

Your original signal contains only values between 0 and 1, the poisson distribution only allows non-negative integers, and its standard deviation is linked to the mean value. Far from the mean of the gaussian your signal is approximately 0, so the poisson distribution will almost always draw 0. Where the gaussian has its maximum, the value is 1. The poisson distribution for 1 looks like this (left is the signal + poisson and on the right the poisson distribution around a value of 1):

[![enter image description here](http://i.stack.imgur.com/hDxf4.png)](http://i.stack.imgur.com/hDxf4.png)

so you'll get a lot of 0 and 1 and some 2 in that region. But there is also some probability that you draw values up to 7. This is exactly the asymmetry that I mentioned. If you change the amplitude of your gaussian (multiply it by 1000, for example) the "fit" is much better, since the poisson distribution is almost symmetric there:

[![enter image description here](http://i.stack.imgur.com/LLSza.png)](http://i.stack.imgur.com/LLSza.png)
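Tying this back to the noise-emulation goal from EDIT2, a common approach (a sketch, not part of the answer above) is to scale the signal up into a regime where the poisson distribution is roughly symmetric, draw the noisy counts, and scale back down:

    import numpy as np

    x = np.linspace(0, 1, 1000)
    y = np.exp(-(x - 0.5)**2 / (2.0 * (0.1 / (2 * np.sqrt(2 * np.log(2))))**2))

    scale = 1000.0  # assumed "photon count" scale; larger = weaker relative noise
    noisy = np.random.poisson(y * scale) / scale  # poisson noise applied to the signal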
WSGI import app from python package yields 500 Question: I want to deploy my python app on apache so I made a hello world app (contained in one python file) and imported the app object from there and it worked. This was just to test out things. Then I move my app file there and tried the same with it. I try to import the app object( now from a python package), nothing blows up, however when I visit the server I get a 500 error code. Note: the application works just fine if I run it with python. It is not broken. File structure: app.wsgi main(this is the python package) hello.py(this is the simple app) In app.wsgi I have this: import sys sys.path.append('/var/www/html') from hello import app as application and that works, but when I change it to : import sys sys.path.append('/var/www/html') from main import app as application it gives me a 500. Any ideas as to why is this happening? Answer: apparently I had to call run in my __init__.py file. If anyone finds this helpful keep it otherwise take it out.
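For anyone hitting the same wall: `from main import app` only works if the package exposes `app` at its top level. A minimal sketch of what `main/__init__.py` could look like, assuming a Flask-style app (the question doesn't state the framework, and the module name below is illustrative):

    # main/__init__.py
    from flask import Flask

    app = Flask(__name__)

    # import the module that registers the routes (hypothetical name)
    from main import views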
Send email through smtp in superlance using crashmail Question: I'm trying to set up the email sending when a process changes state in [supervisord](http://supervisord.org/) by using [crashmail](https://superlance.readthedocs.org/en/latest/crashmail.html). Having no luck with the default `sendmail` program which requires quite a lot of setup, I decided to go with a small script in Python that sends email using SMTP. This worked very well (I received indeed an email saying that the process state changes) for the first state change but stop working afterward. I have tried to change different options in `supervisord` such as `buffer_size` or `autorestart` but it has no effect. Here is the script I use to trigger the `supervisord` state changes: import time from datetime import datetime if __name__ == '__main__': print(">>>>> STARTING ...", flush=True) while True: print("sleep now:", datetime.utcnow(), flush=True) time.sleep(30) raise Exception("meo meo") This is the script that sends email through Gmail. This one will send the `stdin`. #!/usr/bin/env python import smtplib def get_server(): smtpserver = smtplib.SMTP('smtp.gmail.com:587') smtpserver.ehlo() smtpserver.starttls() smtpserver.login("[email protected]", "password") return smtpserver if __name__ == '__main__': import sys data = sys.stdin.read() s = get_server() s.sendmail('[email protected]', ['[email protected]'], data) s.quit() Here is my `supervisord.conf` [eventlistener:crashmail] command=crashmail -a -m [email protected] -s /home/ubuntu/mysendmail.py events=PROCESS_STATE buffer_size=102400 autorestart=true Does anyone have any idea why? Thanks! Answer: I moved the `eventlistener` section to a separate file in `/etc/supervisor/conf.d` (instead of putting at the end of `supervisord.conf`) and now everything is working as expected ...
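For reference, the working layout is roughly this (the file name is assumed; the reload commands are standard supervisor usage):

    # /etc/supervisor/conf.d/crashmail.conf
    [eventlistener:crashmail]
    command=crashmail -a -m [email protected] -s /home/ubuntu/mysendmail.py
    events=PROCESS_STATE
    buffer_size=102400

    # then make supervisor pick up the new file:
    #   supervisorctl reread
    #   supervisorctl update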
Django Shell image upload _io.BufferedReader no attribute size Question: My problem is that when I try to save image to my model using Django shell I get this error that I can't find solution anywere. models.py class AdImage(models.Model): ad = models.ForeignKey(Ad) full_photo = models.ImageField(upload_to='uploads/', blank=True) I import models create AdImage instance add 'ad' and try to imagead.full_photo.save("NowHiring.jpg",open("C:\\NowHiring.jpg", "rb")) but i get an error Traceback (most recent call last): File "<console>", line 1, in <module> File "C:\Users\hp\Envs\platform\lib\site-packages\django\db\models\fields\file s.py", line 106, in save self._size = content.size AttributeError: '_io.BufferedReader' object has no attribute 'size' Using: **Python 3.5, Django 1.9** What could I do ? Answer: The `FieldFile.save` method needs to be called with an instance of `django.core.files.File`, rather than a built-in python file handle. Change the save invocation to: from django.core.files import File imagead.full_photo.save("NowHiring.jpg", File(open("C:\\NowHiring.jpg", "rb"))) Django docs reference for [`FieldFile.save`](https://docs.djangoproject.com/en/1.9/ref/models/fields/#django.db.models.fields.files.FieldFile.save).
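As a side note, when the bytes are already in memory, `django.core.files.base.ContentFile` is a common alternative wrapper (a sketch using the same names as the question):

    from django.core.files.base import ContentFile

    with open("C:\\NowHiring.jpg", "rb") as f:
        imagead.full_photo.save("NowHiring.jpg", ContentFile(f.read()))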
Python Parsing XML with a complex hierarchy - Nuke9.0v8 Question: I am working with NukeX9.0v8, Adobe Premiere Pro CC 2015 and nukes internal python interrupter. # Result: 2.7.3 (default, Jul 24 2013, 15:50:23) [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] I am a vfx artist and I'm trying to wrap my brain around the best method to parse xml files in order to: create a folder structure, batch create .nk comp files and plug in the data within specific parts as I make my .nk comps. I have a bit of a grasp of how to do each of these things in isolation, but plugging it all together, and trying to find tutorials on such complex parse as ground me to a halt. I know the scope of this is big but any small pieces of advice are appreciated. Right now I have a nuke comp that has a node tree that takes in camera inputs and stitches them into a latlong image for 360 video, I am going to wrap that up into a gizmo for each different kind of rig configuration. This just simplifies the .nk files that are created and I can expose the parts of that gizmo I can feed data into. Every day we receive a ton of footage from a shoot and we have to make a new .nk comp for each shot and set it to render right away. What I want to do is have the guys on set create a premiere project and organize the files based on this folder structure. That premiere project will be exported as an .xml file. The design of the structure in premiere. * Day_01 (the day of the shoot) * -^-R001 (Roll number for the shots. R referring to camera type) * \--^-R001_C001 (The name of the shot) * \---^-Acamera clip (path to file name, video in point as frame#) * \---^-Bcamera clip (path to file name, video in point as frame#) * \---^-Ccamera clip (path to file name, video in point as frame#) Right now in my script panel inside Nuke I can enter the information of where is the xml for the day what day to look for. Then it is suppose to look into each folder name for the roll, and using the first letter (R for RED camera) and looks inside for the clip folder. It then uses the pathurl directory for the camera files on the drive and also can take it data like the in and out points if present in the xml. I also have points to enter for the template version if I update a stitch process. That will tell the nuke comp which gizmo to use. Here is my panel in Nuke. def sesquixmlparse(): ''' This imports the xml file from premiere. It looks for the bin that it is working for today and starts looking in what is inside the bins It then sees the bins inside and uses them to create nuke scripts with these as inputs It asks what template version to use for the rig. 
things change or maybe even get better ''' # Lets build the Nuke Panel that tells us our inputs p = nuke.Panel("Sesqui XML Parse for Dailies") xml_file = 'Daily XML' daynumber = 'Day_##' nk_output_dir = 'Directory to build VFX folder structure' dnx_render_dir = 'Directory for write nodes' r_template_vr = 'VER1' g_template_vr = 'VER1' c_template_vr = 'VER1' p.addFilenameSearch("Daily XML", xml_file) p.addSingleLineInput("Bin to process", daynumber) p.addFilenameSearch("Directory to build VFX folder structure", nk_output_dir) p.addFilenameSearch("Directory to render from write nodes", dnx_render_dir) p.addSingleLineInput("3 Red stmap version", r_template_vr) p.addSingleLineInput("6 Gopro stmap verison", g_template_vr) p.addSingleLineInput("5 Canon stmap verison", c_template_vr) p.setWidth(600) print "Panel created" if not p.show(): return # Assign var from nuke panel user-entered data xml_file = p.value("Daily XML") daynumber = p.value("Bin to process") nk_output_dir = p.value("Directory to build VFX folder structure") dnx_render_dir = p.value("Directory to render from write nodes") r_template_vr = p.value("3 Red stmap version") g_template_vr = p.value("6 Gopro stmap verison") c_template_vr = p.value("5 Canon stmap verison") print "var's assigned from panel" # Create paths for render directory if it does not exist if not os.path.isdir(dnx_render_dir): os.mkdir(dnx_render_dir) print dnx_render_dir + " directory created" if not os.path.isdir(nk_output_dir): os.mkdir(nk_output_dir) print nk_output_dir + " directory created" I am at a loss on how to best read the xml file. All the tutorials I have seen on both `DOM` and `elementtree` are very basic and deal with direct code to read known XML tags and break data down to a simple `str` output. I need to enter variables, which then constrain the parsing to a specific part of the tree, and go into an unknown hierarchy setup and seeing what is inside, and then make decisions on what to do with what it finds. Here is a sample of my test XML file. The eventual plan is to have other different roll types that reference different camera types but for now I'm just working with 3 camera red rigs. # It's a very big file so here is a pastebin: <http://pastebin.com/vLaRA0X8> Basically I am wanting to constrain the script to looking within my variable `<bin><name>'daynumber'</name>~~~~</bin>.` In this case looking in the `Day_00` bin. If there is anything else in the root hierarchy I want to ignore it as sequences, unused clips and other data can get very very huge. I then want to create the directory of `daynumber` in the `nk_output_dir` & `dnx_render_dir` so that everything for this shoot day is contained in that folder. A annoying part of the XML file is the name of a bin is a child to the `<bin>` itself, so once a bin name is found, any `<children>` of that bin would be the same level of the tree as the `<name>`. I can't find sample code of locating a tag and then looking working with the tags that are in the same branch instead of it's children. Now that it has found the bin for the day I want it to start to look for all the bins in `<children></children>`. Example being `<bin><name>R001</name>~~~</bin>` and create directories inside the Day_00 folder I made in `nk_output_dir` & `dnx_render_dir` for each bin it finds in this part of the structure. Every time the camera reloads that will roll up to R002, R003, etc etc. Also different camera types like Gopros will create G001, G002, G003. 
Then I want to look for in the `<children>` of the above bins and find all the bins inside like `<bin><name>R001_C001</name>~~~</bin>` and create folders in the `nk_output_dir\`daynumber`\~whatever bin this is contained~\~name of this bin~\`. Which is user created of the roll number and clip number. (R001_C001, R001_C002, etc etc) This will be the new clip name, the name of the .nk comp that will be generated and the file name of the render on the write node. The goal here is to recreate the bin folder structure in the directory I've choosen for `nk_output_dir`. The `dnx_render_dir` that is for being plugged into the write nodes of my nuke scripts later to where the files should be rendered to. It's separate because I'd have a different RAID drive that it will go to that will change as they fill up. The renders just need to be put in a directory for the `daynumber\~rollnumber~` but doesn't need to be constrained into a folder for the clipname. Here is where I am really lost. Now, because I have to account for user error, I can't be entirely sure how deep in the tree I need to be going. I know I want the `<pathurl>~</pathurl>` which I can plug into the .nk (nuke) scripts I make. With red camera files they can either be the directly here .R3D or the folder structure which can been 2-3 bins deep. I know that I can't 100% rely on the guys on set to be consistent on how they make this bin. All I can trust them to do is make sure they are in correct alphabetic order. If you look at the xml so the order of them is important. I also know if I am looking at a R### roll bin that I need 3 `<pathurl></pathurl>` and if im looking inside G### I need 6 and for C### only 5. The order of them is important as they can rename the name tag inside `~~~~ to rename cameras that were the wrong setting without renaming source files. (which breaks important metadata that is needed in other programs) While in this part of the tree I'd also like to grab the `<clip id=~><in>###</in>` to grab the in marker frame offset. If the cameras have gone out of sync and their start points can be set. But of course this tag is not child to the `<pathurl></pathurl>` and is actually 3 parents up! Also this tag won't be on every clip so I can't look for it first! <clip id="masterclip-40" explodedTracks="true" frameBlend="FALSE"> <uuid>85f87acc-308f-401e-bf82-55e8ea41e55a</uuid> <masterclipid>masterclip-40</masterclipid> <ismasterclip>TRUE</ismasterclip> <duration>5355</duration> <rate> <timebase>30</timebase> <ntsc>TRUE</ntsc> </rate> <in>876</in> <name>B002_C002_0216AM_002.R3D</name> <media> <video> <track> <clipitem id="clipitem-118" frameBlend="FALSE"> <masterclipid>masterclip-40</masterclipid> <name>B002_C002_0216AM_002.R3D</name> <rate> <timebase>30</timebase> <ntsc>TRUE</ntsc> </rate> <alphatype>none</alphatype> <pixelaspectratio>square</pixelaspectratio> <anamorphic>FALSE</anamorphic> <file id="file-40"> <name>B002_C002_0216AM_002.R3D</name> <pathurl>file://localhost/Volumes/REDLAB_3A/SESQUI/MASTER_FILES/DAY_00/RED/R002/B002/B002_0216G4.RDM/B002_C002_0216AM.RDC/B002_C002_0216AM_002.R3D</pathurl> So once I've parsed all this the information I'd like to have is. * The original bin folder structure of the XML contained in the `daynumber`. Take the names of the bins and construct the same folder structure in the `nk_output_dir` (Day_00/R001/R001_C001 etc etc) * I also want to make a `daynumber` directory in the `dnx_render_dir` folder and a directory for each bin referencing a camera roll. 
* Based on whether the clip name starts with an R, G or C, I want to be able to access that for selecting what kind of .nk to make.
* I want the pathurl information for each bin that refers to a clip. I also want any `<in>` information, if there is any for that clip. That way I can plug it into the read node information for my nuke gizmo.

I think once I figure out how to parse such a complicated XML tree I'll be able to fuss and fumble the rest of the process. I am just really struggling to find examples of parsing a complicated XML file like this.

Answer: Whenever faced with a complex XML, consider an XSLT script to transform your XML into a simpler structure. For background, [XSLT](https://www.w3.org/TR/xslt) is a special-purpose, declarative language (the same type as SQL) designed to transform XML into various structures for end-use needs. Python, like other general-purpose languages, maintains an XSLT processor, specifically in its [lxml](http://lxml.de/) module. While this transformation does not address your entire needs, you can parse the simpler structure for your Nuke application needs. Directories and names are simplified and labeled for daynumber, rollnumber, shotnames, and clips with pathurls.

**XSLT** script _(save as .xsl or .xslt to be referenced in .py script below)_

    <xsl:transform xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
    <xsl:output version="1.0" encoding="UTF-8" indent="yes" />
    <xsl:strip-space elements="*"/>
    <xsl:key name="idkey" match="ctype" use="@id" />

      <xsl:template match="/">
        <root>
          <xsl:apply-templates select="*"/>
        </root>
      </xsl:template>

      <xsl:template match="xmeml/bin">
        <daynumber><xsl:value-of select="name"/></daynumber>
        <xsl:apply-templates select="children/bin"/>
      </xsl:template>

      <xsl:template match="xmeml/bin/children/bin">
        <roll>
          <rollnumber><xsl:value-of select="name"/></rollnumber>
          <rollnumberdir><xsl:value-of select="concat(ancestor::bin/name, '/', name)"/></rollnumberdir>
          <xsl:apply-templates select="children/bin"/>
        </roll>
      </xsl:template>

      <xsl:template match="xmeml/bin/children/bin/children/bin">
        <shot>
          <shotname><xsl:value-of select="name"/></shotname>
          <shotnamedir><xsl:value-of select="concat(/xmeml/bin/name, '/', /xmeml/bin/children/bin/name, '/', name)"/></shotnamedir>
          <xsl:apply-templates select="descendant::clip[position() &lt; 4]"/>
        </shot>
      </xsl:template>

      <xsl:template match="clip">
        <clip>
          <clipname><xsl:value-of select="descendant::name"/></clipname>
          <xsl:copy-of select="in"/>
          <pathurl><xsl:value-of select="descendant::pathurl"/></pathurl>
        </clip>
      </xsl:template>

    </xsl:transform>

**Python** script _(transform, parse, and export simpler structure)_

    #!/usr/bin/python
    import lxml.etree as ET

    # LOAD INPUT XML AND XSLT
    dom = ET.parse('Input.xml')
    xslt = ET.parse('XSLTScript.xsl')

    # TRANSFORM XML (SIMPLER NEWDOM CAN BE FURTHER PARSED: ITER(), FINDALL(), XPATH())
    transform = ET.XSLT(xslt)
    newdom = transform(dom)

    # XPATH EXPRESSIONS (LIST OUTPUTS)
    daynumber = newdom.xpath('//daynumber/text()')  # ['Day_00']
    rolls = newdom.xpath('//rollnumber/text()')     # ['R001', 'R002']
    shots = newdom.xpath('//shotname/text()')       # ['R001_C001', 'R002_C001', 'R002_C002']

    # CONVERT TO STRING (IF NEEDED)
    tree_out = ET.tostring(newdom, encoding='UTF-8', pretty_print=True, xml_declaration=True)
    print(tree_out.decode("utf-8"))

    # OUTPUT TO FILE (IF NEEDED)
    xmlfile = open('Output.xml', 'wb')
    xmlfile.write(tree_out)
    xmlfile.close()

**TRANSFORMED XML** _(contained in newdom object in .py script)_

    <?xml version='1.0' encoding='UTF-8'?>
    <root>
      <daynumber>Day_00</daynumber>
      <roll>
        <rollnumber>R001</rollnumber>
        <rollnumberdir>Day_00/R001</rollnumberdir>
        <shot>
          <shotname>R001_C001</shotname>
          <shotnamedir>Day_00/R001/R001_C001</shotnamedir>
          <clip>
            <clipname>A002_C001_0216MW_001.R3D</clipname>
            <pathurl>file://localhost/Volumes/REDLAB_3A/SESQUI/MASTER_FILES/DAY_00/RED/R001/A002/A002_0216FE.RDM/A002_C001_0216MW.RDC/A002_C001_0216MW_001.R3D</pathurl>
          </clip>
          <clip>
            <clipname>A002_C001_0216MW_002.R3D</clipname>
            <pathurl>file://localhost/Volumes/REDLAB_3A/SESQUI/MASTER_FILES/DAY_00/RED/R001/A002/A002_0216FE.RDM/A002_C001_0216MW.RDC/A002_C001_0216MW_002.R3D</pathurl>
          </clip>
          <clip>
            <clipname>A002_C001_0216MW_003.R3D</clipname>
            <pathurl>file://localhost/Volumes/REDLAB_3A/SESQUI/MASTER_FILES/DAY_00/RED/R001/A002/A002_0216FE.RDM/A002_C001_0216MW.RDC/A002_C001_0216MW_003.R3D</pathurl>
          </clip>
        </shot>
      </roll>
      <roll>
        <rollnumber>R002</rollnumber>
        <rollnumberdir>Day_00/R002</rollnumberdir>
        <shot>
          <shotname>R002_C001</shotname>
          <shotnamedir>Day_00/R001/R002_C001</shotnamedir>
          <clip>
            <clipname>A003_C001_0216XI_001.R3D</clipname>
            <pathurl>file://localhost/Volumes/REDLAB_3A/SESQUI/MASTER_FILES/DAY_00/RED/R002/A003/A003_0216XO.RDM/A003_C001_0216XI.RDC/A003_C001_0216XI_001.R3D</pathurl>
          </clip>
          <clip>
            <clipname>B002_C001_02169H_002.R3D</clipname>
            <pathurl>file://localhost/Volumes/REDLAB_3A/SESQUI/MASTER_FILES/DAY_00/RED/R002/B002/B002_0216G4.RDM/B002_C001_02169H.RDC/B002_C001_02169H_002.R3D</pathurl>
          </clip>
          <clip>
            <clipname>C002_C001_02168R_001.R3D</clipname>
            <pathurl>file://localhost/Volumes/REDLAB_3A/SESQUI/MASTER_FILES/DAY_00/RED/R002/C002/C002_0216RL.RDM/C002_C001_02168R.RDC/C002_C001_02168R_001.R3D</pathurl>
          </clip>
        </shot>
        <shot>
          <shotname>R002_C002</shotname>
          <shotnamedir>Day_00/R001/R002_C002</shotnamedir>
          <clip>
            <clipname>C002_C002_0216M9_001.R3D</clipname>
            <pathurl>file://localhost/Volumes/REDLAB_3A/SESQUI/MASTER_FILES/DAY_00/RED/R002/C002/C002_0216RL.RDM/C002_C002_0216M9.RDC/C002_C002_0216M9_001.R3D</pathurl>
          </clip>
          <clip>
            <clipname>C002_C002_0216M9_002.R3D</clipname>
            <pathurl>file://localhost/Volumes/REDLAB_3A/SESQUI/MASTER_FILES/DAY_00/RED/R002/C002/C002_0216RL.RDM/C002_C002_0216M9.RDC/C002_C002_0216M9_002.R3D</pathurl>
          </clip>
          <clip>
            <clipname>C002_C002_0216M9_003.R3D</clipname>
            <pathurl>file://localhost/Volumes/REDLAB_3A/SESQUI/MASTER_FILES/DAY_00/RED/R002/C002/C002_0216RL.RDM/C002_C002_0216M9.RDC/C002_C002_0216M9_003.R3D</pathurl>
          </clip>
        </shot>
      </roll>
    </root>
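From here, pulling out the clip data the question asks for is a straightforward continuation of the Python script above (a sketch; `newdom` is the transformed tree from that script):

    # pathurls and clip names per shot
    for shot in newdom.xpath('//shot'):
        shotname = shot.findtext('shotname')
        pathurls = shot.xpath('clip/pathurl/text()')
        in_points = shot.xpath('clip/in/text()')  # only present when an <in> offset was set
        print(shotname, pathurls, in_points)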
Why does my repository object return Nonetype with github3.py? Question: Using github3.py version 0.9.5 [documentation](http://github3py.readthedocs.org/en/0.9.5/repos.html), I'm trying to create a repository object but it keeps returning `Nonetype` and therefore I am unable to access the contents of the repository. There doesn't seem to be any other posts on StackOverflow, or conversations on the library's GitHub issues that address this problem. `AttributeError: 'NoneType' object has no attribute 'contents'` is the exact error I received. On the line that says `repo = repository('Django', auth)` I tried changing **auth** with **fv4** but that doesn't change anything other. #!/usr/bin/env python from github3 import authorize, repository, login from pprint import PrettyPrinter as ppr import github3 from getpass import getuser pp = ppr(indent=4) username = 'myusername' password = 'mypassword' scopes = ['user', 'repo', 'admin:public_key', 'admin:repo_hook'] note = 'github3.py test' note_url = 'http://github.com/FreddieV4' print("Attemping authorization...") token = id = '' with open('CREDENTIALS.txt', 'r') as fi: token = fi.readline().strip() id = fi.readline().strip() print("AUTH token {}\nAUTH id {}\n".format(token, id)) print("Attempting login...\n") fv4 = login(username, password, token=token) print("Login successful!", str(fv4), '\n') print("Attempting auth...\n") auth = fv4.authorization(id) print("Auth successful!", auth, '\n') print("Reading repo...\n") repo = repository('Django', auth) print("Repo object...{}\n\n".format(dir(repo))) print("Repo...{}\n\n".format(repo)) contents = repo.contents('README.md') pp.pprint('CONTENTS {}'.format(contents)) contents.update('Testing github3.py', contents) #print("commit: ", commit) Answer: So there are a few things up with your code, but let me help you with your immediate problem first and then I'll move on to the other issues. You're using `github3.repository` in the line you're confused about. Let's look at the [documentation](https://github3py.readthedocs.org/en/0.9.5/api.html#github3.repository) for that specific function (which you can also see by calling `help(repository)`). You'll see that `repository` expects two arguments `owner` and `repository` and describes them as the owner of the repository and the name of the repository itself. So in your usage you would do repo = repository('Django', 'Django') But where does that leave your authentication credentials... Well here's the other thing, you're doing fv4 = login(username, password, token) You only need to specify some of those arguments. If you want to use a token then do fv4 = login(token=token) Or if you want to use basic authentication fv4 = login(username, password) Both will work just fine. If you want to continue to be authenticated you can then do repo = fv4.repository('Django', 'Django') Because `fv4` is a `GitHub` object which is documented [here](https://github3py.readthedocs.org/en/0.9.5/github.html#github-object) and which the `repository` function uses underneath everything. So that should help you get through most of your problems. * * * Note that in the documented examples for github3.py we usually call the result of `login()` `gh`. This is because `gh` is just a `GitHub` object with credentials stored. It isn't your user or anything like that. That would be (on your version of github3.py) `fv4 = gh.user()`. (If someone else is reading this and using a version of github3.py 1.0 (currently in pre-release) then it would be `fv4 = gh.me()`.)
Python random character string repeated 7/2000 records Question: I am using the below to generate a random set of characters and numbers: tag = ''.join([random.choice(string.ascii_letters + string.digits) for n in xrange(36)]) I thought that this was a decent method. 36 character length, with each character being one of 36 unique options. Should be a good amount of randomness, right? Then, I was running a query off an instance with what I thought was a unique `tag`. Turns out, there were **SEVEN (7)** records with the same "random" `tag`. So, I opened the DB, and ran a query to see the repeatability of my tags. Turns out that not only does mine show up 7 times, but there are a number of tags that repeatedly appear over and over again. With approximately 2000 rows, it clearly should not be happening. Two questions: **(1) What is wrong with my approach, and why would it be repeating the same`tag` so often?** **(2) What would be a better approach to get unique tags for each record?** Here is the code I am using to save this to the DB. While it is written in Django, clearly this is not a django related question. class Note(models.Model): ... def save(self, *args, **kwargs): import random import string self.tag = ''.join([random.choice(string.ascii_letters + string.digits) for n in xrange(36)]) super(Note, self).save(*args, **kwargs) Answer: The problem with your approach: 1. true randomness/crypto is hard, you should try to use tested existing solutions instead of implementing your own. 2. Randomness isn't guaranteed - while 'unlikely', there's nothing preventing the same string to be generated more than once. A better solution would be to not reinvent the wheel, and use the [uuid](https://docs.python.org/2/library/uuid.html) module, a common solution to generating unique identifiers: import uuid tag = uuid.uuid1()
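Applied to the Django model in the question, a sketch using `uuid4` (the fully random variant, whereas the `uuid1` shown above mixes in the host's MAC address and a timestamp):

    import uuid

    from django.db import models

    class Note(models.Model):
        # unique=True lets the database enforce uniqueness as a backstop
        tag = models.CharField(max_length=36, unique=True, editable=False)

        def save(self, *args, **kwargs):
            if not self.tag:
                self.tag = str(uuid.uuid4())  # 36 chars, e.g. 'f47ac10b-58cc-4372-a567-0e02b2c3d479'
            super(Note, self).save(*args, **kwargs)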
Jinja2 templating not working in Django 1.9

Question: Per the instructions on the [Django 1.9 tutorial](https://docs.djangoproject.com/en/1.9/topics/templates/#django.template.backends.jinja2.Jinja2) I've added another file in the project root with the Environment settings:

    from __future__ import absolute_import  # Python 2 only

    from django.contrib.staticfiles.storage import staticfiles_storage
    from django.core.urlresolvers import reverse
    from jinja2 import Environment

    def environment(**options):
        env = Environment(**options)
        env.globals.update({
            'static': staticfiles_storage.url,
            'url': reverse,
        })
        return env

(Granted, to load the proper `jinja2` I had to name the file something different, in this case `jinja2env.py` in the project root.)

And I updated `settings.py` with the new templating backend:

    TEMPLATES = [
        {
            'BACKEND': 'django.template.backends.django.DjangoTemplates',
            'DIRS': [os.path.join(PROJECT_ROOT, 'templates').replace('\\', '/')],
            'APP_DIRS': True,
            'OPTIONS': {
                'context_processors': [
                    'django.template.context_processors.debug',
                    'django.template.context_processors.request',
                    'django.contrib.auth.context_processors.auth',
                    'django.contrib.messages.context_processors.messages',
                ],
            },
        },
        {
            'BACKEND': "django.template.backends.jinja2.Jinja2",
            'DIRS': [os.path.join(PROJECT_PATH, 'campaigns/templates').replace('\\', '/')],
            "APP_DIRS": True,
            "OPTIONS": {
                'environment': 'jinja2env.Environment',
            }
        },

In the view I'm working on I use the `using` parameter to specify the `jinja2` templating engine:

    return render(request, 'jinja2/index.html',
                  context={'projects': projects, 'counter': 0},
                  status=200, using='jinja2')

Yet when the template goes to render I get the error `'static' is undefined`. Clearly my setup is wrong or I am not doing something correctly. The template starts as such:

    <link rel="stylesheet" type="text/css" href="{{ static('stylesheets/main.css') }}">

What am I doing wrong? I don't use `{% load static %}` since it isn't a Django template ... so I'm at a loss.

Answer: Based on your settings and the accepted answer to [this question](http://stackoverflow.com/questions/6261823/static-url-not-working), it appears you should try adding the `static` context processor.
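One more thing worth double-checking, straight from the question itself: the settings reference `'environment': 'jinja2env.Environment'` (capital E, which resolves to the imported `jinja2.Environment` class), while the helper that registers `static` and `url` is the lowercase function `def environment(**options)`. If the capitalized path is used, the globals update never runs, which would also produce `'static' is undefined`. A sketch of the matching settings entry:

    "OPTIONS": {
        'environment': 'jinja2env.environment',  # lowercase: the helper function, not the class
    },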
Why is the ipython output of set different from the __repr__ or __str__ of the set? Question: I am running the following code in ipython and am surprised at the print outputs and the ipython cell outputs of the code: print set(["A", "B", "C"]) print set(["A", "C", "B"]) print list(set(["A", "C", "B"])) print list(set(["A", "B", "C"])) print [k for k in set(["A", "C", "B"])] print [k for k in set(["A", "B", "C"])] a = set(["A", "B", "C"]) print a print a.__repr__() print a.__str__() print [(k, hash(k)) for k in a] a The output of the above program is as follows: set(['A', 'C', 'B']) set(['A', 'C', 'B']) ['A', 'C', 'B'] ['A', 'C', 'B'] ['A', 'C', 'B'] ['A', 'C', 'B'] set(['A', 'C', 'B']) set(['A', 'C', 'B']) set(['A', 'C', 'B']) [('A', -269909568), ('C', -13908798), ('B', -141909181)] Out[34]: {'A', 'B', 'C'} Note, that the cell output is `{'A', 'B', 'C'}` while the printed output is `set(['A', 'C', 'B'])` My Python details are as follows: import sys print sys.version 2.7.11 |Anaconda 2.3.0 (64-bit)| (default, Jan 29 2016, 14:26:21) [MSC v.1500 64 bit (AMD64)] Answer: IPython adds some magic from time-to-time to make things more readable. In this case it's showing you a set literal (new in python2.7) Here's the code that makes that happen: <https://github.com/ipython/ipython/blob/f49962dc931870a1eba4b6467ce302c8ae095b3f/IPython/lib/pretty.py#L560>
Finding Term Frequency and Inverse Document Frequency Utilizing NLTK (Python 3.5) Question: I'm trying to utilize NLTK to perform term frequency (TF) and inverse document frequency (IDF) analyses on a batch of files (they happen to be corporate press releases from IBM). I know that the assertion of whether or not NLTK has TF-IDF capabilities [has been disputed on SO beforehand](http://stackoverflow.com/questions/29570207/does-nltk-have-tf-idf-implemented), but I've found docs indicating the module does have them: <http://www.nltk.org/_modules/nltk/text.html> <http://www.nltk.org/api/nltk.html#nltk.text.TextCollection> I've never seen or used `self` or `__init__` to execute code before. This is what I have so far. Any advice on how to amend this code so it works is very much appreciated. What I currently have doesn't return anything. I don't really understand what "source", "self", "term" and "text" in the NLTK docs represent.

    import nltk.corpus
    from nltk.text import TextCollection
    from nltk.corpus import gutenberg

    gutenberg.fileids()
    ibm1 = gutenberg.words('ibm-github.txt')
    ibm2 = gutenberg.words('ibm-alior.txt')
    mytexts = TextCollection([ibm1, ibm2])
    term = 'software'

    def __init__(self, source):
        if hasattr(source, 'words'):
            source = [source.words(f) for f in source.fileids()]
        self._texts = source
        Text.__init__(self, LazyConcatenation(source))
        self._idf_cache = {}

    def tf(self, term, mytexts):
        result = mytexts.count(term) / len(mytexts)
        print(result)

Answer:

    from nltk.text import TextCollection
    from nltk.book import text1, text2, text3

    mytexts = TextCollection([text1, text2, text3])

    # Print the IDF of a word
    print(mytexts.idf("Moby"))

    # tf_idf
    print(mytexts.tf_idf("Moby", text1))
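To connect this back to the question: the `__init__` and `tf` shown in the linked docs are methods of the `TextCollection` class itself. You don't write them yourself; you instantiate the class and call its methods, as above. Besides `idf` and `tf_idf`, plain term frequency is available the same way:

    # term frequency of "Moby" within text1 (count divided by text length)
    print(mytexts.tf("Moby", text1))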
'str' object has no attribute 'subs' Question: Here is my Python code, written in Vim. Whenever I run it, I get the error `'str' object has no attribute 'subs'`

    from sympy import *
    x,a_test,b_test,fa_test,fb_test=symbols('x a_test b_test fa_test fb_test')
    expr=raw_input("enter the equation")
    print expr
    print "hello"
    try:
        print "hello"
        inc=0
        a=inc
        fa=expr.subs(x,inc)
        print "hello"
        if(fa<0):
            print "hello"
            inc+=1
            fb=expr.subs(x,inc)
            if(fb<=0):
                while(fb<=0):
                    inc+=1
        else:
            print "hello"
            inc+=1
            fb=expr.subs(x,inc)
            if(fb<=0):
                while(fb<=0):
                    inc+=1
        b=inc
        print a
        print b
        print fa
        print fb
    except Exception,e:
        print e

Answer: The return value of `raw_input` is a `str`; you cannot use it as if it were a SymPy expression. You need to [parse](http://docs.sympy.org/dev/modules/parsing.html) it first:

    from sympy.parsing.sympy_parser import parse_expr
    ...
    expr = parse_expr(raw_input("enter the equation"))
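A quick sanity check that the parsed expression then supports `subs` (with an illustrative input, since the actual equation comes from the user):

    from sympy import symbols
    from sympy.parsing.sympy_parser import parse_expr

    x = symbols('x')
    expr = parse_expr("x**2 - 2")
    print expr.subs(x, 0)  # -2
    print expr.subs(x, 2)  # 2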
Printing out a proper bill producing program in python Question: relatively new to programming in python, thank you for all the fast help that was provided on my last question I had on another python project. Anyways, Ive written a new program for a project in python that produces a bill for a catering venue. This is my code below, everything runs fine, and I get the intended results required for the project, The two problems I am experience are, 1. I need the cost of the desert to not print---> 3.0 but ---> $3.00, essentially, how can I print dollar signs, and round e.x 3.0 --> 3.00, or 45.0--> 45.00..and with dollar signs before prices. Sorry if something like this has been asked.. import math # constants Cost_Per_Desert = 3.00 Tax_Rate = .075 Gratuity_Tips = .15 Adult_Meal_Cost = 12.75 Child_Meal_Cost = .60*12.75 Room_Fee = 450.00 Less_Deposit = 250.00 def main(): # Input Section Name = input("\n\n Customer:\t\t\t ") Number_Of_Adults = int(input(" Number of Adults:\t\t ")) Number_Of_Children = int(input(" Number of Children:\t\t ")) Number_Of_Deserts = int(input(" Number of Deserts:\t\t ")) print("\n\nCost Of Meal Per Adult:\t\t" , Adult_Meal_Cost) print("Cost of Meal Per Child:\t\t" , round(Child_Meal_Cost,2)) print("Cost Per Desert:\t\t" , round(Cost_Per_Desert,2)) # Processing/Calculations Total_Adult_Meal_Cost = Adult_Meal_Cost* Number_Of_Adults Total_Child_Meal_Cost = Child_Meal_Cost* Number_Of_Children Total_Desert_Cost = Cost_Per_Desert* Number_Of_Deserts Total_Food_Cost = Total_Adult_Meal_Cost + Total_Child_Meal_Cost + Total_Desert_Cost Total_Taxes = Total_Food_Cost * Tax_Rate Tips = Total_Food_Cost * Gratuity_Tips Total_Bill = Total_Food_Cost + Total_Taxes + Tips + Room_Fee # Output Section print("\n\n Total Cost for Adult Meals: \t", Total_Adult_Meal_Cost) print(" Total Cost for Childs Meals: \t", Total_Child_Meal_Cost) print(" Total Cost for Desert: \t", Total_Desert_Cost) print(" Total Food Cost: \t\t", Total_Food_Cost) print("\n\n Plus 7.5% Taxes: \t\t", round(Total_Taxes,2)) print(" Plus 15.0% Tips: \t\t", round(Tips,2)) print(" Plus Room Fee: \t\t", Room_Fee) print("\n\n Total Bill: \t\t\t", round(Total_Bill,2)) print(" Less Deposit: \t\t\t", Less_Deposit) print("\n\nBalance Due: \t\t\t", round(Total_Bill - Less_Deposit,2)) print("\n\n\n\n\t\t Thank You For Using Passaic County Catering Services. ") main() input("\n\n\n\n\nPress Enter to Continue") Answer: Let's say cost of desert is $3.00 cost = 3 print("${0:.2f}".format(cost)) Output: $3.00
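Applied to the output section of the script above (same variable names), the totals can be printed with dollar signs and two decimal places like so; the optional comma in `:,.2f` also adds thousands separators, which reads nicely on larger bills:

    print(" Total Bill: \t\t\t ${0:,.2f}".format(Total_Bill))
    print(" Less Deposit: \t\t\t ${0:,.2f}".format(Less_Deposit))
    print("\n\nBalance Due: \t\t\t ${0:,.2f}".format(Total_Bill - Less_Deposit))

Drop the comma from the format spec if you don't want the separators.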
sha1 collision of less than 512 bit Question: In the Wikipedia [SHA-1 pseudocode](https://en.wikipedia.org/wiki/SHA-1), it says:

> Pre-processing: append the bit '1' to the message e.g. by adding 0x80 if message length is a multiple of 8 bits. append 0 ≤ k < 512 bits '0', such that the resulting message length in bits is congruent to 448 (mod 512)

So is the padding the same for `\x01` and `\x00\x01`? Python example:

    import hashlib
    for s in ('01', '0001'):
        m=hashlib.sha1()
        m.update(s.decode('hex'))
        print m.hexdigest()

    >>>bf8b4530d8d246dd74ac53a13471bba17941dff7
    >>>0e356ba505631fbf715758bed27d503f8b260e3a

It turns out not to be the same. Why?

Answer: **The byte strings `\x01` and `\x00\x01` cannot be the same after pre-processing.** I think you misunderstand the pseudocode in the [Wiki](https://en.wikipedia.org/wiki/SHA-1). Take `\x01` and `\x00\x01` as examples; both are shorter than `512` bits. (Suppose for simplicity that the original message length is less than `448` bits.)

 * First step: pad so that the length is **congruent to 448 (mod 512)**. Append the `1` bit first, then `0` bits. Thus `\x01` becomes `\x0180 0000...0000`, 448 bits long, where the trailing digits are `0` in **hex**.
 * Second step: append the length. The remaining `(512 - 448) = 64` bits encode the original message length in bits. For `\x01` the original length is `8`, so `\x00000000 00000008` is appended.

From the above, after pre-processing `\x01` becomes `\x0180 0000...0008`, one full 512-bit block, while `\x00\x01` pads to `\x0001 8000...0010` (its original length is 16 bits). Different input blocks are hashed, so the digests differ.
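You can reproduce the pre-processing yourself to see that the two inputs diverge before any hashing happens. A sketch of just the padding step (not a full SHA-1 implementation):

    import struct

    def sha1_pad(msg):
        bit_len = len(msg) * 8
        msg += '\x80'                           # the '1' bit, followed by seven '0' bits
        msg += '\x00' * ((56 - len(msg)) % 64)  # '0' bits up to 448 mod 512
        msg += struct.pack('>Q', bit_len)       # original length as a 64-bit big-endian int
        return msg

    print sha1_pad('\x01') == sha1_pad('\x00\x01')  # False: different 512-bit blocks get hashed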
Python - Unzip .gz files in parallel Question: I have multiple .gz files that add up to 1TB in total. How can I utilize Python 2.7 to unzip these files in parallel? Looping over the files takes too much time. I tried this code as well:

    filenames = [gz for gz in glob.glob(filesFolder + '*.gz')]

    def uncompress(path):
        with gzip.open(path, 'rb') as src, open(path.rstrip('.gz'), 'wb') as dest:
            shutil.copyfileobj(src, dest)

    with multiprocessing.Pool() as pool:
        for _ in pool.imap_unordered(uncompress, filenames, chunksize=1):
            pass

However I get the following error:

    with multiprocessing.Pool() as pool:

    AttributeError: __exit__

Thanks!

Answer: To be used in a `with` construct, an object must have `__enter__` and `__exit__` methods. The error says that the `Pool` class (or instance) doesn't have these, so you can't use it in a `with` statement. (`Pool` only gained context-manager support in Python 3.3, so the `with` form works there but not on 2.7.) Try this (the with statement is removed and the missing `gzip` import added):

    import glob, gzip, multiprocessing, shutil

    filenames = glob.glob('./*.gz')

    def uncompress(path):
        # note: rstrip('.gz') strips *characters*, not the suffix, so use slicing instead
        with gzip.open(path, 'rb') as src, open(path[:-3], 'wb') as dest:
            shutil.copyfileobj(src, dest)

    for _ in multiprocessing.Pool().imap_unordered(uncompress, filenames, chunksize=1):
        pass

**EDIT** I agree with @dhke: unless all (or most) of the gz files are physically adjacent on disk, the frequent disk reads from different locations (which happen more often when using multiprocessing) will be slower than doing these operations file by file, one by one (serially).
moving average of 3 elements by C or Python Question: I want to calculate the moving average of 3 elements. For example, I have a 25 elements of sales data. I need to calculate the moving average taken from averaging these 25 elements of data. When a real array is given as data, I want to write a program that will determines a 3 element moving average and creates an array. The number of elements in the array becomes 2 elements shorter than the given sequence. For example, if I am given: [7.0, 9.0, 5.0, 1.0, 3.0] I want to get: [7.0, 5.0, 3.0] Answer: The best (and fastest, by far) way to approach this is [convolution](https://en.wikipedia.org/wiki/Convolution). Using [numpy's convolve](http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.convolve.html): import numpy as np x = np.asarray([7.0, 9.0, 5.0, 1.0, 3.0]) # create what's known as the convolution 'kernel' # note that this sums to 1, which results in an average kernel = np.ones(3) / 3 # do the convolution to compute the moving average moving_avg = np.convolve(x, kernel, mode='valid') You can view the convolution operation as the kernel "sliding" over the data sequence. Every point `moving_avg[k]` in the output of the convolution will be the area under the product between your data and the kernel, when the kernel is centered at that point `k`. This is an animation (from the wikipedia article linked above) illustrating the principle for the square kernel used in moving average computation: [![enter image description here](http://i.stack.imgur.com/9truS.gif)](http://i.stack.imgur.com/9truS.gif)
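For the example sequence given above, this yields exactly the expected result:

    print(moving_avg)  # [ 7.  5.  3.]

since (7+9+5)/3 = 7, (9+5+1)/3 = 5 and (5+1+3)/3 = 3, and `mode='valid'` keeps only the positions where the kernel fully overlaps the data, which is why the output is two elements shorter than the input.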
How to remove an app from a Django project (and all its tables) Question: I want to remove an app from a Django project. I want to remove

 * the tables of the app
 * the content-types
 * foreign-key usages of these content-types

Running `manage.py migrate app_to_remove zero` does not work:

    django.db.migrations.migration.IrreversibleError: Operation <RunPython <function forwards_func at 0x7ff76075d668>> in fooapp.0007_add_bar is not reversible

I guess there are several migrations which are not reversible ...

Answer: # First: Remove references in the code

 * remove `app_to_remove` from `settings.INSTALLED_APPS`
 * remove other references in `urls.py` or other places

# Second: Clean the database Create an empty migration for your django-project:

    manage.py makemigrations your_django_project --empty

Edit the file. Here is a template:

    # -*- coding: utf-8 -*-
    from __future__ import unicode_literals

    from django.db import migrations, models

    class Migration(migrations.Migration):

        dependencies = [
            ('your_django_project', '0001_initial'),
        ]

        operations = [
            migrations.RunSQL('''
            drop table if exists app_to_remove_table1;
            drop table if exists app_to_remove_table2;
            ....

            delete from auth_permission where content_type_id in (select id from django_content_type where app_label = '{app_label}');
            delete from django_admin_log where content_type_id in (select id from django_content_type where app_label = '{app_label}');
            delete from reversion_version where content_type_id in (select id from django_content_type where app_label = '{app_label}');
            delete from django_content_type where app_label = '{app_label}';
            delete from django_migrations where app='{app_label}';
            '''.format(app_label='app_to_remove'))
        ]

Run the migration, run tests.

About "drop table if exists": You have two cases:

 1. The production system: You want to drop the tables.
 2. New development systems: These systems never had this app, and they don't have this table :-)
Replace 4-5 lines of a large text file using python Question: I have a text file in which I need to change

    USER PROMPT [Program SIM GENTEST05]
    <Description />
    <MultiPartMessage>false</MultiPartMessage>
    <NumberOfMultiParts>0</NumberOfMultiParts>
    <Commented>false</Commented>
    <ProgrammingCmdData xsi:type="UserPromptData">
    <Prompt>Program SIM GENTEST05</Prompt>

to some customized values. I need a hint on how to do it in Python 2.7.

Answer: I would use the [fileinput](https://docs.python.org/2/library/fileinput.html) library; it would be something like this:

    import sys
    import fileinput

    # inplace=1 redirects stdout back into the file, so every line must be written out
    for line in fileinput.input('lorem_ipsum.txt', inplace=1):
        if "<MultiPartMessage>" in line:
            sys.stdout.write(line.replace(
                '<MultiPartMessage>false</MultiPartMessage>',
                '<MultiPartMessage>something</MultiPartMessage>'))
        elif "<NumberOfMultiParts>" in line:
            sys.stdout.write(line.replace(
                '<NumberOfMultiParts>0</NumberOfMultiParts>',
                '<NumberOfMultiParts>something</NumberOfMultiParts>'))
        else:
            sys.stdout.write(line)
Importing from a git sub-module (ImportError) Question: I get an `ImportError` when I am importing from a sub-module in the way I thought One Was Supposed To Do It. I have the following package: pkg/ __init__.py cow.py pizza.py pkg.py components/ components.py otherstuff.py __init__.py cow.py: print "Hello" from components import foodle components.py: foodle=5 and the `__init__`'s are empty. I am having trouble putting things in the right place or organizing them properly. When, from the pkg directory, I try from pkg import foodle I get "ImportError: cannot import name foodle" What is the right way to arrange files and import from submodules? I have read [How to import python file from git submodule](http://stackoverflow.com/questions/29746958/how-to-import-python- file-from-git-submodule) ; I have tried messing with `sys.path` in `components/__init__.py` and in `cow.py`, to no avail. This package is shared on git, so it needs to be portable. `components` is actually a git sub-module. Putting `from components import *` in the `__init__py` in components/ seems to work, but I thought usually that file stays empty. Answer: The elements I was missing are (these are my interpretation, may still be incorrect): * If it's a package (with `__init__.py`), use it from outside the `pkg` folder, not from inside. ie, using a package both ways (calling from outside and using modules from within) might be hard to set up, so don't. This is the main insight that solves my problem. * the dot notation for getting submodules and subpackages works both for files and for folders within `pkg`. Thus, from some other folder, but with `pkg` in my path, I can call any of the following: import pkg from pkg.cow import foodle from pkg.components import foodle from pkg.components.components import foodle
Amino acid sequences in pyplot (Same regions in both sequences) Question: I'm a beginner at Python, and need your help :) I have to compare two sequences; when an amino acid is the same in both sequences (no matter at which position), the program should mark it on the graph as a point. For example, if we have two sequences

    x=['M','N','K','M']
    y=['M','K','M']

the program should put points at positions (1,1) (because of 2 M's), (1,3) (same), (3,2) (because of 2 K's), (4,1) and (4,3). In the end, with a bigger sequence, it should look like this: <http://wrzucaj.net/images/2016/03/02/853cd842715edd881296c804502d475b.png> But I'm stuck; the result of my work looks like this: <http://wrzucaj.net/images/2016/03/02/005862962dea7e3a6eb24f3449704ef5.png> On the left side and bottom the sequences should be shown, but they are covered by other amino acid labels, and I don't know why.

    # Importing
    import wx
    import matplotlib
    matplotlib.use('TkAgg')
    import matplotlib.pyplot as plt

And this is how the program looks:

    def pajplot(evt):
        Ktory1=Lista.GetSelection()
        Ktory2=Listaa.GetSelection()
        a=Sekwencje[Ktory1][3]
        b=Sekwencje[Ktory2][3]
        x1=[]
        y1=[]
        x2=[]
        y2=[]
        for i in range(30):
            for j in range(30):
                if a[i]==b[j]:
                    x1.append(i)
                    y1.append(j)
                else:
                    x2.append(i)
                    y2.append(j)
        Sekwencja1=[]
        Sekwencja2=[]
        for i in range(30):
            Sekwencja1.append(a[i])
        for j in range(30):
            Sekwencja2.append(b[j])
        plt.plot(x1 ,y1, 'ko')
        plt.plot(x2 ,y2, 'wo')
        plt.xticks(x1, Sekwencja1,'ko')
        plt.yticks(y1, Sekwencja2,'wo')
        plt.show()

where Sekwencje is a list of lists holding names etc., with the sequence in 4th place. I tried to make it work for the first 30 amino acids to start. No need for a full answer; a little hint would do. Thank you in advance.

Answer: It looks like you are putting x- and y-ticks only at the coordinates of your matched amino acids, but you are using all the amino acid letters as labels. It is not entirely clear whether you want all of the labels (in their proper places) or only labels where you find a match. So you could do **one** of those two things depending on what you want: **Only put labels where there are matches**: in that case you need to change your definitions of `Sekwencja` 1 and 2:

    Sekwencja1 = [a[i] for i in x1]
    Sekwencja2 = [b[j] for j in y1]

**Put labels on everything**: in this case you need to use `range(30)` instead of `x1` and `y1` when setting the ticks. Note also that `xticks`/`yticks` only take tick positions and labels; the `'ko'`/`'wo'` style strings belong to `plot`, not here:

    plt.xticks(range(30), Sekwencja1)
    plt.yticks(range(30), Sekwencja2)
Find and rename files using a Python script Question: I am new to Python coding, so here is a question. I want to find files that are called "untitled" with any kind of extension, e.g. jpg, indd, psd, and then rename them to the date of the current day. I have tried the following:

    import os

    for file in os.listdir("/Users/shirin/Desktop/Artez"):
        if file.endswith("untitled.*"):
            print(file)

When I run the script, nothing happens.

Answer: You might find the [`glob`](https://docs.python.org/3.5/library/glob.html?highlight=glob.glob#glob.glob) function more useful in this situation:

    import glob

    for file in glob.glob("/Users/shirin/Desktop/Artez/untitled.*"):
        print(file)

Your function does not print anything because `str.endswith` does a literal suffix comparison; it does not expand wildcards, so no filename literally ends with the two characters `.*`. The `glob.glob()` function carries out the wildcard expansion for you. You can then use this to do your file renaming as follows:

    import glob
    import os
    from datetime import datetime

    current_day = datetime.now().strftime("%Y-%m-%d")
    for source_name in glob.glob("/Users/shirin/Desktop/Artez/untitled.*"):
        path, fullname = os.path.split(source_name)
        basename, ext = os.path.splitext(fullname)
        target_name = os.path.join(path, '{}{}'.format(current_day, ext))
        os.rename(source_name, target_name)
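One caveat with the rename loop above: if two untitled files share an extension, or the script runs twice on the same day, the targets collide, and on POSIX systems (like the Mac path shown) `os.rename` silently overwrites the existing file. A simple guard is to fold a counter into the name (illustrative variation):

    for i, source_name in enumerate(glob.glob("/Users/shirin/Desktop/Artez/untitled.*")):
        path, fullname = os.path.split(source_name)
        basename, ext = os.path.splitext(fullname)
        # e.g. 2016-03-02-0.jpg, 2016-03-02-1.psd, ...
        target_name = os.path.join(path, '{}-{}{}'.format(current_day, i, ext))
        os.rename(source_name, target_name)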
installing Theano with Enthought Canopy on Windows Question: I'm trying to install Theano, here is my situation. The system is Windows 10 (64-bit), with CUDA 7.5 installed with Visual Studio 2013. The Python distribution is Enthought Canopy (2.7.10, 32-bit) with pip, numpy (1.9.2-3) and scipy (0.17.0-2). The installation is as follows, 1\. install Theano (0.7) with pip; 2\. install mingw (4.8.1-2) and libpython (1.2) using enpkg tool; 3\. copy the newly created libpython27.a into ${PYTHONHOME}/Libs; 4\. edit and save .theanorc.txt under c:\users\${myName} as [global] devive=gpu floatX=float32 [blas] ldflags=${PYTHONHOME}\Scripts -lmk2_core -lmk2_intel_thread -lmk2_rt [nvcc] flags=-LC:\Users\${myName}\AppData\Local\Enthought\Canopy32\User\libs compiler_bindir=C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin [gcc] cxxflags= When I try to import theano, I get the following warning and error: > mod.cu(1019): warning: statement is unreachable > > mod.cu(1019): warning: statement is unreachable > > mod.cu LINK : fatal error LNK1181: cannot open input file 'cublas.lib' > > ['nvcc', '-shared', '-O3', > '-LC:\Users\${myName}\AppData\Local\Enthought\Canopy32\User\libs', '-- > compiler-bindir', 'C:\Program Files (x86)\Microsoft Visual Studio > 12.0\VC\bin', '-Xlinker', '/DEBUG', '-D HAVE_ROUND', '-m32', '-Xcompiler', > '-DCUDA_NDARRAY_CUH=11b90075e2397c684f9dc0f7276eab8f,-D > NPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION,/Zi,/MD', > '-IC:\Users\${myName}\AppData\Local\Enthought\Canopy32\User\lib\site- > packages\theano\sandbox\cuda', > '-IC:\Users\${myName}\AppData\Local\Enthought\Canopy32\App\appdata\canopy-1.6.2.3262.win-x86\lib\site- > packages\numpy\core\include', > '-IC:\Users\${myName}\AppData\Local\Enthought\Canopy32\App\appdata\canopy-1.6.2.3262.win-x86\include', > '-o', > 'C:\Users\${myName}\AppData\Local\Theano\compiledir_Windows-8-6.2.9200-Intel64_Family_6_Model_71_Stepping_1_GenuineIntel-2.7.10-32\cuda_ndarray\cuda_ndarray.pyd', > 'mod.cu', '-LC:\Users\${myName}\AppData\Local\Enthought\Canopy32\User\EGG- > INFO\mingw\usr\x86_64-w64-mingw32\lib', > '-LC:\Users\${myName}\AppData\Local\Enthought\Canopy32\App\appdata\canopy-1.6.2.3262.win-x86\libs', > '-LC:\Users\${myName}\AppData\Local\Enthought\Canopy32\App\appdata\canopy-1.6.2.3262.win-x86', > '-lpython27', '-lcublas', '-lcudart'] > > ERROR (theano.sandbox.cuda): Failed to compile cuda_ndarray.cu: ('nvcc > return status', 2, 'for cmd', 'nvcc -shared -O3 > -LC:\Users\${myName}\AppData\Local\Enthought\Canopy32\User\libs --compiler- > bindir C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin -Xlinker > /DEBUG -D HAVE_ROUND -m32 -Xcompiler > -DCUDA_NDARRAY_CUH=11b90075e2397c684f9dc0f7276eab8f,-D > NPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION,/Zi,/MD > -IC:\Users\${myName}\AppData\Local\Enthought\Canopy32\User\lib\site- > packages\theano\sandbox\cuda > -IC:\Users\${myName}\AppData\Local\Enthought\Canopy32\App\appdata\canopy-1.6.2.3262.win-x86\lib\site- > packages\numpy\core\include > -IC:\Users\${myName}\AppData\Local\Enthought\Canopy32\App\appdata\canopy-1.6.2.3262.win-x86\include > -o > C:\Users\${myName}\AppData\Local\Theano\compiledir_Windows-8-6.2.9200-Intel64_Family_6_Model_71_Stepping_1_GenuineIntel-2.7.10-32\cuda_ndarray\cuda_ndarray.pyd > mod.cu -LC:\Users\${myName}\AppData\Local\Enthought\Canopy32\User\EGG- > INFO\mingw\usr\x86_64-w64-mingw32\lib > -LC:\Users\${myName}\AppData\Local\Enthought\Canopy32\App\appdata\canopy-1.6.2.3262.win-x86\libs > 
-LC:\Users\${myName}\AppData\Local\Enthought\Canopy32\App\appdata\canopy-1.6.2.3262.win-x86 > -lpython27 -lcublas -lcudart') > > WARNING (theano.sandbox.cuda): CUDA is installed, but device gpu is not > available (error: cuda unavilable) I'm wondering if it's the 32-bit Python vs 64-bit system that causes the problem. Answer: As @Robert points out in the comment, the warning suggests a compatibility issue. I finally solved the problem by installing the 64-bit Enthought Canopy Python distribution; the other steps are the same as described in the question. It's worth mentioning that MinGW can be installed through the Enthought Canopy package manager as well, so you don't have to download a separate MinGW if you are using that distribution.
WARNING:oauth2client.util:build() takes at most 2 positional arguments (3 given) Question: I am doing the "Label Detection" tutorial for the Google Cloud Vision API. When I pass an image to the command like so I expect to get back some json telling me what is in the image. However, I am getting this error instead. >python label_request.py faulkner.jpg No handlers could be found for logger "oauth2client.util" WARNING:root:No module named locked_file Traceback (most recent call last): File "/usr/local/lib/python2.7/site-packages/googleapiclient/discovery_cache/__init__.py", line 38, in autodetect from . import file_cache File "/usr/local/lib/python2.7/site-packages/googleapiclient/discovery_cache/file_cache.py", line 32, in <module> from oauth2client.locked_file import LockedFile ImportError: No module named locked_file Traceback (most recent call last): File "label_request.py", line 44, in <module> main(args.image_file) File "label_request.py", line 18, in main service = build('vision', 'v1', http, discoveryServiceUrl=API_DISCOVERY_FILE) File "/usr/local/lib/python2.7/site-packages/oauth2client/util.py", line 140, in positional_wrapper return wrapped(*args, **kwargs) File "/usr/local/lib/python2.7/site-packages/googleapiclient/discovery.py", line 202, in build raise e googleapiclient.errors.HttpError: <HttpError 403 when requesting https://vision.googleapis.com/$discovery/rest?version=v1 returned "Project has not activated the vision.googleapis.com API. Please enable the API for project google.com:cloudsdktool (#32555940559)."> Lots going on here. But the Project API _is_ enabled. So this is part of the error message is erroneous. It seems that "there was a change in the newest version of the oauth2client, v2.0.0, which broke compatibility with the google-api-python-client module". <http://stackoverflow.com/a/35492604/2341218> I applied this fix ... pip install --upgrade git+https://github.com/google/google-api-python-client After applying this fix, I get fewer errors ... >python label_request.py faulkner.jpg No handlers could be found for logger "oauth2client.util" Traceback (most recent call last): File "label_request.py", line 44, in <module> main(args.image_file) File "label_request.py", line 18, in main service = build('vision', 'v1', http, discoveryServiceUrl=API_DISCOVERY_FILE) File "/usr/local/lib/python2.7/site-packages/oauth2client/util.py", line 137, in positional_wrapper return wrapped(*args, **kwargs) File "/usr/local/lib/python2.7/site-packages/googleapiclient/discovery.py", line 209, in build raise e googleapiclient.errors.HttpError: <HttpError 403 when requesting https://vision.googleapis.com/$discovery/rest?version=v1 returned "Project has not activated the vision.googleapis.com API. Please enable the API for project google.com:cloudsdktool (#32555940559)."> It appears that this error message: "No handlers could be found for logger "oauth2client.util" is actually masking a more detailed warning/error message and that I can see the more detailed one by adding this code ... 
import logging logging.basicConfig() <http://stackoverflow.com/a/29966147/2341218> >python label_request.py faulkner.jpg WARNING:oauth2client.util:build() takes at most 2 positional arguments (3 given) Traceback (most recent call last): File "label_request.py", line 47, in <module> main(args.image_file) File "label_request.py", line 21, in main service = build('vision', 'v1', http, discoveryServiceUrl=API_DISCOVERY_FILE) File "/usr/local/lib/python2.7/site-packages/oauth2client/util.py", line 137, in positional_wrapper return wrapped(*args, **kwargs) File "/usr/local/lib/python2.7/site-packages/googleapiclient/discovery.py", line 209, in build raise e googleapiclient.errors.HttpError: <HttpError 403 when requesting https://vision.googleapis.com/$discovery/rest?version=v1 returned "Project has not activated the vision.googleapis.com API. Please enable the API for project google.com:cloudsdktool (#32555940559)."> So no I am stuck on this error message: WARNING:oauth2client.util:build() takes at most 2 positional arguments (3 given) It has been suggested that this error can be avoided by using named parameters instead of positional notation. <http://stackoverflow.com/a/16643215/2341218> However, I am uncertain exactly where I might make this change. I don't actually see the oauth2client.util:build() function in the code. Here is the google code (slightly modified): >cat label_request.py import argparse import base64 import httplib2 from apiclient.discovery import build from oauth2client.client import GoogleCredentials import logging logging.basicConfig() def main(photo_file): '''Run a label request on a single image''' API_DISCOVERY_FILE = 'https://vision.googleapis.com/$discovery/rest?version=v1' http = httplib2.Http() credentials = GoogleCredentials.get_application_default().create_scoped( ['https://www.googleapis.com/auth/cloud-platform']) credentials.authorize(http) service = build('vision', 'v1', http, discoveryServiceUrl=API_DISCOVERY_FILE) with open(photo_file, 'rb') as image: image_content = base64.b64encode(image.read()) service_request = service.images().annotate( body={ 'requests': [{ 'image': { 'content': image_content }, 'features': [{ 'type': 'LABEL_DETECTION', 'maxResults': 1, }] }] }) response = service_request.execute() label = response['responses'][0]['labelAnnotations'][0]['description'] print('Found label: %s for %s' % (label, photo_file)) return 0 if __name__ == '__main__': parser = argparse.ArgumentParser() parser.add_argument( 'image_file', help='The image you\'d like to label.') args = parser.parse_args() main(args.image_file) Answer: I've had the exactly same problem and I just solved doing this code line (you have to have gcloud installed): gcloud auth activate-service-account --key-file <service-account file.json> and then: $ export GOOGLE_APPLICATION_CREDENTIALS=<path_to_service_account_file> Hope that helps!
python 'TypeError: argument must be string or read-only character buffer, not tuple' Question: I wrote this code using Python 2.7:

    class LoadBalancerHandler:
        def __init__(self, file_name):
            self.server_socket = socket.socket(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            file = open(file_name)
            setup_apps(file.read())

        def listen(self, host='localhost', port=80):
            self.server_socket.bind((host,port))
            self.server_socket.listen(5)
            while True:
                (client_socket, address) = self.server_socket.accept()
                threadHandling = ThreadHandling(client_socket, self)
                threadHandling.start()

but I get this error:

> TypeError: argument must be string or read-only character buffer, not tuple

This error is raised by the line `self.server_socket.bind((host,port))`.

Answer: I think your arguments to `socket.socket(...)` are incorrect: the constructor expects an address family and a socket type, but you are passing socket-option constants. On Linux, for example, `socket.SOL_SOCKET` happens to equal `1`, the same value as `AF_UNIX`, and a Unix-domain socket's `bind` expects a filesystem path string rather than a `(host, port)` tuple, which is exactly the TypeError you see. If you're trying to create a TCP listener, this works:

    import socket
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(('localhost', 5555))
    s.listen(5)
How to reshape this array with numpy? Question: My file looks like this

    3.766204987418e+00 3.016098278453e+00 2.882128625608e+00 2.789447648712e+00 2.704276250639e+00 2.540138572067e+00 2.318587560199e+00 2.272640952350e+00 2.138794921589e+00

I also have code to read it

    import numpy as np
    zxyr=[]
    with open('det.txt') as f:
        zxyr=list(f)
    c=np.asarray(zxyr)
    print c.shape
    print len(c)

which produces

    (2,)
    2

How do I reshape the array to have shape `(9,)`? EDIT I have tried

    import numpy as np
    c = np.loadtxt('det.txt')
    print (c.shape)

But

    Traceback (most recent call last):
      File "kiev.py", line 3, in <module>
        c = np.loadtxt('det.txt')
      File "/usr/lib/python2.7/dist-packages/numpy/lib/npyio.py", line 856, in loadtxt
        X = np.array(X, dtype)
    ValueError: setting an array element with a sequence.

Answer: As @B.M. said in the comments, you need to use `np.loadtxt`, as shown below:

    import numpy as np
    c = np.loadtxt('mytextfile.txt')
    print (c.shape)

This gets the data from the file and puts it into a numpy array. If the text file in my example is the same as yours, then it will have the shape `(9,)` as you require. (Your `ValueError` suggests that your real file spreads the values unevenly over two lines, which would also explain the `(2,)` shape you saw; `loadtxt` cannot build a rectangular array from ragged rows.) Another method, which handles an arbitrary split across lines:

    with open('mytextfile.txt') as f:
        content = f.readlines()
    c = np.asarray([float(v) for line in content for v in line.split()])
    print (c.shape)  # (9,) no matter how the values were split across lines
Incompatible GraphDef versions in Extend Question: I have some code which creates a graph to process some images and then iterates `sess.run()` in a loop to fetch batches of image tensors of shape [*, 299, 299, 3]. I'd like to then feed these images into the inception model. So, I added some code to load the inception model: def create_graph(): """"Creates a graph from saved GraphDef file and returns a saver.""" # Creates graph from saved graph_def.pb. print 'Loading graph...' with tf.Session() as sess: with gfile.FastGFile('/web/tensorflow_transfer/resources/classify_image_graph_def.pb', 'rb') as f: graph_def = tf.GraphDef() graph_def.ParseFromString(f.read()) _ = tf.import_graph_def(graph_def, name='') return sess.graph g = create_graph() for i in range(training_steps): sess.run(...) Now Im getting this error when running `run()`: File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 368, in run results = self._do_run(target_list, unique_fetch_targets, feed_dict_string) File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 420, in _do_run raise RuntimeError(compat.as_text(tf_session.TF_Message(status))) RuntimeError: Incompatible GraphDef versions in Extend: 1 != 0 Answer: This is most likely from using too old a version of TensorFlow to read in and run the graph -- the graph was created using a newer version of the GraphDef. Try upgrading to 0.7 or to HEAD and then run your code again.
Executing R script with Python via subprocess.call Question: I have an R script which works fine on its own, but I need it to be part of a Python script: when I run the Python script, the R script should be executed automatically. I use the command below; there is no error, but the R script's output files are not created.

    import subprocess
    retcode = subprocess.call("C:/Program Files/R/R-3.2.2/bin/Rscript --vanilla T:/2012.R", shell=True)

Thank you so much in advance.

Answer: Simply place your command in brackets and break the string into separate components, as the first parameter of the function expects a list of arguments, per the [doc](https://docs.python.org/2/library/subprocess.html):

> subprocess.call(args, *, stdin=None, stdout=None, stderr=None, shell=False)

    import subprocess
    retcode = subprocess.call(['C:/Program Files/R/R-3.2.2/bin/Rscript', '--vanilla', 'T:/2012.R'])

Alternatively, break it up into multiple strings:

    command = 'C:/Program Files/R/R-3.2.2/bin/Rscript'
    arg = '--vanilla'
    path2script = 'T:/2012.R'

    retcode = subprocess.call([command, arg, path2script])

Note that once the arguments are passed as a list, `shell=True` is unnecessary; dropping it also sidesteps shell quoting problems with the space in `Program Files`.
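If you also want to capture the script's standard output back into Python, `subprocess.check_output` accepts the same argument list (same paths as above; it raises `CalledProcessError` on a non-zero exit code):

    output = subprocess.check_output([command, arg, path2script], universal_newlines=True)
    print(output)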
How to prevent a QDoubleSpinBox from changing values Question: I've written a GUI for a script that does some geometrical calculations. Certain ranges of values break the computation (e.g. find the intersection of two shapes that don't intersect.) I raise exceptions in those cases. I'd like to prevent the user from adjusting the spinbox value beyond the point where exceptions are raised. I've tried overwriting the validator method for the QDoubleSpinBox. This works great when I manually enter values with the keyboard. But, it doesn't prevent me from clicking the up and down arrows. How I can limit the ability of the user to run-up the values outside of the acceptable range? Note: The actual `some_complicated_function` involves the values from 5 different spinboxes. from PyQt4 import QtCore, QtGui import sys def some_complicated_function(val_a): if val_a + 3 < 10: return True else: raise Exception("Giant number!") class SpinBoxSpecial(QtGui.QDoubleSpinBox): def validate(self, value, pos): # print float(value) try: some_complicated_function(float(value)) print "yup" return QtGui.QValidator.Acceptable, QtGui.QValidator.Acceptable except: print "nope" return QtGui.QValidator.Invalid, QtGui.QValidator.Invalid a = QtGui.QApplication(sys.argv) w = QtGui.QMainWindow() w.resize(320, 100) w.setWindowTitle("PyQT Python Widget!") spinbox = SpinBoxSpecial(w) spinbox.move(20, 20) spinbox.CorrectionMode = QtGui.QAbstractSpinBox.CorrectToPreviousValue w.show() sys.exit(a.exec_()) **Edit:** The basic ask is: I want to call a function when the value of a spinbox changes (via mouse or keyboard). If that function throws an exception, I want the value of the spinbox to revert to what it was. Answer: Here is a simple way to dynamically set the range on a spinbox: class SpinBoxSpecial(QtGui.QDoubleSpinBox): def __init__(self, parent=None): super(SpinBoxSpecial, self).__init__(parent) self._last = self.value() self.valueChanged.connect(self.handleValueChanged) def handleValueChanged(self, value): try: some_complicated_function(float(value)) print "yup", value self._last = value except: print "nope", value if value > self._last: self.setMaximum(self._last) else: self.setMinimum(self._last) **EDIT** : Just realized the above won't work correctly if a value is typed in directly, because it could fix the min/max too early. So maybe this would be better: def handleValueChanged(self, value): try: some_complicated_function(float(value)) print "yup", value self._last = value except: print "nope", value self.setValue(self._last)
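If re-entering the slot during the revert is a concern (the corrective `setValue` emits `valueChanged` again), you can block the widget's signals around it. A small variation on the EDIT above:

    def handleValueChanged(self, value):
        try:
            some_complicated_function(float(value))
            self._last = value
        except Exception:
            self.blockSignals(True)   # suppress the valueChanged emitted by the revert
            self.setValue(self._last)
            self.blockSignals(False)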
Plotting to browser continuously using serve_figure Question: I want to see plots in progress continuously driven by the plot program using browser whenever it is connected. I searched and found serve_figure.py examples that are similar to what I need. But I cant get the following test code to work. Serve_figure.py holds up the for-loop after the first plot. At the browser only the first plot is shown. I don't need the mouse event in serve_figure.py. If there is another way to do this will be most welcome. #!/usr/bin/env pythonnter import serve_figure import time import numpy as np import matplotlib matplotlib.use('TkAgg') import matplotlib.pyplot as plt fig, ax = plt.subplots() def animate(): x = np.arange(0, 2*np.pi, 0.01) line, = ax.plot(x, np.sin(x)) for i in np.arange(1,200): line.set_ydata(np.sin(x+i/10.0)) fig.canvas.draw() time.sleep(1) serve_figure.serve_figure(fig, port=8888) win = fig.canvas.manager.window fig.canvas.manager.window.after(200, animate) plt.show() Answer: BTW, the link to serve_figure.py is <https://github.com/mdboom/mpl_browser_experiments/blob/master/serve_figure.py>
Django won't create a table for one model of many in an app Question: _I'm using Django 1.9_ For whatever reason, I simply cannot get Django to create a table for any more models in my `products` app. After I added the `store` model and registered it on `admin.py` and running `manage.py makemigrations` & `manage.py migrate` **countless** times, I tried adding instances to which I got a `Operation error: no such table products_store`. I have the following `models.py`: from __future__ import unicode_literals from django.db import models # Create your models here. def image_upload_location(instance, filename): print instance.name print filename return "static/images/products/%s" %(filename) class Category(models.Model): title = models.CharField(max_length=120, unique=True) description = models.TextField(null=True,blank=True) def __unicode__(self): return self.title class Product(models.Model): name = models.CharField(max_length = 120) description = models.TextField(blank=True,null=True) main_image = models.ImageField(upload_to=image_upload_location) price = models.DecimalField(decimal_places=2, max_digits=20) available = models.BooleanField(default=True) categories = models.ManyToManyField('Category', blank=True) def __unicode__(self): return self.name class Store(models.Model): name = models.CharField(max_length=120) description = models.TextField(blank=True,null=True) def __unicode__(self): return self.name class Building(models.Model): name = models.CharField(max_length=30) class Variant(models.Model): variant_name = models.CharField(max_length=120) description = models.TextField(blank=True,null=True) variant_image = models.ImageField(upload_to=image_upload_location, null=True) price = models.DecimalField(decimal_places=2,max_digits=20) available = models.BooleanField(default=True) product = models.ForeignKey(Product) store = models.ForeignKey(Store) def __unicode__(self): return self.variant_name Then on the shell, I tried the following: In [1]: from products.models import Store In [2]: from products.models import Product In [3]: Store Out[3]: products.models.Store In [4]: Product Out[4]: products.models.Product In [5]: Store.objects.all() OperationalError: no such table: products_store In [8]: Product.objects.all() Out[8]: [] Seems really strange to me. I also tried deleting all the migrations, and then running all the migrations again, but that didn't seem to work. Here's the output from the migrations: Migrations for 'products': 0001_initial.py: - Create model Category - Create model Product - Create model Store - Create model Variant A:try3 a$ python manage.py migrate Operations to perform: Apply all migrations: sessions, admin, sites, auth, contenttypes, products Running migrations: Rendering model states... DONE Applying sites.0001_initial... OK Applying sites.0002_alter_domain_unique... OK **Update:** Output from `manage.py dbshell` SQLite version 3.9.2 2015-11-02 18:31:45 Enter ".help" for usage hints. 
sqlite> .tables
    auth_group                  django_migrations
    auth_group_permissions      django_session
    auth_permission             django_site
    auth_user                   products_category
    auth_user_groups            products_product
    auth_user_user_permissions  products_product_categories
    django_admin_log            products_variant
    django_content_type

Contents of `migrations/0001_initial.py`:

    class Migration(migrations.Migration):

        initial = True

        dependencies = [
        ]

        operations = [
            migrations.CreateModel(
                name='Category',
                fields=[
                    ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                    ('title', models.CharField(max_length=120, unique=True)),
                    ('description', models.TextField(blank=True, null=True)),
                ],
            ),
            migrations.CreateModel(
                name='Product',
                fields=[
                    ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                    ('name', models.CharField(max_length=120)),
                    ('description', models.TextField(blank=True, null=True)),
                    ('main_image', models.ImageField(upload_to=products.models.image_upload_location)),
                    ('price', models.DecimalField(decimal_places=2, max_digits=20)),
                    ('available', models.BooleanField(default=True)),
                    ('categories', models.ManyToManyField(blank=True, to='products.Category')),
                ],
            ),
            migrations.CreateModel(
                name='Store',
                fields=[
                    ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                    ('name', models.CharField(max_length=120)),
                    ('description', models.TextField(blank=True, null=True)),
                ],
            ),
            migrations.CreateModel(
                name='Variant',
                fields=[
                    ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                    ('variant_name', models.CharField(max_length=120)),
                    ('description', models.TextField(blank=True, null=True)),
                    ('variant_image', models.ImageField(null=True, upload_to=products.models.image_upload_location)),
                    ('price', models.DecimalField(decimal_places=2, max_digits=20)),
                    ('available', models.BooleanField(default=True)),
                    ('product', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='products.Product')),
                    ('store', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='products.Store')),
                ],
            ),
        ]

Answer: It seems as though the `store` table was dropped from the database and migrations cannot figure out how to add it back. You could always recreate the table in `dbshell`:

    sqlite> .tables
    auth_group                  django_migrations
    auth_group_permissions      django_session
    auth_permission             products_building
    auth_user                   products_category
    auth_user_groups            products_product
    auth_user_user_permissions  products_product_categories
    django_admin_log            products_variant
    django_content_type
    sqlite> PRAGMA foreign_keys=OFF;
    sqlite> BEGIN TRANSACTION;
    sqlite> CREATE TABLE "products_store" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "name" varchar(120) NOT NULL, "description" text NULL);
    sqlite> COMMIT;
    sqlite> .tables
    auth_group                  django_migrations
    auth_group_permissions      django_session
    auth_permission             products_building
    auth_user                   products_category
    auth_user_groups            products_product
    auth_user_user_permissions  products_product_categories
    django_admin_log            products_store
    django_content_type         products_variant
Alpine 3.3, Python 2.7.11, urllib2 causing SSL: CERTIFICATE_VERIFY_FAILED Question: I have this small Dockerfile

    FROM alpine:3.3
    RUN apk --update add python
    CMD ["python", "-c", "import urllib2; response = urllib2.urlopen('https://www.python.org')"]

Building it with `docker build -t alpine-py/01 .` and then running it with `docker run -it --rm alpine-py/01` creates the following output

    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/usr/lib/python2.7/urllib2.py", line 154, in urlopen
        return opener.open(url, data, timeout)
      File "/usr/lib/python2.7/urllib2.py", line 431, in open
        response = self._open(req, data)
      File "/usr/lib/python2.7/urllib2.py", line 449, in _open
        '_open', req)
      File "/usr/lib/python2.7/urllib2.py", line 409, in _call_chain
        result = func(*args)
      File "/usr/lib/python2.7/urllib2.py", line 1240, in https_open
        context=self._context)
      File "/usr/lib/python2.7/urllib2.py", line 1197, in do_open
        raise URLError(err)
    urllib2.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)>

Yesterday I got bitten by the recent OpenSSL 1.0.2g release, which caused `py-cryptography` to not compile. Luckily the guys from `py-cryptography` released a new version on PyPI a couple of hours later. The issue was that a function in OpenSSL got a new signature. Could this be related, or am I missing something?

Answer: You need to install ca-certificates to be able to validate certs signed by public CAs:

    FROM alpine:3.3
    RUN apk --no-cache add python ca-certificates
    CMD ["python", "-c", "import urllib2; response = urllib2.urlopen('https://www.python.org')"]
Problems using MySQL with AWS Lambda in Python Question: I am trying to get up and running with AWS Lambda Python (beginner in Python btw) but having some problems with including MySQL dependency. I am trying to follow the instructions [here](http://docs.aws.amazon.com/lambda/latest/dg/lambda-python-how-to- create-deployment-package.html) on my Mac. For step number 3, I am getting some problems with doing the command at the root of my project sudo pip install MySQL-python -t / Error: > Exception: Traceback (most recent call last): File > "/Library/Python/2.7/site-packages/pip-1.5.6-py2.7.egg/pip/basecommand.py", > line 122, in main status = self.run(options, args) File > "/Library/Python/2.7/site- > packages/pip-1.5.6-py2.7.egg/pip/commands/install.py", line 311, in run > os.path.join(options.target_dir, item) File > "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", > line 292, in move raise Error, "Destination path '%s' already exists" % > real_dst Error: Destination path '/MySQL_python-1.2.5-py2.7.egg- > info/MySQL_python-1.2.5-py2.7.egg-info' already exists I end up writing my following lambda function (works fine on my Mac), which is: import MySQLdb def lambda_handler(event, context): # Open database connection db = MySQLdb.connect(...) # prepare a cursor object using cursor() method cursor = db.cursor() sql = "SELECT * FROM Users" try: # Execute the SQL command cursor.execute(sql) # Fetch all the rows in a list of lists. results = cursor.fetchall() for row in results: fname = row[0] lname = row[1] age = row[2] sex = row[3] income = row[4] # Now print fetched result print ("lname=%s" %(lname)) except: print "Error: unable to fecth data" # disconnect from server db.close() What I went on to do is go to /Library/Python/2.7/site-packages and copying over the the MySQLdb folders/files that were downloaded when I did sudo pip install MySQL-python (without -t /) (I'm sure I'm doing something wrong here), to my lambda project, and then zipped the content along with the lambda_function.py and uploaded to AWS Lambda. Then I get: > Unable to import module 'lambda_function': No module named MySQLdb Grateful for any help and suggestions! **EDIT** Was able to do make sudo pip install MySQL-python -t /pathToProject work (thanks for the help in the comments) but now I get this when runing the lambda function: > Unable to import module 'lambda_function': /var/task/_mysql.so: invalid ELF > header I know that if I work on a Linux box, then it should work fine (as suggested by some people), but I am wondering if I can make it work from an OS X box. Answer: For a use case like Lambda you'll be a lot happier using a pure python implementation like [PyMySQL](https://github.com/PyMySQL/PyMySQL). It's a drop in replacement for MySQLdb that follows the [Python Database API](https://www.python.org/dev/peps/pep-0249/) specification. For most things like triggered Lambda events it will be just as fast. I've used it in production a lot and it works great.
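A minimal sketch of a handler using PyMySQL, mirroring the MySQLdb code in the question (the connection details are placeholders; bundle the `pymysql` package in your deployment zip):

    import pymysql

    def lambda_handler(event, context):
        # placeholder credentials - substitute your RDS endpoint and account
        conn = pymysql.connect(host='your-db-host', user='user',
                               password='password', db='yourdb',
                               connect_timeout=5)
        try:
            with conn.cursor() as cursor:
                cursor.execute("SELECT * FROM Users")
                for row in cursor.fetchall():
                    print("lname=%s" % (row[1],))
        finally:
            conn.close()

Because PyMySQL is pure Python, the zip you upload from OS X runs unchanged on Lambda's Linux hosts, which avoids the invalid-ELF-header problem entirely.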
Python script for performance testing of server Question: I don't want to use any testing tool. I am writing a Python script which hits a REST API on a server... I want to fire 500 (or any number of) hits of the login (or any) API at the server at the same time, so that I can check the server's performance.

    def login():
        api = "http://0.0.0.0/api/method/login"
        params = {"usr":"[email protected]","pwd":"password"}
        reponse = requests.post(api,params=params)

    def main():
        for i in range(2):
            for j in range(5):
                t1 = Thread(target=login(), args=(j,))
                t2 = Thread(target=get_profile(), args=(j,))
                t1.start()
                t2.start()

    main()

Will this code work? Please suggest how to do it.

Answer: Use a coroutine module like gevent or asyncio. You can have many coroutines, but you will be limited in the number of threads. This limitation comes both from the number of threads the OS will allow your process to have, and from the fact that threads incur a much greater performance hit (each thread needs its own stack, plus the cost of context switching). Here's a way to do it with gevent:

    from gevent import monkey
    monkey.patch_all()

    import urllib2
    from gevent.pool import Pool

    def download(url):
        return urllib2.urlopen(url).read()

    if __name__ == '__main__':
        urls = ['http://your_url'] * 100
        pool = Pool(20)
        print pool.map(download, urls)
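As an aside on the snippet in the question: `Thread(target=login(), args=(j,))` calls `login` immediately and passes its return value (`None`) as the thread target, so the threads never actually run the function concurrently. You would pass the function object itself:

    t1 = Thread(target=login)   # no parentheses: pass the function, don't call it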
Subtract current time to -GMT in python Question: I have a time which is 13:11:06 and I want to apply the -GMT offset (i.e. -0530). I can subtract 5 by splitting the string, taking the hour digit (converting it to int), subtracting, and then re-joining, but then I get it in a format like 8:11:06, which is not right as it should be 08:11:06; secondly, it's a lengthy process. Is there an easy way to get my time in the -GMT format (08:11:06)? This is what I did to get the -GMT time after getting the datetime:

    timesplithour = int(timesplit[1]) + -5
    timesplitminute = timesplit[2]
    timesplitseconds = timesplit[3]
    print timesplithour
    print timesplitminute
    print timesplitseconds
    print timesplithour + ":" + timesplitminute + ":" + timesplitseconds

Answer: You could use Python's [`datetime`](https://docs.python.org/3.5/library/datetime.html?highlight=timedelta#module-datetime) library to help you, as follows:

    import datetime

    my_time = "13:11:06"
    new_time = datetime.datetime.strptime("2016 " + my_time, "%Y %H:%M:%S") - datetime.timedelta(hours=5, minutes=30)
    print new_time.strftime("%H:%M:%S")

This would print:

    07:41:06

First it converts your string into a `datetime` object (the hard-coded "2016 " is just a dummy year so that `strptime` can build a complete datetime). It then creates a `timedelta` object, allowing you to subtract 5 hours 30 minutes from the datetime object. Finally it uses `strftime` to format the resulting datetime back into a string in the same, zero-padded format.
Extracting datetime info from yeardatescalendar in python Question:

    import calendar
    calendar.Calendar().yeardatescalendar(2014)
    >>> [[[[datetime.date(2013, 12, 30), datetime.date(2013, 12, 31),...

The above code returns datetimes for calendar year 2014. However, it also includes the last two days of 2013 and the first couple of days of 2015. Is there any way I can extract just the 2014 info?

Answer: One way of doing it:

    import datetime

    def myFun(year):
        res = []
        d = datetime.datetime(year, 1, 1)
        while d.year != year + 1:
            res.append(d)
            d = d + datetime.timedelta(days=1)
        return res

    >>> myFun(2014)[:2]
    [datetime.datetime(2014, 1, 1, 0, 0), datetime.datetime(2014, 1, 2, 0, 0)]
    >>> len(myFun(2014))
    365
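If you would rather stay within the calendar module, `Calendar.itermonthdates` plus a month filter gives the same result; the filter drops both the padding days of neighbouring months and the duplicate boundary weeks that `yeardatescalendar` produces:

    import calendar

    cal = calendar.Calendar()
    days = [d for m in range(1, 13)
              for d in cal.itermonthdates(2014, m)
              if d.month == m]    # keep only dates that belong to the month itself

    >>> len(days)
    365
    >>> days[0], days[-1]
    (datetime.date(2014, 1, 1), datetime.date(2014, 12, 31))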
How to run commands on same TCL shell using Python Question: I am having all the libraries written in TCL. I want to create a GUI in Python which will have few buttons and other options. In the start TCL shell will open. When I will click the buttons, respective commands will be executed on the TCL shell. Is it possible to fire commands on the same shell of TCL without closing TCL shell. I searched google and find `Tkniter` module in Python but it will open TCL shell everytime I need to execute command. Answer: You can certainly use Tkinter to run a series of commands in the same Tcl interpreter: Python 2.7.9 (default, Feb 28 2016, 05:52:45) [C] on sunos5 Type "help", "copyright", "credits" or "license" for more information. >>> import Tkinter >>> root = Tkinter.Tk() >>> root.tk.eval('set msg "hello world"') 'hello world' >>> root.tk.eval('string length $msg') '11' >>> root.tk.eval('foreach x {1 2 4} {puts "$msg $x"}') hello world 1 hello world 2 hello world 4 '' >>> \- here the variable msg is set in one command and its value is used in later commands, which would not work if we were creating a new interpreter for each command. If you don't want the Tk window that gets created, just run `root.tk.eval('wm withdraw .')` to hide it. If this doesn't answer your question you had better explain what else it is that you need :-)
Splitting Regex response column on python Question: I get an object array after applying `re.findall` for links and hashtags on Tweets data. My data looks like

    b=['https://t.co/1u0dkzq2dV', 'https://t.co/3XIZ0SN05Q']
    ['https://t.co/CJZWjaBfJU']
    ['https://t.co/4GMhoXhBQO', 'https://t.co/0V']
    ['https://t.co/Erutsftlnq']
    ['https://t.co/86VvLJEzvG', 'https://t.co/zCYv5WcFDS']

Now I want to split it into columns. I am using the following

    df = pd.DataFrame(b.str.split(',',1).tolist(),columns = ['flips','row'])

but it is not working, because of the weird datatype I guess. I tried a few other solutions as well; nothing worked. And this is what I am expecting: two separate columns

    https://t.co/1u0dkzq2dV  https://t.co/3XIZ0SN05Q
    https://t.co/CJZWjaBfJU
    https://t.co/4GMhoXhBQO  https://t.co/0V
    https://t.co/Erutsftlnq
    https://t.co/86VvLJEzvG

Answer: It's not clear from your question what exactly is part of your data. (Does it include the square brackets and single quotes?) In any case, the pandas [`read_csv`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) function is very versatile and can handle ragged data:

    import StringIO
    import pandas as pd

    raw_data = """
    ['https://t.co/1u0dkzq2dV', 'https://t.co/3XIZ0SN05Q']
    ['https://t.co/CJZWjaBfJU']
    ['https://t.co/4GMhoXhBQO', 'https://t.co/0V']
    ['https://t.co/Erutsftlnq']
    ['https://t.co/86VvLJEzvG', 'https://t.co/zCYv5WcFDS']
    """

    # You'll probably replace the StringIO part with the filename of your data.
    df = pd.read_csv(StringIO.StringIO(raw_data), header=None, names=('flips','row'))

    # Get rid of the square brackets and single quotes
    for col in ('flips', 'row'):
        df[col] = df[col].str.strip("[]'")

    df

Output:

                         flips                       row
    0  https://t.co/1u0dkzq2dV   https://t.co/3XIZ0SN05Q
    1  https://t.co/CJZWjaBfJU                       NaN
    2  https://t.co/4GMhoXhBQO           https://t.co/0V
    3  https://t.co/Erutsftlnq                       NaN
    4  https://t.co/86VvLJEzvG   https://t.co/zCYv5WcFDS
Python: Open .doc file with antiword on windows Question: I am trying to open a bunch of .doc files (not docx) in a folder using Python. I downloaded a modified "antiword", which is a precompiled executable for Windows, as the original antiword is only available for Linux. <http://www-stud.rbi.informatik.uni-frankfurt.de/~markus/antiword/> After I downloaded it, I put it into my python27 folder and ran the antiword.exe file. Honestly, my problem is that I do not understand what happens when I run the executable. I had hoped that it would add a library to my Python directory so that I could then `import antiword`. Can anyone help me? Thanks a lot. BR Jonas

Answer: As I understand it, antiword is not a Python module that you can import. It's an executable that can be run directly from the command prompt (cmd). I installed antiword as explained in the [00README.WIN document](http://www-stud.rbi.informatik.uni-frankfurt.de/~markus/antiword/00README.WIN) and could run it in cmd after adding its folder to the PATH environment variable and creating a HOME environment variable, exactly as outlined in the README. I could successfully run the following example using testdoc.doc found in antiword\Doc\:

    antiword -m cp852.txt filename.doc > filename.txt

I think if you need to do this in Python, you can run antiword like any other cmd command, as previously [explained](http://stackoverflow.com/questions/14894993/running-windows-shell-commands-with-python). The same thing was previously [suggested](http://stackoverflow.com/questions/125222/extracting-text-from-ms-word-files-in-python). You could also give IronPython a try, as previously [recommended](http://ironpython.codeplex.com/). I hope this helps!
How to split file into chunks by string delimiter in Python Question: I'm gonna need to upload a potentially large csv file into my application. Each section of that file is indicated by a `#TYPE *`. How should I go about splitting it into chunks and doing further processing on each chunk? Each chunk is a list of headers followed by all the values. Right now I have written the processing for a single chunk but I'm not sure how to do the operation for each chunk. I think that a regex operation would be the best option because of the constant return of `#TYPE *`. #TYPE Lorem.Text.A ... #TYPE Lorem.Text.B ... #TYPE Lorem.Text.C ... **UPDATE** This solution has been changed from saving all sections in one file to saving all sections to separate files and zipping them into a zip file. This zip file is read by python and further analyzed. If someone would be interested in that explanation message me and I'll update this question. Answer from @Padraic was the most helpful for the old course. Answer: You could use a _groupby_ presuming the sections are delimited by lines starting with `#TYPE`: from itertools import groupby, chain def get_sections(fle): with open(fle) as f: grps = groupby(f, key=lambda x: x.lstrip().startswith("#TYPE")) for k, v in grps: if k: yield chain([next(v)], (next(grps)[1])) # all lines up to next #TYPE You can get each section as you iterate: In [13]: cat in.txt #TYPE Lorem.Text.A first #TYPE Lorem.Text.B second #TYPE Lorem.Text.C third In [14]: for sec in get_sections("in.txt"): ....: print(list(sec)) ....: ['#TYPE Lorem.Text.A\n', 'first\n'] ['#TYPE Lorem.Text.B\n', 'second\n'] ['#TYPE Lorem.Text.C\n', 'third\n'] If no other lines start with `#` then that alone will be enough to use in startswith, there is nothing complicated in your pattern so it is not really a use case for a regex. This also only stores a section at a time not the whole file into memory. If you have no leading whitespace and the only place `#` appears is before TYPE it may be sufficient to just call groupby: from itertools import groupby, chain def get_sections(fle): with open(fle) as f: grps = groupby(f) for k, v in grps: if k: yield chain([next(v)], (next(grps)[1])) # all lines up to next #TYPE If there was some metadata at the start you could use dropwhile to skip lines until we hit the `#Type` and then just group: from itertools import groupby, chain, dropwhile def get_sections(fle): with open(fle) as f: grps = groupby(dropwhile(lambda x: not x.startswith("#"), f)) for k, v in grps: if k: yield chain([next(v)], (next(grps)[1])) # all lines up to next #TYPE Demo: In [16]: cat in.txt meta more meta #TYPE Lorem.Text.A first #TYPE Lorem.Text.B second second #TYPE Lorem.Text.C third In [17]: for sec in get_sections("in.txt"): print(list(sec)) ....: ['#TYPE Lorem.Text.A\n', 'first\n'] ['#TYPE Lorem.Text.B\n', 'second\n', 'second\n'] ['#TYPE Lorem.Text.C\n', 'third\n']
Choosing and iterating specific sub-arrays in multidimensional arrays in Python Question: This is a question that comes from the post here [Iterating and selecting a specific array from a multidimensional array in Python](http://stackoverflow.com/questions/35769536/iterating-and-selecting-a-specific-array-from-a-multidimensional-array-in-python)

In that post, user @Cleb solved what was my original problem: how to perform a sum through columns in a 3d array:

import numpy as np
arra = np.arange(16).reshape(2, 2, 4)

which gives

array([[[0, 1, 2, 3],
[4, 5, 6, 7]],
[[8, 9, 10, 11],
[12, 13, 14, 15]]])

and the problem was how to perform the sum of columns in each matrix, i.e., 0 + 4, 1 + 5, ..., 8 + 12, ..., 11 + 15. It was solved by @Cleb. Then I wondered how to do it in the case of a sum of 0 + 8, 1 + 9, ..., 4 + 12, ..., 7 + 15 (odd and even columns), which was also solved by @Cleb. But then I wondered if there is a general idea (which can be modified in each specific case). Imagine you can add the first and the last rows and the center rows, in columns, separately, i.e., 0 + 12, 1 + 13, ..., 3 + 15, 4 + 8, 5 + 9, ..., 7 + 11. Is there a general way? Thank you.

Answer: Depending on how exactly `arra` is defined, you can shift your values appropriately using [`np.roll`](http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.roll.html):

arra_mod = np.roll(arra, arra.shape[2])

`arra_mod` then looks as follows:

array([[[12, 13, 14, 15],
[ 0, 1, 2, 3]],
[[ 4, 5, 6, 7],
[ 8, 9, 10, 11]]])

Now you can simply use the command from your [previous question](http://stackoverflow.com/questions/35769536/iterating-and-selecting-a-specific-array-from-a-multidimensional-array-in-python?lq=1) to get your desired output:

map(sum, arra_mod)

which gives you the desired output:

[array([12, 14, 16, 18]), array([12, 14, 16, 18])]

You can also use a list comprehension

[sum(ai) for ai in arra_mod]

which gives you the same output. If you prefer a one-liner, you can therefore simply do:

map(sum, np.roll(arra, arra.shape[2]))
Accessing Flask wsgi structured app from interpreter Question: I have a Flask app structured for deployment as wsgi. An issue I have is when I want to load files from the Python interpreter. The structure is:

|----api
|---------api
|--------------static
|--------------templates
|--------------venv
|--------------models.py
|--------------...

So that in the Python interpreter, when I load files in the directory `api/api` I need to access them as `dir.filename`. For example, if I have a file, `api/api/models.py`, I can load it by entering the command `from api.mb_models import db`... however, this syntax chokes when I run the wsgi app via a web server. I then get an error that the file does not exist: `Traceback (most recent call last): File "api/extract.py", line 15, in <module> import api.models as models ImportError: No module named api.models`

I also tried setting the working directory as follows,

path = 'api'
os.chdir(path)

and while this works as wsgi, I got the same error when trying to load from the interpreter. The only way I can currently get this to run as wsgi is without specifying the path in my import statements or using the os attribute, which does not work when I load the files in the interpreter. The question is how to share the same configuration for deployment as a wsgi app AND for running from the Python interpreter. Note: I am using a virtual environment.

**EDIT** I removed all references to `dir` in my file imports, so the wsgi app is functioning as desired, then ran `os.getcwd()` in the interpreter, which gave me the expected result of `/Users/gms/development/python/api`. I then set the working directory using `os.chdir('/Users/gms/development/python/api/api')`, ran `os.getcwd()` and got the desired result for the working directory, but I still get an `ImportError: No module named extract` when I try loading it from the interpreter.

Answer: I figured it out. I need to set these in the interpreter:

import sys
sys.path.append('working path here')

And, voila, everything works as desired.
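Concretely, the directory to append is the one that contains the package, not the package directory itself. Given the paths mentioned in the edit, a sketch (the absolute path is an assumption about this particular machine):

import sys

# append the directory CONTAINING the inner `api` package
sys.path.append('/Users/gms/development/python/api')

import api.models as models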
Python: Finding unknown repeated word(s) in a list of strings Question: I have a list of strings, which are subjects from different email conversations. I would like to see if there are words or word combinations which are being used frequently. An example list would be: subjects = [ 'Proposal to cooperate - Company Name', 'Company Name Introduction', 'Into Other Firm / Company Name', 'Request for Proposal' ] The function would have to detect that "Company Name" as combination is used more than once, and that "Proposal" is being used more than once. These words won't be known in advance though, so I guess it would have to start trying all possible combinations. The actual list is of course a lot longer than this example, so manually trying all combinations doesn't seem like the best way to go. What would be the best way to go about this? **UPDATE** I've used Tim Pietzcker's answer to start developing a function for this, but I get stuck on applying the Counter correctly. It keeps returning the length of the list as count for all phrases. The phrases function, including punctuation filter and a check if this phrase has already been checked, and a max length per phrase of 3 words: def phrases(string, phrase_list): words = string.split() result = [] punctuation = '\'\"-_,.:;!? ' for number in range(len(words)): for start in range(len(words)-number): if number+1 <= 3: phrase = " ".join(words[start:start+number+1]) if phrase in phrase_list: pass else: phrase_list.append(phrase) phrase = phrase.strip(punctuation).lower() if phrase: result.append(phrase) return result, phrase_list And then the loop through the list of subjects: phrase_list = [] ranking = {} for s in subjects: result, phrase_list = phrases(s, phrase_list) all_phrases = collections.Counter(phrase.lower() for s in subjects for phrase in result) "all_phrases" returns a list with tuples where each count value is 167, which is the length of the subject list I'm using. Not sure what I'm missing here... Answer: You also want to find phrases that are composed of more than single words. No problem. This should even scale quite well. import collections subjects = [ 'Proposal to cooperate - Company Name', 'Company Name Introduction', 'Into Other Firm / Company Name', 'Request for Proposal', 'Some more Firm / Company Names' ] def phrases(string): words = string.split() result = [] for number in range(len(words)): for start in range(len(words)-number): result.append(" ".join(words[start:start+number+1])) return result The function `phrases()` splits the input string on whitespace and returns all possible substrings of any length: In [2]: phrases("A Day in the Life") Out[2]: ['A', 'Day', 'in', 'the', 'Life', 'A Day', 'Day in', 'in the', 'the Life', 'A Day in', 'Day in the', 'in the Life', 'A Day in the', 'Day in the Life', 'A Day in the Life'] Now you can count how many times each of these phrases are found in all your subjects: all_phrases = collections.Counter(phrase for subject in subjects for phrase in phrases(subject)) Result: In [3]: print([(phrase, count) for phrase, count in all_phrases.items() if count > 1]) Out [3]: [('Company', 4), ('Proposal', 2), ('Firm', 2), ('Name', 3), ('Company Name', 3), ('Firm /', 2), ('/', 2), ('/ Company', 2), ('Firm / Company', 2)] Note that you might want to use other criteria than simply splitting on whitespace, maybe ignore punctuation and case etc.
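A note on the update above: every count comes out as 167 because the final `Counter` iterates `result`, which by then holds only the last subject's new phrases and does not depend on `s`, once per subject, so each phrase gets counted `len(subjects)` times. A sketch of a fix, using the `phrases()` from the answer (drop the seen-phrase filter and count each subject's own phrases directly):

import collections

# the Counter itself handles the tallying; no manual de-duplication needed
all_phrases = collections.Counter(
    phrase.lower() for s in subjects for phrase in phrases(s)
)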
Python - OpenCV VideoCapture = False (Windows) Question: I have a simple piece of code, written in Python (version 2.7.11), designed to do things to a video file as follows:

import cv2

cap = cv2.VideoCapture('MyVideo.mov')

print(cap)
print(cap.isOpened())

while(cap.isOpened()):
#Do some stuff

The result of print(cap) is an 8-digit hex number, so I don't know if that means that the video has been found. However, the print statement for cap.isOpened() returns False. I have tried several fixes, but none of them worked. Any help or insight would be very helpful. Things to note/things I have tried

* I am running Windows 8.1, Python 2.7.11 and OpenCV 3.1.0
* The location of the video file is in the same directory as the Python script
* I have the following directories appended to my PATH variable: C:\Users\MyName\OpenCV3\opencv\build\x64\vc14\bin; C:\Users\MyName\OpenCV3\opencv\sources\3rdparty\ffmpeg; C:\Python27\; C:\Python27\Scripts
* I have checked that I have opencv_ffmpeg.dll in the OpenCV vc14 bin directory
* I have checked that said dll file is titled opencv_ffmpeg310_64.dll
* I have tried redownloading said dll file, and renaming it to include the version of OpenCV and the fact that my system is a 64-bit one
* I have tried placing the dll file in the Python27 directory
* The code above works on Mac, but not on Windows (tried the code on 2 different Macs and it worked, tried it on 2 different Windows machines and it returned false both times)

Answer: Since your code shows no issues on Mac, try other file extensions (e.g. mp4 or wmv) on your Windows system for testing. If such a video then loads, OpenCV is configured correctly on your Windows machine, and what is missing is apparently support for decoding .mov files.
Sorting Issue in Python Question: I am attempting to sort a list which I have imported from a file in Python, however I am having no luck, does anyone know what I am doing wrong? fr = open("database_results\Class_" + option + ".txt", 'r') lines = [line for line in fr if line.strip()] fr.close() lines.sort(key=int) fw = open("database_results\Class_" + option + ".txt", 'w') fw.writelines(lines) print(lines) print("ALERT: File successfully sorted numerically!") The reason I am splitting the lines like this is because the data is stored like this in the database: [Name: 'John Smith' Score: '7'] [Name: 'Mitchel Jones' Score: '5'] I want the file to be sorted so the people who have received the highest score are put to the top of the list. For the example above, this would be formatted fine, because John has a higher score than Mitchel. Answer: 1) You're using the key argument of sort wrong. Key specifies what part of each item in your list is being used to sort. In my example, I'm supplying an anonymous function which is saying that for each item in `l`, use the value at index 0 for determining sort order l = [ [3,1,2], [1,5,2], [0,9,3], [2,5,2] ] l.sort(key=lambda x : x[0]) print l # prints [[0, 9, 3], [1, 5, 2], [2, 5, 2], [3,1,2]] 2) In order to do your sort, you'll likely need to perform a split on each line, identify which index the score is sitting in, and cast it to an int. This should do the sort: `lines.sort(reverse=True, key=lambda x : int(x.strip().split('Score: ')[1].replace(']', '').replace('"', "").replace("'", "")))` Here's what's happening: 1. Split the string on score so we get back `["[Name: Blah Blah", "10]"]` 2. We grab the item from the new list where the score is sitting (index 1) 3. Replace the ], ', and " characters so we don't fail our int cast 4. Cast the score to an int for numeric sorting 3) Probably not the best idea to write from the same file you were just reading from. It would be better to put it somewhere like `"database_results\Class_{0}_sorted.txt".format(option)"`
Python and sqlite3 : insert matrix into table Question: Let's say i have this matrix: [[[u'artist1'], [u'song1']], [[u'artist2'], [u'song2']], [[u'artist3'], [u'song3']]] I was wondering how can i insert it into a sql table so it will look like this: ID | ARTIST |SONG ----------------- 1 | artist1 | song1 2 | artist2 | song2 3 | artist3 | song3 I have about 1 million artists and 1 million songs i have to insert so it's just an example :P Thanks in advance! Source code: import sqlite3 import numpy as np print('Opening SQL Database') sql = sqlite3.connect('mblite_post.db') cur = sql.cursor() cursor=cur.execute("SELECT entity0 from r_artist_release LIMIT 500000000000000 ") result_author = [row[0] for row in cur.fetchall()] cursor=cur.execute("SELECT entity1 from r_artist_release LIMIT 500000000000000 ") result_song = [row[0] for row in cur.fetchall()] artistlist=[] for x in result_author: y=cur.execute("SELECT name FROM artist_name where id='%s'" % x) artistname=[row[0] for row in cur.fetchall()] artistlist.append(artistname) songlist=[] for y in result_song: z=cur.execute("SELECT name FROM release_name where id='%s'" % y) songname=[row[0] for row in cur.fetchall()] songlist.append(songname) matrix2 = [[artistlist[i], songlist[i]] for i in range(len(artistlist))] print(len(matrix2)) sql2 = sqlite3.connect('itaidb.db') cur2 = sql2.cursor() cur2.execute('CREATE TABLE IF NOT EXISTS main (artist TEXT,song TEXT )') for i, v in enumerate(matrix2): cur2.execute('INSERT INTO main VALUES (?,?)', (v[0][0], v[1][0])) sql2.commit() # # # for ton in songlist: # cur2.execute("UPDATE main SET (song) values (?) WHERE id='%r'", [''.join(ton)] % integer) # sql2.commit() # integer=integer+1 # sql2.close() sql.close() # artistname=[] # for x in author: # y=cur.execute("SELECT name FROM artist_name where id='%s'" % x) # artistname=artistname.append(list(y.fetchall())) # # print artistname Answer: I suggest you create your list like this: yourlist = [(artistlist[i], songlist[i]) for i, v in enumerate(artistlist)] Then you can use a for loop to execute the `INSERT` statements: for i, v in enumerate(yourlist, start=1): cursor.execute('INSERT INTO yourtable VALUES (?,?,?)', (i, v[0], v[1])) And don't forget to commit afterwards: db.commit()
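For roughly a million rows, `cursor.executemany` is worth considering: it pushes the loop into the sqlite3 driver instead of issuing one `execute` call per row. A sketch, assuming (as in the question) that each element of `artistlist` and `songlist` is a one-element list:

cur2.executemany(
    'INSERT INTO main VALUES (?, ?)',
    # pair artists with songs positionally, unwrapping the inner lists
    ((a[0], s[0]) for a, s in zip(artistlist, songlist)),
)
sql2.commit()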
CX_Oracle - import data from Oracle to Pandas dataframe Question: Hi, I'm new to Python and I want to import some data from an Oracle database into Python (a pandas dataframe) using this simple query

SELECT* FROM TRANSACTION
WHERE DIA_DAT >=to_date('15.02.28 00:00:00', 'YY.MM.DD HH24:MI:SS')
AND (locations <> 'PUERTO RICO'
OR locations <> 'JAPAN')
AND CITY='LONDON'

What I did

import cx_Oracle

ip = 'XX.XX.X.XXX'
port = YYYY
SID = 'DW'
dsn_tns = cx_Oracle.makedsn(ip, port, SID)
connection = cx_Oracle.connect('BA', 'PASSWORD', dsn_tns)

df_ora = pd.read_sql('SELECT* FROM TRANSACTION WHERE DIA_DAT>=to_date('15.02.28 00:00:00', 'YY.MM.DD HH24:MI:SS') AND (locations <> 'PUERTO RICO' OR locations <> 'JAPAN') AND CITY='LONDON'', con=connection)

But I have this error

SyntaxError: invalid syntax

What did I do wrong? Thanks

Answer: You need to properly quote your SQL query. If you look at the syntax highlighting in your question (or an IDE), you'll notice that the single quotes aren't working as you expect. Change the outermost quotes to double quotes - if you want it all on one line - or triple quotes if you want it across multiple lines:

query = """SELECT*
FROM TRANSACTION
WHERE DIA_DAT >=to_date('15.02.28 00:00:00', 'YY.MM.DD HH24:MI:SS')
AND (locations <> 'PUERTO RICO'
OR locations <> 'JAPAN')
AND CITY='LONDON'"""

df_ora = pd.read_sql(query, con=connection)
Faking a time stamp from time.time() in Nose Question: I'm building a device object to poll data from connected devices in python and I'm trying to test the inter workings of the object and all it's functions using nose. I am running into a problem in general when writing a timestamp using `time.time()` as each time the function is called the result the value is different and this seriously screws up some test, for example this data collection function: def getData(self, data): if type(data) is not type({}): #print "Bad Type!" raise TypeError else: #print "Good Type!" self.outD = {} datastr = "" for key, value in data.iteritems(): self.outD[key] = value self.outD["timestamp"] = time.time() self.outD["id"] = self.id self._writeCSV() When I test the output of this function and compare the generated CSV file, it always fails due to the time stamp. I can sub it out in my device object by adding testing flags, but I was wondering if nose has a built in way to deal with issues like this where the result of a function can be substituted with a given value or a local function to fake the `time.time()` call. Is this possible? Answer: You can use [`unittest.mock`](https://docs.python.org/dev/library/unittest.mock.html). Example: import time from unittest import mock @mock.patch('time.time', mock.MagicMock(return_value=12345)) def test_something(): assert time.time() == 12345 Produces the following output: $ nosetests3 -v test.test_something ... ok ---------------------------------------------------------------------- Ran 1 test in 0.006s OK Despite the fact that `mock` is part of the `unittest` package, it's actually unrelated and works with any testing framework. For Python < 3.3, you can use the [`mock` package from PyPI](https://pypi.python.org/pypi/mock).
Best way to partially deep copy a dictionary in Python? Question: I have a graph that is represented as a `dict` where each key is an instance of a (self-defined) `Node` class and the values are sets of instances of `Nodes`. So, basically, the graph looks something like this, but way larger: `g = { Node1 : {Node2, Node3}, Node2 : {Node4}, Node3: set(), Node4 : {Node1} }` **In very short words of what I want to do:** I want to arbitrarily change a dictionary and then revert it to its original value before those changes. I have a series of transformations that I can make to this graph, but I don't have any possible way of knowing beforehand which transformation that I want to do. The transformation will always: * Delete one key * Add in two new vertices * Change about 1/n edges Since I don't know which transformation I want to do, I want to perform every transformation to the same graph and then pick one. An obvious idiom could be: * Copy the dictionary to a temporary dictionary (`temp = g`) * Repeat for each transformation `t`: * Perform a transformation `t` on the graph `g` to form the graph `g'` * Jot down information about the graph `g'` for later * Set `g = temp` to restore the graph to the known-good state This does not work because copying the dictionary only performs a shallow copy, so all changes to `g` are also performed on `temp`. I can use `deepcopy`, but even that cannot work as is. I want to `deepcopy` the actual dictionary `g`, but the keys and elements in the values of `g` (the `Nodes`) cannot be deepcopied, because in my transformation, I will look for elements that are in `g` but not in `temp` because they are now new objects. I can attempt to redefine `deepcopy` for the `Node` class, which _works_ , but it is very hackish and I _do_ need to use deepcopy to perform my transformations `t`. Following is a minimal working example that does what I want it to do -- but in a way that I _do not want to_ because then I need to create a workaround for making deepcopy actually work when I need it to. Also, the code to do the transformations is all using `g` and the actual `Nodes` in `g` as it stands right now, so not having to change much of that would be nice. Is there a better way to do this? (all `print`s should be `True`) from copy import deepcopy class MyClass: def __init__(self, value): self.value = value def __repr__(self): return str(self.value) def __deepcopy__(self, memo): # do not rly deep copy plx return self class1 = MyClass(1) class2 = MyClass(2) class3 = MyClass(3) dict_class = { class1 : {class2, class3}, class2 : {class3}, class3 : {class1}} #### Want to create a copy of the dictionary with exact same objects... #### dict_class_copy = deepcopy(dict_class) print(dict_class) print(dict_class_copy) print(class1 in dict_class and class2 in dict_class and class3 in dict_class) print(class1 in dict_class_copy and class2 in dict_class_copy and class3 in dict_class_copy) dict_class.pop(class2) dict_class[class1].remove(class3) # don't want to change original dictionary key values print(class2 not in dict_class and class2 in dict_class_copy) # this breaks w/o deepcopy! print(class3 not in dict_class[class1] and class3 in dict_class_copy[class1]) # reassign it back dict_class = dict_class_copy # i want this back! print(class2 in dict_class) print(class3 in dict_class[class1]) Answer: Make a "1 level deep" copy of the dict by copying the keys and making copies of the sets. 
Replace this: dict_class_copy = deepcopy(dict_class) with this: dict_class_copy = {k:set(v) for k,v in dict_class.items()} and your tests pass.
Click error "takes no arguments" in main() Question: I am having issues which I believe are due to click. I am trying to run this code and I keep running into the same traceback TypeError stating an argument is being passed to main(). #!/usr/bin/python # -*- coding: utf-8 -*- import sys import boto import boto.ec2 import click @click.command() @click.option('--region', '-r', default='us-west-2', help='AWS Region') def main(): try: ec2 = boto.ec2.connect_to_region(region) except Exception, error: print("Boto Error: %s" ) % str(error) else: print("It's working!") finally: sys.exit(1) if __name__ == "__main__": main() This code seems pretty simple, so I take click is doing something here to prevent this for working correctly? Traceback (most recent call last): File "test.py", line 21, in <module> main() File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/click-5.1-py2.7.egg/click/core.py", line 700, in __call__ return self.main(*args, **kwargs) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/click-5.1-py2.7.egg/click/core.py", line 680, in main rv = self.invoke(ctx) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/click-5.1-py2.7.egg/click/core.py", line 873, in invoke return ctx.invoke(self.callback, **ctx.params) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/click-5.1-py2.7.egg/click/core.py", line 508, in invoke return callback(*args, **kwargs) TypeError: main() takes no arguments (1 given) Answer: I'm not familiar with `click` module but from what I can see from the docs, you should change `def main():` to `def main(region):` since you want the function to accept the parameter you are passing from the command line.
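For completeness, a minimal sketch of the fixed script: `click` passes every declared option to the decorated function as a keyword argument, so the parameter name has to match the option name:

import click

@click.command()
@click.option('--region', '-r', default='us-west-2', help='AWS Region')
def main(region):
    # click injects the --region value here
    print("Connecting to region: %s" % region)

if __name__ == "__main__":
    main()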
Encrypting user input in Python with pyCrypto Question: I am trying to do basic encryption in Python. In the program below I encrypt whatever the user types in, and then I display it back to the user after decryption. I am using the pyCrypto library, which I downloaded from here: <http://www.voidspace.org.uk/python/modules.shtml#pycrypto> Below is the code I have written so far:

from Crypto.Cipher import AES

AES_key = AES.new('This is a key')
#message = "The answer is no"
message = input("Enter text:")
ciphertext = AES_key.encrypt(message)
print (ciphertext)

decrypted_message = AES_key.decrypt(ciphertext)
print (decrypted_message)

The problem is that when I take input from the user, my code does not work, but when I give the static input which I have commented out, my code works fine. Can anyone please help: what should I do so that it accepts user input and encrypts it?

Answer: `AES` is a block cipher, so it requires the text length to be a multiple of 16 bytes. Your message `"The answer is no"` is exactly 16 bytes, but arbitrary input won't be, so you can use the `MODE_CFB` mode of `AES` to get around this:

from Crypto.Cipher import AES

# Encryption
encryption_suite = AES.new('This is a key123', AES.MODE_CFB, 'This is an IV456')
cipher_text = encryption_suite.encrypt("A really secret message. Not for prying eyes.")

# Decryption
decryption_suite = AES.new('This is a key123', AES.MODE_CFB, 'This is an IV456')
plain_text = decryption_suite.decrypt(cipher_text)

This works for user input of any length. For more info on `IV` (initialization vectors) go [here](http://www.cryptofails.com/post/70059609995/crypto-noobs-1-initialization-vectors)
Get Common OS name of Linux Python Question: I am trying to get Linux distro name like centos, fedora, ubuntu, SuSE etc in python. I made use of platform module. My code is import platform version = platform.platform(aliased=0) version = version.split('-with-',1)[1].split('-',1)[0] print version It gives me the desired output. But is there any simpler way other than this, or any module which gives me the name directly? Answer: Try this: >>> import platform >>> platform.linux_distribution() ('Red Hat Enterprise Linux Server', '6.6', 'Santiago')
logistic_sgd module, where to find it? Question: doing a deep learning tutorial and my python cannot find that module. from logistic_sgd import LogisticRegression, load_data ImportError: No module named logistic_sgd How can i install it ? Answer: Download and Save logistic_sgd.py from the following link: <http://deeplearning.net/tutorial/code/logistic_sgd.py> Store it in the working directory. That's it!
How to sort a dictionary in Python? Question: I would like to generate a report where my first column would contain the duration of my SQL queries, sorted from the highest duration to the lowest one. Code:

import os

directory = "./"
results = {}

def isfloat(value):
    try:
        float(value)
        return True
    except ValueError:
        pass

for root,dirs,files in os.walk(directory):
    for file in files:
        if file.endswith(".csv"):
            input_file=open(file, 'r')
            for line in input_file:
                if line:
                    try:
                        duration=line.split(',')[13].split(' ')[1]
                        if isfloat(duration): # check if string is a float
                            results[duration]=line
                    except:
                        pass

output_file = open('report.csv', 'w')
for k,v in sorted(results.items()):
    print k
    output_file.write(k + ',' + v)
output_file.close()

output:

1266.114
1304.450
1360.771
1376.104
1514.518
500.105
519.432
522.594
522.835
528.622
529.664

I wonder why the `sorted()` function is messing up my results?

Answer: Your keys are _strings_ , not numbers. They are sorted lexicographically. Convert to a number first if you want numeric sorting:

for k,v in sorted(results.items(), key=lambda k_v: float(k_v[0])):
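And since the report is supposed to put the slowest queries at the top, pass `reverse=True` as well:

# highest duration first
for k,v in sorted(results.items(), key=lambda k_v: float(k_v[0]), reverse=True):
    output_file.write(k + ',' + v)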
Using py_compile.compile() from within a Python script Question: I am trying to understand [the docs](https://docs.python.org/2/library/py_compile.html). My Python script generates Python code which will be run much later, so I want to check now if what I generated is valid. The docs say

py_compile.compile(file[, cfile[, dfile[, doraise]]])

and

> If doraise is true, a PyCompileError is raised when an error is encountered
> while compiling file.

So, I tried

source = open(generatedScriptPath, 'rt').read() + '\n'
try:
    import py_compile
    x = py_compile.compile(source, '', '', True)
except py_compile.PyCompileError, e:
    print str(e)

but that inner `except` is never hit; instead, an outer except catches

> Traceback (most recent call last): File
> "H:/code/testgen/testGen.py", line 293, in
> x = py_compile.compile(source, '', '', True) File
> "C:\Python27\lib\py_compile.py", line 106, in compile
> with open(file, 'U') as f: IOError: [Errno 2] No such file or directory: '#
> This script was auto-generated ...

How do I fix this? Note that I am open to alternatives, I just want the simplest possible way to ask "is the code which I just generated syntactically valid Python?"

Answer: Read the error message: "No such file or directory". The first argument is supposed to be a file name to open. Or read the docs: "The source code is loaded from the file named *file*." You might prefer the "compile" builtin, which can compile a string of Python.
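A minimal sketch of that suggestion: the `compile()` builtin with mode `'exec'` parses the source without executing it and raises `SyntaxError` on invalid code:

source = open(generatedScriptPath).read() + '\n'
try:
    # compiles (parses) only; nothing is executed
    compile(source, generatedScriptPath, 'exec')
    print "syntactically valid"
except SyntaxError, e:
    print str(e)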
How to use Boost.Python Question: I just recently discovered Boost.Python and I am trying to figure out how it works. I tried to go through [the tutorial on the official website](http://www.boost.org/doc/libs/1_60_0/libs/python/doc/html/tutorial/tutorial/hello.html). However, I got

link.jam: No such file or directory

when running bjam as in the example (which appears to be just a warning), and

Traceback (most recent call last):
File "hello.py", line 7, in <module>
import hello_ext
ImportError: libboost_python.so.1.55.0: cannot open shared object file: No such file or directory

when running python hello.py. I also tried to compile a module as described in [another tutorial](http://www.shocksolution.com/python-basics-tutorials-and-examples/linking-python-and-c-with-boostpython/) with similar results. I am running Ubuntu 14.04 with boost 1.55 compiled myself.

* * *

I tried to compile the following:

#include <boost/python.hpp>

char const* greet()
{
    return "hello, world";
}

BOOST_PYTHON_MODULE(hello_ext)
{
    using namespace boost::python;
    def("greet", greet);
}

with the following command from the command line:

g++ -o hello_ext.so hello.cpp -I /usr/include/python2.7/ -I /home/berardo/boost_1_55_0/ -L /usr/lib/python2.7/ -L /home/berardo/boost/lib/ -lboost_python -lpython2.7 -Wl, -fPIC -expose-dynamic

which still gives me a:

/usr/bin/ld: cannot find : No such file or directory
collect2: error: ld returned 1 exit status

Answer: Finally, I was able to make it work. First, I fixed the linker issues, as suggested by Dan. It finally compiled but I still got:

ImportError: libboost_python.so.1.55.0: cannot open shared object file: No such file or directory

The problem was that the python module was not able to load correctly, so I needed to add another linker option. Here, I report the final Makefile:

# location of the Python header file
PYTHON_VERSION = 2.7
PYTHON_INCLUDE = /usr/include/python$(PYTHON_VERSION)

# location of the Boost Python include files and library
BOOST_INC = ${HOME}/boost/include
BOOST_LIB = ${HOME}/boost/lib

# compile mesh classes
TARGET = hello_ext

$(TARGET).so: $(TARGET).o
	g++ -shared -Wl,-rpath,$(BOOST_LIB) -Wl,--export-dynamic $(TARGET).o -L$(BOOST_LIB) -lboost_python -L/usr/lib/python$(PYTHON_VERSION)/config -lpython$(PYTHON_VERSION) -o $(TARGET).so

$(TARGET).o: $(TARGET).C
	g++ -I$(PYTHON_INCLUDE) -I$(BOOST_INC) -fPIC -c $(TARGET).C

Notice the **-Wl,-rpath,** option, which apparently makes the newly created shared library available to the python script. @Dan: Thanks for the valuable hints.
How to validate the syntax of a Python script? Question: I just want the simplest possible way for my Python script to ask "is the Python code which I just generated syntactically valid Python?" I tried:

try:
    import py_compile
    x = py_compile.compile(generatedScriptPath, doraise=True)
    pass
except py_compile.PyCompileError, e:
    print str(e)
    pass

But even with a file containing invalid Python, the exception is not thrown and afterwards `x == None`.

Answer: There is no need to use [`py_compile`](https://docs.python.org/3/library/py_compile.html). Its intended use is to _write a bytecode file_ from the given source file. In fact it will fail if you don't have the permissions to write in the directory, and thus you could end up with some false negatives. To just parse, and thus validate the syntax, you can use the [`ast`](https://docs.python.org/2/library/ast.html) module to [`parse`](https://docs.python.org/2/library/ast.html#ast.parse) the contents of the file, or directly call the [`compile`](https://docs.python.org/2/library/functions.html#compile) built-in function.

import ast

def is_valid_python_file(fname):
    with open(fname) as f:
        contents = f.read()
    try:
        ast.parse(contents)
        #or compile(contents, fname, 'exec', ast.PyCF_ONLY_AST)
        return True
    except SyntaxError:
        return False

Be sure to _not_ execute the file, since if you cannot trust its contents (and if you don't even know whether the file contains valid syntax I doubt you can actually trust the contents even if you generated them) you could end up executing malicious code.
Python will not print to a text file Question: For some reason, the following code is unable to print to `Cache.txt`

import random
import time

text_file = open("Cache.txt", "w")
text_file.write("Numbers Used \n")
print("Welcome to the Bingo number generator")
UserNumber = str((random.randint(1,90)))
print(UserNumber)
text_file.write ("UserNumber")
text_file.close

Can someone help me figure out what is wrong with this code?

Answer: You need to call the `close()` function, like this:

text_file.close()

Plus, a more Pythonic way would be to call `open()` with the context manager [with](https://docs.python.org/2/library/contextlib.html):

with open("Cache.txt", "w") as text_file:
    text_file.write("Numbers Used \n")
    print("Welcome to the Bingo number generator")
    UserNumber = str((random.randint(1,90)))
    print(UserNumber)
    text_file.write ("UserNumber")

No need to call `close()` when doing it that way.
error using twilio.rest in python3 Question: I'm having an exception occur that I can't understand. I'm using `twilio.rest` in python 3.4.2 and when I call the twilio api, I'm receiving

> AttributeError: 'module' object has no attribute 'Http'
> (http2lib?)

I am using `from twilio.rest import TwilioRestClient`; on the twilio website it shows that that should be the only import that I need, but clearly something isn't working properly.

Answer: UPDATE: I finally fixed this. When I would install twilio, it would give the message "requirement already satisfied" for httplib2, six, pytz, pysocks. I uninstalled twilio, uninstalled each of those packages, and then reinstalled [Twilio](https://www.twilio.com/docs/python/install) (hoping it would re-apply those packages). It works now! I can't really explain why, or perhaps there was a better method, but I wanted to share.
Problems importing from python package (local vs on the python path) Question: I am unable to get the following things to work simultaneously in Python2.7: * import submodules from a local package * import submodules from the package when it is on the PYTHONPATH I have set up the sample directory structure as in the [python docs for packages](https://docs.python.org/2/tutorial/modules.html#packages) cd tmp mkdir sound mkdir sound/formats mkdir sound/effects mkdir sound/filters cd sound touch __init__.py cp !$ formats/ cp __init__.py formats/ cp __init__.py effects/ cp __init__.py filters/ echo "def echofilter(): return(1) " > effects/echo.py cd ../.. So, I end up with: $ ls tmp/sound effects/ filters/ formats/ __init__.py If I make the package local, as it would be if it were acting as a git submodule, for instance, ie by `cd tmp`, the following is successful: $ python Python 2.7.10 (default, Oct 14 2015, 16:09:02) [GCC 5.2.1 20151010] on linux2 >>> from sound.effects import echo >>> Yet if I am located somewhere else (`cd ..`), and I put the package on my PYTHONPATH, I cannot import as in the docs: [~/tmp]$ cd .. [~]$ python Python 2.7.10 (default, Oct 14 2015, 16:09:02) [GCC 5.2.1 20151010] on linux2 >>> import sys >>> sys.path.append('/home/meuser/tmp/sound') >>> from sound.effects import echo Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named sound.effects >>> sys.path.append('/home/meuser/tmp/sound/effects') >>> from sound.effects import echo Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named sound.effects So how am I supposed to arrange things? Answer: Ah! If it's a package, I need the folder _containing_ the package (ie ~/tmp/) in my path, not the folder of the package itself (~/tmp/sound). Thus, the following works: >>> import sys >>> sys.path.append('/home/meuser/tmp/') >>> from sound.effects import echo Now I believe this solves all my recent conundrums..
2D heat map using python and matplotlib Question: I need to plot a 2D "heat map" using python, using data from my file. My file has 3 columns: x, y, value. x goes from 1 to 199 and y from 1 to 49. I've managed to use code from here: [Make a 2D pixel plot with matplotlib](http://stackoverflow.com/questions/6323737/make-a-2d-pixel-plot-with-matplotlib) but my area is rectangular and I need it to be a "lying" rectangle, while the code above makes it a "standing" rectangle. Is there any way to rotate it by 90 degrees anti-clockwise or transpose the data? I'm very new to python and all the solutions I've found don't work... Here's my code that produces the "standing" rectangle:

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm

x,y,temp = np.loadtxt('snorm000990987662298').T
nrows, ncols = 199, 49
grid = temp.reshape((nrows, ncols))

plt.imshow(grid, cmap=cm.gist_gray)
plt.show()

Answer: Try using [`numpy.transpose`](http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.transpose.html):

grid = np.transpose(grid)
plt.imshow(grid, cmap=cm.gist_gray)
plt.show()
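Equivalently, every NumPy array has a `.T` attribute, and if a true 90-degree anti-clockwise rotation (rather than a transpose) is what's wanted, `np.rot90` does exactly that:

# either transpose...
plt.imshow(grid.T, cmap=cm.gist_gray)
# ...or rotate 90 degrees anti-clockwise (pick one)
# plt.imshow(np.rot90(grid), cmap=cm.gist_gray)
plt.show()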
Python.net and DLLs Question: I want to call a .Net dll using Python.net. Essentially what I want to do is operate a hardware device that comes with some dlls and sample C# code for operating it. The sample of C# code I want to reimplement in python is: string[] strDeviceKeys = null; CmdLib8742 cmdLib = new CmdLib8742 (true, 5000, ref strDeviceKeys); CmdLib8742 comes from a .net dll. python.net _seems_ to be installed correctly. I can do this in python: import clr clr.AddReference('CmdLib') import NewFocus.Picomotor cl = NewFocus.Picomotor.CmdLib8742() That runs without error, but obviously doesn't seem to actually connect to the device. It seems like I need to convert the python types to types that are understood by C#, so I probably have to replace `true` with `System.Boolean(True)`. But it's not clear to me how I pass this `ref strDeviceKeys`. I'm guessing this is passing a reference to the array of strings, but I don't know how to do that in python. Any help would be appreciated. Thanks. Answer: You will probably need to look at [the `ctypes` library](https://docs.python.org/3.5/library/ctypes.html), which allows conversion between architecture-specific and Python values.
Python 2.7: applying str to collections.Counter and collections.defaultdict Question: collections.Counter and collections.defaultdict are both inherited from dict. So what is the difference between them which causes non-similar output ('class' and 'type')?

import collections
print str(collections.Counter)
print str(collections.defaultdict)

Output:

<class 'collections.Counter'>
<type 'collections.defaultdict'>

Answer: I'm afraid your answer here boils down to something rather boring: `Counter` is written in Python and `defaultdict` is written in C. Here's [`collections.py`](https://svn.python.org/projects/python/trunk/Lib/collections.py). Notice you can scroll down and find a standard class definition for `Counter`:

########################################################################
### Counter
########################################################################

class Counter(dict):
    '''Dict subclass for counting hashable items. Sometimes called a bag
    or multiset. Elements are stored as dictionary keys and their counts
    are stored as dictionary values.
    ...
    '''

However, `defaultdict` is imported from `_collections`:

from _collections import deque, defaultdict

As noted in [this answer](http://stackoverflow.com/a/18075267/2588818), that's a built-in extension written in C. You'll notice you get this same behavior if you string-ify `deque` (also C) or some other class from `collections` written in Python:

>>> from collections import deque
>>> str(deque)
"<type 'collections.deque'>"
>>> from collections import OrderedDict
>>> str(OrderedDict)
"<class 'collections.OrderedDict'>"
Passing a list to a function in order to move objects in tkinter (python 3) Question: I am making a rubiks slide game. The first functioning button I am doing is the clockwise button (every other button does nothing). I have the four positions of squares set, and put them in a list called board. I then pass the list to the clockwise_move function from the clockwise button, and it is supposed to move the square in whatever position stated. I cannot understand why this will not work.

from tkinter import *
from tkinter import ttk

# --- Functions ---

def clockwise_move(square):
    canvas.coords(square[0], 500, 2, 250, 250)
    canvas.coords(square[1], 500, 490, 249, 250)
    canvas.coords(square[2], 2, 2, 249, 249)
    canvas.coords(square[3], 2, 490, 249, 250)

# --- Setup ---
main = Tk()
main.title("Rubik's Slide")
main.resizable(width=FALSE, height=FALSE)
main.geometry("700x550")

# --- Objects ---
frame = ttk.Frame(main)
button_frame = ttk.Frame(frame)
canvas = Canvas(frame, width=500, height=700)

#squares
square1 = canvas.create_rectangle(2, 2, 249, 249, fill="red")
square2 = canvas.create_rectangle(500, 2, 250, 250, fill="white")
square3 = canvas.create_rectangle(2, 490, 249, 250, fill="blue")
square4 = canvas.create_rectangle(500, 490, 250, 250, fill="black")

board = [square1,square2,square3,square4]

#buttons
clockwise = ttk.Button(button_frame, text="Clockwise", command=clockwise_move(board))
clockwise.grid(column=2, row=3)

counter_clockwise = ttk.Button(button_frame, text="Counterclock")
counter_clockwise.grid(column=3, row=3)

right = ttk.Button(button_frame, text="Right")
right.grid(column=2, row=1)

left = ttk.Button(button_frame, text="Left")
left.grid(column=2, row=2)

up = ttk.Button(button_frame, text="Up")
up.grid(column=3, row=1)

down = ttk.Button(button_frame, text="Down")
down.grid(column=3, row=2)

start = ttk.Button(button_frame, text="Start")
start.grid(column=2, row=4)

reset = ttk.Button(button_frame, text="Reset")
reset.grid(column=3, row=4)

#frame grid
frame.grid(column=1, row=1)
canvas.grid(column=1, row=1)
button_frame.grid(column=2, row=1)

#misc settings
for child in button_frame.winfo_children():
    child.grid_configure(padx=10, pady=20)

main.mainloop()

#end of GUI

Answer: In the line where you assign the command `clockwise_move` onto the button, you are assigning the result of the function. So the outcome is that your function gets called once, when your app starts, without the button being clicked. To prove this you can simply put a print("Call") into your definition of `clockwise_move`. Instead you can wrap the function call into a lambda expression: `command=lambda: clockwise_move(board)`. So instead of passing in the result of the function, you are passing in an anonymous function object. An alternative would be removing the parameter `square` and using a global variable. Without a parameter you would simply:

def prototype():
    print("do something")

...
command = prototype # the reference to the function

The button click would then act like `command()`. Btw, to make things easier you could also save your positions in a dictionary, list or Enum; here I am using a dictionary:

# Top Right | Bottom Right | Top Left | Bottom Left
p = {"tr":[500, 2, 250, 250],
     "br":[500, 490, 249, 250],
     "tl":[2, 2, 249, 249],
     "bl":[2, 490, 249, 250]}

def clockwise_move(square):
    canvas.coords(square[0], *p["tr"])
    canvas.coords(square[1], *p["br"])
    canvas.coords(square[2], *p["tl"])
    canvas.coords(square[3], *p["bl"])
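An equivalent alternative to the lambda is `functools.partial`, which likewise defers the call and freezes the `board` argument:

from functools import partial

# partial(clockwise_move, board) builds a zero-argument callable;
# tkinter invokes it on each click
clockwise = ttk.Button(button_frame, text="Clockwise",
                       command=partial(clockwise_move, board))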
How to get all forms on webpage using python selenium? Question: I searched for an answer before asking here but didn't get lucky. So here it goes: I am doing web scraping using python selenium. Before choosing selenium I checked mechanize and scrapy, but I failed to execute some button clicks with them; then I checked selenium and it seems okay, with some cons. I am saying all this because if I chose the wrong tool then please correct me before it is too late. My question about selenium is how do I get all elements on a webpage using xpath. E.g. on each webpage I have 10 forms and each of them has a button. So I want to get all the forms on the web page to loop over them and click the buttons one by one. e.g.:

<form id="#F0">
<input type="button" name="itens" value="Items" class="texField2" onClick="somefunction()"/>
</form>
<form id="#F0">
<input type="button" name="itens" value="Items" class="texField2" onClick="somefunction()"/>
</form>
<form id="#F1">
<input type="button" name="itens" value="Items" class="texField2" onClick="somefunction()"/>
</form>
<form id="#F2">
<input type="button" name="itens" value="Items" class="texField2" onClick="somefunction()"/>
</form>
<form id="#F3">
<input type="button" name="itens" value="Items" class="texField2" onClick="somefunction()"/>
</form>
<form id="#F4">
<input type="button" name="itens" value="Items" class="texField2" onClick="somefunction()"/>
</form>

Another thing: what I am doing is clicking the button on the 1st form (it takes me to another page, I go back in history and click the 2nd button, then go back in history and so on), so it seems scraping would be slow. Is there any better way to do the same? Thank you !! Edit:

from selenium import webdriver

mydriver = webdriver.Firefox()
baseurl = """http://www.comprasnet.gov.br/consultalicitacoes/ConsLicitacao_Filtro.asp?numprp=&dt_publ_ini=24/02/2016&dt_publ_fim=10/03/2016&chkModalidade=1,2,3,20,5,99&chk_concor=31,32,41,42&chk_pregao=1,2,3,4&chk_rdc=1,2,3,4&optTpPesqMat=M&optTpPesqServ=S&chkTodos=-1&chk_concorTodos=-1&chk_pregaoTodos=-1&txtlstUf=&txtlstMunicipio=&txtlstUasg=&txtlstGrpMaterial=&txtlstClasMaterial=&txtlstMaterial=&txtlstGrpServico=&txtlstServico=&txtObjeto="""
mydriver.get(baseurl)
mydriver.find_element_by_id('ok').click()
buttons = mydriver.find_element_by_xpath("//form//input[@type='button']")
for button in buttons:
    button.click()

Answer: You can get the form buttons by XPath with the code below:

buttons = driver.find_elements_by_xpath(".//form//input[@type='button']")

And iterate over them via a simple for loop:

for button in buttons:
    button.click()

Alternatively, you can use the [find_elements_by_css_selector](http://selenium-python.readthedocs.org/locating-elements.html#locating-elements) function to get the elements:

buttons = driver.find_elements_by_css_selector("input[type='button']")
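One caveat about the loop: if each button click navigates to another page, the element references collected before the click go stale, and Selenium raises a StaleElementReferenceException on the next iteration. A sketch that re-locates the buttons after every navigation:

count = len(mydriver.find_elements_by_xpath("//form//input[@type='button']"))
for i in range(count):
    # re-find on each pass; references from before the navigation are stale
    buttons = mydriver.find_elements_by_xpath("//form//input[@type='button']")
    buttons[i].click()
    mydriver.back()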
Multiple Matrix Multiplications with Numpy Question: I have about 650 csv-based matrices. I plan on loading each one using Numpy as in the following example:

m1 = numpy.loadtxt(open("matrix1.txt", "rb"), delimiter=",", skiprows=1)

There are matrix2.txt, matrix3.txt, ..., matrix650.txt files that I need to process. My end goal is to multiply each matrix by each other, meaning I don't necessarily have to maintain 650 matrices but rather just 2 (1 ongoing and 1 that I am currently multiplying my ongoing by.) Here is an example of what I mean with matrices defined from 1 to n: M1, M2, M3, .., Mn.

M1*M2*M3*...*Mn

The dimensions on all the matrices are the same. **The matrices are not square. There are 197 rows and 11 columns.** None of the matrices are sparse and every cell comes into play. What is the best/most efficient way to do this in python?

EDIT: I took what was suggested and got it to work by taking the transpose, since it isn't a square matrix. As an addendum to the question, **is there a way in Numpy to do element-by-element multiplication?**

Answer: A Python3 solution, if "each matrix by each other" actually means just multiplying them in a row and _the matrices have compatible dimensions_ ( (n, m) · (m, o) · (o, p) · ... ), which you hint at with "(1 ongoing and 1 that...)", then use (if available):

from functools import partial
fnames = map("matrix{}.txt".format, range(1, 651))
np.linalg.multi_dot(map(partial(np.loadtxt, delimiter=',', skiprows=1), fnames))

or:

from functools import reduce, partial
fnames = map("matrix{}.txt".format, range(1, 651))
matrices = map(partial(np.loadtxt, delimiter=',', skiprows=1), fnames)
res = reduce(np.dot, matrices)

Maps etc. are lazy in python3, so files are read as needed. Loadtxt doesn't require a pre-opened file, a filename will do. Doing all the combinations lazily, given that the matrices have the same shape (will do a lot of rereading of data):

from functools import partial
from itertools import starmap, combinations

map_loadtxt = partial(map, partial(np.loadtxt, delimiter=',', skiprows=1))
fname_combs = combinations(map("matrix{}.txt".format, range(1, 651)), 2)
res = list(starmap(np.dot, map(map_loadtxt, fname_combs)))

Using a bit of grouping to reduce reloading of files:

from itertools import groupby, combinations, chain
from functools import partial
from operator import itemgetter

loader = partial(np.loadtxt, delimiter=',', skiprows=1)
fname_pairs = combinations(map("matrix{}.txt".format, range(1, 651)), 2)
groups = groupby(fname_pairs, itemgetter(0))
res = list(chain.from_iterable(
    map(loader(k).dot, map(loader, map(itemgetter(1), g)))
    for k, g in groups
))

Since the matrices are not square, but have the same dimensions, you would have to add transposes before multiplication to match the dimensions. For example either `loader(k).T.dot` or `map(np.transpose, map(loader, ...))`. If on the other hand the question actually was meant to address element-wise multiplication, replace `np.dot` with `np.multiply`.
Django error: datetime.datetime Question: When I execute `python manage.py migrate`, I get this error on the screen:

File "/usr/local/lib/python2.7/dist-packages/django/db/backends/base/schema.py", line 146, in column_sql
    default_value = self.effective_default(field)
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/base/schema.py", line 211, in effective_default
    default = field.get_db_prep_save(default, self.connection)
File "/usr/local/lib/python2.7/dist-packages/django/db/models/fields/__init__.py", line 710, in get_db_prep_save
    prepared=False)
File "/usr/local/lib/python2.7/dist-packages/django/db/models/fields/__init__.py", line 2003, in get_db_prep_value
    value = self.get_prep_value(value)
File "/usr/local/lib/python2.7/dist-packages/django/db/models/fields/__init__.py", line 2010, in get_prep_value
    if value and ':' in value:
TypeError: argument of type 'datetime.datetime' is not iterable

I tried to execute `python manage.py syncdb` but the same error appears. What could be the issue? `Models.py` for more information:

from django.db import models
import datetime
from django.utils import timezone

class Machine(models.Model):
    name = models.CharField(max_length=20, unique=True)
    ins_date = models.DateField(default=timezone.now)
    mod_date = models.DateField(default=timezone.now)
    nagios_name = models.CharField(max_length=20)
    dns = models.CharField(max_length=30)
    ip_int = models.GenericIPAddressField(null=True, blank=True)
    ip_ext = models.GenericIPAddressField()
    vlan = models.IntegerField(null=True, blank=True)
    custom = models.BooleanField(default=False)

    def __str__(self):
        return self.name

class SecurityGroup(models.Model):
    name = models.CharField(max_length=20)
    description = models.CharField(max_length=30, null=True, blank=True)
    ins_date = models.DateField(default=timezone.now)
    mod_date = models.DateField(default=timezone.now)

    def __str__(self):
        return self.name

class NACL(models.Model):
    machine = models.ForeignKey(Machine)
    securityGroup = models.ForeignKey(SecurityGroup)
    ins_date = models.DateField(default=timezone.now)
    mod_date = models.DateField(default=timezone.now)

BYADMIN_CHOICES = ((1, "Input"),(0, "Output"),)

class Rule(models.Model):
    type_rule = models.CharField(max_length=10)
    description = models.CharField(max_length=30, null=True, blank=True, default=None)
    protocol = models.CharField(max_length=10)
    port_range_min = models.IntegerField()
    port_range_max = models.IntegerField(null=True, blank=True)
    #sg_object = models.CharField(max_length=20, null=True, blank=True)
    ip_object = models.GenericIPAddressField(null=True, blank=True)
    securityGroup_object = models.ForeignKey(SecurityGroup, null=True, blank=True)
    bound = models.BooleanField(choices=BYADMIN_CHOICES, default=1) #if bound == True: input else: output

    class Meta:
        unique_together = (("type_rule", "protocol", "port_range_min", "bound"))

    def __str__(self):
        return ("INPUT: " if(self.bound) else "OUTPUT: ") + self.type_rule + " " + self.protocol + " " + str(self.port_range_min) + " " + (str(self.securityGroup_object) if(self.securityGroup_object != None) else self.ip_object)

class Sg_rule(models.Model):
    securityGroup = models.ForeignKey(SecurityGroup)
    rule = models.ForeignKey(Rule)
    ins_date = models.DateField(default=timezone.now)
    mod_date = models.DateField(default=timezone.now)

class Service(models.Model):
    name = models.CharField(max_length=20)
    macroService = models.CharField(max_length=20)

    def __str__(self):
        return self.name

class M_Service(models.Model):
    machine = models.ForeignKey(Machine)
    service = models.ForeignKey(Service)
    ins_date = models.DateField(default=timezone.now)
    mod_date = models.DateField(default=timezone.now)

And the migration that is failing:

# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import migrations, models
import datetime
from django.utils.timezone import utc

class Migration(migrations.Migration):

    dependencies = [
        ('adminApp', '0009_auto_20160229_1716'),
    ]

    operations = [
        migrations.AddField(
            model_name='rule',
            name='ip_object',
            field=models.GenericIPAddressField(default=datetime.datetime(2016, 2, 29, 17, 17, 42, 169176, tzinfo=utc)),
            preserve_default=False,
        ),
        migrations.AlterField(
            model_name='rule',
            name='sg_object',
            field=models.CharField(max_length=20),
        ),
    ]

Answer: Your migration is trying to use a datetime as the default for an ip address field. That doesn't make sense!

field=models.GenericIPAddressField(default=datetime.datetime(2016, 2, 29, 17, 17, 42, 169176, tzinfo=utc)),
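Since the model in the question already declares `ip_object = models.GenericIPAddressField(null=True, blank=True)`, one way out (a sketch, to be adjusted to your migration history) is to edit the generated migration so the `AddField` matches the model instead of carrying the bogus datetime default:

migrations.AddField(
    model_name='rule',
    name='ip_object',
    # nullable field, so no default value is needed for existing rows
    field=models.GenericIPAddressField(null=True, blank=True),
),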
python - web scraping BeautifulSoup and urllib Question: I am using python 3.4 and my script looks like:

#!/usr/bin/python
# -*- coding: utf-8 -*-

import urllib
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError
from bs4 import BeautifulSoup

url = "http://www.embassy-worldwide.com/"

headers={'User-Agent': 'Mozilla/5.0'}
#req = Request(url, headers)
try:
    req = urllib.request.Request(url, headers)
    #print (req)
except HTTPError as e:
    print('Error code: ', e.code)
except URLError as e:
    print('Reason: ', e.reason)
else:
    print('good!')
    print (req)

#html = urllib.request.urlopen(req)
with urllib.request.urlopen(req) as response:
    html = response.read()
print(html)

The code above results in an error:

**ValueError: Content-Length should be specified for iterable data of type {'User-Agent': 'Mozilla/5.0'}**

How can I get the html code and then iterate the tags to get a list with all countries?

Answer: Try this style in urllib3:

import sys
import re
import time
import pprint
import codecs
import unicodedata
import urllib3
import json

urllib3.disable_warnings()
cookie = '_session_id=29913b5f1b8836d2a8387ef4db00745e'
header = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_2) AppleWebKit/536.26.17 (KHTML, like Gecko) Version/6.0.2 Safari/536.26.17'

url = 'https://yoururl.com/'
m = urllib3.PoolManager(num_pools = 15)
r = m.request('GET', url, None, {'User-Agent' : header, 'Cookie' : cookie})
print(r.data)

There are more imports here than needed; it's just a snippet from a bigger scraper I use, which relies on some regex because, for the tiny snippets I need, regex is in my case faster than a full BeautifulSoup pass.
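For what it's worth, the original `ValueError` most likely comes from `urllib.request.Request(url, headers)` passing the headers dict positionally as the `data` argument; the signature is `Request(url, data=None, headers={}, ...)`. Passing it as a keyword should make the stdlib version work:

# headers must be passed by keyword, otherwise they land in `data`
req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
with urllib.request.urlopen(req) as response:
    html = response.read()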
Attribute error : RPi.GPIO.PWM has no attribute 'ChangeDutyCycle' Question: So i'm making a little robot using a raspberry pi B+ , everything is going fairly nicely, but I get this error message that I don't seem to be able to solve ..

    Attribute error : RPi.GPIO.PWM has no attribute 'ChangeDutyCycle'

Here's two things you might need to know : :D

\- The code worked perfectly before I put it as a class, ChangeDutyCycle DOES exist

\- If I put my mouse in an 'unallowed' position, i.e. which gives ChangeDutyCycle a value less than 0 or more than 100, the error changes and becomes 'ChangeDutyCycle can't accept value less than 0 or more than 100' (So first you tell me there is no such function, and then tell me it can't have some values? :roll: )

So I'm going slightly crazy now. Note : I'm a complete beginner in python, and honestly, it seems to me like a bad version of java, but the RPi GPIO seems to be optimised for this language so I'm trying it out, so if you see any other mistake or bad things, do let me know :D

The code now. My 'main', where I take input from a pygame window (position of mouse):

    #!/usr/bin/python
    # -*- coding: utf-8 -*-
    import pygame
    import RPi.GPIO as GPIO
    from pygame.locals import *
    from control import control

    print('Path :', pygame.__file__)

    def main():
        pygame.init()
        screen = pygame.display.set_mode((200,200))
        ctrl = control()

        bg = pygame.Surface(screen.get_size())
        bg = bg.convert()
        bg.fill((250,250,250))

        font = pygame.font.Font(None, 36)
        text = font.render("Hello", 1, (10,10,10,))
        textpos = text.get_rect()
        textpos.centerx = bg.get_rect().centerx
        bg.blit(text, textpos)

        screen.blit(bg, (0,0))
        pygame.display.flip()

        try:
            while 1:
                for event in pygame.event.get():
                    bg.fill((250, 250, 250))
                    if event.type == QUIT:
                        return
                    pos = pygame.mouse.get_pos()
                    x = pos[0]-100
                    y = -(pos[1]-200)-100
                    text = font.render(str(x)+' '+str(y), 1, (10,10,10,))
                    ctrl.updateEngine(x,y)
                    bg.blit(text,textpos)
                    screen.blit(bg, (0,0))
                    pygame.display.flip()
        except KeyboardInterrupt:
            return
        finally:
            ctrl.cleanup()

    if __name__ == '__main__':
        main()

And my class with functions to control motors, control.py:

    import RPi.GPIO as GPIO
    from time import sleep
    import sys
    import Tkinter as tk

    class control:
        def __init__(self):
            import RPi.GPIO as GPIO
            GPIO.setmode(GPIO.BCM)
            GPIO.setup(3, GPIO.OUT)   # set GPIO 25 as output for white led
            GPIO.setup(18, GPIO.OUT)  # set GPIO 24 as output for red led
            GPIO.setup(2, GPIO.OUT)
            GPIO.setup(17, GPIO.OUT)

            self.Rb = GPIO.PWM(2, 100)
            self.Lf = GPIO.PWM(18, 100)
            self.Lb = GPIO.PWM(17, 100)
            self.Rf = GPIO.PWM(3, 100)

            self.Rb.start(0)
            self.Rf.start(0)
            self.Lb.start(0)
            self.Lf.start(0)

        def cleanup(self):
            self.Rb.stop()
            self.Rf.stop()
            self.Lb.stop()
            self.Lf.stop()

        def updateEngine(self, x, y):
            self.clear()
            if y>15 and (x>15 or x<-15):
                self.Rf.ChangeDutyCucle(y-15-x)
                self.Lf.ChangeDutyCycle(y-15+x)
            elif y<-15 and (x>15 or x<-15):
                self.Rb.ChangeDutyCycle(-15-y-x)
                self.Lb.ChangeDutyCycle(-15-y+x)

        def clear(self):
            self.Rf.ChangeDutyCycle(0)
            self.Rb.ChangeDutyCycle(0)
            self.Lf.ChangeDutyCycle(0)
            self.Lb.ChangeDutyCycle(0)

So the error happens in control.py, at the updateEngine method. Also you'll note I imported the same package (RPi.GPIO) an amazing 3 times, because I'm not sure where to import it! :) Any help would be graciously accepted :)

Answer: I tracked down my issue to a typo: My `updateEngine` function had a call to `self.Rf.ChangeDutyCucle`, when it should have been `self.Rf.ChangeDutyCycle`.
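In other words, the corrected branch is just the following — and, given the "less than 0 or more than 100" error mentioned in the question, it may also be worth clamping the computed duty cycle (the clamping is an extra suggestion, not part of the original fix):

    if y > 15 and (x > 15 or x < -15):
        # was ChangeDutyCucle; clamp into the 0-100 range PWM accepts
        self.Rf.ChangeDutyCycle(max(0, min(100, y - 15 - x)))
        self.Lf.ChangeDutyCycle(max(0, min(100, y - 15 + x)))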
Unicode error ascii can't encode character Question: I am trying to import a csv file in order to train my classifier but I keep receiving this error:

    Traceback (most recent call last):
      File "updateClassif.py", line 17, in <module>
        myClassif = NaiveBayesClassifier(fp, format="csv")
      File "C:\Python27\lib\site-packages\textblob\classifiers.py", line 191, in __init__
        super(NLTKClassifier, self).__init__(train_set, feature_extractor, format, **kwargs)
      File "C:\Python27\lib\site-packages\textblob\classifiers.py", line 123, in __init__
        self.train_set = self._read_data(train_set, format)
      File "C:\Python27\lib\site-packages\textblob\classifiers.py", line 143, in _read_data
        return format_class(dataset, **self.format_kwargs).to_iterable()
      File "C:\Python27\lib\site-packages\textblob\formats.py", line 68, in __init__
        self.data = [row for row in reader]
      File "C:\Python27\lib\site-packages\textblob\unicodecsv\__init__.py", line 106, in next
        row = self.reader.next()
    UnicodeEncodeError: 'ascii' codec can't encode character u'\xe6' in position 55: ordinal not in range(128)

The CSV file contains 1600000 lines of tweets, so I believe some tweets contain special characters. I have tried saving it using OpenOffice as someone recommended, but still the same result. I also tried using latin encoding, with the same result. This is my code:

    with codecs.open('tr.csv', 'r', encoding='latin-1') as fp:
        myClassif = NaiveBayesClassifier(fp, format="csv")

This is the code from the library I am using:

    def __init__(self, csvfile, fieldnames=None, restkey=None, restval=None,
                 dialect='excel', encoding='utf-8', errors='strict', *args, **kwds):
        if fieldnames is not None:
            fieldnames = _stringify_list(fieldnames, encoding)
        csv.DictReader.__init__(self, csvfile, fieldnames, restkey, restval, dialect, *args, **kwds)
        self.reader = UnicodeReader(csvfile, dialect, encoding=encoding, errors=errors, *args, **kwds)
        if fieldnames is None and not hasattr(csv.DictReader, 'fieldnames'):
            # Python 2.5 fieldnames workaround. (http://bugs.python.org/issue3436)
            reader = UnicodeReader(csvfile, dialect, encoding=encoding, *args, **kwds)
            self.fieldnames = _stringify_list(reader.next(), reader.encoding)
        self.unicode_fieldnames = [_unicodify(f, encoding) for f in self.fieldnames]
        self.unicode_restkey = _unicodify(restkey, encoding)

    def next(self):
        row = csv.DictReader.next(self)
        result = dict((uni_key, row[str_key]) for (str_key, uni_key)
                      in izip(self.fieldnames, self.unicode_fieldnames))
        rest = row.get(self.restkey)

Answer: Note that the traceback says _En_ codeError, not DecodeError. It looks like the NaiveBayesClassifier is expecting ascii. Either make it accept Unicode, or, if this is OK for your application, replace non-ascii characters with '?' or something.
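A minimal sketch of the second option — scrub the input down to ASCII first and feed the classifier the cleaned copy (the file names here are just placeholders):

    import codecs
    from textblob.classifiers import NaiveBayesClassifier

    # Re-encode to ASCII; errors='replace' turns any non-ASCII character into '?'
    with codecs.open('tr.csv', 'r', encoding='latin-1') as src, \
         codecs.open('tr_ascii.csv', 'w', encoding='ascii', errors='replace') as dst:
        for line in src:
            dst.write(line)

    with open('tr_ascii.csv', 'rb') as fp:
        myClassif = NaiveBayesClassifier(fp, format="csv")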
Cookies and render for an angular js app using Phantomjs and python Question: I have an object that can call a web page fine, even with `add_cookie`. However, I want to render a site that is making heavy use of angularjs. For some reason I seem to be unable to set the correct cookies, which I have pulled from a live session. So I am uncertain how much of this is cookie-foo that I'm getting wrong, and how much is angular-js-foo that needs to happen that I'm missing. If people have input here that would be great, but even comments pointing me to where to go read up would be helpful, because I'm not getting far on my own. I've scrubbed the values below. Even some pointers on how to debug this would help. My goal is to render the `div`s on the page.

    from selenium import webdriver
    from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
    from bs4 import BeautifulSoup

    class WebPage():
        def __init__(self):
            dcap = dict(DesiredCapabilities.PHANTOMJS)
            dcap["phantomjs.page.settings.userAgent"] = (
                "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/53 "
                "(KHTML, like Gecko) Chrome/15.0.87"
            )
            self.driver = webdriver.PhantomJS(desired_capabilities=dcap,
                                              service_args=['--ignore-ssl-errors=true', '--ssl-protocol=any'])
            self.driver.set_window_size(1024, 768)

        def get_page(self):
            url = "http://proadvisor.intuit.com/search/index-standalone.html?v2#/search-result?location=60613&distance=500"
            self.driver.get(url)
            cookie = {'aam_aud': 'a_value', 'ttax': 0}
            self.driver.add_cookie(cookie)
            page = BeautifulSoup(self.driver.page_source, "lxml")
            page.find_all("div")

    if __name__ == '__main__':
        wp = WebPage()
        wp.get_page()

and just now getting:

> selenium.common.exceptions.WebDriverException: Message: Error Message => 'Can only set Cookies for the current domain' caused by Request => {"headers":{"Accept":"application/json","Accept-Encoding":"identity","Connection":"close","Content-Length":"104","Content-Type":"application/json;charset=UTF-8","Host":"127.0.0.1:34940","User-Agent":"Python-urllib/3.3"},"httpVersion":"1.1","method":"POST","post":"{\"cookie\": {\"aam_aud\": \"a_value\", \"ttax\": 0}, \"sessionId\": \"A-hash\"}","url":"/cookie","urlParsed":{"anchor":"","query":"","file":"cookie","directory":"/","path":"/cookie","relative":"/cookie","port":"","host":"","password":"","user":"","userInfo":"","authority":"","protocol":"","source":"/cookie","queryKey":{},"chunks":["cookie"]},"urlOriginal":"/session/a_hash/cookie"}

Answer: You have to add a domain to your cookie. Try something like this:

    cookie = {'aam_aud': 'a_value', 'ttax': 0, 'domain': 'proadvisor.intuit.com'}

You might have to play with it a bit for your base URL, but I think that should work. You have to tell the cookie which website it should be used on.
normalizing JSON datestrings to UTC python Question: I have an important test that says "Calculate users that logged in during the month of April normalized to the UTC timezone." Items look as such:

    [
     {u'email': u' [email protected]', u'login_date': u'2014-05-08T22:30:57-04:00'},
     {u'email': u'[email protected]', u'login_date': u'2014-04-25T13:27:48-08:00'},
    ]

It seems to me that an item like `2014-04-13T17:12:20-04:00` means "April 13th, 2014, at 5:12:20 pm, 4 hours behind UTC". Then I just use strptime to convert to datetime ([Converting JSON date string to python datetime](http://stackoverflow.com/questions/10805589/converting-json-date-string-to-python-datetime)), and subtract a timedelta of however many hours I get from a regex that grabs the end of the string? I feel this way because some have a + at the end instead of -, like `2014-05-07T00:30:06+07:00`. Thank you

Answer: It is probably best to use the [`dateutil.parser.parse`](https://pypi.python.org/pypi/python-dateutil/2.5.0) and [`pytz`](https://pypi.python.org/pypi/pytz) packages for this purpose. This will allow you to parse a string and convert it to a datetime object with UTC timezone:

    >>> s = '2014-05-08T22:30:57-04:00'
    >>> import dateutil.parser
    >>> import pytz
    >>> pytz.UTC.normalize(dateutil.parser.parse(s))
    datetime.datetime(2014, 5, 9, 2, 30, 57, tzinfo=<UTC>)
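Putting that together for the stated exercise, a sketch that collects the users whose login, once normalized to UTC, falls in April 2014 (`records` stands in for the list shown above):

    import dateutil.parser
    import pytz

    def april_logins(records):
        # Parse each ISO-8601 string (UTC offset included), convert to UTC,
        # and keep the ones that land in April 2014.
        users = set()
        for rec in records:
            dt = pytz.UTC.normalize(dateutil.parser.parse(rec[u'login_date']))
            if dt.year == 2014 and dt.month == 4:
                users.add(rec[u'email'])
        return users

Note that the offset matters: a `2014-05-01T03:00:00+07:00` login is actually April 30th in UTC, which is exactly why the normalization step is required.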
Concatenate audio files [Python 2.7] Question: **Is** there any way to concatenate or merge two audio files into one?

**Requirements**: Must use built-in modules only [may use PyGame]

**Audio File format**: .wma OR .wav OR .mp3

I have looked at many questions now and found solutions that involved downloading modules (which I do not prefer). Any help would be welcome.

Answer: I did some research and found this:

    # import libraries
    from glob import iglob
    import shutil
    import os

    # folder containing the source files
    PATH = r'C:\music'

    # open everything.mp3 and append the raw bytes of every .mp3 in PATH to it
    destination = open('everything.mp3', 'wb')
    for filename in iglob(os.path.join(PATH, '*.mp3')):
        shutil.copyfileobj(open(filename, 'rb'), destination)

    # close the combined file
    destination.close()

from [here](http://stackoverflow.com/questions/1001538/how-do-i-concatenate-files-in-python).
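Note that raw byte concatenation only really works for MP3; a .wav file carries a header that has to be rewritten, which the built-in `wave` module can do. A sketch (file names are placeholders, and the inputs are assumed to share the same sample rate and channel count):

    import wave

    infiles = ['one.wav', 'two.wav']
    outfile = 'combined.wav'

    # Read the parameters and raw frames of each input file
    data = []
    for name in infiles:
        w = wave.open(name, 'rb')
        data.append((w.getparams(), w.readframes(w.getnframes())))
        w.close()

    # Write one output file: reuse the first file's params, append all frames
    out = wave.open(outfile, 'wb')
    out.setparams(data[0][0])
    for params, frames in data:
        out.writeframes(frames)
    out.close()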
Reading Twitter JSON result Question: What is the correct way to read these twitter search results? {u'contributors': None, u'truncated': False, u'text': u"Google's deep learning project can figure out where any photo was taken, without geotags https://t.co/8URtvHUgjx https://t.co/hTQobCpA4U", u'is_quote_status': False, u'in_reply_to_status_id': None, u'id': 703129624285286400, u'favorite_count': 198, u'source': u'<a href="http://sproutsocial.com" rel="nofollow">Sprout Social</a>', u'retweeted': False, u'coordinates': None, u'entities': {u'symbols': [], u'user_mentions': [], u'hashtags': [], u'urls': [{u'url': u'https://t.co/8URtvHUgjx', u'indices': [89, 112], u'expanded_url': u'http://www.theverge.com/2016/2/25/11112594/google-new-deep-learning-image-location-planet?utm_campaign=theverge&utm_content=chorus&utm_medium=social&utm_source=twitter', u'display_url': u'theverge.com/2016/2/25/1111\u2026'}], u'media': [{u'source_user_id': 275686563, u'source_status_id_str': u'702916863450345474', u'expanded_url': u'http://twitter.com/verge/status/702916863450345474/photo/1', u'display_url': u'pic.twitter.com/hTQobCpA4U', u'url': u'https://t.co/hTQobCpA4U', u'media_url_https': u'https://pbs.twimg.com/media/CcFDKaHWEAEyUOR.jpg', u'source_user_id_str': u'275686563', u'source_status_id': 702916863450345474, u'id_str': u'702916862934388737', u'sizes': {u'small': {u'h': 383, u'resize': u'fit', u'w': 680}, u'large': {u'h': 675, u'resize': u'fit', u'w': 1200}, u'medium': {u'h': 675, u'resize': u'fit', u'w': 1200}, u'thumb': {u'h': 150, u'resize': u'crop', u'w': 150}}, u'indices': [113, 136], u'type': u'photo', u'id': 702916862934388737, u'media_url': u'http://pbs.twimg.com/media/CcFDKaHWEAEyUOR.jpg'}]}, u'in_reply_to_screen_name': None, u'in_reply_to_user_id': None, u'retweet_count': 232, u'id_str': u'703129624285286400', u'favorited': False, u'user': {u'follow_request_sent': False, u'has_extended_profile': False, u'profile_use_background_image': True, u'default_profile_image': False, u'id': 275686563, u'profile_background_image_url_https': u'https://pbs.twimg.com/profile_background_images/481546505468145664/a59ZFvIP.jpeg', u'verified': True, u'profile_text_color': u'333333', u'profile_image_url_https': u'https://pbs.twimg.com/profile_images/615501837341466624/I4jVBBp-_normal.jpg', u'profile_sidebar_fill_color': u'EFEFEF', u'entities': {u'url': {u'urls': [{u'url': u'http://t.co/W2SFxIXkC4', u'indices': [0, 22], u'expanded_url': u'http://www.theverge.com', u'display_url': u'theverge.com'}]}, u'description': {u'urls': [{u'url': u'https://t.co/W2SFxIXkC4', u'indices': [0, 23], u'expanded_url': u'http://www.theverge.com', u'display_url': u'theverge.com'}]}}, u'followers_count': 1180845, u'profile_sidebar_border_color': u'000000', u'id_str': u'275686563', u'profile_background_color': u'FFFFFF', u'listed_count': 29266, u'is_translation_enabled': True, u'utc_offset': -18000, u'statuses_count': 88374, u'description': u'https://t.co/W2SFxIXkC4 covers the future of technology, science, art, and culture. 
Snapchat: verge', u'friends_count': 139, u'location': u'New York', u'profile_link_color': u'FA4D2A', u'profile_image_url': u'http://pbs.twimg.com/profile_images/615501837341466624/I4jVBBp-_normal.jpg', u'following': False, u'geo_enabled': True, u'profile_banner_url': u'https://pbs.twimg.com/profile_banners/275686563/1433249898', u'profile_background_image_url': u'http://pbs.twimg.com/profile_background_images/481546505468145664/a59ZFvIP.jpeg', u'screen_name': u'verge', u'lang': u'en', u'profile_background_tile': False, u'favourites_count': 1217, u'name': u'The Verge', u'notifications': False, u'url': u'http://t.co/W2SFxIXkC4', u'created_at': u'Fri Apr 01 19:54:22 +0000 2011', u'contributors_enabled': False, u'time_zone': u'Eastern Time (US & Canada)', u'protected': False, u'default_profile': False, u'is_translator': False}, u'geo': None, u'in_reply_to_user_id_str': None, u'possibly_sensitive': False, u'lang': u'en', u'created_at': u'Fri Feb 26 08:09:00 +0000 2016', u'in_reply_to_status_id_str': None, u'place': None, u'metadata': {u'iso_language_code': u'en', u'result_type': u'popular'}}

I tried the following code but it always throws errors:

    with open('../data/full_results.txt', 'r') as fh:
        for tweet in fh:
            print(tweet['text'])

> TypeError: string indices must be integers, not str

While trying the below code, I get a ValueError:

    with open('../data/full_results.txt', 'r') as fh:
        for line in fh:
            tweet = json.loads(line)
            print(tweet['text'])

> ValueError: Expecting property name: line 1 column 2 (char 1)

But when I assign the same twitter response line to a variable in `Ipython`,

    In [2]: tweet = {u'contributors': None, ... u'result_type': u'popular'}}

    In [3]: tweet['text']
    Out [3]: u"Google's deep learning ...."

it gives the correct result. But I can't understand why?

Answer: `tweet` is a line read from the file, not a dictionary. And, it looks like each line is not a valid JSON string, but a string representation of a dictionary. The first thing to check/fix is how these tweets were dumped into this file in this format in the first place. You need to use [`json.dump()`](https://docs.python.org/2/library/json.html#json.dump) or [`json.dumps()`](https://docs.python.org/2/library/json.html#json.dumps) to have proper JSON in the output file.

Then, to read the tweets, if you have a tweet per line, the following should work:

    import json

    with open('../data/full_results.txt', 'r') as fh:
        for line in fh:
            tweet = json.loads(line)
            print(tweet['text'])

If you have a _list of tweets_ dumped to JSON:

    import json

    with open('../data/full_results.txt', 'r') as fh:
        tweets = json.load(fh)
        for tweet in tweets:
            print(tweet['text'])

If you cannot change the way tweets were dumped into the file, you might load the tweets with [`ast.literal_eval()`](https://docs.python.org/2/library/ast.html#ast.literal_eval):

    from ast import literal_eval

    with open('../data/full_results.txt', 'r') as fh:
        for line in fh:
            tweet = literal_eval(line)
            print(tweet['text'])
How to remove first few characters from every 1st line of each json file Question: I am relatively new to python. I am trying to merge all JSON files in a folder into one single JSON file. I can do the merge; however, I would like to remove some characters from the 1st line of every file to make the combined JSON valid.

    # Script to combine all jsons but need to remove the closing , at the end
    import glob
    import re

    # read the whole folder
    read_files = glob.glob("bus_stop_1012/*.json")
    with open("bus_stop_1012/bus_arrival_1012.json", "wb") as outfile:
        # this is the beginning of the combined file
        outfile.write(' ')
        for f in read_files:
            # will append each data file
            with open(f, "rb") as infile:
                outfile.write(infile.read())
            # will have to add , at the end of each element
            outfile.write(',')
        # move back 1 character to remove the last , and end the file
        outfile.seek(-1,1)
        outfile.write(']}')

which generates this single JSON file from an example of 2 json files:

    {"data": [{"time": "2016-03-02 17:45:20 SGT+0800", "result": {
        "BusStopID": "1012",
        "Services": [
            {
                "NextBus": {"EstimatedArrival": "2016-03-02T17:48:21+08:00", "Feature": "WAB", "Latitude": "1.2871405", "Load": "Seats Available", "Longitude": "103.8456715", "VisitNumber": "1"},
                "Operator": "SBST", "OriginatingID": "10589", "ServiceNo": "12", "Status": "In Operation",
                "SubsequentBus": {"EstimatedArrival": "2016-03-02T17:56:02+08:00", "Feature": "WAB", "Latitude": "0", "Load": "Seats Available", "Longitude": "0", "VisitNumber": "1"},
                "SubsequentBus3": {"EstimatedArrival": "2016-03-02T18:06:02+08:00", "Feature": "WAB", "Latitude": "0", "Load": "Seats Available", "Longitude": "0", "VisitNumber": "1"},
                "TerminatingID": "77009"
            }
        ],
        "odata.metadata": "http://datamall2.mytransport.sg/ltaodataservice/$metadata#BusArrival/@Element"
    }},{"data": [{"time": "2016-03-02 17:49:36 SGT+0800", "result": {
        "BusStopID": "1012",
        "Services": [
            {
                "NextBus": {"EstimatedArrival": "2016-03-02T17:48:47+08:00", "Feature": "WAB", "Latitude": "1.2944553333333333", "Load": "Seats Available", "Longitude": "103.85045283333334", "VisitNumber": "1"},
                "Operator": "SBST", "OriginatingID": "10589", "ServiceNo": "12", "Status": "In Operation",
                "SubsequentBus": {"EstimatedArrival": "2016-03-02T17:58:26+08:00", "Feature": "WAB", "Latitude": "1.2821243333333334", "Load": "Seats Available", "Longitude": "103.841401", "VisitNumber": "1"},
                "SubsequentBus3": {"EstimatedArrival": "2016-03-02T18:06:02+08:00", "Feature": "WAB", "Latitude": "0", "Load": "Seats Available", "Longitude": "0", "VisitNumber": "1"},
                "TerminatingID": "77009"
            }
        ],
        "odata.metadata": "http://datamall2.mytransport.sg/ltaodataservice/$metadata#BusArrival/@Element"
    }}]}

I would need the **{"data": [** of each subsequent JSON file to be removed, as it is found in every JSON file.

Answer: You could decode from JSON, extract the elements you want, then write those out as JSON again. If the goal is to produce one large `{"data": [....]}` list, you can get away with writing each element in the list separately if you take care not to write a last comma:

    import glob
    import json

    # read the whole folder
    read_files = glob.glob("bus_stop_1012/*.json")
    with open("bus_stop_1012/bus_arrival_1012.json", "wb") as outfile:
        # this is the beginning of the combined file
        outfile.write('{"data": [\n')
        sep = ''
        for f in read_files:
            # will append each data file
            with open(f) as infile:
                try:
                    for obj in json.load(infile)['data']:
                        outfile.write(sep)
                        json.dump(obj, outfile)
                        sep = ','
                except ValueError:
                    print 'Failed to load {}'.format(f)
        outfile.write(']}')
python read csv file with row and column headers into dictionary with two keys Question: I have a csv file of the following format:

    ,col1,col2,col3
    row1,23,42,77
    row2,25,39,87
    row3,48,67,53
    row4,14,48,66

I need to read this into a dictionary with two keys such that

    dict1['row1']['col2'] = 42
    dict1['row4']['col3'] = 66

If I try to use [csv.DictReader](https://docs.python.org/3/library/csv.html#csv.DictReader) with default options

    with open(filePath, "rb") as theFile:
        reader = csv.DictReader(theFile, delimiter=',')
        for line in reader:
            print line

I get the following output:

    {'': 'row1', 'col2': '42', 'col3': '77', 'col1': '23'}
    {'': 'row2', 'col2': '39', 'col3': '87', 'col1': '25'}
    {'': 'row3', 'col2': '67', 'col3': '53', 'col1': '48'}
    {'': 'row4', 'col2': '48', 'col3': '66', 'col1': '14'}

I'm not sure how to process this output to create the type of dictionary that I'm interested in. For sake of completeness, it would also help if you can address how to write back the dictionary into a csv file with the above format.

Answer: Using the CSV module:

    import csv

    dict1 = {}
    with open("test.csv", "rb") as infile:
        reader = csv.reader(infile)
        headers = next(reader)[1:]
        for row in reader:
            dict1[row[0]] = {key: int(value) for key, value in zip(headers, row[1:])}
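For the write-back half of the question, a sketch using the same `csv` module (it sorts the row and column labels so the output order is deterministic; adjust if the original order matters):

    import csv

    with open("out.csv", "wb") as outfile:
        writer = csv.writer(outfile)
        # recover the column labels from any inner dict
        headers = sorted(next(iter(dict1.values())))
        writer.writerow([''] + headers)          # leading empty cell, as in the input
        for rowname in sorted(dict1):
            writer.writerow([rowname] + [dict1[rowname][h] for h in headers])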
Proxy to mock ec2.describe_regions() (AWS) Question: I'm making a simple proxy with Flask to mock the call describe_regions() of AWS. The Flask server has the following code:

    from __future__ import unicode_literals
    from flask import Flask
    from flask import Response
    from flask import stream_with_context
    # from httpretty import HTTPretty, register_uri
    import httpretty
    import requests
    from flask import request
    import time

    RESPONSE = u"""<DescribeRegionsResponse xmlns="http://ec2.amazonaws.com/doc/2015-10-01/">
        <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
        <regionInfo>
            <item>
                <regionName>us-east-1</regionName>
                <regionEndpoint>ec2.us-east-1.amazonaws.com</regionEndpoint>
            </item>
            <item>
                <regionName>eu-west-1</regionName>
                <regionEndpoint>ec2.eu-west-1amazonaws.com</regionEndpoint>
            </item>
        </regionInfo>
    </DescribeRegionsResponse>"""

    app = Flask(__name__)

    @app.route('/<path:url>', methods=['GET', 'PUT', 'POST', 'DELETE', 'HEAD', 'PATCH', 'OPTIONS', 'CONNECT'])
    def home(url):
        return Response(RESPONSE, mimetype='text/xml')

    if __name__ == '__main__':
        app.run(debug=True)

Then I have the following code to test it. I use Boto3 to call the API for AWS.

    from boto3.session import Session
    import os

    credentials = {
        'aws_access_key_id': 'sadasdasda',
        'aws_secret_access_key': 'dasdasdasd'
    }

    os.environ["HTTP_PROXY"] = 'http://localhost:5000/'
    os.environ["HTTPS_PROXY"] = 'http://localhost:5000/'

    session_boto3 = Session(**credentials)
    ec2 = session_boto3.client('ec2', 'eu-west-1', verify=False)
    regions = ec2.describe_regions()
    print regions

The problem is: the Flask server gets the request, but the response doesn't satisfy Boto3 and I get the following traceback:

    Traceback (most recent call last):
      File "/pruebas_mock/prueba.py", line 82, in <module>
        regions = ec2.describe_regions()
      File "/mock_aws/local/lib/python2.7/site-packages/botocore/client.py", line 228, in _api_call
        return self._make_api_call(operation_name, kwargs)
      File "/mock_aws/local/lib/python2.7/site-packages/botocore/client.py", line 475, in _make_api_call
        operation_model, request_dict)
      File "/mock_aws/local/lib/python2.7/site-packages/botocore/endpoint.py", line 117, in make_request
        return self._send_request(request_dict, operation_model)
      File "/mock_aws/local/lib/python2.7/site-packages/botocore/endpoint.py", line 146, in _send_request
        success_response, exception):
      File "/mock_aws/local/lib/python2.7/site-packages/botocore/endpoint.py", line 219, in _needs_retry
        caught_exception=caught_exception)
      File "/mock_aws/local/lib/python2.7/site-packages/botocore/hooks.py", line 226, in emit
        return self._emit(event_name, kwargs)
      File "/mock_aws/local/lib/python2.7/site-packages/botocore/hooks.py", line 209, in _emit
        response = handler(**kwargs)
      File "/mock_aws/local/lib/python2.7/site-packages/botocore/retryhandler.py", line 183, in __call__
        if self._checker(attempts, response, caught_exception):
      File "/mock_aws/local/lib/python2.7/site-packages/botocore/retryhandler.py", line 250, in __call__
        caught_exception)
      File "/mock_aws/local/lib/python2.7/site-packages/botocore/retryhandler.py", line 273, in _should_retry
        return self._checker(attempt_number, response, caught_exception)
      File "/mock_aws/local/lib/python2.7/site-packages/botocore/retryhandler.py", line 313, in __call__
        caught_exception)
      File "/mock_aws/local/lib/python2.7/site-packages/botocore/retryhandler.py", line 222, in __call__
        return self._check_caught_exception(attempt_number, caught_exception)
      File "/mock_aws/local/lib/python2.7/site-packages/botocore/retryhandler.py", line 355, in _check_caught_exception
        raise caught_exception
    botocore.vendored.requests.exceptions.SSLError: [Errno 1] _ssl.c:510: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol

How should I build the response with Flask? Thanks.

Answer: By default, boto3 (and all of the other AWS SDKs) will connect to services using SSL. Your proxy Flask server does not appear to be using SSL, so you can either use SSL in your proxy or tell boto3 not to use SSL for your proxy server:

    ec2 = session_boto3.client('ec2', 'eu-west-1', use_ssl=False, verify=False)

The `verify` parameter tells boto3 not to try to validate the SSL cert, but it will still try to connect via SSL. The `use_ssl=False` tells it to use plain HTTP to talk to your endpoint.
How can I improve the performance of the below python code Question: I wrote this piece of code shown below. I am having severe performance issues with it. Especially the loop where I loop 50 million times (`for z in range(total):`) seems very slow. Could I modify it to be a bit more efficient? Maybe modify how it is storing the sum of the last 10 values in r1, r2?

    import numpy as np
    import math
    import scipy.stats as sp

    # Define sample size
    sample=4999999
    cutoff=int((sample+1)/100)

    # Define days for x-day VaR
    xdays=10

    # Calculate the whole sample size and extended total sample size
    size=sample*xdays+xdays-1
    total=size+xdays
    cutoff_o=int((size+1)/100)

    # Sample values for kurtosis
    #kurt=[0.0000001,1.0,2.0,3.0,4.0,5.0,6.0,10.0]
    kurt=[6.0]

    # Number of repetitions
    rep=2

    # Define correlation coefficient
    rho=0.5

    # Loop for different iterations
    for x in range(rep):
        uni=sp.uniform.rvs(size=total)
        # Loop for different values of kurtosis
        for y in kurt:
            df=(6.0/y)+4.0
            # Initialize arrays
            t_corr=np.empty(total)
            n_corr=np.empty(total)
            t_corr_2=np.empty(total)
            r1=np.empty(sample)
            r2=np.empty(size)
            r3=np.empty(sample)
            r4=np.empty(size)
            # Define t dist from uniform
            t_dist=sp.t.ppf(uni,df)
            n_dist=sp.norm.ppf(uni)
            # Loop to generate autocorrelated distributions
            for z in range(total):
                if z==0:
                    t_corr[z]=t_dist[z]
                    n_corr[z]=n_dist[z]
                    t_corr_2[z]=sp.t.ppf(sp.norm.cdf(n_corr[z]),df)
                else:
                    t_corr[z]=rho*t_dist[z-1] + math.sqrt((1-rho**2))*t_dist[z]
                    n_corr[z]=rho*n_dist[z-1] + math.sqrt((1-rho**2))*n_dist[z]
                    t_corr_2[z]=sp.t.ppf(sp.norm.cdf(n_corr[z]),df)
                    if z>xdays-1:
                        z_x=int(z/xdays)-1
                        if (z%xdays)==0 and z_x<sample:
                            r1[z_x]= sum(t_corr[z-10:z])
                            r3[z_x]= sum(t_corr_2[z-10:z])
                            r2[z-xdays]= sum(t_corr[z-10:z])
                            r4[z-xdays]= sum(t_corr_2[z-10:z])
            print (np.partition(r1, cutoff-1)[cutoff-1], np.partition(r3, cutoff-1)[cutoff-1],
                   np.partition(r2, cutoff_o-1)[cutoff_o-1], np.partition(r4, cutoff_o-1)[cutoff_o-1])
            print ()

Answer: Some suggestions:

## Unnecessary ifs

First, you could remove your `if` statements from your loop. Checking `z == 0` millions of times seems a bit unnecessary when you, the programmer, _know_ that `z` is equal to zero on the first loop. The same goes for `if z>xdays-1`, which only starts holding at `z == xdays`:

    t_corr[0]=t_dist[0]
    n_corr[0]=n_dist[0]
    t_corr_2[0]=sp.t.ppf(sp.norm.cdf(n_corr[0]),df)

    for z in range(1, xdays):
        t_corr[z]=rho*t_dist[z-1] + math.sqrt((1-rho**2))*t_dist[z]
        n_corr[z]=rho*n_dist[z-1] + math.sqrt((1-rho**2))*n_dist[z]
        t_corr_2[z]=sp.t.ppf(sp.norm.cdf(n_corr[z]),df)

    for z in range(xdays, total):
        t_corr[z]=rho*t_dist[z-1] + math.sqrt((1-rho**2))*t_dist[z]
        n_corr[z]=rho*n_dist[z-1] + math.sqrt((1-rho**2))*n_dist[z]
        t_corr_2[z]=sp.t.ppf(sp.norm.cdf(n_corr[z]),df)
        z_x=int(z/xdays)-1
        if (z%xdays)==0 and z_x<sample:
            r1[z_x]= sum(t_corr[z-10:z])
            r3[z_x]= sum(t_corr_2[z-10:z])
            r2[z-xdays]= sum(t_corr[z-10:z])
            r4[z-xdays]= sum(t_corr_2[z-10:z])

Please double check this; I just threw it out :)

## Compile your code!

A cheap/hack fix that could actually provide some serious benefit! You could try compiling your python code into a binary, using Cython for example. I actually tested this with a contrived but not dissimilar example to yours that I hope will provide you enough information to start with. Suppose I have the following python script:

    import math

    for j in range(1000):
        for i in range(1000):
            a = math.sqrt(i) * math.sqrt(j)

Running it with `python3 fast.py` takes consistently .4s of real time on my Ubuntu VM. Running the following:

    $ cython3 --embed -o fast.c fast.py
    $ gcc -I /usr/include/python3.4m/ -o fast fast.c -lpython3.4m

produces a `.c` file from my python code and automatically compiles the binary `fast` from it. Running the executable now gives me an average real time of .14 seconds - a huge improvement!

## Less list slicing (EDIT - not going to help, this is NumPy slicing not list slicing!)

~~Another problem could be down to your list slicing. Remember that slice notation involves creating a new list each time, meaning you're creating ~200,000,000 new lists with your four slices. Now I'm not certain this will be faster, but you could achieve the same behavior without copying, e.g.:

    sum(t_corr[z-10:z])

could be replaced with

    sum(t_coor[i] for i in range(z, 10))

Again, fix this to be what you actually want; this is just a concept piece.~~

Let me know if that helps at all!
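Since `t_corr[z]` depends only on `t_dist` (not on earlier `t_corr` values), the whole inner loop can also be vectorized with NumPy — a sketch of the idea, worth verifying against the loop version on a small `total` before trusting it:

    import numpy as np
    import scipy.stats as sp

    c = np.sqrt(1 - rho**2)

    # AR(1)-style mixing done in one shot instead of 50M iterations
    t_corr = np.empty(total)
    t_corr[0] = t_dist[0]
    t_corr[1:] = rho * t_dist[:-1] + c * t_dist[1:]

    n_corr = np.empty(total)
    n_corr[0] = n_dist[0]
    n_corr[1:] = rho * n_dist[:-1] + c * n_dist[1:]

    # one vectorized ppf/cdf call instead of one per element
    t_corr_2 = sp.t.ppf(sp.norm.cdf(n_corr), df)

    # rolling 10-element sums via a cumulative sum:
    # rolling[i] == t_corr[i:i+10].sum(), so sum(t_corr[z-10:z]) == rolling[z-10]
    cs = np.concatenate(([0.0], np.cumsum(t_corr)))
    rolling = cs[10:] - cs[:-10]

The `r1`/`r2`/`r3`/`r4` arrays can then be filled by indexing `rolling` (and the analogous array for `t_corr_2`) instead of re-summing slices.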
how to remove headers/meta data of JSON using Python Question: I'm learning Python-JSON. I've been trying to pull data from the Quandl API. I was successful in loading the data, but when I've tried to convert it to a Python dict, it's throwing a ValueError!

    ValueError                                Traceback (most recent call last)
    <ipython-input-10-0b58998505ee> in <module>()
    ----> 1 data=dict(data)

    ValueError: dictionary update sequence element #0 has length 1; 2 is required

Below is my code:

    import json,urllib2

    url = "https://www.quandl.com/api/v3/datasets/NASDAQOMX/NQASIA0001LMGBPT.json"
    loaded = urllib2.urlopen(url).read()
    data = json.loads(loaded)
    type(data)   # shows string
    data=dict(data)   # here i'm getting value error

This is how the data looks:

    {"dataset":{"id":12835494,"dataset_code":"NQASIA0001LMGBPT","database_code":"NASDAQOMX","name":"NASDAQ Asia Oil \u0026 Gas Large Mid Cap GBP TR Index (NQASIA0001LMGBPT)","description":" \u003cp\u003eThe comprehensive NASDAQ Global Index Family covers international securities segmented by geography, sector, and size. NASDAQ OMX's transparent and rules-based selection method results in a complete representation of the global investable equity marketplace. The indexes cover 45 individual countries within Developed and Emerging Markets, and facilitate a multitude of tracking, trading, and investing opportunities.\u003c/p\u003e\n \u003cp\u003e\u003cb\u003eComponents:\u003c/b\u003e \u003ca href=https://indexes.nasdaqomx.com/Index/Weighting/NQASIA0001LMGBPT\u003e40\u003c/a\u003e\u003c/p\u003e\n \u003cp\u003e\u003cb\u003eCurrency:\u003c/b\u003e GBP\n \u003cp\u003e\u003cb\u003eEntitlements:\u003c/b\u003e \u003ca href=https://indexes.nasdaqomx.com/Index/Overview/NQASIA0001LMGBP\u003eNASDAQ Global Index Family\u003c/a\u003e\u003c/p\u003e\n \u003cp\u003e\u003cb\u003eTotal Market Value:\u003c/b\u003e is the sum of the market value of all companies in the index.\u003c/p\u003e\n \u003cp\u003e\u003cb\u003eDividend Market Value:\u003c/b\u003e is the sum of the market value of all dividends generated by companies in the index. 
\u003c/p\u003e\n \u003cp\u003e\u003cb\u003eTerms of Use:\u003c/b\u003e This data cannot be republished or used as the basis of a financial product without the permission of Nasdaq OMX.\u003c/p\u003e\n\n","refreshed_at":"2016-03-05T02:26:29.308Z","newest_available_date":"2016-03-04","oldest_available_date":"2001-03-30","column_names":["Trade Date","Index Value","High","Low","Total Market Value","Dividend Market Value"],"frequency":"daily","type":"Time Series","premium":false,"limit":null,"transform":null,"column_index":null,"start_date":"2001-03-30","end_date":"2016-03-04","data":[["2016-03-04",901.68,901.68,901.68,120990409547.0,10184040.0],["2016-03-03",888.22,888.22,888.22,119195278884.0,74919059.0],["2016-03-02",876.66,876.66,876.66,117717482960.0,0.0],["2016-03-01",861.69,861.69,861.69,115706487736.0,31420802.0],["2016-02-29",840.1,840.1,840.1,112838933060.0,0.0],["2016-02-26",856.96,856.96,856.96,115103827172.0,0.0],["2016-02-25",836.8,836.8,836.8,112395722181.0,43584397.0],["2016-02-24",846.48,846.48,846.48,113739936161.0,29138803.0],["2016-02-23",846.58,846.58,846.58,113782545450.0,0.0],["2016-02-22",839.75,839.75,839.75,112864607315.0,0.0],["2016-02-19",833.0,833.0,833.0,111957089747.0,0.0],["2016-02-18",832.63,832.63,832.63,111907945844.0,0.0],["2016-02-17",808.34,808.34,808.34,108642319107.0,0.0],["2016-02-16",821.18,821.18,821.18,110368331892.0,7742456.0],["2016-02-15",801.89,801.89,801.89,107783839163.0,0.0],["2016-02-12",770.54,770.54,770.54,103569144401.0,0.0],["2016-02-

I think I'm missing something here — do I have to do anything with the URL, like passing parameters? Or is there any step I have to include before `data=dict(data)`? I checked StackOverflow before posting this question but wasn't successful; I did google it, and every site I clicked on is using the Quandl package (link below):

    #Quandl(dataset)

Any description/tutorial would help me understand this better. Thanks for your time. PS: I want to do this without using pandas or any other extra libraries. Thank you.

[Retrieving data from Quandl with Python](http://stackoverflow.com/questions/30550177/retrieving-data-from-quandl-with-python)

Answer: Use `requests`:

    import json, requests

    url = "https://www.quandl.com/api/v3/datasets/NASDAQOMX/NQASIA0001LMGBPT.json"
    data = requests.get(url).json()
    print(data)
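Once `data` is a dict, the interesting pieces of this particular response sit under the `dataset` key; a sketch of pulling the rows out (the key names are taken from the payload above):

    import requests

    url = "https://www.quandl.com/api/v3/datasets/NASDAQOMX/NQASIA0001LMGBPT.json"
    data = requests.get(url).json()

    dataset = data['dataset']
    columns = dataset['column_names']   # ["Trade Date", "Index Value", "High", ...]
    for row in dataset['data']:
        # pair each value with its column header
        print(dict(zip(columns, row)))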
Drawing half a square with Python turtle Question: Good day, I'm trying to write this python code for this two part problem and here is what I have so far. Would somebody be able to help me finish it and/or correct it?

![The Question](http://i.stack.imgur.com/uf1VT.png)

Here is my attempt:

    #Question 11a
    Print("Question 11a")
    import turtle
    s = turtle.Screen()
    t = turtle.Turtle()

    def halfSquare(t, length):
        for i in range(2)
            t.down()
            t.forward(length)
            t.right(90)

    #Question 11b
    print("Question 11b")
    def halfSqaures(t, initial, increment, reps):
        halfSquare(length):

Please help!!

Answer: I'll give you a bit of help on the first part, but I won't write the code because this is _your_ homework, not mine. In your `halfSquare` function you have a `SyntaxError` (you're missing the `:` on the end of the `for` statement) and an `IndentationError` (the code inside the `for` loop). Also, `t.right(90)` should be `t.left(90)`. BTW, you can put `turtle.mainloop()` at the end of your program to wait for the user to close the window.

* * *

Ok. I see you're having some difficulties, so I'll post a fully-working program for you. But _please_ try to understand how it works.

    import turtle

    print("Question 11a")
    t = turtle.Turtle()

    def halfSquare(t, length):
        t.down()
        for i in (0, 1):
            t.forward(length)
            t.left(90)

    #halfSquare(t, 100)

    print("Question 11b")
    def halfSquares(t, initial, increment, reps):
        length = initial
        for i in range(reps):
            halfSquare(t, length)
            length += increment

    halfSquares(t, 20, 20, 10)
    turtle.mainloop()
Module inside module error, python Question: I created a file named StringPlay.py and imported it in a file ScanTheFile.py, then called a function from ScanTheFile.py in another file named Controller.py. But it raises an error that StringPlay.py does not exist.

In ScanTheFile.py:

    import StringPlay as SP

    def TNews(FileFirstLine):
        FileLine = SP.RemoveSpases(FileFirstLine)
        if True:
            Statement
        return(FileLine)

In Controller.py:

    from HelpFiles import StringPlay as SP
    from HelpFiles import ScanTheFile as StF

    File1 = open("TextFiles\File1.txt")
    print(Stf.TNews(File1.readline()))

When this is executed it raises a message stating that in ScanTheFile.py in line 1:

    File "C:\Users\***\My Documents\Python\HelpFiles\ScanTheFile.py", line 1, in <module>
        import StringPlay as SP
    ImportError: No module named 'StringPlay'

But when ScanTheFile.py is executed directly there's no problem. This is my directory structure:

    My Documents\Python
    My Documents\Python\Controller.py
    My Documents\Python\HelpFiles
    My Documents\Python\HelpFiles\ScanTheFile.py
    My Documents\Python\HelpFiles\StringPlay.py
    My Documents\Python\TextFiles
    My Documents\Python\TextFiles\File1.txt
    My Documents\Python\TextFiles\File2.txt

Answer: You are having this problem because when you run `Controller.py` you are running it in, say, `C:\...\My Documents`. When you call `from HelpFiles import ScanTheFile as StF` you are telling python to look in `C:\...\My Documents\HelpFiles` for a file named `ScanTheFile.py`. It finds this file and runs it _in_ `C:\...\My Documents`, so when it hits the line in `ScanTheFile.py`:

    import StringPlay as SP

it looks in `C:\...\My Documents` for `StringPlay.py`, which from what I can gather exists in `C:\...\My Documents\HelpFiles`. Since you are already importing `StringPlay.py` explicitly in `Controller.py`, I would say just change `ScanTheFile.py` to only import `StringPlay.py` when it is the file being run directly, and not imported:

`ScanTheFile.py`

    if __name__ == "__main__":
        import StringPlay as SP

    def TNews(FileFirstLine):
        FileLine = SP.RemoveSpases(FileFirstLine)
        if True:
            Statement
        return(FileLine)
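An alternative that sidesteps the guard entirely is to make `HelpFiles` a proper package — add an empty `__init__.py` next to the two modules — and import by package path inside `ScanTheFile.py`. A sketch, assuming scripts are always launched from `My Documents\Python` so that folder is on `sys.path`:

    # HelpFiles/ScanTheFile.py
    from HelpFiles import StringPlay as SP

    def TNews(FileFirstLine):
        # SP is resolved through the package, so this works whether
        # ScanTheFile is imported from Controller.py or run as a module
        return SP.RemoveSpases(FileFirstLine)

With that layout, `Controller.py` keeps its `from HelpFiles import ...` lines unchanged.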
Functional start and stop button in a GUI using pyqt or pyside for real time data acquisition using pyqtgraph Question: I am implementing my program using the scrollingplots example provided by pyqtgraph here <https://github.com/skycaptain/gazetrack/blob/master/gui/pyqtgraph/examples/scrollingPlots.py>

    import pyqtgraph as pg
    from pyqtgraph.Qt import QtCore, QtGui
    import numpy as np

    win = pg.GraphicsWindow()
    win.setWindowTitle('pyqtgraph example: Scrolling Plots')

    win.nextRow()
    p3 = win.addPlot()
    p4 = win.addPlot()
    # Use automatic downsampling and clipping to reduce the drawing load
    p3.setDownsampling(mode='peak')
    p4.setDownsampling(mode='peak')
    p3.setClipToView(True)
    p4.setClipToView(True)
    p3.setRange(xRange=[-100, 0])
    p3.setLimits(xMax=0)
    curve3 = p3.plot()
    curve4 = p4.plot()

    data3 = np.empty(100)
    ptr3 = 0

    def update2():
        global data3, ptr3
        data3[ptr3] = np.random.normal()
        ptr3 += 1
        if ptr3 >= data3.shape[0]:
            tmp = data3
            data3 = np.empty(data3.shape[0] * 2)
            data3[:tmp.shape[0]] = tmp
        curve3.setData(data3[:ptr3])
        curve3.setPos(-ptr3, 0)
        curve4.setData(data3[:ptr3])

    # update all plots
    timer = pg.QtCore.QTimer()
    timer.timeout.connect(update2)
    timer.start(50)

    ## Start Qt event loop unless running in interactive mode or using pyside.
    if __name__ == '__main__':
        import sys
        if (sys.flags.interactive != 1) or not hasattr(QtCore, 'PYQT_VERSION'):
            QtGui.QApplication.instance().exec_()

At first I wanted to use Ctrl+C as a signal to stop the continuous data plotting and save the data obtained into a file. However, the only way to quit the program is to close the graph window; executing Ctrl+C in the terminal does not do anything. Therefore, I would like to implement a button to start and to stop (and save the data) in the program.

As a newbie in Python and object-oriented programming, I looked for examples online. I have found examples specifically for button implementation in a GUI:

* stackoverflow.com/questions/8762870/how-to-implement-a-simple-button-in-pyqt
* groups.google.com/forum/#!topic/pyqtgraph/bxvZHtb1KKg
* www.youtube.com/watch?v=z33vwdHrAFM and GUI related tutorials by the same youtuber

None of these examples have helped me achieve what I want, as I do not know how to combine them with the scrollingplots example. From the Qt crash course webpage (pyqtgraph.org/documentation/qtcrashcourse.html):

    from PyQt4 import QtGui  # (the example applies equally well to PySide)
    import pyqtgraph as pg

    ## Always start by initializing Qt (only once per application)
    app = QtGui.QApplication([])

    ## Define a top-level widget to hold everything
    w = QtGui.QWidget()

    ## Create some widgets to be placed inside
    btn = QtGui.QPushButton('press me')
    text = QtGui.QLineEdit('enter text')
    listw = QtGui.QListWidget()
    plot = pg.PlotWidget()

    ## Create a grid layout to manage the widgets size and position
    layout = QtGui.QGridLayout()
    w.setLayout(layout)

    ## Add widgets to the layout in their proper positions
    layout.addWidget(btn, 0, 0)    # button goes in upper-left
    layout.addWidget(text, 1, 0)   # text edit goes in middle-left
    layout.addWidget(listw, 2, 0)  # list widget goes in bottom-left
    layout.addWidget(plot, 0, 1, 3, 1)  # plot goes on right side, spanning 3 rows

    ## Display the widget as a new window
    w.show()

    ## Start the Qt event loop
    app.exec_()

As there is an `app.exec_()` at the end of most of the button example codes, and there is also an update loop in the scrollingplots example itself, I am confused as to how they can run at the same time. I have read somewhere that for such a continuously running process driven from a GUI, I should consider using a timer or multithreading. Nevertheless, I presently do not have any knowledge of threading. I have even tried Tkinter, as I found a guide on how to use Tkinter with matplotlib -> pythonprogramming.net/how-to-embed-matplotlib-graph-tkinter-gui/

Looking forward to receiving any advice regarding this problem.

Answer:

    from PyQt4 import QtCore, QtGui
    import pyqtgraph as pg
    import sys

    class MainForm(QtGui.QMainWindow):
        def __init__(self):
            super(MainForm, self).__init__()
            self.playTimer = QtCore.QTimer()
            self.playTimer.setInterval(500)
            self.playTimer.timeout.connect(self.playTick)

            self.toolbar = self.addToolBar("Play")
            self.playScansAction = QtGui.QAction(QtGui.QIcon("control_play_blue.png"), "play scans", self)
            self.playScansAction.triggered.connect(self.playScansPressed)
            self.playScansAction.setCheckable(True)
            self.toolbar.addAction(self.playScansAction)

        def playScansPressed(self):
            if self.playScansAction.isChecked():
                self.playTimer.start()
            else:
                self.playTimer.stop()

        def playTick(self):
            pass

    def main():
        app = QtGui.QApplication(sys.argv)
        form = MainForm()
        form.setWindowTitle("Scan Log Display")
        form.show()
        app.exec_()

    if __name__ == "__main__":
        main()
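Applied to the scrolling-plot example, the same pattern looks roughly like this — a sketch combining the question's update logic with a stop-and-save button (the save step is only illustrated with `numpy.savetxt`; the file name is a placeholder):

    import numpy as np
    import pyqtgraph as pg
    from pyqtgraph.Qt import QtCore, QtGui

    app = QtGui.QApplication([])

    w = QtGui.QWidget()
    layout = QtGui.QVBoxLayout(w)
    btn = QtGui.QPushButton('Stop and save')
    plot = pg.PlotWidget()
    layout.addWidget(btn)
    layout.addWidget(plot)

    curve = plot.plot()
    data = np.empty(100)
    ptr = 0

    def update():
        global data, ptr
        data[ptr] = np.random.normal()
        ptr += 1
        if ptr >= data.shape[0]:
            data = np.resize(data, data.shape[0] * 2)  # grow the buffer
        curve.setData(data[:ptr])

    timer = QtCore.QTimer()
    timer.timeout.connect(update)
    timer.start(50)

    def stop_and_save():
        timer.stop()                       # the plot freezes; no more samples
        np.savetxt('capture.txt', data[:ptr])

    btn.clicked.connect(stop_and_save)

    w.show()
    app.exec_()

The key point is that there is no explicit "update loop": `app.exec_()` runs the Qt event loop, the `QTimer` fires `update()` from inside it, and stopping acquisition is just `timer.stop()` from the button's slot.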
Python code to click on anchor tag Question: I am writing a python script to automate a web page. I need to click on the play button to play the recording, but I am not able to do so through the code.

**Inspect element gives me this - outer HTML of 'play':**

    <div class="play">
        <a id="sm_1855464769" class="sm2_button" href="#"> </a>
    </div>

**Inspect element gives me this - XPath of 'play':**

    //*[@id="recording_1855464769"]/div/div/div[8]

**The python code I wrote is:**

    element = WebDriverWait(self.driver, 15).until(EC.presence_of_element_located(EC.find_element_by_xpath("//*[@id='recording_1855464769']/div/div/div[8]")))
    element.click()

**The error message I get in the terminal:**

    element = WebDriverWait(self.driver, 15).until(EC.presence_of_element_located(EC.find_element_by_xpath("//*[@id='recording_1855464769']/div/div/div[8]")))
    AttributeError: 'module' object has no attribute 'find_element_by_xpath'

I need to click on the anchor tag in order to play the audio. How can I do that? Please help.

Answer: You need to use `By`. `expected_conditions` takes a locator tuple, not an element lookup, so replace `EC.find_element_by_xpath(...)` with `(By.XPATH, ...)`:

    from selenium.webdriver.common.by import By

    element = WebDriverWait(self.driver, 15).until(
        EC.presence_of_element_located((By.XPATH, "//*[@id='recording_1855464769']/div/div/div[8]")))
    element.click()
My Python turtle doesn't work when ran in python shell Question: I am using the Python turtle module and have created the code below:

    import turtle

    def draw_square(some_turtle):
        some_turtle.forward(100)
        some_turtle.right(90)
        some_turtle.forward(6)
        some_turtle.right(90)
        some_turtle.forward(100)
        some_turtle.left(90)
        some_turtle.forward(6)
        some_turtle.left(90)

    def draw_art():
        window = turtle.Screen()
        window.bgcolor("green")
        brad = turtle.Turtle()
        brad.shape("turtle")
        brad.color("red")
        brad.speed(50)
        for i in range(1,10):
            draw_square(brad)
        window.exitonclick()

    draw_art()

I want this code to draw a square, but it doesn't. Can you explain why this doesn't work? Problem: the shell stops responding and I can't do anything more; when I click, it doesn't move at all.

Answer: There are two syntax errors:

`bard.color("red")` should be `brad.color("red")`

`window.extionclick()` should be `window.exitonclick()`

Correcting those draws a rectangle made of lines going back and forth, if this is what you were trying to do.
Tkinter error UnboundLocalError: local variable 'flag' referenced before assignment Question: I am trying to build a quiz using tkinter and python 2. I am having trouble with maintaining the score. The validate button can be pressed multiple times, which increments the global score counter each time.

1) I have one flag variable which will allow answering only once and increment the global counter only once. Is it possible to disable the validate button for a correctly answered question? But I'm getting an error with it:

    Exception in Tkinter callback
    Traceback (most recent call last):
      File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 1489, in __call__
        return self.func(*args)
      File "testtest.py", line 22, in validate
        if(q=="4" and flag==0):
    UnboundLocalError: local variable 'flag' referenced before assignment

**I read other answers for this problem**, but if I pass arguments to the validate function it gives an error.

2) Is there any way to implement the quiz so it will display the 1st question and, if the answer is correct, a button event will direct to the 2nd question? Unlike my code, which displays all the questions.

My code is:

    import Tkinter as tk
    import tkMessageBox

    count=0

    class Question(tk.Frame):
        def __init__(self, *args, **kwargs):
            tk.Frame.__init__(self, *args, **kwargs)

        def show(self):
            self.lift()

    class Question1(Question):
        def __init__(self, *args, **kwargs):
            Question.__init__(self, *args, **kwargs)
            label = tk.Label(self, text="Question 1: What is 2^2")
            label.pack(side="top", fill="both", expand=True)
            entry = tk.Entry(self)
            entry.pack(side="top", fill="both", expand=True)

            def validate( event ):
                q=entry.get()
                if(q=="4" and flag==0):
                    tkMessageBox.showinfo('Correct Answer', 'Proceed to next question')
                    global count
                    count=count+1
                    fp=open("scores.txt","a")
                    fp.write("\nScore after Question 1 :%d "%(count))
                    flag=1
                else:
                    tkMessageBox.showinfo('Wrong Answer', 'Try Again')
                    #flag=0 to avoid alternate event

            button_1 = tk.Button(self, text="Validate")
            flag=0
            button_1.bind("<Button-1>", validate)
            button_1.pack(side="top", fill="both", expand=True)

    class Question2(Question):
        def __init__(self, *args, **kwargs):
            Question.__init__(self, *args, **kwargs)
            label = tk.Label(self, text="Question 1: What is 2^3")
            label.pack(side="top", fill="both", expand=True)
            entry = tk.Entry(self)
            entry.pack(side="top", fill="both", expand=True)

            def validate( event ):
                q=entry.get()
                if(q=="8" and flag==0):
                    tkMessageBox.showinfo('Correct Answer', 'Proceed to next question')
                    global count
                    count=0
                    count=count+1
                    fp=open("scores.txt","a")
                    fp.write("\nScore after Question 2 :%d "%(count))
                    flag=1
                else:
                    tkMessageBox.showinfo('Wrong Answer', 'Try Again')

            button_1 = tk.Button(self, text="Validate")
            flag=0
            button_1.bind("<Button-1>", validate)
            button_1.pack(side="top", fill="both", expand=True)

    class MainView(tk.Frame):
        def __init__(self, *args, **kwargs):
            tk.Frame.__init__(self, *args, **kwargs)
            p1 = Question1(self)
            p2 = Question2(self)
            buttonframe = tk.Frame(self)
            container = tk.Frame(self)
            buttonframe.pack(side="top", fill="x", expand=False)
            container.pack(side="top", fill="both", expand=True)
            p1.place(in_=container, x=0, y=0, relwidth=1, relheight=1)
            p2.place(in_=container, x=0, y=0, relwidth=1, relheight=1)
            b1 = tk.Button(buttonframe, text="Question 1", command=p1.lift)
            b2 = tk.Button(buttonframe, text="Question 2", command=p2.lift)
            b1.pack(side="left")
            b2.pack(side="left")
            p1.show()

    if __name__ == "__main__":
        root = tk.Tk()
        main = MainView(root)
        main.pack(side="top", fill="both", expand=True)
        root.wm_geometry("1080x720")
        root.mainloop()

Answer: If you assign to a name in a function, it becomes a local name, even if the function is nested in another function's scope that provides a definition. In Python 3, you could explicitly declare `nonlocal flag` at the top of your `validate` function to make it operate on `flag` from the nested scope, but in Python 2, you can't; Python 2 only has `global` for this purpose, which would skip the nested scope and look for `flag` at the top level scope of the module; usually a bad idea for cases like this. One approach is to use a `list` so you're assigning to an index, rather than overwriting a local name.

    class Question1(Question):
        def __init__(self, *args, **kwargs):
            Question.__init__(self, *args, **kwargs)
            label = tk.Label(self, text="Question 1: What is 2^2")
            label.pack(side="top", fill="both", expand=True)
            entry = tk.Entry(self)
            entry.pack(side="top", fill="both", expand=True)

            flag = [0]  # Use list over plain int so you can index

            def validate( event ):
                q=entry.get()
                if q == "4" and not flag[0]:  # Test index of list
                    tkMessageBox.showinfo('Correct Answer', 'Proceed to next question')
                    global count
                    count=count+1
                    with open("scores.txt","a") as fp:
                        fp.write("\nScore after Question 1 :%d "%(count))
                    flag[0] = 1  # Assign to index of list
                else:
                    tkMessageBox.showinfo('Wrong Answer', 'Try Again')
                    flag[0] = 0  # Assign to index of list

            button_1 = tk.Button(self, text="Validate")
            button_1.bind("<Button-1>", validate)
            button_1.pack(side="top", fill="both", expand=True)

You'd need to make similar edits to `Question2` as well. Note: I also changed your code to use a `with` statement for the file manipulation, so the file is closed and the data is written to disk at a predictable time.
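As for the other half of the question — disabling the button once the answer is correct — Tkinter buttons support a `state` option, so a sketch inside `validate` could be:

    if q == "4":
        tkMessageBox.showinfo('Correct Answer', 'Proceed to next question')
        button_1.unbind("<Button-1>")        # stop handling further clicks
        button_1.config(state=tk.DISABLED)   # grey the button out

Note the `unbind`: a raw `<Button-1>` binding can still fire on a disabled widget (disabling only suppresses the button's `command` callback), so removing the binding is the safer belt-and-braces approach here. Once the button can't be clicked again, the `flag` bookkeeping becomes unnecessary.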
Detect holes, ends and beginnings of a line using openCV? Question: I'm trying to create a Python script that detects holes, ends and beginnings of a line. I thought that openCV would be great to achieve this. So for example everything starts with this image:

[![enter image description here](http://i.stack.imgur.com/sojmN.png)](http://i.stack.imgur.com/sojmN.png)

Finally, what I want to achieve is this:

[![enter image description here](http://i.stack.imgur.com/abiRm.png)](http://i.stack.imgur.com/abiRm.png)

So I began by importing the image into Python and converting it to grayscale. Then I came to the idea of tracking the holes by using the `goodFeaturesToTrack()` method. It's normally used to find corners in an image. However, that didn't work so well, because after that the script knows the points, but it doesn't know if a point is from a hole or if it's the beginning or end of the line. Another problem is that if I use another image, this method detects more points than just the holes, beginnings and ends of the line. Here is my full code to understand my problem a bit better:

    import cv2
    import numpy as np
    import matplotlib.pyplot as plt

    # loads in img
    img = cv2.imread('png1.png', cv2.IMREAD_COLOR)
    img_gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)

    corners = cv2.goodFeaturesToTrack(img_gray, 200, 0.05, 10)

    for corner in corners:
        x, y = corner.ravel()
        cv2.circle(img, (x,y), 7, (255,255,0), -1)

    cv2.imshow('img',img)

I have no idea how to get around this problem.

Answer: I added a function getLandmarks(); it returns all the holes. Here I assume a point counts as a hole if there are 2 corners within a radius of 30 pixels of each other:

    if abs(x1-x2)<=30 and abs(y1-y2)<=30:

This line defines the range.

    import cv2
    import numpy as np

    def getLandmarks(corners):
        holes=[]
        for i in range(0,len(corners)):
            for j in range(i+1,len(corners)):
                x1,y1=corners[i].ravel()
                x2,y2=corners[j].ravel()
                if abs(x1-x2)<=30 and abs(y1-y2)<=30:
                    holes.append((int((x1+x2)/2),int((y1+y2)/2)))
        return holes

    # loads in img
    img = cv2.imread('img.png', cv2.IMREAD_COLOR)
    img_gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)

    corners = cv2.goodFeaturesToTrack(img_gray, 200, 0.05, 10)
    holes=getLandmarks(corners)
    print len(holes)

    for corner in holes:
        cv2.circle(img, (corner), 7, (255,255,0), -1)

    cv2.imshow('img',img)
    cv2.waitKey(0)

# output

[![enter image description here](http://i.stack.imgur.com/5MI5Z.png)](http://i.stack.imgur.com/5MI5Z.png)

Now for the start and end: you can easily sort the corners in either X (if the path runs left to right) or Y (if the path runs top to down), and the min and max will be your start and end!
How to search a key in an xml and replace its value with Python? Question: I have thousands of these app name lines in an xml file:

    <app name="app-sq-461-author-core-0">

I want to do the following:

1. check through all lines that 'app name' exists
2. if so, see if the value matches "test"
3. if so, replace the value with "delete me"

Currently, I have:

    bare = importedxml.find('app name')
    testvalue = bare.split("=")[1]
    if testvalue = "test" in importedxml:
        testvalue = "delete me"

What is the best way to do this? I'm encountering many problems.

Answer: Have you tried [BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/bs4/doc/)? Something along these lines:

    import bs4

    xml = "Your bare XML String"
    soup = bs4.BeautifulSoup(xml)
    test_apps = soup.findChildren("app", {"name": "test"})
    for app in test_apps:
        app["name"] = "delete me"

    with open("filename.xml", "w") as f:
        f.write(str(soup))

But, as you mentioned in your comment below that you do not have bs4, the only thing I can think of is using a regex replace:

    import re

    xml = "your XML String"
    pattern = re.compile('app name="test"')
    replaced = pattern.sub('app name="delete me"', xml)
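Since bs4 isn't available, the same rename can also be done with the standard library's `xml.etree.ElementTree` — a sketch, assuming `<app>` elements shaped like the sample line above:

    import xml.etree.ElementTree as ET

    tree = ET.parse("filename.xml")
    for app in tree.iter("app"):          # visit every <app> element
        if app.get("name") == "test":
            app.set("name", "delete me")  # rewrite the attribute value
    tree.write("filename.xml")

Unlike the regex approach, this parses the document properly, so attribute ordering, spacing, or quoting variations in the thousands of lines won't cause missed matches.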
Python Insert Image into the middle of an existing PowerPoint Question: I have an existing PowerPoint presentation with 20 slides. This presentation serves as a template, with each slide having a different background. I want to take this (existing) PowerPoint presentation, insert an image in slide number 4 (doing nothing with the first 3) and save it as a new PowerPoint presentation. This is what I have up until now. This code loads an existing presentation and saves it as a new one. Now I just need to know how to use this to insert an image into slide number 4 as described above. Note: I am using normal Python.

    from pptx import Presentation

    def open_PowerPoint_Presentation(oldFileName, newFileName):
        prs = Presentation(oldFileName)
        #Here I guess I need to type something to complete the task.
        prs.save(newFileName)

    open_PowerPoint_Presentation('Template.pptx', 'NewTemplate.pptx')

Answer: I'm not really familiar with this module, but I looked at their [quickstart](http://python-pptx.readthedocs.org/en/latest/user/quickstart.html):

    from pptx.util import Inches
    from pptx import Presentation

    def open_PowerPoint_Presentation(oldFileName, newFileName, img, left, top):
        prs = Presentation(oldFileName)
        slide = prs.slides[3]
        pic = slide.shapes.add_picture(img, left, top)
        prs.save(newFileName)

    open_PowerPoint_Presentation('Template.pptx', 'NewTemplate.pptx',
                                 'mypic.png', Inches(1), Inches(1))
Creating a Pandas Series with a period in the name Question: I ran the following Python code, which creates a Pandas DataFrame with two Series (`a` and `b`), and then attempts to create two new Series (`c` and `d`):

    import pandas as pd
    df = pd.DataFrame({'a':[1, 2, 3], 'b':[4, 5, 6]})
    df['c'] = df.a + df.b
    df.d = df.a + df.b

My understanding is that if a Pandas Series is part of a DataFrame, and the Series name does not have any spaces (and does not collide with an existing attribute or method), the Series can be accessed as an attribute of the DataFrame. As such, I expected that **line 3** would work (since that's how you create a new Pandas Series), and I expected that **line 4** would fail (since the `d` attribute does not exist for the DataFrame until after you execute that line of code). To my surprise, line 4 did not result in an error. Instead, the DataFrame now contains three Series:

    >>> df
       a  b  c
    0  1  4  5
    1  2  5  7
    2  3  6  9

And there is a new object, `df.d`, which is a Pandas Series:

    >>> df.d
    0    5
    1    7
    2    9
    dtype: int64
    >>> type(df.d)
    pandas.core.series.Series

**My questions are as follows:**

* Why did line 4 not result in an error?
* Is `df.d` now a "normal" Pandas Series with all of the regular Series functionality?
* Is `df.d` in any way "connected" to the `df` DataFrame, or is it a completely independent object?

My motivation in asking this question is simply that I want to better understand Pandas, and not because there is a particular use case for line 4. My Python version is 2.7.11, and my Pandas version is 0.17.1.

Answer: When doing assignment, you need to use bracket notation, e.g. `df['d'] = ...`

`d` is now a property of the dataframe `df`. As with any object, you can assign properties to it. That is why it did not generate an error. It just didn't behave as you expected...

    df.some_property = 'What?'

    >>> df.some_property
    'What?'

This is a common area of misunderstanding for beginners to Pandas. _Always_ use bracket notation for assignment. The dot notation is for convenience when referencing the dataframe/series. To be safe, you could always use bracket notation. And yes, `df.d` per your example is a normal Series that is now an unexpected property of the dataframe. This Series is its own object, connected only by the reference you created when you assigned it to `df`.
Get all contents between the result tags of a SOAP response in Python Question: I have this SOAP response:

    <?xml version="1.0" encoding="utf-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
    <soap:Body>
    <GetCurrencyCodeByCurrencyNameResponse xmlns="http://www.webserviceX.NET">
    <GetCurrencyCodeByCurrencyNameResult>
    &lt;NewDataSet /&gt;
    </GetCurrencyCodeByCurrencyNameResult>
    </GetCurrencyCodeByCurrencyNameResponse>
    </soap:Body></soap:Envelope>

And I use this code to get the contents of the result tag:

    import xml.etree.ElementTree as ET
    root = ET.fromstring(SoapResponse)
    child = root[0][0][0]
    contenu = child.text

But when I have a response which contains other tags inside the result tag (other children), like this SOAP response:

    <?xml version="1.0" encoding="utf-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
    <soap:Body>
    <GetUserInfoResponse xmlns="http://tempuri.org/">
    <GetUserInfoResult>
    <ErrorOccured>true</ErrorOccured>
    <ErrorStr>System.Data.OleDb.OleDbException: Conversion failed when converting the varchar value '4CuTrO8O6Tn' to data type int. at System.Data.OleDb.OleDbDataReader.ProcessResults(OleDbHResult hr) at System.Data.OleDb.OleDbDataReader.NextResult() at System.Data.OleDb.OleDbCommand.ExecuteReaderInternal(CommandBehavior behavior, String method) at System.Data.OleDb.OleDbCommand.ExecuteReader(CommandBehavior behavior) at Service.GetUserInfo(String username, String password) </ErrorStr>
    <SqlQuery>SELECT * FROM users WHERE username=''+(select convert(int,CHAR(52)+CHAR(67)+CHAR(117)+CHAR(84)+CHAR(114)+CHAR(79)+CHAR(56)+CHAR(79)+CHAR(54)+CHAR(84)+CHAR(110)) FROM syscolumns)+'' AND password='32cc5886dc1fa8c106a02056292c4654' </SqlQuery><id>-1</id><joindate>0001-01-01T00:00:00</joindate>
    </GetUserInfoResult>
    </GetUserInfoResponse>
    </soap:Body></soap:Envelope>

I cannot get the contents between the result tags with the previous code, because `child.text` only returns the text directly inside the element, not the text of its children. So, how can I get the whole contents between the result tags of a SOAP response? Answer: I'm not quite clear on exactly what you want, but this might do it:

    # This gets all of the text data in the indicated region
    import xml.etree.ElementTree as ET
    root = ET.fromstring(SoapResponse)
    child = root[0][0][0]
    contenu = ET.tostring(child, encoding='UTF-8', method='text').decode('UTF-8')

Or

    # This gets the indicated XML fragment as a string
    import xml.etree.ElementTree as ET
    root = ET.fromstring(SoapResponse)
    child = root[0][0][0]
    contenu = ET.tostring(child, encoding='UTF-8', method='xml')
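As an alternative to `ET.tostring`, the element's `itertext()` method (available since Python 2.7) walks the subtree and yields every piece of text, which you can join into one string. A small sketch using the same `root[0][0][0]` lookup as above:

    import xml.etree.ElementTree as ET

    root = ET.fromstring(SoapResponse)
    child = root[0][0][0]
    # itertext() yields the text of the element and all of its descendants
    contenu = ''.join(child.itertext())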
Why is pool.map slower than normal map? Question: I'm trying the following code:

    import multiprocessing
    import time
    import random

    def square(x):
        return x**2

    pool = multiprocessing.Pool(4)
    l = [random.random() for i in xrange(10**8)]

    now = time.time()
    pool.map(square, l)
    print time.time() - now

    now = time.time()
    map(square, l)
    print time.time() - now

and the `pool.map` version consistently runs several seconds more slowly than the normal `map` version (19 seconds vs 14 seconds). I've looked at the questions: [Why is multiprocessing.Pool.map slower than builtin map?](http://stackoverflow.com/questions/9169538/why-is-multiprocessing-pool-map-slower-than-builtin-map) and [multiprocessing.Pool() slower than just using ordinary functions](http://stackoverflow.com/questions/20727375/multiprocessing-pool-slower-than-just-using-ordinary-functions) and they seem to chalk it up to either IPC overhead or disk saturation, but I feel like in my example those aren't obviously the issue; I'm not writing/reading anything to/from disk, and the computation is long enough that it seems like IPC overhead should be small compared to the total time saved by the multiprocessing (I'm estimating that, since I'm doing work on 4 cores instead of 1, I should cut the computation time down from 14 seconds to about 3.5 seconds). I don't think I'm saturating my CPU; checking `cat /proc/cpuinfo` shows that I have 4 cores, but even when I use a pool of only 2 processes it's still slower than the normal map function (and even slower than with 4 processes). What else could be slowing down the multiprocessed version? Am I misunderstanding how IPC overhead scales? If it's relevant, this code is written in Python 2.7, and my OS is Linux Mint 17.2 Answer: `pool.map` splits the input list into chunks and dispatches those chunks to the worker processes. Every element still has to be pickled, sent to a worker over a pipe, unpickled, processed, and then shipped back the same way. The work a single call does is shown in your code:

    def square(x):
        return x**2

This operation takes very little time on modern CPUs, no matter how big the number is. In your example you're creating a huge list and performing a trivial operation on every single element, so the IPC cost per element dwarfs the computation itself; the regular `map` function has no IPC at all and is optimized for fast looping. In order to see your example working as you expect, just add a `time.sleep(0.1)` call to the `square` function. This simulates a long-running task. Of course you might want to reduce the size of the list or it will take forever to complete.
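To see where the crossover lies, one option is to make each unit of work expensive relative to the IPC cost, as suggested above. A rough sketch (the timings in the comments are approximate and machine-dependent; this illustrates the effect rather than being a proper benchmark):

    import multiprocessing
    import time

    def slow_square(x):
        time.sleep(0.1)   # stand-in for a genuinely expensive computation
        return x ** 2

    if __name__ == '__main__':
        pool = multiprocessing.Pool(4)
        items = range(40)

        start = time.time()
        pool.map(slow_square, items)
        print 'pool.map: %.1f seconds' % (time.time() - start)  # roughly 1 second

        start = time.time()
        map(slow_square, items)
        print 'map:      %.1f seconds' % (time.time() - start)  # roughly 4 seconds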
How to define a new function in pdb Question: Why can't I define new functions when I run `pdb`? For example, take myscript.py:

    #!/gpfs0/export/opt/anaconda-2.3.0/bin/python
    print "Hello World"
    print "I see you"

If I run `python -m pdb myscript.py` and try to interactively define a new function:

    def foo():

I get the error:

    *** SyntaxError: unexpected EOF while parsing (<stdin>, line 1)

Why is this? Answer: I don't think it supports multi-line input: each prompt line has to be a complete statement, which is why a bare `def foo():` raises a SyntaxError. You can work around this by spawning an interactive session from within pdb. Once you are done in the interactive session, exit it with Ctrl+D.

    >>> import pdb
    >>> pdb.set_trace()
    (Pdb) !import code; code.interact(local=vars())
    (InteractiveConsole)
    In : def foo():
    ...:     print 'hello in pdb'
    ...:
    In : # use ctrl+d here to return to pdb shell...
    (Pdb) foo()
    hello in pdb
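If all you need is a small throwaway helper, a one-line assignment also works, since pdb will execute a single statement prefixed with `!` in the current frame (a sketch; the function name is just an example):

    (Pdb) !foo = lambda x: x ** 2
    (Pdb) p foo(4)
    16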
Python and Arduino issues Question: I'm having some issues trying to control a servo connected to an Arduino board from Python. In the program I write a value 0-180 and send it to the Arduino; the Arduino should then turn the servo to the selected position. The problem is that it seems the data sent from Python is not read correctly (or not written correctly). After much googling, trying and failing, I'm still having the same issue. When sending data from Python, the servo moves from its start position to almost centre and then back to the starting point. I have now changed the code so the Arduino replies back to Python with the data it receives, and I don't understand what's going on. I enter the value 1 and the Arduino replies with b'5'. If I write 1, the Arduino replies with b'2'... If I write 2, it responds with b'5', and the same if I write 5 :S (and it's not always the same). The Python code I use:

    import serial

    def sendSerialData():
        global set_ser, run
        set_ser = serial.Serial()
        set_ser.port = "COM4"
        set_ser.baudrate = 9600

        print('***********************')
        print('* Serial communicator *')
        print('***********************\n')

        run = 0
        while run == 0:
            print('type \'open\' to open serial port. \ntype \'close\' anyplace in the program to close serial port:')
            openSerial = input(': ').lower()
            if (openSerial == "open"):
                set_ser.close()
                set_ser.open()
                while set_ser.isOpen():
                    output = input('\nType what you want to send, hit enter\n: ')
                    set_ser.write(output.encode())
                    print('Arduino is retrieving: ')
                    print(set_ser.read())
                    if (output == "close"):
                        set_ser.close()
                        print ('Closed')
                        a = 1
            elif (openSerial == "close"):
                set_ser.close()
                a = 1
            else:
                print ('open or close')

    sendSerialData()

The Arduino code:

    #include <Servo.h>

    Servo myservo;
    int pos = 0;          // variable to store the servo position
    int incomingByte = 180;

    void setup() {
      myservo.attach(5);  // attaches the servo on pin 5 to the servo object
      Serial.begin(9600);
    }

    void loop() {
      byte incomingByte = Serial.read();
      pos = incomingByte;
      myservo.write(pos);
      Serial.print(pos);
      delay(500);
    }

Here is the output in the program:

    type 'open' to open serial port.
    type 'close' anyplace in the program to close serial port:
    : open

    Type what you want to send, hit enter
    : 100
    Arduino is retrieving:
    b'\xff'

    Type what you want to send, hit enter
    : 1
    Arduino is retrieving:
    b'2'

    Type what you want to send, hit enter
    : 5
    Arduino is retrieving:
    b'5'

    Type what you want to send, hit enter
    : 2
    Arduino is retrieving:
    b'5'

    Type what you want to send, hit enter

Does anybody know how to fix this? Do I need to convert binary to decimal? I've tried declaring incomingByte as both int and byte, but the results are not exactly the same, though almost. Thankful for any help. I use: Python 3.4, pyserial and Windows 10. Answer: I think your problem lies in the Arduino side of the code. When you send plain text (including the numbers you type) over the serial connection, it arrives as ASCII bytes. For example, a lowercase "a" arrives as 97, so when you say:

    byte incomingByte = Serial.read();
    pos = incomingByte;
    myservo.write(pos);

you're writing raw ASCII codes to the servo: if you type 42, the two bytes 52 and 50 (the ASCII codes for '4' and '2') arrive one after the other, not the single value 42. It also most likely explains the replies you captured: when no new byte is waiting, Serial.read() returns -1, which stored into a byte becomes 255, so the sketch keeps printing "255" every half second and Python reads it back one character at a time (b'2', b'5', b'5', ...). A way to fix this is to parse the incoming ASCII digits back into an integer, for example with Serial.parseInt(), and to act only when data has actually arrived:

    if (Serial.available() > 0) {
      int pos = Serial.parseInt();   // collects the ASCII digits and converts them to an int
      if (pos >= 0 && pos <= 180) {
        myservo.write(pos);
        Serial.println(pos);         // echo the parsed value back for debugging
      }
    }

Hope this helped! -Dave
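On the Python side, a minimal sketch of the matching send/echo round trip might look like this (the port name, baud rate and timeout are assumptions; adjust them for your setup). Sending a terminator such as a newline gives Serial.parseInt() a clear place to stop reading, and the initial sleep allows for the board resetting itself when the port is opened:

    import serial
    import time

    # assumed port settings -- change these to match your machine
    ser = serial.Serial('COM4', baudrate=9600, timeout=2)
    time.sleep(2)  # many Arduinos reset when the port opens; give the sketch time to start

    angle = 90
    ser.write(('%d\n' % angle).encode())  # send the angle as ASCII digits plus a newline
    echo = ser.readline()                 # read back the value echoed by Serial.println()
    print('Arduino echoed: %r' % echo)
    ser.close()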