is it possible not to use "self" in a class? Question: > **Possible Duplicate:** > [Python: How to avoid explicit 'self'?](http://stackoverflow.com/questions/1984104/python-how-to-avoid-explicit-self) In a Python class, if I need to reference a member variable, I need to put a self. before it. This is annoying; can I skip that and still reference the class member? Thanks. Bin Answer: No. >>> import this ... Explicit is better than implicit. ...
Synchronizing embedded Python in multi-threaded program Question: Here is an example of using the Python interpreter in a multi-threaded program: #include <python.h> #include <boost/thread.hpp> void f(const char* code) { static volatile auto counter = 0; for(; counter < 20; ++counter) { auto state = PyGILState_Ensure(); PyRun_SimpleString(code); PyGILState_Release(state); boost::this_thread::yield(); } } int main() { PyEval_InitThreads(); Py_Initialize(); PyRun_SimpleString("x = 0\n"); auto mainstate = PyEval_SaveThread(); auto thread1 = boost::thread(f, "print('thread #1, x =', x)\nx += 1\n"); auto thread2 = boost::thread(f, "print('thread #2, x =', x)\nx += 1\n"); thread1.join(); thread2.join(); PyEval_RestoreThread(mainstate); Py_Finalize(); } It looks fine, but it isn't synchronized. The Python interpreter releases and reacquires the GIL multiple times during PyRun_SimpleString (see [docs, p.#2](http://docs.python.org/py3k/c-api/init.html#thread-state-and-the-global-interpreter-lock)). We can serialize the PyRun_SimpleString call by using our own synchronization object, but that's the wrong way. Python has its own synchronization modules - `_thread` and `threading`. But they don't work in this code: Py_Initialize(); PyRun_SimpleString(R"( import _thread sync = _thread.allocate_lock() x = 0 )"); auto mainstate = PyEval_SaveThread(); auto thread1 = boost::thread(f, R"( with sync: print('thread #1, x =', x) x += 1 )"); * it yields an error `File "<string>", line 3, in <module> NameError: name '_[1]' is not defined` and deadlocks. What is the most efficient way to synchronize embedded Python code? Answer: When CPython calls out to a function that may block (or re-enter Python), it releases the global interpreter lock before calling the function, and then re-acquires the lock after the function returns. In your code, it's your call to the built-in `print` function that causes the interpreter lock to be released and the other thread to run (see `string_print` in [stringobject.c](https://svn.python.org/projects/python/branches/release27-maint/Objects/stringobject.c)). So you need your own lock: the global interpreter lock is not suitable for ensuring serialization of Python code that does I/O. Since you're using the Boost thread framework, you'd probably find it most convenient to use one of the Boost [thread synchronization primitives](http://www.boost.org/doc/libs/1_37_0/doc/html/interprocess/synchronization_mechanisms.html), e.g. `boost::interprocess::interprocess_mutex`. [Edited: my original answer was wrong, as pointed out by Abyx.]
Python reading a tabbed text file into a series of lists Question: Hi, as an inexperienced python user I would appreciate any help with the following programming challenge: I have a text file with tabulated data, I want to read it and put the values on each line into different python lists. The file looks like this: 1 303233.479 233942.326 52.500 0.000 97 47 39.5 INFINITY 0.00034 0.00000 PBT PBT A001 B001 2 303386.031 233921.445 52.553 153.975 97 47 39.5 INFINITY 0.00034 0.00000 TS A001 3 303397.931 233919.897 52.557 165.975 96 38 54.2 -300.000 0.00034 0.00000 SC A002 4 303405.224 233919.137 52.559 173.308 95 14 52.6 -300.000 0.00034 6.25000 PC B002 There are 13 columns and I want to put the values into 13 lists. I understand how to do this for a couple of columns but I am a bit stumped at how to do this for 13 columns. #Here is my pathetic attempt at this pntnums = [] #a xcogo = [] #b ycogo = [] #c zcogo = [] #d chain = [] #e bearing = [] #f rad = [] #g grad = [] #h mval = [] #i HCOD = [] #j VCOD = [] #k fd = file("align.txt").readlines(): a, b, c, d, e, f, g, h, i, j, k, = [int(s) for s in l.split()] pntnums.append(int(a)) xcogo.append(int(b)) ycogo.append(int(c)) zcogo.append(int(d)) chain.append(int(e)) bearing.append(int(f)) rad.append(int(g)) grad.append(int(h)) mval.append(int(i)) HCOD.append(int(j)) VCOD.append(int(k)) for val in pntnums: print val #and the corresponding output: Traceback (most recent call last): File "C:\MYPY\test.py", line 2, in <module> dataDict = dict(zip([float(i[1]) for i in data], [j[0] for j in data])) IndexError: list index out of range Any help on this would be most appreciated (even a url), as I have searched and could not find a solution. newuser Answer: You should use a [`csv.reader`](http://docs.python.org/library/csv.html#csv.reader); this is a built-in class in Python designed specifically for reading files like this. >>> import csv >>> fieldnames = ("pntnums", "xcogo", "ycogo", "zcogo", "bearing", "rad", "grad", "mval", "HCOD", "VCOD") >>> reader = csv.DictReader(open(...), delimiter="\t", fieldnames=fieldnames) You can then iterate over the elements of `reader` and it will give you dictionaries: >>> import pprint >>> for row in reader: ... pprint.pprint(row) ... {None: ['0.00000', 'PBT PBT', 'A001 B001 '], 'HCOD': 'INFINITY', 'VCOD': '0.00034', 'bearing': '0.000', 'grad': '47', 'mval': '39.5', 'pntnums': '1', 'rad': '97', 'xcogo': '303233.479', 'ycogo': '233942.326', 'zcogo': '52.500'} {None: ['0.00000', 'TS', 'A001'], 'HCOD': 'INFINITY', 'VCOD': '0.00034', 'bearing': '153.975', 'grad': '47', 'mval': '39.5', 'pntnums': '2', 'rad': '97', 'xcogo': '303386.031', 'ycogo': '233921.445', 'zcogo': '52.553'} {None: ['0.00000', 'SC', 'A002'], 'HCOD': '-300.000', 'VCOD': '0.00034', 'bearing': '165.975', 'grad': '38', 'mval': '54.2', 'pntnums': '3', 'rad': '96', 'xcogo': '303397.931', 'ycogo': '233919.897', 'zcogo': '52.557'} {None: ['6.25000', 'PC', 'B002'], 'HCOD': '-300.000', 'VCOD': '0.00034', 'bearing': '173.308', 'grad': '14', 'mval': '52.6', 'pntnums': '4', 'rad': '95', 'xcogo': '303405.224', 'ycogo': '233919.137', 'zcogo': '52.559'} (The data probably don't match up exactly with the fields here, because I don't have the original tab-separated text, just what I can copy-paste from SO. It will work if you feed it the original file =).)
Encoding problems in python x64 Question: I'm trying to write a little script for writing a sqlite table from a file list saved in a file. The code so far is this: import os import _sqlite3 import sys print sys.path[0] mydir = sys.path[0] print (mydir) def listdir(mydir): lis=[] for root, dirs, files in os.walk(mydir): for name in files: lis.append(os.path.join(root,name)) return lis filename = "list.txt" print ("writting in %s" % filename) file = open(filename, 'w' ) for i in listdir(mydir): file.write(i) file.write("\n") file.close() con = _sqlite3.connect("%s/conection"%mydir) c=con.cursor() c.execute(''' drop table files ''') c.execute('create table files (name text, other text)') file = open(filename,'r') for line in file : a = 1 for t in [("%s"%line, "%i"%a)]: c.execute('insert into files values(?,?)',t) a=a+1 c.execute('select * from files') print c.fetchall() con.commit() c.close() When I run it I get the following: Traceback (most recent call last): File "C:\Users\josh\FORGE.py", line 32, in <module> c.execute('insert into files values(?,?)',t) ProgrammingError: You must not use 8-bit bytestrings unless you use a text_factory that can interpret 8-bit bytestrings (like text_factory = str). It is highly recommended that you instead just switch your application to Unicode strings. I've tried the unicode() built-in function but it still won't work, saying that it can't decode the character 0xed or something. I know the problem is the encoding of the list strings, but I can't find a way to put them right. Any ideas? Thanks in advance! Answer: (0) Please reformat your code. 1. After `for line in file:` do something like `line = line.decode('encoding-of-the-file')`, with the encoding being something like `utf-8` or `iso-8859-1` \-- **you have to know your input encoding**. If you don't know the encoding or don't care about having a clean decoding, you can guess the most probable encoding and do a `line.decode('utf-8', 'ignore')`, omitting all characters that are not decodable. Alternatively, you can use `'replace'`, which replaces these chars with the 'Unicode Replacement Character' (\ufffd) 2. use internally and during communication with the database **only** `unicode` objects, e.g. `u'this is unicode'` (3). Don't use `file` as a variable name. Also look here: [Best Practices for Python UnicodeDecodeError](http://stackoverflow.com/questions/3933911/best-practices-for-python-unicodedecodeerror/3933973#3933973)
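Applied to the loop in the question, point 1 might look like this sketch (utf-8 is a guess; substitute the file's real encoding, and `c`, `filename` and `a` are the variables from the question's script):

    for line in open(filename):
        line = line.decode('utf-8', 'replace')  # now a unicode object
        c.execute('insert into files values(?,?)', (line, unicode(a)))
        a = a + 1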
Bootstrapping a web server in Scala Question: The following is possible using Python: $ apt-get install python $ easy_install Flask $ cat > hello.py from flask import Flask app = Flask(__name__) @app.route("/") def hello(): return "Hello World!" if __name__ == "__main__": app.run() $ python hello.py 4 commands and 7 lines of code to get a web server running is very impressive indeed. What's the Scala equivalent? Answer: You might find [Unfiltered](http://unfiltered.databinder.net) worth a look.
Is `extend` faster than `+=`? Question: In python, we can concatenate lists in two ways: 1. lst.extend(another_lst) 2. lst += another_lst I thought `extend` would be faster than using `+=`, because it reuses the list instead of creating a new one out of the two operands. But when I test it out with `timeit`, it turns out that `+=` is faster, >>> timeit('l.extend(x)', 'l = range(10); x = range(10)') 0.16929602623 >>> timeit('l += x', 'l = range(10); x = range(10)') 0.15030503273 >>> timeit('l.extend(x)', 'l = range(500); x = range(100)') 0.805264949799 >>> timeit('l += x', 'l = range(500); x = range(100)') 0.750471830368 Is there something wrong with the code I put in `timeit`? Answer: EDIT: I've tested the performance and I can't replicate the differences to any significant level. * * * Here's the bytecode -- thanks to @John Machin for pointing out inconsistencies. >>> import dis >>> l = [1,2,3] >>> m = [4,5,6] >>> def f1(l, m): ... l.extend(m) ... >>> def f2(l,m): ... l += m ... >>> dis.dis(f1) 2 0 LOAD_FAST 0 (l) 3 LOAD_ATTR 0 (extend) 6 LOAD_FAST 1 (m) 9 CALL_FUNCTION 1 12 POP_TOP 13 LOAD_CONST 0 (None) 16 RETURN_VALUE >>> dis.dis(f2) 2 0 LOAD_FAST 0 (l) 3 LOAD_FAST 1 (m) 6 INPLACE_ADD 7 STORE_FAST 0 (l) 10 LOAD_CONST 0 (None) 13 RETURN_VALUE Notice that `extend` uses a `CALL_FUNCTION` instead of an `INPLACE_ADD`. Any trivial performance differences can probably be put down to this.
How to copy a message from one imap server to another imap server using Python imaplib? Question: I want to copy a message from one IMAP server to another IMAP server. I don't want to alter any of the message data. I'm using python imaplib. This is the code I tried: typ, data = connection1.uid('FETCH', uid, 'RFC822') connection2.uid('APPEND', None, data[0][1]) But this raises an exception: > _imaplib.error: UID command error: BAD ['"Delivered-To: [email protected]']_ So the argument (data[0][1]) is not properly formatted, I think. The contents of data[0][1] look like this: > Delivered-To: [email protected]\r\nReceived: by 10.216.207.222 with SMTP id > n27cs38120weo;\r\nFri, 12 Nov 2010 09:43:47 -0800 (PST)\r\nReceived: by > 10.200.19.19 with SMTP id y19mr234526eba.52.12894526694;\r\nFri, 12 Nov 2010 > 09:43:46 -0800 (PST)\r\nReturn-Path: [email protected]\r\nReceived: from > dub0-omc1-s20.dub03.hotmail.com (dub0-omc1-s20.dub03.hotmail.com > [157.55.0.220])\r\n ...... How can I fix this? **Update:** With the help of Wodin and Avadhesh I can append messages now, but how do I get the UID of a just-appended message? Answer: You can try the following code: typ, data = connection1.uid('FETCH', uid, 'RFC822') import email msg_str = email.message_from_string(data[0][1]) msg_create = connection2.append(str(dest_fold_code) , flags, '', str(msg_str)) where `flags` would be `'(\Seen)'` in the case of a seen email or `''` in the case of an unseen email.
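For the UID follow-up in the update, a hedged sketch: servers that support UIDPLUS (RFC 4315), Gmail among them, answer APPEND with an APPENDUID response code that you can parse out; `dest_folder` and `msg_data` stand in for your own values:

    import re
    typ, resp = connection2.append(dest_folder, '', '', msg_data)
    match = re.search(r'APPENDUID (\d+) (\d+)', resp[0])
    if match:
        uidvalidity, new_uid = match.groups()  # new_uid is the appended message's UID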
Capture single picture with opencv Question: I have seen several things about capturing frames from a webcam stream using python and opencv, but how do you capture only one picture at a specified resolution with python and opencv? Answer: Use SetCaptureProperty: import cv capture = cv.CaptureFromCAM(0) cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_HEIGHT, my_height) cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_WIDTH, my_width) cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FORMAT, cv.IPL_DEPTH_32F) img = cv.QueryFrame(capture) I do not know how to close the camera, though. Example of an ipython session with the above code: In [1]: import cv In [2]: capture = cv.CaptureFromCAM(0) In [7]: img = cv.QueryFrame(capture) In [8]: print img.height, img.width 480 640 In [9]: cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_HEIGHT, 480/2) Out[9]: 1 In [10]: cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_WIDTH, 640/2) Out[10]: 1 In [11]: img = cv.QueryFrame(capture) In [12]: print img.height, img.width 240 320
Best way to store a python time.struct_time in a mysql database Question: I am using python's feedparser module to parse an RSS feed. Once parsed, feedparser returns dates in a python 9-tuple time format (time.struct_time). I want to store these values in my mysql database so I can later check the [Last-Modified headers](http://www.feedparser.org/docs/http-etag.html) of the feed. It's important that if the time tuples are converted, they stay the same when converted back, so I can later use them for comparison. I tried this to convert the time tuple to datetime and then back, but it wasn't the same when converted back: dt = datetime.fromtimestamp(time.mktime(struct)) time_tuple = dt.timetuple() What do you think is the best method to do this? Answer: Store the datetime as UTC in the database, along with the timezone. Convert back to a local time when pulling from the database if needed.
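A minimal round-trip sketch along these lines, assuming the struct_time is already UTC (feedparser's `*_parsed` fields are); `struct` is the tuple from the question:

    import calendar
    from datetime import datetime

    # struct_time (UTC) -> datetime, suitable for a DATETIME/TIMESTAMP column
    dt = datetime.utcfromtimestamp(calendar.timegm(struct))

    # datetime -> struct_time again; the first six fields survive the round trip
    restored = dt.utctimetuple()
    assert restored[:6] == struct[:6]

`calendar.timegm()` is the UTC counterpart of `time.mktime()`; using `mktime()` is what made the original round trip lossy, because it interprets the tuple as local time.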
Formatting data quantity/capacity as string Question: A common task in many programs is converting a byte count (such as from a drive capacity or file size) into a more human-readable form. Consider 150000000000 bytes as being more readable as "150 GB", or "139.7 GiB". Are there any libraries that contain functionality to perform these conversions? In Python? In C? In pseudocode? Are there any best practices regarding the "most readable" form, such as number of significant characters, precision etc.? Answer: Here's a method that uses logarithms to determine the file size unit exponent: from math import log byteunits = ('B', 'KiB', 'MiB', 'GiB', 'TiB', 'PiB', 'EiB', 'ZiB', 'YiB') def filesizeformat(value): exponent = int(log(value, 1024)) return "%.1f %s" % (float(value) / pow(1024, exponent), byteunits[exponent])
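A quick check against the question's own example:

    >>> filesizeformat(150000000000)
    '139.7 GiB'

Note the function assumes `value >= 1`: `log(0)` raises an error and sub-byte values produce a negative exponent, so guard small inputs if they can occur, e.g. with `max(value, 1)`.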
Python module visible only in iPython Question: I have used `set PYTHONPATH=%PYTHONPATH%;dictionary_containing_modules` to make some modules globally visible. I have a script importing the modules with `import my_module`. When I run the script from the iPython shell (`run my_script.py`), I get no errors and the script runs as intended, but when I run the script from the command prompt (Windows) with `python my_script.py` I get the error: ImportError: No module named my_module I checked with `pwd` that they use the same working directory. Answer: Remember that you can dynamically change your system path from within your script, using [`sys.path`](http://docs.python.org/library/sys.html#sys.path) or the [`site` module](http://docs.python.org/library/site.html). So maybe you want to add them to your script... Or maybe you want to write a BAT or Python launcher script that sets the PYTHONPATH... Or you want to edit the Windows environment variables (somewhere inside System properties `Win`+`Break`).
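A minimal sketch of that in-script alternative, keeping the placeholder path from the question:

    import sys
    sys.path.append('dictionary_containing_modules')
    import my_module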
How do I use a relative path in a Python module when the CWD has changed? Question: I have a Python module which uses some resources in a subdirectory of the module directory. After searching around on stack overflow and finding related answers, I managed to direct the module to the resources by using something like import os os.path.join(os.path.dirname(__file__), 'fonts/myfont.ttf') This works fine when I call the module from elsewhere, but it breaks when I call the module after changing the current working directory. The problem is that the contents of `__file__` are a relative path, which doesn't take into account the fact that I changed the directory: >>> mymodule.__file__ 'mymodule/__init__.pyc' >>> os.chdir('..') >>> mymodule.__file__ 'mymodule/__init__.pyc' How can I encode the absolute path in `__file__`, or barring that, how can I access my resources in the module no matter what the current working directory is? Thanks! Answer: Store the absolute path to the module directory at the very beginning of the module: package_directory = os.path.dirname(os.path.abspath(__file__)) Afterwards, load your resources based on this `package_directory`: font_file = os.path.join(package_directory, 'fonts', 'myfont.ttf') And above all, do not modify process-wide resources like the current working directory. There is never a real need to change the working directory in a well-written program; consequently, avoid `os.chdir()`.
Python+ubuntu error Question: I am trying to run the following python program import re regex=re.compile("http...imgs.xkcd.com.comics.[\\S]*.[jpg|png]") f=open('out.txt') for a in f: print regex.findall(a) print '\n' When I type the code into the interpreter manually, it works as expected, but when I save it as a file and try to run it, it gives errors. The commands I used to run it are chmod +x pymod.py and then sudo ./pymod.py ERROR: ./pymod.py: 2: Syntax error: "(" unexpected If I don't use sudo, the error I get is ./pymod.py: line 2: syntax error near unexpected token `(' ./pymod.py: line 2: `regex=re.compile("http...imgs.xkcd.com.comics.[\\S]*.[jpg|png]")' I am using Ubuntu 10.04 with everything on default; it takes about 10-15 seconds for the error to appear. Answer: Your file should start with a shebang line that gives the path to the Python interpreter: #!/usr/bin/env python import re regex=re.compile("http...imgs.xkcd.com.comics.[\\S]*.[jpg|png]") Check out: <http://en.wikipedia.org/wiki/Shebang_(Unix)>
Connecting and Saving Data With Redis Inside Celery Task Question: I have an object that saves data to Redis. It needs to block as little as possible, so I've decided to use Celery to offload the task. When I try to .save() the object outside of celery, it connects to Redis and stores the data just fine. However, when I try to do the exact same thing from a Celery task, it looks like it runs, but there is no connection to Redis, no exception, no error output and nothing gets saved to the Redis server. I replicated the problem with the small bit of code below. test.py: from celery.decorators import task import redis class A(object): def __init__(self): print "init" def save(self): self.r = self.connect() self.r.set('foo', 'bar') print "saved" def connect(self): return redis.Redis(host="localhost", port=6379) a = A() @task def something(a): a.save() Here is the Python console output: >>> from test import * init >>> a <test.A object at 0x1010e3c10> >>> result = something.delay(a) >>> result.ready() True >>> result.successful() True And here is the celeryd output: [2010-11-15 12:05:33,672: INFO/MainProcess] Got task from broker: test.something[d1d71ee5-7206-4fa7-844c-04445fd8bead] [2010-11-15 12:05:33,688: WARNING/PoolWorker-2] saved [2010-11-15 12:05:33,694: INFO/MainProcess] Task test.something[d1d71ee5-7206-4fa7-844c-04445fd8bead] succeeded in 0.00637984275818s: None Any help would be awesome! I've replicated the issue on multiple computers, with multiple python versions. Answer: The problem was being caused by a misconfiguration in the celeryconfig.py. CELERY_IMPORTS needed to include the task module. This is resolved.
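For reference, a sketch of that fix, assuming the tasks live in the `test` module from the question:

    # celeryconfig.py
    CELERY_IMPORTS = ("test",)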
Abuse yield to avoid condition in loop Question: I need to search for the first, last, any, or all occurrence of something in something else. To avoid repeating myself ([DRY](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself)) I came up with the following solution. Of interest are the methods `search_revisions()` and `collect_one_occurence()` of both `Searcher` classes. In `SearcherYield` I create a generator in `search_revisions()` only to abandon the generator in `collect_one_occurence()` after collecting the first result. In `SearcherCondition` I put a condition in the loop. This condition will have to be checked for every iteration of the loop. I can't decide whether my (ab)use of yield and subsequent abandoning of the generator is a stroke of genius or a hideous hack. What do you think? Do you have any other ideas for such a situation? #!/usr/bin/python class Revision: # a revision is something like a textfile. # the search() method will search the textfile # and return the lines which match the given pattern. # for demonstration purposes this class is simplified # to return predefined results def __init__(self, results): self.results = results def search(self, pattern): return self.results class AbstractSearcher: def __init__(self, revisions): self.revisions = revisions def search_for_first_occurence(self, pattern): keys = sorted(self.revisions.iterkeys()) return self.collect_one_occurence(keys, pattern) def search_for_last_occurence(self, pattern): keys = sorted(self.revisions.iterkeys(), reverse = True) return self.collect_one_occurence(keys, pattern) def search_for_any_occurence(self, pattern): keys = self.revisions.iterkeys() return self.collect_one_occurence(keys, pattern) def search_for_all_occurences(self, pattern): keys = self.revisions.iterkeys() return self.collect_all_occurences(keys, pattern) class SearcherYield(AbstractSearcher): def search_revisions(self, keys, pattern): # create generator which yields the results one by one for key in keys: rev = self.revisions[key] result = rev.search(pattern) if result: yield result def collect_one_occurence(self, keys, pattern): # take the first result and then abandon the generator for result in self.search_revisions(keys, pattern): return result return [] def collect_all_occurences(self, keys, pattern): # collect all results from generator results = [] for result in self.search_revisions(keys, pattern): results.extend(result) return results class SearcherCondition(AbstractSearcher): def search_revisions(self, keys, pattern, just_one): # collect either all results from all revisions # or break the loop after first result found results = [] for key in keys: rev = self.revisions[key] result = rev.search(pattern) if result: results.extend(result) if just_one: break return results def collect_one_occurence(self, keys, pattern): return self.search_revisions(keys, pattern, just_one = True) def collect_all_occurences(self, keys, pattern): return self.search_revisions(keys, pattern, just_one = False) def demo(searcher): print searcher.__class__.__name__ print 'first:', searcher.search_for_first_occurence('foo') print 'last: ', searcher.search_for_last_occurence('foo') print 'any: ', searcher.search_for_any_occurence('foo') print 'all: ', searcher.search_for_all_occurences('foo') def main(): revisions = { 1: Revision([]), 2: Revision(['a', 'b']), 3: Revision(['c']), 4: Revision(['d','e', 'f']), 5: Revision([])} demo(SearcherYield(revisions)) demo(SearcherCondition(revisions)) if __name__ == '__main__': main() Some context: revisions are
basically text files. You can think of them like the revisions of a wiki page. Typically there are hundreds of revisions, sometimes thousands. Each revision contains up to thousands of lines of text. There are also cases when there are just a few revision with a few lines each. A search in a revision will search for a pattern in the text and return the matching lines. Sometimes there are thousands of results, sometimes there are no results. Sometimes I just need to know whether there are any results in any revision (search for any). Sometimes I have to collect all the results for further processing (search for all). Sometimes I just need the first revision with a match, sometimes just the last revision (search for first and last). Answer: I did a benchmark. Here are the results: $ ./benchmark.py benchmark with revcount: 1000 timeitcount: 1000 last, first, yield: 0.902059793472 last, first, cond: 0.897155046463 last, all, yield: 0.818709135056 last, all, cond: 0.818334102631 all, all, yield: 1.26602506638 all, all, cond: 1.17208003998 benchmark with revcount: 2000 timeitcount: 1000 last, first, yield: 1.80768609047 last, first, cond: 1.84234118462 last, all, yield: 1.64661192894 last, all, cond: 1.67588806152 all, all, yield: 2.55621600151 all, all, cond: 2.37582707405 benchmark with revcount: 10000 timeitcount: 1000 last, first, yield: 9.34304785728 last, first, cond: 9.33725094795 last, all, yield: 8.4673140049 last, all, cond: 8.49153590202 all, all, yield: 12.9636368752 all, all, cond: 11.780673027 The yield and the condition solution show very similar times. I think this is because the generator (yield) has a loop with a condition in it (if not empty or something like that). I thought I avoided the condition in the loop, but I just moved it out of sight. Anyway, the numbers show that the performance is mostly equal, so the code should be judged by readability. I will stick with the condition in the loop. I like explicit. Here is the benchmark code: #!/usr/bin/python import functools import timeit class Revision: # a revision is something like a textfile. # the search() method will search the textfile # and return the lines which match the given pattern. 
# for demonstration purposes this class is simplified # to return predefined results def __init__(self, results): self.results = results def search(self, pattern): return self.results class AbstractSearcher: def __init__(self, revisions): self.revisions = revisions def search_for_first_occurence(self, pattern): keys = sorted(self.revisions.iterkeys()) return self.collect_one_occurence(keys, pattern) def search_for_last_occurence(self, pattern): keys = sorted(self.revisions.iterkeys(), reverse = True) return self.collect_one_occurence(keys, pattern) def search_for_any_occurence(self, pattern): keys = self.revisions.iterkeys() return self.collect_one_occurence(keys, pattern) def search_for_all_occurences(self, pattern): keys = self.revisions.iterkeys() return self.collect_all_occurences(keys, pattern) class SearcherYield(AbstractSearcher): def search_revisions(self, keys, pattern): # create generator which yields the results one by one for key in keys: rev = self.revisions[key] result = rev.search(pattern) if result: yield result def collect_one_occurence(self, keys, pattern): # take the first result and then abandon the generator for result in self.search_revisions(keys, pattern): return result return [] def collect_all_occurences(self, keys, pattern): # collect all results from generator results = [] for result in self.search_revisions(keys, pattern): results.extend(result) return results class SearcherCondition(AbstractSearcher): def search_revisions(self, keys, pattern, just_one): # collect either all results from all revisions # or break the loop after first result found results = [] for key in keys: rev = self.revisions[key] result = rev.search(pattern) if result: results.extend(result) if just_one: break return results def collect_one_occurence(self, keys, pattern): return self.search_revisions(keys, pattern, just_one = True) def collect_all_occurences(self, keys, pattern): return self.search_revisions(keys, pattern, just_one = False) def benchmark(revcount, timeitcount): lastrev = {} for i in range(revcount): lastrev[i] = Revision([]) lastrev[revcount] = Revision([1]) allrevs = {} for i in range(revcount): allrevs[i] = Revision([1]) last_yield = SearcherYield(lastrev) last_cond = SearcherCondition(lastrev) all_yield = SearcherYield(allrevs) all_cond = SearcherCondition(allrevs) lfy = functools.partial(last_yield.search_for_first_occurence, 'foo') lfc = functools.partial(last_cond.search_for_first_occurence, 'foo') lay = functools.partial(last_yield.search_for_all_occurences, 'foo') lac = functools.partial(last_cond.search_for_all_occurences, 'foo') aay = functools.partial(all_yield.search_for_all_occurences, 'foo') aac = functools.partial(all_cond.search_for_all_occurences, 'foo') print 'benchmark with revcount: %d timeitcount: %d' % (revcount, timeitcount) print 'last, first, yield:', timeit.timeit(lfy, number = timeitcount) print 'last, first, cond:', timeit.timeit(lfc, number = timeitcount) print 'last, all, yield:', timeit.timeit(lay, number = timeitcount) print 'last, all, cond:', timeit.timeit(lac, number = timeitcount) print ' all, all, yield:', timeit.timeit(aay, number = timeitcount) print ' all, all, cond:', timeit.timeit(aac, number = timeitcount) def main(): timeitcount = 1000 benchmark(1000, timeitcount) benchmark(2000, timeitcount) benchmark(10000, timeitcount) if __name__ == '__main__': main() Some information about my system: $ lsb_release -a No LSB modules are available. 
Distributor ID: Ubuntu Description: Ubuntu 10.04.1 LTS Release: 10.04 Codename: lucid $ uname -a Linux lesmana-laptop 2.6.32-26-generic #46-Ubuntu SMP Tue Oct 26 16:46:46 UTC 2010 i686 GNU/Linux $ python --version Python 2.6.5 $ cat /proc/cpuinfo | grep name model name : Intel(R) Pentium(R) M processor 1.60GHz
wxReactor Problem Question: I am developing a program that uses Twisted and wxReactor. Every time I try to exit the application, it hangs, and I have to force quit it. My wxPython OnClose() event does call reactor.stop(), which to my knowledge should fix this issue. In my quest for an answer, I've come across this ticket: www.twistedmatrix.com/trac/ticket/3948. I've tried a patch listed on the page, with no luck. I've been at this problem for 2 weeks now, and am pretty desperate :). To give you all some background on this project: It is a freeware client that uses sockets to connect to a multiplayer game server (currently plays Monopoly and Uno). If you run it, you will immediately notice there are no graphics. That is because this client only provides audio feedback through speech synthesis and sound effects. The target audience of this project is visually impaired gamers. To test the issue, run python rsg.py (which can be found in the src folder. In case you need to know, I use Python 2.6.5). In a terminal, you will see the output the program produces (which is mostly what the server sends to our client). Once you see the line "Connection Made" (which should print very shortly after running), try closing the program (by clicking the X). The client will hang for a few seconds, and then you will need to force quit the application (on Ubuntu, it comes up asking if I want to force quit the application). I do have an idea of why it doesn't exit properly. When I ran it through gdb, two threads weren't exiting. Strangely enough, if the server closed the connection, then I exited the program, it would work fine. I really appreciate all help. Thank you in advance. **Edit** _Since I was asked to provide a basic demo of my problem, here it is:_ import wx import sys from twisted.internet import wxreactor wxreactor.install() # import t.i.reactor only after installing wxreactor: from twisted.internet import reactor from twisted.protocols.basic import LineReceiver from twisted.internet.protocol import ClientFactory class ZGPClient(LineReceiver): """Our client object.""" def lineReceived(self, datavar): "As soon as any data is received" print datavar class EchoFactory(ClientFactory): protocol = ZGPClient def startedConnecting(self, connector): global conn conn = connector print 'Started to connect.' def sendData(self, data=""): conn.transport.write(data.encode("ascii", "ignore") + "\n") class main_window(wx.Frame): def __init__(self, parent, id, title): super(main_window, self).__init__(parent, id, title, style=wx.DEFAULT_FRAME_STYLE) self.Bind(wx.EVT_CLOSE, self.OnClose) self.Show(True) def OnClose(self, event): reactor.stop() sys.exit() if __name__ == "__main__": app = wx.App() frame = main_window(None, wx.ID_ANY, "RS Games Client - No Game") reactor.registerWxApp(app) sockObj = EchoFactory() reactor.connectTCP("rsgamesmonserver.webhop.org", 3555, sockObj) reactor.run() app.MainLoop() Answer: This will work for you (note the added `_threadedselect` import): from twisted.internet import _threadedselect def OnClose(self, evt): # ugly hack until wxreactor is patched: reactor._stopping = True reactor.callFromThread(_threadedselect.ThreadedSelectReactor.stop, reactor)
Ttk on python 2.7 Question: I just installed python 2.7 from the python website, and was surprised to find that ttk wasn't included. Did I make a mistake installing, or is ttk really not included in the standard release? Anyway, where can I get a copy of ttk to install in my python installation? Note: I also heard that the activestate release has ttk. Should I uninstall and use that instead? Answer: I think you mean "ttk", not "tkk". The following should solve your problems if this is the case: from Tkinter import * from ttk import * For more about ttk and Tkinter in Python 2.7, see: <http://docs.python.org/library/ttk.html>
How to fix Python error importing ElementTree? Question: I'm beginning to learn python and here I'm trying to read from an xml file using ElementTree: import sys from elementtree.ElementTree import ElementTree doc = ElementTree(file="test.xml") doc.write(sys.stdout) However I get this error: File "my_xml.py", line 2, in from elementtree.ElementTree import ElementTree ImportError: No module named elementtree.ElementTree I do have lib files in /usr/lib/python2.6/xml/etree/... What am I doing wrong? Thanks a lot for your help :) Answer: It should be: from xml.etree.ElementTree import ElementTree More information on this can be found at the [Python docs](http://docs.python.org/library/xml.etree.elementtree.html).
Use WTForms with webapp and Django templates on Google App Engine Question: I'm trying to use WTForms with webapp without much luck. I would like to be able to use the `form_field` templatetag, as shown in the documentation: `{% form_field form.username class="big_text" onclick="do_something()" %}` I've got WTForms installed fine in my application, but its Django template tags aren't working for me. Does anyone have instructions on how to get this installed? If I can't get this working, I will probably give up and switch to Jinja2 templates. The reason we have kept with Django so far is to limit the number of dependencies as much as possible. * * * Here's what I've tried so far: I've installed Django 1.1 locally and enabled it [per the documentation](http://code.google.com/appengine/docs/python/tools/libraries.html#Django). I tried adding `INSTALLED_APPS = ['wtforms.ext.django']` to my `settings.py` \- no effect. I tried registering the wtforms templatetag manually: register = webapp.template.create_template_register() from wtforms.ext.django.templatetags import wtforms register.tag('form_field', wtforms.do_form_field) This gave me an error: `InvalidTemplateLibrary: Could not load template library from template_helpers, No module named django.templatetags` So I tried copying and pasting the template tags into my own code, and I got the error `TemplateSyntaxError: Could not parse the remainder: ' form.foobar' from 'form_field form.foobar`. However, I don't think the templatetag registration worked, because the error was the same without that code. * * * **Update:** I'm leaving this question up in case someone on the internet can some day answer it, but I switched to Jinja2 and now everything works perfectly. Webapp with Django templates is dead to me. ;-) Answer: I recommend jinja2 templates over django: <http://jinja.pocoo.org/> It's based on the django templates but more powerful and easy to use. I don't think it's a good idea to use Django templates without the django stack. Also, if you want a more structured framework, a VERY nice minimalist framework is Flask: <http://www.pocoo.org/projects/flask/#flask>. I can't praise those two libraries enough. I worked a long time in Django and found this combo to be very refreshing and succinct. P.S. this should be a very simple process to port over. It took me 10 minutes to port over a webapp site when I just found out about Flask.
How do I use xml namespaces with find/findall in lxml? Question: I'm trying to parse content in an OpenOffice ODS spreadsheet. The ods format is essentially just a zipfile with a number of documents. The content of the spreadsheet is stored in 'content.xml'. import zipfile from lxml import etree zf = zipfile.ZipFile('spreadsheet.ods') root = etree.parse(zf.open('content.xml')) The content of the spreadsheet is in a cell: table = root.find('.//{urn:oasis:names:tc:opendocument:xmlns:table:1.0}table') We can also go straight for the rows: rows = root.findall('.//{urn:oasis:names:tc:opendocument:xmlns:table:1.0}table-row') The individual elements know about the namespaces: >>> table.nsmap['table'] 'urn:oasis:names:tc:opendocument:xmlns:table:1.0' How do I use the namespaces directly in find/findall? The obvious solution does not work. Trying to get the rows from the table: >>> root.findall('.//table:table') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "lxml.etree.pyx", line 1792, in lxml.etree._ElementTree.findall (src/lxml/lxml.etree.c:41770) File "lxml.etree.pyx", line 1297, in lxml.etree._Element.findall (src/lxml/lxml.etree.c:37027) File "/usr/lib/python2.6/dist-packages/lxml/_elementpath.py", line 225, in findall return list(iterfind(elem, path)) File "/usr/lib/python2.6/dist-packages/lxml/_elementpath.py", line 200, in iterfind selector = _build_path_iterator(path) File "/usr/lib/python2.6/dist-packages/lxml/_elementpath.py", line 184, in _build_path_iterator selector.append(ops[token[0]](_next, token)) KeyError: ':' Answer: If `root.nsmap` contains the `table` namespace prefix then you could: root.xpath('.//table:table', namespaces=root.nsmap) `findall(path)` accepts `{namespace}name` syntax instead of `namespace:name`. Therefore `path` should be preprocessed using namespace dictionary to the `{namespace}name` form before passing it to `findall()`.
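As a concrete sketch for this document (hedged: the explicit mapping below avoids depending on `nsmap`, and recent lxml versions also accept a `namespaces` mapping directly in `find`/`findall`):

    TABLE_NS = 'urn:oasis:names:tc:opendocument:xmlns:table:1.0'

    # Clark notation, which findall() always understands:
    rows = root.findall('.//{%s}table-row' % TABLE_NS)

    # prefix:name plus an explicit prefix -> URI mapping:
    rows = root.findall('.//table:table-row', namespaces={'table': TABLE_NS})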
lxml build on Solaris 10 Question: Please can you help and advise on a problem with a python 2.6.6 and lxml build on Solaris 10? Installation instructions: www.sunfreeware.com/download.html direct link to the file: <http://www.sunfreeware.com/ftp/pub/freeware/sparc/10/lxml-2.2.8-sol10-sparc-local.gz> [rainier]/usr/apps/openet/bmsystest/relAuto/RAP_SW> python Python 2.6.6 (r266:84292, Oct 12 2010, 15:25:47) [C] on sunos5 Type "help", "copyright", "credits" or "license" for more information. >>> import lxml >>> from lxml import etree Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: ld.so.1: python: fatal: relocation error: file /opt/csw/lib/python/site-packages/lxml-2.2.8-py2.6-solaris-2.10-sun4u.egg/lxml/etree.so: symbol xsltDocDefaultLoader: referenced symbol not found >>> Thanks * * * Mismatch of versions: this is identical to the advice I got independently, and I can only pass it on to the installer as I am a developer and do not have root privileges. Thanks for such a quick response! Answer: I've seen this before. I think it was due to a mismatch between two versions of python: python was calling /usr/local/bin/python, but lxml had been compiled against a different version of python (found in /bin/python or something like that).
How to force Jinja2 templates to recompile? Question: I'm trying to switch Jinja2 templates in a Django app without restarting the application. Has anyone done this? Basically I need to force Jinja2 to reload the templates once the skin selection change is applied. I've tried to re-create the cache object on the template environment object with no effect. myskin_utils.py: from jinja2.environment import create_cache ENV_OBJECT.cache = create_cache(50) I've also tried to reload the module that contains my ENV_OBJECT with reload(myskin) #also no effect on the output Another thing I'd like to change on the fly is language, but I guess it's a separate question. Thanks for any advice. **edit:** I don't have a cache set up with Jinja2, but I do see a speed-up from using Jinja after switching from Django templates; I suspect that template bytecode lives in the compiled code of my view functions but I did not look into the details of Jinja. I have ENV (an instance of `CoffinEnvironment` which subclasses Jinja's `Environment`) imported in the global namespace of a view module and call `ENV.get_template()` inside view functions (Django+Coffin+Jinja2). Found that if I call python's `reload()` builtin on my environment module **within** the view function the template does switch, but I would not like to stick that code into every function. Answer: By default Jinja2 doesn't use any caching at all, but it's recommended to configure a caching backend to speed things up a little bit, so that Jinja2 doesn't have to parse and compile every template on every request. Jinja2 currently supports 2 different cache types out of the box: One of them is `FileSystemBytecodeCache` which is (as the name suggests) file based. So all compiled templates are stored on the file system and retrieved from there. If you look closely at the implementation, you will also find a `cache.clear()` method there which simply deletes all files in this temporary folder, causing all templates to be parsed/compiled again. The other cache type is called `MemcachedBytecodeCache` which is just a thin wrapper for Memcache. This method is recommended, because Memcache stores everything in memory, so it's a little bit faster than hitting the disk, and you can use the same cache from different hosts (which is useful if you are running some kind of cluster). The underlying Memcache client (either werkzeug.contrib.cache, python-memcached or cmemcache) does also provide a `clear()` method which will delete everything inside the cache. But because you probably use the cache for other things too (e.g. storing the result of expensive database queries there), the `clear()` method isn't exposed in Jinja, because it would affect everything (and not just the templates). So, to summarize, your options are: * Use Jinja2 without a cache * Use Jinja2 with a `FileSystemBytecodeCache` and call `cache.clear()` * Use Jinja2 with a `MemcachedBytecodeCache` and call `memcache_client.clear()` (which will also clear everything else in the cache). * Run a separate memcached process on another port which is only used with Jinja2. Then call `memcache_client.clear()` and all templates will be cleared.
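For illustration, a minimal sketch of the `FileSystemBytecodeCache` option (the cache directory is made up):

    from jinja2 import Environment, FileSystemLoader, FileSystemBytecodeCache

    bcc = FileSystemBytecodeCache('/tmp/jinja_cache')
    env = Environment(loader=FileSystemLoader('templates'), bytecode_cache=bcc)

    # after switching skins, throw away every compiled template:
    bcc.clear()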
ImportError: cannot import name tz (psycopg2) Question: I am using Windows XP, and using the Python runtime from <http://www.python.org/ftp/python/2.7/python-2.7.msi> If I am running a standalone application, `import psycopg2` doesn't cause me any trouble. However, when it comes to mod_wsgi + apache, I get the following error [Thu Nov 18 14:26:51 2010] [error] [client 127.0.0.1] mod_wsgi (pid=2832): Target WSGI script 'C:/Projects/SandBox/web/script/index.py' cannot be loaded as Python module. [Thu Nov 18 14:26:51 2010] [error] [client 127.0.0.1] mod_wsgi (pid=2832): Exception occurred processing WSGI script 'C:/Projects/SandBox/web/script/index.py'. [Thu Nov 18 14:26:51 2010] [error] [client 127.0.0.1] Traceback (most recent call last): [Thu Nov 18 14:26:51 2010] [error] [client 127.0.0.1] File "C:/Projects/SandBox/web/script/index.py", line 9, in <module> [Thu Nov 18 14:26:51 2010] [error] [client 127.0.0.1] import psycopg2 [Thu Nov 18 14:26:51 2010] [error] [client 127.0.0.1] File "build\\bdist.win32\\egg\\psycopg2\\__init__.py", line 65, in <module> [Thu Nov 18 14:26:51 2010] [error] [client 127.0.0.1] from psycopg2 import tz [Thu Nov 18 14:26:51 2010] [error] [client 127.0.0.1] ImportError: cannot import name tz Here is the python script. import sys, os sys.path.append(os.path.dirname(__file__)) import psycopg2 def application(environ, start_response): status = '200 OK' output = 'Hello World!' response_headers = [('Content-type', 'text/plain'), ('Content-Length', str(len(output)))] start_response(status, response_headers) return [output] and here is the httpd.conf file. LoadModule wsgi_module modules/mod_wsgi-win32-ap22py27-3.3.so WSGIScriptAlias / "C:/Projects/SandBox/web/" <Directory "C:/Projects/SandBox/web"> AllowOverride None Options None Order deny,allow Allow from all </Directory> I checked the archive `C:\Python27\Lib\site-packages\psycopg2-2.2.2-py2.7-win32.egg\`, and there is a `C:\Python27\Lib\site-packages\psycopg2-2.2.2-py2.7-win32.egg\psycopg2\tz.py` Answer: My guess would be that Python doesn't know your egg cache location (or doesn't have privileges to it). You just need to set that. More information **[here](http://lethain.com/entry/2009/feb/13/when-psycopg2-can-t-import-tz/)**. Try setting the [`WSGIPythonEggs`](http://code.google.com/p/modwsgi/wiki/ConfigurationDirectives#WSGIPythonEggs) directive.
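If it is an egg-cache problem, a hedged httpd.conf addition would look like this (the directory is made up and must be writable by the Apache service user):

    WSGIPythonEggs "C:/Projects/SandBox/egg-cache"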
How to bind engine when I want, when using declarative_base in SQLAlchemy? Question: Here's my code: from sqlalchemy import create_engine, Column, Integer from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import sessionmaker database_url = 'mysql://some_path' engine = create_engine(database_url) Base = declarative_base(engine) class Users(Base): __tablename__ = 'Users' __table_args__ = {'autoload':True} metadata = Base.metadata Session = sessionmaker(bind=engine) session = Session() It works, but... **Is it possible to bind the engine when I want, not only at import time?** So I can wrap this implementation into a class. For now, I get class Users(Base): File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.6.5-py2.5.egg/sqlalchemy/ext/declarative.py", line 1231, in __init__ _as_declarative(cls, classname, cls.__dict__) File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.6.5-py2.5.egg/sqlalchemy/ext/declarative.py", line 1122, in _as_declarative **table_kw) File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.6.5-py2.5.egg/sqlalchemy/schema.py", line 209, in __new__ table._init(name, metadata, *args, **kw) File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.6.5-py2.5.egg/sqlalchemy/schema.py", line 260, in _init msg="No engine is bound to this Table's MetaData. " File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.6.5-py2.5.egg/sqlalchemy/schema.py", line 2598, in _bind_or_error raise exc.UnboundExecutionError(msg) sqlalchemy.exc.UnboundExecutionError: No engine is bound to this Table's MetaData. Pass an engine to the Table via autoload_with=<someengine>, or associate the MetaData with an engine via metadata.bind=<someengine> when the engine is not specified: `Base = declarative_base()` Answer: At least with SQLAlchemy 0.9 you can defer the binding by using DeferredReflection. See the example in the [Using Reflection with Declarative section of the manual](http://docs.sqlalchemy.org/en/rel_0_9/orm/extensions/declarative.html#using-reflection-with-declarative). There, you can find the following example (simplified): from sqlalchemy.ext.declarative import declarative_base, DeferredReflection Base = declarative_base(cls=DeferredReflection) class Foo(Base): __tablename__ = 'foo' bars = relationship("Bar") class Bar(Base): __tablename__ = 'bar' Base.prepare(e) `e` is the engine.
Python CMS to create a video site like youtube? Question: Is anyone aware of an open source CMS written in Python that I can use to make a site like YouTube? Answer: [Django](http://www.djangoproject.com/) is a good Python Framework, as well as [CherryPy](http://www.cherrypy.org/) and [Pylons](http://pylonshq.com/). However, a framework is not a CMS. An open source video CMS would be: [Media Core](http://getmediacore.com/) Here is some info about how YouTube is built: (source: [Google Video](http://video.google.com/videoplay?docid=-6304964351441328559#)) ### Platform: 1. Apache 2. Python 3. Linux (SuSe) 4. MySQL 5. psyco, a dynamic python->C compiler 6. lighttpd for video instead of Apache ### Webservers: 1. NetScaler is used for load balancing and caching static content. 2. Run Apache with mod_fastcgi. 3. Requests are routed for handling by a Python application server. 4. Application server talks to various databases and other information sources to get all the data and formats the html page. 5. Can usually scale web tier by adding more machines. 6. The Python web code is usually NOT the bottleneck; it spends most of its time blocked on RPCs. 7. Python allows rapid flexible development and deployment. This is critical given the competition they face. 8. Usually less than 100 ms page service times. 9. Use psyco, a dynamic python->C compiler that uses a JIT compiler approach to optimize inner loops. 10. For highly CPU-intensive activities like encryption, they use C extensions. 11. Some pre-generated cached HTML for expensive-to-render blocks. 12. Row level caching in the database. 13. Fully formed Python objects are cached. 14. Some data are calculated and sent to each application so the values are cached in local memory. This is an underused strategy. The fastest cache is in your application server and it doesn't take much time to send precalculated data to all your servers. Just have an agent that watches for changes, precalculates, and sends. ### Video serving: 1. Costs include bandwidth, hardware, and power consumption. 2. Each video hosted by a mini-cluster. Each video is served by more than one machine. 3. Using a cluster means: * More disks serving content which means more speed. * Headroom. If a machine goes down others can take over. * There are online backups. 4. Servers use the lighttpd web server for video: * Apache had too much overhead. * Uses epoll to wait on multiple fds. * Switched from single process to multiple process configuration to handle more connections. 5. Most popular content is moved to a CDN (content delivery network): * CDNs replicate content in multiple places. There's a better chance of content being closer to the user, with fewer hops, and content will run over a more friendly network. * CDN machines mostly serve out of memory because the content is so popular there's little thrashing of content into and out of memory. 6. Less popular content (1-20 views per day) uses YouTube servers in various colo sites. * There's a long tail effect. A video may have a few plays, but lots of videos are being played. Random disk blocks are being accessed. * Caching doesn't do a lot of good in this scenario, so spending money on more cache may not make sense. This is a very interesting point. If you have a long tail product, caching won't always be your performance savior. * Tune RAID controller and pay attention to other lower level issues to help. * Tune memory on each machine so there's not too much and not too little. ### Serving Video Key Points: 1.
Keep it simple and cheap. 2. Keep a simple network path. Not too many devices between content and users. Routers, switches, and other appliances may not be able to keep up with so much load. 3. Use commodity hardware. The more expensive the hardware gets, the more expensive everything else gets too (support contracts). You are also less likely to find help on the net. 4. Use simple common tools. They use most tools built into Linux and layer on top of those. 5. Handle random seeks well (SATA, tweaks). ### Serving Thumbnails: 1. Surprisingly difficult to do efficiently. 2. There are like 4 thumbnails for each video so there are a lot more thumbnails than videos. 3. Thumbnails are hosted on just a few machines. 4. Saw problems associated with serving a lot of small objects: * Lots of disk seeks and problems with inode caches and page caches at the OS level. * Ran into a per-directory file limit. Ext3 in particular. Moved to a more hierarchical structure. Recent improvements in the 2.6 kernel may improve Ext3 large directory handling up to 100 times, yet storing lots of files in a file system is still not a good idea. * A high number of requests/sec as web pages can display 60 thumbnails per page. * Under such high loads Apache performed badly. * Used squid (reverse proxy) in front of Apache. This worked for a while, but as load increased performance eventually decreased. Went from 300 requests/second to 20. * Tried using lighttpd but, being single threaded, it stalled. Ran into problems with multiprocess mode because each process kept a separate cache. * With so many images, setting up a new machine took over 24 hours. * Rebooting a machine took 6-10 hours for the cache to warm up enough to not go to disk. 5. To solve all their problems they started using Google's BigTable, a distributed data store: * Avoids the small file problem because it clumps files together. * Fast, fault tolerant. Assumes it's working on an unreliable network. * Lower latency because it uses a distributed multilevel cache. This cache works across different colocation sites. ### Databases: 1. The Early Years * Use MySQL to store metadata like users, tags, and descriptions. * Served data off a monolithic RAID 10 Volume with 10 disks. * Living off credit cards so they leased hardware. When they needed more hardware to handle load it took a few days to order and get delivered. * They went through a common evolution: single server, went to a single master with multiple read slaves, then partitioned the database, and then settled on a sharding approach. * Suffered from replica lag. The master is multi-threaded and runs on a large machine so it can handle a lot of work. Slaves are single threaded and usually run on lesser machines and replication is asynchronous, so the slaves can lag significantly behind the master. * Updates cause cache misses which go to disk where slow I/O causes slow replication. * Using a replicating architecture you need to spend a lot of money for incremental bits of write performance. * One of their solutions was to prioritize traffic by splitting the data into two clusters: a video watch pool and a general cluster. The idea is that people want to watch video so that function should get the most resources. The social networking features of YouTube are less important so they can be routed to a less capable cluster. 2. The later years: * Went to database partitioning. * Split into shards with users assigned to different shards. * Spreads writes and reads. * Much better cache locality which means less IO.
* Resulted in a 30% hardware reduction. * Reduced replica lag to 0. * Can now scale database almost arbitrarily. ### Data Center Strategy 1. Used managed hosting providers at first. Living off credit cards so it was the only way. 2. Managed hosting can't scale with you. You can't control hardware or make favorable networking agreements. 3. So they went to a colocation arrangement. Now they can customize everything and negotiate their own contracts. 4. Use 5 or 6 data centers plus the CDN. 5. Videos come out of any data center. Not closest match or anything. If a video is popular enough it will move into the CDN. 6. Video is bandwidth dependent, not really latency dependent. Can come from any colo. 7. For images latency matters, especially when you have 60 images on a page. 8. Images are replicated to different data centers using BigTable. Code looks at different metrics to know who is closest.
Web Proxy to Simulate Network Problems Question: I need a way to simulate connectivity problems in an automated test suite, on Linux, and preferably from Python. Some sort of proxy that I can put in front of the web server that can hang or drop connections after one trigger or another (after X bytes transferred, etc) would be perfect. It doesn't seem too hard to build, but I'd rather grab something pre-existing, if anyone has any good recommendations. Answer: When I needed one, I found that building it yourself is the best thing. Start by raising a threaded server in Python <http://docs.python.org/dev/library/socketserver.html> (you don't have to use the class itself). It's very simple: in the new connection thread, you create a new socket and connect it to the real server. Then you put both of them in a list and send it to select.select (import select). Then, when socket x receives data, send it to y; when socket y receives data, send it to x. (Don't forget to close the sockets when you receive an empty string.) Now you can do whatever you want. If you need anything, I'm here.
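Here is a minimal sketch of the loop the answer describes (ports and the upstream address are made up; the fault-injection spot is marked):

    import select
    import socket
    import SocketServer  # "socketserver" on Python 3

    REAL_SERVER = ('localhost', 8000)   # hypothetical upstream web server
    LISTEN_ADDR = ('localhost', 8888)   # point the client under test here

    class ProxyHandler(SocketServer.BaseRequestHandler):
        def handle(self):
            upstream = socket.create_connection(REAL_SERVER)
            client = self.request
            try:
                while True:
                    readable, _, _ = select.select([client, upstream], [], [])
                    for sock in readable:
                        data = sock.recv(4096)
                        if not data:   # empty string: peer closed, tear down
                            return
                        # inject failures here: drop data, truncate it after
                        # N bytes, or time.sleep() to simulate a hung server
                        other = upstream if sock is client else client
                        other.sendall(data)
            finally:
                upstream.close()

    class ThreadedProxy(SocketServer.ThreadingMixIn, SocketServer.TCPServer):
        allow_reuse_address = True

    if __name__ == '__main__':
        ThreadedProxy(LISTEN_ADDR, ProxyHandler).serve_forever()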
How do I find mime-type in Python Question: I am trying out some CGI-scripting in Python. If a request comes to my python script, how do I find the mime-type of the filetype? UPDATE: Adding more info. Some images for now, address/image.jpg. But if I understand correctly, the mime-type is part of the headers that the web browser sends, right? So how do I read these headers? Answer: You have two options. If you're lucky the client can determine the mimetype of the file and it can be included in the form post. Usually this is the value of an input element whose name is "filetype" or something similar. Otherwise you can guess the mimetype from the file extension on the server. This is somewhat dependent on how up-to-date the mimetypes module is. Note that you can add types or override types in the module. Then you use the "guess_type" function that interprets the mimetype from the filename's extension. import mimetypes mimetypes.add_type('video/webm','.webm') ... mimetypes.guess_type(filename) UPDATE: If I remember correctly you can get the client's interpretation of the mimetype from the "Content-Type" header. A lot of the time this turns out to be 'application/octet-stream' which is almost useless. So assuming you're using the cgi module, and you're uploading files with the usual multipart form, the browser is going to guess the mimetype for you. It seems to do a decent job of it, and it gets passed through to the form.type parameter. So you can do something like this: import cgi form = cgi.FieldStorage() files_types = {}; if form.type == 'multipart/form-data': for part in form.keys(): files_types[form[part].filename] = form[part].type else: files_types[form.filename] = form.type
How to insert a string(Fri Nov 19 16:23:54 +0800 2010) into a table field whose type is TIMESTAMP? Question: I'm using Python and MySQLdb to connect to MySQL, and now I need to insert a string "Fri Nov 19 16:23:54 +0800 2010" into a field. How can I do it? Answer: You can parse that string with Python's `strptime()` method and then convert it to the string format required by MySQL using `strftime()`. Unfortunately my Python 2.6 cannot handle the `%z` directive, but you can check that code on your implementation: import datetime def test_dt(d, f): dt = datetime.datetime.strptime(d, f) print('%s -> %s' % (d, dt.strftime('%Y-%m-%d %H:%M:%S %Z'))) print('-' * 10) test_dt('Fri Nov 19 16:23:54 2010', '%a %b %d %H:%M:%S %Y') test_dt('Fri Nov 19 16:23:54 +0800 2010', '%a %b %d %H:%M:%S %z %Y') If it fails with `%z` then consider whether you need that time zone, or whether you can remove it from the string and apply it to the `dt` object another way. It seems that the `%z` issue was reported and solved: <http://bugs.python.org/issue6641>
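For completeness, a sketch of the full round trip with MySQLdb. The table and column names here are made up for illustration, and the time-zone field is dropped by hand since this Python's strptime() may not accept %z:

    import datetime
    import MySQLdb

    raw = 'Fri Nov 19 16:23:54 +0800 2010'
    parts = raw.split()
    del parts[4]  # drop the '+0800' time-zone field by hand
    dt = datetime.datetime.strptime(' '.join(parts), '%a %b %d %H:%M:%S %Y')

    conn = MySQLdb.connect(host='localhost', user='me', passwd='secret', db='test')
    cur = conn.cursor()
    # format the datetime the way MySQL's TIMESTAMP column expects
    cur.execute("INSERT INTO events (created_at) VALUES (%s)",
                (dt.strftime('%Y-%m-%d %H:%M:%S'),))
    conn.commit()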
Prevent python shelve corruption Question: How should I prevent corruption in a shelve file? Should the shelve be closed most of the time and then opened only when I need to read or edit a value? Answer: If safety of your persistent objects is of high importance in your project, using `shelve` is not a good idea. Neither is pickling objects and manually writing them into files. Consider that real databases invest huge resources (brainpower and code) to be safe in case of failures. So keep your data in a real DB. The simplest would be `sqlite`, as it comes bundled with Python. `sqlite` is quite safe and has a lot of smarts in it about keeping your data in some valid state even in case of system failures (like when someone trips on your PC's power cable).
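To make the suggestion concrete, here is a minimal sketch of a shelve-like key/value store on top of sqlite3 (bundled with Python since 2.5; the `with` form on the connection needs 2.6+). It is illustrative, not a drop-in shelve replacement: values are pickled, and every write happens in its own transaction so a crash cannot leave the file half-written the way an open shelve can:

    import cPickle
    import sqlite3

    class SqliteShelf(object):
        def __init__(self, filename):
            self.conn = sqlite3.connect(filename)
            self.conn.execute(
                'CREATE TABLE IF NOT EXISTS shelf (key TEXT PRIMARY KEY, value BLOB)')

        def __setitem__(self, key, value):
            blob = sqlite3.Binary(cPickle.dumps(value, 2))
            with self.conn:  # commits on success, rolls back on error
                self.conn.execute(
                    'INSERT OR REPLACE INTO shelf (key, value) VALUES (?, ?)',
                    (key, blob))

        def __getitem__(self, key):
            row = self.conn.execute(
                'SELECT value FROM shelf WHERE key = ?', (key,)).fetchone()
            if row is None:
                raise KeyError(key)
            return cPickle.loads(str(row[0]))

    db = SqliteShelf('data.db')
    db['answer'] = {'value': 42}
    print db['answer']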
using stored variables as regex patterns Question: Is there a way for Python to use values stored in variables as patterns in a regex? Suppose I have two variables: begin_tag = '<%marker>' end_tag = '<%marker/>' doc = '<html> something here <%marker> and here and here <%marker/> and more here <html>' How do you extract the text between begin_tag and end_tag? The tags are determined after parsing another file, so they're not fixed. Answer: Don't use a regex at all; parse the HTML intelligently! from BeautifulSoup import BeautifulSoup marker = 'mytag' doc = '<html>some stuff <mytag> different stuff </mytag> other things </html>' soup = BeautifulSoup(doc) print soup.find(marker).renderContents()
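That said, to answer the literal question: a regex pattern is just a string, so variables interpolate directly; re.escape() protects metacharacters such as the % in the tags. A quick sketch using the variables from the question:

    import re

    begin_tag = '<%marker>'
    end_tag = '<%marker/>'
    doc = '<html> something here <%marker> and here and here <%marker/> and more here <html>'

    # escape the tags so their metacharacters are matched literally
    pattern = re.escape(begin_tag) + '(.*?)' + re.escape(end_tag)
    match = re.search(pattern, doc, re.DOTALL)
    print match.group(1)   # ' and here and here '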
need fix Tkinter Error Question: I have my window, but I get this: AttributeError: 'NoneType' object has no attribute 'tk' ??? from Tkinter import * root = Tk() root.minsize(428, 285) root.maxsize(428, 285) root.resizable(width=NO, height=NO) root.title("TEST") root.wm_iconbitmap('C:\Python27\iconfile.ico') # create the canvas, size in pixels canvas = Canvas(width = 428, height = 255, bg = 'gray95') # pack the canvas into a frame/form canvas.pack(expand = YES, fill = BOTH) gif1 = PhotoImage(file = 'C:\Python27\image.gif') # put gif image on canvas # pic's upper left corner (NW) on the canvas is at x=50 y=10 canvas.create_image(0, 0, image = gif1, anchor = NW) def die(event): root.destroy() b = Button(root, text="text") b.bind("<Button-1>", die) b["command"] = die b.pack() root.mainloop() mainloop() Answer: If you get an error like "'NoneType' object has no attribute 'tk'", it means that somewhere in your code a Tkinter method is being called on something that is None instead of a widget or root window (something like `foo.tk`, where `foo` is None). Find that line and figure out why the object is None: maybe you chained `.pack()` onto a constructor (`pack()` returns None), maybe you are calling things in the wrong order, etc. In the code above, the likely culprit is the stray `mainloop()` call after `root.mainloop()`: by the time it runs, `die()` has already destroyed the root window, so the module-level `mainloop()` is invoked on a default root that no longer exists. Deleting that last line should make the error go away.
How to call Python functions dynamically Question: I have this code: fields = ['name','email'] def clean_name(): pass def clean_email(): pass How can I call `clean_name()` and `clean_email()` dynamically? For example: for field in fields: clean_{field}() I used the curly brackets because that's how I used to do it in PHP, but it obviously doesn't work. How do I do this in Python? Answer: If you don't want to use `globals` or `vars`, and don't want to make a separate module and/or class to encapsulate the functions you want to call dynamically, you can call them as attributes of the current module: import sys ... getattr(sys.modules[__name__], "clean_%s" % fieldname)()
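Putting the answer together with the question's loop, a small runnable sketch (the clean_* bodies here are stand-ins):

    import sys

    fields = ['name', 'email']

    def clean_name():
        print 'cleaning name'

    def clean_email():
        print 'cleaning email'

    this_module = sys.modules[__name__]
    for field in fields:
        # look up clean_name / clean_email as attributes of this module
        getattr(this_module, 'clean_%s' % field)()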
how do i insert inputs to find value of integrals in python? Question: I am working on a physics coursework and I am currently stuck at this section. I've been trying but I couldn't get it right. Really need help here. It's about the trapezium rule. Question: What is the value of the integral of f(x) = x^4 * (1-x)^4 / (1+x^2)? This is the code I've tried, but I can not get the answer: from math import * def f(x): f(x)=x**4*(1-x)**4/(1+x**2) return f(x) def trap0 (f,a,b,n): h= float (b-a)/n s =0.5*( f(a)+f(b)) for i in range (1,n): s=s+f(a+i*h) return s*h Answer: Your definition of `f` is bogus. This is all you need to write: def f(x): return x**4 * (1 - x)**4 / (1 + x**2) The rest of your code looks good to me, so long as you call `trap0` with floating-point arguments for `a` and `b`. >>> trap0(math.cos, 0.0, math.pi/2, 100) 0.99997943823960744 If you want to call it with integer `a` and `b` then things can go wrong, because `f` ends up doing integer division instead of floating-point division: >>> f(4.0) 1219.7647058823529 >>> f(4) 1219 The easiest fix is to coerce the division to be floating-point, perhaps like this: def g(x): return x**4 * (1 - x)**4 / (1.0 + x**2) >>> g(4.0) == g(4) True
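As a sanity check: this particular integral famously evaluates to exactly 22/7 - pi, so the trapezium result over [0, 1] should approach about 0.0012645. A self-contained sketch (restating the corrected functions so it runs on its own):

    import math

    def f(x):
        return x**4 * (1 - x)**4 / (1.0 + x**2)

    def trap0(f, a, b, n):
        h = float(b - a) / n
        s = 0.5 * (f(a) + f(b))
        for i in range(1, n):
            s = s + f(a + i * h)
        return s * h

    print trap0(f, 0.0, 1.0, 1000)   # ~0.0012644892...
    print 22.0 / 7.0 - math.pi       # 0.0012644892673496777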
I want to write a tool without usage entry barriers. Do I have to write it in C? Question: I want to write an open-source tool for use by developers. I want to eliminate entry barriers, so if they like the idea, they just get the tool and start playing with it. In particular, I don't want an "Oh, should I also install 200Mb of ThatLanguage runtime libraries? Oh, so they don't build on my latest version of Linux?" entry barrier. Should I write this tool in C, then? Or is Python, or Java, or whatever, already sufficiently widespread to not worry about this sort of things altogether (everyone already has them installed)? Well, of course I know that they are freaking hugely widespread, but still - **are there any major benefits to writing a super-lightweight zero-dependency tool** , or am I being too much of a perfectionist? Answer: Just write it first. If it is worth it people will use it. Beyond that, (almost) everyone has Java, Python, and Ruby installed (especially devs). Some languages are still esoteric enough that it might not be worth it for 'that one app' (erlang, haskell, etc.). Just write it though, that's the important part. From there it can be ported, rewritten, adopted, but none of that can happen if the tool isn't written first.
Switching from amara to lxml in Python Question: I am trying to accomplish with the lxml library something like this: <http://www.xml.com/pub/a/2005/01/19/amara.html> from amara import binderytools container = binderytools.bind_file('labels.xml') for l in container.labels.label: print l.name, 'of', l.address.city but I have had the hardest time getting my feet wet! What I want to do is: descend to the root node named 'X', then descend to its second child named 'Y', then grab all of its children named 'Z', then of those keep only the children that have an attribute 'name' set to 'bacon', then for each remaining node look at all of its children named 'W', and keep only a subset based on some filter, which looks at W's only children named A, B, and C. Then I need to process them with the following (non-optimized) pseudo-code: result = [] X = root(doc(parse(xml_file_name))) Y = X[1] # Second child Zs = Y.children() for Z in Zs: if Z.name != 'bacon': continue # skip Ws = Z.children() record = [] assert(len(Ws) == 9) W0 = Ws[0] assert(W0.A == '42') record.append(str(W0.A) + " " + W0.B + " " + W0.C) ... W1 = Ws[1] assert(W1.A == '256') ... result.append(record) This is sort of what I am trying to accomplish. Before I try to make this code cleaner, I would like to make it work. Please help, as I am lost in this API. Let me know if you have questions. Answer: import lxml.etree as le import io content='''\ <foo><X><Y>skip this</Y><Y><Z name="apple"><W>not here</W></Z> <Z name="bacon"><W><A>42</A><B>b</B><C>c</C></W><W><A>256</A><B>b</B><C>c</C></W></Z> <Z name="bacon"><W><A>42</A><B>b</B><C>c</C></W><W><A>256</A><B>b</B><C>c</C></W></Z> </Y></X></foo> ''' doc=le.parse(io.BytesIO(content)) # print(le.tostring(doc, pretty_print=True)) result=[] Zs=doc.xpath('//X/Y[2]/Z[@name="bacon"]') for Z in Zs: Ws=Z.xpath('W') record=[] assert(len(Ws)==2) #<--- Change to 9 abc=Ws[0].xpath('descendant::text()') # print(abc) # ['42', 'b', 'c'] assert(abc[0] == '42') record.append(' '.join(abc)) abc=Ws[1].xpath('descendant::text()') assert(abc[0] == '256') result.append(record) print(result) # [['42 b c'], ['42 b c']] This might be a way to tighten up the inner loop, though I'm only guessing what records you wish to keep: for Z in Zs: Ws=Z.xpath('W') assert(len(Ws)==2) #<--- Change to 9 a_vals=('42','256') for W,a_val in zip(Ws,a_vals): abc=W.xpath('descendant::text()') assert(abc[0] == a_val) result.append([' '.join(abc)]) print(result) # [['42 b c'], ['256 b c'], ['42 b c'], ['256 b c']]
Testing Google App Engine app from terminal (python cli) Question: I'm running `from appname import model`, which gives me: ImportError: No module named google.appengine.api So I add the following Python path (the only path I could `find`): `PYTHONPATH=/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine- default.bundle/Contents/Resources/google_appengine/:~/src/appname/src/ python` And then I run the command again. But that tells me: ImportError: No module named yaml I'm running Mac OS X Snow Leopard and the latest GAE. Any tips? All I want to do is run some of the methods in my model. Answer: From dev_appserver.py: DIR_PATH = os.path.abspath(os.path.dirname(os.path.realpath(__file__))) # ... EXTRA_PATHS = [ DIR_PATH, os.path.join(DIR_PATH, 'lib', 'antlr3'), os.path.join(DIR_PATH, 'lib', 'django'), os.path.join(DIR_PATH, 'lib', 'fancy_urllib'), os.path.join(DIR_PATH, 'lib', 'ipaddr'), os.path.join(DIR_PATH, 'lib', 'webob'), os.path.join(DIR_PATH, 'lib', 'yaml', 'lib'), ] # ... sys.path = EXTRA_PATHS + sys.path I think it should work if you put these bits in a separate script, and import it before importing your own code. Or, as you've pointed out, use the Appengine console in the SDK (but that's not there for Linux users).
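A sketch of such a bootstrap module, based on the EXTRA_PATHS list above. The module name (gae_paths.py) is made up, and SDK_ROOT plus the lib subdirectories (which vary by SDK version) need adjusting for your machine:

    # gae_paths.py -- import this before importing your own app code.
    import os
    import sys

    SDK_ROOT = ('/Applications/GoogleAppEngineLauncher.app/Contents/Resources/'
                'GoogleAppEngine-default.bundle/Contents/Resources/google_appengine')

    # prepend the SDK and its bundled libraries, like dev_appserver.py does
    sys.path[0:0] = [
        SDK_ROOT,
        os.path.join(SDK_ROOT, 'lib', 'antlr3'),
        os.path.join(SDK_ROOT, 'lib', 'django'),
        os.path.join(SDK_ROOT, 'lib', 'fancy_urllib'),
        os.path.join(SDK_ROOT, 'lib', 'ipaddr'),
        os.path.join(SDK_ROOT, 'lib', 'webob'),
        os.path.join(SDK_ROOT, 'lib', 'yaml', 'lib'),
    ]

After `import gae_paths`, `from appname import model` should find both the SDK and yaml.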
shutil.move -> WindowsError: [Error32] The process cannot access the file Question: I use Python 2.5 and have a problem with shutil.move print(srcFile) print(dstFile) shutil.move(srcFile, dstFile) Output: c:\docume~1\aaa\locals~1\temp\3\tmpnw-sgp D:\dirtest\d\c\test.txt ... WindowsError: [Error32] The process cannot access the file because it is being used by another process: 'c:\\docume~1\\aaa\\locals~1\\temp\\3\\tmpnw-sgp' I use it on a Windows 2003 Server. So, what's wrong here? Does anyone know? Best Regards. Answer: If you want to continue in your script use: try: shutil.move(srcFile, dstFile) except WindowsError: pass The reason you're getting error 32 is that another process on your computer or server still has that file open. You might also simply skip temp files like this one; judging by the name, it is a scratch file that probably doesn't need to be moved at all.
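If the temporary file is one your own code created, the usual root cause is that it is still open, and Windows will not move an open file. A sketch using tempfile.mkstemp (which works on Python 2.5); the destination path is made up:

    import os
    import shutil
    import tempfile

    fd, path = tempfile.mkstemp()
    os.write(fd, 'some data')
    os.close(fd)                     # Windows won't let you move an open file
    shutil.move(path, r'D:\dirtest\d\c\test.txt')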
Module uncallable? Unsure how to fix Question: I am just beginning with python and I've run in to an error I don't know how to fix. Please see the code examples below. I run `event_timer.py` and I get the following error message. Both of the files listed below are in the same folder. Traceback (most recent call last): File "E:\python\event_timer\event_timer.py", line 7, in timer = EventTimer() TypeError: 'module' object is not callable Can someone tell me what I'm missing? **event_timer.py:** import EventTimer timer = EventTimer() timer.addStep("Preheat Oven", seconds = 10) timer.addStep("Cook Pizza", seconds = 20) timer.addStep("Done!") timer.start() **EventTimer.py:** import time class Timer: event = 'Event' steps = [] def __init__(self, event = None): if event is not None: self.event = event def addStep(self, step, seconds = None, minutes = None, hours = None, days = None): if seconds is not None: unit = 'seconds' amount = seconds elif minutes is not None: unit = 'minutes' amount = minutes elif hours is not None: unit = 'hours' amount = hours elif days is not None: unit = 'days' amount = days else: print 'Invalid arguments' return False self.steps.append({'unit': unit, 'amount': amount}) return True def __timeInSeconds(self, unit, amount): if unit == 'seconds': return amount elif unit == 'minutes': return amount * 60 elif unit == 'hours': return amount * 60 * 60 elif unit == 'days': return amount * 60 * 60 * 24 else: print 'Invalid unit' return False def start(self): if len(self.steps) == 0: print 'No steps to complete' return False print "{0} has started.".format(self.event) for step in self.steps: print step.step time.sleep(self.__timeInSeconds(step.unit, step.amount)) print "Completed" print 'Event complete' Answer: When you write import EventTimer you make a new variable, `EventTimer`, pointing to a _module_ \-- the module that you've just written! Inside that module is a class, `Timer`. So to make an instance of that class, you do timer = EventTimer.Timer()
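Note that fixing the import only gets you past the first error: addStep() stores plain dicts without the step name, start() then reads step.step, step.unit and step.amount as attributes (which dicts don't have), and the class-level steps list is shared between instances. A trimmed-down sketch of the class with those issues fixed (seconds only, for brevity):

    import time

    class Timer:
        def __init__(self, event='Event'):
            self.event = event
            self.steps = []  # per-instance, not shared at class level

        def addStep(self, step, seconds=0):
            # record the step name alongside its duration
            self.steps.append({'step': step, 'seconds': seconds})

        def start(self):
            print "%s has started." % self.event
            for step in self.steps:  # dicts are indexed, not attribute-accessed
                print step['step']
                time.sleep(step['seconds'])
            print 'Event complete'

    timer = Timer()
    timer.addStep("Preheat Oven", seconds=1)
    timer.addStep("Done!")
    timer.start()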
Screen Scraping in Python Question: I am currently trying to screen scrape a website to put info into a dictionary. I am using urllib2 and BeautifulSoup. I cannot figure out how to parse the web page's source info to get what I want and to read it into a dictionary. The info I want is displayed as `<title>Nov 24 | 8:00AM | Sole In. Peace Out. </title>` in the source code. I am thinking of using a regular expression to read in the line, convert the time and date to a datetime, and then parse the line to read the data into a dictionary. The dictionary output should be something along the lines of `[ { "date": datetime(2010, 11, 24, 23, 59), "title": "Sole In. Peace Out.", } ]` Current Code: from BeautifulSoup import BeautifulSoup import re import urllib2 url = 'http://events.cmich.edu/RssStudentEvents.aspx' response = urllib2.urlopen(url) html = response.read() soup = BeautifulSoup(html) Sorry for the wall of text, and thank you for your time and help! Answer: Something like this: titletext = soup.findAll('title')[1].string #assuming it's the second title element.. I've seen worse in html import datetime datetext = titletext.split("|")[0].strip() # strip, or strptime chokes on the trailing space title = titletext.split("|")[2].strip() date = datetime.datetime.strptime(datetext,"%b %d").replace(year=2010) the_final_dict = {'date':date,'title':title} `findAll()` returns all instances of the search element, so you can just treat it like any other list. That should just about do it :) Edit: small fix Edit2: fix from comments below
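If you also want the time of day ("8:00AM") in the datetime, parse the first two fields together. A sketch assuming every title in the feed looks like "Mon DD | H:MMAM | title":

    import datetime

    titletext = 'Nov 24 | 8:00AM | Sole In. Peace Out. '
    datepart, timepart, title = [s.strip() for s in titletext.split("|")]
    date = datetime.datetime.strptime('%s %s' % (datepart, timepart),
                                      '%b %d %I:%M%p').replace(year=2010)
    event = {'date': date, 'title': title}
    print event
    # {'date': datetime.datetime(2010, 11, 24, 8, 0), 'title': 'Sole In. Peace Out.'}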
How to write PIL image filter for plain pgm format? Question: How can I write a filter for python imaging library for pgm plain ascii format (P2). Problem here is that basic PIL filter assumes constant number of bytes per pixel. My goal is to open feep.pgm with Image.open(). See <http://netpbm.sourceforge.net/doc/pgm.html> or below. Alternative solution is that I find other well documented ascii grayscale format that is supported by PIL and all major graphics programs. Any suggestions? feep.pgm: P2 # feep.pgm 24 7 15 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 3 3 3 3 0 0 7 7 7 7 0 0 11 11 11 11 0 0 15 15 15 15 0 0 3 0 0 0 0 0 7 0 0 0 0 0 11 0 0 0 0 0 15 0 0 15 0 0 3 3 3 0 0 0 7 7 7 0 0 0 11 11 11 0 0 0 15 15 15 15 0 0 3 0 0 0 0 0 7 0 0 0 0 0 11 0 0 0 0 0 15 0 0 0 0 0 3 0 0 0 0 0 7 7 7 7 0 0 11 11 11 11 0 0 15 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 **edit:** Thanks for the answer, It works... **but** I need a solution that uses Image.open(). Most of python programs out there use PIL for graphics manipulation (google: python image open). Thus, I need to be able to register a filter to PIL. Then, I can use any software that uses PIL. I now think mostly scipy, pylab, etc. dependent programs. **edit** Ok, I think I got it now. Below is the wrapper pgm2pil.py: import Image import numpy def pgm2pil(filename): try: inFile = open(filename) header = None size = None maxGray = None data = [] for line in inFile: stripped = line.strip() if stripped[0] == '#': continue elif header == None: if stripped != 'P2': return None header = stripped elif size == None: size = map(int, stripped.split()) elif maxGray == None: maxGray = int(stripped) else: for item in stripped.split(): data.append(int(item.strip())) data = numpy.reshape(data, (size[1],size[0]))/float(maxGray)*255 return numpy.flipud(data) except: pass return None def imageOpenWrapper(fname): pgm = pgm2pil(fname) if pgm is not None: return Image.fromarray(pgm) return origImageOpen(fname) origImageOpen = Image.open Image.open = imageOpenWrapper There is a slight upgrade to misha's answer. Image.open has to be saved in order to prevent never ending loops. If pgm2pil returns None wrapper calls pgm2pil which returns None which calls pgm2pil... Below is the test function (feep_false.pgm is a malformed pgm e.g. "P2" -> "FOO" and lena.pgm is just **the** image file): import pgm2pil import pylab try: pylab.imread('feep_false.pgm') except IOError: pass else: raise ValueError("feep_false should fail") pylab.subplot(2,1,1) a = pylab.imread('feep.pgm') pylab.imshow(a) pylab.subplot(2,1,2) b = pylab.imread('lena.png') pylab.imshow(b) pylab.show() Answer: The way I currently deal with this is through [numpy](http://numpy.scipy.org/): 1. Read image into a 2D `numpy` array. You don't _need_ to use `numpy`, but I've found it easier to use than the regular Python 2D arrays 2. Convert 2D numpy array into `PIL.Image` object using `PIL.Image.fromarray` If you insist on using `PIL.Image.open`, you could write a wrapper that attempts to load a PGM file first (by looking at the header). If it's a PGM, load the image using the steps above, otherwise just hands off responsibility to `PIL.Image.open`. Here's some code that I use to get a **PBM** image into a [numpy](http://numpy.scipy.org/) array. import re import numpy def pbm2numpy(filename): """ Read a PBM into a numpy array. Only supports ASCII PBM for now. 
""" fin = None debug = True try: fin = open(filename, 'r') while True: header = fin.readline().strip() if header.startswith('#'): continue elif header == 'P1': break elif header == 'P4': assert False, 'Raw PBM reading not implemented yet' else: # # Unexpected header. # if debug: print 'Bad mode:', header return None rows, cols = 0, 0 while True: header = fin.readline().strip() if header.startswith('#'): continue match = re.match('^(\d+) (\d+)$', header) if match == None: if debug: print 'Bad size:', repr(header) return None cols, rows = match.groups() break rows = int(rows) cols = int(cols) assert (rows, cols) != (0, 0) if debug: print 'Rows: %d, cols: %d' % (rows, cols) # # Initialise a 2D numpy array # result = numpy.zeros((rows, cols), numpy.int8) pxs = [] # # Read to EOF. # while True: line = fin.readline().strip() if line == '': break for c in line: if c == ' ': continue pxs.append(int(c)) if len(pxs) != rows*cols: if debug: print 'Insufficient image data:', len(pxs) return None for r in range(rows): for c in range(cols): # # Index into the numpy array and set the pixel value. # result[r, c] = pxs[r*cols + c] return result finally: if fin != None: fin.close() fin = None return None You will have to modify it slightly to fit your purposes, namely: * Deal with P2 (ASCII, greyscale) instead of P1 (ASCII, bilevel). * Use a different container if you're not using numpy. Normal Python 2D arrays will work just fine. **EDIT** Here is how I would handle a wrapper: def pgm2pil(fname): # # This method returns a PIL.Image. Use pbm2numpy function above as a # guide. If it can't load the image, it returns None. # pass def wrapper(fname): pgm = pgm2pil(fname) if pgm is not None: return pgm return PIL.Image.open(fname) # # This is the line that "adds" the wrapper # PIL.Image.open = wrapper I didn't write `pgm2pil` because it's going to be very similar to `pgm2numpy`. The only difference will be that it's storing the result in a `PIL.Image` as opposed to a `numpy` array. I also didn't test the wrapper code (sorry, a bit short on time at the moment) but it's a fairly common approach so I expect it to work. Now, it sounds like you want **other** applications that use PIL for image loading to be able to handle PGMs. It's possible using the above approach, but you need to be sure that the above wrapper code gets added **before** the first call to `PIL.Image.open`. You can make sure that happens by adding the wrapper source code to the PIL source code (if you have access).
Creating a .zip archive with Python in MS DOS Question: I am new to programming. I am trying to learn Python using Swaroop CH's "A Byte of Python". One example is to create a program which will back up some files from one directory to another and compress them into a .zip format. Unfortunately the example he gives is only helpful if you happen to be a Linux/Unix user. For Windows users he says only "Windows users can use the Info-Zip program" but doesn't elaborate further. This is the code he provides ... #!/usr/bin/python # Filename : backup_ver1.py import os import time # 1. The files and directories to be backed up are specified in a list. source = [r'C:\Users\ClickityCluck\Documents'] # 2. The backup must be stored in a main backup directory target_dir = r'C:\Backup' # 3. Zip seems good # 4. Let's make the name of the file the current date/time target = target_dir + time.strftime('%Y%m%d%H%M%S') + '.zip' # 5. We use the zip command to put the files in a zip archive zip_command = "zip -qr '%s' %s" % (target, ''.join(source)) # Run Backup if os.system(zip_command) == 0: print "Succesful backup to", target else: print 'BACKUP FAILED :(' Can anyone lay out for me a way to do this on the command line in Windows 7? Thank you for your time and I apologize in advance if I have failed to provide some pertinent information :) Answer: `zip` is a command line utility to create/update/extract ZIP archives, available on Unix/Linux/Mac OS X. If you want to archive files using a command line utility, you should find and install an appropriate one (`compress`, for example, is part of the resource kit). Another way is to use Python's `zipfile` module and make a useful command-line utility for Windows :) BTW, why does your question refer to MS DOS?
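For the zipfile route the answer mentions, here is a sketch reusing the paths from the question; it works unchanged on Windows and Linux (os.path.relpath needs Python 2.6+):

    import os
    import time
    import zipfile

    source = r'C:\Users\ClickityCluck\Documents'
    target_dir = r'C:\Backup'
    target = os.path.join(target_dir, time.strftime('%Y%m%d%H%M%S') + '.zip')

    archive = zipfile.ZipFile(target, 'w', zipfile.ZIP_DEFLATED)
    for dirpath, dirnames, filenames in os.walk(source):
        for name in filenames:
            path = os.path.join(dirpath, name)
            # store paths relative to the source directory
            archive.write(path, os.path.relpath(path, source))
    archive.close()
    print 'Successful backup to', target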
How can I use Python to get the system hostname? Question: I'm writing a chat program for a local network. I would like be able to identify computers and get the user-set computer name with Python. Answer: Use [`socket`](http://docs.python.org/library/socket.html) and its [`gethostname()`](http://docs.python.org/library/socket.html#socket.gethostname) functionality. This will get the `hostname` of the computer where the Python interpreter is running: import socket print(socket.gethostname())
Accessing static java methods in Python through jython Question: I am currently trying to access a static method of a Java class from within Python. I import as normal, then I try to reach the static method through the class object. from com.exmaple.util import Foo Foo. __class___.run_static_method() This doesn't seem to work. Suggestions? What am I doing wrong? Answer: Try using Foo.run_static_method()
Executable path to Mac App Question: In a py2app/Mac Application Bundle, is there a way to spawn another instance of the same app from within the app, passing different command line arguments? Or, given a Mac app bundle, how can I run it from the command line and pass some arguments too? Edit1: forking is a limited option, which may not work with 3rd party executables bundled with the app, and I need to run this on Mac and Windows. Edit2: **Question is how to run a bundled python script using the subprocess module** ## Details: I am using py2app to generate an app bundle for my application. My application has two parts 1. MainApp: which is the UI 2. BackgroundApp: a background process, which does the real job Both MainApp and BackgroundApp have been implemented as Python scripts, and they are actually the same Python script with different command lines, e.g. python myapp.py python myapp.py --backgroundprocess So when I run `python myapp.py` it automatically starts the background process based on the program path, but as I have now bundled my app with py2app I am not sure what executable I should be calling and passing the `--backgroundprocess` option to. ## What I have tried 1. `$ open MyApp.app/` this opens the app but I can't pass the arguments to it, as they will be arguments for the open command and will not be passed to my app 2. `$ MyApp.app/Contents/MacOS/MyApp --backgroundprocess` opens the app but not the background process, as it seems arguments are not being passed to the app; it also throws this error Traceback (most recent call last): File "/Users/agyey/projects/myapp/release4.26/py2exe/dist/MyApp.app/Contents/Resources/run.py", line 4, in <module> from renderprocess import RenderEngineApp File "renderprocess/RenderEngineApp.pyc", line 6, in <module> File "wx/__init__.pyc", line 45, in <module> File "wx/_core.pyc", line 4, in <module> File "wx/_core_.pyc", line 18, in <module> File "wx/_core_.pyc", line 11, in __load ImportError: dlopen(/Users/agyey/projects/myapp/release4.26/py2exe/dist/MyApp.app/Contents/Resources/lib/python2.5/lib-dynload/wx/_core_.so, 2): Library not loaded: @executable_path/../Frameworks/libwx_macud-2.8.0.dylib Referenced from: /Users/agyey/projects/myapp/release4.26/py2exe/dist/MyApp.app/Contents/Resources/lib/python2.5/lib-dynload/wx/_core_.so Reason: Incompatible library version: _core_.so requires version 7.0.0 or later, but libwx_macud-2.8.0.dylib provides version 2.6.0 Conclusion: it looks like it may not be possible [Launch an app on OS X with command line](http://stackoverflow.com/questions/1308755/launch-an-app-on-os-x-with-command-line) `open` doesn't accept arguments. Answer: **How to find the cwd and execute an arbitrary supplied binary** First place the binary in AppName.app/Contents/Resources, then run this code from the python script: import os import subprocess process=subprocess.Popen((os.getcwd() + "/3rd_party_binary","--subprocess")) process.poll() # is running? **How to properly spawn two versions of your python app** Fork is the old, tried way to do this on Mac OS X (Unix): #!/usr/bin/env python import os, sys pid = os.fork() if pid: # we are the parent background_process.start() os.waitpid(pid, 0) # make sure the child process gets cleaned up else: # we are the child gui_app.start() sys.exit(0) [Multiprocessing in Python](http://www.ibm.com/developerworks/aix/library/au-multiprocessing/) is apparently something that works on Windows as well, which I guess would be interesting to you.
Insert javascript at top of including file in Jinja 2 Question: In Jinja2, I would like the following to work as it looks like it should, by running: from jinja2 import Environment, FileSystemLoader env = Environment(loader=FileSystemLoader('.')) template = env.get_template('x.html') print template.render() Essentially the objective is to coalesce all the javascript into the `<head>` tags by using a a `{% call js() %} /* some js */ {% endcall %}` macro. * * * ## x.html <html> <head> <script type="text/javascript> {% block head_js %}{% endblock %} </script> </head> <body> {% include "y.html" %} </body> </html> * * * ## y.html {% macro js() -%} // extend head_js {%- block head_js -%} {{ super() }} try { {{ caller() }} } catch (e) { my.log.error(e.name + ": " + e.message); } {%- endblock -%} {%- endmacro %} Some ... <div id="abc">text</div> ... {% call js() %} // jquery parlance: $(function () { $("#abc").css("color", "red"); }); {% endcall %} * * * ## Expected result When I run X.html through jinja2, I would expect the result to be: <html> <head> <script type="text/javascript> try { {{ $("#abc").css("color", "red"); }} } catch (e) { usf.log.error(e.name + ": " + e.message); } </script> </head> <body> Some ... <div id="abc">text</div> ... </body> </html> * * * ## Actual result The actual results are not encouraging. I get a couple types of potentially illuminating errors, e.g.: > TypeError: macro 'js' takes no keyword argument 'caller' or, when I try adding another basis macro such as {% macro js2() -%} {%- block head_js -%} // ... something {%- endblock -%} {%- endmacro %} I get the following exception > jinja2.exceptions.TemplateAssertionError: block 'head_js' defined twice I feel as though I am running into a design issue regarding the precedence of the `block` tags over the `macro` tags (i.e. macros do not seem to encapsulate block tags in the way I expect). * * * I suppose my questions are quite simple: 1. Can Jinja2 do what I am attempting? If so, how? 2. If not, is there another Python based templating engine that does support this sort of pattern (e.g. mako, genshi, etc.), which would work without issue in Google App Engine Thank you for reading - I appreciate your input. Brian * * * # Edit: I'm trying to write an extension to resolve this problem. I'm halfway there -- using the following code: from jinja2 import nodes, Environment, FileSystemLoader from jinja2.ext import Extension class JavascriptBuilderExtension(Extension): tags = set(['js', 'js_content']) def __init__(self, environment): super(JavascriptBuilderExtension, self).__init__(environment) environment.extend( javascript_builder_content = [], ) def parse(self, parser): """Parse tokens """ tag = parser.stream.next() return getattr(self, "_%s" % str(tag))(parser, tag) def _js_content(self, parser, tag): """ Return the output """ content_list = self.environment.javascript_builder_content node = nodes.Output(lineno=tag.lineno) node.nodes = [] for o in content_list: print "\nAppending node: %s" % str(o) node.nodes.extend(o[0].nodes) print "Returning node: %s \n" % node return node def _js(self, parser, tag): body = parser.parse_statements(['name:endjs'], drop_needle=True) print "Adding: %s" % str(body) self.environment.javascript_builder_content.append(body) return nodes.Const('<!-- Slurped Javascript -->') env = Environment( loader = FileSystemLoader('.'), extensions = [JavascriptBuilderExtension], ) This makes it simple to add Javascript to the end of a template ... e.g. 
<html> <head></head> <body> {% js %} some javascript {{ 3 + 5 }} {% endjs %} {% js %} more {{ 2 }} {% endjs %} <script type="text/javascript"> {% js_content %} </script> </body> </html> Running `env.get_template('x.html').render()` will result in some illuminating comments and the expected output of: <html> <head> <script type="text/javascript> </script> </head> <body> <!-- Slurped Javascript --> <!-- Slurped Javascript --> <script type="text/javascript"> some javascript 8 more 2 </script> </body> </html> Of course, this isn't the same as having the script in the head, as hoped, but at least it's conveniently coalesced into one place. However, the solution is not complete because when you have a `{% include "y.html" %}` in there, where "y.html" includes a `{% js %}` statement, the `{% js_content %}` gets called before the include's `{% js %}` statement (i.e. `x.html` is fully parsed before `y.html` starts). I also need to, but have not yet, inserted constant nodes that would have the static javascript `try/catch`, which I indicated I wanted to have in there. This is not an issue. I'm pleased to be making progress, and I'm grateful for input. I've opened the related question: [Jinja2 compile extension after includes](http://stackoverflow.com/questions/4294837/jinja2-compile-extension- after-includes) * * * ## Edit Solution class JavascriptBuilderExtension(Extension): tags = set(['js']) def __init__(self, environment): super(JavascriptBuilderExtension, self).__init__(environment) environment.extend(jbc = "",) def parse(self, parser): """Parse tokens """ tag = parser.stream.next() body = parser.parse_statements(['name:endjs'], drop_needle=True) return nodes.CallBlock( self.call_method('_jbc', [], [], None, None), [], [], body ).set_lineno(tag.lineno) def _jbc(self, caller=None): self.environment.jbc += "\ntry { %s } catch (e) { ; };" % caller() return "<!-- Slurped -->" After completed, the environment will contain a variable `jbc` that has all the Javascript. I can insert this via, for example, `string.Template`. * * * Answer: From my comment: > If you would use extend instead of include you could do it. But because of > the full separation between the parse and render step you won't be able to > change the context of the parent scope till after it's too late. Also, the > Jinja context is supposed to be immutable. Example: **base.html** <html> <head> {% block head %} <title>{% block title %}This is the main template{% endblock %}</title> <script type="text/javascript"> {% block head_js %} $(function () { $("#abc").css("color", "red"); }); {% endblock %} </script> {% endblock head_js %} </head> <body> {% block body %} <h1>{% block body_title %}This is the main template{% endblock body_title %}</h1> {% endblock body %} </body> </html> **some_page.html** {% block title %}This is some page{% endblock title %} {% block head_js %} {{ super() }} try { {{ caller() }} } catch (e) { my.log.error(e.name + ": " + e.message); } // jquery parlance: {% endblock head_js %}
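That final step might look like this: a sketch assuming the `<script>` body in the template carries a literal `$js_content` placeholder for string.Template to fill, and reusing the `env` built above:

    from string import Template

    rendered = env.get_template('x.html').render()  # fills env.jbc as a side effect
    html = Template(rendered).substitute(js_content=env.jbc)
    print html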
How to add custom parameters to an URL query string with Python? Question: I need to add custom parameters to an URL query string using Python. Example: This is the URL that the browser is fetching (GET): /scr.cgi?q=1&ln=0 then some python commands are executed, and as a result I need to set the following URL in the browser: /scr.cgi?q=1&ln=0&SOMESTRING=1 Is there some standard approach? Answer: You can use [`urlsplit()`](http://docs.python.org/library/urlparse.html#urlparse.urlsplit) and [`urlunsplit()`](http://docs.python.org/library/urlparse.html#urlparse.urlunsplit) to break apart and rebuild a URL, then use [`urlencode()`](http://docs.python.org/library/urllib.html#urllib.urlencode) on the parsed query string: from urllib import urlencode from urlparse import parse_qs, urlsplit, urlunsplit def set_query_parameter(url, param_name, param_value): """Given a URL, set or replace a query parameter and return the modified URL. >>> set_query_parameter('http://example.com?foo=bar&biz=baz', 'foo', 'stuff') 'http://example.com?foo=stuff&biz=baz' """ scheme, netloc, path, query_string, fragment = urlsplit(url) query_params = parse_qs(query_string) query_params[param_name] = [param_value] new_query_string = urlencode(query_params, doseq=True) return urlunsplit((scheme, netloc, path, new_query_string, fragment)) Use it as follows: >>> set_query_parameter("/scr.cgi?q=1&ln=0", "SOMESTRING", 1) '/scr.cgi?q=1&ln=0&SOMESTRING=1'
Help building a mac application from python using py2app? Question: I have a Tkinter app written in python, and I want to make "native" (easy to run) mac and windows executables of it. I've successfully built a windows .exe using py2exe, but the equivalent process with py2app isn't working. Here's my setup.py: from setuptools import setup import sys MAIN_SCRIPT = "myapp.py" WINDOWS_ICON = "myicon.ico" MAC_ICON = "myicon.icns" if sys.platform in ("win32", "win64"): # does win64 exist? import py2exe setup( windows=[{ "script":MAIN_SCRIPT, "icon_resources":[(0x0004, WINDOWS_ICON)] }], ) elif sys.platform == "darwin": import py2app setup( app=[MAIN_SCRIPT], # doesn't include the icon yet setup_requires=["py2app"], ) I just `cd` to my app directory and run `python setup.py py2app`. The .app appears without errors, but it crashes on launch with "myapp has encountered a fatal error, and will now terminate." I'm running Snow Leopard, and I've tried this with both the standard Apple Python 2.6 and python25 from MacPorts. I read somewhere that it's better to use a different Python because py2app won't bundle the system version in your app. EDIT: Here's what the mac console has to say about it: 11/27/10 1:54:44 PM [0x0-0x80080].org.pythonmac.unspecified.myapp[77495] dlsym(0x10b120, Py_SetProgramName): symbol not found 11/27/10 1:54:46 PM [0x0-0x80080].org.pythonmac.unspecified.myapp[77495] 0x99274242 11/27/10 1:54:46 PM com.apple.launchd.peruser.501[185] ([0x0-0x80080].org.pythonmac.unspecified.myapp[77495]) Exited with exit code: 255 Answer: Turns out it was a problem with using Snow Leopard. I tried it on a Leopard machine at school and it builds fine.
Reliably force Return-Path with Python Question: I'm a Python junior, so keep that in mind. In a Python script, I need to set a Return-Path address that is different than the sender's address. (I'm using Gmail as SMTP server.) I've done lots of searching on this question and found plenty of "answers", but no solutions. I tried this link [Setting Return-Path with Python sendmail for a MIME message](http://stackoverflow.com/questions/3337055/setting-return- path-with-python-sendmail-for-a-mime-message) but it's not working for me at all. I can change the "To:" address that the email recipient sees, but when they click "Reply", it's back to the sending email address again. This is the function that I'm trying to write. It works well enough, except that I need to force a different Return-Path. #!/usr/bin/python import smtplib import os from email.MIMEMultipart import MIMEMultipart from email.MIMEBase import MIMEBase from email.MIMEText import MIMEText from email.Utils import COMMASPACE, formatdate from email import Encoders import sap_mailserverdata as sf def send_mail(sent_to, subject, body_text, sent_from_addr='', sent_from_name='', files=[], cc=[], bcc=[]): """Send emails with or without attachments.""" assert type(sent_to)==list assert type(files)==list assert type(cc)==list assert type(bcc)==list message = MIMEMultipart() message['From'] = sent_from_addr message['To'] = COMMASPACE.join(sent_to) message['Date'] = formatdate(localtime=True) message['Subject'] = subject message['Cc'] = COMMASPACE.join(cc) message.preamble = 'You need a MIME enabled mail reader to see this message.\n' message.attach(MIMEText(body_text, 'html')) for f in files: part = MIMEBase('application', 'octet-stream') part.set_payload(open(f, 'rb').read()) Encoders.encode_base64(part) part.add_header('Content-Disposition', 'attachment; filename="%s"' % os.path.basename(f)) message.attach(part) addresses = [] for x in sent_to: addresses.append(x) for x in cc: addresses.append(x) for x in bcc: addresses.append(x) mail_server = smtplib.SMTP(sf.server, sf.server_port) mail_server.ehlo() mail_server.set_debuglevel(1) mail_server.starttls() mail_server.login(sf.username, sf.password) mail_server.sendmail(sent_from_addr, addresses, message.as_string()) mail_server.quit() What am I missing with this function to be able to reliably specify a different replyto Return-Path? Answer: Reply-to and return path are two distinct beasts. See the [RFC](http://www.faqs.org/rfcs/rfc2822.html). You can set Reply-to with: msg['reply-to'] = '[email protected]' The return-path is set by the MTA to the address that receives bounces. It is controlled by the server administrator, so unless you work for Google I don't think this is under your control. Most of the time one is after "Reply-to"; if you really need to change the return path you must use a SMTP server under your control and google for how to do this for the specific MTA you are using - many will have a white list of users and/or hosts that can override the return path.
Getting macports django and python to work together Question: I'm a webdeveloper and I have a django project that I need to work on. I am running mac OSX 10.6.5 on a macbook pro. I used macports to install django and python 2.6. I now have some sort of problem, possibly related to my PATH / PYTHONPATH that prevents me from running django. In terminal echo $PATH gives: echo $PATH /Library/Python/2.6/site-packages:/opt/local/bin:/opt/local/sbin:/opt/local/bin:/opt/local/sbin:/opt/local/bin:/opt/local/sbin:/opt/local/bin:/opt/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin and echo $PYTHONPATH gives nothing. sudo port installed gives : sudo port installed The following ports are currently installed: autoconf @2.65_1 (active) automake @1.11.1_0 (active) bash-completion @1.1_8 bash-completion @1.2_0 (active) bzip2 @1.0.5_3+darwin bzip2 @1.0.6_0 (active) db46 @4.6.21_6 (active) expat @2.0.1_0 expat @2.0.1_1 (active) fontconfig @2.8.0_0+macosx (active) freetype @2.3.12_0+macosx (active) gdbm @1.8.3_2 (active) gettext @0.18_0 gettext @0.18.1.1_2 (active) gperf @3.0.4_0 (active) help2man @1.38.2_0 (active) ImageMagick @6.6.2-0_0+q16 (active) jpeg @8a_0 (active) lcms @1.19_2 (active) libiconv @1.13.1_0 (active) libpng @1.2.43_0 (active) libtool @2.2.6b_1+darwin (active) libxml2 @2.7.7_0 (active) m4 @1.4.14_0 (active) ncurses @5.7_0+darwin_10 ncurses @5.7_1 (active) ncursesw @5.7_0+darwin_10 ncursesw @5.7_1 (active) openssl @1.0.0b_0 (active) p5-locale-gettext @1.05_2 (active) p7zip @9.04_0 (active) perl5 @5.8.9_0 (active) perl5.8 @5.8.9_3 (active) pkgconfig @0.25_0 (active) py26-distribute @0.6.14_0 (active) py26-django @1.2.3_0+bash_completion (active) python26 @2.6.6_0+no_tkinter (active) readline @6.1.002_0 (active) sqlite3 @3.7.3_0 (active) tiff @3.9.2_3+macosx (active) xorg-bigreqsproto @1.1.0_0 (active) xorg-inputproto @2.0_0 (active) xorg-kbproto @1.0.4_0 (active) xorg-libice @1.0.6_0 (active) xorg-libsm @1.1.1_0 (active) xorg-libX11 @1.3.3_0 (active) xorg-libXau @1.0.5_0 (active) xorg-libXdmcp @1.0.3_0 (active) xorg-libXext @1.1.1_0 (active) xorg-libXt @1.0.8_0 (active) xorg-util-macros @1.7.0_0 (active) xorg-xcmiscproto @1.2.0_0 (active) xorg-xextproto @7.1.1_0 (active) xorg-xf86bigfontproto @1.2.0_0 (active) xorg-xproto @7.0.16_0 (active) xorg-xtrans @1.2.5_0 (active) zlib @1.2.5_0 (active) and when I type python I get: python Python 2.6.1 (r261:67515, Jun 24 2010, 21:47:49) [GCC 4.2.1 (Apple Inc. build 5646)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> I'm pretty certain that this is the incorrect version. When I try and test if django is available to python I get: >>> import django Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named django >>> Could anyone help me figure out what is going on here? I only need to get django running so that I can view and work on the django project that my colleague sent me. Thanks for any help in advance! Answer: First, Macports writes the file ~/.profile to set its PATH variables. If you have created a ~/.bash_profile file then ~/.profile will be **ignored**. You will have to copy the contents over. To see what python version Macports has selected use: port select --list python which will show you something like this: Available versions for python: none python25-apple python26-apple python27 (active) python32 To tell Macports to use a specific version use: port select --set python python27 That should get your python version correct. 
You can now use Python's easy_install to install Django, or use the MacPorts distribution of Django.
RTMP: check if stream is online with Python Question: I have several Flash streams and I want to display only active/live/online streams. Can someone provide sample code that can check the status of a stream, or point out where I can grab it? (I think Red5 and RTMPy should have this, but I have no experience at all with RTMP; there is also the RTMP specification, but the wiki says it is incomplete.) My target language is Python, but code in any language will be helpful. Answer: If the connection fails or is lost, the code below will print an error message. from twisted.internet import reactor from rtmpy.client import ClientFactory reactor.connectTCP('localhost', 1935, ClientFactory()) reactor.run()
Scrape Facebook in Python Question: I'm interested in getting the number of friends each of my friends on Facebook has. Apparently the official Facebook API does not allow getting the friends of friends, so I need to get around this (somehwhat sensible) limitation somehow. I tried the following: import sys import urllib, urllib2, cookielib username = '[email protected]' password = 'mypassword' cj = cookielib.CookieJar() opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj)) login_data = urllib.urlencode({'email' : username, 'pass' : password}) request = urllib2.Request('https://login.facebook.com/login.php') request.add_header('User-Agent','Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.12) Gecko/20101027 Fedora/3.6.12-1.fc14 Firefox/3.6.12') opener.open(request, login_data) resp = opener.open('http://facebook.com') print resp.read() but I only end up with a captcha page. Any idea how FB is detecting that the request is not from a "normal" browser? I could add an extra step and solve the captcha but that would add unnecessary complexity to the program so I would rather avoid it. When I use a web browser with the same User-Agent string I don't get a captcha. Alternatively, does anyone have any saner ideas on how to accomplish my goal, i.e. get a list of friends of friends? Answer: Have you tried tracing and comparing HTTP transactions with Fiddler2 or Wireshark? Fiddler can even trace https, as long as your client code can be made to work with bogus certs.
Python - Way to encode foreign characters in format Python can work with? Question: I'm working with large files in French and German. Basically, I'm writing strings of characters from one file to another, collecting data from them, and so forth. Unfortunately, I have no idea what to import in order to let Python handle these characters. Even when collecting data from files that Python has already converted (in French you get weird things like écouteur ça), I get key errors when checking dicts for things that I _know_ have already been placed in that dict, but only when the items have special characters in them, as in the example of écouteur ça. For example, when the tuple ('écouteur', 'ça') has been added to a dict which collects the frequency with which any given pair of words occurs together, you get a key error when probing that dict for the tuple ('écouteur', 'ça'), but not when probing the dict for other tuples that don't contain the wacky characters. Does anyone know a quick way to get around this issue at every level? Best, Georgina Answer: ["Unicode in Python, Completely Demystified"](http://farmdev.com/talks/unicode/)
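The usual fix in Python 2 is to decode to unicode at the boundaries and keep everything, including dict keys, as unicode inside the program. A sketch, assuming the files are UTF-8 (swap in 'latin-1' or whatever your files really use); the filename is made up:

    import codecs

    counts = {}
    infile = codecs.open('french.txt', 'r', encoding='utf-8')  # yields unicode lines
    for line in infile:
        words = line.split()
        for pair in zip(words, words[1:]):
            counts[pair] = counts.get(pair, 0) + 1
    infile.close()

    # probe with unicode keys, never byte strings
    print counts.get((u'\xe9couteur', u'\xe7a'), 0)   # ('écouteur', 'ça')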
Is it possible to reset locale to the portable locale? Question: According to the Python documentation: > "a program which has not called `setlocale(LC_ALL, '')` runs using the > portable 'C' locale. After having set the locale with `setlocale(LC_ALL, '')` is it possible to reset the locale back to the "portable" state? I work on a library which includes some misbehaved components which attempt to globally reset the locale to the region specific - I need to find a way to revert the locale back to the portable state. import locale loc = locale.getlocale(locale.LC_ALL) # get current locale assert loc == (None, None) # Locale is unset, therefore in the "portable" state. locale.setlocale(locale.LC_ALL, '') # use user's preferred locale # Loc is not set to regional default ???? DO SOMETHING HERE assert loc == locale.getlocale(locale.LC_ALL) # I want to make this true! Needs to work on Python 2.4.4 on Windows XP 32bit Answer: You could try: `locale.setlocale(locale.LC_ALL, loc)`. >>> locale.getlocale(locale.LC_ALL) (None, None) >>> locale.setlocale(locale.LC_ALL, "") 'en_US.utf8' >>> locale.getlocale(locale.LC_ALL) ('en_US', 'UTF8') >>> locale.setlocale(locale.LC_ALL, "C") 'C' >>> locale.getlocale(locale.LC_ALL) (None, None) >>> locale.setlocale(locale.LC_ALL, (None,None)) 'C' >>> locale.getlocale(locale.LC_ALL) (None, None)
How does Python keep track of modules installed with eggs? Question: If I have a module, `foo`, in `Lib/site-packages`, I can just `import foo` and it will work. However, when I install stuff from eggs, I get something like `blah-4.0.1-py2.7-win32.egg` as a folder, with the module contents inside, yet I still only need do `import foo`, not anything more complicated. How does Python keep track of eggs? It is not just dirname matching as if I drop that folder into a Python installation without going through dist-utils, it does not find the module. To be clearer: I just installed zope. The folder name is "zope.interface-3.3.0-py2.7-win32.egg". This works: Python 2.7.1 (r271:86832, Nov 27 2010, 18:30:46) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import zope.interface >>> I create a "blah-4.0.1-py2.7-win32.egg" folder with an empty module "haha" in it (and `__init__.py`). This does not work: Python 2.7.1 (r271:86832, Nov 27 2010, 18:30:46) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import blah.haha Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named blah.haha >>> This does, though: Python 2.7.1 (r271:86832, Nov 27 2010, 18:30:46) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> from pkg_resources import require >>> require("blah>=1.0") [blah 4.0.1 (c:\python27\lib\site-packages\blah-4.0.1-py2.7-win32.egg)] >>> import haha >>> So how do I make it work without a `require`? Answer: If you use the `easy_install` script provided by `setuptools` (or the `Distribute` fork of it) to install packages as eggs, you will see that, by default, it creates a file named `easy-install.pth` in the `site-packages` directory of your Python installation. [Path configuration files](http://docs.python.org/library/site.html) are a standard feature of Python: > A path configuration file is a file whose name has the form package.pth and > exists in one of the four directories mentioned above; its contents are > additional items (one per line) to be added to sys.path. `easy_install` makes heavy use of this Python feature. When you use `easy_install` to add or update a distribution, it modifies `easy-install.pth` to add the egg directory or zip file. In this way, `easy_install` maintains control of the module searching order and ensures that the eggs it installs appear early in the search order. Here is an example of the contents of an `easy-install.pth`: import sys; sys.__plen = len(sys.path) ./appscript-0.21.1-py2.6-macosx-10.5-ppc.egg ./yolk-0.4.1-py2.6.egg ./Elixir-0.7.1-py2.6.egg ./Fabric-0.9.0-py2.6.egg import sys; new=sys.path[sys.__plen:]; del sys.path[sys.__plen:]; p=getattr(sys,'__egginse rt',0); sys.path[p:p]=new; sys.__egginsert = p+len(new) As you can see here and if you examine the code in `setuptools`, you will find it goes to some trickery to bootstrap itself and then cover its tracks which can make debugging problems with `site.py` and interpreter startup a bit _interesting_. (That is one of the reasons that some developers are not fond of using it.) If you use the `-m` parameter of `easy_install` to install a distribution as _multi-version_ , the `easy-install.pth` entry for it is not added or is removed if it already exists. 
This is why the [`easy_install` documentation](http://peak.telecommunity.com/DevCenter/EasyInstall#command- line-options) tells you to use `-m` before deleting an installed egg.
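Tying that back to the question: the hand-made blah-4.0.1-py2.7-win32.egg folder is invisible simply because nothing ever added it to a path configuration file. The missing piece is a one-line `.pth` file (say blah.pth) dropped into site-packages, with a path relative to site-packages, mirroring the easy-install.pth entries above:

    ./blah-4.0.1-py2.7-win32.egg

After that, a plain `import haha` should work without the `require` call.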
Math question regarding Python's uuid4 Question: I'm not great with statistical mathematics, etc. I've been wondering, if I use the following: import uuid unique_str = str(uuid.uuid4()) double_str = ''.join([str(uuid.uuid4()), str(uuid.uuid4())]) Is `double_str` squared as unique as `unique_str`, or just some amount more unique? Also, is there any negative implication in doing something like this (like some birthday problem situation, etc)? This may sound ignorant, but I simply would not know, as my math spans algebra 2 at best. Answer: The `uuid4` function returns a UUID created from 16 random bytes and it is **extremely** unlikely to produce a collision, to the point at which you probably shouldn't even worry about it. If for some reason `uuid4` _does_ produce a duplicate it is far more likely to be a programming error such as a failure to correctly initialize the random number generator than genuine bad luck. In which case the approach you are using will _not_ make it any better - an incorrectly initialized random number generator can still produce duplicates even with your approach. If you use the default implementation `random.seed(None)` you can see in the [source](http://svn.python.org/view/python/trunk/Lib/random.py?view=markup) that only 16 bytes of randomness are used to initialize the random number generator, so this is an issue you would have to solve first. Also, if the OS doesn't provide a source of randomness the system time will be used, which is not very random at all. But ignoring these practical issues, you are basically along the right lines. To use a mathematical approach we first have to define what you mean by "uniqueness". I think a reasonable definition is the number of ids you need to generate before the probability of generating a duplicate exceeds some probability `p`. An approximate formula for this is: ![alt text](http://i.stack.imgur.com/PexBS.png) (in symbols, n ≈ sqrt(2 * d * ln(1 / (1 - p)))) where `d` is `2**(16*8)` for a single randomly generated uuid and `2**(16*2*8)` with your suggested approach. The square root in the formula is indeed due to the [Birthday Paradox](http://en.wikipedia.org/wiki/Birthday_problem). But if you work it out you can see that if you square the range of values `d` while keeping `p` constant then you also square `n`.
mod_wsgi WSGI script for non-frameworked Python web development Question: Most guides I saw for creating the WSGI file for use with mod_wsgi either set it up for Django or Pylons. However, I would like to create the wsgi file without setting it up for any particular framework. How do I do this? The following is code from the wsgi script for use with Django: import os, sys sys.path.append('/home/user/dev') sys.path.append('/home/user/dev/site1') os.environ['DJANGO_SETTINGS_MODULE'] = 'site1.settings' import django.core.handlers.wsgi application = django.core.handlers.wsgi.WSGIHandler() The [mod_wsgi integration guide from Google](http://code.google.com/p/modwsgi/wiki/VirtualEnvironments) said that I need to add the following code to the WSGI script to overlay the BASELINE environment (yep, I am using a baseline and application-specific virtualenv): import site site.addsitedir('/usr/local/pythonenv/PYLONS-1/lib/python2.5/site-packages') What will my WSGI script look like if I am not using it for any particular framework? **EDIT:** This is for use with Apache server Answer: For the simple wsgi application in [PEP 333](http://www.python.org/dev/peps/pep-0333/): def simple_app(environ, start_response): status = '200 OK' response_headers = [('Content-type', 'text/plain')] start_response(status, response_headers) return ['Hello world!\n'] application = simple_app In other words, you don't _have_ to do any setup at all. You just have to make sure that mod_wsgi can find an `application` object that conforms to wsgi in your module. For security reasons, you really ought to define your application in another module outside any directories published by apache, and limit the code in your wsgi file to the minimum required to import that module and bind the wsgi application within to the `application` variable.
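Putting the pieces together, a sketch of a complete framework-free .wsgi file that also does the virtualenv overlay from the question; both paths here are assumptions to adapt to your own environment:

    import site
    site.addsitedir('/usr/local/pythonenv/BASELINE/lib/python2.5/site-packages')

    import sys
    sys.path.append('/home/user/dev/myapp')  # hypothetical application directory

    def application(environ, start_response):
        status = '200 OK'
        response_headers = [('Content-type', 'text/plain')]
        start_response(status, response_headers)
        return ['Hello from mod_wsgi!\n']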
Single precision big endian float values to Python's float (double precision, big endian) Question: I need to receive hex encoded single precision big endian float values coming from an Arduino over a serial line (RS-232). How do I convert them to Python's float, which is big endian with double precision? The Arduino sends something like "8192323E" and in Python I would like to have 0.174387. I found "[Convert hex to float](http://stackoverflow.com/questions/1592158/python-convert-hex-to-float)" but it seems that none of the answers there work for single precision floats. From the linked page, this looks promising: from ctypes import * def convert(s): i = int(s, 16) # convert from hex to a Python int cp = pointer(c_int(i)) # make this into a c integer fp = cast(cp, POINTER(c_float)) # cast the int pointer to a float pointer return fp.contents.value # dereference the pointer, get the float But it still doesn't work with my single precision floats. In Java ([Processing](http://en.wikipedia.org/wiki/Processing_%28programming_language%29)) I've been able to do it: float decodeFloat(String inString) { byte [] inData = new byte[4]; inString = inString.substring(2, 10); // discard the leading "f:" inData[0] = (byte) unhex(inString.substring(0, 2)); inData[1] = (byte) unhex(inString.substring(2, 4)); inData[2] = (byte) unhex(inString.substring(4, 6)); inData[3] = (byte) unhex(inString.substring(6, 8)); int intbits = (inData[3] << 24) | ((inData[2] & 0xff) << 16) | ((inData[1] & 0xff) << 8) | (inData[0] & 0xff); //unhex(inString.substring(0, 8)); return Float.intBitsToFloat(intbits); } For your reference, this is the C code running on the Arduino implementing the hex encoding. void serialFloatPrint(float f) { byte * b = (byte *) &f; Serial.print("f:"); for(int i=0; i<4; i++) { byte b1 = (b[i] >> 4) & 0x0f; byte b2 = (b[i] & 0x0f); char c1 = (b1 < 10) ? ('0' + b1) : 'A' + b1 - 10; char c2 = (b2 < 10) ? ('0' + b2) : 'A' + b2 - 10; Serial.print(c1); Serial.print(c2); } } Answer: Building on [Ignacio Vazquez-Abrams's answer](http://stackoverflow.com/questions/4315190/single-precision-big-endian-float-values-to-pythons-float-double-precision-big/4315230#4315230), import binascii import struct text='8192323E' print(struct.unpack('<f',binascii.unhexlify(text))[0]) # 0.17438699305057526
How can I send a JSON object from a Python script to jQuery? Question: I've looked through APIs and all sorts of resources, but I can't seem to get the hang of fetching a JSON object from a Python script using AJAX. I'm sure the issue is with how I'm dealing with the JSON object. First, in a python script on my server, I generate and print a JSON array import json print "Content-type: application/json" print print json.dumps(['Price',{'Cost':'99'}]) Then, in a separate html file, I try something like <body> <div id="test"> </div> <script> $(document).ready(function() { $.getJSON("http://www.example.com/cgi-bin/makeJSON.py", function(data) { $('#test').html("JSON Data: " + data.Price); }); }); </script> </body> But I don't get anything. I'm sure that `data.Price` is wrong, but I'm also pretty certain that I should be doing something instead of just printing the results of `json.dumps` Any help is appreciated! Thanks in advance, and sorry if this is an obvious question. Answer: In your case you have enclosed the JSON response in an `array`. To access price you need to access `data[0]`. You need to structure your JSON data properly. The following changes in your Python script should allow you to access `data.Price`. Let me know in case you still face any issues. import json print "Content-type: application/json" print response={'Price':54,'Cost':'99'} print(json.JSONEncoder().encode(response))
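As a side note, `json.dumps` is effectively shorthand for `json.JSONEncoder().encode` with default settings, so the CGI script can be written a little more compactly; the jQuery callback can then read `data.Price` directly: import json print "Content-type: application/json" print print json.dumps({'Price': 54, 'Cost': '99'})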
SqlSoup relate() for many-to-many relation throwing exception Question: I'm using the following code to use SqlSoup with an existing database. import sqlalchemy from sqlalchemy.ext.sqlsoup import SqlSoup from sqlalchemy.orm import backref engine = sqlalchemy.create_engine('postgresql:///test') db = SqlSoup(engine) db.books.relate('author', db.authors) db.books.relate('tags', db.tags, secondary=db.tags2books, backref=backref('books', lazy=False)) However, the last `relate()` call throws an exception: Traceback (most recent call last): File "sqlsoup-test.py", line 10, in <module> db.books.relate('tags', db.tags, secondary=db.tags2books, backref=backref('books', lazy=False)) File "/usr/lib64/python2.6/site-packages/sqlalchemy/ext/sqlsoup.py", line 384, in relate class_mapper(cls)._configure_property(propname, relationship(*args, **kwargs)) File "/usr/lib64/python2.6/site-packages/sqlalchemy/orm/mapper.py", line 758, in _configure_property prop.init() File "/usr/lib64/python2.6/site-packages/sqlalchemy/orm/interfaces.py", line 476, in init self.do_init() File "/usr/lib64/python2.6/site-packages/sqlalchemy/orm/properties.py", line 895, in do_init self._determine_joins() File "/usr/lib64/python2.6/site-packages/sqlalchemy/orm/properties.py", line 1010, in _determine_joins self.secondary) File "/usr/lib64/python2.6/site-packages/sqlalchemy/orm/properties.py", line 1002, in _search_for_join return join_condition(mapper.local_table, table) File "/usr/lib64/python2.6/site-packages/sqlalchemy/sql/util.py", line 219, in join_condition for fk in b.foreign_keys: AttributeError: 'MappedTags2books' object has no attribute 'foreign_keys' The tables have been created using the following definition: authors_table = Table('authors', metadata, Column('id', Integer, primary_key=True), Column('name', String, nullable=False, index=True) ) books_table = Table('books', metadata, Column('id', Integer, primary_key=True), Column('author_id', Integer, ForeignKey('authors.id', onupdate='CASCADE', ondelete='CASCADE'), nullable=False), Column('title', String, nullable=False, index=True) ) tags_table = Table('tags', metadata, Column('id', Integer, primary_key=True), Column('tag', String, index=True, unique=True, nullable=False) ) tags2books_table = Table('tags2books', metadata, Column('tag_id', Integer, ForeignKey('tags.id', onupdate='CASCADE', ondelete='CASCADE'), primary_key=True), Column('book_id', Integer, ForeignKey('books.id', onupdate='CASCADE', ondelete='CASCADE'), primary_key=True) ) In case you are going to ask "Why is he using SqlSoup when he can use the real SQLAlchemy?": I'm writing a term paper about SQLAlchemy and I'm mentioning SqlSoup in one chapter and I'd like to add an example. And what's easier than using the database I've used for the SA examples... Answer: Looks like `secondary` can't be a mapped class. Passing a table fixes the error: db.books.relate('tags', db.tags, secondary=db.tags2books._table, backref=backref('books', lazy=False))
Removing control characters from a string in python Question: I currently have the following code def removeControlCharacters(line): i = 0 for c in line: if (c < chr(32)): line = line[:i - 1] + line[i+1:] i += 1 return line This just does not work if there is more than one character to be deleted. Answer: There are _hundreds_ of control characters in unicode. If you are sanitizing data from the web or some other source that might contain non-ascii characters, you will need Python's [unicodedata module](http://docs.python.org/2/library/unicodedata.html). The `unicodedata.category(…)` function returns the [unicode category code](http://www.unicode.org/reports/tr44/#GC_Values_Table) (e.g., control character, whitespace, letter, etc.) of any character. For control characters, the category always starts with "C". This snippet removes all control characters from a string. import unicodedata def remove_control_characters(s): return "".join(ch for ch in s if unicodedata.category(ch)[0]!="C") * * * Examples of [unicode categories](http://www.unicode.org/reports/tr44/#GC_Values_Table): >>> from unicodedata import category >>> category('\r') # carriage return --> Cc : control character 'Cc' >>> category('\0') # null character ---> Cc : control character 'Cc' >>> category('\t') # tab --------------> Cc : control character 'Cc' >>> category(' ') # space ------------> Zs : separator, space 'Zs' >>> category(u'\u200A') # hair space -------> Zs : separator, space 'Zs' >>> category(u'\u200b') # zero width space -> Cf : control character, formatting 'Cf' >>> category('A') # letter "A" -------> Lu : letter, uppercase 'Lu' >>> category(u'\u4e21') # 両 ---------------> Lo : letter, other 'Lo' >>> category(',') # comma -----------> Po : punctuation 'Po' >>>
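If the input is a plain ASCII byte string rather than unicode, a simpler route is `str.translate` with a deletion table; this is a sketch for that narrower case, not a replacement for the unicode-aware version above. # Python 2: delete the 32 ASCII control characters plus DEL (0x7f) control_chars = ''.join(map(chr, range(32) + [127])) def remove_ascii_control_characters(s): return s.translate(None, control_chars) print remove_ascii_control_characters('abc\x00\tdef\n') # prints 'abcdef'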
Serious instability with pygtk 2.22 and python 2.6 Question: Has anybody come across this? I've taken the GTK [HelloWorld sample](http://www.pygtk.org/pygtk2tutorial/examples/helloworld.py). It runs fine. However, if i `import win32ui`, then it does not shutdown properly (as explained in [this question](http://stackoverflow.com/questions/4308346/twisted-gtk-shutdown-not- working-properly) ). There are other problems. In the process of narrowing down my application to see what caused it to not shutdown, I came upon a point where I was deconstructing a logger class I had written that I had no problems with. In the constructor I had the line: self.logger = logger I had commented out the rest of the class so `self.logger` was not even used. If I left the line as-is, I would get this error upon clicking the "Hello World" button: c:\python26\lib\site-packages\twisted\internet\gtk2reactor.py:350: GtkWarning: gtk_widget_destroy: assertion `GTK_IS_WIDGET (widget)' failed gtk.main() If I change the line to: self.logger = 0 then the error would not show up. This happened with various other lines of code in the logger class as well. I suspect it would happen with other random bits of code, too. But in any case - setting a variable that **isn't even used** should **not** cause any kind of problem. Note that at this point I was still using twisted with pygtk. I haven't reproduced this yet with just gtk. This makes me suspect pygtk is being really unstable... has anybody come across this before? Is there anything I can do? The reason I updated from `gtk 2.12` was because my application would crash randomly, but this is far worse. UPDATE: The bugs happen on 64-bit Windows 7. I've tried both with 32-bit python with the versions listed above, and 64-bit python with pygtk-2.17.1, pygobject-2.21.2, pycairo-1.8.8 and pywin32-214. So far the `win32ui` bug does not happen with the same install on a Windows XP VirtualBox virtual machine. Answer: I've tried this on a different install of Win7 64-bit on a different computer, and it worked fine. So it's some strange thing with the hardware/other software on a particular computer. Probably nothing gtk or win32 related.
Is there any way to give a Zookeeper node a NULL ACL via python-bindings? Question: I'm trying to create a node using the python bindings that are bundled with the zookeeper distribution ( src/contrib/zkpython/ ). Using zookeeper-3.3.1. The pydoc for create states that if the ACL is NULL then that node will inherit its parent's ACL; however, setting this parameter to None causes an invalid ACL exception. I've taken a look at [parse_acls](https://github.com/apache/zookeeper/blob/trunk/src/contrib/zkpython/src/c/zookeeper.c#L352) in zookeeper.c and it seems to conflict with the pydoc, but I don't have much experience with the Python C API. If I can't set this ACL to null, is there some other ACL that will cause it to inherit from a parent? Answer: I had the same problem and found no solution. As a workaround you could add an additional step. Example: import zookeeper as zk parent_path = "/" stat, acl = zk.get_acl(zoohandle, parent_path) ret = zk.create(zoohandle, node_path, "", acl, 0) Further tip: In zookeeper-3.3.1 the zk.get() only returns up to 512 bytes of the node data. This should be fixed in the next release (3.3.3).
Realtime data processing in Django/Python Question: We are working on a project involving realtime data processing. We plan to use Django/Python. The actual process is: 1. Tens of thousands of devices take 4 samples per second (0, 0.25, 0.5, 0.75) and continuously send them back to our Django server; basically these are time series with timestamps and values 2. We need to align samples from all devices according to the timestamp (we need millisecond precision) and do a simple average of all the time series 3. All of this needs to be done in realtime (maximum 1 second delay) and sent away using another thread We are looking into RRDTool and scikits.timeseries, but they don't have millisecond precision, so they couldn't align our time series. Just wondering whether there are any tools/data structures we can use with Django/Python for this type of realtime data processing. And thread safety is important, as sending the result away will be done in another thread. Thanks in advance. Answer: Your options for real time web services in python are: [Twisted](http://twistedmatrix.com/trac/), [Tornado](http://www.tornadoweb.org/) and [Eventlet](http://eventlet.net/) You can integrate any of these to work with Python/Django. [Tutorial on that](http://lincolnloop.com/blog/2009/sep/15/using-django-inside-tornado-web- server/).
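For the alignment/averaging step itself (leaving delivery and threading aside), a rough sketch of the idea: group incoming samples by their timestamp and average each group. The (timestamp, value) pair shape here is an assumption based on the question's description. from collections import defaultdict def average_by_timestamp(samples): # samples: iterable of (timestamp, value) pairs buckets = defaultdict(list) for timestamp, value in samples: buckets[timestamp].append(value) return dict((ts, sum(vals) / len(vals)) for ts, vals in buckets.items()) print average_by_timestamp([(0.0, 1.0), (0.0, 3.0), (0.25, 2.0)]) # {0.0: 2.0, 0.25: 2.0}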
windows virtualenv using global packages Question: I have the latest versions of virtualenv,django-nonrel, djangotoolbox and django_mongodb_engine. The virtualenv was created with -no-site-packages. I attempted to follow the [quick start](http://django-mongodb- engine.github.com/mongodb-engine/) but I see the following errors when trying to run syncdb Traceback (most recent call last): File "C:\www\environments\mongotest\djangomongo\manage.py", line 11, in < module> execute_manager(settings) File "C:\Python25\lib\site-packages\django-1.2.3-py2.5.egg\django\core\managem ent\__init__.py", line 438, in execute_manager utility.execute() File "C:\Python25\lib\site-packages\django-1.2.3-py2.5.egg\django\core\managem ent\__init__.py", line 379, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "C:\Python25\lib\site-packages\django-1.2.3-py2.5.egg\django\core\managem ent\__init__.py", line 261, in fetch_command klass = load_command_class(app_name, subcommand) File "C:\Python25\lib\site-packages\django-1.2.3-py2.5.egg\django\core\managem ent\__init__.py", line 67, in load_command_class module = import_module('%s.management.commands.%s' % (app_name, name)) File "C:\Python25\lib\site-packages\django-1.2.3-py2.5.egg\django\utils\import lib.py", line 35, in import_module __import__(name) File "C:\Python25\lib\site-packages\django-1.2.3-py2.5.egg\django\core\managem ent\commands\syncdb.py", line 7, in <module> from django.core.management.sql import custom_sql_for_model, emit_post_sync_ signal File "C:\Python25\lib\site-packages\django-1.2.3-py2.5.egg\django\core\managem ent\sql.py", line 5, in <module> from django.contrib.contenttypes import generic File "C:\Python25\lib\site-packages\django-1.2.3-py2.5.egg\django\contrib\cont enttypes\generic.py", line 6, in <module> from django.db import connection File "C:\Python25\lib\site-packages\django-1.2.3-py2.5.egg\django\db\__init__. py", line 77, in <module> connection = connections[DEFAULT_DB_ALIAS] File "C:\Python25\lib\site-packages\django-1.2.3-py2.5.egg\django\db\utils.py" , line 91, in __getitem__ backend = load_backend(db['ENGINE']) File "C:\Python25\lib\site-packages\django-1.2.3-py2.5.egg\django\db\utils.py" , line 49, in load_backend raise ImproperlyConfigured(error_msg) django.core.exceptions.ImproperlyConfigured: 'django_mongodb_engine' isn't an av ailable database backend. Try using django.db.backends.XXX, where XXX is one of: 'dummy', 'mysql', 'oracle', 'postgresql', 'postgresql_psycopg2', 'sqlite3' Error was: No module named django_mongodb_engine.base It appears to be trying to use the default django installation instead of my environments django-nonrel installation. I have tried adding set PYTHONPATH=%PYTHONPATH%;C:\path\to\env\Lib\site- packages\django but no change. Any ideas why the default django installation is being used here? Answer: The problem was with the file associations under windows. I was trying to syncdb via "manage.py syncdb" which used the default windows python installation. Using "python manage.py syncdb" used the correct python and environment.
Which abbreviations should we use for python variable names? Question: In general I'm using the standard naming stated in PEP-8 for variables. Like: delete_projects connect_server However, sometimes I can't find any good name and the name just extends into a long one: project_name_to_be_deleted I could use `pr_nm_del`, but this makes the code unreadable. I really struggle to find good variable names for functions. Whenever I begin to write a new function I just spend time finding a good variable name. Is there any standard for choosing certain abbreviations for well-known variable names like `delete`, `project`, `configuration`, etc.? How do you choose short but good and readable variable names? This question might not apply only to Python, but as different programming languages use different variable-name formatting, I thought I'd limit this question to _Python only_. Answer: `pr_nm_del`? You might as well let a cat name it. I believe abbreviations should be avoided at all costs, except well-known/obvious ones (like `del`, as mentioned in the comments - that one's even a language keyword!) that save a whole lot of typing. But that doesn't mean overly verbose identifiers. Just as context is important to understand statements in natural languages, identifiers can often be kept much shorter (and just as understandable) by referring to context. In your example, `project_name` is perfectly fine - the procedure is already called `delete_project`, so `project_name` obviously refers to the name of the project to be deleted. Even `name` alone might be fine.
Python3.1 - Open Opera Question: I have no idea why this won't work....I'm trying to open opera but it says cannot find runnable browser. op = webbrowser.get('C:\\Program Files\\Opera\\opera.exe') op.open_new_tab('http://www.stackoverflow.com') op.open_new_tab('http://www.stackoverflow.com') Answer: The name parameter should just be 'opera': op = webbrowser.get('opera') Make sure you have installed Opera on your computer, and that the executable opera.exe is in the path. >>> import webbrowser >>> webbrowser.get('opera') <webbrowser.BackgroundBrowser object at 0x02095490> See the [table of allowed values for the name parameter](http://docs.python.org/library/webbrowser.html#webbrowser.get) in the documentation. If you want to specify the exact path to the executable (which by the way is a bad idea if you want your application to be portable) then you can specify the command line as follows: op = webbrowser.get(r'C:\\Program Files\\Opera\\opera.exe %s')
Jython: Import Modules From Other Sources (DB for instance)? Question: I'm using a Java program to load and run Jython scripts - using the org.python.util.PythonInterpreter. I'm storing the Jython scripts in a database; currently I'm having to extract the Python scripts to a file system prior to running them - to ensure that any 'import' statements within the scripts work. Is there a way of avoiding this extraction step: that is - is there a way to hook into the Python interpreter to intercept the imports and call out to a Java method (which would load the jython source from the DB)? Answer: You can add importers from either Python or Java (there's a standard one in the Jython code which imports from the classpath: `org.python.core.ClasspathPyImporter`; there are also some Javadocs in the `org.python.core.util.importer` interface it implements which may be useful). The code is relatively simple; see [PEP 302](http://www.python.org/dev/peps/pep-0302/) as well.
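For the Python side, a minimal PEP 302 finder/loader might look like the sketch below. `fetch_source_from_db` and the `DB` dict are hypothetical stand-ins for whatever database lookup (or call into Java) you actually use, and the sketch only handles plain modules, not packages. import imp import sys # hypothetical stand-in for the real database lookup; returns the module # source as a string, or None if the module isn't stored in the DB DB = {'dbmod': "def hello():\n return 'loaded from the DB'\n"} def fetch_source_from_db(fullname): return DB.get(fullname) class DbImporter(object): # PEP 302 finder: claim the module only if the DB has source for it def find_module(self, fullname, path=None): if fetch_source_from_db(fullname) is not None: return self return None # PEP 302 loader: build the module object and exec the stored source def load_module(self, fullname): if fullname in sys.modules: return sys.modules[fullname] source = fetch_source_from_db(fullname) mod = sys.modules.setdefault(fullname, imp.new_module(fullname)) mod.__file__ = '<db:%s>' % fullname mod.__loader__ = self exec source in mod.__dict__ return mod sys.meta_path.append(DbImporter()) import dbmod print dbmod.hello() # 'loaded from the DB' Registering the importer from Java should amount to running the `sys.meta_path.append(...)` part through the same PythonInterpreter before any imports happen.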
Last.fm api invalid method Question: I am trying to write a python script to query Last.fm, but I keep getting an invalid method error returned. I don't want links to pre-written last.fm python libraries; I am trying to do this as a "test what I know" kind of project. Thanks in advance! import urllib import httplib params = urllib.urlencode({'method' : 'artist.getsimilar', 'artist' : 'band', 'limit' : '5', 'api_key' : #API key goes here}) header = {"user-agent" : "myapp/1.0"} lastfm = httplib.HTTPConnection("ws.audioscrobbler.com") lastfm.request("POST","/2.0/?",params,header) response = lastfm.getresponse() print response.read() Answer: Your request lacks a **Content-type** header: **"application/x-www-form- urlencoded"**. This works: import urllib import httplib params = urllib.urlencode({'method' : 'artist.getsimilar', 'artist' : 'band', 'limit' : '5', 'api_key' : '#API key goes here'}) header = {"user-agent" : "myapp/1.0", "Content-type": "application/x-www-form-urlencoded"} lastfm = httplib.HTTPConnection("ws.audioscrobbler.com") lastfm.request("POST","/2.0/?",params,header) response = lastfm.getresponse() print response.read()
Asynchronous getNext errors out when going out of table Question: I am using one of the examples for `GetNext` operation for an SNMPWalk of the tree. I am using the asynchronous variant to collect the OIDs - # GETNEXT Command Generator from pysnmp.entity.rfc3413.oneliner import cmdgen from pysnmp.proto import rfc1902 # ( ( authData, transportTarget, varNames ), ... ) targets = ( # 1-st target (SNMPv1) ( cmdgen.CommunityData('test-agent-1', 'public'), cmdgen.UdpTransportTarget(('localhost', 161)), (rfc1902.ObjectName((1,3,6,1,2,1)), rfc1902.ObjectName((1,3,6,1,3,1)))), # 2-nd target (SNMPv2c) ( cmdgen.CommunityData('test-agent-2', 'public', 1), cmdgen.UdpTransportTarget(('localhost', 161)), (rfc1902.ObjectName((1,3,6,1,2,1,2)),) ), ) def cbFun( sendRequestHandle, errorIndication, errorStatus, errorIndex, varBindTable, (varBindHead, authData, transportTarget) ): if errorIndication: print 'SNMP engine error', errorIndication return 1 if errorStatus: print 'SNMP error %s at %s' % (errorStatus, errorIndex) return 1 varBindTableRow = varBindTable[-1] for idx in range(len(varBindTableRow)): name, val = varBindTableRow[idx] if val is not None and varBindHead[idx].isPrefixOf(name): # still in table break else: print 'went out of table at %s' % (name, ) return for varBindRow in varBindTable: for oid, val in varBindRow: if val is None: print oid.prettyPrint() else: print '%s = %s' % (oid.prettyPrint(), val.prettyPrint()) return 1 # continue table retrieval cmdGen = cmdgen.CommandGenerator() for authData, transportTarget, varNames in targets: cmdGen.asyncNextCmd( authData, transportTarget, varNames, # User-space callback function and its context (cbFun, (varNames, authData, transportTarget)) ) cmdGen.snmpEngine.transportDispatcher.runDispatcher() I get the OIDs I need; however when it goes out the table and returns from cbFun, the Dispatcher in the last line throws an error, which I am not able to resolve, the output looks something like - ... 
(some 1.3.6.1.2.1.* stuff) 1.3.6.1.2.1.2.2.1.22.1 = 0.0 1.3.6.1.2.1.2.2.1.22.2 = 0.0 1.3.6.1.2.1.2.2.1.22.3 = 0.0 went out of table at (1, 3, 6, 1, 2, 1, 3, 1, 1, 1, 2, 1, 212, 201, 49, 186) Traceback (most recent call last): File "asyncsnmpwalk.py", line 55, in <module> cmdGen.snmpEngine.transportDispatcher.runDispatcher() File "/var/lib/python-support/python2.5/pysnmp/v4/carrier/asynsock/dispatch.py", line 61, in runDispatcher self.handleTimerTick(time()) File "/var/lib/python-support/python2.5/pysnmp/v4/carrier/base.py", line 81, in handleTimerTick self.__timerCbFun(timeNow) File "/var/lib/python-support/python2.5/pysnmp/v4/entity/engine.py", line 60, in __receiveTimerTickCbFun self.msgAndPduDsp.receiveTimerTick(self, timeNow) File "/var/lib/python-support/python2.5/pysnmp/v4/proto/rfc3412.py", line 491, in receiveTimerTick self.__cacheExpire(snmpEngine, self.__expireRequest) File "/var/lib/python-support/python2.5/pysnmp/v4/proto/rfc3412.py", line 56, in __cacheExpire if cbFun(snmpEngine, cachedParams): File "/var/lib/python-support/python2.5/pysnmp/v4/proto/rfc3412.py", line 486, in __expireRequest cbCtx File "/var/lib/python-support/python2.5/pysnmp/v4/entity/rfc3413/cmdgen.py", line 77, in processResponsePdu (self.processResponsePdu, (cbFun, cbCtx)) File "/var/lib/python-support/python2.5/pysnmp/v4/entity/rfc3413/cmdgen.py", line 164, in _sendPdu (processResponsePdu, float(timeout)/100 + time.time(), cbCtx) File "/var/lib/python-support/python2.5/pysnmp/v4/proto/rfc3412.py", line 161, in sendPdu sendPduHandle File "/var/lib/python-support/python2.5/pysnmp/v4/proto/mpmod/rfc2576.py", line 89, in prepareOutgoingMessage scopedPDU File "/var/lib/python-support/python2.5/pysnmp/v4/proto/secmod/rfc2576.py", line 78, in generateRequestMsg errorIndication = 'unknownCommunityName' pysnmp.proto.error.StatusInformation: {'errorIndication': 'unknownCommunityName'} Answer: changing CommunityData string did the trick (I don't know why) BEFORE cmdgen.CommunityData('test-agent-1', 'public'), AFTER cmdgen.CommunityData('test-agent', 'public'),
question about pexpect in python Question: I tried both pexpect and subprocess.Popen from python to call an external long term background process (this process uses sockets to communicate with external applications), with the following details. 1. subprocess.Popen(launchcmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) This works fine. I do not need to do anything else. However, because I have to get the output immediately, I chose pexpect to avoid the pipe file buffer problem. 2. obj= pexpect.spawn(launchcmd, timeout=None) after launching the external process, I use a separate thread to call "readline" to read the output of the launched process "obj", and everything is OK. 3. obj= pexpect.spawn(launchcmd, timeout=None) after launching the external process, I did nothing further, i.e., just left it there. By using the "ps -e" command I can find the launched process, but it seems blocked and cannot communicate over sockets with other applications. OK. To be more specific, I put some sample code to formulate my question. import subprocess import pexpect import os t=1 while(True): if(t==1): background_process="./XXX.out" launchcmd = [background_process] #---option 3-------- p=pexpect.spawn(launchcmd, timeout=None) # process launched, problem with socket. #---option 1-------- p=subprocess.Popen(launchcmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) # process launched, everything fine t=0 Could anyone tell me what's wrong with the 3rd option? And if it is due to the fact that I did not use a separate thread to manipulate the output, why does the 1st option work with subprocess.Popen? I suspect there is something wrong with using pexpect to launch a process that uses sockets, but I am not sure, especially considering option 2 works well. Answer: I think that you are making this too complicated. Yes, it is a good idea to use a `pty` instead of a `pipe` to communicate with the background process because most applications recognize tty/pty devices and switch to using unbuffered output (or at least line-buffered). But why pexpect? Just use Python's pty module. First call `openpty` to get some filehandles and then use `Popen` to spawn the process. Example code is found in the following question (the answer with the green checkmark): [Python Run a daemon sub-process & read stdout](http://stackoverflow.com/questions/5411780/python-run-a-daemon-sub- process-read-stdout)
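A minimal sketch of that pty + Popen combination (Unix-only, since the pty module is; "./XXX.out" is the placeholder command from the question): import os import pty import subprocess master, slave = pty.openpty() # the child sees a tty on stdout, so most programs stop block-buffering p = subprocess.Popen(['./XXX.out'], stdout=slave, stderr=slave, close_fds=True) os.close(slave) # keep only the parent's end of the pty open while p.poll() is None: try: chunk = os.read(master, 1024) except OSError: # some platforms raise EIO when the child exits break if not chunk: break print chunk, os.close(master)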
Which Python async library would be best suited for my code? Asyncore? Twisted? Question: I have a program I'm working on that will be reading from two 'network sources' simultaneously. I wanted to try out an asynchronous approach rather than use threading. This has lead me to wonder which library to use... I've come up with some simple example code that kind of demonstrates what my program will be doing: import sniffer def first(): for station in sniffer.sniff_wifi(): log(station.mac()) def second(): for station in sniffer.sniff_ethernet(): log(station.mac()) first() second() The two `sniffer` methods look somewhat like this: def sniff_wifi(self): while True: yield mac_address The `while True` loop obviously makes them blocking. I want to use `asyncore` for this as it is part of the standard library. No 3rd party dependencies are a bonus. However, that doesn't mean I won't use it if you recommend I do... Can I achieve what I'm trying to do with asyncore? If so, could you show me how to convert my example code to 'asyncore code'? Do you know of any good asyncore tutorials? Answer: Twisted is better in pretty much every possible way. It's more portable, more featureful, simpler, more scalable, better maintained, better documented, and it can make a delicious omelette. Asyncore is, for all intents and purposes, obsolete. It's hard to demonstrate all the ways in which Twisted is superior in a short answer (how could I demonstrate a [http](http://twistedmatrix.com/documents/10.2.0/api/twisted.web.html)/[dns](http://twistedmatrix.com/documents/10.2.0/api/twisted.names.html)/[ssh](http://twistedmatrix.com/documents/10.2.0/api/twisted.conch.html)/[smtp/pop/imap](http://twistedmatrix.com/documents/10.2.0/api/twisted.mail.html)/[irc/xmpp](http://twistedmatrix.com/documents/10.2.0/api/twisted.words.html)/[process- spawning](http://twistedmatrix.com/documents/10.2.0/api/twisted.internet.interfaces.IReactorProcess.html)/[multi- threading](http://twistedmatrix.com/documents/10.2.0/api/twisted.internet.interfaces.IReactorThreads.html) server in a short example?), so instead I'll focus on one of the most common misconceptions that people seem to have about Twisted: that it's somehow more complex or harder to use than asyncore. Let's start with an asyncore example. In order to avoid a biased presentation, I'll use an example from someone else who still likes asyncore a bit. Here's a simple asyncore example [taken from Richard Jones' weblog](http://www.mechanicalcat.net/richard/log/Python/A_simple_asyncore__echo_server__example) (with comments elided for brevity). 
First, here's the server: import asyncore, socket class Server(asyncore.dispatcher): def __init__(self, host, port): asyncore.dispatcher.__init__(self) self.create_socket(socket.AF_INET, socket.SOCK_STREAM) self.bind(('', port)) self.listen(1) def handle_accept(self): socket, address = self.accept() print 'Connection by', address EchoHandler(socket) class EchoHandler(asyncore.dispatcher_with_send): def handle_read(self): self.out_buffer = self.recv(1024) if not self.out_buffer: self.close() s = Server('', 5007) asyncore.loop() and here's the client: import asyncore, socket class Client(asyncore.dispatcher_with_send): def __init__(self, host, port, message): asyncore.dispatcher.__init__(self) self.create_socket(socket.AF_INET, socket.SOCK_STREAM) self.connect((host, port)) self.out_buffer = message def handle_close(self): self.close() def handle_read(self): print 'Received', self.recv(1024) self.close() c = Client('', 5007, 'Hello, world') asyncore.loop() There are a few obscure cases that this code doesn't handle correctly, but explaining them is boring and complicated, and the code has already made this answer long enough. Now, here's some code that does basically the same thing, with Twisted. First, the server: from twisted.internet import reactor, protocol as p class Echo(p.Protocol): def dataReceived(self, data): self.transport.write(data) class EchoFactory(p.Factory): def buildProtocol(self, addr): print 'Connection by', addr return Echo() reactor.listenTCP(5007, EchoFactory()) reactor.run() And now, the client: from twisted.internet import reactor, protocol as p class EchoClient(p.Protocol): def connectionMade(self): self.transport.write(self.factory.data) def dataReceived(self, data): print 'Received:', data self.transport.loseConnection() class EchoClientFactory(p.ClientFactory): protocol = EchoClient def __init__(self, data): self.data = data reactor.connectTCP('localhost', 5007, EchoClientFactory('Hello, world')) reactor.run() There are a couple of things that I'd like to draw your attention to. First of all, the Twisted example is 25% shorter, even for something this trivial. 40 lines for asyncore, only 30 for Twisted. As your protocol grows more complex, this difference will get bigger and bigger, as you need to write more and more support code for asyncore that would have been provided for you by Twisted. Second, Twisted provides a _complete abstraction_. With the asyncore example, you have to use the `socket` module to do the actual networking; asyncore provides only multiplexing. This is a problem if you need [portable behavior on platforms such as Windows](http://itamarst.org/writings/win32sockets.html). It also means that asyncore completely lacks facilities for doing asynchronous sub-process communication on other platforms; you can't stuff arbitrary file descriptors into a `select()` call on Windows. Third, the Twisted example is _transport neutral_. None of `Echo` and `EchoFactory` and `EchoClient` and `EchoClientFactory` is in any way specific to TCP. You can make these classes into a library that can be connected via SSH, or SSL, or a UNIX socket, or a pipe, only by changing the one `connectTCP`/`listenTCP` call at the bottom. This is important, as supporting something like TLS directly in your protocol logic is very tricky. For example, a 'write' in TLS will trigger a 'read' at the lower level. So, you need to separate these concerns out to get them right. 
Finally, specific to your use-case, if you're dealing with MAC addresses and ethernet frames directly, Twisted contains [Twisted Pair](http://twistedmatrix.com/documents/10.2.0/api/twisted.pair.html), a low- level library for dealing with IP and ethernet-level networking. This isn't the most actively maintained part of Twisted; the code is quite old. But, it should work, and if it doesn't we will take any bugs in it seriously and (eventually) see that they get fixed. As far as I'm aware, there's no comparable library for asyncore, and it certainly doesn't contain any such code itself.
SCons in Python Question: I would like to load SCons in an interactive Python session and enter directives that way as opposed to through an SConstruct or SConscript file. Is this possible? I'm trying to embed SCons functionality into another Python application. Answer: It would appear so, > Just add Scons to your PYTHONPATH and import whatever you need. Copy the > appropriate bits out of scons' main that suit you - Scons/Script/Main.py - > or just run its main from the appropriate folder. <http://osdir.com/ml/programming.tools.scons.user/2005-03/msg00058.html>
C++ | Progresssion Path Question: Inspired by the Question on [Python-Progression Path](http://stackoverflow.com/questions/2573135/python-progression-path-from- apprentice-to-guru) \- I know the `basic OOP-related topics`, `RTTI, Templates`. Reverting back from `Java' Collection Framework`, I tried to find such collections in `C++` and found `STL`, and am trying to use it in my projects (although I don't know them in and out). I searched and found recommendations for books like `Accelerated C++, Effective and More Effective C++`. But I am not sure what should be my progression path, so I am looking for something like - def apprentice(): read(diveintopython) experiment(interpreter) read(python_tutorial) experiment(interpreter, modules/files) watch(pycon) def master(): refer(python-essential-reference) refer(PEPs/language reference) experiment() read(good_python_code) # Eg. twisted, other libraries write(basic_library) # reinvent wheel and compare to existing wheels if have_interesting_ideas: give_talk(pycon) def guru(): pass # Not qualified to comment. Fix the GIL perhaps? 1. Discover [list comprehensions](http://en.wikipedia.org/wiki/List_comprehension#Python) 2. Discover [generators](http://en.wikipedia.org/wiki/Python_syntax_and_semantics#Generators) 3. Incorporate [map, reduce, filter, iter, range, xrange](http://docs.python.org/library/functions.html) often into your code 4. Discover [Decorators](http://wiki.python.org/moin/PythonDecorators) 5. Write recursive functions, a lot 6. Discover [itertools](http://docs.python.org/library/itertools.html) and [functools](http://docs.python.org/library/functools.html) 7. Read [Real World Haskell](http://rads.stackoverflow.com/amzn/click/0596514980) 8. Rewrite all your old Python code with tons of higher order functions, recursion, and whatnot. 9. Annoy your cubicle mates every time they present you with a Python class. Claim it could be "better" implemented as a dictionary plus some functions. Embrace functional programming. 10. Rediscover the [Strategy](http://en.wikipedia.org/wiki/Strategy_pattern#Python) pattern and then [all those things](http://rads.stackoverflow.com/amzn/click/0596007124) from imperative code you tried so hard to forget after Haskell. 11. Find a balance. Answer: It's a tough question, because what you really need is becoming good at what you do, and thus no authoritative list exists. That being said... * Read `Effective C++` by Meyers and `C++ Coding Standards` by Sutter, you're not likely to understand everything if you're a beginner, so re-read them from time to time (it's also a good vaccine) * Time to introduce the STL (it's an amazing little pearl), learn to use its algorithms instead of hand-crafting everything, if possible jump straight to the C++0x version * Incorporate Boost into the mix, softly at first: `boost::optional`, `boost::variant`, `boost::lexical_cast`, `boost::numeric_cast` make your code safer and more idiomatic. Also poke the Boost String Algorithms library. * Template Meta Programming and Boost.MPL are next: C++ Template Meta Programming by Abrahams Gurtovoy will help there. You might have to leverage Boost.Preprocessor for some template stuff. * Learn more Boost Libraries, it's a gigormous repository and it's amazing all the libraries there are. I am still at that last part myself, so cannot comment on going further :) At each step, you should write a lot of code, reading isn't sufficient, you need to experiment. 
Programming is not just technique; the architectural part of a program is extremely important in the field. Oh, and try to join (if only to read) an open-source project; nothing beats writing code, and it's better when someone else reviews it :)
How do I set up a local python library directory / PYTHONPATH? Question: In the process of trying to write a Python script that uses PIL today, I discovered I don't seem have it on my local machine (OS X 10.5.8, default 2.5 Python install). So I run: easy_install --prefix=/usr/local/python/ pil and it complains a little about /usr/local/python/lib/python2.5/site-packages not yet existing, so I create it, and try again, and get this: > TEST FAILED: /usr/local/python//lib/python2.5/site-packages does NOT support > .pth files error: bad install directory or PYTHONPATH > > You are attempting to install a package to a directory that is not on > PYTHONPATH and which Python does not read ".pth" files from. The > installation directory you specified (via --install-dir, --prefix, or the > distutils default setting) was: > > > /usr/local/python//lib/python2.5/site-packages > > > and your PYTHONPATH environment variable currently contains: > > > '' > OK, fair enough -- I hadn't done anything to set the path. So I add a quick line to ~/.bash_profile: > PYTHONPATH="$PYTHONPATH:/usr/local/python/lib/python2.5" and `source` it, and try again. Same error message. This is kindof curious, given that PYTHONPATH is clearly set; I can `echo $PYTHONPATH` and get back `:/usr/local/python/lib/python2.5`. I decided to check out what the include path looked like from inside: import sys print "\n".join(sys.path) which yields: > /System/Library/Frameworks/Python.framework/Versions/2.5/lib/python25.zip > /System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5 > /System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/plat- > darwin > /System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/plat- > mac > /System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/plat- > mac/lib-scriptpackages > /System/Library/Frameworks/Python.framework/Versions/2.5/Extras/lib/python > /System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/lib- > tk > /System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/lib- > dynload /Library/Python/2.5/site-packages > /System/Library/Frameworks/Python.framework/Versions/2.5/Extras/lib/python/PyObjC from which `/usr/local/python/yadda/yadda` is notably missing. Not sure what I'm supposed to do here. How do I get python to recognize this location as an include path? **UPDATE** As Sven Marnach suggested, I was neglecting to export PYTHONPATH. I've corrected that problem, and now see it show up when I print out `sys.path` from within Python. However, I still got the `TEST FAILED` error message I mentioned above, just with my new PYTHONPATH environment variable. So, I tried changing it from `/usr/local/python/lib/python2.5` to `/usr/local/python/lib/python2.5/site-packages`, exporting, and running the same `easy_install` command again. 
This leads to an all new result that at first _looked_ like success (but isn't): Creating /usr/local/python/lib/python2.5/site-packages/site.py Searching for pil Reading http://pypi.python.org/simple/pil/ Reading http://www.pythonware.com/products/pil Reading http://effbot.org/zone/pil-changes-115.htm Reading http://effbot.org/downloads/#Imaging Best match: PIL 1.1.7 Downloading http://effbot.org/media/downloads/PIL-1.1.7.tar.gz Processing PIL-1.1.7.tar.gz Running PIL-1.1.7/setup.py -q bdist_egg --dist-dir /var/folders/XW/XWpClVq7EpSB37BV3zTo+++++TI/-Tmp-/easy_install-krj9oR/PIL-1.1.7/egg-dist-tmp--Pyauy --- using frameworks at /System/Library/Frameworks [snipped: compiler warnings] -------------------------------------------------------------------- PIL 1.1.7 SETUP SUMMARY -------------------------------------------------------------------- version 1.1.7 platform darwin 2.5.1 (r251:54863, Sep 1 2010, 22:03:14) [GCC 4.0.1 (Apple Inc. build 5465)] -------------------------------------------------------------------- --- TKINTER support available --- JPEG support available --- ZLIB (PNG/ZIP) support available *** FREETYPE2 support not available *** LITTLECMS support not available -------------------------------------------------------------------- To add a missing option, make sure you have the required library, and set the corresponding ROOT variable in the setup.py script. To check the build, run the selftest.py script. zip_safe flag not set; analyzing archive contents... Image: module references __file__ No eggs found in /var/folders/XW/XWpClVq7EpSB37BV3zTo+++++TI/-Tmp-/easy_install-krj9oR/PIL-1.1.7/egg-dist-tmp--Pyauy (setup script problem?) Again, this looks good, but when I go to run my script: > Traceback (most recent call last): > File "checkerboard.py", line 1, in import Image, ImageDraw ImportError: No > module named Image When I check what's now under `/usr/local/python/` using `find .`, I get: > ./lib ./lib/python2.5 ./lib/python2.5/site-packages ./lib/python2.5/site- > packages/site.py ./lib/python2.5/site-packages/site.pyc So... nothing module-looking (I'm assuming site.py and site.pyc are metadata or helper scripts). Where did the install go? I note this: > To check the build, run the selftest.py script. But don't really know what that is. And I also noticed the "No eggs found" message. Are either of these hints? Answer: You are using the Apple-supplied Python 2.5 in OS X; it's a framework build and, by default, uses `/Library/Python/2.5/site-packages` as the location for installed packages, not `/usr/local`. Normally you shouldn't need to specify `--prefix` with an OS X framework build. Also beware that the `setuptools` (`easy_install`) supplied by Apple with OS X 10.5 is also rather old as is the version of Python itself. That said, installing `PIL` completely and correctly on OS X especially OS X 10.5 is not particularly simple. Search the archives or elsewhere for tips and/or binary packages. Particularly if you are planning to use other modules like MySQL or Django, my recommendation is to install everything (Python and PIL) using a package manager like [MacPorts](http://www.macports.org/).
Can PyQt4 and Qt4/C++ co-exist peacefully on Windows? Question: I had previously installed PyQt4 on my Windows XP machine and was successful in getting everything to run. This evening, I downloaded the full Qt4 SDK (open source version) and installed it as well. After adding the path to the bin directory to my system environment path I was able to build a few small programs I'd written in C++. Without changing my environment, I started up python and tried to import PyQt4.QtCore and got an error that it couldn't load the DLL. I removed the path to the Qt4 SDK bin directory from my environment path and was then able to run my python PyQt4 programs, but I could no longer build my C++ programs. First off, I'm not sure why the presence of the C++ SDK should impact the Python version because they're in different directories. I assume the issue is that, when python attempts to load the PyQt4 DLL, it thinks it's using an executable from the python path but, because the SDK path is first, that (incompatible) version is what is actually invoked. Is there a way that these two environments can live peacefully with each other such that I can build and run either type of program? Answer: The issue, as you correctly suggest, tends to be that the versions of the various DLLs (QtCore4.dll, QtGui4.dll etc) are different. I've tended to find that the problem occurs for the one expecting a newer version (so if the Qt SDK is installed second, but PyQt4 is in the path first, the Qt SDK would complain), but it sounds like you've got a different problem. There are two solutions that I've found to this: * Change your path for the different build/run environments (not very nice). * Make sure both PyQt4 and Qt/C++ are at the same Qt version so that either DLL will work (generally much easier). Since I have started installing both PyQt4 and Qt/C++ on a computer when I first start using it (and therefore they are at equivalent release versions), I have rarely had any problems with them coexisting.
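One quick way to see whether a version mismatch is what you're hitting: ask PyQt4 which Qt it was built against versus which Qt DLLs it actually loaded (these constants and `qVersion()` are part of PyQt4.QtCore): from PyQt4.QtCore import QT_VERSION_STR, PYQT_VERSION_STR, qVersion print "PyQt4 version:", PYQT_VERSION_STR print "Qt version PyQt4 was compiled against:", QT_VERSION_STR print "Qt version loaded at runtime:", qVersion() # if the last two differ, the wrong Qt DLLs are first in your PATH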
Python logging datagram handler Question: I am having a problem getting the example code shown in the Python docs for the logging DatagramHandler to work; the code shown below gives EOFError exceptions on every datagram received. import socket import logging import cPickle import struct import sys sock = socket.socket (socket.AF_INET, socket.SOCK_DGRAM) sock.bind (('localhost', 9000)) handler = logging.StreamHandler() handler.setFormatter(logging.Formatter("UDP LogViewer %(asctime)s %(message)s")) logger = logging.getLogger("Test") logger.addHandler(handler) try: while True: dgram_size = sock.recv(4) if len(dgram_size) < 4: break slen = struct.unpack(">L", dgram_size)[0] data = sock.recv(slen) while len(data) < slen: data = data + sock.recv(slen - len(data)) try: obj = cPickle.loads(data) record = logging.makeLogRecord(obj) logger.handle(record) except: print "exception", sys.exc_info()[0] finally: sock.close() However, the following code works; any ideas why? data, address = sock.recvfrom(8192) rec = logging.makeLogRecord(cPickle.loads(data[4:])) logger.handle(rec) Regards Answer: I expect your first `recv(4)` call copies the first four bytes out of your datagram and then throws the rest of the packet on the floor; your second call to `recv` then reads from the next datagram instead, and unpickling the mismatched data fails with EOFError. From my system's `udp(7)` manpage: All receive operations return only one packet. When the packet is smaller than the passed buffer, only that much data is returned; when it is bigger, the packet is truncated and the MSG_TRUNC flag is set. MSG_WAITALL is not supported. Try reading in the entire datagram, grabbing the length out of the first four bytes, and then working on the subset of the array that stores the entire datagram. Of course, if your pickles don't fit entirely within the [MTU](http://en.wikipedia.org/wiki/Maximum_transmission_unit) of the connection, it'll probably never work as you intend.
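Putting the two together, a sketch of the receive loop using one `recvfrom` per datagram plus the length prefix from the docs example; it reuses `sock` and `logger` as set up in the question, and assumes each pickled record fits in a single 8192-byte datagram: import cPickle import logging import struct while True: data, address = sock.recvfrom(8192) # one whole datagram at a time if len(data) < 4: break slen = struct.unpack(">L", data[:4])[0] obj = cPickle.loads(data[4:4 + slen]) logger.handle(logging.makeLogRecord(obj))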
How to maintain different versions of a python module? Question: I have this core python module we use in our facility called mfxLib. I need to be able to keep different versions of this module without breaking all the other modules/plugins that are importing this module. My solution was to keep duplicates of my module by renaming them mfxLib01 and mfxLib02, then to replace the original mfxLib module with an empty module containing only a `__init__.py` file that imports the latest version. # content of mfxLib.__init__.py from mfxLib02 import * This seems logical and seems to work, but I was wondering if there is a common practice for doing this? Guidelines to follow? etc. Thanks Answer: You can import a module _as_ another name. Commonly people use this to save typing in a long module name, for example: import numpy as np np.array([1,2,3,4]) Hence you could do: import mfxLib01 as mfxLib or import mfxLib02 as mfxLib then your code uses mfxLib everywhere. That might help...
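If you'd rather pin the version by configuration than hard-code it in `__init__.py`, a small sketch of one option; the `MFXLIB_VERSION` environment variable is a hypothetical switch, not an existing convention: # content of mfxLib/__init__.py import os _version = os.environ.get('MFXLIB_VERSION', '02') # default to the latest _impl = __import__('mfxLib%s' % _version) # re-export the chosen implementation's public names, like "import *" globals().update((k, v) for k, v in vars(_impl).items() if not k.startswith('_'))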
Python decorator for Django views: Check for a particular setting in UserProfile Question: I'd like to write a decorator to use on views all over my site to first check if the logged in user's UserProfile has a particular setting. In my case it's user.get_profile().user_status and the value could be either "expired" or "active". If the user_status is "expired" I want to redirect them to a billing account update page. If they are active, they can pass. I'd like it to be something like `@must_be_active` or `@paywall_check`. I've never written a decorator before. Ideas on how best to get started? Answer: First, read this <http://docs.djangoproject.com/en/1.2/topics/auth/#limiting- access-to-logged-in-users-that-pass-a-test> It's actually simpler if you don't write a decorator. from django.contrib.auth.decorators import user_passes_test def must_be_active(user): return user.get_profile().user_status == "active" def paywall_check(user): return user.get_profile().user_status != "expired" @user_passes_test(must_be_active) def my_view(request): pass # do the work # login_url is where failing users get sent; '/billing/update/' is illustrative @user_passes_test(paywall_check, login_url='/billing/update/') def another_view(request): pass # do the work
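Since you mentioned never having written a decorator: a hand-rolled version of the same idea might look like this sketch ('/billing/update/' is again a hypothetical URL for your billing page): from functools import wraps from django.http import HttpResponseRedirect def paywall_check(view_func): @wraps(view_func) # keep the wrapped view's name and docstring def wrapper(request, *args, **kwargs): if request.user.get_profile().user_status == "expired": return HttpResponseRedirect('/billing/update/') return view_func(request, *args, **kwargs) return wrapper @paywall_check def my_view(request): pass # only reached by non-expired users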
Key to OpenGL. Java / Scala / Python Question: One topic that has always been of the highest interest to me is 3D programming. I've made several attempts at programming small games and never really succeeded. After experiences with DirectX and C++, XNA and C#, as well as Unity3d and C#, I would like to try OpenGL. I'm just curious. When using C++ the way to go is rather clear. However, Java (and Scala, which I'm currently learning), Python ... are way more comfortable. After about 2 years of struggling with C++ without any remarkable success, I turned away from it. Now for Java/Scala/... there are many OpenGL bindings and I would like to choose the right one. On the other hand, there are few books on them. Java 3d and/or JOGL books are available, but when looking at Scala or Python things aren't that good. **What layer/wrapper/binding would you recommend (Java or Scala)? Is there a kind of standard?** **Is it possible to learn this binding by reading, for example, "OpenGL Superbible"? If not, can you recommend a book?** Any advice is welcome. If there's a good IDE (plugin), tool, website, tutorial, ... please let me know. Answer: I have used JOGL in many of my 3D projects. Learning how to use a binding is not as important as learning the actual API. Using OpenGL in C and Java is pretty much the same. The only thing that differs is the way you set up your rendering windows and buffers. I use the NetBeans IDE with the OpenGL for NetBeans plugin because it sets everything up for you and all you have to worry about is the OpenGL part. To learn OpenGL, any book is fine. If you have a good background in programming then you should catch on quite easily.
How to find recurring patterns on a hexdump? Question: I need to find recurring patterns from an hexdump output. Every line in my output file is something like: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 Where `00` is a byte in hexadecimal. The patterns aren't of fixed length but they always lie in one line. I have an idea on how to do this but I'd like to know what would be the most efficient method in your opinion, e.g. whether there is some sort of known algorithm I am unaware of. Also I'd like to code this in Python. Any suggestion is greatly appreciated :) Thanks **EDIT:** I need to find partition boot sectors in a disk dump. The problem is that the filesystem is uncommon so I need to scan the hexdump to find patterns frequently used in order to restrict the area of research. For example I am looking for byte-patterns like: 00 56 f0 43 d0 Answer: It is not apparent whether you know the substrings that you want to search for, or whether you need to discover a set of query substrings first. I think that discovery can be achieved by finding frequently occurring n-grams. Once you have your set of query substrings, you can proceed to find where they are, and how far apart they are (e.g. if some substring occurs every 1024 bytes, that may be a block size). Step 1: read your hexdump file and convert it back to a single string. I'll leave the details up to you. Step 2: for each interesting value of n (say 3, 4, 5 (like your example), 6, etc) use this function: from collections import Counter # needs 2.7 from operator import itemgetter def get_ngrams(strg, n, top=10, min_count=2): counter = Counter() for i in xrange(len(strg) - n + 1): gram = strg[i:i+n] counter[gram] += 1 sort_these = [(gram, count) for gram, count in counter.iteritems() if count >= min_count] best = sorted(sort_these, key=itemgetter(1), reverse=True)[:top] return best That will give you the most frequently occurring substrings. Step 3: where those strings occur: def multifind(strg, gram): positions = [] end = len(strg) pos = 0 while pos < end: pos = strg.find(gram, pos) if pos == -1: break positions.append(pos) pos += 1 return positions Step 4: how far apart those occurrences are: deltas = [b - a for a, b in zip(positions, positions[1:])]
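For step 1, a small sketch that turns hexdump lines like the ones in the question back into a single byte string, ready to feed to `get_ngrams`: import binascii def hexdump_to_string(path): with open(path) as f: # each line is whitespace-separated hex byte pairs: "00 56 f0 ..." hex_digits = ''.join(''.join(line.split()) for line in f) return binascii.unhexlify(hex_digits) strg = hexdump_to_string('dump.txt') print get_ngrams(strg, 5)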
python, dynamically implement a class onthefly Question: This is related to [python, dynamically implement a class onthefly](http://stackoverflow.com/questions/4387847/python-dynamically- implement-a-class-onthefly). When I restarted my PC, I couldn't get back to add comments to the post; below is an example to explain what I meant by saving the class_with_the_methods_used. class bank(object): def __init__(self, bal=0): self.bal = bal def deposit(self, amount): self.bal+=amount print self.bal def debit(self, amt): self.bal-=amt print self.bal bank.debit = debit myacct = bank() myacct.deposit(1000) # prints 1000 myacct.debit(99) # print 901 dir(myacct) # print [ ....'bal', 'debit', 'deposit'] Then I used pickle and saved the object myacct. After saving, I restarted my Python and tried the commands below: >>> import pickle >>> obj = pickle.load(open('bank.pkl')) >>> dir(obj) # prints [....'bal', 'deposit'] Note that 'debit' is not among the attributes. So my problem is: how do I make methods like 'debit' persistent? Answer: Check out the `new` module (http://docs.python.org/library/new.html). It has a lot of tools for doing things dynamically. The problem you are having is that debit is not an instance method, it is just a normal function. Methods defined in classes are different from functions defined outside.
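The underlying issue is that pickle stores the instance's attributes plus a *reference* to its class; methods live on the class, so anything bolted on at runtime is gone after a restart unless you re-apply it. One workaround, as a sketch, is to keep the patch in an importable module and re-attach it around loading (the module names mfx_patches and bankmodule are hypothetical): # mfx_patches.py -- holds the runtime additions def debit(self, amt): self.bal -= amt print self.bal def apply_patches(cls): cls.debit = debit # at load time import pickle import mfx_patches from bankmodule import bank # the class must be importable for pickle anyway mfx_patches.apply_patches(bank) obj = pickle.load(open('bank.pkl')) obj.debit(99) # works again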
Convert commented out print statements to logging calls - Python Question: I'm hacking on a framework right now, and I'd like to use logging. However, the developers of the framework have been using print statements to debug, and they are all commented out for the production release. I was wondering, does anyone know of a regex to find these and convert them to logging calls? This is what I was thinking so far: import re import sys import StringIO if not len(sys.argv) != 2: print 'Syntax: printtologging.py file_to_process' regex = r'#print (?P<debugstring>*)$' output = StringIO.StringIO('w+') def replace(match_object): return 'logging.debug({0})'.format(match_object.group_dict['debugstring']) with open(sys.argv[1]) as f: output.writelines([re.sub(regex, replace, line) for line in f.readlines()]) output.seek(0) print output.read() Though that doesn't seem to be working. My regexes are far from brilliant; could anyone help? Answer: If you want to do it in Python, consider using the [fileinput](http://docs.python.org/library/fileinput.html#module-fileinput) module: import fileinput import sys import re def convert(paths): for line in fileinput.input(paths, inplace=True, backup='~'): line=re.sub(r'#\s*print\s*(.*)',r'logging.debug(\1)',line) sys.stdout.write(line) if __name__=='__main__': convert(sys.argv[1:]) You could call the script this way % print2log.py *.py It converts the scripts in-place, and makes a backup file which ends with `~`. Note that the regex converts #print 'foo' --> logging.debug('foo') but adds an extra set of parentheses if the print statement already had one: #print('foo') --> logging.debug(('foo')) It also screws up multi-line print statements: #print('''foo logging.debug(('''foo) #bar''') --> #bar''') Fixing this is a much harder problem, requiring parsing the comments and using something smarter than regex, which can't handle nested parentheses properly. I think you could do it with [tokenize](http://docs.python.org/library/tokenize.html), and with code similar in spirit to [reindent.py](http://svn.python.org/projects/python/trunk/Tools/scripts/reindent.py), but it would take some time and thought to get it right.
How can I change the extension name in a string with bash? Question: I want to translate the following python code to bash. The code changes the extension name to .html and runs Safari to open it. #!/usr/bin/env python import os.path import os oldName = os.environ["TM_FILEPATH"] (name, ext) = os.path.splitext(oldName) rename = name + ".html" os.system("open -a Safari %s" % rename) How can I change the file extension with bash? Answer: file=somefile.whatevs open -a Safari "${file%.*}.html" `${file%.*}` strips the shortest trailing `.*` suffix, so appending `.html` swaps the extension. With the TextMate variable from the question, that would be `file="$TM_FILEPATH"`.
Making HTTP HEAD request with urllib2 from Python 2 Question: I'm trying to do a HEAD request of a page using Python 2. I am trying import misc_urllib2 ..... opender = urllib2.build_opener([misc_urllib2.MyHTTPRedirectHandler(), misc_urllib2.HeadRequest()]) with `misc_urllib2.py` containing class HeadRequest(urllib2.Request): def get_method(self): return "HEAD" class MyHTTPRedirectHandler(urllib2.HTTPRedirectHandler): def __init__ (self): self.redirects = [] def http_error_301(self, req, fp, code, msg, headers): result = urllib2.HTTPRedirectHandler.http_error_301( self, req, fp, code, msg, headers) result.redirect_code = code return result http_error_302 = http_error_303 = http_error_307 = http_error_301 But I am getting TypeError: __init__() takes at least 2 arguments (1 given) If I just do opender = urllib2.build_opener(misc_urllib2.MyHTTPRedirectHandler()) then it works fine Answer: This works just fine: import urllib2 request = urllib2.Request('http://localhost:8080') request.get_method = lambda : 'HEAD' response = urllib2.urlopen(request) print response.info() Tested with quick and dirty HTTPd hacked in python: Server: BaseHTTP/0.3 Python/2.6.6 Date: Sun, 12 Dec 2010 11:52:33 GMT Content-type: text/html X-REQUEST_METHOD: HEAD I've added a custom header field X-REQUEST_METHOD to show it works :) Here is HTTPd log: Sun Dec 12 12:52:28 2010 Server Starts - localhost:8080 localhost.localdomain - - [12/Dec/2010 12:52:33] "HEAD / HTTP/1.1" 200 - Edit: there is also [httplib2](https://github.com/httplib2/httplib2) import httplib2 h = httplib2.Http() resp = h.request("http://www.google.com", 'HEAD')
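For what it's worth, the TypeError in the question comes from `misc_urllib2.HeadRequest()` being instantiated with no arguments: `HeadRequest` subclasses `Request`, whose `__init__` requires a URL, and a `Request` is not a handler, so it doesn't belong in `build_opener` at all. The subclass itself works fine if you use it as the request; a sketch: import urllib2 from misc_urllib2 import HeadRequest, MyHTTPRedirectHandler opener = urllib2.build_opener(MyHTTPRedirectHandler()) # handlers only response = opener.open(HeadRequest('http://localhost:8080')) print response.info()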
Running shell commands from bytes objects in python 3.1 (Or writing UTF-8 mp3 tags in python 3.1) Question: I'm writing a script which uses .CUE files to split monolithic music files into individual tracks, then encodes them in MP3 along with correct tags from the CUE. It's all working, but unfortunately the tags (applied simply through including them with `lame`'s command line arguments) reliably show up as gibberish in iTunes when they include unicode characters (which they always do, due to the music being Japanese). I can fix them by running them through [this](http://code.google.com/p/mp3tagiconv/) script, but that's another script to run, and it occasionally attaches quote marks to the tags (a bug I have been unable to fix), requiring yet one more run of a script to remove them. So my preferred solution is to encode the `lame` command + arguments string as UTF-8 before running it, but python 3.1 appears to have no support for running commands from bytes rather than strings. Simply passing a string instead encodes incorrectly. Alternatively, I'm happy with simply using a tagging library to insert the tags after the encoding is finished, though a solution like that is slower and less elegant. Any suggestions are welcome, though! Thanks in advance. EDIT: I invoke `lame` like this (sorry for the long line):

    args = "lame --tt \"{0}\" --tn {1:02d} --ta \"{2}\" --tl \"{3}\" \"{4}\" \"{5}.mp3\"".format(item.title, item.tracknumber, item.artist, albumObject.title, item.wavFile, "{0:02d} ".format(item.tracknumber) + item.title)
    #args = bytearray(args, "utf-8")
    retcode = subprocess.check_call(args)

Answer: If you choose to use a tagging library you may look at [eyeD3](http://eyed3.nicfit.net); then you could set utf-8 encoded tags as follows:

    import eyeD3
    tag = eyeD3.Tag ('file.mp3')
    tag.setVersion (eyeD3.ID3_V2_4)
    tag.setTextEncoding (eyeD3.UTF_8_ENCODING)
    tag.setArtist ('artist')
    tag.setAlbum ('album')
    tag.setTitle ('title')
    tag.update ()
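Independently of the tagging route, the quoting headaches in the question go away if `lame` is invoked with an argument list instead of one formatted string - each element reaches the process verbatim, with no shell parsing in between. A sketch reusing the question's variable names (this fixes the quoting; whether iTunes then likes the tag encoding still depends on how lame writes its ID3 tags):

    import subprocess

    # list form: no shell, no manual escaping of quotes in titles
    args = ['lame',
            '--tt', item.title,
            '--tn', '{0:02d}'.format(item.tracknumber),
            '--ta', item.artist,
            '--tl', albumObject.title,
            item.wavFile,
            '{0:02d} {1}.mp3'.format(item.tracknumber, item.title)]
    retcode = subprocess.check_call(args)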
Error importing module _md5, and relevant RPM Question: I have a python script that used to run, although since moving servers at work it now throws up a strange error:

    >>> import _md5
    ImportError: No module named _md5

The general setup is all correct, as is my python path and seemingly everything else. I was told that I need to install the relevant RPM for this to work, but have no idea what this might be - could anyone please point me in the right direction? These machines have a setup that prohibits me using `yum`, so I need to make a request to those maintaining the system about which RPM I want installed. Answer: Based on an extra bit of information from the OP, they use Python 2.5+ on the new server. The suggested remedy is to use the standard `hashlib` module, which provides an MD5 hash implementation among other things.
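A minimal sketch of the `hashlib` replacement for code that used to import `md5`/`_md5`:

    import hashlib

    digest = hashlib.md5('some data').hexdigest()
    print digest  # 32-character hex string, same as the old md5 module gave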
Python Automated Web Page downloading, with Username, Password and Cookies Question: I'm trying to implement in Python a simple program that reads from web pages and writes them to files. There are about 2000 pages of messages incrementally numbered, but some numbers are missing. The Web site is username and password protected, and I'm using the same username and password I normally use to access it manually. I'm using some code examples with cookie handling I found in the official Python web site, but when I try them the website I'm trying to copy replies

> "Your browser is not accepting our cookies. To view this page, please set your browser preferences to accept cookies. (Code 0)"

Obviously there is a problem with cookies, and perhaps I'm not handling username and password correctly. Any suggestion regarding the following code?

    import urllib2
    import cookielib
    import string
    import urllib

    def cook():
        url = "http://www.URL.com/message/"
        cj = cookielib.LWPCookieJar()
        authinfo = urllib2.HTTPBasicAuthHandler()
        realm = "http://www.URL.com"
        username = "ID"
        password = "PSWD"
        host = "http://www.URL.com/message/"
        authinfo.add_password(realm, host, username, password)
        opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj), authinfo)
        urllib2.install_opener(opener)

        # Create request object
        txheaders = { 'User-agent' : "Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)" }
        try:
            req = urllib2.Request(url, None, txheaders)
            cj.add_cookie_header(req)
            f = urllib2.urlopen(req)
        except IOError, e:
            print "Failed to open", url
            if hasattr(e, 'code'):
                print "Error code:", e.code
        else:
            print f

    cook
    url = "http://www.URL.com/message/"
    urllib.urlretrieve(url + '1', 'filename')

Answer: Take a look at [Bolacha](https://github.com/gabrielfalcao/bolacha), it's a wrapper to httplib2 that handles cookies and other stuff...
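Two bugs in the posted code are worth pointing out regardless of library choice: `cook` on its own line never calls the function (it needs to be `cook()`), and `urllib.urlretrieve` goes through `urllib`, not `urllib2`, so it ignores the opener - and with it the cookie jar and auth handler - installed inside `cook()`. A hedged sketch of one way to keep everything on the same opener (realm is left as None via HTTPPasswordMgrWithDefaultRealm so the credentials apply whatever the server calls its realm):

    import urllib2, cookielib

    cj = cookielib.LWPCookieJar()
    authinfo = urllib2.HTTPBasicAuthHandler(urllib2.HTTPPasswordMgrWithDefaultRealm())
    authinfo.add_password(None, "http://www.URL.com/", "ID", "PSWD")
    opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj), authinfo)

    for n in range(1, 2001):
        try:
            page = opener.open("http://www.URL.com/message/%d" % n).read()
        except urllib2.HTTPError:
            continue  # some message numbers are missing
        with open("message%04d.html" % n, "wb") as out:
            out.write(page)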
Create and parse multipart HTTP requests in Python Question: I'm trying to write some python code which can create multipart mime http requests in the client, and then appropriately interpret them on the server. I have, I think, partially succeeded on the client end with this:

    from email.mime.multipart import MIMEMultipart, MIMEBase
    import httplib

    h1 = httplib.HTTPConnection('localhost:8080')
    msg = MIMEMultipart()
    fp = open('myfile.zip', 'rb')
    base = MIMEBase("application", "octet-stream")
    base.set_payload(fp.read())
    msg.attach(base)
    h1.request("POST", "http://localhost:8080/server", msg.as_string())

The only problem with this is that the email library also includes the Content-Type and MIME-Version headers, and I'm not sure how they're going to be related to the HTTP headers included by httplib:

    Content-Type: multipart/mixed; boundary="===============2050792481=="
    MIME-Version: 1.0

    --===============2050792481==
    Content-Type: application/octet-stream
    MIME-Version: 1.0

This may be the reason that when this request is received by my web.py application, I just get an error message. The web.py POST handler:

    class MultipartServer:
        def POST(self, collection):
            print web.input()

Throws this error:

    Traceback (most recent call last):
      File "/usr/local/lib/python2.6/dist-packages/web.py-0.34-py2.6.egg/web/application.py", line 242, in process
        return self.handle()
      File "/usr/local/lib/python2.6/dist-packages/web.py-0.34-py2.6.egg/web/application.py", line 233, in handle
        return self._delegate(fn, self.fvars, args)
      File "/usr/local/lib/python2.6/dist-packages/web.py-0.34-py2.6.egg/web/application.py", line 415, in _delegate
        return handle_class(cls)
      File "/usr/local/lib/python2.6/dist-packages/web.py-0.34-py2.6.egg/web/application.py", line 390, in handle_class
        return tocall(*args)
      File "/home/richard/Development/server/webservice.py", line 31, in POST
        print web.input()
      File "/usr/local/lib/python2.6/dist-packages/web.py-0.34-py2.6.egg/web/webapi.py", line 279, in input
        return storify(out, *requireds, **defaults)
      File "/usr/local/lib/python2.6/dist-packages/web.py-0.34-py2.6.egg/web/utils.py", line 150, in storify
        value = getvalue(value)
      File "/usr/local/lib/python2.6/dist-packages/web.py-0.34-py2.6.egg/web/utils.py", line 139, in getvalue
        return unicodify(x)
      File "/usr/local/lib/python2.6/dist-packages/web.py-0.34-py2.6.egg/web/utils.py", line 130, in unicodify
        if _unicode and isinstance(s, str): return safeunicode(s)
      File "/usr/local/lib/python2.6/dist-packages/web.py-0.34-py2.6.egg/web/utils.py", line 326, in safeunicode
        return obj.decode(encoding)
      File "/usr/lib/python2.6/encodings/utf_8.py", line 16, in decode
        return codecs.utf_8_decode(input, errors, True)
    UnicodeDecodeError: 'utf8' codec can't decode bytes in position 137-138: invalid data

My line of code is represented by the error line about half way down:

      File "/home/richard/Development/server/webservice.py", line 31, in POST
        print web.input()

It's coming along, but I'm not sure where to go from here. Is this a problem with my client code, or a limitation of web.py (perhaps it just can't support multipart requests)? Any hints or suggestions of alternative code libraries would be gratefully received. EDIT: The error above was caused by the data not being automatically base64 encoded. Adding

    encoders.encode_base64(base)

gets rid of this error, and now the problem is clear. The HTTP request isn't being interpreted correctly in the server, presumably because the email library is including what should be the HTTP headers in the body instead:

    <Storage {'Content-Type: multipart/mixed': u'',
              ' boundary': u'"===============1342637378=="\n'
              'MIME-Version: 1.0\n\n--===============1342637378==\n'
              'Content-Type: application/octet-stream\n'
              'MIME-Version: 1.0\n'
              'Content-Transfer-Encoding: base64\n'
              '\n0fINCs PBk1jAAAAAAAAA.... etc

So something is not right there. Thanks, Richard Answer: There are a number of things wrong with your request. As TokenMacGuy suggests, multipart/mixed is unused in HTTP; use multipart/form-data instead. In addition, parts should have a Content-disposition header. A python fragment to do that can be found in the [Code Recipes](http://code.activestate.com/recipes/146306-http-client-to-post-using-multipartform-data/).
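A rough sketch of the recipe's idea - building a multipart/form-data body by hand and sending the Content-Type as a real HTTP header rather than leaving it inside the body (the boundary string here is chosen arbitrarily; a real client should pick one unlikely to occur in the payload):

    import httplib

    boundary = '----------boundary_1234'
    data = open('myfile.zip', 'rb').read()
    body = '\r\n'.join([
        '--' + boundary,
        'Content-Disposition: form-data; name="file"; filename="myfile.zip"',
        'Content-Type: application/octet-stream',
        '',
        data,
        '--' + boundary + '--',
        '',
    ])
    h1 = httplib.HTTPConnection('localhost:8080')
    h1.request('POST', '/server', body,
               {'Content-Type': 'multipart/form-data; boundary=' + boundary})
    print h1.getresponse().status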
Apache mod_python with django issue Question: While running a django application on top of apache2 mod_python, I am getting this error message in my apache error log.

    [Tue Dec 14 14:26:45 2010] [error] [client SOME_IP] IOError: Write failed, client closed connection., referer: http://example.com/
    Traceback (most recent call last):
      File "/usr/lib/python2.6/dist-packages/mod_python/importer.py", line 1931, in ReportError
        req.write(text)
    IOError: Write failed, client closed connection.
    [Tue Dec 14 14:26:45 2010] [error] [client SOME_IP] python_handler: Dispatch() returned non-integer., referer: http://example.com/

Can anyone please suggest some solution to this? Answer: The better long-term solution is to not use mod_python, since mod_python is no longer in development, and will not be supported in future versions of Django. Consider using [mod_wsgi](http://docs.djangoproject.com/en/dev/howto/deployment/modwsgi/) instead. The [Django documentation](http://docs.djangoproject.com/en/dev/howto/deployment/modpython/) has this to say about mod_python:

> Support for mod_python has been deprecated, and will be removed in Django 1.5. If you are configuring a new deployment, you are strongly encouraged to consider using mod_wsgi or any of the other supported backends.
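For reference, a minimal `django.wsgi` file of the kind the mod_wsgi deployment guide describes, assuming a hypothetical project called `mysite` living under `/path/to`:

    import os, sys

    # make the project importable and point Django at its settings
    sys.path.append('/path/to')
    sys.path.append('/path/to/mysite')
    os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'

    import django.core.handlers.wsgi
    application = django.core.handlers.wsgi.WSGIHandler()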
Is it possible to import a 3D model into pyglet? Question: A friend and I are working on an OpenGL game written in python, using the pyglet library. We've now finished the map drawing code, and have come to a point where we need to import some models that will be displayed on the map. Pyglet doesn't seem to support it, apart from some old SVN code, apparently (I googled around a bit with not much luck). Does anyone know a good python script/library that can import 3D models? Answer: [Open Asset Import Library](http://assimp.sf.net) (Assimp) supports ~35 different 3D file formats. It has python bindings in its repository, but I don't know for sure if they're up-to-date and feature-complete. Still, assimp might be worth a try (even though I am not unbiased, since I'm one of its founders).
Python solve equation for one variable Question: I'm trying to solve an equation in python using SymPy. I have a generated equation (something like `function = y(8.0-(y**3.0))`) which I use with SymPy to create a new equation like this: `eq = sympy.Eq(function, 2)`, which outputs `y(8.0-(y**3.0)) == 2`. But `sympy.solve(eq)` doesn't seem to work.

    >>> from sympy import Eq, Symbol as sym, solve
    >>> y = sym('y')
    >>> eqa = Eq(y(8.0-(y**3.0)), 8)
    >>> solve(eqa)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/lib/pymodules/python2.6/sympy/solvers/solvers.py", line 332, in solve
        result = tsolve(f, *symbols)
      File "/usr/lib/pymodules/python2.6/sympy/solvers/solvers.py", line 716, in tsolve
        raise NotImplementedError("Unable to solve the equation.")
    NotImplementedError: Unable to solve the equation.

Thanks for reading. Answer: Yours is a nonlinear equation ... so you can use `optimize.fsolve` for it. For further details look for the function in this [scipy tutorial](http://www.tau.ac.il/~kineret/amit/scipy_tutorial/)
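A short sketch of the `fsolve` route, using the `Eq(function, 2)` form from the question rewritten as f(y) = 0 (note the explicit `*` - `y*(8.0 - y**3.0)` - and that fsolve finds only one root per numeric starting guess):

    from scipy.optimize import fsolve

    f = lambda y: y*(8.0 - y**3.0) - 2
    print fsolve(f, 2.0)  # converges to the root near 1.9; a guess near 0 finds the other one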
Python (newbie) Parse XML from API call Question: I've looked for some tutorials/other questions on stack/documentation and still can't figure it out. ugh!!! I'm making the API request and parsing it out (I want to assign the values to variables, but that's a bonus to this question). This is what I'm trying. Why can't I list the title and link for the items?

    #!/usr/bin/python
    # Screen Scraper for Subs
    import urllib
    from xml.etree import ElementTree as ET

    show = 'heroes'
    season = '4'
    language = 'en'
    limit = '1'

    requestURL = 'http://api.allsubs.org/index.php?' \
        + 'search=' + show \
        + '+season+' + season \
        + '&language=' + language \
        + '&limit=' + limit

    root = ET.parse(urllib.urlopen(requestURL)).getroot()
    print root
    print '\n'

    items = root.findall('items')
    for item in items:
        item.find('title').text  # should print: <![CDATA[Heroes Season 4 Subtitles]]>
        item.find('link').text   # Should print: http://www.allsubs.org/subs-download/heroes+season+4/1223435/

XML Response:

    <AllSubsAPI>
        <title>AllSubs API: Subtitles Search</title>
        <link>http://www.allsubs.org</link>
        <description><![CDATA[Subtitles Search for Heroes Season 4]]></description>
        <language>en-us</language>
        <results>1</results>
        <found_results>24</found_results>
        <items>
            <item>
                <title><![CDATA[Heroes Season 4 Subtitles]]></title>
                <link>http://www.allsubs.org/subs-download/heroes+season+4/1223435/</link>
                <filename>heroes-season-4-english-heroes-season-4-en.zip</filename>
                <files_in_archive>Heroes - 4x01-02 - Orientation.HDTV.FQM.en.srt|Heroes - 4x17 - The Art of Deception.HDTV.2HD.en.srt|Heroes - 4x07 - Strange Attractors.HDTV.LOL.en.srt|Heroes - 4x08 - Once Upon a Time in Texas.HDTV.2HD.en.srt|Heroes - 4x07 - Strange Attractors.720p HDTV.DIMENSION.en.srt|Heroes - 4x05 - Hysterical Blindness.720p HDTV.X264.en.srt|Heroes - 4x09 - Shadowboxing.HDTV.LOL.en.srt|Heroes - 4x16 - Pass Fail.HDTV.LOL.en.srt|Heroes - 4x04 - Acceptance.HDTV.en.srt|Heroes - 4x01-02 - Orientation.720p HDTV.DIMENSION.en.srt|Heroes - 4x06 - Tabula Rasa.HDTV.NoTV.en.srt|Heroes - 4x10 - Brother's Keeper.HDTV.FQM.en.srt|Heroes - 4x04 - Acceptance.HDTV.FQM.en.srt|Heroes - 4x14 - Let It Bleed.720p HDTV.DIMENSION.en.srt|Heroes - 4x06 - Tabula Rasa.720p HDTV.SiTV.en.srt|Heroes - 4x08 - Once Upon a Time in Texas.HDTV.NoTV.en.srt|Heroes - 4x12 - The Fifth Stage.HDTV.LOL.en.srt|Heroes - 4x19 - Brave New World.HDTV.LOL.en.srt|Heroes - 4x15 - Close to You.720p HDTV.DIMENSION.en.srt|Heroes - 4x03 - Ink.720p HDTV.DIMENSION.en.srt|Heroes - 4x11 - Thanksgiving.720p HDTV.DIMENSION.en.srt|Heroes - 4x13 - Upon This Rock.720p HDTV.DIMENSION.en.srt|Heroes - 4x13 - Upon This Rock.HDTV.LOL.en.srt|Heroes - 4x14 - Let It Bleed.HDTV.LOL.en.srt|Heroes - 4x15 - Close to You.HDTV.LOL.en.srt|Heroes - 4x12 - The Fifth Stage.720p HDTV.DIMENSION.en.srt|Heroes - 4x18 - The Wall.HDTV.LOL.en.srt|Heroes - 4x08 - Once Upon a Time in Texas.720p HDTV.CTU.en.srt|Heroes - 4x17 - The Art of Deception.HDTV.CTU.en.srt|Heroes - 4x09 - Shadowboxing.720p HDTV.DIMENSION.en.srt|Heroes - 4x10 - Brother's Keeper.720p HDTV.DIMENSION.en.srt|Heroes - 4x04 - Acceptance.720p HDTV.CTU.en.srt|Heroes - 4x11 - Thanksgiving.HDTV.FQM.en.srt|Heroes - 4x03 - Ink.HDTV.FQM.en.srt|Heroes - 4x05 - Hysterical Blindness.HDTV.XII.en.srt|</files_in_archive>
                <languages>en</languages>
                <added_on>2010-02-16</added_on>
            </item>
        </items>
    </AllSubsAPI>

UPDATE: This worked, thanks for the help and for pointing out my typo.

    items = root.findall('items/item')
    for item in items:
        print item.find('title').text
        print item.find('link').text

Answer:

    items = root.findall('items')

should be

    items = root.findall('items/item')
how to optimally count elements in a python list Question: This is almost the same question as [**here**](http://stackoverflow.com/questions/3710976/counting-unique-elements-in-a-list), except that I am asking about the most efficient solution for a sorted result. I have a list (about 10 integers randomly between 0 and 12), for example:

    the_list = [5, 7, 6, 5, 5, 4, 4, 7, 5, 4]

I want to create a function that returns a list of tuples (item, count) ordered by the first element, for example

    output = [(4, 3), (5, 4), (6, 1), (7, 2)]

So far I have used:

    def dupli(the_list):
        return [(item, the_list.count(item)) for item in sorted(set(the_list))]

But I call this function almost a million times and I need to make it as fast as I (python) can. Therefore my question: **How to make this function less time consuming? (what about memory?)**

I have played around a bit, but nothing obvious came up:

    from timeit import Timer as T
    number=10000
    setup = "the_list=[5, 7, 6, 5, 5, 4, 4, 7, 5, 4]"

    stmt = "[(item, the_list.count(item)) for item in sorted(set(the_list))]"
    T(stmt=stmt, setup=setup).timeit(number=number)
    Out[230]: 0.058799982070922852

    stmt = "L = []; \nfor item in sorted(set(the_list)): \n L.append((item, the_list.count(item)))"
    T(stmt=stmt, setup=setup).timeit(number=number)
    Out[233]: 0.065041065216064453

    stmt = "[(item, the_list.count(item)) for item in set(sorted(the_list))]"
    T(stmt=stmt, setup=setup).timeit(number=number)
    Out[236]: 0.098351955413818359

Thanks, Christophe Answer: I would try:

    from collections import defaultdict

    output = defaultdict(lambda: 0)
    for item in the_list:
        output[item] += 1
    return sorted(output.items())
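On Python 2.7 or later, `collections.Counter` packages the same single-pass idea, a sketch:

    from collections import Counter

    def dupli(the_list):
        # one pass to count, then a sort over the (at most 13) distinct values
        return sorted(Counter(the_list).items())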
Linear Interpolation. How to implement this algorithm in C ? (Python version is given) Question: There exists one very good linear interpolation method. It performs linear interpolation requiring **at most one multiply per output sample**. I found its description in the third edition of Understanding DSP by Lyons. This method involves a special hold buffer. Given a number of samples to be inserted between any two input samples, it produces output points using linear interpolation. Here, I have rewritten this algorithm using Python:

    temp1, temp2 = 0, 0
    iL = 1.0 / L
    for i in x:
        hold = [i-temp1] * L
        temp1 = i
        for j in hold:
            temp2 += j
            y.append(temp2 * iL)

where x contains the input samples, L is the number of points to be inserted, and y will contain the output samples. My question is **how to implement such an algorithm in ANSI C in the most effective way**, e.g. is it possible to avoid the second loop? NOTE: the presented Python code is just to understand how this algorithm works. UPDATE: here is an example how it works in Python:

    x=[]
    y=[]
    hold=[]
    num_points=20
    points_inbetween = 2

    temp1,temp2=0,0

    for i in range(num_points):
        x.append( sin(i*2.0*pi * 0.1) )

    L = points_inbetween
    iL = 1.0/L
    for i in x:
        hold = [i-temp1] * L
        temp1 = i
        for j in hold:
            temp2 += j
            y.append(temp2 * iL)

Let's say x=[.... 10, 20, 30 ....]. Then, if L=1, it will produce [... 10, 15, 20, 25, 30 ...]

Answer: ## Interpolation in the sense of "signal sample rate increase" ... or as I call it, "upsampling" (wrong term, probably. disclaimer: I have not read Lyons'). I just had to understand what the code does and then re-write it for readability. As given it has a couple of problems: a) it is inefficient - two loops is ok, but it does a multiplication for every single output item; also it uses intermediary lists (`hold`) and generates the result with `append` (small beer) b) it interpolates the first interval wrong; it generates fake data in front of the first element. Say we have multiplier=5 and seq=[20,30] - it will generate [4, 8, 12, 16, 20, 22, 24, 26, 28, 30] instead of [20, 22, 24, 26, 28, 30].

So here is the algorithm in the form of a generator:

    def upsampler(seq, multiplier):
        if seq:
            step = 1.0 / multiplier
            y0 = seq[0]; yield y0
            for y in seq[1:]:
                dY = (y-y0) * step
                for i in range(multiplier-1):
                    y0 += dY; yield y0
                y0 = y; yield y0

Ok and now for some tests:

    >>> list(upsampler([], 3))  # this is just the same as [Y for Y in upsampler([], 3)]
    []
    >>> list(upsampler([1], 3))
    [1]
    >>> list(upsampler([1,2], 3))
    [1, 1.3333333333333333, 1.6666666666666665, 2]
    >>> from math import sin, pi
    >>> seq = [sin(2.0*pi * i/10) for i in range(20)]
    >>> seq
    [0.0, 0.58778525229247314, 0.95105651629515353, 0.95105651629515364, 0.58778525229247325, 1.2246063538223773e-016, -0.58778525229247303, -0.95105651629515353, -0.95105651629515364, -0.58778525229247336, -2.4492127076447545e-016, 0.58778525229247214, 0.95105651629515353, 0.95105651629515364, 0.58778525229247336, 3.6738190614671318e-016, -0.5877852522924728, -0.95105651629515342, -0.95105651629515375, -0.58778525229247347]
    >>> list(upsampler(seq, 2))
    [0.0, 0.29389262614623657, 0.58778525229247314, 0.76942088429381328, 0.95105651629515353, 0.95105651629515364, 0.95105651629515364, 0.7694208842938135, 0.58778525229247325, 0.29389262614623668, 1.2246063538223773e-016, -0.29389262614623646, -0.58778525229247303, -0.76942088429381328, -0.95105651629515353, -0.95105651629515364, -0.95105651629515364, -0.7694208842938135, -0.58778525229247336, -0.29389262614623679, -2.4492127076447545e-016, 0.29389262614623596, 0.58778525229247214, 0.76942088429381283, 0.95105651629515353, 0.95105651629515364, 0.95105651629515364, 0.7694208842938135, 0.58778525229247336, 0.29389262614623685, 3.6738190614671318e-016, -0.29389262614623618, -0.5877852522924728, -0.76942088429381306, -0.95105651629515342, -0.95105651629515364, -0.95105651629515375, -0.76942088429381361, -0.58778525229247347]

And here is my translation to C, fit into Kratz's fn template:

    /**
     *
     * @param src caller supplied array with data
     * @param src_len len of src
     * @param steps to interpolate
     * @param dst output param will be filled with (src_len - 1) * steps + 1 samples
     */
    float* linearInterpolation(float* src, int src_len, int steps, float* dst)
    {
        float step, y0, dY;
        float *src_end, *out = dst;
        int i;

        if (src_len > 0) {
            step = 1.0 / steps;
            /* the comma expression emits each input sample, then the inner
               loop fills in the steps-1 interior points of the interval */
            for (src_end = src + src_len; (*dst++ = y0 = *src++), src < src_end; ) {
                dY = (*src - y0) * step;
                for (i = steps - 1; i > 0; i--) {
                    *dst++ = y0 += dY;
                }
            }
        }
        return out;
    }

Please note the C snippet is "typed but never compiled or run", so there might be syntax errors, off-by-1 errors etc. But overall the idea is there.
What does Python return when instantiating new classes? Question: I'm finding loads of quirks with Python when instantiating a new class. I'm sure it's just because I'm not used to the language, but even so, the behaviour I can see is really strange. If I open up iPython and type the following:

    class Person:
        def __init__(self, name):
            self.name = name

        def hello(self):
            print "Hello, " + self.name

Everything works exactly as I'd expect it to:

    In [2]: Person
    Out[2]: <class __main__.Person at 0x1c97330>

    In [3]: p = Person("Jamie")

    In [4]: p
    Out[4]: <__main__.Person instance at 0x1c90b98>

    In [5]: p.hello()
    Hello, Jamie

However, if I then access a separate class inside a package - nothing too fancy, I might add - and instantiate a new class, it all goes wrong. [Here's the link to the code](https://github.com/jamierumbelow/palestrina/blob/master/palestrina/cache.py) for **palestrina/cache.py**

    In [6]: from palestrina.cache import Cache

    In [7]: Cache
    Out[7]: <class palestrina.cache.Cache at 0x1c97750>

    In [8]: c = Cache(application = 'example', backend = 'filesystem')

    In [9]: c
    Out[9]: ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)

    /Users/jamierumbelow/Sites/Os/palestrina/<ipython console> in <module>()

    /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/IPython/Prompts.pyc in __call__(self, arg)
        550
        551         # and now call a possibly user-defined print mechanism
    --> 552         manipulated_val = self.display(arg)
        553
        554         # user display hooks can change the variable to be stored in

    /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/IPython/Prompts.pyc in _display(self, arg)
        576             return IPython.generics.result_display(arg)
        577         except TryNext:
    --> 578             return self.shell.hooks.result_display(arg)
        579
        580     # Assign the default display method:

    /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/IPython/hooks.pyc in __call__(self, *args, **kw)
        139             #print "prio",prio,"cmd",cmd #dbg
        140             try:
    --> 141                 ret = cmd(*args, **kw)
        142                 return ret
        143             except ipapi.TryNext, exc:

    /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/IPython/hooks.pyc in result_display(self, arg)
        169
        170     if self.rc.pprint:
    --> 171         out = pformat(arg)
        172         if '\n' in out:
        173             # So that multi-line strings line up with the left column of

    /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/pprint.pyc in pformat(self, object)
        109     def pformat(self, object):
        110         sio = _StringIO()
    --> 111         self._format(object, sio, 0, 0, {}, 0)
        112         return sio.getvalue()
        113

    /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/pprint.pyc in _format(self, object, stream, indent, allowance, context, level)
        127             self._readable = False
        128             return
    --> 129         rep = self._repr(object, context, level - 1)
        130         typ = _type(object)
        131         sepLines = _len(rep) > (self._width - 1 - indent - allowance)

    /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/pprint.pyc in _repr(self, object, context, level)
        221     def _repr(self, object, context, level):
        222         repr, readable, recursive = self.format(object, context.copy(),
    --> 223                                                 self._depth, level)
        224         if not readable:
        225             self._readable = False

    /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/pprint.pyc in format(self, object, context, maxlevels, level)
        233         and whether the object represents a recursive construct.
        234         """
    --> 235         return _safe_repr(object, context, maxlevels, level)
        236
        237

    /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/pprint.pyc in _safe_repr(object, context, maxlevels, level)
        318             return format % _commajoin(components), readable, recursive
        319
    --> 320     rep = repr(object)
        321     return rep, (rep and not rep.startswith('<')), False
        322

    TypeError: 'bool' object is not callable

I can't understand what is going on here. Can someone explain to me what might be happening? Thanks. Answer: Well, you removed the relevant part (the traceback) and replaced it by `...` in your paste. But it looks like you have an error in the representation of the class. Here's a simulation of the error:

    >>> class MyClass(object):
    ...     def __repr__(self):
    ...         return True()
    ...
    >>> c = MyClass()
    >>> c

Please check the traceback you removed and you'll see what is happening exactly. If you can't, please edit your question and include it, so we can explain further. Providing the source code of that class would also help.
Mixed lexicographic and numeric sorting Question: I have a list such as:

## input.txt

    foo.bar.1
    foo.bar.2
    foo.bar.3.x.y.z
    foo.bar.10
    foo.bar.0
    baz.10.qux
    baz.3.qux

that needs to be sorted. I need to treat this as dot-separated fields, where numeric fields need to be sorted numerically. So I wrote:

## mixsort.py

    #!/usr/bin/env python
    import sys
    seq = map(lambda l: map(lambda s: (s.isdigit() and [int(s)] or [s])[0], l),
              [ s.rstrip().split('.') for s in sys.stdin.readlines() ])
    seq.sort()
    sys.stdout.write( '\n'.join(['.'.join([str(i) for i in l]) for l in seq]) )

## usage

    $ mixsort.py < input.txt
    baz.3.qux
    baz.10.qux
    foo.bar.0
    foo.bar.1
    foo.bar.2
    foo.bar.3.x.y.z
    foo.bar.10

But am I re-inventing the wheel here? Is there a common *nix utility that does what I need? Is there a switch to `sort(1)` that will help me out? (Note that the position of the numeric fields is unknown.) Is there a better way to do the same thing? Answer: ### coreutils version 8.7 Use version sort: `sort -V`: Input:

    frayser@gentoo ~ $ cat /tmp/list.ran
    foo.bar.1
    foo.bar.2
    baz.10.qux
    baz.3.qux
    foo.bar.0
    foo.bar.3.x.y.z
    foo.bar.10

Sort:

    frayser@gentoo ~ $ LC_ALL=C sort -V /tmp/list.ran
    baz.3.qux
    baz.10.qux
    foo.bar.0
    foo.bar.1
    foo.bar.2
    foo.bar.3.x.y.z
    foo.bar.10

I found this in the [info pages](http://www.gnu.org/software/coreutils/manual/html_node/Details-about-version-sort.html#Details-about-version-sort) as suggested in the manpage: `info coreutils "sort invocation"`
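If the Python route is preferred after all (say, no coreutils 8.7 around), the usual idiom is a key function rather than rewriting the rows; a sketch:

    import sys

    def natural_key(line):
        # digit fields compare as ints, everything else as strings
        return [int(f) if f.isdigit() else f for f in line.rstrip().split('.')]

    for line in sorted(sys.stdin, key=natural_key):
        sys.stdout.write(line)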
python: shuffle characters in string to get all possible string combinations Question: just looking for a script in Python which receives some string and returns all possible strings made up of all the possible combinations of the chars in the original string... I've found scripts to shuffle randomly the chars in a string, but they only return one random combination, and what I'm looking for is all the possible combinations... Say, for example:

    script.py "abc"
    abc
    acb
    bac
    bca
    cab
    cba

Thanks! Answer: [`itertools.permutations`](http://docs.python.org/library/itertools.html#itertools.permutations)

    >>> import itertools
    >>> import pprint
    >>> pprint.pprint(list(itertools.permutations("spam")))
    [('s', 'p', 'a', 'm'),
     ('s', 'p', 'm', 'a'),
     ('s', 'a', 'p', 'm'),
     ('s', 'a', 'm', 'p'),
     ('s', 'm', 'p', 'a'),
     ('s', 'm', 'a', 'p'),
     ('p', 's', 'a', 'm'),
     ('p', 's', 'm', 'a'),
     ('p', 'a', 's', 'm'),
     ('p', 'a', 'm', 's'),
     ('p', 'm', 's', 'a'),
     ('p', 'm', 'a', 's'),
     ('a', 's', 'p', 'm'),
     ('a', 's', 'm', 'p'),
     ('a', 'p', 's', 'm'),
     ('a', 'p', 'm', 's'),
     ('a', 'm', 's', 'p'),
     ('a', 'm', 'p', 's'),
     ('m', 's', 'p', 'a'),
     ('m', 's', 'a', 'p'),
     ('m', 'p', 's', 'a'),
     ('m', 'p', 'a', 's'),
     ('m', 'a', 's', 'p'),
     ('m', 'a', 'p', 's')]

(The `pprint` is just there to make the output look neater.) Or, if you prefer,

    >>> list(map("".join, itertools.permutations("spam")))
    ['spam', 'spma', 'sapm', 'samp', 'smpa', 'smap', 'psam', 'psma', 'pasm', 'pams', 'pmsa', 'pmas', 'aspm', 'asmp', 'apsm', 'apms', 'amsp', 'amps', 'mspa', 'msap', 'mpsa', 'mpas', 'masp', 'maps']
How to convert regexp to python from perl Question: in Perl:

    if ($test =~ /^id\:(.*)$/ ) {
        print $1;
    }

Answer: In Python:

    import re

    test = 'id:foo'
    match = re.search(r'^id:(.*)$', test)
    if match:
        print match.group(1)

In Python, regular expressions are available through the [`re` library](http://docs.python.org/library/re.html). The `r` before the string indicates that it is a [raw string literal](http://docs.python.org/library/re.html#raw-string-notation), meaning that backslashes are not treated specially (otherwise every backslash would need to be escaped with another backslash in order for a literal backslash to make its way into the regex string). I have used `re.search` here because this is the closest equivalent to Perl's `=~` operator. There is another function `re.match` which does the same thing but only checks for a match starting at the beginning of the string (counter-intuitive to a Perl programmer's definition of "matching"). See [this explanation](http://docs.python.org/library/re.html#search-vs-match) for full details of the differences between the two. Also note that there is no need to escape the `:` since it is not a special character in regular expressions.
Why does "c.execute(...)" break the loop? Question: I am trying to change some data in an sqlite3 file, and my non-existent knowledge of python and google-fu made me end up with this code:

    #!/usr/bin/python
    # Filename : hello.py
    from sqlite3 import *

    conn = connect('database')
    c = conn.cursor()
    c.execute('select * from table limit 2')
    for row in c:
        newname = row[1]
        newname = newname[:-3]+"hello"
        newdata = "UPDATE table SET name = '" + newname + "', originalPath = '' WHERE id = '" + str(row[0]) + "'"
        print row
        c.execute(newdata)
    conn.commit()
    c.close()

It works like a charm on the first row, but for some reason it only runs the loop one time (only the first row in the table gets modified). When I remove "c.execute(newdata)" it loops through the first two rows in the table, as it should. How do I make it work? Answer: It's doing that because as soon as you do `c.execute(newdata)` the cursor is no longer pointing at the original result set. I would do it this way:

    #!/usr/bin/python
    # Filename : hello.py
    from sqlite3 import *

    conn = connect('database')
    c = conn.cursor()
    c.execute('select * from table limit 2')
    result = c.fetchall()
    for row in result:
        newname = row[1]
        newname = newname[:-3]+"hello"
        newdata = "UPDATE table SET name = '" + newname + "', originalPath = '' WHERE id = '" + str(row[0]) + "'"
        print row
        c.execute(newdata)
    conn.commit()
    c.close()
    conn.close()
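As a side note, building the UPDATE by string concatenation breaks as soon as a name contains a single quote (and invites SQL injection); the sqlite3 module's `?` placeholders handle the quoting for you. A sketch of the same statement inside the loop:

    c.execute("UPDATE table SET name = ?, originalPath = '' WHERE id = ?",
              (newname, row[0]))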
Python BeautifulSoup equivalent to lxml make_links_absolute Question: So lxml has a very handy feature: make_links_absolute:

    doc = lxml.html.fromstring(some_html_page)
    doc.make_links_absolute(url_for_some_html_page)

and all the links in doc are absolute now. Is there an easy equivalent in BeautifulSoup, or do I simply need to pass it through urlparse and normalize it:

    soup = BeautifulSoup(some_html_page)
    for tag in soup.findAll('a', href=True):
        url_data = urlparse(tag['href'])
        if url_data[0] == "":
            full_url = url_for_some_html_page + test_url

Answer: In my answer to [What is a simple way to extract the list of URLs on a webpage using python?](http://stackoverflow.com/questions/4139989/what-is-a-simple-way-to-extract-the-list-of-urls-on-a-webpage-using-python/4140102#4140102) I covered that incidentally as part of the extraction step; you could easily write a method to do it on the soup and not just extract it.

    import urlparse

    def make_links_absolute(soup, url):
        for tag in soup.findAll('a', href=True):
            tag['href'] = urlparse.urljoin(url, tag['href'])
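A short usage sketch (the URL is hypothetical):

    from BeautifulSoup import BeautifulSoup  # BeautifulSoup 3
    import urllib2

    url = 'http://example.com/some/page.html'
    soup = BeautifulSoup(urllib2.urlopen(url).read())
    make_links_absolute(soup, url)
    print soup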