Q:
Run Pylons controller as separate app?
I have a Pylons app where I would like to move some of the logic to a separate batch process. I've been running it under the main app for testing, but it is going to be doing a lot of work in the database, and I'd like it to be a separate process that will be running in the background constantly. The main pylons app will submit jobs into the database, and the new process will do the work requested in each job.
How can I launch a controller as a standalone script?
I currently have:
from warehouse2.controllers import importServer
importServer.runServer(60)
and in the controller file, but not part of the controller class:
def runServer(sleep_secs):
    try:
        imp = ImportserverController()
        while True:
            imp.runImport()
            sleepFor(sleep_secs)
    except Exception, e:
        log.info("Unexpected error: %s" % sys.exc_info()[0])
        log.info(e)
But starting ImportServer.py on the command line results in:
2008-09-25 12:31:12.687000 Could not locate a bind configured on mapper Mapper|ImportJob|n_imports, SQL expression or this Session
A:
If you want to load parts of a Pylons app, such as the models from outside Pylons, load the Pylons app in the script first:
from paste.deploy import appconfig
from pylons import config
from YOURPROJ.config.environment import load_environment
conf = appconfig('config:development.ini', relative_to='.')
load_environment(conf.global_conf, conf.local_conf)
That will load the Pylons app, which sets up most of the state so that you can proceed to use the SQLAlchemy models and Session to work with the database.
Note that if your code is using the pylons globals such as request/response/etc then that won't work since they require a request to be in progress to exist.
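Putting the two together, a standalone batch script for this question's setup might look like the sketch below. The warehouse2 project name, the controller import path, and the plain time.sleep() loop are assumptions taken from the question rather than part of this answer, and the caveat about request globals still applies if the controller touches them.
import time
from paste.deploy import appconfig
from warehouse2.config.environment import load_environment

conf = appconfig('config:development.ini', relative_to='.')
load_environment(conf.global_conf, conf.local_conf)

# Import the controller only after the environment is loaded, so the
# SQLAlchemy Session is already configured when the import code runs.
from warehouse2.controllers.importServer import ImportserverController

imp = ImportserverController()
while True:
    imp.runImport()
    time.sleep(60)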
A:
I'm redacting my response and upvoting the other answer by Ben Bangert, as it's the correct one. I answered and have since learned the correct way (mentioned below). If you really want to, check out the history of this answer to see the wrong (but working) solution I originally proposed.
Q:
Investigating python process to see what's eating CPU
I have a python process (Pylons webapp) that is constantly using 10-30% of CPU. I'll improve/tune logging to get some insight into what's going on, but until then, are there any tools/techniques that let me see what a python process is doing, how many threads it has and how busy they are, etc?
Update:
configured access log which shows that there are no requests going on, webapp is just idling
no point in plugging paste.profile into the middleware chain since there are no requests; activity must be happening either in the webapp's worker threads or the paster web server
running paster like this: "python -m cProfile -o outfile /usr/bin/paster serve dev.ini" and inspecting the results shows that most time is spent in "posix.waitpid". Paster runs the webapp in a subprocess, and subprocess activity is not picked up by the profiler
looking into hacking the PasteScript "serve" command so that subprocesses would get profiled
Another update:
After much tinkering, sticking the profiler in various places and getting familiar with PasteScript's insides, I discovered that the constant CPU load goes away if the application is started without the "--reload" parameter (this flag tells paster to restart itself if code changes, handy in development), which is fine in a production environment.
A:
Profiling might help you learn a bit of what it's doing. If you sort the output by "time" you will see which functions are chewing up CPU time, which should give you some good hints.
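For example, the cProfile output file from the question can be ranked with pstats (a minimal sketch; 'outfile' matches the filename in the question's command):
import pstats

stats = pstats.Stats('outfile')           # the file written by "-o outfile"
stats.sort_stats('time').print_stats(10)  # top 10 functions by internal time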
A:
As you noted, in --reload mode, Paste sweeps the filesystem every second to see if any of the files loaded have changed. If they have, then Paste reloads the process. You can also manually tell Paste to monitor non-Python code modules for changes if desired.
You can change the reload interval with the --reload-interval option; this will reduce the CPU usage when using --reload, as it will sweep less often.
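For example, to sweep every 10 seconds instead of every second (assuming the dev.ini from the question):
paster serve --reload --reload-interval=10 dev.ini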
Q:
Is there a way to resize images in Django via imagename.230x150.jpg?
There's a nice plugin for Frog CMS that lets you just type in yourpicture.120x120.jpg or whatever, and it will automatically use the image in that dimension. If it doesn't exist, it creates it and adds it to the filesystem.
http://www.naehrstoff.ch/code/image-resize-for-frog
I was wondering if there's anything like this in Django/Python?
A:
I think this snippet is close to what you need: Dynamic thumbnail generator
You might also want to investigate sorl-thumbnail which, even though it codes the thumbnail dimensions in the template instead of the URL, is more flexible/powerful.
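To give a rough idea of how such a snippet works, here is a minimal sketch of a Django view that parses "name.WIDTHxHEIGHT.jpg", creates the resized copy on first request, and serves it afterwards. The URL wiring, the MEDIA_ROOT paths, and the JPEG-only handling are illustrative assumptions, not the linked snippet itself:
import os, re
from PIL import Image
from django.conf import settings
from django.http import HttpResponse, Http404

def thumbnail(request, path):
    # Expect paths like "photos/yourpicture.120x120.jpg"
    match = re.match(r'(?P<base>.+)\.(?P<w>\d+)x(?P<h>\d+)\.jpg$', path)
    if match is None:
        raise Http404
    source = os.path.join(settings.MEDIA_ROOT, match.group('base') + '.jpg')
    target = os.path.join(settings.MEDIA_ROOT, path)
    if not os.path.exists(target):
        if not os.path.exists(source):
            raise Http404
        image = Image.open(source)
        image.thumbnail((int(match.group('w')), int(match.group('h'))), Image.ANTIALIAS)
        image.save(target)  # cache the resized copy on disk
    return HttpResponse(open(target, 'rb').read(), mimetype='image/jpeg')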
Q:
Django Model.object.get pre_save Function Weirdness
I have made a function that connects to a models 'pre_save' signal. Inside the function I am trying to check if the model instance's pk already exists in the table with:
sender.objects.get(pk=instance._get_pk_val())
The first instance of the model raises an error. I catch the error and generate a slug field from the title. In a second instance, it doesn't throw the error. I checked the value of instance._get_pk_val() on both instances and they are the same: None
So:
# This one raises an error in the sluggit function
instance1 = Model(title="title 1")
instance1.save()
# This one doesn't raise an error
instance2 = Model(title="title 2")
instance2.save()
This is my 3rd day messing around with python and django, so I am sorry if it is something newbish that I am not seeing.
Edit:
The Model:
class Test(models.Model):
    title = models.CharField(max_length=128)
    slug = models.SlugField(max_length=128)
    slug.prepopulate_from = ('title',)

signals.pre_save.connect(package.sluggit, sender=Test)
The Function Basics:
def sluggit(sender, instance, signal, *args, **kwargs):
    try:
        sender.objects.get(pk=instance._get_pk_val())
    except:
        # Generate Slug Code
@S.Lot told me to override the save() method in the comments. I'll have to try that. I would still like to know why the second call to model.objects.get() isn't raising an error with this method.
Edit 2
Thank you @S.Lot. Overriding the save method works perfectly. Still curious about the signal method. Hmm, weird.
Edit 3
After playing around a little more, I found that using instance.objects.get() instead of sender.objects.get() works:
def sluggit(sender, instance, signal, *args, **kwargs):
    try:
        sender.objects.get(pk=instance._get_pk_val())
    except:
        # Generate Slug Code
needs to be:
def sluggit(sender, instance, signal, *args, **kwargs):
    try:
        instance.objects.get(pk=instance._get_pk_val())
    except:
        # Generate Slug Code
A bug? For some reason I thought sender.objects.get() would be the same as Test.objects.get().
A:
S.Lott is correct... use save(), as you've already acknowledged that you have started doing.
As for the signal question, I can honestly see nothing wrong with your code. I've even run it locally myself with success. Are you sure that you're representing it properly in the question? Or that instance2 isn't already an existing database object (perhaps a goof in your test code)?
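For reference, a minimal sketch of the save() override the thread settled on, using Django's slugify filter (the generate-only-once check is an assumption):
from django.template.defaultfilters import slugify

class Test(models.Model):
    title = models.CharField(max_length=128)
    slug = models.SlugField(max_length=128)

    def save(self, *args, **kwargs):
        if not self.slug:  # only generate the slug on the first save
            self.slug = slugify(self.title)
        super(Test, self).save(*args, **kwargs)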
A:
Thanks for posting this. The top google results (at the time I'm posting this) are a little outdated and show the old way of connecting signals (which was recently rewritten, apparently). Your edits, with the corrected code snippets showed me how it's done.
I wish more posters edited their comments to place a fix in it. Thanks. :-)
Q:
One view ( frontpage ) for many controllers (sub views)
Notes: Cannot use Javascript or iframes. In fact I can't trust the client browser to do just about anything but the ultra basics.
I'm rebuilding a legacy PHP4 app as an MVC application, with most of my research currently focused on the Pylons framework.
One of the first weird issues I've run into and one I've solved in the past by using iframes or better yet javascript is displaying a dynamic collection of "widgets" that are like digest views of a typical controller's index view.
Best way to visualize my problem would be to look at Google's personalized homepage. They solve the problem with Javascript, but for my scenario javascript and pretty much anything above basic XHTML is not possible.
One idea I started working on was to have my Frontpage controller poll a database or other service for the currently activated widgets, then taking a list of tuples/dicts, dynamically instantiate each controller and build a list/dict of render sub-views and pass that to the frontpage view and let it figure things out.
So, in pseudocode:
GET request goes to WSGI
WSGI calls Pylons
Pylons routes to Frontpage.index()
Frontpage.index():
    myViews = list()
    for WidgetController in ActiveWidgets():
        myViews.append(subRender(WidgetController, widgetView))
    c.subviews = myViews
    render(frontpage.mako)
Weird bits about subRender
Dynamically imports controllers via __import__ (currently hardcoded to project's namespace :( )
Has a potential to be very expensive (most widget calls can be cached, but one is a user panel)
I feel like there has to be a better way or perhaps a mechanism already implemented in WSGI or better yet Pylons to do this, but so far the closest I've found is this utility method: http://www.pylonshq.com/docs/en/0.9.7/modules/controllers_util/#pylons.controllers.util.forward but it seems a little crazy to build N instances of pylons on top of pylons just to get a collection views.
A:
While in most cases I'd recommend what you originally stated, using Javascript to load each widget, since that isn't an option I think you'll need to do something a little different.
In addition to using the approach of trying to have a single front controller go through all the widgets needed and building them, an alternative you might want to consider is making more powerful use of the templating in Mako.
You can actually define small blocks as Mako def's, which of course have full Python power. To avoid polluting your Mako templates with domain logic, make sure to keep that all in your models, and just make calls to the model instances in the Mako def's as needed for that component of the page to build itself.
A huge advantage of this approach is that since Mako def's support cache args, you can actually have components of the page decide how to cache themselves. Maybe the sidebar should be cached for 5 mins, but the top bar changes every hit for example. Also, since the component is triggering the db hit, you'll save db hits when the component caches itself.
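For illustration, a cached Mako def might look like the following sketch (the 5-minute timeout and the helper being called are assumptions):
<%def name="sidebar()" cached="True" cache_timeout="300">
    ## Expensive model calls go here; the rendered output is cached for
    ## 300 seconds, so the database is only hit when the cache expires.
    ${c.sidebar_model.render_links()}
</%def>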
ToscaWidgets doesn't have the performance to make it a very feasible option on a larger scale, so I'd stay away from trying that out.
As for some tweaks to your existing idea, make sure not to actually use Pylons controllers for 'widgets', as they do much more than needed to support WSGI, which you don't need for building a page up of widgets.
I'd consider having all Widget classes work like so:
class Widget(object):
    def process(self):
        # Determine if this widget should process a POST aimed at it,
        # ie, one of the POST args is a widget id indicating the widget
        # to handle the POST
        pass

    def prepare(self):
        # Load data from the database if needed in prep for the render
        pass

    def render(self):
        # return the rendered content
        pass

    def __call__(self):
        self.process()
        self.prepare()
        return self.render()
Then just have your main Mako template iterate through the widget instances, and call them to render them out.
A:
You could use ToscaWidgets to encapsulate your widgets, along with a stored list of the widgets enabled for each user (in database or other service, as you suggest). Pass a list of the enabled ToscaWidgets to the view and the widgets will render themselves (including dynamically adding CSS/JavaScript references to the page if widget requires those resources).
Q:
Getting Python System Calls as string results
I'd like to use os.system("md5sum myFile") and have the result returned from os.system instead of just having it run in a subshell where it's echoed.
In short I'd like to do this:
resultMD5 = os.system("md5sum myFile")
And only have the md5sum in resultMD5 and not echoed.
A:
subprocess is better than using os.system or os.popen
import subprocess
resultMD5 = subprocess.Popen(["md5sum","myFile"],stdout=subprocess.PIPE).communicate()[0]
Or just calculate the md5sum yourself with the hashlib module.
import hashlib
resultMD5 = hashlib.md5(open("myFile").read()).hexdigest()
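For large files, reading in chunks avoids loading the whole file into memory; a small sketch of the same hashlib approach (the chunk size is arbitrary):
import hashlib

def md5_file(path, chunk_size=8192):
    md5 = hashlib.md5()
    f = open(path, 'rb')
    while True:
        chunk = f.read(chunk_size)
        if not chunk:
            break
        md5.update(chunk)  # feed the hash incrementally
    f.close()
    return md5.hexdigest()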
A:
You should probably use the subprocess module as a replacement for os.system.
A:
import subprocess
p = subprocess.Popen("md5sum gmail.csv", shell=True, stdout=subprocess.PIPE)
resultMD5, filename = p.communicate()[0].split()
print resultMD5
Q:
AttributeError: xmlNode instance has no attribute 'isCountNode'
I'm using libxml2 in a Python app I'm writing, and am trying to run some test code to parse an XML file. The program downloads an XML file from the internet and parses it. However, I have run into a problem.
With the following code:
xmldoc = libxml2.parseDoc(gfile_content)
droot = xmldoc.children   # Get document root
dchild = droot.children   # Get child nodes
while dchild is not None:
    if dchild.type == "element":
        print "\tAn element with ", dchild.isCountNode(), "child(ren)"
        print "\tAnd content", repr(dchild.content)
    dchild = dchild.next
xmldoc.freeDoc()
...which is based on the code example found on this article on XML.com, I receive the following error when I attempt to run this code on Python 2.4.3 (CentOS 5.2 package).
Traceback (most recent call last):
File "./xml.py", line 25, in ?
print "\tAn element with ", dchild.isCountNode(), "child(ren)"
AttributeError: xmlNode instance has no attribute 'isCountNode'
I'm rather stuck here.
Edit: I should note here I also tried IsCountNode() and it still threw an error.
A:
isCountNode should read "lsCountNode" (a lower-case "L")
Q:
Color picking from given coordinates
What is the simplest way to pick up the RGB color code of the given coordinates? For simplicity let's assume that the screen resolution is 1024x768 and color depth/quality 32 bits. The coordinates are given relative to the upper left corner of the screen. I'd like to get some tips or examples how it can be done with Python.
A:
The win32gui ActivePython documentation should be useful.
I think you can construct something like:
import win32gui

dc = win32gui.GetDC(win32gui.WindowFromPoint((XPos, YPos)))
color = win32gui.GetPixel(dc, XPos, YPos)
Q:
Django blows up with 1.1, Can't find urls module
EDIT: Issue solved, answered it below. Lame error. Blah
So I upgraded to Django 1.1 and for the life of me I can't figure out what I'm missing. Here is my traceback:
http://dpaste.com/37391/ - This happens on any page I try to go to.
I've modified my urls.py to include the admin in the new method:
from django.contrib import admin
admin.autodiscover()
.... urlpatterns declaration
(r'^admin/', include(admin.site.urls)),
I've tried fidgeting with paths and the like but nothing fixes my problem and I can't figure it out.
Has something major changed since Django 1.1 alpha -> Django 1.1 beta that I am missing? Apart from the admin I can't see what else is new. Are urls still stored in a urls.py within each app?
Thanks for the help in advance, this is beyond frustrating.
A:
I figured it out. I was missing a urls.py that I referenced (for some reason, SVN said it was in the repo, but it was never fetched on an update), and the error simply said it could not find urls (with no reference to notes.urls, which WAS missing), so it got very confusing.
Either way, fixed -- Awesome!
A:
try this:
(r'^admin/(.*)', admin.site.root),
More info
A:
What is the value of your ROOT_URLCONF in your settings.py file? Is the file named by that setting on your python path?
Are you using the development server or what?
Q:
Django restapi passing parameter to read()
In the test example http://django-rest-interface.googlecode.com/svn/trunk/django_restapi_tests/examples/custom_urls.py, on line 19 they parse request.path to get the poll_id. This looks very fragile to me. If the url changes then this line breaks. I have attempted to pass in the poll_id but this did not work.
So my question is how do I use the poll_id (or any other value) gathered from the url?
A:
Views are only called when the associated url is matched. By crafting the url regex properly, you can guarantee that any request passed to your view will have the poll_id at the correct position in the request path. This is what the example does:
url(r'^json/polls/(?P<poll_id>\d+)/choices/$', json_choice_resource, {'is_entry':False}),
The json_choice_resource view is an instance of django_restapi.model_resource.Collection and thus the read() method of Collection will only ever act on requests with paths of the expected format.
Q:
Is there a way to retrieve process stats using Perl or Python?
Is there a way to generically retrieve process stats using Perl or Python? We could keep it Linux specific.
There are a few problems: I won't know the PID ahead of time, but I can run the process in question from the script itself. For example, I'd have no problem doing:
./myscript.pl some/process/I/want/to/get/stats/for
Basically, I'd like to, at the very least, get the memory consumption of the process, but the more information I can get the better (like run time of the process, average CPU usage of the process, etc.)
Thanks.
A:
Have a look at the Proc::ProcessTable module which returns quite a bit of information on the processes in the system. Call the "fields" method to get a list of details that you can extract from each process.
I recently discovered the above module which has just about replaced the Process module that I had written when writing a Perl kill program for Linux. You can have a look at my script here.
It can be easily extended to extract further information from the ps command. For example, the 'getbycmd' method returns a list of pids whose command line invocation matches the passed argument. You can then retrieve a specific process' details by calling 'getdetail' and passing that PID to it, like so:
my $psTable = Process->new();

# Get list of processes owned by 'root'
for my $pid ( $psTable->getbyuser("root") ) {
    $psDetail = $psTable->getdetail( $pid );
    # Do something with the psDetail..
}
A:
If you are fork()ing the child, you will know its PID.
From within the parent you can then parse the files in /proc/<PID>/ to check the memory and CPU usage, albeit only for as long as the child process is running.
A:
A common misconception is that reading /proc is like reading /home. /proc is designed to give you the same information with one open() that 20 similar syscalls filling some structure could provide. Reading it does not pollute buffers, send innocent programs to paging hell or otherwise contribute to the death of kittens.
Accessing /proc/foo is just telling the kernel "give me information on foo that I can process in a language agnostic way"
If you need more details on what is in /proc/{pid}/ , update your question and I'll post them.
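To make that concrete, here is a minimal Python sketch that launches the target command itself and polls /proc/<pid>/status while it runs. The VmRSS/VmSize field names are Linux-specific, and the sleep command is just a placeholder for the real process:
import subprocess, time

proc = subprocess.Popen(['sleep', '5'])  # the process you want stats for
while proc.poll() is None:
    for line in open('/proc/%d/status' % proc.pid):
        if line.startswith(('VmRSS', 'VmSize')):
            print line.strip()  # e.g. "VmRSS:  1234 kB"
    time.sleep(1)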
Q:
How to get the public channel URL from YouTubeVideoFeed object using the YouTube API?
I'm using the Python version of the YouTube API to get a YouTubeVideoFeed object using the following URL:
http://gdata.youtube.com/feeds/api/users/USERNAME/uploads
Note: I've replaced USERNAME with the account I need to follow.
So far getting the feed, iterating the entries, getting player urls, titles and thumbnails has all been straightforward. But now I want to add a "Visit Channel" link to the page. I can't figure out how to get the "public" URL of a channel (in this case, the default channel for the user) out of the feed. From what I can tell, the only URLs stored directly in the feed point to http://gdata.youtube.com/, not the public site.
How can I link to a channel based on a feed?
A:
Well, the youtube.com/user/USERNAME is a pretty safe bet if you want to construct the URL yourself, but I think what you want is the link rel='alternate'
You have to get the link array from the feed and iterate to find alternate, then grab the href
something like:
import gdata.youtube.service

client = gdata.youtube.service.YouTubeService()
feed = client.GetYouTubeVideoFeed('http://gdata.youtube.com/feeds/api/users/username/uploads')

for link in feed.link:
    if link.rel == 'alternate':
        print link.href
Output:
http://www.youtube.com/profile_videos?user=username
The most correct thing would be to grab the 'alternate' link from the user profile feed, as technically the above URL goes to the uploaded videos, not the main channel page
feed = client.GetYouTubeUserEntry('http://gdata.youtube.com/feeds/api/users/username')

for link in feed.link:
    if link.rel == 'alternate':
        print link.href
output:
http://www.youtube.com/profile?user=username
A:
you might be confusing usernames... when I use my username I get my public page
http://gdata.youtube.com/feeds/api/users/drdredel/uploads
They have some wacky distinction between your gmail username and your youtube username. Or am I misunderstanding your question?
Q:
How to design an email system?
I am working for a company that provides customer support to its clients. I am trying to design a system that would send emails automatically to clients when some event occurs. The system would consist of a backend part and a web interface part. The backend will handle the communication with a web interface (which will be only for internal use to change the email templates) and most important it will check some database tables and based on those results will send emails ... lots of them.
Now, I am wondering how this can be designed so it is scalable and provides the necessary performance, as it will probably have to handle a few thousand emails per hour (this should be the peak). I am mostly interested in how this kind of architecture should be designed so that it can be easily scaled in the future if needed.
Python will be used on the backend with Postgres and probably whatever comes first between a Python web framework and GWT on the frontend (which seems the simplest task).
A:
This is a real good candidate for using some off the shelf software. There are any number of open-source mailing list manager packages around; they already know how to do the mass mailings. It's not completely clear whether these mailings would go to the same set of people each time; if so, get any one of the regular mailing list programs.
If not, the easy answer is
$ mail address -s subject < file
once per mail.
By the way, investigate the policies of whoever is upstream from you on the net. Some ISPs see lots of mails as probable spam, and may surprise you by cutting off or metering your internet access.
A:
A few thousand emails per hour isn't really that much, as long as your outgoing mail server is willing to accept them in a timely manner.
I would send them using a local MTA, like postfix or exim (which would then send them through your outgoing relay if required). That service is then responsible for the mail queues, retries, bounces, etc. If you're looking for more "mailing list" features, try adding mailman into the mix. It's written in python, and you've probably seen it, as it runs tons of internet mailing lists.
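Handing a message to a local MTA from Python takes a few lines with smtplib; a minimal sketch (the addresses and body are placeholders):
import smtplib
from email.mime.text import MIMEText

msg = MIMEText('Your support ticket has been updated.')
msg['Subject'] = 'Support update'
msg['From'] = 'support@example.com'
msg['To'] = 'client@example.com'

# Deliver to the MTA on localhost; it handles queuing, retries and bounces.
server = smtplib.SMTP('localhost')
server.sendmail(msg['From'], [msg['To']], msg.as_string())
server.quit()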
A:
This sounds to me like you're trying to optimize for batch processing, where the heat doesn't happen on the web interface but in the backend. This also sounds like a job for a queuing architecture.
Amazon offers queuing systems, for instance, if you really need massive scale. You can then add multiple machines on your side to deliver the messages as emails, with each machine taking perhaps 100 messages from the queue at a time.
The pattern with email systems should be asynchronous, so have a look at other asynchronous architectures if you don't like queues.
A:
You might want to try Twisted Mail for implementing your own backend in pure Python.
Q:
is it possible to define name of function's arguments dynamically?
Now I have this code:
attitude = request.REQUEST['attitude']
if attitude == 'want':
    qs = qs.filter(attitudes__want=True)
elif attitude == 'like':
    qs = qs.filter(attitudes__like=True)
elif attitude == 'hate':
    qs = qs.filter(attitudes__hate=True)
elif attitude == 'seen':
    qs = qs.filter(attitudes__seen=True)
It would be better to define the "attitudes__xxxx" name dynamically. Is there any way to do that?
Thanks!
A:
Yes.
qs.filter(**{'attitudes__%s' % attitude: True})
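Since the value comes straight from the request, a whitelist keeps arbitrary input out of filter(); a small sketch building on the answer:
ALLOWED = ('want', 'like', 'hate', 'seen')

attitude = request.REQUEST['attitude']
if attitude in ALLOWED:
    qs = qs.filter(**{'attitudes__%s' % attitude: True})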
Q:
Python-getting data from an asp.net AJAX application
Using Python, I'm trying to read the values on http://utahcritseries.com/RawResults.aspx. I can read the page just fine, but am having difficulty changing the value of the year combo box, to view data from other years. How can I read the data for years other than the default of 2002?
The page appears to be doing an HTTP POST once the year combo box has changed. The name of the control is ctl00$ContentPlaceHolder1$ddlSeries. I try setting a value for this control using urllib.urlencode(postdata), but I must be doing something wrong: the data on the page is not changing. Can this be done in Python?
I'd prefer not to use Selenium, if at all possible.
I've been using code like this (from stackoverflow user dbr):
import urllib

postdata = {'ctl00$ContentPlaceHolder1$ddlSeries': 9}
src = urllib.urlopen(
    "http://utahcritseries.com/RawResults.aspx",
    data=urllib.urlencode(postdata)
).read()
print src
But it seems to be pulling up the same 2002 data. I've tried using firebug to inspect the headers, and I see a lot of extraneous and random-looking data being sent back and forth. Do I need to post these values back to the server also?
A:
Use the excellent mechanize library:
from mechanize import Browser
b = Browser()
b.open("http://utahcritseries.com/RawResults.aspx")
b.select_form(nr=0)
year = b.form.find_control(type='select')
year.get(label='2005').selected = True
src = b.submit().read()
print src
Mechanize is available on PyPI: easy_install mechanize
Q:
How to add seconds on a datetime value in Python?
I tried modifying the second property, but that didn't work.
Basically I wanna do:
datetime.now().second += 3
A:
Have you checked out timedeltas?
from datetime import datetime, timedelta
x = datetime.now() + timedelta(seconds=3)
x += timedelta(seconds=3)
A:
You cannot add seconds to a datetime object. From the docs:
A DateTime object should be considered immutable; all conversion and numeric operations return a new DateTime object rather than modify the current object.
You must create another datetime object, or use the result of adding a timedelta to the existing object.
Q:
Python embedding -- how to get the if() truth test behavior from C/C++?
I'm trying to write a function to return the truth value of a given PyObject. This function should return the same value as the if() truth test -- empty lists and strings are False, etc.
I have been looking at the python/include headers, but haven't found anything that seems to do this. The closest I came was PyObject_RichCompare() with True as the second value, but that returns False for "1" == True for example.
Is there a convenient function to do this, or do I have to test against a sequence of types and do special-case tests for each possible type? What does the internal implementation of if() do?
A:
Isn't this it, in object.h:
PyAPI_FUNC(int) PyObject_IsTrue(PyObject *);
?
A:
Use
int PyObject_IsTrue(PyObject *o)
Returns 1 if the object o is considered to be true, and 0 otherwise. This is equivalent to the Python expression not not o. On failure, return -1.
(from Python/C API Reference Manual)
Q:
Python: email get_payload decode fails when hitting equal sign?
Running into strangeness with get_payload: it seems to crap out when it sees an equal sign in the message it's decoding. Here's code that displays the error:
import email
data = file('testmessage.txt').read()
msg = email.message_from_string( data )
payload = msg.get_payload(decode=True)
print payload
And here's a sample message: test message.
The message is printed only until the first "=". The rest is omitted. Anybody know what's going on?
The same script with "decode=False" returns the full message, so it appears the decode is unhappy with the equal sign.
This is under Python 2.5.
A:
You have a line endings problem. The body of your test message uses bare carriage returns (\r) without newlines (\n). If you fix up the line endings before parsing the email, it all works:
import email, re
data = file('testmessage.txt').read()
data = re.sub(r'\r(?!\n)', '\r\n', data) # Bare \r becomes \r\n
msg = email.message_from_string( data )
payload = msg.get_payload(decode=True)
print payload
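For background: the "=" is significant because the body is presumably quoted-printable encoded, where "=" is the escape character and "=" at the end of a line marks a soft line break. The quopri module shows the decoding that get_payload(decode=True) performs:
import quopri

encoded = "1 + 1 =3D 2, split over two=\r\nlines"
print quopri.decodestring(encoded)
# -> 1 + 1 = 2, split over twolines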
|
Python: email get_payload decode fails when hitting equal sign?
|
Running into strangeness with get_payload: it seems to crap out when it sees an equal sign in the message it's decoding. Here's code that displays the error:
import email
data = file('testmessage.txt').read()
msg = email.message_from_string( data )
payload = msg.get_payload(decode=True)
print payload
And here's a sample message: test message.
The message is printed only until the first "=". The rest is omitted. Anybody know what's going on?
The same script with "decode=False" returns the full message, so it appears the decode is unhappy with the equal sign.
This is under Python 2.5.
|
[
"You have a line endings problem. The body of your test message uses bare carriage returns (\\r) without newlines (\\n). If you fix up the line endings before parsing the email, it all works:\nimport email, re\ndata = file('testmessage.txt').read()\ndata = re.sub(r'\\r(?!\\n)', '\\r\\n', data) # Bare \\r becomes \\r\\n\nmsg = email.message_from_string( data )\npayload = msg.get_payload(decode=True)\nprint payload\n\n"
] |
[
7
] |
[] |
[] |
[
"email",
"python"
] |
stackoverflow_0000787739_email_python.txt
|
Q:
How to install python-rsvg without python-gnome2-desktop on Ubuntu 8.10?
I need rsvg support in Python 2.5.2. It appears that I have to install all 199 deps along with the package python-gnome2-desktop, which doesn't sound fun at all.
Alternatives?
A:
No longer relevant. Installed the entire package, and got rsvg that way.
|
How to install python-rsvg without python-gnome2-desktop on Ubuntu 8.10?
|
I need rsvg support in Python 2.5.2. It appears that I have to install all 199 deps along with the package python-gnome2-desktop, which doesn't sound fun at all.
Alternatives?
|
[
"No longer relevant. Installed the entire package, and got rsvg that way.\n"
] |
[
2
] |
[] |
[] |
[
"librsvg",
"python",
"rsvg"
] |
stackoverflow_0000787812_librsvg_python_rsvg.txt
|
Q:
python 2.5 dated?
I am just learning Python on my Ubuntu 8.04 machine, which comes with
Python 2.5 installed. Is 2.5 too dated to continue learning? How much
of version 2.5 is still valid Python code in the newer versions?
A:
Basically, Python code, for the moment, is divided into Python 2.X code and Python 3 code. Python 3 introduces many breaking changes in the interest of cleaning up the language. The majority of code and libraries are written with 2.X in mind. It is probably best to learn one, and know what is different in the other. On an Ubuntu machine, the python3 package will install Python 3, which can be run with the command python3, at least on my 8.10 install.
To answer your question, learning with 2.5 is fine, just keep in mind that 3 is a significant change, and learn the changes - ask yourself as you code, "how would this be different in 3, if at all?".
(As an aside, I do wish Ubuntu would upgrade to 2.6. It has a nice compatibility mode which tries to point out potential difficulties. But Python is in such heavy use on a modern Linux distro that it can be a difficult change to make.)
Here's an article describing 2.6 -> 3's changes
A:
Python 2.5 will be fine for learning purposes. In the interest of learning you will probably want to look into the differences that python 3.0 has introduced, but I think most of the Python community is still using Python 2, as the majority of libraries haven't been ported over yet.
If you're interested in 2.6, here is a blog post on compiling it on Hardy; there may even be a package for it somewhere out there on the internets.
Follow-up: if there is a package, I'm not finding it. Self-compiling is pretty simple for most things, though I've never tried to compile Python.
A:
I don't think it is 'too dated' to use, but there are some really nice features in python 2.6 that make it worth the update. This article will give you the details. As long as you have control of the machine, it is worth it.
A:
I don't have any statistics but my impression is that Python 2.5 is the version most in use today. It is certainly not "dated" - I still use Python 2.5 and I expect that I will be using it for weeks or months yet to come.
If you have Python 2.6 available, though, I would suggest upgrading, as it's still fairly similar to Python 2.5 but will put you in better position for using Python 3.
A:
Also, right now the 2.x branch is the best supported one, so I would say that's a good reason to start with that version.
And when the moment comes, you can always switch to Python 3.
A:
Python 2.5 is fine. There are still plenty of people on Python 2.4 and 2.3.
A:
One thing to keep in mind about Python 2.6 is that some libraries may not work. NumPy comes to mind.
|
python 2.5 dated?
|
I am just learning Python on my Ubuntu 8.04 machine, which comes with
Python 2.5 installed. Is 2.5 too dated to continue learning? How much
of version 2.5 is still valid Python code in the newer versions?
|
[
"Basically, python code, for the moment, will be divided into python 2.X code and python 3 code. Python 3 breaks many changes in the interest of cleaning up the language. The majority of code and libraries are written for 2.X in mind. It is probably best to learn one, and know what is different with the other. On an ubuntu machine, the python3 package will install Python 3, which can be run with the command python3, at least on my 8.10 install.\nTo answer your question, learning with 2.5 is fine, just keep in mind that 3 is a significant change, and learn the changes - ask yourself as you code, \"how would this be different in 3, if at all?\".\n(As an aside, I do wish Ubuntu would upgrade to 2.6 though. It has a nice compatibility mode which tries and points out potential difficulties. But python is in such big use on a modern Linux distro, it can be a difficult change to make)\nHere's an article describing 2.6 -> 3's changes\n",
"Python 2.5 will be fine for learning purposes. In the interest of learning you will probably want to look into the differences that python 3.0 has introduced, but I think most of the Python community is still using Python 2, as the majority of libraries haven't been ported over yet.\nIf your interested in 2.6 here is a blog post on compiling it on Hardy, there may even be a package for it somewhere out there on the internets.\nFollow up, if there is a package I'm not finding it. Self compiling is pretty simple for most things, though I've never tried to compile Python.\n",
"I don't think it is 'too dated' to use, but there are some really nice features in python 2.6 that make it worth the update. This article will give you the details. As long as you have control of the machine, it is worth it.\n",
"I don't have any statistics but my impression is that Python 2.5 is the version most in use today. It is certainly not \"dated\" - I still use Python 2.5 and I expect that I will be using it for weeks or months yet to come.\nIf you have Python 2.6 available, though, I would suggest upgrading, as it's still fairly similar to Python 2.5 but will put you in better position for using Python 3.\n",
"Also, right now the 2.x branch is the most supported one, so I would also say that it's a good reason to start with that version.\nAnd when the moment comes, you can always switch to Python 3. \n",
"Python 2.5 is fine. There are still plenty of people on Python 2.4 and 2.3.\n",
"One thing to keep in mind about python 2.6 is that some libraries may not work. Numpy comes to mind..\n"
] |
[
6,
4,
3,
3,
2,
2,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000787849_python.txt
|
Q:
Python Django Template: Iterate Through List
Technically it should iterate from 0 to rangeLength outputting the user name of the c[i][0].from_user...but from looking at examples online, they seem to replace the brackets with dot notation. I have the following code:
<div id="right_pod">
{%for i in rangeLength%}
<div class="user_pod" >
{{c.i.0.from_user}}
</div>
{% endfor %}
This currently outputs nothing :( If I replace "i" with 0...{{c.0.0.from_user}}...it will output something.. (the first user 10 times)
A:
Do you need i to be an index? If not, see if the following code does what you're after:
<div id="right_pod">
{% for i in c %}
<div class="user_pod">
{{ i.0.from_user }}
</div>
{% endfor %}
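For this to work, the view just has to put the list itself into the context; a minimal sketch with hypothetical view and helper names:
# views.py -- 'right_pod' and 'get_user_pods' are hypothetical names
from django.shortcuts import render_to_response

def right_pod(request):
    c = get_user_pods()    # whatever builds the nested list from the question
    return render_to_response('right_pod.html', {'c': c})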
A:
Please read the entire documentation on the template language's for loops. First of all, iteration (like in Python) is over objects, not indexes. Secondly, within any for loop there is a forloop variable with two fields you'll be interested in:
Variable Description
forloop.counter The current iteration of the loop (1-indexed)
forloop.counter0 The current iteration of the loop (0-indexed)
A:
You should use the slice template filter to achieve what you want:
Iterate over the object (c in this case) like so:
{% for c in objects|slice:":30" %}
This would make sure that you only iterate over the first 30 objects.
Also, you can use the forloop.counter object to keep track of which loop iteration you're on.
|
Python Django Template: Iterate Through List
|
Technically it should iterate from 0 to rangeLength outputting the user name of the c[i][0].from_user...but from looking at examples online, they seem to replace the brackets with dot notation. I have the following code:
<div id="right_pod">
{%for i in rangeLength%}
<div class="user_pod" >
{{c.i.0.from_user}}
</div>
{% endfor %}
This currently outputs nothing :( If I replace "i" with 0...{{c.0.0.from_user}}...it will output something.. (the first user 10 times)
|
[
"Do you need i to be an index? If not, see if the following code does what you're after:\n<div id=\"right_pod\">\n{% for i in c %}\n <div class=\"user_pod\">\n {{ i.0.from_user }}\n </div>\n{% endfor %}\n\n",
"Please read the entire documentation on the template language's for loops. First of all, that iteration (like in Python) is over objects, not indexes. Secondly, that within any for loop there is a forloop variable with two fields you'll be interested in:\nVariable Description\nforloop.counter The current iteration of the loop (1-indexed)\nforloop.counter0 The current iteration of the loop (0-indexed)\n\n",
"You should use the slice template filter to achieve what you want:\nIterate over the object (c in this case) like so:\n{% for c in objects|slice:\":30\" %}\n\nThis would make sure that you only iterate over the first 30 objects.\nAlso, you can use the forloop.counter object to keep track of which loop iteration you're on.\n"
] |
[
28,
15,
9
] |
[] |
[] |
[
"django",
"django_templates",
"python"
] |
stackoverflow_0000784124_django_django_templates_python.txt
|
Q:
How to get publisher.authors when you have book.publisher and book.author?
Fresh from the Djangobook tutorial using the Books app example, you have Book related to Author through a many-to-many relationship and Book related to Publisher. You can get a set of books associated with a publisher with p.book_set.all(), but what do you need to do to get a set of authors associated with a publisher (through the books published)?
This is the models.py as is:
http://pastie.org/457781
Thanks!
A:
Something like this:
publisher = Publisher.objects.get(...)
authors = Author.objects.filter(book__publisher=publisher).distinct()
|
How to get publisher.authors when you have book.publisher and book.author?
|
Fresh from the Djangobook tutorial using the Books app example, you have Book related to Author through a many-to-many relationship and Book related to Publisher. You can get a set of books associated with a publisher with p.book_set.all(), but what do you need to do to get a set of authors associated with a publisher (through the books published)?
This is the models.py as is:
http://pastie.org/457781
Thanks!
|
[
"Something like that:\npublisher = Publisher.objects.get(...)\nauthors = Author.objects.filter(book__publisher=publisher).distinct()\n\n"
] |
[
4
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0000788192_django_python.txt
|
Q:
How do i interface with the MSN Protocol using Python?
I am trying to connect to the MSN network using Python. I've done some searching and it seems like http://blitiri.com.ar/p/msnlib/ and http://msnp.sourceforge.net/ are the available libraries. However, both seem very old; is there a more up-to-date library that I can use?
Duplicate of: MSN with Python
A:
I might be babbling here, but I think Python Twisted has a protocol implementation of msn.
A:
libpurple at http://developer.pidgin.im/wiki/WhatIsLibpurple
is the library that drives Pidgin and allows you to connect to MSN and others; I'm not sure if there's a Python wrapper for it.
|
How do i interface with the MSN Protocol using Python?
|
I am trying to connect to the MSN network using Python. I've done some searching and it seems like http://blitiri.com.ar/p/msnlib/ and http://msnp.sourceforge.net/ are the available libraries. However, both seem very old; is there a more up-to-date library that I can use?
Duplicate of: MSN with Python
|
[
"I might be babbling here, but I think Python Twisted has a protocol implementation of msn.\n",
"libpurple at http://developer.pidgin.im/wiki/WhatIsLibpurple\nis the library that drives pidgin, and allows you to connect to MSN and others, not sure if there's a python wrapper for it.\n"
] |
[
2,
2
] |
[] |
[] |
[
"msn",
"python"
] |
stackoverflow_0000788715_msn_python.txt
|
Q:
ORM (object relational manager) solution with multiple programming language support
Is there a good ORM (object relational manager) solution that can use the same database from C++, C#, Python?
It could also be multiple solutions, e.g. one per language, as long as they can access the same database and use the same schema.
Multi platform support is also needed.
Clarification:
The idea is to have one database and access this from software written in several different programming languages. Ideally this would be provided by one ORM having APIs (or bindings) in all of these languages.
One other solution is to have a different ORM in each language, with compatible schemas. However, I believe that schema migration would be very hard in this setting.
A:
With SQLAlchemy, you can use reflection to get the schema, so it should work with any of the supported engines.
I've used this to migrate data from an old SQLite to Postgres.
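A minimal sketch of that reflection approach (the connection URL and table name are hypothetical):
from sqlalchemy import create_engine, MetaData

engine = create_engine('postgres://user:pass@localhost/mydb')  # hypothetical URL
meta = MetaData()
meta.reflect(bind=engine)        # pull table definitions from the live schema

books = meta.tables['books']     # hypothetical table name
for row in engine.execute(books.select()):
    print row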
A:
I know DataAbstract for Pascal and C#, and soon for Objective-C on Mac and iPhone, but it has no Python support.
A:
We have an O/RM that has C++ and C# (actually COM) bindings (in FOST.3) and we're putting together the Python bindings which are new in version 4 together with Linux and Mac support.
|
ORM (object relational manager) solution with multiple programming language support
|
Is there a good ORM (object relational manager) solution that can use the same database from C++, C#, Python?
It could also be multiple solutions, e.g. one per language, as long as they can access the same database and use the same schema.
Multi platform support is also needed.
Clarification:
The idea is to have one database and access this from software written in several different programming languages. Ideally this would be provided by one ORM having APIs (or bindings) in all of these languages.
One other solution is to have a different ORM in each language, with compatible schemas. However, I believe that schema migration would be very hard in this setting.
|
[
"With SQLAlchemy, you can use reflection to get the schema, so it should work with any of the supported engines.\nI've used this to migrate data from an old SQLite to Postgres.\n",
"I know DataAbstract for Pascal, C# and soon for objective C for Mac and Iphone but no Python support.\n",
"We have an O/RM that has C++ and C# (actually COM) bindings (in FOST.3) and we're putting together the Python bindings which are new in version 4 together with Linux and Mac support.\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"c#",
"c++",
"orm",
"python"
] |
stackoverflow_0000482612_c#_c++_orm_python.txt
|
Q:
Problem running twisted.words example using msn protocol
I am currently trying to use the Twisted library, specifically Twisted Words, to interact with MSN. However, when I run the sample script provided by Twisted, I get an error. Specifically, the error is shown here: http://i42.tinypic.com/wl945w.jpg . The script can be found here: http://twistedmatrix.com/projects/words/documentation/examples/msn_example.py.
Platform is Vista with Python 2.6
EDIT: Full output:
Email (passport): [email protected]
Password: ******
2009-04-25 10:52:49-0300 [-] Log opened.
2009-04-25 10:52:49-0300 [-] Starting factory <twisted.internet.protocol.ClientFactory instance at 0x9d87e8c>
2009-04-25 10:52:55-0300 [Dispatch,client] Starting factory <twisted.words.protocols.msn.NotificationFactory instance at 0x9e28bcc>
2009-04-25 10:52:55-0300 [Dispatch,client] Stopping factory <twisted.internet.protocol.ClientFactory instance at 0x9d87e8c>
2009-04-25 10:52:55-0300 [Notification,client] Unhandled Error
Traceback (most recent call last):
File "/usr/local/lib/python2.5/site-packages/twisted/python/log.py", line 84, in callWithLogger
return callWithContext({"system": lp}, func, *args, **kw)
File "/usr/local/lib/python2.5/site-packages/twisted/python/log.py", line 69, in callWithContext
return context.call({ILogContext: newCtx}, func, *args, **kw)
File "/usr/local/lib/python2.5/site-packages/twisted/python/context.py", line 59, in callWithContext
return self.currentContext().callWithContext(ctx, func, *args, **kw)
File "/usr/local/lib/python2.5/site-packages/twisted/python/context.py", line 37, in callWithContext
return func(*args,**kw)
--- <exception caught here> ---
File "/usr/local/lib/python2.5/site-packages/twisted/internet/selectreactor.py", line 146, in _doReadOrWrite
why = getattr(selectable, method)()
File "/usr/local/lib/python2.5/site-packages/twisted/internet/tcp.py", line 460, in doRead
return self.protocol.dataReceived(data)
File "/usr/local/lib/python2.5/site-packages/twisted/protocols/basic.py", line 238, in dataReceived
why = self.lineReceived(line)
File "/usr/local/lib/python2.5/site-packages/twisted/words/protocols/msn.py", line 651, in lineReceived
handler(params.split())
File "/usr/local/lib/python2.5/site-packages/twisted/words/protocols/msn.py", line 827, in handle_USR
d = _login(f.userHandle, f.password, f.passportServer, authData=params[3])
File "/usr/local/lib/python2.5/site-packages/twisted/words/protocols/msn.py", line 182, in _login
reactor.connectSSL(_parsePrimitiveHost(nexusServer)[0], 443, fac, ClientContextFactory())
exceptions.TypeError: 'NoneType' object is not callable
2009-04-25 10:52:55-0300 [Notification,client] Stopping factory <twisted.words.protocols.msn.NotificationFactory instance at 0x9e28bcc>
A:
Since MSN involves SSL connections, you must have pyOpenSSL installed in order to use it. It seems as though you probably do not. This isn't a very good way for Twisted to be reporting this missing dependency, though. I recommend filing a ticket in the Twisted issue tracker for improving this reporting.
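A quick way to check whether the dependency is present:
try:
    import OpenSSL               # provided by the pyOpenSSL package
    print "pyOpenSSL found:", OpenSSL.__version__
except ImportError:
    print "pyOpenSSL is missing -- install it before running the MSN example"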
A:
What happened
You get this exception when you try to call an object that is None. Check this out:
>>> a = str
>>> a() # it's ok, a string is a callable class
''
>>> a = None
>>> a() # it fails, None is a special singleton not meant to be called
Traceback (most recent call last):
File "<pyshell#4>", line 1, in <module>
a()
TypeError: 'NoneType' object is not callable
What you can do
You can't guess it like that, so you'll need to do some debugging.
Apparently, the last line (reactor.connectSSL...) contains three object calls, and one of the objects is None.
The first thing you can do, if you are not into debuggers, is to take each object used on that line and add, just before it:
assert object1 is not None
assert object2 is not None
Then the failing assertion will point you at the source of your Exception. After that, check why that object is set to None. You'll probably have to check the doc to see in which case some method that may have initialized it returns None.
May the force...
|
Problem running twisted.words example using msn protocol
|
I am currently trying to use the Twisted library, specifically Twisted Words, to interact with MSN. However, when I run the sample script provided by Twisted, I get an error. Specifically, the error is shown here: http://i42.tinypic.com/wl945w.jpg . The script can be found here: http://twistedmatrix.com/projects/words/documentation/examples/msn_example.py.
Platform is Vista with Python 2.6
EDIT: Full output:
Email (passport): [email protected]
Password: ******
2009-04-25 10:52:49-0300 [-] Log opened.
2009-04-25 10:52:49-0300 [-] Starting factory <twisted.internet.protocol.ClientFactory instance at 0x9d87e8c>
2009-04-25 10:52:55-0300 [Dispatch,client] Starting factory <twisted.words.protocols.msn.NotificationFactory instance at 0x9e28bcc>
2009-04-25 10:52:55-0300 [Dispatch,client] Stopping factory <twisted.internet.protocol.ClientFactory instance at 0x9d87e8c>
2009-04-25 10:52:55-0300 [Notification,client] Unhandled Error
Traceback (most recent call last):
File "/usr/local/lib/python2.5/site-packages/twisted/python/log.py", line 84, in callWithLogger
return callWithContext({"system": lp}, func, *args, **kw)
File "/usr/local/lib/python2.5/site-packages/twisted/python/log.py", line 69, in callWithContext
return context.call({ILogContext: newCtx}, func, *args, **kw)
File "/usr/local/lib/python2.5/site-packages/twisted/python/context.py", line 59, in callWithContext
return self.currentContext().callWithContext(ctx, func, *args, **kw)
File "/usr/local/lib/python2.5/site-packages/twisted/python/context.py", line 37, in callWithContext
return func(*args,**kw)
--- <exception caught here> ---
File "/usr/local/lib/python2.5/site-packages/twisted/internet/selectreactor.py", line 146, in _doReadOrWrite
why = getattr(selectable, method)()
File "/usr/local/lib/python2.5/site-packages/twisted/internet/tcp.py", line 460, in doRead
return self.protocol.dataReceived(data)
File "/usr/local/lib/python2.5/site-packages/twisted/protocols/basic.py", line 238, in dataReceived
why = self.lineReceived(line)
File "/usr/local/lib/python2.5/site-packages/twisted/words/protocols/msn.py", line 651, in lineReceived
handler(params.split())
File "/usr/local/lib/python2.5/site-packages/twisted/words/protocols/msn.py", line 827, in handle_USR
d = _login(f.userHandle, f.password, f.passportServer, authData=params[3])
File "/usr/local/lib/python2.5/site-packages/twisted/words/protocols/msn.py", line 182, in _login
reactor.connectSSL(_parsePrimitiveHost(nexusServer)[0], 443, fac, ClientContextFactory())
exceptions.TypeError: 'NoneType' object is not callable
2009-04-25 10:52:55-0300 [Notification,client] Stopping factory <twisted.words.protocols.msn.NotificationFactory instance at 0x9e28bcc>
|
[
"Since MSN involves SSL connections, you must have pyOpenSSL installed in order to use it. It seems as though you probably do not. This isn't a very good way for Twisted to be reporting this missing dependency, though. I recommend filing a ticket in the Twisted issue tracker for improving this reporting.\n",
"What happened\nThis exception you get is when you try to call an object that is None. Check this out :\n>>> a = str\n>>> a() # it's ok, a string is a callable class\n''\n>>> a = None\n>>> a() # it fails, None a special Singleton not meant to be called\n\nTraceback (most recent call last):\n File \"<pyshell#4>\", line 1, in <module>\n a()\nTypeError: 'NoneType' object is not callable\n\nWhat you can do\nYou can't guess it like that, so you'll need to make some debugging.\nApparently, the last line (refactor.connectSSL...) contains three object calls, and one of the object is None.\nThe first thing you can do, if you are not into debuggers, if to take each element of the line and add, just before it :\nassert object1 is None \nassert object2 is None\n\nThen you'll have the source of your Exception. After that, check why is this object set to None. You'll probably have to check the doc to see in which case some method that may have initilized it returns None.\nMay the force...\n"
] |
[
3,
2
] |
[] |
[] |
[
"msn",
"python",
"twisted",
"twisted.words"
] |
stackoverflow_0000788902_msn_python_twisted_twisted.words.txt
|
Q:
Case insensitivity in Python strings
I know that you can use the ctypes library to perform case insensitive comparisons on strings; however, I would like to perform case insensitive replacement too. Currently the only way I know to do this is with regexes, and that seems a little clumsy.
Is there a case insensitive version of replace()?
A:
You can supply the flag re.IGNORECASE to functions in the re module as described in the docs.
matcher = re.compile(myExpression, re.IGNORECASE)
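The compiled pattern's sub method then performs the case-insensitive replacement, for example:
import re

matcher = re.compile('red', re.IGNORECASE)
print matcher.sub('blue', 'The sky is RED and the rose is Red')
# -> The sky is blue and the rose is blue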
A:
Using re is the best solution even if you think it's complicated.
To replace all occurrences of 'abc', 'ABC', 'Abc', etc., with 'Python', say:
re.sub(r'(?i)abc', 'Python', a)
Example session:
>>> a = 'abc asd Abc asd ABCDE XXAbCXX'
>>> import re
>>> re.sub(r'(?i)abc', 'Python', a)
'Python asd Python asd PythonDE XXPythonXX'
>>>
Note how embedding (?i) at the start of the regexp makes it case insensitive. Also note the r'...' string literal for the regexp (which in this specific case is redundant, but helps as soon as your regexp has backslashes (\) in it).
A:
The easiest way is to convert it all to lowercase and then do the replace. But that is obviously an issue if you want to retain the original case.
I would do a regex replace; you can instruct the regex engine to ignore casing altogether.
See this site for an example.
|
Case insensitivity in Python strings
|
I know that you can use the ctypes library to perform case insensitive comparisons on strings; however, I would like to perform case insensitive replacement too. Currently the only way I know to do this is with regexes, and that seems a little clumsy.
Is there a case insensitive version of replace()?
|
[
"You can supply the flag re.IGNORECASE to functions in the re module as described in the docs.\nmatcher = re.compile(myExpression, re.IGNORECASE)\n\n",
"Using re is the best solution even if you think it's complicated.\nTo replace all occurrences of 'abc', 'ABC', 'Abc', etc., with 'Python', say:\nre.sub(r'(?i)abc', 'Python', a)\n\nExample session:\n>>> a = 'abc asd Abc asd ABCDE XXAbCXX'\n>>> import re\n>>> re.sub(r'(?i)abc', 'Python', a)\n'Python asd Python asd PythonDE XXPythonXX'\n>>> \n\nNote how embedding (?i) at the start of the regexp makes it case sensitive. Also note the r'...' string literal for the regexp (which in this specific case is redundant but helps as soon as your regexp has backslashes (\\) in them.\n",
"The easiest way is to convert it all to lowercase then do the replace. But is obviously an issue if you want to retain the original case.\nI would do a regex replace, you can instruct the Regex engine to ignore casing all together.\nSee this site for an example.\n"
] |
[
10,
5,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000787842_python.txt
|
Q:
Web development with python and sql
I need to build a web site with the following features:
1) user forum where we expect light daily traffic
2) database backend for users to create profiles, where they can log in
and upload media (pictures)
3) users can use their profile to buy content from an online inventory
4) create web pages, shopping carts etc for online inventory
5) secure online credit card processing
I am very familiar with python but not with python web frameworks. I do know
some SQL. How do I get started developing something like this? Is Django
a good alternative?
Not programming related per se: Where do you recommend I get web hosting with a domain
name for an application like this?
A:
Django was made for this kind of thing. Check it out.
As far as hosting, djangofriendly.com is a great resource. I have used WebFaction before and I am absolutely in love with how easy it is to get Django going with them and with their excellent customer service. Very top notch for reasonable prices if you are going the shared hosting route.
If you are looking to speed up some of the tasks described, you should check out Pinax and Django Pluggables. Thanks to the way Django applications are setup it is trivially easy to plug an application into your project.
A:
You can try Pylons lightweight web framework.
A:
Your requirements make pinax sound like a library you might want to look into if you go the django route.
A:
Google App Engine will provide free hosting, as well as Django and a DB.
|
Web development with python and sql
|
I need to build a web site with the following features:
1) user forum where we expect light daily traffic
2) database backend for users to create profiles, where they can log in
and upload media (pictures)
3) users can use their profile to buy content from an online inventory
4) create web pages, shopping carts etc for online inventory
5) secure online credit card processing
I am very familiar with python but not with python web frameworks. I do know
some SQL. How do I get started developing something like this? Is Django
a good alternative?
Not programming related per se: Where do you recommend I get web hosting with a domain
name for an application like this?
|
[
"Django was made for this kind of thing. Check it out.\nAs far as hosting, djangofriendly.com is a great resource. I have used WebFaction before and I am absolutely in love with how easy it is to get Django going with them and with their excellent customer service. Very top notch for reasonable prices if you are going the shared hosting route.\nIf you are looking to speed up some of the tasks described, you should check out Pinax and Django Pluggables. Thanks to the way Django applications are setup it is trivially easy to plug an application into your project.\n",
"You can try Pylons lightweight web framework.\n",
"Your requirements make pinax sound like a library you might want to look into if you go the django route.\n",
"Google App Engine will provide hosting for free as well as Django and a db..\n"
] |
[
8,
1,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000788083_python.txt
|
Q:
Handling Authorization in web frameworks
I want to write a simple web framework myself using WSGI and Python. I am studying how to design the authorization system.
The system needs to be modular and abstract enough to add new systems into the project as plug-ins. Users may use a DB, a distributed key/value store, Bigtable, etc. to store their information.
Let's say these sorts of things are containers or providers which can be written as plug-ins into the system.
I want to define a very high-level IDENTITY for the user who logged in. "Identity" is the right word, used by many frameworks. But it is really tough to define "Identity" as an object due to its complex nature. It may contain anything, that is specific to application. But when writing the application, the application shall take care of what is in the identity. As a framework, though, it doesn't care what the identity is.
Authentication shall be separated from authorization.
Users, Groups, Roles/Permissions can be designed as plug-ins. The idea behind this concept is to write a good framework (at least for me, for research) with enough space for plug-ins, and to allow application developers to write portable code which suits the application.
Is it possible to work with 'identity' object at entire framework?
A:
"Is it possible to work with 'identity' object at entire framework?"
"But it is really tough to define "Identity" as an object due to its complex nature. "
Until you define identity, yes, it's difficult to work with.
Identity has to be positively specified. Leaving it so vague that "It may contain anything, that is specific to application" means you can't ever get started writing anything useful because you're too worried that "someday someone might invent a concept of identity that you can't handle".
Stop worrying. Identity is well defined and is not complex. HTTP and other protocols define "authorization" (really authentication) with usernames, passwords and realms. And that's all you really need.
Do what Django does: allow someone to add a "Profile" with additional facts about the person. The Profile is not central to identity and authentication. It's not central to authorization. But anyone can add "Profile" stuff for their specific application.
Do not write one model that does everything.
Write one model that works and someone can add to.
|
Handling Authorization in web frameworks
|
I want to write a simple web framework myself using WSGI and Python. I am studying how to design the authorization system.
The system needs to be modular and abstract enough to add new systems into the project as plug-ins. Users may use a DB, a distributed key/value store, Bigtable, etc. to store their information.
Let's say these sorts of things are containers or providers which can be written as plug-ins into the system.
I want to define a very high-level IDENTITY for the user who logged in. "Identity" is the right word, used by many frameworks. But it is really tough to define "Identity" as an object due to its complex nature. It may contain anything, that is specific to application. But when writing the application, the application shall take care of what is in the identity. As a framework, though, it doesn't care what the identity is.
Authentication shall be separated from authorization.
Users, Groups, Roles/Permissions can be designed as plug-ins. The idea behind this concept is to write a good framework (at least for me, for research) with enough space for plug-ins, and to allow application developers to write portable code which suits the application.
Is it possible to work with 'identity' object at entire framework?
|
[
"\n\"Is it possible to work with 'identity' object at entire framework?\"\n\"But it is really tough to define \"Identity\" as an object due to its complex nature. \"\n\nUntil you define identity, yes, it's difficult to work with.\nIdentity has to be positively specified. Leaving it so vague that \"It may contain anything, that is specific to application\" means you can't ever get started writing anything useful because you're too worried that \"someday someone might invent a concept of identity that you can't handle\".\nStop worrying. Identity is well defined and is not complex. HTTP and other protocols define \"authorization\" (really authentication) with usernames, passwords and realms. And that's all you really need.\nDo what Django does: allow someone to add a \"Profile\" with additional facts about the person. The Profile is not central to identify and authentication. It's not central to authorization. But anyone can add \"Profile\" stuff for their specific application.\nDo not write one model that does everything.\nWrite one model that works and someone can add to.\n"
] |
[
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000789468_python.txt
|
Q:
What is the easiest way to build Python26.zip for embedded distribution?
I am using Python as a plug-in scripting language for an existing C++ application. I am able to embed the python interpreter as stated in the Python documentation. Everything works successfully with the initialization and de-initialization of the interpreter. I am, however, having trouble loading modules because I have not been able to zip up the standard library into a zip file (normally PythonXX.zip, corresponding to the version number of the python dll).
What is the simplest way to zip up all of the standard library after optimized bytecode compiling? I'm looking for a simple script or command to do so for me, as I really don't want to do this by hand.
Any ideas?
Thanks!
A:
It shouldn't be too difficult to write a script for that. Check out the zipfile.PyZipFile class and its writepy method.
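A minimal sketch (the library path is hypothetical; note that writepy compiles and adds the modules of one directory, so package directories must be added individually):
import zipfile

zf = zipfile.PyZipFile('python26.zip', 'w')
zf.writepy('C:/Python26/Lib')    # hypothetical stdlib location
zf.close()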
A:
I would probably use setuptools to create an egg (basically a java jar for python). The setup.py would probably look something like this:
from setuptools import setup, find_packages
setup(
name='python26_stdlib',
package_dir = {'' : '/path/to/python/lib/directory'},
packages = find_packages(),
#any other metadata
)
You could run this using python setup.py bdist_egg. Once you have the egg, you can either add it to the python path or install it using setuptools. I believe this should handle the generation of pycs for you as well.
NOTE: I wouldn't use this on my system python directory. You might want to set up a virtualenv for this.
|
What is the easiest way to build Python26.zip for embedded distribution?
|
I am using Python as a plug-in scripting language for an existing C++ application. I am able to embed the python interpreter as stated in the Python documentation. Everything works successfully with the initialization and de-initialization of the interpreter. I am, however, having trouble loading modules because I have not been able to zip up the standard library into a zip file (normally PythonXX.zip, corresponding to the version number of the python dll).
What is the simplest way to zip up all of the standard library after optimized bytecode compiling? I'm looking for a simple script or command to do so for me, as I really don't want to do this by hand.
Any ideas?
Thanks!
|
[
"It shouldn't be too difficult to write a script for that. Check out the zipfile.PyZipFile class and it's writepy method.\n",
"I would probably use setuptools to create an egg (basically a java jar for python). The setup.py would probably look something like this:\nfrom setuptools import setup, find_packages\n\nsetup(\n name='python26_stdlib',\n package_dir = {'' : '/path/to/python/lib/directory'},\n packages = find_packages(),\n #any other metadata\n)\n\nYou could run this using python setup.py bdist_egg. Once you have the egg, you can either add it to the python path or you can install it using setuptools. I believe this should also handle the generation of pycs for you as well.\nNOTE: I wouldn't use this on my system python directory. You might want to set up a virtualenv for this.\n"
] |
[
2,
2
] |
[] |
[] |
[
"c++",
"distribution",
"embedded_language",
"python"
] |
stackoverflow_0000789598_c++_distribution_embedded_language_python.txt
|
Q:
Why builtin functions instead of root class methods?
(I'm sure this is a FAQ, but also hard to google)
Why does Python use abs(x) instead of x.abs?
As far as I see everything abs() does besides calling x.__abs__ could just as well be implemented in object.abs()
Is it historical, because there hasn't always been a root class?
A:
The official answer from Guido van Rossum, with additional explanation from Fredrik Lundh, is here: http://effbot.org/pyfaq/why-does-python-use-methods-for-some-functionality-e-g-list-index-but-functions-for-other-e-g-len-list.htm
In a nutshell:
abs(x) reads more naturally than x.abs() for most such operations
you know that abs(x) is getting an absolute value, whereas a method x.abs() could mean something different depending on the class of x.
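You can see the delegation the question describes directly; abs() just calls the object's __abs__:
class Money(object):
    def __init__(self, amount):
        self.amount = amount
    def __abs__(self):
        return Money(abs(self.amount))

print abs(Money(-5)).amount      # abs() delegates to Money.__abs__, prints 5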
A:
I think you are looking at a typical example where a language designer decides that readability and terseness trump purist constructs.
A:
Python is a language that supports object oriented coding, but it deliberately isn't a pure OO language. As you correctly mention, Python classes, even user defined ones, haven't always derived from a single base class.
Functions are the basic unit of functionality in Python, so it makes sense for the core operations (random sample: str, dir, print, hash) to look like functions.
|
Why builtin functions instead of root class methods?
|
(I'm sure this is a FAQ, but also hard to google)
Why does Python use abs(x) instead of x.abs?
As far as I see everything abs() does besides calling x.__abs__ could just as well be implemented in object.abs()
Is it historical, because there hasn't always been a root class?
|
[
"The official answer from Guido van Rossum, with additional explanation from Fredrik Lundh, is here: http://effbot.org/pyfaq/why-does-python-use-methods-for-some-functionality-e-g-list-index-but-functions-for-other-e-g-len-list.htm\nIn a nutshell:\n\nabs(x) reads more naturally than x.abs() for most such operations\nyou know that abs(x) is getting an absolute value, whereas a method x.abs() could mean something different depending on the class of x.\n\n",
"I think you are looking a typical example where a language designer decides that readability and terseness trump purist constructs.\n",
"Python is a language that supports object oriented coding, but it deliberately isn't a pure OO language. As you correctly mention, Python classes, even user defined ones, haven't always derived from a single base class.\nFunctions are the basic unit of functionality in Python, so it makes sense for the core operations (random sample: str, dir, print, hash) to look like functions.\n"
] |
[
13,
1,
0
] |
[
"i think it involves how object oriented way python has been used, because the first parameter of method calls on object is the object itself, so x.abs() is in essential abs(x)\nlook at the follow page under chapter 3.2.3 Python supports object-oriented programming \nthat should explain some things\n"
] |
[
-3
] |
[
"oop",
"python"
] |
stackoverflow_0000789718_oop_python.txt
|
Q:
How do I watch a serial port with QSocketNotifier (linux)?
Could someone give me an example on how to setup QSocketNotifier to fire an event if something comes on /dev/ttyS0 ? (preferably in python/pyqt4)
A:
Here's an example that just keeps reading from a file using QSocketNotifier. Simply replace that 'foo.txt' with '/dev/ttyS0' and you should be good to go.
import os
from PyQt4.QtCore import QCoreApplication, QSocketNotifier, SIGNAL
def readAllData(fd):
bufferSize = 1024
while True:
data = os.read(fd, bufferSize)
if not data:
break
print 'data read:'
print repr(data)
a = QCoreApplication([])
fd = os.open('foo.txt', os.O_RDONLY)
notifier = QSocketNotifier(fd, QSocketNotifier.Read)
a.connect(notifier, SIGNAL('activated(int)'), readAllData)
a.exec_()
|
How do I watch a serial port with QSocketNotifier (linux)?
|
Could someone give me an example on how to setup QSocketNotifier to fire an event if something comes on /dev/ttyS0 ? (preferably in python/pyqt4)
|
[
"Here's an example that just keeps reading from a file using QSocketNotifier. Simply replace that 'foo.txt' with '/dev/ttyS0' and you should be good to go.\n\nimport os\n\nfrom PyQt4.QtCore import QCoreApplication, QSocketNotifier, SIGNAL\n\n\ndef readAllData(fd):\n bufferSize = 1024\n while True:\n data = os.read(fd, bufferSize)\n if not data:\n break\n print 'data read:'\n print repr(data)\n\n\na = QCoreApplication([])\n\nfd = os.open('foo.txt', os.O_RDONLY)\nnotifier = QSocketNotifier(fd, QSocketNotifier.Read)\na.connect(notifier, SIGNAL('activated(int)'), readAllData)\n\na.exec_()\n\n\n"
] |
[
5
] |
[] |
[] |
[
"pyqt4",
"python",
"qt",
"serial_port"
] |
stackoverflow_0000789304_pyqt4_python_qt_serial_port.txt
|
Q:
What is wrong with my attempt to do a string replace operation in Python?
What am I doing wrong here?
import re
x = "The sky is red"
r = re.compile ("red")
y = r.sub(x, "blue")
print x # Prints "The sky is red"
print y # Prints "blue"
How do i get it to print "The sky is blue"?
A:
The problem with your code is that there are two sub functions in the re module. One is the general one and there's one tied to regular expression objects. Your code is not following either one:
The two methods are:
re.sub(pattern, repl, string[, count]) (docs here)
Used like so:
>>> y = re.sub(r, 'blue', x)
>>> y
'The sky is blue'
And for when you compile it before hand, as you tried, you can use:
RegexObject.sub(repl, string[, count=0]) (docs here)
Used like so:
>>> z = r.sub('blue', x)
>>> z
'The sky is blue'
A:
You read the API wrong
http://docs.python.org/library/re.html#re.sub
pattern.sub(repl, string[, count])
r.sub(x, "blue")
# should be
r.sub("blue", x)
A:
You have the arguments to your call to sub the wrong way round it should be:
import re
x = "The sky is red"
r = re.compile ("red")
y = r.sub("blue", x)
print x # Prints "The sky is red"
print y # Prints "The sky is blue"
A:
By the way, for such a simple example, the re module is overkill:
x= "The sky is red"
y= x.replace("red", "blue")
print y
A:
Try:
x = r.sub("blue", x)
|
What is wrong with my attempt to do a string replace operation in Python?
|
What am I doing wrong here?
import re
x = "The sky is red"
r = re.compile ("red")
y = r.sub(x, "blue")
print x # Prints "The sky is red"
print y # Prints "blue"
How do i get it to print "The sky is blue"?
|
[
"The problem with your code is that there are two sub functions in the re module. One is the general one and there's one tied to regular expression objects. Your code is not following either one:\nThe two methods are:\nre.sub(pattern, repl, string[, count]) (docs here)\nUsed like so:\n>>> y = re.sub(r, 'blue', x)\n>>> y\n'The sky is blue'\n\nAnd for when you compile it before hand, as you tried, you can use:\nRegexObject.sub(repl, string[, count=0]) (docs here)\nUsed like so:\n>>> z = r.sub('blue', x)\n>>> z\n'The sky is blue'\n\n",
"You read the API wrong\nhttp://docs.python.org/library/re.html#re.sub\npattern.sub(repl, string[, count])¶\nr.sub(x, \"blue\")\n# should be\nr.sub(\"blue\", x)\n\n",
"You have the arguments to your call to sub the wrong way round it should be:\n\n\nimport re\nx = \"The sky is red\"\nr = re.compile (\"red\")\ny = r.sub(\"blue\", x)\nprint x # Prints \"The sky is red\"\nprint y # Prints \"The sky is blue\"\n\n\n",
"By the way, for such a simple example, the re module is overkill:\nx= \"The sky is red\"\ny= x.replace(\"red\", \"blue\")\nprint y\n\n",
"Try:\nx = r.sub(\"blue\", x)\n\n"
] |
[
12,
6,
3,
3,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000786881_python.txt
|
Q:
Locales and temperature/length conversion
Do locales contain information about preferred units for temperature, lengths, etc. on Unix/Linux? Is it possible to access these properties from Python? I checked out the "locales" module, but didn't find anything suitable.
I'd like my application to automatically convert values into the most suitable unit.
A:
No, that's not possible.
I think every country in the world is on the metric system, with the dubious exceptions of the United States and a few others. With that said, you can be confident about choosing metric.
You'd want to write classes with conversion and math rules to define proper operations for each measure.
You won't know what variables to apply the conversions to, and you won't know if micrometers or kilometers are most appropriate for your length measures. It's necessary to know the measurement system, but not sufficient for problems that want to use units properly.
A:
For what it's worth, KDE offers a choice of "Metric" or "Imperial" as the standard unit system, so I would presume that it's possible to access that information through Python somehow. Gnome might have a similar setting, I'm not sure... but I don't think there's any equivalent for a generic UNIX/Linux system.
The most recent version of SciPy (0.7) includes a module for unit handling, and you can use that to do your conversions if necessary.
|
Locales and temperature/length conversion
|
Do locales contain information about preferred units for temperature, lengths, etc. on Unix/Linux? Is it possible to access these properties from Python? I checked out the "locales" module, but didn't find anything suitable.
I'd like my application to automatically convert values into the most suitable unit.
|
[
"No, that's not possible.\nI think every country in the world is on the metric system, with the dubious exceptions of the United States and a few others. With that said, you can be confident about choosing metric.\nYou'd want to write classes with conversion and math rules to define proper operations for each measure.\nYou won't know what variables to apply the conversions to, and you won't know if micrometers or kilometers are most appropriate for your length measures. It's necessary to know the measurement system, but not sufficient for problems that want to use units properly.\n",
"For what it's worth, KDE offers a choice of \"Metric\" or \"Imperial\" as the standard unit system, so I would presume that it's possible to access that information through Python somehow. Gnome might have a similar setting, I'm not sure... but I don't think there's any equivalent for a generic UNIX/Linux system.\nThe most recent version of SciPy (0.7) includes a module for unit handling, and you can use that to do your conversions if necessary.\n"
] |
[
3,
0
] |
[] |
[] |
[
"localization",
"python"
] |
stackoverflow_0000789953_localization_python.txt
|
Q:
finding substrings in python
Can you please help me get the substrings between two characters at each occurrence?
For example, to get all the substrings between "Q" and "E" in the given example sequence at all occurrences:
ex: QUWESEADFQDFSAEDFS
and then to find the substring with minimum length.
A:
import re
DATA = "QUWESEADFQDFSAEDFS"
# Get all the substrings between Q and E:
substrings = re.findall(r'Q([^E]+)E', DATA)
print "Substrings:", substrings
# Sort by length, then the first one is the shortest:
substrings.sort(key=lambda s: len(s))
print "Shortest substring:", substrings[0]
A:
RichieHindle has it right, except that
substrings.sort(key=len)
is a better way to express it than that redundant lambda;-).
If you're using Python 2.5 or later, min(substrings, key=len) will actually give you the one shortest string (the first one, if several strings tie for "shortest") quite a bit faster than sorting and taking the [0]th element, of course. But if you're stuck with 2.4 or earlier, RichieHindle's approach is the best alternative.
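Putting the two answers together, a minimal sketch for 2.5+:
import re

substrings = re.findall(r'Q([^E]+)E', "QUWESEADFQDFSAEDFS")
print min(substrings, key=len)   # shortest match; here 'UW'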
|
finding substrings in python
|
Can you please help me get the substrings between two characters at each occurrence?
For example, to get all the substrings between "Q" and "E" in the given example sequence at all occurrences:
ex: QUWESEADFQDFSAEDFS
and then to find the substring with minimum length.
|
[
"import re\nDATA = \"QUWESEADFQDFSAEDFS\"\n\n# Get all the substrings between Q and E:\nsubstrings = re.findall(r'Q([^E]+)E', DATA)\nprint \"Substrings:\", substrings\n\n# Sort by length, then the first one is the shortest:\nsubstrings.sort(key=lambda s: len(s))\nprint \"Shortest substring:\", substrings[0]\n\n",
"RichieHindle has it right, except that\nsubstrings.sort(key=len)\n\nis a better way to express it than that redundant lambda;-). \nIf you're using Python 2.5 or later, min(substrings, key=len) will actually give you the one shortest string (the first one, if several strings tie for \"shortest\") quite a bit faster than sorting and taking the [0]th element, of course. But if you're stuck with 2.4 or earlier, RichieHindle's approach is the best alternative.\n"
] |
[
16,
7
] |
[] |
[] |
[
"algorithm",
"python",
"regex",
"substring"
] |
stackoverflow_0000788699_algorithm_python_regex_substring.txt
|
Q:
What is the best way to redirect email to a Python script?
I'd like to provide functionality for users of my website to get assigned an email address upon registration (such as [email protected]), but I don't really think it is feasible to actually support all these email accounts normally through a webmail program. I am also not sure if my webhost would be cool with it. What I'd really want is a seamless integration of this email into the bigger system that the website is, as it is mostly going to be used for intra-site messaging, but we want to allow users to give out actual email addresses. So what I would like to do instead is have a catch-all account under mydomain and have a script look at its incoming mail, see who each message was meant for, and add a message for that user in the system.
So, the questions are:
1) Is this the right approach? How expensive would it be to get a host that would allow me to assign emails at will to my domain? I am currently using WebFaction's shared hosting.
2) If it is an okay approach, what is the best way to route this catch all account to my python script? I have read about .forward but I am not very good at UNIX stuff. Once I figure that out, how would I get the script to be in the "Django environment" so I can use Django's model functionality to add the new messages to the user?
3) Is there anything Django can do to make this any easier?
4) Are there any tools in Python to help me parse the email address? How do they work?
A:
To directly answer your questions:
1,2) Check out this FAQ in the WebFaction website. It explains how to easily route incoming emails into the script of your choice. When creating an email address, you can just not specify a username to make it be a catch-all email that anything sent to the domain goes to.
3) As others have suggested, you could check out django-messages, but maybe Django Plugables has something better.
4) Check out the email.parser module as it takes care of most of the scary parts of parsing emails.
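Once a raw message reaches your script, pulling the intended recipient out of it is short; a minimal sketch (the file name is hypothetical):
import email
from email.utils import parseaddr

raw = open('incoming.msg').read()        # hypothetical: message handed over by the mail route
msg = email.message_from_string(raw)
realname, addr = parseaddr(msg['To'])    # handles "Name <user@host>" forms
print "deliver to site user:", addr.split('@')[0]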
A:
See my answer to a similar question. It has all the basic code to get you started with an email parser for Django.
Edit: On second thought, here's the code:
There's an app called jutda-helpdesk that uses Python's poplib and imaplib to process incoming emails. You just have to have an account somewhere with POP3 or IMAP access.
This is adapted from their get_email.py:
def process_mail(mb):
print "Processing: %s" % q
if mb.email_box_type == 'pop3':
if mb.email_box_ssl:
if not mb.email_box_port: mb.email_box_port = 995
server = poplib.POP3_SSL(mb.email_box_host, int(mb.email_box_port))
else:
if not mb.email_box_port: mb.email_box_port = 110
server = poplib.POP3(mb.email_box_host, int(mb.email_box_port))
server.getwelcome()
server.user(mb.email_box_user)
server.pass_(mb.email_box_pass)
messagesInfo = server.list()[1]
for msg in messagesInfo:
msgNum = msg.split(" ")[0]
msgSize = msg.split(" ")[1]
full_message = "\n".join(server.retr(msgNum)[1])
# Do something with the message
server.dele(msgNum)
server.quit()
elif mb.email_box_type == 'imap':
if mb.email_box_ssl:
if not mb.email_box_port: mb.email_box_port = 993
server = imaplib.IMAP4_SSL(mb.email_box_host, int(mb.email_box_port))
else:
if not mb.email_box_port: mb.email_box_port = 143
server = imaplib.IMAP4(mb.email_box_host, int(mb.email_box_port))
server.login(mb.email_box_user, mb.email_box_pass)
server.select(mb.email_box_imap_folder)
status, data = server.search(None, 'ALL')
for num in data[0].split():
status, data = server.fetch(num, '(RFC822)')
full_message = data[0][1]
# Do something with the message
server.store(num, '+FLAGS', '\\Deleted')
server.expunge()
server.close()
server.logout()
mb is just some object to store all the mail server info, the rest should be pretty clear.
You'll probably need to check the docs on poplib and imaplib to get specific parts of the message, but hopefully this is enough to get you going.
A:
but I don't really think it is
feasible to actually support all these
email accounts normally through a
webmail program
I think that your base assumption here is incorrect. You see, most 'webmail' programs are just frontends (or clients) to the backend mail system (postfix etc). You will need to see how your webhost is set up. There is no reason why you can not create these accounts programmatically and then let them use a normal webmail interface like SquirrelMail or RoundCube. For instance, my webhost (bluehost) allows me 2500 email accounts - I am not sure how many yours allows - but I can upgrade to unlimited for a few extra dollars a month. I think that using the builtin email handling facility is a more robust way to go.
A:
Your question is similar to this question.
Use a project like django-messages to handle messaging between your users.
If you want to let users receive mail from outside your Django site then you will need to set up an MTA to handle receiving and storing the email, then something like procmail to retrieve it into your Django message database.
Common MTAs are postfix, exim, and qmail. Python-based ones are listed in answers to this question.
You'll also need to roll your own code to make each new user on your Django site a valid email recipient so they won't be rejected by the MTA.
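To make the delivery step concrete, here is a rough sketch of the kind of script a .forward entry or procmail rule could pipe each message into; the project name mysite and the Message model and fields are hypothetical placeholders:
#!/usr/bin/env python
import os
import sys
import email

# Bootstrap the Django environment before importing models (hypothetical project name)
os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'

from mysite.messaging.models import Message  # hypothetical app and model

msg = email.message_from_file(sys.stdin)  # the MTA pipes the raw message on stdin
Message.objects.create(to_address=msg['To'],
                       subject=msg.get('Subject', ''),
                       body=msg.get_payload())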
|
What is the best way to redirect email to a Python script?
|
I'd like to provide a functionality for users of my website to get assigned an email address upon registration (such as [email protected]) but I don't really think it is feasible to actually support all these email accounts normally through a webmail program. I am also not sure if my webhost would be cool with it. What I'd really want is a seamless integration of this email into the bigger system that the website is, as it is mostly going to be for intra-site messaging, but we want to allow users to have actual email addresses. So what I would like to do instead is have a catch-all account under my domain, have a script look at incoming mail, see who it was meant for, and add a message for that user in the system.
So, the questions are:
1) Is this the right approach? How expensive would it be to get a host that would allow me to assign emails at will to my domain? I am currently using WebFaction's shared hosting.
2) If it is an okay approach, what is the best way to route this catch-all account to my Python script? I have read about .forward but I am not very good at UNIX stuff. Once I figure that out, how would I get the script to be in the "Django environment" so I can use Django's model functionality to add the new messages to the user?
3) Is there anything Django can do to make this any easier?
4) Are there any tools in Python to help me parse the email address? How do they work?
|
[
"To directly answer your questions:\n1,2) Check out this FAQ in the WebFaction website. It explains how to easily route incoming emails into the script of your choice. When creating an email address, you can just not specify a username to make it be a catch-all email that anything sent to the domain goes to.\n3) As others have suggested, you could check out django-messages, but maybe Django Plugables has something better.\n4) Check out the email.parser module as it takes care of most of the scary parts of parsing emails.\n",
"See my answer to a similar question. It has all the basic code to get you started with an email parser for Django.\nEdit: On second thought, here's the code:\nThere's an app called jutda-helpdesk that uses Python's poplib and imaplib to process incoming emails. You just have to have an account somewhere with POP3 or IMAP access.\nThis is adapted from their get_email.py:\ndef process_mail(mb):\n print \"Processing: %s\" % q\n if mb.email_box_type == 'pop3':\n if mb.email_box_ssl:\n if not mb.email_box_port: mb.email_box_port = 995\n server = poplib.POP3_SSL(mb.email_box_host, int(mb.email_box_port))\n else:\n if not mb.email_box_port: mb.email_box_port = 110\n server = poplib.POP3(mb.email_box_host, int(mb.email_box_port))\n server.getwelcome()\n server.user(mb.email_box_user)\n server.pass_(mb.email_box_pass)\n\n messagesInfo = server.list()[1]\n\n for msg in messagesInfo:\n msgNum = msg.split(\" \")[0]\n msgSize = msg.split(\" \")[1]\n full_message = \"\\n\".join(server.retr(msgNum)[1])\n\n # Do something with the message\n\n server.dele(msgNum)\n server.quit()\n\n elif mb.email_box_type == 'imap':\n if mb.email_box_ssl:\n if not mb.email_box_port: mb.email_box_port = 993\n server = imaplib.IMAP4_SSL(mb.email_box_host, int(mb.email_box_port))\n else:\n if not mb.email_box_port: mb.email_box_port = 143\n server = imaplib.IMAP4(mb.email_box_host, int(mb.email_box_port))\n server.login(mb.email_box_user, mb.email_box_pass)\n server.select(mb.email_box_imap_folder)\n status, data = server.search(None, 'ALL')\n for num in data[0].split():\n status, data = server.fetch(num, '(RFC822)')\n full_message = data[0][1]\n\n # Do something with the message\n\n server.store(num, '+FLAGS', '\\\\Deleted')\n server.expunge()\n server.close()\n server.logout()\n\nmb is just some object to store all the mail server info, the rest should be pretty clear.\nYou'll probably need to check the docs on poplib and imaplib to get specific parts of the message, but hopefully this is enough to get you going.\n",
"\nbut I don't really think it is\n feasible to actually support all these\n emails account normally through a\n webmail program\n\nI think that your base assumption here is incorrect. You see, most 'webmail' programs are just frontends (or clients) to the backend mail system (postfix etc). You will need to see how your webhost is set up. There is no reason why you can not create these accounts programmatically and then let them use a normal webmail interface like SquirrelMail or RoundCube. For instance, my webhost (bluehost) allows me 2500 email accounts - I am not sure how many yours allows - but I can upgrade to unlimited for a few extra dollars a month. I think that using the builtin email handling facility is a more robust way to go.\n",
"Your question is similar to this question.\nUse a project like django-messages to handle messaging between your users.\nIf you want to let users receive mail from outside your Django site then you will need to set up an MTA to handle receiving and storing the email, then something like procmail to retrieve it into your Django message database.\nCommon MTA's are postfix, exim, and qmail. Python based ones listed in answers to this question\nYou'll also need to roll your own code to make each new user on your Django site a valid email recipient so they won't be rejected by the MTA.\n"
] |
[
4,
2,
0,
0
] |
[] |
[] |
[
"django",
"email",
"python"
] |
stackoverflow_0000789685_django_email_python.txt
|
Q:
Updating tkinter labels in python
I'm working on giving a python server a GUI with tkinter by passing the Server's root instance to the Tkinter window. The problem is in keeping information in the labels up to date.
For instance, the server has a Users list, containing the users that are logged on. It's simple enough to do this for an initial list:
string = ""
for user in self.server.Users:
string += user + "\n"
Label(master, text=string)
But that will only do it once. After that, how am I supposed to update the list? I could add an 'update users' button, but I need the list to be self-updating.
A:
You could use callbacks on the server instance. Install a callback that updates the label whenever the user-list changes.
If you can't change the server code, you would need to poll the list for updates every few seconds. You could use the Tkinter event system to keep track of the updates.
def user_updater(self):
self.user_updater_id = self.user_label.after(1000, self.user_updater)
lines = []
for user in self.server.Users:
lines.append(user)
self.user_label["text"] = "\n".join(lines)
def stop_user_updater(self):
self.user_label.after_cancel(self.user_updater_id)
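A hypothetical way to wire this up, assuming the attributes used above: create the label once, then call the updater a single time; after() keeps it rescheduling itself:
self.user_label = Label(master)
self.user_label.pack()
self.user_updater()  # reschedules itself every 1000 ms via after()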
A:
You change the text of a Label by setting the text of its corresponding StringVar object, for example:
from tkinter import *
root = Tk()
string = StringVar()
lab = Label(root, textvariable=string)
lab.pack()
string.set('Changing the text displayed in the Label')
root.mainloop()
Note the use of the set function to change the displayed text of the Label lab.
See the New Mexico Tech Tkinter reference on this topic for more information.
|
Updating tkinter labels in python
|
I'm working on giving a python server a GUI with tkinter by passing the Server's root instance to the Tkinter window. The problem is in keeping information in the labels up to date.
For instance, the server has a Users list, containing the users that are logged on. It's simple enough to do this for an initial list:
string = ""
for user in self.server.Users:
string += user + "\n"
Label(master, text=string)
But that will only do it once. After that, how am I supposed to update the list? I could add an 'update users' button, but I need the list to be self-updating.
|
[
"You could use callbacks on the server instance. Install a callback that updates the label whenever the user-list changes.\nIf you can't change the server code, you would need to poll the list for updates every few seconds. You could use the Tkinter event system to keep track of the updates.\ndef user_updater(self):\n self.user_updater_id = self.user_label.after(1000, self.user_updater)\n lines = []\n for user in self.server.Users:\n lines.append(user)\n self.user_label[\"text\"] = \"\\n\".join(lines)\n\ndef stop_user_updater(self):\n self.user_label.after_cancel(self.user_updater_id)\n\n",
"You change the text of a Label by setting the text of its corresponding StringVar object, for example:\nfrom tkinter import *\n\nroot = Tk()\nstring = StringVar()\nlab = Label(root, textvariable=string)\nlab.pack()\nstring.set('Changing the text displayed in the Label')\nroot.mainloop()\n\nNote the use of the set function to change the displayed text of the Label lab. \nSee New Mexico Tech Tkinter reference about this topic for more information.\n"
] |
[
3,
2
] |
[] |
[] |
[
"python",
"tkinter"
] |
stackoverflow_0000773797_python_tkinter.txt
|
Q:
Is there a replacement for Paste.Template?
I have grown tired of all the little issues with paste template: it's horrible to maintain the templates, it has no way of updating an old project, and it's very hard to test.
I'm wondering if someone knows of an alternative for quickstart generators as they have proven to be useful.
|
Is there a replacement for Paste.Template?
|
I have grown tired of all the little issues with paste template: it's horrible to maintain the templates, it has no way of updating an old project, and it's very hard to test.
I'm wondering if someone knows of an alternative for quickstart generators as they have proven to be useful.
|
[] |
[] |
[
"I haven't used paste templates, so I'm not sure how it compares, but Mako seems like a fairly good system.\nA snippet of the template language from their front page:\n<%inherit file=\"base.html\"/>\n<%\n rows = [[v for v in range(0,10)] for row in range(0,10)]\n%>\n<table>\n % for row in rows:\n ${makerow(row)}\n % endfor\n</table>\n\n<%def name=\"makerow(row)\">\n <tr>\n % for name in row:\n <td>${name}</td>\\\n % endfor\n </tr>\n</%def>\n\n"
] |
[
-1
] |
[
"generator",
"python",
"templates"
] |
stackoverflow_0000790534_generator_python_templates.txt
|
Q:
Python library to modify MP3 audio without transcoding
I am looking for some general advice about the mp3 format before I start a small project to make sure I am not on a wild-goose chase.
My understanding of the internals of the mp3 format is minimal. Ideally, I am looking for a library that would abstract those details away. I would prefer to use Python (but could be convinced otherwise).
I would like to modify a set of mp3 files in a fairly simple way. I am not so much interested in the ID3 tags but in the audio itself. I want to be able to delete sections (e.g. drop 10 seconds from the 3rd minute), and insert sections (e.g. add credits to the end.)
My understanding is that the mp3 format is lossy, and so decoding it to (for example) PCM format, making the modifications, and then encoding it again to MP3 will lower the audio quality. (I would love to hear that I am wrong.)
I conjecture that if I stay in mp3 format, there will be some sort of minimum frame or packet-size to deal with, so the granularity of the operations may be coarser. I can live with that, as long as I get an accuracy of within a couple of seconds.
I have looked at PyMedia, but it requires me to migrate to PCM to process the data. Similarly, LAME wants to help me encode, but not access the data in place. I have seen several other libraries that only deal with the ID3 tags.
Can anyone recommend a Python MP3 library? Alternatively, can you disabuse me of my assumption that going to PCM and back is bad and avoidable?
A:
If you want to do things low-level, use pymad. It turns MP3s into a buffer of sample data.
If you want something a little higher-level, use the Echo Nest Remix API (disclosure: I wrote part of it for my dayjob). It includes a few examples. If you look at the cowbell example (i.e., MoreCowbell.dj), you'll see a fork of pymad that gives you a NumPy array instead of a buffer. That datatype makes it easier to slice out sections and do math on them.
A:
I got three quality answers, and I thank you all for them. I haven't chosen any as the accepted answer, because each addressed one aspect, so I wanted to write a summary.
Do you need to work in MP3?
Transcoding to PCM and back to MP3 is unlikely to result in a drop in quality.
Don't optimise audio-quality prematurely; test it with a simple prototype and listen to it.
Working in MP3
Wikipedia has a summary of the MP3 File Format.
MP3 frames are short (1152 samples, about 26 ms at 44.1 kHz), allowing for reasonably fine precision at that level.
However, Wikipedia warns that "Frames are not independent items ("byte reservoir") and therefore cannot be extracted on arbitrary frame boundaries."
Existing libraries are unlikely to be of assistance, if I really want to avoid decoding.
Working in PCM
There are several libraries at this level:
LAME (latest release: October 2017)
PyMedia (latest release: February 2006)
PyMad (Linux only? Decoder only? Latest release: January 2007)
Working at a higher level
Echo Nest Remix API (Mac or Linux only, at the moment) is an API to a web-service that supports quite sophisticated operations (e.g. finding the locations of music beats and tempo, etc.)
mp3DirectCut (Windows only) is a GUI that apparently performs the operations I want, but as an app. It is not open-source. (I tried to run it, got an Access Denied installer error, and didn't follow up. A GUI isn't suitable for me anyway, as I want to repeatedly run these operations on a changing library of files.)
My plan is now to start out in PyMedia, using PCM.
A:
MP3 is lossy, but it is lossy in a very specific way. The algorithms used are designed to discard certain parts of the audio which your ears are unable to hear (or find very difficult to hear). Re-doing the compression process at the same level of compression over and over is likely to yield nearly identical results for a given piece of audio. However, some additional losses may slowly accumulate. If you're going to be modifying files a lot, this might be a bad idea. It would also be a bad idea if you were concerned about quality, but then, if you are concerned about quality, using MP3 is a bad idea overall.
You could construct a test using an encoder and a decoder to re-encode a few different mp3 files a few times and watch how they change, this could help you determine the rate of deterioration and figure out if it is acceptable to you. Sounds like you have libraries you could use to run this simple test already.
MP3 files are composed of "frames" of audio and so it should be possible, with some effort, to remove entire frames with minimal processing (remove the frame, update some minor details in the file header). I believe frames are pretty short (on the order of tens of milliseconds each), which would give the precision you're looking for. So doing some reading on the MP3 File Format should give you enough information to code your own Python library to do this. This is a fair bit different from traditional "audio processing" (since you don't care about precision) and so you're unlikely to find an existing library that does this. Most, as you've found, will decompress the audio first so you can have complete fine-grained control.
A:
Not a direct answer to your needs, but check the mp3DirectCut software that does what you want (as a GUI app). I think that the source code is available, so even if you don't find a library, you could build one of your own, or build a python extension using code from mp3DirectCut.
A:
As for removing or extracting mp3 segments from an mp3 file while staying in the MP3 domain (that is, without conversion to PCM format and back), there is also the open source package PyMp3Cut.
As for splicing MP3 files together (adding e.g. 'Credits' to the end or beginning of an mp3 file) I've found you can simply concatenate the MP3 files providing that the files have the same sampling rate (e.g. 44.1khz) and the same number of channels (e.g. both are stereo or both are mono).
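As a rough illustration of that concatenation approach, here is a naive byte-level sketch (it ignores ID3 tags and VBR headers, which strictly speaking should be stripped or rebuilt first):
def concat_mp3(paths, out_path):
    # Works in practice when all inputs share sample rate and channel count
    with open(out_path, 'wb') as out:
        for path in paths:
            with open(path, 'rb') as f:
                out.write(f.read())

concat_mp3(['song.mp3', 'credits.mp3'], 'combined.mp3')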
|
Python library to modify MP3 audio without transcoding
|
I am looking for some general advice about the mp3 format before I start a small project to make sure I am not on a wild-goose chase.
My understanding of the internals of the mp3 format is minimal. Ideally, I am looking for a library that would abstract those details away. I would prefer to use Python (but could be convinced otherwise).
I would like to modify a set of mp3 files in a fairly simple way. I am not so much interested in the ID3 tags but in the audio itself. I want to be able to delete sections (e.g. drop 10 seconds from the 3rd minute), and insert sections (e.g. add credits to the end.)
My understanding is that the mp3 format is lossy, and so decoding it to (for example) PCM format, making the modifications, and then encoding it again to MP3 will lower the audio quality. (I would love to hear that I am wrong.)
I conjecture that if I stay in mp3 format, there will be some sort of minimum frame or packet-size to deal with, so the granularity of the operations may be coarser. I can live with that, as long as I get an accuracy of within a couple of seconds.
I have looked at PyMedia, but it requires me to migrate to PCM to process the data. Similarly, LAME wants to help me encode, but not access the data in place. I have seen several other libraries that only deal with the ID3 tags.
Can anyone recommend a Python MP3 library? Alternatively, can you disabuse me of my assumption that going to PCM and back is bad and avoidable?
|
[
"If you want to do things low-level, use pymad. It turns MP3s into a buffer of sample data.\nIf you want something a little higher-level, use the Echo Nest Remix API (disclosure: I wrote part of it for my dayjob). It includes a few examples. If you look at the cowbell example (i.e., MoreCowbell.dj), you'll see a fork of pymad that gives you a NumPy array instead of a buffer. That datatype makes it easier to slice out sections and do math on them.\n",
"I got three quality answers, and I thank you all for them. I haven't chosen any as the accepted answer, because each addressed one aspect, so I wanted to write a summary.\nDo you need to work in MP3?\n\nTranscoding to PCM and back to MP3 is unlikely to result in a drop in quality.\n\nDon't optimise audio-quality prematurely; test it with a simple prototype and listen to it.\n\n\nWorking in MP3\n\nWikipedia has a summary of the MP3 File Format.\n\nMP3 frames are short (1152 samples, or just a few milliseconds) allowing for moderate precision at that level.\n\nHowever, Wikipedia warns that \"Frames are not independent items (\"byte reservoir\") and therefore cannot be extracted on arbitrary frame boundaries.\"\n\nExisting libraries are unlikely to be of assistance, if I really want to avoid decoding.\n\n\nWorking in PCM\nThere are several libraries at this level:\n\nLAME (latest release: October 2017)\nPyMedia (latest release: February 2006)\nPyMad (Linux only? Decoder only? Latest release: January 2007)\n\nWorking at a higher level\n\nEcho Nest Remix API (Mac or Linux only, at the moment) is an API to a web-service that supports quite sophisticated operations (e.g. finding the locations of music beats and tempo, etc.)\n\nmp3DirectCut (Windows only) is a GUI that apparently performs the operations I want, but as an app. It is not open-source. (I tried to run it, got an Access Denied installer error, and didn't follow up. A GUI isn't suitably for me, as I want to repeatedly run these operations on a changing library of files.)\n\n\nMy plan is now to start out in PyMedia, using PCM.\n",
"Mp3 is lossy, but it is lossy in a very specific way. The algorithms used as designed to discard certain parts of the audio which your ears are unable to hear (or are very difficult to hear). Re-doing the compression process at the same level of compression over and over is likely to yield nearly identical results for a given piece of audio. However, some additional losses may slowly accumulate. If you're going to be modifying files a lot, this might be a bad idea. It would also be a bad idea if you were concerned about quality, but then using MP3 if you are concerned about quality is a bad idea over all.\nYou could construct a test using an encoder and a decoder to re-encode a few different mp3 files a few times and watch how they change, this could help you determine the rate of deterioration and figure out if it is acceptable to you. Sounds like you have libraries you could use to run this simple test already.\nMP3 files are composed of \"frames\" of audio and so it should be possible, with some effort, to remove entire frames with minimal processing (remove the frame, update some minor details in the file header). I believe frames are pretty short (a few milliseconds each) which would give the precision you're looking for. So doing some reading on the MP3 File Format should give you enough information to code your own python library to do this. This is a fair bit different than traditional \"audio processing\" (since you don't care about precision) and so you're unlikely to find an existing library that does this. Most, as you've found, will decompress the audio first so you can have complete fine-grained control.\n",
"Not a direct answer to your needs, but check the mp3DirectCut software that does what you want (as a GUI app). I think that the source code is available, so even if you don't find a library, you could build one of your own, or build a python extension using code from mp3DirectCut.\n",
"As for removing or extracting mp3 segments from an mp3 file while staying in the MP3 domain (that is, without conversion to PCM format and back), there is also the open source package PyMp3Cut. \nAs for splicing MP3 files together (adding e.g. 'Credits' to the end or beginning of an mp3 file) I've found you can simply concatenate the MP3 files providing that the files have the same sampling rate (e.g. 44.1khz) and the same number of channels (e.g. both are stereo or both are mono).\n"
] |
[
7,
6,
3,
1,
1
] |
[] |
[] |
[
"codec",
"mp3",
"python"
] |
stackoverflow_0000310765_codec_mp3_python.txt
|
Q:
Python - lines from files - all combinations
I have two files - prefix.txt and terms.txt both have about 100 lines. I'd like to write out a third file with the Cartesian product
http://en.wikipedia.org/wiki/Join_(SQL)#Cross_join
-about 10000 lines.
What is the best way to approach this in Python?
Secondly, is there a way to write the 10,000 lines to the third file in a random order?
A:
You need itertools.product.
import itertools

for prefix, term in itertools.product(open('prefix.txt'), open('terms.txt')):
    print(prefix.strip() + term.strip())
Print them, or accumulate them, or write them directly. You need the .strip() because of the newline that comes with each of them.
Afterwards, you can shuffle them using random.shuffle(list(open('thirdfile.txt'))), but I don't know how fast that will be on a file of the sizes you are using.
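Putting the two parts together, a minimal sketch that writes the shuffled product directly to the third file, skipping the intermediate file:
import itertools
import random

pairs = [p.strip() + t.strip()
         for p, t in itertools.product(open('prefix.txt'), open('terms.txt'))]
random.shuffle(pairs)
with open('output.txt', 'w') as out:
    out.write('\n'.join(pairs) + '\n')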
A:
A Cartesian product enumerates all combinations. The easiest way to enumerate all combinations is to use nested loops.
You cannot write files in a random order very easily. To write to a "random" position, you must use file.seek(). How will you know what position to which you will seek? How do you know how long each part (prefix+term) will be?
You can, however, read entire files into memory (100 lines is nothing) and process the in-memory collections in "random" orders. This will assure that the output is randomized.
A:
from random import shuffle
a = list(open('prefix.txt'))
b = list(open('terms.txt'))
c = [x.strip() + y.strip() for x in a for y in b]
shuffle(c)
open('result.txt', 'w').write('\n'.join(c))
Certainly, not the best way in terms of speed and memory, but 10000 is not big enough to sacrifice brevity anyway. You should normally close your file objects, and you can loop through at least one of the files without storing its content in RAM. This: [:-1] removes the trailing newline from each element of a and b.
Edit: using s.strip() instead of s[:-1] to get rid of the newlines---it's more portable.
|
Python - lines from files - all combinations
|
I have two files - prefix.txt and terms.txt both have about 100 lines. I'd like to write out a third file with the Cartesian product
http://en.wikipedia.org/wiki/Join_(SQL)#Cross_join
-about 10000 lines.
What is the best way to approach this in Python?
Secondly, is there a way to write the 10,000 lines to the third file in a random order?
|
[
"You need itertools.product.\nfor prefix, term in itertools.product(open('prefix.txt'), open('terms.txt')):\n print(prefix.strip() + term.strip())\n\nPrint them, or accumulate them, or write them directly. You need the .strip() because of the newline that comes with each of them.\nAfterwards, you can shuffle them using random.shuffle(list(open('thirdfile.txt')), but I don't know how fast that will be on a file of the sizes you are using.\n",
"A Cartesian product enumerates all combinations. The easiest way to enumerate all combinations is to use nested loops.\nYou cannot write files in a random order very easily. To write to a \"random\" position, you must use file.seek(). How will you know what position to which you will seek? How do you know how long each part (prefix+term) will be?\nYou can, however, read entire files into memory (100 lines is nothing) and process the in-memory collections in \"random\" orders. This will assure that the output is randomized.\n",
"from random import shuffle\na = list(open('prefix.txt'))\nb = list(open('terms.txt'))\nc = [x.strip() + y.strip() for x in a for y in b]\nshuffle(c)\nopen('result.txt', 'w').write('\\n'.join(c))\n\nCertainly, not the best way in terms of speed and memory, but 10000 is not big enough to sacrifice brevity anyway. You should normally close your file objects and you can loop through at least one of the files without storing its content in RAM. This: [:-1] removes the trailing newlline from each element of a and b.\nEdit: using s.strip() instead of s[:-1] to get rid of the newlines---it's more portable.\n"
] |
[
4,
1,
1
] |
[] |
[] |
[
"file_io",
"python",
"random"
] |
stackoverflow_0000790860_file_io_python_random.txt
|
Q:
Is it possible to call a Python module from ObjC?
Using PyObjC, is it possible to import a Python module, call a function and get the result as (say) a NSString?
For example, doing the equivalent of the following Python code:
import mymodule
result = mymodule.mymethod()
..in pseudo-ObjC:
PyModule *mypymod = [PyImport module:@"mymodule"];
NSString *result = [[mypymod getattr:"mymethod"] call:@"mymethod"];
A:
As mentioned in Alex Martelli's answer (the link in the mailing-list message was broken; it should be https://docs.python.org/extending/embedding.html#pure-embedding), here is the C way of calling:
print urllib.urlopen("http://google.com").read()
Add the Python.framework to your project (right-click External Frameworks.., Add > Existing Frameworks). The framework is in /System/Library/Frameworks/
Add /System/Library/Frameworks/Python.framework/Headers to your "Header Search Path" (Project > Edit Project Settings)
The following code should work (although it's probably not the best code ever written..)
#include <Python.h>
#import <Foundation/Foundation.h>

int main(){
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    Py_Initialize();

    // import urllib
    PyObject *mymodule = PyImport_Import(PyString_FromString("urllib"));
    // thefunc = urllib.urlopen
    PyObject *thefunc = PyObject_GetAttrString(mymodule, "urlopen");

    // if callable(thefunc):
    if(thefunc && PyCallable_Check(thefunc)){
        // theargs = ()
        PyObject *theargs = PyTuple_New(1);
        // theargs[0] = "http://google.com"
        PyTuple_SetItem(theargs, 0, PyString_FromString("http://google.com"));
        // f = thefunc.__call__(*theargs)
        PyObject *f = PyObject_CallObject(thefunc, theargs);
        // read = f.read
        PyObject *read = PyObject_GetAttrString(f, "read");
        // result = read.__call__()
        PyObject *result = PyObject_CallObject(read, NULL);
        if(result != NULL){
            // print result
            printf("Result of call: %s", PyString_AsString(result));
        }
    }
    Py_Finalize();
    [pool release];
    return 0;
}
This tutorial is also good.
A:
Not quite, AFAIK, but you can do it "the C way", as suggested for example in http://lists.apple.com/archives/Cocoa-dev/2004/Jan/msg00598.html -- or "the PyObjC way" as per http://osdir.com/ml/python.pyobjc.devel/2005-06/msg00019.html (see also all other messages on that thread for further clarification).
|
Is it possible to call a Python module from ObjC?
|
Using PyObjC, is it possible to import a Python module, call a function and get the result as (say) a NSString?
For example, doing the equivalent of the following Python code:
import mymodule
result = mymodule.mymethod()
..in pseudo-ObjC:
PyModule *mypymod = [PyImport module:@"mymodule"];
NSString *result = [[mypymod getattr:"mymethod"] call:@"mymethod"];
|
[
"As mentioned in Alex Martelli's answer (although the link in the mailing-list message was broken, it should be https://docs.python.org/extending/embedding.html#pure-embedding).. The C way of calling.. \nprint urllib.urlopen(\"http://google.com\").read()\n\n\nAdd the Python.framework to your project (Right click External Frameworks.., Add > Existing Frameworks. The framework in in /System/Library/Frameworks/\nAdd /System/Library/Frameworks/Python.framework/Headers to your \"Header Search Path\" (Project > Edit Project Settings)\n\nThe following code should work (although it's probably not the best code ever written..)\n#include <Python.h>\n\nint main(){\n NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];\n Py_Initialize();\n\n // import urllib\n PyObject *mymodule = PyImport_Import(PyString_FromString(\"urllib\"));\n // thefunc = urllib.urlopen\n PyObject *thefunc = PyObject_GetAttrString(mymodule, \"urlopen\");\n\n // if callable(thefunc):\n if(thefunc && PyCallable_Check(thefunc)){\n // theargs = ()\n PyObject *theargs = PyTuple_New(1);\n\n // theargs[0] = \"http://google.com\"\n PyTuple_SetItem(theargs, 0, PyString_FromString(\"http://google.com\"));\n\n // f = thefunc.__call__(*theargs)\n PyObject *f = PyObject_CallObject(thefunc, theargs);\n\n // read = f.read\n PyObject *read = PyObject_GetAttrString(f, \"read\");\n\n // result = read.__call__()\n PyObject *result = PyObject_CallObject(read, NULL);\n\n\n if(result != NULL){\n // print result\n printf(\"Result of call: %s\", PyString_AsString(result));\n }\n }\n [pool release];\n}\n\nAlso this tutorial is good\n",
"Not quite, AFAIK, but you can do it \"the C way\", as suggested for example in http://lists.apple.com/archives/Cocoa-dev/2004/Jan/msg00598.html -- or \"the Pyobjc way\" as per http://osdir.com/ml/python.pyobjc.devel/2005-06/msg00019.html (see also all other messages on that thread for further clarification).\n"
] |
[
12,
3
] |
[] |
[] |
[
"objective_c",
"pyobjc",
"python"
] |
stackoverflow_0000790103_objective_c_pyobjc_python.txt
|
Q:
Object class override or modify
Is it possible to add a method to an object class, and use it on all objects?
A:
In Python, attributes are implemented using a dictionary:
>>> class test(object): pass
... 
>>> t = test()
>>> t.__dict__["foo"] = "bla"
>>> t.foo
'bla'
But for "object", it uses a 'dictproxy' as an interface to prevent such assignement :
>>> object.__dict__["test"] = "test"
TypeError: 'dictproxy' object does not support item assignment
So no, you can't.
NB: you can't modify the metaclass type directly either. But as Python is very flexible, I am sure a Guru could find a way to achieve what you want. Any black wizards around here :-) ?
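That said, the usual practical workaround is not to touch object at all but to put the shared method on a base class of your own; a quick sketch:
class Base(object):
    def describe(self):
        # available on every class that inherits from Base
        return "%s(%r)" % (type(self).__name__, self.__dict__)

class Point(Base):
    def __init__(self, x, y):
        self.x, self.y = x, y

print Point(1, 2).describe()  # e.g. Point({'y': 2, 'x': 1})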
A:
No, Python's internals take great care to make built-in types NOT mutable -- very different design choices from Ruby's. It's not possible to make object "monkeypatchable" without deeply messing with the C-coded internals and recompiling the Python runtime to make a very different version (this is for the classic CPython, but I believe exactly the same principle holds for other good implementations such as Jython and IronPython, just s/C/Java/ and S/C/C#/ respectively;-).
A:
>>> object.test = "Test"
Traceback (most recent call last):
File "", line 1, in
TypeError: can't set attributes of built-in/extension type 'object'
Doesn't look like it. (Python 2.5.1)
|
Object class override or modify
|
Is it possible to add a method to an object class, and use it on all objects?
|
[
"In Python attributes are implemented using a dictionary :\n>>> t = test()\n>>> t.__dict__[\"foo\"] = \"bla\"\n>>> t.foo\n'bla'\n\nBut for \"object\", it uses a 'dictproxy' as an interface to prevent such assignement :\n>>> object.__dict__[\"test\"] = \"test\"\nTypeError: 'dictproxy' object does not support item assignment\n\nSo no, you can't.\nNB : you can't modify the metaclass Type directly neither. But as Python is very flexible, I am sure a Guru could find a way to achieve what you want. Any black wizard around here :-) ?\n",
"No, Python's internals take great care to make built-in types NOT mutable -- very different design choices from Ruby's. It's not possible to make object \"monkeypatchable\" without deeply messing with the C-coded internals and recompiling the Python runtime to make a very different version (this is for the classic CPython, but I believe exactly the same principle holds for other good implementations such as Jython and IronPython, just s/C/Java/ and S/C/C#/ respectively;-).\n",
"\n>>> object.test = \"Test\"\nTraceback (most recent call last):\n File \"\", line 1, in \nTypeError: can't set attributes of built-in/extension type 'object'\n\nDoesn't look like it. (Python 2.5.1)\n"
] |
[
8,
5,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000790560_python.txt
|
Q:
Elixir reflection
I define some Entities, which works fine; for metaprogramming purposes, I now need to reflect on the field properties defined in the model.
For example:
class Foo(Entity):
bar = OneToMany('Bar')
baz = ManyToMany('Baz')
Which type of relation is set: "ManyToMany", "OneToMany" or even a plain "Field", and the relation target?
Is there any simple way to reflect the Elixir Entities?
A:
You can do introspection in Elixir as you would anywhere in Python -- get all names of attributes of class Foo with dir(Foo), extract an attribute given its name with getattr(Foo, thename), check the type of the attribute with type(theattr) or isinstance, etc. The string 'Bar' that you pass as the attribute to the constructor of any Relationship subclass (including OneToMany and ManyToMany) ends up as the r.of_kind attribute of the resulting instance r of the Relationship subclass.
Module inspect in the Python standard library may be a friendlier way to do introspection, but dir / getattr / isinstance &c are perfectly acceptable in many cases.
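A minimal sketch of that approach, using the Foo entity from the question; note that whether the relationship objects are still reachable as plain class attributes (and carry of_kind) can vary with the Elixir version and setup state, so treat this as an assumption:
for name in dir(Foo):
    attr = getattr(Foo, name)
    kind = type(attr).__name__  # e.g. 'OneToMany', 'ManyToMany', 'Field'
    if kind in ('OneToMany', 'ManyToMany', 'ManyToOne', 'Field'):
        target = getattr(attr, 'of_kind', None)  # 'Bar' / 'Baz' for relationships
        print name, kind, target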
|
Elixir reflection
|
I define some Entities, which works fine; for metaprogramming purposes, I now need to reflect on the field properties defined in the model.
For example:
class Foo(Entity):
bar = OneToMany('Bar')
baz = ManyToMany('Baz')
Which type of relation is set: "ManyToMany", "OneToMany" or even a plain "Field", and the relation target?
Is there any simple way to reflect the Elixir Entities?
|
[
"You can do introspection in Elixir as you would anywhere in Python -- get all names of attributes of class Foo with dir(Foo), extract an attribute given its name with getattr(Foo, thename), check the type of the attribute with type(theattr) or isinstance, etc. The string 'Bar' that you pass as the attribute to the constructor of any Relationship subclass (including OneToMany and ManyToMany) ends up as the r.of_kind attribute of the resulting instance r of the Relationship subclass.\nModule inspect in the Python standard library may be a friendlier way to do introspection, but dir / getattr / isinstance &c are perfectly acceptable in many cases.\n"
] |
[
4
] |
[] |
[] |
[
"pylons",
"python",
"python_elixir",
"sqlalchemy"
] |
stackoverflow_0000791150_pylons_python_python_elixir_sqlalchemy.txt
|
Q:
Sorting by key and value in case keys are equal
The official oauth guide makes this recommendation:
It is important not to try and perform
the sort operation on some combined
string of both name and value as some
known separators (such as '=') will
cause the sort order to change due to
their impact on the string value.
If this is the case, then what would be an efficient way of doing this? A second iteration after the initial sort looking for equal keys?
A:
Just sort the list of tuples (name, value) -- Python does lexicographic ordering for you.
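A small illustration of why this sidesteps the separator problem (parameter names are made up):
params = [('a', '2'), ('a', '1'), ('a=', 'x')]
params.sort()  # compares name first, then value; no '=' joining needed
print params   # [('a', '1'), ('a', '2'), ('a=', 'x')]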
|
Sorting by key and value in case keys are equal
|
The official oauth guide makes this recommendation:
It is important not to try and perform
the sort operation on some combined
string of both name and value as some
known separators (such as '=') will
cause the sort order to change due to
their impact on the string value.
If this is the case, then what would be an efficient way of doing this? A second iteration after the initial sort looking for equal keys?
|
[
"Just sort the list of tuples (name, value) -- Python does lexicographic ordering for you.\n"
] |
[
4
] |
[] |
[] |
[
"oauth",
"python",
"sorting"
] |
stackoverflow_0000791316_oauth_python_sorting.txt
|
Q:
wxPython crashes under Vista
I am following the Getting Started guide for wxPython. But unfortunately the first 'Hello World' example crashes. The dialog window shows just fine, but as soon as I move my mouse over the window a "pythonw.exe has stopped working" Windows message appears.
I use:
Python 2.6.2
wxPython2.8-win32-unicode-2.8.9.2-py26
Vista (latest SP and updates installed) 32 bits, running as Admin
Any ideas what can be wrong, or how to fix this?
A:
See here for why: http://www.tejerodgers.com/snippets/2009/why-wxpython-crashes-python-26/
See wxPython's README for a hack that will let you work around the problem.
A fix has been discovered and will be included in the next release.
A:
32 or 64 bit Vista? When you did the installs, did you "run as admin"? I also had some issues with permissions on Vista early on.
Also, this may be a fix if it isn't just the installation problem.
http://www.python-forum.org/pythonforum/viewtopic.php?f=4&t=11331
Hope this helps...
|
wxPython crashes under Vista
|
I am following the Getting Started guide for wxPython. But unfortunately the first 'Hello World' example crashes. The dialog window shows just fine, but as soon as I move my mouse over the window a "pythonw.exe has stopped working" Windows message appears.
I use:
Python 2.6.2
wxPython2.8-win32-unicode-2.8.9.2-py26
Vista (latest SP and updates installed) 32 bits, running as Admin
Any ideas what can be wrong, or how to fix this?
|
[
"See here for why: http://www.tejerodgers.com/snippets/2009/why-wxpython-crashes-python-26/\nSee wxPython's README for a hack that will let you work around the problem.\nA fix has been discovered and will be included in the next release.\n",
"32 or 64 bit Vista? When you did installs did you \"run as admin\"? I also had some issues with permissions on vista early on.\nAlso, this may be a fix if it isn't just the installation problem.\nhttp://www.python-forum.org/pythonforum/viewtopic.php?f=4&t=11331\nHope this helps...\n"
] |
[
4,
2
] |
[] |
[] |
[
"python",
"windows_vista",
"wxpython"
] |
stackoverflow_0000791341_python_windows_vista_wxpython.txt
|
Q:
Using Python set type to implement ACL
Currently I have tables like: Pages, Groups, GroupPage, Users, UserGroup. With pickled sets I can implement the same thing with only 3 tables: Pages, Groups, Users.
set seems a natural choice for implementing ACLs, as group and permission related operations can be expressed very naturally with sets. If I store the allow/deny lists as pickled sets, it can eliminate a few intermediate tables for many-to-many relationships and allow permission editing without many database operations.
If human readability is important, I can always use json instead of cPickle for serialization and use set when manipulating the permission list in Python. It is highly unlikely that permissions will ever be edited directly using SQL. So is it a good design idea?
We're using SQLAlchemy as ORM, so it's likely to be implemented with PickleType column. I'm not planning to store the whole pickled "resource" recordset, only the set object made out of "resource" primary key values.
A:
If you're going to pickle sets, you should find a good object database (like ZODB). In a pure-relational world, your sets are stored as BLOBS, which works out well. Trying to pickle sets in an ORM situation may lead to confusing problems with the ORM mappings, since they mostly assume purely relational mappings without any BLOBs that must be decoded.
Sets and other first-class objects are really what belongs in a database. The ORM is a hack because some folks think relational databases are "better", so we hack in a mapping layer.
Go with an object database and you'll find that things are often much smoother.
Edit
SQLAlchemy has its own serializer.
http://www.sqlalchemy.org/docs/05/reference/ext/serializer.html
This is neither pickle nor cPickle. However, because it needs to be extensible, it will behave like pickle. Which -- for your purposes -- will be as fast as you need. You won't be deserializing ACLs all the time.
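For the PickleType route mentioned in the question, a minimal declarative sketch (the table and column names are made up); note that reassigning the attribute, rather than mutating the set in place, is the reliable way to have the change detected:
from sqlalchemy import Column, Integer, PickleType
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Page(Base):
    __tablename__ = 'pages'
    id = Column(Integer, primary_key=True)
    # a pickled set of user/group primary keys
    allow = Column(PickleType)

page = Page(allow=set([1, 2, 3]))
page.allow = page.allow | set([4])  # reassign so the ORM sees the change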
A:
You need to consider what it is that a DBMS provides you with, and which of those features you'll need to reimplement.
The issue of concurrency is a big one. There are a few race conditions to be considered (such as multiple writes taking place in different threads and processes and overwriting the new data), performance issues (write policy? What if your process crashes and you lose your data?), memory issues (how big are your permission sets? Will it all fit in RAM?).
If you have enough memory and you don't have to worry about concurrency, then your solution might be a good one. Otherwise I'd stick with a databases -- it takes care of those problems for you, and lots of work has gone into them to make sure that they always take your data from one consistent state to another.
A:
Me, I'd stick with keeping persistent info in the relational DB in a form that's independent from a specific programming language used to access it -- much as I love Python (and that's a lot), some day I may want to access that info from some other language, and if I went for Python-specific formats... boy would I ever regret it...
A:
If it simplifies things and you won't be editing the file a whole lot (or it will be edited infrequently), I say go for it. Of course, a third option to consider is using a sqlite database to store this stuff. There are tools to make these easily human-readable.
|
Using Python set type to implement ACL
|
Currently I have tables like: Pages, Groups, GroupPage, Users, UserGroup. With pickled sets I can implement the same thing with only 3 tables: Pages, Groups, Users.
set seems a natural choice for implementing ACLs, as group and permission related operations can be expressed very naturally with sets. If I store the allow/deny lists as pickled sets, it can eliminate a few intermediate tables for many-to-many relationships and allow permission editing without many database operations.
If human readability is important, I can always use json instead of cPickle for serialization and use set when manipulating the permission list in Python. It is highly unlikely that permissions will ever be edited directly using SQL. So is it a good design idea?
We're using SQLAlchemy as ORM, so it's likely to be implemented with PickleType column. I'm not planning to store the whole pickled "resource" recordset, only the set object made out of "resource" primary key values.
|
[
"If you're going to pickle sets, you should find a good object database (like ZODB). In a pure-relational world, your sets are stored as BLOBS, which works out well. Trying to pickle sets in an ORM situation may lead to confusing problems with the ORM mappings, since they mostly assume purely relational mappings without any BLOBs that must be decoded.\nSets and other first-class objects are really what belongs in a database. The ORM is a hack because some folks think relational databases are \"better\", so we hack in a mapping layer.\nGo with an object database and you'll find that things are often much smoother.\n\nEdit\nSQLAlchemy has it's own serializer.\nhttp://www.sqlalchemy.org/docs/05/reference/ext/serializer.html\nThis is neither pickle or cPickle. However, because it needs to be extensible, it will behave like pickle. Which -- for your purposes -- will be as fast as you need. You won't be deserializing ACL's all the time. \n",
"You need to consider what it is that a DBMS provides you with, and which of those features you'll need to reimplement.\nThe issue of concurrency is a big one. There are a few race conditions to be considered (such as multiple writes taking place in different threads and processes and overwriting the new data), performance issues (write policy? What if your process crashes and you lose your data?), memory issues (how big are your permission sets? Will it all fit in RAM?).\nIf you have enough memory and you don't have to worry about concurrency, then your solution might be a good one. Otherwise I'd stick with a databases -- it takes care of those problems for you, and lots of work has gone into them to make sure that they always take your data from one consistent state to another.\n",
"Me, I'd stick with keeping persistent info in the relational DB in a form that's independent from a specific programming language used to access it -- much as I love Python (and that's a lot), some day I may want to access that info from some other language, and if I went for Python-specific formats... boy would I ever regret it...\n",
"If it simplifies things and you won't be editing the file a whole lot (or it will be edited infrequently), I say go for it. Of course, a third option to consider is using a sqlite database to store this stuff. There are tools to make these easily human-readable.\n"
] |
[
3,
2,
2,
1
] |
[] |
[] |
[
"acl",
"pickle",
"python",
"set"
] |
stackoverflow_0000790613_acl_pickle_python_set.txt
|
Q:
pyinotify bug with reading file on creation?
I want to parse a file every time a new file is created in a certain directory. For this, I'm trying to use pyinotify to set up a directory watch for IN_CREATE kernel events and fire the parse() method.
Here is the module:
from pyinotify import WatchManager, ThreadedNotifier, ProcessEvent, IN_CREATE
class Watcher(ProcessEvent):
watchdir = '/tmp/watch'
def __init__(self):
ProcessEvent.__init__(self)
wm = WatchManager()
self.notifier = ThreadedNotifier(wm, self)
wdd = wm.add_watch(self.watchdir, IN_CREATE)
self.notifier.start()
def process_IN_CREATE(self, event):
pfile = self._parse(event.pathname)
print(pfile)
def _parse(self, filename):
f = open(filename)
file = [line.strip() for line in f.readlines()]
f.close()
return file
if __name__ == '__main__':
Watcher()
The problem is that the list returned by _parse is empty when triggered by a new file creation event, like so (the file is created in another window while watcher.py is running):
$ python watcher.py
[]
...but strangely enough, it works from an interpreter session when called directly.
>>> import watcher
>>> w = watcher.Watcher()
>>> w._parse('/tmp/watch/sample')
['This is a sample file', 'Another line', 'And another...']
Why is this happening? The farthest I've come debugging this thing is to know that something is making pyinotify not read the file correctly. But... why?
A:
Maybe you want to wait till the file is closed?
A:
As @SilentGhost mentioned, you may be reading the file before any content has been added to it (i.e. you are getting notified of the file's creation, not of writes to it).
Update: The loop.py example in the pyinotify tarball will dump the sequence of inotify events to the screen. To determine which event you need to trigger on, launch loop.py to monitor /tmp and then perform the file manipulation you want to track.
A:
Here's some code that works for me, with a 2.6.18 kernel, Python 2.4.3, and pyinotify 0.7.1 -- you may be using different versions of some of these, but it's important to make sure we're talking about the same versions, I think...:
#!/usr/bin/python2.4
import os.path
from pyinotify import pyinotify
class Watcher(pyinotify.ProcessEvent):
watchdir = '/tmp/watch'
def __init__(self):
pyinotify.ProcessEvent.__init__(self)
wm = pyinotify.WatchManager()
self.notifier = pyinotify.ThreadedNotifier(wm, self)
wdd = wm.add_watch(self.watchdir, pyinotify.EventsCodes.IN_CREATE)
print "Watching", self.watchdir
self.notifier.start()
def process_IN_CREATE(self, event):
print "Seen:", event
pathname = os.path.join(event.path, event.name)
pfile = self._parse(pathname)
print(pfile)
def _parse(self, filename):
f = open(filename)
file = [line.strip() for line in f]
f.close()
return file
if __name__ == '__main__':
Watcher()
when this is running in a terminal window, and in another terminal window I do
echo "ciao" >/tmp/watch/c3
this program's output is:
Watching /tmp/watch
Seen: event_name: IN_CREATE is_dir: False mask: 256 name: c3 path: /tmp/watch wd: 1
['ciao']
as expected. So can you please try this script (fixing the Python version in the hashbang if needed, of course) and tell us the exact releases of the Linux kernel, pyinotify, and Python that you are using, and what you observe in these exact circumstances? Quite possibly with more detailed info we may identify which bug or anomaly is giving you problems, exactly. Thanks!
A:
I think I solved the problem by using the IN_CLOSE_WRITE event instead. I'm not sure what was happening before that made it not work.
@Alex: Thanks, I tried your script, but I'm using newer versions: Python 2.6.1, pyinotify 0.8.6 and Linux 2.6.28, so it didn't work for me.
It was definitely a matter of trying to parse the file before it was written, so kudos to SilentGhost and DanM for figuring it out.
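For reference, the change amounts to a different mask and handler name; a sketch against the Watcher class from the question:
from pyinotify import IN_CLOSE_WRITE

# in Watcher.__init__, watch for close-after-write instead of creation:
wdd = wm.add_watch(self.watchdir, IN_CLOSE_WRITE)

# and rename the handler accordingly:
def process_IN_CLOSE_WRITE(self, event):
    # fires only once the writer closes the file, so the content is complete
    pfile = self._parse(event.pathname)
    print(pfile)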
|
pyinotify bug with reading file on creation?
|
I want to parse a file every time a new file is created in a certain directory. For this, I'm trying to use pyinotify to set up a directory watch for IN_CREATE kernel events and fire the parse() method.
Here is the module:
from pyinotify import WatchManager, ThreadedNotifier, ProcessEvent, IN_CREATE
class Watcher(ProcessEvent):
watchdir = '/tmp/watch'
def __init__(self):
ProcessEvent.__init__(self)
wm = WatchManager()
self.notifier = ThreadedNotifier(wm, self)
wdd = wm.add_watch(self.watchdir, IN_CREATE)
self.notifier.start()
def process_IN_CREATE(self, event):
pfile = self._parse(event.pathname)
print(pfile)
def _parse(self, filename):
f = open(filename)
file = [line.strip() for line in f.readlines()]
f.close()
return file
if __name__ == '__main__':
Watcher()
The problem is that the list returned by _parse is empty when triggered by a new file creation event, like so (the file is created in another window while watcher.py is running):
$ python watcher.py
[]
...but strangely enough, it works from an interpreter session when called directly.
>>> import watcher
>>> w = watcher.Watcher()
>>> w._parse('/tmp/watch/sample')
['This is a sample file', 'Another line', 'And another...']
Why is this happening? The farthest I've come debugging this thing is to know that something is making pyinotify not read the file correctly. But... why?
|
[
"may be you want to wait till file is closed?\n",
"As @SilentGhost mentioned, you may be reading the file before any content has been added to file (i.e. you are getting notified of the file creation not file writes).\nUpdate: The loop.py example with pynotify tarball will dump the sequence of inotify events to the screen. To determine which event you need to trigger on, launch loop.py to monitor /tmp and then perform the file manipulation you want to track.\n",
"Here's some code that works for me, with a 2.6.18 kernel, Python 2.4.3, and pyinotify 0.7.1 -- you may be using different versions of some of these, but it's important to make sure we're talking about the same versions, I think...:\n#!/usr/bin/python2.4\n\nimport os.path\nfrom pyinotify import pyinotify\n\nclass Watcher(pyinotify.ProcessEvent):\n\n watchdir = '/tmp/watch'\n\n def __init__(self):\n pyinotify.ProcessEvent.__init__(self)\n wm = pyinotify.WatchManager()\n self.notifier = pyinotify.ThreadedNotifier(wm, self)\n wdd = wm.add_watch(self.watchdir, pyinotify.EventsCodes.IN_CREATE)\n print \"Watching\", self.watchdir\n self.notifier.start()\n\n def process_IN_CREATE(self, event):\n print \"Seen:\", event\n pathname = os.path.join(event.path, event.name)\n pfile = self._parse(pathname)\n print(pfile)\n\n def _parse(self, filename):\n f = open(filename)\n file = [line.strip() for line in f]\n f.close()\n return file\n\nif __name__ == '__main__':\n Watcher()\n\nwhen this is running in a terminal window, and in another terminal window I do\necho \"ciao\" >/tmp/watch/c3\n\nthis program's output is:\nWatching /tmp/watch\nSeen: event_name: IN_CREATE is_dir: False mask: 256 name: c3 path: /tmp/watch wd: 1 \n['ciao']\n\nas expected. So can you please try this script (fixing the Python version in the hashbang if needed, of course) and tell us the exact releases of Linux kernel, pyinotify, and Python that you are using, and what do you observe in these exact circunstances? Quite possibly with more detailed info we may identify which bug or anomaly is giving you problems, exactly. Thanks!\n",
"I think I solved the problem by using the IN_CLOSE_WRITE event instead. I'm not sure what was happening before that made it not work. \n@Alex: Thanks, I tried your script, but I'm using newer versions: Python 2.6.1, pyinotify 0.8.6 and Linux 2.6.28, so it didn't work for me. \nIt was definitely a matter of trying to parse the file before it was written, so kudos to SilentGhost and DanM for figuring it out.\n"
] |
[
3,
1,
1,
1
] |
[] |
[] |
[
"linux",
"python"
] |
stackoverflow_0000790898_linux_python.txt
|
Q:
Adobe Flash and Python
Is it possible to use CPython to develop Adobe Flash based applications?
A:
You can try ming, a library for generating Macromedia Flash files (.swf).
It's written in C but it has wrappers that allow it to be used in C++, PHP, Python, Ruby, and Perl.
A:
take a look at Flex PyPy: http://code.google.com/p/flex-pypy/
A:
I guess it would be possible to compile the python interpreter to flash bytecode using this http://labs.adobe.com/downloads/alchemy.html and then use it to run python programs. But apart from that the answer is no.
|
Adobe Flash and Python
|
Is it possible to use CPython to develop Adobe Flash based applications?
|
[
"You can try ming, a library for generating Macromedia Flash files (.swf).\nIt's written in C but it has wrappers that allow it to be used in C++, PHP, Python, Ruby, and Perl. \n",
"take a look at Flex PyPy: http://code.google.com/p/flex-pypy/\n",
"I guess it would be possible to compile the python interpreter to flash bytecode using this http://labs.adobe.com/downloads/alchemy.html and then use it to run python programs. But apart from that the answer is no.\n"
] |
[
3,
2,
1
] |
[] |
[] |
[
"flash",
"flashdevelop",
"python"
] |
stackoverflow_0000304779_flash_flashdevelop_python.txt
|
Q:
Best way to turn a list into a dict, where the keys are a value of each object?
I am attempting to take a list of objects, and turn that list into a dict. The dict values would be each object in the list, and the dict keys would be a value found in each object.
Here is some code representing what I'm doing:
class SomeClass(object):
def __init__(self, name):
self.name = name
object_list = [
SomeClass(name='a'),
SomeClass(name='b'),
SomeClass(name='c'),
SomeClass(name='d'),
SomeClass(name='e'),
]
object_dict = {}
for an_object in object_list:
object_dict[an_object.name] = an_object
Now that code works, but it's a bit ugly and a bit slow. Could anyone give an example of something that's faster/"better"?
edit:
Alright, thanks for the replies. I must say I am surprised to see the more Pythonic ways seeming slower than the handmade way.
edit2:
Alright, I updated the test code to make it a bit more readable, with so many tests, heh.
Here is where we are at in terms of code; I put the authors in the code, and if I messed any up, please let me know.
from itertools import izip
import timeit
class SomeClass(object):
def __init__(self, name):
self.name = name
object_list = []
for i in range(5):
object_list.append(SomeClass(name=i))
def example_1():
'Original Code'
object_dict = {}
for an_object in object_list:
object_dict[an_object.name] = an_object
def example_2():
'Provided by hyperboreean'
d = dict(zip([o.name for o in object_list], object_list))
def example_3():
'Provided by Jason Baker'
d = dict([(an_object.name, an_object) for an_object in object_list])
def example_4():
"Added izip to hyperboreean's code, suggested by Chris Cameron"
d = dict(izip([o.name for o in object_list], object_list))
def example_5():
'zip, improved by John Fouhy'
d = dict(zip((o.name for o in object_list), object_list))
def example_6():
'izip, improved by John Fouhy'
d = dict(izip((o.name for o in object_list), object_list))
def example_7():
'Provided by Jason Baker, removed brackets by John Fouhy'
d = dict((an_object.name, an_object) for an_object in object_list)
timeits = []
for example_index in range(1, 8):
timeits.append(
timeit.Timer(
'example_%s()' % example_index,
'from __main__ import example_%s' % example_index)
)
for i in range(7):
timeit_object = timeits[i]
print 'Example #%s Result: "%s"' % (i+1, timeit_object.repeat(2))
With 5 objects in the list I am getting a result of:
Example #1 Result: "[1.2428441047668457, 1.2431108951568604]"
Example #2 Result: "[3.3567759990692139, 3.3188660144805908]"
Example #3 Result: "[2.8346641063690186, 2.8344728946685791]"
Example #4 Result: "[3.0710639953613281, 3.0573830604553223]"
Example #5 Result: "[5.2079918384552002, 5.2170760631561279]"
Example #6 Result: "[3.240635871887207, 3.2402129173278809]"
Example #7 Result: "[3.0856869220733643, 3.0688989162445068]"
and with 50:
Example #1 Result: "[9.8108220100402832, 9.9066231250762939]"
Example #2 Result: "[16.365023136138916, 16.213981151580811]"
Example #3 Result: "[15.77024507522583, 15.771029949188232]"
Example #4 Result: "[14.598290920257568, 14.591825008392334]"
Example #5 Result: "[20.644147872924805, 20.64064884185791]"
Example #6 Result: "[15.210831165313721, 15.212569952011108]"
Example #7 Result: "[17.317100048065186, 17.359367847442627]"
And lastly, with 500 objects:
Example #1 Result: "[96.682723999023438, 96.678673028945923]"
Example #2 Result: "[137.49416589736938, 137.48705387115479]"
Example #3 Result: "[136.58069896697998, 136.5823769569397]"
Example #4 Result: "[115.0344090461731, 115.1088011264801]"
Example #5 Result: "[165.08325910568237, 165.06769108772278]"
Example #6 Result: "[128.95187497138977, 128.96077489852905]"
Example #7 Result: "[155.70515990257263, 155.74126601219177]"
Thanks to all that replied! I'm very surprised by the results.
If there are any other tips for a faster method I would love to hear them. Thanks all!
A:
In python 3.0 you can use a dict comprehension:
{an_object.name : an_object for an_object in object_list}
This is also possible in Python 2, but it's a bit uglier:
dict([(an_object.name, an_object) for an_object in object_list])
A:
d = dict(zip([o.name for o in object_list], object_list))
A:
If you're concerned with speed, then we can improve things slightly. Your "verbose" solution (which is really fine) creates no intermediate data structures. On the other hand, hyperboreean's solution,
d = dict(zip([o.name for o in object_list], object_list))
creates two unnecessary lists: [o.name for o in object_list] creates a list, and zip(_, _) creates another list. Both these lists serve only to be iterated over once in the creation of the dict.
We can avoid the creation of one list by replacing the list comprehension with a generator expression:
d = dict(zip((o.name for o in object_list), object_list))
Replacing zip with itertools.izip will return an iterator and avoid creating the second list:
import itertools
d = dict(itertools.izip((o.name for o in object_list), object_list))
We could modify Jason Baker's solution in the same way, by simply deleting the square brackets:
d = dict((an_object.name, an_object) for an_object in object_list)
|
Best way to turn a list into a dict, where the keys are a value of each object?
|
I am attempting to take a list of objects, and turn that list into a dict. The dict values would be each object in the list, and the dict keys would be a value found in each object.
Here is some code representing what I'm doing:
class SomeClass(object):
def __init__(self, name):
self.name = name
object_list = [
SomeClass(name='a'),
SomeClass(name='b'),
SomeClass(name='c'),
SomeClass(name='d'),
SomeClass(name='e'),
]
object_dict = {}
for an_object in object_list:
object_dict[an_object.name] = an_object
Now that code works, but it's a bit ugly, and a bit slow. Could anyone give an example of something that's faster/"better"?
edit:
Alright, thanks for the replies. I must say I am surprised to see the more Pythonic ways seeming slower than the hand-made way.
edit2:
Alright, I updated the test code to make it a bit more readable, with so many tests heh.
Here is where we are at in terms of code; I put authors in the code, and if I messed any up please let me know.
from itertools import izip
import timeit
class SomeClass(object):
def __init__(self, name):
self.name = name
object_list = []
for i in range(5):
object_list.append(SomeClass(name=i))
def example_1():
'Original Code'
object_dict = {}
for an_object in object_list:
object_dict[an_object.name] = an_object
def example_2():
'Provided by hyperboreean'
d = dict(zip([o.name for o in object_list], object_list))
def example_3():
'Provided by Jason Baker'
d = dict([(an_object.name, an_object) for an_object in object_list])
def example_4():
"Added izip to hyperboreean's code, suggested by Chris Cameron"
d = dict(izip([o.name for o in object_list], object_list))
def example_5():
'zip, improved by John Fouhy'
d = dict(zip((o.name for o in object_list), object_list))
def example_6():
'izip, improved by John Fouhy'
d = dict(izip((o.name for o in object_list), object_list))
def example_7():
'Provided by Jason Baker, removed brackets by John Fouhy'
d = dict((an_object.name, an_object) for an_object in object_list)
timeits = []
for example_index in range(1, 8):
timeits.append(
timeit.Timer(
'example_%s()' % example_index,
'from __main__ import example_%s' % example_index)
)
for i in range(7):
timeit_object = timeits[i]
print 'Example #%s Result: "%s"' % (i+1, timeit_object.repeat(2))
With 5 objects in the list I am getting a result of:
Example #1 Result: "[1.2428441047668457, 1.2431108951568604]"
Example #2 Result: "[3.3567759990692139, 3.3188660144805908]"
Example #3 Result: "[2.8346641063690186, 2.8344728946685791]"
Example #4 Result: "[3.0710639953613281, 3.0573830604553223]"
Example #5 Result: "[5.2079918384552002, 5.2170760631561279]"
Example #6 Result: "[3.240635871887207, 3.2402129173278809]"
Example #7 Result: "[3.0856869220733643, 3.0688989162445068]"
and with 50:
Example #1 Result: "[9.8108220100402832, 9.9066231250762939]"
Example #2 Result: "[16.365023136138916, 16.213981151580811]"
Example #3 Result: "[15.77024507522583, 15.771029949188232]"
Example #4 Result: "[14.598290920257568, 14.591825008392334]"
Example #5 Result: "[20.644147872924805, 20.64064884185791]"
Example #6 Result: "[15.210831165313721, 15.212569952011108]"
Example #7 Result: "[17.317100048065186, 17.359367847442627]"
And lastly, with 500 objects:
Example #1 Result: "[96.682723999023438, 96.678673028945923]"
Example #2 Result: "[137.49416589736938, 137.48705387115479]"
Example #3 Result: "[136.58069896697998, 136.5823769569397]"
Example #4 Result: "[115.0344090461731, 115.1088011264801]"
Example #5 Result: "[165.08325910568237, 165.06769108772278]"
Example #6 Result: "[128.95187497138977, 128.96077489852905]"
Example #7 Result: "[155.70515990257263, 155.74126601219177]"
Thanks to all that replied! I'm very surprised by the results.
If there are any other tips for a faster method I would love to hear them. Thanks all!
|
[
"In python 3.0 you can use a dict comprehension:\n{an_object.name : an_object for an_object in object_list}\n\nThis is also possible in Python 2, but it's a bit uglier:\ndict([(an_object.name, an_object) for an_object in object_list])\n\n",
"d = dict(zip([o.name for o in object_list], object_list))\n\n",
"If you're concerned with speed, then we can improve things slightly. Your \"verbose\" solution (which is really fine) creates no intermediate data structures. On the other hand, hyperboreean's solution,\nd = dict(zip([o.name for o in object_list], object_list))\n\ncreates two unnecessary lists: [o.name for o in object_list] creates a list, and zip(_, _) creates another list. Both these lists serve only to be iterated over once in the creation of the dict.\nWe can avoid the creation of one list by replacing the list comprehension with a generator expression:\nd = dict(zip((o.name for o in object_list), object_list))\n\nReplacing zip with itertools.izip will return an iterator and avoid creating the second list:\nimport itertools\nd = dict(itertools.izip((o.name for o in object_list), object_list))\n\nWe could modify Jason Baker's solution in the same way, by simply deleting the square brackets:\nd = dict((an_object.name, an_object) for an_object in object_list)\n\n"
] |
[
13,
8,
7
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000791708_python.txt
|
Q:
Can't decode utf-8 string in python on os x terminal.app
I have terminal.app set to accept utf-8 and in bash I can type unicode characters, copy and paste them, but if I start the python shell I can't and if I try to decode unicode I get errors:
>>> wtf = u'\xe4\xf6\xfc'.decode()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-2: ordinal not in range(128)
>>> wtf = u'\xe4\xf6\xfc'.decode('utf-8')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/encodings/utf_8.py", line 16, in decode
return codecs.utf_8_decode(input, errors, True)
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-2: ordinal not in range(128)
Anyone know what I'm doing wrong?
A:
I think there is encode/decode confusion all over the place. You start with a Unicode object:
u'\xe4\xf6\xfc'
This is a Unicode object; the three characters are the Unicode code points for "äöü". If you want to turn them into UTF-8, you have to encode them:
>>> u'\xe4\xf6\xfc'.encode('utf-8')
'\xc3\xa4\xc3\xb6\xc3\xbc'
The resulting six characters are the UTF-8 representation of "äöü".
If you call decode(...), you try to interpret the characters as some encoding that still needs to be converted to Unicode. Since it already is Unicode, this doesn't work. Your first call tries an ASCII to Unicode conversion, the second call a UTF-8 to Unicode conversion. Since u'\xe4\xf6\xfc' is neither valid ASCII nor valid UTF-8, these conversion attempts fail.
Further confusion might come from the fact that '\xe4\xf6\xfc' is also the Latin-1/ISO-8859-1 encoding of "äöü". If you write a normal Python string (without the leading "u" that marks it as Unicode), you can convert it to a Unicode object with decode('latin1'):
>>> '\xe4\xf6\xfc'.decode('latin1')
u'\xe4\xf6\xfc'
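As a side note on why a decode() call raises a UnicodeEncodeError at all: in Python 2, calling .decode() on a unicode object first encodes it to bytes with the default ASCII codec, and it is that implicit encode step that fails. A short round trip with the same values shows the two directions:
>>> wtf = u'\xe4\xf6\xfc'
>>> as_bytes = wtf.encode('utf-8')     # unicode -> bytes
>>> as_bytes
'\xc3\xa4\xc3\xb6\xc3\xbc'
>>> as_bytes.decode('utf-8') == wtf    # bytes -> unicode round trip
True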
A:
I think you have encoding and decoding backwards. You encode Unicode into a byte stream, and decode the byte stream into Unicode.
Python 2.6.1 (r261:67515, Dec 6 2008, 16:42:21)
[GCC 4.0.1 (Apple Computer, Inc. build 5370)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> wtf = u'\xe4\xf6\xfc'
>>> wtf
u'\xe4\xf6\xfc'
>>> print wtf
äöü
>>> wtf.encode('UTF-8')
'\xc3\xa4\xc3\xb6\xc3\xbc'
>>> print '\xc3\xa4\xc3\xb6\xc3\xbc'.decode('utf-8')
äöü
A:
>>> wtf = '\xe4\xf6\xfc'
>>> wtf
'\xe4\xf6\xfc'
>>> print wtf
���
>>> print wtf.decode("latin-1")
äöü
>>> wtf_unicode = unicode(wtf.decode("latin-1"))
>>> wtf_unicode
u'\xe4\xf6\xfc'
>>> print wtf_unicode
äöü
A:
The Unicode strings section of the introductory tutorial explains it well:
To convert a Unicode string into an 8-bit string using a specific encoding, Unicode objects provide an encode() method that takes one argument, the name of the encoding. Lowercase names for encodings are preferred.
>>> u"äöü".encode('utf-8')
'\xc3\xa4\xc3\xb6\xc3\xbc'
|
Can't decode utf-8 string in python on os x terminal.app
|
I have terminal.app set to accept utf-8 and in bash I can type unicode characters, copy and paste them, but if I start the python shell I can't and if I try to decode unicode I get errors:
>>> wtf = u'\xe4\xf6\xfc'.decode()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-2: ordinal not in range(128)
>>> wtf = u'\xe4\xf6\xfc'.decode('utf-8')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/encodings/utf_8.py", line 16, in decode
return codecs.utf_8_decode(input, errors, True)
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-2: ordinal not in range(128)
Anyone know what I'm doing wrong?
|
[
"I think there is encode/decode confusion all over the place. You start with an unicode object:\nu'\\xe4\\xf6\\xfc'\n\nThis is an unicode object, the three characters are the unicode codepoints for \"äöü\". If you want to turn them into Utf-8, you have to encode them:\n>>> u'\\xe4\\xf6\\xfc'.encode('utf-8')\n'\\xc3\\xa4\\xc3\\xb6\\xc3\\xbc'\n\nThe resulting six characters are the Utf-8 representation of \"äöü\".\nIf you call decode(...), you try to interpret the characters as some encoding that still needs to be converted to unicode. Since it already is Unicode, this doesn't work. Your first call tries a Ascii to Unicode conversion, the second call a Utf-8 to Unicode conversion. Since u'\\xe4\\xf6\\xfc' is neither valid Ascii nor valid Utf-8 these conversion attempts fail.\nFurther confusion might come from the fact that '\\xe4\\xf6\\xfc' is also the Latin1/ISO-8859-1 encoding of \"äöü\". If you write a normal python string (without the leading \"u\" that marks it as unicode), you can convert it to an unicode object with decode('latin1'):\n>>> '\\xe4\\xf6\\xfc'.decode('latin1')\nu'\\xe4\\xf6\\xfc'\n\n",
"I think you have encoding and decoding backwards. You encode Unicode into a byte stream, and decode the byte stream into Unicode.\nPython 2.6.1 (r261:67515, Dec 6 2008, 16:42:21) \n[GCC 4.0.1 (Apple Computer, Inc. build 5370)] on darwin\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> wtf = u'\\xe4\\xf6\\xfc'\n>>> wtf\nu'\\xe4\\xf6\\xfc'\n>>> print wtf\näöü\n>>> wtf.encode('UTF-8')\n'\\xc3\\xa4\\xc3\\xb6\\xc3\\xbc'\n>>> print '\\xc3\\xa4\\xc3\\xb6\\xc3\\xbc'.decode('utf-8')\näöü\n\n",
">>> wtf = '\\xe4\\xf6\\xfc'\n>>> wtf\n'\\xe4\\xf6\\xfc'\n>>> print wtf\n���\n>>> print wtf.decode(\"latin-1\")\näöü\n>>> wtf_unicode = unicode(wtf.decode(\"latin-1\"))\n>>> wtf_unicode\nu'\\xe4\\xf6\\xfc'\n>>> print wtf_unicode\näöü\n\n",
"The Unicode strings section of the introductory tutorial explains it well :\n\nTo convert a Unicode string into an 8-bit string using a specific encoding, Unicode objects provide an encode() method that takes one argument, the name of the encoding. Lowercase names for encodings are preferred.\n>>> u\"äöü\".encode('utf-8')\n'\\xc3\\xa4\\xc3\\xb6\\xc3\\xbc'\n\n\n"
] |
[
18,
4,
3,
2
] |
[] |
[] |
[
"macos",
"python",
"terminal",
"unicode"
] |
stackoverflow_0000792017_macos_python_terminal_unicode.txt
|
Q:
Navigating Callable-Iterators
I'd like to use regular expressions to extract information out of some chat logs. The format of the strings being parsed is 03:22:32 PM <b>blcArmadillo</b>. I used the Python type() command to find that the variable messages is a callable-iterator. My question is: how do I most efficiently navigate through a callable-iterator? Are they like arrays, where you can just use an index? The only way I could find to "extract" the data was to loop through the returned values and add them to a list, as shown in the code snippet below.
times = []
messages = re.compile(r'(?P<time>..:..:.. ..).*?<b>(?P<usrname>.*?):</b>').finditer(search)
for result in messages:
times.append(result.group('time'))
Is there a better, more efficient way of doing this? Thanks for the help.
A:
An iterator is just an object with a next method. Every time you call it, it returns the next item in a collection. If you need to access arbitrary indexes, you will pretty much have to convert it into a list. Instead of this:
for result in messages:
times.append(result.group('time'))
You can say this though:
times = [result.group('time') for result in messages]
This does pretty much the same thing. However, I should warn you that doing this for large result sets will be pretty slow and will eat up a bunch of memory. Thus, you shouldn't do this if you don't need random access. If data an untrusted user enters will determine how many results will appear, you might also want to limit the number of things they can enter.
EDIT: I just noticed that my previous answer didn't quite do the same as the snippet you posted, so I've updated it.
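For completeness, here is a small sketch of both styles against a hypothetical sample string (the string and its format are only illustrative):
import re

search = '03:22:32 PM <b>blcArmadillo:</b> hello'
pattern = re.compile(r'(?P<time>..:..:.. ..).*?<b>(?P<usrname>.*?):</b>')

# Style 1: pull matches one at a time via the iterator protocol.
messages = pattern.finditer(search)
first = messages.next()                  # Python 2 iterator protocol
print first.group('time')                # 03:22:32 PM

# Style 2: materialize everything for random access by index.
all_matches = list(pattern.finditer(search))
print all_matches[0].group('usrname')    # blcArmadillo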
|
Navigating Callable-Iterators
|
I'd like to use regular expressions to extract information out of some chat logs. The format of the strings being parsed is 03:22:32 PM <b>blcArmadillo</b>. I used the Python type() command to find that the variable messages is a callable-iterator. My question is: how do I most efficiently navigate through a callable-iterator? Are they like arrays, where you can just use an index? The only way I could find to "extract" the data was to loop through the returned values and add them to a list, as shown in the code snippet below.
times = []
messages = re.compile(r'(?P<time>..:..:.. ..).*?<b>(?P<usrname>.*?):</b>').finditer(search)
for result in messages:
times.append(result.group('time'))
Is there a better, more efficient way of doing this? Thanks for the help.
|
[
"An iterator is just an object with a next method. Every time you call it, it returns the next item in a collection. If you need to access arbitrary indexes, you will pretty much have to convert it into a list. Instead of this:\nfor result in messages:\n times.append(result.group('time'))\n\nYou can say this though:\ntimes = [result.group('time') for result in messages]\n\nThis does pretty much the same thing. However, I should warn you that doing this for large result sets will be pretty slow and will eat up a bunch of memory. Thus, you shouldn't do this if you don't need random access. If data an untrusted user enters will determine how many results will appear, you might also want to limit the number of things they can enter.\nEDIT: I just noticed that my previous answer didn't quite do the same as the snippet you posted, so I've updated it.\n"
] |
[
5
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000792304_python.txt
|
Q:
Find shortest substring
I have written code to find substrings in a string. It prints all substrings.
But I want only substrings that range in length from 2 to 6, and to print the substring of minimum length.
Please help me.
Program:
import re
p=re.compile('S(.+?)N')
s='ASDFANSAAAAAFGNDASMPRKYN'
s1=p.findall(s)
print s1
output:
['DFA', 'AAAAAFG', 'MPRKY']
Desired output:
'DFA' length=3
A:
If you already have the list, you can use the min function with the len function as the second argument.
>>> s1 = ['DFA', 'AAAAAFG', 'MPRKY']
>>> min(s1, key=len)
'DFA'
EDIT:
In the event that two are the same length, you can extend this further to produce a list containing the elements that are all the same length:
>>> s2 = ['foo', 'bar', 'baz', 'spam', 'eggs', 'knight']
>>> s2_min_len = len(min(s2, key=len))
>>> [e for e in s2 if len(e) == s2_min_len]
['foo', 'bar', 'baz']
The above should work when there is only 1 'shortest' element too.
EDIT 2: Just to be complete, it should be faster, at least according to my simple tests, to compute the length of the shortest element and use that in the list comprehension. Updated above.
A:
The regex 'S(.{2,6}?)N' will give you only matches with length 2 - 6 characters.
To return the shortest matching substring, use sorted(s1, key=len)[0].
Full example:
import re
p=re.compile('S(.{2,6}?)N')
s='ASDFANSAAAAAFGNDASMPRKYNSAAN'
s1=p.findall(s)
if s1:
print sorted(s1, key=len)[0]
print min(s1, key=len) # as suggested by Nick Presta
This works by sorting the list returned by findall by length, then returning the first item in the sorted list.
Edit: Nick Presta's answer is more elegant, I was not aware that min also could take a key argument...
|
Find shortest substring
|
I have written code to find substrings in a string. It prints all substrings.
But I want only substrings that range in length from 2 to 6, and to print the substring of minimum length.
Please help me.
Program:
import re
p=re.compile('S(.+?)N')
s='ASDFANSAAAAAFGNDASMPRKYN'
s1=p.findall(s)
print s1
output:
['DFA', 'AAAAAFG', 'MPRKY']
Desired output:
'DFA' length=3
|
[
"If you already have the list, you can use the min function with the len function as the second argument.\n>>> s1 = ['DFA', 'AAAAAFG', 'MPRKY']\n>>> min(s1, key=len)\n'DFA'\n\nEDIT:\nIn the event that two are the same length, you can extend this further to produce a list containing the elements that are all the same length:\n>>> s2 = ['foo', 'bar', 'baz', 'spam', 'eggs', 'knight']\n>>> s2_min_len = len(min(s2, key=len))\n>>> [e for e in s2 if len(e) is s2_min_len]\n['foo', 'bar', 'baz']\n\nThe above should work when there is only 1 'shortest' element too.\nEDIT 2: Just to be complete, it should be faster, at least according to my simple tests, to compute the length of the shortest element and use that in the list comprehension. Updated above.\n",
"The regex 'S(.{2,6}?)N' will give you only matches with length 2 - 6 characters.\nTo return the shortest matching substring, use sorted(s1, key=len)[0].\nFull example:\nimport re\np=re.compile('S(.{2,6}?)N')\ns='ASDFANSAAAAAFGNDASMPRKYNSAAN'\ns1=p.findall(s)\nif s1:\n print sorted(s1, key=len)[0]\n print min(s1, key=len) # as suggested by Nick Presta\n\nThis works by sorting the list returned by findall by length, then returning the first item in the sorted list.\nEdit: Nick Presta's answer is more elegant, I was not aware that min also could take a key argument...\n"
] |
[
9,
4
] |
[] |
[] |
[
"python",
"substring"
] |
stackoverflow_0000792394_python_substring.txt
|
Q:
How do I use Tkinter with Python on Windows Vista?
I installed Python 2.6 for one user on Windows Vista. Python works okay, but when I try: import Tkinter, it says the side-by-side configuration has errors. I've tried tinkering with the Visual Studio runtime, with no good results. Any ideas on how to resolve this?
A:
Maybe you should downgrade to the 2.5 version?
A:
It seems this is one of the many weird Vista problems; some random reinstalling or upgrading of the Visual Studio runtime sometimes seems to help, as does disabling SxS in the system configuration, writing a manifest file, etc.
Though I think you should downgrade to Windows XP.
|
How do I use Tkinter with Python on Windows Vista?
|
I installed Python 2.6 for one user on Windows Vista. Python works okay, but when I try: import Tkinter, it says the side-by-side configuration has errors. I've tried tinkering with the Visual Studio runtime, with no good results. Any ideas on how to resolve this?
|
[
"Maybe you should downgrade to 2.5 version?\n",
"It seems this is a one of the many weird Vista problems and some random reinstalling, installing/upgrading of the visual studio runtime or some such seems sometimes to help, or disabling sxs in the system configuration or writing a manifest file etc.\nThough I think you should downgrade to windows XP.\n"
] |
[
1,
1
] |
[
"python 2.6.2 + tkinter 8.5, no problems\n"
] |
[
-1
] |
[
"python",
"tkinter",
"windows",
"windows_vista"
] |
stackoverflow_0000219215_python_tkinter_windows_windows_vista.txt
|
Q:
How to offer platform-specific implementations of a module?
I need to make one function in a module platform-independent by offering several implementations, without changing any files that import it. The following works:
do_it = getattr(__import__(__name__), "do_on_" + sys.platform)
...but breaks if the module is put into a package.
An alternative would be an if/elif with hard-coded calls to the others in do_it().
Anything better?
A:
Put the code for platform support in different files in your package. Then add this to the file people are supposed to import from:
if sys.platform.startswith("win"):
from ._windows_support import *
elif sys.platform.startswith("linux"):
from ._unix_support import *
else:
raise ImportError("my module doesn't support this system")
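For concreteness, a hypothetical package layout this implies (all names here are just examples):
mypackage/
    __init__.py            # contains the dispatch shown above
    _windows_support.py    # defines the Windows implementations
    _unix_support.py       # defines the Linux implementations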
A:
Use globals()['do_on_' + platform] instead of the getattr call and your original idea should work whether this is inside a package or not.
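A minimal sketch of that lookup, with hypothetical per-platform functions (note that sys.platform values differ, e.g. 'linux2' versus 'win32'):
import sys

def do_on_linux2():
    print "doing it the Linux way"     # hypothetical implementation

def do_on_win32():
    print "doing it the Windows way"   # hypothetical implementation

# Look the name up in the module's own namespace; unlike __import__(__name__),
# this behaves the same whether or not the module lives inside a package.
do_it = globals()['do_on_' + sys.platform]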
A:
If you need to create a platform-specific instance of a class, you should look into the Factory Pattern:
link text
A:
Dive Into Python offers the exceptions alternative.
|
How to offer platform-specific implementations of a module?
|
I need to make one function in a module platform-independent by offering several implementations, without changing any files that import it. The following works:
do_it = getattr(__import__(__name__), "do_on_" + sys.platform)
...but breaks if the module is put into a package.
An alternative would be an if/elif with hard-coded calls to the others in do_it().
Anything better?
|
[
"Put the code for platform support in different files in your package. Then add this to the file people are supposed to import from:\nif sys.platform.startswith(\"win\"):\n from ._windows_support import *\nelif sys.platform.startswith(\"linux\"):\n from ._unix_support import *\nelse:\n raise ImportError(\"my module doesn't support this system\")\n\n",
"Use globals()['do_on_' + platform] instead of the getattr call and your original idea should work whether this is inside a package or not.\n",
"If you need to create a platform specific instance of an class you should look into the Factory Pattern:\nlink text\n",
"Dive Into Python offers the exceptions alternative.\n"
] |
[
5,
2,
1,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000791098_python.txt
|
Q:
Can I write my apps in python and then run them from C?
I need to write a client-server application. I want to write it in Python, because I'm familiar with it, but I would like to know if the Python code can be run from C. I'm planning to have two C projects, one containing the server code, and one containing the client code.
Is it possible to eval the Python code and run it? Is there another way of doing this?
The bottom line is that the Python code must run from C, and it must behave exactly as if run under the Python interpreter. I'm asking this now because I don't want to waste time writing the Python code just to find out later that I can't achieve this. As a side note, I only plan on using basic Python modules (socket, select, etc.).
EDIT: maybe this edit is in order. I haven't embedded Python in C before, and I don't know what the behaviour will be. The thing is, the server will have a select loop, and will therefore run "forever". Will C let me do that?
EDIT2: here is why I need to do this. At school, a teacher asked us to do a pretty complex client-server app in C. I'm going to cheat, write the code in Python, and embed it in C.
A:
here's a nice tutorial for doing exactly that http://www.linuxjournal.com/article/8497
A:
It's called embedding Python -- it's well covered in the Python docs. See https://docs.python.org/extending/embedding.html
See how do i use python libraries in C++?
A:
Yes you can run the Python code from C by embedding the interpreter in your program. You can expose portions of your C code to Python and call your exposed C code from Python as if they were normal Python functions.
A good start is the Embedding section in the Python docs. Also have a look at the article linked to by cobbal.
|
Can I write my apps in python and then run them from C?
|
I need to write a client-server application. I want to write it in Python, because I'm familiar with it, but I would like to know if the Python code can be run from C. I'm planning to have two C projects, one containing the server code, and one containing the client code.
Is it possible to eval the Python code and run it? Is there another way of doing this?
The bottom line is that the Python code must run from C, and it must behave exactly as if run under the Python interpreter. I'm asking this now because I don't want to waste time writing the Python code just to find out later that I can't achieve this. As a side note, I only plan on using basic Python modules (socket, select, etc.).
EDIT: maybe this edit is in order. I haven't embedded Python in C before, and I don't know what the behaviour will be. The thing is, the server will have a select loop, and will therefore run "forever". Will C let me do that?
EDIT2: here is why I need to do this. At school, a teacher asked us to do a pretty complex client-server app in C. I'm going to cheat, write the code in Python, and embed it in C.
|
[
"here's a nice tutorial for doing exactly that http://www.linuxjournal.com/article/8497\n",
"It's called embedding Python -- it's well covered in the Python docs. See https://docs.python.org/extending/embedding.html\nSee how do i use python libraries in C++?\n",
"Yes you can run the Python code from C by embedding the interpreter in your program. You can expose portions of your C code to Python and call your exposed C code from Python as if they were normal Python functions.\nA good start is the Embedding section in the Python docs. Also have a look at the article linked to by cobbal.\n"
] |
[
5,
2,
1
] |
[] |
[] |
[
"c",
"interop",
"python"
] |
stackoverflow_0000792924_c_interop_python.txt
|
Q:
How to avoid Gdk-ERROR caused by Tkinter, visual, and ipython?
The following lines cause a crash with ipython as soon as I close the Tk window instance a.
import visual, Tkinter
a = Tkinter.Tk()
a.update()
display = visual.display(title = "Hallo")
display.exit = 0
visual.sphere()
If I close the visual display first, the entire terminal crashes. I run everything on kubuntu 8.10. Is this a bug or am I doing something wrong? If this is a bug: Is there a smart way to avoid it?
Cheers, Philipp
A:
Have you tried starting ipython with the -gthread -tk command-line switches?
From ipython --help:
-gthread, -qthread, -q4thread, -wthread, -pylab
Only ONE of these can be given, and it can only be given as the
first option passed to IPython (it will have no effect in any
other position). They provide threading support for the GTK, QT
and WXWidgets toolkits, and for the matplotlib library.
With any of the first four options, IPython starts running a
separate thread for the graphical toolkit's operation, so that
you can open and control graphical elements from within an
IPython command line, without blocking. All four provide
essentially the same functionality, respectively for GTK, QT3,
QT4 and WXWidgets (via their Python interfaces).
Note that with -wthread, you can additionally use the -wxversion
option to request a specific version of wx to be used. This
requires that you have the 'wxversion' Python module installed,
which is part of recent wxPython distributions.
If -pylab is given, IPython loads special support for the mat-
plotlib library (http://matplotlib.sourceforge.net), allowing
interactive usage of any of its backends as defined in the
user's .matplotlibrc file. It automatically activates GTK, QT
or WX threading for IPyhton if the choice of matplotlib backend
requires it. It also modifies the %run command to correctly
execute (without blocking) any matplotlib-based script which
calls show() at the end.
-tk The -g/q/q4/wthread options, and -pylab (if matplotlib is
configured to use GTK, QT or WX), will normally block Tk
graphical interfaces. This means that when GTK, QT or WX
threading is active, any attempt to open a Tk GUI will result in
a dead window, and possibly cause the Python interpreter to
crash. An extra option, -tk, is available to address this
issue. It can ONLY be given as a SECOND option after any of the
above (-gthread, -qthread, q4thread, -wthread or -pylab).
If -tk is given, IPython will try to coordinate Tk threading
with GTK, QT or WX. This is however potentially unreliable, and
you will have to test on your platform and Python configuration
to determine whether it works for you. Debian users have
reported success, apparently due to the fact that Debian builds
all of Tcl, Tk, Tkinter and Python with pthreads support. Under
other Linux environments (such as Fedora Core 2/3), this option
has caused random crashes and lockups of the Python interpreter.
Under other operating systems (Mac OSX and Windows), you'll need
to try it to find out, since currently no user reports are
available.
There is unfortunately no way for IPython to determine at run-
time whether -tk will work reliably or not, so you will need to
do some experiments before relying on it for regular work.
|
How to avoid Gdk-ERROR caused by Tkinter, visual, and ipython?
|
The following lines cause a crash with ipython as soon as I close the Tk window instance a.
import visual, Tkinter
a = Tkinter.Tk()
a.update()
display = visual.display(title = "Hallo")
display.exit = 0
visual.sphere()
If I close the visual display first, the entire terminal crashes. I run everything on kubuntu 8.10. Is this a bug or am I doing something wrong? If this is a bug: Is there a smart way to avoid it?
Cheers, Philipp
|
[
"Have you tried starting ipython with the -gthread -tk command-line switches? \nFrom ipython --help:\n\n -gthread, -qthread, -q4thread, -wthread, -pylab\n\n Only ONE of these can be given, and it can only be given as the\n first option passed to IPython (it will have no effect in any\n other position). They provide threading support for the GTK, QT\n and WXWidgets toolkits, and for the matplotlib library.\n\n With any of the first four options, IPython starts running a\n separate thread for the graphical toolkit's operation, so that\n you can open and control graphical elements from within an\n IPython command line, without blocking. All four provide\n essentially the same functionality, respectively for GTK, QT3,\n QT4 and WXWidgets (via their Python interfaces).\n\n Note that with -wthread, you can additionally use the -wxversion\n option to request a specific version of wx to be used. This\n requires that you have the 'wxversion' Python module installed,\n which is part of recent wxPython distributions.\n\n If -pylab is given, IPython loads special support for the mat-\n plotlib library (http://matplotlib.sourceforge.net), allowing\n interactive usage of any of its backends as defined in the\n user's .matplotlibrc file. It automatically activates GTK, QT\n or WX threading for IPyhton if the choice of matplotlib backend\n requires it. It also modifies the %run command to correctly\n execute (without blocking) any matplotlib-based script which\n calls show() at the end.\n\n -tk The -g/q/q4/wthread options, and -pylab (if matplotlib is\n configured to use GTK, QT or WX), will normally block Tk\n graphical interfaces. This means that when GTK, QT or WX\n threading is active, any attempt to open a Tk GUI will result in\n a dead window, and possibly cause the Python interpreter to\n crash. An extra option, -tk, is available to address this\n issue. It can ONLY be given as a SECOND option after any of the\n above (-gthread, -qthread, q4thread, -wthread or -pylab).\n\n If -tk is given, IPython will try to coordinate Tk threading\n with GTK, QT or WX. This is however potentially unreliable, and\n you will have to test on your platform and Python configuration\n to determine whether it works for you. Debian users have\n reported success, apparently due to the fact that Debian builds\n all of Tcl, Tk, Tkinter and Python with pthreads support. Under\n other Linux environments (such as Fedora Core 2/3), this option\n has caused random crashes and lockups of the Python interpreter.\n Under other operating systems (Mac OSX and Windows), you'll need\n to try it to find out, since currently no user reports are\n available.\n\n There is unfortunately no way for IPython to determine at run-\n time whether -tk will work reliably or not, so you will need to\n do some experiments before relying on it for regular work.\n\n"
] |
[
1
] |
[] |
[] |
[
"python",
"tkinter"
] |
stackoverflow_0000792816_python_tkinter.txt
|
Q:
Basic MVT issue in Django
I have a Django website as follows:
site has several views
each view has its own template to show its data
each template extends a base template
base template is the base of the site, has all the JS/CSS and the basic layout
So up until now it's all good. So now we have the master head of the site (which exists in the base template), and it is common to all the views.
But now I want to make it dynamic, and add some dynamic data to it. In which view do I do this? All my views are basically render_to_response('viewtemplate.html', someContext). So how do I add a common view to a base template?
Obviously I will not duplicate the common code into each separate view...
I think I'm missing something fundamental in the MVT basis of Django.
A:
You want to use context_instance and RequestContexts.
First, add at the top of your views.py:
from django.template import RequestContext
Then, update all of your views to look like:
def someview(request, ...)
...
return render_to_response('viewtemplate.html', someContext, context_instance=RequestContext(request))
In your settings.py, add:
TEMPLATE_CONTEXT_PROCESSORS = (
'django.core.context_processors.auth',
...
'myproj.app.context_processors.dynamic',
'myproj.app.context_processors.sidebar',
'myproj.app.context_processors.etc',
)
Each of these context_processors is a function that takes the request object and returns a context in the form of a dictionary. Just put all the functions in context_processors.py inside the appropriate app. For example, a blog might have a sidebar with a list of recent entries and comments. context_processors.py would just define:
def sidebar(request):
recent_entry_list = Entry.objects...
recent_comment_list = Comment.objects...
return {'recent_entry_list': recent_entry_list, 'recent_comment_list': recent_comment_list}
You can add as many or as few as you like.
For more, check out the Django Template Docs.
A:
Context processors and RequestContext (see Tyler's answer) are the way to go for data that is used on every page load. For data that you may need on various views, but not all (especially data that isn't really related to the primary purpose of the view, but appears in something like a navigation sidebar), it often makes most sense to define a custom template tag for retrieving the data.
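A minimal sketch of such a custom tag, here as an inclusion tag; the module path, template name, and model are hypothetical:
# myapp/templatetags/sidebar_tags.py
from django import template
from myapp.models import Entry

register = template.Library()

@register.inclusion_tag('sidebar_entries.html')
def recent_entries(count=5):
    # Runs only on pages whose templates do {% load sidebar_tags %}
    # and then use {% recent_entries %}.
    return {'recent_entry_list': Entry.objects.all()[:count]}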
|
Basic MVT issue in Django
|
I have a Django website as follows:
site has several views
each view has its own template to show its data
each template extends a base template
base template is the base of the site, has all the JS/CSS and the basic layout
So up until now it's all good. So now we have the master head of the site (which exists in the base template), and it is common to all the views.
But now I want to make it dynamic, and add some dynamic data to it. In which view do I do this? All my views are basically render_to_response('viewtemplate.html', someContext). So how do I add a common view to a base template?
Obviously I will not duplicate the common code into each separate view...
I think I'm missing something fundamental in the MVT basis of Django.
|
[
"You want to use context_instance and RequestContexts. \nFirst, add at the top of your views.py:\nfrom django.template import RequestContext\n\nThen, update all of your views to look like:\ndef someview(request, ...)\n ...\n return render_to_response('viewtemplate.html', someContext, context_instance=RequestContext(request))\n\nIn your settings.py, add:\nTEMPLATE_CONTEXT_PROCESSORS = (\n 'django.core.context_processors.auth',\n ...\n 'myproj.app.context_processors.dynamic',\n 'myproj.app.context_processors.sidebar',\n 'myproj.app.context_processors.etc',\n)\n\nEach of these context_processors is a function takes the request object and returns a context in the form of a dictionary. Just put all the functions in context_processors.py inside the appropriate app. For example, a blog might have a sidebar with a list of recent entries and comments. context_processors.py would just define:\ndef sidebar(request):\n recent_entry_list = Entry.objects...\n recent_comment_list = Comment.objects...\n return {'recent_entry_list': recent_entry_list, 'recent_comment_list': recent_comment_list}\n\nYou can add as many or as few as you like.\nFor more, check out the Django Template Docs.\n",
"Context processors and RequestContext (see Tyler's answer) are the way to go for data that is used on every page load. For data that you may need on various views, but not all (especially data that isn't really related to the primary purpose of the view, but appears in something like a navigation sidebar), it often makes most sense to define a custom template tag for retrieving the data.\n"
] |
[
7,
2
] |
[
"or use a generic view, because they are automatically passed the request context.\na simple direct to template generic view can be used to avoid having to import/pass in the request context.\n"
] |
[
-1
] |
[
"django",
"django_templates",
"python"
] |
stackoverflow_0000786149_django_django_templates_python.txt
|
Q:
Is site-packages appropriate for applications or just libraries?
I'm in a bit of a discussion with some other developers on an open source project. I'm new to python but it seems to me that site-packages is meant for libraries and not end user applications. Is that true or is site-packages an appropriate place to install an application meant to be run by an end user?
A:
We do it like this.
Most stuff we download is in site-packages. They come from pypi or Source Forge or some other external source; they are easy to rebuild; they're highly reused; they don't change much.
Most stuff we write is in other locations (usually under /opt, or c:\opt) AND is included in the PYTHONPATH.
There's no great reason for keeping our stuff out of site-packages. However, our feeble excuse is that our stuff changes a lot. Pretty much constantly. To reinstall in site-packages every time we think we have something better is a bit of a pain.
Since we're testing out of our working directories or SVN checkout directories, our test environments make heavy use of PYTHONPATH.
The development use of PYTHONPATH bled over into production. We use a setup.py for production installs, but install to an alternate home under /opt and set the PYTHONPATH to include /opt/ourapp-1.1.
A:
Once you get to the point where your application is ready for distribution, package it up for your favorite distributions/OSes in a way that puts your library code in site-packages and executable scripts on the system path.
Until then (i.e. for all development work), don't do any of the above: save yourself major headaches and use zc.buildout or virtualenv to keep your development code (and, if you like, its dependencies as well) isolated from the rest of the system.
A:
The program run by the end user is usually somewhere in their path, with most of the code in the module directory, which is often in site-packages.
Many python programs will have a small script located in the path, which imports the module, and calls a "main" method to run the program. This allows the programmer to do some upfront checks, and possibly modify sys.path if needed to find the needed module. This can also speed up load time on larger programs, because only files that are imported will be run from bytecode.
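A minimal sketch of that thin launcher script (the package name myapp is hypothetical):
#!/usr/bin/env python
# Installed on the PATH; the real code lives in site-packages as "myapp".
import sys

if __name__ == '__main__':
    # Upfront checks or sys.path tweaks can go here, before the heavy imports.
    from myapp import main
    sys.exit(main.main(sys.argv[1:]))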
A:
Site-packages is for libraries, definitely.
A hybrid approach might work: you can install the libraries required by your application in site-packages and then install the main module elsewhere.
A:
If you can turn part of the application into a library and provide an API, then site-packages is a good place for it. This is actually how many Python applications do it.
But from a user's or administrator's point of view, that isn't actually the problem. The problem is how we can manage the installed stuff. After I have installed it, how can I upgrade and uninstall it?
I use Fedora. If I use the Python that came with it, I don't like installing things to site-packages outside the RPM system. In some cases I have built an RPM myself to install it.
If I build my own Python outside RPM, then I naturally want to use Python's mechanisms to manage it.
A third way is to use something like easy_install to install such a thing, for example as a user, to the home directory.
So
Allow packaging to distributions.
Allow selecting the python to use.
Allow using python installed by distribution where you don't have permissions to site-packages.
Allow using python installed outside distribution where you can use site-packages.
|
Is site-packages appropriate for applications or just libraries?
|
I'm in a bit of a discussion with some other developers on an open source project. I'm new to python but it seems to me that site-packages is meant for libraries and not end user applications. Is that true or is site-packages an appropriate place to install an application meant to be run by an end user?
|
[
"We do it like this.\nMost stuff we download is in site-packages. They come from pypi or Source Forge or some other external source; they are easy to rebuild; they're highly reused; they don't change much.\nMust stuff we write is in other locations (usually under /opt, or c:\\opt) AND is included in the PYTHONPATH.\nThere's no great reason for keeping our stuff out of site-packages. However, our feeble excuse is that our stuff changes a lot. Pretty much constantly. To reinstall in site-packages every time we think we have something better is a bit of a pain.\nSince we're testing out of our working directories or SVN checkout directories, our test environments make heavy use of PYTHONPATH. \nThe development use of PYTHONPATH bled over into production. We use a setup.py for production installs, but install to an alternate home under /opt and set the PYTHONPATH to include /opt/ourapp-1.1.\n",
"Once you get to the point where your application is ready for distribution, package it up for your favorite distributions/OSes in a way that puts your library code in site-packages and executable scripts on the system path.\nUntil then (i.e. for all development work), don't do any of the above: save yourself major headaches and use zc.buildout or virtualenv to keep your development code (and, if you like, its dependencies as well) isolated from the rest of the system.\n",
"The program run by the end user is usually somewhere in their path, with most of the code in the module directory, which is often in site-packages.\nMany python programs will have a small script located in the path, which imports the module, and calls a \"main\" method to run the program. This allows the programmer to do some upfront checks, and possibly modify sys.path if needed to find the needed module. This can also speed up load time on larger programs, because only files that are imported will be run from bytecode.\n",
"Site-packages is for libraries, definitely.\nA hybrid approach might work: you can install the libraries required by your application in site-packages and then install the main module elsewhere.\n",
"If you can turn part of the application to a library and provide an API, then site-packages is a good place for it. This is actually how many python applications do it.\nBut from user or administrator point of view that isn't actually the problem. The problem is how we can manage the installed stuff. After I have installed it, how can I upgrade and uninstall it?\nI use Fedora. If I use the python that came with it, I don't like installing things to site-packages outside the RPM system. In some cases I have built rpm myself to install it.\nIf I build my own python outside RPM, then I naturally want to use python's mechanisms to manage it.\nThird way is to use something like easy_install to install such thing for example as a user to home directory.\nSo\n\nAllow packaging to distributions.\nAllow selecting the python to use.\nAllow using python installed by distribution where you don't have permissions to site-packages.\nAllow using python installed outside distribution where you can use site-packages.\n\n"
] |
[
4,
4,
3,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000787015_python.txt
|
Q:
How do I get the dimensions of the view (not obstructed by scrollbars) in a wx.ScrolledWindow?
Is there an easy way to do this? Alternatively, if I could get the width of the scrollbars, I could just use the dimensions of the ScrolledWindow and subtract them out myself...
A:
Use wx.SystemSettings.GetMetric() with wx.SYS_HSCROLL_Y and wx.SYS_VSCROLL_X to get the scrollbar sizes. Then use window.GetClientSize() and subtract it out.
http://docs.wxwidgets.org/stable/wx_wxsystemsettings.html#wxsystemsettings
>>> wx.SystemSettings.GetMetric(wx.SYS_HSCROLL_Y)
16
>>> wx.SystemSettings.GetMetric(wx.SYS_VSCROLL_X)
16
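Putting the two together, a small sketch (a sketch only: whether GetClientSize already excludes the scrollbars can vary by platform and window style, so verify on your target OS):
import wx

def usable_view_size(window):
    # Client area minus the standard scrollbar thicknesses.
    w, h = window.GetClientSize()
    w -= wx.SystemSettings.GetMetric(wx.SYS_VSCROLL_X)
    h -= wx.SystemSettings.GetMetric(wx.SYS_HSCROLL_Y)
    return w, h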
|
How do I get the dimensions of the view (not obstructed by scrollbars) in a wx.ScrolledWindow?
|
Is there an easy way to do this? Alternatively, if I could get the width of the scrollbars, I could just use the dimensions of the ScrolledWindow and subtract them out myself...
|
[
"Use wx.SystemSettings.GetMetric() with wx.SYS_HSCROLL_Y and wx.SYS_VSCROLL_X to get the scrollbar sizes. Then use window.GetClientSize() and subtract it out.\nhttp://docs.wxwidgets.org/stable/wx_wxsystemsettings.html#wxsystemsettings\n>>> wx.SystemSettings.GetMetric(wx.SYS_HSCROLL_Y)\n16\n>>> wx.SystemSettings.GetMetric(wx.SYS_VSCROLL_X)\n16\n\n"
] |
[
4
] |
[] |
[] |
[
"python",
"scrolledwindow",
"wxpython",
"wxwidgets"
] |
stackoverflow_0000793381_python_scrolledwindow_wxpython_wxwidgets.txt
|
Q:
Getting Aspen and Gheat on Windows working
I am not really familiar with Python setup. I am trying to get gheat running on a Windows box, and it tells me it can't find pygame.
I have tried Python 2.5 and 2.6, and older pygame versions too.
I have installed those, as well as numpy, since it is a dependency.
Could someone with experience try and help me get it up and running?
I have tried running Process Monitor against it, and it seems to find all the files, etc., but aspen/gheat still tells me it can't find Pygame.
Links below.
1.) http://www.python.org/
2.) http://code.google.com/p/gheat/
3.) http://www.pygame.org/
4.) Link
Cheers for any help.
As an aside: it works fine on my Ubuntu box just by installing pygame!
A:
With thanks to SeC- from the #csharp channel on Freenode, who figured out it is a problem with the latest trunk of aspen (I thought I'd tried the older version)
http://www.zetadev.com/software/aspen/0.8/dist/aspen-0.8.zip
You will need the 0.8 version to get it working!!
Cheers anyway!
|
Getting Aspen and Gheat on Windows working
|
I am not really familiar with Python setup. I am trying to get gheat running on a Windows box, and it tells me it can't find pygame.
I have tried Python 2.5 and 2.6, and older pygame versions too.
I have installed those, as well as numpy, since it is a dependency.
Could someone with experience try and help me get it up and running?
I have tried running Process Monitor against it, and it seems to find all the files, etc., but aspen/gheat still tells me it can't find Pygame.
Links below.
1.) http://www.python.org/
2.) http://code.google.com/p/gheat/
3.) http://www.pygame.org/
4.) Link
Cheers for any help.
As an aside: it works fine on my Ubuntu box just by installing pygame!
|
[
"With thanks to SeC- from the #csharp channel on Freenode, he figured out it is the problem with the latest trunk of aspen, (I thought I'd tried the older version)\nhttp://www.zetadev.com/software/aspen/0.8/dist/aspen-0.8.zip\nYou will need the 0.8 version to get it working!!\nCheers anyway!\n"
] |
[
0
] |
[] |
[] |
[
"aspen",
"pygame",
"python"
] |
stackoverflow_0000793341_aspen_pygame_python.txt
|
Q:
retrieve bounding box of a geodjango multipolygon object
How can I get the bounding box of a MultiPolygon object in geodjango? Can't find anything in the API http://geodjango.org/docs/geos.html ...
A:
Use the extent property.
It returns a 4-tuple comprising the lower left and upper right coordinates, respectively.
You can also use the envelope property if you want a Polygon object representation of the bounding box.
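A quick illustration (built with the GEOS classes, so it can run without a database; the coordinates are arbitrary):
>>> from django.contrib.gis.geos import Polygon, MultiPolygon
>>> mp = MultiPolygon(Polygon(((0, 0), (0, 1), (1, 1), (0, 0))),
...                   Polygon(((2, 2), (2, 3), (3, 3), (2, 2))))
>>> mp.extent          # (xmin, ymin, xmax, ymax)
(0.0, 0.0, 3.0, 3.0)
>>> box = mp.envelope  # the same box as a Polygon object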
|
retrieve bounding box of a geodjango multipolygon object
|
How can I get the bounding box of a MultiPolygon object in geodjango? Can't find anything in the API http://geodjango.org/docs/geos.html ...
|
[
"Use the extent property.\nIt returns a 4-tuple comprising the lower left and upper right coordinates, respectively.\nYou can also use the envelope property if you want a Polygon object representation of the bounding box.\n"
] |
[
14
] |
[] |
[] |
[
"django",
"geodjango",
"gis",
"python"
] |
stackoverflow_0000793240_django_geodjango_gis_python.txt
|
Q:
Simple webserver or web testing framework
I need to write test cases for a complex webapp which does some interacting with remote 3rd-party CGI-based web services.
I am planning to implement some of the 3rd-party services in a dummy webserver, so that I have full control over the test cases.
I am looking for a simple Python HTTP webserver or framework to emulate the 3rd-party interface.
A:
Use cherrypy, take a look at Hello World:
import cherrypy
class HelloWorld(object):
def index(self):
return "Hello World!"
index.exposed = True
cherrypy.quickstart(HelloWorld())
Run this code and you have a very fast Hello World server ready on localhost port 8080!! Pretty easy huh?
A:
You might be happiest with a WSGI service, since it's most like CGI.
Look at werkzeug.
A:
Take a look the standard module wsgiref:
https://docs.python.org/2.6/library/wsgiref.html
At the end of that page is a small example. Something like this could already be sufficient for your needs.
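A minimal sketch of such a dummy service with wsgiref (the URL check and canned XML are just placeholders):
from wsgiref.simple_server import make_server

def fake_service(environ, start_response):
    # Return a canned response for whatever the app under test requests.
    start_response('200 OK', [('Content-Type', 'text/xml')])
    return ['<xml>dummy answer for %s</xml>' % environ['PATH_INFO']]

httpd = make_server('localhost', 8000, fake_service)
httpd.serve_forever()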
A:
I would look into Django.
A:
It might make more sense simply to mock (or stub, or whatever the term is) urllib, or whatever module you are using to communicate with the remote web service.
Even simply overriding urllib.urlopen might be enough:
import urllib
from StringIO import StringIO

# Keep a reference to the real implementation so the mock can fall back to it.
_real_urlopen = urllib.urlopen

class mock_response(StringIO):
    def info(self):
        raise NotImplementedError("mocked urllib response has no info method")
    def getinfo(self):
        raise NotImplementedError("mocked urllib response has no getinfo method")

def urlopen(url):
    if url == "http://example.com/api/something":
        return mock_response("<xml></xml>")
    else:
        return _real_urlopen(url)
is_unittest = True
if is_unittest:
urllib.urlopen = urlopen
print urllib.urlopen("http://example.com/api/something").read()
I used something very similar here, to emulate a simple API, before I got an API key.
|
Simple webserver or web testing framework
|
I need to write test cases for a complex webapp which does some interacting with remote 3rd-party CGI-based web services.
I am planning to implement some of the 3rd-party services in a dummy webserver, so that I have full control over the test cases.
I am looking for a simple Python HTTP webserver or framework to emulate the 3rd-party interface.
|
[
"Use cherrypy, take a look at Hello World:\nimport cherrypy\n\nclass HelloWorld(object):\n def index(self):\n return \"Hello World!\"\n index.exposed = True\n\ncherrypy.quickstart(HelloWorld())\n\nRun this code and you have a very fast Hello World server ready on localhost port 8080!! Pretty easy huh?\n",
"You might be happiest with a WSGI service, since it's most like CGI.\nLook at werkzeug.\n",
"Take a look the standard module wsgiref:\nhttps://docs.python.org/2.6/library/wsgiref.html\nAt the end of that page is a small example. Something like this could already be sufficient for your needs.\n",
"I would look into Django.\n",
"It might be simpler sense to mock (or stub, or whatever the term is) urllib, or whatever module you are using to communicate with the remote web-service?\nEven simply overriding urllib.urlopen might be enough:\nimport urllib\nfrom StringIO import StringIO\n\nclass mock_response(StringIO):\n def info(self):\n raise NotImplementedError(\"mocked urllib response has no info method\")\n def getinfo():\n raise NotImplementedError(\"mocked urllib response has no getinfo method\")\n\ndef urlopen(url):\n if url == \"http://example.com/api/something\":\n resp = mock_response(\"<xml></xml>\")\n return resp\n else:\n urllib.urlopen(url)\n\n\nis_unittest = True\n\nif is_unittest:\n urllib.urlopen = urlopen\n\nprint urllib.urlopen(\"http://example.com/api/something\").read()\n\nI used something very similar here, to emulate a simple API, before I got an API key.\n"
] |
[
4,
2,
2,
0,
0
] |
[] |
[] |
[
"python",
"testing",
"web_applications",
"web_services"
] |
stackoverflow_0000776495_python_testing_web_applications_web_services.txt
|
Q:
Accessing MultipleChoiceField choices values
How do I get the choice field's display values, and not the keys, from the form?
I have a form where I let the user select some users' emails for a company.
For example, I have a form like this (the reason for a model form is that it's inside a formset, but that is not important for now):
class Contacts(forms.ModelForm):
    def __init__(self, *args, **kwargs):
        super(Contacts, self).__init__(*args, **kwargs)
        self.company = kwargs['initial']['company']
        self.fields['emails'].choices = self.company.emails
        # This produces stuff like:
        # [(1, '[email protected]'), ...]

    emails = forms.MultipleChoiceField(required=False)

    class Meta:
        model = Company
and I want to get the list of all selected emails in the view, something like this:
form = ContactsForm(request.POST)
if form.is_valid():
    form.cleaned_data['emails'][0] # produces 1 and not email
There is no get_emails_display() kind of method, like in the model for example. Also, the suggested form.fields['emails'].choices does not work, as it gives ALL the choices, whereas I need something like form.fields['emails'].selected_choices.
Any ideas, or let me know if it's unclear.
A:
Ok, hopefully this is closer to what you wanted.
emails = filter(lambda t: t[0] in form.cleaned_data['emails'], form.fields['emails'].choices)
That should give you the list of selected choices that you want.
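If you want just the e-mail strings rather than the (key, label) pairs, a dict built from the same choices reads a little more clearly (a sketch using the names from the question, assuming the choice keys and the cleaned values compare equal):
choices = dict(form.fields['emails'].choices)
emails = [choices[key] for key in form.cleaned_data['emails']]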
A:
It might not be a beautiful solution, but I would imagine that the display names are all still available from form.fields['emails'].choices so you can loop through form.cleaned_data['emails'] and get the choice name from the field's choices.
|
Accessing MultipleChoiceField choices values
|
How do I get the choices field values and not the key from the form?
I have a form where I let the user select some user's emails for a company.
For example I have a form like this (this reason for model form is that it's inside a formset - but that is not important for now):
class Contacts(forms.ModelForm):
def __init__(self, *args, **kwargs):
super(Contacts, self).__init__(*args, **kwargs)
self.company = kwargs['initial']['company']
self.fields['emails'].choices = self.company.emails
# This produces stuff like:
# [(1, '[email protected]'), ...]
emails = forms.MultipleChoiceField(required=False)
class Meta:
model = Company
and I want to get the list of all selected emails in the view, something like this:
form = ContactsForm(request.POST)
if form.is_valid():
form.cleaned_data['emails'][0] # produces 1 and not email
There is no get_emails_display() kind of method, like in the model for example. Also, a suggestion form.fields['emails'].choices does not work, as it gives ALL the choices, whereas I need something like form.fields['emails'].selected_choices?
Any ideas, or let me know if it's unclear.
|
[
"Ok, hopefully this is closer to what you wanted.\nemails = filter(lambda t: t[0] in form.cleaned_data['emails'], form.fields['emails'].choices)\n\nThat should give you the list of selected choices that you want.\n",
"It might not be a beautiful solution, but I would imagine that the display names are all still available from form.fields['emails'].choices so you can loop through form.cleaned_data['emails'] and get the choice name from the field's choices.\n"
] |
[
8,
1
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0000794178_django_python.txt
|
Q:
Why are 0d arrays in Numpy not considered scalar?
Surely a 0d array is scalar, but Numpy does not seem to think so... am I missing something or am I just misunderstanding the concept?
>>> foo = numpy.array(1.11111111111, numpy.float64)
>>> numpy.ndim(foo)
0
>>> numpy.isscalar(foo)
False
>>> foo.item()
1.11111111111
A:
One should not think too hard about it. It's ultimately better for the mental health and longevity of the individual.
The curious situation with Numpy scalar-types was born out of the fact that there is no graceful and consistent way to degrade the 1x1 matrix to scalar types. Even though mathematically they are the same thing, they are handled by very different code.
If you've been doing any amount of scientific code, ultimately you'd want things like max(a) to work on matrices of all sizes, even scalars. Mathematically, this is a perfectly sensible thing to expect. However for programmers this means that whatever presents scalars in Numpy should have the .shape and .ndim attributes, so at least the ufuncs don't have to do explicit type checking on their input for the 21 possible scalar types in Numpy.
On the other hand, they should also work with existing Python libraries that do explicit type-checks on scalar types. This is a dilemma, since a Numpy ndarray would have to individually change its type when it has been reduced to a scalar, and there is no way of knowing whether that has occurred without checks on every access. Actually going that route would probably make things ridiculously slow to work with by scalar-type standards.
The Numpy developers' solution is to inherit from both ndarray and Python scalars for its own scalar types, so that all scalars also have .shape, .ndim, .T, etc. The 1x1 matrix will still be there, but its use will be discouraged if you know you'll be dealing with a scalar. While this should work fine in theory, occasionally you could still see some places where they missed with the paint roller, and the ugly innards are exposed for all to see:
>>> from numpy import *
>>> a = array(1)
>>> b = int_(1)
>>> a.ndim
0
>>> b.ndim
0
>>> a[...]
array(1)
>>> a[()]
1
>>> b[...]
array(1)
>>> b[()]
1
There's really no reason why a[...] and a[()] should return different things, but they do. There are proposals in place to change this, but it looks like they forgot to finish the job for 1x1 arrays.
A potentially bigger, and possibly non-resolvable, issue is the fact that Numpy scalars are immutable. Therefore "spraying" a scalar into an ndarray, mathematically the adjoint operation of collapsing an array into a scalar, is a PITA to implement. You can't actually grow a Numpy scalar; it cannot by definition be cast into an ndarray, even though newaxis mysteriously works on it:
>>> b[0,1,2,3] = 1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'numpy.int32' object does not support item assignment
>>> b[newaxis]
array([1])
In Matlab, growing the size of a scalar is a perfectly acceptable and brainless operation. In Numpy you have to stick a jarring a = array(a) everywhere you think you'd have the possibility of starting with a scalar and ending up with an array. I understand why Numpy has to be this way to play nice with Python, but that doesn't change the fact that many new switchers are deeply confused about this. Some have explicit memory of struggling with this behaviour and eventually persevering, while others who are too far gone are generally left with some deep shapeless mental scar that frequently haunts their most innocent dreams. It's an ugly situation for all.
A:
You have to create the scalar array a little bit differently:
>>> x = numpy.float64(1.111)
>>> x
1.111
>>> numpy.isscalar(x)
True
>>> numpy.ndim(x)
0
It looks like scalars in numpy may be a bit of a different concept from what you may be used to from a purely mathematical standpoint. I'm guessing you're thinking in terms of scalar matrices?
|
Why are 0d arrays in Numpy not considered scalar?
|
Surely a 0d array is scalar, but Numpy does not seem to think so... am I missing something or am I just misunderstanding the concept?
>>> foo = numpy.array(1.11111111111, numpy.float64)
>>> numpy.ndim(foo)
0
>>> numpy.isscalar(foo)
False
>>> foo.item()
1.11111111111
|
[
"One should not think too hard about it. It's ultimately better for the mental health and longevity of the individual.\nThe curious situation with Numpy scalar-types was bore out of the fact that there is no graceful and consistent way to degrade the 1x1 matrix to scalar types. Even though mathematically they are the same thing, they are handled by very different code.\nIf you've been doing any amount of scientific code, ultimately you'd want things like max(a) to work on matrices of all sizes, even scalars. Mathematically, this is a perfectly sensible thing to expect. However for programmers this means that whatever presents scalars in Numpy should have the .shape and .ndim attirbute, so at least the ufuncs don't have to do explicit type checking on its input for the 21 possible scalar types in Numpy. \nOn the other hand, they should also work with existing Python libraries that does do explicit type-checks on scalar type. This is a dilemma, since a Numpy ndarray have to individually change its type when they've been reduced to a scalar, and there is no way of knowing whether that has occurred without it having do checks on all access. Actually going that route would probably make bit ridiculously slow to work with by scalar type standards.\nThe Numpy developer's solution is to inherit from both ndarray and Python scalars for its own scalary type, so that all scalars also have .shape, .ndim, .T, etc etc. The 1x1 matrix will still be there, but its use will be discouraged if you know you'll be dealing with a scalar. While this should work fine in theory, occasionally you could still see some places where they missed with the paint roller, and the ugly innards are exposed for all to see:\n>>> from numpy import *\n>>> a = array(1)\n>>> b = int_(1)\n>>> a.ndim\n0\n>>> b.ndim\n0\n>>> a[...]\narray(1)\n>>> a[()]\n1\n>>> b[...]\narray(1)\n>>> b[()]\n1\n\nThere's really no reason why a[...] and a[()] should return different things, but it does. There are proposals in place to change this, but looks like they forgot to finish the job for 1x1 arrays.\nA potentially bigger, and possibly non-resolvable issue, is the fact that Numpy scalars are immutable. Therefore \"spraying\" a scalar into a ndarray, mathematically the adjoint operation of collapsing an array into a scalar, is a PITA to implement. You can't actually grow a Numpy scalar, it cannot by definition be cast into an ndarray, even though newaxis mysteriously works on it:\n>>> b[0,1,2,3] = 1\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nTypeError: 'numpy.int32' object does not support item assignment\n>>> b[newaxis]\narray([1])\n\nIn Matlab, growing the size of a scalar is a perfectly acceptable and brainless operation. In Numpy you have to stick jarring a = array(a) everywhere you think you'd have the possibility of starting with a scalar and ending up with an array. I understand why Numpy has to be this way to play nice with Python, but that doesn't change the fact that many new switchers are deeply confused about this. Some have explicit memory of struggling with this behaviour and eventually persevering, while others who are too far gone are generally left with some deep shapeless mental scar that frequently haunts their most innocent dreams. It's an ugly situation for all.\n",
"You have to create the scalar array a little bit differently:\n>>> x = numpy.float64(1.111)\n>>> x\n1.111\n>>> numpy.isscalar(x)\nTrue\n>>> numpy.ndim(x)\n0\n\nIt looks like scalars in numpy may be a bit different concept from what you may be used to from a purely mathematical standpoint. I'm guessing you're thinking in terms of scalar matricies?\n"
] |
[
166,
6
] |
[] |
[] |
[
"numpy",
"python"
] |
stackoverflow_0000773030_numpy_python.txt
|
Q:
PyQT combobox only react on user interaction
I have a listbox that you can select users in. To the left of that is a combobox listing the available groups the user can be put in. If the user is in a group, the combobox will automatically be set to that group. I want to make it so that when you change the group selection, it will move the user to that group. I added this connection:
QtCore.QObject.connect(self.GroupsBox, QtCore.SIGNAL("currentIndexChanged(QString)"), self.HandleGrouping)
The problem is that since I'll be selecting different users in different groups, every time I select a new user, the default option in the combobox changes and Qt registers this as a 'currentIndexChanged' signal.
There appears to be no way to only fire the signal on direct user-interaction with the widget itself. What methods can I use to work around this?
A:
Catch a signal from the QComboBox (activated(int index)), and update the selected user based on that. In your handler function, don't do anything if the selected index in the combobox is the same as the group the selected user is in.
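A minimal sketch of that wiring, using the same old-style signal syntax as the question (these lines live in your widget class; currentUserGroupIndex and moveUserToGroup are hypothetical helpers):
QtCore.QObject.connect(self.GroupsBox,
                       QtCore.SIGNAL("activated(int)"),
                       self.HandleGrouping)

def HandleGrouping(self, index):
    # activated() fires only on user interaction, never on
    # programmatic changes such as setCurrentIndex()
    if index == self.currentUserGroupIndex():  # hypothetical helper
        return
    self.moveUserToGroup(index)                # hypothetical helper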
Maybe move your combobox to the right of the user listbox, as your order of actions will be Select User --> Select Group.
|
PyQT combobox only react on user interaction
|
I have a listbox that you can select users in. To the left of that is a combobox listing the available groups the user can be put it. If the user is in a group, the combobox will automatically be set to that group. I want to make it so when you change the group selection, it will move the user to that group. I added this connection:
QtCore.QObject.connect(self.GroupsBox, QtCore.SIGNAL("currentIndexChanged(QString)"), self.HandleGrouping)
The problem is that since I'll be selecting different users in different groups, every time I select a new user, the default option in the combobox changes and Qt registers this as a 'currentIndexChanged' signal.
There appears to be no way to only fire the signal on direct user-interaction with the widget itself. What methods can I use to work around this?
|
[
"Catch a signal from the QComboBox (activated(int index)), and update the selected user based on that. In you Handler function, don't do anything if the selected index in the combobox is the same as the group the selected user is in.\nMaybe move your combobox to the right of the user listbox, as your order of actions will be Select User --> Select Group.\n"
] |
[
5
] |
[] |
[] |
[
"python",
"qcombobox",
"qt"
] |
stackoverflow_0000794813_python_qcombobox_qt.txt
|
Q:
How to convert datetime to string in python in django
I have a datetime object in my model.
I am sending it to the view, but in the HTML I don't know what to write in order to format it.
I am trying
{{ item.date.strftime("%Y-%m-%d")|escape }}
but I get
TemplateSyntaxError: Could not parse some characters: item.date.strftime|("%Y-%m-%d")||escape
when I am just using
{{ item.date|escape }}
it's working, but not with the format I want.
Any suggestions?
A:
Try using the built-in Django date format filter instead:
{{ item.date|date:"Y M d" }}
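The date filter's format characters differ from strftime's; to reproduce the %Y-%m-%d layout from the question, the equivalent would be:
{{ item.date|date:"Y-m-d" }}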
|
How to convert datetime to string in python in django
|
I have a datetime object at my model.
I am sending it to the view, but in html i don't know what to write in order to format it.
I am trying
{{ item.date.strftime("%Y-%m-%d")|escape }}
but I get
TemplateSyntaxError: Could not parse some characters: item.date.strftime|("%Y-%m-%d")||escape
when I am just using
{{ item.date|escape }}
it's working, but now with the format I want.
Any suggestions?
|
[
"Try using the built-in Django date format filter instead:\n{{ item.date|date:\"Y M d\" }}\n\n"
] |
[
11
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0000794995_django_python.txt
|
Q:
How to compare value of 2 fields in Django QuerySet?
I have a django model like this:
class Player(models.Model):
    name = models.CharField()
    batting = models.IntegerField()
    bowling = models.IntegerField()
What would be the Django QuerySet equivalent of the following SQL?
SELECT * FROM player WHERE batting > bowling;
A:
In django 1.1 you can do the following:
from django.db.models import F

players = Player.objects.filter(batting__gt=F('bowling'))
See the other question for details
|
How to compare value of 2 fields in Django QuerySet?
|
I have a django model like this:
class Player(models.Model):
name = models.CharField()
batting = models.IntegerField()
bowling = models.IntegerField()
What would be the Django QuerySet equivalent of the following SQL?
SELECT * FROM player WHERE batting > bowling;
|
[
"In django 1.1 you can do the following:\nplayers = Player.objects.filter(batting__gt=F('bowling'))\n\nSee the other question for details\n"
] |
[
20
] |
[] |
[] |
[
"django",
"model",
"python"
] |
stackoverflow_0000795310_django_model_python.txt
|
Q:
How do you open and transfer a file on the filesystem in mod_python?
I'm new to mod_python and Apache, and I'm having trouble returning a file to a user after a GET request. I've got a very simple setup right now, and was hoping to simply open the file and write it to the response:
from mod_python import apache

def handler(req):
    req.content_type = 'application/octet-stream'
    fIn = open('response.bin', 'rb')
    req.write(fIn.read())
    fIn.close()
    return apache.OK
However, I'm getting errors when I use open(), saying that the file doesn't exist (even though I've checked a dozen times that it does). This happens when using relative and absolute filepaths.
I've got two questions:
Why isn't open() finding the right files?
What is the best way to return a file from the filesystem? (I ask to make sure I'm not missing some better way to use mod_python to return a file.)
Thanks
Edit: After finding this thread: http://www.programmingforums.org/thread12384.html I discovered that open() works for me if I move the file to another directory outside of home (I was aliasing out of /home/myname/httpdocs, but it works if I use /data). Any ideas why that works?
Edit 2: Part of my debug error, as requested:
MOD_PYTHON ERROR
ProcessId: 13642
Interpreter: '127.0.1.1'
ServerName: '127.0.1.1'
DocumentRoot: '/var/www'
URI: '/test/mptest.py'
Location: None
Directory: '/home/myname/httpdocs/'
Filename: '/home/myname/httpdocs/mptest.py'
PathInfo: ''
Phase: 'PythonHandler'
Handler: 'mptest'
Traceback (most recent call last):
File "/usr/lib/python2.5/site-packages/mod_python/importer.py", line 1537, in HandlerDispatch
default=default_handler, arg=req, silent=hlist.silent)
File "/usr/lib/python2.5/site-packages/mod_python/importer.py", line 1229, in _process_target
result = _execute_target(config, req, object, arg)
File "/usr/lib/python2.5/site-packages/mod_python/importer.py", line 1128, in _execute_target
result = object(arg)
File "/home/myname/httpdocs/mptest.py", line 13, in handler
fIn = open('/home/myname/httpdocs/files/response.bin', 'rb')
IOError: [Errno 2] No such file or directory: '/home/myname/httpdocs/files/response.bin'
A:
To debug this kind of thing, you need to gather all information from the running mod_python instance.
Stop messing with "checking a dozen times that it [exists]". Some assumption isn't correct.
Do something like this to get some debugging information.
import os

def handler(req):
    req.content_type = 'text/plain'
    req.write(str(os.environ) + '\n')  # os.environ is a dict; req.write needs a string
    req.write(os.getcwd() + '\n')
    # etc.
    return apache.OK
Edit
Now you have a glimpse of the Important Stuff. In this case it might be permissions -- you'll need to use os.stat to be sure. Apache runs mod_python as a user who has almost no usable permissions. Apache does not like links, either, but this shouldn't affect mod_python. If your file doesn't have read-by-everybody and isn't in the right directory you'll have problems.
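A quick way to check both points from a shell (a sketch; the path is the one from the traceback above):
import os, stat, pwd

st = os.stat('/home/myname/httpdocs/files/response.bin')
print oct(stat.S_IMODE(st.st_mode))    # e.g. 0644 -- is it world-readable?
print pwd.getpwuid(st.st_uid).pw_name  # which user owns the file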
You might want to switch to mod_wsgi.
A:
Could you paste the error(s) you get?
It's likely to be a permission error (if you tried using the full path to the file). Remember the script runs as the user running the web-server process - so you will be accessing the file as "www-data", or "nobody" usually.
Check the permissions of the folder /home/myname/httpdocs/files/ also. The folder should be +x for the www-data user:
$ mkdir blah
$ echo works > blah/response.bin
$ chmod 000 blah/
$ cat blah/response.bin
cat: blah/response.bin: Permission denied
$ chmod +x blah/
$ cat blah/response.bin
works
You could eliminate Apache/your-script from the equation by doing the following:
you:~$ sudo su - www-data
www-data:~$ file /home/myname/httpdocs/files/response.bin
(the su may not work, depending on what OS/distribution you are using; for example, OS X prevents you logging in as its www user)
File permissions aside, why is the script dependent on a file in your home folder anyway? Can response.bin be moved to the same folder as your Python script? Or possibly even moved into a database? (perhaps SQLite? Might be unnecessary/excessive, depending on what is in response.bin and how much it changes)
|
How do you open and transfer a file on the filesystem in mod_python?
|
I'm new to mod_python and Apache, and I'm having trouble returning a file to a user after a GET request. I've got a very simple setup right now, and was hoping to simply open the file and write it to the response:
from mod_python import apache
def handler(req):
req.content_type = 'application/octet-stream'
fIn = open('response.bin', 'rb')
req.write(fIn.read())
fIn.close()
return apache.OK
However, I'm getting errors when I use open(), saying that the file doesn't exist (even though I've checked a dozen times that it does). This happens when using relative and absolute filepaths.
I've got two questions:
Why isn't open() finding the right
files?
What is the best way to return a file
from the filesystem? (I ask to make
sure I'm not missing some better way
to use mod_python to return a file.)
Thanks
Edit: After finding this thread: http://www.programmingforums.org/thread12384.html I discovered that open() works for me if I move the file to another directory outside of home (I was aliasing out of /home/myname/httpdocs, but it works if I use /data). Any ideas why that works?
Edit 2: Part of my debug error, as requested:
MOD_PYTHON ERROR
ProcessId: 13642
Interpreter: '127.0.1.1'
ServerName: '127.0.1.1'
DocumentRoot: '/var/www'
URI: '/test/mptest.py'
Location: None
Directory: '/home/myname/httpdocs/'
Filename: '/home/myname/httpdocs/mptest.py'
PathInfo: ''
Phase: 'PythonHandler'
Handler: 'mptest'
Traceback (most recent call last):
File "/usr/lib/python2.5/site-packages/mod_python/importer.py", line 1537, in HandlerDispatch
default=default_handler, arg=req, silent=hlist.silent)
File "/usr/lib/python2.5/site-packages/mod_python/importer.py", line 1229, in _process_target
result = _execute_target(config, req, object, arg)
File "/usr/lib/python2.5/site-packages/mod_python/importer.py", line 1128, in _execute_target
result = object(arg)
File "/home/myname/httpdocs/mptest.py", line 13, in handler
fIn = open('/home/myname/httpdocs/files/response.bin', 'rb')
IOError: [Errno 2] No such file or directory: '/home/myname/httpdocs/files/response.bin'
|
[
"To debug this kind of thing, you need to gather all information from the running mod_python instance.\nStop messing with \"checking a dozen times that it [exists]\". Some assumption isn't correct.\nDo something like this to get some debugging information.\ndef handler(req):\n req.content_type = 'text/plain'\n req.write(os.environ)\n req.write(os.getcwd())\n # etc.\n return apache.OK\n\n\nEdit\nNow you have a glimpse of the Important Stuff. In this case it might be permissions -- you'll need to use os.filestat to be sure. Apache runs mod_python as a user who has almost no usable permissions. Apache does not like links, either, but this shouldn't affect mod_python. If your file doesn't have read-by-everybody and isn't in the right directory you'll have problems.\nYou might want to switch to mod_wsgi.\n",
"Could you paste the error(s) you get?\nIt's likely to be a permission error (if you tried using the full path to the file). Remember the script runs as the user running the web-server process - so you will be accessing the file as \"www-data\", or \"nobody\" usually.\nCheck the permissions of the folder /home/myname/httpdocs/files/ also. The folder should be +x for the www-data user:\n$ mkdir blah\n$ echo works > blah/response.bin\n$ chmod 000 blah/\n$ cat blah/response.bin\ncat: blah/response.bin: Permission denied\n$ chmod +x blah/\n$ cat blah/response.bin\nworks\n\nYou could eliminate Apache/your-script from the equation by doing the following:\nyou:~$ sudo su - www-data\nwww-data:~$ file /home/myname/httpdocs/files/response.bin\n\n(the su may not work, depending on what OS/distribution you are using, for example OS X prevents you logging in as it's www user)\nFile permissions aside, why is the script dependant on a file in your home folder anyway? Can response.bin be moved to the same folder as your Python script? Or possibly even moved into a database? (perhaps SQLite? Might be unnecessary/excessive, depending on what is in response.bin and how much it changes)\n"
] |
[
4,
0
] |
[] |
[] |
[
"file_io",
"mod_python",
"python"
] |
stackoverflow_0000795837_file_io_mod_python_python.txt
|
Q:
Wrapping a script with subprocess.Popen()
I have a script that's provided with another software package - which I would not like to modify in any way. I need to execute this script, provide a password, and then interact with it from the terminal (using raw_input, etc.).
A:
pexpect is what you want to use.
Pexpect is a Python module for
spawning child applications and
controlling them automatically.
Pexpect can be used for automating
interactive applications such as ssh,
ftp, passwd, telnet, etc. It can be
used to automate setup scripts for
duplicating software package
installations on different servers. It
can be used for automated software
testing. It should work on any
platform that supports the standard
Python pty module. The Pexpect
interface focuses on ease of use so
that simple tasks are easy.
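For the password-then-interact flow in the question, a minimal pexpect sketch might look like this (the script path and the 'Password:' prompt text are assumptions):
import pexpect

child = pexpect.spawn('/path/to/vendor_script')  # hypothetical script path
child.expect('Password:')                        # assumed prompt text
child.sendline('secret')
child.interact()  # hand the session back to the user's terminal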
|
Wrapping a script with subprocess.Popen()
|
I have a script that's provided with another software package - which I would not like to modify in any way. I need to execute this script, provide a password, and then interact with it from the terminal (using raw_input, etc.).
|
[
"pexpect is what you want to use.\n\nPexpect is a Python module for\n spawning child applications and\n controlling them automatically.\n Pexpect can be used for automating\n interactive applications such as ssh,\n ftp, passwd, telnet, etc. It can be\n used to a automate setup scripts for\n duplicating software package\n installations on different servers. It\n can be used for automated software\n testing. It should work on any\n platform that supports the standard\n Python pty module. The Pexpect\n interface focuses on ease of use so\n that simple tasks are easy.\n\n"
] |
[
2
] |
[] |
[] |
[
"python",
"scripting"
] |
stackoverflow_0000795977_python_scripting.txt
|
Q:
IronPython - Convert int to byte array
What is the correct way to get the length of a string in Python, and then convert that int to a byte array? What is the right way to print that to the console for testing?
A:
Use struct.
import struct
print struct.pack('L', len("some string")) # int to a (long) byte array
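For instance ('L' uses the platform's native long size, so the exact bytes vary by machine):
import struct

data = struct.pack('L', len("some string"))
print repr(data)                   # '\x0b\x00\x00\x00' on 32-bit little-endian
print struct.unpack('L', data)[0]  # 11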
A:
using .Net:
byte[] buffer = System.BitConverter.GetBytes(string.Length)
print System.BitConverter.ToString(buffer)
That will output the bytes as hex. You may have to clean up the syntax for IronPython.
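A hedged IronPython rendering of the same idea (the sample string is just an example):
from System import BitConverter

s = "some string"
buffer = BitConverter.GetBytes(len(s))  # Int32 -> Byte[]
print BitConverter.ToString(buffer)     # "0B-00-00-00" for length 11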
|
IronPython - Convert int to byte array
|
What is the correct way to get the length of a string in Python, and then convert that int to a byte array? What is the right way to print that to the console for testing?
|
[
"Use struct.\nimport struct\n\nprint struct.pack('L', len(\"some string\")) # int to a (long) byte array\n\n",
"using .Net: \nbyte[] buffer = System.BitConverter.GetBytes(string.Length)\nprint System.BitConverter.ToString(buffer)\n\nThat will output the bytes as hex. You may have to clean up the syntax for IronPython.\n"
] |
[
4,
1
] |
[] |
[] |
[
".net",
"bytearray",
"ironpython",
"python"
] |
stackoverflow_0000796197_.net_bytearray_ironpython_python.txt
|
Q:
Converting to Precomposed Unicode String using Python-AppKit-ObjectiveC
This document by Apple Technical Q&A QA1235 describes a way to convert unicode strings from a composed to a decomposed version. Since I have a problem with file names containing some characters (e.g. an accent grave), I'd like to try the conversion function
void CFStringNormalize(CFMutableStringRef theString,
CFStringNormalizationForm theForm);
I am using this with Python and the AppKit library. If I pass a Python string as an argument, I get:
CoreFoundation.CFStringNormalize("abc",0)
2009-04-27 21:00:54.314 Python[4519:613] * -[OC_PythonString _cfNormalize:]: unrecognized selector sent to instance 0x1f02510
Traceback (most recent call last):
File "", line 1, in
ValueError: NSInvalidArgumentException - * -[OC_PythonString _cfNormalize:]: unrecognized selector sent to instance 0x1f02510
I suppose this is because a CFMutableStringRef is needed as an argument. How do I convert a Python string to a CFMutableStringRef?
A:
OC_PythonString (which is what Python strings are bridged to) is an NSString subclass, so you could get an NSMutableString with:
mutableString = NSMutableString.alloc().initWithString_("abc")
then use mutableString as the argument to CFStringNormalize.
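Putting it together might look like this (a sketch; 0 is the form constant used in the question, i.e. kCFStringNormalizationFormD, and the imports assume a standard PyObjC install):
from Foundation import NSMutableString
from CoreFoundation import CFStringNormalize

s = NSMutableString.alloc().initWithString_(u"abc")
CFStringNormalize(s, 0)  # normalizes s in place
print s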
|
Converting to Precomposed Unicode String using Python-AppKit-ObjectiveC
|
This document by Apple Technical Q&A QA1235 describes a way to convert unicode strings from a composed to a decomposed version. Since I have a problem with file names containing some characters (e.g. an accent grave), I'd like to try the conversion function
void CFStringNormalize(CFMutableStringRef theString,
CFStringNormalizationForm theForm);
I am using this with Python and the AppKit library. If i pass a Python String as an argument, I get:
CoreFoundation.CFStringNormalize("abc",0)
2009-04-27 21:00:54.314 Python[4519:613] * -[OC_PythonString _cfNormalize:]: unrecognized selector sent to instance 0x1f02510
Traceback (most recent call last):
File "", line 1, in
ValueError: NSInvalidArgumentException - * -[OC_PythonString _cfNormalize:]: unrecognized selector sent to instance 0x1f02510
I suppose this is because a CFMutableStringRef is needed as an argument. How do I convert a Python String to CFMutableStringRef?
|
[
"OC_PythonString (which is what Python strings are bridged to) is an NSString subclass, so you could get an NSMutableString with:\nmutableString = NSMutableString.alloc().initWithString_(\"abc\")\n\nthen use mutableString as the argument to CFStringNormalize.\n"
] |
[
2
] |
[] |
[] |
[
"objective_c",
"python"
] |
stackoverflow_0000794836_objective_c_python.txt
|
Q:
What is a good tutorial on the QuickTime API for MS Windows?
I'm working on a project that has to read and manipulate QuickTimes on Windows. Unfortunately, all the tutorials and sample code at the Apple site seem to be pretty much Mac-specific. Is there a good resource on the web that deals specifically with programming QuickTime for Windows? Yes, I know that I can bludgeon my way (eventually) through the Mac stuff and eventually get something to work, but I would really like to see a treatment of the cleanest and best way to deal with it on Windows and what gotchas to beware of.
For extra points, it would be cool to see how someone might use the QuickTime API from a dynamic language like REBOL or Python (no, the Mac Python QuickTime bindings don't count!).
Thanks!
A:
"QuickTime For Windows" starts off with the differences between Mac OS and Windows programming, and "Building QuickTime Capability Into a Windows Application" then discusses how to incorporate the capability into the Windows platform.
A:
There is an official mailing list for QT developers. It has an archive. It would certainly be worth subscribing to it if you are seriously trying to use QT for something, especially if it is the slightest bit off the beaten path.
IMHO, the official docs are more than a little too Apple-centric. Note that the Windows book assumes you already have experience with QT on Macs. At the time I was looking (about a year ago), I had a mandate to deal with QT from .NET, either from C# or managed C++. That was not a well documented way of doing things then.
There is a body of sample code for Windows somewhere at the Apple developer site, which might help if you can find it. I seem to have lost the links I had at one time. Just knowing it does (or did a year ago) exist might be enough to nudge you in the right direction.
Almost all of the sample code available is ordinary C or C++.
A:
I have started a Google code project with my QuickTime for Windows code at code.google.com/p/qtip. The idea is to structure things in a semi-tutorial fashion (as I learn this stuff myself!) so that others can learn from my pain...
|
What is a good tutorial on the QuickTime API for MS Windows?
|
I'm working on a project that has to read and manipulate QuickTimes on Windows. Unfortunately, all the tutorials and sample code at the Apple site seem to be pretty much Mac specific. Is there a good resource on the web that deals specifically with programming QuickTime for Windows? Yes, I know that I can bludgeon my way (eventually) through the Mac stuff and eventually get something to work, but I would really like to see a treatment of the cleanest and best way to deal with it on Windows and what gotcha's to beware.
For extra points, it would be cool to see how someone might use the QuickTime API from a dynamic language like REBOL or Python (no, the Mac Python QuickTime bindings don't count!).
Thanks!
|
[
"QuickTime For Windows starts off with the differences between Mac OS and Windows programming and Building QuickTime Capability Into a Windows Application then discusses how to incorporate the capability into Windows platform\n",
"There is an official mailing list for QT developers. It has an archive. It would certainly be worth subscribing to it if you are seriously trying to use QT for something, especially if it is the slightest bit off the beaten path. \nIMHO, the official docs are more than a little too Apple-centric. Note that the Windows book assumes you already have experience with QT on Macs. At the time I was looking (about a year ago), I had a mandate to deal with QT from .NET, either from C# or managed C++. That was not a well documented way of doing things then. \nThere is a body of sample code for Windows somewhere at the Apple developer site, which might help if you can find it. I seem to have lost the links I had at one time. Just knowing it does (or did a year ago) exist might be enough to nudge you in the right direction.\nAlmost all of the sample code available is ordinary C or C++.\n",
"I have started a Google code project with my QuickTime for Windows code at code.google.com/p/qtip. The idea is to structure things in a semi-tutorial fashion (as I learn this stuff myself!) so that others can learn from my pain...\n"
] |
[
3,
1,
0
] |
[] |
[] |
[
"python",
"quicktime",
"rebol",
"windows"
] |
stackoverflow_0000722315_python_quicktime_rebol_windows.txt
|
Q:
string formatting
I am not getting why the colon shifted to the left the second time
>>> print '%5s' %':'
    :
>>> print '%5s' %':' '%2s' %':'
 : :
Help me out of this please
A:
In Python, juxtaposed strings are concatenated:
>>> t = 'a' 'bcd'
>>> t
'abcd'
So in your second example, it is equivalent to:
>>> print '%5s' % ':%2s' % ':'
which, because Python's % operator associates left to right, is:
>>> print ('%5s' % ':%2s') % ':'
or
>>> print ' :%2s' % ':'
 : :
A:
What are you trying to do?
>>> print '%5s' % ':'
    :
>>> print '%5s%2s' % (':', ':')
    : :
You could achieve what you want by mixing them both into a single string formatting expression.
|
string formatting
|
I am not getting why the colon shifted left in the second time
>>> print '%5s' %':'
:
>>> print '%5s' %':' '%2s' %':'
: :
Help me out of this please
|
[
"In Python, juxtaposed strings are concatenated:\n>>> t = 'a' 'bcd'\n>>> t\n'abcd'\n\nSo in your second example, it is equivalent to:\n>>> print '%5s' % ':%2s' % ':'\n\nwhich by the precedence rules for Python's % operator, is:\n>>> print ('%5s' % ':%2s') % ':'\n\nor\n>>> print ' :%2s' % ':'\n : :\n\n",
"What are you trying to do?\n>>> print '%5s' % ':'\n :\n>>> print '%5s%2s' % (':', ':')\n : :\n\nYou could achieve what you want by mixing them both into a single string formatting expression.\n"
] |
[
9,
2
] |
[] |
[] |
[
"format",
"python",
"string"
] |
stackoverflow_0000797132_format_python_string.txt
|
Q:
import statement fails for one module
OK, I found the problem: it was an environmental issue. I had the same modules (minus options.py) on sys.path and it was importing from there instead. Thanks everyone for your help.
I have a series of import statements, the last of which will not work. Any idea why? options.py is sitting in the same directory as everything else.
from snipplets.main import MainHandler
from snipplets.createnew import CreateNewHandler
from snipplets.db import DbSnipplet
from snipplets.highlight import HighLighter
from snipplets.options import Options
ImportError: No module named options
my __init__.py file in the snipplets directory is blank.
A:
I suspect that one of your other imports redefined snipplets with an assignment statement. Or one of your other modules changed sys.path.
Edit
"so the flow goes like this: add snipplets packages to path import..."
No.
Do not modify sys.path -- that way lies problems. Modifying sys.path leads to ambiguity about what is -- or is not -- on the path, and what order things are in.
The simplest, most reliable, most obvious, most controllable things to do are the following. Pick exactly one.
Define PYTHONPATH (once, external to your program). A single, simple environment variable that is nearly identical to installation on site-packages.
Install your package in site-packages.
A:
Your master branch doesn't have options.py. Could it be that your dev and master branches are conflicting?
If this is your actual code then you have an option variable at line 21.
A:
Does the following work?
import snipplets.options
print snipplets.options.Options
If so, one of your other snipplets files probably sets a global variable named options.
A:
Are you on Windows? You might want to try defining an __all__ list in your __init__.py file as noted here. It shouldn't make a difference unless you're importing *, but I've seen modules not import unless they were defined there.
Secondly, you might try setting up a virtualenv. Using a lot of site-wide python packages can lead to these kinds of things.
Lastly, make sure the permissions of options are set correctly. I've spent hours trying to figure these things out only to find out it was an issue of me not having permission to import it.
|
import statement fails for one module
|
Ok I found the problem, it was an environmental issue, I had the same modules (minus options.py) on the sys.path and it was importing from there instead. Thanks everyone for your help.
I have a series of import statements, the last of which will not work. Any idea why? options.py is sitting in the same directory as everything else.
from snipplets.main import MainHandler
from snipplets.createnew import CreateNewHandler
from snipplets.db import DbSnipplet
from snipplets.highlight import HighLighter
from snipplets.options import Options
ImportError: No module named options
my __init__.py file in the snipplets directory is blank.
|
[
"I suspect that one of your other imports redefined snipplets with an assignment statement. Or one of your other modules changed sys.path.\n\nEdit\n\"so the flow goes like this: add snipplets packages to path import...\" \nNo.\nDo not modify sys.path -- that way lies problems. Modifying site.path leads to ambiguity about what is -- or is not -- on the path, and what order they are in. \nThe simplest, most reliable, most obvious, most controllable things to do are the following. Pick exactly one.\n\nDefine PYTHONPATH (once, external to your program). A single, simple environment variable that is nearly identical to installation on site-packages.\nInstall your package in site-packages.\n\n",
"your master branch doesn't have options.py. could it be that you dev and master branches are conflicting?\nif this is your actual code then you have option variable at line 21.\n",
"Does the following work?\nimport snipplets.options.Options\n\nIf so, one of your other snipplets files probably sets a global variable named options.\n",
"Are you on windows? You might want to try defining an __all__ list in your __init__.py file like noted here. It shouldn't make a difference unless you're importing *, but I've seen modules not import unless they were defined there.\nSecondly, you might try setting up a virtualenv. Using a lot of site-wide python packages can lead to these kinds of things.\nLastly, make sure the permissions of options are set correctly. I've spent hours trying to figure these things out only to find out it was an issue of me not having permission to import it.\n"
] |
[
2,
2,
1,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000797241_python.txt
|
Q:
How to make two python programs interact?
I have an HTTP server in one program and my basic application in another one. Both of them are loops, so I have no idea how to:
Write a script that would start the app and then the HTTP server;
Make these programs exchange data in operation.
How are these things usually done? I would really appreciate Python solutions because my scripts are written in Python.
Does a user make an http request which queries the app for some data and return a result? Yes
Does the app collect data and store it somewhere? The app and the HTTP server both use a SQLite database; however, the DBs may be different.
A:
a) You can start applications using os.system:
os.system("command")
or you can use the subprocess module. More information here.
b) use sockets
A:
Well, you can probably just use the subprocess module. For the exchanging data, you may just be able to use the Popen.stdin and Popen.stdout streams. Of course, there's no limit to ways you /could/ do it. CORBA, DBUS, shared memory, DCOP, the list goes on. But try the simple way first, which in this case is regular python pipes/streams.
A:
Before answering, I think we need some more information:
Is there a definable pipeline of information here?
Does a user make an http request which queries the app for some data and return a result?
Does the app collect data and store it somewhere?
There are a few options depending on how you're actually using them. Sockets is an option or passing information via a file or a database.
[Edit] Based on your reply I think there's a few ways you can do it:
If you can access the app's database from the web server you could easily pull the information you're after from there. Again it depends what information it is that you want to exchange.
If your app just needs to give the http server some results, you could write them into a results table in the http server's db.
Use pipe's or sub processes as other people have suggested to exchange data with the background app directly.
Use a log file which your app can write to and your http server read from.
Some more questions:
Do you need two-way communication here or is the http server just displaying results?
What webserver are you using?
What processing languages do you have available on it?
Depending on how reliant the two parts can be, it might be best to write a new app to check the database of your app for changes (using hooks or polling or whatever) and post relevant information into the http server's own database. This has the advantage of leaving the two parts less closely coupled, which is often a good thing.
I've got a webserver (Apache 2) which talks to a Django app using the fastcgi module. Have a look at the section in djangobook on fastcgi. Apache uses sockets (or regular tcp) to talk to the background app (Django).
[Edit 2] Oops - just spotted that your webserver is a python process itself. If it's all python then you could launch each in its own thread and pass them both Queue objects which allow the two processes to send each other information in either a blocking or non-blocking manner.
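A minimal sketch of that Queue idea (Python 2 stdlib; process() stands in for whatever work your app actually does):
import threading
from Queue import Queue

to_app = Queue()
to_server = Queue()

def app_loop(inbox, outbox):
    while True:
        msg = inbox.get()         # blocks until the server sends something
        outbox.put(process(msg))  # process() is a hypothetical handler

t = threading.Thread(target=app_loop, args=(to_app, to_server))
t.setDaemon(True)
t.start()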
A:
Depending on what you want to do you can use os.mkfifo to create a named pipe to share data between your two programs.
http://mail.python.org/pipermail/python-list/2006-August/568346.html
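A minimal sketch of the named-pipe idea (the path and message are examples):
import os

path = '/tmp/app_to_server'  # example pipe location
if not os.path.exists(path):
    os.mkfifo(path)

fifo = open(path, 'w')       # blocks until a reader opens the other end
fifo.write('new job: 42\n')
fifo.close()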
A:
Maybe Twisted is what you're looking for.
A:
When I write web applications in Python, I always keep my web server in the same process as my background tasks. I don't know what web server you're using, but I personally use CherryPy. Your application can have a bunch of its threads be the web server, with however many other threads you like as background tasks. This way you don't need any kind of complex IPC with sockets, named pipes, etc. Instead you simply access shared, global, synchronized data structures to pass along information, and your different modules can directly call each other's functions.
EDIT: To clarify, you can use the threading module to run your CherryPy server in different threads than your other blocking servers. For example:
import cherrypy
from threading import Thread

def listener():
    sock = get_socket_from_somewhere()  # placeholder from the original answer
    while True:
        client, addr = sock.accept()
        # send data back to client, etc

t1 = Thread(target=listener)
t1.setDaemon(True)
t1.start()

cherrypy.quickstart()  # you'd need actual arguments here
This example shows how to have a blocking server in one thread in the same process as a web server (in this case CherryPy, though it could be anything).
|
How to make two python programs interact?
|
I have a HTTP sever in one program and my basic application in another one. Both of them are loops, so I have no idea how to:
Write a script that would start the app and then the HTTP server;
Make these programs exchange data in operation.
How are these things usually done? I would really appriciate Python solutions because my scripts are written in Python.
Does a user make an http request which queries the app for some data and return a result? Yes
Does the app collect data and store it somewhere? The app and the HTTP Server both use SQLite database. However the DBs may be different.
|
[
"a) You can start applications using os.system:\n\nos.system(\"command\")\n\nor you can use the subprocess module. More information here.\nb) use sockets\n",
"Well, you can probably just use the subprocess module. For the exchanging data, you may just be able to use the Popen.stdin and Popen.stdout streams. Of course, there's no limit to ways you /could/ do it. CORBA, DBUS, shared memory, DCOP, the list goes on. But try the simple way first, which in this case is regular python pipes/streams.\n",
"Before answering, I think we need some more information:\n\nIs there a definable pipeline of information here?\n\n\nDoes a user make an http request which queries the app for some data and return a result?\nDoes the app collect data and store it somewhere?\n\n\nThere are a few options depending on how you're actually using them. Sockets is an option or passing information via a file or a database.\n[Edit] Based on your reply I think there's a few ways you can do it:\n\nIf you can access the app's database from the web server you could easily pull the information you're after from there. Again it depends what information it is that you want to exchange.\nIf your app just needs to give the http server some results, you could write them into a results table in the http server's db.\nUse pipe's or sub processes as other people have suggested to exchange data with the background app directly.\nUse a log file which your app can write to and your http server read from.\n\nSome more questions:\n\nDo you need two-way communication here or is the http server just displaying results?\nWhat webserver are you using?\nWhat processing languages do you have available on it?\n\nDepending on how reliant the two parts can be, it might be best to write a new app to check the database of your app for changes (using hooks or polling or whatever) and post relevent information into the http server's own database. This has the advantage of leaving the two parts less closely coupled which is often a good thing.\nI've got a webserver (Apache 2) which talks to a Django app using the fastcgi module. Have a look at the section in djangobook on fastcgi. Apache uses sockets (or regular tcp) to talk to the background app (Django).\n[Edit 2] Oops - just spotted that your webserver is a python process itself. If it's all python then you could launch each in it's own thread and pass them both Queue objects which allow the two processes to send each other information in either a blocking or non-blocking manner.\n",
"Depending on what you want to do you can use os.mkfifo to create a named pipe to share data between your two programs.\nhttp://mail.python.org/pipermail/python-list/2006-August/568346.html\n",
"maybe twisted is what you're looking for\n",
"When I write web applications in Python, I always keep my web server in the same process as my background tasks. I don't know what web server you're using, but I personally use CherryPy. Your application can have a bunch of its threads be the web server, with however many other threads you like as background tasks. This way you don't need any kind of complex IPC with sockets, named pipes, etc. Instead you simply access shared, global, synchronized data structures to pass along information, and your different modules can directly call each others functions.\nEDIT: To clarify, you can use the threading module to run your CherryPy server in different threads than your other blocking servers. For example:\ndef listener():\n sock = get_socket_from_somewhere()\n while True:\n client, addr = sock.accept()\n # send data back to client, etc\n\nfrom threading import Thread\nt1 = Thread(target=listener)\nt1.setDaemon(True)\nt1.start()\n\ncherrypy.quickstart() # you'd need actual arguments here\n\nThis example shows how to have a blocking server in one thread in the same process as a web server (in this case CherryPy, though it could be anything).\n"
] |
[
3,
3,
2,
1,
1,
0
] |
[] |
[] |
[
"interaction",
"ipc",
"multithreading",
"process",
"python"
] |
stackoverflow_0000797785_interaction_ipc_multithreading_process_python.txt
|
Q:
Running numpy from cygwin
I am running a Windows machine and have installed Python 2.5. I also used the Windows installer to install NumPy.
This all works great when I run the Python (command line) tool that comes with Python.
However, if I run cygwin and then run Python from within, it cannot find the numpy package.
What environment variable do I need to set? What value should it be set to?
A:
Cygwin comes with its own version of Python, so it's likely that you have two Python installs on your system; one that installed under Windows and one which came with Cygwin.
To test this, try opening a bash prompt in Cygwin and typing which python to see where the Python executable is located. If it says /cygdrive/c/Python25/python.exe or something similar then you'll know you're running the Windows executable. If you see /usr/local/bin/python or something like that, then you'll know that you're running the Cygwin version.
I recommend opening a DOS prompt and running Python from there when you need interactive usage. This will keep your two Python installs nicely separate (it can be very useful to have both; I do this on my own machine). Also, you may have some problems running a program designed for Windows interactive console use from within a Cygwin shell.
A:
You're running a separate copy of python provided by cygwin.
You can run /cygdrive/c/python25/python (or wherever you installed it)
to get your win32 one, or just install another copy of numpy.
A:
Ensure that PYTHONPATH has NumPy. Refer The Module Search Path (section 6.1.2) and Modifying Python's Search Path (section 4.1).
A:
numpy built for windows is not compatible with cygwin python. You have to build it by yourself on cygwin.
|
Running numpy from cygwin
|
I am running a windows machine have installed Python 2.5. I also used the windows installer to install NumPy.
This all works great when I run the Python (command line) tool that comes with Python.
However, if I run cygwin and then run Python from within, it cannot find the numpy package.
What environment variable do I need to set? What value should it be set to?
|
[
"Cygwin comes with its own version of Python, so it's likely that you have two Python installs on your system; one that installed under Windows and one which came with Cygwin.\nTo test this, try opening a bash prompt in Cygwin and typing which python to see where the Python executable is located. If it says /cygdrive/c/Python25/python.exe or something similar then you'll know you're running the Windows executable. If you see /usr/local/bin/python or something like that, then you'll know that you're running the Cygwin version.\nI recommend opening a DOS prompt and running Python from there when you need interactive usage. This will keep your two Python installs nicely separate (it can be very useful to have both; I do this on my own machine). Also, you may have some problems running a program designed for Windows interactive console use from within a Cygwin shell.\n",
"You're running a separate copy of python provided by cygwin.\nYou can run /cygdrive/c/python25/python (or wherever you installed it)\nto get your win32 one, or just install another copy of numpy.\n",
"Ensure that PYTHONPATH has NumPy. Refer The Module Search Path (section 6.1.2) and Modifying Python's Search Path (section 4.1).\n",
"numpy built for windows is not compatible with cygwin python. You have to build it by yourself on cygwin.\n"
] |
[
4,
1,
0,
0
] |
[] |
[] |
[
"numpy",
"python"
] |
stackoverflow_0000318390_numpy_python.txt
|
Q:
Python exceptions: call same function for any Exception
Notice in the code below that foobar() is called if any Exception is thrown. Is there a way to do this without using the same line in every Exception?
try:
    foo()
except(ErrorTypeA):
    bar()
    foobar()
except(ErrorTypeB):
    baz()
    foobar()
except(SwineFlu):
    print 'You have caught Swine Flu!'
    foobar()
except:
    foobar()
A:
success = False
try:
    foo()
    success = True
except(A):
    bar()
except(B):
    baz()
except(C):
    bay()
finally:
    if not success:
        foobar()
A:
You can use a dictionary to map exceptions against functions to call:
exception_map = { ErrorTypeA : bar, ErrorTypeB : baz }
try:
    try:
        something()
    except tuple(exception_map), e: # this catches only the exceptions in the map
        exception_map[type(e)]() # calls the related function
        raise # raise the Exception again and the outer except catches it
except Exception, e: # every Exception ends here
    foobar()
|
Python exceptions: call same function for any Exception
|
Notice in the code below that foobar() is called if any Exception is thrown. Is there a way to do this without using the same line in every Exception?
try:
foo()
except(ErrorTypeA):
bar()
foobar()
except(ErrorTypeB):
baz()
foobar()
except(SwineFlu):
print 'You have caught Swine Flu!'
foobar()
except:
foobar()
|
[
"success = False\ntry:\n foo()\n success = True\nexcept(A):\n bar()\nexcept(B):\n baz()\nexcept(C):\n bay()\nfinally:\n if not success:\n foobar()\n\n",
"You can use a dictionary to map exceptions against functions to call:\nexception_map = { ErrorTypeA : bar, ErrorTypeB : baz }\ntry:\n try:\n somthing()\n except tuple(exception_map), e: # this catches only the exceptions in the map\n exception_map[type(e)]() # calls the related function\n raise # raise the Excetion again and the next line catches it\nexcept Exception, e: # every Exception ends here\n foobar() \n\n"
] |
[
18,
12
] |
[] |
[] |
[
"exception",
"python"
] |
stackoverflow_0000799293_exception_python.txt
|
Q:
Returning an object vs returning a tuple
I am developing in Python a file class that can read and write a file containing a list of xyz coordinates. In my program, I already have a Coord3D class to hold xyz coordinates.
My question is relative to the design of a getCoordinate(index) method. Should I return a tuple of floats, or a Coord3D object?
In the first case, I get very low coupling, but then I will probably have to instantiate a Coord3D object with the obtained values anyway, although outside of the file class. In the second case, I will have the file class tightly coupled with the Coord3D class.
Please note that I think there's not a huge difference between the two solutions, but I would like to read your answer and the reason behind it.
Edit: to recap the answers I got until now, it looks like there's no clear-cut choice. It has been said (appropriately) that Python is not Java, and you don't need a specialized class for everything just because the language architecture would otherwise demand it. In my case, however, I have the following conditions:
I am working on a library, where the Coord3D object is used as is. Using it would increase the cohesiveness of my library, as the data types will be uniformly used.
The Coord3D object has state and behavior. Indeed, the Coord3D object aggregate the coordinates and the units in a single entity. Operations among Coord3D objects will keep into account the potentially different units, and act accordingly.
I can centralize control code in the Coord3D class instantiation to refuse, for example, arrays of length 4, or coordinates without units. If I use a tuple, I cannot perform this check. Moreover, if a method accepts a Coord3D, it is sort of guaranteed to be well formed upfront (you could be bold and check with isinstance, or check the interface). A tuple can contain invalid data. Although the Python approach to error handling is to deal with trouble where it happens, a class preventing me from having an xyz coordinate made out of three strings is somehow beneficial (correct me if I'm wrong, please).
On the other hand, using a tuple has the following advantages:
Less occupation of resources, quite critical in the case of huge files.
Simpler design. More classes means more complex design. A tuple is a standard data type which is well understood and can be unpacked easily. A personalized class is not.
Using a tuple, the XYZFile class is totally decoupled from the rest of the library (because it does not use the Coord3D object). This means that it can be reused totally as an independent entity.
further comments very welcome!
A:
Compromise solution: Instead of a class, make Coord3D a namedtuple and return that :-)
Usage:
from collections import namedtuple

Coord3D = namedtuple('Coord3D', 'x y z')
def getCoordinate(index):
# do stuff, creating variables x, y, z
return Coord3D(x, y, z)
The return value can be used exactly as a tuple, and has the same speed and memory properties, so you don't lose any genericity. But you can also access its values by name: if c is the result of getCoordinate(index), then you can work with c.x, c.y, etc, for increased readability.
(obviously this is a bit less useful if your Coord3D class needs other functionality too)
[if you're not on python2.6, you can get namedtuples from the cookbook recipe]
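For example, assuming the Coord3D namedtuple above, the result behaves both ways:
c = Coord3D(1.0, 2.0, 3.0)
print c.x, c.y, c.z   # named access: 1.0 2.0 3.0
x, y, z = c           # plain tuple unpacking still works
print c[0] == c.x     # and so does indexing: True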
A:
If other people (aside from yourself) will be using this class, it seems to me that returning an object would encourage some kind of uniformity in data types. If the Coord3D class has a method or property to access these coordinates as a tuple, then that still gives them that option, should they need it:
# get the object
coord_obj = my_obj.getCoordinate(my_index)
# get the tuple (for example, via a property named "coords")
coord_tup = my_obj.getCoordinate(my_index).coords
A:
The more fundamental question is "why do you have Coord3D class?" Why not just use a tuple?
The general advice most of us give to Python n00bz is "don't invent new classes until you have to."
Does your Coord3D have unique methods? Perhaps you need a new class. Or -- perhaps -- you only need some functions that operate on tuples.
Does your Coord3D have changeable state? Hardly likely. An immutable tuple starts to look like a better representation than a new class.
A:
Take a look at Will McGugan's Gameobjects library. He has a Vector3 class that can be initialized with another Vector3 object, a tuple, individual float values, etc. I think this will answer your question ... plus you may end up just using his library as it's already optimized and has plenty of useful methods already.
A:
If it's only going to be used in your application, and if you're going to create a Coord3D instance with the values anyway, I'd just return a Coord3D instance to save you the effort. If, however, you have any interest in making this portable/general, return a tuple. It'll be easy to create a Coord3D anyway, using
c3d = Coord3D(*getCoordinate(index))
(assuming your constructor is Coord3D.__init__(self, x, y, z))
A:
I've asked myself the same question, albeit while doing 2D geometry stuff.
The answer I found for myself was that if I was planning to write a larger library, with more functions and whatnot, go ahead and return the Point, or in your case the Coord3D object. If it's just a hacky implementation, the tuple will get you going faster. In the end, it's just what you're going to do with it, and is it worth the effort.
A:
Returning an object would be the best practice and would give you a better overall software design. I would recommend doing that.
But still, keep in mind that creating/returning an object will take more processing time. It could make a difference if you do this operation a LOT, and in that case you might need to think about it...
|
Returning an object vs returning a tuple
|
I am developing in python a file class that can read and write a file, containing a list of xyz coordinates. In my program, I already have a Coord3D class to hold xyz coordinates.
My question is relative to the design of a getCoordinate(index) method. Should I return a tuple of floats, or a Coord3D object?
In the first case, I get very low coupling, but then I will probably have to instantiate a Coord3D object with the obtained values anyway, although outside of the file class. In the second case, I will have the file class tightly coupled with the Coord3D class.
Please note that I think there's not a huge difference between the two solutions, but I would like to read your answer and the reason behind it.
Edit: to recap the answers I got until now, it looks like there's no clearcut choice. It has been said (appropriately) that python is not Java, and you don't need a specialized class for everything just because you need it by language architecture. In my case, however, I have the following conditions:
I am working on a library, where the Coord3D object is used as is. Using it would increase the cohesiveness of my library, as the data types will be uniformly used.
The Coord3D object has state and behavior. Indeed, the Coord3D object aggregate the coordinates and the units in a single entity. Operations among Coord3D objects will keep into account the potentially different units, and act accordingly.
I can centralize validation code in the Coord3D class's instantiation to refuse, for example, arrays of length 4, or values without units. If I use a tuple, I cannot perform this check. Moreover, if a method accepts a Coord3D, it is more or less guaranteed to be well formed upfront (you could be bold and check with isinstance, or check the interface). A tuple can contain invalid data. Although the Python approach is to handle errors where the trouble happens, a class preventing me from building an xyz coordinate out of three strings is somehow beneficial (correct me if wrong, please)
On the other hand, using a tuple has the following advantages:
Less occupation of resources, quite critical in the case of huge files.
Simpler design. More classes means more complex design. A tuple is a standard data type which is well understood and can be unpacked easily. A personalized class is not.
Using a tuple, the XYZFile class is totally decoupled from the rest of the library (because it does not use the Coord3D object). This means that it can be reused totally as an independent entity.
further comments very welcome!
|
[
"Compromise solution: Instead of a class, make Coord3D a namedtuple and return that :-)\nUsage:\nCoord3D = namedtuple('Coord3D', 'x y z')\n\ndef getCoordinate(index):\n # do stuff, creating variables x, y, z\n return Coord3D(x, y, z)\n\nThe return value can be used exactly as a tuple, and has the same speed and memory properties, so you don't lose any genericity. But you can also access its values by name: if c is the result of getCoordinate(index), then you can work with c.x, c.y, etc, for increased readibility.\n(obviously this is a bit less useful if your Coord3D class needs other functionality too)\n[if you're not on python2.6, you can get namedtuples from the cookbook recipe]\n",
"If other people (aside form yourself) will be using this class, it seems to me that returning an object would encourage some kind of uniformity in data types. If the Coord3D class has a method or property to access these coordinates as a tuple, then that still gives them that option, should they need it:\n# get the object\ncoord_obj = my_obj.getCoordinate(my_index)\n# get the tuple (for example, via a property named \"coords\")\ncoord_tup = my_obj.getCoordinate(my_index).coords\n\n",
"The more fundamental question is \"why do you have Coord3D class?\" Why not just use a tuple?\nThe general advice most of us give to Python n00bz is \"don't invent new classes until you have to.\"\nDoes your Coord3D have unique methods? Perhaps you need a new class. Or -- perhaps -- you only need some functions that operate on tuples.\nDoes your Coord3D have changeable state? Hardly likely. An immutable tuple starts to look like a better representation than a new class.\n",
"Take a look at Will McGugan's Gameobjects library. He has a Vector3 class that can be initialized with another Vector3 object, a tuple, individual float values, etc. I think this will answer your question ... plus you may end up just using his library as it's already optimized and has plenty of useful methods already.\n",
"If it's only going to be used in your application, and if you're going to create a Coord3D instance with the values anyway, I'd just return a Coord3D instance to save you the effort. If, however, you have any interest in making this portable/general, return a tuple. It'll be easy to create a Coord3D anyway, using\nc3d = Coord3D(*getCoordinate(index))\n\n(assuming your constructor is Coord3D.__init__(self, x, y, z))\n",
"I've asked myself the same question, albeit while doing 2D geometry stuff.\nThe answer I found for myself was that if I was planning to write a larger library, with more functions and whatnot, go ahead and return the Point, or in your case the Coord3D object. If it's just a hacky implementation, the tuple will get you going faster. In the end, it's just what you're going to do with it, and is it worth the effort.\n",
"Returning an object would be the best practice and would give you a better overall software design. I would recommend doing that\nBut still, keep in mind that creating/returning an object will take more processing time. It could change something if you do this operation a LOT and in that case you might need to think about it...\n"
] |
[
13,
2,
2,
2,
1,
1,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000794132_python.txt
|
Q:
Print long integers in python
If I run this where vote.created_on is a python datetime:
import calendar
created_on_timestamp = calendar.timegm(vote.created_on.timetuple())*1000
created_on_timestamp = str(created_on_timestamp)
created_on_timestamp will be printed with encapsulating tick marks ('). If I do int() or something like that, I'll get something like 1240832864000L which isn't a number as far as JavaScript is concerned (which is where I need to use these datetimes).
Does anybody know the best way to handle this situation? Should I cast the long as a string and strip the tick marks? That seems crazy.
=== Edited Addendum ===
The larger problem was that Django was converting " into its HTML-encoded equivalent &#39; (or similar). The best way to deal with this is to convert the long into a string and, when the template parses the string, use {{ created_on_timestamp|safe }} to render the quote marks as quote marks.
A:
>>> i = 1240832864000L
>>> i
1240832864000L
>>> print i
1240832864000
>>>
>>> '<script type="text/javascript"> var num = %s; </script>' % i
'<script type="text/javascript"> var num = 1240832864000; </script>'
The L only shows up when you trigger the object's __repr__
When and how are you sending this data to JavaScript? If you send it as JSON, you shouldn't have to worry about long literals or how Python displays its objects within Python.
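For instance (a small sketch; simplejson is assumed to be installed), longs serialize as plain JSON numbers:
import simplejson

payload = {'created_on_timestamp': 1240832864000L}
print simplejson.dumps(payload)
# {"created_on_timestamp": 1240832864000} -- no quotes, no trailing L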
A:
With the line:
created_on_timestamp = str(created_on_timestamp)
You are converting something into a string. The python console represents strings with single-quotes (is this what you mean by tick marks?) The string data, of course, does not include the quotes.
When you use int() to re-convert it to a number, int() knows it's long because it's too big, and returns a long integer.
The python console represents this long number with a trailing L, but the numeric content, of course, does not include the L.
>>> l = 42000000000
>>> str(l)
'42000000000'
>>> l
42000000000L
>>> int(str(l))
42000000000L
>>> type( int(str(l)) )
<type 'long'>
Although the python console is representing numbers and strings this way (in python syntax), you should be able to use them normally. Are you anticipating a problem or have you actually run into one at this point?
|
Print long integers in python
|
If I run this where vote.created_on is a python datetime:
import calendar
created_on_timestamp = calendar.timegm(vote.created_on.timetuple())*1000
created_on_timestamp = str(created_on_timestamp)
created_on_timestamp will be printed with encapsulating tick marks ('). If I do int() or something like that, I'll get something like 1240832864000L which isn't a number as far as JavaScript is concerned (which is where I need to use these datetimes).
Does anybody know the best way to handle this situation? Should I cast the long as a string and strip the tick marks? That seems crazy.
=== Edited Addendum ===
The larger problem was that Django was converting " into its HTML-encoded equivalent &#39; (or similar). The best way to deal with this is to convert the long into a string and, when the template parses the string, use {{ created_on_timestamp|safe }} to render the quote marks as quote marks.
|
[
">>> i = 1240832864000L\n>>> i\n1240832864000L\n>>> print i\n1240832864000\n>>> \n>>> '<script type=\"text/javascript\"> var num = %s; </script>' % i\n'<script type=\"text/javascript\"> var num = 1240832864000; </script>'\n\nThe L only shows up when you trigger the object's __repr__\nWhen and how are you sending this data to JavaScript? If you send it as JSON, you shouldn't have to worry about long literals or how Python displays its objects within Python.\n",
"With the line:\ncreated_on_timestamp = str(created_on_timestamp)\n\nYou are converting something into a string. The python console represents strings with single-quotes (is this what you mean by tick marks?) The string data, of course, does not include the quotes. \nWhen you use int() to re-convert it to a number, int() knows it's long because it's too big, and returns a long integer. \nThe python console represents this long number with a trailing L. but the numeric content, of course, does not include the L. \n>>> l = 42000000000\n>>> str(l)\n'42000000000'\n>>> l\n42000000000L\n>>> int(str(l))\n42000000000L\n>>> type( int(str(l)) )\n<type 'long'>\n\nAlthough the python console is representing numbers and strings this way (in python syntax), you should be able to use them normally. Are you anticipating a problem or have you actually run into one at this point?\n"
] |
[
5,
4
] |
[] |
[] |
[
"datetime",
"django",
"python"
] |
stackoverflow_0000799434_datetime_django_python.txt
|
Q:
Python HTML output (first attempt), several questions (code included)
While I have been playing with Python for a few months now (just a hobbyist), I know very little about Web programming (a little HTML, zero JavaScript, etc). That said, I have a current project that is making me look at web programming for the first time. This led me to ask:
What's easiest way to get Python script output on the web?
Thx to the answers, I made some progress. For now, I'm just using Python and HTML. I can't post my project code, so I wrote a small example using twitter search (pls see below).
My questions are:
Am I doing anything terribly stupid? I feel like WebOutput() is clear but inefficient. If I used JavaScript, I'm assuming I could write an HTML template file and then just update the data. Yes? The better way to do this?
At what point would a framework be appropriate for an app like this? overkill?
Sorry for the basic questions - but I don't want to spend too much time going down the wrong path.
import simplejson, urllib, time
#query, results per page
query = "swineflu"
rpp = 25
jsonURL = "http://search.twitter.com/search.json?q=" + query + "&rpp=" + str(rpp)
#currently storing all search results, really only need most recent but want the data avail for other stuff
data = []
#iterate over search results
def SearchResults():
jsonResults = simplejson.load(urllib.urlopen(jsonURL))
for tweet in jsonResults["results"]:
try:
#terminal output
feed = tweet["from_user"] + " | " + tweet["text"]
print feed
data.append(feed)
except:
print "exception??"
# writes latest tweets to file/web
def WebOutput():
f = open("outw.html", "w")
f.write("<html>\n")
f.write("<title>python newb's twitter search</title>\n")
f.write("<head><meta http-equiv='refresh' content='60'></head>\n")
f.write("<body>\n")
f.write("<h1 style='font-size:150%'>Python Newb's Twitter Search</h1>")
f.write("<h2 style='font-size:125%'>Searching Twitter for: " + query + "</h2>\n")
f.write("<h2 style='font-size:125%'>" + time.ctime() + " (updates every 60 seconds)</h2>\n")
for i in range(1,rpp):
try:
f.write("<p style='font-size:90%'>" + data[-i] + "</p>\n")
except:
continue
f.write("</body>\n")
f.write("</html>\n")
f.close()
while True:
print ""
print "\nSearching Twitter for: " + query + " | current date/time is: " + time.ctime()
print ""
SearchResults()
WebOutput()
time.sleep(60)
A:
It would not be overkill to use a framework for something like this; Python frameworks tend to be very light and easy to work with and would make it much easier for you to add features to your tiny site. But neither is it required; I'll assume you're doing this for learning purposes and talk about how I would change the code.
You're doing templating without a template engine in your WebOutput function; there are all kinds of neat template languages for Python, my favorite of which is mako. If the code in that function ever gets hairier than it is currently, I would break it out into a template; I'll show you what that would look like in a moment. But first, I'd use multiline strings to replace all those f.write's, and string substitution instead of adding strings:
f.write("""<html>
<title>python newb's twitter search</title>
<head><meta http-equiv='refresh' content='60'></head>
<body>
<h1 style='font-size:150%'>Python Newb's Twitter Search</h1>
<h2 style='font-size:125%'>Searching Twitter for: %s</h2>
<h2 style='font-size:125%'>%s (updates every 60 seconds)</h2>""" % (query, time.ctime()))
for datum in reversed(data):
f.write("<p style='font-size:90%'>%s</p>" % (datum))
f.write("</body></html>")
Also, note that I simplified your for loop a bit, and doubled the literal percent signs (%%) so they survive the % formatting operator; I'll explain further if what I put doesn't make sense.
If you were to convert your WebOutput function to Mako, you would first import Mako's Template class at the top of your file with:
from mako.template import Template
Then you would replace the whole body of WebOutput() with:
f = file("outw.html", "w")
data = reversed(data)
t = Template(filename='/path/to/mytmpl.txt').render(query=query, time=time.ctime(), data=data)
f.write(t)
Finally, you would make a file /path/to/mytmpl.txt that looks like this:
<html>
<title>python newb's twitter search</title>
<head><meta http-equiv='refresh' content='60'></head>
<body>
<h1 style='font-size:150%'>Python Newb's Twitter Search</h1>
<h2 style='font-size:125%'>Searching Twitter for: ${query}</h2>
<h2 style='font-size:125%'>${time} (updates every 60 seconds)</h2>
% for datum in data:
    <p style='font-size:90%'>${datum}</p>
% endfor
</body>
</html>
And you can see that the nice thing you've accomplished is separating the output (or "view layer" in web terms) from the code that grabs and formats the data (the "model layer" and "controller layer"). This will make it much easier for you to change the output of your script in the future.
(Note: I didn't test the code I've presented here; apologies if it isn't quite right. It should basically work though)
A:
String formatting can make things a lot neater, and less error-prone.
Simple example, %s is replaced by a title:
my_html = "<html><body><h1>%s</h1></body></html>" % ("a title")
Or multiple times (title is the same, and now "my content" is displayed where the second %s is:
my_html = "<html><body><h1>%s</h1>%s</body></html>" % ("a title", "my content")
You can also use named keys when doing %s, like %(thekey)s, which means you don't have to keep track of which order the %s are in. Instead of a list, you use a dictionary, which maps the key to a value:
my_html = "<html><body><h1>%(title)s</h1>%(content)s</body></html>" % {
"title": "a title",
"content":"my content"
}
The biggest issue with your script is, you are using a global variable (data). A much better way would be:
call search_results, with an argument of "swineflu"
search_results returns a list of results, store the result in a variable
call WebOutput, with the search results variable as the argument
WebOutput returns a string, containing your HTML
write the returned HTML to your file
WebOutput would return the HTML (as a string), and write it to a file. Something like:
results = SearchResults("swineflu", 25)
html = WebOutput(results)
f = open("outw.html", "w")
f.write(html)
f.close()
Finally, the twitterd module is only required if you are accessing data that requires a login. The public timeline is, well, public, and can be accessed without any authentication, so you can remove the twitterd import, and the api = line. If you did want to use twitterd, you would have to do something with the api variable, for example:
api = twitterd.Api(username='username', password='password')
statuses = api.GetPublicTimeline()
So, the way I might have written the script is:
import time
import urllib
import simplejson
def search_results(query, rpp = 25): # 25 is default value for rpp
url = "http://search.twitter.com/search.json?q=%s&%s" % (query, rpp)
jsonResults = simplejson.load(urllib.urlopen(url))
data = [] # setup empty list, within function scope
for tweet in jsonResults["results"]:
# Unicode!
        # And tweet is a dict, so we can use the string-formatting key thing
data.append(u"%(from_user)s | %(text)s" % tweet)
return data # instead of modifying the global data!
def web_output(data, query):
results_html = ""
# loop over each index of data, storing the item in "result"
for result in data:
# append to string
results_html += " <p style='font-size:90%%'>%s</p>\n" % (result)
html = """<html>
<head>
<meta http-equiv='refresh' content='60'>
<title>python newb's twitter search</title>
</head>
<body>
<h1 style='font-size:150%%'>Python Newb's Twitter Search</h1>
<h2 style='font-size:125%%'>Searching Twitter for: %(query)s</h2>
<h2 style='font-size:125%%'> %(ctime)s (updates every 60 seconds)</h2>
%(results_html)s
</body>
</html>
""" % {
'query': query,
'ctime': time.ctime(),
'results_html': results_html
}
return html
def main():
query_string = "swineflu"
results = search_results(query_string) # second value defaults to 25
html = web_output(results, query_string)
# Moved the file writing stuff to main, so WebOutput is reusable
f = open("outw.html", "w")
f.write(html)
f.close()
# Once the file is written, display the output to the terminal:
for formatted_tweet in results:
# the .encode() turns the unicode string into an ASCII one, ignoring
# characters it cannot display correctly
print formatted_tweet.encode('ascii', 'ignore')
if __name__ == '__main__':
main()
# Common Python idiom, only runs main if directly run (not imported).
# Then means you can do..
# import myscript
# myscript.search_results("#python")
# without your "main" function being run
(2) at what point would a framework be appropriate for an app like this? overkill?
I would say always use a web-framework (with a few exceptions)
Now, that might seem strange, given all time I just spent explaining fixes to your script.. but, with the above modifications to your script, it's incredibly easy to do, since everything has been nicely function'ified!
Using CherryPy, which is a really simple HTTP framework for Python, you can easily send data to the browser, rather than constantly writing a file.
This assumes the above script is saved as twitter_searcher.py.
Note I've never used CherryPy before, this is just the HelloWorld example on the CherryPy homepage, with a few lines copied from the above script's main() function!
import cherrypy
# import the twitter_searcher.py script
import twitter_searcher
# you can now call the the functions in that script, for example:
# twitter_searcher.search_results("something")
class TwitterSearcher(object):
def index(self):
query_string = "swineflu"
results = twitter_searcher.search_results(query_string) # second value defaults to 25
html = twitter_searcher.web_output(results, query_string)
return html
index.exposed = True
cherrypy.quickstart(TwitterSearcher())
Save and run that script, then browse to http://0.0.0.0:8080/ and it'll show your page!
The problem with this, on every page load it will query the Twitter API. This will not be a problem if it's just you using it, but with hundreds (or even tens) of people looking at the page, it would start to slow down (and you could get rate-limited/blocked by the twitter API, eventually)
The solution is basically back to the start.. You would write (cache) the search result to disc, re-searching twitter if the data is more than ~60 seconds old. You could also look into CherryPy's caching options.. but this answer is getting rather absurdly long..
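A hedged sketch of that cache-to-disc idea (the file name and 60-second threshold are arbitrary choices, not CherryPy features):
import os
import time

import twitter_searcher

CACHE_FILE = 'cached_results.html'
MAX_AGE = 60  # seconds

def get_page(query_string):
    # serve the cached page while it is still fresh
    if os.path.exists(CACHE_FILE) and time.time() - os.path.getmtime(CACHE_FILE) < MAX_AGE:
        return open(CACHE_FILE).read()
    # otherwise hit the Twitter API again and rewrite the cache
    results = twitter_searcher.search_results(query_string)
    html = twitter_searcher.web_output(results, query_string)
    f = open(CACHE_FILE, 'w')
    f.write(html)
    f.close()
    return html
The index() method above could then just return get_page("swineflu").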
A:
I'd suggest using a template to generate the output; you can start with the built-in string.Template or try something fancier, for example Mako (or Cheetah, Genshi, Jinja, Kid, etc).
Python has many nice web frameworks, the smallest of them would be web.py or werkzeug
If you want a fullblown framework, look at Pylons or Django but these are really overkill for a project like that.
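For instance, a tiny sketch with the built-in string.Template (no third-party install needed):
from string import Template

page = Template("<html><body><h1>Searching for: $query</h1><p>$when</p></body></html>")
print page.substitute(query="swineflu", when="updated every 60 seconds")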
A:
The issue that you will run into is that you will need to change the Python whenever you want to change the HTML. For this case, that might be fine, but it can lead to trouble. I think using something with a template system makes a lot of sense. I would suggest looking at Django. The tutorial is very good.
|
Python HTML output (first attempt), several questions (code included)
|
While I have been playing with Python for a few months now (just a hobbyist), I know very little about Web programming (a little HTML, zero JavaScript, etc). That said, I have a current project that is making me look at web programming for the first time. This led me to ask:
What's easiest way to get Python script output on the web?
Thx to the answers, I made some progress. For now, I'm just using Python and HTML. I can't post my project code, so I wrote a small example using twitter search (pls see below).
My questions are:
Am I doing anything terribly stupid? I feel like WebOutput() is clear but inefficient. If I used JavaScript, I'm assuming I could write an HTML template file and then just update the data. Yes? The better way to do this?
At what point would a framework be appropriate for an app like this? overkill?
Sorry for the basic questions - but I don't want to spend too much time going down the wrong path.
import simplejson, urllib, time
#query, results per page
query = "swineflu"
rpp = 25
jsonURL = "http://search.twitter.com/search.json?q=" + query + "&rpp=" + str(rpp)
#currently storing all search results, really only need most recent but want the data avail for other stuff
data = []
#iterate over search results
def SearchResults():
jsonResults = simplejson.load(urllib.urlopen(jsonURL))
for tweet in jsonResults["results"]:
try:
#terminal output
feed = tweet["from_user"] + " | " + tweet["text"]
print feed
data.append(feed)
except:
print "exception??"
# writes latest tweets to file/web
def WebOutput():
f = open("outw.html", "w")
f.write("<html>\n")
f.write("<title>python newb's twitter search</title>\n")
f.write("<head><meta http-equiv='refresh' content='60'></head>\n")
f.write("<body>\n")
f.write("<h1 style='font-size:150%'>Python Newb's Twitter Search</h1>")
f.write("<h2 style='font-size:125%'>Searching Twitter for: " + query + "</h2>\n")
f.write("<h2 style='font-size:125%'>" + time.ctime() + " (updates every 60 seconds)</h2>\n")
for i in range(1,rpp):
try:
f.write("<p style='font-size:90%'>" + data[-i] + "</p>\n")
except:
continue
f.write("</body>\n")
f.write("</html>\n")
f.close()
while True:
print ""
print "\nSearching Twitter for: " + query + " | current date/time is: " + time.ctime()
print ""
SearchResults()
WebOutput()
time.sleep(60)
|
[
"It would not be overkill to use a framework for something like this; Python frameworks tend to be very light and easy to work with and would make it much easier for you to add features to your tiny site. But neither is it required; I'll assume you're doing this for learning purposes and talk about how I would change the code.\nYou're doing templating without a template engine in your WebOutput function; there are all kinds of neat template languages for Python, my favorite of which is mako. If the code in that function ever gets hairier than it is currently, I would break it out into a template; I'll show you what that would look like in a moment. But first, I'd use multiline strings to replace all those f.write's, and string substitution instead of adding strings:\nf.write(\"\"\"<html>\n<title>python newb's twitter search</title>\n<head><meta http-equiv='refresh' content='60'></head>\n<body>\n<h1 style='font-size:150%'>Python Newb's Twitter Search</h1>\n<h2 style='font-size:125%'>Searching Twitter for: %s</h2>\n<h2 style='font-size:125%'>%s (updates every 60 seconds)</h2>\"\"\" % (query, time.ctime()))\n\nfor datum in reversed(data):\n f.write(\"<p style='font-size:90%'>%s</p>\" % (datum))\n\nf.write(\"</body></html>\")\n\nAlso, note that I simplified your for loop a bit; I'll explain further if what I put doesn't make sense.\nIf you were to convert your WebOutput function to Mako, you would first import mako at the top of your file with:\nimport mako\n\nThen you would replace the whole body of WebOutput() with:\nf = file(\"outw.html\", \"w\")\ndata = reversed(data)\nt = Template(filename='/path/to/mytmpl.txt').render({\"query\":query, \"time\":time.ctime(), \"data\":data})\nf.write(t)\n\nFinally, you would make a file /path/to/mytmpl.txt that looks like this:\n<html>\n<title>python newb's twitter search</title>\n<head><meta http-equiv='refresh' content='60'></head>\n<body>\n<h1 style='font-size:150%'>Python Newb's Twitter Search</h1>\n<h2 style='font-size:125%'>Searching Twitter for: ${query}</h2>\n<h2 style='font-size:125%'>${time} (updates every 60 seconds)</h2>\n\n% for datum in data:\n <p style'font-size:90%'>${datum}</p>\n% endfor\n\n</body>\n</html>\n\nAnd you can see that the nice thing you've accomplished is separating the output (or \"view layer\" in web terms) from the code that grabs and formats the data (the \"model layer\" and \"controller layer\"). This will make it much easier for you to change the output of your script in the future.\n(Note: I didn't test the code I've presented here; apologies if it isn't quite right. It should basically work though)\n",
"String formatting can make things a lot neater, and less error-prone.\nSimple example, %s is replaced by a title:\nmy_html = \"<html><body><h1>%s</h1></body></html>\" % (\"a title\")\n\nOr multiple times (title is the same, and now \"my content\" is displayed where the second %s is:\nmy_html = \"<html><body><h1>%s</h1>%s</body></html>\" % (\"a title\", \"my content\")\n\nYou can also use named keys when doing %s, like %(thekey)s, which means you don't have to keep track of which order the %s are in. Instead of a list, you use a dictionary, which maps the key to a value:\nmy_html = \"<html><body><h1>%(title)s</h1>%(content)s</body></html>\" % {\n \"title\": \"a title\",\n \"content\":\"my content\"\n}\n\nThe biggest issue with your script is, you are using a global variable (data). A much better way would be:\n\ncall search_results, with an argument of \"swineflu\"\nsearch_results returns a list of results, store the result in a variable\ncall WebOutput, with the search results variable as the argument\nWebOutput returns a string, containing your HTML\nwrite the returned HTML to your file\n\nWebOutput would return the HTML (as a string), and write it to a file. Something like:\nresults = SearchResults(\"swineflu\", 25)\nhtml = WebOutput(results)\nf = open(\"outw.html\", \"w\")\nf.write(html)\nf.close()\n\nFinally, the twitterd module is only required if you are accessing data that requires a login. The public timeline is, well, public, and can be accessed without any authentication, so you can remove the twitterd import, and the api = line. If you did want to use twitterd, you would have to do something with the api variable, for example:\napi = twitterd.Api(username='username', password='password')\nstatuses = api.GetPublicTimeline()\n\nSo, the way I might have written the script is:\nimport time\nimport urllib\nimport simplejson\n\ndef search_results(query, rpp = 25): # 25 is default value for rpp\n url = \"http://search.twitter.com/search.json?q=%s&%s\" % (query, rpp)\n\n jsonResults = simplejson.load(urllib.urlopen(url))\n\n data = [] # setup empty list, within function scope\n for tweet in jsonResults[\"results\"]:\n # Unicode!\n # And tweet is a dict, so we can use the string-formmating key thing\n data.append(u\"%(from_user)s | %(text)s\" % tweet)\n\n return data # instead of modifying the global data!\n\ndef web_output(data, query):\n results_html = \"\"\n\n # loop over each index of data, storing the item in \"result\"\n for result in data:\n # append to string\n results_html += \" <p style='font-size:90%%'>%s</p>\\n\" % (result)\n\n html = \"\"\"<html>\n <head>\n <meta http-equiv='refresh' content='60'>\n <title>python newb's twitter search</title>\n </head>\n <body>\n <h1 style='font-size:150%%'>Python Newb's Twitter Search</h1>\n <h2 style='font-size:125%%'>Searching Twitter for: %(query)s</h2>\n <h2 style='font-size:125%%'> %(ctime)s (updates every 60 seconds)</h2>\n %(results_html)s\n </body>\n </html>\n \"\"\" % {\n 'query': query,\n 'ctime': time.ctime(),\n 'results_html': results_html\n }\n\n return html\n\n\ndef main():\n query_string = \"swineflu\"\n results = search_results(query_string) # second value defaults to 25\n\n html = web_output(results, query_string)\n\n # Moved the file writing stuff to main, so WebOutput is reusable\n f = open(\"outw.html\", \"w\")\n f.write(html)\n f.close()\n\n # Once the file is written, display the output to the terminal:\n for formatted_tweet in results:\n # the .encode() turns the unicode string into an ASCII one, ignoring\n # 
characters it cannot display correctly\n print formatted_tweet.encode('ascii', 'ignore')\n\n\nif __name__ == '__main__':\n main()\n# Common Python idiom, only runs main if directly run (not imported).\n# Then means you can do..\n\n# import myscript\n# myscript.search_results(\"#python\")\n\n# without your \"main\" function being run\n\n\n\n(2) at what point would a framework be appropriate for an app like this? overkill?\n\nI would say always use a web-framework (with a few exceptions)\nNow, that might seem strange, given all time I just spent explaining fixes to your script.. but, with the above modifications to your script, it's incredibly easy to do, since everything has been nicely function'ified!\nUsing CherryPy, which is a really simple HTTP framework for Python, you can easily send data to the browser, rather than constantly writing a file.\nThis assumes the above script is saved as twitter_searcher.py.\nNote I've never used CherryPy before, this is just the HelloWorld example on the CherryPy homepage, with a few lines copied from the above script's main() function!\nimport cherrypy\n\n# import the twitter_searcher.py script\nimport twitter_searcher\n# you can now call the the functions in that script, for example:\n# twitter_searcher.search_results(\"something\")\n\nclass TwitterSearcher(object):\n def index(self):\n query_string = \"swineflu\"\n results = twitter_searcher.search_results(query_string) # second value defaults to 25\n html = twitter_searcher.web_output(results, query_string)\n\n return html\n index.exposed = True\n\ncherrypy.quickstart(TwitterSearcher())\n\nSave and run that script, then browse to http://0.0.0.0:8080/ and it'll show your page!\nThe problem with this, on every page load it will query the Twitter API. This will not be a problem if it's just you using it, but with hundreds (or even tens) of people looking at the page, it would start to slow down (and you could get rate-limited/blocked by the twitter API, eventually)\nThe solution is basically back to the start.. You would write (cache) the search result to disc, re-searching twitter if the data is more than ~60 seconds old. You could also look into CherryPy's caching options.. but this answer is getting rather absurdly long..\n",
"I'd suggest using a template to generate the output, you can start with the buildin string.Template or try something fancier, for example Mako (or Cheetah, Genshi, Jinja, Kid, etc). \nPython has many nice web frameworks, the smallest of them would be web.py or werkzeug\nIf you want a fullblown framework, look at Pylons or Django but these are really overkill for a project like that.\n",
"The issue that you will run into is that you will need to change the Python whenever you want to change the HTML. For this case, that might be fine, but it can lead to trouble. I think using something with a template system makes a lot of sense. I would suggest looking at Django. The tutorial is very good.\n"
] |
[
8,
5,
4,
1
] |
[] |
[] |
[
"javascript",
"python"
] |
stackoverflow_0000799479_javascript_python.txt
|
Q:
Correlate one set of vectors to another in numpy?
Let's say I have a set of vectors (readings from sensor 1, readings from sensor 2, readings from sensor 3 -- indexed first by timestamp and then by sensor id) that I'd like to correlate to a separate set of vectors (temperature, humidity, etc -- also all indexed first by timestamp and secondly by type).
What is the cleanest way in numpy to do this? It seems like it should be a rather simple function...
In other words, I'd like to see:
> a.shape
(365,20)
> b.shape
(365, 5)
> correlations = magic_correlation_function(a,b)
> correlations.shape
(20, 5)
Cheers,
/YGA
P.S. I've been asked to add an example.
Here's what I would like to see:
$ In [27]: x
$ Out[27]:
array([[ 0, 0, 0],
[-1, 0, -1],
[-2, 0, -2],
[-3, 0, -3],
[-4, 0.1, -4]])
$ In [28]: y
$ Out[28]:
array([[0, 0],
[1, 0],
[2, 0],
[3, 0],
[4, 0.1]])
$ In [28]: magical_correlation_function(x, y)
$ Out[28]:
array([[-1. , 0.70710678, 1. ]
[-0.70710678, 1. , 0.70710678]])
Ps2: whoops, mis-transcribed my example. Sorry all. Fixed now.
A:
The simplest thing that I could find was using the scipy.stats package
In [8]: x
Out[8]:
array([[ 0. , 0. , 0. ],
[-1. , 0. , -1. ],
[-2. , 0. , -2. ],
[-3. , 0. , -3. ],
[-4. , 0.1, -4. ]])
In [9]: y
Out[9]:
array([[0. , 0. ],
[1. , 0. ],
[2. , 0. ],
[3. , 0. ],
[4. , 0.1]])
In [10]: import scipy.stats
In [27]: (scipy.stats.cov(y,x)
/(numpy.sqrt(scipy.stats.var(y,axis=0)[:,numpy.newaxis]))
/(numpy.sqrt(scipy.stats.var(x,axis=0))))
Out[27]:
array([[-1. , 0.70710678, -1. ],
[-0.70710678, 1. , -0.70710678]])
These aren't quite the numbers you got: element [0,2] should be -1, not 1, since the third column of x is just the negation of the first column of y.
A more complicated, but purely numpy solution is
In [40]: numpy.corrcoef(x.T,y.T)[numpy.arange(x.shape[1])[numpy.newaxis,:]
,numpy.arange(y.shape[1])[:,numpy.newaxis]]
Out[40]:
array([[-1. , 0.70710678, -1. ],
[-0.70710678, 1. , -0.70710678]])
This will be slower because it computes the correlation of each element in x with each other element in x, which you don't want. Also, the advanced indexing techniques used to get the subset of the array you desire can make your head hurt.
If you're going to use numpy intensely, get familiar with the rules on broadcasting and indexing. They will help you push as much down to the C-level as possible.
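Putting those broadcasting rules to work, here is a sketch of the "magic" function done with plain column standardization (cross_corrcoef is a made-up name, and it assumes no column is constant, which would otherwise divide by zero):
import numpy

def cross_corrcoef(a, b):
    # z-score each column, then a dot product of the standardized
    # columns gives every pairwise Pearson correlation at once
    az = (a - a.mean(axis=0)) / a.std(axis=0)
    bz = (b - b.mean(axis=0)) / b.std(axis=0)
    return numpy.dot(bz.T, az) / a.shape[0]
For the x and y above, cross_corrcoef(x, y) has shape (2, 3) and matches the corrcoef result, without computing the unwanted within-x correlations.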
A:
Will this do what you want?
correlations = dot(transpose(a), b)
Note: if you do this, you'll probably want to standardize or whiten a and b first, e.g. something equivalent to this:
a = (a - mean(a)) / sqrt(var(a))
b = (b - mean(b)) / sqrt(var(b))
|
Correlate one set of vectors to another in numpy?
|
Let's say I have a set of vectors (readings from sensor 1, readings from sensor 2, readings from sensor 3 -- indexed first by timestamp and then by sensor id) that I'd like to correlate to a separate set of vectors (temperature, humidity, etc -- also all indexed first by timestamp and secondly by type).
What is the cleanest way in numpy to do this? It seems like it should be a rather simple function...
In other words, I'd like to see:
> a.shape
(365,20)
> b.shape
(365, 5)
> correlations = magic_correlation_function(a,b)
> correlations.shape
(20, 5)
Cheers,
/YGA
P.S. I've been asked to add an example.
Here's what I would like to see:
$ In [27]: x
$ Out[27]:
array([[ 0, 0, 0],
[-1, 0, -1],
[-2, 0, -2],
[-3, 0, -3],
[-4, 0.1, -4]])
$ In [28]: y
$ Out[28]:
array([[0, 0],
[1, 0],
[2, 0],
[3, 0],
[4, 0.1]])
$ In [28]: magical_correlation_function(x, y)
$ Out[28]:
array([[-1. , 0.70710678, 1. ]
[-0.70710678, 1. , 0.70710678]])
Ps2: whoops, mis-transcribed my example. Sorry all. Fixed now.
|
[
"The simplest thing that I could find was using the scipy.stats package\nIn [8]: x\nOut[8]: \narray([[ 0. , 0. , 0. ],\n [-1. , 0. , -1. ],\n [-2. , 0. , -2. ],\n [-3. , 0. , -3. ],\n [-4. , 0.1, -4. ]])\nIn [9]: y\nOut[9]: \narray([[0. , 0. ],\n [1. , 0. ],\n [2. , 0. ],\n [3. , 0. ],\n [4. , 0.1]])\n\nIn [10]: import scipy.stats\n\nIn [27]: (scipy.stats.cov(y,x)\n /(numpy.sqrt(scipy.stats.var(y,axis=0)[:,numpy.newaxis]))\n /(numpy.sqrt(scipy.stats.var(x,axis=0))))\nOut[27]: \narray([[-1. , 0.70710678, -1. ],\n [-0.70710678, 1. , -0.70710678]])\n\nThese aren't the numbers you got, but you've mixed up your rows. (Element [0,0] should be 1.)\nA more complicated, but purely numpy solution is\nIn [40]: numpy.corrcoef(x.T,y.T)[numpy.arange(x.shape[1])[numpy.newaxis,:]\n ,numpy.arange(y.shape[1])[:,numpy.newaxis]]\nOut[40]: \narray([[-1. , 0.70710678, -1. ],\n [-0.70710678, 1. , -0.70710678]])\n\nThis will be slower because it computes the correlation of each element in x with each other element in x, which you don't want. Also, the advanced indexing techniques used to get the subset of the array you desire can make your head hurt.\nIf you're going to use numpy intensely, get familiar with the rules on broadcasting and indexing. They will help you push as much down to the C-level as possible.\n",
"Will this do what you want?\ncorrelations = dot(transpose(a), b)\n\nNote: if you do this, you'll probably want to standardize or whiten a and b first, e.g. something equivalent to this:\na = sqrt((a - mean(a))/(var(a)))\nb = sqrt((b - mean(b))/(var(b)))\n\n"
] |
[
2,
1
] |
[
"As David said, you should define the correlation you're using. I don't know of any definitions of correlation that gives sensible numbers when correlating empty and non-empty signals.\n"
] |
[
-1
] |
[
"numpy",
"python"
] |
stackoverflow_0000795570_numpy_python.txt
|
Q:
keeping same formatting for floating point values
I have a python program that reads floating point values using the following regular expression
(-?\d+\.\d+)
once I extract the value using float(match.group(1)), I get the actual floating point number. However, I am not able to distinguish if the number was 1.2345678 or 1.234 or 1.2340000.
The problem I am facing is to print out the floating point value again, with the exact same formatting. An easy solution is to "split and count" the floating point value when still a string, eg splitting at the decimal point, and counting the integer part length and the fractional part length, then create the formatter as
print "%"+str(total_len)+"."+str(fractional_len)+"f" % value
but maybe you know a standard way to achieve the same result ?
A:
If you want to keep a fixed precision, avoid using floats and use Decimal instead:
>>> from decimal import Decimal
>>> d = Decimal('-1.2345')
>>> str(d)
'-1.2345'
>>> float(d)
-1.2344999999999999
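So for the question's use case, a minimal sketch is to feed each matched string straight into Decimal and print it back unchanged:
from decimal import Decimal

for s in ('1.2345678', '1.234', '1.2340000'):
    print Decimal(s)  # each value prints exactly as it was read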
A:
Your method is basically correct.
String formatting has a less often used * operator you can put for the formatting sizes, here's some code:
import re
def parse_float(str):
re_float = re.compile(r'(-?)(\d+)\.(\d+)')
grps = re_float.search(str)
sign, decimal, fraction = grps.groups()
float_val = float('%s%s.%s' % (sign, decimal, fraction))
total_len = len(grps.group(0))
print '%*.*f' % (total_len, len(fraction), float_val)
parse_float('1.2345678')
parse_float('1.234')
parse_float('1.2340000')
and it outputs
1.2345678
1.234
1.2340000
A:
>>> from decimal import Decimal as d
>>> d('1.13200000')
Decimal('1.13200000')
>>> print d('1.13200000')
1.13200000
|
keeping same formatting for floating point values
|
I have a python program that reads floating point values using the following regular expression
(-?\d+\.\d+)
once I extract the value using float(match.group(1)), I get the actual floating point number. However, I am not able to distinguish if the number was 1.2345678 or 1.234 or 1.2340000.
The problem I am facing is to print out the floating point value again, with the exact same formatting. An easy solution is to "split and count" the floating point value when still a string, eg splitting at the decimal point, and counting the integer part length and the fractional part length, then create the formatter as
print "%"+str(total_len)+"."+str(fractional_len)+"f" % value
but maybe you know a standard way to achieve the same result ?
|
[
"If you want to keep a fixed precision, avoid using floats and use Decimal instead:\n>>> from decimal import Decimal\n>>> d = Decimal('-1.2345')\n>>> str(d)\n'-1.2345'\n>>> float(d)\n-1.2344999999999999\n\n",
"You method is basically correct. \nString formatting has a less often used * operator you can put for the formatting sizes, here's some code:\nimport re\n\ndef parse_float(str):\n re_float = re.compile(r'(-?)(\\d+)\\.(\\d+)')\n grps = re_float.search(str)\n sign, decimal, fraction = grps.groups()\n float_val = float('%s%s.%s' % (sign, decimal, fraction))\n total_len = len(grps.group(0))\n print '%*.*f' % (total_len, len(fraction), float_val)\n\nparse_float('1.2345678')\nparse_float('1.234')\nparse_float('1.2340000')\n\nand it outputs\n1.2345678\n1.234\n1.2340000\n\n",
">>> from decimal import Decimal as d\n>>> d('1.13200000')\nDecimal('1.13200000')\n>>> print d('1.13200000')\n1.13200000\n\n"
] |
[
8,
3,
1
] |
[] |
[] |
[
"floating_point",
"formatting",
"python"
] |
stackoverflow_0000800015_floating_point_formatting_python.txt
|
Q:
Best Python podcasts?
Could any one suggest good Python-related podcasts out there, it could be anything about Python or its eco-system (like django, pylons, etc).
A:
Google Code University (several languages there)
Python Podcasts
Python Learning Foundation
Python411 on PodcastAlley.com
A:
I didn't think much of Python411 - the episode I downloaded primarily consisted of the host talking about how he was planning on writing a GAE site.
This Week in Django as pointed out by Geo is (or possibly was) a good Python podcast. Obviously, as it's focused on Django development there is a lot of Djangoisms discussed however there's also a lot of general Python knowledge shared as well. TWID is currently going through a revamp, keep an eye on @djangodose for updates.
A:
The Changelog (read the blog, listen to the podcast, it is very good) has some Python material, for instance:
http://thechangelog.com/post/1174335646/episode-0-3-6-django-dash
http://thechangelog.com/post/1087757312/episode-0-3-4-mongrel2-guitar-and-more-with-zed-shaw
http://thechangelog.com/post/610697985/episode-0-2-4-facebook-open-source-projects-tornado-hip
There has been a few IronPython-related shows on some of the .NET podcasts:
http://www.hanselminutes.com/default.aspx?showID=177
http://www.dotnetrocks.com/default.aspx?showNum=429
http://www.craigmurphy.com/blog/?p=708
A:
thisweekindjango
A:
Some Pycon talks are available on blip.tv
Pycon podcasts are available here
|
Best Python podcasts?
|
Could any one suggest good Python-related podcasts out there, it could be anything about Python or its eco-system (like django, pylons, etc).
|
[
"Google Code University (several languages there)\nPython Podcasts\nPython Learning Foundation\nPython411 on PodcastAlley.com\n",
"I didn't think much of Python411 - the episode I downloaded primarily consisted of the host talking about how he was planning on writing a GAE site.\nThis Week in Django as pointed out by Geo is (or possibly was) a good Python podcast. Obviously, as it's focused on Django development there is a lot of Djangoisms discussed however there's also a lot of general Python knowledge shared as well. TWID is currently going through a revamp, keep an eye on @djangodose for updates. \n",
"The Changelog (read the blog, listen to the podcast, it is very good) has some Python material, for instance:\n\nhttp://thechangelog.com/post/1174335646/episode-0-3-6-django-dash\nhttp://thechangelog.com/post/1087757312/episode-0-3-4-mongrel2-guitar-and-more-with-zed-shaw\nhttp://thechangelog.com/post/610697985/episode-0-2-4-facebook-open-source-projects-tornado-hip\n\nThere has been a few IronPython-related shows on some of the .NET podcasts:\n\nhttp://www.hanselminutes.com/default.aspx?showID=177\nhttp://www.dotnetrocks.com/default.aspx?showNum=429\nhttp://www.craigmurphy.com/blog/?p=708\n\n",
"thisweekindjango\n",
"\nSome Pycon talks are available on blip.tv\nPycon podcasts are available here \n\n"
] |
[
23,
4,
3,
2,
2
] |
[] |
[] |
[
"podcast",
"python"
] |
stackoverflow_0000791618_podcast_python.txt
|
Q:
Updating data in google app engine
I'm attempting my first google app engine project – a simple player stats database for a sports team I'm involved with. Given this model:
class Player(db.Model):
""" Represents a player in the club. """
first_name = db.StringProperty()
surname = db.StringProperty()
gender = db.StringProperty()
I want to make a basic web interface for creating and modifying players. My code structure looks something like this:
class PlayersPage(webapp.RequestHandler):
def get(self):
# Get all the current players, and store the list.
# We need to store the list so that we can update
# if necessary in post().
self.shown_players = list(Player.all())
# omitted: html-building using django template
This code produces a very basic HTML page consisting of a form and a table. The table has one row for each Player, looking like something like this:
<tr>
<td><input type=text name="first_name0" value="Test"></td>
<td><input type=text name="surname0" value="Guy"></td>
<td><select name="gender0">
<option value="Male" selected>Male</option>
<option value="Female" >Female</option>
</select></td>
</tr>
<!-- next row: first_name1, etc. -->
My idea is that I would store the Player instances that I used in self.shown_players, so that I could later update Players if necessary in my post() method (of the same class) by doing:
def post(self):
# some code skipped
for i, player in enumerate(self.shown_players):
fn = self.request.get('first_name'+str(i)).strip()
sn = self.request.get('surname'+str(i)).strip()
gd = self.request.get('gender'+str(i)).strip()
if any([fn != player.first_name,
sn != player.surname,
gd != player.gender]):
player.first_name = fn
player.surname = sn
player.gender = gd
player.put()
However, this doesn't work because self.shown_players does not exist when the post() method is called. I guess the app engine creates a new instance of the class every time the page is accessed.
I experimented with the same idea but putting shown_players at the class or module level (and calling it global) but this didn't work for reasons that I cannot divine.
For example:
shown_players = []
class PlayersPage(webapp.RequestHandler):
def get(self):
# Get all the current players, and store the list.
# We need to store the list so that we can update
# if necessary in post().
global shown_players
shown_players[:] = list(Player.all())
shown_players appears to have the right value within get() because the HTML generates correctly, but it is empty within post().
What should I do?
EDIT: Thanks, all. The answer ("Just retrieve the players again!") should have been obvious :-) Maybe I'll look at memcache one day, but I'm not expecting the player list to get beyond 30 in the near future..
A:
On each request you are working on a new instance of the same class. That's why you can't create a varable in get() and use its value in post(). What you could do is either retrieve the values again in your post()-method or store the data in the memcache.
Refer to the documentation of memcache here:
http://code.google.com/intl/de-DE/appengine/docs/python/memcache/
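For completeness, a minimal sketch of the memcache route (the key name and 60-second lifetime are arbitrary choices):
from google.appengine.api import memcache

def get_players():
    players = memcache.get('all_players')
    if players is None:
        players = list(Player.all())
        memcache.set('all_players', players, time=60)  # cache for 60 seconds
    return players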
A:
In your post method, just before the "for" clause, retrieve the players list from where it is stored:
def post(self):
# some code skipped
    self.shown_players = Player.all().fetch(1000)  # fetch() needs an explicit limit in the db API
for i, player in enumerate(self.shown_players):
...
A:
I've never tried building a google app engine app, but I understand it's somewhat similar to Django in its handling of databases etc.
I don't think you should be storing things in global variables and instead should be treating each transaction separately. The get request works because you're doing what you ought to be doing and re-requesting the information from the db.
If you want to update a player in your post function, you probably want to pass in the details, [look up players with those details again], modify them as you please. The bit in brackets is the step you're missing.
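A hedged sketch of that missing step: render each entity's datastore key into a hidden form field, then look the player up again in post() (the field names are made up to match the question's numbering scheme):
# in the template, next to the visible inputs for row i:
#   <input type="hidden" name="key0" value="{{ player.key }}">

# inside PlayersPage(webapp.RequestHandler), reusing the existing db import:
def post(self):
    i = 0
    while self.request.get('key%d' % i):
        player = db.get(self.request.get('key%d' % i))  # re-fetch by key
        player.first_name = self.request.get('first_name%d' % i).strip()
        player.surname = self.request.get('surname%d' % i).strip()
        player.gender = self.request.get('gender%d' % i).strip()
        player.put()
        i += 1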
|
Updating data in google app engine
|
I'm attempting my first google app engine project – a simple player stats database for a sports team I'm involved with. Given this model:
class Player(db.Model):
""" Represents a player in the club. """
first_name = db.StringProperty()
surname = db.StringProperty()
gender = db.StringProperty()
I want to make a basic web interface for creating and modifying players. My code structure looks something like this:
class PlayersPage(webapp.RequestHandler):
def get(self):
# Get all the current players, and store the list.
# We need to store the list so that we can update
# if necessary in post().
self.shown_players = list(Player.all())
# omitted: html-building using django template
This code produces a very basic HTML page consisting of a form and a table. The table has one row for each Player, looking like something like this:
<tr>
<td><input type=text name="first_name0" value="Test"></td>
<td><input type=text name="surname0" value="Guy"></td>
<td><select name="gender0">
<option value="Male" selected>Male</option>
<option value="Female" >Female</option>
</select></td>
</tr>
<!-- next row: first_name1, etc. -->
My idea is that I would store the Player instances that I used in self.shown_players, so that I could later update Players if necessary in my post() method (of the same class) by doing:
def post(self):
# some code skipped
for i, player in enumerate(self.shown_players):
fn = self.request.get('first_name'+str(i)).strip()
sn = self.request.get('surname'+str(i)).strip()
gd = self.request.get('gender'+str(i)).strip()
if any([fn != player.first_name,
sn != player.surname,
gd != player.gender]):
player.first_name = fn
player.surname = sn
player.gender = gd
player.put()
However, this doesn't work because self.shown_players does not exist when the post() method is called. I guess the app engine creates a new instance of the class every time the page is accessed.
I experimented with the same idea but putting shown_players at the class or module level (and calling it global) but this didn't work for reasons that I cannot divine.
For example:
shown_players = []
class PlayersPage(webapp.RequestHandler):
def get(self):
# Get all the current players, and store the list.
# We need to store the list so that we can update
# if necessary in post().
global shown_players
shown_players[:] = list(Player.all())
shown_players appears to have the right value within get() because the HTML generates correctly, but it is empty within post().
What should I do?
EDIT: Thanks, all. The answer ("Just retrieve the players again!") should have been obvious :-) Maybe I'll look at memcache one day, but I'm not expecting the player list to get beyond 30 in the near future..
|
[
"On each request you are working on a new instance of the same class. That's why you can't create a varable in get() and use its value in post(). What you could do is either retrieve the values again in your post()-method or store the data in the memcache.\nRefer to the documentation of memcache here:\nhttp://code.google.com/intl/de-DE/appengine/docs/python/memcache/\n",
"In your post method, just before the \"for\" clause, retrieve the players list from where it is stored:\ndef post(self):\n # some code skipped\n\n self.shown_players = Player.all().fetch()\n for i, player in enumerate(self.shown_players):\n ...\n\n",
"I've never tried building a google app engine app, but I understand it's somewhat similar to Django in it's handling of databases etc.\nI don't think you should be storing things in global variables and instead should be treating each transaction seperately. The get request works because you're doing what you ought to be doing and re-requesting the information from the db.\nIf you want to update a player in your post function, you probably want to pass in the details, [look up players with those details again], modify them as you please. The bit in brackets is the step you're missing.\n"
] |
[
2,
2,
1
] |
[] |
[] |
[
"google_app_engine",
"python"
] |
stackoverflow_0000801477_google_app_engine_python.txt
|
Q:
Memory efficiency: One large dictionary or a dictionary of smaller dictionaries?
I'm writing an application in Python (2.6) that requires me to use a dictionary as a data store.
I am curious as to whether or not it is more memory efficient to have one large dictionary, or to break that down into many (much) smaller dictionaries, then have an "index" dictionary that contains a reference to all the smaller dictionaries.
I know there is a lot of overhead in general with lists and dictionaries. I read somewhere that Python internally allocates enough space for the dictionary/list's number of items to the power of 2.
I'm new enough to Python that I'm not sure whether there are other unexpected internal complexities/surprises like that, not apparent to the average user, that I should take into consideration.
One of the difficulties is knowing how the power-of-2 system counts "items". Is each key:value pair counted as 1 item? That seems important to know, because if you have a 100-item monolithic dictionary then space for 100^2 items would be allocated, whereas if you have 100 single-item dictionaries (1 key:value pair each) then each dictionary would only allocate 1^2 (i.e. no extra allocation)?
Any clearly laid out information would be very helpful!
A:
Three suggestions:
Use one dictionary.
It's easier, it's more straightforward, and someone else has already optimized this problem for you. Until you've actually measured your code and traced a performance problem to this part of it, you have no reason not to do the simple, straightforward thing.
Optimize later.
If you are really worried about performance, then abstract the problem: make a class to wrap whatever lookup mechanism you end up using, and write your code to use this class. You can change the implementation later if you find you need some other data structure for greater performance.
Read up on hash tables.
Dictionaries are hash tables, and if you are worried about their time or space overhead, you should read up on how they're implemented. This is basic computer science. The short of it is that hash tables are:
average case O(1) lookup time
O(n) space (Expect about 2n, depending on various parameters)
I do not know where you read that they were O(n^2) space, but if they were, then they would not be in widespread, practical use as they are in most languages today. There are two advantages to these nice properties of hash tables:
O(1) lookup time implies that you will not pay a cost in lookup time for having a larger dictionary, as lookup time doesn't depend on size.
O(n) space implies that you don't gain much of anything from breaking your dictionary up into smaller pieces. Space scales linearly with number of elements, so lots of small dictionaries will not take up significantly less space than one large one or vice versa. This would not be true if they were O(n^2) space, but lucky for you, they're not.
Here are some more resources that might help:
The Wikipedia article on Hash Tables gives a great listing of the various lookup and allocation schemes used in hashtables.
The GNU Scheme documentation has a nice discussion of how much space you can expect hashtables to take up, including a formal discussion of why "the amount of space used by the hash table is proportional to the number of associations in the table". This might interest you.
Here are some things you might consider if you find you actually need to optimize your dictionary implementation:
Here is the C source code for Python's dictionaries, in case you want ALL the details. There's copious documentation in here:
dictobject.h
dictobject.c
Here is a python implementation of that, in case you don't like reading C.
(Thanks to Ben Peterson)
The Java Hashtable class docs talk a bit about how load factors work, and how they affect the space your hash takes up. Note there's a tradeoff between your load factor and how frequently you need to rehash. Rehashes can be costly.
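If you want to see the linear scaling for yourself, here is a rough sketch (assuming Python 2.6, where sys.getsizeof is available; it reports shallow sizes only):
import sys

big = dict((i, i) for i in range(1000))       # one large dict
small = [{i: i} for i in range(1000)]         # many single-item dicts

print sys.getsizeof(big)                      # one table's overhead
print sum(sys.getsizeof(d) for d in small)    # per-dict overhead adds up
On most builds the many-small-dicts total comes out larger, since every dict carries its own fixed overhead, which is exactly the point made above.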
A:
If you're using Python, you really shouldn't be worrying about this sort of thing in the first place. Just build your data structure the way it best suits your needs, not the computer's.
This smacks of premature optimization, not performance improvement. Profile your code if something is actually bottlenecking, but until then, just let Python do what it does and focus on the actual programming task, and not the underlying mechanics.
A:
"Simple" is generally better than "clever", especially if you have no tested reason to go beyond "simple". And anyway "Memory efficient" is an ambiguous term, and there are tradeoffs, when you consider persisting, serializing, cacheing, swapping, and a whole bunch of other stuff that someone else has already thought through so that in most cases you don't need to.
Think "Simplest way to handle it properly" optimize much later.
A:
Premature optimization bla bla, don't do it bla bla.
I think you're mistaken about what the power-of-two extra allocation does. I think it's just a multiplier of two: x*2, not x^2.
I've seen this question a few times on various python mailing lists.
With regards to memory, here's a paraphrased version of one such discussion (the post in question wanted to store hundreds of millions of integers):
A set() is more space efficient than a dict(), if you just want to test for membership
gmpy has a bitvector type class for storing dense sets of integers
Dicts are kept between 50% and 30% empty, and an entry is about ~12 bytes (though the true amount will vary by platform a bit).
So, the fewer objects you have, the less memory you're going to be using, and the fewer lookups you're going to do (with an index dict you have to look up the index first, then do a second lookup in the actual dict).
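To make the double-lookup point concrete (the names here are purely illustrative):
# one flat dict: a single hash lookup
value = users[user_id]

# an "index" dict of smaller dicts: two hash lookups per access
value = index[shard_key][user_id]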
Like others said, profile to see your bottlenecks. Keeping a membership set() and a value dict() might be faster, but you'll be using more memory.
I'd also suggest reposting this to a python specific list, such as comp.lang.python, which is full of much more knowledgeable people than myself who would give you all sorts of useful information.
A:
If your dictionary is so big that it does not fit into memory, you might want to have a look at ZODB, a very mature object database for Python.
The 'root' of the db has the same interface as a dictionary, and you don't need to load the whole data structure into memory at once e.g. you can iterate over only a portion of the structure by providing start and end keys.
It also provides transactions and versioning.
A:
Honestly, you won't be able to tell the difference either way, in terms of either performance or memory usage. Unless you're dealing with tens of millions of items or more, the performance or memory impact is just noise.
From the way you worded your second sentence, it sounds like the one big dictionary is your first inclination, and matches more closely with the problem you're trying to solve. If that's true, go with that. What you'll find about Python is that the solutions that everyone considers 'right' nearly always turn out to be those that are as clear and simple as possible.
A:
Oftentimes, dictionaries of dictionaries are useful for reasons other than performance; i.e., they allow you to store context information about the data without having extra fields on the objects themselves, and they make querying subsets of the data faster.
In terms of memory usage, it would stand to reason that one large dictionary will use less RAM than multiple smaller ones. Remember, if you're nesting dictionaries, each additional layer of nesting will roughly double the number of dictionaries you need to allocate.
In terms of query speed, multiple dicts will take longer due to the increased number of lookups required.
So I think the only way to answer this question is for you to profile your own code. However, my suggestion is to use the method that makes your code the cleanest and easiest to maintain. Of all the features of Python, dictionaries are probably the most heavily tweaked for optimal performance.
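If you do get to the profiling stage, the standard library makes that cheap to try; a minimal sketch with cProfile (workload is a made-up stand-in for your real access pattern):
import cProfile

def workload():
    d = dict((i, str(i)) for i in range(100000))
    return sum(len(d[i]) for i in range(100000))

cProfile.run('workload()')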
|
Memory efficiency: One large dictionary or a dictionary of smaller dictionaries?
|
I'm writing an application in Python (2.6) that requires me to use a dictionary as a data store.
I am curious as to whether or not it is more memory efficient to have one large dictionary, or to break that down into many (much) smaller dictionaries, then have an "index" dictionary that contains a reference to all the smaller dictionaries.
I know there is a lot of overhead in general with lists and dictionaries. I read somewhere that Python internally allocates enough space for the dictionary/list's number of items to the power of 2.
I'm new enough to Python that I'm not sure whether there are other unexpected internal complexities/surprises like that, not apparent to the average user, that I should take into consideration.
One of the difficulties is knowing how the power-of-2 system counts "items". Is each key:value pair counted as 1 item? That seems important to know, because if you have a 100-item monolithic dictionary then space for 100^2 items would be allocated, whereas if you have 100 single-item dictionaries (1 key:value pair each) then each dictionary would only allocate 1^2 (i.e. no extra allocation)?
Any clearly laid out information would be very helpful!
|
[
"Three suggestions:\n\nUse one dictionary.\nIt's easier, it's more straightforward, and someone else has already optimized this problem for you. Until you've actually measured your code and traced a performance problem to this part of it, you have no reason not to do the simple, straightforward thing.\nOptimize later.\nIf you are really worried about performance, then abstract the problem make a class to wrap whatever lookup mechanism you end up using and write your code to use this class. You can change the implementation later if you find you need some other data structure for greater performance.\nRead up on hash tables.\nDictionaries are hash tables, and if you are worried about their time or space overhead, you should read up on how they're implemented. This is basic computer science. The short of it is that hash tables are:\n\naverage case O(1) lookup time\nO(n) space (Expect about 2n, depending on various parameters)\n\nI do not know where you read that they were O(n^2) space, but if they were, then they would not be in widespread, practical use as they are in most languages today. There are two advantages to these nice properties of hash tables:\n\nO(1) lookup time implies that you will not pay a cost in lookup time for having a larger dictionary, as lookup time doesn't depend on size.\nO(n) space implies that you don't gain much of anything from breaking your dictionary up into smaller pieces. Space scales linearly with number of elements, so lots of small dictionaries will not take up significantly less space than one large one or vice versa. This would not be true if they were O(n^2) space, but lucky for you, they're not.\n\nHere are some more resources that might help:\n\nThe Wikipedia article on Hash Tables gives a great listing of the various lookup and allocation schemes used in hashtables.\nThe GNU Scheme documentation has a nice discussion of how much space you can expect hashtables to take up, including a formal discussion of why \"the amount of space used by the hash table is proportional to the number of associations in the table\". This might interest you.\n\nHere are some things you might consider if you find you actually need to optimize your dictionary implementation:\n\nHere is the C source code for Python's dictionaries, in case you want ALL the details. There's copious documentation in here:\n\n\ndictobject.h\ndictobject.c\n\nHere is a python implementation of that, in case you don't like reading C.\n(Thanks to Ben Peterson)\nThe Java Hashtable class docs talk a bit about how load factors work, and how they affect the space your hash takes up. Note there's a tradeoff between your load factor and how frequently you need to rehash. Rehashes can be costly.\n\n\n",
"If you're using Python, you really shouldn't be worrying about this sort of thing in the first place. Just build your data structure the way it best suits your needs, not the computer's.\nThis smacks of premature optimization, not performance improvement. Profile your code if something is actually bottlenecking, but until then, just let Python do what it does and focus on the actual programming task, and not the underlying mechanics.\n",
"\"Simple\" is generally better than \"clever\", especially if you have no tested reason to go beyond \"simple\". And anyway \"Memory efficient\" is an ambiguous term, and there are tradeoffs, when you consider persisting, serializing, cacheing, swapping, and a whole bunch of other stuff that someone else has already thought through so that in most cases you don't need to.\nThink \"Simplest way to handle it properly\" optimize much later.\n",
"Premature optimization bla bla, don't do it bla bla.\nI think you're mistaken about the power of two extra allocation does. I think its just a multiplier of two. x*2, not x^2.\nI've seen this question a few times on various python mailing lists.\nWith regards to memory, here's a paraphrased version of one such discussion (the post in question wanted to store hundreds of millions integers):\n\nA set() is more space efficient than a dict(), if you just want to test for membership\ngmpy has a bitvector type class for storing dense sets of integers\nDicts are kept between 50% and 30% empty, and an entry is about ~12 bytes (though the true amount will vary by platform a bit).\n\nSo, the fewer objects you have, the less memory you're going to be using, and the fewer lookups you're going to do (since you'll have to lookup in the index, then a second lookup in the actual value).\nLike others, said, profile to see your bottlenecks. Keeping an membership set() and value dict() might be faster, but you'll be using more memory.\nI'd also suggest reposting this to a python specific list, such as comp.lang.python, which is full of much more knowledgeable people than myself who would give you all sorts of useful information.\n",
"If your dictionary is so big that it does not fit into memory, you might want to have a look at ZODB, a very mature object database for Python.\nThe 'root' of the db has the same interface as a dictionary, and you don't need to load the whole data structure into memory at once e.g. you can iterate over only a portion of the structure by providing start and end keys.\nIt also provides transactions and versioning.\n",
"Honestly, you won't be able to tell the difference either way, in terms of either performance or memory usage. Unless you're dealing with tens of millions of items or more, the performance or memory impact is just noise.\nFrom the way you worded your second sentence, it sounds like the one big dictionary is your first inclination, and matches more closely with the problem you're trying to solve. If that's true, go with that. What you'll find about Python is that the solutions that everyone considers 'right' nearly always turn out to be those that are as clear and simple as possible.\n",
"Often times, dictionaries of dictionaries are useful for other than performance reasons. ie, they allow you to store context information about the data without having extra fields on the objects themselves, and make querying subsets of the data faster.\nIn terms of memory usage, it would stand to reason that one large dictionary will use less ram than multiple smaller ones. Remember, if you're nesting dictionaries, each additional layer of nesting will roughly double the number of dictionaries you need to allocate.\nIn terms of query speed, multiple dicts will take longer due to the increased number of lookups required.\nSo I think the only way to answer this question is for you to profile your own code. However, my suggestion is to use the method that makes your code the cleanest and easiest to maintain. Of all the features of Python, dictionaries are probably the most heavily tweaked for optimal performance.\n"
] |
[
82,
16,
8,
7,
5,
2,
1
] |
[] |
[] |
[
"dictionary",
"memory",
"performance",
"python"
] |
stackoverflow_0000671403_dictionary_memory_performance_python.txt
|
Q:
Is Python faster and lighter than C++?
I've always thought that Python's advantages are code readability and development speed, but time and memory usage were not as good as those of C++.
These stats struck me really hard.
What does your experience tell you about Python vs C++ time and memory usage?
A:
I think you're reading those stats incorrectly. They show that Python is up to about 400 times slower than C++ and with the exception of a single case, Python is more of a memory hog. When it comes to source size though, Python wins flat out.
My experiences with Python show the same definite trend that Python is on the order of between 10 and 100 times slower than C++ when doing any serious number crunching. There are many reasons for this, the major ones being: a) Python is interpreted, while C++ is compiled; b) Python has no primitives, everything including the builtin types (int, float, etc.) are objects; c) a Python list can hold objects of different type, so each entry has to store additional data about its type. These all severely hinder both runtime and memory consumption.
This is no reason to ignore Python though. A lot of software doesn't require much time or memory even with the 100-times slowness factor. Development cost is where Python wins with its simple and concise style. This improvement in development cost often outweighs the cost of additional CPU and memory resources. When it doesn't, however, then C++ wins.
A:
All the slowest (>100x) usages of Python on the shootout are scientific operations that require high GFlop/s count. You should NOT use python for those anyways. The correct way to use python is to import a module that does those calculations, and then go have a relaxing afternoon with your family. That is the pythonic way :)
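As a hedged sketch of that division of labour (assuming NumPy is installed), the loop below stays in C while Python only orchestrates:
import numpy

# pure Python: every multiply-and-add goes through the interpreter
total = sum(i * 0.5 for i in range(1000000))

# NumPy: the same arithmetic done in a couple of C-level calls
a = numpy.arange(1000000)
total = (a * 0.5).sum()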
A:
My experience is the same as the benchmarks. Python can be slow and uses more memory. I write much, much less code and it works the first time with much less debugging. Since it manages memory for me, I don't have to do any memory management, saving hours of chasing down core leaks.
What's your question?
A:
Source size is not really a sensible thing to measure. For example, the following shell script:
cat foobar
is much shorter than either its Python or C++ equivalents.
A:
Also: Psyco vs. C++.
It's still a bad comparison, since no one would do the number-crunchy stuff benchmarks tend to focus on in pure Python anyway. A better one would be comparing the performance of realistic applications, or C++ versus NumPy, to get an idea whether your program will be noticeably slower.
A:
The problem here is that you have two different languages that solve two different problems... it's like comparing C++ with assembler.
Python is for rapid application development and for when performance is a minimal concern.
C++ is not for rapid application development and inherits a legacy of speed from C - for low level programming.
A:
It's the same problem with managed and easy-to-use programming languages as always: they are slow (and sometimes memory-hungry).
These are languages for control rather than processing. If I had to write an application to transform images and had to use Python too, all the processing could be written in C++ and connected to Python via bindings, while the interface and process control would definitely be Python.
A:
I think those stats show that Python is much slower and uses more memory for those benchmarks - are you sure you're reading them the right way up?
In my experience, which is mostly with writing network- and file-system-bound programs in Python, Python isn't significantly slower in any way that matters. For that kind of work, its benefits outweigh its costs.
|
Is Python faster and lighter than C++?
|
I've always thought that Python's advantages are code readability and development speed, but time and memory usage were not as good as those of C++.
These stats struck me really hard.
What does your experience tell you about Python vs C++ time and memory usage?
|
[
"I think you're reading those stats incorrectly. They show that Python is up to about 400 times slower than C++ and with the exception of a single case, Python is more of a memory hog. When it comes to source size though, Python wins flat out.\nMy experiences with Python show the same definite trend that Python is on the order of between 10 and 100 times slower than C++ when doing any serious number crunching. There are many reasons for this, the major ones being: a) Python is interpreted, while C++ is compiled; b) Python has no primitives, everything including the builtin types (int, float, etc.) are objects; c) a Python list can hold objects of different type, so each entry has to store additional data about its type. These all severely hinder both runtime and memory consumption.\nThis is no reason to ignore Python though. A lot of software doesn't require much time or memory even with the 100 time slowness factor. Development cost is where Python wins with the simple and concise style. This improvement on development cost often outweighs the cost of additional cpu and memory resources. When it doesn't, however, then C++ wins.\n",
"All the slowest (>100x) usages of Python on the shootout are scientific operations that require high GFlop/s count. You should NOT use python for those anyways. The correct way to use python is to import a module that does those calculations, and then go have a relaxing afternoon with your family. That is the pythonic way :)\n",
"My experience is the same as the benchmarks. Python can be slow and uses more memory. I write much, much less code and it works the first time with much less debugging. Since it manages memory for me, I don't have to do any memory management, saving hours of chasing down core leaks.\nWhat's your question?\n",
"Source size is not really a sensible thing to measure. For example, the following shell script:\ncat foobar\n\nis much shorter than either its Python or C++ equivalents.\n",
"Also: Psyco vs. C++.\nIt's still a bad comparison, since noone would do the numbercrunchy stuff benchmarks tend to focus on in pure Python anyway. A better one would be comparing the performance of realistic applications, or C++ versus NumPy, to get an idea whether your program will be noticeably slower.\n",
"The problem here is that you have two different languages that solve two different problems... its like comparing C++ with assembler.\nPython is for rapid application development and for when performance is a minimal concern.\nC++ is not for rapid application development and inherits a legacy of speed from C - for low level programming.\n",
"It's the same problem with managed and easy to use programming language as always - they are slow (and sometimes memory-eating).\nThese are languages to do control rather than processing. If I would have to write application to transform images and had to use Python too all the processing could be written in C++ and connected to Python via bindings while interface and process control would be definetely Python.\n",
"I think those stats show that Python is much slower and uses more memory for those benchmarks - are you sure you're reading them the right way up?\nIn my experience, which is mostly with writing network- and file-system-bound programs in Python, Python isn't significantly slower in any way that matters. For that kind of work, its benefits outweigh its costs.\n"
] |
[
273,
143,
26,
15,
8,
6,
4,
2
] |
[] |
[] |
[
"c++",
"memory",
"performance",
"python",
"statistics"
] |
stackoverflow_0000801657_c++_memory_performance_python_statistics.txt
|
Q:
Python selecting a value in a combo box and HTTP POST
In Python, I'm trying to read the values on http://utahcritseries.com/RawResults.aspx. How can I read years other than the default of 2002?
So far, using mechanize, I've been able to reference the SELECT and list all of its available options/values but am unsure how to change its value and resubmit the form.
I'm sure this is a common issue and is frequently asked, but I'm not sure what I should even be searching for.
A:
So how about this:
from mechanize import Browser

year = "2005"

br = Browser()
br.open("http://utahcritseries.com/RawResults.aspx")
br.select_form(name="aspnetForm")
# find the year SELECT (the ASP.NET drop-down) and pick the label we want
control = br.form.find_control("ctl00$ContentPlaceHolder1$ddlSeries")
control.set_value_by_label((year,))
response2 = br.submit()

print response2.read()
A:
With problems relating to AJAX-loading of pages, use Firebug!
Install and open Firebug (it's a Firefox plugin), go to the Net page, and make sure "All" is selected. Open the URL and change the select box, and see what is sent to the server, and what is received.
It seems the catchily-named field ctl00$ContentPlaceHolder1$ddlSeries is what is responsible. Does the following work?
import urllib

postdata = {'ctl00$ContentPlaceHolder1$ddlSeries': 9}

src = urllib.urlopen(
    "http://utahcritseries.com/RawResults.aspx",
    data=urllib.urlencode(postdata)
).read()

print src
|
Python selecting a value in a combo box and HTTP POST
|
In Python, I'm trying to read the values on http://utahcritseries.com/RawResults.aspx. How can I read years other than the default of 2002?
So far, using mechanize, I've been able to reference the SELECT and list all of its available options/values but am unsure how to change its value and resubmit the form.
I'm sure this is a common issue and is frequently asked, but I'm not sure what I should even be searching for.
|
[
"So how about this:\nfrom mechanize import Browser\nyear=\"2005\"\n\nbr=Browser()\nbr.open(\"http://utahcritseries.com/RawResults.aspx\")\nbr.select_form(name=\"aspnetForm\")\ncontrol=br.form.find_control(\"ctl00$ContentPlaceHolder1$ddlSeries\")\ncontrol.set_value_by_label((year,))\nresponse2=br.submit()\n\nprint response2.read()\n\n",
"With problems relating to AJAX-loading of pages, use Firebug!\nInstall and open Firebug (it's a Firefox plugin), go to the Net page, and make sure \"All\" is selected. Open the URL and change the select box, and see what is sent to the server, and what is received.\nIt seems the catchily-named field ctl00$ContentPlaceHolder1$ddlSeries is what is responsible.. Does the following work..?\nimport urllib\n\npostdata = {'ctl00$ContentPlaceHolder1$ddlSeries': 9}\n\nsrc = urllib.urlopen(\n \"http://utahcritseries.com/RawResults.aspx\",\n data = urllib.urlencode(postdata)\n).read()\n\nprint src\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"asp.net",
"http",
"python",
"web_scraping"
] |
stackoverflow_0000769948_asp.net_http_python_web_scraping.txt
|
Q:
how can I debug more than one script in pyscripter?
I installed portable python on my USB drive, and I really like pyscripter a lot. The thing is, after I start debugging a script, the IDE kind of freezes (waiting for the code to reach a breakpoint). This means I can't do anything with it (I can't even save files). It would be very useful to be able to debug more than one script at a time.
I even tried starting a new instance of the IDE, but it just switches back to the one already open. Is there something I can do to be able to debug more than one script at a given time?
A:
To solve your problem, use the Remote Interpreter and Debugger, and PyScripter will become much more responsive. Even if something goes wrong, the IDE will not crash - just reinitialize the remote interpreter and resume working.
|
how can I debug more than one script in pyscripter?
|
I installed portable python on my USB drive, and I really like pyscripter a lot. The thing is, after I start debugging a script, the IDE kind of freezes (waiting for the code to reach a breakpoint). This means I can't do anything with it (I can't even save files). It would be very useful to be able to debug more than one script at a time.
I even tried starting a new instance of the IDE, but it just switches back to the one already open. Is there something I can do to be able to debug more than one script at a given time?
|
[
"To solve your problem, use Remote Interpreter and Debugger and PyScripter will become much more responsive. Even if something goes wrong, IDE will not crash - just reinitialize remote interpreter and resume working.\n"
] |
[
1
] |
[] |
[] |
[
"debugging",
"ide",
"pyscripter",
"python"
] |
stackoverflow_0000797754_debugging_ide_pyscripter_python.txt
|
Q:
Notifying container object: best practices
I have two classes: Account and Operator. Account contains a list of Operators. Now, whenever an Operator (in the list) receives a message, I want to notify the Account object to perform some business logic as well.
I think of three alternatives on how to achieve this:
1) Hold a reference within Operator to the container [Account] object and call methods directly. Not absolutely good because of circular references.
2) Use events. As far as I know there is no built-in event handling mechanism in Python. So, this one is a bit tricky to implement.
3) Don't send messages to Operators directly. Instead, operate only on Accounts and, within them, internally handle Operators. This one is a bit limiting because in this case I cannot pass around references to Operators.
I wonder which approach is the most advantageous from the architectural point of view. How do you usually handle this task?
It would be great if you could point out snippets in Python.
A:
You're over-thinking this. Seriously. Python isn't C++; your concerns are non-issues in Python. Just write what makes sense in your problem domain.
" Not absolutely good because of circular references."
Why not? Circularity is of no relevance here at all. Bidirectional relationships are great things. Use them. Python garbage collects them just fine without any thinking on your part.
What possible problem do you have with mutual (bidirectional) relationships?
"...operate only Accounts, and within them, internally, handler operators. This one is a bit limiting because in this case I cannot pass around references to operators.
"
What? Your Operators are Python objects, pass all you want. All Python objects are (in effect) references, don't sweat it.
What possible problem do you have with manipulating Operator objects?
A:
There is no "one-size-fits-all" solution for the Observer pattern. But usually, it's better to define an EventManager object where interested parties can register themselves for certain events and post these events whenever they happen. It simply creates less dependencies.
Note that you need to use a global EventManager instance, which can be problematic during testing or from a general OO point of view (it's a global variable). I strongly advise against passing the EventManager around all the time because that will clutter your code.
In my own code, the "key" for registering events is the class of the event. The EventManager uses a dictionary (event class -> list of observers) to know which event goes where. In the notification code, you can then use dict.get(event.__class__, ()) to find your listeners.
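A minimal sketch of that registry (the names are illustrative, not from any particular library):
class EventManager(object):
    def __init__(self):
        self.listeners = {}  # event class -> list of callbacks

    def register(self, event_class, callback):
        self.listeners.setdefault(event_class, []).append(callback)

    def post(self, event):
        # dispatch on the concrete class of the event instance
        for callback in self.listeners.get(event.__class__, ()):
            callback(event)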
A:
I would use event handling for this. You don't have to implement it yourself -- I use pydispatcher for exactly this kind of event handling, and it's always worked very well (it uses weak references internally, to avoid the circular reference problem).
Also, if you're using a gui framework, you might already have an event framework you can hook into, for example PyQt has signals and slots.
A:
>>> class Account(object):
...     def notify(self):
...         print "Account notified"
...
>>> class Operator(object):
...     def __init__(self, notifier):
...         self.notifier = notifier
...
>>> A = Account()
>>> O = Operator(A.notify)
>>> O.notifier()
Account notified
>>> import gc
>>> gc.garbage
[]
>>> del A
>>> del O
>>> gc.garbage
[]
One thing you may not know about instance methods is that they're bound when looked up using the dot syntax. In other words, saying A.notify automatically binds the self parameter of notify to A. You can then hold a reference to this function without creating uncollectable garbage.
Lastly, you can always use Kamaelia for this type of thing.
A:
There are Observer pattern snippets all over the Web. A good source of reliable code is ActiveState, e.g.:
http://code.activestate.com/recipes/131499/
|
Notifying container object: best practices
|
I have two classes: Account and Operator. Account contains a list of Operators. Now, whenever an Operator (in the list) receives a message, I want to notify the Account object to perform some business logic as well.
I think of three alternatives on how to achieve this:
1) Hold a reference within Operator to the container [Account] object and call methods directly. Not absolutely good because of circular references.
2) Use events. As far as I know there is no built-in event handling mechanism in Python. So, this one is a bit tricky to implement.
3) Don't send messages to Operators directly. Instead, operate only on Accounts and, within them, internally handle Operators. This one is a bit limiting because in this case I cannot pass around references to Operators.
I wonder which approach is the most advantageous from the architectural point of view. How do you usually handle this task?
It would be great if you could point out snippets in Python.
|
[
"You're over-thinking this. Seriously. Python isn't C++; your concerns are non-issues in Python. Just write what makes sense in your problem domain.\n\" Not absolutely good because of circular references.\"\nWhy not? Circularity is of no relevance here at all. Bidirectional relationships are great things. Use them. Python garbage collects them just fine without any thinking on your part.\nWhat possible problem do you have with mutual (birectional) relationships?\n\"...operate only Accounts, and within them, internally, handler operators. This one is a bit limiting because in this case I cannot pass around references to operators.\n\"\nWhat? Your Operators are Python objects, pass all you want. All Python objects are (in effect) references, don't sweat it. \nWhat possible problem do you have with manipulating Operator objects?\n",
"There is no \"one-size-fits-all\" solution for the Observer pattern. But usually, it's better to define an EventManager object where interested parties can register themselves for certain events and post these events whenever they happen. It simply creates less dependencies.\nNote that you need to use a global EventManager instance, which can be problematic during testing or from a general OO point of view (it's a global variable). I strongly advise against passing the EventManager around all the time because that will clutter your code.\nIn my own code, the \"key\" for registering events is the class of the event. The EventManager uses a dictionary (event class -> list of observers) to know which event goes where. In the notification code, you can then use dict.get(event.__class__, ()) to find your listeners.\n",
"I would use event handling for this. You don't have to implement it yourself -- I use pydispatcher for exactly this kind of event handling, and it's always worked very well (it uses weak references internally, to avoid the circular reference problem). \nAlso, if you're using a gui framework, you might already have an event framework you can hook into, for example PyQt has signals and slots. \n",
">>> class Account(object):\n... def notify(self):\n... print \"Account notified\"\n...\n>>> class Operator(object):\n... def __init__(self, notifier):\n... self.notifier = notifier\n...\n>>> A = Account()\n>>> O = Operator(A.notify)\n>>> O.notifier()\nAccount notified\n>>> import gc\n>>> gc.garbage\n[]\n>>> del A\n>>> del O\n>>> gc.garbage\n[]\n\nOne thing you may not know about instance methods is that they're bound when looked up when using the dot syntax. In other words saying A.notify automatically binds the self parameter of notify to A. You can then hold a reference to this function without creating uncollectable garbage.\nLastly, you can always use Kamaelia for this type of thing.\n",
"There are Observer pattern snippets all over the Web. A good source of reliable code is active state, E.G :\nhttp://code.activestate.com/recipes/131499/\n"
] |
[
5,
3,
3,
3,
0
] |
[] |
[] |
[
"architecture",
"containers",
"notifications",
"python"
] |
stackoverflow_0000801931_architecture_containers_notifications_python.txt
|
Q:
Using Twisted's twisted.web classes, how do I flush my outgoing buffers?
I've made a simple http server using Twisted, which sends the Content-Type: multipart/x-mixed-replace header. I'm using this to test an http client which I want to set up to accept a long-term stream.
The problem that has arisen is that my client request hangs until the http.Request calls self.finish(), then it receives all multipart documents at once.
Is there a way to manually flush the output buffers down to the client? I'm assuming this is why I'm not receiving the individual multipart documents.
#!/usr/bin/env python

import time

from twisted.web import http
from twisted.internet import protocol

class StreamHandler(http.Request):
    BOUNDARY = 'BOUNDARY'

    def writeBoundary(self):
        self.write("--%s\n" % (self.BOUNDARY))

    def writeStop(self):
        self.write("--%s--\n" % (self.BOUNDARY))

    def process(self):
        self.setHeader('Connection', 'Keep-Alive')
        self.setHeader('Content-Type', "multipart/x-mixed-replace;boundary=%s" % (self.BOUNDARY))

        self.writeBoundary()

        self.write("Content-Type: text/html\n")
        s = "<html>foo</html>\n"
        self.write("Content-Length: %s\n\n" % (len(s)))
        self.write(s)
        self.writeBoundary()

        time.sleep(2)

        self.write("Content-Type: text/html\n")
        s = "<html>bar</html>\n"
        self.write("Content-Length: %s\n\n" % (len(s)))
        self.write(s)
        self.writeBoundary()

        time.sleep(2)

        self.write("Content-Type: text/html\n")
        s = "<html>baz</html>\n"
        self.write("Content-Length: %s\n\n" % (len(s)))
        self.write(s)

        self.writeStop()
        self.finish()

class StreamProtocol(http.HTTPChannel):
    requestFactory = StreamHandler

class StreamFactory(http.HTTPFactory):
    protocol = StreamProtocol

if __name__ == '__main__':
    from twisted.internet import reactor
    reactor.listenTCP(8800, StreamFactory())
    reactor.run()
A:
Using time.sleep() prevents twisted from doing its job. To make it work you can't use time.sleep(); you must return control to twisted instead. The easiest way to modify your existing code to do that is by using twisted.internet.defer.inlineCallbacks, which is the best thing since sliced bread:
#!/usr/bin/env python

import time

from twisted.web import http
from twisted.internet import protocol
from twisted.internet import reactor
from twisted.internet import defer

def wait(seconds, result=None):
    """Returns a deferred that will be fired later"""
    d = defer.Deferred()
    reactor.callLater(seconds, d.callback, result)
    return d

class StreamHandler(http.Request):
    BOUNDARY = 'BOUNDARY'

    def writeBoundary(self):
        self.write("--%s\n" % (self.BOUNDARY))

    def writeStop(self):
        self.write("--%s--\n" % (self.BOUNDARY))

    @defer.inlineCallbacks
    def process(self):
        self.setHeader('Connection', 'Keep-Alive')
        self.setHeader('Content-Type', "multipart/x-mixed-replace;boundary=%s" % (self.BOUNDARY))

        self.writeBoundary()

        self.write("Content-Type: text/html\n")
        s = "<html>foo</html>\n"
        self.write("Content-Length: %s\n\n" % (len(s)))
        self.write(s)
        self.writeBoundary()

        yield wait(2)

        self.write("Content-Type: text/html\n")
        s = "<html>bar</html>\n"
        self.write("Content-Length: %s\n\n" % (len(s)))
        self.write(s)
        self.writeBoundary()

        yield wait(2)

        self.write("Content-Type: text/html\n")
        s = "<html>baz</html>\n"
        self.write("Content-Length: %s\n\n" % (len(s)))
        self.write(s)

        self.writeStop()
        self.finish()

class StreamProtocol(http.HTTPChannel):
    requestFactory = StreamHandler

class StreamFactory(http.HTTPFactory):
    protocol = StreamProtocol

if __name__ == '__main__':
    reactor.listenTCP(8800, StreamFactory())
    reactor.run()
That works in Firefox; I guess it answers your question correctly.
A:
The reason seems to be explained in the FAQ for Twisted. The Twisted server does not actually write anything to the underlying connection until the reactor thread is free to run, in this case at the end of your method. However, you can use reactor.doSelect(timeout) before each of your sleeps to make the reactor write what it has to the connection.
|
Using Twisted's twisted.web classes, how do I flush my outgoing buffers?
|
I've made a simple http server using Twisted, which sends the Content-Type: multipart/x-mixed-replace header. I'm using this to test an http client which I want to set up to accept a long-term stream.
The problem that has arisen is that my client request hangs until the http.Request calls self.finish(), then it receives all multipart documents at once.
Is there a way to manually flush the output buffers down to the client? I'm assuming this is why I'm not receiving the individual multipart documents.
#!/usr/bin/env python

import time

from twisted.web import http
from twisted.internet import protocol

class StreamHandler(http.Request):
    BOUNDARY = 'BOUNDARY'

    def writeBoundary(self):
        self.write("--%s\n" % (self.BOUNDARY))

    def writeStop(self):
        self.write("--%s--\n" % (self.BOUNDARY))

    def process(self):
        self.setHeader('Connection', 'Keep-Alive')
        self.setHeader('Content-Type', "multipart/x-mixed-replace;boundary=%s" % (self.BOUNDARY))

        self.writeBoundary()

        self.write("Content-Type: text/html\n")
        s = "<html>foo</html>\n"
        self.write("Content-Length: %s\n\n" % (len(s)))
        self.write(s)
        self.writeBoundary()

        time.sleep(2)

        self.write("Content-Type: text/html\n")
        s = "<html>bar</html>\n"
        self.write("Content-Length: %s\n\n" % (len(s)))
        self.write(s)
        self.writeBoundary()

        time.sleep(2)

        self.write("Content-Type: text/html\n")
        s = "<html>baz</html>\n"
        self.write("Content-Length: %s\n\n" % (len(s)))
        self.write(s)

        self.writeStop()
        self.finish()

class StreamProtocol(http.HTTPChannel):
    requestFactory = StreamHandler

class StreamFactory(http.HTTPFactory):
    protocol = StreamProtocol

if __name__ == '__main__':
    from twisted.internet import reactor
    reactor.listenTCP(8800, StreamFactory())
    reactor.run()
|
[
"Using time.sleep() prevents twisted from doing its job. To make it work you can't use time.sleep(), you must return control to twisted instead. The easiest way to modify your existing code to do that is by using twisted.internet.defer.inlineCallbacks, which is the next best thing since sliced bread:\n#!/usr/bin/env python\n\nimport time\n\nfrom twisted.web import http\nfrom twisted.internet import protocol\nfrom twisted.internet import reactor\nfrom twisted.internet import defer\n\ndef wait(seconds, result=None):\n \"\"\"Returns a deferred that will be fired later\"\"\"\n d = defer.Deferred()\n reactor.callLater(seconds, d.callback, result)\n return d\n\nclass StreamHandler(http.Request):\n BOUNDARY = 'BOUNDARY'\n\n def writeBoundary(self):\n self.write(\"--%s\\n\" % (self.BOUNDARY))\n\n def writeStop(self):\n self.write(\"--%s--\\n\" % (self.BOUNDARY))\n\n @defer.inlineCallbacks\n def process(self):\n self.setHeader('Connection', 'Keep-Alive')\n self.setHeader('Content-Type', \"multipart/x-mixed-replace;boundary=%s\" % (self.BOUNDARY))\n\n self.writeBoundary()\n\n self.write(\"Content-Type: text/html\\n\")\n s = \"<html>foo</html>\\n\"\n self.write(\"Content-Length: %s\\n\\n\" % (len(s)))\n self.write(s)\n self.writeBoundary()\n\n\n yield wait(2)\n\n self.write(\"Content-Type: text/html\\n\")\n s = \"<html>bar</html>\\n\"\n self.write(\"Content-Length: %s\\n\\n\" % (len(s)))\n self.write(s)\n self.writeBoundary()\n\n yield wait(2)\n\n self.write(\"Content-Type: text/html\\n\")\n s = \"<html>baz</html>\\n\"\n self.write(\"Content-Length: %s\\n\\n\" % (len(s)))\n self.write(s)\n\n self.writeStop()\n\n self.finish()\n\n\nclass StreamProtocol(http.HTTPChannel):\n requestFactory = StreamHandler\n\nclass StreamFactory(http.HTTPFactory):\n protocol = StreamProtocol\n\n\nif __name__ == '__main__': \n reactor.listenTCP(8800, StreamFactory())\n reactor.run()\n\nThat works in firefox, I guess it answers your question correctly.\n",
"The reason seems to be explained in the FAQ for twisted. The twisted server does not actually write anything to the underlining connection until the reactor thread is free to run, in this case at the end of your method. However you can use reactor.doSelect(timeout) before each of your sleeps to make the reactor write what it has to the connection.\n"
] |
[
10,
1
] |
[] |
[] |
[
"multipart_mixed_replace",
"python",
"twisted"
] |
stackoverflow_0000776631_multipart_mixed_replace_python_twisted.txt
|
Q:
Database Reporting Services in Django or Python
I am wondering if there are any Django-based, or even Python-based, reporting services à la JasperReports or SQL Server Reporting Services?
Basically, I would love to be able to create reports and send them out as emails as CSV or HTML or PDF without having to code the reports. Even if I have to code the report I wouldn't mind, but the whole framework with schedules and so on would be nice!
PS. I know I could use Django apps to do it, but I was hoping there were integrated solutions, or even projects such as Pinax or Satchmo, which bring together the apps needed.
PPS: It would have to work off Postgres
A:
"I would love to be able to create reports ... without having to code the reports"
So would I. Sadly, however, each report seems to be unique and require custom code.
From Django model to CSV is easy. Start there with a few of your reports.
import sys
import csv
from myApp.models import This, That, TheOther

def parseCommandLine():
    # set up optparse to get the report query parameters
    pass

def main():
    # DictWriter expects each row as a dict keyed by the column names
    wtr = csv.DictWriter(sys.stdout, ["Col1", "Col2", "Col3"])
    this, that = parseCommandLine()
    thisList = This.objects.filter(name=this, that__name=that)
    for obj in thisList:
        wtr.writerow({"Col1": obj.col1, "Col2": obj.that.col2, "Col3": obj.theOther.col3})

if __name__ == "__main__":
    main()
HTML is pretty easy -- Django has an HTML template language. Rather than render_to_response, you simply render your template and write it to stdout. And the core of the algorithm, interestingly, is very similar to writing a CSV. Similar enough that -- without much cleverness -- you should have a design pattern that does both.
Once you have the CSV working, add the HTML using Django's templates.
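As a sketch of that step (the template path is hypothetical; render_to_string lives in django.template.loader):
import sys
from django.template.loader import render_to_string

def render_report(rows):
    # rows is the same queryset built in the CSV version
    html = render_to_string('reports/this_report.html', {'rows': rows})
    sys.stdout.write(html)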
PDF's are harder, because you have to actually work out the formatting in some detail. There are a lot of Python libraries for this. Interestingly, however, the overall pattern for PDF writing is very similar to CSV and HTML writing.
Emailing means using Python's smtplib directly or Django's email package. This isn't too hard. All the pieces are there, you just need to email the output files produced above to some distribution list.
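For example, a sketch using Django's mail API (the addresses are made up), attaching the CSV produced earlier:
from django.core.mail import EmailMessage

def mail_report(csv_text):
    msg = EmailMessage('Nightly report', 'Report attached.',
                       'reports@example.com', ['team@example.com'])
    msg.attach('report.csv', csv_text, 'text/csv')
    msg.send()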
Scheduling takes a little thinking to make best use of crontab. This -- perhaps -- is the hardest part of the job.
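A single crontab entry per report is often all it takes; for example (paths hypothetical):
# run the daily CSV report at 06:00
0 6 * * * /usr/bin/python /home/me/reports/daily_report.py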
A:
I just thought after a fair bit of investigation I would report my findings...
http://code.google.com/p/django-reporting/ - I think that this project looks like an awesome candidate for a lot of the functionality I require, at least in the ability to create reports without too much code. Unfortunately it requires Django 1.1, which as of this writing (29th April 2009) has not been released.
http://code.google.com/p/django-cron/ - Looks promising for scheduling of jobs without cron access
http://www.xhtml2pdf.com/ - Could be used, or ReportLab's PDF libraries, for conversion of HTML to PDF
All these together with Django's Email functionality could make a nice Reporting System.
|
Database Reporting Services in Django or Python
|
I am wondering if there are any Django-based, or even Python-based, reporting services à la JasperReports or SQL Server Reporting Services?
Basically, I would love to be able to create reports and send them out as emails as CSV or HTML or PDF without having to code the reports. Even if I have to code the report I wouldn't mind, but the whole framework with schedules and so on would be nice!
PS. I know I could use Django apps to do it, but I was hoping there were integrated solutions, or even projects such as Pinax or Satchmo, which bring together the apps needed.
PPS: It would have to work off Postgres
|
[
"\"I would love to be able to create reports ... without having to code the reports\" \nSo would I. Sadly, however, each report seems to be unique and require custom code.\nFrom Django model to CSV is easy. Start there with a few of your reports.\nimport csv\nfrom myApp.models import This, That, TheOther\ndef parseCommandLine():\n # setup optparse to get report query parameters\ndef main():\n wtr= csv.DictWriter( sys.stdout, [\"Col1\", \"Col2\", \"Col3\"] )\n this, that = parseCommandLine()\n thisList= This.objects.filter( name=this, that__name=that )\n for object in thisList:\n write.writerow( object.col1, object.that.col2, object.theOther.col3 )\nif __name__ == \"__main__\":\n main()\n\nHTML is pretty easy -- Django has an HTML template language. Rather than render_to_response, you simply render your template and write it to stdout. And the core of the algorithm, interestingly, is very similar to writing a CSV. Similar enough that -- without much cleverness -- you should have a design pattern that does both.\nOnce you have the CSV working, add the HTML using Django's templates.\nPDF's are harder, because you have to actually work out the formatting in some detail. There are a lot of Python libraries for this. Interestingly, however, the overall pattern for PDF writing is very similar to CSV and HTML writing.\nEmailing means using Python's smtplib directly or Django's email package. This isn't too hard. All the pieces are there, you just need to email the output files produced above to some distribution list.\nScheduling takes a little thinking to make best use of crontab. This -- perhaps -- is the hardest part of the job.\n",
"I just thought after a fair bit of investigation I would report my findings...\nhttp://code.google.com/p/django-reporting/ - I think that this project, looks like an awesome candidate for alot of the functionality I require. Unfortunately its Django 1.1 which as of this writing (29th April 2009) has not been released.At least in the ability to create reports without too much code.\nhttp://code.google.com/p/django-cron/ - Look promising for scheduling of jobs without cron access\nhttp://www.xhtml2pdf.com/ - Could be used or ReportLabs PDF Libraries for conversion of HTML to PDF\nAll these together with Django's Email functionality could make a nice Reporting System.\n"
] |
[
4,
3
] |
[] |
[] |
[
"django",
"python",
"reporting_services"
] |
stackoverflow_0000793130_django_python_reporting_services.txt
|
Q:
SQLAlchemy many-to-many orphan deletion
I'm trying to use SQLAlchemy to implement a basic users-groups model where users can have multiple groups and groups can have multiple users.
When a group becomes empty, I want the group to be deleted (along with other things associated with the group; fortunately, SQLAlchemy's cascade works fine with these simpler situations).
The problem is that cascade='all, delete-orphan' doesn't do exactly what I want; instead of deleting the group when the group becomes empty, it deletes the group when any member leaves the group.
Adding triggers to the database works fine for deleting a group when it becomes empty, except that triggers seem to bypass SQLAlchemy's cascade processing so things associated with the group don't get deleted.
What is the best way to delete a group when all of its members leave, and have this deletion cascade to related entities?
I understand that I could do this manually by finding every place in my code where a user can leave a group and then doing the same thing as the trigger; however, I'm afraid that I would miss places in the code (and I'm lazy).
A:
The way I've generally handled this is to have a function on your user or group called leave_group. When you want a user to leave a group, you call that function, and you can add any side effects you want into there. In the long term, this makes it easier to add more and more side effects. (For example when you want to check that someone is allowed to leave a group).
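A sketch of that idea (model and attribute names are illustrative; it assumes an active SQLAlchemy session is passed in):
class Group(object):
    def leave_group(self, user, session):
        self.users.remove(user)
        if not self.users:
            # last member gone: delete the group so the usual
            # cascades clean up everything related to it
            session.delete(self)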
A:
I think you want cascade='save, update, merge, expunge, refresh, delete-orphan'. This will prevent the "delete" cascade (which you get from "all") but maintain the "delete-orphan", which is what you're looking for, I think (delete when there are no more parents).
A:
I had the same problem about 3 months ago: I had a Post/Tags relation and wanted to delete unused Tags. I asked on IRC and SA's author told me that cascades on many-to-many relations are not supported, which kind of makes sense since there is no "parent" in many-to-many.
But extending SA is easy; you can probably use an AttributeExtension to check whether the group became empty when it is removed from a User, and delete it from there.
A:
Could you post a sample of your table and mapper set up? It might be easier to spot what is going on.
Without seeing the code it is hard to tell, but perhaps there is something wrong with the direction of the relationship?
|
SQLAlchemy many-to-many orphan deletion
|
I'm trying to use SQLAlchemy to implement a basic users-groups model where users can have multiple groups and groups can have multiple users.
When a group becomes empty, I want the group to be deleted (along with other things associated with the group; fortunately, SQLAlchemy's cascade works fine with these simpler situations).
The problem is that cascade='all, delete-orphan' doesn't do exactly what I want; instead of deleting the group when the group becomes empty, it deletes the group when any member leaves the group.
Adding triggers to the database works fine for deleting a group when it becomes empty, except that triggers seem to bypass SQLAlchemy's cascade processing so things associated with the group don't get deleted.
What is the best way to delete a group when all of its members leave, and have this deletion cascade to related entities?
I understand that I could do this manually by finding every place in my code where a user can leave a group and then doing the same thing as the trigger; however, I'm afraid that I would miss places in the code (and I'm lazy).
|
[
"The way I've generally handled this is to have a function on your user or group called leave_group. When you want a user to leave a group, you call that function, and you can add any side effects you want into there. In the long term, this makes it easier to add more and more side effects. (For example when you want to check that someone is allowed to leave a group).\n",
"I think you want cascade='save, update, merge, expunge, refresh, delete-orphan'. This will prevent the \"delete\" cascade (which you get from \"all\") but maintain the \"delete-orphan\", which is what you're looking for, I think (delete when there are no more parents).\n",
"I had the same problem about 3 months ago, i have a Post/Tags relation and wanted to delete unused Tags. I asked on irc and SA's author told me that cascades on many-to-many relations are not supported, which kind of makes sense since there is no \"parent\" in many-to-many.\nBut extending SA is easy, you can probably use a AttributeExtension to check if the group became empty when is removed from a User and delete it from there.\n",
"Could you post a sample of your table and mapper set up? It might be easier to spot what is going on.\nWithout seeing the code it is hard to tell, but perhaps there is something wrong with the direction of the relationship?\n"
] |
[
3,
3,
2,
0
] |
[] |
[] |
[
"python",
"sqlalchemy"
] |
stackoverflow_0000740630_python_sqlalchemy.txt
|
Q:
python - readable list of objects
This is probably a commonly asked question, but I could do with help on this. I have a list of class objects and I'm trying to figure out how to print an item from that list. Rather than displaying:
<__main__.evolutions instance at 0x01B8EA08>
I would like it to show a selected attribute of a chosen object of the class. Can anyone help with that?
A:
If you want to just display a particular attribute of each class instance, you can do
print([obj.attr for obj in my_list_of_objs])
Which will print out the attr attribute of each object in the list my_list_of_objs. Alternatively, you can define the __str__() method for your class, which specifies how to convert your objects into strings:
class evolutions:
    def __str__(self):
        # return string representation of self
print(my_list_of_objs) # each object is now printed out according to its __str__() method
A:
Checkout the __str__() and __repr__() methods.
See http://docs.python.org/reference/datamodel.html#object.__repr__
A:
You'll want to override your class's "to string" method:
class Foo:
    def __str__(self):
        return "String representation of me"
A:
You need to override either the __str__ or __repr__ method of your object[s]
A:
My preference is to define a __repr__ function that can reconstruct the object (whenever possible). Unless you have a __str__ as well, both repr() and str() will call this method.
So for example
class Foo(object):
    def __init__(self, a, b):
        self.a = a
        self.b = b
    def __repr__(self):
        return 'Foo(%r, %r)' % (self.a, self.b)
Doing it this way, you have a readable string version, and as a bonus it can be eval'ed to get a copy of the original object.
x = Foo(5, 1 + 1)
y = eval(str(x))
print y
-> Foo(5, 2)
|
python - readable list of objects
|
This is probably a commonly asked question, but I could do with help on this. I have a list of class objects, and I'm trying to figure out how to print an item from that list. Rather than displaying the default
<__main__.evolutions instance at 0x01B8EA08>
I'd like it to show a selected attribute of a chosen object of the class. Can anyone help with that?
|
[
"If you want to just display a particular attribute of each class instance, you can do\nprint([obj.attr for obj in my_list_of_objs])\n\nWhich will print out the attr attribute of each object in the list my_list_of_objs. Alternatively, you can define the __str__() method for your class, which specifies how to convert your objects into strings:\nclass evolutions:\n def __str__(self):\n # return string representation of self\n\nprint(my_list_of_objs) # each object is now printed out according to its __str__() method\n\n",
"Checkout the __str__() and __repr__() methods.\nSee http://docs.python.org/reference/datamodel.html#object.__repr__\n",
"You'll want to override your class's \"to string\" method:\nclass Foo:\n def __str__(self):\n return \"String representation of me\"\n\n",
"You need to override either the __str__, or __repr__ methods of your object[s]\n",
"My preference is to define a __repr__ function that can reconstruct the object (whenever possible). Unless you have a __str__ as well, both repr() and str() will call this method.\nSo for example\nclass Foo(object):\n def __init__(self, a, b):\n self.a = a\n self.b = b\n def __repr__(self):\n return 'Foo(%r, %r)' % (self.a, self.b)\n\nDoing it this way, you have a readable string version, and as a bonus it can be eval'ed to get a copy of the original object.\nx = Foo(5, 1 + 1)\ny = eval(str(x))\n\nprint y\n-> Foo(5, 2)\n\n"
] |
[
8,
4,
4,
2,
1
] |
[] |
[] |
[
"list",
"python"
] |
stackoverflow_0000444058_list_python.txt
|
Q:
Inserting multiple model instances using a single db.put() on Google App Engine
Edit: Sorry I didn't clarify this, it's a Google App Engine related question.
According to this, I can give db.put() a list of model instances and ask it to input them all into the datastore. However, I haven't been able to do this successfully. I'm still a little new with Python, so go easy on me.
list_of_models = []
for i in range(0, len(items) - 1):  # note: this stops one item short; range(len(items)) would cover them all
point = ModelName()
... put the model info here ...
list_of_models.append(point)
db.put(list_of_models)
Could anyone point out where I'm going wrong?
A:
Please define what you mean by "going wrong" -- the tiny pieces of code you're showing could perfectly well be part of an app that's quite "right". Consider e.g.:
class Hello(db.Model):
name = db.StringProperty()
when = db.DateTimeProperty()
class MainHandler(webapp.RequestHandler):
def get(self):
self.response.out.write('Hello world!')
one = Hello(name='Uno', when=datetime.datetime.now())
two = Hello(name='Due', when=datetime.datetime.now())
both = [one, two]
db.put(both)
this does insert the two entities correctly each time that get method is called, for example if a sample app continues with:
def main():
application = webapp.WSGIApplication([('/', MainHandler)],
debug=True)
wsgiref.handlers.CGIHandler().run(application)
if __name__ == '__main__':
main()
as in a typical "hello world" app engine app. You can verify the correct addition of both entities with the datastore viewer of the sdk console, or of course by adding another handler which gets the entities back and shows them, etc etc.
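(A minimal sketch of such a verification handler, using only the Hello model defined above; where you mount it is up to you:)
class ListHandler(webapp.RequestHandler):
    def get(self):
        # read the stored entities back out and display them
        for h in Hello.all():
            self.response.out.write('%s at %s<br>' % (h.name, h.when))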
So please clarify!
|
Inserting multiple model instances using a single db.put() on Google App Engine
|
Edit: Sorry I didn't clarify this, it's a Google App Engine related question.
According to this, I can give db.put() a list of model instances and ask it to input them all into the datastore. However, I haven't been able to do this successfully. I'm still a little new with Python, so go easy on me.
list_of_models = []
for i in range(0, len(items) - 1):  # note: this stops one item short; range(len(items)) would cover them all
point = ModelName()
... put the model info here ...
list_of_models.append(point)
db.put(list_of_models)
Could anyone point out where I'm going wrong?
|
[
"Please define what you mean by \"going wrong\" -- the tiny pieces of code you're showing could perfectly well be part of an app that's quite \"right\". Consider e.g.:\nclass Hello(db.Model):\n name = db.StringProperty()\n when = db.DateTimeProperty()\n\nclass MainHandler(webapp.RequestHandler):\n\n def get(self):\n self.response.out.write('Hello world!')\n one = Hello(name='Uno', when=datetime.datetime.now())\n two = Hello(name='Due', when=datetime.datetime.now())\n both = [one, two]\n db.put(both)\n\nthis does insert the two entities correctly each time that get method is called, for example if a sample app continues with:\ndef main():\n application = webapp.WSGIApplication([('/', MainHandler)],\n debug=True)\n wsgiref.handlers.CGIHandler().run(application)\n\n\nif __name__ == '__main__':\n main()\n\nas in a typical \"hello world\" app engine app. You can verify the correct addition of both entities with the datastore viewer of the sdk console, or of course by adding another handler which gets the entities back and shows them, etc etc.\nSo please clarify!\n"
] |
[
4
] |
[] |
[] |
[
"google_app_engine",
"python"
] |
stackoverflow_0000803517_google_app_engine_python.txt
|
Q:
How to get last inserted item key in Google App Engine
I am working with Google App Engine and Python.
I have a model with Items.
Immediately after I insert an item with item.put()
I want to get its key and redirect to a page using this key.
Something like:
redirectUrl = "/view/key/%s/" % item.key
self.redirect(redirectUrl)
A:
Also, item.put() returns the key as the result, so it's hardly ever necessary to fetch that key immediately again -- just change your sequence, e.g
item.put()
redirectUrl = "/view/key/%s/" % item.key()
into
k = item.put()
redirectUrl = "/view/key/%s/" % k
A:
After you did your put() you can run
item.key().id()
Getting the id() is slightly safer than just using key() directly, since you'd be indirectly calling __str__(), which may not happen in a non-string context.
The other option is to call id_or_name(), but then you probably would already know what the name is in that case.
A:
Thanks for the initiative Scott Kirkwood.
I was actually missing the ()
redirectUrl = "/view/key/%s/" % item.key()
self.redirect(redirectUrl)
Good to know that in the Google datastore you don't need anything like SCOPE_IDENTITY; you can just get item.key() right after item.put().
|
How to get last inserted item key in Google App Engine
|
I am working with Google App Engine and Python.
I have a model with Items.
Immediately after I insert an item with item.put()
I want to get its key and redirect to a page using this key.
Something like:
redirectUrl = "/view/key/%s/" % item.key
self.redirect(redirectUrl)
|
[
"Also, item.put() returns the key as the result, so it's hardly ever necessary to fetch that key immediately again -- just change your sequence, e.g\n item.put()\n redirectUrl = \"/view/key/%s/\" % item.key()\n\ninto\n k = item.put()\n redirectUrl = \"/view/key/%s/\" % k\n\n",
"After you did you put() you can run\nitem.key().id()\n\nGetting the id() is slightly safer than just using key() directly, since you'd be indirectly calling __str__(), which may not happen in a non strincg context.\nThe other options is to call id_or_name(), but then you probably would already know what the name is in that case.\n",
"Thanks for the initiative Scott Kirkwood. \nI was actually missing the ()\nredirectUrl = \"/view/key/%s/\" % item.key()\nself.redirect(redirectUrl)\n\nGood to know that in google datastore you don't need to use anything like Scope_identity, but you can just get the item.key() just after item.put()..\n"
] |
[
4,
1,
1
] |
[] |
[] |
[
"google_app_engine",
"python"
] |
stackoverflow_0000799803_google_app_engine_python.txt
|
Q:
Django caching - can it be done pre-emptively?
I have a Django view, which receives part of its data from an external website, which I parse using urllib2/BeautifulSoup.
This operation is rather expensive, so I cache it using the low-level cache API for ~5 minutes. However, each user who accesses the site after the cached data expires will receive a significant delay of a few seconds while I go to the external site to parse the new data.
Is there any way to load the new data lazily so that no user will ever get that kind of delay? Or is this unavoidable?
Please note that I am on a shared hosting server, so keep that in mind with your answers.
EDIT: thanks for the help so far. However, I'm still unsure as to how I accomplish this with the python script I will be calling. A basic test I did shows that the Django cache is not global: if I call it from an external script, it does not see the cache data used inside the framework. Suggestions?
Another EDIT: come to think of it, this is probably because I am still using the local-memory cache. I suspect that if I move the cache to memcached, the DB, or whatever, this will be solved.
A:
So you want to schedule something to run at a regular interval? At the cost of some CPU time, you can use this simple app.
Alternatively, if you can use it, the cron job for every 5 minutes is:
*/5 * * * * /path/to/project/refresh_cache.py
Web hosts provide different ways of setting these up. For cPanel, use the Cron Manager. For Google App Engine, use cron.yaml. For all of these, you'll need to set up the environment in refresh_cache.py first.
By the way, responding to a user's request is considered lazy caching. This is pre-emptive caching. And don't forget to cache long enough for the page to be recreated!
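(A minimal sketch of what refresh_cache.py might look like; 'mysite.settings' and fetch_and_parse() are assumed names, and this presumes a shared cache backend such as memcached rather than the per-process local-memory cache:)
#!/usr/bin/env python
import os
os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'  # assumed settings module

from django.core.cache import cache
from mysite.external import fetch_and_parse  # hypothetical urllib2/BeautifulSoup helper

# cache a bit longer than the cron interval so the entry never expires between runs
cache.set('external_data', fetch_and_parse(), 60 * 6)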
A:
"I'm still unsure as to how I accomplish this with the python script I will be calling. "
The issue is that your "significant delay of a few seconds while I go to the external site to parse the new data" has nothing to do with Django cache at all.
You can cache it everywhere, and when you go to reparse the external site, there's a delay. The trick is to NOT parse the external site while a user is waiting for their page.
The trick is to parse the external site before a user asks for a page. Since you can't go back in time, you have to periodically parse the external site and leave the parsed results in a local file or a database or something.
When a user makes a request you already have the results fetched and parsed, and all you're doing is presenting.
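(A sketch of that idea under stated assumptions: the URL, filename, and extraction are placeholders; the point is simply that the fetch runs from a cron job, never in the request path:)
import json
import os
import urllib2
from BeautifulSoup import BeautifulSoup  # BeautifulSoup 3 import style

html = urllib2.urlopen('http://example.com/source-page').read()
soup = BeautifulSoup(html)
data = {'headline': soup.title.string}  # stand-in for the real extraction

# write atomically so a request never reads a half-written file
with open('parsed.json.tmp', 'w') as f:
    json.dump(data, f)
os.rename('parsed.json.tmp', 'parsed.json')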
A:
I have no proof, but I've read BeautifulSoup is slow and consumes a lot of memory. You may want to look at using the lxml module instead. lxml is supposed to be much faster and efficient, and can do much more than BeautifulSoup.
Of course, the parsing probably isn't your bottleneck here; the external I/O is.
First off, use memcached!
Then, one strategy that can be used is as follows:
Your cached object, called A, is stored in the cache with a dynamic key (A_<timestamp>, for example).
Another cached object holds the current key for A, called A_key.
Your app would then get the key for A by first getting the value at A_key
A periodic process would populate the cache with the A_<timestamp> keys and upon completion, change the value at A_key to the new key
Using this method, all users every 5 minutes won't have to wait for the cache to be updated, they'll just get older versions until the update happens.
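(Roughly, in Django cache terms; the key names, timeouts, and build_expensive_object() are illustrative assumptions:)
import time
from django.core.cache import cache

def refresh_a():
    # run from a periodic job, never from a user request
    new_key = 'A_%d' % int(time.time())
    cache.set(new_key, build_expensive_object(), 60 * 15)  # hypothetical builder
    cache.set('A_key', new_key, 60 * 15)  # flip readers over only once the new copy exists

def get_a():
    key = cache.get('A_key')
    return cache.get(key) if key else None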
A:
You can also use a Python script to call your view and write the response to a file, then serve it statically with lighttpd, for example:
from django.http import HttpRequest
from django.core.urlresolvers import resolve
from django.conf import settings

request = HttpRequest()
request.path = url # the url of your view
(detail_func, foo, params) = resolve(url)
params['gmap_key'] = settings.GMAP_KEY_STATIC
detail = detail_func(request, **params)
out = open(dir + "index.html", 'w')  # dir is the target directory path
out.write(detail.content)
out.close()
then call your script from a cron job
|
Django caching - can it be done pre-emptively?
|
I have a Django view, which receives part of its data from an external website, which I parse using urllib2/BeautifulSoup.
This operation is rather expensive, so I cache it using the low-level cache API for ~5 minutes. However, each user who accesses the site after the cached data expires will receive a significant delay of a few seconds while I go to the external site to parse the new data.
Is there any way to load the new data lazily so that no user will ever get that kind of delay? Or is this unavoidable?
Please note that I am on a shared hosting server, so keep that in mind with your answers.
EDIT: thanks for the help so far. However, I'm still unsure as to how I accomplish this with the python script I will be calling. A basic test I did shows that the Django cache is not global: if I call it from an external script, it does not see the cache data used inside the framework. Suggestions?
Another EDIT: come to think of it, this is probably because I am still using the local-memory cache. I suspect that if I move the cache to memcached, the DB, or whatever, this will be solved.
|
[
"So you want to schedule something to run at a regular interval? At the cost of some CPU time, you can use this simple app.\nAlternatively, if you can use it, the cron job for every 5 minutes is:\n*/5 * * * * /path/to/project/refresh_cache.py\n\nWeb hosts provide different ways of setting these up. For cPanel, use the Cron Manager. For Google App Engine, use cron.yaml. For all of these, you'll need to set up the environment in refresh_cache.py first.\nBy the way, responding to a user's request is considered lazy caching. This is pre-emptive caching. And don't forget to cache long enough for the page to be recreated!\n",
"\"I'm still unsure as to how I accomplish this with the python script I will be calling. \"\nThe issue is that your \"significant delay of a few seconds while I go to the external site to parse the new data\" has nothing to do with Django cache at all.\nYou can cache it everywhere, and when you go to reparse the external site, there's a delay. The trick is to NOT parse the external site while a user is waiting for their page.\nThe trick is to parse the external site before a user asks for a page. Since you can't go back in time, you have to periodically parse the external site and leave the parsed results in a local file or a database or something.\nWhen a user makes a request you already have the results fetched and parsed, and all you're doing is presenting.\n",
"I have no proof, but I've read BeautifulSoup is slow and consumes a lot of memory. You may want to look at using the lxml module instead. lxml is supposed to be much faster and efficient, and can do much more than BeautifulSoup.\nOf course, the parsing probably isn't your bottleneck here; the external I/O is. \nFirst off, use memcached!\nThen, one strategy that can be used is as follows:\n\nYour cached object, called A, is stored in the cache with a dynamic key (A_<timestamp>, for example).\nAnother cached object holds the current key for A, called A_key.\nYour app would then get the key for A by first getting the value at A_key\nA periodic process would populate the cache with the A_<timestamp> keys and upon completion, change the value at A_key to the new key\n\nUsing this method, all users every 5 minutes won't have to wait for the cache to be updated, they'll just get older versions until the update happens.\n",
"You can also use a python script to call your view and write it to a file, then deliver it staticaly with lightpd for example :\nrequest = HttpRequest()\nrequest.path = url # the url of your view\n(detail_func, foo, params) = resolve(url)\nparams['gmap_key'] = settings.GMAP_KEY_STATIC\ndetail = detail_func(request, **params)\nout = open(dir + \"index.html\", 'w')\nout.write(detail.content)\nout.close()\n\nthen call your script with a cron\n"
] |
[
8,
4,
4,
0
] |
[] |
[] |
[
"caching",
"django",
"python"
] |
stackoverflow_0000797773_caching_django_python.txt
|
Q:
How can I get the results of a Perl script in Python script?
I have one script in Perl and another in Python. I need to get the results of the Perl script into the Python script and then produce the final report. The results from Perl can be a scalar, a hash, or an array.
Please let me know as soon as possible regarding this.
A:
Use the subprocess module to run your Perl script and capture its output.
You can format the output however you choose in either script, and use Python to print the final report. For example, your Perl script can output XML which can be parsed by the Python script and then printed using a different format.
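(For instance, a minimal sketch; the script name is a placeholder:)
import subprocess

# run the Perl script and capture whatever it prints to stdout
proc = subprocess.Popen(['perl', 'report_data.pl'], stdout=subprocess.PIPE)
output, _ = proc.communicate()
print output  # parse this in whatever format the two scripts agree on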
A:
Take a look at PyYAML in Python and YAML in Perl.
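(Sketching only the Python half, on the assumption that the Perl script prints its data with YAML's Dump(); the script name is a placeholder:)
import subprocess
import yaml  # PyYAML

proc = subprocess.Popen(['perl', 'report_data.pl'], stdout=subprocess.PIPE)
results = yaml.safe_load(proc.communicate()[0])
# a dumped Perl hash arrives as a dict, an array as a list, a scalar as a plain value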
A:
You could serialize the results to some sort of string format and print that to standard output in the Perl script. Then, from Python, call the Perl script and capture its stdout into a variable.
|
How can I get the results of a Perl script in Python script?
|
I have one script in Perl and another in Python. I need to get the results of the Perl script into the Python script and then produce the final report. The results from Perl can be a scalar, a hash, or an array.
Please let me know as soon as possible regarding this.
|
[
"Use the subprocess module to run your Perl script to capture its output:\nYou can format the output however you choose in either script, and use Python to print the final report. For example: your Perl script can output XML which can be parsed by the Python script and then printed using a different format.\n",
"Take a look at PyYAML in Python and YAML in Perl.\n",
"You could serialize the results to some sort of a string format, print this to standard output in the Perl script. Then, from python call the perl script and redirect the results of stdout to a variable in python. \n"
] |
[
6,
3,
2
] |
[] |
[] |
[
"perl",
"python"
] |
stackoverflow_0000805160_perl_python.txt
|
Q:
Learning Graphical Layout Algorithms
During my day-to-day work, I tend to come across data that I want to visualize in a custom manner: for example, automatically creating a call graph similar to a UML sequence diagram, displaying digraphs, or visualizing data from a database (scatter plots, 3D contours, etc.).
For graphs, I tend to use GraphViz. For UML-like plots and 3D plots, I would like to write my own software to run under Linux.
I typically program in C++ and prototype in Python.
What books have people used to learn these basic graphical algorithms? I've seen some nice posts on force-directed layout and various block-style layout algorithms based upon Cutting and Packing problems; these are great starts, but I would like a more beginner-oriented guide and overview before I jump in.
Directed Graph Layout
Force directed layout
A:
Here are some sources,
Graphic Layout and Design (Paperback).
Active Layout Engine: Algorithms and Applications in Variable Data Printing
|
Learning Graphical Layout Algorithms
|
During my day-to-day work, I tend to come across data that I want to visualize in a custom manner: for example, automatically creating a call graph similar to a UML sequence diagram, displaying digraphs, or visualizing data from a database (scatter plots, 3D contours, etc.).
For graphs, I tend to use GraphViz. For UML-like plots and 3D plots, I would like to write my own software to run under Linux.
I typically program in C++ and prototype in Python.
What books have people used to learn these basic graphical algorithms? I've seen some nice posts on force-directed layout and various block-style layout algorithms based upon Cutting and Packing problems; these are great starts, but I would like a more beginner-oriented guide and overview before I jump in.
Directed Graph Layout
Force directed layout
|
[
"Here are some sources,\n\nGraphic Layout and Design (Paperback).\nActive Layout Engine: Algorithms and Applications in Variable\nData Printing\n\n"
] |
[
2
] |
[] |
[] |
[
"c++",
"graphics",
"layout",
"python",
"visualization"
] |
stackoverflow_0000805356_c++_graphics_layout_python_visualization.txt
|
Q:
Sphinx automated image numbering/captions?
Is there a way to automatically generate an image/figure caption using sphinx?
I currently have rest-sphinx files I'm converting to html and (latex)pdf using sphinx.
I'd like an easy way for users to reference a specific image in the resulting html/pdf files.
For example, if a user is referring to the documentation in an email, "In 'Image 65' it says XXX, but this doesn't work for me".
I've tried using figure where it appears to allow you to apply a caption to an image, but this has to be manually added. (And I have problems getting it to work with substitution for some reason).
Is there a rest-sphinx method I'm overlooking that would achieve this?
Or, is there a way to modify/edit sphinx's existing templates to add this ability?
A:
Sphinx consumes reStructuredText as templated by Jinja. According to the Sphinx documentation though, you have other templating options.
You should be able to use Jinja's control structures in a custom template to achieve the effect you're after.
|
Sphinx automated image numbering/captions?
|
Is there a way to automatically generate an image/figure caption using sphinx?
I currently have rest-sphinx files I'm converting to html and (latex)pdf using sphinx.
I'd like an easy way for users to reference a specific image in the resulting html/pdf files.
For example, if a user is referring to the documentation in an email, "In 'Image 65' it says XXX, but this doesn't work for me".
I've tried using figure where it appears to allow you to apply a caption to an image, but this has to be manually added. (And I have problems getting it to work with substitution for some reason).
Is there a rest-sphinx method I'm overlooking that would achieve this?
Or, is there a way to modify/edit sphinx's existing templates to add this ability?
|
[
"Sphinx consumes reStructuredText as templated by Jinja. According to the Sphinx documentation though, you have other templating options.\nYou should be able to use Jinja's control structures in a custom template to achieve the effect you're after.\n"
] |
[
2
] |
[] |
[] |
[
"python",
"python_sphinx",
"templates"
] |
stackoverflow_0000805943_python_python_sphinx_templates.txt
|
Q:
How to improve Trac's performance
I have noticed that my particular instance of Trac is not running quickly and has big lags. This is at the very onset of a project, so not much is in Trac (except for plugins and code loaded into SVN).
Setup Info: This is via a SELinux system hosted by WebFaction. It is behind Apache, and connections are over SSL. Currently the .htpasswd file is what I use to control access.
Are there any recommended ways to improve the performance of Trac?
A:
It's hard to say without knowing more about your setup, but one easy win is to make sure that Trac is running in something like mod_python, which keeps the Python runtime in memory. Otherwise, every HTTP request will cause Python to run, import all the modules, and then finally handle the request. Using mod_python (or FastCGI, whichever you prefer) will eliminate that loading and skip straight to the good stuff.
Also, as your Trac database grows and you get more people using the site, you'll probably outgrow the default SQLite database. At that point, you should think about migrating the database to PostgreSQL or MySQL, because they'll be able to handle concurrent requests much faster.
A:
We've had the best luck with FastCGI. Another critical factor was to only use https for authentication but use http for all other traffic -- I was really surprised how much that made a difference.
A:
I have noticed that if
select distinct name from wiki
takes more than 5 seconds (for example due to a million rows in this table; a true story, we had a script that filled it), browsing wiki pages becomes very slow and takes over 2*t*n seconds, where t is the execution time of the quoted query (>5s, of course) and n is the number of tracwiki links present on the viewed page.
This is due to Trac having a (hardcoded) 5s cache expiry for this query, which Trac uses to decide what colour each link should be. We re-hardcoded the value to 30s (we have that many pages, so every 30s someone has to wait 6-7s).
This may or may not be what is causing your problem. Good luck speeding up your Trac instance.
A:
Serving the chrome files statically with an expires-header could help too. See the end of this page.
|
How to improve Trac's performance
|
I have noticed that my particular instance of Trac is not running quickly and has big lags. This is at the very onset of a project, so not much is in Trac (except for plugins and code loaded into SVN).
Setup Info: This is via a SELinux system hosted by WebFaction. It is behind Apache, and connections are over SSL. Currently the .htpasswd file is what I use to control access.
Are there any recommended ways to improve the performance of Trac?
|
[
"It's hard to say without knowing more about your setup, but one easy win is to make sure that Trac is running in something like mod_python, which keeps the Python runtime in memory. Otherwise, every HTTP request will cause Python to run, import all the modules, and then finally handle the request. Using mod_python (or FastCGI, whichever you prefer) will eliminate that loading and skip straight to the good stuff.\nAlso, as your Trac database grows and you get more people using the site, you'll probably outgrow the default SQLite database. At that point, you should think about migrating the database to PostgreSQL or MySQL, because they'll be able to handle concurrent requests much faster.\n",
"We've had the best luck with FastCGI. Another critical factor was to only use https for authentication but use http for all other traffic -- I was really surprised how much that made a difference.\n",
"I have noticed that if \nselect disctinct name from wiki\n\ntakes more than 5 seconds (for example due to a million rows in this table - this is a true story (We had a script that filled it)), browsing wiki pages becomes very slow and takes over 2*t*n, where t is time of execution of the quoted query (>5s of course), and n is a number of tracwiki links present on the viewed page.\nThis is due to trac having a (hardcoded) 5s cache expire for this query. It is used by trac to tell what the colour should the link be. We re-hardcoded the value to 30s (We need that many pages, so every 30s someone has to wait 6-7s).\nIt may not be what caused Your problem, but it may be. Good luck on speeding up Your Trac instance.\n",
"Serving the chrome files statically with and expires-header could help too. See the end of this page.\n"
] |
[
5,
3,
2,
1
] |
[] |
[] |
[
"performance",
"python",
"trac"
] |
stackoverflow_0000213838_performance_python_trac.txt
|
Q:
Python equivalent of Perl's while (<>) {...}?
I write a lot of little scripts that process files on a line-by-line basis. In Perl, I use
while (<>) {
do stuff;
}
This is handy because it doesn't care where the input comes from (a file or stdin).
In Python I use this
if len(sys.argv) == 2: # there's a command line argument
sys.stdin = file(sys.argv[1])
for line in sys.stdin.readlines():
do stuff
which doesn't seem very elegant. Is there a Python idiom that easily handles file/stdin input?
A:
The fileinput module in the standard library is just what you want:
import fileinput
for line in fileinput.input(): ...
A:
import fileinput
for line in fileinput.input():
process(line)
This iterates over the lines of all files listed in sys.argv[1:], defaulting to sys.stdin if the list is empty.
A:
fileinput defaults to stdin, so it would make this slightly more concise.
If you do a lot of command-line stuff, though, this piping hack is very neat.
|
Python equivalent of Perl's while (<>) {...}?
|
I write a lot of little scripts that process files on a line-by-line basis. In Perl, I use
while (<>) {
do stuff;
}
This is handy because it doesn't care where the input comes from (a file or stdin).
In Python I use this
if len(sys.argv) == 2: # there's a command line argument
sys.stdin = file(sys.argv[1])
for line in sys.stdin.readlines():
do stuff
which doesn't seem very elegant. Is there a Python idiom that easily handles file/stdin input?
|
[
"The fileinput module in the standard library is just what you want:\nimport fileinput\n\nfor line in fileinput.input(): ...\n\n",
"import fileinput\nfor line in fileinput.input():\n process(line)\n\nThis iterates over the lines of all files listed in sys.argv[1:], defaulting to sys.stdin if the list is empty.\n",
"fileinput defaults to stdin, so would make it slightly more concise.\nIf you do a lot of command-line stuff, though, this piping hack is very neat.\n"
] |
[
51,
15,
7
] |
[] |
[] |
[
"python",
"stdin"
] |
stackoverflow_0000807173_python_stdin.txt
|