Sublime Text 3-Anaconda openCV docstring not working? Question: I'm using the Anaconda plugin in Sublime Text 3. Everything was working exactly as I expected. I love Docstring. It worked great and saved me a lot of time. But when I tried `import cv2`, cv2 was not on the autoComplete list. AutoComplete and docstring wouldn't work for anything that is in openCV. I use Mac, with **Python 3.5.1** and **openCV 3.1.0**. In Anaconda.sublime-settings, my python interpreter is set as: `"/usr/local/bin/python3.5"` I do have another Windows machine with the Anaconda plugin installed in ST3, and it worked fine. I don't really know what is going on. Any suggestion? Answer: This setting is not documented in the default `Anaconda.sublime-settings` file, but I found it once on the [Anaconda website](http://damnwidget.github.io/anaconda/anaconda_settings/#toc_5): `"extra_paths"`. Open your `.sublime-project` file, and add the following: { ... "settings": { "extra_paths": [ "/usr/local/Cellar/opencv3/3.1.0_3/lib/python3.5/site-packages" ] } } or, if you're just using your user `Anaconda.sublime-settings`, just add: "extra_paths": [ "/usr/local/Cellar/opencv3/3.1.0_3/lib/python3.5/site-packages" ] This will explicitly tell Anaconda to look there for additional modules. Hopefully that will do the trick.
How to install firefoxdriver webdriver for python3 selenium on ubuntu 16.04? Question: I installed the python3-selenium apt package on Ubuntu 16.04. While installing, I got a message: Suggested packages: chromedriver firefoxdriver The following NEW packages will be installed: python3-selenium When I try to run the following python code, #! /usr/bin/python3.5 from selenium import webdriver import time def get_profile(): profile = webdriver.FirefoxProfile() profile.set_preference("browser.privatebrowsing.autostart", True) return profile def main(): browser = webdriver.Firefox(firefox_profile=getProfile()) #browser shall call the URL browser.get("http://www.google.com") time.sleep(5) browser.quit() if __name__ == "__main__": main() I get the following error: > Traceback (most recent call last): File "./test.py", line 19, in main() File "./test.py", line 11, in main browser = webdriver.Firefox(firefox_profile=getProfile()) File "/usr/lib/python3/dist-packages/selenium/webdriver/firefox/webdriver.py", line 77, in __init__ self.binary, timeout), File "/usr/lib/python3/dist-packages/selenium/webdriver/firefox/extension_connection.py", line 47, in __init__ self.profile.add_extension() File "/usr/lib/python3/dist-packages/selenium/webdriver/firefox/firefox_profile.py", line 91, in add_extension self._install_extension(extension) File "/usr/lib/python3/dist-packages/selenium/webdriver/firefox/firefox_profile.py", line 251, in _install_extension compressed_file = zipfile.ZipFile(addon, 'r') File "/usr/lib/python3.5/zipfile.py", line 1009, in __init__ self.fp = io.open(file, filemode) FileNotFoundError: [Errno 2] No such file or directory: '/usr/lib/firefoxdriver/webdriver.xpi' I searched for a package named firefoxdriver in the Ubuntu repositories, but none exists. How do I solve this problem? Any help with installing the webdrivers appreciated! Answer: You can either upgrade to 16.10 (it's in yakkety) or you can download the deb from [here](http://pk.archive.ubuntu.com/ubuntu/ubuntu/pool/multiverse/s/selenium-firefoxdriver/) (it works - I tried it). Alternatively you can follow [these](https://christopher.su/2015/selenium-chromedriver-ubuntu/#selenium-version) instructions to install by hand (they cover chromedriver, but for Firefox the process is the same).
cairosvg installed but ImportError Question: I just installed cairosvg and it seems to have worked. If I try to install again it says: > $ pip install cairosvg > Requirement already satisfied(...) But if I try to import it in python3, it delivers an ImportError: > >>>import cairosvg > Traceback(most recent call last): > (...) > ImportError: No Module named 'cairosvg' Any ideas what's going wrong here? By the way, I'm trying to convert .svg files to .png ones; if there is a simpler possibility, feel free to tell me! Answer: Your plain `pip` most likely belongs to Python 2, so the package went into Python 2's site-packages. Install with pip3 instead: pip3 install cairosvg
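If the pip3 route still fails, a quick sanity check (not part of the answer above, just a common diagnostic) is to confirm which interpreter you are actually running and where it looks for modules:

    import sys
    print(sys.executable)  # the interpreter that is actually running
    print(sys.path)        # the directories it searches for modules

Running `python3 -m pip install cairosvg` removes any remaining ambiguity, since `-m pip` is guaranteed to install into that exact interpreter's site-packages.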
Do locally set Cython compiler directives affect one or all functions? Question: I am working on speeding up some Python/Numpy code with Cython, and am a bit unclear on the effects of "locally" setting (as defined [here](http://docs.cython.org/en/latest/src/reference//compilation.html) in the docs) compiler directives. In my case I'd like to use: @cython.wraparound (False) #turn off negative indexing @cython.boundscheck(False) #turn off bounds-checking I understand that I can globally define this in my `setup.py` file, but I'm developing for non-Cython users and would like the directives to be obvious from the `.pyx` file. If I am writing a `.pyx` file with several functions defined in it, need I only set these once, or will they only apply to the next function defined? The reason I ask is that the documentation often says things like "turn off `boundscheck` for this function," making me wonder whether it only applies to the next function defined. In other words, do I need to do this: import numpy as np cimport numpy as np cimport cython ctypedef np.float64_t DTYPE_FLOAT_t @cython.wraparound (False) #turn off negative indexing @cython.boundscheck(False) # turn off bounds-checking def myfunc1(np.ndarray[DTYPE_FLOAT_t] a): do things here def myfunc2(np.ndarray[DTYPE_FLOAT_t] b): do things here Or do I need to do this: import numpy as np cimport numpy as np cimport cython ctypedef np.float64_t DTYPE_FLOAT_t @cython.wraparound (False) #turn off negative indexing @cython.boundscheck(False) # turn off bounds-checking def myfunc1(np.ndarray[DTYPE_FLOAT_t] a): do things here @cython.wraparound (False) #turn off negative indexing @cython.boundscheck(False) # turn off bounds-checking def myfunc2(np.ndarray[DTYPE_FLOAT_t] b): do things here Thanks! Answer: The [documentation](http://cython.readthedocs.io/en/latest/src/reference/compilation.html#globally) states that if you want to set a compiler directive globally, you need to do so with a comment at the top of the file. eg. #!python #cython: language_level=3, boundscheck=False
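Worth spelling out, since the answer above only covers the global form: Cython's decorator directives scope to the single function they decorate, so in a multi-function `.pyx` file you must repeat them per function (your second version), or set them file-wide with the special header comment. A minimal `.pyx` sketch showing both forms side by side:

    # cython: boundscheck=False, wraparound=False
    # ^ file-wide form: this comment must sit at the very top of the .pyx file
    cimport cython

    @cython.boundscheck(False)
    @cython.wraparound(False)
    def myfunc1(a):
        pass  # the decorators apply here...

    def myfunc2(b):
        pass  # ...but NOT here: the decorator form scopes to one function only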
Python Flask Login login_required redirecting Question: I am working on a Flask app and using Flask-Login for authentication. Everything is set up and running. However when the user logins in and attempts to visit a page that requires login, they are redirected to the login page. When watching the console, I get a 200 for the GET login page, a 200 for the POST on the login, a 302 from the login page to the home page, and then another 302 from the homepage back to login. See code below. from flask import (Flask, render_template, g, flash, redirect, url_for, request) from flask_bcrypt import check_password_hash from flask_login import (LoginManager, UserMixin, login_required, login_user, logout_user, current_user) import models import forms application = Flask(__name__) application.secret_key = "xxx-xxx-xxx-xxx" login_manager = LoginManager() login_manager.init_app(application) login_manager.login_view = "login" @application.before_request def before_request(): g.db = models.DATABASE g.db.connect() g.user = current_user @application.after_request def after_request(response): g.db.close() return response @login_manager.user_loader def load_user(email): try: return models.User.select().where( models.User.email == email).get() except models.DoesNotExist: return None @application.route("/register", methods=['GET', 'POST']) def register(): form = forms.RegisterForm() if form.validate_on_submit(): flash("Yay! You registered!", "success") models.User.create_user( email = form.email.data, password = form.password.data ) return redirect(url_for('home')) return render_template('register.html',form=form) @application.route("/login", methods=['GET', 'POST']) def login(): form = forms.LoginForm() if form.validate_on_submit(): next = request.args.get('next') try: user = models.User.get(models.User.email == form.email.data) except models.DoesNotExist: flash("Your email or password doesn't match!", "error") else: if check_password_hash(user.password, form.password.data): login_user(user, remember=True) flash("Welcome back!", "success") return redirect(next or url_for("home")) else: flash("Your email or password doesn't match!", "error") return render_template("login.html", form=form) @application.route("/logout") @login_required def logout(): logout_user() flash("You've been logged out!", "success") return redirect(url_for("home")) @application.route("/") @login_required def home(): return render_template("home.html") if __name__ == "__main__": models.initialize() application.run(host='0.0.0.0') Here is the model: import datetime from flask_login import UserMixin from flask_bcrypt import generate_password_hash, check_password_hash from peewee import * DATABASE = MySQLDatabase("fakedatabasename", host="fakehostname", user="fakeusername", password="fakepassword") class BaseModel(Model): class Meta: database = DATABASE class Preachers(BaseModel): preacher_id = PrimaryKeyField() preacher_first_name = CharField(max_length=27) preacher_last_name = CharField(max_length=27) preacher_email = CharField() class Sermons(BaseModel): sermon_id = PrimaryKeyField() sermon_title = CharField(max_length=27) sermon_description = CharField(max_length=140) sermon_date = DateTimeField(default=datetime.datetime.now()) sermon_preacher_id = IntegerField() sermon_video_uri = CharField(max_length=255) class User(UserMixin,BaseModel): user_id = PrimaryKeyField() email = CharField(index=True, unique=True) password = CharField() date_created = DateTimeField(default=datetime.datetime.now()) @classmethod def create_user(cls, email, 
password): try: cls.create( email = email, password = generate_password_hash(password) ) except IntegrityError: raise ValueError("User already exists") def initialize(): DATABASE.connect() DATABASE.create_tables([Preachers, Sermons, User], safe=True) DATABASE.close() Answer: So it turns out that when working with Peewee and Flask-Login, you need to let Peewee supply its default primary key for the User model instead of using a custom one. Removing `user_id = PrimaryKeyField()`, dropping the table, and restarting fixed it.
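A minimal sketch of the fix described above (reusing the question's imports and `BaseModel`): drop the custom primary key so Peewee creates its default `id` column, which is what `UserMixin.get_id()` hands to the session cookie. The `user_loader` then looks the user up by that id — that last part is my assumption following from the change, since the original loader queried by email:

    class User(UserMixin, BaseModel):
        # no user_id = PrimaryKeyField() -- Peewee adds a default `id`
        email = CharField(index=True, unique=True)
        password = CharField()
        date_created = DateTimeField(default=datetime.datetime.now)

    @login_manager.user_loader
    def load_user(user_id):
        try:
            return User.get(User.id == int(user_id))
        except User.DoesNotExist:
            return None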
regex - swap two phrases around Question: Python 3. Each line is constructed of a piece of text, then a pipe symbol, then a second piece of text. I want to swap the two pieces of text around and remove the pipe. This is the code so far: p = re.compile('^(.*) \| (.*)$', re.IGNORECASE) mytext = p.sub(r'\2\1', mytext) Yet for some reason that I can't work out, it is not matching. A sample of the text it should be matching is (ironically): (https://www.youtube.com/watch?v=NIKdKCQnbNo) | [Regular Expressions 101 - YouTube] and should end up like: [The Field Expedient Pump Drill - YouTube](https://www.youtube.com/watch?v=4QDXUxTrlRw) (in other words, the code is formatting the links into the format expected of a markdown converter). Here is the full code: #! /usr/bin/env python3 import re, os def create_text(myinputfile): with open(myinputfile, 'r', encoding='utf-8') as infile: mytext = infile.read() return mytext def reg_replace(mytext): p = re.compile('^(.*) \| (.*)$', re.IGNORECASE) mytext = p.sub(r'\2\1', mytext) return mytext def write_out(mytext, myfinalfile): with open(myfinalfile, 'w') as myoutfile: myoutfile.write(mytext) def main(): mytext = create_text('out.md') mytext = reg_replace(mytext) write_out(mytext, 'out1.md') os.rename("out.md", "out_original.md") os.rename("out1.md", "out.md") main() Answer: This should help you. (View demo on [regex101](https://regex101.com/r/hW2kR9/2)) (\S+)\s*\|\s*(.+) Sub with: \2\1
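For completeness, here is the answer's pattern applied with `re.sub` to the sample line from the question:

    import re

    line = "(https://www.youtube.com/watch?v=NIKdKCQnbNo) | [Regular Expressions 101 - YouTube]"
    print(re.sub(r"(\S+)\s*\|\s*(.+)", r"\2\1", line))
    # [Regular Expressions 101 - YouTube](https://www.youtube.com/watch?v=NIKdKCQnbNo)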
Python3 always shows ImportError message Question: Whenever I try to run a script the python interpreter always shows an `ImportError` message such as `No module named 'setuptools'`. So, I tried to install (or to satisfy this requirement) with `apt-get`... I did this for both Python 2.7 and Python 3.5 until `Requirement already satisfied`. First of all, I don't work with Python 2.7, but it's the default interpreter. So, how could I solve this problem to work with Python 3.5? I tried this: >>> import sys >>> print(sys.path) ['', '/usr/local/lib/python35.zip', '/usr/local/lib/python3.5', '/usr/local/lib/python3.5/plat-linux', '/usr/local/lib/python3.5/lib-dynload', '/usr/local/lib/python3.5/site-packages'] This was for **Python3**; for Python2 I did the same to compare the paths and I got this: >>> import sys >>> print(sys.path) ['', '/usr/local/lib/python2.7/dist-packages/pygame-1.9.2b8-py2.7-linux-x86_64.egg', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages/PILcompat', '/usr/lib/python2.7/dist-packages/gtk-2.0'] Now... Should it work if I use the `append()` method to add all the paths of Python2 to the paths in Python3? Also, I've considered completely uninstalling Python2, but I know this would cause more problems in my system than the one I'm trying to solve. Answer: Try: python3.5 -m pip install setuptools Running pip through a specific interpreter with `-m pip` guarantees the package lands in that interpreter's site-packages rather than Python 2's.
import hooks (custom module loaders) for pypy do not work Question: I'm successfully able to create import hooks to load files directly from memory in python2.7. The example I used was the accepted response to this question: [python:Import module from memory](http://stackoverflow.com/questions/14191900/pythonimport-module-from-memory) However, when applying this code on pypy, I get an import error. I have also tried other import hook examples that work with regular python but not with pypy, such as this: [python load zip with modules from memory](http://stackoverflow.com/questions/39135750/python-load-zip-with-modules-from-memory/39136473#39136473) Does anyone know why import hooks do not work in pypy? Is there something I am missing? Answer: The problem is that in both of the examples you point to, `load_module()` does not add the loaded module to `sys.modules`. Normally, it should do so (and then PyPy works like CPython). If `load_module()` does not add the module to `sys.modules`, then every single `import a` will call `load_module()` again and return a new copy of the module. For example, in the example from [python:Import module from memory](http://stackoverflow.com/questions/14191900/pythonimport-module-from-memory): import a as a1 import a as a2 print a1 is a2 # False! a1.foo = "foo" print a2.foo # AttributeError This is documented in <https://www.python.org/dev/peps/pep-0302/#id27>. The `load_module()` method is responsible for doing more checks than these simple examples show. In particular, note this line (emphasis in the original): > Note that the module object _must_ be in sys.modules before the loader executes the module code. So, the fact that PyPy behaves differently than CPython in this case could be understood as a behavior difference that follows from code that fails to respect the docs. But anyway, my opinion is that it should be fixed. I've created an issue at <https://bitbucket.org/pypy/pypy/issues/2382/sysmeta_path-not-working-like-cpythons>.
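Building on the answer, a minimal sketch of a loader that respects the PEP 302 contract (Python 2 style, to match the question); the essential lines are the `sys.modules` cache check and the registration before the module code runs:

    import imp
    import sys

    class MemoryLoader(object):
        def __init__(self, source):
            self.source = source

        def load_module(self, fullname):
            if fullname in sys.modules:       # honour the module cache
                return sys.modules[fullname]
            mod = imp.new_module(fullname)
            mod.__loader__ = self
            sys.modules[fullname] = mod       # register BEFORE executing the code
            try:
                exec(self.source, mod.__dict__)
            except Exception:
                del sys.modules[fullname]     # roll back on failure
                raise
            return mod

A matching finder on `sys.meta_path` is still needed to hand back this loader; only the loader half is sketched here.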
python multi inheritance with parent classes have different __init__() Question: Here both `B` and `C` are derived from `A`, but with different `__init__()` parameters. My question is how to write the correct/elegant code here to initialize self.a,self.b,self.c1,self.c2 in the following example? Maybe another question is--is it a good coding practice to do this variable setting in `__init()__` function or it is better to use simpler `__init__()` function, and do `set()` function for each class later, which seems not as simple as to just do it in `__init()__`? class A(object): __init__(self,a): self.a=a class B(A): __init__(self,a,b): super(B,self).__init__(a) self.b=b class C(A): __init__(self,a,c1,c2): super(C,self).__init__(a) self.c1=c1 self.c2=c2 class D(B,C) __init__(self,a,b,c1,c2,d): #how to write the correct/elegant code here to initialize self.a,self.b,self.c1,self.c2? #can I use super(D,self) something? self.d=d self.dd=self.a+self.b+2*self.c1+5*self.c2+3*self.d d=D(1,2,3,4,5) Answer: Multiple inheritance in Python requires that all the classes cooperate to make it work. In this case, you can make them cooperate by having the `__init__` method in each class accept arbitrary `**kwargs` and pass them on when they call `super().__init__`. For your example class hierarchy, you could do something like this: class A(object): __init__(self,a): # Don't accept **kwargs here! Any extra arguments are an error! self.a=a class B(A): __init__(self, b, **kwargs): # only name the arg we care about (the rest go in **kwargs) super(B, self).__init__(**kwargs) # pass on the other keyword args self.b=b class C(A): __init__(self, c1, c2, **kwargs): super(C,self).__init__(**kwargs) self.c1=c1 self.c2=c2 class D(B,C) __init__(self, d, **kwargs): super(D,self).__init__(**kwargs) self.d=d self.dd=self.a+self.b+2*self.c1+5*self.c2+3*self.d Note that if you wanted `D` to use the argument values directly (rather than using `self.a`, etc.), you could both take them as named arguments and still pass them on in the `super()` call: class D(B,C) __init__(self,a, b, c1, c2, d, **kwargs): # **kwargs in case there's further inheritance super(D,self).__init__(a=a, b=b, c1=c1, c2=c2, **kwargs) self.d = d self.dd = a + b + 2 * c1 + 5 * c2 + 3 * d # no `self` needed in this expression! Accepting and passing on some args is important if some of the parent classes don't save the arguments (in their original form) as attributes, but you need those values. You can also use this style of code to pass on modified values for some of the arguments (e.g. with `super(D, self).__init__(a=a, b=b, c1=2*c1, c2=5*c2, **kwargs)`). This kind of collaborative multiple inheritance with varying arguments is almost impossible to make work using positional arguments. With keyword arguments though, the order of the names and values in a call doesn't matter, so it's easy to pass on named arguments and `**kwargs` at the same time without anything breaking. Using `*args` doesn't work as well (though recent versions of Python 3 are more flexible about how you can call functions with `*args`, such as allowing multiple unpackings in a single call: `f(*foo, bar, *baz)`). If you were using Python 3 (I'm assuming not, since you're explicitly passing arguments to `super`), you could make the arguments to your collaborative functions "keyword-only", which would prevent users from getting very mixed up and trying to call your methods with positional arguments. 
Just put a bare `*` in the argument list before the other named arguments: `def __init__(self, *, c1, c2, **kwargs):`.
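A runnable condensation of the above (note the `def` keywords, which the snippets in the question and answer omit before `__init__`):

    class A(object):
        def __init__(self, a):
            self.a = a

    class B(A):
        def __init__(self, b, **kwargs):
            super(B, self).__init__(**kwargs)
            self.b = b

    class C(A):
        def __init__(self, c1, c2, **kwargs):
            super(C, self).__init__(**kwargs)
            self.c1 = c1
            self.c2 = c2

    class D(B, C):
        def __init__(self, d, **kwargs):
            super(D, self).__init__(**kwargs)
            self.d = d
            self.dd = self.a + self.b + 2 * self.c1 + 5 * self.c2 + 3 * self.d

    d = D(a=1, b=2, c1=3, c2=4, d=5)
    print(d.dd)  # 1 + 2 + 6 + 20 + 15 = 44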
How do I configure Read the Docs to use sphinx-autodoc-annotation? Question: I'm using sphinx-autodoc-annotation to read the function annotations in my Python code and use that to generate the appropriate expected argument types and return types. It's working great on my local machine, but I had to `pip install sphinx-autodoc-annotation` of course. I'm trying to generate the same documentation using [Read the Docs](https://readthedocs.org/), but it gives me an error: Could not import extension sphinx_autodoc_annotation (exception: No module named sphinx_autodoc_annotation) Is it possible to configure Read the Docs to work with sphinx-autodoc-annotation, and if so, how do I make it work? Answer: Activate the _Install Project_ option for your Read the Docs project. If the option is activated, Read the Docs will try to execute `setup.py install` on your package (see: [RtD docs](https://read-the-docs.readthedocs.io/en/latest/builds.html#understanding-what-s-going-on)). In `setup.py`, you can install packages as specified in your [requirements file](https://pip.readthedocs.io/en/1.1/requirements.html). Have a look at the [source code of the Flask-MongoRest project](https://github.com/closeio/flask-mongorest) for an example. Add `sphinx-autodoc-annotation` as the only requirement to your `requirements.txt` file.
Python code won't run? Question: Hello, I was wondering why this code will not run. Thanks. count = 0 finished = False total = 0 while not finished: number = int(input("Enter a number(0 to finish)")) if number == 0: finished = True else: total = total + number count = count + 1 print("the average is", total/ count) count = 0 Answer: It does run; the only issue I can see is that your `else` is indented inside your `if` block. In Python, indentation is important in the way that curly brackets or keywords are important in other programming languages. count = 0 finished = False total = 0 while not finished: number = int(input("Enter a number(0 to finish)")) if number == 0: finished = True else: total = total + number count = count + 1 print("the average is", total/ count) count = 0
Under what condition does a Python subprocess get a SIGPIPE? Question: I am reading the Python documentation on the Popen class in the subprocess module section and I came across the following code: p1 = Popen(["dmesg"], stdout=PIPE) p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE) p1.stdout.close() # Allow p1 to receive a SIGPIPE if p2 exits. output = p2.communicate()[0] The [documentation](https://docs.python.org/2/library/subprocess.html#replacing-shell-pipeline) also states that > "The p1.stdout.close() call after starting the p2 is important in order for > p1 to receive a SIGPIPE if p2 exits before p1. Why must p1.stdout be closed before we can receive a SIGPIPE, and how does p1 know that p2 exited before p1 if we already closed it? Answer: `SIGPIPE` is a signal that would be sent if `dmesg` tried to write to a closed pipe. Here, `dmesg` ends up with _two_ targets to write to, your Python process and the `grep` process. That's because `subprocess` clones file handles (using the [`os.dup2()` function](https://docs.python.org/2/library/os.html#os.dup2)). Configuring `p2` to use `p1.stdout` triggers an `os.dup2()` call that asks the OS to duplicate the pipe filehandle; the duplicate is used to connect `dmesg` to `grep`. With two open file handles for `dmesg` stdout, `dmesg` is never given a `SIGPIPE` signal if only _one_ of them closes early, so `grep` closing would never be detected. `dmesg` would needlessly continue to produce output. So by closing `p1.stdout` immediately, you ensure that the only remaining filehandle reading from `dmesg` stdout is the `grep` process, and if that process were to exit, `dmesg` receives a `SIGPIPE`.
Python Pandas subsetting based on Dates Question: I got a dataframe (using pandas) which contains the following fields: 1. Datesf-----------Price 2. 02/08/16 17:28--10 3. 02/08/16 17:29--20 4. 02/08/16 17:30--30 5. 03/08/16 09:00--40 6. 04/08/16 09:00--50 I am trying to subset the data frame into new dataframes using "Datesf" as a filter. The subsetting should only use the Datesf.Date() part of variable "Datesf" and name each new dataframe "df" as df_date. For example, the new subsetted dataframe would be named df_02_08_16: 1. Datesf------------Price 2. 02/08/16 17:28--10 3. 02/08/16 17:29--20 4. 02/08/16 17:30--30 I tried using the following code but obviously, I am missing out quite a few bits: datelist= df["Datesf"].map(pd.Timestamp.date).unique() for d in datelist: print d df.loc[df['Datesf'] == '%s' % d] My python skills are relatively basic at this stage, so forgive me if my query is not so challenging. Many thanks. regards, S Answer: This should do the work. import pandas as pd df = pd.DataFrame([['02/08/16 17:28',1], ['02/08/16 17:28',10],['02/08/16 17:28',100],['03/08/16 17:28',101],['04/08/16 17:28',103]], columns=['Datesf', 'Price']) df.Datesf = pd.to_datetime(df.Datesf) unique_dates = df.Datesf.unique() data_frame_dict = {elem : pd.DataFrame for elem in unique_dates} for n, key in enumerate(data_frame_dict.keys()): print ' ==== dataframe %d ======' % n data_frame_dict[key] = df[:][df.Datesf == key] print data_frame_dict[key] data_frame_dict[key].to_csv('%s.csv'%str(key))
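An alternative to the loop in the answer (my suggestion, not the answerer's): once the column holds real datetimes, `groupby` on the date part builds the same per-date frames in one pass.

    import pandas as pd

    df = pd.DataFrame([['02/08/16 17:28', 10], ['02/08/16 17:29', 20],
                       ['03/08/16 09:00', 40], ['04/08/16 09:00', 50]],
                      columns=['Datesf', 'Price'])
    df.Datesf = pd.to_datetime(df.Datesf, dayfirst=True)  # dates are dd/mm/yy

    frames_by_date = {date: group for date, group in df.groupby(df.Datesf.dt.date)}
    for date, frame in frames_by_date.items():
        frame.to_csv('df_%s.csv' % date)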
Run unit test code from zip file Question: I followed this tutorial to create a self-contained python application. <http://blog.ablepear.com/2012/10/bundling-python-files-into-stand-alone.html> What I would like to do is create a unit test application within a similar self-contained application and, in addition, have several files within the unit test application itself. I have a directory structure as follows: src/ src1.py src2.py __main__.py test/ __main__.py __init__.py My unittest application packages up the src and test files to create a self-contained unit test application. I currently have all my test code in test/__main__.py and would like to separate the test code into different files (i.e. test1.py, test2.py, etc). However, when I try to do so, it either can't import them or I get import errors. What is the correct way to implement this scheme? I am using the unittest module in Python 2.7 and would like to continue using it. Answer: Actually, I got this working by following this: <http://blakesmith.me/2009/09/14/getting-started-with-python-unit-testing.html> Instead of calling it suite.py, I called it __main__.py and it worked within a zip file.
Splitting strings in python using import re Question: Why does this work: string = 'N{P}[ST]{P}' >>> import re >>> re.split(r"[\[\]]", string) >>> ['N{P}', 'ST', '{P}'] But this doesn't? >>> re.split(r"{\{\}}", string) Answer: You have to do this: re.split(r"[{}]", string) `r"{\{\}}"` is special re syntax to repeat groups (ex: `(ab){1,3}` matches `ab`, `abab` or `ababab`), not a character range (note that you don't have to escape the curly braces in a character range). (I admit I don't know what your strange regex would do, especially in the re.split context, but it's not what you want :))
Python/Requests: Correct login returns 401 unauthorized Question: I have a python application that logs in to a remote host via basic HTTP authentication. Authentication is as follows: def make_authenticated_request(host, username, password): url = host r = requests.get(url, auth=(username, password)) r.raise_for_status() return r test = Looter.make_authenticated_request("http://" + host + "/status/status_deviceinfo.htm", user, password) This error is printed: `401 Client Error: Unauthorized for url` Strange thing is that this doesn't always happen. It randomly fails/succeeds, for the same host with the same credentials. The login is however correct, and works flawlessly in my browser. I'm no python ninja. Any clues? Answer: I might rewrite it to look something like this — change it around however you need. The point here is that I'm using a session and passing it around to make other requests. You can reuse that session object instead of authenticating on every request. If you're making lots of requests and authing in each time, like your code suggests, the site could be locking you out; a session works better because you don't have to keep re-authing. import requests class Looter(object): def __init__(self): self.s = None def create_session(self, url, username, password): # create a Session s = requests.Session() # auth in res = s.get(url, auth=(username, password)) print res.status_code # 200 means the login worked if res.status_code == 200: self.s = s def make_request(self, url): return self.s.get(url) # do something with the response l = Looter() l.create_session(url, username, password) # l.s now holds an authenticated session (or None if the login failed)
Python automate key press of a QtGUI Question: I have a python script with the section below: for index in range(1,10): os.system('./test') os.system('xdotool key Return') What I want to do is to run the executable ./test, which brings up a QtGUI. In this GUI, a key press prompt button comes up. I want to automate this key press of the GUI so that the executable continues. Though my python script runs the executable, the GUI prompt comes up and the key press is not entered until after the executable exits. Is there any way to fix this? Answer: `os.system` doesn't return until the child process exits. You need `subprocess.Popen`. It's also a good idea to `sleep` a while before sending keystrokes (it may take some time for the child process to get ready to accept user input): import os from subprocess import Popen from time import sleep for index in range(1,10): # start the child process = Popen('./test') # sleep to allow the child to draw its buttons sleep(.333) # send the keystroke os.system('xdotool key Return') # wait till the child exits process.wait() I'm not sure that you need the last line. If all **9** child processes are supposed to stay alive -- remove it.
Python Mathematical signs in function parameter? Question: I would like to know if there is a way to pass math symbols as function parameters. def math(x, y, symbol): answer = x 'symbol' y return answer This is a small example of what I mean. **EDIT: here is the whole problem** def code_message(str_val, str_val2, symbol1, symbol2): for char in str_val: while char.isalpha() == True: code = int(ord(char)) if code < ord('Z'): code symbol1= key str_val2 += str(chr(code)) elif code > ord('z'): code symbol1= key str_val2 += str(chr(code)) elif code > ord('A'): code symbol2= key str_val2 += str(chr(code)) elif code < ord('a'): code symbol2= key str_val2 += str(chr(code)) break if char.isalpha() == False: str_val2 += char return str_val2 I need to call the function a number of times, but sometimes with a +/- for the first symbol and sometimes a +/- for the second symbol ORIGINAL CODE : def code_message(str_val, str_val2): for char in str_val: while char.isalpha() == True: code = int(ord(char)) if code < ord('Z'): code -= key str_val2 += str(chr(code)) elif code > ord('z'): code -= key str_val2 += str(chr(code)) elif code > ord('A'): code += key str_val2 += str(chr(code)) elif code < ord('a'): code += key str_val2 += str(chr(code)) break if char.isalpha() == False: str_val2 += char return str_val2 Answer: You cannot pass an operator itself to the function, but you can pass the corresponding operator functions defined in the [`operator`](https://docs.python.org/2/library/operator.html) library. Your function would then look like: >>> from operator import eq, add, sub >>> def magic(left, op, right): ... return op(left, right) ... **Examples** : # To Add >>> magic(3, add, 5) 8 # To Subtract >>> magic(3, sub, 5) -2 # To check equality >>> magic(3, eq, 3) True **Note** : I am using the name `magic` instead of `math` because `math` is a standard library module, and it is not good practice to shadow such names.
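Tying this back to the question's `code_message()`: if the "symbols" arrive as the strings `'+'` and `'-'`, a small lookup table (a sketch under that assumption) turns them into callables:

    from operator import add, sub

    OPS = {'+': add, '-': sub}

    def apply_op(code, symbol, key):
        return OPS[symbol](code, key)

    print(apply_op(10, '+', 3))  # 13
    print(apply_op(10, '-', 3))  # 7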
How to refresh multiple printed lines inplace using Python? Question: I would like to understand how to reprint multiple lines in Python 3.5. This is an example of a script where I would like to refresh the printed statement in place. import random import time a = 0 while True: statement = """ Line {} Line {} Line {} Value = {} """.format(random.random(), random.random(), random.random(), a) print(statement, end='\r') time.sleep(1) a += 1 What I am trying to do is have: Line 1 Line 2 Line 3 Value = 1 Write on top of / update / refresh: Line 1 Line 2 Line 3 Value = 0 The values of each line will change each time. This is effectively giving me a status update of each Line. I saw [another question from 5 years ago](https://stackoverflow.com/questions/6840420/python-rewrite-multiple-lines-in-the-console?noredirect=1&lq=1), however with the addition of the `end` argument in the Python 3+ print function, I am hoping that there is a much simpler solution. Answer: If I've understood correctly you're looking for this type of solution: import random import time import os def clear_screen(): os.system('cls' if os.name == 'nt' else 'clear') a = 0 while True: clear_screen() statement = """ Line {} Line {} Line {} Value = {} """.format(random.random(), random.random(), random.random(), a) print(statement, end='\r') time.sleep(1) a += 1 This solution won't work with some software like IDLE, Sublime Text, Eclipse... The problem with running it within this type of software is that clear/cls uses ANSI escape sequences to clear the screen. These commands write a string such as "\033[[80;j" to the output buffer. The native command prompt is able to interpret this as a command to clear the screen, but these pseudo-terminals don't know how to interpret it, so they just end up printing a small square as if printing an unknown character. If you're using this type of software, one workaround could be doing print('\n' * 100); it won't be the optimal solution but it's better than nothing.
Import Winreg in a Python Script Question: I am currently working on a Jenkins freestyle job and one of the build steps is to run a Python script. I have been working on this job for a couple of days now and this is one of the last build steps needed to finish it off. I have reached a point where I get an error letting me know that the **import winreg** module does not exist. I have installed Jenkins on CentOS and have read some documentation stating that I am unable to import this module on this distribution. Is there no other way to solve this than to switch over to a Windows machine? Thanks Answer: It makes sense; the [_winreg docs](https://docs.python.org/2/library/_winreg.html#module-_winreg) say: > These functions expose the Windows registry API to Python. You could try to make it run in a Windows virtual machine on your CentOS host, or just follow the official [Installing+Jenkins+on+Red+Hat+distributions](https://wiki.jenkins-ci.org/display/JENKINS/Installing+Jenkins+on+Red+Hat+distributions) guide
setup.py console_scripts entry point does not resolve import Question: I have the following setup.py: from setuptools import setup from distutils.core import setup setup( name="foobar", version="0.1.0", author="Batman", author_email="[email protected]", packages = ["foobar"], include_package_data=True, install_requires=[ "asyncio", ], entry_points={ 'console_scripts': [ 'foobar = foobar.__main__:main' ] }, ) Now, the `__main__.py` file gets installed and is callable as foobar from the console after installation, which is what I wanted. The problem is that `__main__.py` has an import at line 3 that does not work. So my folder structure is as follows dummy/setup.py dummy/requirements.txt dummy/foobar/__init__.py dummy/foobar/__main__.py dummy/foobar/wont_be_imported_one.py I run `python3 setup.py bdist` from the dummy directory. Upon running foobar after installation, I get the error File "/usr/local/bin/foobar", line 9, in <module> load_entry_point('foobar==0.1.0', 'console_scripts', 'foobar')() [...] ImportError: No module named 'wont_be_imported_one'. UPDATE: `__init__.py` has the content from wont_be_imported_one import wont_be_imported_one and `wont_be_imported_one.py` contains the `wont_be_imported_one` function, which I actually need to import. Answer: In Python 3, `import`s are absolute by default, and so `from wont_be_imported_one import ...` inside of `foobar` will be interpreted as a reference to some module named `wont_be_imported_one` outside of `foobar`. You need to use a relative import instead: from .wont_be_imported_one import wont_be_imported_one # ^ Add this See [PEP 328](https://www.python.org/dev/peps/pep-0328/) for more information.
Basic Bar Chart with plotly Question: I'm trying to do a bar chart with this code import plotly.plotly as py import plotly.graph_objs as go data = [go.Bar( x=['giraffes', 'orangutans', 'monkeys'], y=[20, 14, 23] )] py.iplot(data, filename='basic-bar') But I get this error: > PlotlyLocalCredentialsError Traceback (most recent call last) > <ipython-input-42-9eae40f28f37> in <module>() > 3 y=[20, 14, 23] > 4 )] > ----> 5 py.iplot(data, filename='basic-bar') > C:\Users\Demonstrator\Anaconda3\lib\site-packages\plotly\plotly\plotly.py in iplot(figure_or_data, **plot_options) > 149 if 'auto_open' not in plot_options: 150 plot_options['auto_open'] = False ----> 151 url = plot(figure_or_data, **plot_options) 152 153 if isinstance(figure_or_data, dict): > C:\Users\Demonstrator\Anaconda3\lib\site-packages\plotly\plotly\plotly.py in plot(figure_or_data, validate, **plot_options) > 239 240 plot_options = _plot_option_logic(plot_options) ----> 241 res = _send_to_plotly(figure, **plot_options) 242 if res['error'] == '': 243 if plot_options['auto_open']: > C:\Users\Demonstrator\Anaconda3\lib\site-packages\plotly\plotly\plotly.py in _send_to_plotly(figure, **plot_options) > 1401 cls=utils.PlotlyJSONEncoder) 1402 credentials = get_credentials() ----> 1403 validate_credentials(credentials) 1404 username = credentials['username'] 1405 api_key = credentials['api_key'] > C:\Users\Demonstrator\Anaconda3\lib\site-packages\plotly\plotly\plotly.py in validate_credentials(credentials) > 1350 api_key = credentials.get('api_key') 1351 if not username or not api_key: ----> 1352 raise exceptions.PlotlyLocalCredentialsError() 1353 1354 > PlotlyLocalCredentialsError: > Couldn't find a 'username', 'api-key' pair for you on your local machine. To sign in temporarily (until you stop running Python), run: > >>> import plotly.plotly as py > >>> py.sign_in('username', 'api_key') > Even better, save your credentials permanently using the 'tools' module: > >>> import plotly.tools as tls > >>> tls.set_credentials_file(username='username', api_key='api-key') > For more help, see https://plot.ly/python. Any idea to help me please? Thank you Answer: You need to pay attention to the traceback in the error. In this case, it's even more helpful than usual. The solution is given to you here: PlotlyLocalCredentialsError: Couldn't find a 'username', 'api-key' pair for you on your local machine. To sign in temporarily (until you stop running Python), run: >>> import plotly.plotly as py >>> py.sign_in('username', 'api_key') Even better, save your credentials permanently using the 'tools' module: >>> import plotly.tools as tls >>> tls.set_credentials_file(username='username', api_key='api-key') For more help, see https://plot.ly/python. So, enter the credentials you used when you signed up to the site before you attempt to make a plot. You may have to sign in in a web browser and request an API key to be generated; it is not the same as your password.
python google app engine stripe integration Question: I am working on a project in which I want to integrate stripe for payments. I am following their documentation to integrate it in python [Stripe Documentation](https://stripe.com/docs/charges). In the documentation they downloaded the stripe library to use it. The command to install it was: pip install --upgrade stripe I followed the same steps. But I am getting this error when I try to import it in my project: import stripe ImportError: No module named stripe Answer: The proper way to install a 3rd party library into your GAE application is described in [Installing a library](https://cloud.google.com/appengine/docs/python/tools/using-libraries-python-27?hl=en#installing_a_library): > The easiest way to manage this is with a ./lib directory: > > 1. Use pip to install the library and the vendor module to enable > importing packages from the third-party library directory. > > 2. Create a directory named lib in your application root directory: > > > mkdir lib > > > 3. To tell your app how to find libraries in this directory, create or > modify a file named appengine_config.py in the root of your project, then > add these lines: > > > from google.appengine.ext import vendor > > # Add any libraries installed in the "lib" folder. > vendor.add('lib') > > > 4. Use pip with the -t lib flag to install libraries in this directory: > > > pip install -t lib gcloud > > > **Notes** : * When going through the mentioned doc page pay attention as it also contains instructions for requesting and using GAE-provided _built-in_ libraries - different than those for _installed/vendored-in_ libraries. * if your app is a multi-module one you'll need an `appengine_config.py` for each module using the library at step#3, located beside the module's `.yaml` file. It can be symlinked for DRY reasons if you prefer (see <http://stackoverflow.com/a/34291789/4495081>). * step #4's goal is to just bring the content of the stripe library in a subdirectory of the `lib` dir. You can do that manually if the pip way fails for whatever reason.
Change the width of the Basic Bar Chart Question: I created a bar chart with matplotlib using: import matplotlib.pyplot as plt; plt.rcdefaults() import numpy as np import matplotlib.pyplot as plt objects = ('ETA_PRG_P2REF_RM', 'ETA_PRG_VDES_RM', 'ETA_PRG_P3REF_RM', 'Python', 'C++', 'Java', 'Perl', 'Scala', 'Lisp') y_pos = np.arange(len(objects)) performance = [220010, 234690, 235100, 21220, 83410, 119770, 210990, 190430, 888994] plt.bar(y_pos, performance, align='center', alpha=0.5) plt.xticks(y_pos, objects) plt.ylabel('Usage') plt.title('Programming language usage') plt.show() Look at the attached image [![enter image description here](http://i.stack.imgur.com/MAUfd.png)](http://i.stack.imgur.com/MAUfd.png) But, as you can see, the width of the plot is small, so the object names are not clear. Can you suggest how to resolve this problem? Thank you Answer: You can add tick labels with a rotation: pyplot.xticks(y_pos, objects, rotation=70) In addition to an angle in degrees, you can specify a string: pyplot.xticks(y_pos, objects, rotation='vertical') You may also want to specify a horizontal alignment, especially if using an angle like 40. The default is to use the center, which makes the labels look weird: pyplot.xticks(y_pos, objects, rotation=40, ha='right') See the [docs](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.xticks) for a full list of options.
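A sketch of the question's plot with the suggested rotation applied, plus `tight_layout()` so the long rotated labels are not clipped:

    import numpy as np
    import matplotlib.pyplot as plt

    objects = ('ETA_PRG_P2REF_RM', 'ETA_PRG_VDES_RM', 'ETA_PRG_P3REF_RM',
               'Python', 'C++', 'Java', 'Perl', 'Scala', 'Lisp')
    y_pos = np.arange(len(objects))
    performance = [220010, 234690, 235100, 21220, 83410, 119770, 210990, 190430, 888994]

    plt.bar(y_pos, performance, align='center', alpha=0.5)
    plt.xticks(y_pos, objects, rotation=40, ha='right')
    plt.ylabel('Usage')
    plt.title('Programming language usage')
    plt.tight_layout()  # make room for the rotated labels
    plt.show()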
Python classes: method has same name as property Question: I'm constructing a class Heating. Every instance of this class has the property 'temperature'. It's mandatory that Heating also supports the method temperature() that returns the property 'temperature' as an integer. When I call the method temperature() I get the error 'int' object is not callable because self.temperature is already defined as an integer. How do I solve this? code: class Heating: """ >>> machine1 = Heating('radiator kitchen', temperature=20) >>> machine2 = Heating('radiator living', minimum=15, temperature=18) >>> machine3 = Heating('radiator bathroom', temperature=22, minimum=18, maximum=28) >>> print(machine1) radiator kitchen: current temperature: 20.0; allowed min: 0.0; allowed max: 100.0 >>> machine2 Heating('radiator living', 18.0, 15.0, 100.0) >>> machine2.changeTemperature(8) >>> machine2.temperature() 26.0 >>> machine3.changeTemperature(-5) >>> machine3 Heating('radiator bathroom', 18.0, 18.0, 28.0) """ def __init__(self, name, temperature = 10, minimum = 0, maximum = 100): self.name = name self.temperature = temperature self.minimum = minimum self.maximum = maximum def __str__(self): return '{0}: current temperature: {1:.1f}; allowed min: {2:.1f}; allowed max: {3:.1f}'.format(self.name, self.temperature, self.minimum, self.maximum) def __repr__(self): return 'Heating(\'{0}\', {1:.1f}, {2:.1f}, {3:.1f})'.format(self.name, self.temperature, self.minimum, self.maximum) def changeTemperature(self, increment): self.temperature += increment if self.temperature < self.minimum: self.temperature = self.minimum if self.temperature > self.maximum: self.temperature = self.maximum def temperature(self): return self.temperature # test the program if __name__ == '__main__': import doctest doctest.testmod() Answer: You can't have both a method and an attribute with the same name. Methods are attributes too, albeit ones that you can call. Just rename the attribute to something else. You could use `_temperature`: def __init__(self, name, temperature = 10, minimum = 0, maximum = 100): self.name = name self._temperature = temperature self.minimum = minimum self.maximum = maximum def __str__(self): return '{0}: current temperature: {1:.1f}; allowed min: {2:.1f}; allowed max: {3:.1f}'.format(self.name, self._temperature, self.minimum, self.maximum) def __repr__(self): return 'Heating(\'{0}\', {1:.1f}, {2:.1f}, {3:.1f})'.format(self.name, self._temperature, self.minimum, self.maximum) def changeTemperature(self, increment): self._temperature += increment if self._temperature < self.minimum: self._temperature = self.minimum if self._temperature > self.maximum: self._temperature = self.maximum def temperature(self): return self._temperature Now your class has both `_temperature` (the integer) and `temperature()` (the method); the latter returns the former.
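An alternative worth knowing (my suggestion, not part of the answer): if attribute-style access is acceptable, a read-only property keeps `machine.temperature` working while the data lives in `_temperature`. Note this does not satisfy the assignment's stated requirement of a callable `temperature()` — in that case stick with the answer's plain rename.

    class Heating(object):
        def __init__(self, name, temperature=10):
            self.name = name
            self._temperature = temperature

        @property
        def temperature(self):
            return self._temperature

    h = Heating('radiator kitchen', temperature=20)
    print(h.temperature)  # 20, no parentheses needed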
Function within while loop not running more than once Question: I am writing a small game as a way to try and learn python. At the bottom of my code, there is a while loop which asks for user input. If that user input is yes, it is supposed to update a variable, encounter_prob. If encounter_prob is above 20, it is supposed to call the function. I can get this behavior to happen, but only once. It seems to not be going through the if statement again. import random def def_monster_attack(): monster_attack = random.randrange(1,6) return monster_attack def def_player_attack(): player_attack = random.randrange(1,7) return player_attack def def_encounter_prob(): encounter_prob = random.randrange(1,100) return encounter_prob def def_action(): action = raw_input("Move forward? Yes or No") return action player = { 'hp': 100, 'mp': 100, 'xp': 0, 'str': 7 } monster = { 'hp': 45, 'mp': 10, 'str': 6 } def encounter(): while player['hp'] > 0 and monster['hp'] > 0: keep_fighting = raw_input("Attack, Spell, Gaurd, or Run?") if keep_fighting.startswith('a') == True: player_attack = def_player_attack() monster['hp'] = monster['hp'] - player_attack monster_attack = def_monster_attack() player['hp'] = player['hp'] - monster_attack print "player's hp is", player['hp'] print "monster's hp is", monster['hp'] while player['hp'] > 0: action = def_action() if action.startswith('y') == True: encounter_prob = def_encounter_prob() if encounter_prob > 20: encounter() Answer: It's actually calling the function `encounter` twice but the first time ends up killing the monster by decreasing its `hp` to 0 or below, while the second time the function exits without entering the `while` loop because the `monster['hp'] > 0` comparison evaluates to `False`. The monster's `hp` aren't reset at each new encounter. If you're having difficulties debugging your code, do not hesitate to put some `print` statements in different places to examine the value of your data. This should help you pinpointing what's going on.
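Following the answer's diagnosis, a drop-in fix for the question's `encounter()` is to restore the monster's hp before each fight, so the `while` condition can be entered again:

    def encounter():
        monster['hp'] = 45  # reset; otherwise a dead monster ends every later call instantly
        while player['hp'] > 0 and monster['hp'] > 0:
            keep_fighting = raw_input("Attack, Spell, Gaurd, or Run?")
            if keep_fighting.startswith('a'):
                monster['hp'] -= def_player_attack()
                player['hp'] -= def_monster_attack()
                print "player's hp is", player['hp']
                print "monster's hp is", monster['hp']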
How do I connect a PyQt5 slot to a signal function in a class? Question: I'm trying to set up a pyqt signal between a block of UI code and a separate python class that is just for handling event responses. I don't want to give the UI code access to the handler (classic MVC style). Unfortunately, I am having difficulty connecting the slot to the signal. Here is the code: from PyQt5 import QtCore from PyQt5.QtCore import QObject class UiClass(QObject): mySignal = QtCore.pyqtSignal( str ) def __init__(self): QObject.__init__(self) def send_signal(self): self.mySignal.emit("Hello world!") class HandlerClass(): currentMessage = "none" def register(self, mySignal): mySignal.connect(self.receive_signal) @QtCore.pyqtSlot(str) def receive_signal(self, message): self.currentMessage = message print(message) ui = UiClass() handler = HandlerClass() handler.register(ui.mySignal) ui.send_signal() When I run this code it fails at the handler.register line. Here is the error: > Traceback (most recent call last): > > File "C:\git\IonControl\testpyqtslot.py", line 25, in > > handler.register(ui.mySignal) > > File "C:\git\IonControl\testpyqtslot.py", line 17, in register > > mySignal.connect(self.receive_signal) > > TypeError: connect() failed between UiClass.mySignal[str] and > receive_signal() I'd like this code to successfully register the signal to the slot and have the handler print "hello world" at the end. What did I do wrong here? My basic question is this: how do I connect a signal to a slot function that is part of a class? Answer: The error happens becuase you are using `pyqtSlot` decorator in a class which doesn't inherit from `QObject`. You can fix the problem by either getting rid of the decorator, or making `HandlerClass` a subclass of `QObject`. The main purpose of `pyqtSlot` is to allow several different overloads of a slot to be defined, each with a different signature. It may also be needed sometimes when making cross-thread connections. However, these use-cases are relatively rare, and in most PyQt applications it is not necessary to use `pyqtSlot` at all. Signals can be connected to **any** python callable object, whether it is decorated as a slot or not.
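The second fix from the answer, sketched out: make `HandlerClass` a `QObject` subclass so the `pyqtSlot` decorator becomes valid.

    from PyQt5 import QtCore
    from PyQt5.QtCore import QObject

    class HandlerClass(QObject):
        def __init__(self):
            QObject.__init__(self)
            self.currentMessage = "none"

        def register(self, mySignal):
            mySignal.connect(self.receive_signal)

        @QtCore.pyqtSlot(str)
        def receive_signal(self, message):
            self.currentMessage = message
            print(message)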
Scrapy spider does not store state (persistent state) Question: I have a basic spider that runs to fetch all links on a given domain. I want to make sure it persists its state so that it can resume from where it left off. I have followed the given url <http://doc.scrapy.org/en/latest/topics/jobs.html>. But when I try it, the first time it runs fine and I end it with Ctrl+C, and when I try to resume it the crawl stops on the first url itself. Below is the log when it ends: 2016-08-29 16:51:08 [scrapy] INFO: Dumping Scrapy stats: {'downloader/request_bytes': 896, 'downloader/request_count': 4, 'downloader/request_method_count/GET': 4, 'downloader/response_bytes': 35320, 'downloader/response_count': 4, 'downloader/response_status_count/200': 4, 'dupefilter/filtered': 149, 'finish_reason': 'finished', 'finish_time': datetime.datetime(2016, 8, 29, 16, 51, 8, 837853), 'log_count/DEBUG': 28, 'log_count/INFO': 7, 'offsite/domains': 22, 'offsite/filtered': 23, 'request_depth_max': 1, 'response_received_count': 4, 'scheduler/dequeued': 2, 'scheduler/dequeued/disk': 2, 'scheduler/enqueued': 2, 'scheduler/enqueued/disk': 2, 'start_time': datetime.datetime(2016, 8, 29, 16, 51, 7, 821974)} 2016-08-29 16:51:08 [scrapy] INFO: Spider closed (finished) Here is my spider: from scrapy.spiders import CrawlSpider, Rule from scrapy.linkextractors import LinkExtractor from Something.items import SomethingItem class maxSpider(CrawlSpider): name = 'something' allowed_domains = ['thecheckeredflag.com', 'inautonews.com'] start_urls = ['http://www.thecheckeredflag.com/', 'http://www.inautonews.com/'] rules = (Rule(LinkExtractor(allow=()), callback='parse_obj', follow=True),) def parse_obj(self,response): for link in LinkExtractor(allow=self.allowed_domains,deny =() ).extract_links(response): item = SomethingItem() item['url'] = link.url yield item #print item Scrapy version: Scrapy 1.1.2 Python version: 2.7 I am new to scrapy; if I need to post any more info please let me know. Answer: The reason this was happening was that the spider process was being killed abruptly: it was not shutting down properly on Ctrl+C. When the crawler shuts down gracefully the first time, it resumes properly too. So basically, press Ctrl+C only once and wait for the spider to finish shutting down on its own (a second Ctrl+C forces an unclean stop); once it has ended cleanly, the crawl will resume.
Querying postgresql for DateTime values between two dates Question: I have the following dateTime values, stored as a text type column in a Postgres table "2016-05-12T23:59:11+00:00" "2016-05-13T11:00:11+00:00" "2016-05-13T23:59:11+00:00" "2016-05-15T10:10:11+00:00" "2016-05-16T10:10:11+00:00" "2016-05-17T10:10:11+00:00" I have to write a Python function to extract the data for a few variables between two dates def fn(dateTime): df1=pd.DataFrame() query = """ SELECT "recordId" from "Table" where "dateTime" BETWEEN %s AND %s """ %(dStart,dEnd) df1=pd.read_sql_query(query,con=engine) return df1 I need to create dStart and dEnd variables and use them as function parameters as below fn('2016-05-12','2016-05-15') I tried using the to_char("dateTime", 'YYYY-MM-DD') Postgres function but it didn't work out. Please let me know how to solve this Answer: I'm not familiar with postgresql, but you can convert the strings to the `struct_time` class which is part of the built-in [`time` package](https://docs.python.org/3/library/time.html) in Python and simply make comparisons between them. import time time_data = ["2016-05-12T23:59:11+00:00", "2016-05-13T11:00:11+00:00", "2016-05-13T23:59:11+00:00", "2016-05-15T10:10:11+00:00", "2016-05-16T10:10:11+00:00", "2016-05-17T10:10:11+00:00"] def fn(t_init, t_fin, t_all): # Convert string inputs to struct_time using time.strptime() t_init, t_fin = [time.strptime(x, '%Y-%m-%d') for x in [t_init, t_fin]] t_all = [time.strptime(x, '%Y-%m-%dT%H:%M:%S+00:00') for x in t_all] out = [] for jj in range(len(t_all)): if t_init < t_all[jj] < t_fin: out.append(jj) return out out = fn('2016-05-12','2016-05-15', time_data) print(out) # [0, 1, 2] The `time.strptime` routine uses format specifiers to specify which parts of the string correspond to different time components. %Y Year with century as a decimal number. %m Month as a decimal number [01,12]. %d Day of the month as a decimal number [01,31]. %H Hour (24-hour clock) as a decimal number [00,23]. %M Minute as a decimal number [00,59]. %S Second as a decimal number [00,61]. %z Time zone offset from UTC. %a Locale's abbreviated weekday name. %A Locale's full weekday name. %b Locale's abbreviated month name. %B Locale's full month name. %c Locale's appropriate date and time representation. %I Hour (12-hour clock) as a decimal number [01,12]. %p Locale's equivalent of either AM or PM.
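Back on the SQL side of the question, a sketch of `fn()` that pushes the filtering into Postgres with a parameterised query (it assumes the `engine` object plus the table and column names from the question):

    import pandas as pd

    def fn(d_start, d_end):
        query = 'SELECT "recordId" FROM "Table" WHERE "dateTime" BETWEEN %(start)s AND %(end)s'
        return pd.read_sql_query(query, con=engine,
                                 params={'start': d_start, 'end': d_end})

    df1 = fn('2016-05-12', '2016-05-15')

Since the column is text, the comparison is lexicographic; that works for ISO-8601 strings, but note the upper bound '2016-05-15' sorts before '2016-05-15T10:10:11+00:00', so pass '2016-05-16' (or cast the column to timestamp) if the end date should be inclusive.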
SyntaxError: python Arabic encoding Question: I have this code - I am using Python 2.7: #!/usr/bin/python # -*- Coding: UTF-8 -*- import nltk from nltk.tokenize import StanfordTokenizer sentence = u"Ψ§Ω„Ψ³Ω„Ψ§Ω… ΨΉΩ„ΩŠΩƒΩ… و Ψ±Ψ­Ω…Ψ© Ψ§Ω„Ω„Ω‡ و Ψ¨Ψ±ΩƒΨ§ΨͺΩ‡" print StanfordTokenizer().tokenize(sentence) I saved the code in a file called example.py. When I run python example.py in the terminal I get the following error: File "example.py", line 5 SyntaxError: Non-ASCII character '\xd8' in file example.py on line 5, but no encoding declared; see http://python.org/dev/peps/pep-0263/ for details I already declared the type of encoding as UTF-8, so what is the problem? However, if I run the code line by line in the terminal it works with no error. Answer: > ... the first or second line must match the regular expression `^[ \t\v]*#.*?coding[:=][ \t]*([-_.a-zA-Z0-9]+)` [source](https://www.python.org/dev/peps/pep-0263/) Your encoding declaration does not match that regex. The `c` needs to be lowercase.
zip unknown number of lists with Python for more than one list Question: I need to do something very similar to what was asked here [How would you zip an unknown number of lists in Python?](http://stackoverflow.com/questions/5938786/how-would-you-zip-an-unknown-number-of-lists-in-python), but in a more general case. I have the following set up: a = [['a', '0', 'b', 'f', '78']] b = [['3', 'w', 'hh', '12', '8']] c = [['g', '7', '1', 'a0', '9'], ['45', '4', 'fe', 'h', 'k']] I need to zip these lists together to obtain: abc = [['a', '3', 'g', '45'], ['0', 'w', '7', '4'], ['b', 'hh', '1', 'fe'], ['f', '12', 'a0', 'h'], ['78', '8', '9', 'k']] which I can generate with: zip(a[0], b[0], c[0], c[1]) But the lists `a,b,c` contain a number of sublists that will vary for successive runs, so this "manual" way of expanding them won't work. The closest I can get is: zip(a[0], b[0], *c) Since unpacking a list with `*` in any other position than the last is not allowed, the "ideal" expression: zip(*a, *b, *c) does not work. How could I zip together a number of lists with an unknown number of sublists? Answer: `itertools.chain` to the rescue: import itertools z = list(zip(*itertools.chain(a, b, c)))
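Two footnotes to the answer: the `chain()` result does match the desired `abc`, and on Python 3.5+ [PEP 448](https://www.python.org/dev/peps/pep-0448/) makes the "ideal" multi-unpacking form legal after all.

    import itertools

    a = [['a', '0', 'b', 'f', '78']]
    b = [['3', 'w', 'hh', '12', '8']]
    c = [['g', '7', '1', 'a0', '9'], ['45', '4', 'fe', 'h', 'k']]

    z = list(zip(*itertools.chain(a, b, c)))
    print(z[0])  # ('a', '3', 'g', '45')

    # Python 3.5+ only (PEP 448):
    # z = list(zip(*a, *b, *c))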
Subprocess.Popen vs Subprocess.call Question: Using subprocess.Popen is producing incomplete results, whereas subprocess.call is giving correct output. This is related to a regression script which has 6 jobs; each job performs the same task but on different input files, and I'm running everything in parallel using subprocess.Popen. The task is performed using a shell script which has calls to a bunch of C-compiled executables whose job is to generate some text reports, followed by converting the text report info into jpg images. Sample shell script (runit is the file name) calling the C-compiled executables: #!/bin/csh -f #file name : runit #C - Executable 1 clean_spgs #C - Executable 2 scrub_spgs_all file1 scrub_spgs_all file2 #C - Executable 3 scrub_pick file1 1000 scrub_pick file2 1000 While using subprocess.Popen, both scrub_spgs_all and scrub_pick try to run in parallel, causing the script to generate incomplete results, i.e. output text files don't contain complete information and some of the output text reports are missing. The subprocess.Popen call is resrun_proc = subprocess.Popen("./"+runrescompare, shell=True, cwd=rescompare_dir, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True) where runrescompare is a shell script and has #!/bin/csh #some other text ./runit Whereas using subprocess.call generates all the output text files and jpg images correctly, but I can't run all 6 jobs in parallel. resrun_proc = subprocess.call("./"+runrescompare, shell=True, cwd=rescompare_dir, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True) What is the correct way to call a C-executable from a shell script using python subprocess calls where all 6 jobs can run in parallel (using Python 3.5.1)? Thanks. Answer: You tried to simulate multiprocessing with subprocess.Popen() which does not work like you want: the output is blocked after a while unless you consume it, for instance with `communicate()` (but this is blocking) or by reading the output, but with 6 concurrent handles in a loop, you are bound to get deadlocks. The best way is to run the `subprocess.call` lines in separate threads. There are several ways to do it. Small simple example with locking: import threading,time lock=threading.Lock() def func1(a,b,c): lock.acquire() print(a,b,c) lock.release() time.sleep(10) tl=[] t = threading.Thread(target=func1,args=[1,2,3]) t.start() tl.append(t) t=threading.Thread(target=func1,args=[4,5,6]) t.start() tl.append(t) # wait for all threads to complete (if you want to wait, else # you can skip this loop) for t in tl: t.join() I took the time to create an example more suitable to your needs: 2 threads executing a command and getting the output, then printing it within a lock to avoid mixup. I have used the `check_output` method for this. I'm using windows, and I list C and D drives in parallel. import threading,time,subprocess lock=threading.Lock() def func1(runrescompare,rescompare_dir): resrun_proc = subprocess.check_output(runrescompare, shell=True, cwd=rescompare_dir, stderr=subprocess.PIPE, universal_newlines=True) lock.acquire() print(resrun_proc) lock.release() tl=[] t=threading.Thread(target=func1,args=["ls","C:/"]) t.start() tl.append(t) t=threading.Thread(target=func1,args=["ls","D:/"]) t.start() tl.append(t) # wait for all threads to complete (if you want to wait, else # you can skip this loop) for t in tl: t.join()
Feature Importance for Random Forest Regressor in Python Question: I'm trying to find out which features have the most importance for my predictive model. Currently I'm using sklearn's inbuilt attribute as such Model = Model.fit(Train_Features, Labels_Train) print(Model.feature_importances_) It's just that it's more of a black-box method; I don't understand what method it uses to weight the importance of the features. Is there a better approach for doing this? Answer: Feature importance is not a black box when it comes to decision trees. From the documentation for a [DecisionTreeRegressor](http://scikit- learn.org/stable/modules/generated/sklearn.tree.DecisionTreeRegressor.html#sklearn.tree.DecisionTreeRegressor.feature_importances_): > The importance of a feature is computed as the (normalized) total reduction > of the criterion brought by that feature. It is also known as the Gini > importance. For a forest, it just averages across the different trees in your forest. Check out the [source code](https://github.com/scikit-learn/scikit- learn/blob/51a765a/sklearn/ensemble/forest.py#L322): def feature_importances_(self): """Return the feature importances (the higher, the more important the feature). Returns ------- feature_importances_ : array, shape = [n_features] """ if self.estimators_ is None or len(self.estimators_) == 0: raise NotFittedError("Estimator not fitted, " "call `fit` before `feature_importances_`.") all_importances = Parallel(n_jobs=self.n_jobs, backend="threading")( delayed(getattr)(tree, 'feature_importances_') for tree in self.estimators_) return sum(all_importances) / len(self.estimators_)
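If the goal is interpretability, it can also help to pair each importance score with its feature name and sort them; a minimal sketch, assuming `Train_Features` is a pandas DataFrame so column names are available:

    import pandas as pd

    importances = pd.Series(Model.feature_importances_, index=Train_Features.columns)
    print(importances.sort_values(ascending=False))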
troubles with pandas anaconda package Question: I got a new mac and just installed anaconda. When I use `ipython` and `spyder`, I can `import pandas` without any problem. However, when I use `sublime`, I get the error ImportError: No module named pandas `which python` gives `//anaconda/bin/python`. Also, `pandas` is listed in `conda list`. I am using `Anaconda2 4.1.1` and `python 2.7.12`. Can someone help? Answer: Just add these two lines to your .bash_profile (if you are on bash), export LC_ALL=en_US.UTF-8 export LANG=en_US.UTF-8 and source it, or restart the terminal. Or, if you are using the "oh my zsh" shell, add the lines to .zshrc instead.
Python KMax Pooling (MXNet) Question: I'm trying to recreate the char-level CNN in [this paper](https://arxiv.org/pdf/1606.01781v1.pdf) and am a bit stuck at the final step where I need to create a k-max pooling layer, because I am using MXNet and it does not have this. > An important difference is also the introduction of multiple temporal k-max > pooling layers. This allows to detect the k most important features in a > sentence, independent of their specific position, preserving their relative > order. However, MXNet does have the ability to [add a new-op](http://mxnet.readthedocs.io/en/latest/how_to/new_op.html) which I have been trying to do like so (although getting a bit confused with the shape of the data, given filters and batch-size). The shape of the data coming in: 128 (mini-batch) x 512 (number of filters) x 1 (height) x 125 (width) The shape of the data coming out (k-max pooling, k = 7): 128 (mini-batch) x 512 (number of filters) x 1 (height) x 7 (width) My idea so far ... : class KMaxPooling(mx.operator.CustomOp): def forward(self, is_train, req, in_data, out_data, aux): # Desired (k=3): # in_data = np.array([1, 2, 4, 10, 5, 3]) # out_data = [4, 10, 5] x = in_data[0].asnumpy() idx = x.argsort()[-k:] idx.sort(axis=0) y = x[idx] However, I'm not sure about several things: 1. How to test whether this works (once I have some complete code) 2. What the dimensions should be? I'm sorting on the last dimension (axis=0) 3. What to do for the backward() function i.e. the gradient propagation 4. Whether this will work with GPU - I'm guessing I will have to rewrite it in C/cuda? I found this example by someone else for keras (but don't have the rep to link): import numpy as np import theano.tensor as T from keras.layers.core import MaskedLayer class KMaxPooling(MaskedLayer): def __init__(self, pooling_size): super(MaskedLayer, self).__init__() self.pooling_size = pooling_size self.input = T.tensor3() def get_output_mask(self, train=False): return None def get_output(self, train=False): data = self.get_input(train) mask = self.get_input_mask(train) if mask is None: mask = T.sum(T.ones_like(data), axis=-1) mask = mask.dimshuffle(0, 1, "x") masked_data = T.switch(T.eq(mask, 0), -np.inf, data) result = masked_data[T.arange(masked_data.shape[0]).dimshuffle(0, "x", "x"), T.sort(T.argsort(masked_data, axis=1)[:, -self.pooling_size:, :], axis=1), T.arange(masked_data.shape[2]).dimshuffle("x", "x", 0)] return result Answer: [Here](https://github.com/CNevd/DeepLearning- Mxnet/blob/master/DCNN/dcnn_train.py#L15) you have the code in python with mxnet. It would be great to have it as well in R.
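To sanity-check the forward logic outside MXNet first, here is a plain-numpy sketch of k-max pooling along the last axis (keep the k largest values, preserving their original order); it matches the desired example above and works unchanged on the 4-D batch shape (it needs numpy >= 1.15 for take_along_axis):

    import numpy as np

    def kmax_pooling_np(x, k):
        # indices of the k largest values along the last axis...
        idx = np.argsort(x, axis=-1)[..., -k:]
        # ...re-sorted so the selected values keep their original order
        idx = np.sort(idx, axis=-1)
        return np.take_along_axis(x, idx, axis=-1)

    x = np.array([1, 2, 4, 10, 5, 3])
    print(kmax_pooling_np(x, 3))  # [ 4 10  5]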
Compute Cost of Kmeans Question: I am using this [model](https://github.com/yahoo/lopq/blob/master/python/lopq/model.py), which is not written by me. In order to predict the centroids I had to do this: model = cPickle.load(open("/tmp/model_centroids_128d_pkl.lopq")) codes = d.map(lambda x: (x[0], model.predict_coarse(x[1]))) where `d.first()` yields this: (u'3768915289', array([ -86.00641097, -100.41325623, <128 coords in total>])) and `codes.first()`: (u'3768915289', (5657, 7810)) How can I [computeCost()](http://spark.apache.org/docs/latest/api/python/pyspark.mllib.html?highlight=kmeans#pyspark.mllib.clustering.KMeansModel.computeCost) of this KMeans model? * * * After reading [train_model.py](https://github.com/yahoo/lopq/blob/master/spark/train_model.py), I am trying this: In [23]: from pyspark.mllib.clustering import KMeans, KMeansModel In [24]: Cs = model.Cs # centroids In [25]: model = KMeansModel(Cs[0]) # I am very positive this line is good In [26]: costs = d.map(lambda x: model.computeCost(x[1])) In [27]: costs.first() but I get this error: AttributeError: 'numpy.ndarray' object has no attribute 'map' which means that Spark tries to use `map()` under the hood for `x[1]`... * * * which means that it expects an RDD!!! But of which form? I am trying now with: d = d.map(lambda x: x[1]) d.first() array([ 7.17036494e+01, 1.07987890e+01, ...]) costs = model.computeCost(d) and I no longer get that error, but instead this: 16/08/30 00:39:21 WARN TaskSetManager: Lost task 821.0 in stage 40.0 : java.lang.IllegalArgumentException: requirement failed at scala.Predef$.require(Predef.scala:221) at org.apache.spark.mllib.util.MLUtils$.fastSquaredDistance(MLUtils.scala:330) at org.apache.spark.mllib.clustering.KMeans$.fastSquaredDistance(KMeans.scala:595) at org.apache.spark.mllib.clustering.KMeans$$anonfun$findClosest$1.apply(KMeans.scala:569) at org.apache.spark.mllib.clustering.KMeans$$anonfun$findClosest$1.apply(KMeans.scala:563) at scala.collection.mutable.ArraySeq.foreach(ArraySeq.scala:73) at org.apache.spark.mllib.clustering.KMeans$.findClosest(KMeans.scala:563) at org.apache.spark.mllib.clustering.KMeans$.pointCost(KMeans.scala:586) at org.apache.spark.mllib.clustering.KMeansModel$$anonfun$computeCost$1.apply(KMeansModel.scala:88) at org.apache.spark.mllib.clustering.KMeansModel$$anonfun$computeCost$1.apply(KMeansModel.scala:88) at scala.collection.Iterator$$anon$11.next(Iterator.scala:328) at scala.collection.Iterator$class.foreach(Iterator.scala:727) at scala.collection.AbstractIterator.foreach(Iterator.scala:1157) at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:144) at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1157) at scala.collection.TraversableOnce$class.fold(TraversableOnce.scala:199) at scala.collection.AbstractIterator.fold(Iterator.scala:1157) at org.apache.spark.rdd.RDD$$anonfun$fold$1$$anonfun$19.apply(RDD.scala:1086) at org.apache.spark.rdd.RDD$$anonfun$fold$1$$anonfun$19.apply(RDD.scala:1086) at org.apache.spark.SparkContext$$anonfun$36.apply(SparkContext.scala:1951) at org.apache.spark.SparkContext$$anonfun$36.apply(SparkContext.scala:1951) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66) at org.apache.spark.scheduler.Task.run(Task.scala:89) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 
--------------------------------------------------------------------------- Py4JJavaError Traceback (most recent call last) <ipython-input-44-6223595c8b5f> in <module>() ----> 1 costs = model.computeCost(d) /home/gs/spark/current/python/pyspark/mllib/clustering.py in computeCost(self, rdd) 140 """ 141 cost = callMLlibFunc("computeCostKmeansModel", rdd.map(_convert_to_vector), --> 142 [_convert_to_vector(c) for c in self.centers]) 143 return cost 144 /home/gs/spark/current/python/pyspark/mllib/common.py in callMLlibFunc(name, *args) 128 sc = SparkContext.getOrCreate() 129 api = getattr(sc._jvm.PythonMLLibAPI(), name) --> 130 return callJavaFunc(sc, api, *args) 131 132 /home/gs/spark/current/python/pyspark/mllib/common.py in callJavaFunc(sc, func, *args) 121 """ Call Java Function """ 122 args = [_py2java(sc, a) for a in args] --> 123 return _java2py(sc, func(*args)) 124 125 /home/gs/spark/current/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py in __call__(self, *args) 811 answer = self.gateway_client.send_command(command) 812 return_value = get_return_value( --> 813 answer, self.gateway_client, self.target_id, self.name) 814 815 for temp_arg in temp_args: /home/gs/spark/current/python/pyspark/sql/utils.py in deco(*a, **kw) 43 def deco(*a, **kw): 44 try: ---> 45 return f(*a, **kw) 46 except py4j.protocol.Py4JJavaError as e: 47 s = e.java_exception.toString() /home/gs/spark/current/python/lib/py4j-0.9-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name) 306 raise Py4JJavaError( 307 "An error occurred while calling {0}{1}{2}.\n". --> 308 format(target_id, ".", name), value) 309 else: 310 raise Py4JError( Py4JJavaError: An error occurred while calling o25177.computeCostKmeansModel. : org.apache.spark.SparkException: Job aborted due to stage failure: Task 821 in stage 40.0 failed 4 times, most recent failure: Lost task 821.3 in stage 40.0: java.lang.IllegalArgumentException: requirement failed at scala.Predef$.require(Predef.scala:221) at org.apache.spark.mllib.util.MLUtils$.fastSquaredDistance(MLUtils.scala:330) at org.apache.spark.mllib.clustering.KMeans$.fastSquaredDistance(KMeans.scala:595) at org.apache.spark.mllib.clustering.KMeans$$anonfun$findClosest$1.apply(KMeans.scala:569) at org.apache.spark.mllib.clustering.KMeans$$anonfun$findClosest$1.apply(KMeans.scala:563) at scala.collection.mutable.ArraySeq.foreach(ArraySeq.scala:73) at org.apache.spark.mllib.clustering.KMeans$.findClosest(KMeans.scala:563) at org.apache.spark.mllib.clustering.KMeans$.pointCost(KMeans.scala:586) at org.apache.spark.mllib.clustering.KMeansModel$$anonfun$computeCost$1.apply(KMeansModel.scala:88) at org.apache.spark.mllib.clustering.KMeansModel$$anonfun$computeCost$1.apply(KMeansModel.scala:88) at scala.collection.Iterator$$anon$11.next(Iterator.scala:328) at scala.collection.Iterator$class.foreach(Iterator.scala:727) at scala.collection.AbstractIterator.foreach(Iterator.scala:1157) at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:144) at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1157) at scala.collection.TraversableOnce$class.fold(TraversableOnce.scala:199) at scala.collection.AbstractIterator.fold(Iterator.scala:1157) at org.apache.spark.rdd.RDD$$anonfun$fold$1$$anonfun$19.apply(RDD.scala:1086) at org.apache.spark.rdd.RDD$$anonfun$fold$1$$anonfun$19.apply(RDD.scala:1086) at org.apache.spark.SparkContext$$anonfun$36.apply(SparkContext.scala:1951) at org.apache.spark.SparkContext$$anonfun$36.apply(SparkContext.scala:1951) at 
org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66) at org.apache.spark.scheduler.Task.run(Task.scala:89) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) * * * Edit: split_vecs = d.map(lambda x: np.split(x[1], 2)) seems to be a good step, since the centroids are of 64 dimensions. model.computeCost((d.map(lambda x: x[1])).first()) gives this error: `AttributeError: 'numpy.ndarray' object has no attribute 'map'`. Answer: Apparently from the [documentation](https://spark.apache.org/docs/latest/mllib-clustering.html) I've read, you have to: 1. Create a model maybe by [reading](http://spark.apache.org/docs/latest/api/python/pyspark.mllib.html?highlight=kmeans#pyspark.mllib.clustering.KMeansModel.load) a previously [saved ](http://spark.apache.org/docs/latest/api/python/pyspark.mllib.html?highlight=kmeans#pyspark.mllib.clustering.KMeansModel.save) model, or by fitting a new model. 2. After obtaining that [model](http://spark.apache.org/docs/latest/api/python/pyspark.mllib.html?highlight=kmeans#pyspark.mllib.clustering.KMeansModel) you can use its method [computeCost](http://spark.apache.org/docs/latest/api/python/pyspark.mllib.html?highlight=kmeans#pyspark.mllib.clustering.KMeansModel.clusterCenters), which needs a well formatted `RDD` to output something useful. Thus, if I assume that your variable `model` is a `KMeansModel` and the data stored in the variable `d` has the expected representation, then you should be able to run the following code: model.computeCost(d) * * * Edit: You should create an RDD that will contain vectors of the same dimensions as the centroids, and provide that as an input parameter to `computeCost()`, like this for example: split_vecs = d.map(lambda x: (x[0], np.split(x[1], 2))) costs_per_split = [KMeansModel(model.Cs[i]).computeCost(split_vecs.map(lambda x: x[1][i])) for i in range(2)]
Python dictionary converting to json or yaml Question: I have parsed a string and converted it into a dictionary. However, I would like to be able to get more information from my dictionary, and I was thinking creating a json file or yaml file would be more useful. Please feel free to comment on another way of solving this problem. import re regex = '(\w+) \[(.*?)\] "(.*?)" (\d+) "(.*?)" "(.*?)"' sample2= 'this [condo] "number is" 28 "owned by" "james"' patern = '(?P<host>\S+)\s(?P<tipe>\[\w+\])\s(?P<txt>"(.*?))\s(?P<number>\d+)\s(?P<txt1>"\w+\s\w+")\s(?P<owner>"\w+)' m =re.match(patern, sample2) res = m.groupdict() print res and I would get: {'tipe': '[condo]', 'number': '28', 'host': 'this', 'owner': '"james', 'txt': '"number is"', 'txt1': '"owned by"'} I would like to be able to sort by owner name and write out the type of house and the house number. For example: James:{ type:condo, number: 28} or any other suggestions? Answer: The question of json vs yaml boils down to what you are doing with the data you are saving. If you are reusing the data in JS or in another python script then you could use JSON. If you were using it in something that uses YAML then use YAML. But in terms of your question, unless there is a particular end game or use case for your data, it doesn't matter the format; you could even just do a typical TSV/CSV and it'd be the same. To answer your question about how to write a dict to json: from your example, once you have the `res` dictionary made, import json with open('someFile.json', 'wb') as outfile: json.dump(res, outfile) Your output file `someFile.json` will be your dictionary `res`.
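To get the `James: {type: condo, number: 28}` shape from the question, you can restructure the parsed groups into a nested dict before dumping; a sketch that also strips the leftover quotes and brackets from the regex captures:

    import json

    owner = res['owner'].strip('"')
    nested = {owner: {'type': res['tipe'].strip('[]'),
                      'number': int(res['number'])}}
    with open('someFile.json', 'w') as outfile:
        json.dump(nested, outfile, indent=4)
    # {"james": {"type": "condo", "number": 28}}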
Grouping of documents having the same phone number Question: My database consists of a collection of a large number of hotels (approx 121,000). This is what my collection looks like: { "_id" : ObjectId("57bd5108f4733211b61217fa"), "autoid" : 1, "parentid" : "P01982.01982.110601173548.N2C5", "companyname" : "Sheldan Holiday Home", "latitude" : 34.169552, "longitude" : 77.579315, "state" : "JAMMU AND KASHMIR", "city" : "LEH Ladakh", "pincode" : 194101, "phone_search" : "9419179870|253013", "address" : "Sheldan Holiday Home|Changspa|Leh Ladakh-194101|LEH Ladakh|JAMMU AND KASHMIR", "email" : "", "website" : "", "national_catidlineage_search" : "/10255012/|/10255031/|/10255037/|/10238369/|/10238380/|/10238373/", "area" : "Leh Ladakh", "data_city" : "Leh Ladakh" } Each document can have 1 or more phone numbers separated by the "|" delimiter. I have to group together documents having the same phone number. This has to happen in real time: when a user opens up a particular hotel to see its details on the web interface, I should be able to display all the hotels linked to it, grouped by common phone numbers. While grouping, if one hotel links to another and that hotel links to another, then all 3 should be grouped together. > Example : Hotel A has phone numbers 1|2, B has phone numbers 3|4 and C has > phone numbers 2|3, then A, B and C should be grouped together. from pymongo import MongoClient from pprint import pprint #Pretty print import re #for regex #import unicodedata client = MongoClient() cLen = 0 cLenAll = 0 flag = 0 countA = 0 countB = 0 list = [] allHotels = [] conContact = [] conId = [] hotelTotal = [] splitListAll = [] contactChk = [] #We'll be passing the value later as parameter via a function call #hId = 37443; regx = re.compile("^Vivanta", re.IGNORECASE) #Connection db = client.hotel collection = db.hotelData #Finding hotels wrt search input for post in collection.find({"companyname":regx}): list.append(post) #Copying all hotels in a list for post1 in collection.find(): allHotels.append(post1) hotelIndex = 11 #Index of hotel selected from search result conIndex = hotelIndex x = list[hotelIndex]["companyname"] #Name of selected hotel y = list[hotelIndex]["phone_search"] #Phone numbers of selected hotel try: splitList = y.split("|") #Splitting of phone numbers and storing in a list 'splitList' except: splitList = y print "Contact details of",x,":" #Printing all contacts... for contact in splitList: print contact conContact.extend(contact) cLen = cLen+1 print "No. of contacts in",x,"=",cLen for i in allHotels: yAll = allHotels[countA]["phone_search"] try: splitListAll.append(yAll.split("|")) countA = countA+1 except: splitListAll.append(yAll) countA = countA + 1 # print splitListAll #count = 0 #This block has errors #Add code to stop when no new links occur and optimize the outer for loop #for j in allHotels: for contactAll in splitListAll: if contactAll in conContact: conContact.extend(contactAll) # contactChk = contactAll # if (set(conContact) & set(contactChk)): # conContact = contactChk # contactChk[:] = [] #drop contactChk list conId = allHotels[countB]["autoid"] countB = countB+1 print "Printing the list of connected hotels..." for final in collection.find({"autoid":conId}): print final This is the code I wrote in Python. In it, I tried performing a linear search in a for loop. I am getting some errors as of now, but it should work when rectified. I need an optimized version of this, as linear search has poor time complexity. 
I am pretty new to this so any other suggestions to improve the code are welcome. Thanks. Answer: The easiest answer to any Python in-memory search-for question is "use a dict". Dicts give O(1) average key-access speed; searching a list is O(N). Also remember that you can put a Python object into as many dicts (or lists), and as many times into one dict or list, as it takes. They are not copied. It's just a reference. So the essentials will look like for hotel in hotels: phones = hotel["phone_search"].split("|") for phone in phones: hotelsbyphone.setdefault(phone,[]).append(hotel) At the end of this loop, `hotelsbyphone["123456"]` will be a list of hotel objects which had "123456" as one of their `phone_search` strings. The key coding feature is the `.setdefault(key, [])` method which initializes an empty list if the key is not already in the dict, so that you can then append to it. Once you have built this index, this will be fast try: hotels = hotelsbyphone[x] # and process a list of one or more hotels except KeyError: # no hotels exist with that number As an alternative to `try ... except`, test `if x in hotelsbyphone:`
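Building on the `hotelsbyphone` index, the transitive grouping from the question (A linked to C linked to B through shared numbers) can then be collected with a breadth-first walk; a sketch, assuming every hotel dict has the `autoid` and `phone_search` fields shown:

    from collections import deque

    def connected_group(start_hotel, hotelsbyphone):
        group, seen = [], set()
        queue = deque([start_hotel])
        while queue:
            hotel = queue.popleft()
            if hotel["autoid"] in seen:
                continue
            seen.add(hotel["autoid"])
            group.append(hotel)
            # enqueue every hotel sharing any of this hotel's numbers
            for phone in hotel["phone_search"].split("|"):
                queue.extend(hotelsbyphone.get(phone, []))
        return group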
pdf_multivariate_gauss() function in Python Question: What are the necessary modules for executing the function pdf_multivariate_gauss() in IPython? I try to execute the code below, but I get errors like "ImportError" and "NameError". **_Code:_** import numpy as np from matplotlib import pyplot as plt from matplotlib.mlab import bivariate_normal import parzen_window_est import pdf_multivariate_gauss ######## ImportError ######## import operator from mpl_toolkits.mplot3d import Axes3D ############################################## ### Predicted bivariate Gaussian densities ### ############################################## mu_vec = np.array([0,0]) cov_mat = np.array([[1,0],[0,1]]) x_2Dgauss = np.random.multivariate_normal(mu_vec, cov_mat, 10000) # generate a range of 400 window widths between 0 < h < 1 h_range = np.linspace(0.001, 1, 400) # calculate the actual density at the center [0, 0] mu = np.array([[0],[0]]) cov = np.eye(2) actual_pdf_val = pdf_multivariate_gauss.pdf_multivariate_gauss(np.array([[0],[0]]), mu, cov) ######## NameError ######### # get a list of the differences (|estimate-actual|) for different window widths parzen_estimates = [np.abs(parzen_window_est.parzen_window_est(x_2Dgauss, h=1, center=[0, 0])) for i in h_range] # get the window width for which |estimate-actual| is closest to 0 min_index, min_value = min(enumerate(parzen_estimates), key=operator.itemgetter(1)) [IPython output](http://i.stack.imgur.com/uqWUp.png) Answer: As far as I can tell, there is no such thing as pdf_multivariate_gauss (as pointed out already). There is a python implementation of this in `scipy`, however: [scipy.stats.multivariate_normal](http://docs.scipy.org/doc/scipy-0.18.0/reference/generated/scipy.stats.multivariate_normal.html) One would use it like this (a self-contained version of the snippet, with the mean and covariance from the question): import numpy as np from scipy.stats import multivariate_normal mu = np.array([0, 0]) # mean vector cov = np.eye(2) # covariance matrix mvn = multivariate_normal(mu, cov) # create a multivariate Gaussian object with specified mean and covariance matrix p = mvn.pdf(np.array([0, 0])) # evaluate the probability density at x = [0, 0]
How to execute set of commands after sudo using python script Question: I am trying to automate a deployment process using Python. In deployment I do "dzdo su - sysid" first and then perform the deployment process. But I am not able to handle this part in python. I have done a similar thing in shell, where I used the following piece of code: /bin/bash psh su - sysid << EOF . /users/home/sysid/.bashrc ./deployment.sh EOF This handles the execution of deployment.sh very well. It does the sudo and then executes the script as the sysid user. I am trying to do a similar thing using python, but I am not able to find any alternative to << EOF in python. I am using subprocess.Popen to execute the dzdo part. It does the dzdo, but when I try to execute the next command, e.g. "ls -l", this command does not get executed as sysid; instead, I had to exit from the sysid session, and as soon as I exit, it executes "ls -l" in my home directory, which is of no use. Can someone please help me on this? And one more thing: in this case I am not calling any deployment.sh, but I will call commands like cp, rm, mkdir etc. Answer: The text between `<< EOF` and `EOF` in your shell script example will be written to the standard input of the `psh` process. So you have to redirect the standard input of your `Popen` instance and write the data either directly into the `stdin` file of your instance or use the `communicate()` method: #!/usr/bin/env python # coding: utf8 from __future__ import absolute_import, division, print_function from subprocess import Popen, PIPE SHELL_COMMANDS = '''\ . /users/home/sysid/.bashrc ./deployment.sh ''' def main(): process = Popen(['psh', 'su', '-', 'sysid'], stdin=PIPE) process.communicate(SHELL_COMMANDS) if __name__ == '__main__': main() (Note: the string literal must not be a raw string here, so that the trailing backslash swallows the first newline.) If you need the content of the process' standard output and/or error then you need to pipe those too and work with the return value of the `communicate()` call.
Python Import Text Array with Numpy Question: I have a text file that looks like this: ... 5 [0, 1] [512, 479] 991 10 [1, 0] [706, 280] 986 15 [1, 0] [807, 175] 982 20 [1, 0] [895, 92] 987 ... Each column is tab separated, but there are arrays in some of the columns. Can I import these with `np.genfromtxt` in some way? The resulting unpacked lists should be, for example: data1 = [..., 5, 10, 15, 20, ...] data2 = [..., [512, 479], [706, 280], ... ] (i.e. a 2D list) etc. I tried `data1, data2, data3, data4 = np.genfromtxt('data.txt', dtype=None, delimiter='\t', unpack=True)` but `data2` and `data3` are lists containing 'nan'. Answer: A potential approach for the given data, though not using numpy: import ast data1, data2, data3, data4 = [],[],[],[] for l in open('data.txt'): data = l.split('\t') data1.append(int(data[0])) data2.append(ast.literal_eval(data[1])) data3.append(ast.literal_eval(data[2])) data4.append(int(data[3])) print 'data1', data1 print 'data2', data2 print 'data3', data3 print 'data4', data4 Gives data1 [5, 10, 15, 20] data2 [[0, 1], [1, 0], [1, 0], [1, 0]] data3 [[512, 479], [706, 280], [807, 175], [895, 92]] data4 [991, 986, 982, 987]
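If numpy arrays are wanted downstream (the original genfromtxt goal), the parsed lists convert directly once they are built:

    import numpy as np

    data2 = np.array(data2)  # shape (N, 2)
    data3 = np.array(data3)  # shape (N, 2)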
Retrieve geocodes from Google API and append to original table - python Question: I am trying to retrieve the geocodes of a bunch of addresses through the Google geocoding API and append them to my table with addresses. After spending two days reviewing the internet, I couldn't find any simple way of doing this, while it shouldn't be that hard. I am especially having problems parsing the json output and appending it to my original table. I use python 3.5 on windows I originally got the data from a database which I added to a dataframe in python. But to paste it here it was easier to convert it to a dictionary and back to a dataframe: data_dict={'street': {0: 'ROMULO', 1: 'SAN BARTOLOME', 2: 'GARBI', 3: 'SAN JOSE'}, 'concat': {0: '3+ROMULO+CALLE+ALMERIA', 1: '5+SAN BARTOLOME+CALLE+TOLEDO', 2: '48+GARBI+CALLE+CASTELLON', 3: '30+SAN JOSE+CALLE+SANTA CRUZ DE TENERIFE'}, 'number': {0: '3', 1: '5', 2: '48', 3: '30'}, 'province': {0: 'ALMERIA', 1: 'TOLEDO', 2: 'CASTELLON', 3: 'SANTA CRUZ DE TENERIFE'}, 'region': {0: 'ANDALUCIA', 1: 'CASTILLA LA MANCHA', 2: 'COMUNIDAD VALENCIANA', 3: 'CANARIAS'}} Back to dataframe: import pandas as pd table=pd.DataFrame.from_dict(data_dict) Now I retrieve the data from the Google geocoding API: import requests import json key="MyKey" jsonout=[] for i in table.loc[:,'concat']: try: url="https://maps.googleapis.com/maps/api/geocode/json?address=%s&key=%s" % (i, key) response = requests.get(url) jsonf = response.json() jsonout.append(jsonf) except Exception: continue I get this output: jsonout=[{'results': [{'address_components': [{'long_name': '3', 'short_name': '3', 'types': ['street_number']}, {'long_name': 'Calle Rómulo', 'short_name': 'Calle Rómulo', 'types': ['route']}, {'long_name': 'Adra', 'short_name': 'Adra', 'types': ['locality', 'political']}, {'long_name': 'Almería', 'short_name': 'AL', 'types': ['administrative_area_level_2', 'political']}, {'long_name': 'Andalucía', 'short_name': 'AL', 'types': ['administrative_area_level_1', 'political']}, {'long_name': 'Spain', 'short_name': 'ES', 'types': ['country', 'political']}, {'long_name': '04770', 'short_name': '04770', 'types': ['postal_code']}], 'formatted_address': 'Calle Rómulo, 3, 04770 Adra, Almería, Spain', 'geometry': {'location': {'lat': 36.7593, 'lng': -2.97818}, 'location_type': 'ROOFTOP', 'viewport': {'northeast': {'lat': 36.76064898029149, 'lng': -2.976831019708498}, 'southwest': {'lat': 36.7579510197085, 'lng': -2.979528980291502}}}, 'partial_match': True, 'place_id': 'ChIJG39VNzNOcA0R2f8Ek3E12AY', 'types': ['street_address']}], 'status': 'OK'}, {'results': [{'address_components': [{'long_name': '5', 'short_name': '5', 'types': ['street_number']}, {'long_name': 'Calle de San Bartolomé', 'short_name': 'Calle de San Bartolomé', 'types': ['route']}, {'long_name': 'Toledo', 'short_name': 'Toledo', 'types': ['locality', 'political']}, {'long_name': 'Toledo', 'short_name': 'TO', 'types': ['administrative_area_level_2', 'political']}, {'long_name': 'Castilla-La Mancha', 'short_name': 'CM', 'types': ['administrative_area_level_1', 'political']}, {'long_name': 'Spain', 'short_name': 'ES', 'types': ['country', 'political']}, {'long_name': '45002', 'short_name': '45002', 'types': ['postal_code']}], 'formatted_address': 'Calle de San Bartolomé, 5, 45002 Toledo, Spain', 'geometry': {'location': {'lat': 39.8549781, 'lng': -4.026267199999999}, 'location_type': 'ROOFTOP', 'viewport': {'northeast': {'lat': 39.85632708029149, 'lng': -4.024918219708497}, 'southwest': {'lat': 39.85362911970849, 'lng': 
-4.027616180291502}}}, 'partial_match': True, 'place_id': 'ChIJ4bse1aALag0RJ5RxxfyDxUI', 'types': ['street_address']}], 'status': 'OK'}, {'results': [{'address_components': [{'long_name': '48', 'short_name': '48', 'types': ['street_number']}, {'long_name': 'Carrer de Garbí', 'short_name': 'Carrer de Garbí', 'types': ['route']}, {'long_name': 'Peníscola', 'short_name': 'Peníscola', 'types': ['locality', 'political']}, {'long_name': 'Castelló', 'short_name': 'Castelló', 'types': ['administrative_area_level_2', 'political']}, {'long_name': 'Comunidad Valenciana', 'short_name': 'Comunidad Valenciana', 'types': ['administrative_area_level_1', 'political']}, {'long_name': 'Spain', 'short_name': 'ES', 'types': ['country', 'political']}, {'long_name': '12598', 'short_name': '12598', 'types': ['postal_code']}], 'formatted_address': 'Carrer de Garbí, 48, 12598 Peníscola, Castelló, Spain', 'geometry': {'location': {'lat': 40.3634529, 'lng': 0.3963583}, 'location_type': 'ROOFTOP', 'viewport': {'northeast': {'lat': 40.3648018802915, 'lng': 0.397707280291502}, 'southwest': {'lat': 40.3621039197085, 'lng': 0.395009319708498}}}, 'partial_match': True, 'place_id': 'ChIJHVNHcelGoBIRogILRMno_wk', 'types': ['street_address']}, {'address_components': [{'long_name': '48', 'short_name': '48', 'types': ['street_number']}, {'long_name': 'Carrer Garbí', 'short_name': 'Carrer Garbí', 'types': ['route']}, {'long_name': 'Vila-real', 'short_name': 'Vila-real', 'types': ['locality', 'political']}, {'long_name': 'Castelló', 'short_name': 'Castelló', 'types': ['administrative_area_level_2', 'political']}, {'long_name': 'Comunidad Valenciana', 'short_name': 'Comunidad Valenciana', 'types': ['administrative_area_level_1', 'political']}, {'long_name': 'Spain', 'short_name': 'ES', 'types': ['country', 'political']}, {'long_name': '12540', 'short_name': '12540', 'types': ['postal_code']}], 'formatted_address': 'Carrer Garbí, 48, 12540 Vila-real, Castelló, Spain', 'geometry': {'bounds': {'northeast': {'lat': 39.955829, 'lng': -0.110409}, 'southwest': {'lat': 39.9558231, 'lng': -0.1104261}}, 'location': {'lat': 39.9558231, 'lng': -0.110409}, 'location_type': 'RANGE_INTERPOLATED', 'viewport': {'northeast': {'lat': 39.9571750302915, 'lng': -0.109068569708498}, 'southwest': {'lat': 39.9544770697085, 'lng': -0.111766530291502}}}, 'partial_match': True, 'place_id': 'EjRDYXJyZXIgR2FyYsOtLCA0OCwgMTI1NDAgVmlsYS1yZWFsLCBDYXN0ZWxsw7MsIFNwYWlu', 'types': ['street_address']}], 'status': 'OK'}, {'results': [{'address_components': [{'long_name': '30', 'short_name': '30', 'types': ['street_number']}, {'long_name': 'Calle San José', 'short_name': 'Calle San José', 'types': ['route']}, {'long_name': 'Santa Cruz de la Palma', 'short_name': 'Santa Cruz de la Palma', 'types': ['locality', 'political']}, {'long_name': 'Santa Cruz de Tenerife', 'short_name': 'TF', 'types': ['administrative_area_level_2', 'political']}, {'long_name': 'Canarias', 'short_name': 'CN', 'types': ['administrative_area_level_1', 'political']}, {'long_name': 'Spain', 'short_name': 'ES', 'types': ['country', 'political']}, {'long_name': '38700', 'short_name': '38700', 'types': ['postal_code']}], 'formatted_address': 'Calle San José, 30, 38700 Santa Cruz de la Palma, Santa Cruz de Tenerife, Spain', 'geometry': {'location': {'lat': 28.6864347, 'lng': -17.7624433}, 'location_type': 'ROOFTOP', 'viewport': {'northeast': {'lat': 28.6877836802915, 'lng': -17.7610943197085}, 'southwest': {'lat': 28.6850857197085, 'lng': -17.7637922802915}}}, 'partial_match': True, 
'place_id': 'ChIJ8ZFx6__rawwRV3dc118gEgE', 'types': ['street_address']}, {'address_components': [{'long_name': '30', 'short_name': '30', 'types': ['street_number']}, {'long_name': 'Calle San José', 'short_name': 'Calle San José', 'types': ['route']}, {'long_name': 'San Andrés', 'short_name': 'San Andrés', 'types': ['locality', 'political']}, {'long_name': 'Santa Cruz de Tenerife', 'short_name': 'Santa Cruz de Tenerife', 'types': ['administrative_area_level_4', 'political']}, {'long_name': 'Santa Cruz de Tenerife', 'short_name': 'TF', 'types': ['administrative_area_level_2', 'political']}, {'long_name': 'Canarias', 'short_name': 'CN', 'types': ['administrative_area_level_1', 'political']}, {'long_name': 'Spain', 'short_name': 'ES', 'types': ['country', 'political']}, {'long_name': '38120', 'short_name': '38120', 'types': ['postal_code']}], 'formatted_address': 'Calle San José, 30, 38120 San Andrés, Santa Cruz de Tenerife, Spain', 'geometry': {'location': {'lat': 28.505875, 'lng': -16.1930036}, 'location_type': 'ROOFTOP', 'viewport': {'northeast': {'lat': 28.5072239802915, 'lng': -16.1916546197085}, 'southwest': {'lat': 28.5045260197085, 'lng': -16.1943525802915}}}, 'partial_match': True, 'place_id': 'ChIJsfd-ITjKQQwRjFHLI0XPSok', 'types': ['street_address']}], 'status': 'OK'}] What I finally would like to have is my original table dataframe with the lat and lng coordinates (i['results'][0]['geometry']['location']['lat'], i['results'][0]['geometry']['location']['lng']) and the formatted_address from the request. Answer: I use [this package](https://pypi.python.org/pypi/geopy) to do my geocoding, which takes care of parsing the JSON file. from geopy.geocoders import GoogleV3 googleGeo = GoogleV3('googleKey') # create a geocoded list containing geocode objects geocoded = [] for address in mydata['location']: # assumes mydata is a pandas df geocoded.append(googleGeo.geocode(address)) # geocode function returns a geocoded object # append geocoded list to mydata mydata['geocoded'] = geocoded # create coordinates column mydata['coords'] = mydata['geocoded'].apply(lambda x: (x.latitude, x.longitude)) # if you want to split our your lat and long then do # mydata['lat'] = mydata['geocoded'].apply(lambda x: x.latitude) # mydata['long'] = mydata['geocoded'].apply(lambda x: x.longitude) * * * Based on the comment you shared, if you are using Google's API without an API key, then it might be beneficial to include a random pause between each geocode call. from time import sleep from random import randint from geopy.geocoders import GoogleV3 googleGeo = GoogleV3() def geocode(address): location = googleGeo.geocode(address) sleep(randint(5,10)) # give the API a break return location Then you use this custom function to do your geocoding * * * Piggybacking on my earlier section, you can even utilize multiple map API services. This is the function I built for one of my projects, utilizing Nominatim's API first, and then falling back on Google's API if Nominatim either returns an error or returns nothing: from geopy.geocoders import Nominatim, GoogleV3 from geopy.exc import GeocoderTimedOut, GeocoderAuthenticationFailure from random import randint from time import sleep nomiGeo = Nominatim() # Nominatim geolocator googleGeo = GoogleV3('myKey') # Google Maps v3 API geolocator def geocode(address): """Geocode an address. 
Args: address (str): the physical address Returns: dict: geocoded object """ location = None attempt = 0 useGoogle = False # set to True to use Google only while (location is None) and (attempt <= 8): try: attempt += 1 if useGoogle: location = googleGeo.geocode(address, timeout=10) else: location = nomiGeo.geocode(address, timeout=10) if location is None: useGoogle = True location = googleGeo.geocode(address, timeout=10) sleep(randint(5, 10)) # Give the API a break except GeocoderAuthenticationFailure: print 'Error: GeocoderAuthenticationFailure while geocoding {} during attempt #{}'.format(address, attempt) if attempt % 2 == 0: # switch between services for every attempt useGoogle = True else: useGoogle = False sleep(60) except GeocoderTimedOut: sleep(randint(3, 5)) # Give API a break print 'Error: GeocoderTimedOut while geocoding {} during attempt #{}'.format(address, attempt) return location Note that I also imported some exceptions specific to the package, because based on my experience with Nominatim, it can sometimes throw random errors and these were the two that I got. Also, from my experiences with the two APIs, Google seemed to be able to interpolate coordinates even if a certain address was not found, whereas Nominatim had to have the address in their database in order to return something.
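Tying this back to the original `table`: a geopy `Location` object also exposes the formatted address as `.address`, so all three requested columns can be appended in one pass. A sketch using the `geocode` helper above, assuming the `concat` column holds the query string:

    table['geocoded'] = table['concat'].apply(geocode)
    table['lat'] = table['geocoded'].apply(lambda loc: loc.latitude if loc else None)
    table['lng'] = table['geocoded'].apply(lambda loc: loc.longitude if loc else None)
    table['formatted_address'] = table['geocoded'].apply(lambda loc: loc.address if loc else None)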
Why is my element not in view when the page scrolls by itself in selenium? Question: Suppose my page has 10 clickable links with the same class, one below another at some distance, such that only the first 3 links are shown in the current view; the others are seen when I scroll down. I have written code to click on all of them. It clicks on the first 3, and then selenium scrolls my page to show links 5 to 7. The page is scrolling too much, so link 4 is no longer shown, and since the code is trying to click link 4, which is not visible, my code gives an error: Element is not visible. Code: def AddConnection(self): mylist=self.driver.find_elements_by_xpath("//a[@class='primary-action-button label']") for x in mylist: x.click() Full Error: ================================== FAILURES =================================== _____________________________ test_add_connection _____________________________ driver = <selenium.webdriver.firefox.webdriver.WebDriver (session="3a05990c-13b8-4418-baee-f0d54c611ff7")> > add.AddConnection() test_add_connection.py:22: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ PageSearchResults.py:24: in AddConnection x.click() c:\python27\lib\site-packages\selenium\webdriver\remote\webelement.py:73: in click self._execute(Command.CLICK_ELEMENT) c:\python27\lib\site-packages\selenium\webdriver\remote\webelement.py:456: in _execute return self._parent.execute(command, params) c:\python27\lib\site-packages\selenium\webdriver\remote\webdriver.py:236: in execute self.error_handler.check_response(response) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <selenium.webdriver.remote.errorhandler.ErrorHandler object at 0x01E6ACB > response = {'status': 500, 'value': '{"name":"clickElement","sessionId":"3a05990c-13b8-4418-baee-f0d54c611ff7","status":13,"value...int (892.5, 12.199996948242188). Other element would receive the click: <div class=\"advanced-search-inner\"></div>"}'} def check_response(self, response): """ Checks that a JSON response from the WebDriver does not have an error. :Args: - response - The JSON response from the WebDriver server as a dictionary object. :Raises: If the response contains an error message. 
""" status = response.get('status', None) if status is None or status == ErrorCode.SUCCESS: return value = None message = response.get("message", "") screen = response.get("screen", "") stacktrace = None if isinstance(status, int): value_json = response.get('value', None) if value_json and isinstance(value_json, basestring): import json try: value = json.loads(value_json) status = value.get('error', None) if status is None: status = value["status"] message = value["value"] if not isinstance(message, basestring): value = message try: message = message['message'] except TypeError: message = None else: message = value.get('message', None) except ValueError: pass exception_class = ErrorInResponseException if status in ErrorCode.NO_SUCH_ELEMENT: exception_class = NoSuchElementException elif status in ErrorCode.NO_SUCH_FRAME: exception_class = NoSuchFrameException elif status in ErrorCode.NO_SUCH_WINDOW: exception_class = NoSuchWindowException elif status in ErrorCode.STALE_ELEMENT_REFERENCE: exception_class = StaleElementReferenceException elif status in ErrorCode.ELEMENT_NOT_VISIBLE: exception_class = ElementNotVisibleException elif status in ErrorCode.INVALID_ELEMENT_STATE: exception_class = InvalidElementStateException elif status in ErrorCode.INVALID_SELECTOR \ or status in ErrorCode.INVALID_XPATH_SELECTOR \ or status in ErrorCode.INVALID_XPATH_SELECTOR_RETURN_TYPER: exception_class = InvalidSelectorException elif status in ErrorCode.ELEMENT_IS_NOT_SELECTABLE: exception_class = ElementNotSelectableException elif status in ErrorCode.INVALID_COOKIE_DOMAIN: exception_class = WebDriverException elif status in ErrorCode.UNABLE_TO_SET_COOKIE: exception_class = WebDriverException elif status in ErrorCode.TIMEOUT: exception_class = TimeoutException elif status in ErrorCode.SCRIPT_TIMEOUT: exception_class = TimeoutException elif status in ErrorCode.UNKNOWN_ERROR: exception_class = WebDriverException elif status in ErrorCode.UNEXPECTED_ALERT_OPEN: exception_class = UnexpectedAlertPresentException elif status in ErrorCode.NO_ALERT_OPEN: exception_class = NoAlertPresentException elif status in ErrorCode.IME_NOT_AVAILABLE: exception_class = ImeNotAvailableException elif status in ErrorCode.IME_ENGINE_ACTIVATION_FAILED: exception_class = ImeActivationFailedException elif status in ErrorCode.MOVE_TARGET_OUT_OF_BOUNDS: exception_class = MoveTargetOutOfBoundsException else: exception_class = WebDriverException if value == '' or value is None: value = response['value'] if isinstance(value, basestring): if exception_class == ErrorInResponseException: raise exception_class(response, value) raise exception_class(value) if message == "" and 'message' in value: message = value['message'] screen = None if 'screen' in value: screen = value['screen'] stacktrace = None if 'stackTrace' in value and value['stackTrace']: stacktrace = [] try: for frame in value['stackTrace']: line = self._value_or_default(frame, 'lineNumber', '') file = self._value_or_default(frame, 'fileName', '<anonymou >') if line: file = "%s:%s" % (file, line) meth = self._value_or_default(frame, 'methodName', '<anonym us>') if 'className' in frame: meth = "%s.%s" % (frame['className'], meth) msg = " at %s (%s)" msg = msg % (meth, file) stacktrace.append(msg) except TypeError: pass if exception_class == ErrorInResponseException: raise exception_class(response, message) elif exception_class == UnexpectedAlertPresentException and 'alert' in alue: raise exception_class(message, screen, stacktrace, value['alert'].g t('text')) > raise 
exception_class(message, screen, stacktrace) E WebDriverException: Message: Element is not clickable at point (892.5, 12.199996948242188). Other element would receive the click: <div class="advanced-search-inner"></div> c:\python27\lib\site-packages\selenium\webdriver\remote\errorhandler.py:194: WebDriverException ========================== 1 failed in 40.13 seconds ========================== Answer: Figured out the solution: add this code after every click. self.driver.execute_script("window.scrollBy(0, 150);")
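An alternative to a fixed-pixel scroll (which can over- or under-shoot depending on the layout) is to scroll each link into view right before clicking it:

    def AddConnection(self):
        mylist = self.driver.find_elements_by_xpath("//a[@class='primary-action-button label']")
        for x in mylist:
            # bring the element into the viewport before clicking
            self.driver.execute_script("arguments[0].scrollIntoView(true);", x)
            x.click()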
Python speech recognition error converting mp3 file Question: My first try on audio to text. import speech_recognition as sr r = sr.Recognizer() with sr.AudioFile("/path/to/.mp3") as source: audio = r.record(source) When I execute the above code, the following error occurs, <ipython-input-10-72e982ecb706> in <module>() ----> 1 with sr.AudioFile("/home/yogaraj/Documents/Python workouts/Python audio to text/show_me_the_meaning.mp3") as source: 2 audio = sr.record(source) 3 /usr/lib/python2.7/site-packages/speech_recognition/__init__.pyc in __enter__(self) 197 aiff_file = io.BytesIO(aiff_data) 198 try: --> 199 self.audio_reader = aifc.open(aiff_file, "rb") 200 except aifc.Error: 201 assert False, "Audio file could not be read as WAV, AIFF, or FLAC; check if file is corrupted" /usr/lib64/python2.7/aifc.pyc in open(f, mode) 950 mode = 'rb' 951 if mode in ('r', 'rb'): --> 952 return Aifc_read(f) 953 elif mode in ('w', 'wb'): 954 return Aifc_write(f) /usr/lib64/python2.7/aifc.pyc in __init__(self, f) 345 f = __builtin__.open(f, 'rb') 346 # else, assume it is an open file object already --> 347 self.initfp(f) 348 349 # /usr/lib64/python2.7/aifc.pyc in initfp(self, file) 296 self._soundpos = 0 297 self._file = file --> 298 chunk = Chunk(file) 299 if chunk.getname() != 'FORM': 300 raise Error, 'file does not start with FORM id' /usr/lib64/python2.7/chunk.py in __init__(self, file, align, bigendian, inclheader) 61 self.chunkname = file.read(4) 62 if len(self.chunkname) < 4: ---> 63 raise EOFError 64 try: 65 self.chunksize = struct.unpack(strflag+'L', file.read(4))[0] I don't know what I'm going wrong. Can someone say me what I'm wrong in the above code? Answer: `Speech recognition` supports WAV file format. Here is a sample WAV to text program using `speech_recognition`: ### Sample code (Python 3) import speech_recognition as sr r = sr.Recognizer() with sr.AudioFile("woman1_wb.wav") as source: audio = r.record(source) try: s = r.recognize_google(audio) print("Text: "+s) except Exception as e: print("Exception: "+str(e)) ### Output: Text: to administer medicine to animals is frequency of very difficult matter and yet sometimes it's necessary to do so Used WAV File URL: <http://www- mobile.ecs.soton.ac.uk/hth97r/links/Database/woman1_wb.wav>
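If the source file has to stay MP3, one option is to convert it to WAV first and feed that to the recognizer; a sketch assuming the `pydub` package (which needs ffmpeg installed) is available:

    from pydub import AudioSegment

    # convert the MP3 to a WAV that speech_recognition can read
    sound = AudioSegment.from_mp3("/path/to/file.mp3")
    sound.export("/path/to/file.wav", format="wav")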
Python - insert HTML content to WordPress using xmlrpc api Question: I'm trying to insert HTML content into my WordPress blog via XMLRPC, but if I insert HTML I get an error: > raise TypeError, "cannot marshal %s objects" % type(value) TypeError: cannot > marshal objects If I use bleach (to clean tags), I get text with visible tags on my page My code # coding: utf-8 import requests from bs4 import BeautifulSoup import bleach from wordpress_xmlrpc import Client, WordPressPost from wordpress_xmlrpc.methods import posts xmlrpc_url = "http://site.ru/xmlrpc.php" wp_username = "user" wp_password = "123" blog_id = "" client = Client(xmlrpc_url, wp_username, wp_password, blog_id) url = "https://lifehacker.ru/2016/08/30/finansovye-sovety-dlya-molodyx-par/" r = requests.get(url) soup = BeautifulSoup(r.content) post_title = soup.find("h1") post_excerpt = soup.find("div", {"class", "single__excerpt"}) for tag in post_excerpt(): for attribute in ["class", "id", "style"]: del tag[attribute] post_content = soup.find("div", {"class","post-content"}) for tag in post_content(): for attribute in ["class", "id", "style"]: del tag[attribute] post = WordPressPost() post.title = post_title.text post.content = post_content post.id = client.call(posts.NewPost(post)) post.post_status = 'publish' client.call(posts.EditPost(post.id, post)) How can I insert parsed HTML content into my WordPress blog? Answer: Using .replace after bleach fixed the issue. My code: ... post_content_insert = bleach.clean(post_content) post_content_insert = post_content_insert.replace('&lt;','<') post_content_insert = post_content_insert.replace('&gt;','>') post = WordPressPost() post.title = post_title.text post.content = post_content_insert post.id = client.call(posts.NewPost(post))
How do I remove consecutive duplicates from a list? Question: How do I remove consecutive duplicates from a list like this in python? lst = [1,2,2,4,4,4,4,1,3,3,3,5,5,5,5,5] Having a unique list or set wouldn't solve the problem as there are some repeated values like 1,...,1 in the previous list. I want the result to be like this: newlst = [1,2,4,1,3,5] Would you also please consider the case when I have a list like this `[4, 4, 4, 4, 2, 2, 3, 3, 3, 3, 3, 3]` and I want the result to be `[4,2,3,3]` rather than `[4,2,3]` . Answer: [itertools.groupby()](https://docs.python.org/3/library/itertools.html#itertools.groupby) is your solution. newlst = [k for k, g in itertools.groupby(lst)] * * * If you wish to group and limit the group size by the item's value, meaning 8 4's will be [4,4], and 9 3's will be [3,3,3] here are 2 options that does it: import itertools def special_groupby(iterable): last_element = 0 count = 0 state = False def key_func(x): nonlocal last_element nonlocal count nonlocal state if last_element != x or x >= count: last_element = x count = 1 state = not state else: count += 1 return state return [next(g) for k, g in itertools.groupby(iterable, key=key_func)] special_groupby(lst) OR def grouper(iterable, n, fillvalue=None): "Collect data into fixed-length chunks or blocks" # grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx" args = [iter(iterable)] * n return itertools.zip_longest(*args, fillvalue=fillvalue) newlst = list(itertools.chain.from_iterable(next(zip(*grouper(g, k))) for k, g in itertools.groupby(lst))) Choose whichever you deem appropriate. Both methods are for numbers > 0.
Python Highlight specific text in json document on terminal Question: I am interested in highlighting a specific portion of a json document based on some arbitrary matching algorithm. For example { "text" : "hello world" } I searched for "hello" and the above json document has hello in it. How do I highlight that particular portion "hello" while displaying the document on the terminal using python? Also, the json has to be pretty printed. Expected { "text" : " `hello` world" } The text in backquotes should be displayed in red. Answer: I can't comment :( The answer here might be able to help you: [Print in terminal with colors using Python?](http://stackoverflow.com/questions/287871/print-in-terminal- with-colors-using-python) For example, you could use termcolor (if you are using a linux style terminal) and replace "hello" with colored("hello", "red") from termcolor import colored sample_text = "colored hello there !" print(sample_text) print(sample_text.replace("hello", colored("hello", "red")))
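Combining that with the pretty-printing requirement: dump the document with an indent first, then colorize the match in the formatted string (requires termcolor):

    import json
    from termcolor import colored

    doc = {"text": "hello world"}
    pretty = json.dumps(doc, indent=4)
    print(pretty.replace("hello", colored("hello", "red")))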
Server responds by "text/html" to a "text/css" request Question: **PROBLEM:** When I try to access the web page (localhost/mysite/admin), all goes well, except the CSS files, which my server can't deliver! [I got a **500 Internal Server Error**](http://i.stack.imgur.com/0VNcT.png) [By investigating the problem, I found that the server returns **text/html** instead of **text/css**](http://i.stack.imgur.com/s4wTn.png) **Apache 2.4.23 / mod_wsgi 4.5.5 / Python 3.4.2 / Django 1.8 on Linux Debian (64-bit)** Additional information: This is my **settings.py** file """ Django settings for mysite project. Generated by 'django-admin startproject' using Django 1.8. For more information on this file, see https://docs.djangoproject.com/en/1.8/topics/settings/ For the full list of settings and their values, see https://docs.djangoproject.com/en/1.8/ref/settings/ """ # Build paths inside the project like this: os.path.join(BASE_DIR, ...) import os BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) # Quick-start development settings - unsuitable for production # See https://docs.djangoproject.com/en/1.8/howto/deployment/checklist/ # SECURITY WARNING: keep the secret key used in production secret! SECRET_KEY = '##################################################' # SECURITY WARNING: don't run with debug turned on in production! DEBUG = True ALLOWED_HOSTS = [] # Application definition INSTALLED_APPS = ( 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'polls' ) MIDDLEWARE_CLASSES = ( 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.auth.middleware.SessionAuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', 'django.middleware.security.SecurityMiddleware', ) ROOT_URLCONF = 'mysite.urls' TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [os.path.join(BASE_DIR,'templates')], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] WSGI_APPLICATION = 'mysite.wsgi.application' # Database # https://docs.djangoproject.com/en/1.8/ref/settings/#databases DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'), } } # Internationalization # https://docs.djangoproject.com/en/1.8/topics/i18n/ LANGUAGE_CODE = 'en-us' TIME_ZONE = 'UTC' USE_I18N = True USE_L10N = True USE_TZ = True # Static files (CSS, JavaScript, Images) # https://docs.djangoproject.com/en/1.8/howto/static-files/ STATIC_ROOT = '/var/www/django-projects/mysite/static' STATIC_URL = '/static/' The lines which are appended to the httpd.conf (Apache HTTP Server) # Enable access for Django website mysite WSGIScriptAlias /mysite /var/www/django-projects/mysite/mysite/wsgi.py process-group=mysite WSGIScriptAlias /static /var/www/django-projects/mysite/static process-group=mysite WSGIPythonPath /var/www/django-projects/mysite:/home/ibrahim/.virtualenvs/django-1.8/lib/python3.4/site-packages WSGIDaemonProcess mysite 
python-path=/var/www/django-projects/mysite:/home/ibrahim/.virtualenvs/django-1.8/lib/python3.4/site-packages WSGIProcessGroup mysite <Directory /var/www/django-projects/mysite> Require all granted Options FollowSymLinks ServerSignature On </Directory> Project structure: - mysite - mysite + __pycache__ __init__.py settings.py urls.py wsgi.py - polls + __pycache__ + migrations __init__.py admin.py models.py tests.py views.py - static - admin - css base.css ........ + img + js + templates .schema db.sqlite3 manage.py Note that the file `/usr/local/apache2/conf/mime.types` includes the following line: `text/css css` Answer: @Ankush Raghuvanshi: Yes, it serves all kinds of files except CSS! Also, when I use the built-in development server (which I run with the command `python manage.py runserver`), these CSS files work fine on my website.
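For what it's worth, the usual mod_wsgi guidance is to serve static files with a plain `Alias` rather than a `WSGIScriptAlias` (the latter hands every /static request to wsgi.py, which is consistent with the text/html 500 responses above). A sketch of that configuration, using the paths from the question; I have not verified it against this exact setup:

    Alias /static /var/www/django-projects/mysite/static

    <Directory /var/www/django-projects/mysite/static>
        Require all granted
    </Directory>

After changing the config, run `python manage.py collectstatic` and restart Apache.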
Python performance on reading files with extremely long lines Question: I've got a file with around 6MB of data. All of the data are written in a single line. Why is the following command taking more than 15 minutes to finish? Is it normal? infile = open('file.txt') outfile = open('out.txt', 'w') for line in infile.readlines(): outfile.write(line) **Details:** I'm using Python 2.7. Output from **wc** : * newline count: 2 * word count: 3475246 * byte count: 6951140 **Evaluation 1:** _Reference_ code iterating over the file directly (suggested by Ahsanul Haque and Daewon Lee): for line in infile: outfile.write(line) _Time: 959.487 secs._ Answer: Try the following code snippet. The `readlines()` call loads all the data into memory at once, which seems to be what takes so long. infile = open('file.txt', 'r') outfile = open('out.txt', 'w') for line in infile: outfile.write(line) With my python 3.5 (64bit) on Windows 10 OS, the following code snippet finished within a few seconds. import time start = time.time() with open("huge_text.txt", "w") as fout: for i in range(1737623): fout.write("ABCD ") fout.write('\n') for i in range(1737623): fout.write("EFGH ") fout.write('\n') # end of with infile = open('huge_text.txt', 'r') outfile = open('out.txt', 'w') for line in infile: outfile.write(line) outfile.close() infile.close() end = time.time() print("Time elapsed: ", end - start) """ <Output> Time elapsed: 1.557690143585205 """
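Since the file has almost no newlines, each "line" is megabytes long; copying in fixed-size binary chunks sidesteps line handling entirely. A sketch:

    import shutil

    # copy in 64 KB chunks instead of splitting on (very long) lines
    with open('file.txt', 'rb') as infile, open('out.txt', 'wb') as outfile:
        shutil.copyfileobj(infile, outfile, 64 * 1024)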
python repeat list elements in an iterator Question: Is there any way to create an iterator that repeats the elements of a list a certain number of times? For example, a list is given: color = ['r', 'g', 'b'] Is there a way to create an iterator, in the form of `itertools.repeatlist(color, 7)`, that can produce the following list? color_list = ['r', 'g', 'b', 'r', 'g', 'b', 'r'] Answer: You can use [`itertools.cycle()`](https://docs.python.org/2/library/itertools.html#itertools.cycle) together with [`itertools.islice()`](https://docs.python.org/2/library/itertools.html#itertools.islice) to build your `repeatlist()` function: from itertools import cycle, islice def repeatlist(it, count): return islice(cycle(it), count) This returns a new iterator; call `list()` on it if you must have a list object. Demo: >>> from itertools import cycle, islice >>> def repeatlist(it, count): ... return islice(cycle(it), count) ... >>> color = ['r', 'g', 'b'] >>> list(repeatlist(color, 7)) ['r', 'g', 'b', 'r', 'g', 'b', 'r']
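If you want a plain list without itertools, a slicing alternative is to tile the list enough times and cut it to length:

    def repeatlist(seq, count):
        reps = -(-count // len(seq))  # ceiling division
        return (seq * reps)[:count]

    color = ['r', 'g', 'b']
    print(repeatlist(color, 7))  # ['r', 'g', 'b', 'r', 'g', 'b', 'r']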
Exit a multiprocessing script Question: I am trying to exit a multiprocessing script when an error is thrown by the target function, but instead of quitting, the parent process just hangs. This is the test script I use to replicate the problem: #!/usr/bin/python3.5 import time, multiprocessing as mp def myWait(wait, resultQueue): startedAt = time.strftime("%H:%M:%S", time.localtime()) time.sleep(wait) endedAt = time.strftime("%H:%M:%S", time.localtime()) name = mp.current_process().name resultQueue.put((name, wait, startedAt, endedAt)) # queue initialisation resultQueue = mp.Queue() # process creation arg: (process number, sleep time, queue) proc = [ mp.Process(target=myWait, name = ' _One_', args=(2, resultQueue,)), mp.Process(target=myWait, name = ' _Two_', args=(2, resultQueue,)) ] # starting processes for p in proc: p.start() for p in proc: p.join() # print results results = {} for p in proc: name, wait, startedAt, endedAt = resultQueue.get() print('Process %s started at %s wait %s ended at %s' % (name, startedAt, wait, endedAt)) This works perfectly: I can see the parent script spawning two child processes in `htop`. But when I want to force the parent script to exit if an error is thrown in the `myWait` target function, the parent process just hangs and doesn't even spawn any child process. I have to `ctrl-c` to kill it. def myWait(wait, resultQueue): try: # do something wrong except: raise SystemExit I have tried every way to exit the function (e.g. `exit()`, `sys.exit()`, `os._exit()`...) to no avail. Answer: Firstly, your code has a major issue: you're trying to join the processes before flushing the content of the queues, if any, which can result in a deadlock. See the section titled 'Joining processes that use queues' here: <https://docs.python.org/3/library/multiprocessing.html#multiprocessing- programming> Secondly, the call to `resultQueue.get()` will block until it receives some data, which never happens if an exception is raised from the `myWait` function and no data has been pushed into the queue before then. So make it non- blocking and make it check for any data in a loop until it finally receives something or something's wrong. Here's a quick'n'dirty fix to give you the idea: #!/usr/bin/python3.5 import multiprocessing as mp import queue import time def myWait(wait, resultQueue): raise Exception("error!") # queue initialisation resultQueue = mp.Queue() # process creation arg: (process number, sleep time, queue) proc = [ mp.Process(target=myWait, name = ' _One_', args=(2, resultQueue,)), mp.Process(target=myWait, name = ' _Two_', args=(2, resultQueue,)) ] # starting processes for p in proc: p.start() # print results results = {} for p in proc: while True: if not p.is_alive(): break try: name, wait, startedAt, endedAt = resultQueue.get(block=False) print('Process %s started at %s wait %s ended at %s' % (name, startedAt, wait, endedAt)) break except queue.Empty: pass for p in proc: p.join() The function `myWait` will throw an exception but both processes will still join and the program will exit nicely.
ImportError: No module named impyla Question: I have installed impyla and its dependencies following [this](https://github.com/cloudera/impyla) guide. The installation seems to be successful, as I can now see the folder **"impyla-0.13.8-py2.7.egg"** in the Anaconda folder (64-bit Anaconda 4.1.1 version). But when I import impyla in python, I get the following error:

    >>> import impyla
    Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
    ImportError: No module named impyla

**I have installed 64-bit Python 2.7.12** **Can anybody please explain why I am facing this error?** I am new to Python and have been spending a lot of time on different blogs, but I don't see much information present there yet. Thanks in advance for your time. Answer: Usage is a little bit different than you mentioned: the package is imported as `impala`, not `impyla` (from <https://github.com/cloudera/impyla>). Impyla implements the Python DB API v2.0 (PEP 249) database interface (refer to it for API details):

    from impala.dbapi import connect
    conn = connect(host='my.host.com', port=21050)
    cursor = conn.cursor()
    cursor.execute('SELECT * FROM mytable LIMIT 100')
    print cursor.description # prints the result set's schema
    results = cursor.fetchall()

The Cursor object also exposes the iterator interface, which is buffered (controlled by cursor.arraysize):

    cursor.execute('SELECT * FROM mytable LIMIT 100')
    for row in cursor:
        process(row)

You can also get back a pandas DataFrame object

    from impala.util import as_pandas
    df = as_pandas(cursor)
    # carry df through scikit-learn, for example
I keep getting the below errors in my log file whenever I try to push my Django app to Heroku. Kindly assist on how I should go about solving it. Question: Here is my log file

    $ python manage.py collectstatic --noinput
    Traceback (most recent call last):
    File "manage.py", line 9, in <module>
    execute_from_command_line(sys.argv)
    File "/app/.heroku/python/lib/python2.7/site-packages/django/core/management/__init__.py", line 353, in execute_from_command_line
    utility.execute()
    File "/app/.heroku/python/lib/python2.7/site-packages/django/core/management/__init__.py", line 327, in execute
    django.setup()
    File "/app/.heroku/python/lib/python2.7/site-packages/django/__init__.py", line 18, in setup
    apps.populate(settings.INSTALLED_APPS)
    File "/app/.heroku/python/lib/python2.7/site-packages/django/apps/registry.py", line 115, in populate
    app_config.ready()
    File "/app/.heroku/python/lib/python2.7/site-packages/django/contrib/admin/apps.py", line 22, in ready
    self.module.autodiscover()
    File "/app/.heroku/python/lib/python2.7/site-packages/django/contrib/admin/__init__.py", line 26, in autodiscover
    autodiscover_modules('admin', register_to=site)
    File "/app/.heroku/python/lib/python2.7/site-packages/django/utils/module_loading.py", line 50, in autodiscover_modules
    import_module('%s.%s' % (app_config.name, module_to_search))
    File "/app/.heroku/python/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
    File "/app/.heroku/python/lib/python2.7/site-packages/registration/admin.py", line 2, in <module>
    from django.contrib.sites.models import RequestSite
    ImportError: cannot import name RequestSite
    ! Error while running '$ python manage.py collectstatic --noinput'.
    See traceback above for details.
    You may need to update application code to resolve this error.
    Or, you can disable collectstatic for this application:
    $ heroku config:set DISABLE_COLLECTSTATIC=1
    https://devcenter.heroku.com/articles/django-assets
    ! Push rejected, failed to compile Python app.
    ! Push failed

Answer: RequestSite was moved from django.contrib.sites.models to django.contrib.sites.requests in Django 1.7, and the alias in models was removed in Django 1.9. You should ensure you are running a version of django- registration that is compatible with the version of Django you are using.
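If your own code (rather than a third-party package) needs to import `RequestSite` across Django versions, the usual pattern is a guarded import; a minimal sketch:

    try:
        # Django >= 1.7 (the alias in models was removed in 1.9)
        from django.contrib.sites.requests import RequestSite
    except ImportError:
        # Django < 1.7
        from django.contrib.sites.models import RequestSite

Here, though, the failing import is inside the installed `registration` package itself, so upgrading django-registration is the actual fix.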
Using strings and byte-like objects compatibly in code to run in both Python 2 & 3 Question: I'm trying to modify the code shown far below, which works in Python 2.7.x, so it will also work unchanged in Python 3.x. However I'm encountering the following problem I can't solve in the first function, `bin_to_float()`, as shown by the output below:

    float_to_bin(0.000000): '0'
    Traceback (most recent call last):
    File "binary-to-a-float-number.py", line 36, in <module>
    float = bin_to_float(binary)
    File "binary-to-a-float-number.py", line 9, in bin_to_float
    return struct.unpack('>d', bf)[0]
    TypeError: a bytes-like object is required, not 'str'

I tried to fix that by adding a `bf = bytes(bf)` right before the call to `struct.unpack()`, but doing so produced its own `TypeError`:

    TypeError: string argument without an encoding

So my questions are: is it possible to fix this issue and achieve my goal? And if so, how? Preferably in a way that would work in both versions of Python. Here's the code that works in Python 2:

    import struct

    def bin_to_float(b):
        """ Convert binary string to a float. """
        bf = int_to_bytes(int(b, 2), 8) # 8 bytes needed for IEEE 754 binary64
        return struct.unpack('>d', bf)[0]

    def int_to_bytes(n, minlen=0): # helper function
        """ Int/long to byte string. """
        nbits = n.bit_length() + (1 if n < 0 else 0) # plus one for any sign bit
        nbytes = (nbits+7) // 8 # number of whole bytes
        bytes = []
        for _ in range(nbytes):
            bytes.append(chr(n & 0xff))
            n >>= 8
        if minlen > 0 and len(bytes) < minlen: # zero pad?
            bytes.extend((minlen-len(bytes)) * '0')
        return ''.join(reversed(bytes)) # high bytes at beginning

    # tests

    def float_to_bin(f):
        """ Convert a float into a binary string. """
        ba = struct.pack('>d', f)
        ba = bytearray(ba)
        s = ''.join('{:08b}'.format(b) for b in ba)
        s = s.lstrip('0') # strip leading zeros
        return s if s else '0' # but leave at least one

    for f in 0.0, 1.0, -14.0, 12.546, 3.141593:
        binary = float_to_bin(f)
        print('float_to_bin(%f): %r' % (f, binary))
        float = bin_to_float(binary)
        print('bin_to_float(%r): %f' % (binary, float))
        print('')

Answer: To make portable code that works with bytes in both Python 2 and 3 using libraries that literally use the different data types between the two, you need to explicitly declare them using the appropriate literal mark for every string (or add [`from __future__ import unicode_literals`](http://python- future.org/unicode_literals.html) to the top of every module doing this). This step ensures your data types are correct internally in your code. Secondly, make the decision to support Python 3 going forward, with fallbacks specific to Python 2. This means overriding `str` with `unicode`, and figuring out which methods/functions do not return the same types in both Python versions; those should be modified to return the correct (Python 3) type. Do note that `bytes` is a built-in name, too, so don't shadow it. Putting this together, your code will look something like this:

    import struct
    import sys

    if sys.version_info < (3, 0):
        str = unicode
        chr = unichr

    def bin_to_float(b):
        """ Convert binary string to a float. """
        bf = int_to_bytes(int(b, 2), 8) # 8 bytes needed for IEEE 754 binary64
        return struct.unpack(b'>d', bf)[0]

    def int_to_bytes(n, minlen=0): # helper function
        """ Int/long to byte string. """
""" nbits = n.bit_length() + (1 if n < 0 else 0) # plus one for any sign bit nbytes = (nbits+7) // 8 # number of whole bytes ba = bytearray(b'') for _ in range(nbytes): ba.append(n & 0xff) n >>= 8 if minlen > 0 and len(ba) < minlen: # zero pad? ba.extend((minlen-len(ba)) * b'0') return u''.join(str(chr(b)) for b in reversed(ba)).encode('latin1') # high bytes at beginning # tests def float_to_bin(f): """ Convert a float into a binary string. """ ba = struct.pack(b'>d', f) ba = bytearray(ba) s = u''.join(u'{:08b}'.format(b) for b in ba) s = s.lstrip(u'0') # strip leading zeros return (s if s else u'0').encode('latin1') # but leave at least one for f in 0.0, 1.0, -14.0, 12.546, 3.141593: binary = float_to_bin(f) print(u'float_to_bin(%f): %r' % (f, binary)) float = bin_to_float(binary) print(u'bin_to_float(%r): %f' % (binary, float)) print(u'') I used the `latin1` codec simply because that's what the byte mappings are originally defined, and it seems to work $ python2 foo.py float_to_bin(0.000000): '0' bin_to_float('0'): 0.000000 float_to_bin(1.000000): '11111111110000000000000000000000000000000000000000000000000000' bin_to_float('11111111110000000000000000000000000000000000000000000000000000'): 1.000000 float_to_bin(-14.000000): '1100000000101100000000000000000000000000000000000000000000000000' bin_to_float('1100000000101100000000000000000000000000000000000000000000000000'): -14.000000 float_to_bin(12.546000): '100000000101001000101111000110101001111110111110011101101100100' bin_to_float('100000000101001000101111000110101001111110111110011101101100100'): 12.546000 float_to_bin(3.141593): '100000000001001001000011111101110000010110000101011110101111111' bin_to_float('100000000001001001000011111101110000010110000101011110101111111'): 3.141593 Again, but this time under Python 3.5) $ python3 foo.py float_to_bin(0.000000): b'0' bin_to_float(b'0'): 0.000000 float_to_bin(1.000000): b'11111111110000000000000000000000000000000000000000000000000000' bin_to_float(b'11111111110000000000000000000000000000000000000000000000000000'): 1.000000 float_to_bin(-14.000000): b'1100000000101100000000000000000000000000000000000000000000000000' bin_to_float(b'1100000000101100000000000000000000000000000000000000000000000000'): -14.000000 float_to_bin(12.546000): b'100000000101001000101111000110101001111110111110011101101100100' bin_to_float(b'100000000101001000101111000110101001111110111110011101101100100'): 12.546000 float_to_bin(3.141593): b'100000000001001001000011111101110000010110000101011110101111111' bin_to_float(b'100000000001001001000011111101110000010110000101011110101111111'): 3.141593 It's a lot more work, but in Python3 you can more clearly see that the types are done as proper bytes. I also changed your `bytes = []` to a bytearray to more clearly express what you were trying to do.
Confusion with Fancy indexing (for non-fancy people) Question: Let's assume a multi-dimensional array

    import numpy as np
    foo = np.random.rand(102,43,35,51)

I know that those last dimensions represent a 2D space (35,51) of which I would like to index **a range of rows** of a **column**. Let's say I want to have rows 8 to 30 of column 0. From my understanding of indexing I should call

    foo[0][0][8::30][0]

Knowing my data though (unlike the random data used here), this is not what I expected. I could try this, which does work but looks ridiculous:

    foo[0][0][[8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30],0]

Now from what I can find in [this documentation ](https://docs.scipy.org/doc/numpy- dev/user/quickstart.html#head-864862d3f2bb4c32f04260fac61eb4ef34788c4c) I can also use something like:

    foo[0][0][[8,30],0]

which only gives me the values of rows 8 and 30, while this:

    foo[0][0][[8::30],0]

gives an error

    File "<ipython-input-568-cc49fe1424d1>", line 1
    foo[0][0][[8::30],0]
    ^
    SyntaxError: invalid syntax

I don't understand why the :: argument cannot be passed here. What is then a way to indicate a range in your indexing syntax? So I guess my overall question is what would be the proper pythonic equivalent of this syntax:

    foo[0][0][[8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30],0]

Answer: Instead of

    foo[0][0][8::30][0]

try

    foo[0, 0, 8:30, 0]

The `foo[0][0]` part is the same as `foo[0, 0, :, :]`, selecting a 2d array (35 x 51). But `foo[0][0][8::30]` selects a subset of those rows. Consider what happens when I use `0::30` on a 2d array:

    In [490]: np.zeros((35,51))[0::30].shape
    Out[490]: (2, 51)

    In [491]: np.arange(35)[0::30]
    Out[491]: array([ 0, 30])

The `30` is the `step`, not the `stop` value of the slice. The last `[0]` then picks the first of those rows. The end result is the same as `foo[0,0,0,:]`. It is better, in most cases, to index multiple dimensions with the comma syntax. And if you want the first 30 rows use `0:30`, not `0::30` (that's basic slicing notation, applicable to lists as well as arrays). As for:

    foo[0][0][[8::30],0]

simplify it to `x[[8::30], 0]`. The Python interpreter accepts `[1:2:3, 0]`, translating it to the tuple `(slice(1, 2, 3), 0)` and passing it to a `__getitem__` method. But the colon syntax is accepted in a very specific context. The interpreter is treating that inner set of brackets as a list, and colons are not accepted there.

    foo[0,0,[1,2,3],0]

is ok, because the inner brackets are a list, and the numpy `getitem` can handle those. `numpy` has a tool for converting a slice notation into a list of numbers. Play with that if it is still confusing:

    In [495]: np.r_[8:30]
    Out[495]:
    array([ 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24,
    25, 26, 27, 28, 29])

    In [496]: np.r_[8::30]
    Out[496]: array([0])

    In [497]: np.r_[8:30:2]
    Out[497]: array([ 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28])
how to print type of keys for a list in python pdb Question: I'm learning pdb and I can print, or pp a list of objects but how can I print the type of key for each object? I can see it with pp, it looks like a byte array but I'd like to know the type. I suppose I could just print debug this but I'm curious if there's a smarter way to do it when using the debugger. Answer: Because you wrote > type of key I'm assuming that you mean a dictionary. But you _do_ also talk about a "list of objects", which could also mean * a list of any kind of object * a list of dictionaries But I'll show you two options: mydict = {b'some bytes': 42, 'a string!': 'fnord', (1,2,3): 'A tuple! (is that two-pull or tuh-ple?)', 19: 'Just an int', } list_of_things = [b'some bytes', 'a string!', (1,2,3), 19, ['a', 'b', 'c']] import pdb; pdb.set_trace() Now when `pdb` fires up: (Pdb) for _ in mydict: print('{} {}'.format(_, type(_))) 19 <type 'int'> some bytes <type 'str'> a string! <type 'str'> (1, 2, 3) <type 'tuple'> That will give you the keys and types of the keys. Here's the types and values from the list: (Pdb) for _ in list_of_things: print('{} {}'.format(_, type(_))) some bytes <type 'str'> a string! <type 'str'> (1, 2, 3) <type 'tuple'> 19 <type 'int'> ['a', 'b', 'c'] <type 'list'>
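If typing loops at the `(Pdb)` prompt feels heavy, a list comprehension does the same in one line, printing each key alongside its type (this works at the Python 2 prompt shown here; under Python 3, comprehensions entered in pdb can occasionally fail to see local variables because they get their own scope):

    (Pdb) pp [(k, type(k)) for k in mydict]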
Theano crashing using cuDNN in linux Question: I am a non-root user on a cluster computer running Scientific Linux release 6.6 (Carbon). I am experiencing some theano crashes when running code on a GPU with CUDA 7.5 and cuDNN 5. I am using Python 2.7, Theano 0.9, Keras 1.0.7 and Lasange 0.1. The following crash occurs ONLY when I run the program on a GPU node with cuDNN enabled. The code completes without issue on a CPU and a GPU with cuDNN disabled. Traceback (most recent call last): File "runner.py", line 306, in <module> main() File "runner.py", line 241, in main queries_exp = __import__(args.exp_model).queries_exp File "/mnt/nfs2/inf/tjb32/workspace/CNN_EL/nlp-entity-convnet/exp_multi_conv_cosim.py", line 923, in <module> queries_exp = EntityVectorLinkExp() File "/mnt/nfs2/inf/tjb32/workspace/CNN_EL/nlp-entity-convnet/exp_multi_conv_cosim.py", line 51, in __init__ self._setup() File "/mnt/nfs2/inf/tjb32/workspace/CNN_EL/nlp-entity-convnet/exp_multi_conv_cosim.py", line 543, in _setup on_unused_input='ignore', File "/home/t/tj/tjb32/.local/lib/python2.7/site-packages/theano/compile/function.py", line 326, in function output_keys=output_keys) File "/home/t/tj/tjb32/.local/lib/python2.7/site-packages/theano/compile/pfunc.py", line 484, in pfunc output_keys=output_keys) File "/home/t/tj/tjb32/.local/lib/python2.7/site-packages/theano/compile/function_module.py", line 1788, in orig_function output_keys=output_keys).create( File "/home/t/tj/tjb32/.local/lib/python2.7/site-packages/theano/compile/function_module.py", line 1467, in __init__ optimizer_profile = optimizer(fgraph) File "/home/t/tj/tjb32/.local/lib/python2.7/site-packages/theano/gof/opt.py", line 102, in __call__ return self.optimize(fgraph) File "/home/t/tj/tjb32/.local/lib/python2.7/site-packages/theano/gof/opt.py", line 90, in optimize ret = self.apply(fgraph, *args, **kwargs) File "/home/t/tj/tjb32/.local/lib/python2.7/site-packages/theano/gof/opt.py", line 235, in apply sub_prof = optimizer.optimize(fgraph) File "/home/t/tj/tjb32/.local/lib/python2.7/site-packages/theano/gof/opt.py", line 90, in optimize ret = self.apply(fgraph, *args, **kwargs) File "/home/t/tj/tjb32/.local/lib/python2.7/site-packages/theano/gof/opt.py", line 235, in apply sub_prof = optimizer.optimize(fgraph) File "/home/t/tj/tjb32/.local/lib/python2.7/site-packages/theano/gof/opt.py", line 90, in optimize ret = self.apply(fgraph, *args, **kwargs) File "/home/t/tj/tjb32/.local/lib/python2.7/site-packages/theano/gof/opt.py", line 2262, in apply lopt_change = self.process_node(fgraph, node, lopt) File "/home/t/tj/tjb32/.local/lib/python2.7/site-packages/theano/gof/opt.py", line 1825, in process_node lopt, node) File "/home/t/tj/tjb32/.local/lib/python2.7/site-packages/theano/gof/opt.py", line 1719, in warn_inplace return NavigatorOptimizer.warn(exc, nav, repl_pairs, local_opt, node) File "/home/t/tj/tjb32/.local/lib/python2.7/site-packages/theano/gof/opt.py", line 1705, in warn raise exc AssertionError My .theanorc looks like this: [global] floatX = float32 device = gpu [lib] cnmem = 1 [nvcc] fastmath = True And my profile has the following: export LD_LIBRARY_PATH=/home/t/tj/tjb32/cuda/lib64:$LD_LIBRARY_PATH export CPATH=/home/t/tj/tjb32/cuda/include:$CPATH export LIBRARY_PATH=/home/t/tj/tjb32/cuda/lib64:$LD_LIBRARY_PATH export PATH=/home/t/tj/tjb32/cuda/bin:$PATH When I query theano, the following is returned, which suggests to me that theano is interacting with CUDA and cuDNN. 
Using gpu device 0: Tesla K20m (CNMeM is enabled with initial size: 95.0% of memory, cuDNN 5005)

I'm fairly sure that I have installed CUDA and cuDNN correctly. If anyone could suggest any additional configuration steps that I may have missed that could be causing cuDNN to crash the program, that would be greatly appreciated. Answer: Not sure if this is the issue, but in your profile you have (note the `LD_`):

    export LIBRARY_PATH=/home/t/tj/tjb32/cuda/lib64:$LD_LIBRARY_PATH

Shouldn't it be?

    export LIBRARY_PATH=/home/t/tj/tjb32/cuda/lib64:$LIBRARY_PATH
Porting a python 2 code to Python 3: ICMP Scan with errors Question: import random import socket import time import ipaddress import struct from threading import Thread def checksum(source_string): sum = 0 count_to = (len(source_string) / 2) * 2 count = 0 while count < count_to: this_val = ord(source_string[count + 1]) * 256 + ord(source_string[count]) sum = sum + this_val sum = sum & 0xffffffff count = count + 2 if count_to < len(source_string): sum = sum + ord(source_string[len(source_string) - 1]) sum = sum & 0xffffffff sum = (sum >> 16) + (sum & 0xffff) sum = sum + (sum >> 16) answer = ~sum answer = answer & 0xffff answer = answer >> 8 | (answer << 8 & 0xff00) return answer def create_packet(id): header = struct.pack('bbHHh', 8, 0, 0, id, 1) data = 192 * 'Q' my_checksum = checksum(header + data) header = struct.pack('bbHHh', 8, 0, socket.htons(my_checksum), id, 1) return header + data def ping(addr, timeout=1): try: my_socket = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP) except Exception as e: print (e) packet_id = int((id(timeout) * random.random()) % 65535) packet = create_packet(packet_id) my_socket.connect((addr, 80)) my_socket.sendall(packet) my_socket.close() def rotate(addr, file_name, wait, responses): print ("Sending Packets", time.strftime("%X %x %Z")) for ip in addr: ping(str(ip)) time.sleep(wait) print ("All packets sent", time.strftime("%X %x %Z")) print ("Waiting for all responses") time.sleep(2) # Stoping listen global SIGNAL SIGNAL = False ping('127.0.0.1') # Final ping to trigger the false signal in listen print (len(responses), "hosts found!") print ("Writing File") hosts = [] for response in sorted(responses): ip = struct.unpack('BBBB', response) ip = str(ip[0]) + "." + str(ip[1]) + "." + str(ip[2]) + "." + str(ip[3]) hosts.append(ip) file = open(file_name, 'w') file.write(str(hosts)) print ("Done", time.strftime("%X %x %Z")) def listen(responses): global SIGNAL s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP) s.bind(('', 1)) print ("Listening") while SIGNAL: packet = s.recv(1024)[:20][-8:-4] responses.append(packet) print ("Stop Listening") s.close() SIGNAL = True responses = [] ips = '200.131.0.0/20' # Internet network wait = 0.002 # Adjust this based in your bandwidth (Faster link is Lower wait) file_name = 'log1.txt' ip_network = ipaddress.ip_network(unicode(ips), strict=False) t_server = Thread(target=listen, args=[responses]) t_server.start() t_ping = Thread(target=rotate, args=[ip_network, file_name, wait, responses]) t_ping.start() I tried: `ip_network = ipaddress.ip_network( ips, strict=False)` instead of `ip_network = ipaddress.ip_network(unicode(ips), strict=False)` because of the error: ""NameError: name 'unicode' is not defined" after: I got: `my_checksum = checksum(header + data)` -> TypeError: can't concat bytes to str so I tried: `data = bytes(192 * 'Q').encode('utf8')` instead of `data = 192 * 'Q'` Now, the error is : ""data = bytes (192 * 'Q').encode('utf8') TypeError: string argument without an encoding" Could anyone help me to port the code to Python 3 ? 
Answer:

    import random
    import socket
    import time
    import ipaddress
    import struct
    from threading import Thread

    def checksum(source_string):
        sum = 0
        count_to = (len(source_string) // 2) * 2  # // keeps count_to an int on Python 3
        count = 0
        while count < count_to:
            # indexing bytes yields ints on Python 3, so ord() is no longer needed
            this_val = source_string[count + 1] * 256 + source_string[count]
            sum = sum + this_val
            sum = sum & 0xffffffff
            count = count + 2
        if count_to < len(source_string):
            sum = sum + source_string[len(source_string) - 1]
            sum = sum & 0xffffffff
        sum = (sum >> 16) + (sum & 0xffff)
        sum = sum + (sum >> 16)
        answer = ~sum
        answer = answer & 0xffff
        answer = answer >> 8 | (answer << 8 & 0xff00)
        return answer

    def create_packet(id):
        header = struct.pack('bbHHh', 8, 0, 0, id, 1)
        data = 192 * b'Q'  # bytes literal instead of str
        my_checksum = checksum(header + data)
        header = struct.pack('bbHHh', 8, 0, socket.htons(my_checksum), id, 1)
        return header + data

    def ping(addr, timeout=1):
        try:
            my_socket = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
        except Exception as e:
            print (e)
        packet_id = int((id(timeout) * random.random()) % 65535)
        packet = create_packet(packet_id)
        my_socket.connect((addr, 80))
        my_socket.sendall(packet)
        my_socket.close()

    def rotate(addr, file_name, wait, responses):
        print ("Sending Packets", time.strftime("%X %x %Z"))
        for ip in addr:
            ping(str(ip))
            time.sleep(wait)
        print ("All packets sent", time.strftime("%X %x %Z"))

        print ("Waiting for all responses")
        time.sleep(2)

        # Stoping listen
        global SIGNAL
        SIGNAL = False
        ping('127.0.0.1') # Final ping to trigger the false signal in listen

        print (len(responses), "hosts found!")
        print ("Writing File")
        hosts = set()
        for response in sorted(responses):
            ip = struct.unpack('BBBB', response)
            ip = str(ip[0]) + "." + str(ip[1]) + "." + str(ip[2]) + "." + str(ip[3])
            hosts.add(ip)

        with open(file_name, 'w') as file:
            file.write('\n'.join(sorted(hosts, key=lambda item: socket.inet_aton(item))))

        print ("Done", time.strftime("%X %x %Z"))

    def listen(responses):
        global SIGNAL
        s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
        s.bind(('', 1))
        print ("Listening")
        while SIGNAL:
            packet = s.recv(1024)[:20][-8:-4]
            responses.append(packet)
        print ("Stop Listening")
        s.close()

    SIGNAL = True

    responses = []
    ips = '192.168.1.0/28' # Internet network
    wait = 0.002 # Adjust this based in your bandwidth (Faster link is Lower wait)
    file_name = 'log1.txt'

    ip_network = ipaddress.ip_network(ips, strict=False)  # no unicode() wrapper needed on Python 3

    t_server = Thread(target=listen, args=[responses])
    t_server.start()

    t_ping = Thread(target=rotate, args=[ip_network, file_name, wait, responses])
    t_ping.start()
How to get "clean" match results in Python Question: I am a total noob, coding for the first time and trying to learn by doing. I'm using this: import re f = open('aaa.txt', 'r') string=f.read() c = re.findall(r"Guest last name: (.*)", string) print "Dear Mr.", c that returns Dear Mr. ['XXXX'] I was wondering, is there any way to get the result like Dear Mr. XXXX instead? Thanks in advance. Answer: You need to take the first item in the list print "Dear Mr.", c[0]
Is Spark's KMeans unable to handle big data? Question: KMeans has several parameters for its [training](http://spark.apache.org/docs/latest/api/python/pyspark.mllib.html?highlight=kmeans#pyspark.mllib.clustering.KMeans.train), with the initialization mode defaulting to kmeans||. The problem is that it marches quickly (less than 10min) through the first 13 stages, but then **hangs completely**, without yielding an error! **Minimal Example** which reproduces the issue (it will succeed if I use 1000 points or random initialization):

    from pyspark.context import SparkContext

    from pyspark.mllib.clustering import KMeans
    from pyspark.mllib.random import RandomRDDs


    if __name__ == "__main__":
        sc = SparkContext(appName='kmeansMinimalExample')

        # same with 10000 points
        data = RandomRDDs.uniformVectorRDD(sc, 10000000, 64)
        C = KMeans.train(data, 8192,  maxIterations=10)

        sc.stop()

The job does nothing (it doesn't succeed, fail or progress...), as shown below. There are no active/failed tasks in the Executors tab. Stdout and Stderr Logs don't have anything particularly interesting: [![enter image description here](http://i.stack.imgur.com/aggpL.png)](http://i.stack.imgur.com/aggpL.png) If I use `k=81`, instead of 8192, it will succeed: [![enter image description here](http://i.stack.imgur.com/zBKqG.png)](http://i.stack.imgur.com/zBKqG.png) Notice that the two calls of `takeSample()` [should not be an issue](http://stackoverflow.com/questions/38986395/sparkkmeans-calls- takesample-twice), since they were called twice in the random initialization case. So, what is happening? Is Spark's Kmeans **unable to scale**? Does anybody know? Can you reproduce?

* * *

If it was a memory issue, [I would get warnings and errors, as I had been before](https://gsamaras.wordpress.com/code/memoryoverhead-issue-in-spark/). Note: placeybordeaux's comments are based on the execution of the job in _client mode_ , where the driver's configurations are invalidated, causing the exit code 143 and such (see edit history), not in cluster mode, where there is _no error reported at all_ , the application _just hangs_.

* * *

From zero323: [Why is Spark Mllib KMeans algorithm extremely slow?](http://stackoverflow.com/questions/35512139/why-is-spark-mllib-kmeans- algorithm-extremely-slow) is related, but I think he witnesses some progress, while mine hangs, I did leave a comment... [![enter image description here](http://i.stack.imgur.com/osDrR.png)](http://i.stack.imgur.com/osDrR.png) Answer: I think the 'hanging' is because your executors keep dying. As I mentioned in a side conversation, this code runs fine for me, locally and on a cluster, in Pyspark and Scala. However, it takes a lot longer than it should. Almost all of that time is spent in k-means|| initialization. I opened <https://issues.apache.org/jira/browse/SPARK-17389> to track two main improvements, one of which you can use now. Edit: really, see also <https://issues.apache.org/jira/browse/SPARK-11560> First, there are some code optimizations that would speed up the init by about 13%. However most of the issue is that it defaults to 5 steps of k-means|| init, when it seems that 2 is almost always just as good. You can set initialization steps to 2 to see a speedup, especially in the stage that's hanging now. In my (smaller) test on my laptop, init time went from 5:54 to 1:41 with both changes, mostly due to setting init steps.
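For reference, the initialization-steps knob mentioned above is a regular keyword argument of `KMeans.train` in the pyspark.mllib API, so applying the suggestion to the minimal example is a one-line change:

    from pyspark.mllib.clustering import KMeans

    # same call as in the minimal example, with fewer k-means|| init steps
    C = KMeans.train(data, 8192, maxIterations=10, initializationSteps=2)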
Alembic not handling column_types.PasswordType : Flask+SQLAlchemy+Alembic Question: **Background** I'm trying to use a PostgreSQL back-end instead of Sqlite in this [Flask + RESTplus server example](https://github.com/frol/flask-restplus-server- example). I faced an issue with the PasswordType db column type. In order to make it work, I had to change the following code in [app/modules/users/models.py](https://github.com/frol/flask-restplus-server- example/blob/master/app/modules/users/models.py#L46) password = db.Column( column_types.PasswordType( max_length=128, schemes=('bcrypt', ) ), nullable=False ) to password = db.Column(db.String(length=128), nullable=False) which is really bad as passwords will be stored in clear text... and I need your help! After changing the [line 13 to 15 in tasks/app/db_templates/flask/script.py.mako ](https://github.com/frol/flask- restplus-server- example/blob/master/tasks/app/db_templates/flask/script.py.mako#L13-15) to from alembic import op import sqlalchemy as sa import sqlalchemy_utils ${imports if imports else ""} I get the following error message, apparently related to passlib: 2016-08-31 23:18:52,751 [INFO] [alembic.runtime.migration] Context impl PostgresqlImpl. 2016-08-31 23:18:52,752 [INFO] [alembic.runtime.migration] Will assume transactional DDL. 2016-08-31 23:18:52,759 [INFO] [alembic.runtime.migration] Running upgrade -> 99b329343a41, empty message Traceback (most recent call last): File "/usr/local/lib/python3.5/site-packages/sqlalchemy/sql/type_api.py", line 359, in dialect_impl return dialect._type_memos[self]['impl'] File "/usr/local/lib/python3.5/weakref.py", line 365, in __getitem__ return self.data[ref(key)] KeyError: <weakref at 0x7fde70d524a8; to 'PasswordType' at 0x7fde70a840b8> During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/bin/invoke", line 11, in <module> sys.exit(program.run()) File "/usr/local/lib/python3.5/site-packages/invoke/program.py", line 270, in run self.execute() File "/usr/local/lib/python3.5/site-packages/invoke/program.py", line 381, in execute executor.execute(*self.tasks) File "/usr/local/lib/python3.5/site-packages/invoke/executor.py", line 113, in execute result = call.task(*args, **call.kwargs) File "/usr/local/lib/python3.5/site-packages/invoke/tasks.py", line 111, in __call__ result = self.body(*args, **kwargs) File "/usr/src/app/tasks/app/run.py", line 35, in run context.invoke_execute(context, 'app.db.upgrade') File "/usr/src/app/tasks/__init__.py", line 72, in invoke_execute results = Executor(namespace, config=context.config).execute((command_name, kwargs)) File "/usr/local/lib/python3.5/site-packages/invoke/executor.py", line 113, in execute result = call.task(*args, **call.kwargs) File "/usr/local/lib/python3.5/site-packages/invoke/tasks.py", line 111, in __call__ result = self.body(*args, **kwargs) File "/usr/src/app/tasks/app/db.py", line 177, in upgrade command.upgrade(config, revision, sql=sql, tag=tag) File "/usr/local/lib/python3.5/site-packages/alembic/command.py", line 174, in upgrade script.run_env() File "/usr/local/lib/python3.5/site-packages/alembic/script/base.py", line 407, in run_env util.load_python_file(self.dir, 'env.py') File "/usr/local/lib/python3.5/site-packages/alembic/util/pyfiles.py", line 93, in load_python_file module = load_module_py(module_id, path) File "/usr/local/lib/python3.5/site-packages/alembic/util/compat.py", line 68, in load_module_py module_id, path).load_module(module_id) File "<frozen 
importlib._bootstrap_external>", line 385, in _check_name_wrapper File "<frozen importlib._bootstrap_external>", line 806, in load_module File "<frozen importlib._bootstrap_external>", line 665, in load_module File "<frozen importlib._bootstrap>", line 268, in _load_module_shim File "<frozen importlib._bootstrap>", line 693, in _load File "<frozen importlib._bootstrap>", line 673, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 662, in exec_module File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed File "migrations/env.py", line 93, in <module> run_migrations_online() File "migrations/env.py", line 86, in run_migrations_online context.run_migrations() File "<string>", line 8, in run_migrations File "/usr/local/lib/python3.5/site-packages/alembic/runtime/environment.py", line 797, in run_migrations self.get_context().run_migrations(**kw) File "/usr/local/lib/python3.5/site-packages/alembic/runtime/migration.py", line 312, in run_migrations step.migration_fn(**kw) File "/usr/src/app/migrations/versions/99b329343a41_.py", line 47, in upgrade sa.UniqueConstraint('username') File "<string>", line 8, in create_table File "<string>", line 3, in create_table File "/usr/local/lib/python3.5/site-packages/alembic/operations/ops.py", line 1098, in create_table return operations.invoke(op) File "/usr/local/lib/python3.5/site-packages/alembic/operations/base.py", line 318, in invoke return fn(self, operation) File "/usr/local/lib/python3.5/site-packages/alembic/operations/toimpl.py", line 101, in create_table operations.impl.create_table(table) File "/usr/local/lib/python3.5/site-packages/alembic/ddl/impl.py", line 194, in create_table self._exec(schema.CreateTable(table)) File "/usr/local/lib/python3.5/site-packages/alembic/ddl/impl.py", line 118, in _exec return conn.execute(construct, *multiparams, **params) File "/usr/local/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 914, in execute return meth(self, multiparams, params) File "/usr/local/lib/python3.5/site-packages/sqlalchemy/sql/ddl.py", line 68, in _execute_on_connection return connection._execute_ddl(self, multiparams, params) File "/usr/local/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 962, in _execute_ddl compiled = ddl.compile(dialect=dialect) File "<string>", line 1, in <lambda> File "/usr/local/lib/python3.5/site-packages/sqlalchemy/sql/elements.py", line 494, in compile return self._compiler(dialect, bind=bind, **kw) File "/usr/local/lib/python3.5/site-packages/sqlalchemy/sql/ddl.py", line 26, in _compiler return dialect.ddl_compiler(dialect, self, **kw) File "/usr/local/lib/python3.5/site-packages/sqlalchemy/sql/compiler.py", line 190, in __init__ self.string = self.process(self.statement, **compile_kwargs) File "/usr/local/lib/python3.5/site-packages/sqlalchemy/sql/compiler.py", line 213, in process return obj._compiler_dispatch(self, **kwargs) File "/usr/local/lib/python3.5/site-packages/sqlalchemy/sql/visitors.py", line 81, in _compiler_dispatch return meth(self, **kw) File "/usr/local/lib/python3.5/site-packages/sqlalchemy/sql/compiler.py", line 2157, in visit_create_table and not first_pk) File "/usr/local/lib/python3.5/site-packages/sqlalchemy/sql/compiler.py", line 213, in process return obj._compiler_dispatch(self, **kwargs) File "/usr/local/lib/python3.5/site-packages/sqlalchemy/sql/visitors.py", line 81, in _compiler_dispatch return meth(self, **kw) File "/usr/local/lib/python3.5/site-packages/sqlalchemy/sql/compiler.py", line 2188, in 
visit_create_column first_pk=first_pk File "/usr/local/lib/python3.5/site-packages/sqlalchemy/dialects/postgresql/base.py", line 1580, in get_column_specification impl_type = column.type.dialect_impl(self.dialect) File "/usr/local/lib/python3.5/site-packages/sqlalchemy/sql/type_api.py", line 361, in dialect_impl return self._dialect_info(dialect)['impl'] File "/usr/local/lib/python3.5/site-packages/sqlalchemy/sql/type_api.py", line 403, in _dialect_info impl = self._gen_dialect_impl(dialect) File "/usr/local/lib/python3.5/site-packages/sqlalchemy/sql/type_api.py", line 763, in _gen_dialect_impl typedesc = self.load_dialect_impl(dialect).dialect_impl(dialect) File "/usr/local/lib/python3.5/site-packages/sqlalchemy_utils/types/password.py", line 194, in load_dialect_impl impl = postgresql.BYTEA(self.length) File "/usr/local/lib/python3.5/site-packages/sqlalchemy_utils/types/password.py", line 168, in length self._max_length = self.calculate_max_length() File "/usr/local/lib/python3.5/site-packages/sqlalchemy_utils/types/password.py", line 176, in calculate_max_length for name in self.context.schemes(): File "/usr/local/lib/python3.5/site-packages/passlib/context.py", line 2714, in __getattribute__ self._lazy_init() File "/usr/local/lib/python3.5/site-packages/passlib/context.py", line 2708, in _lazy_init super(LazyCryptContext, self).__init__(**kwds) File "/usr/local/lib/python3.5/site-packages/passlib/context.py", line 1707, in __init__ self.load(kwds) File "/usr/local/lib/python3.5/site-packages/passlib/context.py", line 1896, in load config = _CryptConfig(source) File "/usr/local/lib/python3.5/site-packages/passlib/context.py", line 1019, in __init__ self._init_options(source) File "/usr/local/lib/python3.5/site-packages/passlib/context.py", line 1097, in _init_options key, value = norm_context_option(key, value) File "/usr/local/lib/python3.5/site-packages/passlib/context.py", line 1162, in _norm_context_option raise KeyError("unknown CryptContext keyword: %r" % (key,)) KeyError: "unknown CryptContext keyword: 'length'" Any idea? Thanks in advance for your help! Answer: I see that you are having a new migration `99b329343a41_.py` (it doesn't exist in my Flask-RESTplus-example-server). Please, review your new migration and **remove** everything related to PasswordType. It is a bug in SQLAlchemy- Utils, which doesn't play nice with Alembic: <https://github.com/kvesteri/sqlalchemy-utils/issues/106>
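If you would rather salvage the already-generated migration than regenerate it, one possible workaround (a sketch, not a tested recipe) is to hand-edit `migrations/versions/99b329343a41_.py` and swap the `PasswordType` column for its concrete storage type. The traceback above shows that on PostgreSQL `PasswordType` is backed by `BYTEA` (`impl = postgresql.BYTEA(self.length)`), which SQLAlchemy exposes portably as `LargeBinary`; the `length=128` below is illustrative and should match what your password schemes actually need:

    import sqlalchemy as sa

    # hypothetical replacement for the auto-generated PasswordType column
    # inside the op.create_table(...) call:
    sa.Column('password', sa.LargeBinary(length=128), nullable=False),

This avoids importing sqlalchemy_utils' PasswordType (and hence passlib's CryptContext) at migration time.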
Unable to run a basic GraphFrames example Question: Trying to run a simple GraphFrame example using pyspark. spark version : 2.0 graphframe version : 0.2.0 I am able to import graphframes in Jupyter:

    from graphframes import GraphFrame
    GraphFrame
    graphframes.graphframe.GraphFrame

I get this error when I try and create a GraphFrame object:

    ---------------------------------------------------------------------------
    Py4JJavaError Traceback (most recent call last)
    <ipython-input-23-2bf19c66804d> in <module>()
    ----> 1 gr_links = GraphFrame(df_web_page, df_parent_child_link)

    /Users/roopal/software/graphframes-release-0.2.0/python/graphframes/graphframe.pyc in __init__(self, v, e)
    60 self._sc = self._sqlContext._sc
    61 self._sc._jvm.org.apache.spark.ml.feature.Tokenizer()
    ---> 62 self._jvm_gf_api = _java_api(self._sc)
    63 self._jvm_graph = self._jvm_gf_api.createGraph(v._jdf, e._jdf)
    64

    /Users/roopal/software/graphframes-release-0.2.0/python/graphframes/graphframe.pyc in _java_api(jsc)
    32 def _java_api(jsc):
    33 javaClassName = "org.graphframes.GraphFramePythonAPI"
    ---> 34 return jsc._jvm.Thread.currentThread().getContextClassLoader().loadClass(javaClassName) \
    35 .newInstance()
    36

    /Users/roopal/software/spark-2.0.0-bin-hadoop2.7/python/lib/py4j-0.10.1-src.zip/py4j/java_gateway.py in __call__(self, *args)
    931 answer = self.gateway_client.send_command(command)
    932 return_value = get_return_value(
    --> 933 answer, self.gateway_client, self.target_id, self.name)
    934
    935 for temp_arg in temp_args:

    /Users/roopal/software/spark-2.0.0-bin-hadoop2.7/python/pyspark/sql/utils.pyc in deco(*a, **kw)
    61 def deco(*a, **kw):
    62 try:
    ---> 63 return f(*a, **kw)
    64 except py4j.protocol.Py4JJavaError as e:
    65 s = e.java_exception.toString()

    /Users/roopal/software/spark-2.0.0-bin-hadoop2.7/python/lib/py4j-0.10.1-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    310 raise Py4JJavaError(
    311 "An error occurred while calling {0}{1}{2}.\n".
    --> 312 format(target_id, ".", name), value)
    313 else:
    314 raise Py4JError(

    Py4JJavaError: An error occurred while calling o138.loadClass.
    : java.lang.ClassNotFoundException: org.graphframes.GraphFramePythonAPI
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:280)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:128)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:211)
    at java.lang.Thread.run(Thread.java:745)

The python code tries to read the java class (in the jar) I guess, but can't seem to find it. Any suggestions how to fix this? Answer: Make sure that your PYSPARK_SUBMIT_ARGS is updated to have "--packages graphframes:graphframes:0.2.0-spark2.0" in your kernel.json ~/.ipython/kernels//kernel.json. You probably already looked at the following [link](https://developer.ibm.com/clouddataservices/2016/07/15/intro-to-apache- spark-graphframes/). It has more details on Jupyter setup.
Basically, pyspark has to be supplied the graphframes.jar.
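If you start the notebook yourself rather than maintaining a kernel.json, a possible alternative (a sketch; it must run before any SparkContext is created, and the trailing `pyspark-shell` token is required) is to set the variable from Python:

    import os

    # must be set before pyspark spins up its JVM
    os.environ["PYSPARK_SUBMIT_ARGS"] = (
        "--packages graphframes:graphframes:0.2.0-spark2.0 pyspark-shell"
    )

Either way, the point is the same: the graphframes jar has to end up on the JVM classpath, not just on the Python path.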
Is my adaptation of point-in-polygon (jordan curve theorem) in python correct? Question: **Problem** I recently found a need to determine if my points are inside of a polygon. So I learned about [this](https://sidvind.com/wiki/Point-in- polygon:_Jordan_Curve_Theorem) approach in C++ and adapted it to Python. However, the C++ code I was studying isn't quite right I think? I believe I have fixed it, but I am not quite sure, so I was hoping folks brighter than me might help me cast some light on this? The theorem is super simple and the idea is like this: given a closed n-sided polygon, you draw an arbitrary line; if your point is inside, your line will intersect with the edges an odd number of times. Otherwise, you will be even and it is outside the polygon. Pretty freaking cool. I had the following test cases:

    polygon_x = [5, 5, 11, 10]
    polygon_y = [5, 10, 5, 10]

    test1_x = 6
    test1_y = 6

    result1 = point_in_polygon(test1_x, test1_y, polygon_x, polygon_y)
    print(result1)

    test2_x = 13
    test2_y = 5

    result2 = point_in_polygon(test2_x, test2_y, polygon_x, polygon_y)
    print(result2)

The above would give me both false if I defined it as follows:

    if polygon_x[i] < polygon_x[(i+1) % length]:
        temp_x = polygon_x[i]
        temp_y = polygon_x[(i+1) % length]
    else:
        temp_x = polygon_x[(i+1) % length]
        temp_y = polygon_x[i]

This is wrong! I should be getting `true` for `result1` and then `false` for `result2`. So clearly, something is funky. The code I was reading in C++ makes sense except for the above. In addition, it failed my test case which made me think that `temp_y` should be defined with `polygon_y` and not `polygon_x`. Sure enough, when I did this, my test case for `(6,6)` passes. It still fails when my points are on the line, but as long as I am inside the polygon, it will pass. Expected behavior. ## Polygon code adapted to Python

    def point_in_polygon(self, target_x, target_y, polygon_x, polygon_y):
        print(polygon_x)
        print(polygon_y)
        #Variable to track how many times ray crosses a line segment
        crossings = 0
        temp_x = 0
        temp_y = 0

        length = len(polygon_x)
        for i in range(0,length):
            if polygon_x[i] < polygon_x[(i+1) % length]:
                temp_x = polygon_x[i]
                temp_y = polygon_y[(i+1) % length]
            else:
                temp_x = polygon_x[(i+1) % length]
                temp_y = polygon_y[i]
            print(str(temp_x) + ", " + str(temp_y))

            #check
            if target_x > temp_x and target_x <= temp_y and (target_y < polygon_y[i] or target_y <= polygon_y[(i+1)%length]):
                eps = 0.000001
                dx = polygon_x[(i+1) % length] - polygon_x[i]
                dy = polygon_y[(i+1) % length] - polygon_y[i]
                k = 0
                if abs(dx) < eps:
                    k = 999999999999999999999999999
                else:
                    k = dy/dx
                m = polygon_y[i] - k * polygon_x[i]
                y2 = k*target_x + m
                if target_y <= y2:
                    crossings += 1
                    print(crossings)
        if crossings % 2 == 1:
            return True
        else:
            return False

**Summary** Can someone please explain to me what the `temp_x` and `temp_y` approaches are doing? Also, is my fix of redefining `temp_x` with `polygon_x` and `temp_y` with `polygon_y` the correct approach? I doubt it. Here is why. What is going on for `temp_x` and `temp_y` doesn't quite make sense to me. For `i = 0`, clearly `polygon_x[0] < polygon_x[1]` is `false`, so we get `temp_x[1] = 5` and `temp_y[0] = 5`. That is `(5,5)`. This just happens to be one of my pairs. However, suppose I feed the algorithm my points out of order (by axis, pairwise integrity is always a must), something like:

    x = [5, 10, 10, 5]
    y = [10,10, 5, 5]

In this case, when `i = 0`, we get `temp_x[1] = 10` and `temp_y[0] = 10`. Okay, by coincidence `(10,10)`.
I also tested the point `(9,9)` against the "corrected" algorithm and it is still inside. In short, I am trying to find a counterexample for why my fix won't work, but I can't. If this is working, I need to understand what the method is doing, and I hope someone could help explain it to me. Regardless, if I am right or wrong, I would appreciate it if someone could help shed some better light on this problem. I'm even open to solving the problem in a more efficient way for n-polygons, but I want to make sure I am understanding the code correctly. As a coder, I am uncomfortable with a method that doesn't quite make sense. Thank you so much for listening to my thoughts above. Any suggestions greatly welcomed. Answer: You've misunderstood what the `x1` and `x2` values in the linked C++ code are for, and that misunderstanding has caused you to pick inappropriate variable names in Python. Both of the variables contain `x` values, so `temp_y` is a very misleading name. Better names might be `min_x` and `max_x`, since they are the minimum and maximum of the `x` values of the vertices that make up the polygon edge. A clearer version of the code might be written as:

    for i in range(length):
        min_x = min(polygon_x[i], polygon_x[(i+1)%length])
        max_x = max(polygon_x[i], polygon_x[(i+1)%length])
        if min_x < target_x <= max_x:
            # ...

This is perhaps a little less efficient than the C++ style of code, since calling both `min` and `max` will compare the values twice. Your comment about the order of the points suggests that there's a further misunderstanding going on, which might explain the unexpected results you were seeing when setting `temp_y` to a value from `polygon_x`. The order of the coordinates in the polygon lists is important, as the edges go from one coordinate pair to the next around the list (with the last pair of coordinates connecting to the first pair to close the polygon). If you reorder them, the edges of the polygon will get switched around. The example coordinates you give in your code (`polygon_x = [5, 5, 11, 10]` and `polygon_y = [5, 10, 5, 10]`) don't describe a normal kind of polygon. Rather, you get a (slightly lopsided) bow-tie shape, with two diagonal edges crossing each other like an `X` in the middle. That's not actually a problem for this algorithm though. However, the first point you're testing lies exactly on one of those diagonal edges (the one that wraps around the list, from the last vertex, `(10, 10)` to the first, `(5, 5)`). Whether the code will decide it's inside or outside of the polygon likely comes down to floating point rounding and the choice of operator between `<` or `<=`. Either answer could be considered "correct" in this situation. When you reordered the coordinates later in the question (and incidentally changed an `11` to a `10`), you turned the bow-tie into a square. Now the `(6, 6)` test is fully inside the shape, and so the code will work if you don't assign a `y` coordinate to the second `temp` variable.
Python 2.7 TypeError: 'NoneType' object has no attribute '__getitem__' Question: I'm pretty new to coding and have been trying some things out. I am getting this error when I run a python script I have. I have read that this error is because something is returning "None" but I'm having trouble figuring out what is causing it (still trying to learn all of this). The purpose of the script is pulling thumbnails from videos and searching the internet for other instances of the same thing. After running my python script, it returns a result of:

    [*] Retrieving video ID: VIDEOID
    Traceback (most recent call last):
    File "VidSearch.py", line 40, in <module>
    thumbnails = video_data['items'][0]['snippet']['thumbnails']
    TypeError: 'NoneType' object has no attribute '__getitem__'

The following is the script I am running (Youtube Key removed):

    import argparse
    import requests
    import json
    from pytineye import TinEyeAPIRequest

    tineye = TinEyeAPIRequest('http://api.tineye.com/rest/','PUBLICKEY','PRIVATEKEY')
    youtube_key = "VIDEOID"

    ap = argparse.ArgumentParser()
    ap.add_argument("-v","--videoID", required=True,help="The videoID of the YouTube video. For example: https://www.youtube.com/watch?v=VIDEOID")
    args = vars(ap.parse_args())
    video_id = args['videoID']

    #
    # Retrieve the video details based on videoID
    #
    def youtube_video_details(video_id):
        api_url = "https://www.googleapis.com/youtube/v3/videos?part=snippet%2CrecordingDetails&"
        api_url += "id=%s&" % video_id
        api_url += "key=%s" % youtube_key
        response = requests.get(api_url)
        if response.status_code == 200:
            results = json.loads(response.content)
            return results
        return None

    print "[*] Retrieving video ID: %s" % video_id
    video_data = youtube_video_details(video_id)
    thumbnails = video_data['items'][0]['snippet']['thumbnails']

    print "[*] Thumbnails retrieved. Now submitting to TinEye."
    url_list = []

    # add the thumbnails from the API to the list
    for thumbnail in thumbnails:
        url_list.append(thumbnails[thumbnail]['url'])

    # build the manual URLS
    for count in range(4):
        url = "http://img.youtube.com/vi/%s/%d.jpg" % (video_id,count)
        url_list.append(url)

    results = []

    # now walk over the list of URLs and search TinEye
    for url in url_list:
        print "[*] Searching TinEye for: %s" % url
        try:
            result = tineye.search_url(url)
        except:
            pass
        if result.total_results:
            results.extend(result.matches)

    result_urls = []
    dates = {}

    for match in results:
        for link in match.backlinks:
            if link.url not in result_urls:
                result_urls.append(link.url)
                dates[link.crawl_date] = link.url

    print
    print "[*] Discovered %d unique URLs with image matches." % len(result_urls)

    for url in result_urls:
        print url

    oldest_date = sorted(dates.keys())

    print
    print "[*] Oldest match was crawled on %s at %s" % (str(oldest_date[0]),dates[oldest_date[0]])

I know it's probably something simple but I can't seem to figure it out for the life of me. Any help would be greatly appreciated. Answer: In your `youtube_video_details` function, the response.status_code may not be `200`, in which case the function returns `None`. So guard the lookup and only index into the result when you actually got data back:

    video_data = youtube_video_details(video_id)
    if video_data is not None:
        thumbnails = video_data['items'][0]['snippet']['thumbnails']
How to convert tar.gz file to zip using Python only? Question: Does anybody has any code for converting tar.gz file into zip using only Python code? I have been facing many issues with tar.gz as mentioned in the [How can I read tar.gz file using pandas read_csv with gzip compression option?](http://stackoverflow.com/questions/39263929/how-can-i-read-tar-gz- file-using-pandas-read-csv-with-gzip-compression-option) Answer: You would have to use the [tarfile](https://docs.python.org/3/library/tarfile.html) module, with mode `'r:gz'` for reading (a seekable mode is needed here, because the loop below jumps between members; the stream mode `'r|gz'` only allows sequential access). Then use [zipfile](https://docs.python.org/3/library/zipfile.html#module-zipfile) for writing.

    import tarfile, zipfile

    tarf = tarfile.open( name='mytar.tar.gz', mode='r:gz' )
    zipf = zipfile.ZipFile( file='myzip.zip', mode='w', compression=zipfile.ZIP_DEFLATED )
    for m in tarf.getmembers():
        if m.isfile():  # directories have no file data to copy
            f = tarf.extractfile( m )
            zipf.writestr( m.name, f.read() )
    tarf.close()
    zipf.close()

You can use `is_tarfile()` to check for a valid tar file. Perhaps you could also use [`shutil`](https://docs.python.org/3/library/shutil.html#archiving- operations), but I think it cannot work on memory. PS: I do not have python in this PC, so there may be a few things to tune.
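If going through a temporary directory on disk is acceptable, the `shutil` route mentioned above is also quite short on Python 3; a sketch:

    import shutil
    import tarfile
    import tempfile

    with tempfile.TemporaryDirectory() as tmp:
        with tarfile.open('mytar.tar.gz', 'r:gz') as tarf:
            tarf.extractall(tmp)                   # unpack everything to the temp dir
        shutil.make_archive('myzip', 'zip', tmp)   # writes myzip.zip

Unlike the member-by-member loop, this touches the filesystem, so prefer the loop above if you want to avoid intermediate files.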
Error on Data Pickle in Python Question: I need to save my training data set in a pickle. Here is the code; when I execute it, I get an error. How do I fix this error? I need to save the featureCounts and labelCounts variables in two pickles.

    from __future__ import division
    import collections
    import math
    import pickle

    class TrainClassifier:
        def __init__(self, arffFile):
            self.trainingFile = arffFile
            self.features = {}
            self.featureNameList = []
            self.featureCounts = collections.defaultdict(lambda: 1)
            self.featureVectors = []
            self.labelCounts = collections.defaultdict(lambda: 0)

        def DataTraning(self):
            for fv in self.featureVectors:
                self.labelCounts[fv[len(fv)-1]] += 1 #update count of the label
                for counter in range(0, len(fv)-1):
                    self.featureCounts[(fv[len(fv)-1], self.featureNameList[counter], fv[counter])] += 1

            for label in self.labelCounts:
                for feature in self.featureNameList[:len(self.featureNameList)-1]:
                    self.labelCounts[label] += len(self.features[feature])

        def GetValues(self):
            file = open(self.trainingFile, 'r')
            for line in file:
                if line[0] != '@': #start of actual data
                    self.featureVectors.append(line.strip().lower().split(','))
                else: #feature definitions
                    if line.strip().lower().find('@data') == -1 and (not line.lower().startswith('@relation')):
                        self.featureNameList.append(line.strip().split()[1])
                        self.features[self.featureNameList[len(self.featureNameList) - 1]] = line[line.find('{')+1: line.find('}')].strip().split(',')
            file.close()

        def SaveOnPickle(self):
            f = open('dict.pickle', 'wb')
            pickle.dump(self.labelCounts, f)
            f.close()

    if __name__ == "__main__":
        Predic = TrainClassifier("Military.arff")
        Predic.GetValues()
        Predic.DataTraning()
        Predic.SaveOnPickle()

Here is the Error

    Traceback (most recent call last):
    File "C:\wamp64\www\M360\M360py\src\TrainClassifier.py", line 69, in <module>
    Predic.SaveOnPickle()
    File "C:\wamp64\www\M360\M360py\src\TrainClassifier.py", line 43, in SaveOnPickle
    pickle.dump(self.labelCounts, f)
    File "C:\Users\Udara\AppData\Roaming\NetBeans\8.1\jython-2.7.0\Lib\pickle.py", line 1370, in dump
    Pickler(file, protocol).dump(obj)
    File "C:\Users\Udara\AppData\Roaming\NetBeans\8.1\jython-2.7.0\Lib\pickle.py", line 224, in dump
    self.save(obj)
    File "C:\Users\Udara\AppData\Roaming\NetBeans\8.1\jython-2.7.0\Lib\pickle.py", line 331, in save
    self.save_reduce(obj=obj, *rv)
    File "C:\Users\Udara\AppData\Roaming\NetBeans\8.1\jython-2.7.0\Lib\pickle.py", line 401, in save_reduce
    save(args)
    File "C:\Users\Udara\AppData\Roaming\NetBeans\8.1\jython-2.7.0\Lib\pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
    File "C:\Users\Udara\AppData\Roaming\NetBeans\8.1\jython-2.7.0\Lib\pickle.py", line 562, in save_tuple
    save(element)
    File "C:\Users\Udara\AppData\Roaming\NetBeans\8.1\jython-2.7.0\Lib\pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
    File "C:\Users\Udara\AppData\Roaming\NetBeans\8.1\jython-2.7.0\Lib\pickle.py", line 746, in save_global
    raise PicklingError(
    pickle.PicklingError: Can't pickle <function <lambda> at 0x5>: it's not found as __main__.<lambda>

Answer: you cannot serialize `self.labelCounts` because it is a defaultdict (no problem with that) with a `lambda` in it: here's the catch: pickle cannot serialize lambdas (it stores named module-level functions by reference to their importable name, but a lambda has no such name).
you wrote:

    self.labelCounts = collections.defaultdict(lambda: 0)

But you are lucky: you don't need a lambda here. `defaultdict` takes any callable as its default factory, and the built-in `int` is a picklable callable that returns `0` (just as `list` would be the factory for empty lists), so just do:

    self.labelCounts = collections.defaultdict(int)

Your other dict `featureCounts` has the same problem, but since there is no built-in that returns `1`, its factory has to be a named module-level function instead of a lambda; see the sketch below.
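A minimal sketch of the fixed, picklable version (the helper name `one` is made up; any module-level function works, because pickle stores functions by reference to their importable name rather than by value):

    import collections
    import pickle

    def one():   # module-level, so pickle can find it by name
        return 1

    featureCounts = collections.defaultdict(one)  # default value 1
    labelCounts = collections.defaultdict(int)    # default value 0

    labelCounts['some label'] += 1
    data = pickle.dumps(labelCounts)              # no PicklingError now
    print(pickle.loads(data))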
Camera calibration with circular pattern Question: I'm following [this tutorial](http://opencv-python- tutroals.readthedocs.io/en/latest/py_tutorials/py_calib3d/py_calibration/py_calibration.html#calibration) to calibrate my camera (with some lens) on Raspberry Pi, but using a [circular pattern](http://nerian.com/support/resources/patterns/) instead of the chessboard one. The problem is that the resulting undistorted image is shrunken and not full, and when I remove all the shrinking from the code, it looks like the right part of [![this](http://i.stack.imgur.com/L9sgp.jpg)](http://i.stack.imgur.com/L9sgp.jpg) (usually it's even worse). The question is whether it is possible to do something with the code so that it would give a picture like [the example from the tutorial](http://opencv-python- tutroals.readthedocs.io/en/latest/_images/calib_result.jpg). Do I have to use the chessboard pattern for this? My code:

    import numpy as np
    import cv2
    from picamera.array import PiRGBArray
    from picamera import PiCamera
    import time

    # termination criteria
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

    # prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
    #objp = np.zeros((4*11,3), np.float32)
    #objp[:,:2] = np.mgrid[0:7,0:6].T.reshape(-1,2)
    objp=np.array([[0,0,0],[1,0,0],[2,0,0],[3,0,0],[0.5,0.5,0],[1.5,0.5,0],[2.5,0.5,0],[3.5,0.5,0]])
    for y in range(2,11):
        for x in range(4):
            objp=np.append(objp,[np.array([objp[4*(y-2)+x][0],objp[4*(y-2)+x][1]+1,0])],axis=0)

    # Arrays to store object points and image points from all the images.
    objpoints = [] # 3d point in real world space
    imgpoints = [] # 2d points in image plane.

    #images = glob.glob('pict*.jpg')
    # initialize the camera and grab a reference to the raw camera capture
    camera = PiCamera()
    camera.resolution = (640, 480)
    camera.framerate = 32
    rawCapture = PiRGBArray(camera, size=(640, 480))

    # allow the camera to warmup
    time.sleep(0.1)

    ret0=[]
    j=0
    for i,frame in enumerate(camera.capture_continuous(rawCapture, format="bgr", use_video_port=True)):
        image = frame.array
        img=image[::-1]
        gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)

        # Find the chess board corners
        ret, corners = cv2.findCirclesGrid(gray, (4,11),None,flags=cv2.CALIB_CB_ASYMMETRIC_GRID)

        # If found, add object points, image points (after refining them)
        if ret == True and np.sum(np.int32(ret0))<15 and not i%10:
            ret0.append(ret)
            print("{} more for proper calibration".format(15-np.sum(np.int32(ret0))))
            objpoints.append(objp.astype('float32'))

            corners2 = cv2.cornerSubPix(gray,corners,(11,11),(-1,-1),criteria)
            imgpoints.append(corners2.reshape(-1, 2).astype('float32'))

            # Draw and display the corners
            img = cv2.drawChessboardCorners(img.copy(), (4,11), corners2,ret)
            cv2.imshow('img',img)
            cv2.waitKey(1000)
            cv2.imwrite('cal{}.jpg'.format(j),img)
            j+=1
            rawCapture.truncate(0)
        elif np.sum(np.int32(ret0))<15:
            cv2.imshow('img',img)
            cv2.waitKey(1)
            rawCapture.truncate(0)
        else:
            rawCapture.truncate(0)
            break

    dist = np.array([-0.13615181, 0.53005398, 0, 0, 0]) # no translation
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1],None,None)

    h, w = img.shape[:2]
    newcameramtx, roi=cv2.getOptimalNewCameraMatrix(mtx,dist,(w,h),1,(w,h))
    np.savetxt('newcameramtx.out',newcameramtx)
    np.savetxt('mtx.out',mtx)
    np.savetxt('dist.out',dist)

    #img=cv2.imread('pict1.jpg')
    # undistort
    dst = cv2.undistort(img, mtx, dist, None, newcameramtx)
    # crop the image
    x,y,w,h = roi
    dst = dst[y:y+h, x:x+w]
    cv2.imwrite('calibresult.png',dst)
    cv2.imshow('undistorted',dst)
    cv2.waitKey(0)&0xFF
    cv2.destroyAllWindows()

UPD: I've tried calibration with the chessboard pattern, and the program didn't even want to recognize the pattern! Here are examples with the detected circular pattern:
[![one](http://i.stack.imgur.com/BMxcg.jpg)](http://i.stack.imgur.com/BMxcg.jpg)
[![other](http://i.stack.imgur.com/x2L9b.jpg)](http://i.stack.imgur.com/x2L9b.jpg)
[![one more](http://i.stack.imgur.com/tA3Mf.jpg)](http://i.stack.imgur.com/tA3Mf.jpg)
[![enter image description here](http://i.stack.imgur.com/Sdm0A.jpg)](http://i.stack.imgur.com/Sdm0A.jpg)
So the program does detect the circles, but not as precisely as one would want. Answer: You could try to tune the parameters of the blob detector. By default `findCirclesGrid` uses the SimpleBlobDetector, so try to adjust its parameters, for example:

    params = cv2.SimpleBlobDetector_Params()
    params.minArea = 10
    params.minDistBetweenBlobs = 5
    detector = cv2.SimpleBlobDetector_create(params)

and then pass it to `findCirclesGrid` (the flags have to be passed positionally here, since a positional argument cannot follow a keyword argument in Python):

    cv2.findCirclesGrid(gray, (4,11), None, cv2.CALIB_CB_ASYMMETRIC_GRID, detector)

Additionally you can try to use `cv2.CALIB_CB_ASYMMETRIC_GRID + cv2.CALIB_CB_CLUSTERING`. For more information about the SimpleBlobDetector and its parameters see [this tutorial](https://www.learnopencv.com/blob-detection-using-opencv-python-c/).
Python / Pygame FULLSCREEN Tag Creates A Game Screen That Is Too Large For The Screen Question: **UPDATED ISSUE** I have discovered the issue appears to be with the fact that I am using the FULLSCREEN tag to create the window. I added a rectangle to be drawn in the top left of the screen (0, 0), but when I run the program, it is mostly off the screen. Then, when I Alt-Tab away and back, the rectangle is appropriately placed at 0,0 and the turret is off center. So basically, when the program starts, the game screen is larger than my actual screen, but centered. Then after Alt-Tab, the game screen is lined up with 0,0 but since the game screen is larger than my screen, the turret looks off center, but is actually centered relative to the game. So the real question is why does using the FULLSCREEN tag make a screen larger than my computer screen? **ORIGINAL ISSUE** I am building a simple demonstration of a turret in the center of the screen which follows the location of the cursor as if to fire where it is. Everything works perfectly until I Alt-Tab away from the screen, and then Alt-Tab back. At this point the turret is now off center (down and to the right)

    import pygame, math
    pygame.init()
    image_library = {}
    screen_dimen = pygame.display.Info()
    print("Screen Dimensions ", screen_dimen)

    def get_image(name):
        if name not in image_library:
            image = pygame.image.load(name)
            image_library[name] = image
        else:
            image = image_library[name]
        return image

    robot_turret_image = get_image('robot_turret.png')
    screen = pygame.display.set_mode((0, 0), pygame.FULLSCREEN)
    done = False
    clock = pygame.time.Clock()
    while not done:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                done = True
            if event.type == pygame.MOUSEMOTION:
                print(event.pos)
            if event.type == pygame.KEYDOWN and event.key == pygame.K_SPACE:
                done = True
        screen.fill((0, 0, 0))
        pos = pygame.mouse.get_pos()
        angle = 360 - math.atan2(pos[1] - (screen_dimen.current_h / 2), pos[0] - (screen_dimen.current_w / 2)) * 180 / math.pi
        rot_image = pygame.transform.rotate(robot_turret_image, angle)
        rect = rot_image.get_rect(center=(screen_dimen.current_w / 2, screen_dimen.current_h / 2))
        screen.blit(rot_image, rect)
        color = (0, 128, 255)
        pygame.draw.rect(screen, color, pygame.Rect(0, 0, 200, 200))
        pygame.display.update()
        clock.tick(60)

It seems that the center is now off. I have printed out the screen dimensions before and after the Alt-Tab and they are the same, so I can't figure out why the image moves. I believe I am missing something regarding state changes with Pygame, but can't figure out what. If it is relevant, I am on Windows 10. Answer: Alright, I discovered a solution from [gamedev.stackexchange](http://gamedev.stackexchange.com/questions/105750/pygame-fullsreen-display-issue) and I will re-hash it here. The issue was that using the fullscreen tag was making a screen larger than my computer screen. The following code solves this:

    import ctypes
    ctypes.windll.user32.SetProcessDPIAware()
    true_res = (ctypes.windll.user32.GetSystemMetrics(0), ctypes.windll.user32.GetSystemMetrics(1))
    pygame.display.set_mode(true_res,pygame.FULLSCREEN)

It is important to note that this is potentially just a Windows fix, but I do not have another system with which to test it. But it works on Windows 10 with Python 3.5.1 and pygame 1.9.2a0.
Python Check for decimals Question: I'm making a program which divides a lot of numbers and I want to check if the number gets decimals or not. I also want it to print those decimals. Example:

    foo = 7/3
    if foo has a 3 in the decimals: (Just an example of what I want to do there)
        print("It works!")
    elif foo has no decimals: (another example)
        print("it has no decimals")

EDIT: Ok, so since "check which decimal is afterwards" brought some confusion, let me explain. I want to be able to check IF a number has decimals. For example, 7/3 (foo) gives me decimals, but I want Python to tell me that without me having to do the math. Forget the "which decimal" part. Answer: If you just want to test whether a division has decimals, just check the modulo:

    foo = a % b

    if foo != 0:
        # Then foo contains decimals
        pass

    if foo == 0:
        # Then foo does NOT contain decimals
        pass

However (since your question is a bit unclear) if you want to split the integer and decimal parts then use the [`math.modf()`](https://docs.python.org/2/library/math.html#math.modf) function:

    import math
    x = 1234.5678
    math.modf(x) # (0.5678000000000338, 1234.0)
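Putting the two together with the question's own numbers (a minimal, runnable check):

    a, b = 7, 3
    if a % b != 0:
        print("It works!")           # 7/3 = 2.333..., so this branch runs
    else:
        print("it has no decimals")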
Issue deploying Django project to Apache via WSGI Question: Ubuntu 14.04.4 and Django 1.10 I'm trying to deploy a simple Django app that works perfectly in development to Apache, via WSGI. The relevant bits in my Apache config file:

    <VirtualHost [my IP]>
        WSGIScriptAlias /Django/MedFormUpdates /home/web/inside/django/MedFormUpdates/MedFormUpdates/wsgi.py
        WSGIApplicationGroup %{GLOBAL}
    </VirtualHost>

    WSGIPythonPath /home/web/inside/django/MedFormUpdates/MedFormUpdates

    <Directory "/home/web/inside/django/MedFormUpdates/MedFormUpdates">
        Options ExecCGI
        <Files wsgi.py>
            Require all granted
        </Files>
    </Directory>

In my wsgi.py file:

    import os, sys
    from django.core.wsgi import get_wsgi_application

    sys.path.append('/home/web/inside/django')
    sys.path.append('/home/web/inside/django/MedFormUpdates')

    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "MedFormUpdates.settings")

    application = get_wsgi_application()

And in my settings.py:

    WSGI_APPLICATION = 'MedFormUpdates.wsgi.application'

When I attempt loading the page, I get "**We're sorry, the web server had an internal error.**" - with the Apache log showing

    End of script output before headers: wsgi.py

I've been through every Django/WSGI/Apache thread that I can find, and this is driving me nuts. Any insight is appreciated. Thanks. Answer: It turns out that we have another Apache configuration file in the directory /etc/apache2/conf-enabled which has the following setting, which binds Python scripts (as well as Perl and CGI) to CGI:

    AddHandler cgi-script .cgi .pl .py

This is necessary for many other things that we have in production. In order to exempt my project from that, in my Apache config file I removed

    Options ExecCGI

and instead put

    RemoveHandler .py

which now has the page showing just fine.
PyCharm / OS X El Capitan / Python 3.5.2 - matplotlib not working in script Question: Python noob here, apologies if this is has an obvious answer I should know. I'm using Python 3.5.2 via PyCharm in OSX El Capitan and I'm trying to run the following simple script to practise with matplotlib: import matplotlib.pyplot as plt year = [1950,1970,1990,2010] pop = [2.159,3.692,5.263,6.972] plt.plot(year,pop) plt.show() If I execute this line by line in PyCharm's Python console, it works fine. If I execute it as an entire script, I get this error: /Library/Frameworks/Python.framework/Versions/3.5/bin/python3.5/Users/Cuckoo/Dropbox/Python/test.py Traceback (most recent call last): File "/Users/Cuckoo/Dropbox/Python/test.py", line 1, in <module> import matplotlib.pyplot as plt File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/matplotlib/__init__.py", line 115, in <module> import tempfile File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/tempfile.py", line 45, in <module> from random import Random as _Random ImportError: cannot import name 'Random' Process finished with exit code 1 Can anyone please explain what has gone wrong and better still, how I can fix it? Answer: This can be caused by having another Python script in your project named `random.py` that is overriding the original library named Random. Try to rename or remove the `random.py` file and your script should work from within PyCharm and the command line.
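A quick way to confirm that kind of clash is to check which file Python actually imports (a small diagnostic sketch):

    import random
    print(random.__file__)
    # if this prints a path inside your project instead of
    # .../python3.5/random.py, rename that project file
    # (and delete any stale random.pyc next to it)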
JSON.parse without escaping Question: Is there any way to do this in JavaScript:

    $ cat test.json
    {"body":"\u0000"}
    $ python3 -c 'import json; print(json.load(open("test.json", "r")))'
    {'body': '\x00'}

Notice, the data above has only one `\` (it does not need to be escaped). So you have the following situation in JavaScript:

    JSON.parse('{"body":"\\u0000"}') // works
    JSON.parse('{"body":"\u0000"}') // does not work

With potentially any UTF-8 data coming from a binary source (websocket), can this data be processed directly like in the first Python example above? Answer: String characters from `\u0000` through `\u001F` are considered control characters, and according to [RFC-7159](http://rfc7159.net/rfc7159) they are not allowed to appear unescaped in JSON and must be escaped, as stated [in section 7](http://rfc7159.net/rfc7159#rfc.section.7). What you are trying to do is to put unescaped control characters into a JSON string, which is clearly not acceptable: you have to escape them first. No language accepts it, not even Python. The correct approach is to place a UTF-8 encoded value into a string containing a JSON format. This is correct JSON, and will be parsed by any JSON parser in any language, even in JavaScript:

    {"body":"\u0000"}

This is **incorrect** JSON (consider the `[NUL]` as a NUL control character, as it cannot be represented in text):

    {"body":"[NUL]"}

That's why `JSON.parse('{"body":"\\u0000"}')` works and `JSON.parse('{"body":"\u0000"}')` doesn't. Hope this clarifies what's wrong with your test.
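The same point seen from the producing side, mirroring the python3 one-liner in the question (a small runnable check):

    import json

    payload = json.dumps({"body": "\x00"})  # the serializer escapes the control character
    print(payload)                           # {"body": "\u0000"}
    print(json.loads(payload))               # {'body': '\x00'} - round-trips cleanly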
Django TypeError: allow_migrate() got an unexpected keyword argument 'model_name' Question: So I copied over my Django project to a new server, replicated the environment and imported the tables to the local mysql database. But when I try to run makemigrations it gives me the TypeError: allow_migrate() got an unexpected keyword argument 'model_name' This is the full stack trace: Traceback (most recent call last): File "manage.py", line 10, in <module> execute_from_command_line(sys.argv) File "/home/cicd/.local/lib/python2.7/site-packages/django/core/management/__init__.py", line 367, in execute_from_command_line utility.execute() File "/home/cicd/.local/lib/python2.7/site-packages/django/core/management/__init__.py", line 359, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/home/cicd/.local/lib/python2.7/site-packages/django/core/management/base.py", line 305, in run_from_argv self.execute(*args, **cmd_options) File "/home/cicd/.local/lib/python2.7/site-packages/django/core/management/base.py", line 353, in execute self.check() File "/home/cicd/.local/lib/python2.7/site-packages/django/core/management/base.py", line 385, in check include_deployment_checks=include_deployment_checks, File "/home/cicd/.local/lib/python2.7/site-packages/django/core/management/base.py", line 372, in _run_checks return checks.run_checks(**kwargs) File "/home/cicd/.local/lib/python2.7/site-packages/django/core/checks/registry.py", line 81, in run_checks new_errors = check(app_configs=app_configs) File "/home/cicd/.local/lib/python2.7/site-packages/django/core/checks/model_checks.py", line 30, in check_all_models errors.extend(model.check(**kwargs)) File "/home/cicd/.local/lib/python2.7/site-packages/django/db/models/base.py", line 1266, in check errors.extend(cls._check_fields(**kwargs)) File "/home/cicd/.local/lib/python2.7/site-packages/django/db/models/base.py", line 1337, in _check_fields errors.extend(field.check(**kwargs)) File "/home/cicd/.local/lib/python2.7/site-packages/django/db/models/fields/__init__.py", line 893, in check errors = super(AutoField, self).check(**kwargs) File "/home/cicd/.local/lib/python2.7/site-packages/django/db/models/fields/__init__.py", line 208, in check errors.extend(self._check_backend_specific_checks(**kwargs)) File "/home/cicd/.local/lib/python2.7/site-packages/django/db/models/fields/__init__.py", line 310, in _check_backend_specific_checks if router.allow_migrate(db, app_label, model_name=self.model._meta.model_name): File "/home/cicd/.local/lib/python2.7/site-packages/django/db/utils.py", line 300, in allow_migrate allow = method(db, app_label, **hints) TypeError: allow_migrate() got an unexpected keyword argument 'model_name' I would appreciate any help in debugging this error and trying to understand what is causing this error. Answer: I meet the same problem when i move from 1.6.* to 1.10.Finally i found the problem cause by the **DATABASE_ROUTERS** the old version i write like this class OnlineRouter(object): # ... def allow_migrate(self, db, model): if db == 'myonline': return model._meta.app_label == 'online' elif model._meta.app_label == 'online': return False return None it work by rewrite like this class OnlineRouter(object): # ... def allow_migrate(self, db, app_label, model_name=None, **hints): if db == 'my_online': return app_label == 'online' elif app_label == 'online': return False return None more detail see <https://docs.djangoproject.com/en/1.10/topics/db/multi- db/#an-example>
NoSuchElement exception with Selenium even after using Wait and checking page_soure Question: I have this simple scraper that I am running. I am trying to scrape the search results for letter q from sam.gov: from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from bs4 import BeautifulSoup import re import sys reload(sys) sys.setdefaultencoding('utf8') letter = 'q' driver = webdriver.PhantomJS() driver.set_window_size(1120, 550) driver.get("http://sam.gov") #element = WebDriverWait(driver, 10).until( # EC.presence_of_element_located((By.ID, "pbG220e071f_2de75f_2d417d_2d9c61_2d027d324c8fec:_viewRoot:j_id12:search1")) # ) #element.click() driver.find_element_by_id('pbG220e071f_2de75f_2d417d_2d9c61_2d027d324c8fec:_viewRoot:j_id12:search1').click() driver.find_element_by_id(letter).send_keys(letter) driver.find_element_by_id('RegSearchButton').click() def crawl(): bsObj = BeautifulSoup(driver.page_source, "html.parser") tableList = bsObj.find_all("table", {"class":"width100 menu_header_top_emr"}) tdList = bsObj.find_all("td", {"class":"menu_header width100"}) for table in tableList: item = table.find_all("span", {"class":"results_body_text"}) print item[0].get_text().strip() + ', ' + item[1].get_text().strip() if driver.find_element_by_id('anch_16'): crawl() driver.find_element_by_id('anch_16').click() print "Going to next page" else: crawl() print "Done with last page" driver.quit() When i run it gives a weird error which is bothering me: Traceback (most recent call last): File "save.py", line 22, in <module> driver.find_element_by_id('pbG220e071f_2de75f_2d417d_2d9c61_2d027d324c8fec:_viewRoot:j_id12:search1').click() File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py", line 269, in find_element_by_id return self.find_element(by=By.ID, value=id_) File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py", line 752, in find_element 'value': value})['value'] File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py", line 236, in execute self.error_handler.check_response(response) File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/errorhandler.py", line 192, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.NoSuchElementException: Message: {"errorMessage":"Unable to find element with id 'pbG220e071f_2de75f_2d417d_2d9c61_2d027d324c8fec:_viewRoot:j_id12:search1'","request":{"headers":{"Accept":"application/json","Accept-Encoding":"identity","Connection":"close","Content-Length":"153","Content-Type":"application/json;charset=UTF-8","Host":"127.0.0.1:40423","User-Agent":"Python-urllib/2.7"},"httpVersion":"1.1","method":"POST","post":"{\"using\": \"id\", \"sessionId\": \"eb7dfa50-70a7-11e6-b125-9ff4e2dbd485\", \"value\": \"pbG220e071f_2de75f_2d417d_2d9c61_2d027d324c8fec:_viewRoot:j_id12:search1\"}","url":"/element","urlParsed":{"anchor":"","query":"","file":"element","directory":"/","path":"/element","relative":"/element","port":"","host":"","password":"","user":"","userInfo":"","authority":"","protocol":"","source":"/element","queryKey":{},"chunks":["element"]},"urlOriginal":"/session/eb7dfa50-70a7-11e6-b125-9ff4e2dbd485/element"}} Screenshot: available via screen I have since tried using an implicit wait of 60 right after i initialize the browser. 
No luck. I have also tried WebDriverWait (commented out in the code right below `driver.get("http://sam.gov")`), and it gave me a TimeoutException. The weird thing is if I do a `print driver.page_source` right after the get call, the source is fine and it contains the following code, which actually contains the element with the id that I am searching for. There is no frame or iframe either.

    <a id="pbG220e071f_2de75f_2d417d_2d9c61_2d027d324c8fec:_viewRoot:j_id12:search1" href="#" title="Search Records" onclick="if(typeof jsfcljs == 'function'){jsfcljs(document.getElementById('pbG220e071f_2de75f_2d417d_2d9c61_2d027d324c8fec:_viewRoot:j_id12'),{'pbG220e071f_2de75f_2d417d_2d9c61_2d027d324c8fec:_viewRoot:j_id12:search1':'pbG220e071f_2de75f_2d417d_2d9c61_2d027d324c8fec:_viewRoot:j_id12:search1'},'');}return false" class="button">

Answer: The element's id looks dynamically generated, so you should try a different locator. You can try using `css_selector` as below:

    driver.find_element_by_css_selector("a.button[title='Search Records']").click()

Or using `WebDriverWait` as:

    element = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.CSS_SELECTOR, "a.button[title='Search Records']")))
    element.click()

**Note**: Before finding the element, make sure it's not inside any `frame/iframe`. If it is inside a `frame/iframe`, you need to switch to that `frame/iframe` before finding the element, as in `driver.switch_to_frame("frame/iframe id or name")`.
Save and reset parameters of multilayer networks in theano Question: We can save and load objects in Python using `six.moves.cPickle`. I saved and reset the parameters for LeNet using the following code.

    # save model
    # params = layer3.params + layer2.params + layer1.params + layer0.params
    import six.moves.cPickle as pickle
    f = file('best_cnnmodel.save', 'wb')
    pickle.dump(params, f, protocol=pickle.HIGHEST_PROTOCOL)
    f.close()

    # reset parameters
    model_file = file('best_cnnmodel.save', 'rb')
    params = pickle.load(model_file)
    model_file.close()

    layer3.W.set_value(params[0].get_value())
    layer3.b.set_value(params[1].get_value())
    layer2.W.set_value(params[2].get_value())
    layer2.b.set_value(params[3].get_value())
    layer1.W.set_value(params[4].get_value())
    layer1.b.set_value(params[5].get_value())
    layer0.W.set_value(params[6].get_value())
    layer0.b.set_value(params[7].get_value())

The code seems to be OK for LeNet, but it is not elegant. For deep networks, I cannot save models using this code. What can I do in this case? Answer: You can consider using the JSON format. It is human-readable and easy to work with. (Note that Theano shared variables are not JSON-serializable as-is; dump plain values instead, e.g. `layer.W.get_value().tolist()`.) Here is an example:

    # Prepare the data
    import json

    data = {
        'L1' : { 'W': layer1.W, 'b': layer1.b },
        'L2' : { 'W': layer2.W, 'b': layer2.b },
        'L3' : { 'W': layer3.W, 'b': layer3.b },
    }
    json_data = json.dumps(data)

The `json_data` looks like this:

    {"L2": {"b": 2, "W": 17}, "L3": {"b": 2, "W": 10}, "L1": {"b": 2, "W": 1}}

    # Unpack the data
    params = json.loads(json_data)
    for k, v in params.items():
        level = int(k[1:])
        # assume you save the layers in an array, but you can use a
        # different way to store and reference the layers
        layer = layers[level]
        layer.W = v['W']
        layer.b = v['b']
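If you mainly want the pickling approach from the question to scale to arbitrary depth, a loop over the layers works too. A minimal sketch (the `layers` list is hypothetical — build it from your own network):

    import six.moves.cPickle as pickle

    layers = [layer0, layer1, layer2, layer3]  # hypothetical: your layers, in a fixed order

    # save: one flat list of raw numpy arrays, independent of network depth
    values = [p.get_value() for layer in layers for p in layer.params]
    with open('best_cnnmodel.save', 'wb') as f:
        pickle.dump(values, f, protocol=pickle.HIGHEST_PROTOCOL)

    # load: walk the layers in the same order and restore each parameter
    with open('best_cnnmodel.save', 'rb') as f:
        values = pickle.load(f)
    it = iter(values)
    for layer in layers:
        for p in layer.params:
            p.set_value(next(it))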
Error tokenizing data Question: This is my code:

    import pandas
    import datetime
    from decimal import Decimal

    file_ = open('myfile.csv', 'r')
    result = pandas.read_csv(
        file_, header=None,
        names=('sec', 'date', 'sale', 'buy'),
        usecols=('date', 'sale', 'buy'),
        parse_dates=['date'],
        iterator=True, chunksize=100, compression=None,
        engine="c",
        date_parser=lambda dt: datetime.datetime.strptime(dt, '%Y%m%d %H:%M:%S.%f'),
        converters={'sale': (lambda u: Decimal(u)),
                    'buy': (lambda u: Decimal(u))}
    )

And then I try...

    result.get_chunk()

Only to get an error like this:

    CParserError: Error tokenizing data. C error: Expected 3 fields in line 3, saw 4

From a file like this (I just show the first 4 lines - the file has no header, and all the lines have this format):

    EUR/USD,20160701 00:00:00.071,1.11031,1.11033
    EUR/USD,20160701 00:00:00.255,1.11031,1.11033
    EUR/USD,20160701 00:00:00.256,1.11025,1.11033
    EUR/USD,20160701 00:00:00.258,1.11027,1.11033
    ... > 10.000.000 lines like these

My intention is to get an object to iterate by chunks and not have the whole crap in memory (the actual file has 560mb!). I want to discard the first column (there are 4 columns, but since this file has the same value in the first column, I want to discard that column). I want to keep columns 1, 2, and 3 (discarding 0) as date, sale, and purchase price. Actually this is my first attempt with pandas, since the former solution used the standard Python csv module and took a lot of time. What am I missing? Why am I getting such an error? Answer: Try this code. To print only three columns, create a data frame; to do that, give names to the columns by adding a header line to the CSV file, with ',' as the separator.

myfile.csv:

    sec,date,sale,buy
    EUR/USD,20160701 00:00:00.071,1.11031,1.11033
    EUR/USD,20160701 00:00:00.255,1.11031,1.11033
    EUR/USD,20160701 00:00:00.256,1.11025,1.11033
    EUR/USD,20160701 00:00:00.258,1.11027,1.11033

    import pandas as pd

    data = pd.read_csv('myfile.csv', sep=',')
    df = pd.DataFrame({'date': data.date, 'sale': data.sale, 'buy': data.buy})
    print(df)

output:

           buy                   date     sale
    0  1.11033  20160701 00:00:00.071  1.11031
    1  1.11033  20160701 00:00:00.255  1.11031
    2  1.11033  20160701 00:00:00.256  1.11025
    3  1.11033  20160701 00:00:00.258  1.11027
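If you still want the low-memory, chunked iteration from the original attempt, a workaround sketch (my assumption is that the `usecols`/`converters` combination is what trips the C parser here) is to read all four columns in chunks and drop the first one afterwards:

    import pandas as pd

    reader = pd.read_csv('myfile.csv', header=None,
                         names=['sec', 'date', 'sale', 'buy'],
                         chunksize=100)
    for chunk in reader:
        chunk = chunk[['date', 'sale', 'buy']]  # discard the constant first column
        # ... convert/process each chunk here, e.g. chunk['sale'].map(Decimal)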
Customize username and password field in Django? Question: First, How can I set `min_length` for `username`? `ChachaUser._meta.get_field('username').min_length = 2` doesn't work. Second, How can I place `placeholder` for `password1` and `password2`? `forms.PasswordInput(attrs={'placeholder' : "6자리 이상"}),` doesn't work. This is my customized `User` model and `UserCreationForm`. `models.py` from django.core.validators import RegexValidator from django.contrib.auth.models import AbstractUser from django.db import models GENDER_CHOICES = ( ('M', '남'), ('F', 'μ—¬'), ) phone_regex = RegexValidator( regex=r'^\d{11}$', message=" '-' 없이 μž…λ ₯ν•΄μ£Όμ„Έμš”", ) username_regex = RegexValidator( regex=r'^[0-9a-zA-Z]*$', message='μ•„μ΄λ””λŠ” μ˜μ–΄μ™€ 숫자둜만 κ΅¬μ„±λ˜μ–΄μ•Ό ν•©λ‹ˆλ‹€.' ) class ChachaUser(AbstractUser): birth = models.DateField("생년월일") name = models.CharField( "이 름", max_length=4 ) gender = models.CharField( "μ„± 별", max_length=1, choices=GENDER_CHOICES, default='M' ) phone_number = models.CharField( "ν•Έλ“œν°", validators=[phone_regex], max_length=11 ) job = models.CharField( "직 μ—…", max_length=20, ) # python manage.py createsuperuser ν•  λ•Œ λ‚˜μ˜€λŠ” ν•­λͺ© REQUIRED_FIELDS = [ 'birth', 'name', 'gender', 'phone_number', 'job', 'email' ] ChachaUser._meta.get_field('username').verbose_name = '아이디' ChachaUser._meta.get_field('username').validators = [username_regex] ChachaUser._meta.get_field('username').max_length = 20 ChachaUser._meta.get_field('username').min_length = 2 `forms.py` from django import forms from django.contrib.auth.forms import UserCreationForm from django.contrib.auth import get_user_model class MyUserCreationForm(UserCreationForm): birth = forms.DateField( label="생년월일", widget=forms.SelectDateWidget( years=range(1970, 2015) ), ) class Meta(UserCreationForm.Meta): model = get_user_model() fields = UserCreationForm.Meta.fields + ( 'name', 'gender', 'birth', 'phone_number', 'job', ) exclude = ('email', ) widgets = { 'username' : forms.TextInput( attrs={'placeholder': 'μ•ŒνŒŒλ²³, 숫자만 κ°€λŠ₯(20자 이내)'} ), 'phone_number' : forms.TextInput( attrs={'placeholder' : "ex) 01012341234"} ), 'job' : forms.TextInput( attrs={'placeholder' : "ex) ν•œκ΅­λŒ€ μ² ν•™κ³Ό, μ„ μƒλ‹˜ λ“±(20자 이내)"} ), 'password1' : forms.PasswordInput( attrs={'placeholder' : "6자리 이상"} ), 'password2' : forms.PasswordInput( attrs={'placeholder' : "6자리 이상"} ), } Answer: For first question: from django.core import validators class ChachaUser(AbstractUser): .................. AbstractUser._meta.get_field('username').validators=[validators.MinLengthValidator(2)] For second question your couldn't change `password1` in `Meta.widgets` because it is not a model field. You can override widget in `__init__` method instead.
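For example, a minimal sketch of that `__init__` override, placed inside `MyUserCreationForm` (the field names `password1`/`password2` come from `UserCreationForm` itself):

    def __init__(self, *args, **kwargs):
        super(MyUserCreationForm, self).__init__(*args, **kwargs)
        self.fields['password1'].widget.attrs['placeholder'] = "6자리 이상"
        self.fields['password2'].widget.attrs['placeholder'] = "6자리 이상"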
print current thread in python 3 Question: I have this script:

    import threading, socket

    for x in range(800):
        send().start()

    class send(threading.Thread):
        def run(self):
            while True:
                try:
                    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                    s.connect(("www.google.it", 80))
                    s.send ("test")
                    print ("Request sent!")
                except:
                    pass

And at the place of "Request sent!" I would like to print something like: "Request sent! %s" % (the current number of the thread sending the request) What's the fastest way to do it? **--SOLVED--**

    import threading, socket

    for x in range(800):
        send(x+1).start()

    class send(threading.Thread):
        def __init__(self, counter):
            threading.Thread.__init__(self)
            self.counter = counter
        def run(self):
            while True:
                try:
                    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                    s.connect(("www.google.it", 80))
                    s.send ("test")
                    print ("Request sent! @", self.counter)
                except:
                    pass

Answer: You could pass your counting number (`x`, in this case) as a variable to your send class. Keep in mind though that `x` will start at 0, not 1.

    for x in range(800):
        send(x+1).start()

    class send(threading.Thread):
        def __init__(self, count):
            threading.Thread.__init__(self)  # required, or the thread cannot be started
            self.count = count
        def run(self):
            while True:
                try:
                    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                    s.connect(("www.google.it", 80))
                    s.send ("test")
                    print("Request sent!", self.count)
                except:
                    pass

Or, as Rob commented above in the other question, [`threading.current_thread()`](http://docs.python.org/library/threading.html#threading.current_thread) looks satisfactory.
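A tiny sketch of the `current_thread()` route, which avoids passing a counter around entirely:

    import threading

    # inside run(), the thread can simply ask for its own name
    print("Request sent!", threading.current_thread().name)  # e.g. "Thread-7"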
How to plot data from multiple files in a loop using matplotlib in python? Question: I have more than 1000 files which are .CSV (data_1.csv......data1000.csv), each containing X and Y values!

    x1  y1  x2  y2
    5.0 60  5.5 500
    6.0 70  6.5 600
    7.0 80  7.5 700
    8.0 90  8.5 800
    9.0 100 9.5 900

* * *

I have made a subplot program in Python which can give two plots (plot1-X1vsY1, Plot2-X2vsY2) at a time using one file. I need help in looping over all the files (open a file, read it, plot it, pick another file, open it, read it, plot it, then the next, until all the files in the folder get plotted). I have made a small code:

    import pandas as pd
    import matplotlib.pyplot as plt

    df1=pd.read_csv("data_csv",header=1,sep=',')
    fig = plt.figure()
    plt.subplot(2, 1, 1)
    plt.plot(df1.iloc[:,[1]],df1.iloc[:,[2]])
    plt.subplot(2, 1, 2)
    plt.plot(df1.iloc[:,[3]],df1.iloc[:,[4]])
    plt.show()

Is there any loop structure to do this task more efficiently? Answer: You can generate a list of filenames using `glob` and then plot them in a for loop.

    import glob
    import pandas as pd
    import matplotlib.pyplot as plt

    files = glob.glob('*.csv')  # file pattern, adjust to match your data files
    for file in files:
        df1 = pd.read_csv(file, header=1, sep=',')
        fig = plt.figure()
        plt.subplot(2, 1, 1)
        plt.plot(df1.iloc[:,[1]],df1.iloc[:,[2]])
        plt.subplot(2, 1, 2)
        plt.plot(df1.iloc[:,[3]],df1.iloc[:,[4]])
        plt.show() # this will stop the loop until you close the plot
How can I set infinity as an element of a matrix in python(numpy)? Question: This is the program

    import numpy as n
    m = complex('inf')
    z=n.empty([2,2] , dtype = complex)
    z=n.array(input() , dtype = complex )

but in the console when I give 'm' as an input I get the following error message: 'NameError: name 'm' is not defined' Answer: Infinity in python can be typed as:

    float('inf') / float('-inf')

or simply as a number

    1e500 (+inf)
    -1e500 (-inf)
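Since the goal is a matrix, here is a small numpy sketch along the same lines (the 2x2 shape is assumed from the question's code):

    import numpy as np

    z = np.full((2, 2), np.inf, dtype=complex)  # every element starts as inf+0j
    z[0, 0] = complex('inf')                    # or assign a single element
    print(z)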
Convert bash script to python Question: I have a bash script and I want to convert it to Python. This is the script:

    mv $1/positive/*.$3 $2/JPEGImages
    mv $1/negative/*.$3 $2/JPEGImages
    mv $1/positive/annotations/*.xml $2/Annotations
    mv $1/negative/annotations/*.xml $2/Annotations
    cut -d' ' -f1 $1/positive_label.txt > $4_trainval.txt

My problem is: I didn't find out how to paste positive_label.txt into $4_trainval.txt. This is my try; it's the first time I work with Python. Please help me make it work. Thank you.

    import sys # Required for reading command line arguments
    import os # Required for path manipulations
    from os.path import expanduser # Required for expanding '~', which stands for home folder. Used just in case the command line arguments contain "~". Without this, python won't parse "~"
    import glob
    import shutil

    def copy_dataset(arg1,arg2,arg3,arg4):
        path1 = os.path.expanduser(arg1)
        path2 = os.path.expanduser(arg2)
        frame_ext = arg3 # File extension of the patches
        pos_files = glob.glob(os.path.join(path1,'positive/'+'*.'+frame_ext))
        neg_files = glob.glob(os.path.join(path1,'negative/'+'*.'+frame_ext))
        pos_annotation = glob.glob(os.path.join(path1,'positive/annotations/'+'*.'+xml))
        neg_annotation = glob.glob(os.path.join(path1,'negative/annotations/'+'*.'+xml))

        #mv $1/positive/*.$3 $2/JPEGImages
        for x in pos_files:
            shutil.copyfile(x, os.path.join(path2,'JPEGImages'))

        #mv $1/negative/*.$3 $2/JPEGImages
        for y in neg_files:
            shutil.copyfile(y, os.path.join(path2,'JPEGImages'))

        #mv $1/positive/annotations/*.xml $2/Annotations
        for w in pos_annotation:
            shutil.copyfile(w, os.path.join(path2,'Annotations'))

        #mv $1/negative/annotations/*.xml $2/Annotations
        for z in neg_annotation:
            shutil.copyfile(z, os.path.join(path2,'Annotations'))

        #cut -d' ' -f1 $1/positive_label.txt > $4_trainval.txt
        for line in open(path1+'/positive_label.txt')
            line.split(' ')[0]

Answer: Without testing, something like this should work.

    #cut -d' ' -f1 $1/positive_label.txt > $4_trainval.txt
    positive = path1 + '/positive_label.txt'
    path4 = arg4 + '_trainval.txt'  # bash's $4_trainval.txt appends to the value of $4, so plain concatenation, not os.path.join
    with open(positive, 'r') as input_, open(path4, 'w') as output_:
        for line in input_.readlines():
            output_.write(line.split()[0] + "\n")

This code defines the two files we will work with and opens both. The first is opened in read mode, the second in write mode. For each line in the _input_ file, we write the first space-separated field to the _output_ file. Check [Reading and Writing Files in Python](http://www.pythonforbeginners.com/files/reading-and-writing-files-in-python) for more information about files in Python.
Python tkinter's entry.get() does not work, how can I fix it? Question: I am building a simple program for university. We have to convert our code to an interface. Ive managed to make the interface, but i cant seem to pass my values from Entry to the actual code. Here is my code: import sys from tkinter import * from tkinter import ttk import time from datetime import datetime now= datetime.now() d = dict() def quit(): print("Have a great day! Goodbye :)") sys.exit(0) def display(): print(str(d)) def add(*args): global stock global d stock = stock_Entry.get() Quantity = Quantity_Entry.get() if stock not in d: d[stock] = Quantity else: d[stock] += Quantity root = Tk() root.title("Homework 5 216020088") mainframe = ttk.Frame(root, padding="6 6 20 20") mainframe.grid(column=0, row=0, sticky=(N, W, E, S)) ttk.Label(mainframe, text="you are accesing this on day %s of month %s of %s" % (now.day,now.month,now.year)+" at exactly %s:%s:%s" % (now.hour,now.minute,now.second), foreground="yellow", background="Black").grid(column=0, row = 0) stock_Entry= ttk.Entry(mainframe, width = 60, textvariable="stock").grid(column=0, row = 1, sticky=W) ttk.Label(mainframe, text="Please enter the stock name").grid(column=1, row = 1, sticky=(N, W, E, S)) Quantity_Entry= ttk.Entry(mainframe, width = 60, textvariable="Quantity").grid(column=0, row = 2, sticky=W) ttk.Label(mainframe, text="Please enter the quantity").grid(column=1, row = 2, sticky=(N, W, E, S)) ttk.Button(mainframe, text="Add", command=add).grid(column=0, row=3, sticky=W) ttk.Button(mainframe, text="Display", command=display).grid(column=0, row=3, sticky=S) ttk.Button(mainframe, text="Exit", command=quit).grid(column=0, row=3, sticky=E) for child in mainframe.winfo_children(): child.grid_configure(padx=5, pady=5) root.mainloop() Answer: You cannot create the widget and grid it at the same time like this, if you still need to access it later. `stock_Entry.get()` will raise an `AttributeError` (`NoneType`). Instead of: stock_Entry= ttk.Entry(mainframe, width = 60, textvariable="stock").grid(column=0, row = 1, sticky=W) use: stock_Entry = ttk.Entry(mainframe, width = 60, textvariable="stock") stock_Entry.grid(column=0, row = 1, sticky=W) Same for the `Quantity_Entry`: Quantity_Entry = ttk.Entry(mainframe, width = 60, textvariable="Quantity") Quantity_Entry.grid(column=0, row = 2, sticky=W) ...and it works: [![enter image description here](http://i.stack.imgur.com/q6i7A.png)](http://i.stack.imgur.com/q6i7A.png)
cmd module - python Question: I am trying to build a python shell using cmd module. from cmd import Cmd import subprocess import commands import os from subprocess import call class Pirate(Cmd): intro = 'Welcome to shell\n' prompt = 'platform> ' pass if __name__ == '__main__': Pirate().cmdloop() I am trying to build a shell using python - cmd module. I am trying to build these two functionalities. Welcome to shell platform> ls platform> cd .. like if I want to perform ls - list all files from that directory in my python shell or cd .. - go back to prev directory Can anyone help in this? I tried using subprocess library.. but didn't get it working. Appreciate your help ! Ref Doc: <https://docs.python.org/3/library/cmd.html> Answer: I have a hard time trying to figure out why you would need such a thing, but here's my attempt: import subprocess from cmd import Cmd class Pirate(Cmd): intro = 'Welcome to shell\n' prompt = 'platform> ' def default(self, line): # this method will catch all commands subprocess.call(line, shell=True) if __name__ == '__main__': Pirate().cmdloop() The main point is to use `default` method to catch all commands passed as input.
Easily editing base class variables from inherited class Question: ### How does communication between base classes and inherited classes work? I have a data class in my Python code (storing all important values, duh). I tried inheriting new subclasses from the _data base class_; everything worked fine except the fact that the classes were not actually communicating (when one class variable was changed in a subclass, the class attribute **was not** changed in the base class nor in any other subclass). I guess I just failed to understand how inheritance works. **My question is**: Does inheritance keep any connection to the base classes, or are the values set at the time of inheritance? If there is any connection, how do you easily manipulate base class variables from a subclass (I tried it with the _cls_ variable to access base class variables, but it didn't work out)? ### Example

    class Base:
        x = 'baseclass var' # The value I want to edit

    class Subclass(Base):
        @classmethod(cls)
        ???edit_base_x_var_here??? # This is the part I don't know

Answer: Well, you could do that this way:

    class Base:
        x = 'baseclass var'  # The value I want to edit

    class Subclass(Base):
        @classmethod
        def change_base_x(cls):
            Base.x = 'nothing'

    print Subclass.x
    Subclass.change_base_x()
    print Subclass.x

Furthermore, you don't have to use `@classmethod`; it could be a `staticmethod`, because you don't need the current class object `cls`:

    class Base:
        x = 'baseclass var'  # The value I want to edit

    class Subclass(Base):
        @staticmethod
        def change_base_x():
            Base.x = 'nothing'

EDITED: As for whether there is another way: yes, there is, but it's not as pretty. Note that if you want to change a base class variable, the change is effectively global anyway, so assigning through `Base.x` as above is the cleanest way to achieve it.
Python: indexing letters of string in a list Question: I would like to ask if there is a way how to get exact letters of some string stored in a list? I'm working with DNA strings, get them from FASTA file using BioPython SeqIO and store them as strings in a list. In next step I will convert it to numerical sequence (called genomic signals). But as novice in Python I don't know how to obtain it from the list correctly. Should I use different data type? In Maltab I used: a=1+1i;c=-1-1i;g=-1+1i;t=1-1i; %base values definition for i=1:number of sequences length_of_sequence(i)=length(sequence{1,i}); temp=zeros(1,length_of_sequence(i),'double'); temp(sequence{i}=='A')=angle(a); temp(sequence{i}=='C')=angle(c); temp(sequence{i}=='G')=angle(g); temp(sequence{i}=='T')=angle(t); KontigNumS{i,1}=cumsum(temp); %cumulated phase of whole vector end what creates a vector and replace zeros with according values. I wasn't able to find a similar question. Thanks for replies. My python code: #Dependencies from Bio import SeqIO #fasta loading import cmath #complex numbers import numpy as np #Open FASTA file new variable lengths=list() sequences=list() handle=open("F:\GC_Assembler_Python\xx.fasta","r") for record in SeqIO.parse(handle, "fasta"): print(record.id) print(len(record.seq)) lengths.append(len(record.seq)) sequences.append(str(record.seq)) #Convert to genomic signals a=complex(1,1) c=complex(-1,-1) g=complex(-1,1) t=complex(1,-1) I stopped here. Answer: I don't know how MATLAB does it. In Python you can access any position in a string without converting to a list: DNA = "ACGTACGTACGT" print(DNA[2]) # outputs "G", the third base If you want to store "strings in a list" you can do this: DNA_list = ["AAAAAA", "CCCCC", "GGGGG", "TTTTT"] print(DNA_list[0][0]) # outputs "A", the first "A" of the first sequence print(DNA_list[1][0]) # outputs "C", the first "C" of the second sequence
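As a follow-up to the MATLAB snippet in the question, a rough Python port (an untested sketch; the base-to-angle mapping is taken from the a/c/g/t definitions there):

    import cmath
    import numpy as np

    # phases of the MATLAB base values: a=1+1i, c=-1-1i, g=-1+1i, t=1-1i
    angles = {'A': cmath.phase(1+1j), 'C': cmath.phase(-1-1j),
              'G': cmath.phase(-1+1j), 'T': cmath.phase(1-1j)}

    def cumulated_phase(sequence):
        # strings index letter by letter, so no list conversion is needed
        return np.cumsum([angles[base] for base in sequence.upper()])

    print(cumulated_phase("ACGTACGT"))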
Python - Parsing Json format input Question: I need to parse data that comes from another program in JSON format:

    import json
    input = '''
    Array
    (
        [error] => Array
            (
            )
        [result] => Array
            (
                [0] => Person Object
                    (
                        [arr:Person:private] => Array
                            (
                                [cf] => DRGMRO75P03G273O
                                [first_name] => Mario
                                [last_name] => Dragoni
                                [email] => [email protected]
                                [phone] => 558723
                                [uid] => dragom
                                [source] => USRDATA
                            )
                    )
            )
    )
    '''

I tried:

    data = json.loads(input)

But I get: **ValueError:** No JSON object could be decoded Perhaps the fault is due to a lack of field separators? Edit: The input was generated by a _php_ **print_r**, I replaced it with **json_encode** Answer: Your function call is correct, but the provided string is not valid JSON; it is in fact a PHP print_r dump of a mixed array and class object. You can parse real JSON in Python like this:

    import json
    j = json.loads('{"one" : "1", "two" : "2", "three" : "3"}')
    print j['two']
Custom date string to Python date object Question: I am using Scrapy to parse data and getting date in `Jun 14, 2016 ` format, I have tried to parse it with `datetime.strftime` but what approach should I use to convert custom date strings and what to do in my case. **UPDATE** I want to parse UNIX timestamp to save in database. Answer: Something like this should work: import time import datetime datestring = "September 2, 2016" unixdatetime = int(time.mktime(datetime.datetime.strptime(datestring, "%B %d, %Y").timetuple())) print(unixdatetime) Returns: `1472792400`
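One caveat worth adding: the question's dates use an abbreviated month name ("Jun 14, 2016"), which needs the `%b` directive rather than `%B`:

    import time
    import datetime

    datestring = "Jun 14, 2016"
    unixdatetime = int(time.mktime(datetime.datetime.strptime(datestring, "%b %d, %Y").timetuple()))
    print(unixdatetime)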
Pythonic way to find if an IP in a list belongs to a different subnet Question: I have a script that generates the configuration for some campus wireless mobility switches. The user needs to supply a set of IP addresses for the configuration. Among other constraints, these IP's must all be in the same /24 subnet (always /24). To avoid typos where an IP is in a different subnet (which messes up the configuration and it happened already), I would like to tell the user which IP is at fault. I wrote the following, but I feel there could be a better way to do it. Suppose the list is this, where the 3rd IP is wrong: ips = ['10.0.0.1', '10.0.0.2', '10.0.10.3', '10.0.0.4', '10.0.0.5'] The following gets the job done: subnets = set() for ip in ips: subnets.add('.'.join(ip.split('.')[0:3])) # This would result in subnet being set(['10.0.10', '10.0.0']) if len(subnets) > 1: seen_subnets = defaultdict(list) for sn in subnets: for ip in ips: if sn in ip: seen_subnets[sn].append(ip) suspect = '' for sn in seen_subnets.keys(): if len(seen_subnets[sn]) == 1: suspect = seen_subnets[sn][0] if not suspect: # Do something to tell the user one or more IP's are incorrect # but I couldn't tell which ones **NOTE:** The smallest list that I could have has 3 items, and I'm basing this on the assumption that probably most if not all mistakes will be just 1 IP. Is there perhaps a more straightforward way to do this? I feel the solution has to be based on using a `set` either way but it didn't seem to me any of its methods would be more helpful than this. Answer: Something like this would work: from itertools import groupby ips = ['10.0.0.1', '10.0.0.2', '10.0.10.3', '10.0.0.4', '10.0.0.5'] def get_subnet(ip): return '.'.join(ip.split('.')[:3]) groups = {} # group ips by subnet for k, g in groupby(sorted(ips), key=get_subnet): groups[k] = list(g) # Sort the groups by most common first for row, (subnet, ips) in enumerate( sorted(groups.iteritems(), key=lambda (k, v): len(v), reverse=True) ): if row > 0: # not the most common subnet print('You probably entered these incorrectly: {}'.format(ips))
Python Failed to Verify any CRLs for SSL/TLS connections Question: In Python 3.4, a `verify_flags` that can be used to check if a certificate was revoked against CRL, by set it to `VERIFY_CRL_CHECK_LEAF` or `VERIFY_CRL_CHECK_CHAIN`. I wrote a simple program for testing. But on my systems, this script failed to verify ANY connections even if it's perfectly valid. import ssl import socket def tlscheck(domain, port): addr = domain ctx = ssl.create_default_context() ctx.options &= ssl.CERT_REQUIRED ctx.verify_flags = ssl.VERIFY_CRL_CHECK_LEAF ctx.check_hostname = True #ctx.load_default_certs() #ctx.set_default_verify_paths() #ctx.load_verify_locations(cafile="/etc/ssl/certs/ca-certificates.crt") sock = ctx.wrap_socket(socket.socket(), server_hostname=addr) sock.connect((addr, port)) import pprint print("TLS Ceritificate:") pprint.pprint(sock.getpeercert()) print("TLS Version:", sock.version()) print("TLS Cipher:", sock.cipher()[0]) exit() tlscheck("stackoverflow.com", 443) My code always quits with `ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:645)`. First I suspected that the certificate database was not loaded properly. But after I tried `load_default_certs()`, `set_default_verify_paths()`, `load_verify_locations(cafile="/etc/ssl/certs/ca-certificates.crt")`, and none of them worked. Also, `ctx.options &= ssl.CERT_REQUIRED` works as expected, it can tell if a certificate chain is trusted or not. But not for CRLs... It also indicates that my CAs are correct. I know "/etc/ssl/certs/ca-certificates.crt" contains valid CAs. What is the problem? Answer: To check against CRL you have to manually download the CRL and put them in the right place so that the underlying OpenSSL library will find it. There is no automatic downloading of CRL and specifying the place where to look for the CRL is not intuitive either. What you can do: * get the CRL distribution points from the certificate. For stackoverflow.com one is <http://crl3.digicert.com/sha2-ha-server-g5.crl> * download the current CRL from there * convert it from the DER format to PEM format, because this is what is expected in the next step: `openssl crl -inform der -in sha2-ha-server-g5.crl > sha2-ha- server-g5.crl.pem` * add the location to the verify_locations: `ctx.load_verify_locations(cafile="./sha2-ha-server-g5.crl.pem")` This way you can verify the certificate against the CRL.
missing last bin in histogram plot from matplot python Question: I'm trying to draw a histogram based on my values

    x = ['3', '1', '4', '1', '5', '9', '2', '6', '5', '3', '5', '2', '3', '4', '5', '6', '4', '2', '0', '1', '9', '8', '8', '8', '8', '8', '9', '3', '8', '0', '9', '5', '2', '5', '7', '2', '0', '1', '0', '6', '5']
    x_num = [int(i) for i in x]
    key = '0123456789'
    for i in key:
        print(i," count =>",x.count(i))
    plt.hist(x_num, bins=[0,1,2,3,4,5,6,7,8,9])

[![enter image description here](http://i.stack.imgur.com/wZK0l.png)](http://i.stack.imgur.com/wZK0l.png) The last 2 numbers' "8, 9" bins should have distribution counts of 6 and 4, but in the histogram it combines 8 and 9 and gets a value of 10 instead of separating them. The total number of bins should be 10, but it is only giving me a graph with 9. How can I separate them and break up 8 and 9? Answer: The bin edges you pass define only 9 intervals, and matplotlib's last bin is closed on both ends, so the values 8 and 9 both land in the final [8, 9] bin. Add one more edge, 10, so that 9 gets a bin of its own:

    import matplotlib.pyplot as plt

    x = ['3', '1', '4', '1', '5', '9', '2', '6', '5', '3', '5', '2', '3', '4', '5', '6', '4', '2', '0', '1', '9', '8', '8', '8', '8', '8', '9', '3', '8', '0', '9', '5', '2', '5', '7', '2', '0', '1', '0', '6', '5']

    x_num = [int(i) for i in x]
    key = '0123456789'
    for i in key:
        print(i, " count =>", x.count(i))
    plt.hist(x_num, bins=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
    plt.show()
Whitenoise, Mezzanine, Django -ImportError: cannot import name ManifestStaticFilesStorage Question: I am trying to deploy my mezzanine project on heroku. The last error gives me an ultimate stack- ImportError: cannot import name ManifestStaticFilesStorage. Here is my core project structure: β”œβ”€β”€ deploy β”‚Β Β  β”œβ”€β”€ crontab β”‚Β Β  β”œβ”€β”€ gunicorn.conf.py.template β”‚Β Β  β”œβ”€β”€ local_settings.py.template β”‚Β Β  β”œβ”€β”€ nginx.conf β”‚Β Β  └── supervisor.conf β”œβ”€β”€ dev.db β”œβ”€β”€ fabfile.py β”œβ”€β”€ flat β”‚Β Β  β”œβ”€β”€ admin.py β”‚Β Β  β”œβ”€β”€ admin.pyc β”‚Β Β  β”œβ”€β”€ __init__.py β”‚Β Β  β”œβ”€β”€ __init__.pyc β”‚Β Β  β”œβ”€β”€ models.py β”‚Β Β  β”œβ”€β”€ models.pyc β”‚Β Β  β”œβ”€β”€ tests.py β”‚Β Β  β”œβ”€β”€ views.py β”‚Β Β  └── views.pyc β”œβ”€β”€ __init__.py β”œβ”€β”€ __init__.pyc β”œβ”€β”€ manage.py β”œβ”€β”€ Procfile β”œβ”€β”€ README.md β”œβ”€β”€ requirements.txt β”œβ”€β”€ runtime.txt β”œβ”€β”€ settings.py β”œβ”€β”€ staticfiles -> mezzanine_heroku/staticfiles β”œβ”€β”€ urls.py β”œβ”€β”€ urls.pyc └── wsgi.py wsgi.py: import os os.environ.setdefault("DJANGO_SETTINGS_MODULE", "settings") from django.core.wsgi import get_wsgi_application from whitenoise.django import DjangoWhiteNoise application = get_wsgi_application() application = DjangoWhiteNoise(application) Procfile: web: gunicorn wsgi Traceback from heroku-logs: ImportError: cannot import name ManifestStaticFilesStorage 2016-09-02T18:02:36.124458+00:00 app[web.1]: Traceback (most recent call last): 2016-09-02T18:02:36.124494+00:00 app[web.1]: File "/app/.heroku/python/bin/gunicorn", line 11, in <module> 2016-09-02T18:02:36.124529+00:00 app[web.1]: sys.exit(run()) 2016-09-02T18:02:36.124558+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 74, in run 2016-09-02T18:02:36.124620+00:00 app[web.1]: WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run() 2016-09-02T18:02:36.124646+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/gunicorn/app/base.py", line 192, in run 2016-09-02T18:02:36.124706+00:00 app[web.1]: super(Application, self).run() 2016-09-02T18:02:36.124709+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/gunicorn/app/base.py", line 72, in run 2016-09-02T18:02:36.124754+00:00 app[web.1]: Arbiter(self).run() 2016-09-02T18:02:36.124800+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/gunicorn/arbiter.py", line 218, in run 2016-09-02T18:02:36.124858+00:00 app[web.1]: self.halt(reason=inst.reason, exit_status=inst.exit_status) 2016-09-02T18:02:36.124862+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/gunicorn/arbiter.py", line 331, in halt 2016-09-02T18:02:36.124962+00:00 app[web.1]: self.stop() 2016-09-02T18:02:36.124966+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/gunicorn/arbiter.py", line 381, in stop 2016-09-02T18:02:36.125047+00:00 app[web.1]: time.sleep(0.1) 2016-09-02T18:02:36.125057+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/gunicorn/arbiter.py", line 231, in handle_chld 2016-09-02T18:02:36.125138+00:00 app[web.1]: self.reap_workers() 2016-09-02T18:02:36.125141+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/site-packages/gunicorn/arbiter.py", line 506, in reap_workers 2016-09-02T18:02:36.125241+00:00 app[web.1]: raise HaltServer(reason, self.WORKER_BOOT_ERROR) 2016-09-02T18:02:36.125304+00:00 app[web.1]: gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 
3> 2016-09-02T18:02:36.182785+00:00 heroku[web.1]: State changed from starting to crashed 2016-09-02T18:02:36.175782+00:00 heroku[web.1]: Process exited with status 1 Most confusing is the `ImportError: cannot import name ManifestStaticFilesStorage` error. Answer: ManifestStaticFilesStorage was introduced in Django 1.7. Are you using an older version of Django? If so, you should upgrade to a [supported version](https://www.djangoproject.com/download/#supported-versions). It's still possible to use [WhiteNoise 2.0.6](http://whitenoise.evans.io/en/legacy-2.x/) with older versions of Django, but this is not supported by either me or the Django team.
How to configure a package in PyPI to install only with pip3 Question: I distributed my package written in Python 3 on PyPI. It can be installed by both `pip2` and `pip3`. How can I configure the package to only be available in Python 3; i.e. to install only with `pip3`? I've already added these classifiers in the `setup.py` file:

    classifiers=[
        ...
        # Supported Python versions.
        'Programming Language :: Python :: 3',
        'Programming Language :: Python :: 3.4',
        'Programming Language :: Python :: 3.5',
        ...
    ]

But it still can be installed by `pip2`. Answer: I'm not sure if such an option exists. What you could do though is manually enforce it by checking that the version of Python it is being installed into is at least the version you want to require:

    from sys import version_info

    class NotSupportedException(BaseException): pass

    if version_info.major < 3:
        raise NotSupportedException("Only Python 3.x Supported")

While this won't stop the package being reached from `pip2`, it should stop any users trying to use an old version of Python.
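As an aside — and assuming your packaging toolchain is recent enough — setuptools also understands a `python_requires` argument in `setup.py`, which sufficiently new versions of pip enforce at install time (a minimal sketch; the package name is hypothetical):

    from setuptools import setup

    setup(
        name='mypackage',       # hypothetical package name
        version='1.0',
        python_requires='>=3',  # newer pip refuses to install this on Python 2
    )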
Trying to output the x most common words in a text file Question: I'm trying to write a program that will read in a text file and output a list of most common words (30 as the code is written now) along with their counts. so something like: word1 count1 word2 count2 word3 count3 ... ... ... ... wordn countn in order of count1 > count2 > count3 >... >countn. This is what I have so far but I cannot get the sorted function to perform what I want. The error I get now is: TypeError: list indices must be integers, not tuple I'm new to python. Any help would be appreciated. Thank you. def count_func(dictionary_list): return dictionary_list[1] def print_top(filename): word_list = {} with open(filename, 'r') as input_file: count = 0 #best for line in input_file: for word in line.split(): word = word.lower() if word not in word_list: word_list[word] = 1 else: word_list[word] += 1 #sorted_x = sorted(word_list.items(), key=operator.itemgetter(1)) # items = sorted(word_count.items(), key=get_count, reverse=True) word_list = sorted(word_list.items(), key=lambda x: x[1]) for word in word_list: if (count > 30):#19 break print "%s: %s" % (word, word_list[word]) count += 1 # This basic command line argument parsing code is provided and # calls the print_words() and print_top() functions which you must define. def main(): if len(sys.argv) != 3: print 'usage: ./wordcount.py {--count | --topcount} file' sys.exit(1) option = sys.argv[1] filename = sys.argv[2] if option == '--count': print_words(filename) elif option == '--topcount': print_top(filename) else: print 'unknown option: ' + option sys.exit(1) if __name__ == '__main__': main() Answer: Use the [`collections.Counter`](https://docs.python.org/3/library/collections.html#collections.Counter) class. from collections import Counter for word, count in Counter(words).most_common(30): print(word, count) Some unsolicited advice: Don't make so many functions until everything is working as one big block of code. Refactor into functions _after_ it works. You don't even need a main section for a script this small.
Python: Ignore case sensitivity for MYSQL column names Question: I am querying my MySQL DB with following code. import MySQLdb db = MySQLdb.connect("localhost","user","password","test" ) cursor = db.cursor(MySQLdb.cursors.DictCursor) sql = "select * from student" try: cursor.execute(sql) results = cursor.fetchall() for row in results: print row['firstname'] except: print "Error: unable to fecth data" db.close() Problem is that the column name specified in `row['firstName']` does not match and is case sensitive. Is there any way to ignore case sensitivity? Answer: you can use this: sequence = cursor.column_names <https://dev.mysql.com/doc/connector-python/en/connector-python-api- mysqlcursor-column-names.html> Then compare your string to the actual name with .lower() and act accordingly
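Since the question uses MySQLdb's `DictCursor`, another small sketch that sidesteps the driver API entirely is to lower-case the keys of each row yourself (`row_lower` is a hypothetical helper, not part of MySQLdb):

    def row_lower(row):
        # normalize every column name so lookups are case-insensitive
        return dict((k.lower(), v) for k, v in row.items())

    for row in results:
        print row_lower(row)['firstname']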
Python background shell script communication Question: I have 2 python scripts, `foo.py` and `bar.py`. I am running foo.py in the background using python foo.py & Now I want to run `bar.py` and use the stdout from this file to trigger script inside foo.py. Is this possible? I'm using Ubuntu 16.04 LTS. Answer: You could use [UNIX named pipe](https://en.wikipedia.org/wiki/Named_pipe) for that. First, you create named pipe object by executing `mkfifo named_pipe` in the same directory, where you have your python files. Your `foo.py` then could look like this: while True: for line in open('named_pipe'): print 'Got: [' + line.rstrip('\n') + ']' And your `bar.py` could look like this: import sys print >>open('named_pipe', 'wt'), sys.argv[-1] So, you run your consumer process like this: `python foo.py &`. And finally, each time you execute `python bar.py Hello`, you will see the message `Got: [Hello]` in your console. UPD: unlike Paul's answer, if you use _named_ pipe, you don't have to start one of the processes from inside the other.
Getting the root word using the Wordnet Lemmatizer Question: I need to find a common root word matched for all related words for a keyword extractor. How can I convert words to the same root using the Python NLTK lemmatizer?

* Eg:
  1. generalized, generalization -> general
  2. optimal, optimized -> optimize (maybe)
  3. configure, configuration, configured -> configure

The Python NLTK lemmatizer gives 'generalize' for 'generalized' and 'generalizing' when the part-of-speech (pos) tag parameter is used, but not for 'generalization'. Is there a way to do this? Answer: Use SnowballStemmer:

    >>> from nltk.stem.snowball import SnowballStemmer
    >>> stemmer = SnowballStemmer("english")
    >>> print(stemmer.stem("generalized"))
    general
    >>> print(stemmer.stem("generalization"))
    general

> Note: Lemmatisation is closely related to stemming. The difference is that a stemmer operates on a single word without knowledge of the context, and therefore cannot discriminate between words which have different meanings depending on part of speech.

A general issue I have seen with lemmatizers is that they identify even bigger words as _lemmas_. Example: in the WordNet Lemmatizer (checked in NLTK),

* Generalized => Generalize
* Generalization => Generalization
* Generalizations => Generalization

No POS tag was given as input in the above cases, so the words were always treated as _nouns_.
matplotlib.use required before other imports clashes with pep8. Ignore or fix? Question: I have a pythonscript that starts like this: #!/usr/bin/env python import matplotlib matplotlib.use("Agg") from matplotlib.dates import strpdate2num import numpy as np import pylab as pl from cmath import rect, phase It works like a charm, but my editor complains: `E402 module level import not at top of file [pep8]`. If I move the `matplotlib.use("Agg")` down, the script will not work. Should I just ignore the error? Or is there a way to fix this? **EDIT :** I'm aware that PEP8 says that this is only a suggestion and it may be ignored, but I'm hoping that there exists a nice way to initialise modules without breaking PEP8 guidelines, as I don't think I can make my editor ignore this rule on a per-file-basis. **EDIT2 :** I'm using Atom with linter-pylama Answer: The solution depends on the `linter` that is being used. In my case I am using `pylama` The manual for this `linter` suggests adding `# noqa` to the end of a line containing an error you wish to suppress. Other linters will have different mechanisms.
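Applied to the script from the question, that would look like the sketch below — only the imports that come after the `use()` call need the marker:

    #!/usr/bin/env python
    import matplotlib
    matplotlib.use("Agg")

    from matplotlib.dates import strpdate2num  # noqa
    import numpy as np  # noqa
    import pylab as pl  # noqa
    from cmath import rect, phase  # noqa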
Python Shellexecute windows api with Ctypes over TCP/IP Question: I have a question about running windows APIs over the TCP/IP protocol. For example, I want to bring a remote machine's cmd.exe to another machine (like Netcat, fully simulating cmd.exe over TCP/IP). I searched online for doing it with python but couldn't find anything helpful. I can do this using subprocess and other python capabilities, but that approach has a user-interface problem. I used this kind of code:

    import socket
    import subprocess
    import ctypes

    HOST = '192.168.1.22'
    PORT = 443

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((HOST, PORT))
    while 1:
        # send initial cmd.exe banner to remote machine
        s.send(ctypes.windll.Shell32.ShellExecuteA(None, "open", "cmd.exe", None, None, 0))
        # accept user input
        data = s.recv(1024)
        # pass it again to the ShellExecute api to execute and return output to the
        # remote machine, fully simulating the original CMD over TCP/IP
    s.close()

Look at this picture: [Running CMD over TCP/IP using the NetCat tool](http://i.stack.imgur.com/qI48a.jpg)

Answer: I'm not sure exactly what you need based on your description, but I'll try to help you :). If all you want is to activate cmd.exe on a remote machine with a port being open, you may want to look into a "server script" that will execute any command passed to it, i.e.:

    import os
    import socketserver


    class MyTCPHandler(socketserver.BaseRequestHandler):
        """
        The RequestHandler class for our server.

        It is instantiated once per connection to the server, and must
        override the handle() method to implement communication to the client.
        """

        def handle(self):
            # self.request is the TCP socket connected to the client
            self.data = self.request.recv(1024).strip()
            print("{} wrote:".format(self.client_address[0]))
            print(self.data)
            error_code = os.system(self.data.decode('UTF-8'))
            # send back the error code (as bytes) so we know if it executed correctly
            self.request.sendall(str(error_code).encode('UTF-8'))


    if __name__ == '__main__':
        HOST, PORT = "localhost", 9999
        # Create the server, binding to localhost on port 9999
        server = socketserver.TCPServer((HOST, PORT), MyTCPHandler)
        # Activate the server; this will keep running until you
        # interrupt the program with Ctrl-C
        server.serve_forever()

Ref: <https://docs.python.org/3.4/library/socketserver.html>

I don't need to mention that this is really not secure... if you really want to pass it to python on the server side, I think I would put everything received on the socket in a tmp file and execute python over it, like Ansible does.

Let me know if that helped you!
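A minimal client sketch to exercise that server (the host, port, and `dir` command are illustrative, matching the values used in the answer):

    import socket

    # connect, send one shell command, and read back the error code
    with socket.create_connection(('localhost', 9999)) as sock:
        sock.sendall(b'dir')
        print(sock.recv(1024))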
What function should I use to fix my code? Question: I'm an amateur trying to code a simple troubleshooter in Python but I'm not sure what function I need to use to stop this from happening... [enter image description here](http://i.stack.imgur.com/XJsB1.png)

What should I do so the code doesn't continue running if the user inputs yes?

    TxtFile = open('Answers.txt')
    lines = TxtFile.readlines()
    import random

    def askUser(question):
        answer = input(question + "? ").lower()
        Split = answer.split()
        if any(word in Split for word in KW1):
            return False
        elif any(word in Split for word in KW2):
            return True
        else:
            print("Please answer yes or no.")
            return askUser(question)

    KW1 = ["didn't", "no", 'nope', 'na']  # NO
    KW2 = ["did", "yes", "yeah", "ya", "oui", "si"]  # YES

    print(lines[0])
    print("Nice to meet you, " + input("What is your name? "))
    print("Welcome to my troubleshooter")

    # This is the menu to make the user experience better
    shooter = True
    while shooter:
        print('\n\n1.Enter troubleshooter\n2.Exit\n\n')
        shooter = input('Press enter to continue: ')
        if shooter == ('2'):
            print('Ok bye')
            break
        words = ('tablet', 'phone', 's7 edge')
        while True:
            question = input('What type of device do you have?: ').lower()
            if any(word in question for word in words):
                print("Ok, we can help you")
                break
            else:
                print("Either we dont support your device or your answer is too vague")
        if askUser("Have you tried charging your phone"):
            print("It needs to personally examined by Apple")
        else:
            if askUser("Is it unresponsive"):
                print(lines[0])
            else:
                print("Ok")
        if askUser("Are you using IOS 5.1 or lower"):
            print(lines[1])
        else:
            if askUser("Have you tried a hard reboot"):
                print(lines[2])
            else:
                if askUser("Is your device jailbroken"):
                    print(lines[3])
                else:
                    if askUser("Do you have a iPhone 5 or later"):
                        print(lines[4])
                    else:
                        print(lines[5])
        print('Here is your case number, we have stored this on our system')
        print(random.random())

Here is my code for reference.

Edit: Here is the problem [enter image description here](http://i.stack.imgur.com/XJsB1.png) It should just end the code there but it doesn't. I'm not sure how I can fix it.

Answer: I would recommend moving the following:

    if askUser("Are you using IOS 5.1 or lower"):
        print(lines[1])
    else:
        if askUser("Have you tried a hard reboot"):
            print(lines[2])
        else:
            if askUser("Is your device jailbroken"):
                print(lines[3])
            else:
                if askUser("Do you have a iPhone 5 or later"):
                    print(lines[4])
                else:
                    print(lines[5])
    print('Here is your case number, we have stored this on our system')
    print(random.random())

into the else part of:

    if askUser("Is it unresponsive"):

Hope this helps.
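One reading of that suggestion as a sketch (the indentation is the whole change; the statements are copied from the question):

    if askUser("Have you tried charging your phone"):
        print("It needs to personally examined by Apple")
    else:
        if askUser("Is it unresponsive"):
            print(lines[0])
        else:
            print("Ok")
            # this chain now only runs when the earlier branches did not
            # already end the dialogue
            if askUser("Are you using IOS 5.1 or lower"):
                print(lines[1])
            else:
                if askUser("Have you tried a hard reboot"):
                    print(lines[2])
                else:
                    if askUser("Is your device jailbroken"):
                        print(lines[3])
                    else:
                        if askUser("Do you have a iPhone 5 or later"):
                            print(lines[4])
                        else:
                            print(lines[5])
            print('Here is your case number, we have stored this on our system')
            print(random.random())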
How do I increase the contrast of an image in Python OpenCV Question: I am new to Python OpenCV. I have read some documents and answers [here](http://stackoverflow.com/questions/19363293/whats-the-fastest-way-to-increase-color-image-contrast-with-opencv-in-python-c) but I am unable to figure out what the following code means:

    if (self.array_alpha is None):
        self.array_alpha = np.array([1.25])
        self.array_beta = np.array([-100.0])

    # add a beta value to every pixel
    cv2.add(new_img, self.array_beta, new_img)

    # multiply every pixel value by alpha
    cv2.multiply(new_img, self.array_alpha, new_img)

I have come to know that `Basically, every pixel can be transformed as X = aY + b where a and b are scalars.` Basically, I have understood this. However, I did not understand the code and how to increase contrast with it.

Till now, I have managed to simply read the image using

    img = cv2.imread('image.jpg', 0)

Thanks for your help

Answer: The best explanation for `X = aY + b` (in fact, `f(x) = ax + b`) is provided at <http://math.stackexchange.com/a/906280/357701>

A simpler one, which just adjusts the lightness/luma/brightness for contrast, is below:

    import cv2

    img = cv2.imread('test.jpg')
    cv2.imshow('test', img)
    cv2.waitKey(1000)
    imghsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    imghsv[:,:,2] = [[max(pixel - 25, 0) if pixel < 190 else min(pixel + 25, 255)
                      for pixel in row] for row in imghsv[:,:,2]]

    cv2.imshow('contrast', cv2.cvtColor(imghsv, cv2.COLOR_HSV2BGR))
    cv2.waitKey(1000)
    raw_input()
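For the `X = aY + b` transform from the question itself, OpenCV also has a single-call form, `cv2.convertScaleAbs`; a minimal sketch, with the alpha and beta values taken from the question's arrays rather than tuned for any particular image:

    import cv2

    img = cv2.imread('image.jpg')
    # every pixel becomes alpha * pixel + beta, clipped to [0, 255]
    adjusted = cv2.convertScaleAbs(img, alpha=1.25, beta=-100)
    cv2.imwrite('adjusted.jpg', adjusted)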