probability of T total eyes when throwing N dice with S sides
Question: I want to calculate the probability of the event that the sum of all eyes of
`n` dice with `s` sides (numbered from 1 to `s`) is equal to `t`. My language
is Python 3.
My current approach is pretty much a try-and-count solution and only works for
small numbers (running `probability(10, 10, 50)` already ate all my RAM and
forced me to hard reset):
    import itertools

    def probability(n, s, t):
        all_rolls = list(itertools.product(range(1, s+1), repeat=n))
        target_rolls = [l for l in all_rolls if sum(l) == t]
        return round(len(target_rolls)/len(all_rolls), 4)
But I honestly don't know how else to solve this. Can you please help me to
get on the right track?
Answer: First off: the total number of possible roll combinations will always be `s**n`, so you
don't need to store a list of all possibilities in order to get its length.
Similarly you can just keep a running total of desired outcomes instead of
keeping a list of them to save on memory space but it still won't speed up the
function a whole lot:
    def probability(n, s, t):
        all_rolls = itertools.product(range(1, s+1), repeat=n)  # no list, leave it a generator
        target_rolls = sum(1 for l in all_rolls if sum(l) == t)  # just total them up
        return round(target_rolls/s**n, 4)
A much more efficient way of calculating the possibilities is with a `dict`
and some clever iteration. Each dictionary uses the roll total as key and its
frequency as value; on each iteration `prev` holds this dict for the previous
X dice and `cur` is built from it by adding one more die:
    import collections

    def probability(n, s, t):
        prev = {0: 1}  # previous roll total is 0 the first time
        for _ in range(n):
            cur = collections.defaultdict(int)  # current frequencies
            for r, times in prev.items():
                for i in range(1, s+1):
                    # if r occurred `times` times in the last iteration then
                    # r+i has `times` more possibilities for the current iteration
                    cur[r+i] += times
            prev = cur  # use this for the next iteration
        return cur[t] / s**n
        #return round(cur[t] / s**n, 4)
note 1: since `cur` is a defaultdict, trying to look up a total that is not
possible with the given input will return 0
note 2: since this method puts together a dictionary with all possible
outcomes you can return `cur` and do the calculation for multiple different
possible outcomes on the same dice rolls.
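As a quick sketch of note 2, assuming a variant of the function that returns `cur` itself (called `distribution` here, a hypothetical name):

    dist = distribution(10, 6)  # hypothetical variant of probability() that returns `cur`
    total = 6 ** 10
    for t in (30, 35, 40):
        print(t, dist[t] / total)  # several outcomes from a single pass over the dice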
|
How to test class methods from outside the class in Python?
Question: I am trying to test each method in a class, from another module. So here is
the class.
    #newmodule
    class test:
        def atest(a, b):
            return a
        def btest(a, b):
            return b
and in the other module, I am attempting to do:
    import unittest
    import newmodule

    test.atest(5,4).assert not errors
    test.atest(7,9).assert not errors
Note: I'm sure there are all sorts of errors here, but I just mocked this up
as an example. The main question I have here is how to successfully import
newmodule and test each METHOD. I suspect that there are complications with
trying to test methods from outside of the class as opposed to just testing
functions.
I am already failing right off the bat because I am getting:
ImportError: no module named newmodule
even though they are in the same directory.
How do I successfully import this module and if so, am I able to test the
methods from outside the class?
Answer: _"even though they are in the same directory."_ \- its not whether they are in
the same directory, its whether `newmodule` is in the current working
directory. If you want to run tests from a different directory, you could add
your script directory to `sys.path`
    import os
    import sys
    sys.path.insert(0, os.path.dirname(__file__))

    import unittest
    import newmodule
This hard-codes which version of your module you are going to test, so you
have to decide whether that fits your goals. Alternatively, you could set the
`PYTHONPATH` environment variable outside of your test script for more
flexibility.
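Once the import works, a minimal sketch of the test module itself could look like this (assuming `newmodule` contains the `test` class from the question; its methods take no `self`, so in Python 3 they can be called through the class):

    import os
    import sys
    sys.path.insert(0, os.path.dirname(__file__))

    import unittest
    import newmodule

    class TestNewModule(unittest.TestCase):
        def test_atest(self):
            # atest(a, b) returns its first argument
            self.assertEqual(newmodule.test.atest(5, 4), 5)

        def test_btest(self):
            # btest(a, b) returns its second argument
            self.assertEqual(newmodule.test.btest(7, 9), 9)

    if __name__ == '__main__':
        unittest.main()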
|
TypeError: 'str' object is not callable-Python
Question: Define a function test_sort that takes a tuple containing a sort function
reference and a function description as a parameter and executes that sort
function with the data from the previous task. Track the comparisons for each
set of data, calculate the average number of comparisons for the list of
random lists.
The sort function reference just means that you can put a function definition
into a variable just like any other value, and then execute that function
variable. You can also pass a function definition as an argument to another
function and then execute the resulting parameter as a function.
This is the code:
    def test_sort(function_tuple, a_sorte, a_reverse, a_random):
        Number.comparisons = 0
        f = function_tuple[0]
        f(a_sorte)
        x = Number.comparisons

        Number.comparisons = 0
        f = function_tuple[0]
        f(a_reverse)
        y = Number.comparisons

        Number.comparisons = 0
        f = function_tuple[0]
        for i in range(len(a_random)):
            f(a_random[i])
        z = Number.comparisons

        print("{0} {1} {2} {3}".format(
            function_tuple[1], x, y, z))
        return
the main:
    import copy
    from sorts_array import Sorts
    import functions

    SORTS = (
        ('Bubble Sort', Sorts.bubble_sort),
        ('Insertion Sort', Sorts.insertion_sort),
        ('Selection Sort', Sorts.selection_sort),
        ('Merge Sort', Sorts.merge_sort),
        ('Quick Sort', Sorts.quick_sort),
        ('Heap Sort', Sorts.heap_sort),
        ('Shell Sort', Sorts.shell_sort),
        ('Cocktail Sort', Sorts.cocktail_sort),
        ('Comb Sort', Sorts.comb_sort),
        ('Bin. Ins. Sort', Sorts.binary_insert_sort)
    )

    a_sorte = functions.create_sorted()
    a_reverse = functions.create_reversed()
    a_random = functions.create_randomly()

    for i in range(0, 9):
        x = copy.deepcopy(a_sorte)
        y = copy.deepcopy(a_reverse)
        z = copy.deepcopy(a_random)
        functions.test_sort(SORTS[i], x, y, z)
The error I get:
    Traceback (most recent call last):
        functions.test_sort(SORTS[i], x, y, z)
        f(a_sorte)
    TypeError: 'str' object is not callable
This is what I did in the previous task, as mentioned in the question above:
    def create_sorted():
        value = []
        for i in range(0, SIZE):
            n = Number(i)
            value.append(copy.deepcopy(n))
        return value

    def create_reversed():
        value = []
        for i in range(SIZE, -1, -1):
            n = Number(i)
            value.append(copy.deepcopy(n))
        return value

    def create_randomly():
        value = []
        for i in range(N):
            n = Number(random.randint(0, RANGE))
            value.append(copy.deepcopy(n))
        return value
Answer: > Define a function test_sort that takes a tuple containing a sort function
> reference and a function description as a parameter
Following those instructions, your logic is fine, but your tuple is not. You
put the description first.
    SORTS = (
        ('Bubble Sort', Sorts.bubble_sort),
        ('Insertion Sort', Sorts.insertion_sort),
        ('Selection Sort', Sorts.selection_sort),
        ('Merge Sort', Sorts.merge_sort),
        ('Quick Sort', Sorts.quick_sort),
        ('Heap Sort', Sorts.heap_sort),
        ('Shell Sort', Sorts.shell_sort),
        ('Cocktail Sort', Sorts.cocktail_sort),
        ('Comb Sort', Sorts.comb_sort),
        ('Bin. Ins. Sort', Sorts.binary_insert_sort)
    )
Therefore, your error starts with
    f = function_tuple[0]
    f(a_sorte)  # TypeError: 'str' object is not callable
Because `f` is a string (the description of the function).
I also see you have
print("{0} {1} {2} {3}".format(
function_tuple[1], x, y, z))
Which will print the **function** object (`<function Sorts.bubble_sort at
0x1029beae8>`), not the description string.
So, you have two options.
  1. Switch the ordering of all the tuples, i.e. `(Sorts.bubble_sort, 'Bubble Sort')`, and keep the other code the same
  2. Use `f = function_tuple[1]` for the function that you can call and `function_tuple[0]` as the string to print.
* * *
Also, why is `a_random` treated any differently than the others? Just do the
same thing as the other lists.
    Number.comparisons = 0
    f = function_tuple[0]
    f(a_random)
    z = Number.comparisons
|
Regex: get accented letters with spaces
Question: I'm trying to extract a keyword from a JSON string and get the context of the
word. My string looks like:
**JSON**
{"1" : "Na casa de meu Pai há muitos aposentos; se não fosse assim, eu lhes teria dito. Vou preparar-lhes lugar."}
Currently, my Python code is:
**Python**
    re.findall(regex, string)
I want to provide a word (e.g. _Pai_) and get the words before and after the
keyword. My script will count all the occurrences of the keyword and make a
list of contextual words.
My problem is: how do I get the accented letters with whitespaces, commas,
dots, etc? What is the best approach: list the desired chars or exclude the
unwanted? Something like:
([^\"]+)Pai([^\"$]+)
Answer: Load your JSON data via `json.load()` or `json.loads()`, then use the
[`nltk.ConcordanceIndex`](http://stackoverflow.com/a/8898784/771848) that
would help you to explore the words around a specific word in a text, example:
    import nltk

    text = 'Na casa de meu Pai há muitos aposentos; se não fosse assim, eu lhes teria dito. Vou preparar-lhes lugar.'
    tokens = nltk.word_tokenize(text)
    c = nltk.ConcordanceIndex(tokens, key=lambda s: s.lower())

    result = []
    for offset in c.offsets('Pai'):
        result += tokens[offset - 2: offset]
        result += tokens[offset + 1: offset + 3]

    print(result)
Prints `['de', 'meu', 'há', 'muitos']`.
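To tie this back to the JSON in the question, pull the text out first (a sketch; it assumes the verse is stored under the key `"1"` as shown above):

    import json
    import nltk

    raw = '{"1" : "Na casa de meu Pai há muitos aposentos; se não fosse assim, eu lhes teria dito. Vou preparar-lhes lugar."}'
    text = json.loads(raw)["1"]
    tokens = nltk.word_tokenize(text)  # then build the ConcordanceIndex as above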
|
AttributeError: 'module' object has no attribute 'version'
Question: I am working on learning how to use pandas but get the following error:
    Traceback (most recent call last):
      File "data_frame.py", line 2, in <module>
        import pandas as pd
      File "/Users/gregwinter/anaconda2/lib/python2.7/site-packages/pandas/__init__.py", line 13, in <module>
        __import__(dependency)
      File "/Users/gregwinter/numpy.py", line 22, in <module>
        from pandas.compat.numpy_compat import *
      File "/Users/gregwinter/anaconda2/lib/python2.7/site-packages/pandas/compat/numpy_compat.py", line 15, in <module>
        _np_version = np.version.short_version
    AttributeError: 'module' object has no attribute 'version'
I have no idea how to fix this. Anything you can tell me on how to fix this
would be great.
Answer: You named a file of your _own_ `numpy.py`:

    /Users/gregwinter/numpy.py

Guess which one Python thinks pandas wants to import? :-) Rename your program,
and remove any .pyc or .pyo files that are around.
|
Preview fswebcam image as it takes picture
Question: I am currently using a USB webcam with a Raspberry Pi 3. At the moment as part
of a lot of other code in Python it takes a picture using the camera and saves
it to a specific directory. I was wondering whether there was any way of
getting a preview of the image to show up on screen, similarly to how the
picamera works:
    camera.start_preview()
    time.sleep(5)
    camera.capture('/home/pi/Downloads/image.jpg')
    camera.stop_preview()
Is there an equivalent to this using fswebcam? The part of the code that takes
the image is:
    from subprocess import call
    call(["fswebcam", "--no-banner", "image.jpg"])
Ideally I would like it to preview the image on-screen for a set number of
seconds before capturing it and saving it to the directory. Is this possible?
Answer: I made this kind of function for a commercial app. (no source code here,
sorry) Mixing fswebcam for capturing and other python code for previewing
seems to be a bad idea, as the camera will hardly be available for them both.
(you may try to use "mplayer tv://" as the preview and launch fswebcam from
another terminal to verify this point)
For the Python approach: as the most difficult part will be to display the
preview, you should build your app around this feature. Saving a frame as an
image will be easy from there. I had some success working with Pygame, which
can use v4l2 cameras, manage displays and handle user events.
In practice, look at <https://gist.github.com/snim2/255151>, the main pygame
function you'll need are there.
Note: don't expect the refresh rate of the preview to be perfect. It will
only be decent. Not sure if it's the camera, v4l2, pygame or python that
causes this ...
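As an illustration of that Pygame route, here is a minimal sketch (assuming a v4l2 camera at the first listed device and a 640x480 resolution; only standard `pygame.camera` calls are used):

    import time
    import pygame
    import pygame.camera

    pygame.init()
    pygame.camera.init()

    cam = pygame.camera.Camera(pygame.camera.list_cameras()[0], (640, 480))
    cam.start()
    screen = pygame.display.set_mode((640, 480))

    # show a live preview for roughly 5 seconds
    end = time.time() + 5
    while time.time() < end:
        frame = cam.get_image()
        screen.blit(frame, (0, 0))
        pygame.display.flip()

    # save the last frame, then clean up
    pygame.image.save(frame, '/home/pi/Downloads/image.jpg')
    cam.stop()
    pygame.quit()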
|
how do I test methods using boto3 with moto
Question: I am writing test cases for a quick class to find / fetch keys from s3, using
boto3. I have used moto in the past to test boto (not 3) code but am trying to
move to boto3 with this project, and running into an issue:
    class TestS3Actor(unittest.TestCase):
        @mock_s3
        def setUp(self):
            self.bucket_name = 'test_bucket_01'
            self.key_name = 'stats_com/fake_fake/test.json'
            self.key_contents = 'This is test data.'
            s3 = boto3.session.Session().resource('s3')
            s3.create_bucket(Bucket=self.bucket_name)
            s3.Object(self.bucket_name, self.key_name).put(Body=self.key_contents)
error:
    ...
      File "/Library/Python/2.7/site-packages/botocore/vendored/requests/packages/urllib3/connectionpool.py", line 344, in _make_request
        self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
      File "/Library/Python/2.7/site-packages/botocore/vendored/requests/packages/urllib3/connectionpool.py", line 314, in _raise_timeout
        if 'timed out' in str(err) or 'did not complete (read)' in str(err): # Python 2.6
    TypeError: __str__ returned non-string (type WantWriteError)
    botocore.hooks: DEBUG: Event needs-retry.s3.CreateBucket: calling handler <botocore.retryhandler.RetryHandler object at 0x10ce75310>
It looks like moto is not mocking out the boto3 call correctly - how do I make
that work?
Answer: What worked for me is setting up the environment with `boto` before running my
mocked tests with `boto3`.
Here's a working snippet:
    import unittest

    import boto
    from boto.s3.key import Key
    from moto import mock_s3

    import boto3

    class TestS3Actor(unittest.TestCase):
        mock_s3 = mock_s3()

        def setUp(self):
            self.mock_s3.start()
            self.location = "eu-west-1"
            self.bucket_name = 'test_bucket_01'
            self.key_name = 'stats_com/fake_fake/test.json'
            self.key_contents = 'This is test data.'

            s3 = boto.connect_s3()
            bucket = s3.create_bucket(self.bucket_name, location=self.location)
            k = Key(bucket)
            k.key = self.key_name
            k.set_contents_from_string(self.key_contents)

        def tearDown(self):
            self.mock_s3.stop()

        def test_s3_boto3(self):
            s3 = boto3.resource('s3', region_name=self.location)
            bucket = s3.Bucket(self.bucket_name)
            assert bucket.name == self.bucket_name

            # retrieve already setup keys
            keys = list(bucket.objects.filter(Prefix=self.key_name))
            assert len(keys) == 1
            assert keys[0].key == self.key_name

            # update key
            s3.Object(self.bucket_name, self.key_name).put(Body='new')
            key = s3.Object(self.bucket_name, self.key_name).get()
            assert 'new' == key['Body'].read()
When run with `py.test test.py` you get the following output:
    collected 1 items

    test.py .

    ========================================================================================= 1 passed in 2.22 seconds =========================================================================================
|
Will killing Python script that called shell processes also kill the shell processes?
Question: If I have some code like this in the file this_script.py:
    import subprocess
    subprocess.Popen(["python", "another_script.py"])
and I call
    python this_script.py
and kill the process while it is running, will it kill the subprocess?
Edit: I tested this, and if this_script is killed, the subprocess continues
running. Is there a way to make sure that the background process dies when the
main Python process does?
Answer: Yes, you can catch `KeyboardInterrupt`, and `SystemExit` and make sure to
`kill` the subprocess.
    from subprocess import Popen

    try:
        p = Popen(args)
        p.wait()  # wait for the process to finish
    except (KeyboardInterrupt, SystemExit):  # note the parentheses around the tuple
        p.kill()
        raise
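Another option (a sketch, not part of the original answer): register the kill with `atexit`, so the child is cleaned up on any normal interpreter exit. Note that `atexit` handlers do not run if the parent itself is killed with SIGKILL.

    import atexit
    from subprocess import Popen

    p = Popen(["python", "another_script.py"])
    atexit.register(p.kill)  # runs at normal interpreter shutdown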
|
Min/max for itertools python
Question: Ok i got this code
    import itertools

    res = itertools.product('abc', repeat=3)
    for i in res:
        print ''.join(i)
The problem is I don't know how I can also add a minimum and maximum to the
word that's going to be the output. So let's say I put in the letters `'a'`, `'b'`
and `'c'` but I only want a minimum 1 letter and maximum 2 letter word, how
would I do that? I already searched on the internet, but couldn't find
anything. It's meant to make a dictionary for brute-force. I'm a beginner at
hacking but not a script kiddie ya know.
Answer: Use `itertools.permutations()`. Then concatenate the results.

    from itertools import permutations
    S = [x for x in permutations('abc', 2)] + [x for x in permutations('abc', 1)]

Actually you could also use `product`. The difference is that `product` lets
elements repeat within a word (e.g. `('a', 'a')`), while `permutations` does
not, so for a brute-force wordlist `product` is usually what you want.
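A sketch of that, generating every word from the minimum to the maximum length (here 1 to 2 letters):

    from itertools import product

    for length in range(1, 3):  # minimum 1, maximum 2 letters
        for letters in product('abc', repeat=length):
            print(''.join(letters))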
|
Python switching between routines
Question: I am writing a python program where I have three routines that need to switch
between each other including a main loop, set up as follows:
pseudo-code:
    main routine:
        run routine,
        while running:
            if obtained signal A run routine A,
            else if obtained signal B run routine B

    routine A:
        run routine,
        while running:
            if obtain signal B, run routine B,
            else if completed with no signal run main,
            else if no signal, just keep running

    routine B:
        run routine,
        while running:
            if obtain signal A, run routine A,
            else if completed with no signal run main.
Where a signal could be sent by a handler that sets a variable that has an
associated Lock on it. No two routines need to run simultaneously.
I was thinking of using non-preemptive threads, and wondering if there exists
a way to do non-preemptive threading in Python where threads can specifically
yield to another target thread? I found
[Yarn](http://sifter.org/~simon/yarn/about.html), but I don't know if that
solves the problem. In addition, I was wondering if there is a better way to
accomplish this, maybe using co-routines instead?
Answer: Something like this?
    from random import choice

    get_signal_somehow = lambda: choice(["A", "B"])

    def main():
        A = routine_a()
        B = routine_b()
        signal = get_signal_somehow()
        while True:
            if signal == 'A':
                signal = next(A)
            elif signal == 'B':
                signal = next(B)

    def routine_a():
        #do stuff
        if get_signal_somehow() == 'B':
            yield 'B'
        else:
            yield 'A'

    def routine_b():
        #do stuff
        if get_signal_somehow() == 'A':
            yield 'A'
        else:
            yield 'B'
|
Python: Appending a list doesn't actually append it?
Question: I have a CSV file with names and scores in it. I've made each line a separate
list but when appending a variable to this list it doesn't actually do it. My
code is:
    import csv

    f = open('1scores.csv')
    csv_f = csv.reader(f)
    newlist = []

    for row in csv_f:
        newlist.append(row[0:4])
        minimum = min(row[1:4])
        newlist.append(minimum)
    print(newlist)
With the data in the file being
    Person One,4,7,4
    Person Two,1,4,2
    Person Three,3,4,1
    Person Four,2
Surely the output would be `['Person One', '4', '7', '4', '4']` as the minimum
is 4, which I'm appending to the list. But I get this: `['Person One', '4',
'7', '4'], '4',` What am I doing wrong? I want the minimum to be inside the
list, instead of outside but don't understand.
Answer: Append the min to each row and then append the row itself; you are
currently appending the sliced list first and then adding the min value to
`newlist`, not to the sliced list:

    for row in csv_f:
        row.append(min(row[1:], key=int))
        newlist.append(row)
You could also use a list comp:

    new_list = [row + [min(row[1:], key=int)] for row in csv_f]
You also need the `key=int`, or you might find you get strange results, as your
scores/strings will be compared lexicographically:

    In [1]: l = ["100", "2"]

    In [2]: min(l)
    Out[2]: '100'

    In [3]: min(l, key=int)
    Out[3]: '2'
|
reconstructing source from objects
Question: I'd like to grab the code from actual python objects. This is the opposite
idea of AST and parse: I have an object in memory and I want to recreate the
source code. I don't want to get down to the byte code, that's excessive; I
just want a representation of the code that made the object:
    In [24]: from django.apps import apps

    In [25]: x = apps.get_app('accounts')

    In [26]: x
    Out[26]: <module 'mysite.accounts.models' from '/home/cchilders/work_projects/mysite/mysite/accounts/models.py'>

    In [27]: x.
    x.BusinessUnit   x.models

    In [35]: bizunit = x.BusinessUnit

    In [36]: type(bizunit)
    Out[36]: django.db.models.base.ModelBase
    import something
    bizunit_code = something.something(bizunit)
I want the source of all models, but using ast seems too hairy, especially
since django provides the `apps` module to grab all models. Now I just need to
untranslate it.
Thank you
Answer: You may be able to obtain the source code using:

    import inspect
    print(inspect.getsource(bizunit))

This only works when the argument is a module, class, method, function,
traceback, frame, or code object. If Python is unable to obtain the source
code then this will raise an `IOError`.
|
Python- Openpyxl works in console but fails to import
Question: I am having an issue getting openpyxl to write to an Excel file. When I run
the following code in the PyCharm Python console it works fine, but when I
create & run the `.py` file I get the following error:
    C:\Users\David\PycharmProjects\VirtualEnv1\VirtualEnv1\Scripts\python.exe C:/Python27/virtualenv-15.0.1/virtualenv/test.py
    Traceback (most recent call last):
      File "C:/Python27/virtualenv-15.0.1/virtualenv/test.py", line 1, in <module>
        from openpyxl import Workbook
      File "C:\Users\David\PycharmProjects\VirtualEnv1\VirtualEnv1\lib\site-packages\openpyxl\__init__.py", line 28, in <module>
        from openpyxl.workbook import Workbook
      File "C:\Users\David\PycharmProjects\VirtualEnv1\VirtualEnv1\lib\site-packages\openpyxl\workbook\__init__.py", line 5, in <module>
        from .workbook import *
      File "C:\Users\David\PycharmProjects\VirtualEnv1\VirtualEnv1\lib\site-packages\openpyxl\workbook\workbook.py", line 7, in <module>
        from openpyxl.worksheet import Worksheet
      File "C:\Users\David\PycharmProjects\VirtualEnv1\VirtualEnv1\lib\site-packages\openpyxl\worksheet\__init__.py", line 4, in <module>
        from .worksheet import *
      File "C:\Users\David\PycharmProjects\VirtualEnv1\VirtualEnv1\lib\site-packages\openpyxl\worksheet\worksheet.py", line 34, in <module>
        from openpyxl.cell import Cell
      File "C:\Users\David\PycharmProjects\VirtualEnv1\VirtualEnv1\lib\site-packages\openpyxl\cell\__init__.py", line 4, in <module>
        from .cell import *
      File "C:\Users\David\PycharmProjects\VirtualEnv1\VirtualEnv1\lib\site-packages\openpyxl\cell\cell.py", line 44, in <module>
        from openpyxl.styles import numbers, is_date_format
      File "C:\Users\David\PycharmProjects\VirtualEnv1\VirtualEnv1\lib\site-packages\openpyxl\styles\__init__.py", line 4, in <module>
        from openpyxl.descriptors import Typed
      File "C:\Users\David\PycharmProjects\VirtualEnv1\VirtualEnv1\lib\site-packages\openpyxl\descriptors\__init__.py", line 4, in <module>
        from .base import *
      File "C:\Users\David\PycharmProjects\VirtualEnv1\VirtualEnv1\lib\site-packages\openpyxl\descriptors\base.py", line 12, in <module>
        from openpyxl.xml.functions import Element
      File "C:\Users\David\PycharmProjects\VirtualEnv1\VirtualEnv1\lib\site-packages\openpyxl\xml\functions.py", line 41, in <module>
        from xml.etree.ElementTree import (
    ImportError: No module named etree.ElementTree

    Process finished with exit code 1
I installed from <https://openpyxl.readthedocs.org/en/default/index.html> and
am using the virtual environment as recommended. I also downloaded the
elementtree package to the virtual environment but the script still fails. Any
help would be appreciated, thanks!
    from openpyxl import Workbook

    wb = Workbook()
    ws1 = wb.create_sheet()
    ws1.title = "worksheet1"
    c = ws1['A4']
    ws1['A4'] = 15
    cell_range = ws1['A1':'C2']
    for row in ws1.iter_rows('A1:C2'):
        for cell in row:
            print cell
    wb.save('balances.xlsx')
[Console run](http://i.stack.imgur.com/lmuG9.png)
Answer: Where does your script use etree.ElementTree? This worked fine for me in the
console:
    $ virtualenv .venv
    $ . .venv/bin/activate
    $ pip install openpyxl
    $ tee test.py << 'EOF'
    from openpyxl import Workbook
    wb = Workbook()
    ws1 = wb.create_sheet()
    ws1.title = "worksheet1"
    c = ws1['A4']
    ws1['A4'] = 15
    cell_range = ws1['A1':'C2']
    for row in ws1.iter_rows('A1:C2'):
        for cell in row:
            print cell
    wb.save('balances.xlsx')
    EOF
    $ python test.py
|
Monty Hall Python Simulation Calculation
Question: I'm trying to simulate the Monty Hall Problem where someone chooses a door,
and a random one is removed--in the end it must be one with a car and one
without, one of which someone must have chosen. While I don't need to simulate
currently/ask the person using the program which door they'd like, I'm having
trouble actually setting up the calculations. When I run the code, it outputs
0, when it should be approximately 66%.
    import random

    doors = [0, 1, 2]
    wins = 0
    car = random.randint(0, 2)
    player = random.randint(0, 2)

    #This chooses the random door removed
    if player == car:
        doors.remove.random.randint(0,2)
    else:
        doors.remove(car)
        doors.remove(player)

    for plays in range(100):
        if car == player:
            wins = wins + 1
    print(wins)
Answer: You need to put your code inside the loop to actually have it run each time.
You also need to make sure you're only allowing valid choices the second time
(they can't choose the removed door) and that you're only removing valid doors
(you can't remove the door with the car or the player-chosen door).
    import random

    wins = 0
    for plays in range(100):
        doors = [0, 1, 2]
        car = random.choice(doors)
        player = random.choice(doors)
        # This chooses the random door removed
        doors.remove(random.choice([d for d in doors if d != car and d != player]))
        # Player chooses again (stay or switch)
        player = random.choice(doors)
        if player == car:
            wins += 1
    print(wins)
But for the purposes of the Monty Hall problem, you don't even have to track
the doors.
    win_if_stay = 0
    win_if_switch = 0

    for i in range(100):
        player = random.randint(0, 2)
        car = random.randint(0, 2)
        if player == car:
            win_if_stay += 1
        else:
            win_if_switch += 1
|
extract data from file with python
Question: I need to extract data from lines of a text file. The data is name and scoring
information formatted like this:
    Feature_Locations:
       - { x:9.0745818614959717e-01, y:2.8846755623817444e-01,
           z:3.5268107056617737e-01 }
       - { x:1.1413983106613159e+00, y:2.7305576205253601e-01,
           z:4.4357028603553772e-01 }
       - { x:1.7582545280456543e+00, y:2.2776308655738831e-01,
           z:6.6982054710388184e-01 }
       - { x:9.6545284986495972e-01, y:2.8368893265724182e-01,
           z:3.6416915059089661e-01 }
       - { x:1.2183872461318970e+00, y:2.7094465494155884e-01,
           z:4.5954680442810059e-01 }
This file is generated by another program. Basically I want to read that data
back in this program and save it into separate files, for example "axeX.txt",
"axeY.txt" and "axeZ.txt".
I have tried this:
    import numpy as np
    import matplotlib.pyplot as plt
    import re

    file = open('data.txt', "r")
    for r in file:
        y = re.sub("- {", "", r).split()
        tt = y[:2]
        zz = tt
        st = re.findall('\d+', r)
        print st
    file.close()
Is there a better way, or am I doing it wrong?
Answer: The input file is in YAML format. It is recommended to use the
[PyYAML](http://pyyaml.org/wiki/PyYAMLDocumentation) package for parsing yaml
files.
    import yaml

    document = """
    Feature_Locations:
       - { x: 9.0745818614959717e-01, y: 2.8846755623817444e-01,
           z: 3.5268107056617737e-01 }
       - { x: 1.1413983106613159e+00, y: 2.7305576205253601e-01,
           z: 4.4357028603553772e-01 }
       - { x: 1.7582545280456543e+00, y: 2.2776308655738831e-01,
           z: 6.6982054710388184e-01 }
       - { x: 9.6545284986495972e-01, y: 2.8368893265724182e-01,
           z: 3.6416915059089661e-01 }
       - { x: 1.2183872461318970e+00, y: 2.7094465494155884e-01,
           z: 4.5954680442810059e-01 }
    """

    # note: yaml.safe_load is the safer choice when the input is untrusted
    locations = yaml.load(document)['Feature_Locations']
    for ch in 'XYZ':
        fname = 'axe%s.txt' % ch
        with open(fname, 'w') as fh:
            for item in locations:
                fh.write('%s\n' % item[ch.lower()])
The input file is slightly corrupted.
[yamllint](http://yamllint.readthedocs.org/en/latest/) will do a sanity check
and inform us of the errors.
    yamllint inputfile.yaml

    inputfile.yaml
      1:1       warning  missing document start "---"  (document-start)
      2:9       error    syntax error: found unexpected ':'
In this case we can fix the input file easily.
    sed -i 's/:/: /g' inputfile.yaml
|
Python: Save a file based on user input
Question: I am attempting to save a file from a python tkinter window via a 'Save As'
prompt. I have looked for a while now and cannot seem to find the answer I am
looking for. I can successfully save the information to a file with a default
name, and even can save it using a name the user inputs via input(), however,
this is not what I am trying to do. I want the user to be able to click, 'Save
As' and then when the prompt comes up, they enter in the file name and it
saves as that name, I just cannot seem to find an answer anywhere. Here is my
code at this point:
    # Import Tkinter
    from tkinter import *
    import os
    import pickle
    from tkinter.filedialog import askopenfilename, asksaveasfile
    from tkinter.messagebox import *

    MainWindow = Tk()
    MainWindow.geometry("600x400")
    MainWindow.attributes("-alpha", 1)
    MainWindow.title(string="Hours Log")
    CurrentDirect = os.getcwd()

    def FileSaveAs():
        fname = asksaveasfile(initialdir=CurrentDirect, filetypes=(("Text Files", "*.txt"),
                                                                   ("All files", "*.*")))
        if fname:
            try:
                print(fname)
                SH = SHVar.get()
                SM = SMVar.get()
                SAP = SAPVar.get()
                EH = EHVar.get()
                EM = EMVar.get()
                EAP = EAPVar.get()
                DM = DMVar.get()
                DD = DDVar.get()
                DY = DYVar.get()
                DE = Description.get("1.0", END)
                AP = APVar.get()
                with open("filename.txt", 'wb') as f:
                    pickle.dump([SH, SM, SAP, EH, EM, EAP, DM, DD, DY, DE, AP], f)
            except:
                showerror("FILE SAVE ERROR", "Error on Saving File!\n'%s'" % fname)
        return
I understand that the "filename.txt" is the name of the file to save to,
however, how do I acquire the variable name from the prompt?
NOTE: There are no errors in this code, it runs fine with the rest of my
program.
Answer: You can use `asksaveasfilename` instead of `asksaveasfile`, and then `fname`
instead of `"filename.txt"`:

    def asksaveasfile(self):
        """Returns an opened file in write mode."""
        return tkFileDialog.asksaveasfile(mode='w', **self.file_opt)

    def asksaveasfilename(self):
        """Returns an opened file in write mode.
        This time the dialog just returns a filename and the file is opened by your own code.
        """
|
Issue with handling the reader object in python csv module
Question: The goal I am trying to accomplish is reading in only the particular data I
want from a large csv file. To do this, I have a main menu that I use as a
handler for data acquisition and then a separate menu for exiting or
continuing. My issue arises when I attempt to read in more data after looping
through the file once: I have reached the end of the file, and for some reason
the for loop is not handling the StopIteration correctly. Any suggestions?
Thanks in advance!
    fname = open(current_file, 'r')
    reader = csv.reader(fname)
    for row in reader:
        list_header = row
        break
    def main_menu():
        i = 0
        menu = {}
        print reader
        for hdr in list_header:
            menu[str(i)] = hdr
            i += 1
        options = menu.keys()  # creates a list out of the keys
        options.sort()
        for entry in options:
            print entry, menu[entry]
        selection = raw_input("Please Select:")
        data = []
        for row in reader:
            a = 0
            for block in row:
                if a == list_header.index(menu[selection]):
                    data.append(block)
                a += 1
        print 'Saving ' + menu[selection] + ' values into an array.' + '\n'
        return data
    def continue_menu():
        menu_1 = {}
        menu_1['0'] = 'Continue'
        menu_1['1'] = 'Exit'
        options = menu_1.keys()
        options.sort()
        for entry in options:
            print entry, menu_1[entry]
        selection = raw_input('Please Select:')
        if float(selection) == 0:
            print 'As you wish' + '\n'
            proceed = True
        else:
            proceed = False
        return proceed

    proceed = True
    while proceed:
        data1 = main_menu()
        proceed = continue_menu()
Answer: `csv.reader` reads lines from a file object and splits them into rows. When
you hit the end of file, `StopIteration` is raised, the `for` loop catches
that exception and the loop stops. Now the file pointer is at the end of the
file. If you try to iterate through it a second time, it's already at the end
and raises `StopIteration` immediately. Notice in the example that nothing is
printed the second time through the loop:
    >>> import csv
    >>> fname = open('a.csv')
    >>> reader = csv.reader(fname)
    >>> for row in reader:
    ...     print(row)
    ...
    ['1', '2', '3']
    ['4', '5', '6']
    ['7', '8', '9']
    >>> for row in reader:
    ...     print(row)
    ...
    >>>
One solution is to just rewind the file pointer to the start of the file. Now
the loop works again:

    >>> fname.seek(0, 0)
    0
    >>> for row in reader:
    ...     print(row)
    ...
    ['1', '2', '3']
    ['4', '5', '6']
    ['7', '8', '9']
Another more commonly used solution is to open the file just before the
iteration. By using a `with` statement the file is closed immediately after use, and the
next time the loop is run, the file is opened and iterated again.

    >>> with open('a.csv') as fname:
    ...     for row in csv.reader(fname):
    ...         print(row)
    ...
    ['1', '2', '3']
    ['4', '5', '6']
    ['7', '8', '9']
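A third option (not in the original answer, but a common pattern): read all the rows into a list once, then iterate the list as often as needed:

    with open('a.csv') as fname:
        rows = list(csv.reader(fname))

    for row in rows:  # first pass
        print(row)
    for row in rows:  # a second pass works too, no rewind needed
        print(row)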
|
wxPython: How to update window size as the window resizes in real-time
Question: I would like to be able to obtain the window size of an app and pass it to
other modules in an application, and when the window size updates (say, if a
user resizes the window), the updated window size also gets passed to other
modules.
For example, I tried something like the code below where I tried to store the
window size in `self.size` so that it could be used in `foo()`. However, this
code would give me an error message saying `'TestPanel' object has no
attribute 'size'`. I wonder if there is a way to do what I want to accomplish.
    import wx

    class TestPanel(wx.Panel):
        def __init__(self, parent):
            wx.Panel.__init__(self, parent, -1)
            self.Bind(wx.EVT_SIZE, self.OnSize, self)
            self.foo()

        def Resize(self):
            self.size = self.GetSize()

        def OnSize(self, event):
            self.Resize()

        def foo(self):
            print(self.size)

    if __name__ == '__main__':
        app = wx.App(False)
        f = wx.Frame(None, -1)
        TestPanel(f)
        f.Show()
        app.MainLoop()
Answer: You need to first find out what gave you that error message in the first
place. With your code as is, realize that in the `__init__` method the
`size` attribute was not set anywhere before `foo` was called, giving you
that error.
What you want to do is to delay the calling of `foo` to your handler for
`EVT_SIZE`, in this case `OnSize`. The event will be called when the window
becomes visible as it will be resized to the default size (thus setting
`self.size`). You could then simplify what you want to do to:
    class TestPanel(wx.Panel):
        def __init__(self, parent):
            wx.Panel.__init__(self, parent, -1)
            self.Bind(wx.EVT_SIZE, self.OnSize, self)

        def OnSize(self, event):
            self.size = self.GetSize()
            self.foo()

        def foo(self):
            print(self.size)
Override `foo` to call into the other window, or whatever.
|
Can Not Click "Select Photos From My Computer" Button In Google My Business Using Selenium
Question: When trying to click on the Google My Business "Select Photos From My
Computer" button I receive this error. I have tried using every identifying
element type that selenium offers in the documentation but can't seem to get
this button to click.
    Traceback (most recent call last):
      File "C:/Users/Office/Documents/Development/Web_Postmate.py", line 18, in <module>
        elem6 = driver.find_element_by_partial_link_text("Select photos from your computer")
      File "C:\Users\Office\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 338, in find_element_by_partial_link_text
        return self.find_element(by=By.PARTIAL_LINK_TEXT, value=link_text)
      File "C:\Users\Office\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 744, in find_element
        {'using': by, 'value': value})['value']
      File "C:\Users\Office\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 233, in execute
        self.error_handler.check_response(response)
      File "C:\Users\Office\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 194, in check_response
        raise exception_class(message, screen, stacktrace)
    selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: {"method":"partial link text","selector":"Select a photo from your computer"}
    Stacktrace:
        at FirefoxDriver.prototype.findElementInternal_ (file:///C:/Users/Office/AppData/Local/Temp/tmpuv4pvvys/extensions/[email protected]/components/driver-component.js:10770)
        at fxdriver.Timer.prototype.setTimeout/<.notify (file:///C:/Users/Office/AppData/Local/Temp/tmpuv4pvvys/extensions/[email protected]/components/driver-component.js:625)
Here is the button HTML. I have to use "Class" and "Link Text":
    <div tabindex="0" class="c-F-U e-d e-d-Ac" role="button" style="-moz-user-select: none;">Select photos from your computer</div>
Here is my source file:
    from selenium import webdriver
    from selenium.webdriver.common.keys import Keys

    driver = webdriver.Firefox()
    driver.get("https://business.google.com/b/101831927968068062215/photos/l/03416071574991367502")

    elem = driver.find_element_by_name("Email")
    elem.send_keys("User")
    elem.send_keys(Keys.ENTER)
    driver.implicitly_wait(5)

    elem1 = driver.find_element_by_name("Passwd")
    elem1.send_keys("Password")
    elem1.send_keys(Keys.ENTER)
    driver.implicitly_wait(5)

    elem5 = driver.find_element_by_class_name("tx")
    elem5.click()
    driver.implicitly_wait(5)

    elem6 = driver.find_element_by_partial_link_text("Select photos from your computer")
    elem6.click()
Answer: You have specified `'...a photo...'` in your locator, while the `HTML` source
text contains `'...photos...'`. Note also that link-text locators only match
anchor (`<a>`) elements, so they cannot find this `<div>` button in any case.

To use both the `class` and the text, try:

    driver.find_element_by_xpath('//div[@class="c-F-U e-d e-d-Ac"][contains(text(), "Select photos from your computer")]')
|
matplotlib two charts side-by-side with third overlying the second chart
Question: I am trying to use matplotlib (more specifically the plot method from pandas)
to plot two charts side-by-side in an ipython notebook with a third chart
overlying the second chart and using a secondary y axis. However, I have been
unable to get the overlay to work.
Currently this is my code:
    import matplotlib.pyplot as plt
    %matplotlib inline

    fig, axs = plt.subplots(1, 2)
    fig.set_size_inches(12, 4)

    top10.plot(kind='barh', ax=axs[0])
    top10_time_trend.T.plot(kind='bar', stacked=True, legend=False, ax=axs[1])
    time_trend.plot(kind='line', ax=axs[1], ylim=0, secondary_y=True)
I get the side-by-side structure I am looking for, but only the first (top10)
and last (time_trend) plots are visible. My output is below:
[](http://i.stack.imgur.com/RK0SB.png)
When plotted separately the unshown plot (top10_time_trend) looks like this
[](http://i.stack.imgur.com/N4brg.png)
What I am trying to accomplish is something that looks like this, i.e. the
line chart overlaying the stacked bar.
[](http://i.stack.imgur.com/Qbz89.png)
Answer: The best method to do this is by creating a third axis, say:

    ax3 = axs[1].twinx()

and then

    top10_time_trend.T.plot(kind='bar', stacked=True, legend=False, ax=ax3)

Please let me know if this works for you.
Here you can find an example for the usage of twinx() from matplotlib docs
<http://matplotlib.org/examples/api/two_scales.html>
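Putting it together with the code from the question, a sketch (assuming the same three DataFrames) might look like:

    fig, axs = plt.subplots(1, 2, figsize=(12, 4))

    top10.plot(kind='barh', ax=axs[0])
    time_trend.plot(kind='line', ax=axs[1], ylim=0)

    ax3 = axs[1].twinx()  # secondary y-axis sharing the same x-axis
    top10_time_trend.T.plot(kind='bar', stacked=True, legend=False, ax=ax3)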
|
'DataFrame' object has no attribute 'value_counts'
Question: My dataset is a DataFrame of dimension (840,84). When I write the code:
`ds[ds.columns[1]].value_counts()`
I get a correct output:
    Out[82]:
    0    847
    1      5
    Name: o_East, dtype: int64
But when I write a loop to store values, I get _'DataFrame' object has no
attribute 'value_counts'_. I can't explain why ...
    wind_vec = []
    wind_vec = [(ds[x].value_counts()) for x in ds.columns]
**UPDATE FOR THE CODE**
    import pandas as pd
    import numpy as np
    import numpy.ma as ma
    import statsmodels.api as sm
    import matplotlib
    import matplotlib.pyplot as plt
    from sklearn.preprocessing import OneHotEncoder

    dataset = pd.read_csv('data/dataset.csv')
    ds = dataset

    o_wdire = pd.get_dummies(ds['o_wdire'])
    s_wdire = pd.get_dummies(ds['s_wdire'])
    t_wdire = pd.get_dummies(ds['t_wdire'])
    k_wdire = pd.get_dummies(ds['k_wdire'])
    b_wdire = pd.get_dummies(ds['b_wdire'])

    o_wdire.rename(columns={'ENE': 'o_ENE', 'ESE': 'o_ESE', 'East': 'o_East', 'NE': 'o_NE', 'NNE': 'o_NNE', 'NNW': 'o_NNW', \
                            'NW': 'o_NW', 'North': 'o_North', 'SE': 'o_SE', 'SSE': 'o_SSE', 'SSW': 'o_SSW', 'SW': 'o_SW', \
                            'South': 'o_South', 'Variable': 'o_Variable', 'WSW': 'o_WSW', 'West': 'o_West'}, inplace=True)
    s_wdire.rename(columns={'ENE': 's_ENE', 'ESE': 's_ESE', 'East': 's_East', 'NE': 's_NE', 'NNE': 's_NNE', 'NNW': 's_NNW', \
                            'NW': 's_NW', 'North': 's_North', 'SE': 's_SE', 'SSE': 's_SSE', 'SSW': 's_SSW', 'SW': 's_SW', \
                            'South': 's_South', 'Variable': 's_Variable', 'West': 's_West', 'WSW': 's_WSW'}, inplace=True)
    k_wdire.rename(columns={'ENE': 'k_ENE', 'ESE': 'k_ESE', 'East': 'k_East', 'NE': 'k_NE', 'NNE': 'k_NNE', 'NNW': 'k_NNW', \
                            'NW': 'k_NW', 'North': 'k_North', 'SE': 'k_SE', 'SSE': 'k_SSE', 'SSW': 'k_SSW', 'SW': 'k_SW', \
                            'South': 'k_South', 'Variable': 'k_Variable', 'WNW': 'k_WNW', 'West': 'k_West', 'WSW': 'k_WSW'}, inplace=True)
    b_wdire.rename(columns={'ENE': 'b_ENE', 'ESE': 'b_ESE', 'East': 'b_East', 'NE': 'b_NE', 'NNE': 'b_NNE', 'NNW': 'b_NNW', \
                            'NW': 'b_NW', 'North': 'b_North', 'SE': 'b_SE', 'SSE': 'b_SSE', 'SSW': 'b_SSW', 'SW': 'b_SW', \
                            'South': 'b_South', 'Variable': 'b_Variable', 'WSW': 'b_WSW', 'WNW': 'b_WNW', 'West': 'b_West'}, inplace=True)
    t_wdire.rename(columns={'ENE': 't_ENE', 'ESE': 't_ESE', 'East': 't_East', 'NE': 't_NE', 'NNE': 't_NNE', 'NNW': 't_NNW', \
                            'NW': 't_NW', 'North': 't_North', 'SE': 't_SE', 'SSE': 't_SSE', 'SSW': 't_SSW', 'SW': 't_SW', \
                            'South': 't_South', 'Variable': 't_Variable', 'WSW': 't_WSW', 'WNW': 't_WNW', 'West': 't_West'}, inplace=True)

    #WIND
    ds_wdire = pd.DataFrame(pd.concat([o_wdire, s_wdire, t_wdire, k_wdire, b_wdire], axis=1))
    ds_wdire = ds_wdire.astype('float64')
    In [93]: ds_wdire.shape
    Out[93]: (852, 84)

    In [101]: ds_wdire[ds_wdire.columns[0]].head()
    Out[101]:
    0    0
    1    0
    2    0
    3    0
    4    0
    Name: o_ENE, dtype: float64

    In [103]: ds_wdire[ds_wdire.columns[0]].value_counts()
    Out[103]:
    0    838
    1     14
    Name: o_ENE, dtype: int64
    In [104]: [ds_wdire[x].value_counts() for x in ds_wdire.columns]
    ---------------------------------------------------------------------------
    AttributeError                            Traceback (most recent call last)
    <ipython-input-104-d9756c468818> in <module>()
          1 #Filtering for the wind direction based on the most frequent ones.
    ----> 2 [ds_wdire[x].value_counts() for x in ds_wdire.columns]

    <ipython-input-104-d9756c468818> in <listcomp>(.0)
          1 #Filtering for the wind direction based on the most frequent ones.
    ----> 2 [ds_wdire[x].value_counts() for x in ds_wdire.columns]

    /home/florian/anaconda3/lib/python3.5/site-packages/pandas/core/generic.py in __getattr__(self, name)
       2358             return self[name]
       2359         raise AttributeError("'%s' object has no attribute '%s'" %
    -> 2360                              (type(self).__name__, name))
       2361
       2362     def __setattr__(self, name, value):

    AttributeError: 'DataFrame' object has no attribute 'value_counts'
Answer: Thanks to @EdChum's advice, I checked:

    len(ds_wdire.columns), len(ds_wdire.columns.unique())
    Out[100]: (84, 83)
Actually, there was a missing entry in the rename dict: 'WNW' was never mapped
to 'o_WNW'. With two columns both left named 'WNW', `ds_wdire[x]` returns a
DataFrame rather than a Series for that name, hence the error:

    o_wdire.rename(columns={'ENE': 'o_ENE', 'ESE': 'o_ESE', 'East': 'o_East', 'NE': 'o_NE', 'NNE': 'o_NNE', 'NNW': 'o_NNW', \
                            'NW': 'o_NW', 'North': 'o_North', 'SE': 'o_SE', 'SSE': 'o_SSE', 'SSW': 'o_SSW', 'SW': 'o_SW', \
                            'South': 'o_South', 'Variable': 'o_Variable', 'WSW': 'o_WSW', 'West': 'o_West', \
                            'WNW': 'o_WNW'}, inplace=True)
Maybe it would be better to write a loop that inserts a prefix into the wind
direction variables; that way, I would avoid that kind of problem (see the
sketch below).
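A sketch of that idea, using pandas' `add_prefix` instead of a hand-written rename dict (assuming the same column names as above):

    prefixes = {'o_wdire': 'o_', 's_wdire': 's_', 't_wdire': 't_',
                'k_wdire': 'k_', 'b_wdire': 'b_'}
    dummies = [pd.get_dummies(ds[col]).add_prefix(pref)
               for col, pref in prefixes.items()]
    ds_wdire = pd.concat(dummies, axis=1).astype('float64')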
|
robust DOM parsing with getElementsByTagName
Question: The following (from "Dive into Python")
    from xml.dom import minidom

    xmldoc = minidom.parse('/path/to/index.html')
    reflist = xmldoc.getElementsByTagName('img')
failed with
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/path/to/htmlToNumEmbedded.py", line 2, in <module>
        xmldoc = minidom.parse('/path/to/index.html')
      File "/usr/lib/python2.7/xml/dom/minidom.py", line 1918, in parse
        return expatbuilder.parse(file)
      File "/usr/lib/python2.7/xml/dom/expatbuilder.py", line 924, in parse
        result = builder.parseFile(fp)
      File "/usr/lib/python2.7/xml/dom/expatbuilder.py", line 207, in parseFile
        parser.Parse(buffer, 0)
    xml.parsers.expat.ExpatError: mismatched tag: line 12, column 4
Using `lxml`, which is recommended by
<http://www.ianbicking.org/blog/2008/12/lxml-an-underappreciated-web-scraping-
library.html>, allows you to parse the document, but it does not seem to have
a `getElementsByTagName`. The following works:
    from lxml import html

    xmldoc = html.parse('/path/to/index.html')
    root = xmldoc.getroot()
    for i in root.iter("img"):
        print i
but seems kludgey: is there a built-in function that I overlooked?
Or another more elegant way to have **robust DOM parsing with
getElementsByTagName**?
Answer: If you want a list of Elements instead of iterating over the return value of
`Element.iter`, call `list` on it:

    from lxml import html
    reflist = list(html.parse('/path/to/index.html.html').iter('img'))
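As for a built-in closer to `getElementsByTagName`: ElementTree-style trees, including lxml's, also offer `findall`, e.g.:

    from lxml import html

    root = html.parse('/path/to/index.html').getroot()
    imgs = root.findall('.//img')  # all <img> elements, much like getElementsByTagName('img')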
|
Splitting HTML text by <br> while using beautifulsoup
Question: HTML code:
    <td> <label class="identifier">Speed (avg./max):</label> </td> <td class="value"> <span class="block">4.5 kn<br>7.1 kn</span> </td>
I need to get the values 4.5 kn and 7.1 kn as separate list items so I can
append them separately. I wanted to split the text string using re.sub, but it
does not work. I also tried to use replace to replace the br, but it did not
work. Can anybody provide any insight?
Python code:
    def NameSearch(shipLink, mmsi, shipName):
        from bs4 import BeautifulSoup
        import urllib2
        import csv
        import re

        values = []
        values.append(mmsi)
        values.append(shipName)
        regex = re.compile(r'[\n\r\t]')
        i = 0
        with open('Ship_indexname.csv', 'wb')as f:
            writer = csv.writer(f)

        while True:
            try:
                shipPage = urllib2.urlopen(shipLink, timeout=5)
            except urllib2.URLError:
                continue
            except:
                continue
            break

        soup = BeautifulSoup(shipPage, "html.parser")  # Read the web page HTML
        #soup.find('br').replaceWith(' ')
        #for br in soup('br'):
            #br.extract()
        table = soup.find_all("table", {"id": "vessel-related"})  # Finds table with class table1
        for mytable in table:  # Loops tables with class table1
            table_body = mytable.find_all('tbody')  # Finds tbody section in table
            for body in table_body:
                rows = body.find_all('tr')  # Finds all rows
                for tr in rows:  # Loops rows
                    cols = tr.find_all('td')  # Finds the columns
                    for td in cols:  # Loops the columns
                        checker = td.text.encode('ascii', 'ignore')
                        check = regex.sub('', checker)
                        if check == ' Speed (avg./max): ':
                            i = 1
                        elif i == 1:
                            print td.text
                            pat = re.compile('<br\s*/>')
                            print pat.sub(" ", td.text)
                            values.append(td.text.strip("\n").encode('utf-8'))  # Takes the second column's value and appends it to values
                            i = 0
        #print values
        return values

    NameSearch('https://www.fleetmon.com/vessels/kind-of-magic_0_3478642/', '230034570', 'KIND OF MAGIC')
Answer: Locate the "Speed (avg./max)" label first and then go to the value via
[`.find_next()`](http://www.crummy.com/software/BeautifulSoup/bs4/doc/#find-
all-next-and-find-next):

    from bs4 import BeautifulSoup

    data = '<td> <label class="identifier">Speed (avg./max):</label> </td> <td class="value"> <span class="block">4.5 kn<br>7.1 kn</span> </td>'

    soup = BeautifulSoup(data, "html.parser")

    label = soup.find("label", class_="identifier", text="Speed (avg./max):")
    value = label.find_next("td", class_="value").get_text(strip=True)
    print(value)  # prints 4.5 kn7.1 kn
Now, you can extract the actual numbers from the string:
    import re

    speed_values = re.findall(r"([0-9.]+) kn", value)
    print(speed_values)
Prints `['4.5', '7.1']`.
You can then further convert the values to floats and unpack into separate
variables:
    avg_speed, max_speed = map(float, speed_values)
|
Why doesn't Python lxml take my xml?
Question: I'm using the Python lxml library to parse my xml, but I'm having a hard time
parsing one specific text. Check out the following code:
    >>> print type(raw_text_xml)
    <type 'unicode'>
    >>> from lxml import etree
    >>> article_xml_root = etree.fromstring(raw_text_xml, parser)
    Traceback (most recent call last):
      File "<input>", line 1, in <module>
        article_xml_root = etree.fromstring(raw_text_xml, parser)
      File "lxml.etree.pyx", line 3032, in lxml.etree.fromstring (src/lxml/lxml.etree.c:68121)
      File "parser.pxi", line 1786, in lxml.etree._parseMemoryDocument (src/lxml/lxml.etree.c:102470)
      File "parser.pxi", line 1667, in lxml.etree._parseDoc (src/lxml/lxml.etree.c:101229)
      File "parser.pxi", line 1035, in lxml.etree._BaseParser._parseUnicodeDoc (src/lxml/lxml.etree.c:96139)
      File "parser.pxi", line 582, in lxml.etree._ParserContext._handleParseResultDoc (src/lxml/lxml.etree.c:91290)
      File "parser.pxi", line 683, in lxml.etree._handleParseResult (src/lxml/lxml.etree.c:92476)
      File "parser.pxi", line 622, in lxml.etree._raiseParseError (src/lxml/lxml.etree.c:91772)
    XMLSyntaxError: Start tag expected, '<' not found, line 1, column 1
so it says the first character is not a `<`, which by inspection is true:
    >>> print raw_text_xml[:20]
    ďťż<?xml version="1.
it has 3 weird characters in front of the xml. So to clean these I tried the
following:
    >>> article_xml_root = etree.fromstring(raw_text_xml[3:], parser)
    Traceback (most recent call last):
      File "<input>", line 1, in <module>
        article_xml_root = etree.fromstring(raw_text_xml[3:], parser)
      File "lxml.etree.pyx", line 3032, in lxml.etree.fromstring (src/lxml/lxml.etree.c:68121)
      File "parser.pxi", line 1781, in lxml.etree._parseMemoryDocument (src/lxml/lxml.etree.c:102435)
    ValueError: Unicode strings with encoding declaration are not supported. Please use bytes input or XML fragments without declaration.
And now it suddenly complains about it being a unicode string with encoding
declaration, while if you look all the way up to my first line of code, it was
Unicode all along.
Does anybody know why after slicing it suddenly gives a whole different error?
And most importantly, does anybody know how I can solve this?
Answer: > why after slicing it suddenly gives a whole different error?

Because after the slicing the first error vanishes and the parsing can
progress until the second one is found. (Those three weird characters are, by
the way, a UTF-8 byte order mark that was decoded with the wrong codec.)

> And most importantly, does anybody know how I can solve this?

Maybe the error message is right (it happens) and you can solve it by
converting the unicode to bytes. I guess that's better than removing the
encoding declaration.

    raw_text_xml.encode('utf8')

Or instead of `'utf8'`, whatever encoding is declared in the xml fragment.
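Combining the two steps (the slice from the question to drop the stray BOM characters, plus the encode from this answer) should get past both errors:

    article_xml_root = etree.fromstring(raw_text_xml[3:].encode('utf8'), parser)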
|
Python 3 need assistance
Question:
    def bubble_down(L, start, end):
        """ (list, int, int) -> NoneType

        Bubble down through L from indexes end through start, swapping items that are out of place.

        >>> L = [4, 3, 2, 1, 0]
        >>> bubble_down(L, 1, 3)
        >>> L
        [4, 1, 3, 2, 0]
        """
        for i in range(start, end):
            if L[i] < L[i]:
                L[i - 1], L[i] = L[i], L[i - 1]
This function won't work... and I don't get why the docstring example says L
should become [4, 1, 3, 2, 0], not [4, 1, 2, 3, 0].
Answer: You're almost there. Your comparison is wrong (you are comparing the same
element), and you probably want to think a bit more about your end boundary.
Most importantly, you want to iterate the process until there is no change.
Here's a correct version:
    def bubble_down(to_sort, start=0, end=None):
        if end is None:
            end = len(to_sort)
        did_change = True
        while did_change:
            did_change = False
            for i in range(start, end-1):
                if to_sort[i] > to_sort[i+1]:
                    did_change = True
                    to_sort[i], to_sort[i+1] = to_sort[i+1], to_sort[i]
        return to_sort

    >>> print(bubble_down([5, 7, 6, 1]))
    [1, 5, 6, 7]
    >>> print(bubble_down([4, 3, 2, 1, 0]))
    [0, 1, 2, 3, 4]
|
Cannot import matplotlib in Python 3
Question: I want to install matplotlib on Windows. To do this I tried these lines,

    git clone https://github.com/matplotlib/matplotlib
    cd matplotlib
    py setup.py build
    py setup.py install
which I found at [this link](http://stackoverflow.com/questions/8605847/how-
to-install-matplotlib-with-python3-2)
But I think the installation did not complete successfully. This is the result of
`py setup.py install`:

[](http://i.stack.imgur.com/iasty.png)

So the following imports still do not work:

    import matplotlib.pyplot as plt
    import matplotlib.animation as animation

An error says Unresolved import. So I am supposing this is because freetype
and png were not installed. Now I found freetype.dll and installed it, but
where should I put that file? Any idea about this problem?
Answer: Yes. Matplotlib has some dependencies that need to be installed in order for
the library to function fully.
[Quoting](http://matplotlib.org/users/installing.html#installing-from-source):
> Once you have satisfied the requirements detailed below (mainly python,
> numpy, libpng and freetype), you can build matplotlib:
>
>     cd matplotlib
>     python setup.py build
>     python setup.py install
To be sure of the correct procedure check the [build
instructions](http://matplotlib.org/users/installing.html#build-requirements).
If this process seems somewhat complex (it sometimes is) you can consider
Python distributions such as:

1) [WinPython](http://winpython.github.io/)

2) [Python XY](https://python-xy.github.io/)

3) [Anaconda](https://www.continuum.io/downloads)

that already bring several libraries by default and make it a lot easier
to work with Python (and extensions).
|
Flask application on uwsgi gives a TypeError: 'Flask' object is not iterable
Question: I'm running Python/Flask application on Python 3.5 in a virtualenv on Arch
Linux. The application is run by a uwsgi server that is connected via socket
to Nginx.
When I perform a request, I get the following uwsgi error:
    Mar 23 02:38:19 saltminion1.local uwsgi[20720]: TypeError: 'Flask' object is not iterable
This is the callable that uwsgi is configured to use:
    def create_app(config=None, import_name=None):
        if import_name is None:
            import_name = DefaultConfig.PROJECT
        app = Flask(import_name, instance_path=INSTANCE_FOLDER_PATH, instance_relative_config=True)
        configure_app(app, config)
        configure_database(app)
        configure_logging(app)
        configure_error_handlers(app)
        configure_blueprints(app)
        return app
Things work fine when I start the application using the built-in HTTP server
both on the local OS X development workstation and on Arch/Ubuntu vagrant
boxes.
Problem is: After adding debug statements it became clear the error occurs at
some point in the Flask code itself and not within my app. How can I get a
stack trace here to troubleshoot better?
Answer: A WSGI app (which a Flask app is) is a callable object. That's what uWSGI expects
to be passed to `callable`. You're passing an app factory, which is also
callable, but you need to pass it the _result_ of that call, because the app
factory isn't a WSGI application itself.
The factory function can be called directly in the configuration. The `module`
and `callable` options can also be combined in just `module`.
    module = my_app:create_app()
This tells uWSGI to import `my_app`, find `my_app.create_app`, and call it.
The result of that, the Flask app, is what's actually used as the callable.
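An alternative (a sketch, assuming your package is importable as `my_app`): create the instance yourself in a small wsgi module and point uWSGI at that with `module = wsgi`:

    # wsgi.py
    from my_app import create_app

    application = create_app()  # `application` is uWSGI's default callable name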
|
Python multiprocessing.Process behaves non deterministic
Question: The following code shows a simple multiprocessing.Process pipeline with a
shared dictionary of lists and a task queue for different consumers:
    import multiprocessing

    class Consumer(multiprocessing.Process):

        def __init__(self, task_queue, result_dict):
            multiprocessing.Process.__init__(self)
            self.task_queue = task_queue
            self.result_dict = result_dict

        def run(self):
            proc_name = self.name
            while True:
                next_task = self.task_queue.get()
                if next_task is None:
                    # Poison pill means shutdown
                    print('%s: Exiting' % proc_name)
                    self.task_queue.task_done()
                    break
                print('%s: %s' % (proc_name, next_task))
                # Do something with the next_task
                l = self.result_dict[5]
                l.append(3)
                self.result_dict[5] = l
                # alternative, but same problem
                #self.result_dict[5] += [3]
                self.task_queue.task_done()
            return

    def provide_tasks(tasks, num_worker):
        low = [
            ['w1', 'w2'],
            ['w3'],
            ['w4', 'w5']
        ]
        for el in low:
            tasks.put(el)
        # Add a poison pill for each worker
        for i in range(num_worker):
            tasks.put(None)

    if __name__ == '__main__':
        num_worker = 3
        tasks = multiprocessing.JoinableQueue()
        manager = multiprocessing.Manager()
        results = manager.dict()
        lists = [manager.list() for i in range(1, 11)]
        for i in range(1, 11):
            results[i] = lists[i - 1]

        worker = [Consumer(tasks, results) for i in range(num_worker)]
        for w in worker:
            w.start()

        p = multiprocessing.Process(target=provide_tasks, args=(tasks, num_worker))
        p.start()

        # Wait for all of the tasks to finish
        p.join()
        print(results)
When you run this example with Python 3.x you will receive different outputs
for the results dict. I actually expect the results dict to look like

    {1: [], 2: [], 3: [], 4: [], 5: [3, 3, 3], 6: [], 7: [], 8: [], 9: [], 10: []}

But for some executions it looks like this:

    {1: [], 2: [], 3: [], 4: [], 5: [3, 3], 6: [], 7: [], 8: [], 9: [], 10: []}

Can someone explain this behavior to me? Why is a number sometimes missing?
**Updated solution approach according to the proposed answer:**
    if next_task is None:
        with lock:
            self.result_dict.update(self.local_dict)
    [...]
Where lock is a `manager.Lock()` and self.local_dict is a `defaultdict(list)`.
**Moved lock according to the answer comment. Also added a version that
doesn't work with the lock.**
# Works
with lock:
l = self.result_dict[x]
l.append(3)
self.result_dict[x] = l
self.task_queue.task_done()
# Doesn't work. Even if I move the lock out of the loop.
for x in range(1, 10):
with lock:
l = self.result_dict[x]
l.append(3)
self.result_dict[x] = l
In order to get the second example to work, we need to call `join` on all
workers too.
Answer: Getting a local copy of the list, modifying it, and reassigning it to the
manager dict is not an atomic operation, thus creating a race condition where
an append operation can get "lost".
Described in [this Python bug
report](http://bugs.python.org/issue9801).
l = self.result_dict[5] # <-- race begins
l.append(3)
self.result_dict[5] = l # <-- race ends
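As the question's own update hints, one way to close the race is to make the whole read-modify-write atomic with a shared lock (a sketch; `lock` is assumed to be a `manager.Lock()` passed to each `Consumer`, as it is not in the original code):
    # inside Consumer.run(), guarding the full read-modify-write
    with lock:
        l = self.result_dict[5]    # read
        l.append(3)                # modify
        self.result_dict[5] = l    # write back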
|
Gradient Descent vs Adagrad vs Momentum in TensorFlow
Question: I'm studying _TensorFlow_ and how to use it, even if I'm not an expert on
neural networks and deep learning (just the basics).
Following tutorials I don't understand the real and practice difference
between the three optimizers for a loss.
Now I need an advice.
I look at the API
<https://www.tensorflow.org/versions/r0.7/api_docs/python/train.html#optimizers>
and I understand the principles, but my questions are:
**1\. When is it preferable to use one instead of the others?**
**2\. Are there important differences to know?**
Answer: [Here's a nice explanation](http://sebastianruder.com/optimizing-gradient-
descent/) and comparison of different optimizers (not mine)
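For a concrete feel of how interchangeable they are, here is a minimal sketch against the TF 0.x-era API the question links to (the toy loss is an assumption; only the construction line differs between the three):
    import tensorflow as tf
    # toy quadratic loss so the snippet is self-contained
    w = tf.Variable(5.0)
    loss = tf.square(w - 3.0)
    # the three optimizers are drop-in replacements for one another;
    # only the update rule and its hyperparameters differ
    train_sgd = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss)
    train_momentum = tf.train.MomentumOptimizer(learning_rate=0.1, momentum=0.9).minimize(loss)
    train_adagrad = tf.train.AdagradOptimizer(learning_rate=0.1).minimize(loss)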
|
Gstreamer RTSP Server not working (SDP contains no streams)
Question: Here is my code for GstRtspServer that should just stream mp4 file for now:
import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstRtspServer', '1.0')
from gi.repository import Gst, GObject, GstRtspServer
GObject.threads_init()
Gst.init(None)
class RTSP_Server:
def __init__(self):
self.server = GstRtspServer.RTSPServer.new()
self.address = '192.168.1.15'
self.port = '8554'
self.launch_description = '( playbin uri=file:///E://...sample_video.mp4 )'
self.server.set_address(self.address)
self.server.set_service(self.port)
self.server.connect("client-connected",self.client_connected)
self.factory = GstRtspServer.RTSPMediaFactory.new()
self.factory.set_launch(self.launch_description)
self.factory.set_shared(True)
self.factory.set_transport_mode(GstRtspServer.RTSPTransportMode.PLAY)
self.mount_points = self.server.get_mount_points()
self.mount_points.add_factory('/video', self.factory)
self.server.attach(None)
print('Stream ready')
GObject.MainLoop().run()
def client_connected(self, arg1, arg2):
print('Client connected')
server = RTSP_Server()
I run it, get 'Stream ready' and then type in command line:
C:\gstreamer\1.0\x86_64\bin>gst-launch-1.0 rtspsrc location=rtsp://192.168.1.15:8554/video latency=0 ! decodebin ! autovideosink
And receive this:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Progress: (open) Opening Stream
Progress: (connect) Connecting to rtsp://192.168.1.15:8554/video
Progress: (open) Retrieving server options
Progress: (open) Retrieving media info
ERROR: from element /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0: Could not get/set settings from/on resource.
Additional debug info:
gstrtspsrc.c(6845): gst_rtspsrc_setup_streams (): /GstPipeline:pipeline0/GstRTSP
Src:rtspsrc0:
SDP contains no streams
ERROR: pipeline doesn't want to preroll.
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
C:\gstreamer\1.0\x86_64\bin>
Also I receive 'Client connected' in Python, and the first frame of the video
opens and then closes after a moment.
* Gst.parse_launch('playbin uri=file:///E://...sample_video.mp4') works OK - (with full address)
* VLC says that it is impossible to open rtsp://192.168.1.15:8554/video
* I've tried launching it on another computer in local network
* With 127.0.0.1 as well
* And receive stream without latency=0 ! decodebin ! autovideosink
What's the problem? I am looking forward to your help!
Answer: Your server is listening on:
> self.port = '554'
while you are trying to play port 8554:
> VLC says that it is impossible to open rtsp://192.168.1.15:8554/video
|
parsing xml with python, selecting a tag using a sibling tag as selector
Question: From the following XML structure, and using ElementTree, I'm trying to parse the
descriptions' text _solely_ for the items whose titles' text contains a certain
keyword of interest. Thanks for any suggestion.
<data>
<item>
<title>contains KEYWORD of interest </title>
<description> description text of interest "1"</description>
</item>
<item>
<title>title text </title>
<description> description text not of interest</description>
</item>
.
.
.
<item>
<title>also contains KEYWORD of interest </title>
<description> description text of interest "k" </description>
</item>
</data>
desired outcome:
description text of interest "1"
description text of interest "k"
Answer: You can use [`lxml`](http://lxml.de/) which support
[XPath](http://lxml.de/xpathxslt.html):
xml = '''<data>
<item>
<title>contains KEYWORD of interest </title>
<description> description text of interest "1"</description>
</item>
<item>
<title>title text </title>
<description> description text not of interest</description>
</item>
.
.
.
<item>
<title>also contains KEYWORD of interest </title>
<description> description text of interest "k" </description>
</item>
</data>
'''
import lxml.etree
root = lxml.etree.fromstring(xml)
root.xpath('.//title[contains(text(), "KEYWORD")]/'
'following-sibling::description/text()')
# => [' description text of interest "1"', ' description text of interest "k" ']
Using
[`xml.etree.ElementTree`](https://docs.python.org/2/library/xml.etree.elementtree.html):
import xml.etree.ElementTree as ET
root = ET.fromstring(xml)
[item.find('description').text for item in root.iter('item')
 if 'KEYWORD' in item.find('title').text]
# => [' description text of interest "1"', ' description text of interest "k" ']
|
Merging pandas dataframes duplicates some data
Question: Thanks for taking the time to read my post.
I'm using Python pandas and merging information from a number of CSV and TSV
files. When I execute the 2nd merge, data is duplicated in the resulting
dataframe. I'm assuming I'm missing something basic with the merge call, but I
haven't been able to figure it out.
Code:
from pandas import DataFrame, read_csv
import matplotlib.pyplot as plt
import pandas as pd
import sys
import matplotlib
# Enable inline plotting
%matplotlib inline
# read data into dataframes
ticketdata = r'/pathto.csv'
userdata = r'/pathto.csv'
shipmentdata = r'/pathto.tsv'
tickets_df = pd.read_csv((ticketdata), usecols=['Id',"Requester",'Created at',"Requester email",
"Requester external id"])
users_df = pd.read_csv((userdata), usecols=['External ID','Printers',"Organization Title"])
shipment_df = pd.read_csv((shipmentdata), delimiter='\t', usecols=['Cust','Printer ID'])
# Clean up tickets_df & shipment_df
# Change "Requester external id" to "External ID" to support the merge
tickets_df.columns = ['Ticket Id',"Requester","External ID","Requester email",'Created at']
shipment_df.columns = ['VAR','Printers']
# Change column order for the sake of readability
tickets_df = tickets_df[['Ticket Id','Requester','Created at',"Requester email","External ID"]]
# Replace NaN in External ID with 0 and merge data
tickets_df.fillna(0, inplace=True)
merge1_df = pd.merge(tickets_df, users_df, on=['External ID'], how='left')
merge1_df = merge1_df[['Ticket Id','Created at',"Organization Title",'Requester',"Requester email","External ID",'Printers']]
merge2_df = pd.merge(merge1_df, shipment_df, on=['Printers'], how='left')
merge1_df looks as expected (NaN is expected for some values):
Ticket Id Created at Organization Title Requester Requester email External ID Printers
0 1 2014-08-21 18:19 NaN dude [email protected] 0 NaN
1 2 2014-09-09 12:04 NaN dude1 [email protected] 0 NaN
2 3 2014-09-09 12:04 NaN dude2 [email protected] 0 NaN
3 4 2014-09-09 12:04 NaN dude3 [email protected] 0 NaN
merge2_df contains thousands of dupes:
Ticket Id Created at Organization Title Requester Requester email External ID Printers
0 1 2014-08-21 18:19 NaN dude [email protected] 0 NaN
1 1 2014-08-21 18:19 NaN dude [email protected] 0 NaN
2 1 2014-08-21 18:19 NaN dude [email protected] 0 NaN
3 1 2014-08-21 18:19 NaN dude [email protected] 0 NaN
Any idea(s) how I am messing up merge2_df?
Answer: The issue was with NaN values in the shipment_df dataframe. I added the
following to replace NaN with 0, and the duplicate entries in merge2_df were
resolved:
shipment_df.fillna(0, inplace=True)
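The duplication happens because pandas treats NaN join keys as equal to one another, so every left row with a NaN key matches every NaN row on the right. A minimal sketch with made-up data (not from the original files):
    import numpy as np
    import pandas as pd
    left = pd.DataFrame({'Printers': [np.nan], 'Ticket Id': [1]})
    right = pd.DataFrame({'Printers': [np.nan] * 3, 'VAR': ['a', 'b', 'c']})
    # one ticket comes back as three rows, one per NaN key on the right
    print(pd.merge(left, right, on='Printers', how='left'))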
|
Python 2.7 NUMPY ImportError: PyCapsule_Import could not import module "date time" in mac version 10.11
Question: I am using Python 2.7 and Sublime Text 3. I ran this code in terminal, and it
ran well but when I try to run it using Sublime Text, it doesn't.
TERMINAL MODE:
Last login: Wed Mar 23 11:16:23 on ttys000
admins-iMac:~ admin$ python
Python 2.7.11 |Anaconda 2.5.0 (x86_64)| (default, Dec 6 2015, 18:57:58)
[GCC 4.2.1 (Apple Inc. build 5577)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
import numpy
numbers = [1,5,6,7,8,9]
numpy.mean(numbers)
6.0
SUBLIME TEXT MODE
RESPONSE FROM SUBLIME TEXT:
ImportError: PyCapsule_Import could not import module "date time"
Answer: I was able to find an answer through some guidance from Stephen: if you
rename the file, it works. Somehow having the file named Numbers.py doesn't
work, so I renamed it test.py. Sounds simple, but it worked, after a week of
difficulty. Thanks everyone.
|
error: Cython does not appear to be installed
Question: Pip does not recognize Cython even though it is installed.
C:\Python27>python -m pip install watchdog
Collecting watchdog
C:\Python27\lib\site-packages\pip\_vendor\requests\packages\urllib3\util\ssl_.py:315: SNIMissingWarning: An HTTPS request has been made, but the SNI (Subject Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#snimissingwarning.
SNIMissingWarning
C:\Python27\lib\site-packages\pip\_vendor\requests\packages\urllib3\util\ssl_.py:120: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
Downloading watchdog-0.8.3.tar.gz (83kB)
100% |################################| 92kB 1.2MB/s
Collecting PyYAML>=3.10 (from watchdog)
Cache entry deserialization failed, entry ignored
Cache entry deserialization failed, entry ignored
Downloading PyYAML-3.11.tar.gz (248kB)
100% |################################| 256kB 1.3MB/s
Complete output from command python setup.py egg_info:
running egg_info
creating pip-egg-info\PyYAML.egg-info
writing pip-egg-info\PyYAML.egg-info\PKG-INFO
writing top-level names to pip-egg-info\PyYAML.egg-info\top_level.txt
writing dependency_links to pip-egg-info\PyYAML.egg-info\dependency_links.txt
writing manifest file 'pip-egg-info\PyYAML.egg-info\SOURCES.txt'
warning: manifest_maker: standard file '-c' not found
failed to import Cython: DLL load failed: %1 is not a valid Win32 application.
error: Cython does not appear to be installed
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in c:\users\jasons\a
ppdata\local\temp\pip-build-9vxzli\PyYAML\
C:\Python27>Cython --version
Cython version 0.23.4
Answer: Go to the Scripts folder (follow each step):
1. Open `cmd` as administrator.
2. Go to your Python Scripts folder (usually `C:\Python27\Scripts`).
3. Type `pip install watchdog`.
4. pip should handle the rest automatically for you.
Hope this helps! (This worked for me)
|
correct static files setting
Question: Hello, I'm very confused about setting static files up. Everything works
fine (displays images, javascript, css) no matter what I try, so I'm confused
about which setup is the right one.
Currently, this is how my project looks like
project
--project
---------static
---------media
--env
--static
--------media
--------static
And this is my code
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(os.path.dirname(BASE_DIR), "static", "media")
STATIC_URL = '/static/'
STATICFILES_DIRS = (os.path.join(BASE_DIR, 'static'),)
STATIC_ROOT = os.path.join(os.path.dirname(BASE_DIR), "static", "static")
# List of finder classes that know how to find static files in
# various locations.
STATICFILES_FINDERS = (
'django.contrib.staticfiles.finders.FileSystemFinder',
'django.contrib.staticfiles.finders.AppDirectoriesFinder',
# 'django.contrib.staticfiles.finders.DefaultStorageFinder',
)
When I run python manage.py collectstatic, I don't get any error, but the static
folder inside the outer static folder doesn't contain anything. The media
folder inside the outer static folder, however, does contain the files from the
media folder in the project folder.
Also I have this for aws,
AWS_FILE_EXPIRE = 200
AWS_PRELOAD_METADATA = True
AWS_QUERYSTRING_AUTH = True
DEFAULT_FILE_STORAGE = 'project.utils.MediaRootS3BotoStorage'
STATICFILES_STORAGE = 'project.utils.StaticRootS3BotoStorage'
AWS_STORAGE_BUCKET_NAME = 'realproject'
S3DIRECT_REGION = 'ap-northeast-2'
S3_URL = '//%s.s3.amazonaws.com/' % AWS_STORAGE_BUCKET_NAME
MEDIA_URL = '//%s.s3.amazonaws.com/media/' % AWS_STORAGE_BUCKET_NAME
MEDIA_ROOT = MEDIA_URL
STATIC_URL = S3_URL + 'static/'
ADMIN_MEDIA_PREFIX = STATIC_URL + 'admin/'
import datetime
date_two_months_later = datetime.date.today() + datetime.timedelta(2 * 365 / 12)
expires = date_two_months_later.strftime("%A, %d %B %Y 20:00:00 GMT")
AWS_HEADERS = {
'Expires': expires,
'Cache-Control': 'max-age=86400',
}
Can someone please tell me if I'm doing it right?
By the way, I read <https://docs.djangoproject.com/en/1.9/howto/static-files/>
and followed it; I'm not sure if I followed it correctly (displayed above), which
is why I'm asking.
Answer: The `python manage.py collectstatic` command looks for all your static
directories and combines those file in the directory defined by the
`STATIC_ROOT` setting.
In your case, `STATIC_ROOT` is set to `os.path.join(os.path.dirname(BASE_DIR),
"static", "static")`, i.e.
your_project/static/static
So this is where the static files are being collected to. If you want them in
the outer static directory, you can change `STATIC_ROOT` to
`os.path.join(os.path.dirname(BASE_DIR), "static")`.
There is a good discussion of this in the excellent Django [docs
here](https://docs.djangoproject.com/en/1.9/howto/static-files/).
There is quite a lot to take in with these settings, so here is a quick summary
of each static setting as an example:
# this is the URL that django will look for static resources at
# - i.e. http://your_domain/static
# so this one is a URL used when by your web server and in template
# shortcuts.
STATIC_URL = '/static/'
# this is where Django will look for static files to collect.
# I.e. the search locations that collectstatic uses.
# 'my_project/static' in this instance. You need to add the places
# you write your static files to this directory. For example, if you
# have several places where you are writing css files, add their
# container directories to this setting.
# it is a list of places to look for static files.
STATICFILES_DIRS = (os.path.join(BASE_DIR, 'static'),)
# this is where collectstatic will collect the static files to.
# When you hook this all into your webserver, you would tell your
# webserver that the /static/ url maps to this directory so that
# your app can find the static content. It's a directory in your
# project usually.
# it's a single directory where the static files are collected together.
STATIC_ROOT
|
Redirecting the print output to a .txt file in Python
Question: I am a complete beginner in Python. I have tried many methods from stackoverflow
answers on this question, but none of them works in my script.
I have this little script, but I cannot get the huge result into a
.txt file so I can analyze the data. How do I redirect the print output to a
txt file on my computer?
from nltk.util import ngrams
import collections
with open("text.txt", "rU") as f:
sixgrams = ngrams(f.read().decode('utf8').split(), 2)
result = collections.Counter(sixgrams)
print result
for item, count in sorted(result.iteritems()):
if count >= 2:
print " ".join(item).encode('utf8'), count
Answer: Just do it on command line: `python script.py > text.txt`
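If you'd rather do it inside the script, a sketch that reuses the question's own `result` variable and writes to a file directly instead of redirecting stdout:
    with open("output.txt", "w") as out:
        for item, count in sorted(result.iteritems()):
            if count >= 2:
                out.write("%s %d\n" % (" ".join(item).encode('utf8'), count))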
|
What's the best way to make a new migration in a standalone Django app?
Question: I have a Django app which was spun out a while ago from the project it was
originally built for, made into a separate standalone app, and put on PyPI
(<https://pypi.python.org/pypi/mysociety-django-images>). The app is built
expecting to be run with Django 1.7+.
I'd now like to make a change to one of the models there - just changing a
`max_length` on a field. I can't see anything in the documentation about how
to make a new migration for it? Do I need to make an example project and use
that, or is there a better way?
Answer: You can do this by making a script like:
#!/path/to/your python
import sys
import django
from django.conf import settings
from django.core.management import call_command
settings.configure(
DEBUG=True,
INSTALLED_APPS=(
'django.contrib.contenttypes',
'yourAppName',
),
)
django.setup()
call_command('makemigrations', 'yourAppName')
(this is also how we go about testing our standalone apps).
I don't know which is the better practice between this and creating an example
project (this appears lighter to do).
|
Store more than 1 value in python array?
Question: I would like to store more than one value in a Python array (I am open to any
other data structure too).
For example :
array[1][2][3] = 1 # (this is what I am able to do now)
But later I also get the value 2; now, instead of storing it in another array
using the same indices, I want to be able to do this:
array[1][2][3] = 1,2
But I don't want to concatenate the existing result like a string and then split
the string to get the individual values.
Is there a way of doing this without having to introduce another dimension to
the array?
edit: I want a neater way to store 2 values in the same cell.
Thanks
Answer: I would use defaultdict for this:
from collections import defaultdict
array = defaultdict(list)
array[(1,2,3)].append(1)
array[(1,2,3)].append(2)
Now array at position (1,2,3) is a list containing 1 and 2
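For example:
    print(array[(1, 2, 3)])  # [1, 2]
    print(array[(9, 9, 9)])  # [] -- a missing key defaults to an empty list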
|
Why does my import time give error module object is not callable
Question: I just want to import the time module to get a timestamp in Python.
import time
I have this sample test code which works just fine on cloud 9.
import time
now = int(time.time() * 1000)
print now
But it doesn't work on my mac. I get an error right on line 1.
Python 2.7.11 (default, Mar 21 2016, 23:21:56)
[GCC 4.2.1 Compatible Apple LLVM 7.0.2 (clang-700.1.81)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import time
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "time.py", line 2, in <module>
now = int(time.time() * 1000)
TypeError: 'module' object is not callable
>>>
Not sure what is going on here; it is frustrating.
Answer: You have a file in your local directory called "time.py". Rename it.
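A quick way to confirm the shadowing (a sketch): check which file actually got imported.
    import time
    # the stdlib time module is compiled into the interpreter and has no
    # __file__; if this prints a path, a local time.py is shadowing it
    print getattr(time, '__file__', 'built-in (the real module)')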
|
Python XHR Request Timing Out
Question: Trying to wrap my head around using requests to get JavaScript-loaded content
without spawning an actual browser to render it. I'm looking at using the
requests lib to get the tables, but I keep getting a 504 with my test code and
I'm not 100% sure why.
So I'm looking at getting horse racing data from: sports.betway.com/#/horse-racing/uk-and-ireland/haydock
I watched the network traffic and found the source of the traffic. It's a call
to /emoapi/emos with an eventIds number.
I tried this:
import requests
url = 'https://sports.betway.com/emoapi/emos'
params = {
'eventIds': '807789',
'lang': 'en'
}
headers = {'Accept': '*/*',
'Accept-Encoding': 'gzip, deflate',
'Accept-Language': 'en-US,en;q=0.8',
'Connection': 'keep-alive',
'Content-Length': '271',
'Content-Type': 'application/json',
'Host': 'sports.betway.com',
'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.87 Safari/537.36'}
#Note: I do also set the origin and ref link in the header but I can't post that many links in a question.
response = requests.post(url, params=params, headers=headers)
print response
fixtures = response.json()
print fixtures
I can't see what else I'm missing from the request. But the print response
comes back as a 504.
This is an example of the full payload in the browser's request header, which
asks for a whole bunch of Ids rather than just the one I'm trying to fetch:
{"eventIds":[807789,808612,808597,807790,808613,808598,807791,808611,808599,807792,808614,808600,807793,808615,808601,807794,808616,808602,807795,808617,807781,808591,807782,808589,807783,808590,807785,808592,807784,808593,807786,808594,807788,808595,807787],"lang":"en"}
And it's a POST to that URL so I'm not sure why it's timing out.
Can anyone shed any light on where I'm going wrong here? Is it something
painfully obvious?
Answer: The payload should be included in the request body rather than in URL params.
The payload in this case is a raw JSON string.
import requests
url = 'https://sports.betway.com/emoapi/emos'
data = '{"eventIds": [807789]}'
response = requests.post(url, data=data)
print response.text
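Equivalently (a sketch), the `json=` parameter lets requests serialize the payload and set the Content-Type header for you:
    response = requests.post(url, json={"eventIds": [807789], "lang": "en"})
    print response.text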
|
How can you extract data from this json using, beautifulsoup and python?
Question: How can I get those two values, utc_last_updated and name, given the following
JSON? I used requests to fetch the content, and then I used
BeautifulSoup to make it like it is now. But now I just want to extract the
two values that I have shown.
"data": [
{
"scm": "hg",
"has_wiki": false,
"last_updated": "2016-03-23T14:05:27.433",
"no_forks": false,
"created_on": "2016-03-18T22:55:52.705",
"owner": "user",
"email_mailinglist": "",
"is_mq": false,
"size": 420034,
"read_only": false,
"fork_of": null,
"mq_of": null,
"state": "available",
"utc_created_on": "2016-03-18 21:55:52+00:00",
"website": "",
"description": "",
"has_issues": false,
"is_fork": false,
"slug": "store",
"is_private": true,
"name": "store",
"language": "python",
"utc_last_updated": "2016-03-23 13:05:27+00:00",
"no_public_forks": true,
"creator": null,
"resource_uri": "/1.0/repositories/my_url"
},
{
"scm": "hg",
"has_wiki": false,
"last_updated": "2016-03-18T12:26:22.261",
"no_forks": false,
"created_on": "2016-03-18T12:19:08.262",
"owner": "user",
"email_mailinglist": "",
"is_mq": false,
"size": 173137,
"read_only": false,
"fork_of": null,
"mq_of": null,
"state": "available",
"utc_created_on": "2016-03-18 11:19:08+00:00",
"website": "",
"description": "",
"has_issues": false,
"is_fork": false,
"name": 'foo'
"is_private": true,,
"language": "python",
"utc_last_updated": "2016-03-18 11:26:22+00:00",
"no_public_forks": true,
"creator": null,
"resource_uri": "/1.0/repositories/my_rl"
    }
]
I will appreciate any help.
Answer: You've got a _`JSON` response_, not `HTML` \- parse it with [`json`
module](https://docs.python.org/2/library/json.html):
import json
data = json.loads(response)
for item in data["data"]:
print(item["utc_last_updated"])
|
python-daemon doesn't call the start function
Question: I've been following [this
example](https://www.python.org/dev/peps/pep-3143/#example-usage) to implement
a python daemon, and it seems to be somewhat working, but only the configure
function is called.
This is the code I've been using:
import signal
import daemon
import lockfile
import manager
context = daemon.DaemonContext(
working_directory='/home/debian/station',
pidfile=lockfile.FileLock('/var/run/station.pid'))
context.signal_map = {
signal.SIGTERM: manager.Manager.program_terminate,
signal.SIGHUP: 'terminate',
signal.SIGUSR1: manager.Manager.program_reload_configuration,
}
manager.Manager.program_configure()
with context:
manager.Manager.program_start()
This is the code on the manager class:
@staticmethod
def program_configure():
logging.info('Configuring program')
@staticmethod
def program_reload_configuration():
logging.info('Reloading configuration')
@staticmethod
def program_start():
global Instance
logging.info('Program started')
Instance = Manager()
Instance.run()
@staticmethod
def program_terminate():
logging.info('Terminating')
And the log shows only:
INFO:root:Configuring program
For some reason `program_start()` isn't being called. `program_configure()` is
called every time the python file is read, so that's that, but why isn't
`program_start()` called?
I start the daemon by typing `sudo service station.sh start` and the line that
runs the script is:
python $DAEMON start
EDIT: After reading a bit, I've realized that the program probably exits or
hangs in `context.__enter__()` (`with` calls that). But I have no clue what is
causing this
Answer: The problem wasn't the python-daemon not calling the functions; it's the
logging that didn't work.
When the daemon creates a new process, it doesn't transfer all file handles
from the parent process, so the logs aren't written. See [this
question](http://stackoverflow.com/questions/13180720/maintaining-logging-and-or-stdout-stderr-in-python-daemon) for more info.
The solution to that is to use the `files_preserve` property like so:
# Set the logger
LOG_LEVEL = logging.DEBUG
logger = logging.getLogger()
logger.setLevel(LOG_LEVEL)
fh = logging.FileHandler(LOG_FILENAME)
logger.addHandler(fh)
# Now create the context, and notify it to preserve the log file
context = daemon.DaemonContext(
working_directory='/home/debian/station',
pidfile=lockfile.FileLock('/var/run/station.pid'),
files_preserve=[fh.stream],
)
|
Python error: subprocess.CalledProcessError: Command returned non-zero exit status 1
Question: I need to count the lines of a shell command's output in a python script.
This function works fine when there is output, but when the output is
empty, it gives an error, as shown in the error output below.
I tried to avoid that using an `if` statement in case the output of the
command is `None`, but that didn't help.
#!/usr/bin/python
import subprocess
lines_counter=0
func="nova list | grep Shutdown "
data=subprocess.check_output(func, shell=True)
if data is True:
for line in data.splitlines():
lines_counter +=1
print lines_counter
Error output:
data=subprocess.check_output(func, shell=True)
File "/usr/lib/python2.7/subprocess.py", line 573, in check_output
raise CalledProcessError(retcode, cmd, output=output)
subprocess.CalledProcessError: Command 'nova list | grep Shutdown ' returned non-zero exit status 1
Answer: The `grep` command you're running exits with exit status `1` if it doesn't
match anything. That non-zero exit code causes `check_output` to raise an
exception (that's what the "check" part of its name means).
If you don't want a failed match to raise an exception, consider catching the
`CalledProcessError` (or, on Python 3, using `subprocess.getoutput`). Or you
could change your command to avoid non-zero exit codes:
func = "nova list | grep Shutdown || true"
|
Using Pandas in Python to Join Multiple Files Based on Date
Question: I have csv files that I need to join together based upon date, but the dates in
each file are not the same (i.e. some files start on 1/1/1991 and others in
1998). I have a basic start to the code (see below) but I am not sure where to
go from here. Any tips are appreciated. Below please find a sample of the
different csv files I am trying to join.
import os, pandas as pd, glob
directory = r'C:\data\Monthly_Data'
files = os.listdir(directory)
print(files)
all_data =pd.DataFrame()
for f in glob.glob(directory):
df=pd.read_csv(f)
all_data=all_data.append(df,ignore_index=True)
all_data.describe()
File 1
DateTime F1_cfs F2_cfs F3_cfs F4_cfs F5_cfs F6_cfs F7_cfs
3/31/1991 0.860702028 1.167239264 0 0 0 0 0
4/30/1991 2.116930556 2.463493056 3.316688418
5/31/1991 4.056572581 4.544307796 5.562668011
6/30/1991 1.587513889 2.348215278 2.611659722
7/31/1991 0.55328629 1.089637097 1.132043011
8/31/1991 0.29702957 0.54186828 0.585073925 2.624375
9/30/1991 0.237083333 0.323902778 0.362583333 0.925563094 1.157786606 2.68722973 2.104090278
File 2
DateTime F1_mg-P_L F2_mg-P_L F3_mg-P_L F4_mg-P_L F5_mg-P_L F6_mg-P_L F7_mg-P_L
6/1/1992 0.05 0.05 0.06 0.04 0.03 0.18 0.08
7/1/1992 0.03 0.05 0.04 0.03 0.04 0.05 0.09
8/1/1992 0.02 0.03 0.02 0.02 0.02 0.02 0.02
File 3
DateTime F1_TSS_mgL F1_TVS_mgL F2_TSS_mgL F2_TVS_mgL F3_TSS_mgL F3_TVS_mgL F4_TSS_mgL F4_TVS_mgL F5_TSS_mgL F5_TVS_mgL F6_TSS_mgL F6_TVS_mgL F7_TSS_mgL F7_TVS_mgL
4/30/1991 10 7.285714286 8.5 6.083333333 3.7 3.1
5/31/1991 5.042553191 3.723404255 6.8 6.3 3.769230769 2.980769231
6/30/1991 5 5 1 1
7/31/1991
8/31/1991
9/30/1991 5.75 3.75 6.75 4.75 9.666666667 6.333333333 8.666666667 5 12 7.666666667 8 5.5 9 6.75
10/31/1991 14.33333333 9 14 10.66666667 16.25 11 12.75 9.25 10.25 7.25 29.33333333 18.33333333 13.66666667 9
11/30/1991 2.2 1.933333333 2 1.88 0 0 4.208333333 3.708333333 10.15151515 7.909090909 9.5 6.785714286 4.612903226 3.580645161
Answer: You didn't read the csv files correctly.
1) You need to comment out the following lines because you never use them later
in your code.
files = os.listdir(directory)
print(files)
2) `glob.glob(directory)` didn't return any matching files. glob.glob() takes a
**pattern** as argument, for example: `'C:\data\Monthly_Data\File*.csv'`;
unfortunately you put a directory as a pattern, so no files are found:
`for f in glob.glob(directory):`
I modified the above 2 parts and printed `all_data`; the file contents display
on my console.
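Putting both fixes together, a sketch of the corrected loop plus a date-based outer join (the `DateTime` column name is taken from the sample files; the outer join keeps rows even where the files' date ranges don't overlap):
    import glob
    from functools import reduce
    import pandas as pd
    pattern = r'C:\data\Monthly_Data\*.csv'  # a pattern, not a bare directory
    frames = [pd.read_csv(f, parse_dates=['DateTime']) for f in glob.glob(pattern)]
    all_data = reduce(lambda l, r: pd.merge(l, r, on='DateTime', how='outer'), frames)
    print(all_data.describe())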
|
Flask & SQL Alchemy db.create_all() detect unicode returns: %r
Question: I'm trying to set up a sqlite database with Flask using SQLAlchemy according to
[the tutorial:](http://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-i-hello-world "The Flask Mega-Tutorial")
I get the following error when I try to run `db.create_all()`:
(venv)[username@md projectname]$ python
Python 2.7.9 (default, Jan 12 2015, 10:50:37)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-11)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from app import db
/home/username/flask-project/projectname/venv/lib/python2.7/site-packages/flask_sqlalchemy/__init__.py:800: UserWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True to suppress this warning.
warnings.warn('SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True to suppress this warning.')
>>> db.create_all()
/home/username/flask-project/projectname/venv/lib/python2.7/site-packages/sqlalchemy/engine/default.py:298: SAWarning: Exception attempting to detect unicode returns: OperationalError('(sqlite3.OperationalError) near "\xf1\x90\x81\x93\xf1\x90\x81\x8c\xf5\x80\x81\x83\xf0\xb0\x80\xa0\xf4\xb0\x81\x81\xfa\x80\x81\x94\xfd\x80\x80\xa7\xfc\xb0\x81\xa5\xf8\x80\x81\xb4\xfb\x80\x81\xb0\xfa\x90\x81\xa1\xf8\x80\x81\xae\xf9\x90\x81\xb2\xfd\x90\x81\xb4\xfb\xa0\x81\xb2\xf9\xb0\x81\xb3\xf0\x90\x80\xa0\xf8\x80\x81\x93\xf0\x90\x81\x96\xf0\xb0\x81\x92\xf0\x90\x81\x88\xfa\x80\x81\x92\xfc\x80\x80\xb6\xfa\x90\x80\xa9\xf0\x90\x80\xa0\xf8\x80\x81\x93\xfb\xa0\x81\xa1\xfb\xa0\x81\xaf\xfc\x90\x81\x9f\xfe\x90\x80\x80": syntax error',)
"detect unicode returns: %r" % de)
/home/username/flask-project/projectname/venv/lib/python2.7/site-packages/sqlalchemy/engine/default.py:298: SAWarning: Exception attempting to detect unicode returns: OperationalError('(sqlite3.OperationalError) near "\xf1\x90\x81\x93\xf1\x90\x81\x8c\xf5\x80\x81\x83\xf0\xb0\x80\xa0\xf4\xb0\x81\x81\xfa\x80\x81\x94\xfd\x80\x80\xa7\xfc\xb0\x81\xa5\xf8\x80\x81\xb4\xfb\xa0\x81\xb5\xf8\xb0\x81\xa9\xf9\x80\x81\xaf\xf8\x80\x81\xa5\xf9\x90\x81\xb2\xfd\x90\x81\xb4\xfb\xa0\x81\xb2\xf9\xb0\x81\xb3\xf0\x90\x80\xa0\xf8\x80\x81\x93\xf0\x90\x81\x96\xf0\xb0\x81\x92\xf0\x90\x81\x88\xfa\x80\x81\x92\xfc\x80\x80\xb6\xfa\x90\x80\xa9\xf0\x90\x80\xa0\xf8\x80\x81\x93\xfb\xa0\x81\xa1\xfb\xa0\x81\xaf\xfc\x90\x81\x9f\xfc\xa0\x80\x80": syntax error',)
"detect unicode returns: %r" % de)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/username/flask-project/projectname/venv/lib/python2.7/site-packages/flask_sqlalchemy/__init__.py", line 972, in create_all
self._execute_for_all_tables(app, bind, 'create_all')
File "/home/username/flask-project/projectname/venv/lib/python2.7/site-packages/flask_sqlalchemy/__init__.py", line 964, in _execute_for_all_tables
op(bind=self.get_engine(app, bind), **extra)
File "/home/username/flask-project/projectname/venv/lib/python2.7/site-packages/sqlalchemy/sql/schema.py", line 3695, in create_all
tables=tables)
File "/home/username/flask-project/projectname/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1855, in _run_visitor
with self._optional_conn_ctx_manager(connection) as conn:
File "/usr/local/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/home/username/flask-project/projectname/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1848, in _optional_conn_ctx_manager
with self.contextual_connect() as conn:
File "/home/username/flask-project/projectname/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 2039, in contextual_connect
self._wrap_pool_connect(self.pool.connect, None),
File "/home/username/flask-project/projectname/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 2078, in _wrap_pool_connect
e, dialect, self)
File "/home/username/flask-project/projectname/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1405, in _handle_dbapi_exception_noconnection
exc_info
File "/home/username/flask-project/projectname/venv/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 200, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/home/username/flask-project/projectname/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 2074, in _wrap_pool_connect
return fn()
File "/home/username/flask-project/projectname/venv/lib/python2.7/site-packages/sqlalchemy/pool.py", line 376, in connect
return _ConnectionFairy._checkout(self)
File "/home/username/flask-project/projectname/venv/lib/python2.7/site-packages/sqlalchemy/pool.py", line 713, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/home/username/flask-project/projectname/venv/lib/python2.7/site-packages/sqlalchemy/pool.py", line 480, in checkout
rec = pool._do_get()
File "/home/username/flask-project/projectname/venv/lib/python2.7/site-packages/sqlalchemy/pool.py", line 1151, in _do_get
return self._create_connection()
File "/home/username/flask-project/projectname/venv/lib/python2.7/site-packages/sqlalchemy/pool.py", line 323, in _create_connection
return _ConnectionRecord(self)
File "/home/username/flask-project/projectname/venv/lib/python2.7/site-packages/sqlalchemy/pool.py", line 454, in __init__
exec_once(self.connection, self)
File "/home/username/flask-project/projectname/venv/lib/python2.7/site-packages/sqlalchemy/event/attr.py", line 246, in exec_once
self(*args, **kw)
File "/home/username/flask-project/projectname/venv/lib/python2.7/site-packages/sqlalchemy/event/attr.py", line 256, in __call__
fn(*args, **kw)
File "/home/username/flask-project/projectname/venv/lib/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 1319, in go
return once_fn(*arg, **kw)
File "/home/username/flask-project/projectname/venv/lib/python2.7/site-packages/sqlalchemy/engine/strategies.py", line 165, in first_connect
dialect.initialize(c)
File "/home/username/flask-project/projectname/venv/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 256, in initialize
self._check_unicode_description(connection):
File "/home/username/flask-project/projectname/venv/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 343, in _check_unicode_description
]).compile(dialect=self)
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) near "???????3???¥": syntax error
>>>
Answer: You are going to have an issue with these characters:
???????3???¥
It is unclear based on what you've posted where those are, but it's likely in
the file you're using to build your database from. The characters in that file
need to be in a character encoding that can be interpreted.
|
Py2Exe, [Errno 2] No such file or directory: 'numpy-atlas.dll'
Question: I have included matplotlib in my program. I searched for numpy-atlas.dll on
google and I seem to be the only one on Earth with this problem.
# setup.py
from setuptools import setup
import py2exe
setup(console=['EulerMethod.py'])
# Running Py2Exe results in error
C:\(..omitted..)>python setup.py py2exe
running py2exe
*** searching for required modules ***
*** parsing results ***
......
...omitted...
......
*** finding dlls needed ***
error: [Errno 2] No such file or directory: 'numpy-atlas.dll'
Answer: This is what worked for me. I found the dll at
C:\Python27\Lib\site-packages\numpy\core\numpy-atlas.dll and copied it to the
same folder that has the setup.py.
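If you'd rather not copy the file by hand, a sketch of doing the same thing from setup.py via the standard `data_files` option (the DLL path is the one found above):
    # setup.py
    from setuptools import setup
    import py2exe
    setup(
        console=['EulerMethod.py'],
        data_files=[('.', [r'C:\Python27\Lib\site-packages\numpy\core\numpy-atlas.dll'])],
    )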
|
Self-reference of type annotations in Python
Question: I'm trying to figure out how self-reference of types works with [python3's type
annotations](https://docs.python.org/3/library/typing.html) \- the docs don't
specify anything regarding this.
As an example:
from typing import TypeVar, Optional, Generic
T = TypeVar('T')
class Node(Generic[T]):
left = None
right = None
value = None
def __init__(
self, value: Optional[T],
left: Optional[Node[T]]=None,
right: Optional[Node[T]]=None,
) -> None:
self.value = value
self.left = left
self.right = right
This code generates the error:
Traceback (most recent call last):
File "node.py", line 4, in <module>
class Node(Generic[T]):
File "node.py", line 12, in Node
right: Optional[Node[T]]=None,
NameError: name 'Node' is not defined
This is using Python 3.5.1
Answer: [PEP 0484 - Type Hints - The problem of forward
declarations](https://www.python.org/dev/peps/pep-0484/#the-problem-of-forward-declarations) addresses the issue:
> The problem with type hints is that annotations (per [PEP
> 3107](https://www.python.org/dev/peps/pep-3107/) , and similar to default
> values) are evaluated at the time a function is defined, and thus any names
> used in an annotation must be already defined when the function is being
> defined. A common scenario is a class definition whose methods need to
> reference the class itself in their annotations. (More general, it can also
> occur with mutually recursive classes.) This is natural for container types,
> for example:
>
> ...
>
> As written this will not work, because of the peculiarity in Python that
> class names become defined once the entire body of the class has been
> executed. **Our solution, which isn't particularly elegant, but gets the job
> done, is to allow using string literals in annotations.** Most of the time
> you won't have to use this though -- most uses of type hints are expected to
> reference builtin types or types defined in other modules.
from typing import TypeVar, Optional, Generic
T = TypeVar('T')
class Node(Generic[T]):
left = None
right = None
value = None
def __init__(
self,
value: Optional[T],
left: Optional['Node[T]']=None,
right: Optional['Node[T]']=None,
) -> None:
self.value = value
self.left = left
self.right = right
* * *
>>> import typing
>>> typing.get_type_hints(Node.__init__)
{'return': None,
'value': typing.Union[~T, NoneType],
'left': typing.Union[__main__.Node[~T], NoneType],
'right': typing.Union[__main__.Node[~T], NoneType]}
|
Does Scrapy crawl ALL links with Rules?
Question: Code source: <http://mherman.org/blog/2012/11/08/recursively-scraping-web-pages-with-scrapy/#rules> I'm new to python and scrapy. I searched for
recursive spiders and found this.
I have a few questions:
How does the follow work? Does it just take href links from a page and add them
to the request queue?
Which part of the web page does scrapy crawl from?
Does the code below scrape ALL links from a webpage?
Let's say I want to crawl and download every file from this website:
<http://downloads.trendnet.com/>
The way I would probably do it is to scrape every link on this website, check
each URL's content header, and download it if it is a file. Is this feasible?
Sorry if it is a bad question....
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from craigslist_sample.items import CraigslistSampleItem
class MySpider(CrawlSpider):
name = "craigs"
allowed_domains = ["sfbay.craigslist.org"]
start_urls = ["http://sfbay.craigslist.org/search/npo"]
rules = (
Rule(SgmlLinkExtractor(allow=(), restrict_xpaths=('//a[@class="button next"]',)), callback="parse_items", follow= True),
)
def parse_items(self, response):
hxs = HtmlXPathSelector(response)
titles = hxs.xpath('//span[@class="pl"]')
items = []
for titles in titles:
item = CraigslistSampleItem()
item["title"] = titles.xpath("a/text()").extract()
item["link"] = titles.xpath("a/@href").extract()
items.append(item)
return(items)
Answer: I think RTFM is really really applicable here, but to give you a short answer:
**With regards to the example given**
rules = (
Rule(SgmlLinkExtractor(allow=(), restrict_xpaths=('//a[@class="button next"]',)), callback="parse_items", follow= True),
)
You asked what it crawls. It only crawls what you set up under rules. That
means that your bot only crawls the next page each time. For each page it
finds, it does: callback = parse_items.
def parse_items(self, response):
hxs = HtmlXPathSelector(response)
titles = hxs.xpath('//span[@class="pl"]')
items = []
for titles in titles:
item = CraigslistSampleItem()
item["title"] = titles.xpath("a/text()").extract()
item["link"] = titles.xpath("a/@href").extract()
items.append(item)
return(items)
What parse_items does in this case, is check for entries in a list. You define
the list via the xpath (as you can see above with `titles =
hxs.xpath('//span[@class="pl"]')`). For each entry in the list (i.e. `for
titles in titles:`), it copies the text and the link into an item. It then
returns the items (aka it's done).
Parse_items is done for each page the crawler finds by following the next
button.
Under settings, you can include `DEPTH_LIMIT=3`. In this case, your
crawlspider will only crawl 3 levels deep.
**With regards to the site you posted:**
No, you don't need a crawlspider, since there are no multiple pages. A normal
base-spider is sufficient. A crawlspider however can work and I'll show some
bits below. Set the rules to restrict_xpaths=('//a',) and it'll follow all links
on the page.
Make sure your item.py contains all the necessary items. For example, below it
refers to item["link"]. In your item.py, make sure an item called link
(case-sensitive) is included, i.e. make sure the line -- link = Field() -- is there.
Under parse_items, do something like this:
def parse_items(self, response):
list = response.xpath('//a"')
items = []
for titles in list:
item = [INSERT WHATEVER YOU CALLED YOUR ITEM]
item["title"] = titles.xpath("/text()").extract()
item["link"] = titles.xpath("/@href").extract()
if ".pdf" in item["link"]:
SEE COMMENT BELOW
return(items)
The last bit you need to do is check how the item-pipeline works. It uses
file_urls etc. in your item.
|
How to deal with array of variable size while creating a HDF5 Dataset in Python?
Question: How do I create an HDF5 dataset when the size of one dimension of a
multidimensional array is not fixed? I tried the following toy code, but it
seems that I am missing some point here.
import numpy as np
import h5py
Polyline=h5py.special_dtype(vlen=np.float32)
f=h5py.File('dataset.hdf5', mode='w')
var_features=f.create_dataset('var_features', (10,), dtype=Polyline )
features = np.empty(shape=(10,), dtype=Polyline)
for i in range(10):
a=10+i*2
features[i]=np.arange(a).reshape(a/2,2)
var_features[...]=features
print features[0].shape
print var_features[0].shape
Answer: It's quite simple: just create the dataset with a `maxshape` argument containing
one or more `None` values.
Something like this:
import h5py
import numpy as np
fff = h5py.File('test1.h5','w')
fff.create_dataset('test_resize',(100,100),maxshape=(None,None),chunks=(10,10))
fff['test_resize'][:] = np.random.random((100,100))
fff.flush()
fff['test_resize'].resize((150,100))
fff['test_resize'][100:150,:] = np.ones((50,100))
fff.close()
|
Split python dictionary to result in all combinations of values
Question:
my_dict = {'a':[1,2], 'b':[3], 'c':{'d':[4,5], 'e':[6,7]}}
I need to derive all the combinations out of it as below.
{'a':1, 'b':3, 'c':{'d':4, 'e':6}}
{'a':1, 'b':3, 'c':{'d':4, 'e':7}}
{'a':1, 'b':3, 'c':{'d':5, 'e':6}}
{'a':1, 'b':3, 'c':{'d':5, 'e':7}}
{'a':2, 'b':3, 'c':{'d':4, 'e':6}}
and so on. There could be any level of nesting here.
Please let me know how to achieve this.
Something that I tried is pasted below, but it definitely was getting nowhere:
def gen_combinations(data):
my_list =[]
if isinstance(data, dict):
for k, v in data.iteritems():
if isinstance(v, dict):
gen_combinations(v)
elif isinstance(v, list):
for i in range(len(v)):
temp_dict = data.copy()
temp_dict[k] = v[i]
print temp_dict
my_dict = {'a':[1,2], 'b':[3], 'c':{'d':[4,5], 'e':[6,7]}}
gen_combinations(my_dict)
Which resulted in
{'a': 1, 'c': {'e': [6, 7], 'd': [4, 5]}, 'b': [3]}
{'a': 2, 'c': {'e': [6, 7], 'd': [4, 5]}, 'b': [3]}
{'e': 6, 'd': [4, 5]}
{'e': 7, 'd': [4, 5]}
{'e': [6, 7], 'd': 4}
{'e': [6, 7], 'd': 5}
{'a': [1, 2], 'c': {'e': [6, 7], 'd': [4, 5]}, 'b': 3}
Answer:
from itertools import product
my_dict = {'a':[1,2], 'b':[3], 'c':{'d':[4,5], 'e':[6,7]}}
def process(d):
to_product = [] # [[('a', 1), ('a', 2)], [('b', 3),], ...]
for k, v in d.items():
if isinstance(v, list):
to_product.append([(k, i) for i in v])
elif isinstance(v, dict):
to_product.append([(k, i) for i in process(v)])
else:
to_product.append([(k, v)])
return [dict(l) for l in product(*to_product)]
for i in process(my_dict):
print(i)
Output:
{'a': 1, 'b': 3, 'c': {'e': 6, 'd': 4}}
{'a': 2, 'b': 3, 'c': {'e': 6, 'd': 4}}
{'a': 1, 'b': 3, 'c': {'e': 6, 'd': 5}}
{'a': 2, 'b': 3, 'c': {'e': 6, 'd': 5}}
{'a': 1, 'b': 3, 'c': {'e': 7, 'd': 4}}
{'a': 2, 'b': 3, 'c': {'e': 7, 'd': 4}}
{'a': 1, 'b': 3, 'c': {'e': 7, 'd': 5}}
{'a': 2, 'b': 3, 'c': {'e': 7, 'd': 5}}
**Upd:**
Code that works as asked
[here](http://stackoverflow.com/questions/36198540/split-python-dictionary-to-result-in-all-combinations-of-values?noredirect=1#comment60043432_36198540):
from itertools import product
my_dict = {'a':[1,2], 'e':[7], 'f':{'x':[{'a':[3,5]}, {'a':[4]}] } }
def process(d):
to_product = [] # [[('a', 1), ('a', 2)], [('b', 3),], ...]
for k, v in d.items():
if isinstance(v, list) and all(isinstance(i, dict) for i in v):
# specific case: a list of dicts is processed differently...
c = product(*list(process(i) for i in v))
to_product.append([(k, list(l)) for l in c])
elif isinstance(v, list):
to_product.append([(k, i) for i in v])
elif isinstance(v, dict):
to_product.append([(k, i) for i in process(v)])
else:
to_product.append([(k, v)])
return [dict(l) for l in product(*to_product)]
for i in process(my_dict):
print(i)
Output:
{'f': {'x': [{'a': 3}, {'a': 4}]}, 'a': 1, 'e': 7}
{'f': {'x': [{'a': 3}, {'a': 4}]}, 'a': 2, 'e': 7}
{'f': {'x': [{'a': 5}, {'a': 4}]}, 'a': 1, 'e': 7}
{'f': {'x': [{'a': 5}, {'a': 4}]}, 'a': 2, 'e': 7}
|
python prettytable module raise Could not determine delimiter error for valid csv file
Question: I'm trying to use the prettytable module to print out data from a csv file,
but it failed with the following exception:
> Could not determine delimiter error for valid csv file
>>> import prettytable
>>> with file("/tmp/test.csv") as f:
... prettytable.from_csv(f)
...
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "build/bdist.linux-x86_64/egg/prettytable.py", line 1337, in from_csv
File "/usr/lib/python2.7/csv.py", line 188, in sniff
raise Error, "Could not determine delimiter"
_csv.Error: Could not determine delimiter
The CSV file:
input_gps,1424185824460,1424185902788,1424185939525,1424186019313,1424186058952,1424186133797,1424186168766,1424186170214,1424186246354,1424186298434,1424186376789,1424186413625,1424186491453,1424186606143,1424186719394,1424186756366,1424186835829,1424186948532,1424187107293,1424187215557,1424187250693,1424187323097,1424187358989,1424187465475,1424187475824,1424187476738,1424187548602,1424187549228,1424187550690,1424187582866,1424187584248,1424187639923,1424187641623,1424187774567,1424187776418,1424187810376,1424187820238,1424187820998,1424187916896,1424187917472,1424187919241,1424188048340,dummy-0,dummy-1,Total
-73.958315%2C 40.815569,0.0(nan%),1.0(100%),1.0(100%),0.0(nan%),1.0(100%),1.0(100%),0.0(nan%),1.0(100%),1.0(100%),0.0(nan%),1.0(100%),1.0(100%),0.0(nan%),1.0(100%),1.0(100%),1.0(100%),1.0(100%),1.0(100%),0.0(0%),0.0(nan%),0.0(0%),0.0(nan%),0.0(0%),0.0(nan%),0.0(nan%),0.0(0%),0.0(0%),0.0(0%),0.0(0%),0.0(0%),0.0(0%),0.0(0%),0.0(0%),0.0(nan%),0.0(0%),0.0(0%),0.0(nan%),0.0(0%),0.0(0%),0.0(nan%),0.0(0%),0.0(0%),0.0(nan%),0.0(0%),13.0 (42%)
-76.932984%2C 38.992186,0.0(nan%),0.0(0%),0.0(0%),0.0(nan%),0.0(0%),0.0(0%),0.0(nan%),0.0(0%),0.0(0%),0.0(nan%),0.0(0%),0.0(0%),0.0(nan%),0.0(0%),0.0(0%),0.0(0%),0.0(0%),0.0(0%),1.0(100%),0.0(nan%),1.0(100%),0.0(nan%),1.0(100%),0.0(nan%),0.0(nan%),1.0(100%),1.0(100%),1.0(100%),1.0(100%),1.0(100%),1.0(100%),1.0(100%),1.0(100%),0.0(nan%),1.0(100%),1.0(100%),0.0(nan%),1.0(100%),1.0(100%),0.0(nan%),1.0(100%),1.0(100%),0.0(nan%),0.0(0%),17.0 (55%)
null_input-0,0.0(nan%),0.0(0%),0.0(0%),0.0(nan%),0.0(0%),0.0(0%),0.0(nan%),0.0(0%),0.0(0%),0.0(nan%),0.0(0%),0.0(0%),0.0(nan%),0.0(0%),0.0(0%),0.0(0%),0.0(0%),0.0(0%),0.0(0%),0.0(nan%),0.0(0%),0.0(nan%),0.0(0%),0.0(nan%),0.0(nan%),0.0(0%),0.0(0%),0.0(0%),0.0(0%),0.0(0%),0.0(0%),0.0(0%),0.0(0%),0.0(nan%),0.0(0%),0.0(0%),0.0(nan%),0.0(0%),0.0(0%),0.0(nan%),0.0(0%),0.0(0%),0.0(nan%),0.0(0%),0.0 (0%)
null_input-1,0.0(nan%),0.0(0%),0.0(0%),0.0(nan%),0.0(0%),0.0(0%),0.0(nan%),0.0(0%),0.0(0%),0.0(nan%),0.0(0%),0.0(0%),0.0(nan%),0.0(0%),0.0(0%),0.0(0%),0.0(0%),0.0(0%),0.0(0%),0.0(nan%),0.0(0%),0.0(nan%),0.0(0%),0.0(nan%),0.0(nan%),0.0(0%),0.0(0%),0.0(0%),0.0(0%),0.0(0%),0.0(0%),0.0(0%),0.0(0%),0.0(nan%),0.0(0%),0.0(0%),0.0(nan%),0.0(0%),0.0(0%),0.0(nan%),0.0(0%),0.0(0%),0.0(nan%),1.0(100%),1.0 (3%)
Total,0.0(0%),1.0(3%),1.0(3%),0.0(0%),1.0(3%),1.0(3%),0.0(0%),1.0(3%),1.0(3%),0.0(0%),1.0(3%),1.0(3%),0.0(0%),1.0(3%),1.0(3%),1.0(3%),1.0(3%),1.0(3%),1.0(3%),0.0(0%),1.0(3%),0.0(0%),1.0(3%),0.0(0%),0.0(0%),1.0(3%),1.0(3%),1.0(3%),1.0(3%),1.0(3%),1.0(3%),1.0(3%),1.0(3%),0.0(0%),1.0(3%),1.0(3%),0.0(0%),1.0(3%),1.0(3%),0.0(0%),1.0(3%),1.0(3%),0.0(0%),1.0(3%),31.0(100%)
If anyone can inform me how to work around the problem, or of other
alternatives, it would be very helpful.
Answer: According to pypi, prettytable is only alpha level. I could not find where you
could give it the configuration to pass to the csv module. So in that case, you
probably should read the csv file by explicitly declaring the delimiter, and
build the PrettyTable line by line:
import csv
from prettytable import PrettyTable
pt = None # to avoid it vanishing at the end of the block...
with open('/tmp/test.csv') as fd:
    rd = csv.reader(fd, delimiter = ',')
    pt = PrettyTable(next(rd))
    for row in rd:
        pt.add_row(row)
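Printing the assembled table is then just:
    print pt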
|
TypeError: argument of type 'WindowsPath' is not iterable - in open of pdf file with python
Question: Good day,
I want to open the pdf files that have a specific name from a directory.
These file names are provided by a csv input file, in the second column.
I tried the following code, but I received an error message:
> TypeError: argument of type 'WindowsPath' is not iterable
How can I solve this problem so the pdf files are opened according to the
input file?
And another issue: how can I handle it if the input name is not an exact match
with the pdf title, but I still want to open the file that contains the input
name?
import csv
import os
from pathlib import *
dir_path = Path('D:\\path\\pdf files')
pdf_files = dir_path.glob('*.pdf')
file1=open('INPUT.csv','r')
reader=csv.reader(file1,delimiter=';')
for pdf_file in pdf_files:
for item in reader:
file_name=item[1]
print(file_name)#just to see the file name that I want to open
if file_name in pdf_file:
os.startfile("%s"%(pdf_file))
file1.close()
Thank you in advance!
Answer: The problem is in the line `if file_name in pdf_file`: `pdf_file` is not a
string but an instance of `pathlib.Path`; use
[name](https://docs.python.org/3/library/pathlib.html#pathlib.PurePath.name)
to get the file name as a string:
if file_name == pdf_file.name
In case you want to check whether `file_name` without its extension is contained
in the `pdf_file` name:
    file_name.split('.')[-2] in pdf_file.name # ('example' in 'some_example.pdf') == True
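A sketch of the corrected loop, reusing the question's imports and variables (assumption: the CSV rows are read into a list up front, since a `csv.reader` can only be iterated once):
    with open('INPUT.csv') as f:
        names = [row[1] for row in csv.reader(f, delimiter=';')]
    for pdf_file in pdf_files:
        # substring match, so 'report' also opens 'monthly_report.pdf'
        if any(name in pdf_file.name for name in names):
            os.startfile(str(pdf_file))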
|
Raspberry Pi SMBus support combined data transmission?
Question: I am trying to use the ACS764 Hall effect current sensor with Raspberry Pi.
This sensor senses the current and returns its value via the chip's built-in
I2C interface. I connected the circuit according to the specification. In my
Raspberry Pi Python code I can write and read data to/from the sensor; however,
the data I read is always the same value.
Below is my simple code to read the sensor:
import datetime
import smbus
import time
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BOARD)
GPIO.setup(37, GPIO.OUT) #Connected to the ACS764 Freeze pin
bus=smbus.SMBus(1)
#Freeze the data
GPIO.output(37, True)
#Read the values
bus.write_byte(0x60, 0x00) #Simulate the combined data transmission format
data=bus.read_i2c_block_data(0x60, 0x00)
print data
#Unfreeze the data
GPIO.output(37, False)
GPIO.cleanup()
However, when I run the script the value always shows the same, even though I
changed the current being sensed.
pi@Raspberry:~ $ python i2cAcs764.py
[0, 0, 3, 0, 0, 3, 0, 0, 3, 0, 0, 3, 0, 0, 3, 0, 0, 3, 0, 0, 3, 0, 0, 3, 0, 0, 3, 0, 0, 3, 0, 0]
According to the ACS764 specification, to read the sensor value I need to use
the "combined data transmission" format. However, I didn't find any function
in the Python SMBus library that allows me to use combined data transmission,
therefore at the moment I use the "bus.write_byte" function to simulate the
"combined data transmission". Below is a screen capture of the
specification.
[ACS764 Datasheet Snapshot](http://i.stack.imgur.com/fbno9.png)
My question now is how can I use the Python SMBus I2C library to perform the
"combined data transmission" reading of the ACS764 chip?
Please advise, thank you.
Answer: After googling for a few days I finally found a solution to my question above.
The answer is that the Raspberry Pi I2C interface does support "combined data
transmission" (aka repeated start) but it is not enabled by default. You need
to enable the setting with the following commands:
sudo su -
echo -n 1 > /sys/module/i2c_bcm2708/parameters/combined
exit
Please refer to [i2c repeated start
transactions](https://www.raspberrypi.org/forums/viewtopic.php?f=44&t=15840&start=25)
for more information.
Based on the SMBus specification, the function that supports repeated start is
i2c_smbus_read_i2c_block_data(); in the Python library it is called
read_i2c_block_data().
Please refer to the [SMBus Protocol
Summary](http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/plain/Documentation/i2c/smbus-protocol)
for more details.
Below is my sample code that reads data from the ACS764 Hall effect sensor
chip, which requires repeated start:
import datetime
import smbus
import time
bus=smbus.SMBus(1)
# Write setting parameter to the chip
data = [0x02, 0x02, 0x02]
bus.write_i2c_block_data(0x60, 0x04, data)
# Read the data out of the chip that require Repeated Start
data=bus.read_i2c_block_data(0x60, 0x00)
print data
I was happy to find the solution and hope that those who face the same issue
can get help from this post. Thank you all!
|
How to access python objects with a dynamic object name?
Question: I have a question about one of my python scripts. I'm using the library untangle
(<https://github.com/stchris/untangle>) to import and convert xml config files
in the main script.
The problem: I have user information in the config file for more than one
user, and I'm using this information in a loop. It works very well, but it
makes the script very ugly due to the names of the objects generated from the
xml file.
Concretely, this means I can't (or I just don't know how to) change the name of
the object I would like to use dynamically.
The example code is below:
if employee == 0:
if str(configobj.config.modes.employee.employee_1.name.cdata) != '':
display.drawtext(0,0,str(configobj.config.modes.employee.employee_1.name.cdata),"7x13B",255,255,255,True)
if str(configobj.config.modes.employee.employee_1.line1.cdata) != '':
display.drawtext(int(configobj.config.modes.employee.employee_1.line1['x']),
int(configobj.config.modes.employee.employee_1.line1['y']),
if str(configobj.config.modes.employee.employee_1.line2.cdata) != '':
display.drawtext(int(configobj.config.modes.employee.employee_1.line2['x']),
int(configobj.config.modes.employee.employee_1.line2['y']),
if str(configobj.config.modes.employee.employee_1.line3.cdata) != '':
display.drawtext(int(configobj.config.modes.employee.employee_1.line3['x']),
int(configobj.config.modes.employee.employee_1.line3['y']))
displayimage = True
elif employee == 1:
if str(configobj.config.modes.employee.employee_2.name.cdata) != '':
display.drawtext(0,0,str(configobj.config.modes.employee.employee_2.name.cdata),"7x13B",255,255,255,True)
if str(configobj.config.modes.employee.employee_2.line1.cdata) != '':
display.drawtext(int(configobj.config.modes.employee.employee_2.line1['x']),
int(configobj.config.modes.employee.employee_2.line1['y']),
if str(configobj.config.modes.employee.employee_2.line2.cdata) != '':
display.drawtext(int(configobj.config.modes.employee.employee_2.line2['x']),
int(configobj.config.modes.employee.employee_2.line2['y']),
if str(configobj.config.modes.employee.employee_2.line3.cdata) != '':
display.drawtext(int(configobj.config.modes.employee.employee_2.line3['x']),
int(configobj.config.modes.employee.employee_2.line3['y']),
if str(configobj.config.modes.employee.employee_2.image.cdata) != '':
display.showimage(160,0,str(configobj.config.modes.employee.employee_2.image.cdata))
displayimage = True
And this is a lot of repeated code with a changing number. How can I improve
this?
Answer: Use [getattr](https://docs.python.org/3.5/library/functions.html#getattr):
getattr(configobj.config.modes.employee, 'employee_' + str(employee + 1)).name.cdata
You can also create a separate variable for the employee:
employee = getattr(configobj.config.modes.employee, 'employee_' + str(employee + 1))
print(employee.name.cdata)
print(employee.line1['x'])
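And since the `line1`/`line2`/`line3` blocks only differ by name, you can reach them with `getattr` too (a sketch; the full `drawtext` argument lists from your snippet are abbreviated here):
    emp = getattr(configobj.config.modes.employee, 'employee_' + str(employee + 1))
    for attr in ('line1', 'line2', 'line3'):
        node = getattr(emp, attr)
        if str(node.cdata) != '':
            # pass node['x'], node['y'] plus whatever other arguments drawtext takes
            display.drawtext(int(node['x']), int(node['y']))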
|
Catching Keyboard Interrupt with Raw Input
Question: I have a bit of python code to try and make raw_input catch keyboard
interrupts. If I run the code in this function it works perfectly fine. But if
I run it in my program, the print statement is never made, indicating that the
keyboard interrupt is not caught. The program attempts to exit and fails until
it escalates to SIGKILL, which of course works fine. My guess is that somewhere
else the keyboard interrupt is being caught, preventing the exception from
running at all. My question is, where would such an interrupt likely occur,
and how can I prevent it from blocking this one? My plan has been to add a
slight delay between the program catching a keyboard interrupt and killing
itself, to give the except clause here a moment to catch it.
Any ideas appreciated
Thanks!
import sys
def interruptable_input(text=''):
'''Takes raw input, but accepts keyboard interrupt'''
try:
return raw_input(text)
except KeyboardInterrupt:
print "Interrupted by user"
sys.exit()
Answer: I have determined the reason for my issue was another interrupt handler
killing the script before the KeyboardInterrupt was hit. I solved it by
setting my own interrupt handler for signal.SIGINT like so:
    import sys
    import signal
    import rospy  # used for the logging below; replace with print if you are not in a ROS node

    def signal_term_handler(signal, frame):
        '''Handles KeyboardInterrupts to ensure smooth exit'''
        rospy.logerr('User Keyboard interrupt')
        sys.exit(0)

    signal.signal(signal.SIGINT, signal_term_handler)  # register after the handler is defined
it's slightly less direct but it gets the job done. Now raw_input() will
simply die when told to.
|
python, Storing and Reading varying dictionary size information in a csv file
Question: I have implemented a python dictionary which holds SQL queries & their results.
    import time

    logtime = time.strftime("%d.%m.%Y")
sqlDict = { 'time':logtime,
'Q1' : 50,
'Q2' : 15,
'Q3' : 20,
'Q4' : 10,
'Q5' : 30,
}
Each day, the results are written to a CSV file in dictionary format. Note:
Python dictionaries are not ordered, so the columns in each row may vary when
additional queries (e.g. Q7, Q8, Q9...) are added to the dictionary.
('Q1', 25);('Q3', 23);('Q2', 15);('Q5', 320);('Q4', 130);('time', '20.03.2016')
('Q1', 35);('Q2', 21);('Q3', 12);('Q5', 30);('Q4', 10);('time', '21.03.2016')
('Q4', 22);('Q3', 27);('Q2', 15);('Q5', 30);('Q1', 10);('time', '22.03.2016')
When a new SQL query is added to the dictionary, the additional information
is also saved in the same csv file.
So, e.g. with the addition of Q7, the dictionary looks like
sqlDict = { 'time':logtime,
'Q1' : 50,
'Q2' : 15,
'Q3' : 20,
'Q4' : 10,
'Q5' : 30,
'Q7' : 5,
}
and the csv file will look like
('Q1', 25);('Q3', 23);('Q2', 15);('Q5', 320);('Q4', 130);('time', '20.03.2016')
('Q1', 35);('Q2', 21);('Q3', 12);('Q5', 30);('Q4', 10);('time', '21.03.2016')
('Q4', 22);('Q3', 27);('Q2', 15);('Q5', 30);('Q1', 10);('time', '22.03.2016')
('Q1', 50);('Q3', 20);('Q2', 15);('Q5', 30);('Q4', 10);('time', '23.03.2016');('Q7', 5)
I need to plot all the information available in the csv, i.e. a time vs.
value plot for all SQL keys.
The csv file does not hold a regular pattern. In the end, I would like to plot
a graph with all available Qs and their corresponding values. Where a Q is
missing in a row, the program should assume the value 0 for that date.
Answer: You just need to process your csv. Every cell carries its own query label,
so the format is easy to parse.
    import csv

    with open("file.csv", "r") as f:
        spamreader = csv.reader(f, delimiter=";")
        for row in spamreader:
            for cell in row:
                query, result = cell.strip('(').strip(')').split(", ")
                if query.strip("'") != "time":
                    pass  # process it: query is 'QX', result is the number (still a string)
                else:
                    pass  # query is 'time', result is the date
One thing that will bother you is that everything is read back as strings, so
you have to split on the comma and strip the '(' and ')'.
For example:
    query, result = row[x].strip('(').strip(')').split(", ")
then query = 'Q2' and result = '15' (both are still strings, so convert with int() where needed).
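To get from the parsed cells to the plot, you can collect one list per query, padding with 0 whenever a query is missing on a given date (a sketch, assuming matplotlib is available; the file name is a placeholder):
    import csv
    from collections import defaultdict
    import matplotlib.pyplot as plt

    dates = []
    series = defaultdict(list)

    with open("file.csv", "r") as f:
        for row in csv.reader(f, delimiter=";"):
            entries = {}
            for cell in row:
                query, result = cell.strip('(').strip(')').split(", ")
                entries[query.strip("'")] = result.strip("'")
            dates.append(entries.pop("time"))
            # pad every query seen so far, using 0 where it is missing on this date
            for q in set(series) | set(entries):
                vals = series[q]
                vals.extend([0] * (len(dates) - 1 - len(vals)))  # backfill queries added later
                vals.append(int(entries.get(q, 0)))

    x = range(len(dates))
    for q, vals in sorted(series.items()):
        plt.plot(x, vals, label=q)
    plt.xticks(x, dates, rotation=45)
    plt.legend()
    plt.show()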
|
Python send control + Q then control + A (special keys)
Question: I need to send some special keystrokes and am unsure of how to do it.
I need to send `Ctrl`+`Q` followed by `Ctrl`+`A` to a terminal (I'm
using Paramiko).
I have tried
shell = client.invoke_shell()
shell.send(chr(10))
time.sleep(5)
shell.send(chr(13))
shell.send('\x11')
shell.send('\x01')
print 'i tried'
I can see the two returns go in successfully, but then nothing happens; it
doesn't quit picocom (also note I have it the wrong way round above: it's
expecting Ctrl+A, then Ctrl+Q).
If it helps, this is the device:
<http://www.cisco.com/c/en/us/td/docs/routers/access/interfaces/eesm/software/configuration/guide/4451_config.html#pgfId-1069760>
as you can see at step 2
Step 2 Exit the session from the switch, press Ctrl-a and Ctrl-q from your keyboard:
Switch# <type ^a^q>
Thanks for using picocom
Router#
UPDATE:
I have tried \x01\x16\x11\n but this returns
Switch#
Switch#
*** baud: 9600
*** flow: none
*** parity: none
*** databits: 8
*** dtr: down
Switch#
This looks like it could be another special command?
Answer: Just as an assumption: maybe a pseudoterminal would help
import paramiko
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(...)
channel = client.get_transport().open_session()
channel.get_pty()
channel.settimeout(5)
channel.exec_command('\x11\x01')
|
Python: simplifying code by writing it in a more Pandas specific way
Question: I wrote some code that finds the distance between GPS coordinates for
machines having the same serial numbers, looking at:
* [Fast (but not very accurate) Method for Finding Distance between 2 Points using Python and Pandas](http://stackoverflow.com/questions/29545704/fast-but-not-very-accurate-method-for-finding-distance-between-2-points-using)
But I believe it will be more efficient if it can be simplified to using
`iterrows` or `df.apply`; however, I cannot seem to figure it out.
Since I need to only execute the function when `ser_no[i] == ser_no[i+1]` and
insert a `NaN` value at the location where the ser_no changes, I cannot seem
to apply the Pandas methodology to make the code more efficient. I have looked
at:
* [using haversine formula with data stored in a pandas dataframe](http://stackoverflow.com/questions/25767596/using-haversine-formula-with-data-stored-in-a-pandas-dataframe)
* [Python function to calculate distance using haversine formula in pandas](http://stackoverflow.com/questions/34510749/python-function-to-calculate-distance-using-haversine-formula-in-pandas)
* [Vectorizing a function in pandas](http://stackoverflow.com/questions/27575854/vectorizing-a-function-in-pandas)
Unfortunately, I don't readily see the leap I need to make even after looking
over these posts.
> What I have:
def haversine(lat1, long1, lat2, long2):
r = 6371 # radius of Earth in km
# convert degrees to radians
lat1, long1, lat2, long2 = map(np.radians, [lat1, long1, lat2, long2])
# haversine formula
lat = lat2 - lat1
lon = long2 - long1
a = np.sin(lat/2)**2 + np.cos(lat1)*np.cos(lat2)*np.sin(lon/2)**2
c = 2*np.arcsin(np.sqrt(a))
d = r*c
return d
# pre-allocate vector
hdist = np.zeros(len(mttt_pings.index), dtype = float)
# haversine loop calculation
for i in range(0, len(mttt_pings.index) - 1):
'''
when the ser_no from i and i + 1 are the same calculate the distance
between them using the haversine formula and put the distance in the
i + 1 location
'''
if mttt_pings.ser_no.loc[i] == mttt_pings.ser_no[i + 1]:
hdist[i + 1] = haversine(mttt_pings.EQP_GPS_SPEC_LAT_CORD[i], \
mttt_pings.EQP_GPS_SPEC_LONG_CORD[i], \
mttt_pings.EQP_GPS_SPEC_LAT_CORD[i + 1], \
mttt_pings.EQP_GPS_SPEC_LONG_CORD[i + 1])
else:
hdist = np.insert(hdist, i, np.nan)
'''
when ser_no i and i + 1 are not the same, insert NaN at the ith location
'''
Answer: The main idea is to utilize `shift` to check consecutive rows. I'm also
writing a `get_dist` function that just wraps your existing distance function,
to make things more readable when I use `apply` to compute distances.
def get_dist(row):
lat1 = row['EQP_GPS_SPEC_LAT_CORD']
long1 = row['EQP_GPS_SPEC_LONG_CORD']
lat2 = row['EQP_GPS_SPEC_LAT_CORD_2']
long2 = row['EQP_GPS_SPEC_LONG_CORD_2']
return haversine(lat1, long1, lat2, long2)
# Find consecutive rows with matching ser_no, and get coordinates.
coord_cols = ['EQP_GPS_SPEC_LAT_CORD', 'EQP_GPS_SPEC_LONG_CORD']
matching_ser = mttt_pings['ser_no'] == mttt_pings['ser_no'].shift(1)
shift_coords = mttt_pings.shift(1).loc[matching_ser, coord_cols]
# Join shifted coordinates and compute distances.
mttt_pings_shift = mttt_pings.join(shift_coords, how='inner', rsuffix='_2')
mttt_pings['hdist'] = mttt_pings_shift.apply(get_dist, axis=1)
In the above code, I've added the distances to your dataframe. If you want to
get the result as a numpy array, you can do:
hdist = mttt_pings['hdist'].values
As a side note, you may want to consider using
[`geopy.distance.vincenty`](http://geopy.readthedocs.org/en/latest/#geopy.distance.vincenty)
to compute distances between lat/long coordinates. In general, `vincenty` is
more accurate than `haversine`, although it may take longer to compute. Very
minor modifications to the `get_dist` function are required to use `vincenty`.
from geopy.distance import vincenty
def get_dist(row):
lat1 = row['EQP_GPS_SPEC_LAT_CORD']
long1 = row['EQP_GPS_SPEC_LONG_CORD']
lat2 = row['EQP_GPS_SPEC_LAT_CORD_2']
long2 = row['EQP_GPS_SPEC_LONG_CORD_2']
return vincenty((lat1, long1), (lat2, long2)).km
|
python & pandas: iterating over DataFrame twice
Question: Doing a mahalanobis calculation for each row of a DataFrame with distances to
every other row in the DataFrame. It kind of looks like this:
import pandas as pd
from scipy import linalg
from scipy.spatial.distance import mahalanobis
from pprint import pprint
testa = { 'pid': 'testa', 'a': 25, 'b': .455, 'c': .375 }
testb = { 'pid': 'testb', 'a': 22, 'b': .422, 'c': .402 }
testc = { 'pid': 'testc', 'a': 11, 'b': .389, 'c': .391 }
cats = ['a','b','c']
pids = pd.DataFrame([ testa, testb, testc ])
inverse = linalg.inv(pids[cats].cov().values)
distances = { pid: {} for pid in pids['pid'].tolist() }
for i, p in pids.iterrows():
pid = p['pid']
others = pids.loc[pids['pid'] != pid]
for x, other in others.iterrows():
otherpid = other['pid']
d = mahalanobis(p[cats], other[cats], inverse) ** 2
distances[pid][otherpid] = d
pprint(distances)
It works fine for the three test cases here, but in real life it's going to
run against around 2000-3000 rows, and using this approach takes too long. I'm
relatively new to pandas and I really prefer python to R, so I'd like to have
this cleaned up.
How can I make this more efficient?
Answer: > Doing a mahalanobis calculation for each row of a DataFrame with distances
> to every other row in the DataFrame.
This is basically addressed in
[`sklearn.metrics.pairwise.pairwise_distances`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.pairwise_distances.html),
so it's doubtful that it's possible to do it more efficiently by hand. In this
case, therefore, how about
from sklearn import metrics
>>> metrics.pairwise.pairwise_distances(
pids[['a', 'b', 'c']].as_matrix(),
metric='mahalanobis')
array([[ 0. , 2.15290501, 3.54499647],
[ 2.15290501, 0. , 2.62516666],
[ 3.54499647, 2.62516666, 0. ]])
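By default the mahalanobis metric estimates the inverse covariance from the data you pass in; if you want to keep using your precomputed `inverse`, it can be forwarded as a metric keyword (a sketch; `VI` is the scipy parameter name that `pairwise_distances` passes through):
    >>> metrics.pairwise.pairwise_distances(
            pids[['a', 'b', 'c']].as_matrix(),
            metric='mahalanobis',
            VI=inverse)
Also note that your loop squared each distance (`** 2`), while these calls return the plain distance; square the result if you need the squared form.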
|
How to detect lines accurately using HoughLines transform in openCV python?
Question: I am a newbie in both `python` and **`opencv`** and I am facing a problem in
detecting lines in the following image, which has strips of black lines laid
on the ground:
[](http://i.stack.imgur.com/aMHlL.jpg)
I used the following code:
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray,50,150,apertureSize = 3)
print img.shape[1]
print img.shape
minLineLength = img.shape[1]-1
maxLineGap = 10
lines = cv2.HoughLinesP(edges,1,np.pi/180,100,minLineLength,maxLineGap)
for x1,y1,x2,y2 in lines[0]:
cv2.line(img,(x1,y1),(x2,y2),(0,255,0),2)
but it is unable to detect the lines accurately and only draws a green line on
the first black strip from the bottom, which does not even cover the entire
line.
Also, please suggest a way of obtaining the **`y`** coordinates of each line.
Answer: Sanj,
a modified version of your code which detects not one but many Hough lines is
shown below. I have improved how the code loops through the lines array so
that you get many more line segments.
You can further tune the parameters, however, I think that the contour
approach in your other post will most likely be the better approach to solve
your task, as shown there: [How to detect horizontal lines in an image and
obtain its y-coordinates using python and
opencv?](http://stackoverflow.com/questions/36210615/how-to-detect-horizontal-lines-in-an-image-and-obtain-its-y-coordinates-using-py)
import numpy as np
import cv2
img = cv2.imread('lines.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray,50,150,apertureSize = 3)
print img.shape[1]
print img.shape
minLineLength=img.shape[1]-300
lines = cv2.HoughLinesP(image=edges,rho=0.02,theta=np.pi/500, threshold=10,lines=np.array([]), minLineLength=minLineLength,maxLineGap=100)
a,b,c = lines.shape
for i in range(a):
cv2.line(img, (lines[i][0][0], lines[i][0][1]), (lines[i][0][2], lines[i][0][3]), (0, 0, 255), 3, cv2.LINE_AA)
cv2.imshow('edges', edges)
cv2.imshow('result', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
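For the y-coordinates you asked about, each row of `lines` holds a segment's endpoints as `[x1, y1, x2, y2]`, so you can collect them straight from the array (a short sketch following the loop above):
    y_coords = sorted(set(lines[i][0][1] for i in range(a)))
    print y_coords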
|
Move from an array of 'labels' to array of equations corresponding to those labels in Python 2.7
Question: How can I **efficiently** move from an array of essentially labels to
equations corresponding to those labels in python 2.7?
This image illustrates what I mean:

The equations pull values from array x, so "xn" in the image in python terms
would be x[n]. As an example, the 0th entry in the label array is "a" which
corresponds to the equation 1+xn which would be 1+x[0]. The next is "b" which
is x[1]*2.
There are plenty of ways to accomplish this, but I want to focus on
efficiency. The actual arrays have thousands of elements, and this operation
is performed thousands of times (each timestep in my model). The x array will
be different at each timestep. What I am actually doing is building the
diagonals for a tridiagonal matrix based on various boundary conditions.
Can anyone offer some insight better than iterating through the whole array
each timestep with a switch case?
Here is an example of me doing it the brute-force way:
'''
Equations corresponding to various labels
a -> 2+x[n]
b -> 3*x[n-1]+2x[n]
c -> 4*x[n]
These are just dummy equations I am making up for the example
'''
x = [4,7,6,6,9,12,4,9,1,11]
labelArray = ['a','b','b','b','c','c','a','b','b','c']
outputArray = []
n = 0
for label in labelArray:
if label == 'a':
output = 2+x[n]
elif label == 'b':
output = 3*x[n-1]+2*x[n]
elif label == 'c':
output = 4*x[n]
outputArray.append(output)
n += 1
print outputArray
# outputArray = [6, 26, 33, 30, 36, 48, 6, 30, 29, 44]
Answer: Here's a solution I came up with. It should be faster and more concise, though
it may not be the optimal solution.
from itertools import imap
# List of values
values = [4, 7, 6, 6, 9, 12, 4, 9, 1, 11]
# A list of corresponding methods for each value, must be same length as values.
# Optionally, you could create the data with the value and method in a tuple
# e.g. [(4, 'a'), (7, 'b') ... (x, 'y')]
# Though if you ensure both lists are of the same length, you can use the zip()
# method, which does the same thing.
methods = ['a', 'b', 'b', 'b', 'c', 'c', 'a', 'b', 'b', 'c']
# A dictionary with all your equations. You can also define them in a function
# elsewhere and include them like
# >{ 'a': external_function }
equations = {
# Lambda is just an anonymous function.
'a': lambda index: 2 + values[index],
'b': lambda index: 3 * values[index-1] + 2 * values[index],
'c': lambda index: 4 * values[index],
}
# Returns an iterator to prevent eating up your memory with one big array.
new_values = imap(lambda x,y: equations[x](y), methods, xrange(len(values)))
print [value for value in new_values]
Check out <https://docs.python.org/2/library/functions.html> for an
explanation of the built-in functions I'm using here. Here's some info on
iterators: <http://anandology.com/python-practice-book/iterators.html>
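If raw speed is the priority, a vectorized numpy version avoids the per-element Python call entirely by applying each equation to a boolean mask (a sketch, assuming every equation can be written in array form; note that `x[n-1]` becomes a rolled copy of the array):
    import numpy as np

    x = np.array([4, 7, 6, 6, 9, 12, 4, 9, 1, 11], dtype=float)
    labels = np.array(['a', 'b', 'b', 'b', 'c', 'c', 'a', 'b', 'b', 'c'])

    x_prev = np.roll(x, 1)  # x[n-1], wrapping around like the Python index -1 does
    out = np.empty_like(x)

    mask = labels == 'a'
    out[mask] = 2 + x[mask]
    mask = labels == 'b'
    out[mask] = 3 * x_prev[mask] + 2 * x[mask]
    mask = labels == 'c'
    out[mask] = 4 * x[mask]

    print out  # [ 6. 26. 33. 30. 36. 48. 6. 30. 29. 44.], matching the question's outputArray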
|
Reading a list stored in a text file, Python,
Question: I have a file whose content is in the form of a python list such as the
following:
['hello','how','are','you','doing','today','2016','10.004']
Is there any way to read the file back into a list object, instead of
using `.read()` and having the whole file read as a single string?
EDIT: for those who may be interested, I ran into a strange issue using (import
ast) as suggested as a solution for the above problem.
The program I used it in has a function which fetches historical stock data
from the yahoo finance python module. This function is in no way related to or
dependent on the function which used ast.literal_eval().
Anyway, every night after market close I collect new batches of historical
data from yahoo finance, and last night I ran into an error:
    simplejson.scanner.jsondecodeerror expecting value.
It was strange because it would collect data just fine for some companies but
throw the error for others, and sometimes work for the same company one minute
but not the next. After trying all kinds of things to debug and solve the
issue, I remembered that the import ast was recently added and thought I
should check whether it could have an effect; after removing the import ast
the program went back to working as it normally did.
Does anybody know why import ast caused issues? @Apero why did you initially
warn against using eval or ast.literal_eval?
Answer: 1. rename the file from i.e. **_foo.txt_** to **_foo.py_**
2. add `my_list =` in front of that line
3. in your code: `import foo; l = foo.my_list`
Simpler, no? ;-)
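Alternatively, the `ast.literal_eval` approach mentioned in the question's edit works without renaming the file (a sketch; unlike `eval`, it only accepts Python literals, so it cannot execute arbitrary code):
    import ast

    with open('foo.txt') as f:
        my_list = ast.literal_eval(f.read())
    print my_list[0]  # 'hello'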
|
Formatting a column with pandas
Question: I'm new to Pandas and Python.
We have a firewall application that parses out our ACLs in CSV format. The
problem is:
* it provides way too much info
* the format of the data makes the info useless
We've been editing these queries by hand until now.
I've figured out how to use pandas to "pull" the columns we need. Now I need
to reconfigure one of the columns to the proper format.
So far my code looks like this:
import pandas as pd
f=pd.read_csv("/Volumes/Untitled/ACL-SOURCE.csv")
keep_col = ['Device name','Source','Destination','Service']
new_f = f[keep_col]
# this pulls the 4 columns I need out of the original 20 column CSV.
# If I print 'new_f' I get the following:
Device name Source Destination Service
0 ACL-NAME-V1 ABC-123 MEC-KLM ssh/tcp
1 ACL-NAME-V1 ABC-123 MEC-KLM 3306/tcp
2 ACL-NAME-V1 MEC-456 MEC-KLM ssh/tcp
3 ACL-NAME-V1 MEC-456 MEC-KLM 3306/tcp
4 ACL-NAME-V1 MEC-456 MEC-KLM 4444/tcp
5 ACL-NAME-V1 MEC-456 MEC-KLM 8888/tcp
6 ACL-NAME-V1 MEC-456 MEC-KLM 4567-4568/tcp
7 ACL-NAME-V1 MEC-456 MEC-KLM icmp
At this point what I want to do is reformat the last column to move the tcp in
front of the port number and remove the '/'; the end result will go from
ssh/tcp to tcp ssh.
For some reason I can't wrap my head around this.
Please help. Thanks,
Answer: `apply` will apply a function to every element in your series; the trick is
writing the function. I called it `switch`, and 'x' is each value in the
series. Assign the result back to keep it:
    def switch(x):
        a = x.split('/')
        a.reverse()
        return ' '.join(a)

    new_f['Service'] = new_f['Service'].apply(switch)
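If you prefer not to define a named function, the same transformation fits in a single lambda (an equivalent sketch):
    new_f['Service'] = new_f['Service'].apply(lambda s: ' '.join(reversed(s.split('/'))))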
|
Python: Returning a filename for matching a specific condition
Question:
import sys, hashlib
import os
inputFile = 'C:\Users\User\Desktop\hashes.txt'
sourceDir = 'C:\Users\User\Desktop\Test Directory'
hashMatch = False
for root, dirs, files in os.walk(sourceDir):
for filename in files:
sourceDirHashes = hashlib.md5(filename)
for digest in inputFile:
if sourceDirHashes.hexdigest() == digest:
hashMatch = True
break
if hashMatch:
print str(filename)
else:
print 'hash not found'
Contents of inputFile =
2899ebdb5f7a90a216e97b3187851fc1
54c177418615a90a6424cb945f7a6aec
dd18bf3a8e0a2a3e53e2661c7fb53534
Contents of sourceDir files =
test
test 1
test 2
I almost have the code working; I'm just tripping up somewhere. My current
code always hits the else branch and reports that the hash hasn't been found,
even though the hashes do match, as I have verified. I have provided the
contents of my sourceDir so that someone can try this; the file names are
test, test 1 and test 2, and the files share the same content.
I must add, however, that I am not looking for the script to print the actual
file content, but rather the name of the file.
Could anyone suggest where I am going wrong and why it says the condition is
false?
Answer: You need to open the inputFile using open(inputFile, 'rt') before you can read
the hashes; as written, the loop iterates over the characters of the path
string, so no digest will ever match. Also, when you read the hashes, make
sure you strip them first to get rid of the newline characters \n at the end
of the lines.
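Putting both fixes together, a minimal sketch of the corrected loop (note it still hashes the file names, exactly as your original does; hash the file contents instead if that is what hashes.txt holds):
    import hashlib
    import os

    inputFile = 'C:\Users\User\Desktop\hashes.txt'
    sourceDir = 'C:\Users\User\Desktop\Test Directory'

    with open(inputFile, 'rt') as f:
        digests = set(line.strip() for line in f)

    for root, dirs, files in os.walk(sourceDir):
        for filename in files:
            if hashlib.md5(filename).hexdigest() in digests:
                print filename
            else:
                print 'hash not found'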
|
remove element from xml file with lxml python
Question: I'm trying to remove specific entries from a big xml file.
I find the specific entries by their text, from a list of text entries that
should be deleted.
I run this code:
#!/usr/bin/env python
from lxml import etree
g = open("/root/simplexml.xml", "rw")
f = etree.parse(g)
listdown = ["http://aiddp.org/administrator/components/com_attachments/controllers/Global%20Service/86af744091ea22ad5b1372ac7978b51f","http://primepromap.com/es/wp-includes/css/survey/survey/index.php?randInboxLightaspxn.17http://primepromap.com/es/wp-includes/css/survey/survey/index.php?randInboxLightaspxn.1774256418http:/peelrealest.com/property/ihttp://www.nwolb.com.default.aspx.refererident.568265843.puntopatrones.cl/wp-admin/js/upgrade/upgrade1.zip-extracted/upgrade/newp/loading.php="]
for downsite in listdown:
for found in f.xpath(".//url[text()='"+downsite+"']"):
print "deleted "+str(found)
found.getparent().remove(found)
print "over"
It should work, but after I open the xml file the entries that should be
deleted are still there... What is the problem here?
Answer: You need to _dump the modified tree back to the xml file_:
f.write("/root/simplexml.xml")
|
Python runtime error dictionary
Question: I have made this code, and I believe the problem is in line 30 (32). I get the
following error about the dictionary: **"RuntimeError: dictionary changed size
during iteration"**. I am at a loss; a google search and a look around stack
overflow turned up some examples and similar issues, but I can't seem to
figure it out. Thanks for your help.
import sys
from collections import defaultdict
from bisect import insort
graph = defaultdict(list)
edges = []
with open("blu.txt") as f:
for line in f:
(key, val) = line.split()
graph[key].append(val)
graph[val].append(key)
edges.append((key, val))
k = 3
change = True
while change:
change = False
for edge in edges:
inter = set(graph[edge[0]]).intersection(graph[edge[1]])
if len(inter) < (k - 2):
if edge[1] in graph[edge[0]]:
graph[edge[0]].remove(edge[1])
change = True
if edge[0] in graph[edge[1]]:
graph[edge[1]].remove(edge[0])
change = True
g = dict((key, value) for key, value in graph.items() if value)
for key, v in g.items():
for k, value in g.items():
if key in value:
g.pop(key, None)
for key, value in g.items():
a = []
insort(a, key)
for v in value:
insort(a, v)
print (tuple(a))
# for x in graph:
# print (x, graph[x])
# def generate_edges(graph):
# edges = []
# for k in graph:
# for neighbour in graph[k]:
# edges.append((k, neighbour))
# return edges
# print(generate_edges(graph))
Answer: You can't iterate over and mutate a dictionary at the same time, see below:
for key, v in g.items():
for k, value in g.items():
if key in value:
g.pop(key, None) # You can't do this in a loop. What are you trying to accomplish in the end?
First of all, you're looping twice, for no particular reason. And then you're
popping while iterating over `g`. Can you give us a sample input and output
with more details?
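A common fix is to iterate over a snapshot of the items so the dictionary itself can be mutated freely (a sketch of just that part):
    for key, v in list(g.items()):
        for k, value in list(g.items()):
            if key in value:
                g.pop(key, None)
                break  # key is gone; stop scanning for it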
|
Generate a random 12-character string comprised of unique uppercase and lowercase letters and numbers
Question: How can I create a Python algorithm to generate a 12-character string
comprised of unique uppercase and lowercase letters and numbers?
In my situation, it would be used as a unique session/key identifier that
would _likely_ be unique over 500K+ generations.
For example:
837uNNM9abCb
9HFRHcop24Cd
Thanks!
Answer: This is easily achieved using a while loop and Python's `random` and `string`
libraries.
**Code:**
import random
import string
def create():
_string, counter = "", 0
while counter < 12:
choice = random.choice(string.ascii_letters + string.digits)
if choice not in _string:
_string += choice
counter += 1
else:
pass
return _string
print(create())
**Output:**
YWdTocQs0R4X
**What it is doing:**
1. Creating variables `_string` and `counter`.
2. Starting a while loop that breaks after the condition `counter < 12` becomes false. (There is a reason for using a `while` loop over a `for` loop here that I will explain later.)
3. Uses Python's random library to find a random number/letter(upper and lowercase).
4. Checks to see if the choice is already in the string; if it is not, the choice is added to the string. But if it is, nothing happens and the loop starts over again (this is why I used a `while` loop over a `for` loop).
5. Returns the string!
**Note:** If you are ok with duplicates WITHIN the string this one-liner works
much more efficiently.
print(''.join(random.choice(string.ascii_letters + string.digits) for _ in range(12)))
|
How to get python libraries in pyspark?
Question: I want to use matplotlib.bblpath or shapely.geometry libraries in pyspark.
When I try to import any of them I get the below error:
>>> from shapely.geometry import polygon
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named shapely.geometry
I know the module isn't present, but I want to know how can these packages be
brought to my pyspark libraries.
Answer: In the Spark context try using:
SparkContext.addPyFile("module.py") # also .zip
Quoting from the
[docs](http://spark.apache.org/docs/latest/api/python/pyspark.html#pyspark.SparkContext.addPyFile):
> Add a .py or .zip dependency for all tasks to be executed on this
> SparkContext in the future. The path passed can be either a local file, a
> file in HDFS (or other Hadoop-supported filesystems), or an HTTP, HTTPS or
> FTP URI.
|
Cannot install uwsgi on Alpine
Question: I'm trying to install uwsgi using `pip install uwsgi` in my Alpine docker
image, but unfortunately it keeps failing, with errors that don't mean much to me:
Complete output from command /usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip-build-mEZegv/uwsgi/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-c7XA_e-record/install-record.txt --single-version-externally-managed --compile:
running install
using profile: buildconf/default.ini
detected include path: ['/usr/include/fortify', '/usr/include', '/usr/lib/gcc/x86_64-alpine-linux-musl/5.3.0/include']
Patching "bin_name" to properly install_scripts dir
detected CPU cores: 1
configured CFLAGS: -O2 -I. -Wall -Werror -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -fno-strict-aliasing -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -DUWSGI_HAS_IFADDRS -DUWSGI_ZLIB -DUWSGI_LOCK_USE_MUTEX -DUWSGI_EVENT_USE_EPOLL -DUWSGI_EVENT_TIMER_USE_TIMERFD -DUWSGI_EVENT_FILEMONITOR_USE_INOTIFY -DUWSGI_VERSION="\"2.0.12\"" -DUWSGI_VERSION_BASE="2" -DUWSGI_VERSION_MAJOR="0" -DUWSGI_VERSION_MINOR="12" -DUWSGI_VERSION_REVISION="0" -DUWSGI_VERSION_CUSTOM="\"\"" -DUWSGI_YAML -DUWSGI_PLUGIN_DIR="\".\"" -DUWSGI_DECLARE_EMBEDDED_PLUGINS="UDEP(python);UDEP(gevent);UDEP(ping);UDEP(cache);UDEP(nagios);UDEP(rrdtool);UDEP(carbon);UDEP(rpc);UDEP(corerouter);UDEP(fastrouter);UDEP(http);UDEP(ugreen);UDEP(signal);UDEP(syslog);UDEP(rsyslog);UDEP(logsocket);UDEP(router_uwsgi);UDEP(router_redirect);UDEP(router_basicauth);UDEP(zergpool);UDEP(redislog);UDEP(mongodblog);UDEP(router_rewrite);UDEP(router_http);UDEP(logfile);UDEP(router_cache);UDEP(rawrouter);UDEP(router_static);UDEP(sslrouter);UDEP(spooler);UDEP(cheaper_busyness);UDEP(symcall);UDEP(transformation_tofile);UDEP(transformation_gzip);UDEP(transformation_chunked);UDEP(transformation_offload);UDEP(router_memcached);UDEP(router_redis);UDEP(router_hash);UDEP(router_expires);UDEP(router_metrics);UDEP(transformation_template);UDEP(stats_pusher_socket);" -DUWSGI_LOAD_EMBEDDED_PLUGINS="ULEP(python);ULEP(gevent);ULEP(ping);ULEP(cache);ULEP(nagios);ULEP(rrdtool);ULEP(carbon);ULEP(rpc);ULEP(corerouter);ULEP(fastrouter);ULEP(http);ULEP(ugreen);ULEP(signal);ULEP(syslog);ULEP(rsyslog);ULEP(logsocket);ULEP(router_uwsgi);ULEP(router_redirect);ULEP(router_basicauth);ULEP(zergpool);ULEP(redislog);ULEP(mongodblog);ULEP(router_rewrite);ULEP(router_http);ULEP(logfile);ULEP(router_cache);ULEP(rawrouter);ULEP(router_static);ULEP(sslrouter);ULEP(spooler);ULEP(cheaper_busyness);ULEP(symcall);ULEP(transformation_tofile);ULEP(transformation_gzip);ULEP(transformation_chunked);ULEP(transformation_offload);ULEP(router_memcached);ULEP(router_redis);ULEP(router_hash);ULEP(router_expires);ULEP(router_metrics);ULEP(transformation_template);ULEP(stats_pusher_socket);"core/utils.c: In function 'uwsgi_as_root':
core/utils.c:344:7: error: implicit declaration of function 'unshare' [-Werror=implicit-function-declaration]
if (unshare(uwsgi.unshare)) {
^
core/utils.c:564:5: error: implicit declaration of function 'sigfillset' [-Werror=implicit-function-declaration]
sigfillset(&smask);
^
core/utils.c:565:5: error: implicit declaration of function 'sigprocmask' [-Werror=implicit-function-declaration]
sigprocmask(SIG_BLOCK, &smask, NULL);
^
core/utils.c:565:17: error: 'SIG_BLOCK' undeclared (first use in this function)
sigprocmask(SIG_BLOCK, &smask, NULL);
^
core/utils.c:565:17: note: each undeclared identifier is reported only once for each function it appears in
core/utils.c:586:7: error: implicit declaration of function 'chroot' [-Werror=implicit-function-declaration]
if (chroot(uwsgi.chroot)) {
^
core/utils.c:791:5: error: unknown type name 'ushort'
ushort *array;
^
core/utils.c:833:8: error: implicit declaration of function 'setgroups' [-Werror=implicit-function-declaration]
if (setgroups(0, NULL)) {
^
core/utils.c:848:8: error: implicit declaration of function 'initgroups' [-Werror=implicit-function-declaration]
if (initgroups(uidname, uwsgi.gid)) {
^
core/utils.c: In function 'uwsgi_close_request':
core/utils.c:1145:18: error: 'WAIT_ANY' undeclared (first use in this function)
while (waitpid(WAIT_ANY, &waitpid_status, WNOHANG) > 0);
^
core/utils.c: In function 'uwsgi_resolve_ip':
core/utils.c:1802:7: error: implicit declaration of function 'gethostbyname' [-Werror=implicit-function-declaration]
he = gethostbyname(domain);
^
core/utils.c:1802:5: error: assignment makes pointer from integer without a cast [-Werror=int-conversion]
he = gethostbyname(domain);
^
core/utils.c: In function 'uwsgi_unix_signal':
core/utils.c:1936:19: error: storage size of 'sa' isn't known
struct sigaction sa;
^
core/utils.c:1938:24: error: invalid application of 'sizeof' to incomplete type 'struct sigaction'
memset(&sa, 0, sizeof(struct sigaction));
^
core/utils.c:1942:2: error: implicit declaration of function 'sigemptyset' [-Werror=implicit-function-declaration]
sigemptyset(&sa.sa_mask);
^
core/utils.c:1944:6: error: implicit declaration of function 'sigaction' [-Werror=implicit-function-declaration]
if (sigaction(signum, &sa, NULL) < 0) {
^
core/utils.c:1936:19: error: unused variable 'sa' [-Werror=unused-variable]
struct sigaction sa;
^
In file included from core/utils.c:1:0:
core/utils.c: In function 'uwsgi_list_has_num':
./uwsgi.h:140:47: error: implicit declaration of function 'strtok_r' [-Werror=implicit-function-declaration]
#define uwsgi_foreach_token(x, y, z, w) for(z=strtok_r(x, y, &w);z;z = strtok_r(NULL, y, &w))
^
core/utils.c:1953:2: note: in expansion of macro 'uwsgi_foreach_token'
uwsgi_foreach_token(list2, ",", p, ctx) {
^
./uwsgi.h:140:46: error: assignment makes pointer from integer without a cast [-Werror=int-conversion]
#define uwsgi_foreach_token(x, y, z, w) for(z=strtok_r(x, y, &w);z;z = strtok_r(NULL, y, &w))
^
core/utils.c:1953:2: note: in expansion of macro 'uwsgi_foreach_token'
uwsgi_foreach_token(list2, ",", p, ctx) {
^
./uwsgi.h:140:70: error: assignment makes pointer from integer without a cast [-Werror=int-conversion]
#define uwsgi_foreach_token(x, y, z, w) for(z=strtok_r(x, y, &w);z;z = strtok_r(NULL, y, &w))
^
core/utils.c:1953:2: note: in expansion of macro 'uwsgi_foreach_token'
uwsgi_foreach_token(list2, ",", p, ctx) {
^
core/utils.c: In function 'uwsgi_list_has_str':
./uwsgi.h:140:46: error: assignment makes pointer from integer without a cast [-Werror=int-conversion]
#define uwsgi_foreach_token(x, y, z, w) for(z=strtok_r(x, y, &w);z;z = strtok_r(NULL, y, &w))
^
core/utils.c:1968:2: note: in expansion of macro 'uwsgi_foreach_token'
uwsgi_foreach_token(list2, " ", p, ctx) {
^
./uwsgi.h:140:70: error: assignment makes pointer from integer without a cast [-Werror=int-conversion]
#define uwsgi_foreach_token(x, y, z, w) for(z=strtok_r(x, y, &w);z;z = strtok_r(NULL, y, &w))
^
core/utils.c:1968:2: note: in expansion of macro 'uwsgi_foreach_token'
uwsgi_foreach_token(list2, " ", p, ctx) {
^
core/utils.c:1969:8: error: implicit declaration of function 'strcasecmp' [-Werror=implicit-function-declaration]
if (!strcasecmp(p, str)) {
^
core/utils.c: In function 'uwsgi_sig_pause':
core/utils.c:2361:2: error: implicit declaration of function 'sigsuspend' [-Werror=implicit-function-declaration]
sigsuspend(&mask);
^
core/utils.c: In function 'uwsgi_run_command_putenv_and_wait':
core/utils.c:2453:7: error: implicit declaration of function 'putenv' [-Werror=implicit-function-declaration]
if (putenv(envs[i])) {
^
In file included from core/utils.c:1:0:
core/utils.c: In function 'uwsgi_build_unshare':
./uwsgi.h:140:46: error: assignment makes pointer from integer without a cast [-Werror=int-conversion]
#define uwsgi_foreach_token(x, y, z, w) for(z=strtok_r(x, y, &w);z;z = strtok_r(NULL, y, &w))
^
core/utils.c:2855:2: note: in expansion of macro 'uwsgi_foreach_token'
uwsgi_foreach_token(list, ",", p, ctx) {
^
./uwsgi.h:140:70: error: assignment makes pointer from integer without a cast [-Werror=int-conversion]
#define uwsgi_foreach_token(x, y, z, w) for(z=strtok_r(x, y, &w);z;z = strtok_r(NULL, y, &w))
^
core/utils.c:2855:2: note: in expansion of macro 'uwsgi_foreach_token'
uwsgi_foreach_token(list, ",", p, ctx) {
^
core/utils.c: In function 'uwsgi_tmpfd':
core/utils.c:3533:7: error: implicit declaration of function 'mkstemp' [-Werror=implicit-function-declaration]
fd = mkstemp(template);
^
core/utils.c: In function 'uwsgi_expand_path':
core/utils.c:3615:7: error: implicit declaration of function 'realpath' [-Werror=implicit-function-declaration]
if (!realpath(src, dst)) {
^
core/utils.c: In function 'uwsgi_set_cpu_affinity':
core/utils.c:3641:3: error: unknown type name 'cpu_set_t'
cpu_set_t cpuset;
^
core/utils.c:3646:3: error: implicit declaration of function 'CPU_ZERO' [-Werror=implicit-function-declaration]
CPU_ZERO(&cpuset);
^
core/utils.c:3651:4: error: implicit declaration of function 'CPU_SET' [-Werror=implicit-function-declaration]
CPU_SET(base_cpu, &cpuset);
^
core/utils.c:3662:7: error: implicit declaration of function 'sched_setaffinity' [-Werror=implicit-function-declaration]
if (sched_setaffinity(0, sizeof(cpu_set_t), &cpuset)) {
^
core/utils.c:3662:35: error: 'cpu_set_t' undeclared (first use in this function)
if (sched_setaffinity(0, sizeof(cpu_set_t), &cpuset)) {
^
core/utils.c: In function 'uwsgi_thread_run':
core/utils.c:3782:2: error: implicit declaration of function 'pthread_sigmask' [-Werror=implicit-function-declaration]
pthread_sigmask(SIG_BLOCK, &smask, NULL);
^
core/utils.c:3782:18: error: 'SIG_BLOCK' undeclared (first use in this function)
pthread_sigmask(SIG_BLOCK, &smask, NULL);
^
core/utils.c: In function 'uwsgi_envdir':
core/utils.c:4349:8: error: implicit declaration of function 'unsetenv' [-Werror=implicit-function-declaration]
if (unsetenv(de->d_name)) {
^
core/utils.c:4380:7: error: implicit declaration of function 'setenv' [-Werror=implicit-function-declaration]
if (setenv(de->d_name, content, 1)) {
^
cc1: all warnings being treated as errors
*** uWSGI compiling server core ***
Any idea what could cause this? I'm installing the following dependencies
beforehand:
RUN apk --update add \
bash \
python \
python-dev \
py-pip \
gcc \
zlib-dev \
git \
linux-headers \
build-base \
musl \
musl-dev \
memcached \
libmemcached-dev
Answer: Unfortunately the latest release of `uwsgi` does not support musl, a glibc
alternative that alpine and a couple of other distros use. uWSGI will not build
with musl when the `ugreen` plugin is included (see
<https://github.com/unbit/uwsgi/pull/522>), so you still cannot `pip install
uwsgi`. However, if you build uwsgi with the environment variable
`UWSGI_PROFILE=core` the build should succeed; but it will fail at runtime due
to the issues solved here (<https://github.com/unbit/uwsgi/pull/1210>). This
is probably grim news -- I know it was for me -- but at least it looks like the
uwsgi team is taking time to address its issues running on musl. Hopefully it
will work in the next release.
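For reference, the profile can be set straight on the pip command line (with the runtime caveat above):
    UWSGI_PROFILE=core pip install uwsgi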
|
Python indexerror
Question: When running this in IDLE with "Run Module" I get the error below. I have
tried a lot of different things, but nothing seems to work! I'm just learning
python and don't know much yet.
print ("[+] Universal DLL Injector by Ckacmaster")
print ("[+] contact : If you know me then give me a shout")
print ("[+] usage: ./dll_injector.py <PID> <DLLPATH>")
print ("\n")
from ctypes import *
import sys,ctypes
import time
# Define constants we use
PAGE_RW_PRIV = 0x04
PROCESS_ALL_ACCESS = 0x1F0FFF
VIRTUAL_MEM = 0x3000
#CTYPES handler
kernel32 = windll.kernel32
def dll_inject(PID,DLL_PATH):
print ("[+] Starting DLL Injector")
LEN_DLL = len(DLL_PATH)# get the length of the DLL PATH
print ("\t[+] Getting process handle for PID:%d ") % PID
hProcess = kernel32.OpenProcess(PROCESS_ALL_ACCESS,False,PID)
if hProcess == None:
print ("\t[+] Unable to get process handle")
sys.exit(0)
print ("\t[+] Allocating space for DLL PATH")
DLL_PATH_ADDR = kernel32.VirtualAllocEx(hProcess,
0,
LEN_DLL,
VIRTUAL_MEM,
PAGE_RW_PRIV)
bool_Written = c_int(0)
print ("\t[+] Writing DLL PATH to current process space")
kernel32.WriteProcessMemory(hProcess,
DLL_PATH_ADDR,
DLL_PATH,
LEN_DLL,
byref(bool_Written))
print ("\t[+] Resolving Call Specific functions & libraries")
kernel32DllHandler_addr = kernel32.GetModuleHandleA("kernel32")
print ("\t\t[+] Resolved kernel32 library at 0x%08x") % kernel32DllHandler_addr
LoadLibraryA_func_addr = kernel32.GetProcAddress(kernel32DllHandler_addr,"LoadLibraryA")
print ("\t\t[+] Resolve LoadLibraryA function at 0x%08x") %LoadLibraryA_func_addr
thread_id = c_ulong(0) # for our thread id
print ("\t[+] Creating Remote Thread to load our DLL")
if not kernel32.CreateRemoteThread(hProcess,
None,
0,
LoadLibraryA_func_addr,
DLL_PATH_ADDR,
0,
byref(thread_id)):
print ("Injection Failed, exiting")
sys.exit(0)
else:
print ("Remote Thread 0x%08x created, DLL code injected") % thread_id.value
PID = int(sys.argv[1])
DLL_PATH = str(sys.argv[2])
dll_inject(PID, DLL_PATH)
time.sleep(5)
import subprocess
filepath=os.path.dirname(os.path.realpath(pid.cmd))
p = subprocess.Popen(filepath, shell=True, stdout = subprocess.PIPE)
stdout, stderr = p.communicate()
print p.returncode # is 0 if success
Getting
> Traceback (most recent call last):
> File "C:\Users\The Man\Desktop\dll.py", line 58, in
> PID = int(sys.argv[1])
> IndexError: list index out of range
Answer: This module needs some command-line arguments to be passed to it, specifically
the PID as the first argument and the path to your DLL as the second argument.
That's why `sys.argv[1]` is causing an error; `sys.argv` stores program
arguments but it hasn't been passed any, so the array only has 1 element (the
script name).
Instead, open a command prompt, enter this (replacing `<PID>` and `<DLLPATH>`
with the desired values) and press `Enter`:
"C:\Users\The Man\Desktop\dll.py" <PID> <DLLPATH>
This will give the script the arguments it needs.
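If you also want the script itself to fail with a friendly message instead of a traceback when the arguments are missing, a small guard before reading `sys.argv` would do it (a sketch):
    if len(sys.argv) < 3:
        print ("usage: dll.py <PID> <DLLPATH>")
        sys.exit(1)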
|
Python reads tif image differently on Mac and Windows. Why? How? Which is correct? How to fix?
Question: I am trying to process some data stored as a tif image. To my dismay, python
2.7x reads it out differently on my Mac laptop and my Windows workstation.
# import modules
import numpy
import matplotlib.pyplot as plt
# read file
image = plt.imread('fileName.tif')
# display file as image
plt.imshow(image)
A visual inspection does not reveal any significant differences, and all
notable features in one displayed image are visible in the other. The same
data is indeed being displayed in both cases. However, a closer look reveals
that there are important differences. The following code returns different
results on the two computers:
image.shape # shape & size of the array holding the data
image.dtype # data type of each element of the array
image[0, 0] # an individual pixel value
image[1000, 1000] # another pixel value
image.min()
image.max()
On Mac, that code returns
(2048, 2048, 4)
dtype('uint8')
array([0, 0, 0, 255], dtype=uint8)
array([71, 71, 71, 255], dtype=uint8)
0
255
Whereas on Windows, it returns
(2048L, 2048L)
dtype('uint16')
0
78
0
951
These differences may seem (mostly) trivial, but I'm working in a context
where such details are important.
I initially thought my Mac was interpreting the data correctly. If so, the
slice `image[:, :, n]` is the nth layer in a four-layer image. On Mac, layers
`0`, `1`, and `2` are identical, as the red, green, and blue channels would be
in a grayscale image, and layer `3` is all `255`s, as the opacity layer would
be in a fully opaque image.
> (Mistakes like this information redundancy are par for the course
> hereabouts. The data-taking setups here are cobbled together out of hardware
> from several different sources by people whose computer literacy often has
> room for improvement.)
However, an estimate of what the file size ought to be favors Windows. The
size of the file is the same to within 5% on the two computers, at about 8 MB;
I put the variation down to some difference in how the information is stored
by the two OSs. We estimate how large it ought to be:
Mac: 4 layers x 4x10^6 pixels per layer x 1 byte per pixel = approx. 16 MB
Windows: 1 layer x 4x10^6 pixels per layer x 2 bytes per pixel = approx. 8 MB
Since Python on Mac is claiming to read twice as much information as the file
size indicates, this suggests the Windows version is correct.
So my question is this: Which of the two readings is correct? How and why is
the same data being read differently? How can I ensure that similar data is
read correctly in the future, regardless of the system my code is run on?
[Link to image file used here.](https://office365stanford-my.sharepoint.com/personal/apf_stanford_edu/_layouts/15/guestaccess.aspx?guestaccesstoken=jsz3eHdMO03ghIo6Mo2vjGVjKQmhr0gv9NgnMe6aaD8%3D&docid=090a8ff60906447eeb716ccf565291990&expiration=2016%2F04%2F24%2018%3A19%3A00)
Many thanks.
Answer: The very useful `tiffinfo` shows you have a one-channel, 16-bit image:
$ tiffinfo sampleImage.tif
TIFFReadDirectory: Warning, Unknown field with tag 34710 (0x8796) encountered.
TIFF Directory at offset 0x8 (8)
Image Width: 2048 Image Length: 2048
Resolution: 126.582, 126.582 pixels/cm
Bits/Sample: 16
Compression Scheme: None
Photometric Interpretation: min-is-black
Orientation: row 0 top, col 0 lhs
Rows/Strip: 2048
Planar Configuration: single image plane
Tag 34710: 1024
Tag 34710 contains some camera information and shouldn't affect the result.
vips agrees that pixel (1000, 1000) is indeed 78, and the image maximum is
951.
$ vips getpoint sampleImage.tif 1000 1000
78
$ vips max sampleImage.tif
951.000000
So your Windows install is correct.
On Mac your image has been converted to 8-bit RGBA at some point in
processing. You'll need to start checking that the versions of the various
bits of software you are using are the same on both machines.
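As a quick cross-platform sanity check, you can also bypass `plt.imread` and decode the file with PIL/Pillow directly (a sketch, assuming your Pillow build handles 16-bit TIFFs):
    from PIL import Image
    import numpy as np

    im = Image.open('fileName.tif')
    image = np.array(im)
    print image.dtype, image.shape  # expect uint16, (2048, 2048) if decoded faithfully
    print image[1000, 1000]         # should print 78, matching the Windows reading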
|
Can't get fractal image to work
Question: So I'm working on a homework assignment regarding using image objects in
python. I'm using python 3.4.1 for this assignment. I feel like I have
everything done, but it doesn't want to work correctly. Basically, I'm trying
to get it to look like the picture that I've attached, but it only shows as 1
red line across, and 1 red line top to bottom on a white background. Any help
would be much appreciated.
The attached image: <http://imgur.com/TMho41w>
import cImage as image
width = 500
height = 500
img = image.EmptyImage(width, height)
win = image.ImageWin("Exercise 3", width, height)
img.draw(win)
for row in range(height):
for col in range(width):
p = img.getPixel(col, row)
if row == 0 or col == 0:
p = image.Pixel(255, 0, 0)
else:
Sum = 0
temppixel = img.getPixel(col-1, row)
if temppixel.getRed() == 255:
Sum = Sum + 1
elif temppixel.getBlue() == 255:
Sum = Sum + 2
temppixel = img.getPixel(col-1, row-1)
if temppixel.getRed() == 255:
Sum = Sum + 1
elif temppixel.getBlue() == 255:
Sum = Sum + 2
temppixel = img.getPixel(col, row-1)
if temppixel.getRed() == 255:
Sum = Sum + 1
elif temppixel.getBlue() == 255:
Sum = Sum + 2
if Sum % 3 == 1:
p = image.Pixel(255, 0, 0)
elif Sum % 3 == 2:
p = image.Pixel(0, 0, 255)
else:
p = image.Pixel(255, 255, 255)
img.setPixel(col, row, p)
img.draw(win)
img.draw(win)
# uncomment this to save the image as a file
#img.saveTk("gradient.gif")
win.exitonclick()
Answer: Unfortunately, your code does exactly what you have written it to do. Let's
call the first three `if ... elif` checks conditions 1, 2 and 3:
1. The first pixel is set to red.
2. Then we progress through the first line, so `row = 0`, which means conditions 2 and 3 are using invalid coordinates (because of `row-1`). So only condition 1 is at play here, and it will always increment `Sum` by 1, which means it'll add a new red pixel.
3. So you now have your first red line.
4. For the first column, starting from the second line: conditions 1 and 2 are using invalid coordinates. Condition 3 will always produce `Sum = 1`, which means a new red pixel. And you have your red line from top to bottom.
5. And then from `row = 1` and `col = 1`, all neighbors are red, which leads to a new white pixel. Unfortunately, white does contain some red, so the same conditions are always met, and you have your white background.
I haven't been able to find the complete algorithm for this method of building
a Sierpinski carpet, so I can't really correct it. But you should be extra
careful with these edge cases: what should the three neighbors be if you are
on the first line or first column?
|
Image gradients become inaccurate when downscaling using a variety of different methods
Question: We have a fairly complex image processing script written in Python which uses
PIL and numpy. For one of the steps, we have a very sensitive multi-channel
gradient which is used as a lookup table. Once it has been created, it is
saved down to multiple smaller resolutions. When this happens, however, the
green channel, which has a gradient running left to right, suddenly appears to
lose precision. It is supposed to drop 1 of 255 values every 50 pixels or so.
Instead, it starts dropping by values of 2 every 100 pixels. This causes huge
issues and I can't figure out why PIL is doing it. However, I do see jumps of
1 in other portions of the map, so I don't think it's as simple as missing one
bit of precision. I also noticed that on another channel, the whole map seemed
to be shifted by 1 value. The entire thing seems inaccurate once scaled, even
when using the "Nearest" filter.
For the full size image, we create it from our numpy array with the following:
image = Image.fromarray(imageIn.astype(np.uint8))
We then scale it down:
new_image = image.resize(new_size, scaleFilter)
The scale is always half the largest and I have tried all available scale
options.
We then save it to a PNG as follows:
new_image.save(file_name, 'PNG')
We save both the large one directly after step 1 with the same save command
and it is fine. After the scale, we have the issue on the green channel. Any
help would be great!
EDIT:
It now appears that it is likley an issue in SciPy. The following still causes
the issue:
new_array = misc.imresize(imageIn, (x_size, y_size, 4), interp='nearest')
misc.imsave(file_name,new_array)
I do not understand how I am even getting the distortions with nearest. I am
allocating this array as a float64, but it must involve rounding issues
somewhere within the code.
EDIT #2:
I took this a step further and tried OSX built in program sips to download it
and got the same distortion! I then tried it with Adobe After Effects and it
worked fine. I then installed imagemagick which now works fine. I will still
award the bounty to anyone who can explain why this is happening within all
these methods.
EDIT #3
Per the request, here is a section of a sprite map scaled and unscaled. During
creating these, I found the OSX's built in "Preview" application also causes
scaling issues when scaling down so I actually had to use photoshop to get the
original clip.
Original:
[](http://i.stack.imgur.com/uY173.png)
Scaled with distortions. Try looking at the green channel along the horizontal
axis
[](http://i.stack.imgur.com/Zo0Zk.png)
Note that these clippings are not of the exact same pixels, but cut from the
same area as you can see by the shape
EDIT #4
I have now tried doing this scaling via OpenGL within the application and I
have found it happens there too! Does this have to do with some fundamental
issue of doing bilinear interpolation with a fixed number of bits?
Answer: The following code appears to do the right thing when scaling by 50%, using
skimage:
import numpy
import skimage
import skimage.io
img = skimage.io.imread('uY173.png')
import skimage.transform
img50_order0 = skimage.img_as_ubyte( skimage.transform.rescale(img, 0.5, order=0, clip=True) )
img50_order1 = skimage.img_as_ubyte( skimage.transform.rescale(img, 0.5, order=1, clip=True) )
img50_lm = numpy.rint( skimage.transform.downscale_local_mean(img, (2,2,1), clip=True) )
import scipy.ndimage.interpolation
img50_nd = scipy.ndimage.interpolation.zoom(img, (0.5, 0.5, 1))
    import matplotlib.pyplot as plt

    # plot section of green channel along horizontal axis
    plt.plot(img50_order0[50, :, 1])
    plt.plot(img50_order1[50, :, 1])
    plt.plot(img50_lm[50, :, 1])
    plt.plot(img50_nd[50, :, 1])
This does not (as far as I can tell) depend on PIL under the hood. The source
image is read as uint8, processed and rounded in subtly different ways in each
one, resulting in a uint8 output. The difference between all of these is never
more than 1 though, and the steps are never size 2.
|
How to sort a LARGE dictionary
Question: I have a python script that is working with a large (~14gb) textfile. I end up
with a dictionary of keys and values, but I am getting a memory error when I
try to sort the dictionary by value.
I know the dictionary is too big to load into memory and then sort, but how
could I go about accomplishing this?
Answer: You can use an ordered key/value store like wiredtiger, leveldb, or bsddb. All
of them support ordered keys using a custom sort function. leveldb is the
easiest to use, but if you use python 2.7, [`bsddb` is included in the
stdlib](https://docs.python.org/2/library/bsddb.html). If you only require
lexicographic sorting you can use the `btopen` function (the B-tree access
method keeps its keys sorted, which a hash table does not) to open a
persistent sorted dictionary:
    from bsddb import btopen
    db = btopen('dict.db')
db['020'] = 'twenty'
db['002'] = 'two'
db['value'] = 'value'
db['key'] = 'key'
print(db.keys())
This outputs
>>> ['002', '020', 'key', 'value']
Don't forget to close the db after your work:
db.close()
Mind the fact that the btopen configuration might not suit your needs; in that
case I recommend leveldb, which has a simple API, or wiredtiger for speed.
To order by value in bsddb, you have to use the _composite key pattern_ or
_key composition_, which boils down to creating a database key that keeps the
ordering you are looking for. In this example we pack the original dict value
first (so that small values appear first), followed by the original dict key
(so that the bsddb key is unique):
import struct
from bsddb import btopen
my_dict = {'a': 500, 'abc': 100, 'foobar': 1}
# insert
db = btopen('dict.db')
for key, value in my_dict.iteritems():
composite_key = struct.pack('>Q', value) + key
db[composite_key] = '' # value is not useful in this case but required
db.close()
# read
db = btopen('dict.db')
for key, _ in db.iteritems(): # iterate over database
size = struct.calcsize('>Q')
# unpack
value, key = key[:size], key[size:]
value = struct.unpack('>Q', value)[0]
print key, value
db.close()
This outputs the following:
foobar 1
abc 100
a 500
|
Finding the top 10 and converting it from centimeters to inches - Python
Question: I am reading data from a file, shown below; it is a .dat file:
1
Carmella Henderson
24.52
13.5
21.76
2
Christal Piper
14.98
11.01
21.75
3
Erma Park
12.11
13.51
18.18
4
Dorita Griffin
20.05
10.39
21.35
The file itself contains 50 records. From this data I need the person number,
name and the first number, like so:
1 #person number
Marlon Holmes #Name
18.86 # First number
13.02 # Second Number
13.36 # Third Number
I already have code to read the data; however, I am unable to get the top 10
results based on the #First number.
The #First number is currently in centimeters but needs to be converted to
inches. I am unsure how to combine the top 10 and the conversion into one step
alongside the reading of the data.
Code that reads the data:
with open('veggies_2016.txt', 'r') as f:
count = 0
excess_count = 0
for line in f:
if count < 3:
print(line)
count += 1
elif count == 3 and excess_count < 1:
excess_count += 1
else:
count = 0
excess_count = 0
As mentioned, the code reads the data (#person number, #name and #first
number), but #first number needs to be converted to inches and then all of the
data needs to be sorted to find the top 10.
This process will also have to be repeated for #second number and #third
number; however, their code is separate from #first number's.
I have tried reading the data, appending it to a list, then sorting and
converting it from there, but with no success. Any help would be appreciated.
Whole code:
from collections import OrderedDict
from operator import itemgetter
import pprint
def menu():
exit = False
while not exit:
print("To enter new competitior data, type new")
print("To view the competition score boards, type Scoreboard")
print("To view the Best Overall Growers Scoreboard, type Podium")
print("To review this years and previous data, type Data review")
print("Type quit to exit the program")
choice = raw_input("Which option would you like?")
if choice == 'new':
new_competitor()
elif choice == 'Scoreboard':
scoreboard_menu()
elif choice == 'Podium':
podium_place()
elif choice == 'Data review':
data_review()
elif choice == 'quit':
print("Goodbye")
raise SystemExit
"""Entering new competitor data: record competitor's name and vegtables lengths"""
def competitor_data():
global competitor_num
l = []
print("How many competitors would you like to enter?")
competitors = raw_input("Number of competitors:")
num_competitors = int(competitors)
for i in range(num_competitors):
name = raw_input("Enter competitor name:")
Cucumber = raw_input("Enter length of Cucumber:")
Carrot = raw_input("Enter length of Carrot:")
Runner_Beans = raw_input("Enter length of Runner Beans:")
l.append(competitor_num)
l.append(name)
l.append(Cucumber)
l.append(Carrot)
l.append(Runner_Beans)
competitor_num += 1
return (l)
def new_competitor():
with open('veggies_2016.txt', 'a') as f:
for item in competitor_data():
f.write("%s\n" %(item))
def scoreboard_menu():
exit = False
print("Which vegetable would you like the scoreboard for?")
vegetable = raw_input("Please type either Cucumber, Carrot or Runner Beans:")
if vegetable == "Cucumber":
Cucumber_Scoreboard()
elif vegetable == "Carrot":
Carrot_Scoreboard()
elif vegetable == "Runner Beans":
Runner_Beans_Scoreboard()
def Cucumber_Scoreboard():
exit = True
print("Which year would you like the Scoreboard from?")
scoreboard = raw_input("Please type a year:")
if scoreboard == "2015":
cucumber_veg_2015()
elif scoreboard == "2014":
cucumber_veg_2014()
elif scoreboard == "2016":
cucumber_veg_2016()
def cucumber_veg_2016(cm):
return float(cm) / 2.54
names = OrderedDict([('Competitor Number', int),
('Competitor Name', str),
('Cucumber', cucumber_veg_2016),
('Carrot', float),
('Runner Bean', float)])
data = []
with open('veggies_2016.txt') as fobj:
while True:
item = {}
try:
for name, func in names.items():
item[name] = func(next(fobj).strip())
data.append(item)
except StopIteration:
break
pprint.pprint(sorted(data, key=itemgetter('Cucumber'))[:10])
Answer: # Solution
Reading the data into a list of dictionaries would work:
from collections import OrderedDict
from operator import itemgetter
import pprint
def to_inch(cm):
return float(cm) / 2.54
names = OrderedDict([('person_number', int),
('name', str),
('first', to_inch),
('second', float),
('third', float)])
data = []
with open('veggies_2016.txt') as fobj:
while True:
item = {}
try:
for name, func in names.items():
item[name] = func(next(fobj).strip())
data.append(item)
except StopIteration:
break
pprint.pprint(sorted(data, key=itemgetter('first'))[:10])
Output:
[{'first': 4.76771653543307,
'name': 'Erma Park',
'person_number': 3,
'second': 13.51,
'third': 18.18},
{'first': 5.897637795275591,
'name': 'Christal Piper',
'person_number': 2,
'second': 11.01,
'third': 21.75},
{'first': 7.893700787401575,
'name': 'Dorita Griffin',
'person_number': 4,
'second': 10.39,
'third': 21.35},
{'first': 9.653543307086613,
'name': 'Carmella Henderson',
'person_number': 1,
'second': 13.5,
'third': 21.76}]
# In Steps
This helper function converts centimeters into inches:
def to_inch(cm):
return float(cm) / 2.54
We use an ordered dictionary to hold the names for the different items we want
to read in order. The value is a function that we use to convert the read
value for each item:
names = OrderedDict([('person_number', int),
('name', str),
('first', to_inch),
('second', float),
('third', float)])
We start with an empty list:
data = []
And open our file:
with open('veggies_2016.txt') as fobj:
We do something without a defined end and create a new dictionary `item` each
time:
while True:
item = {}
We try to read from the file until it is finished, i.e. until we get a
`StopIteration` exception:
try:
for name, func in names.items():
item[name] = func(next(fobj).strip())
data.append(item)
except StopIteration:
break
We go through the keys and values of our ordered dictionary `names` and call
each value, i.e. the function `func()`, on the next line we retrieve with
`next()`. This converts the entry into the desired datatype and does the
cm-to-inch conversion for `first`. After reading all items for one person, we append
the dictionary to the list `data`.
Finally, we sort by the key `first` and print out the top 10 entries (my
example file has fewer than 10 entries):
pprint.pprint(sorted(data, key=itemgetter('first'))[:10])
# Integration with your code:
You need to put the code into the function `podium_place()`:
def cucumber_veg_2016(cm):
return float(cm) / 2.54
def podium_place():
names = OrderedDict([('Competitor Number', int),
('Competitor Name', str),
('Cucumber', cucumber_veg_2016),
('Carrot', float),
('Runner Bean', float)])
data = []
with open('veggies_2016.txt') as fobj:
while True:
item = OrderedDict()
try:
for name, func in names.items():
item[name] = func(next(fobj).strip())
data.append(item)
except StopIteration:
break
sorted_data = sorted(data, key=itemgetter('Cucumber'), reverse=True)
for entry in sorted_data[:10]:
for key, value in entry.items():
print key, value
print
menu()
At the end you need to call `menu()`. Also, if "top" means largest first, you
need to sort with `reverse=True` (see above).
|
How to parallelize the numpy operations in cython
Question: I am trying to parallelize the following code which includes numerous numpy
array operations
#fft_fit.pyx
import cython
import numpy as np
cimport numpy as np
from cython.parallel cimport prange
from libc.stdlib cimport malloc, free
dat1 = np.genfromtxt('/home/bagchilab/Sumanta_files/fourier_ecology_sample_data_set.csv',delimiter=',')
dat = np.delete(dat1, 0, 0)
yr = np.unique(dat[:,0])
fit_dat = np.empty([1,2])
def fft_fit_yr(np.ndarray[double, ndim=1] yr, np.ndarray[double, ndim=2] dat, int yr_idx, int pix_idx):
cdef np.ndarray[double, ndim=2] yr_dat1
cdef np.ndarray[double, ndim=2] yr_dat
cdef np.ndarray[double, ndim=2] fft_dat
cdef np.ndarray[double, ndim=2] fft_imp_dat
cdef int len_yr = len(yr)
for i in prange(len_yr ,nogil=True):
with gil:
yr_dat1 = dat[dat[:,yr_idx]==yr[i]]
yr_dat = yr_dat1[~np.isnan(yr_dat1).any(axis=1)]
print "index" ,i
y_fft = np.fft.fft(yr_dat[:,pix_idx])
y_fft_abs = np.abs(y_fft)
y_fft_freq = np.fft.fftfreq(len(y_fft), 1)
x_fft = range(len(y_fft))
fft_dat = np.column_stack((y_fft, y_fft_abs))
cut_off_freq = np.percentile(y_fft_abs, 25)
imp_freq = np.array(y_fft_abs[y_fft_abs > cut_off_freq])
fft_imp_dat = np.empty((1,2))
for j in range(len(imp_freq)):
freq_dat = fft_dat[fft_dat[:, 1]==imp_freq[j]]
fft_imp_dat = np.vstack((fft_imp_dat , freq_dat[0,:]))
fft_imp_dat = np.delete(fft_imp_dat, 0, 0)
fit_dat1 = np.fft.ifft(fft_imp_dat[:,0])
fit_dat2 = np.column_stack((fit_dat1.real, [yr[i]] * len(fit_dat1)))
fit_dat = np.concatenate((fit_dat, fit_dat2), axis = 0)
I have used the following code for setup.py
####setup.py
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
setup(
    cmdclass = {'build_ext': build_ext},
    ext_modules = [Extension("fft_fit_yr", ["fft_fit.pyx"],
                             extra_compile_args=['-fopenmp'],
                             extra_link_args=['-fopenmp'])]
)
But I am getting the following error when I compile the fft_fit.pyx in cython:
for i in prange(len_yr ,nogil=True):
target may not be a Python object as we don't have the GIL
Please let me know where I am going wrong while using prange function. Thanks.
Answer: You can't (at least not using Cython).
Numpy functions operate on Python objects and therefore require the
[GIL](https://en.wikipedia.org/wiki/Global_interpreter_lock), which prevents
multiple native threads from executing in parallel. If you compile your code
using [`cython -a`](http://docs.cython.org/src/quickstart/cythonize.html#determining-where-to-add-types),
you will get an annotated HTML file which shows where Python
C-API calls are being made (and therefore where the GIL can't be released).
Cython is most useful where you have a specific bottleneck in your code that
cannot easily be sped up using vectorization. If your code is already
spending most of its time in numpy function calls then calling those exact
same functions from Cython is not going to result in any significant
performance improvement. In order to see a noticeable difference you would
need to write some or all of your array operations as explicit `for` loops.
However it looks to me as though there are much simpler optimizations that
could be made to your code.
I suggest that you do the following:
1. Profile your original Python code (e.g. using [`line_profiler`](https://github.com/rkern/line_profiler)) to see where the bottlenecks are.
2. Focus your attention on speeding up these bottlenecks in the _single-threaded_ version. You should ask a separate question on SO if you want help with this.
3. If the optimized single-threaded version is still too slow for your needs, parallelize it using [`joblib`](https://pythonhosted.org/joblib/) or [`multiprocessing`](https://docs.python.org/2/library/multiprocessing.html). Parallelization is usually the _last_ tool to reach for once you've already tried everything else you can think of.
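For point 3, a hedged sketch of what the `joblib` route might look like, assuming you first refactor the loop body into a plain function `fit_one_year` (a hypothetical name) that processes a single year:
    from joblib import Parallel, delayed
    # each year is handled in a separate worker process
    results = Parallel(n_jobs=4)(
        delayed(fit_one_year)(y, dat, yr_idx, pix_idx) for y in yr
    )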
|
Environment variable not accessible with Python with sudo
Question: I've got an issue with my python script
First, I defined an environment variable as
export TEST=test
My Python script is quite easy "test.py"
import os
print os.environ['TEST']
So when I run it with
~ $ python test.py
I've got the expected result `test` printed out. However, if I run the script
with
~ $ sudo python test.py
I've got an `KeyError: 'TEST'` error.
What have I missed ?
Answer: Sudo runs with a different environment. To keep the current environment use the
`-E` flag.
sudo -E python test.py
-E, --preserve-env
Indicates to the security policy that the user wishes to preserve their existing environment variables. The security policy may return an error if the user does not have permission to
preserve the environment.
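Depending on your sudoers policy, you can also pass just that one variable on the command line instead of preserving the whole environment:
    sudo TEST=test python test.py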
|
Python 3: Read UTF-8 file containing German umlaut
Question: I searched and found many similar questions and articles but none would allow
me to resolve the issue.
I use Python 3.5.0 (v3.5.0:374f501f4567, Sep 13 2015, 02:27:37) [MSC v.1900 64
bit (AMD64)] on Windows 10.
I have a simple text file which is encoded for Windows in UTF-8; its contents
appear in the debugger output below.

All I want to do is to read the content of this file into a Python string and
display it correctly in, say, the standard console.
Here is a first attempt that fails miserably:
file_name=r'c:\temp\encoding_test.txt'
fh=open(file_name,'r')
f_str=fh.read()
fh.close()
print(f_str)
The print-statement raises an exception:
> 'charmap' codec can't encode character '\u201e' in position 100: character
> maps to undefined
Using a debugger, f_str contains the following:
> 'I would like the following characters to display correctly after reading
> this file into Python:\n\nÄÖÜäöüß\n'
This is already very puzzling to me. Doesn't Python 3 use UTF-8 as a default
everywhere? What other encoding would work? I tried all of the ones Notepad++
supports, none works.
OK, a bit more sophisticated, I tried:
import codecs
file_name=r'c:\temp\encoding_test.txt'
my_encoding='utf-8'
fh=codecs.open(file_name,'r',encoding=my_encoding)
f_str=fh.read().encode(my_encoding)
fh.close()
print(f_str)
This does not raise an exception, at least, but yields
> b'I would like the following characters to display correctly after reading
> this file into
> Python:\r\n\r\n\xc3\x84\xc3\x96\xc3\x9c\xc3\xa4\xc3\xb6\xc3\xbc\xc3\x9f\r\n'
> I
This is a complete mess to me. Can anyone here please help me sort this out?
Answer: You are encoding to bytes after using `codecs.open`; just printing the data
should give you what you want, as you can see when we decode back:
In [31]: s = b'I would like the following characters to display correctly after reading this file into Python:\r\n\r\n\xc3\x84\xc3\x96\xc3\x9c\xc3\xa4\xc3\xb6\xc3\xbc\xc3\x9f\r\n'
In [32]: print(s)
b'I would like the following characters to display correctly after reading this file into Python:\r\n\r\n\xc3\x84\xc3\x96\xc3\x9c\xc3\xa4\xc3\xb6\xc3\xbc\xc3\x9f\r\n'
In [33]: print(s.decode("utf-8"))
I would like the following characters to display correctly after reading this file into Python:
ÄÖÜäöüß
If you are not seeing the correct output then it is your shell encoding that
is the problem. The windows console encoding is not utf-8 so where you are
running the code from and the shell encoding matters.
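As an aside (an alternative, not the code from the question): in Python 3 the built-in `open` takes the encoding directly, so you can skip `codecs` and the extra `.encode()` step entirely:
    file_name = r'c:\temp\encoding_test.txt'
    with open(file_name, 'r', encoding='utf-8') as fh:
        f_str = fh.read()
    print(f_str)
Whether the final `print` succeeds still depends on the console encoding, as noted above.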
|
BeautifulSoup: Extract "img alt" content Web Scraping in Python
Question: I am working in python 3. My objective is extracting differents values of one
table and to put them in differents lists.
The problem is that i can't take the value of "img alt" in a td.
This is my code:
from bs4 import BeautifulSoup
import urllib.request
redditFile = urllib.request.urlopen("http://www.mtggoldfish.com/movers/online/all")
redditHtml = redditFile.read()
redditFile.close()
soup = BeautifulSoup(redditHtml)
all_tables = soup.find_all('table')
right_table = soup.find('table', class_='table table-bordered table-striped table-condensed movers-table')
#create a list
A=[]
B=[]
C=[]
D=[]
for row in right_table.findAll("tr"):
cells = row.findAll('td')
increment = row.findAll('span')
colection = row.findAll('img')
link = row.findAll('a')
if len(cells) == 6:
A.append(cells[0].find(text=True))
B.append(increment[0].find(text=True))
C.append(colection[0])
D.append(link[0].find(text=True))
print(A)
print(B)
print(C)
print(D)
This code gives me this result:
['1', '2', '3', '4', '5', '6', '7', '8', '9', '10']
['+8.40', '+2.47', '+1.35', '+1.28', '+1.14', '+0.99', '+0.94', '+0.91', '+0.90', '+0.75']
[<img alt="ORI" class="sprite-set_symbols_ORI" src="//assets1.mtggoldfish.com/assets/s-407aaa9c9786d606684c6967c47739c5.gif"/>, <img alt="PRM" class="sprite-set_symbols_PRM" src="//assets1.mtggoldfish.com/assets/s-407aaa9c9786d606684c6967c47739c5.gif"/>, <img alt="8ED" class="sprite-set_symbols_8ED" src="//assets1.mtggoldfish.com/assets/s-407aaa9c9786d606684c6967c47739c5.gif"/>, <img alt="EX" class="sprite-set_symbols_EX" src="//assets1.mtggoldfish.com/assets/s-407aaa9c9786d606684c6967c47739c5.gif"/>, <img alt="TSB" class="sprite-set_symbols_TSB" src="//assets1.mtggoldfish.com/assets/s-407aaa9c9786d606684c6967c47739c5.gif"/>, <img alt="WL" class="sprite-set_symbols_WL"
src="//assets1.mtggoldfish.com/assets/s-407aaa9c9786d606684c6967c47739c5.gif"/>,
, , , ] ["Jace, Vryn's Prodigy", "Gaea's Cradle", 'Ensnaring Bridge', 'City of
Traitors', 'Pendelhaven', 'Firestorm', 'Kor Spiritdancer', 'Scalding Tarn',
'Daybreak Coronet', 'Grove of the Burnwillows']
But I need the IMG ALT VALUE in (for exemple the first img alt value is "ORI")
> colection variable
**I don't have any idea that I can do. Guys, could you help me with this,
please?**
Thanks so much in advance
Answer: Once you have an `<img>` node instance, you can get the alt value using this:
alt_tag = img.attrs['alt']
Since you're getting a collection of img elements, you can iterate over it and
retrieve the alt tag for each:
tags = []
collection = soup.findAll("img")
for img in collection:
if 'alt' in img.attrs:
tags.append(img.attrs['alt'])
#do whatever you need to do with your list of alt attributes.
print tags
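As a side note, `img.get('alt')` does the same lookup but returns `None` instead of raising a `KeyError` when the attribute is missing, so you can skip the explicit `'alt' in img.attrs` check if you prefer.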
|
Attribute error while using opencv for face recognition
Question: I am teaching myself how to use openCV by writing a simple face recognition
program I found on youtube. I have installed opencv version 2 as well as numpy
1.8.0. I am using python2.7.
I copied this code exactly how it was done in the video and article links
below, yet I keep getting this error: AttributeError: 'module' object has no
attribute 'cv'. What am I doing wrong?
Here is the code I'm using.
import cv2
import sys
# Get user supplied values
imagePath = sys.argv[1]
cascPath = sys.argv[2]
# Create the haar cascade
faceCascade = cv2.CascadeClassifier(cascPath)
# Read the image
image = cv2.imread(imagePath)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Detect faces in the image
faces = (faceCascade.detectMultiScale(
gray,
scaleFactor=1.1,
minNeighbors=5,
minSize=(30, 30),
flags = cv2.cv.CV_HAAR_SCALE_IMAGE)
)
print "Found {0} faces!".format(len(faces))
# Draw a rectangle around the faces
for (x, y, w, h) in faces:
cv2.rectangle(image, (x, y), (x+w, y+h), (0, 255, 0), 2)
cv2.imshow("Faces found", image)
cv2.waitKey(0)
<https://www.youtube.com/watch?v=IiMIKKOfjqE>
<https://realpython.com/blog/python/face-recognition-with-python/>
Answer: The latest openCV no longer allows importing the legacy `cv` module.
Furthermore the naming convention of the constants generally does away with
the leading "CV_..." and several/many of the names have been altered somewhat.
I think you are running into both problems.
Specifically, the error you are reporting is in regards to this expression in
your code: `cv2.cv.CV_HAAR_SCALE_IMAGE`. This expression is trying to find the
named constant `CV_HAAR_SCALE_IMAGE` within the `cv` submodule of the `cv2`
package you imported. But alas, there is no cv2.cv anymore.
In openCV 3, I believe this constant is now referenced as follows:
`cv2.CASCADE_SCALE_IMAGE`
Also, you may find [this
link](https://github.com/Itseez/opencv/blob/master/samples/python/facedetect.py)
useful. It is to the facedetect.py sample script found in the OpenCV source
code. You can see the usage of the new constant name in this example, and you
may also inspect it for other changes from older sources/tutorials.
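Putting that together, a hedged sketch of the corrected call (assuming OpenCV 3; in your version the constant name may differ):
    faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor=1.1,
        minNeighbors=5,
        minSize=(30, 30),
        flags=cv2.CASCADE_SCALE_IMAGE
    )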
|
ValueError: could not convert string to float: pi
Question: i'm making a program (w/python 2.7) to approximate sin(x) with taylor series,
here's the code:
from math import pi
from math import sin
from math import factorial
x=float(raw_input("Degree(in radian, example: 5*pi/4): "))
n=input("n: ")
Sum=0
for i in range(1,n+1):
Sum=Sum+(pow(-1,(i+1))*pow(x,(2*i-1))/factorial(2*i-1))
error=math.fabs(sin(x)-Sum)
print "Using Taylor series for sin(%s) with n = %d returns %f,with error= %f"(x,n,Sum,error)
(sorry for the `from math import` mess up there, not exactly good with this)
however, when run with x = 5*pi/4, the program returns
> ValueError: invalid literal for float(): 5*pi/4
what am I doing wrong here? I think that python reads x as a string and fails
to float that, but what do I know
any help would be appreciated!
Answer: You are right, python does read x as a string:
y = raw_input("Degree(in radian, example: 5*pi/4): ")
print y => 5*pi/4
You would have to precompute the value in radians and pass it to your program:
    (5 * math.pi)/4 = 3.9269908169872414
This would be the value you give as input to your program.
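If you would rather let the user type an expression like `5*pi/4` directly, a hedged sketch (only one approach; `eval` is risky with untrusted input, so the namespace is restricted to `pi` here):
    from math import pi
    expr = raw_input("Degree(in radian, example: 5*pi/4): ")
    x = float(eval(expr, {"__builtins__": None}, {"pi": pi}))
    print x  # 3.92699081699 for 5*pi/4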
|
how to store default username and password for login system test in sqlalchemy
Question: Trying to make a simple login system in Python using pyramid framework. What I
want to be able to do now is for example if we're working in php we can easily
open phpmyadmin and set a fields with username and password that we can use to
test the login system.
**How can I do this in python pyramid project using sqlalchemy**
This is my model class where I created my table structure
from test2.models.meta import Base
from which model classes will inherit
from sqlalchemy import (
Column,
Integer,
Unicode, #<- will provide Unicode field
UnicodeText, #<- will provide Unicode text field
Text,
)
class Users(Base):
__tablename__ = 'users'
id = Column(Integer, primary_key=True)
name = Column(Text, unique=True, nullable=False,default='admin')
password = Column(Text, nullable=False,default='password')
from pyramid.view import view_config
from test2.models.services.userservice import UserServices
from pyramid.httpexceptions import HTTPFound, HTTPNotFound
from pyramid.security import remember,forget
This is the view config
@view_config(route_name='auth',match_param='action=in',renderer='string',request_method='POST')
@view_config(route_name='auth', match_param='action=out', renderer='string')
def dashboard(request):
username=request.POST.get('username')
if username:
user=UserServices.by_name(username)
if user and user.verify_password(request.POST.get('password')):
return HTTPFound(location=request.route_url('home'))
else:
headers=forget(request)
else:
#return HTTPNotFound
headers=forget(request)
return HTTPFound(location=request.route_url('home'),headers=headers)
**What I want to achieve is a simple login system which tests the username and
password then redirects**
Answer: There are two versions of the official Pyramid tutorial, "SQLAlchemy + URL
Dispatch Wiki Tutorial", which show both the wrong and right way for storing
passwords:
* [Latest (v.1.6)](http://docs.pylonsproject.org/projects/pyramid/en/latest/tutorials/wiki2/index.html)
* [Master (unreleased)](http://docs.pylonsproject.org/projects/pyramid/en/master/tutorials/wiki2/index.html)
In the "latest" branch, you will find [how to set passwords in
cleartext](http://docs.pylonsproject.org/projects/pyramid/en/latest/tutorials/wiki2/authorization.html).
**This is absolutely the wrong way to store passwords.**
In the "master" branch, [we hash
passwords](http://docs.pylonsproject.org/projects/pyramid/en/master/tutorials/wiki2/definingmodels.html).
Always hash passwords with a strong cryptographic algorithm that already
exists, like `bcrypt`, and never attempt to write your own. Also, to be
pedantic, one "hashes" passwords and does not "encrypt" them. Hashing is a
one-way only operation, whereas encrypting can be reversed (decrypting).
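For illustration only, a minimal sketch of that approach with the `bcrypt` package (an assumption; the tutorial's actual code differs):
    import bcrypt
    def hash_password(password):
        # gensalt() embeds a random salt; the result can be stored as-is
        return bcrypt.hashpw(password.encode('utf8'), bcrypt.gensalt())
    def check_password(password, hashed):
        # re-hashes the candidate with the stored salt and compares
        return bcrypt.checkpw(password.encode('utf8'), hashed)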
|
how to limit number of super user in django
Question: In my django project, I want that there will be only one super user and no
more super users can be create by **python manage.py createsuperuser**
Is it possible? If possible how?
Answer: You can write a script to check the number of superusers. Suppose you want 10
superusers; then every time a superuser is created, count whether there are more
than 10 and give an error/success message accordingly.
You can count superusers as follows:
from django.contrib.auth.models import User
from django.http import HttpResponse
user_obj = User.objects.all()
c = 0
for i in user_obj:
if i.is_superuser:
c += 1
if c > 10:
return HttpResponse('Cannot add anymore superusers')
else:
new_user = User.objects.create_user(username = name, password = password)
Of course you will have to make a form to accept the username and password, but I
have given the basic idea.
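As a hedged aside, the Django ORM can also do the counting more directly with a queryset filter:
    superuser_count = User.objects.filter(is_superuser=True).count()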
You can also use python's `threading` library to make things async
|
documenting imports in Python
Question: So imagine I have a Django 1.9 application with many models.
Inside `admin.py` I import my models, but I want to stick to the 80 character
limit. What is the best practice for something like this?
For example
from .models import app_name_student, app_name_teacher, app_name_employment, app_name_grade, app_name_subject, app_name_activity
Is this the best solution or are there better solutions that I'm not aware of?
Typically, I would do this
from .models import app_name_student, app_name_teacher, app_name_employment
from .models import app_name_grade, app_name_subject, app_name_activity
Otherwise, there probably is a framework/standards that I am not aware of...
Answer: Although there is nothing wrong with what you have - you can and should split
up imports.
However, [as per pep8](https://www.python.org/dev/peps/pep-0008/#maximum-line-
length) (the Python style guide) you can use `( )`:
> The preferred way of wrapping long lines is by using Python's implied line
> continuation inside parentheses, brackets and braces. Long lines can be
> broken over multiple lines by wrapping expressions in parentheses. These
> should be used in preference to using a backslash for line continuation.
That would result in code like:
from .models import (
app_name_student,
app_name_teacher,
...
...
)
Although it may raise a few eyebrows, as it's not a common use of `( )`; most
developers prefer multiple import lines.
Avoid the temptation to do `from .models import *`.
|
How to write variables to a temporary file in python
Question: I have been trying to write variables to a temporary text file but I get the
following error:
Traceback (most recent call last):
File "F:/A453/_Codes_/APP CONFIG/Temp.py", line 102, in <module>
ORXQC-IIHL2-6AV55-FIJEV-2""")
File "C:\Program Files (x86)\Python34\lib\tempfile.py", line 399, in func_wrapper
return func(*args, **kwargs)
TypeError: 'str' does not support the buffer interface
My Script is the Following:
import tempfile
TEMPDIR=tempfile.TemporaryFile()
TEMPDIR.write("""B5IB6-ELAZ1-RAPY9-V8X1I-3
OKXVB-Q8B9G-IT9ZF-MI4EQ-2
PLDZ6-769YT-YJSR4-682JT-7
H67L5-9HO4C-4UDSR-BYA14-6
Y73EC-S8OJG-O1APH-N41KM-3
JCYVV-UXNIN-9RGSU-WQ9SD-1
WL9AO-9BLI7-GXXGM-VESEU-2
VDLHT-IXMUY-V4FPU-V3IFZ-1
8CPVN-Z776Z-Y49J3-2C683-5
ORXQC-IIHL2-6AV55-FIJEV-2""")
Activation=input('Please Enter your Product Activation Key: ')
if Activation in TEMPDIR:
print('True')
else:
print('False')
Please help me overcome this error
Thanks
Answer: The default mode of `TemporaryFile` is `"w+b"`, e.g. binary. You have to
provide the mode explicitly for text:
TEMPDIR = tempfile.TemporaryFile(mode="w+")
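Note also that after writing, the file position is at the end, so your membership test would see nothing; rewind first. A minimal sketch:
    TEMPDIR.seek(0)           # rewind to the start before reading back
    contents = TEMPDIR.read()
    print('True' if Activation in contents else 'False')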
|
Python create_user error
Question: I am getting an error `ValueError: Users must have a valid username` when
trying to invoke the create_superuser command from the command line using
Django 1.7.1. I am following a tutorial that creates a custom User model with
the email field as the USERNAME_FIELD. It doesn't prompt me for a username and
I have tried passing the username as an option with `python manage.py
createsuperuser --username=someusername` and `python manage.py createsuperuser
username=someusername`. Snippets of my code are below.
**models.py**
# Create your models here.
from django.contrib.auth.models import AbstractBaseUser
from django.db import models
from django.contrib.auth.models import BaseUserManager
class AccountManager(BaseUserManager):
def create_user(self, email, password=None, **kwargs):
if not email:
raise ValueError('Users must have a valid email address.')
if not kwargs.get('username'):
raise ValueError('Users must have a valid username')
account = self.model(
email=self.normalize_email(email), username=kwargs.get('username')
)
account.set_password(password)
account.save()
return account
def create_superuser(self, email, password, **kwargs):
account = self.create_user(email, password, **kwargs)
account.is_admin = True
account.save()
return account
class Account(AbstractBaseUser):
"""docstring for Account"""
email = models.EmailField(unique=True)
username = models.CharField(max_length=40, unique=True)
first_name = models.CharField(max_length=40, unique=True)
last_name = models.CharField(max_length=40, unique=True)
tagline = models.CharField(max_length=140, unique=True)
is_admin = models.BooleanField(default=False)
created_at = models.DateTimeField(auto_now_add=True)
modified_at = models.DateTimeField(auto_now=True)
objects = AccountManager()
USERNAME_FIELD = 'email'
REQUIRED_FIELD = ['username']
def __unicode__(self):
return self.email
def get_full_name(self):
return ' ' . join([self.first_name, self.last_name])
def get_short_name(self):
return self.first_name
In the settings.py file, I have added authentication to the list of installed
apps and I have defined AUTH_USER_MODEL as authentication.Account
Answer: Set `REQUIRED_FIELDS` instead of `REQUIRED_FIELD`. This is a list of field
names that `createsuperuser` will prompt for. It must contain all fields of
your custom user model that are non-nullable/non-blankable, so in your case
you might want to include `first_name`, `last_name`, and `tagline` as well.
This is [well-documented](https://docs.djangoproject.com/en/1.9/topics/auth/customizing/#django.contrib.auth.models.CustomUser.REQUIRED_FIELDS).
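Applied to your model, that would look something like this (including the other required fields, per the above):
    USERNAME_FIELD = 'email'
    REQUIRED_FIELDS = ['username', 'first_name', 'last_name', 'tagline']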
|
Improving order of operations and testing if a value has a decimal
Question: I'm trying to create a program that prints out pentagonal numbers up to n =
100. A pentagonal number is defined as n(3n-1)/2 for n = 1, 2, 3 so the first
pentagonal numbers would be 1, 5, 12, 22..etc
package mathematical.functions.test;
import java.util.Scanner;
public class MathematicalFunctionsTest {
public static int getPentagonalNumber(int n) {
int i = 0;
double part_one = (3 * n) - 1;
double part_two = part_one / 2;
while (i < 100){
if (part_two == int(part_two))
{
System.out.println(n);
i++;
}
else{
}
}
}
This is just part of my program above. I have two problems right now:
1. I do not know how to properly implement order of operations. What I have done is break up the operation n(3n - 1)/2 into part_one and part_two. Is this good practice?
2. Looking at the nested if in the while loop, I am trying to test if part_two is an integer (i.e. has no decimal). I've tried many things but Java is not letting me. I tried using the % operator, too. But it did not let me aswell.
I've programmed in Python before and could do this easily there, but I think
right now I am struggling with the syntax mostly. I have to say however I was
at best a novice programmer in python, too (but I really enjoyed algorithms
there).
Thanks all...
Answer: You're missing part of your equation (factor of n). You can combine them into
one statement like so:
double num = n * (3 * n - 1) / 2;
Just use brackets the way you normally would. Java understands proper order of
operations.
Try the following to check if the number is an int:
if ((num == Math.floor(num)) && !Double.isInfinite(num)) {
// integral type
}
This works by flooring (truncating the decimal) the number and checking to see
if it's the same as the original number. It also has the added check to ensure
that the number is finite.
|
Server has connection only with one client, Python Socket
Question: First of all, I hope I'm not writing too much. I'm new and I want everything
to be clear for the readers, so no one is left with doubts. I hope someone can
help me.
I have been working on a socket server and client for some weeks. As time
passes I add more features. At the beginning, it was just about having an
_echo server_. Afterwards, I made a server that returns the time, a random number, or
other specific things the client asked for. And as a last thing, I extended the
scripts so the server can accept 2 clients so they can talk to each other.
However, the client couldn't write the messages he wanted because he always
needed to wait until the second client answered. When I got stuck with this
problem, I learned about **Threads**.
So the next feature I wanted to add, and where I've been stuck for about two or
three weeks, is the part where two clients can send messages to each other
continuously, without the waiting they needed before.
I have a script of the server:
import socket
import threading
from datetime import datetime
from random import randint
global num
num = 0
class serverThread(threading.Thread):
def __init__(self):
global num
num = num + 1
self.id = num
threading.Thread.__init__(self)
client, address = server.accept()
self.client = client
self.address = address
print "serverThread init finished-" + str(self.id)
def run(self):
print "r1 num-" + str(self.id)
size = 1024
while True:
#try:
print "r2*************-" + str(self.id)
data = self.client.recv(size)
print "r3..... " + data
print "r4-" + str(self.id)
if data:
print "r5-" + str(self.id)
response = data
self.client.send(response)
print "r6-" + str(self.id)
else:
print "r7-" + str(self.id)
raise Exception('Client disconnected-' + str(self.id) )
#except:
# print "Except"
# self.client.close()
# return
def create(ipHost, port):
server = socket.socket()
server.bind((ipHost, port))
print "The server was created successfully."
return server
def listen(server):
server.listen(5)
c1 = serverThread()
c1.start()
c2 = serverThread()
c2.start()
print "finished both threads created"
while c1.isAlive() and c2.isAlive():
continue
server = create("0.0.0.0", 1729)
listen(server)
As you can see I'm not using `try` and `except` because I don't know well how
to use them.
My second script, the client:
import socket
import threading
class sendThread(threading.Thread):
def __init__(self, ip, port):
threading.Thread.__init__(self)
self.client = socket.socket()
self.port = port
self.ip = ip
self.client.connect((self.ip, self.port))
print "[+] New send thread started for "+ip+":"+str(port) + "...Everything went successful!"
def run(self):
while True:
data = raw_input("Enter command:")
self.client.send("Client sent: " + data)
class receiveThread(threading.Thread):
def __init__(self, ip, port):
threading.Thread.__init__(self)
self.client = socket.socket()
self.ip = ip
self.port = port
self.client.connect((str(self.ip), self.port))
print "[+] New receive thread started for "+ip+":"+str(port) + "...Everything went successful!"
def run(self):
print "Entered run method"
size = 1024
while True:
data = self.client.recv(size)
if data != "" or data:
print "A client sent " + data
def client():
port = 1729
ip = '127.0.0.1'
print "Connection from : "+ip+":"+str(port)
receive = receiveThread(ip, port)
print "b1"
receive.start()
print "b2"
send = sendThread(ip, port)
print "b3"
send.start()
while send.isAlive() and receive.isAlive():
continue
print "-----END of While TRUE------"
print "Client disconnected..."
client()
I thought it would be a good idea to describe my scripts, go step by step in
my code, maybe it helps so it will be more readable.
## The Server script
I create a socket server, and call the `bind` method. I call for the `listen`
method and begin to receive the clients. I create a **thread** for each client
that I will accept (`accept()`) and receive (`recv`) data from. After I create
each client thread I print a message that they were created successfully. When
I start the client threads they wait to receive a message (`recv`)
and send it. If I'm not wrong, I just need the `send` method and don't need
to tell it _whom_ to send to.
## The Client script
The client will have two threads. One for sending messages (as much as you
want) and one for always waiting for messages that another client sent.
## The problem
When I want to run the server before running the two clients it prints
The server was created successfully.
I run 2 clients and both print:
Connection from : 127.0.0.1:1720
[+] New receive thread started for 127.0.0.1:1720...Everything went successful!
b1
Entered run method
b2
[+] New send thread started for 127.0.0.1:1720...Everything went successful!
b3
Enter command:
However, there is a **problem** in the connection between the **second**
client created and the server. I set it up so that when the server receives a message
sent by a client, it prints the message as output on the server. However,
when the first client sends a message it prints it, but not when the second
client sends one.
I even tried to copy the client script and put it in a new file, so the two
clients are from two different files, to maybe find the problem. However, it
didn't help. I tried to run the first file and then the second file, and vice
versa. The second client always had a problem with the server. (By the way, I
would also want to know why the client file doesn't print the message he
himself sends (he will still receive it from the server) but that's a
secondary problem).
I hope I didn't make this too long; I hope someone can help find the
problem in my code.
I would be even happier if you tell me what the problem in my code is, instead
of giving me one that you created or found!
Answer: I think it could be because you have 2 threads trying to accept a connection
at the same time?
You create the first thread, then that thread's init function accepts a
connection with socket.accept(). Then, before you receive a connection, you
instantly create another server thread, which ALSO calls accept() on the
socket. My _guess_ is that this 2nd accept call isn't registered, as 1 thread
is already 'locking' that object.
Instead of creating 2 thread immediately, maybe try only creating a thread
once someone connects to the socket?
client, address = server.accept()
serverThread1 = serverThread(client)
serverThread1.start()
client, address = server.accept()
serverThread2 = serverThread(client)
serverThread2.start()
Where the serverThread class now takes the client socket as a constructor
parameter.
|
Execute Python Script Every Hour
Question: # Goal
I have a script written in python.
1. connect to database
2. insert some fake data
My goal is execute that script every hour.
* * *
# `database.py`
#!/usr/bin/python
import MySQLdb
import random
import requests
import time
db = MySQLdb.connect(host="localhost", # your host, usually localhost
user="root", # your username
passwd="*********", # your password
db="db-local") # name of the data base
# you must create a Cursor object. It will let
# you execute all the queries you need
cur = db.cursor()
# The first line is defined for specified vendor
mac = [ 0x00, 0x24, 0x81,
random.randint(0x00, 0x7f),
random.randint(0x00, 0xff),
random.randint(0x00, 0xff) ]
device_mac = ':'.join(map(lambda x: "%02x" % x, mac))
cpe_mac = '000D6766F2F6'
url = "https://randomuser.me/api/"
data = requests.get(url).json()
firstname = data['results'][0]['user']['name']['first']
lastname = data['results'][0]['user']['name']['last']
email = data['results'][0]['user']['email']
gender = data['results'][0]['user']['gender']
age_range_options = [">15", "15-25", "25-40","40+"]
age_range = random.choice(age_range_options)
ip = '10.10.10.10'
host_name = 'cron.job'
visit_count = 1
created_at = time.strftime('%Y-%m-%d %H:%M:%S')
updated_at = time.strftime('%Y-%m-%d %H:%M:%S')
sql = ('''INSERT INTO visitors (device_mac,cpe_mac,firstname, lastname, email, gender, age_range,ip,host_name,visit_count,created_at, updated_at) VALUES(%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)''')
args = (device_mac,cpe_mac, firstname, lastname, email, gender, age_range,ip, host_name,visit_count, created_at, updated_at)
cur.execute(sql,args)
db.commit()
# for record in records:
# print records
db.close()
* * *
**CronniX**
I did some research, and people suggested a bunch of apps to do that.
So I've tried downloading/installing **CronniX**:
create a task `>` set the schedule `>` and run it.
[](http://i.stack.imgur.com/hz6Jf.png)
It kept hanging on `executing ...`
* * *
**Task Till Dawn**
In addition to that, I've also tried **Task Till Dawn** , and again
create a task `>` set the schedule `>` and run it.
**Result**
[](http://i.stack.imgur.com/yOnVo.png)
Nothing seems to insert into my database, even though it said 35 successful executions.
All it did was pop up my `database.py` inside an `Xcode` window.
* * *
# Terminal
I run `python database.py` and it works perfectly fine; data gets inserted.
* * *
I was thinking it was a permissions issue, but I already did
`sudo chmod a+x database.py`
* * *
What did I miss ? What's the better way to achieve this ?
Any suggestions / hints will be much appreciated !
Answer: You can also just use cron. Open your crontab for editing with `crontab -e`
and add one of these lines (use the full path to your interpreter and script,
since cron runs with a minimal environment).
Every hour:
    0 * * * * /usr/bin/python /path/to/database.py
Every minute:
    * * * * * /usr/bin/python /path/to/database.py
To see your crontabs:
crontab -l
To see further options:
man crontab
|
regEx - Isolate punctuation in Python 3.x
Question: I have been trying to use the regEx module (regular expression) to single out
punctuation, but I just can't figure it out. Does anyone have any useful
information on this?
import re
n = True
while n == True:
name = input("What is your name?\n")
invalid = re.findall(r'[^\s\w]', name)
if invalid:
print("Invalid!")
else:
print("That is a valid name.")
n = False
name = name.lower()
name = name.title()
This is the new, updated code. Still looking for ways to break it. Comment if
you find some way that it'll accept punctuation.
Answer: When you do something like `re.match("[A-Z]", name)` you only check whether
`name` starts with a capital letter. Try this instead:
import re
import string
n = True
while n == True:
name = input("What is your name?\n")
chars = set(string.punctuation+'1234567890')
if any((c in chars) for c in name):
print("Invalid!")
else:
print("That is a valid name!")
n=False
|
Performing PCA on a dataframe with Python with sklearn
Question: I have a sample input file that has many rows of all variants, and columns
represent the number of components.
A01_01 A01_02 A01_03 A01_04 A01_05 A01_06 A01_07 A01_08 A01_09 A01_10 A01_11 A01_12 A01_13 A01_14 A01_15 A01_16 A01_17 A01_18 A01_19 A01_20 A01_21 A01_22 A01_23 A01_24 A01_25 A01_26 A01_27 A01_28 A01_29 A01_30 A01_31 A01_32 A01_33 A01_34 A01_35 A01_36 A01_37 A01_38 A01_39 A01_40 A01_41 A01_42 A01_43 A01_44 A01_45 A01_46 A01_47 A01_48 A01_49 A01_50 A01_51 A01_52 A01_53 A01_54 A01_55 A01_56 A01_57 A01_58 A01_59 A01_60 A01_61 A01_62 A01_63 A01_64 A01_65 A01_66 A01_67 A01_69 A01_70 A01_71
0 1 0 0 1 1 1 1 1 0 0 0 0 0 0 0 1 1 0 1 1 1 0 1 0 1 0 0 1 0 1 0 0 0 0 0 0 1 1 1 0 1 0 0 0 0 1 0 1 1 0 1 1 0 0 1 1 1 1 1 1 1 1 0 0 1 0 0 0 1
0 1 0 0 1 1 1 1 1 0 0 0 0 0 0 0 1 1 0 1 1 1 0 1 0 1 0 0 1 0 1 0 0 0 0 0 0 1 1 1 0 1 0 0 0 0 1 0 1 1 0 1 1 0 0 1 1 1 1 1 1 1 1 0 0 1 0 0 0 1
0 1 0 0 1 1 1 1 1 0 0 0 0 0 0 0 1 1 0 1 1 1 0 1 0 1 0 0 1 0 1 0 0 0 0 0 0 1 1 1 0 1 0 0 0 0 1 0 1 1 0 1 1 0 0 1 1 1 1 1 1 1 1 0 0 1 0 0 0 1
0 1 0 0 1 1 1 1 1 0 0 0 0 0 0 0 1 1 0 1 1 1 0 1 0 1 0 0 1 0 1 0 0 0 0 0 0 1 1 1 0 1 0 0 0 0 1 0 1 1 0 1 1 0 0 1 1 1 1 1 1 1 1 0 0 1 0 0 0 1
0 1 0 0 1 1 1 1 1 0 0 0 0 0 0 0 1 1 0 1 1 1 0 1 0 1 0 0 1 0 1 0 0 0 0 0 0 1 1 1 0 1 0 0 0 0 1 0 1 1 0 1 1 0 0 1 1 1 1 1 1 1 1 0 0 1 0 0 0 1
0 1 0 0 1 1 1 1 1 0 0 0 0 0 0 0 1 1 0 1 1 1 0 1 0 1 0 0 1 0 1 0 0 0 0 0 0 1 1 1 0 1 0 0 0 0 1 0 1 1 0 1 1 0 0 1 1 1 1 1 1 1 1 0 0 1 0 0 0 1
I first import this .txt file as:
#!/usr/bin/env python
from sklearn.decomposition import PCA
inputfile=vcf=open('sample_input_file', 'r')
**I would like to perform principal component analysis and plot the
first two components (meaning the first two columns)**
I am not sure if this is the way to go about it after reading about
sklearn.
PCA for two components:
pca = PCA(n_components=2)
pca.fit(inputfile) #not sure how this read in this file
_Therefore, I need help importing my input file as a dataframe for Python to
perform PCA on it_
Answer: `sklearn` works with numpy arrays.
So you want to use `numpy.loadtxt`:
import numpy
data = numpy.loadtxt('sample_input_file', skiprows=1)
pca = PCA(n_components=2)
pca.fit(data)
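Since you also want to plot the first two components, a hedged sketch continuing from the fit above (assuming matplotlib is available):
    import matplotlib.pyplot as plt
    transformed = pca.transform(data)   # shape: (n_samples, 2)
    plt.scatter(transformed[:, 0], transformed[:, 1])
    plt.xlabel('PC 1')
    plt.ylabel('PC 2')
    plt.show()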
|
Find particular rows in Graphlab or Python
Question: In Graphlab,
I am working with a small subset of movies from a larger list.
movieIds_5K_np = LL_features_SCD_min.to_numpy()[:,0]
ratings_33K_np = ratings_33K.to_numpy()
`movieIds_5K_np` is an array containing my movieIds. `ratings_33K_np` is an
array with FOUR columns whose second column contains movie Ids for ALL
movies.
I need to select only the rows in `ratings_33K_np` whose id exists in
`movieIds_5K_np`.
I tried this approach but it doesn't seem to be working:
ratings_5K_np = ratings_33K_np[ratings_33K_np[:,2]==movieIds_5K_np]
How can I do this in Graphlab or by using some Python libraries? I should say
that originally `ratings_33K` and `movieIds_5K` were imported as SFrame.
Thanks
Answer: Given that you have 2 `sframe`s, you can do a `join`, like so:
ratings_5K = LL_features_SCD_min[['id_column_name']].join(ratings_33K, on='id_column_name', how='left')
As far as I understood from your code, the `LL_features_SCD_min` is the
`sframe` corresponding to your miniset (5K data). So you just take the IDs
that you want and left join them with the entire dataset, thus obtaining a new
`sframe` with only the IDs that you wanted. Just substitute your id column
name and there you go.
For more information regarding how `join` work within `graphlab`, consider
checking the
[documentation](https://dato.com/products/create/docs/generated/graphlab.SFrame.join.html)
on `SFrame`.
Good luck!
|
Django First Tutorial: ImportError: No module named 'polls'
Question: I've set up Django on my Windows 10 PC, and was working through the first
tutorial: <https://docs.djangoproject.com/en/1.9/intro/tutorial01/>
I can't seem to do the first part, because of an import error.
Here's the views.py script:
mysite/polls/views.py
from django.shortcuts import render
from django.http import HttpResponse
# Create your views here.
def index(request):
return HttpResponse("Hello, world. You're at the polls index.")
urls.py in the polls app:
mysite/polls/urls.py
from django.conf.urls import url
import views
urlpatterns = [
url(r'^$', views.index, name='index'),
]
And urls.py in mysite:
mysite/mysite/urls.py
from django.conf.urls import include, url
from django.contrib import admin
urlpatterns = [
url(r'^polls/', include('polls.urls')),
url(r'^admin/', admin.site.urls),
]
After troubleshooting and fixing issues with the first two files, this is the
error I get when I run "python urls.py" on the mysite urls.py file:
[](http://i.stack.imgur.com/eJKU2.png)
I've seen a few stackoverflow posts here regarding this tutorial, and similar
issues. Someone advocated that I add polls to the INSTALLED_APPS section of
settings.py, but this did not work.
/mysite/mysite/settings.py
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'polls',
]
I also read that someone got this to work by adding the 'polls' module to
their pythonpath... But I'm not sure if this is the way to go.
A very similar post was made here: [Django reusable apps tutorial,
ImportError: No module named 'polls'](http://stackoverflow.com/questions/32785902/django-reusable-apps-tutorial-importerror-no-module-named-polls)
but no solution was provided.
Anyone with some insight or input, please let me know what I can do to fix
this, and let me know if you need any more information.
Answer: You seem to be running `urls.py` directly. Shouldn't you be running the `python
manage.py runserver` command to start the app? Are you trying to achieve
something else here?
|
String manipulating in python
Question: I'm trying to make a function that will take a string and remove any blocks of
text from it. For example, turning "(example) somestuff" into "somestuff",
removing any bracketed text from the string. This is a single function for a
large program that is meant to automatically create directories based on the
file's name and move relevant files into said folder. I think I'm running into
an endless loop but I'm lost as to what my problem is.
startbrackets = '[', '('
endbrackets = ']', ')'
digits = range(0,10)
def striptoname(string):
startNum = 0
endNum = 0
finished = True
indexBeginList = []
indexEndList = []
while (finished):
try:
for bracket in startbrackets:
indexBeginList.append(string.find(bracket, 0, len(string)))
except:
print "Search Start Bracket Failed"
wait()
exit()
# Testing Code START
finished = False
for i in indexBeginList:
if i != -1:
finished = True
startNum = i
break
# Testing Code END
try:
for bracket in endbrackets:
indexEndList.append(string.find(bracket, 0, len(string)))
except:
print "Search End Bracket Failed"
wait()
exit()
# Testing Code START
for i in indexEndList:
if i != -1:
endNum = i
break
# Testing Code END
if(finished):
if(startNum == 0):
string = string[:(endNum+1)]
else:
string = string[0:startNum]
for i in digits:
string.replace(str(i),"")
return string
Answer: Here's an approach using
[`re`](https://docs.python.org/3.2/library/re.html#re.sub):
import re
def remove_unwanted(s):
# This will look for a group of any characters inside () or [] and substitute an empty string, "", instead of that entire group.
# The final strip is to eliminate any other empty spaces that can be leftover outside of the parenthesis.
return re.sub("((\(|\[).*(\)|\]))", "", s).strip()
print(remove_unwanted("[some text] abcdef"))
>>> "abcdef"
print(remove_unwanted("(example) somestuff"))
>>> "somestuff"
|
Using Python to search string where a number iterates
Question: I'm trying to write a script that will search a string in google, loop and
iterate the number in the string, and print the top links. I have this:
import urllib.parse
import urllib.request
import json as m_json
for x in range(3, 5):
query = '"Amazon Best Sellers Rank: #' + str(x) + ' in Kitchen & Dining": Amazon.com'
query = urllib.parse.urlencode ( { 'q' : query } )
response = urllib.request.urlopen ( 'http://ajax.googleapis.com/ajax/services/search/web?v=1.0&' + query ).read().decode()
json = m_json.loads ( response )
results = json [ 'responseData' ] [ 'results' ]
for result in results:
title = result['title']
url = result['url'] # was URL in the original and that threw a name error exception
print ( title + '; ' + url )
I'm getting this error: "TypeError: 'NoneType' object is not subscriptable" on
line 10, results = ...
Answer: The **same question** was posted by you two months ago, and now you are
posting it again: [link to your question](http://stackoverflow.com/questions/35023259/python-script-that-runs-an-iterating-google-search-and-prints-top-results-and-li).
And by the way, the answer to your question has also already been provided
on Stack Overflow.
But again I am posting the code for you. Using your code I am
getting the desired result in Python 2.7:
import urllib
import json as m_json
for x in range(3, 5):
query = 'x mile run'
query = urllib.urlencode ( { 'q' : query } )
response = urllib.urlopen ( 'http://ajax.googleapis.com/ajax/services/search/web?v=1.0&' + query ).read()
json = m_json.loads ( response )
results = json [ 'responseData' ] [ 'results' ]
for result in results:
title = result['title']
url = result['url']
print ( title + '; ' + url )
|
Python: Looping through files in a different directory and scanning data
Question: I am having a hard time looping through files in a directory that is different
from the directory where the script was written. Ideally I would also want my
script to go through all files that start with sasa. There are a
couple of files in the folder such as sasa.1, sasa.2 etc... as well as other
files such as doc1.pdf, doc2.pdf
**I use Python Version 2.7 with windows Powershell**
**Locations of Everything**
1) Python Script Location ex: `C:Users\user\python_project`
2) Main_Directory ex: `C:Users\user\Desktop\Data`
3) Current_Working_Directory ex: `C:Users\user\python_project`
Main directory contains 100 folders (folder A, B, C, D etc..) Each of these
folders contains many files including the sasa files of interest.
**Attempts at running script**
For 1 file the following works:
Script is run the following way: `python script1.py`
file_path = 'C:Users\user\Desktop\Data\A\sasa.1'
def writing_function(file_path):
with open(file_path) as file_object:
lines = file_object.readlines()
for line in lines:
print(lines)
writing_function(file_path)
However, the following does not work
Script is run the following way: `python script1.py A sasa.1`
import os
import sys
from os.path import join
dr = sys.argv[1]
file_name = sys.argv[2]
file_path = 'C:Users\user\Desktop\Data'
new_file_path = os.path.join(file_path, dr)
new_file_path2 = os.path.join(new_file_path, file_name)
def writing_function(paths):
with open(paths) as file_object:
lines = file_object.readlines()
for line in lines:
print(line)
writing_function(new_file_path2)
I get the following error:
`with open(paths) as file_object:`
`IO Error: [Errno 2] No such file or directory:`
`'C:Users\\user\\Desktop\\A\\sasa.1'`
Please note right now I am just working on one file, I want to be able to loop
through all of the sasa files in the folder.
Answer: It can be something along the lines of:
import os
from os.path import join
def function_exec(filename):
    # code to execute on each file
    pass
for root, dirs, files in os.walk('path/to/your/files'): # from your argv[1]
    for f in files:
        filename = join(root, f)
        function_exec(filename)
Avoid using the variable name `dir`; it shadows the Python builtin `dir()`. Try `print(dir(os))`.
dir_ = argv[1] # is preferable
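To restrict this to the `sasa.*` files from the question, a hedged tweak (the `Data` path is the example path from the question):
    for root, dirs, files in os.walk(r'C:\Users\user\Desktop\Data'):
        for f in files:
            if f.startswith('sasa'):
                function_exec(join(root, f))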
|
removing json items from array if value is duplicate python
Question: I am incredibly new to python.
I have an array full of json objects. Some of the json objects contain
duplicated values. The array looks like this:
[{"id":"1"."name":"Paul","age":"21"},
{"id":"2","name":"Peter","age":"22"},
{"id":"3","name":"Paul","age":"23"}]
What I am trying to do is to remove an item if the `name` is the same as
another json object, and leave the first one in the array.
So in this case I should be left with
[{"id":"1"."name":"Paul","age":"21"},
{"id":"2","name":"Peter","age":"22"}]
The code I currently have can be seen below and is largely [based on this
answer](http://stackoverflow.com/questions/17076345/remove-duplicates-from-json-data):
import json
ds = json.loads('python.json') #this file contains the json
unique_stuff = { each['name'] : each for each in ds }.values()
all_ids = [ each['name'] for each in ds ]
unique_stuff = [ ds[ all_ids.index(text) ] for text in set(texts) ]
print unique_stuff
I am not even sure that this line is working `ds = json.loads('python.json')
#this file contains the json` as when I try and `print ds` nothing shows up in
the console.
Answer: If you need to keep the first instance of `"Paul"` in your data, a dictionary
comprehension gives you the opposite result (it keeps the last occurrence).
A simple solution could be as follows:
new = []
seen = set()
for record in old:
name = record['name']
if name not in seen:
seen.add(name)
new.append(record)
del seen
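As an aside, the reason `print ds` shows nothing useful is that `json.loads` parses a JSON _string_, not a filename; to read from a file use `json.load`:
    import json
    with open('python.json') as f:
        old = json.load(f)   # the list of records iterated above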
|
Find a String in a .txt file
Question: I want to find a specific string in different .txt files which I can choose from
my computer's files. This code actually works:
string = "example"
fichier = open(file_path,"r")
for line in fichier:
if string in line:
print string
fichier.close()
But I have to write the path myself, and I get an error when I add these code lines in
order to select the file without writing the whole file path myself:
from Tkinter import Tk
from tkFileDialog import askopenfile
import os
Tk().withdraw()
file = askopenfile()
file_path = os.path.realpath(file)
string = "example"
fichier = open(file_path,"r")
for line in fichier:
if string in line:
print string
fichier.close()"
Here is the traceback
Traceback (most recent call last):
File "C:\Users\WinPython-64bit-2.7.10.3\python-2.7.10.amd64\Lib\sip-4.18.dev1603251537\fichier txt.py", line 13, in <module>
file_path = os.path.realpath(file)
File "C:\Users\WinPython-64bit-2.7.10.3\python-2.7.10.amd64\lib\ntpath.py", line 488, in abspath
path = _getfullpathname(path)
TypeError: coercing to Unicode: need string or buffer, file found
I can't see what's wrong, because os.path.realpath() gives a path, right?
I guess my problem comes from askopenfile; I can't find what kind of data
it returns. I would appreciate it if you could give me a hand, please.
Answer: `askopenfile()` does not return a file _name_ ; it returns a file _object_.
That means that you don't need to do the opening yourself. You can just do
this:
from Tkinter import Tk
from tkFileDialog import askopenfile
import os
Tk().withdraw()
fichier = askopenfile()
string = "example"
for line in fichier:
if string in line:
print string
fichier.close()
You shouldn't be using `file` as a variable name anyway, because in Python2 it
shadows the built-in type.
|
Boost.Python return a list of noncopyable objects
Question: I have a type `X` that is noncopyable and I want to expose a function that
creates a `list` of them:
#include <boost/python.hpp>
namespace py = boost::python;
struct X {
X(int i) : i(i) { }
X(const X& ) = delete;
X& operator=(X const&) = delete;
int i;
friend std::ostream& operator<<(std::ostream& os, X const& x) {
return os << "X(" << x.i << ")";
}
};
py::list get_xs(int n) {
py::list xs;
for (int i = 0; i < n; ++i) {
xs.append(X{i});
}
return xs;
}
BOOST_PYTHON_MODULE(Foo)
{
py::class_<X, boost::noncopyable>("X", py::init<int>())
.def(str(py::self))
.def(repr(py::self))
;
py::def("get_xs", get_xs);
}
This compiles fine, yet when I try to use it, it gives me the dreaded:
>>> import Foo
>>> Foo.get_xs(10)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: No to_python (by-value) converter found for C++ type: X
What does that error actually mean? How do I fix it?
Answer: `noncopyable` looks to be the problem. When `X` is copyable then everything is
fine.
If `X` must be `noncopyable` then `boost::shared_ptr` can be used:
py::list get_xs(int n) {
py::list xs;
for (int i = 0; i < n; ++i) {
xs.append(boost::shared_ptr<X>(new X(i)));
}
return xs;
}
....
BOOST_PYTHON_MODULE(Foo)
{
py::class_<X, boost::shared_ptr<X>, boost::noncopyable>("X", py::init<int>())
...
...
py::register_ptr_to_python<boost::shared_ptr<X>>();
}
|
I don't understand why open() can't open the file correctly in Python 2.x
Question: Here is my code:
from os.path import exists
def confirm(file_name):
while not exists(file_name):
print "File doesn't exist."
file_name = raw_input("File name: ")
from_file = raw_input("copy from: ")
confirm(from_file)
to_file = raw_input("copy to: ")
confirm(to_file)
with open(to_file, 'w')as f:
f.write(open(from_file).read())
Output in Terminal
copy from: asd.txt
File doesn't exist.
File name: test.txt
copy to: dsa.txt
File doesn't exist.
File name: test.py
Traceback (most recent call last):
File "ex17.py", line 17, in <module>
f.write(open(from_file).read())
IOError: [Errno 2] No such file or directory: 'ad.txt'
Why does it open the incorrect file?
How to fix it?
And when I do this:
from_file = raw_input("copy from: ")
while not exists(from_file):
print "File doesn't exist."
from_file = raw_input("File name: ")
It Works well.
I want to define a function to avoid repeating code, but I ran into a problem.
Answer: The changes to `file_name` you make inside of `confirm` do not affect the
parameter you passed to that function. You should return the final value of
`file_name` in `confirm`, and have the caller assign that to the appropriate
variable.
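A minimal sketch of that change (the same loop as yours, just returning the validated name):
    def confirm(file_name):
        while not exists(file_name):
            print "File doesn't exist."
            file_name = raw_input("File name: ")
        return file_name
    from_file = confirm(raw_input("copy from: "))
    to_file = confirm(raw_input("copy to: "))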
|