Scraping values from a webpage table
Question: I want to create a python dictionary of color names to background color from
this [color dictionary](http://people.csail.mit.edu/jaffer/Color/M.htm).
What is the best way to access the color name strings and the background color
hex values? I want to create a mapping for color name --> hex values, where 1
color name maps to 1 or more hex values.
The following is my code:
import requests
from bs4 import BeautifulSoup
page = requests.get('http://people.csail.mit.edu/jaffer/Color/M.htm')
soup = BeautifulSoup(page.text)
I'm not sure how to specify what to scrape from the table. I've tried the
following to get a format that's useful:
soup.td
<td nowrap="" width="175*">abbey</td>
soup.get_text()
"(M)\n td { padding: 0 10px; } \n\n(M) Dictionary of Color Maerz and Paul, Dictionary of Color, 1st ed. \n\nabbey207\nabsinthe [green] 120\nabsinthe yellow105\nacacia101102\nacademy blue173\nacajou43\nacanthe95\nacier109\nackermann's green137\naconite violet223....
.............\nyolk yellow84\nyosemite76\nyucatan5474\nyucca150\nyu chi146\nyvette violet228\n\nzaffre blue 179182\nzanzibar47\nzedoary wash71\nzenith [blue] 199203\nzephyr78\nzinc233265\nzinc green136\nzinc orange5053\nzinc yellow84\nzinnia15\nzulu47\nzuni brown58\n\n"
soup.select('tr td')
[...
<td nowrap="" width="175*">burnt russet</td>,
<td style="background-color:#722F37; color:#FFF" title="16">16</td>,
<td style="background-color:#79443B; color:#FFF" title="43">43
</td>,
<td nowrap="" width="175*">burnt sienna</td>,
<td style="background-color:#9E4732; color:#FFF" title="38">38
</td>,
...]
EDIT: I want to scrape the strings in the td elements e.g "burnt russet" as
the color and the string (hex component) in the following td elements where
the "style" attribute is specified as the background color.
I want the dictionary to look as follows:
color_map = {'burnt russet': ['#722F37', '#79443B'], 'burnt sienna': ['#9E4732']}
Answer: Just look for the tds with nowrap, extract the text, and get the following
sibling tds' _style_ attribute:
import requests
from bs4 import BeautifulSoup
page = requests.get('http://people.csail.mit.edu/jaffer/Color/M.htm')
soup = BeautifulSoup(page.content)
for td in soup.select("td[nowrap]"):
    print(td.text, [sib["style"] for sib in td.find_next_siblings("td")])
A snippet of the output:
(u'abbey', ['background-color:#604E97; color:#FFF'])
(u'absinthe [green] ', ['background-color:#8A9A5B'])
(u'absinthe yellow', ['background-color:#B9B57D'])
(u'acacia', ['background-color:#EAE679', 'background-color:#B9B459'])
(u'academy blue', ['background-color:#367588'])
(u'acajou', ['background-color:#79443B; color:#FFF'])
(u'acanthe', ['background-color:#6C541E; color:#FFF'])
(u'acier', ['background-color:#8C8767'])
(u"ackermann's green", ['background-color:#355E3B; color:#FFF'])
(u'aconite violet', ['background-color:#86608E'])
(u'acorn', ['background-color:#7E6D5A; color:#FFF'])
(u'adamia', ['background-color:#563C5C; color:#FFF'])
(u'adelaide', ['background-color:#32174D; color:#FFF'])
If you just want the hex values you can split the style text on `"; "` then
split the sub strings on `:`:
page = requests.get('http://people.csail.mit.edu/jaffer/Color/M.htm')
soup = BeautifulSoup(page.content)
d = {}
for td in soup.select("td[nowrap]"):
    cols = td.find_next_siblings("td")
    d[td.text] = [st.split(":", 1)[-1] for sib in cols for st in sib["style"].split("; ")]
print(d)
That will give you a dict like:
{u'moonlight ': ['#FAD6A5', '#BFB8A5'], u'honey bird': ['#239EBA'], u'monte carlo ': ['#007A74', '#317873'],...............
You will need to use either `lxml` or `html5lib` as the parser to handle the
broken html. I presume you are using one of them as if not you would not get
the output you do.
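If you want exactly the `color_map` from your edit (hex values only, grouped per colour name), a minimal sketch along the same lines (assuming `lxml` is installed and that only the `background-color` part of each style matters) would be:
import requests
from bs4 import BeautifulSoup

page = requests.get('http://people.csail.mit.edu/jaffer/Color/M.htm')
soup = BeautifulSoup(page.content, "lxml")

color_map = {}
for td in soup.select("td[nowrap]"):
    hexes = []
    for sib in td.find_next_siblings("td"):
        # a style looks like "background-color:#722F37; color:#FFF"
        for declaration in sib["style"].split("; "):
            prop, _, value = declaration.partition(":")
            if prop == "background-color":
                hexes.append(value)
    color_map[td.text.strip()] = hexes

print(color_map.get("burnt russet"))  # e.g. ['#722F37', '#79443B']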
|
How do I pull a recurring key from a JSON?
Question: I'm new to python (and coding in general), I've gotten this far but I'm having
trouble. I'm querying against a web service that returns a json file with
information on every employee. I would like to pull just a couple of
attributes for each employee, but I'm having some trouble.
I have this script so far:
import json
import urllib2
req = urllib2.Request('http://server.company.com/api')
response = urllib2.urlopen(req)
the_page = response.read()
j = json.loads(the_page)
print j[1]['name']
The JSON that it returns looks like this...
{
"name": bill jones,
"address": "123 something st",
"city": "somewhere",
"state": "somestate",
"zip": "12345",
"phone_number": "800-555-1234",
},
{
"name": jane doe,
"address": "456 another ave",
"city": "metropolis",
"state": "ny",
"zip": "10001",
"phone_number": "555-555-5554",
},
You can see that with the script I can return the name of employee in index 1.
But I would like to have something more along the lines of: `print j[**0
through len(j)**]['name']` so it will print out the name (and preferably the
phone number too) of every employee in the json list.
I'm fairly sure I'm approaching something wrong, but I need some feedback and
direction.
Answer: Your JSON is the `list` of `dict` objects. By doing `j[1]`, you are accessing
the item in the list at index `1`. In order to get all the records, you need
to iterate all the elements of the list as:
for item in j:
    print item['name']
where `j` is the result of `j = json.loads(the_page)`, as mentioned in your
question.
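If you also want the phone number, the same loop works, since each `item` is a dict and you can pull whichever keys you need (a small sketch, assuming every record has a `phone_number`):
for item in j:
    print item['name'], item['phone_number']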
|
python: why am I not exiting while loop?
Question: Can't find anything applicable to the problem I have here. If there is, please
point me toward it. Anyway, as a new one to python, I can't understand why my
output here keeps repeating indefinitely.
from random import randint
dollars = int(input("How many dollars do you have? "))
maxdollars = dollars
countatmax = 0
count = 0
while dollars > 0:
    count += 1
    diceone = randint(0, 6)
    dicetwo = randint(0, 6)
    if diceone + dicetwo == 7:
        dollars + 4
    else:
        dollars - 1
    if dollars != maxdollars:
        mostdollars = dollars
        countatmax = count
    print "You are broke after " + str(count) + " rolls.\n" + \
        "You should have quit after " + str(countatmax) + \
        " rolls when you had $" + str(maxdollars) + "."
Answer: Yes, like Karin said, you are not changing the value of dollars with these
statements:
if diceone + dicetwo == 7:
    dollars + 4
else:
    dollars - 1
Since the amount of "dollars" never gets changed, your while loop will loop
forever (dollars will always be greater than zero), assuming that the user
inputs a positive value.
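A sketch of the fix, assuming you want to update `dollars` in place, is to use augmented assignment (note also that `mostdollars` versus `maxdollars` in your tracking code is a separate typo):
if diceone + dicetwo == 7:
    dollars += 4   # same as dollars = dollars + 4
else:
    dollars -= 1   # same as dollars = dollars - 1
if dollars > maxdollars:
    maxdollars = dollars
    countatmax = count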
|
How to add dynamically C function in embedded Python
Question: I declare a C function as Python prototype
static PyObject* MyFunction(PyObject* self, PyObject* args)
{
return Py_None ;
}
Now I want to add it into a dynamically loaded module
PyObject *pymod = PyImport_ImportModule("mymodule");
PyObject_SetAttrString( pymod, "myfunction", ? );
How to convert C function into PyObject callable ?
Answer: You need to construct a new `PyCFunctionObject` object from the `MyFunction`.
Usually this is done under the hood using the module initialization code, but
as you're now doing it the opposite way, you need to construct the
`PyCFunctionObject` yourself, using the undocumented `PyCFunction_New` or
`PyCFunction_NewEx`, and a suitable
[`PyMethodDef`](https://docs.python.org/3/c-api/structures.html#c.PyMethodDef):
static PyMethodDef myfunction_def = {
    "myfunction",
    MyFunction,
    METH_VARARGS,
    "the doc string for myfunction"
};
...
// Use PyUnicode_FromString in Python 3.
PyObject* module_name = PyString_FromString("mymodule");
if (module_name == NULL) {
    // error exit!
}
// this is adapted from code in Objects/moduleobject.c, for Python 3.3+ and perhaps 2.7
PyObject *func = PyCFunction_NewEx(&myfunction_def, pymod, module_name);
if (func == NULL) {
    // error exit!
}
if (PyObject_SetAttrString(pymod, myfunction_def.ml_name, func) != 0) {
    Py_DECREF(func);
    // error exit!
}
Py_DECREF(func);
Again, this is not the preferred way to do things; usually a C extension
creates concrete module objects (such as `_mymodule`) and `mymodule.py` would
import `_mymodule` and put things into proper places.
|
Parsing NBA reference with python beautiful soup
Question: So I'm trying to scrape out the miscellaneous stats table from this site
<http://www.basketball-reference.com/leagues/NBA_2016.html> using python and
beautiful soup. This is the basic code so far; I just want to see if it is even
reading the table, but when I do `print table` I just get None.
from bs4 import BeautifulSoup
import requests
import pandas as pd
url = "http://www.basketball-reference.com/leagues/NBA_2016.html"
r = requests.get(url)
data = r.text
soup = BeautifulSoup(data)
table = soup.find('table', id='misc_stats')
print table
When I inspect the html on the webpage itself, the table that I want appears
with this symbol in front `<!--` and the html text is green for the portion.
What can I do?
Answer: `<!--` is the start of a comment and `-->` is the end in html so just remove
the comments before you parse it:
from bs4 import BeautifulSoup
import requests
import re

comm = re.compile("<!--|-->")
html = requests.get("http://www.basketball-reference.com/leagues/NBA_2016.html").content
cleaned_soup = BeautifulSoup(comm.sub("", html))
tableStats = cleaned_soup.find('table', {'id':'team_stats'})
print(tableStats)
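An alternative sketch that avoids the regex: BeautifulSoup exposes the commented-out markup as `Comment` nodes, so you can locate the comment that contains the table and parse just that part (this assumes `lxml` is installed and that `misc_stats` from your question is the table you want):
import requests
from bs4 import BeautifulSoup, Comment

html = requests.get("http://www.basketball-reference.com/leagues/NBA_2016.html").content
soup = BeautifulSoup(html, "lxml")

table = None
for comment in soup.find_all(text=lambda t: isinstance(t, Comment)):
    if 'id="misc_stats"' in comment:
        # parse the comment's contents as their own small document
        table = BeautifulSoup(comment, "lxml").find("table", id="misc_stats")
        break
print(table)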
|
Writing pandas DataFrame to JSON in unicode
Question: I'm trying to write a pandas DataFrame containing unicode to json, but the
built in `.to_json` function escapes the characters. How do I fix this?
Some sample code:
import pandas as pd
df=pd.DataFrame([['τ','a',1],['π','b',2]])
df.to_json('df.json')
gives:
{"0":{"0":"\u03c4","1":"\u03c0"},"1":{"0":"a","1":"b"},"2":{"0":1,"1":2}}
instead of what I want:
{"0":{"0":"τ","1":"π"},"1":{"0":"a","1":"b"},"2":{"0":1,"1":2}}
Adding the `force_ascii=False` argument gives me the following error:
`UnicodeEncodeError: 'charmap' codec can't encode character '\u03c4' in
position 11: character maps to <undefined>`
I'm using WinPython 3.4.4.2 64bit with pandas 0.18.0
Answer: Opening a file with the encoding set to utf-8, and then passing that file to
the `.to_json` function fixes the problem:
with open('df.json', 'w', encoding='utf-8') as file:
    df.to_json(file, force_ascii=False)
gives the correct:
{"0":{"0":"τ","1":"π"},"1":{"0":"a","1":"b"},"2":{"0":1,"1":2}}
Note: it does still require the `force_ascii=False` argument.
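An equivalent sketch, if you would rather keep the file handling explicit (assuming Python 3 as in your WinPython setup): `to_json()` with no path returns the JSON string, which you can then write out with the encoding you want.
json_str = df.to_json(force_ascii=False)
with open('df.json', 'w', encoding='utf-8') as f:
    f.write(json_str)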
|
How to create a prescription pill count like pain management facilities use?
Question: I don't understand why this code won't work. I want to create some code to
help me know exactly how many pills need to be taken back to pain management.
If you don't take the right amount back, then you get kicked out of pain
management. So I'm just wanting to create a script that will help me so I
don't take too few back.
As anyone can tell, I don't have any experience with Python. I just installed
it and tried using the documentation to aide in completing what I thought
would be a trivial script.
Traceback (most recent call last):
File "C:\Users\howell\AppData\Local\Programs\Python\Python35-32\Scripts\pill_count.py", line 17, in <module>
date1 = datetime.date(datetime.strptime((str(year) + "-" + str(starting_Month) + "-" + str(starting_Month) + "-" + str(starting_Day)), '%Y-%m-%d'))
File "C:\Users\howell\AppData\Local\Programs\Python\Python35-32\lib\_strptime.py", line 510, in _strptime_datetime
tt, fraction = _strptime(data_string, format)
File "C:\Users\howell\AppData\Local\Programs\Python\Python35-32\lib\_strptime.py", line 346, in _strptime
data_string[found.end():])
ValueError: unconverted data remains: -1
How many pills did you have left? 12
How many pills did you get? 90
How many pills do you take? 6
Starting Month, Type 1 for January, 2 for February, etc.9
Starting Day; Type 1-311
Ending Month, Type 1 for January, 2 for February, etc.10
Starting Day; Type 1-3131
Taking 6 a day, you should have 102 left.
# dates are easily constructed and formatted
#from datetime import datetime, timedelta
from datetime import datetime
year = 2016
left_over_pill_count = input('How many pills did you have left? ')
new_prescription = input('How many pills did you get? ')
total_pills = int(left_over_pill_count) + int(new_prescription)
daily_pill_intake = input('How many pills do you take? ')
starting_Month = input('Starting Month, Type 1 for January, 2 for February, etc.')
starting_Day = input('Starting Day; Type 1-31')
ending_Month = input('Ending Month, Type 1 for January, 2 for February, etc.')
ending_Day = input('Starting Day; Type 1-31')
# count number of days until next doctors appointment
date1 = datetime.date(datetime.strptime((str(year) + "-" + str(starting_Month) + "-" + str(starting_Day)), '%Y-%m-%d'))
date2 = datetime.date(datetime.strptime((str(year) + "-" + str(ending_Month) + "-" + str(ending_Day)), '%Y-%m-%d'))
#date_count = (date2 - date1)
#total_days = date_count
# fmt = '%Y-%m-%d %H:%M:%S'
#fmt = '%d'
#d1 = datetime.strptime(date1, fmt)
#d2 = datetime.strptime(date2, fmt)
# print (d2-d1).days * 24 * 60
for i in range(1, (date1-date2).days):
    total_pills = total_pills - int(daily_pill_intake)
    print(total_pills)
print("Taking " + str(daily_pill_intake) + " a day, you should have " + str(total_pills) + " left.")
Answer: In this line:
date1 = datetime.date(datetime.strptime((str(year) + "-" + str(starting_Month) + "-" + str(starting_Month) + "-" + str(starting_Day)), '%Y-%m-%d'))
You're telling `datetime.strptime` to parse a string of the form "year-month-
day", but the string you give it is of the form "year-month-month-day"; you
included the month twice! The same problem applies to the next line as well.
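A sketch of the corrected construction, assuming the same variable names as in your script (and note the day count needs `date2 - date1`, which is positive, rather than `date1 - date2`):
date1 = datetime.strptime("{}-{}-{}".format(year, starting_Month, starting_Day), '%Y-%m-%d').date()
date2 = datetime.strptime("{}-{}-{}".format(year, ending_Month, ending_Day), '%Y-%m-%d').date()
# subtract the daily intake once for each day between the two appointments
for i in range((date2 - date1).days):
    total_pills -= int(daily_pill_intake)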
|
Google App Engine import error, for django.urls
Question: I'm trying to learn Django, so I completed their multi-part tutorial (Python
2.7) and ran it locally. I got it working fine on my PC.
I need the following import, in a views.py file:
from django.urls import reverse
When I upload it to GAE, it gives me the following error: Exception Type:
ImportError Exception Value: No module named urls
Is this module unavailable for the GAE, or am I doing something wrong? (By the
way, I need this import so I can use the "reverse" method, after a user
submission is received in the polls app, like:
HttpResponseRedirect(reverse('polls:results', args=(question.id,))) )
Answer: `reverse()` was moved from `django.core.urlresolvers` to `django.urls` in
Django 1.10. The error suggests that you are using an older version of Django.
You need to import `reverse()` from the old location:
from django.core.urlresolvers import reverse
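If you want the same code to run under both the tutorial's Django and the older version on GAE, a common sketch is an import-time fallback:
try:
    from django.urls import reverse  # Django 1.10+
except ImportError:
    from django.core.urlresolvers import reverse  # older Django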
|
Reverse the list while creation
Question: I have this code:
def iterate_through_list_1(arr):
    lala = None
    for i in range(len(arr))[::-1]:
        lala = i

def iterate_through_list_2(arr):
    lala = None
    for i in range(len(arr), 0, -1):
        lala = i
Logically, iterating through the list created by `range()` in reverse order should
be more efficient than creating a list with `range()` and reversing it
afterwards with `[::-1]`. But _cProfile_ tells me that the
`iterate_through_list_1` function works faster.
I used Python 3. Here you can see the profiling output on two identical
arrays with 100000000 elements in them.
ncalls tottime percall cumtime percall filename:lineno(function)
1 5.029 5.029 5.029 5.029 bs.py:24(iterate_throgh_list_2)
1 4.842 4.842 4.842 4.842 bs.py:19(iterate_throgh_list_1)
What happens underneath Python's slicing during list creation?
Answer: A well-designed test shows that the first function is the slowest on Python 2.x (mostly
because two lists have to be created: first one as an increasing range, and a
second one as a reversed copy of the first). I also included a demo using `reversed`.
from __future__ import print_function
import sys
import timeit
def iterate_through_list_1(arr):
    lala = None
    for i in range(len(arr))[::-1]:
        lala = i

def iterate_through_list_2(arr):
    lala = None
    for i in range(len(arr), 0, -1):
        lala = i

def correct_iterate_reversed(arr):
    lala = None
    for obj in reversed(arr):
        lala = obj

print(sys.version)
print('iterate_through_list_1', timeit.timeit('iterate_through_list_1(seq)',
    setup='from __main__ import iterate_through_list_1\nseq = range(0, 10000)',
    number=10000))
print('iterate_through_list_2', timeit.timeit('iterate_through_list_2(seq)',
    setup='from __main__ import iterate_through_list_2\nseq = range(0, 10000)',
    number=10000))
print('correct_iterate_reversed', timeit.timeit('correct_iterate_reversed(seq)',
    setup='from __main__ import correct_iterate_reversed\nseq = range(0, 10000)',
    number=10000))
Results:
2.7.12 (default, Jun 29 2016, 14:05:02)
[GCC 4.2.1 Compatible Apple LLVM 7.3.0 (clang-703.0.31)]
iterate_through_list_1 3.87919592857
iterate_through_list_2 3.38339591026
correct_iterate_reversed 2.78083491325
The differences in 3.x are all negligible, because in each case the objects
being iterated over are lazy.
3.5.2 (default, Jul 28 2016, 21:28:00)
[GCC 4.2.1 Compatible Apple LLVM 7.3.0 (clang-703.0.31)]
iterate_through_list_1 2.986786328998278
iterate_through_list_2 2.9836046030031866
correct_iterate_reversed 2.9411962590020266
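To illustrate that laziness claim on Python 3 (a quick sketch, not part of the benchmark): slicing a `range` produces another `range` and `reversed()` returns an iterator, so no list is ever materialized.
r = range(10**12)                  # no list is built
print(r[::-1])                     # range(999999999999, -1, -1)
print(next(iter(reversed(r))))     # 999999999999, produced lazily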
|
Nesting mpi calls with mpi4py
Question: I am trying to use mpi4py to call a second instance of an mpi executable.
I am getting the error:
Open MPI does not support recursive calls of mpirun
But I was under the impression that is exactly what Spawn is supposed to be
able to handle - i.e. setting up a new communicator within which another mpi
command could be launched.
The test code:
parent.py:
#!/usr/bin/env python
from mpi4py import MPI
import numpy
import sys
rank = MPI.COMM_WORLD.Get_rank()
new_comm = MPI.COMM_WORLD.Split(color=rank, key=rank)
print(new_comm.Get_rank())
new_comm.Spawn(sys.executable,
args=['test.py'],
maxprocs=4)
which calls test.py:
#!/usr/bin/env python
from mpi4py import MPI
import numpy
import os
import sys
comm = MPI.Comm.Get_parent()
rank = comm.Get_rank()
cwd=os.getcwd()
directory=os.path.join(cwd,str(rank))
os.chdir(directory)
os.system('{}'.format('mpirun -np 4 SOME_MPI_EXECUTABLE_HERE'))
print("Finished in "+directory)
os.chdir(cwd)
comm.Disconnect()
I'm running with:
mpirun --oversubscribe -np 1 parent.py
Using openmpi 2.0.0 with gcc, and python/3.4.2
Anyone have any bright ideas as to why this is happening.....
Thanks!
Answer: The following code seems to perform the way I wanted.
#!/usr/bin/env python
from mpi4py import MPI
import numpy
import sys
import os
rank = MPI.COMM_WORLD.Get_rank()
new_comm = MPI.COMM_WORLD.Split(color=rank, key=rank)
print(new_comm.Get_rank())
cwd=os.getcwd()
os.mkdir(str(rank))
directory=os.path.join(cwd,str(rank))
print(rank,directory)
os.chdir(directory)
new_comm.Spawn("SOME_MPI_EXECUTABLE_HERE",
args=[""],
maxprocs=4)
run with:
mpirun --oversubscribe -np 4 parent.py
Seems to start 4 instances of SOME_MPI_EXECUTABLE each running on 4 cores.
(Thanks to Zulan)
|
Do AND, OR strings have special meaning in PLY?
Question: When using PLY (<http://www.dabeaz.com/ply/>) I've noticed what seems to be a
very strange problem: when I'm using tokens like `&` for conjunction, the
program below works, but when I use `AND` in the same place, PLY claims syntax
error.
Program:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys
import os
from ply import lex
import ply.yacc as yacc
parser = None
lexer = None
def flatten_list(lst):
    flat = []
    for x in lst:
        if isinstance(x, list):
            flat.extend(flatten_list(x))
        else:
            flat.append(x)
    return flat
############## Tokenization ##############
tokens = (
'number',
'lparen',
'rparen',
'textw',
'titlew',
'qword',
'AND'
)
t_lparen = r'\('
t_rparen = r'\)'
t_textw = r'TEXTW:'
t_titlew = r'TITLEW:'
t_qword = r'\w+'
t_AND = r'AND'
def t_number(t):
    r'\d+'
    t.value = int(t.value)
    return t

t_ignore = ' \t'

def t_error(t):
    raise ValueError(
        'Illegal character "{}" at position {}, query text: {}'.format(t.value[0], t.lexpos, t.lexer.lexdata))
lexer = lex.lex()
################# Parsing #################
def p_querylist_boolop(p):
    """querylist : subquery AND subquery"""
    print >> sys.stderr, 'p_querylist', list(p)
    p[0] = []
    p[0].append(p[1])
    p[0].append(p[3])

def p_subquery(p):
    """subquery : lparen querykw qwordseq rparen"""
    print >> sys.stderr, 'p_subquery', list(p)
    p[0] = flatten_list(p[3])

def p_querykw(p):
    """querykw : textw
                | titlew"""
    print >> sys.stderr, 'p_querykw', list(p)
    p[0] = p[1]

def p_qwordseq(p):
    """qwordseq : qwordseq qword
                | qwordseq number
                | qword
                | number"""
    print >> sys.stderr, 'p_qwordseq', list(p)
    if p[0]:
        p[0].extend(p[1:])
    else:
        p[0] = p[1:]

def p_error(p):
    global parser
    if p:
        tok = parser.token()
        if tok:
            msg = 'Syntax error in input, token "{}" at position {}, query text: {}'.format(tok.value, tok.lexpos, lexer.lexdata)
            raise ValueError(msg)
    msg = 'Syntax error at the end of input, query text: {}'.format(lexer.lexdata)
    raise ValueError(msg)
parser = yacc.yacc()
# parser = yacc.yacc(debug=0, write_tables=0)
def parse_query(q):
    return parser.parse(q)

if __name__ == '__main__':
    query_texts = ["""(TEXTW: one article) AND (TEXTW: two books)"""]
    for qt in query_texts:
        res = parse_query(qt)
        print '***', res
This produces:
ValueError: Syntax error in input, token "(" at position 19, query text: ( TEXTW: abc ) AND ( TEXTW: aaa )
However, when I change the following to:
t_AND = r'&'
query_texts = ["""(TEXTW: one article) & (TEXTW: two books)"""]
..it works just fine:
*** [['one', 'article'], ['two', 'books']]
Answer: Ply has a slightly eccentric approach to ordering token regular expressions,
in part because it depends on the underlying python regular expression
library. Tokens defined with functions, such as your `number` token, are
recognized in the order they appear, and unlike many lexical scanner
generators, Ply makes no attempt to perform a longest match. Tokens defined by
assignment -- all your other token types -- have lower priority than
functions, and are placed in order by decreasing length (of the regular
expression).
The Ply manual (section 4.3) strongly suggests not using variable assignment
style for keyword tokens such as `AND`, because the pattern `r'AND'`, for
example, will recognize the first three characters of, for example,
`ANDROGYNOUS`, which you would probably expect to be a variable. Instead, it
recommends using a function with a simple pattern to first recognize all
keywords and variables as simple words, and then use a dictionary to recognize
the specific keywords. Sample code and a less telegraphic explanation are in
the Ply manual (in the section I cited above).
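A minimal sketch of that recommendation, adapted to the token names in your grammar (the dictionary-based keyword lookup is the pattern from the manual; treat the exact rule body as an outline):
reserved = {
    'AND': 'AND',
}

def t_qword(t):
    r'\w+'
    # reclassify reserved words; everything else stays a plain qword
    t.type = reserved.get(t.value, 'qword')
    return t
With this in place you would delete the `t_AND` and `t_qword` assignment rules from the question.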
|
Python - separate duplicate objects into different list
Question: So let's say I have this class:
class Spam(object):
    def __init__(self, a):
        self.a = a
And now I have these objects:
s1 = Spam((1, 1, 1, 4))
s2 = Spam((1, 2, 1, 4))
s3 = Spam((1, 2, 1, 4))
s4 = Spam((2, 2, 1, 4))
s5 = Spam((2, 1, 1, 8))
s6 = Spam((2, 1, 1, 8))
objects = [s1, s2, s3, s4, s5, s6]
so after running some kind of method, I need to have two lists that have
objects that had same `a` attribute value in one list and the other objects
that had unique `a` attribute.
Like this:
dups = [s2, s3, s5, s6]
normal = [s1, s4]
So it is something like getting duplicates, but in addition it should also include
the first occurrence of an object that shares the same `a` attribute value.
I have written this method and it seems to be working, but it is quite ugly in
my opinion (and probably not very optimal).
def eggs(objects):
    vals = []
    dups = []
    normal = []
    for obj in objects:
        if obj.a in vals:
            dups.append(obj)
        else:
            normal.append(obj)
            vals.append(obj.a)
    dups_vals = [o.a for o in dups]
    # separate again
    new_normal = []
    for n in normal:
        if n.a in dups_vals:
            dups.append(n)
        else:
            new_normal.append(n)
    return dups, new_normal
Can anyone write more appropriate pythonic approach for such problem?
Answer: I would group together the objects in a dictionary, using the `a` attribute as
the key. Then I would separate them by the size of the groups.
import collections

def separate_dupes(seq, key_func):
    d = collections.defaultdict(list)
    for item in seq:
        d[key_func(item)].append(item)
    dupes = [item for v in d.values() for item in v if len(v) > 1]
    uniques = [item for v in d.values() for item in v if len(v) == 1]
    return dupes, uniques

class Spam(object):
    def __init__(self, a):
        self.a = a

    # this method is not necessary for the solution, just for displaying the results nicely
    def __repr__(self):
        return "Spam({})".format(self.a)
s1 = Spam((1, 1, 1, 4))
s2 = Spam((1, 2, 1, 4))
s3 = Spam((1, 2, 1, 4))
s4 = Spam((2, 2, 1, 4))
s5 = Spam((2, 1, 1, 8))
s6 = Spam((2, 1, 1, 8))
objects = [s1, s2, s3, s4, s5, s6]
dupes, uniques = separate_dupes(objects, lambda item: item.a)
print(dupes)
print(uniques)
Result:
[Spam((2, 1, 1, 8)), Spam((2, 1, 1, 8)), Spam((1, 2, 1, 4)), Spam((1, 2, 1, 4))]
[Spam((1, 1, 1, 4)), Spam((2, 2, 1, 4))]
|
Convert cURL command to post request to send notification to kaa server
Question: I want to send a notification to kaa server. The below cURL command is working
fine but I want to send POST request from my node.js server. Kindly help me in
converting to post request.
curl -v -S -u devuser:devuser123 -F'notification=
{"applicationId":"32769","schemaId":"32778","topicId":"32770","type":"USER"};
type=application/json' -F file=@notification.json "http://localhost:8080/kaaAdmin/rest/api/sendNotification" | python -mjson.tool
I tried like this:
var notificationValue= {"applicationId":"32769","schemaId":"32778","topicId":"32770","type":"USER"};
var file = 'notification.json';
var opts = {
    url: 'http://localhost:8080/kaaAdmin/rest/api/sendNotification',
    method: 'POST',
    auth: { user: 'devuser', password: 'devuser123' },
    json: true,
    formData: {
        notification: JSON.stringify(notificationValue),
        file : fs.readFileSync(file)
    }
};
request(opts, function(err, resp, body) {
    if(err)
        res.send(err);
    else {
        res.send(body);
    }
});
I am getting: Error 400 Required request part 'notification' is not present.
Answer: Here is a solution.
First, import the following modules.
var fs = require('fs');
var request = require('request');
var crypto = require('crypto');
We need two utility functions: one to generate the **boundary** for the multipart content
type and the other to build the raw POST request body.
var CRLF = "\r\n";
var md5 = crypto.createHash('md5');

function multipartRequestBodyBuilder(fields, boundary) {
    var requestBody = '';
    for(var name in fields) {
        var field = fields[name];
        var data = field.data;
        var fileName = field.fileName ? '; filename="' + field.fileName + '"' : '';
        var type = field.type ? 'Content-Type:' + field.type + CRLF : '';
        requestBody += "--" + boundary + CRLF +
            "Content-Disposition: form-data; name=\"" + name + "\"" + fileName + CRLF +
            type + CRLF +
            data + CRLF;
    }
    requestBody += '--' + boundary + '--' + CRLF
    return requestBody;
}

function getBoundary() {
    md5.update(new Date() + getRandomArbitrary(1, 65536));
    return md5.digest('hex');
}

function getRandomArbitrary(min, max) {
    return Math.random() * (max - min) + min;
}
Then we form our data and generate the boundary.
var notificationValue = {
    "applicationId":"2",
    "schemaId":"12",
    "topicId":"1",
    "type":"USER"
};

var postData = {
    notification : {
        data : JSON.stringify(notificationValue),
        type : "application/json"
    },
    file : {
        data : fs.readFileSync("message.json"),
        fileName : 'notification.json',
        type : 'application/octet-stream'
    }
}
var boundary = getBoundary();
After that compose a request and send to Kaa Server.
var opts = {
    url: 'http://localhost:8080/kaaAdmin/rest/api/sendNotification',
    method: 'POST',
    auth: { user: 'devuser', password: 'devuser123' },
    headers: {
        'content-type': 'multipart/form-data; boundary=' + boundary
    },
    body : multipartRequestBodyBuilder(postData, boundary)
};

request(opts, function(err, resp, body) {
    if(err) {
        console.log("Error: " + err);
    } else {
        console.log("Status code: " + resp.statusCode + "\n");
        console.log("Result: " + body);
    }
});
After all, you will see the confirmation response with status code 200.
Status code: 200
Result: {
"id" : "57e42623c3fabb0799bb3279",
"applicationId" : "2",
"schemaId" : "12",
"topicId" : "1",
"nfVersion" : 2,
"lastTimeModify" : 1474569763797,
"type" : "USER",
"body" : "CkhlbGxvAA==",
"expiredAt" : 1475174563793,
"secNum" : 17
}
I attach the file with whole code that I tested on Notification Demo from Kaa
sandbox: [send
notification](https://drive.google.com/file/d/0B8XVYCfGBxs3THZaYzYtSUZYV2s/view?usp=sharing).
|
Adding a dict as a value to another dict is overwriting the previous value
Question: I have a piece of python code like below (I am sorry that I couldn't paste my
actual code because it's very big):
final_dict = {}
default_dict = some_data
for dict in list_of_dicts:
    # I am getting list_of_dicts from a json file
    resultant_dict = merge_dicts(dict, default_dict)
    id = return_value_from_a_function(resultant_dict)
    final_dict[id] = resultant_dict  # id will be different in each loop
So the final_dict is supposed to have id's as keys and resultant_dict's as
values. My problem is that at the end of the for loop, all my values in the
final_dict are same as the last value of resultant_dict. I think it is
overwriting the previous values (maybe because it's a reference). How do I solve
this issue?
EDIT 1: merge_dicts actually creates the union of two dicts. When I print
resultant_dict, it prints different dict each time, as expected. But when I
assign it as a value to final_dict, it is modifying all the previous values
with the latest one.
EDIT 2: All the input data is a dict which I am getting from a json file. The
final dict should look something like below
final_dict = {
id1 : dict1,
id2 : dict2
}
But I am getting like below ( It is overwriting all the values with the latest
dict value)
final_dict = {
id1 : dict2,
id2 : dict2
}
EDIT 3: This is how merge_dicts work
def merge_dicts(tmp1, tmp2):
    '''
    merges tmp2 into tmp1
    '''
    for key in tmp2:
        if key in tmp1:
            if isinstance(tmp1[key], dict) and isinstance(tmp2[key], dict):
                merge_dicts(tmp1[key], tmp2[key])
            else:
                tmp1[key] = tmp2[key]
        else:
            tmp1[key] = tmp2[key]
    return tmp1
Answer: Why don't you generate the `id` first and then straight away assign the
merge_dicts value there?
for dict in list_of_dicts:
    # I am getting list_of_dicts from a json file
    id = return_value_from_a_function
    final_dict[id] = merge_dicts(dict, default_dict)
EDIT: Since `return_value_from_a_function` makes use of `resultant_dict`, it
seems that `return_value_from_a_function` modifies `resultant_dict`.
from copy import deepcopy

for input_dict in list_of_dicts:
    # I am getting list_of_dicts from a json file
    resultant_dict = {}
    resultant_dict = merge_dicts(input_dict, default_dict)
    value_dict = deepcopy(resultant_dict)
    id = return_value_from_a_function(resultant_dict)
    final_dict[id] = value_dict
|
Can Django collectstatic overwrite old files?
Question: In my deb postinst file:
PYTHON=/usr/bin/python
PYTHON_VERSION=`$PYTHON -c 'import sys; print sys.version[:3]'`
SITE_PACKAGES=/opt/pkgs/mypackage/lib/python$PYTHON_VERSION/site-packages
export PYTHONPATH=$SITE_PACKAGES
echo "collect static files"
$PYTHON manage.py collectstatic --noinput
> When I run 'dpkg -i mypackage.deb' to install the package, no problem.
>
> When I run 'dpkg -i mypackage.deb' to **re-install the package, old css
> files unchanged**.
>
> When I changed '$PYTHON manage.py collectstatic --noinput' to '$PYTHON
> manage.py collectstatic --noinput -c' and ran 'dpkg -i mypackage.deb' to
> **re-install the package**, the error is the following: OSError: [Errno 2] No
> such file or directory: '/opt/pkgs/myporject/static'
Any idea?
**Can Django collectstatic overwrite old files?**
Answer: (Added here in case someone runs into the same problem.) Yes.
The timestamp of the css files in /opt/pkgs/mypropject/lib/python2.7/site-packages/mypropject-py2.7.egg/myapp/static/css
(directory A) is the time when the package build finished, not the time when the
css files were installed.
But the timestamp of the css files in /opt/pkgs/myporject/static (directory B) is
the time of installation.
That is why collectstatic sometimes cannot overwrite my old css files (it cannot
copy some css files from directory A to directory B).
|
Convert python cryptography EC key to OpenSSH format
Question: I am looking to convert an EC key generated using the cryptography module to its
respective OpenSSH string, like
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAhANiNlmyHtBUgaPXG+CtCVK8mQxBUtDjX3/nqqPZAHhduAAAAIE/JNDqLTeq9WVa5XWyU2Y7NJXfV54wakHmsP5gRNeh2
This is the code I use for EC key generation
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.asymmetric import ec
key=ec.generate_private_key(ec.SECP256R1(), default_backend())
I tried the following.
numbers = key.private_numbers()
opensshpublic = 'ecdsa-sha2-nistp256' + base64.b64encode('nistp256' + numbers.public_numbers.x, numbers.public_numbers.y)
but that didn't seem to work.
I suppose there should be an easy way to do this but I am missing something.
Answer: Cryptography added support for doing this in June 2016; it is possible as
follows (note that you serialize the public half of the key):
from cryptography.hazmat.primitives import serialization
key.public_key().public_bytes(serialization.Encoding.OpenSSH, serialization.PublicFormat.OpenSSH)
this gives those neat OpenSSH public keys
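For completeness, a small self-contained sketch (assuming cryptography >= 1.4, where the OpenSSH serialization landed): generate the key as in the question and serialize its public half.
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec

key = ec.generate_private_key(ec.SECP256R1(), default_backend())
openssh_public = key.public_key().public_bytes(
    serialization.Encoding.OpenSSH,
    serialization.PublicFormat.OpenSSH,
)
print(openssh_public.decode())  # e.g. "ecdsa-sha2-nistp256 AAAA..."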
|
Import data from xml file into two tables w/ foreign key at MySQL database
Question: I need to load file of the following format into MySQL database.
<item value="{$\emptyset $}">
<subitem value="(empty language)"></subitem>
<subitem value="(empty set)"></subitem>
</item>
<item value="{$\subseteq$ (subset)}">
</item>
<item value="{$\subset$ (proper subset)}">
</item>
<item value="{$:$ (such that)}">
</item>
<item value="{$\cap$ (set intersection)}">
</item>
<item value="{$\cup$ (set union)}">
</item>
<item value="{$-$ (set difference)}">
</item>
<item value="{$\left | \mskip \medmuskip \right |$}">
<subitem value="(flow value)"></subitem>
<subitem value="(length of a string)"></subitem>
<subitem value="(set cardinality)"></subitem>
</item>
I think in database it should be represented by two tables, Subitem table
should contain foreign key:
Item <-- Subitem
I want to do it with python. Is it possible to accomplish this with MySQL
instructions only, or is it better to load the xml file in python, create both
tables manually, and then insert all entries into the tables I want?
Answer: I was able to do it by reading the xml with python and then inserting it into a
MySQL database. First one needs to install the needed software:
sudo apt install mysql-server
sudo apt-get install python-mysqldb
Then this py-file will do the job:
import xml.etree.ElementTree
import MySQLdb
try:
    db = MySQLdb.connect(host="localhost",
                         user="root",
                         passwd="!")
    cur = db.cursor()
    cur.execute("DROP DATABASE IF EXISTS i2a")
    cur.execute("CREATE DATABASE i2a")
    cur.execute("USE i2a")
    print "Created database"
    cur.execute("""
        CREATE TABLE Item (
            id INT NOT NULL AUTO_INCREMENT,
            `value` VARCHAR(255) NOT NULL,
            PRIMARY KEY (`id`)
        )
        DEFAULT CHARACTER SET = utf8
        COLLATE = utf8_bin""")
    print "Created Item table"
    cur.execute("""
        CREATE TABLE Subitem (
            id INT NOT NULL AUTO_INCREMENT,
            item_id INT NOT NULL,
            `value` VARCHAR(255) NOT NULL,
            PRIMARY KEY (`id`),
            FOREIGN KEY (item_id) REFERENCES Item(id) ON DELETE RESTRICT
        )
        DEFAULT CHARACTER SET = utf8
        COLLATE = utf8_bin""")
    print "Created Subitem table"
    e = xml.etree.ElementTree.parse('index.xml').getroot()
    for item in e.findall('item'):
        cur.execute("INSERT INTO Item (value) VALUES (%s)", [item.get('value')])
        for subitem in item:
            cur.execute("INSERT INTO Subitem (item_id, value) VALUES (%s, %s)", (db.insert_id(), subitem.get('value')))
    print "All data are there!"
except Exception, e:
    print str(e)
|
Using a DLL exported from D
Question: I've created a simple encryption program in D, and I had the idea to make a
DLL from it and try to import it to, for example, Python.
I could simply call my `main` function, because it doesn't need any params.
But when I get to my encryption method, **it uses dynamic-length `ubyte[]`
arrays**, but as far as I know, they **don't exist in other C/C++ based
langs**.
For example, there's the first line of one of my funcs:
`ubyte[] encode(ubyte[] data, ubyte[] key){`
But I can't use an array without a fixed length in other languages! How can I
import that function, for example, in Python?
**EDIT:**
I know that I can create a wrapper that takes a pointer and the length of the
array, but isn't there a more elegant solution?
(Where I don't need to use D to use a lib written in D)
Answer: Well, tbh. there's no really elegant way other than wrapping a pointer with a
length, or wrapping to C arrays and then to D.
However you can make it somewhat elegant with the first approach, using a
struct that has a pointer, a length, and a property that converts it to a D
array.
The function you export takes your struct; all that function should do is
call an internal function that takes an actual D array. You simply pass the
struct to it, and the conversion happens at that moment through `alias this`
and the conversion property.
An example usage is here:
module main;

import core.stdc.stdlib : malloc;
import std.stdio;

struct DArray(T) {
    T* data;
    size_t length;
    /// This field can be removed, only used for testing purpose
    size_t offset;

    @property T[] array() {
        T[] arr;
        foreach(i; 0 .. length) {
            arr ~= data[i];
        }
        return arr;
    }

    alias array this;

    /// This function can be removed, only used for testing purpose
    void init(size_t size) {
        data = cast(T*)malloc(size * T.sizeof);
        length = size;
    }

    /// This function can be removed, only used for testing purpose
    void append(T value) {
        data[offset] = value;
        offset++;
    }
}

// This function is the one exported
void externalFoo(DArray!int intArray) {
    writeln("Calling extern foo");
    internalFoo(intArray);
}

// This function is the one you use
private void internalFoo(int[] intArray) {
    writeln("Calling internal foo");
    writeln(intArray);
}

void main() {
    // Constructing our test array
    DArray!int arrayTest;
    arrayTest.init(10);
    foreach (int i; 0 .. 10) {
        arrayTest.append(i);
    }
    // Testing the exported function
    externalFoo(arrayTest);
}
Here is an absolute minimum version of how to do it
struct DArray(T) {
    T* data;
    size_t length;

    @property T[] array() {
        T[] arr;
        foreach(i; 0 .. length) {
            arr ~= data[i];
        }
        return arr;
    }

    alias array this;
}

// This function is the one exported
void externalFoo(DArray!int intArray) {
    writeln("Calling extern foo");
    internalFoo(intArray);
}

// This function is the one you use
private void internalFoo(int[] intArray) {
    writeln("Calling internal foo");
    writeln(intArray);
}
|
pyFFTW doesn't find libfftw3l.so while import
Question: In my Raspbian system I have successfully installed pyFFTW, but there is a
problem while importing the package.
import pyfftw
File "/usr/local/lib/python3.4/dist-packages/pyfftw/__init__.py", line 16, in <module>
from .pyfftw import (
ImportError: libfftw3l.so.3: cannot open shared object file: No such file or directory
Actually, I have FFTW installed from source.
* * *
I've dug into __init__.py and there is a **relative import** line:
from .pyfftw import (
FFTW
blah blah )
There is no module pyfftw in the `.` folder, but I suppose this line points to the
./**pyfftw.cpython-34m.so** file, which probably wraps the C code of FFTW.
How do I tell this pyfftw.cpython-34m.so file where it should look for the
correct path?
Answer: The problem was with PYTHONPATH.
To check whether the file is somewhere on the disk:
$ sudo find / -name libfftw3l.so.3
/home/pi/bin/fftw-3.3.5/.libs/libfftw3.so.3
/usr/lib/arm-linux-gnueabihf/libfftw3.so.3
/usr/local/lib/libfftw3.so.3
And add a line before import pyfftw (see
[here](https://docs.python.org/3/tutorial/modules.html "here")):
import sys
sys.path.append('/usr/local/lib/libfftw3.so.3')
|
Process hangs if web browser crashes in selenium
Question: I am using selenium + python, been using implicit waits and try/except code on
python to catch errors. However I have been noticing that if the browser
crashes (let's say the user closes the browser during the program's
execution), my python program will hang, and the timeouts of the implicit wait
seem not to work when this happens. The below process will just stay there
forever.
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium import webdriver
import datetime
import time
import sys
import os
def open_browser():
    print "Opening web page..."
    driver = webdriver.Chrome()
    driver.implicitly_wait(1)
    #driver.set_page_load_timeout(30)
    return driver
driver = open_browser() # Opens web browser
# LET'S SAY I CLOSE THE BROWSER RIGHT HERE!
# IF I CLOSE THE PROCESS HERE, THE PROGRAM WILL HANG FOREVER
time.sleep(5)
while True:
    try:
        driver.get('http://www.google.com')
        break
    except:
        driver.quit()
        driver = open_browser()
Answer: The code you have provided will always hang in the event that there is an
exception getting the google home page. What is probably happening is that
attempting to get the google home page is resulting in an exception which
would normally halt the program, but you are masking that out with the except
clause.
Attempt with the following amendment to your loop.
max_attempts = 10
attempts = 0
while attempts <= max_attempts:
    try:
        print "Retrieving google"
        driver.get('http://www.google.com')
        break
    except:
        print "Retrieving google failed"
        attempts += 1
|
How to add `colorbar` to `networkx` using a `seaborn` color palette? (Python 3)
Question: I'm trying to add a `colorbar` to my `networkx` drawn `matplotlib ax` from the
range of `1` (being the lightest) and `3` (being the darkest) [check out the
line w/ `cmap` below]. I'm trying to combine a lot of `PyData`
functionalities.
**How can I add a color bar type feature on a networkx plot using a seaborn
color palette?**
[figure: current networkx plot](http://i.stack.imgur.com/Yt5ud.png)
# Set up Graph
DF_adj = pd.DataFrame(np.array(
[[1, 0, 1, 1],
[0, 1, 1, 0],
[1, 1, 1, 1],
[1, 0, 1, 1] ]), columns=['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)'], index=['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)'])
G = nx.Graph(DF_adj.as_matrix())
G = nx.relabel_nodes(G, dict(zip(range(4), ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)'])))
# Color mapping
color_palette = sns.cubehelix_palette(3)
cmap = {k:color_palette[v-1] for k,v in zip(['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)'],[2, 1, 3, 2])}
# Draw
nx.draw(G, node_color=[cmap[node] for node in G.nodes()], with_labels=True)
In this tutorial, they are all using `matplotlib` color palettes:
<http://jakevdp.github.io/mpl_tutorial/tutorial_pages/tut3.html>. I even tried
converting them to a `ListedColormap` object, but it didn't work.
This doesn't work for my situation either because of the matplotlib colormap: [Seaborn
regplot with colorbar?](http://stackoverflow.com/questions/30353363/seaborn-regplot-with-colorbar)
Same for
<http://matplotlib.org/examples/pylab_examples/colorbar_tick_labelling_demo.html>
This was the closest I got, but it didn't work (I got an autoscale NoneType error): [How
do I use seaborns color_palette as a colormap in
matplotlib?](http://stackoverflow.com/questions/37902459/how-do-i-use-seaborns-color-palette-as-a-colormap-in-matplotlib)
Answer: I think the best thing to do here is to fake it following [this
answer](http://stackoverflow.com/a/11558629/5285918) since you don't have a
"ScalarMappable" to work with.
For a discrete colormap
from matplotlib.colors import ListedColormap
sm = plt.cm.ScalarMappable(cmap=ListedColormap(color_palette),
norm=plt.Normalize(vmin=0, vmax=3))
sm._A = []
plt.colorbar(sm)
If you want a linear (continuous) colormap and to only show integer ticks
sm = plt.cm.ScalarMappable(cmap=sns.cubehelix_palette(3, as_cmap=True),
norm=plt.Normalize(vmin=0, vmax=3))
sm._A = []
plt.colorbar(sm, ticks=range(4))
[figure: resulting colorbar](http://i.stack.imgur.com/pswBu.png)
|
Why does my Python XML parser break after the first file?
Question: I am working on a Python (3) XML parser that should extract the text content
of specific nodes from every xml file within a folder. Then, the script should
write the collected data into a tab-separated text file. So far, all the
functions seem to be working. The script returns all the information that I
want from the first file, but it always breaks, I believe, when it starts to
parse the second file.
When it breaks, it returns "TypeError: 'str' object is not callable." I've
checked the second file and found that the functions work just as well on that
as the first file when I remove the first file from the folder. I'm very new
to Python/XML. Any advice, help, or useful links would be greatly appreciated.
Thanks!
import xml.etree.ElementTree as ET
import re
import glob
import csv
import sys
content_file = open('WWP Project/WWP_texts.txt','wt')
quotes_file = open('WWP Project/WWP_quotes.txt', 'wt')
list_of_files = glob.glob("../../../Documents/WWPtextbase/distribution/*.xml")
ns = {'wwp':'http://www.wwp.northeastern.edu/ns/textbase'}
def content(tree):
    lines = ''.join(ET.tostring(tree.getroot(),encoding='unicode',method='text')).replace('\n',' ').replace('\t',' ').strip()
    clean_lines = re.sub(' +',' ', lines)
    return clean_lines.lower()

def quotes(tree):
    quotes_list = []
    for node in tree.findall('.//wwp:quote', namespaces=ns):
        quote = ET.tostring(node,encoding='unicode',method='text')
        clean_quote = re.sub(' +',' ', quote)
        quotes_list.append(clean_quote)
    return ' '.join(str(v) for v in quotes_list).replace('\t','').replace('\n','').lower()

def pid(tree):
    for node in tree.findall('.//wwp:sourceDesc//wwp:author/wwp:persName[1]', namespaces=ns):
        pid = node.attrib.get('ref')
        return pid.replace('personography.xml#','') # will need to replace 'p:'

def trid(tree): # this function will eventually need to call OT (.//wwp:publicationStmt//wwp:idno)
    for node in tree.findall('.//wwp:sourceDesc',namespaces=ns):
        trid = node.attrib.get('n')
        return trid

content_file.write('pid' + '\t' + 'trid' + '\t' +'text' + '\n')
quotes_file.write('pid' + '\t' + 'trid' + '\t' + 'quotes' + '\n')

for file_name in list_of_files:
    file = open(file_name, 'rt')
    tree = ET.parse(file)
    file.close()
    pid = pid(tree)
    trid = trid(tree)
    content = content(tree)
    quotes = quotes(tree)
    content_file.write(pid + '\t' + trid + '\t' + content + '\n')
    quotes_file.write(pid + '\t' + trid + '\t' + quotes + '\n')

content_file.close()
quotes_file.close()
Answer: You are overwriting your function calls with the values they return.
Changing the function names should fix it.
import xml.etree.ElementTree as ET
import re
import glob
import csv
import sys
content_file = open('WWP Project/WWP_texts.txt','wt')
quotes_file = open('WWP Project/WWP_quotes.txt', 'wt')
list_of_files = glob.glob("../../../Documents/WWPtextbase/distribution/*.xml")
ns = {'wwp':'http://www.wwp.northeastern.edu/ns/textbase'}
def get_content(tree):
    lines = ''.join(ET.tostring(tree.getroot(),encoding='unicode',method='text')).replace('\n',' ').replace('\t',' ').strip()
    clean_lines = re.sub(' +',' ', lines)
    return clean_lines.lower()

def get_quotes(tree):
    quotes_list = []
    for node in tree.findall('.//wwp:quote', namespaces=ns):
        quote = ET.tostring(node,encoding='unicode',method='text')
        clean_quote = re.sub(' +',' ', quote)
        quotes_list.append(clean_quote)
    return ' '.join(str(v) for v in quotes_list).replace('\t','').replace('\n','').lower()

def get_pid(tree):
    for node in tree.findall('.//wwp:sourceDesc//wwp:author/wwp:persName[1]', namespaces=ns):
        pid = node.attrib.get('ref')
        return pid.replace('personography.xml#','') # will need to replace 'p:'

def get_trid(tree): # this function will eventually need to call OT (.//wwp:publicationStmt//wwp:idno)
    for node in tree.findall('.//wwp:sourceDesc',namespaces=ns):
        trid = node.attrib.get('n')
        return trid

content_file.write('pid' + '\t' + 'trid' + '\t' +'text' + '\n')
quotes_file.write('pid' + '\t' + 'trid' + '\t' + 'quotes' + '\n')

for file_name in list_of_files:
    file = open(file_name, 'rt')
    tree = ET.parse(file)
    file.close()
    pid = get_pid(tree)
    trid = get_trid(tree)
    content = get_content(tree)
    quotes = get_quotes(tree)
    content_file.write(pid + '\t' + trid + '\t' + content + '\n')
    quotes_file.write(pid + '\t' + trid + '\t' + quotes + '\n')

content_file.close()
quotes_file.close()
|
How to connect a socket to another computer's socket through Internet
Question: I recently have some difficulties to connect a socket to another computer's
socket through Internet, an image is worth a thousand words:
[figure: network diagram of computers A and B behind their routers](http://i.stack.imgur.com/9CseJ.png)
Computer **A** is running this "**listener.py** " script:
import socket
PORT = 50007
BUFFER = 2048
HOST = ''
if __name__ == '__main__':
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind((HOST, PORT))
        s.listen(1)
        conn, addr = s.accept()
        with conn:
            print('Connected by', addr)
            while True:
                data = conn.recv(BUFFER)
                if not data: break
                conn.sendall(data)
Computer **B** is running this "**sender.py** " script:
import socket
HOST = '101.81.83.169' # The remote host
PORT = 50007 # The same port as used by the server
if __name__ == '__main__':
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect((HOST, PORT))
        s.sendall(b'Hello, world')
So first of all, I run the "**listener** " script of the computer **A**. Then,
I run the "**sender** " script of the computer B. However, when I execute the
"**sender** " script, I received a **error** message which explains me that I
am not authorized to connect to this remote address.
So I would like to know how can I connect a socket to another socket through
internet without changing the router configurations.
Thank you very much for your help.
**Edit**: Here is the error message (I didn't execute the same script for some
reasons, but it's the same error message):
sock.connect(('101.81.83.169',50007))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
socket.error: [Errno 61] Connection refused
Answer: Computer B can't directly connect to computer A since it has an IP address
which is not reachable from the outside. You need to set up a port forwarding
rule in the 101.81.83.169 router that redirects incoming connection requests
for port 50007 to IP address 192.168.0.4.
However, since you say that you are seeking a solution without changing router
configurations, you need something different.
In this case, you could setup an intermediate server running on the public
Internet that both computers can then connect to and serves as an intermediate
tunneling platform between them. Solutions for this already exist, for example
have a look at [ngrok](https://ngrok.com/), which has Python bindings
available.
|
How do I ask the user if they want to play again and repeat the while loop?
Question: Running on Python, this is an example of my code:
import random
comp = random.choice([1,2,3])
while True:
    user = input("Please enter 1, 2, or 3: ")
    if user == comp
        print("Tie game!")
    elif (user == "1") and (comp == "2")
        print("You lose!")
        break
    else:
        print("Your choice is not valid.")
So this part works. However, how do I exit out of this loop? After
entering a correct input it keeps asking "Please input 1,2,3".
I also want to ask if the player wants to play again:
**Psuedocode:**
play_again = input("If you'd like to play again, please type 'yes'")
if play_again == "yes"
    start loop again
else:
    exit program
Is this related to a nested loop somehow?
Answer: Points for your code:
1. The code you have pasted doesn't have `':'` after `if`, `elif` and `else`.
2. Whatever you want can be achieved using control flow statements like `continue` and `break`. [Please check here for more detail](https://docs.python.org/2/tutorial/controlflow.html).
3. You need to remove `break` from "YOU LOSE" since you want to ask the user whether he wants to play again.
4. The code you have written will never hit "Tie Game" since you are comparing a string with an integer. The user input saved in the variable will be a string, and `comp`, which is the output of `random`, will be an integer. You have to convert the user input to an integer with `int(user)`.
5. Checking whether the user input is valid can simply be done using the `in` operator.
**Code:**
import random

while True:
    comp = random.choice([1,2,3])
    user = raw_input("Please enter 1, 2, or 3: ")
    if int(user) in [1,2,3]:
        if int(user) == comp:
            print("Tie game!")
        else:
            print("You lose!")
    else:
        print("Your choice is not valid.")
    play_again = raw_input("If you'd like to play again, please type 'yes'")
    if play_again == "yes":
        continue
    else:
        break
|
python+pyspark: error on inner join with multiple column comparison in pyspark
Question: Hi I have 2 dataframes to join
#df1
name genre count
satya drama 1
satya action 3
abc drame 2
abc comedy 2
def romance 1
#df2
name max_count
satya 3
abc 2
def 1
Now I want to join the above 2 dfs on name and count==max_count, but I am getting
an error:
import pyspark.sql.functions as F
from pyspark.sql.functions import count, col
from pyspark.sql.functions import struct
df = spark.read.csv('file',sep = '###', header=True)
df1 = df.groupBy("name", "genre").count()
df2 = df1.groupby('name').agg(F.max("count").alias("max_count"))
#Now trying to join both dataframes
final_df = df1.join(df2, (df1.name == df2.name) & (df1.count == df2.max_count))
final_df.show() ###Error
#py4j.protocol.Py4JJavaError: An error occurred while calling o207.showString.
: org.apache.spark.SparkException: Exception thrown in awaitResult:
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:194)
#Caused by: java.lang.UnsupportedOperationException: Cannot evaluate expression: count(1)
at org.apache.spark.sql.catalyst.expressions.Unevaluable$class.doGenCode(Expression.scala:224)
But success with "left " join
final_df = df1.join(df2, (df1.name == df2.name) & (df1.count == df2.max_count), "left")
final_df.show() ###Success but i don't want left join , i want inner join
My question is why the above one fails, am I doing something wrong there???
I referred this link "[Find maximum row per group in Spark
DataFrame](http://stackoverflow.com/questions/35218882/find-maximum-row-per-
group-in-spark-dataframe)". Used the first answer (2 groupby method).But same
error.
I am on spark-2.0.0-bin-hadoop2.7 and python 2.7.
Please suggest.Thanks.
# Edit:
The above scenario works with spark 1.6, which is quite surprising; so what's
wrong with spark 2.0, or with my installation? I will reinstall, check and
update here.
Has anybody tried this on spark 2.0 and got it to work by following Yaron's
answer below?
Answer: Update: It seems like your code was failing also due to the use of "count" as
a column name. `count` seems to be a protected keyword in the DataFrame API.
Renaming count to "mycount" solved the problem. The working code below was
modified to support spark version 1.5.2, which I used to test your issue.
df = sqlContext.read.format("com.databricks.spark.csv").option("header", "true").load("/tmp/fac_cal.csv")
df1 = df.groupBy("name", "genre").count()
df1 = df1.select(col("name"),col("genre"),col("count").alias("mycount"))
df2 = df1.groupby('name').agg(F.max("mycount").alias("max_count"))
df2 = df2.select(col('name').alias('name2'),col("max_count"))
#Now trying to join both dataframes
final_df = df1.join(df2,[df1.name == df2.name2 , df1.mycount == df2.max_count])
final_df.show()
+-----+---------+-------+-----+---------+
| name| genre|mycount|name2|max_count|
+-----+---------+-------+-----+---------+
|brata| comedy| 2|brata| 2|
|brata| drama| 2|brata| 2|
|panda|adventure| 1|panda| 1|
|panda| romance| 1|panda| 1|
|satya| action| 3|satya| 3|
+-----+---------+-------+-----+---------+
The example for complex condition in
<https://spark.apache.org/docs/2.0.0/api/python/pyspark.sql.html>
cond = [df.name == df3.name, df.age == df3.age]
>>> df.join(df3, cond, 'outer').select(df.name, df3.age).collect()
[Row(name=u'Alice', age=2), Row(name=u'Bob', age=5)]
* * *
can you try:
final_df = df1.join(df2, [df1.name == df2.name , df1.mycount == df2.max_count])
Note also, that according to the spec "left" is not part of the valid join
types: how – str, default ‘inner’. One of inner, outer, left_outer,
right_outer, leftsemi.
|
SSH tunnel from Python is too slow to connect
Question: I'm connecting to a remote SQL database over SSH. If I set up the SSH
connection from the Linux command line (using `ssh-add my_private_key.key` and
then `ssh [email protected]`), it takes less than a second to connect. But if
I do it from Python using [sshtunnel](https://github.com/pahaz/sshtunnel) (in
the following script), it takes around 70 seconds. I accept that using Python
might be a bit of an overhead, but not that much! And especially since, if I
run the Python script _after_ having connected from the command line, it's
very fast. What do I need to add in the script to make it faster?
Python script:
import pymysql, shlex, shutil, subprocess
import logging
import sshtunnel
from sshtunnel import SSHTunnelForwarder
import iot_config as cfg
def OpenRemoteDB():
global remotecur, remotedb
sshtunnel.DEFAULT_LOGLEVEL = logging.DEBUG
with SSHTunnelForwarder(
(cfg.sshconn['host'], cfg.sshconn['port']),
ssh_username = cfg.sshconn['user'],
ssh_private_key = cfg.sshconn['private_key_loc'],
ssh_private_key_password = cfg.sshconn['private_key_passwd'],
remote_bind_address = ('127.0.0.1', 3306)) as server:
print("OK")
# Main program starts here
OpenRemoteDB()
Python output:
2016-09-20 12:34:15,272 | WARNING | Could not read SSH configuration file: ~/.ssh/config
2016-09-20 12:34:15,305 | INFO | 0 keys loaded from agent
2016-09-20 12:34:15,332 | DEBUG | Private key file (/etc/ssh/my_private_key.key, <class 'paramiko.rsakey.RSAKey'>) successfully loaded
2016-09-20 12:34:15,364 | INFO | Connecting to gateway: mysite.co.uk:22 as user 'user'
2016-09-20 12:34:15,389 | DEBUG | Concurrent connections allowed: True
2016-09-20 12:34:15,409 | DEBUG | Trying to log in with key: b'XXX'
2016-09-20 12:35:26,610 | INFO | Opening tunnel: 0.0.0.0:34504 <> 127.0.0.1:3306
Answer: Doh! After posting this question I thought it would be a good idea to make
sure sshtunnel was up-to-date - and it wasn't. So I've updated from 0.0.8.1 to
the latest version (0.1.0) and my problem is solved!
|
can't define a udf inside pyspark project
Question: I have a Python project that uses pyspark, and I am trying to define a udf
function inside the Spark project (not in my Python project), specifically in
spark\python\pyspark\ml\tuning.py, but I get pickling problems: it can't load
the udf. The code:
from pyspark.sql.functions import udf, log
test_udf = udf(lambda x : -x[1], returnType=FloatType())
d = data.withColumn("new_col", test_udf(data["x"]))
d.show()
When I try d.show() I get an exception about the unknown attribute test_udf.
In my Python project I defined many udfs and they worked fine.
Answer: Add the following to your code. It isn't recognizing the datatype.
from pyspark.sql.types import *
Let me know if this helps. Thanks.
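For reference, a minimal sketch of the snippet with that import in place (it mirrors the udf from the question and assumes a DataFrame named data with a column "x", as above):

    from pyspark.sql.functions import udf
    from pyspark.sql.types import FloatType

    # same udf as in the question, now with FloatType actually imported
    test_udf = udf(lambda x: -x[1], returnType=FloatType())
    d = data.withColumn("new_col", test_udf(data["x"]))
    d.show()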
|
Python Scraping - Unable to get required data from Flipkart
Question: I was trying to scrape the customer reviews from the Flipkart website. The
following is the [link](https://www.flipkart.com/samsung-galaxy-j5-6-new-2016-edition-white-16-gb/product-reviews/itmegmrnzqjcpfg9?pid=MOBEG4XWJG7F9A6Z).
The following was my code to scrape, but it always returns an empty list.
>>> from bs4 import BeautifulSoup
>>> import requests
>>> r = requests.get('https://www.flipkart.com/samsung-galaxy-j5-6-new-2016-edition-white-16-gb/product-reviews/itmegmrnzqjcpfg9?pid=MOBEG4XWJG7F9A6Z')
>>> soup = BeautifulSoup(r.content, 'lxml') # Tried with 'html.parser' also
>>> soup.find_all('div', '_3DCdKt')
[]
>>> soup.find_all('div', {'class': '_3DCdKt'})
[]
>>> soup.find_all('div', {'class': 'row _3wYu6I _3BRC7L'})
[]
>>> soup.find_all('div', {'class': '_1GRhLX hFPo14'})
[]
So, I tried to get the entire section, but I was getting only the following:
>>> soup.find_all('div', {'class': 'col-9-12'})
[<div class="col-9-12" data-reactid="96"><div class="row _2_xtR5" data-reactid="97"></div><div class="row _3wYu6I _1KVtzT" data-reactid="98"></div></div>]
I was not getting the other contents. So, next I tried with selenium, even
then it was returning `None`. The following is my selenium code:
>>> driver = webdriver.Firefox()
>>> driver.get('https://www.flipkart.com/samsung-galaxy-j5-6-new-2016-edition-white-16-gb/product-reviews/itmegmrnzqjcpfg9?pid=MOBEG4XWJG7F9A6Z')
>>> a = driver.find_elements_by_class_name("_3DCdKt")
>>> len(a)
10
>>> for i in a:
... print i.get_attribute('value')
...
None
None
None
None
None
None
None
None
None
None
What might be the problem? Am I doing any mistakes in the code. Kindly help. I
am new to Python.
Answer: The reviews etc. are populated using _reactjs_; the data is retrieved using
an ajax request, which you can mimic with requests:
import requests
data = {"productId": "MOBEG4XWJG7F9A6Z", # end of url pid=MOBEG4XWJG7F9A6Z
"count": "15",
"ratings": "ALL",
"reviewerType:ALL"
"sortOrder": "MOST_HELPFUL"}
headers = ({"x-user-agent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.92 Safari/537.36 FKUA/website/41/website/Desktop"})
data = requests.get("https://www.flipkart.com/api/3/product/reviews", params=data, headers=headers).json()
print(data)
What you want is to access `data["RESPONSE"]["data"]` which is a list of
dicts:
for dct in data["RESPONSE"]["data"]:
print(dct)
Which will give you:
{u'action': None, u'fixed': False, u'value': {u'rating': 5, u'text': u'Thanks to Flipkart who deliver it me with in 5 days \nGood Phone With Metal Body \nAnd Best front Camera With Flash\nBest for night Selfie \nI Take more than 30 pic in night mode with front flash \ngood smartphone gold color is also supereb\nbest ever smartphone under 15k by samsung\nGood Battery\nGood Camera Front with Flash and Rear Also Superb', u'reportAbuse': {u'action': {u'originalUrl': None, u'params': {u'vote': u'ABUSE', u'reviewId': u'be37810e-20fe-4417-9d88-2709288cf2ba', u'reviewDomain': u'PRODUCT'}, u'loginType': u'LEGACY_LOGIN', u'url': None, u'fallback': None, u'type': u'REVIEW_VOTE', u'omnitureData': None, u'screenType': None, u'tracking': {}}, u'fixed': False, u'value': {u'count': 0, u'type': u'VoteValue'}, u'tracking': None}, u'totalCount': 285, u'downvote': {u'action': {u'originalUrl': None, u'params': {u'vote': u'DOWN', u'reviewId': u'be37810e-20fe-4417-9d88-2709288cf2ba', u'reviewDomain': u'PRODUCT'}, u'loginType': u'LEGACY_LOGIN', u'url': None, u'fallback': None, u'type': u'REVIEW_VOTE', u'omnitureData': None, u'screenType': None, u'tracking': {}}, u'fixed': False, u'value': {u'count': 74, u'type': u'VoteValue'}, u'tracking': None}, u'id': u'be37810e-20fe-4417-9d88-2709288cf2ba', u'author': u'Happy Thakur', u'url': u'/reviews/be37810e-20fe-4417-9d88-2709288cf2ba', u'upvote': {u'action': {u'originalUrl': None, u'params': {u'vote': u'UP', u'reviewId': u'be37810e-20fe-4417-9d88-2709288cf2ba', u'reviewDomain': u'PRODUCT'}, u'loginType': u'LEGACY_LOGIN', u'url': None, u'fallback': None, u'type': u'REVIEW_VOTE', u'omnitureData': None, u'screenType': None, u'tracking': {}}, u'fixed': False, u'value': {u'count': 211, u'type': u'VoteValue'}, u'tracking': None}, u'helpfulCount': 211, u'created': u'16 May, 2016', u'certifiedBuyer': True, u'title': u'Best Smartphone by Samsung', u'type': u'ProductReviewValue'}, u'tracking': None}
{u'action': None, u'fixed': False, u'value': {u'rating': 5, u'text': u"Updated Review on 02-August after 3 months of usage:\nWhat I liked most:\nLook : 100/100 - Very good looking phone. Gold color and the finishing is super cool\nSize : 100/100 - 5.2 Inch is neither big nor small. I can still operate with one hand.. \nBattery : 100/100 - 3100 mAH is outstanding. 3G is always ON when i am out of home and Wi-Fi is always ON in home. I am charging mobile only once in every 36 hours. I use Whatsapp, instagram and Browsing mostly. \nDisplay : 90/100 - Not so bright and sharp as S series phones, but a real deal for the price. Impressed again. My only worry is about it is not having a Gorilla scratch proof glass. I may need to use tempered glass.\nTouch : 95/100 - So smooth and I dont see any lags as of now.\nCamera : 90/100 - Photos are good and can capture fast, but again not as great as S series phones. but at this price I believe this phone outclasses all other competitors in camera department. \n\nOne last thing is about the SAMSUNG brand and its service center coverage, which is again awesome. \nOverall I am completely satisfied with the phone and this phone reached my expectations. \nWhat I disliked:\nEarphone jack at the bottom.. I feel uncomfortable when chatting and listening to songs at same time\nLow speaker volume, not a big deal though for me, As i don't use loudspeaker for songs mostly", u'reportAbuse': {u'action': {u'originalUrl': None, u'params': {u'vote': u'ABUSE', u'reviewId': u'e786669a-024b-4ef0-b70c-1e4fcf5fe5ff', u'reviewDomain': u'PRODUCT'}, u'loginType': u'LEGACY_LOGIN', u'url': None, u'fallback': None, u'type': u'REVIEW_VOTE', u'omnitureData': None, u'screenType': None, u'tracking': {}}, u'fixed': False, u'value': {u'count': 0, u'type': u'VoteValue'}, u'tracking': None}, u'totalCount': 272, u'downvote': {u'action': {u'originalUrl': None, u'params': {u'vote': u'DOWN', u'reviewId': u'e786669a-024b-4ef0-b70c-1e4fcf5fe5ff', u'reviewDomain': u'PRODUCT'}, u'loginType': u'LEGACY_LOGIN', u'url': None, u'fallback': None, u'type': u'REVIEW_VOTE', u'omnitureData': None, u'screenType': None, u'tracking': {}}, u'fixed': False, u'value': {u'count': 87, u'type': u'VoteValue'}, u'tracking': None}, u'id': u'e786669a-024b-4ef0-b70c-1e4fcf5fe5ff', u'author': u'Naresh Kareti', u'url': u'/reviews/e786669a-024b-4ef0-b70c-1e4fcf5fe5ff', u'upvote': {u'action': {u'originalUrl': None, u'params': {u'vote': u'UP', u'reviewId': u'e786669a-024b-4ef0-b70c-1e4fcf5fe5ff', u'reviewDomain': u'PRODUCT'}, u'loginType': u'LEGACY_LOGIN', u'url': None, u'fallback': None, u'type': u'REVIEW_VOTE', u'omnitureData': None, u'screenType': None, u'tracking': {}}, u'fixed': False, u'value': {u'count': 185, u'type': u'VoteValue'}, u'tracking': None}, u'helpfulCount': 185, u'created': u'13 May, 2016', u'certifiedBuyer': True, u'title': u'Absolute Stunner and Impressive', u'type': u'ProductReviewValue'}, u'tracking': None}
{u'action': None, u'fixed': False, u'value': {u'rating': 3, u'text': u'Hi,\n\nI got this phone from Flipkart on Friday and here is my 3 days review.\n\nPros:\n * Beautiful design\n * Very handy, easy to handle\n * Battery backup is great\n * Back camera is good\n * No heating issues\n \nCons:\n * If we are charging, it will not show any light or any notification whether it is charging or not. We need to on the screen and check whether it is charging or not. So every time we need to turn it on and see whether it is charging or not.\n* Camera issue: Once you take the picture and then press the back button it is taking some time to come back to camera mode.\n* If you turn on the flash and take pic with back camera it is taking some time to capture the picture. With out Flash it is taking very fast.\n* Volume is very low. Not enough for a medium sized room.\n* Ear phones are not good especially for me. \n\n\nWill post my feedback after using it another 15 days.\n\nThanks', u'reportAbuse': {u'action': {u'originalUrl': None, u'params': {u'vote': u'ABUSE', u'reviewId': u'9cbcd27c-a8ad-4793-978a-5903cd086252', u'reviewDomain': u'PRODUCT'}, u'loginType': u'LEGACY_LOGIN', u'url': None, u'fallback': None, u'type': u'REVIEW_VOTE', u'omnitureData': None, u'screenType': None, u'tracking': {}}, u'fixed': False, u'value': {u'count': 0, u'type': u'VoteValue'}, u'tracking': None}, u'totalCount': 212, u'downvote': {u'action': {u'originalUrl': None, u'params': {u'vote': u'DOWN', u'reviewId': u'9cbcd27c-a8ad-4793-978a-5903cd086252', u'reviewDomain': u'PRODUCT'}, u'loginType': u'LEGACY_LOGIN', u'url': None, u'fallback': None, u'type': u'REVIEW_VOTE', u'omnitureData': None, u'screenType': None, u'tracking': {}}, u'fixed': False, u'value': {u'count': 67, u'type': u'VoteValue'}, u'tracking': None}, u'id': u'9cbcd27c-a8ad-4793-978a-5903cd086252', u'author': u'ileep ', u'url': u'/reviews/9cbcd27c-a8ad-4793-978a-5903cd086252', u'upvote': {u'action': {u'originalUrl': None, u'params': {u'vote': u'UP', u'reviewId': u'9cbcd27c-a8ad-4793-978a-5903cd086252', u'reviewDomain': u'PRODUCT'}, u'loginType': u'LEGACY_LOGIN', u'url': None, u'fallback': None, u'type': u'REVIEW_VOTE', u'omnitureData': None, u'screenType': None, u'tracking': {}}, u'fixed': False, u'value': {u'count': 145, u'type': u'VoteValue'}, u'tracking': None}, u'helpfulCount': 145, u'created': u'16 May, 2016', u'certifiedBuyer': True, u'title': u'Good looking phone with some drawbacks', u'type': u'ProductReviewValue'}, u'tracking': None}
{u'action': None, u'fixed': False, u'value': {u'rating': 5, u'text': u'Super Amoled Display..2 GB RAM with Latest Android Marshmallow OS only for 13K....its difficult to get Samsung Phone with 2 GB ram in such a low price Range...used for 15 days....Going Smooth....Awesome Earphone Quality.....selfie and back Camera Good.....Battery last for more than a day with Continous usage or will go for two days....Free Microsoft apps and Much More...', u'reportAbuse': {u'action': {u'originalUrl': None, u'params': {u'vote': u'ABUSE', u'reviewId': u'1546ed16-5945-4257-9f2d-0d86db7ed92e', u'reviewDomain': u'PRODUCT'}, u'loginType': u'LEGACY_LOGIN', u'url': None, u'fallback': None, u'type': u'REVIEW_VOTE', u'omnitureData': None, u'screenType': None, u'tracking': {}}, u'fixed': False, u'value': {u'count': 0, u'type': u'VoteValue'}, u'tracking': None}, u'totalCount': 34, u'downvote': {u'action': {u'originalUrl': None, u'params': {u'vote': u'DOWN', u'reviewId': u'1546ed16-5945-4257-9f2d-0d86db7ed92e', u'reviewDomain': u'PRODUCT'}, u'loginType': u'LEGACY_LOGIN', u'url': None, u'fallback': None, u'type': u'REVIEW_VOTE', u'omnitureData': None, u'screenType': None, u'tracking': {}}, u'fixed': False, u'value': {u'count': 9, u'type': u'VoteValue'}, u'tracking': None}, u'id': u'1546ed16-5945-4257-9f2d-0d86db7ed92e', u'author': u'Prashant Dias', u'url': u'/reviews/1546ed16-5945-4257-9f2d-0d86db7ed92e', u'upvote': {u'action': {u'originalUrl': None, u'params': {u'vote': u'UP', u'reviewId': u'1546ed16-5945-4257-9f2d-0d86db7ed92e', u'reviewDomain': u'PRODUCT'}, u'loginType': u'LEGACY_LOGIN', u'url': None, u'fallback': None, u'type': u'REVIEW_VOTE', u'omnitureData': None, u'screenType': None, u'tracking': {}}, u'fixed': False, u'value': {u'count': 25, u'type': u'VoteValue'}, u'tracking': None}, u'helpfulCount': 25, u'created': u'7 Sep, 2016', u'certifiedBuyer': True, u'title': u'Brilliant Phone Compared to Money', u'type': u'ProductReviewValue'}, u'tracking': None}
{u'action': None, u'fixed': False, u'value': {u'rating': 5, u'text': u"Nice.battery backup it's good", u'reportAbuse': {u'action': {u'originalUrl': None, u'params': {u'vote': u'ABUSE', u'reviewId': u'a9f2f6a0-2272-4187-bd37-48eb8a0a85c9', u'reviewDomain': u'PRODUCT'}, u'loginType': u'LEGACY_LOGIN', u'url': None, u'fallback': None, u'type': u'REVIEW_VOTE', u'omnitureData': None, u'screenType': None, u'tracking': {}}, u'fixed': False, u'value': {u'count': 0, u'type': u'VoteValue'}, u'tracking': None}, u'totalCount': 5, u'downvote': {u'action': {u'originalUrl': None, u'params': {u'vote': u'DOWN', u'reviewId': u'a9f2f6a0-2272-4187-bd37-48eb8a0a85c9', u'reviewDomain': u'PRODUCT'}, u'loginType': u'LEGACY_LOGIN', u'url': None, u'fallback': None, u'type': u'REVIEW_VOTE', u'omnitureData': None, u'screenType': None, u'tracking': {}}, u'fixed': False, u'value': {u'count': 0, u'type': u'VoteValue'}, u'tracking': None}, u'id': u'a9f2f6a0-2272-4187-bd37-48eb8a0a85c9', u'author': u'Flipkart Customer', u'url': u'/reviews/a9f2f6a0-2272-4187-bd37-48eb8a0a85c9', u'upvote': {u'action': {u'originalUrl': None, u'params': {u'vote': u'UP', u'reviewId': u'a9f2f6a0-2272-4187-bd37-48eb8a0a85c9', u'reviewDomain': u'PRODUCT'}, u'loginType': u'LEGACY_LOGIN', u'url': None, u'fallback': None, u'type': u'REVIEW_VOTE', u'omnitureData': None, u'screenType': None, u'tracking': {}}, u'fixed': False, u'value': {u'count': 5, u'type': u'VoteValue'}, u'tracking': None}, u'helpfulCount': 5, u'created': u'17 Aug, 2016', u'certifiedBuyer': True, u'title': u"It's very good", u'type': u'ProductReviewValue'}, u'tracking': None}
{u'action': None, u'fixed': False, u'value': {u'rating': 5, u'text': u'This Phone is awesome..Must Buy', u'reportAbuse': {u'action': {u'originalUrl': None, u'params': {u'vote': u'ABUSE', u'reviewId': u'cf8cf2c8-1f79-4d56-a4cd-e641ffb3551b', u'reviewDomain': u'PRODUCT'}, u'loginType': u'LEGACY_LOGIN', u'url': None, u'fallback': None, u'type': u'REVIEW_VOTE', u'omnitureData': None, u'screenType': None, u'tracking': {}}, u'fixed': False, u'value': {u'count': 0, u'type': u'VoteValue'}, u'tracking': None}, u'totalCount': 5, u'downvote': {u'action': {u'originalUrl': None, u'params': {u'vote': u'DOWN', u'reviewId': u'cf8cf2c8-1f79-4d56-a4cd-e641ffb3551b', u'reviewDomain': u'PRODUCT'}, u'loginType': u'LEGACY_LOGIN', u'url': None, u'fallback': None, u'type': u'REVIEW_VOTE', u'omnitureData': None, u'screenType': None, u'tracking': {}}, u'fixed': False, u'value': {u'count': 0, u'type': u'VoteValue'}, u'tracking': None}, u'id': u'cf8cf2c8-1f79-4d56-a4cd-e641ffb3551b', u'author': u'Durvank Aregekar', u'url': u'/reviews/cf8cf2c8-1f79-4d56-a4cd-e641ffb3551b', u'upvote': {u'action': {u'originalUrl': None, u'params': {u'vote': u'UP', u'reviewId': u'cf8cf2c8-1f79-4d56-a4cd-e641ffb3551b', u'reviewDomain': u'PRODUCT'}, u'loginType': u'LEGACY_LOGIN', u'url': None, u'fallback': None, u'type': u'REVIEW_VOTE', u'omnitureData': None, u'screenType': None, u'tracking': {}}, u'fixed': False, u'value': {u'count': 5, u'type': u'VoteValue'}, u'tracking': None}, u'helpfulCount': 5, u'created': u'10 Aug, 2016', u'certifiedBuyer': True, u'title': u'Must Buy', u'type': u'ProductReviewValue'}, u'tracking': None}
{u'action': None, u'fixed': False, u'value': {u'rating': 5, u'text': u'It is a good phone', u'reportAbuse': {u'action': {u'originalUrl': None, u'params': {u'vote': u'ABUSE', u'reviewId': u'ce31beb5-5c8f-4a2d-be7d-aba416592df2', u'reviewDomain': u'PRODUCT'}, u'loginType': u'LEGACY_LOGIN', u'url': None, u'fallback': None, u'type': u'REVIEW_VOTE', u'omnitureData': None, u'screenType': None, u'tracking': {}}, u'fixed': False, u'value': {u'count': 0, u'type': u'VoteValue'}, u'tracking': None}, u'totalCount': 5, u'downvote': {u'action': {u'originalUrl': None, u'params': {u'vote': u'DOWN', u'reviewId': u'ce31beb5-5c8f-4a2d-be7d-aba416592df2', u'reviewDomain': u'PRODUCT'}, u'loginType': u'LEGACY_LOGIN', u'url': None, u'fallback': None, u'type': u'REVIEW_VOTE', u'omnitureData': None, u'screenType': None, u'tracking': {}}, u'fixed': False, u'value': {u'count': 0, u'type': u'VoteValue'}, u'tracking': None}, u'id': u'ce31beb5-5c8f-4a2d-be7d-aba416592df2', u'author': u'Sourabh Jain', u'url': u'/reviews/ce31beb5-5c8f-4a2d-be7d-aba416592df2', u'upvote': {u'action': {u'originalUrl': None, u'params': {u'vote': u'UP', u'reviewId': u'ce31beb5-5c8f-4a2d-be7d-aba416592df2', u'reviewDomain': u'PRODUCT'}, u'loginType': u'LEGACY_LOGIN', u'url': None, u'fallback': None, u'type': u'REVIEW_VOTE', u'omnitureData': None, u'screenType': None, u'tracking': {}}, u'fixed': False, u'value': {u'count': 5, u'type': u'VoteValue'}, u'tracking': None}, u'helpfulCount': 5, u'created': u'9 Aug, 2016', u'certifiedBuyer': True, u'title': u'Good phone', u'type': u'ProductReviewValue'}, u'tracking': None}
{u'action': None, u'fixed': False, u'value': {u'rating': 5, u'text': u'delivery is in time but my phone is heat will data is on plz check', u'reportAbuse': {u'action': {u'originalUrl': None, u'params': {u'vote': u'ABUSE', u'reviewId': u'1fcf5a13-edef-4b16-8372-8732819c143c', u'reviewDomain': u'PRODUCT'}, u'loginType': u'LEGACY_LOGIN', u'url': None, u'fallback': None, u'type': u'REVIEW_VOTE', u'omnitureData': None, u'screenType': None, u'tracking': {}}, u'fixed': False, u'value': {u'count': 0, u'type': u'VoteValue'}, u'tracking': None}, u'totalCount': 9, u'downvote': {u'action': {u'originalUrl': None, u'params': {u'vote': u'DOWN', u'reviewId': u'1fcf5a13-edef-4b16-8372-8732819c143c', u'reviewDomain': u'PRODUCT'}, u'loginType': u'LEGACY_LOGIN', u'url': None, u'fallback': None, u'type': u'REVIEW_VOTE', u'omnitureData': None, u'screenType': None, u'tracking': {}}, u'fixed': False, u'value': {u'count': 1, u'type': u'VoteValue'}, u'tracking': None}, u'id': u'1fcf5a13-edef-4b16-8372-8732819c143c', u'author': u'Santhoaha m n santhu', u'url': u'/reviews/1fcf5a13-edef-4b16-8372-8732819c143c', u'upvote': {u'action': {u'originalUrl': None, u'params': {u'vote': u'UP', u'reviewId': u'1fcf5a13-edef-4b16-8372-8732819c143c', u'reviewDomain': u'PRODUCT'}, u'loginType': u'LEGACY_LOGIN', u'url': None, u'fallback': None, u'type': u'REVIEW_VOTE', u'omnitureData': None, u'screenType': None, u'tracking': {}}, u'fixed': False, u'value': {u'count': 8, u'type': u'VoteValue'}, u'tracking': None}, u'helpfulCount': 8, u'created': u'12 Aug, 2016', u'certifiedBuyer': True, u'title': u'very good', u'type': u'ProductReviewValue'}, u'tracking': None}
{u'action': None, u'fixed': False, u'value': {u'rating': 4, u'text': u'Good Product by Samsung\n\nThe things from this phone is\n 1. Marshmellow v6.0\n 2. Front flash with 5mb camera not so good\n 3. Its design\n 4. Primary Camera is not so good with 13mb led flash\n 5. Battery life is also not so good\n 6. Its size is correct in its design\n 7. Supports OTG\n 9. Only 2GB RAM\n 10. 16GB Internal storage but only 11GB is availiable\n 11. 4G supports\n 12. Ultra power saving mode\n 13. S bike mode\n 14. Speaker volume is not so good\n 15. 3G supports\n 16. Ultra data saving\n 17. No auto brightness\n 18. 2G supports\n 19. Top performance \n 20. Good phone at the price 14000\n *********************', u'reportAbuse': {u'action': {u'originalUrl': None, u'params': {u'vote': u'ABUSE', u'reviewId': u'958efa75-1b67-4872-9f71-b18035fafe6a', u'reviewDomain': u'PRODUCT'}, u'loginType': u'LEGACY_LOGIN', u'url': None, u'fallback': None, u'type': u'REVIEW_VOTE', u'omnitureData': None, u'screenType': None, u'tracking': {}}, u'fixed': False, u'value': {u'count': 0, u'type': u'VoteValue'}, u'tracking': None}, u'totalCount': 20, u'downvote': {u'action': {u'originalUrl': None, u'params': {u'vote': u'DOWN', u'reviewId': u'958efa75-1b67-4872-9f71-b18035fafe6a', u'reviewDomain': u'PRODUCT'}, u'loginType': u'LEGACY_LOGIN', u'url': None, u'fallback': None, u'type': u'REVIEW_VOTE', u'omnitureData': None, u'screenType': None, u'tracking': {}}, u'fixed': False, u'value': {u'count': 5, u'type': u'VoteValue'}, u'tracking': None}, u'id': u'958efa75-1b67-4872-9f71-b18035fafe6a', u'author': u'Vaishnav ', u'url': u'/reviews/958efa75-1b67-4872-9f71-b18035fafe6a', u'upvote': {u'action': {u'originalUrl': None, u'params': {u'vote': u'UP', u'reviewId': u'958efa75-1b67-4872-9f71-b18035fafe6a', u'reviewDomain': u'PRODUCT'}, u'loginType': u'LEGACY_LOGIN', u'url': None, u'fallback': None, u'type': u'REVIEW_VOTE', u'omnitureData': None, u'screenType': None, u'tracking': {}}, u'fixed': False, u'value': {u'count': 15, u'type': u'VoteValue'}, u'tracking': None}, u'helpfulCount': 15, u'created': u'17 May, 2016', u'certifiedBuyer': True, u'title': u'By Expert -Vaishnav VJ', u'type': u'ProductReviewValue'}, u'tracking': None}
{u'action': None, u'fixed': False, u'value': {u'rating': 4, u'text': u'Very nice device', u'reportAbuse': {u'action': {u'originalUrl': None, u'params': {u'vote': u'ABUSE', u'reviewId': u'c7177dfb-39c2-4c0b-8bbd-288f96757c3a', u'reviewDomain': u'PRODUCT'}, u'loginType': u'LEGACY_LOGIN', u'url': None, u'fallback': None, u'type': u'REVIEW_VOTE', u'omnitureData': None, u'screenType': None, u'tracking': {}}, u'fixed': False, u'value': {u'count': 0, u'type': u'VoteValue'}, u'tracking': None}, u'totalCount': 4, u'downvote': {u'action': {u'originalUrl': None, u'params': {u'vote': u'DOWN', u'reviewId': u'c7177dfb-39c2-4c0b-8bbd-288f96757c3a', u'reviewDomain': u'PRODUCT'}, u'loginType': u'LEGACY_LOGIN', u'url': None, u'fallback': None, u'type': u'REVIEW_VOTE', u'omnitureData': None, u'screenType': None, u'tracking': {}}, u'fixed': False, u'value': {u'count': 0, u'type': u'VoteValue'}, u'tracking': None}, u'id': u'c7177dfb-39c2-4c0b-8bbd-288f96757c3a', u'author': u'Flipkart Customer', u'url': u'/reviews/c7177dfb-39c2-4c0b-8bbd-288f96757c3a', u'upvote': {u'action': {u'originalUrl': None, u'params': {u'vote': u'UP', u'reviewId': u'c7177dfb-39c2-4c0b-8bbd-288f96757c3a', u'reviewDomain': u'PRODUCT'}, u'loginType': u'LEGACY_LOGIN', u'url': None, u'fallback': None, u'type': u'REVIEW_VOTE', u'omnitureData': None, u'screenType': None, u'tracking': {}}, u'fixed': False, u'value': {u'count': 4, u'type': u'VoteValue'}, u'tracking': None}, u'helpfulCount': 4, u'created': u'8 Sep, 2016', u'certifiedBuyer': True, u'title': u'Good quality product', u'type': u'ProductReviewValue'}, u'tracking': None}
The `x-user-agent` header is required; without it you will get a 403. You can play
around with the parameters to see different results, I will leave that up to
you.
|
Python appending from previous for loop iteration
Question: I have a very simple but annoying problem. I am reading in a list of files one
by one whose names are stored in an ascii file ("file_input.txt") and
performing calculations on them. My issue is that when I print out the result
of the calculation ("print peak_wv, peak_flux" in the script below), the results
from the previous files are printed again as well. Below is the code I have written;
please help me see what I'm doing wrong here.
from math import*
wv = []
flux = []
fits = []
p = open("file_input.txt","r")
for line in p:
fits.append(str(line.split()[0]))
p.close()
for j in range(len(fits)):
f = open("%s"%(fits[j]),"r")
for line in f:
wv.append(float(line.split()[0]))
flux.append(float(line.split()[1]))
f.close()
print "%s"%(fits[j])
for i in range(len(wv)):
if 6555.0<wv[i]<6569.0:
m1 = (flux[i+1]-flux[i])/(wv[i+1] - wv[i])
m2 = (flux[i+2]-flux[i+1])/(wv[i+2] - wv[i+1])
if m2*m1 < 0:
peak_wv = (wv[i+2]+wv[i+1]+wv[i])/3.0
peak_flux = flux[i+1]
print peak_wv, peak_flux
Answer: Based on your comment I believe the issue is that you are appending the data
with each new file. You probably want to clear wv and flux for each new file.
For example:
for j in range(len(fits)):
wv = []
flux = []
f = open("%s"%(fits[j]),"r")
* * *
Edit: I should also point out that you aren't actually using any math
functions so you don't need that import, and there are a bunch of ways to make
this code more pythonic. You can use the "with open" idiom to avoid having to
manually close the file. You can also use basic for loops and/or "enumerate"
to make your for loops cleaner. For example, this:
for j in range(len(fits)):
f = open("%s"%(fits[j]),"r")
# code
f.close()
Could be this:
for file in fits:
with open(file, "r") as f:
# code
And this:
for i in range(len(wv)):
if 6555.0<wv[i]<6569.0:
Could be:
for i, cur_wv in enumerate(wv):
if 6555.0 < cur_wv < 6569.0:
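Putting those suggestions together, here is a sketch of the reworked loop (lists reset per file, files closed via with, and the inner loop stopped two elements early so the i+2 lookahead stays in range -- that last tweak goes slightly beyond the original code):

    for fits_name in fits:
        wv = []
        flux = []
        with open(fits_name, "r") as f:
            for line in f:
                cols = line.split()
                wv.append(float(cols[0]))
                flux.append(float(cols[1]))
        print fits_name
        for i in range(len(wv) - 2):
            if 6555.0 < wv[i] < 6569.0:
                m1 = (flux[i+1] - flux[i]) / (wv[i+1] - wv[i])
                m2 = (flux[i+2] - flux[i+1]) / (wv[i+2] - wv[i+1])
                if m2 * m1 < 0:
                    peak_wv = (wv[i+2] + wv[i+1] + wv[i]) / 3.0
                    peak_flux = flux[i+1]
                    print peak_wv, peak_flux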
|
Hidden references to function arguments causing big memory usage?
Question: **Edit:** Never mind, I was just being completely stupid.
I came across code with recursion on smaller and smaller substrings, here's
its essence plus my testing stuff:
def f(s):
if len(s) == 2**20:
input('check your memory usage again')
else:
f(s[1:])
input('check your memory usage, then press enter')
f('a' * (2**20 + 500))
Before the call, my Python process takes about 9 MB (as checked by Windows
task manager). After the 500 levels with ~1MB strings, it's at about 513 MB.
No surprise, as each call level is still holding on to its string in its `s`
variable.
But I tried to fix it by **replacing** the reference to the string with a
reference to the new string and it still goes up to 513 MB:
def f(s):
if len(s) == 2**20:
input('check your memory usage again')
else:
s = s[1:]
f(s)
input('check your memory usage, then press enter')
f('a' * (2**20 + 500))
Why doesn't that let go of the memory? The strings even only get smaller, so
later strings would easily fit into the space of earlier strings. Are there
hidden additional references to the strings somewhere or what is going on?
I had expected it to behave like this, which only goes up to 10 MB (a change
of 1 MB, as expected because the new string is built while the old string
still exists):
input('check your memory usage, then press enter')
s = 'a' * (2**20 + 500)
while len(s) != 2**20:
s = s[1:]
input('check your memory usage again')
(Never mind the poor time complexity, btw, I know that, don't bother.)
Answer: Your function is recursive, so when you call `f()`, your current frame is put
onto a stack, and a new one is created. So basically each function call keeps
a reference to the new string it creates to pass down to the next call.
To illustrate the stack
import traceback
def recursive(x):
if x:
recursive(x[1:])
else:
traceback.print_stack()
recursive('abc')
Gives
$ python tmp.py
File "tmp.py", line 10, in <module>
recursive('abc')
File "tmp.py", line 5, in recursive
recursive(x[1:])
File "tmp.py", line 5, in recursive
recursive(x[1:])
File "tmp.py", line 5, in recursive
recursive(x[1:])
File "tmp.py", line 7, in recursive
traceback.print_stack()
When the final call to `recursive()` returns, it returns back into the next
call above it which still has the reference to `x`.
> But I tried to fix it by **replacing** the reference to the string with a
> reference to the new string and it still goes up to 513 MB
Well you did in the current function being called, but the function which
called it still has the reference to what was passed in. e.g.
def foo(x):
print "foo1", locals()
bar(x)
print "foo2", locals()
def bar(x):
print "bar1", locals()
x = "something else"
print "bar2", locals()
foo('original thing')
When `foo()` is called, it passes the string `'original thing'` to `bar()`.
And even though `bar()` then gets rid of the reference, the current call above
to `foo()` still has the reference
$ python tmp_x.py
foo1 {'x': 'original thing'}
bar1 {'x': 'original thing'}
bar2 {'x': 'something else'}
foo2 {'x': 'original thing'}
I hope that illustrates it. I have been a little vague in my first statement
about stack frames.
|
Value Error: x and y must have the same first dimension
Question: Let me quickly brief you first: I am working with a .txt file with 5400 data
points. Each is a 16-second average over a 24-hour period (24 hrs * 3600 s/hr
= 86400; 86400/16 = 5400). In short, this is the average magnetic field strength in
the z direction for an inbound particle field, courtesy of the Advanced
Composition Explorer satellite. The data is publicly available
[here](http://www.srl.caltech.edu/ACE/ASC/level2/index.html). So when I try to
plot it, I get the error
Value Error: x and y must have the same first dimension
So I created a numpy linspace of 5400 points broken apart by 16 units. I did
this because I thought that my dimensions didn't match with my previous array
that I had defined. But now I am sure these two arrays are of the same
dimension and yet it still gives back that ValueError. The code is as
follows:
**First try (without the linspace):**
import numpy as np
import matplotlib.pyplot as plt
import datetime as dt
Bz = np.loadtxt(r"C:\Users\Schmidt\Desktop\Project\Data\ACE\MAG\ACE_MAG_Data_20151202_GSM.txt", dtype = bytes).astype(float)
Start_ACE = dt.date(2015,12,2)
Finish_ACE = dt.date(2015,12,2)
dt_Mag = 16
time_Mag = np.arange(Start_ACE, Finish_ACE, dt_Mag)
plt.subplot(3,1,1)
plt.plot(time_Mag, Bz)
plt.title('Bz 2015 12 02')
**Second Try (with linspace):**
import numpy as np
import matplotlib.pyplot as plt
Bz = np.loadtxt(r"C:\Users\Schmidt\Desktop\Project\Data\ACE\MAG\ACE_MAG_Data_20151202_GSM.txt", dtype = bytes).astype(float)
Mag_time = np.linspace(0,5399,16, dtype = float)
plt.subplot(3,1,1)
plt.plot(Mag_time, Bz)
plt.title('Bz 2015 12 02')
Other than it being a dimensional problem I don't know what else could be
holding back this plotting procedure back.
**Full traceback:**
ValueError Traceback (most recent call last)
<ipython-input-68-c5dc0bdf5117> in <module>()
1 plt.subplot(3,1,1)
----> 2 plt.plot(Mag_time, Bz)
3 plt.title('Bz 2015 12 02')
C:\Users\Schmidt\Anaconda3\lib\site-packages\matplotlib\pyplot.py in plot(*args, **kwargs)
3152 ax.hold(hold)
3153 try:
-> 3154 ret = ax.plot(*args, **kwargs)
3155 finally:
3156 ax.hold(washold)
C:\Users\Schmidt\Anaconda3\lib\site-packages\matplotlib\__init__.py in inner(ax, *args, **kwargs)
1809 warnings.warn(msg % (label_namer, func.__name__),
1810 RuntimeWarning, stacklevel=2)
-> 1811 return func(ax, *args, **kwargs)
1812 pre_doc = inner.__doc__
1813 if pre_doc is None:
C:\Users\Schmidt\Anaconda3\lib\site-packages\matplotlib\axes\_axes.py in plot(self, *args, **kwargs)
1422 kwargs['color'] = c
1423
-> 1424 for line in self._get_lines(*args, **kwargs):
1425 self.add_line(line)
1426 lines.append(line)
C:\Users\Schmidt\Anaconda3\lib\site-packages\matplotlib\axes\_base.py in _grab_next_args(self, *args, **kwargs)
384 return
385 if len(remaining) <= 3:
--> 386 for seg in self._plot_args(remaining, kwargs):
387 yield seg
388 return
C:\Users\Schmidt\Anaconda3\lib\site-packages\matplotlib\axes\_base.py in _plot_args(self, tup, kwargs)
362 x, y = index_of(tup[-1])
363
--> 364 x, y = self._xy_from_xy(x, y)
365
366 if self.command == 'plot':
C:\Users\Schmidt\Anaconda3\lib\site-packages\matplotlib\axes\_base.py in _xy_from_xy(self, x, y)
221 y = _check_1d(y)
222 if x.shape[0] != y.shape[0]:
--> 223 raise ValueError("x and y must have same first dimension")
224 if x.ndim > 2 or y.ndim > 2:
225 raise ValueError("x and y can be no greater than 2-D")
ValueError: x and y must have same first dimension
Answer: The problem was the choice of array-creation function. Instead of linspace, I
should have used arange.
Mag_time = np.arange(0,86400, 16, dtype = float)
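For what it's worth, the mismatch comes from the arguments: np.linspace(0, 5399, 16) asks for 16 evenly spaced points, not 5400 points spaced 16 apart, so the x array had length 16 while Bz had length 5400. A quick check:

    import numpy as np

    print(np.linspace(0, 5399, 16).shape)   # (16,)   -- 16 points between 0 and 5399
    print(np.arange(0, 86400, 16).shape)    # (5400,) -- one point every 16 units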
|
Calling mpmath directly from C
Question: I want to access mpmath's special functions from C code. I know how to do it
via an intermediate python script. For instance, in order to evaluate the
hypergeometric function, the C program:
#include <Python.h>
void main (int argc, char *argv[])
{
int npars= 4;
double a1, a2, b1, x, res;
PyObject *pName, *pModule, *pFunc, *pArgs, *pValue;
PyObject *pa1, *pa2, *pb1, *px;
a1= atof(argv[1]);
a2= atof(argv[2]);
b1= atof(argv[3]);
x= atof(argv[4]);
setenv("PYTHONPATH", ".", 1); // Set PYTHONPATH TO bin directory
Py_Initialize();
pa1= PyFloat_FromDouble(a1);
pa2= PyFloat_FromDouble(a2);
pb1= PyFloat_FromDouble(b1);
px= PyFloat_FromDouble(x);
pName = PyString_FromString("GGauss_2F1");
pModule = PyImport_Import(pName);
pFunc = PyObject_GetAttrString(pModule, "Gauss_2F1");
pArgs = PyTuple_Pack(npars, pa1, pa2, pb1, px);
pValue = PyObject_CallObject(pFunc, pArgs);
res= PyFloat_AsDouble(pValue);
printf("2F1(x)= %.15f\n", res);
}
works all right by calling the GGauss_2F1.py script:
from mpmath import *
def Gauss_2F1(a1, a2, b1, z):
hpg= hyp2f1(a1, a2, b1, z)
return hpg
Is there a way to call the mpmath function hyp2f1 directly from C, without
having to resort to an intermediate python script? I guess that the mpmath
module can be imported by the command
PyRun_SimpleString("from mpmath import *");
But how do I access the actual function?
Answer: > What? No! Literally do the things you did to access GGauss_2F1.Gauss_2F1,
> just with the names changed. Why are you trying to PyRun_SimpleString("from
> mpmath import *")? – user2357112
Ok. Following your suggestions:
#include <Python.h>
void main (int argc, char *argv[])
{
int npars= 4;
double a1, a2, b1, x, res;
PyObject *pName, *pModule, *pFunc, *pArgs, *pValue;
PyObject *pa1, *pa2, *pb1, *px;
a1= atof(argv[1]);
a2= atof(argv[2]);
b1= atof(argv[3]);
x= atof(argv[4]);
setenv("PYTHONPATH", ".", 1); // Set PYTHONPATH TO bin directory
Py_Initialize();
pa1= PyFloat_FromDouble(a1);
pa2= PyFloat_FromDouble(a2);
pb1= PyFloat_FromDouble(b1);
px= PyFloat_FromDouble(x);
pName = PyString_FromString("mpmath");
pModule = PyImport_Import(pName);
pFunc = PyObject_GetAttrString(pModule, "hyp2f1");
pArgs = PyTuple_Pack(npars, pa1, pa2, pb1, px);
pValue = PyObject_CallObject(pFunc, pArgs);
res= PyFloat_AsDouble(pValue);
printf("2F1(x)= %.15f\n", res);
}
The code seems to work as expected and is generating the correct result. Thank
you for your "patience"...
|
Convert string type array to array
Question: I have this:
[s[8] = 5,
s[4] = 3,
s[19] = 2,
s[17] = 8,
s[16] = 8,
s[2] = 8,
s[9] = 7,
s[1] = 2,
s[3] = 9,
s[15] = 7,
s[11] = 0,
s[10] = 9,
s[12] = 3,
s[18] = 1,
s[0] = 4,
s[14] = 5,
s[7] = 4,
s[6] = 2,
s[5] = 7,
s[13] = 9]
How can I turn this into a python array where I can do `for items in x:` ?
Answer:
import re
data = """[s[8] = 5,
s[4] = 3,
s[19] = 2,
s[17] = 8,
s[16] = 8,
s[2] = 8,
s[9] = 7,
s[1] = 2,
s[3] = 9,
s[15] = 7,
s[11] = 0,
s[10] = 9,
s[12] = 3,
s[18] = 1,
s[0] = 4,
s[14] = 5,
s[7] = 4,
s[6] = 2,
s[5] = 7,
s[13] = 9]"""
d = {int(m.group(1)): int(m.group(2)) for m in re.finditer(r"s\[(\d*)\] = (\d*)", data)}
seq = [d.get(x) for x in range(max(d) + 1)]
print(seq)
#result: [4, 2, 8, 9, 3, 7, 2, 4, 5, 7, 9, 0, 3, 9, 5, 7, 8, 8, 1, 2]
|
Modules and variable scopes
Question: I'm not an expert at python, so bear with me while I try to understand the
nuances of variable scopes.
As a simple example that describes the problem I'm facing, say I have the
following three files.
The first file is outside_code.py. Due to certain restrictions I cannot modify
this file. It must be taken as is. It contains some code that runs an eval at
some point (yes, I know that eval is the spawn of satan but that's a
discussion for a later day). For example, let's say that it contains the
following lines of code:
def eval_string(x):
return eval(x)
The second file is a set of user defined functions. Let's call it
functions.py. It contains some unknown number of function definitions, for
example, let's say that functions.py contains one function, defined below:
def foo(x):
print("Your number is {}!".format(x))
Now I write a third file, let's call it main.py. Which contains the following
code:
import outside_code
from functions import *
outside_code.eval_string("foo(4)")
I import all of the function definitions from functions.py with a *, so they
should be accessible by main.py without needing to do something like
functions.foo(). I also import outside_code.py so I can access its core
functionality, the code that contains an eval. Finally I call the function in
outside_code.py, passing a string that is related to a function defined in
functions.py.
In the simplified example, I want the code to print out "Your number is 4!".
However, I get an error stating that 'foo' is not defined. This obviously
means that the code in outside_code.py cannot access the same foo function
that exists in main.py. So somehow I need to make foo accessible to it. Could
anyone tell me exactly what the scope of foo currently is, and how I could
extend it to cover the space that I actually want to use it in? What is the
best way to solve my problem?
Answer: `foo` has been imported into main.py; its scope is restricted to that file
(and to the file where it was originally defined, of course). It does not
exist within outside_code.py.
The real `eval` function accepts locals and globals dicts to allow you to add
elements to the namespace of the evaluted code. But you can't do anything if
your `eval_string` doesn't already pass those on.
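To make that concrete, here is a small sketch of what those two extra arguments do (purely illustrative; it does not change the fact that the unmodifiable eval_string above never passes them):

    def foo(x):
        print("Your number is {}!".format(x))

    # eval(expression, globals, locals): supplying a globals dict is what makes
    # names visible inside the evaluated string.
    eval("foo(4)", {"foo": foo})   # prints: Your number is 4!

    # Without an explicit namespace, eval falls back to the globals of the module
    # that calls it -- which is outside_code.py, where foo was never imported.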
|
How can I repeatedly play a sound sample, allowing the next loop to overlap the previous
Question: Not sure if this isn't a dupe, but the posts I found so far didn't solve my
issue.
* * *
A while ago, I wrote a (music) [metronome for
Ubuntu](http://askubuntu.com/a/814889/72216). The metronome is written in
`python3/Gtk`
To repeatedly play the metronome tick (a recorded sound sample), I used
`subprocess.Popen()` to play the sound, using `ogg123` as a cli tool:
subprocess.Popen(["ogg123", soundfile])
This works fine, I can easily run up to 240 beats per minute.
### On Windows
I decided to rewrite the project on Windows (`python3/tkinter/ttk`). I am
having a hard time, however, playing the sound when repeating the beat sample at
higher tempi. The next beat simply won't start while the previous one
(apparently) hasn't finished playing the beat sample yet.
Is there a way, in `python3` on Windows, I can start playing the next beat
while the sample is still playing?
Currently, I am using `winsound`:
winsound.PlaySound()
Running this in a loop has the issues mentioned above.
Answer: You can use [pydub](https://github.com/jiaaro/pydub) for audio manipulation,
including playing repeatedly.
Here is an example. You can develop this further using the examples on the
[pydub site](http://pydub.com/).
from pydub import AudioSegment
from pydub.playback import play
n = 2
audio = AudioSegment.from_file("sound.wav") #your audio file
play(audio * n) #Play audio 2 times
Change `n` above to the number that you need.
|
Unable to mock class methods using unittest in Python
Question: module `a.ClassA`:
class ClassA():
def __init__(self,callingString):
print callingString
def functionInClassA(self,val):
return val
module `b.ClassB`:
from a.ClassA import ClassA
class ClassB():
def __init__(self,val):
self.value=val
def functionInsideClassB(self):
obj=ClassA("Calling From Class B")
value=obj.functionInClassA(self.value)
Python `unittest` class
import unittest
from b.ClassB import ClassB
from mock import patch, Mock, PropertyMock,mock
class Test(unittest.TestCase):
@patch('b.ClassB.ClassA',autospec = True)
def _test_sample(self,classAmock):
dummyMock=Mock()
dummyMock.functionInClassA.return_value="mocking functionInClassA"
classAmock.return_value=dummyMock
obj=ClassB("dummy_val")
obj.functionInsideClassB()
assert dummyMock.functionInClassA.assert_called_once_with("dummy_val")
The assertion fails. Where exactly am I going wrong? I am using Python 2.7.
Answer: You assigned to `return_value` twice:
classAmock.return_value=dummyMock
classAmock.return_value=Mock()
That second assignment undoes your work setting up `dummyMock` entirely; the
new `Mock` instance has no `functionInClassA` attribute set up.
You don't need to create new mock objects; just use the default `return_value`
attribute value:
class Test(unittest.TestCase):
@patch('b.ClassB.ClassA', autospec=True)
def test_sample(self, classAmock):
instance = classAmock.return_value
instance.functionInClassA.return_value = "mocking functionInClassA"
obj = ClassB("dummy_val")
obj.functionInsideClassB()
instance.functionInClassA.assert_called_once_with("dummy_val")
You do **not** need to assert the return value of `assert_called_once_with()`
as that is always `None` (making your extra `assert` fail, always). Leave the
assertion to the `assert_called_once_with()` method, it'll raise as needed.
|
Grab retweeted status text in loop
Question: I am using the Python library tweepy to scrape Twitter data; the scraped data
is output as a csv. The retweets are truncated. I am looking for suggestions
on how I could modify the code below to grab the "retweeted_status.text" if
the retweeted_status is "True". It seems that I have to specify:
"api.user_timeline(screen_name = screen_name,count=200,include_rts=True)"
import sys
from urllib.request import urlopen
default_encoding = 'utf-8'
if sys.getdefaultencoding() != default_encoding:
reload(sys)
sys.setdefaultencoding(default_encoding)
import tweepy #https://github.com/tweepy/tweepy
import csv
#Twitter API credentials
consumer_key = ""
consumer_secret = ""
access_key = ""
access_secret = ""
screenNamesList = [
''
]
def get_all_tweets(screen_name):
#Twitter only allows access to a users most recent 3240 tweets with this method
#authorize twitter, initialize tweepy
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_key, access_secret)
api = tweepy.API(auth)
#initialize a list to hold all the tweepy Tweets
alltweets = []
#make initial request for most recent tweets (200 is the maximum allowed count)
new_tweets = api.user_timeline(screen_name = screen_name,count=200)
#save most recent tweets
alltweets.extend(new_tweets)
#save the id of the oldest tweet less one
oldest = alltweets[-1].id - 1
#keep grabbing tweets until there are no tweets left to grab
while len(new_tweets) > 0:
#print "getting tweets before %s" % (oldest)
#all subsiquent requests use the max_id param to prevent duplicates
new_tweets = api.user_timeline(screen_name = screen_name,count=200,max_id=oldest)
#save most recent tweets
alltweets.extend(new_tweets)
#update the id of the oldest tweet less one
oldest = alltweets[-1].id - 1
#print "...%s tweets downloaded so far" % (len(alltweets))
#transform the tweepy tweets into a 2D array that will populate the csv
outtweets = [[tweet.id_str, tweet.created_at, tweet.text, tweet.retweet_count, tweet.favorite_count, tweet.author.followers_count, tweet.author.description] for tweet in alltweets]
#write the csv
with open('%s_tweets.csv' % screen_name, 'w', newline='', encoding='utf-8-sig') as f:
writer = csv.writer(f)
writer.writerow(["id", "created_at", "text", "retweet_count","favorite_count", "followers_count", "description"])
writer.writerows(outtweets)
pass
if __name__ == '__main__':
#pass in the username of the account you want to download
for i, user in enumerate(screenNamesList):
get_all_tweets(screenNamesList[i])
i+=1
Answer: As you've discovered, the way Twitter handles retweets is a little strange.
In the JSON representation of the tweet you have something which looks like
this:
    - user: "you"
    - text: "RT @example this text gets truncated at the end of the tw..."
    - retweeted: "true"
    - retweeted_status:
        - user: "example"
        - text: "this text is the full text"
So, look for `"retweeted":"true"` in the response. If it is there, grab the
tweet text from `status->retweeted_status->text`
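A minimal sketch of how that check could be wired into the list comprehension from the question (based on the tweepy Status model, where the retweeted_status attribute is only present on retweets; the full_text helper name is just for illustration):

    def full_text(tweet):
        # use the original (untruncated) text when the tweet is a retweet
        if hasattr(tweet, 'retweeted_status'):
            return 'RT @%s: %s' % (tweet.retweeted_status.author.screen_name,
                                   tweet.retweeted_status.text)
        return tweet.text

    outtweets = [[tweet.id_str, tweet.created_at, full_text(tweet),
                  tweet.retweet_count, tweet.favorite_count,
                  tweet.author.followers_count, tweet.author.description]
                 for tweet in alltweets]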
|
How can I make my python binary converter pass these tests
Question: My Python code is supposed to take decimal numbers from 0 to 255 as arguments
and convert them to binary, and to return an "invalid input" message when the
parameter is less than 0 or greater than 255.
def binary_converter(x):
if (x < 0) or (x > 255):
return "invalid input"
try:
return int(bin(x)[2:])
except ValueError:
pass
The Test
import unittest
class BinaryConverterTestCases(unittest.TestCase):
def test_conversion_one(self):
result = binary_converter(0)
self.assertEqual(result, '0', msg='Invalid conversion')
def test_conversion_two(self):
result = binary_converter(62)
self.assertEqual(result, '111110', msg='Invalid conversion')
def test_no_negative_numbers(self):
result = binary_converter(-1)
self.assertEqual(result, 'Invalid input', msg='Input below 0 not allowed')
def test_no_numbers_above_255(self):
result = binary_converter(300)
self.assertEqual(result, 'Invalid input', msg='Input above 255 not allowed')
Answer: You already know how to check the range of the input argument, and how to
return values. Now it's a simple matter of returning what the assignment
requires.
In checking for valid input, all you've missed is to capitalize "Invalid".
For legal conversions, you just need to pass back the binary representation
without the leading "0b", which you've almost done (remove that integer
conversion, as two commenters have already noted).
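Putting the answer's two points together, here is a sketch of a version that passes the tests above:

    def binary_converter(x):
        if x < 0 or x > 255:
            return "Invalid input"
        return bin(x)[2:]   # bin(62) == '0b111110', so strip the '0b' prefix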
|
Python multiprocessing lock strange behavior
Question: I notice a behaviour in my code that I cannot explain. This is the code:
import multiprocessing
from collections import deque
LOCK = multiprocessing.Lock()
data = deque(['apple', 'orange', 'melon'])
def f(*args):
with LOCK:
data.rotate()
print data[0]
pool = multiprocessing.Pool()
pool.map(f, range(4))
I expect that the output would be
melon
orange
apple
melon
but instead I get
melon
melon
melon
Any ideas would be greatly appreciated.
Answer: As Tim Peters commented, the problem is not the `Lock` but that the `deque` is
not shared across the processes but every process will have their own copy.
There are some data structures provided by the `multiprocessing` module which
will be shared across processes, e.g.
[`multiprocessing.Queue`](https://docs.python.org/2/library/multiprocessing.html#multiprocessing.Queue).
Use that instead.
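As a minimal sketch of one way to share the data, here is a variant using multiprocessing.Manager, which can share both a lock and a list with the pool workers (the rotation is emulated with pop/insert, since a plain shared list has no rotate method; this is an alternative to the Queue mentioned above):

    import multiprocessing

    def f(args):
        lock, data = args
        with lock:
            data.insert(0, data.pop())   # same effect as deque.rotate(1)
            print(data[0])

    if __name__ == '__main__':
        manager = multiprocessing.Manager()
        lock = manager.Lock()
        data = manager.list(['apple', 'orange', 'melon'])
        pool = multiprocessing.Pool()
        pool.map(f, [(lock, data)] * 4)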
|
Trying to create a crude send/receive through TCP in python
Question: So far I can send files to my "fileserver" and retrieve files from there as
well, but I can't do both at the same time. I have to comment out one or the
other thread for them to work, as you will see in my code.
SERVER CODE
from socket import *
import threading
import os
# Send file function
def SendFile (name, sock):
filename = sock.recv(1024)
if os.path.isfile(filename):
sock.send("EXISTS " + str(os.path.getsize(filename)))
userResponse = sock.recv(1024)
if userResponse[:2] == 'OK':
with open(filename, 'rb') as f:
bytesToSend = f.read(1024)
sock.send(bytesToSend)
while bytesToSend != "":
bytesToSend = f.read(1024)
sock.send(bytesToSend)
else:
sock.send('ERROR')
sock.close()
def RetrFile (name, sock):
filename = sock.recv(1024)
data = sock.recv(1024)
if data[:6] == 'EXISTS':
filesize = long(data[6:])
sock.send('OK')
f = open('new_' + filename, 'wb')
data = sock.recv(1024)
totalRecieved = len(data)
f.write(data)
while totalRecieved < filesize:
data = sock.recv(1024)
totalRecieved += len(data)
f.write(data)
sock.close()
myHost = ''
myPort = 7005
s = socket(AF_INET, SOCK_STREAM)
s.bind((myHost, myPort))
s.listen(5)
print("Server Started.")
while True:
connection, address = s.accept()
print("Client Connection at:", address)
# u = threading.Thread(target=RetrFile, args=("retrThread", connection))
t = threading.Thread(target=SendFile, args=("sendThread", connection))
# u.start()
t.start()
s.close()
CLIENT CODE
from socket import *
import sys
import os
servHost = ''
servPort = 7005
s = socket(AF_INET, SOCK_STREAM)
s.connect((servHost, servPort))
decision = raw_input("do you want to send or retrieve a file?(send/retrieve): ")
if decision == "retrieve" or decision == "Retrieve":
filename = raw_input("Filename of file you want to retrieve from server: ") # ask user for filename
if filename != "q":
s.send(filename)
data = s.recv(1024)
if data[:6] == 'EXISTS':
filesize = long(data[6:])
message = raw_input("File Exists, " + str(filesize)+"Bytes, download?: Y/N -> ")
if message == "Y" or message == "y":
s.send('OK')
f = open('new_' + filename, 'wb')
data = s.recv(1024)
totalRecieved = len(data)
f.write(data)
while totalRecieved < filesize:
data = s.recv(1024)
totalRecieved += len(data)
f.write(data)
print("{0: .2f}".format((totalRecieved/float(filesize))*100)) + "% Done" # print % of download progress
print("Download Done!")
else:
print("File does not exist!")
s.close()
elif decision == "send" or decision == "Send":
filename = raw_input("Filename of file you want to send to server: ")
if filename != "q":
s.send(filename)
if os.path.isfile(filename):
s.send("EXISTS " + str(os.path.getsize(filename)))
userResponse = s.recv(1024)
if userResponse[:2] == 'OK':
with open(filename, 'rb') as f:
bytesToSend = f.read(1024)
s.send(bytesToSend)
while bytesToSend != "":
bytesToSend = f.read(1024)
s.send(bytesToSend)
else:
s.send('ERROR')
s.close()
s.close()
I'm still new to programming, so this is quite tough for me. All in all I'm
just trying to figure out how to send AND receive files without having to
comment out one of the threads in my SERVER CODE.
Please and thank you!
Answer: On the server side, you're trying to use the same connection for your two
threads t and u.
I think it might work if you listened for another connection in your `while
True:` loop on the server, after you started your first thread.
I always use the more high-level `socketserver` module ([Python Doc on
socketserver](https://docs.python.org/3.6/library/socketserver.html#socketserver-
tcpserver-example)), which also natively supports Threading. I recommend
checking it out!
By the way, since you do a lot of `if (x == 'r' or x == 'R')`: you could just
do `if x.lower() == 'r'`
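For reference, a bare-bones sketch of a threaded server with socketserver (Python 3 naming; the module is called SocketServer in Python 2). The one-line command protocol here is made up just for illustration; you would plug your own SendFile/RetrFile logic into handle():

    import socketserver

    class FileHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # hypothetical protocol, e.g. b"SEND name.txt" or b"RETR name.txt"
            command = self.rfile.readline().strip()
            print("got command:", command)
            # dispatch to your SendFile / RetrFile code here, using self.rfile/self.wfile

    if __name__ == "__main__":
        server = socketserver.ThreadingTCPServer(("", 7005), FileHandler)
        server.serve_forever()   # each client connection gets its own thread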
|
Python dropbox - Opening spreadsheets
Question: I was testing with the Dropbox-provided API for Python. My goal was to read
a spreadsheet in my Dropbox without downloading it to my local storage.
import dropbox
dbx = dropbox.Dropbox('my-token')
print dbx.users_get_current_account()
fl = dbx.files_get_preview('/CGPA.xlsx')[1] # returns a Response object
After the above code, calling the `fl.text()` method gives an HTML output
which shows the preview that would be seen if opened by browser. And the data
can be parsed.
My query is whether there is a built-in method of the SDK for getting any
particular piece of information from the spreadsheet, like the data of a row or a
cell, preferably in JSON format. I previously used butterdb for extracting
data from a Google Drive spreadsheet; is there such functionality for
Dropbox? I could not figure it out by reading the docs: <http://dropbox-sdk-python.readthedocs.io/en/master/>
Answer: No, the Dropbox API doesn't offer the ability to selectively query parts of a
spreadsheet file like this without downloading the whole file, but we'll
consider it a feature request.
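As a workaround sketch (assuming the openpyxl package for .xlsx parsing, which is not part of the Dropbox SDK), you can download the whole file into memory and then read individual cells locally:

    import io
    import dropbox
    from openpyxl import load_workbook

    dbx = dropbox.Dropbox('my-token')
    metadata, resp = dbx.files_download('/CGPA.xlsx')   # downloads the whole file
    wb = load_workbook(io.BytesIO(resp.content), read_only=True)
    ws = wb.active
    print(ws['A1'].value)                               # a single cell
    for row in ws.iter_rows(min_row=1, max_row=3):
        print([cell.value for cell in row])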
|
How to calculate the values of a pandas DataFrame column depending on the results of a rolling function from another column
Question:
A very simple example just for understanding.
**The goal is to calculate the values of a pandas DataFrame column depending
on the results of a rolling function from another column.**
I have the following DataFrame:
import numpy as np
import pandas as pd
s = pd.Series([1,2,3,2,1,2,3,2,1])
df = pd.DataFrame({'DATA':s, 'POINTS':0})
df
[screenshot of the df DataFrame with its DATA and POINTS columns](http://i.stack.imgur.com/8dZmD.png)
_Note: I don't even know how to format the Jupyter Notebook results in the
Stackoverflow edit window, so I copy and paste the image, I beg your pardon._
The **DATA** column shows the observed data; the **POINTS** column,
initialized to 0, is used to collect the output of a "rolling" function
applied to the DATA column, as explained in the following.
Set a window = 4
nwin = 4
Just for the example, **the "rolling" function calculates the max**.
Now let me use a drawing to explain what I need.
[drawing of the window of 4 rolling over the DATA column](http://i.stack.imgur.com/GflSG.png)
For every iteration, the rolling function calculates the maximum of the data in
the window; then the POINTS value at the same index as the max DATA is incremented
by 1.
The final result is:
[final DataFrame with the updated POINTS column](http://i.stack.imgur.com/7P1Z5.png)
Can you help me with the python code?
I really appreciate your help.
Thank you in advance for your time,
Gilberto
_P.S. Can you also suggest how to copy and paste Jupyter Notebook formatted
cell to Stackoverflow edit window? Thank you._
Answer: IIUC the explanation by @IanS (thanks again!), you can do
In [75]: np.array([df.DATA.rolling(4).max().shift(-i) == df.DATA for i in range(4)]).T.sum(axis=1)
Out[75]: array([0, 0, 3, 0, 0, 0, 3, 0, 0])
To update the column:
In [78]: df = pd.DataFrame({'DATA':s, 'POINTS':0})
In [79]: df.POINTS += np.array([df.DATA.rolling(4).max().shift(-i) == df.DATA for i in range(4)]).T.sum(axis=1)
In [80]: df
Out[80]:
DATA POINTS
0 1 0
1 2 0
2 3 3
3 2 0
4 1 0
5 2 0
6 3 3
7 2 0
8 1 0
|
Python: [Errno 2] No such file or directory
Question: I want to open and read all csv files in a specific folder. I'm on OS X El
Capitan version 10.11.6, and I'm using Python 2.7.10. I have the following
function in a Python file:
def open_csv_files(dir):
for root,dirs,files in os.walk(dir):
for file in files:
if file.endswith(".csv"):
f= open(file)
print "FILE OPEN, AND DO SOMETHING... "
f.close
return
I call `open_csv_files("./dati/esempi")`
This procedure returns
IOError: [Errno 2] No such file or directory: 'sensorfile_1.csv'
I tried calling the procedure with the absolute path
`/Users/Claudia/Desktop/Thesis/dati/esempi/`, but I get the same error.
Moreover, I defined another procedure that prints all the filenames in the folder,
and that procedure prints them all correctly.
Thanks for the help.
Answer: You need to build the full path to each file from the values of `root` (the
base dir) and the file name.
import os
def open_csv_files(directory):
for root, dirs, files in os.walk(directory):
for file_name in files:
if file_name.endswith(".csv"):
full_file_path = os.path.join(root, file_name)
with open(full_file_path) as fh:
print "Do something with", full_file_path
|
Binding outputs of transformers in FeatureUnion
Question: New to python and sklearn so apologies in advance. I have two transformers and
I would like to gather the results in a `FeatureUnion` (for a final modelling
step at the end). This should be quite simple but FeatureUnion is stacking the
outputs rather than providing an nx2 array or DataFrame. In the example below
I will generate some data that is 10 rows by 2 columns. This will then
generate two features that are 10 rows by 1 column. I would like the final
feature union to have 10 rows and 2 columns, but what I get is 20 rows by 1
column.
I will try to demonstrate with my example below:
some imports
import numpy as np
import pandas as pd
from sklearn import pipeline
from sklearn.base import TransformerMixin
some random data
df = pd.DataFrame(np.random.rand(10, 2), columns=['a', 'b'])
a custom transformer that selects a column
class Trans(TransformerMixin):
def __init__(self, col_name):
self.col_name = col_name
def fit(self, X):
return self
def transform(self, X):
return X[self.col_name]
a pipeline that uses the transformer twice (in my real case I have two
different transformers but this reproduces the problem)
pipe = pipeline.FeatureUnion([
('select_a', Trans('a')),
('select_b', Trans('b'))
])
Now I use the pipeline, but it returns an array of twice the length:
pipe.fit_transform(df).shape
(20,)
however I would like an array with dimensions (10, 2).
Quick fix?
Answer: The transformers in the `FeatureUnion` need to return 2-dimensional matrices;
however, by selecting a single column in your code, you are returning a 1-dimensional
vector. You could fix this by selecting the column with `X[[self.col_name]]`.
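A sketch of that one-line change in the transformer (double brackets return a one-column DataFrame, i.e. a 2-D object, instead of a Series):

    def transform(self, X):
        # X[[col]] keeps the result 2-D with shape (n_samples, 1)
        return X[[self.col_name]]

With that change, `pipe.fit_transform(df).shape` comes out as `(10, 2)`.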
|
Animated text function only working for certain strings
Question: I am attempting to make a function that displays animated text in Python
import sys
def anitext(str):
for char in str:
sys.stdout.write(char)
time.sleep(textspeed)
print ("")
This function is working for strings such as
anitext ("String")
And for sole variables such as
name = ("Stack")
anitext (name)
But will not work for input statements, or conjoined statements like
anitext (name, "This is a string")
Is there any way for this "Anitext" function to work on statements that are
not just plain strings?
_\- Olli E_
Answer: You just need to use argument unpacking. See [Arbitrary Argument
Lists](https://docs.python.org/3/tutorial/controlflow.html#arbitrary-argument-
lists) in the official Python tutorial.
import sys
import time
textspeed = 0.2
def anitext(*args):
for s in args:
for char in s:
sys.stdout.write(char)
sys.stdout.flush()
time.sleep(textspeed)
print("")
anitext("String", "Another string", "More stuff")
I've made a couple of other changes to your script. The `sys.stdout.flush()`
call ensures that the characters are actually printed one at a time; most
terminals will buffer whole lines of text, so you wouldn't actually see the
animation happening.
Also, I use `s` for the name of the current string being animated. You should
not use `str` as a variable name as that shadows the built-in `str` type. That
makes your code confusing to read and can also lead to subtle bugs.
|
Python, scipy.optimize.curve_fit do not fit to a linear equation where the slope is known
Question: I think I have a relatively simple problem but I have been trying now for a
few hours without luck. I am trying to fit a linear function (linearf) or
power-law function (plaw) where I already know the slope of these functions
(b; I have to keep it constant in this study). The results should give an
intercept around 1.8, something I have not managed to get. I must be doing something
wrong but I cannot put my finger on it. Does somebody have an idea how to
get around this problem?
Thank you in advance!
import numpy as np
from scipy import optimize
p2 = np.array([ 8.08543600e-06, 1.61708700e-06, 1.61708700e-05,
4.04271800e-07, 4.04271800e-06, 8.08543600e-07])
pD = np.array([ 12.86156, 16.79658, 11.52103, 21.092 , 14.47469, 18.87318])
# Power-law function
def plaw(a,x):
b=-0.1677 # known slope
y = a*(x**b)
return y
# linear function
def linearf(a,x):
b=-0.1677 # known slope
y = b*x + a
return y
## First way, via power-law function ##
popt, pcov = optimize.curve_fit(plaw,p2,pD,p0=1.8)
# array([ 7.12248200e-37]) wrong
popt, pcov = optimize.curve_fit(plaw,p2,pD)
# >>> return 0.9, it is wrong too (the results should be around 1.8)
## Second way, via log10 and linear function ##
x = np.log10(p2)
y = np.log10(pD)
popt, pcov = optimize.curve_fit(linearf,x,y,p0=0.3)
K = 10**popt[0]
## >>>> return 3.4712954470408948e-41, it is wrong
Answer: I just discovered an error in the functions: `curve_fit` expects the model as
`f(x, *params)`, with the independent variable first, so it should be:
def plaw(x,a):
b=-0.1677 # known slope
y = a*(x**b)
return y
and not
def plaw(a,x):
b=-0.1677 # known slope
y = a*(x**b)
return y
Stupid mistake.
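With the corrected signature, the original call works as intended — a minimal sketch reusing the data and imports from the question:

    popt, pcov = optimize.curve_fit(plaw, p2, pD, p0=1.8)
    print(popt[0])  # fitted prefactor, which should now come out near the expected ~1.8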
|
PyGobject error
Question:
#!/usr/bin/python
# -*- coding: utf-8 -*-
from gi.repository import Gtk
class ourwindow(Gtk.Window):
def __init__(self):
Gtk.Window.__init__(self, title="My Hello World Program")
Gtk.Window.set_default_size(self, 400,325)
Gtk.Window.set_position(self, Gtk.WindowPosition.CENTER)
button1 = Gtk.Button("Hello, World!")
button1.connect("clicked", self.whenbutton1_clicked)
self.add(button1)
def whenbutton1_clicked(self, button):
print "Hello, World!"
window = ourwindow()
window.connect("delete-event", Gtk.main_quit)
window.show_all()
Gtk.main()
This Python+GTK code is giving me the following error:
./pygtk.py
./pygtk.py:3: PyGIWarning: Gtk was imported without specifying a version first. Use gi.require_version('Gtk', '3.0') before import to ensure that the right version gets loaded.
from gi.repository import Gtk
Traceback (most recent call last):
File "./pygtk.py", line 4, in <module>
class ourwindow(Gtk.Window):
File "./pygtk.py", line 10, in ourwindow
button1.connect("clicked", self.whenbutton1_clicked)
NameError: name 'self' is not defined
It also gives me an indentation error. I am new to Python and GTK. Thanks in
advance.
Answer: The `button1` lines are indented at the class level, outside `__init__`, which is
why `self` is not defined there. This is most likely how it should be formatted:
#!/usr/bin/python
# -*- coding: utf-8 -*-
from gi.repository import Gtk
class ourwindow(Gtk.Window):
def __init__(self):
Gtk.Window.__init__(self, title="My Hello World Program")
Gtk.Window.set_default_size(self, 400,325)
Gtk.Window.set_position(self, Gtk.WindowPosition.CENTER)
button1 = Gtk.Button("Hello, World!")
button1.connect("clicked", self.whenbutton1_clicked)
self.add(button1)
def whenbutton1_clicked(self, button):
print "Hello, World!"
window = ourwindow()
window.connect("delete-event", Gtk.main_quit)
window.show_all()
Gtk.main()
I would definitely recommend reading a basic Python tutorial first, to at
least understand the syntax. It is easier to do GUI work once you know the basics
of the language.
|
How do I format a scientific number into decimal format in Python?
Question: I'm having trouble trying to convert the results of my "def pricing(question)"
function into decimal values instead of scientific.
I tried converting the result to a string, but that didn't work, and I can't see
any way of formatting the pricex variables where they are.
Any help is much appreciated
[Code](http://i.stack.imgur.com/9UmXK.png)
[](http://i.stack.imgur.com/9UmXK.png)
[Result](http://i.stack.imgur.com/GzSHS.png)
[](http://i.stack.imgur.com/GzSHS.png)
Answer: You need to use a formatting string.
>>> import math
>>> print(math.pi)
3.141592653589793
>>> print("{:.2f}".format(math.pi))
3.14
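The same approach works when the value is small enough that Python would otherwise show it in scientific notation — just choose enough decimal places (the value below is only an illustration, not from your code):

    >>> price = 1.5e-07
    >>> print(price)
    1.5e-07
    >>> print("{:.8f}".format(price))
    0.00000015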
|
Regex Search in Python: Exclude port 22 lines with ' line 22 '
Question: My current regex search in python looks for lines with `' 22 '`, but I would
like to exclude lines that have `' line 22 '`. How could I express this in
`Regex`? Would I be `'.*(^line) 22 .*$'`
import re
sshRegexString='.* 22 .*$'
sshRegexExpression=re.compile(sshRegexString)
Answer: Your current requirement to find a line that _contains_ ` 22 ` but does not
contain `line 22 ` can be implemented without the help of a regex.
Just check whether these texts are `in` or `not in` the string inside a list
comprehension. Here is a [Python demo](http://ideone.com/o2jFU7) (I assume you
have the lines in a list, but it can be adjusted to handle lines read from a file
one by one):
lines = ['Some text 1 line 22 here', 'Some text 2 Text2 22 here', 'Some text 3 Text3 22 here']
good = [s for s in lines if ' 22 ' in s and 'line 22 ' not in s]
print(good) # the first lines[0] is not printed!
|
youtube-dl python script postprocessing error: FFMPEG codecs aren't being recognized
Question: My python script is trying to download youtube videos with youtube-dl.py.
Works fine unless postprocessing is required. The code:
import youtube_dl
options = {
'format':'bestaudio/best',
'extractaudio':True,
'audioformat':'mp3',
'outtmpl':'%(id)s', #name the file the ID of the video
'noplaylist':True,
'nocheckcertificate':True,
'postprocessors': [{
'key': 'FFmpegExtractAudio',
'preferredcodec': 'mp3',
'preferredquality': '192',
}]
}
with youtube_dl.YoutubeDL(options) as ydl:
ydl.download(['http://www.youtube.com/watch?v=BaW_jenozKc'])
Below is the output I receive:[](http://i.stack.imgur.com/g3mBo.png)
I get a similar error if I try setting 'preferredcodec' to 'opus' or 'best'.
I'm not sure if this is relevant, but I can run the command line counterpart
fine:
youtube-dl -o 'test2.%(ext)s' --extract-audio --audio-format mp3 --no-check-certificate https://www.youtube.com/watch?v=BaW_jenozKc
I've gotten a few clues from the internet and other questions, and from what I
understand this is most likely an issue with my ffmpeg, which isn't a Python
module, right? Here is my ffmpeg version and configuration: [](http://i.stack.imgur.com/0Uprn.png)
If the answer to my problem is to add some configuration setting to my ffmpeg,
please explain how I go about doing that.
Answer: This is a bug in the interplay between youtube-dl and ffmpeg, caused by the
lack of extension in the filename. youtube-dl calls ffmpeg. Since the filename
does not contain any extension, youtube-dl asks ffmpeg to generate a temporary
file `mp3`. However, ffmpeg detects the output container type automatically by
the extension and fails because `mp3` has no extension.
As a workaround, simply add `%(ext)s` in your filename template:
'outtmpl': u'%(id)s.%(ext)s',
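Applied to the options dictionary from the question, only the `outtmpl` entry changes:

    options = {
        'format': 'bestaudio/best',
        'extractaudio': True,
        'audioformat': 'mp3',
        'outtmpl': '%(id)s.%(ext)s',  # extension added so ffmpeg can detect the container
        'noplaylist': True,
        'nocheckcertificate': True,
        'postprocessors': [{
            'key': 'FFmpegExtractAudio',
            'preferredcodec': 'mp3',
            'preferredquality': '192',
        }]
    }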
|
Calculate run time of a given function python
Question: I have created a function that takes in another function as a parameter and
calculates the run time of that particular function, but when I run it, I
cannot seem to understand why it is not working. Does anyone know why?
import time
import random
import timeit
import functools
def ListGenerator(rangeStart,rangeEnd,lenth):
sampleList = random.sample(range(rangeStart,rangeEnd),lenth)
return sampleList
def timeit(func):
@functools.wraps(func)
def newfunc(*args):
startTime = time.time()
func(*args)
elapsedTime = time.time() - startTime
print('function [{}] finished in {} ms'.format(
func.__name__, int(elapsedTime * 1000)))
return newfunc
@timeit
def bubbleSort(NumList):
compCount,copyCount= 0,0
for currentRange in range(len(NumList)-1,0,-1):
for i in range(currentRange):
compCount += 1
if NumList[i] > NumList[i+1]:
temp = NumList[i]
NumList[i] = NumList[i+1]
NumList[i+1] = temp
# print("Number of comparisons:",compCount)
NumList = ListGenerator(1,200,10)
print("Before running through soriting algorithm\n",NumList)
print("\nAfter running through soriting algorithm")
bubbleSort(NumList)
print(NumList,"\n")
for i in range (0, 10, ++1):
print("\n>Test run:",i+1)
bubbleSort(NumList)
compCount = ((len(NumList))*((len(NumList))-1))/2
print("Number of comparisons:",compCount)
run time screen shot [](http://i.stack.imgur.com/i5M2f.png)
Answer: It looks like the code just executes incredibly fast, so `int(elapsedTime * 1000)`
rounds down to 0 ms. In `bubbleSort`, I added an additional `for` loop to execute
the comparisons another `10000` times:
@timeit
def bubbleSort(NumList):
compCount,copyCount= 0,0
for i in range(10000):
for currentRange in range(len(NumList)-1,0,-1):
for i in range(currentRange):
compCount += 1
if NumList[i] > NumList[i+1]:
temp = NumList[i]
NumList[i] = NumList[i+1]
NumList[i+1] = temp
Now the result is:
('Before running through soriting algorithm\n', [30, 18, 144, 28, 155, 183, 50, 101, 156, 26])
After running through soriting algorithm
function [bubbleSort] finished in 12 ms
([18, 26, 28, 30, 50, 101, 144, 155, 156, 183], '\n')
('\n>Test run:', 1)
function [bubbleSort] finished in 12 ms
('Number of comparisons:', 45)
('\n>Test run:', 2)
function [bubbleSort] finished in 8 ms
('Number of comparisons:', 45)
('\n>Test run:', 3)
etc... @vishes_shell points this out in the comments as well.
|
How to calculate Variable Importance in SVM regression models
Question: How do I calculate the variable importance of an
[SVM](https://en.wikipedia.org/wiki/Support_vector_machine) regression model
implemented in Python?
At least, if an already-implemented function does not exist, I would like some
hints on how to calculate it theoretically. I have already searched the
literature without any success. Any ideas?
Answer: You can use "l1" as the [penalty function](http://scikit-
learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html) to get a sparse
model. See [here](http://stats.stackexchange.com/questions/2179/variable-
importance-from-svm) for details. Then just inspect the coefficients of the
learned model.
Below we use the [iris
dataset](https://en.wikipedia.org/wiki/Iris_flower_data_set) as an example.
from sklearn import svm, datasets
iris = datasets.load_iris()
X = iris.data
y = iris.target
linear_svc = svm.LinearSVC(penalty='l1',dual=False).fit(X,y)
>>> linear_svc.coef_.shape
(3, 4)
# Note the three rows corresponds to the three classes in iris dataset
# the four columns corresponds to the four features.
>>> linear_svc.coef_
## feature sepal-l sepal-w petal-l petal-w
array([[ 0.09528818, 0.62495604, -0.90205483, 0. ], # 'setosa',
[ 0. , -0.93523507, 0.40433182, -0.89729973], # 'versicolor',
[-0.65391116, -0.94345497, 1.29683221, 2.10726403]]) # 'virginica'
When inspecting the variable importance, you should refer to
[this](https://upload.wikimedia.org/wikipedia/commons/5/56/Iris_dataset_scatterplot.svg)
graph at wikipedia.
For example, the first class (first row in `linear_svc.coef_`) - "setosa" (red
in the graph below: x-axis is sepal-width, y-axis is petal-length) - can be
perfectly separated (with a linear separator) from the other two classes with
only the two features in the middle. I think that's why the fourth feature has
a zero coefficient and the first feature's coefficient is very small. Because
given the two features - sepal-width and petal-length, it's sufficient to tell
whether the species is "setosa" or not.
[](http://i.stack.imgur.com/xVazf.png)
On the other hand, "versicolor" and "virginica" are harder to separate from
each other using a linear separator. Hence 'versicolor' has three non-zero
coefficients and 'virginica' has four.
**EDIT** : I just noticed you are talking about an SVM _regression_ model. It
should be pretty similar. But you might want to use [`SVR`](http://scikit-
learn.org/stable/modules/generated/sklearn.svm.LinearSVR.html).
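As a rough sketch of the regression case, you would fit the linear regressor and inspect `coef_` in the same way (the diabetes dataset below is purely an illustration, not from the question):

    from sklearn import svm, datasets

    diabetes = datasets.load_diabetes()
    X, y = diabetes.data, diabetes.target

    linear_svr = svm.LinearSVR().fit(X, y)

    # one coefficient per feature; larger absolute values indicate features
    # the linear model leans on more heavily
    print(linear_svr.coef_)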
|
Using multiple levels of inheritance with sqlalchemy declarative base
Question: I have many tables with identical columns. The difference is the table names
themselves. I want to set up a inheritance chain to minimize code duplication.
The following single layer inheritance works the way I want it to:
from sqlalchemy import Column, Integer, Text
from sqlalchemy.ext.declarative import declarative_base, declared_attr
from sqlalchemy.orm import sessionmaker
engine = sqlalchemy.create_engine('sqlite:///monDom5.db')
class Base(object):
"""base for all table classes"""
__abstract__ = True
__table_args__ = {'autoload': True, 'autoload_with': engine}
@declared_attr
def __tablename__(cls):
return cls.__name__.lower()
Base = declarative_base(cls=Base)
class TransMap_HgmIntronVector(Base):
AlignmentId = Column(Text, primary_key=True)
But this requires me to specify the `AlignmentId` column for every subclass of the
`Hgm` base. I would instead like to do this, but get a
`sqlalchemy.exc.InvalidRequestError` when I try to actually use it:
from sqlalchemy import Column, Integer, Text
from sqlalchemy.ext.declarative import declarative_base, declared_attr
from sqlalchemy.orm import sessionmaker
engine = sqlalchemy.create_engine('sqlite:///monDom5.db')
class Base(object):
"""base for all table classes"""
__abstract__ = True
__table_args__ = {'autoload': True, 'autoload_with': engine}
@declared_attr
def __tablename__(cls):
return cls.__name__.lower()
# model for all Hgm tables
class Hgm(Base):
__abstract__ = True
AlignmentId = Column(Text, primary_key=True)
Base = declarative_base(cls=Hgm)
class TransMap_HgmIntronVector(Hgm):
pass
metadata = Base.metadata
Session = sessionmaker(bind=engine)
session = Session()
Leads to the error
>>> metadata = Base.metadata
>>> Session = sessionmaker(bind=engine)
>>> session = Session()
>>> session.query(TransMap_HgmIntronVector).all()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/cluster/home/ifiddes/anaconda2/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 1260, in query
return self._query_cls(entities, self, **kwargs)
File "/cluster/home/ifiddes/anaconda2/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 110, in __init__
self._set_entities(entities)
File "/cluster/home/ifiddes/anaconda2/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 118, in _set_entities
entity_wrapper(self, ent)
File "/cluster/home/ifiddes/anaconda2/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 3829, in __init__
"expected - got '%r'" % (column, )
sqlalchemy.exc.InvalidRequestError: SQL expression, column, or mapped entity expected - got '<class '__main__.TransMap_HgmIntronVector'>'
Answer: An example is in the
[docs](http://docs.sqlalchemy.org/en/latest/orm/extensions/declarative/mixins.html#augmenting-
the-base). In particular, `__abstract__ = True` is not necessary. This works
fine:
class Base(object):
@declared_attr
def __tablename__(cls):
return cls.__name__.lower()
class Hgm(Base):
AlignmentId = Column(Text, primary_key=True)
Base = declarative_base(cls=Hgm)
class TransMap_HgmIntronVector(Base):
pass
Note that it may be simpler to just use a mixin for the identical columns
instead.
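A minimal sketch of that mixin approach, assuming the same table and column names as above (untested against your schema):

    from sqlalchemy import Column, Text
    from sqlalchemy.ext.declarative import declarative_base, declared_attr

    Base = declarative_base()

    class HgmMixin(object):
        """Columns shared by all Hgm tables."""
        AlignmentId = Column(Text, primary_key=True)

        @declared_attr
        def __tablename__(cls):
            return cls.__name__.lower()

    class TransMap_HgmIntronVector(HgmMixin, Base):
        pass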
|
Extracting a row from a table from a url
Question: I want to download the EPS values for all years (under Annual Trends) from the
below link. [http://www.bseindia.com/stock-share-
price/stockreach_financials.aspx?scripcode=500180&expandable=0](http://www.bseindia.com/stock-
share-price/stockreach_financials.aspx?scripcode=500180&expandable=0)
I tried using Beautiful Soup as mentioned in the below answer. [Extracting
table contents from html with python and
BeautifulSoup](http://stackoverflow.com/questions/17196018/extracting-table-
contents-from-html-with-python-and-beautifulsoup) But couldn't proceed after
the below code. I feel I am very close to my answer. Any help will be greatly
appreciated.
from bs4 import BeautifulSoup
import urllib2
html = urllib2.urlopen("http://www.bseindia.com/stock-share-price/stockreach_financials.aspx?scripcode=500180&expandable=0").read()
soup=BeautifulSoup(html)
table = soup.find('table',{'id' :'acr'})
#the below code wasn't working as I expected it to be
tr = table.find('tr', text='EPS')
I am open to using any other language to get this done
Answer: The text is in the _td_ not the _tr_ so get the _td_ using the text and then
call _.parent_ to get the _tr_ :
In [12]: table = soup.find('table',{'id' :'acr'})
In [13]: tr = table.find('td', text='EPS').parent
In [14]: print(tr)
<tr><td class="TTRow_left" style="padding-left: 30px;">EPS</td><td class="TTRow_right">48.80</td>
<td class="TTRow_right">42.10</td>
<td class="TTRow_right">35.50</td>
<td class="TTRow_right">28.50</td>
<td class="TTRow_right">22.10</td>
</tr>
In [15]: [td.text for td in tr.select("td + td")]
Out[15]: [u'48.80', u'42.10', u'35.50', u'28.50', u'22.10']
Which you will see exactly matches what is on the page.
Another approach would be to call _find_next_siblings_ :
In [17]: tds = table.find('td', text='EPS').find_next_siblings("td")
In [18]: tds
Out[19]:
[<td class="TTRow_right">48.80</td>,
<td class="TTRow_right">42.10</td>,
<td class="TTRow_right">35.50</td>,
<td class="TTRow_right">28.50</td>,
<td class="TTRow_right">22.10</td>]
In [20]: [td.text for td in tds]
Out[20]: [u'48.80', u'42.10', u'35.50', u'28.50', u'22.10']
|
Need help adding API PUT method to Python script
Question: I am using the script below to collect inventory information from servers and
send it to a product called Device42. The script currently works; however, one
of the APIs that I'm trying to add uses PUT instead of POST. I'm not a
programmer and just started using Python with this script. The script is
run with IronPython. Can the PUT method be used in this script?
"""
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
"""
##################################################
# a sample script to show how to use
# /api/ip/add-or-update
# /api/device/add-or-update
#
# requires ironPython (http://ironpython.codeplex.com/) and
# powershell (http://support.microsoft.com/kb/968929)
##################################################
import clr
clr.AddReference('System.Management.Automation')
from System.Management.Automation import (
PSMethod, RunspaceInvoke
)
RUNSPACE = RunspaceInvoke()
import urllib
import urllib2
import traceback
import base64
import math
import ssl
import functools
BASE_URL='https://device42_URL'
API_DEVICE_URL=BASE_URL+'/api/1.0/devices/'
API_IP_URL =BASE_URL+'/api/1.0/ips/'
API_PART_URL=BASE_URL+'/api/1.0/parts/'
API_MOUNTPOINT_URL=BASE_URL+'/api/1.0/device/mountpoints/'
API_CUSTOMFIELD_URL=BASE_URL+'/api/1.0/device/custom_field/'
USER ='usernme'
PASSWORD ='password'
old_init = ssl.SSLSocket.__init__
@functools.wraps(old_init)
def init_with_tls1(self, *args, **kwargs):
kwargs['ssl_version'] = ssl.PROTOCOL_TLSv1
old_init(self, *args, **kwargs)
ssl.SSLSocket.__init__ = init_with_tls1
def post(url, params):
"""
http post with basic-auth
params is dict like object
"""
try:
data= urllib.urlencode(params) # convert to ascii chars
headers = {
'Authorization' : 'Basic '+ base64.b64encode(USER + ':' + PASSWORD),
'Content-Type' : 'application/x-www-form-urlencoded'
}
req = urllib2.Request(url, data, headers)
print '---REQUEST---',req.get_full_url()
print req.headers
print req.data
reponse = urllib2.urlopen(req)
print '---RESPONSE---'
print reponse.getcode()
print reponse.info()
print reponse.read()
except urllib2.HTTPError as err:
print '---RESPONSE---'
print err.getcode()
print err.info()
print err.read()
except urllib2.URLError as err:
print '---RESPONSE---'
print err
def to_ascii(s):
# ignore non-ascii chars
return s.encode('ascii','ignore')
def wmi(query):
return [dict([(prop.Name, prop.Value) for prop in psobj.Properties]) for psobj in RUNSPACE.Invoke(query)]
def closest_memory_assumption(v):
return int(256 * math.ceil(v / 256.0))
def add_or_update_device():
computer_system = wmi('Get-WmiObject Win32_ComputerSystem -Namespace "root\CIMV2"')[0] # take first
bios = wmi('Get-WmiObject Win32_BIOS -Namespace "root\CIMV2"')[0]
operating_system = wmi('Get-WmiObject Win32_OperatingSystem -Namespace "root\CIMV2"')[0]
environment = wmi('Get-WmiObject Win32Reg_ESFFarmNode -Namespace "root\CIMV2"')[0]
mem = closest_memory_assumption(int(computer_system.get('TotalPhysicalMemory')) / 1047552)
dev_name = to_ascii(computer_system.get('Name')).upper()
fqdn_name = to_ascii(computer_system.get('Name')).upper() + '.' + to_ascii(computer_system.get('Domain')).lower()
device = {
'memory' : mem,
'os' : to_ascii(operating_system.get('Caption')),
'osver' : operating_system.get('OSArchitecture'),
'osmanufacturer': to_ascii(operating_system.get('Manufacturer')),
'osserial' : operating_system.get('SerialNumber'),
'osverno' : operating_system.get('Version'),
'service_level' : environment.get('Environment'),
'notes' : 'Test w/ Change to Device name collection'
}
devicedmn = ''
for dmn in ['Domain1', 'Domain2', 'Domain3', 'Domain4', 'Domain5']:
if dmn == to_ascii(computer_system.get('Domain')).strip():
devicedmn = 'Domain'
device.update({ 'name' : fqdn_name, })
break
if devicedmn != 'Domain':
device.update({
'name': dev_name,
})
manufacturer = ''
for mftr in ['VMware, Inc.', 'Bochs', 'KVM', 'QEMU', 'Microsoft Corporation', 'Xen']:
if mftr == to_ascii(computer_system.get('Manufacturer')).strip():
manufacturer = 'virtual'
device.update({ 'manufacturer' : 'vmware', })
break
if manufacturer != 'virtual':
device.update({
'manufacturer': to_ascii(computer_system.get('Manufacturer')).strip(),
'hardware': to_ascii(computer_system.get('Model')).strip(),
'serial_no': to_ascii(bios.get('SerialNumber')).strip(),
'type': 'Physical',
})
cpucount = 0
for cpu in wmi('Get-WmiObject Win32_Processor -Namespace "root\CIMV2"'):
cpucount += 1
cpuspeed = cpu.get('MaxClockSpeed')
cpucores = cpu.get('NumberOfCores')
if cpucount > 0:
device.update({
'cpucount': cpucount,
'cpupower': cpuspeed,
'cpucore': cpucores,
})
hddcount = 0
hddsize = 0
for hdd in wmi('Get-WmiObject Win32_LogicalDisk -Namespace "root\CIMV2" | where{$_.Size -gt 1}'):
hddcount += 1
hddsize += hdd.get('Size') / 1073741742
if hddcount > 0:
device.update({
'hddcount': hddcount,
'hddsize': hddsize,
})
post(API_DEVICE_URL, device)
for hdd in wmi('Get-WmiObject Win32_LogicalDisk -Namespace "root\CIMV2" | where{$_.Size -gt 1}'):
mountpoint = {
'mountpoint' : hdd.get('Name'),
'label' : hdd.get('Caption'),
'fstype' : hdd.get('FileSystem'),
'capacity' : hdd.get('Size') / 1024 / 1024,
'free_capacity' : hdd.get('FreeSpace') / 1024 / 1024,
'device' : dev_name,
'assignment' : 'Device',
}
post(API_MOUNTPOINT_URL, mountpoint)
network_adapter_configuration = wmi('Get-WmiObject Win32_NetworkAdapterConfiguration -Namespace "root\CIMV2" | where{$_.IPEnabled -eq "True"}')
for ntwk in network_adapter_configuration:
for ipaddr in ntwk.get('IPAddress'):
ip = {
'ipaddress' : ipaddr,
'macaddress' : ntwk.get('MACAddress'),
'label' : ntwk.get('Description'),
'device' : dev_name,
}
post(API_IP_URL, ip)
def main():
try:
add_or_update_device()
except:
traceback.print_exc()
if __name__ == "__main__":
main()
Answer: OK, first things first: you need to understand the difference between PUT and
POST. I would write it out, but another member of the community gave a very
good description of the two
[here](http://stackoverflow.com/questions/107390/whats-the-difference-between-
a-post-and-a-put-http-request).
Now, yes, you can use requests with that script. Here is how to use the
requests library from Python; if you have pip installed, install the library
like this:
pip install requests
Now, let's go through some examples of using the Requests library; the
documentation can be found [here](http://docs.python-requests.org/en/master/).
HTTP GET request: for this example, you call the get function from the
requests library and give the URL as a parameter, then you can print the text
of the response object that is returned. Since GET returns something, it will
generally be in the text attribute of the response, which you can print.
r = requests.get('http://urlhere.com/apistuffhere')
print(r.text)
HTTP POST: posting to a URL will, depending on how the API was set up, return
something (it generally does, for error handling), and you also have to pass in
parameters. Here is an example of a POST request for a new user entry. And
again, you can print the text of the response object to check what the
API returned.
payload = {'username': 'myloginname', 'password': 'passwordhere'}
r = requests.post('https://testlogin.com/newuserEntry', params=payload)
print(r.text)
Alternatively you can print just `r`; a response of 200 indicates the request
was successful.
For PUT: keep in mind that PUT responses are not cacheable. The syntax is the
same as for POST, and you can inspect the returned response object (status code
and text) in the same way. I have not tried printing the text of a PUT response
with the Requests library myself, as I don't use PUT in any API I write.
requests.put('http://urlhere.com/putextension')
Now, to implement this in your code: you already have the base of the URL; for
the login POST, just do:
payload = {'username': USERNAME, 'passwd':PASSWORD}
r = requests.post('https://loginurlhere.com/', params=payload)
#check response by printing text
print (r.text)
As for putting data to an extension of your API, let us assume you already
have a payload variable ready with the info you need, for example for the API
device extension:
requests.put(API_DEVICE, params=payload)
And that should PUT to the URL. If you have any questions, comment below and I
can answer them if you would like.
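If you do move the script over to requests, a PUT helper mirroring the existing post() function could look roughly like this (a sketch, untested against Device42; the endpoint and payload in the usage comment are placeholders, not a known API call):

    import requests

    def put(url, params):
        """HTTP PUT with basic auth; params is a dict-like object."""
        r = requests.put(url, data=params, auth=(USER, PASSWORD))
        print('---RESPONSE---')
        print(r.status_code)
        print(r.text)

    # hypothetical usage -- the endpoint and payload depend on the API call you need:
    # put(API_DEVICE_URL, device)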
|
Python: registering key presses and saving responses to an array or matrix
Question: I am very new to Python, and I have been struggling with trying to find an
answer to this question for a while now.
I am using Python 3.5 to write an experiment script. I would like to write a
script that loops through a number of trials and saves which key was pressed
for each trial in a matrix. I am coding in Python's IDLE.
I attempted to write something using tkinter, but cannot get it to loop
through the trials and store them properly in the matrix. It is important that
pressing the "Enter" key after the key press is not required. Also, I know
msvcrt.getch() will not work in IDLE (I would like to try continue using
IDLE), and I cannot use Psychopy2 (for reasons that are too detailed to
explain here).
I am open to any suggestions/recommendations! Thank you so much in advance for
your time.
[I posted my draft code below to try to give people a better idea of what I am
trying to do. I know it is incorrect, though!]
import tkinter as tk
import numpy
num_trials = 5
response_matrix = numpy.zeros([num_trials,1]) # creating a matrix to fill
for x in range(0,num_trials-1): # looping through trials 0 to 4
def keypress(event):
key = event.char
if key == "1":
print("1 pressed")
response_matrix[x]=1 # assigning what key pressed to the matrix
elif key == "2":
print("2 pressed")
response_matrix[x]=2
elif key == "3":
print("3 pressed")
response_matrix[x]=3
elif key == "4":
print("4 pressed")
response_matrix[x]=4
elif x == 4: # the final trial
print("done")
break
root = tk.Tk()
root.bind_all('<Key>', keypress)
root.update()
Answer: The two big mistakes are trying to do repetitions with a loop and defining the
function within the loop. Putting characters in a numpy array is likely not
what you want. Anyway, you can build on the following.
import tkinter as tk
num_trials = 5
trials = 0
responses = []
def keypress(event):
global trials
key = event.char
if key.isdigit() and trials < num_trials:
print("%s pressed" % key)
responses.append(key)
trials += 1
if trials == num_trials:
print('Responses were %s.' % responses)
print('Hit [X] to quit.')
print('Enter %s digits.' % num_trials)
root = tk.Tk()
root.bind('<Key>', keypress)
root.mainloop()
In a real program, you should probably put messages to user in widgets, and
only use print for debugging.
|
Allow end-user to upload and execute javascript on server side
Question: I'm studying JavaScript/Node.js to develop an ERP solution. I would like to allow
ERP end-users to upload their own custom scripts, so they can interact with
ERP scripts. Of course, user scripts should implement a pre-defined ERP API.
For example, this is a feature offered by ODOO (Open ERP) using user custom
scripts written in Python.
I would like to know if a full JavaScript stack can do the trick: is it possible
to import uploaded JS files at runtime in Node.js in order to execute
them? Is there any issue with this approach?
Answer: I would suggest using
[workers](https://nodejs.org/api/cluster.html#cluster_worker_process). It will
look something like this:
const cluster = require('cluster');
cluster.setupMaster({
exec: 'fileUploadedByUser.js'
});
cluster.fork();
But I would highly recommend reviewing all such files before executing
them, or at least writing some code to analyze them and find any usage of system
functions or variables.
You can also use something like [pidusage](https://github.com/soyuka/pidusage)
to track the CPU usage of those files (workers), and if it reaches the limit
you set, just kill the process.
|
python urlib in loop
Question: My requirement is to read a page that has many links, stop at (say) the 4th
link, connect to the URL at that particular link, and save the link's contents
in a list. The connected page again has many links, and I have to connect to
the link at the 4th position again, and repeat this process for (say) 10
iterations, finally printing the names of the links followed.
I am using the code below, but urllib only works for the first iteration:
import urllib
from bs4 import *
url = raw_input('enter url:')
count = raw_input('enter count:')
position = raw_input('enter position:')
count = int(count)
position = int(position)
l = list()
p = 0
for _ in xrange(0,count):
print 'retrieving:' + url
html = urllib.urlopen(url).read()
s = BeautifulSoup(html)
tags = s.findAll('a')
for tag in tags:
w = tag.get('href')
p = p + 1
if p == position:
url = "'" + w + "'"
l.append(tag.contents[0])
print l
Answer: Without knowing the particular site you're talking about this is just a guess,
but could it be that the links in the page you're interested in are relative
and not absolute? If that's the case, when you reset url in the for loop it
would be set to an incomplete link like /link.php instead of
<http://example.com/link.php>, and urllib wouldn't know what to do with that.
If you expect all the links you could be interested in to be relative, then
you'd need to join them with the base URL before following them, as sketched below.
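A minimal sketch of that joining step (Python 2, matching the question; the base URL is only an illustration):

    from urlparse import urljoin

    base = 'http://example.com/some/page.html'
    print urljoin(base, '/link.php')    # http://example.com/link.php
    print urljoin(base, 'other.php')    # http://example.com/some/other.php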
|
Reading .dat file with fixed column width
Question: The code I use in SAS
Options symbolgen ps=10000;
Data span_nonspan;
INFILE 'C:\September 2016\SAMPLE.dat';
INPUT @1 XYZ $10.
@11 ABC $7.
@18 PM $3.
run;
Can anyone please help me write this in Python 3.5 to read the data the same way? I
am new to Python and any help would be really appreciated.
The data looks something like this:
5085489966001001600220161002191219 1P 106SCHWARTZ
Answer: You can use pandas' `read_fwf`, which reads fixed-width files; adjust the
`colspecs` byte ranges and column `names` to match your layout:
import pandas as p
data = p.read_fwf("filename", colspecs=[(0,10),(11,17),(17,20),(20,24)],names=['DP','PHONETYPE','MARKET','FOLDER'])
data.head()
|
How to rectify this error?
Question:
python serve.py
/usr/local/lib/python3.4/dist-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.sqlalchemy is deprecated, use flask_sqlalchemy instead.
.format(x=modname), ExtDeprecationWarning
Traceback (most recent call last):
File "serve.py", line 1, in <module>
from CTFd import create_app
File "/home/rajat/Downloads/CTFd/CTFd/__init__.py", line 7, in <module>
from utils import get_config, set_config
ImportError: cannot import name 'get_config'
I don't know what is causing this error.
I tried `pip install utils`.
Thanks in advance.
Answer: ` from utils import get_config, set_config `
It looks like `get_config` does not exist in the `utils` module that is being
imported. I think you need to give more details about your problem.
|
Python Turtle game, Check not working?
Question:
import turtle
# Make the play screen
wn = turtle.Screen()
wn.bgcolor("red")
# Make the play field
mypen = turtle.Turtle()
mypen.penup()
mypen.setposition(-300,-300)
mypen.pendown()
mypen.pensize(5)
for side in range(4):
mypen.forward(600)
mypen.left(90)
mypen.hideturtle()
# Make the object
player = turtle.Turtle()
player.color("black")
player.shape("circle")
player.penup()
# define directions( East, West , South , Nord )
def west():
player.setheading(180)
def east():
player.setheading(0)
def north():
player.setheading(90)
def south():
player.setheading(270)
# define forward
def forward():
player.forward(20)
# Wait for input
turtle.listen()
turtle.onkey(west, "a")
turtle.onkey(east, "d")
turtle.onkey(forward,"w")
turtle.onkey(north,"q")
turtle.onkey(south,"s")
if player.xcor() > 300 or player.xcor() < -300:
print("Game over")
if player.ycor() > 300 or player.ycor() < -300:
print("Game over")
So everything is working fine until the if statements. When I go through the
play field it should print "Game over". The coordinates are right,
but it doesn't check the coordinates! What am I doing wrong?
Answer: The problem is that your logic to test if the player has gone out of bounds is
at the top level of your code -- it doesn't belong there. You should turn
control over to the turtle listener, via `mainloop()` and handle the bounds
detection in one of your callback methods, namely `forward()`.
A demonstration of the above in a rework of your code:
import turtle
QUADRANT = 250
# Make the play screen
screen = turtle.Screen()
screen.bgcolor("red")
# Make the play field
play_pen = turtle.Turtle()
play_pen.pensize(5)
play_pen.speed("fastest")
play_pen.penup()
play_pen.setposition(-QUADRANT, -QUADRANT)
play_pen.pendown()
for _ in range(4):
play_pen.forward(QUADRANT * 2)
play_pen.left(90)
play_pen.hideturtle()
# Make the object
player = turtle.Turtle()
player.color("black")
player.shape("circle")
player.penup()
# define forward
def forward():
player.forward(20)
if player.xcor() > QUADRANT or player.xcor() < -QUADRANT or player.ycor() > QUADRANT or player.ycor() < -QUADRANT:
player.hideturtle()
player.setposition((0, 0))
player.write("Game over", False, align="center", font=("Arial", 24, "normal"))
# define directions(East, West, North, South)
turtle.onkey(lambda: player.setheading(180), "a") # west
turtle.onkey(lambda: player.setheading(0), "d") # east
turtle.onkey(lambda: player.setheading(90), "q") # north
turtle.onkey(lambda: player.setheading(270), "s") # south
turtle.onkey(forward, "w")
# Wait for input
turtle.listen()
turtle.mainloop()
|
Having a compiling error with Python using PyCharm 4.0.5
Question: The reason for me asking the question here is that I did not find a solution
elsewhere. I'm having the following error with my PyCharm 4.0.5 program while
trying to run a Python script. It was working fine one day, and when I
tried using it this afternoon I got the following error after trying to run a
program which I am 100% sure has no errors in it. In the message box I got the
following error:
Failed to import the site module
Traceback (most recent call last):
File "C:\Python34\lib\site.py", line 562, in <module>
main()
File "C:\Python34\lib\site.py", line 544, in main
known_paths = removeduppaths()
File "C:\Python34\lib\site.py", line 125, in removeduppaths
dir, dircase = makepath(dir)
File "C:\Python34\lib\site.py", line 90, in makepath
dir = os.path.join(*paths)
AttributeError: 'module' object has no attribute 'path'
Process finished with exit code 1
I have never seen an error of this kind and don't know where to start tackling
this problem.
Any feedback will be greatly appreciated!
The code looks like the following, and I seem to have forgotten to mention
that it gives me the exact same error for every single .py script on my
computer.
import turtle
wn = turtle.Screen()
alex = turtle.Turtle()
def hexagon(var):
for i in range(6):
alex.right(60)
alex.forward(var)
def square(var):
for i in range(4):
alex.forward(var)
alex.left(90)
def triangle(var):
for i in range(3):
alex.forward(var)
alex.left(120)
def reset():
alex.clear()
alex.reset()
x = True
while x:
alex.hideturtle()
choice = input("""
Enter the shape of choice:
a. Triangle
b. Square
c. Hexagon
""")
if choice.lower() == "a":
length = input("Enter the desired length of the sides: ")
triangle(int(length))
restart = input("Do you wish to try again? Y/N ")
if restart.lower() == "n":
x = False
else:
reset()
if choice.lower() == "b":
length = input("Enter the desired length of the sides: ")
square(int(length))
restart = input("Do you wish to try again? Y/N ")
if restart.lower() == "n":
x = False
else:
reset()
if choice.lower() == "c":
length = input("Enter the desired length of the sides: ")
hexagon(int(length))
restart = input("Do you wish to try again? Y/N ")
if restart.lower() == "n":
x = False
else:
reset()
print("Thank you for using your local turtle services!")
Answer: You must have a python file named `os.py` which is being imported instead of
the "real" os module.
|
Python Django Rest Post API without storage
Question: I would like to create a web API with Python and the Django REST framework.
The tutorials that I have read so far incorporate models and serializers to
process and store data. I was wondering if there's a simpler way to process
data that is POSTed to my API and then return a JSON response without storing
any data.
Currently, this is my urls.py
from django.conf.urls import url
from rest_framework import routers
from core.views import StudentViewSet, UniversityViewSet, TestViewSet
router = routers.DefaultRouter()
router.register(r'students', StudentViewSet)
router.register(r'universities', UniversityViewSet)
router.register(r'other', TestViewSet,"other")
urlpatterns = router.urls
and this is my views.py
from rest_framework import viewsets
from rest_framework.decorators import api_view
from rest_framework.response import Response
from .models import University, Student
from .serializers import UniversitySerializer, StudentSerializer
import json
from django.http import HttpResponse
class StudentViewSet(viewsets.ModelViewSet):
queryset = Student.objects.all()
serializer_class = StudentSerializer
class UniversityViewSet(viewsets.ModelViewSet):
queryset = University.objects.all()
serializer_class = UniversitySerializer
class TestViewSet(viewsets.ModelViewSet):
def retrieve(self, request, *args, **kwargs):
return Response({'something': 'my custom JSON'})
The first two parts regarding Students and Universities were created after
following a tutorial on Django setup. I don't need the functionality that it
provides for creating, editing and removing objects. I tried playing around
with the TestViewSet which I created.
I am currently stuck trying to receive JSON data that gets posted to the url
ending with "other" and processing that JSON before responding with some
custom JSON.
**Edit**
These two links were helpful in addition to the solution provided:
[Django REST framework: non-model
serializer](http://stackoverflow.com/questions/13603027/django-rest-framework-
non-model-serializer)
<http://jsatt.com/blog/abusing-django-rest-framework-part-1-non-model-
endpoints/>
Answer: You can use their generic [APIView](http://www.django-rest-framework.org/api-
guide/views/) class (which doesn't have any attachment to Models or
Serializers) and then handle the request yourself based on the HTTP request
type. For example:
class RetrieveMessages(APIView):
def post(self, request, *args, **kwargs):
posted_data = self.request.data
city = posted_data['city']
return_data = [
{"echo": city}
]
return Response(status=200, data=return_data)
def get....
|
Redirect error when trying to request a url with requests/urllib only in python
Question: I'm trying to post data to a URL on my server, but I'm stuck sending any
request to that URL (any URL on that server). Here is one, for example:
http://apimy.in/page/test
The website is written in Python 3.4 / Django 1.9.
I can send the request with `curl` in `php` without any problem,
but any request with Python results in some kind of redirect error.
At first I tried the `requests` lib
and got this error:
TooManyRedirects at /api/sender
Exceeded 30 redirects.
Request Method: GET
Request URL: http://localhost:8000/api/sender
Django Version: 1.9.6
Exception Type: TooManyRedirects
Exception Value:
Exceeded 30 redirects.
I thought maybe something was wrong with `requests`, so I tried `urllib`:
request_data = urllib.parse.urlencode({"DATA": 'aaa'}).encode()
response = urllib.request.urlopen("http://apimy.in/page/test" , data=request_data)
HTTPError at /api/sender
HTTP Error 302: The HTTP server returned a redirect error that would lead to an infinite loop.
The last 30x error message was:
Found
Request Method: GET
Request URL: http://localhost:8000/api/sender
Django Version: 1.9.6
Exception Type: HTTPError
Exception Value:
HTTP Error 302: The HTTP server returned a redirect error that would lead to an infinite loop.
The last 30x error message was:
Found
Exception Location: c:\Python344\lib\urllib\request.py in http_error_302, line 675
I'm using mod_wsgi and Apache to serve the website.
Answer: Since you are using the [`requests`](http://docs.python-
requests.org/en/master/) module, did you try telling the requests module to
ignore redirects?
import requests
response = requests.get('your_url_here', allow_redirects=False)
Maybe this works. If it doesn't, you can also try changing your `user-agent`,
in case your server is configured to drop script requests for
security reasons.
import requests
headers = {'user-agent': 'some_user_agent'}
response = requests.get(url, headers=headers)
|
Django app has a no ImportError: No module named 'django.core.context_processors'
Question: Tried git pushing my app after tweaking it and got the following error.
ImportError: No module named 'django.core.context_processors'
this was not showing up in my heroku logs and my app works locally so I was
confused. I had to set debug to true on the production side to finally figure
this out. What can I do to clear this up?
this is some of the traceback
Request Method: GET
Request URL: http://hispanicheights.com/
Django Version: 1.10.1
Exception Type: ImportError
Exception Value:
No module named 'django.core.context_processors'
Exception Location: /app/.heroku/python/lib/python3.5/importlib/__init__.py in import_module, line 126
Python Executable: /app/.heroku/python/bin/python
Python Version: 3.5.1
Python Path:['/app',
'/app/.heroku/python/bin',
'/app/.heroku/python/lib/python3.5/site-packages/setuptools-23.1.0-py3.5.egg',
'/app/.heroku/python/lib/python3.5/site-packages/pip-8.1.2-py3.5.egg',
'/app',
'/app/.heroku/python/lib/python35.zip',
'/app/.heroku/python/lib/python3.5',
'/app/.heroku/python/lib/python3.5/plat-linux',
'/app/.heroku/python/lib/python3.5/lib-dynload',
'/app/.heroku/python/lib/python3.5/site-packages',
'/app',
'/app']
I looked at line 126 and this is what's there
return _bootstrap._gcd_import(name[level:], package, level)
this
django.core.context_processors
is nowhere to be found in the init file. I looked in my settings file for
production and saw this:
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'debug': True,
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
"django.core.context_processors.request",
],
},
},
]
Am I supposed to modify this somehow?
Answer: Try removing `"django.core.context_processors.request"` from your settings.
In Django 1.8, `django.core.context_processors` was moved to
`django.template.context_processors`, and the old module was removed entirely in
Django 1.10. See [the release
notes](https://docs.djangoproject.com/en/1.10/releases/1.8/#django-core-
context-processors).
You already have the request context processor from the new location, so there
is no need to add it again under the old (removed) one.
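With that line removed, the `context_processors` option is simply (everything else in `TEMPLATES` stays as it was):

    'context_processors': [
        'django.template.context_processors.debug',
        'django.template.context_processors.request',
        'django.contrib.auth.context_processors.auth',
        'django.contrib.messages.context_processors.messages',
    ],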
|
Gantt Chart python machine scheduling
Question: I'm having a bad time trying to plot a Gantt chart from a data set with
Python. I have a set of machines that work on different tasks during a time
period. I want to make a Gantt chart that shows the machines on the y axis and
the time spent on each task on the x axis. Each machine should appear just once on
the y axis, and to make the tasks easier to see, I want the same color for each
task.
The idea is to check strange things like having the same task being processed
by two or more machines together.
Let me show what data set I have with a small example:
machine equipment start finish
m1 e2 date1 date2
m2 e2 date3 date4
m1 e1 date5 date6
m3 e3 date7 date8
m3 e4 date9 date10
I tried to use the
[broken_barh](http://matplotlib.org/examples/pylab_examples/broken_barh.html)
from matplotlib, but I can't figure out a way to add the data for the plot
efficiently, since I have something like 100 machines and 400 tasks.
[Here is a picture](http://i.stack.imgur.com/ea3eL.gif) to show what the output
should look like.
Current code below:
import datetime as dt
machines = set(list(mydata["machine"]))
tasks = set(list(mydata["task"]))
fig, ax = plt.subplots(figsize=(20, 10))
yrange = 5 # y width of gantt bar
ymin = 0
orign = min(list(mydata["start"])) # time origin
for i in machines:
stdur = [] # list of tuples (start, duration)
ymin = index*6 # start y of gantt bar
for index, row in mydata.iterrows():
if row["machine"] == i:
start = (row["start"] - orign).total_seconds()/3600
duration = (row["finish"] - row["start"]).total_seconds()/3600
stdur.append((start,duration))
ax.broken_barh(stdur,(ymin,yrange))
ax.set_xlabel('Time')
ax.set_yticklabels(machines)
plt.show()
Answer: Seimetz,
If you are fine passing the entire data set to the client and letting a JS Gantt
do its thing, then the RadiantQ jQuery Gantt Package may be a better option for
you. Here is an online demo of something similar:
<http://demos.radiantq.com/jQueryGanttDemo/Samples/ServerStatusWithHugeData.htm>
This is how it looks: [Server Status
Sample](http://i.stack.imgur.com/VyzsO.png)
More on the product here: <http://radiantq.com/products/jquery-gantt-
package/jquery-gantt-package-features/>
|
Python and Tkinter root naming
Question: I often see a GUI using root.mainloop() at the end. Near the top, sometimes
they put root = tk.Tk() and sometimes they just put root = Tk().
## Do these two statements do something different? (examples below)
from Tkinter import *
class App:
def __init__(self, master):
frame = Frame(master)
frame.pack()
self.button = Button(frame,
text="QUIT", fg="red",
command=frame.quit)
self.button.pack(side=LEFT)
self.slogan = Button(frame,
text="Hello",
command=self.write_slogan)
self.slogan.pack(side=LEFT)
def write_slogan(self):
print "Tkinter is easy to use!"
root = Tk()
app = App(root)
root.mainloop()
* * *
import Tkinter as tk
counter = 0
def counter_label(label):
counter = 0
def count():
global counter
counter += 1
label.config(text=str(counter))
label.after(1000, count)
count()
root = tk.Tk()
root.title("Counting Seconds")
label = tk.Label(root, fg="dark green")
label.pack()
counter_label(label)
button = tk.Button(root, text='Stop', width=25, command=root.destroy)
button.pack()
root.mainloop()
* * *
Answer: The difference is in the import statements. After `from Tkinter import *`, all
of the names from Tkinter are directly incorporated into your namespace; in
fact, you can't refer to `Tkinter` itself because that name wasn't imported.
In contrast `import Tkinter as tk` ONLY imports `Tkinter` (under the shorted
alias `tk`), and all further references to names from the Tkinter module have
to be prefixed with that name.
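Concretely, the two styles only change how you spell the names afterwards:

    # style 1: wildcard import -- Tkinter's names come in bare
    from Tkinter import *
    root = Tk()

    # style 2: aliased module import -- names are prefixed with the alias
    import Tkinter as tk
    root = tk.Tk()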
|
How can I join a list of characters into strings of 8?
Question: I have a python list of chars and want to join them to create a list of
strings of 8 elements each, eg:
x = ['0','0','1','a','4','b','6','2','2','1','4','1','5','7','9','8']
result
['001a4b62', '21415798']
Answer: The [`itertools`
documentation](https://docs.python.org/2/library/itertools.html) contains a
`grouper` recipe that groups consecutive items to fixed-sized groups:
from itertools import *
def grouper(iterable, n, fillvalue=None):
"Collect data into fixed-length chunks or blocks"
# grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx
args = [iter(iterable)] * n
return izip_longest(fillvalue=fillvalue, *args)
Now you can just group into lists of size 8, and turn each one to a string:
>>> [''.join(e) for e in grouper(x, 8)]
['001a4b62', '21415798']
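If you don't need the general-purpose recipe, plain slicing does the same job for this case:

    >>> x = ['0','0','1','a','4','b','6','2','2','1','4','1','5','7','9','8']
    >>> [''.join(x[i:i + 8]) for i in range(0, len(x), 8)]
    ['001a4b62', '21415798']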
|
Python program that sends txt file to email
Question: I've recently created a python keylogger. The code is :
import win32api
import win32console
import win32gui
import pythoncom,pyHook
win=win32console.GetConsoleWindow()
win32gui.ShowWindow(win,0)
def OnKeyboardEvent(event):
if event.Ascii==5:
_exit(1)
if event.Ascii !=0 or 8:
#open output.txt to read current keystrokes
f=open('c:\output.txt','r+')
buffer=f.read()
f.close()
#open output.txt to write current + new keystrokes
f=open('c:\output.txt','w')
keylogs=chr(event.Ascii)
if event.Ascii==13:
keylogs='/n'
buffer+=keylogs
f.write(buffer)
f.close()
# create a hook manager object
hm=pyHook.HookManager()
hm.KeyDown=OnKeyboardEvent
# set the hook
hm.HookKeyboard()
# wait forever
pythoncom.PumpMessages()
However, I would like this to be sent to my e-mail. Do you have any idea what I
could add to allow this, or of a separate program that would do this?
Thanks in advance.
Answer: [The python docs has good documentation of emails in
python.](https://docs.python.org/3/library/email-examples.html)
# Import smtplib for the actual sending function
import smtplib
# Import the email modules we'll need
from email.mime.text import MIMEText
# Open a plain text file for reading. For this example, assume that
# the text file contains only ASCII characters.
with open(textfile) as fp:
# Create a text/plain message
msg = MIMEText(fp.read())
# me == the sender's email address
# you == the recipient's email address
msg['Subject'] = 'The contents of %s' % textfile
msg['From'] = me
msg['To'] = you
# Send the message via our own SMTP server.
s = smtplib.SMTP('localhost')
s.send_message(msg)
s.quit()
This example has exactly what you are asking for.
|
Getting Column Headers from multiple html 'tbody'
Question: I need to get the column headers from the second tbody in this url.
<http://bepi.mpob.gov.my/index.php/statistics/price/daily.html>
Specifically, I would like to see "September", "October", etc.
I am getting the following error:
runfile('C:/Python27/Lib/site-packages/xy/workspace/webscrape/mpob1.py', wdir='C:/Python27/Lib/site-packages/xy/workspace/webscrape')
Traceback (most recent call last):
File "<ipython-input-8-ab4005f51fa3>", line 1, in <module>
runfile('C:/Python27/Lib/site-packages/xy/workspace/webscrape/mpob1.py', wdir='C:/Python27/Lib/site-packages/xy/workspace/webscrape')
File "C:\Python27\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 685, in runfile
execfile(filename, namespace)
File "C:\Python27\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 71, in execfile
exec(compile(scripttext, filename, 'exec'), glob, loc)
File "C:/Python27/Lib/site-packages/xy/workspace/webscrape/mpob1.py", line 26, in <module>
soup.findAll('tbody', limit=2)[1].findAll('tr').findAll('th')]
IndexError: list index out of range
Can anyone here please help me out? I shall be eternally grateful!
I have posted my code below:
import requests
from bs4 import BeautifulSoup
import pandas as pd
url = "http://bepi.mpob.gov.my/index.php/statistics/price/daily.html"
r = requests.get(url)
soup = BeautifulSoup(r.text, 'lxml')
column_headers = [th.getText() for th in
soup.findAll('tbody', limit=2)[1].findAll('tr').findAll('th')]
Answer: When you click "View Price" button a POST request is sent to the
`http://bepi.mpob.gov.my/admin2/price_local_daily_view3.php` endpoint.
Simulate this POST request and parse the resulting HTML:
import requests
from bs4 import BeautifulSoup
with requests.Session() as session:
session.get("http://bepi.mpob.gov.my/index.php/statistics/price/daily.html")
response = session.post("http://bepi.mpob.gov.my/admin2/price_local_daily_view3.php", data={
"tahun": "2016",
"bulan": "9",
"Submit2222": "View Price"
})
soup = BeautifulSoup(response.content, 'lxml')
table = soup.find("table", id="hor-zebra")
headers = [td.get_text() for td in table.find_all("tr")[2].find_all("td")]
print(headers)
Prints the headers of the table:
[u'Tarikh', u'September', u'October', u'November', u'December', u'September', u'October', u'November', u'December', u'September', u'October', u'November', u'December']
|
asyncio: prevent task from being cancelled twice
Question: Sometimes, my coroutine cleanup code includes some blocking parts (in the
`asyncio` sense, i.e. they may yield).
I try to design them carefully, so they don't block indefinitely. So "by
contract", coroutine must never be interrupted once it's inside its cleanup
fragment.
Unfortunately, I can't find a way to prevent this, and bad things occur when
it happens (whether it's caused by actual double `cancel` call; or when it's
almost finished by itself, doing cleanup, and happens to be cancelled from
elsewhere).
Theoretically, I can delegate cleanup to some other function, protect it with
a `shield`, and surround it with `try`-`except` loop, but it's just ugly.
Is there a Pythonic way to do so?
#!/usr/bin/env python3
import asyncio
@asyncio.coroutine
def foo():
"""
This is the function in question,
with blocking cleanup fragment.
"""
try:
yield from asyncio.sleep(1)
except asyncio.CancelledError:
print("Interrupted during work")
raise
finally:
print("I need just a couple more seconds to cleanup!")
try:
# upload results to the database, whatever
yield from asyncio.sleep(1)
except asyncio.CancelledError:
print("Interrupted during cleanup :(")
else:
print("All cleaned up!")
@asyncio.coroutine
def interrupt_during_work():
# this is a good example, all cleanup
# finishes successfully
t = asyncio.async(foo())
try:
yield from asyncio.wait_for(t, 0.5)
except asyncio.TimeoutError:
pass
else:
assert False, "should've been timed out"
t.cancel()
# wait for finish
try:
yield from t
except asyncio.CancelledError:
pass
@asyncio.coroutine
def interrupt_during_cleanup():
# here, cleanup is interrupted
t = asyncio.async(foo())
try:
yield from asyncio.wait_for(t, 1.5)
except asyncio.TimeoutError:
pass
else:
assert False, "should've been timed out"
t.cancel()
# wait for finish
try:
yield from t
except asyncio.CancelledError:
pass
@asyncio.coroutine
def double_cancel():
# cleanup is interrupted here as well
t = asyncio.async(foo())
try:
yield from asyncio.wait_for(t, 0.5)
except asyncio.TimeoutError:
pass
else:
assert False, "should've been timed out"
t.cancel()
try:
yield from asyncio.wait_for(t, 0.5)
except asyncio.TimeoutError:
pass
else:
assert False, "should've been timed out"
# although double cancel is easy to avoid in
# this particular example, it might not be so obvious
# in more complex code
t.cancel()
# wait for finish
try:
yield from t
except asyncio.CancelledError:
pass
@asyncio.coroutine
def comain():
print("1. Interrupt during work")
yield from interrupt_during_work()
print("2. Interrupt during cleanup")
yield from interrupt_during_cleanup()
print("3. Double cancel")
yield from double_cancel()
def main():
loop = asyncio.get_event_loop()
task = loop.create_task(comain())
loop.run_until_complete(task)
if __name__ == "__main__":
main()
Answer: I ended up writing a simple function that provides a stronger shield, so to
speak.
Unlike `asyncio.shield`, which protects the callee, but raises
`CancelledError` in its caller, this function suppresses `CancelledError`
altogether.
The drawback is that this function doesn't allow you to handle
`CancelledError` later. You won't see whether it has ever happened. Something
_slightly_ more complex would be required to do so.
@asyncio.coroutine
def super_shield(arg, *, loop=None):
arg = asyncio.async(arg)
while True:
try:
return (yield from asyncio.shield(arg, loop=loop))
except asyncio.CancelledError:
continue
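For illustration, a minimal usage sketch (`do_cleanup` is a hypothetical cleanup
coroutine, not part of the original code): wrapping the cleanup in
`super_shield` means a second `cancel()` can no longer interrupt it.
    @asyncio.coroutine
    def guarded_foo():
        try:
            yield from asyncio.sleep(1)
        finally:
            # the blocking cleanup is now immune to further cancellation
            yield from super_shield(do_cleanup())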
|
How to take input from stdin, display something using curses and output to stdout?
Question: I'm trying to make a Python script that takes input from stdin, displays a GUI
in the terminal using curses, and then, when the user finishes interacting,
outputs the result to stdout. A good example of this behaviour is
[selecta](https://github.com/garybernhardt/selecta), but it's written in Ruby.
I can't make curses display anything. This is a minimal example (it only
displays one character and waits for one character) of what I have tried so far:
import os, sys
import curses
c = None
old_out = sys.__stdout__
old_in = sys.__stdin__
old_err = sys.__stderr__
sys.__stdout__ = sys.stdout = open('/dev/tty', 'w')
sys.__stdin__ = sys.stdin = open('/dev/tty')
sys.__stderr__ = sys.stderr = open('/dev/tty')
def show_a(s):
global c
s.addch(ord('a'))
c = s.getch()
curses.wrapper(show_a)
sys.stdin.flush()
sys.stdout.flush()
sys.stderr.flush()
sys.stdin.close()
sys.stdout.close()
sys.stderr.close()
sys.__stdout__ = sys.stdout = old_out
sys.__stdin__ = sys.stdin = old_in
sys.__stderr__ = sys.stderr = old_err
print(c)
When I try to use `echo $(python3 show_a.py)` nothing is displayed but after
pressing any key its number is displayed:
[](http://i.stack.imgur.com/lOsvS.png)
Is something like this even possible using curses, if so how to do this?
Answer: It doesn't work because the `print` statement is writing to the same standard
output as
[`curses.wrapper`](https://docs.python.org/2/library/curses.html#curses.wrapper).
You can either defer that `print` until after you have restored `sys.stdout`,
or you could use the `file=` keyword argument, something like this:
    print(s.getch(), file=old_out)
For the other (ordering) problem, it sounds as if you need to amend the code
to do a
[_refresh_](https://docs.python.org/2/library/curses.html#curses.window.refresh)
after the `getch` (to make curses display it), and depending on what version
of curses, a _flush_ of the standard output would then be helpful.
Further reading:
* [Temporarily Redirect stdout/stderr](http://stackoverflow.com/questions/6796492/temporarily-redirect-stdout-stderr)
|
From perl to python
Question: I've got some code that I've translated from Perl into Python, but I am having
a hard time trying to figure out this last part.
my $bashcode=<<'__bash__';
. /opt/qip/etc/qiprc;
. /opt/sybase/sybase.sh
perl -mdata::dumper -e 'print dumper \%env';
__bash__
my $var1;
eval qx(bash -c "$bashcode");
While I understand (a bit) what this is doing, I can't seem to find out how to
do this in python. Any help would be greatly appreciated.
Answer: Your program is generating a script and running it.
A first python approximation is:
import os
script=""". /opt/qip/etc/qiprc;
. /opt/sybase/sybase.sh
perl -mdata::dumper -e 'print dumper \%env';
"""
os.system(script)
As you can see, Perl is still being used inside the script, via the
Data::Dumper module. If you want to do this purely in Python, you will need an
equivalent way to capture and parse that output.
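A closer approximation (a sketch, with one deliberate change: the
Perl/Data::Dumper step is swapped for `env` so the output is easy to parse in
Python) captures the script's output the way `qx()` does, using `subprocess`:
    import subprocess

    script = """. /opt/qip/etc/qiprc;
    . /opt/sybase/sybase.sh
    env
    """
    output = subprocess.check_output(["bash", "-c", script])
    # naively turn KEY=VALUE lines into a dict (multi-line values are ignored here)
    env = dict(line.split("=", 1) for line in output.decode().splitlines() if "=" in line)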
|
Python Tkinter While Thread
Question: Well, I am a bit of a newbie at Python, and I am finding it hard to make a
thread work with Tkinter. As you all know, using a while loop in Tkinter makes
the window go "Not Responding" while the script keeps running.
def scheduler():
def wait():
schedule.run_pending()
time.sleep(1)
return
Hours = ScheduleTest()
if len(Hours) == 0:
print("You need to write Hours, Example: 13:30,20:07")
if len(Hours) > 0:
print("Scheduled: ", str(Hours))
if len(Hours) == 1:
schedule.every().day.at(Hours[0]).do(Jumper)
print("Will jump 1 time")
elif len(Hours) == 2:
schedule.every().day.at(Hours[0]).do(Jumper)
schedule.every().day.at(Hours[1]).do(Jumper)
print("Will jump 2 times")
elif len(Hours) == 3:
schedule.every().day.at(Hours[0]).do(Jumper)
schedule.every().day.at(Hours[1]).do(Jumper)
schedule.every().day.at(Hours[2]).do(Jumper)
print("Will jump 3 times")
while True:
t = threading.Thread(target=wait)
t.start()
return
scheduler()
I have tried to do something like this but it still makes Tkinter unresponsive.
Thanks in advance.
Answer: ### When to use the after method; faking while without threading
As mentioned in a comment, in most cases you do not need threading to run
a "fake" while loop. You can use the `after()` method to schedule your
actions, using `tkinter`'s `mainloop` as a "coat rack" to schedule things,
pretty much exactly like you would in a while loop.
This works in all situations where you can simply throw out commands with e.g.
`subprocess.Popen()`, update widgets, show messages etc.
It does _not_ work when the scheduled process takes a lot of time, running
_inside_ the mainloop. Therefore `time.sleep()` is a bummer; it will simply
hold the `mainloop`.
### How it works
Within that limitation, however, you can run complicated tasks, schedule
actions, and even set `break`-equivalent conditions.
Simply create a function, initiate it with `window.after(0, <function>)`.
Inside the function, (re-) schedule the function with
`window.after(<time_in_milliseconds>, <function>)`.
To apply a break-like condition, simply route the process (inside the
function) so that it is not scheduled again.
### An example
This is best illustrated with a simplified example:
[](http://i.stack.imgur.com/pj8bS.png)
[](http://i.stack.imgur.com/5s8Ux.png)
[](http://i.stack.imgur.com/kO16M.png)
from tkinter import *
import time
class TestWhile:
def __init__(self):
self.window = Tk()
shape = Canvas(width=200, height=0).grid(column=0, row=0)
self.showtext = Label(text="Wait and see...")
self.showtext.grid(column=0, row=1)
fakebutton = Button(
text="Useless button"
)
fakebutton.grid(column=0, row=2)
# initiate fake while
self.window.after(0, self.fakewhile)
self.cycles = 0
self.window.minsize(width=200, height=50)
self.window.title("Test 123(4)")
self.window.mainloop()
def fakewhile(self):
# You can schedule anything in here
if self.cycles == 5:
self.showtext.configure(text="Five seconds passed")
elif self.cycles == 10:
self.showtext.configure(text="Ten seconds passed...")
elif self.cycles == 15:
self.showtext.configure(text="I quit...")
"""
If the fake while loop should only run a limited number of times,
add a counter
"""
self.cycles = self.cycles+1
"""
Since we do not use while, break will not work, but simply
"routing" the loop to not being scheduled is equivalent to "break":
"""
if self.cycles <= 15:
self.window.after(1000, self.fakewhile)
else:
# start over again
self.cycles = 0
self.window.after(1000, self.fakewhile)
# or: fakebreak, in that case, uncomment below and comment out the
# two lines above
# pass
TestWhile()
In the example above, we run a scheduled process for fifteen seconds. While
the loop runs, several simple tasks are performed, in time, by the function
`fakewhile()`.
After these fifteen seconds, we can start over again or "break". Just
uncomment the indicated section to see...
|
PHP openssl AES in Python
Question: I am working on a project where PHP is used to decrypt AES-256-CBC messages
<?php
class CryptService{
private static $encryptMethod = 'AES-256-CBC';
private $key;
private $iv;
public function __construct(){
$this->key = hash('sha256', 'c7b35827805788e77e41c50df44441491098be42');
$this->iv = substr(hash('sha256', 'c09f6a9e157d253d0b2f0bcd81d338298950f246'), 0, 16);
}
public function decrypt($string){
$string = base64_decode($string);
return openssl_decrypt($string, self::$encryptMethod, $this->key, 0, $this->iv);
}
public function encrypt($string){
$output = openssl_encrypt($string, self::$encryptMethod, $this->key, 0, $this->iv);
$output = base64_encode($output);
return $output;
}
}
$a = new CryptService;
echo $a->encrypt('secret');
echo "\n";
echo $a->decrypt('S1NaeUFaUHdqc20rQWM1L2ZVMDJudz09');
echo "\n";
output
>>> S1NaeUFaUHdqc20rQWM1L2ZVMDJudz09
>>> secret
Now I have to write Python 3 code to encrypt the data. I've tried to use
PyCrypto but without success. My code:
import base64
import hashlib
from Crypto.Cipher import AES
class AESCipher:
def __init__(self, key, iv):
self.key = hashlib.sha256(key.encode('utf-8')).digest()
self.iv = hashlib.sha256(iv.encode('utf-8')).digest()[:16]
__pad = lambda self,s: s + (AES.block_size - len(s) % AES.block_size) * chr(AES.block_size - len(s) % AES.block_size)
__unpad = lambda self,s: s[0:-ord(s[-1])]
def encrypt( self, raw ):
raw = self.__pad(raw)
cipher = AES.new(self.key, AES.MODE_CBC, self.iv)
return base64.b64encode(cipher.encrypt(raw))
def decrypt( self, enc ):
enc = base64.b64decode(enc)
cipher = AES.new(self.key, AES.MODE_CBC, self.iv )
return self.__unpad(cipher.decrypt(enc).decode("utf-8"))
cipher = AESCipher('c7b35827805788e77e41c50df44441491098be42', 'c09f6a9e157d253d0b2f0bcd81d338298950f246')
enc_str = cipher.encrypt("secret")
print(enc_str)
output
>>> b'tnF87LsVAkzkvs+gwpCRMg=='
But I need the output `S1NaeUFaUHdqc20rQWM1L2ZVMDJudz09`, which PHP will decrypt
to `secret`. How do I modify the Python code to get the expected output?
Answer: [PHP's `hash`](http://php.net/manual/en/function.hash.php) outputs a Hex-
encoded string by default, but Python's `.digest()` returns `bytes`. You
probably wanted to use `.hexdigest()`:
def __init__(self, key, iv):
self.key = hashlib.sha256(key.encode('utf-8')).hexdigest()[:32].encode("utf-8")
self.iv = hashlib.sha256(iv.encode('utf-8')).hexdigest()[:16].encode("utf-8")
The idea of the initialization vector (IV) is to provide randomization for the
encryption with the same key. If you use the same IV, an attacker may be able
to deduce that you send the same message twice. This can be considered as a
broken protocol.
The IV is not supposed to be secret, so you can simply send it along with the
ciphertext. It is common to prepend it to the ciphertext during encryption and
slice it off before decryption.
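As a sketch of that pattern with PyCrypto, if added as methods of the
`AESCipher` class above (the method names are illustrative, and the PHP side
would need to follow the same prepend-the-IV convention):
    import os  # at the top of the module, alongside the existing imports

    def encrypt_with_random_iv(self, raw):
        raw = self.__pad(raw)
        iv = os.urandom(16)  # a fresh, random IV for every message
        cipher = AES.new(self.key, AES.MODE_CBC, iv)
        return base64.b64encode(iv + cipher.encrypt(raw))

    def decrypt_with_prepended_iv(self, enc):
        data = base64.b64decode(enc)
        iv, ct = data[:16], data[16:]
        cipher = AES.new(self.key, AES.MODE_CBC, iv)
        return self.__unpad(cipher.decrypt(ct).decode("utf-8"))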
|
GitPython "blame" does not give me all changed lines
Question: I am using GitPython. Below I print the total number of lines changed in a
specific commit: `f092795fe94ba727f7368b63d8eb1ecd39749fc4`:
from git import Repo
repo = Repo("C:/Users/shiro/Desktop/lucene-solr/")
sum_lines = 0
for blame_commit, lines_list in repo.blame('HEAD', 'lucene/core/src/java/org/apache/lucene/analysis/Analyzer.java'):
if blame_commit.hexsha == 'f092795fe94ba727f7368b63d8eb1ecd39749fc4':
sum_lines += len(lines_list)
print sum_lines
The output is 38. However, if you simply go to
<https://github.com/apache/lucene-
solr/commit/f092795fe94ba727f7368b63d8eb1ecd39749fc4> and look at the commit
yourself for file `/lucene/analysis/Analyzer.java`, the actual number of lines
changed is not 38 but it is 47. Some lines are completely missing.
Why am I getting a wrong value ?
Answer: `git blame` tells you which commit last changed each line in a given file.
You're not counting the number of lines changed in that commit, but rather the
number of lines in the file at your current HEAD that were last modified by
that specific commit.
Changing `HEAD` to `f092795fe94ba727f7368b63d8eb1ecd39749fc4` should give you
the result you expect.
$ git blame f092795fe94ba727f7368b63d8eb1ecd39749fc4 ./lucene/core/src/java/org/apache/lucene/analysis/Analyzer.java | grep f092795 | wc -l
47
$ git blame master ./lucene/core/src/java/org/apache/lucene/analysis/Analyzer.java | grep f092795 | wc -l
38
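The same comparison in GitPython (a sketch) is just the original loop with the
commit itself passed as the revision instead of `HEAD`:
    sum_lines = 0
    for blame_commit, lines_list in repo.blame('f092795fe94ba727f7368b63d8eb1ecd39749fc4',
                                               'lucene/core/src/java/org/apache/lucene/analysis/Analyzer.java'):
        if blame_commit.hexsha == 'f092795fe94ba727f7368b63d8eb1ecd39749fc4':
            sum_lines += len(lines_list)
    print sum_lines  # 47, matching the command-line output above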
|
Itertools Chain on Nested List
Question: I have two lists combined sequentially to create a nested list with python's
map and zip functionality; however, I wish to recreate this with itertools.
Furthermore, I am trying to understand why itertools.chain is returning a
flattened list when I insert two lists, but when I add a nested list it simply
returns the nested list.
Any help on these two issues would be greatly appreciated.
from itertools import chain
a = [0,1,2,3]
b = [4,5,6,7]
#how can I produce this with itertools?
c = list(map(list, zip(a,b)))
print(c) #[[0, 4], [1, 5], [2, 6], [3, 7]]
d = list(chain(c))
print(d) #[[0, 4], [1, 5], [2, 6], [3, 7]]
d = list(chain(a,b))
print(d) #[0, 1, 2, 3, 4, 5, 6, 7]
Answer: I'll try to answer your questions as best I can.
First off, `itertools.chain` doesn't work the way you think it does. `chain`
takes `x` number of iterables and iterates over them in sequence. When you
call `chain`, it essentially (internally) packs the objects into a list:
chain("ABC", "DEF") # Internally creates ["ABC", "DEF"]
Inside the method, it accesses each of these items one at a time, and iterates
through them:
for iter_item in arguments:
for item in iter_item:
yield item
So when you call `chain([[a,b],[c,d,e],[f,g]])`, it creates a list with _one
iterable object:_ the list you passed as an argument. So now it looks like
this:
[ #outer
[ #inner
[a,b],
[c,d,e],
[f,g]
]
]
`chain` as such iterates over the **inner** list, and returns three elements:
`[a,b]`, `[c,d,e]`, and `[f,g]` in order. Then they get repacked by `list`,
giving you what you had in the first place.
Incidentally, there is a way to do what you want to: `chain.from_iterable`.
This is an alternate constructor for `chain` which accepts a single iterable,
such as your list, and pulls the elements out to iterate over. So instead of
this:
# chain(l)
[ #outer
[ #inner
[a,b],
[c,d,e],
[f,g]
]
]
You get this:
# chain.from_iterable(l)
[
[a,b],
[c,d,e],
[f,g]
]
This will iterate through the three sub-lists, and return them in one
sequence, so `list(chain.from_iterable(l))` will return `[a,b,c,d,e,f,g]`.
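Applied to the nested list from the question, for instance:
    from itertools import chain

    a = [0, 1, 2, 3]
    b = [4, 5, 6, 7]
    c = list(map(list, zip(a, b)))       # [[0, 4], [1, 5], [2, 6], [3, 7]]
    print(list(chain.from_iterable(c)))  # [0, 4, 1, 5, 2, 6, 3, 7]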
As for your second question: While I don't know why `itertools` is a necessity
to this process, you can do this in Python 2.x:
`list(itertools.izip(x,y))`
However, in 3.x, the `izip` function has been removed. There is still
`zip_longest`, which will match up as many pairs as it can, and accept a
filler value for extra pairs:
`list(zip_longest([a,b,c],[d,e,f,g,h],fillvalue="N"))` returns
`[(a,d),(b,e),(c,f),(N,g),(N,h)]` since the second list is longer than the
first. Normal `zip` will take the shortest iterable and cut off the rest.
In other words, unless you want `zip_longest` instead of `zip`, `itertools`
does not have a built-in method for zipping.
|
python2.7 create array in loop
Question: I would like to create a new variable in a loop with an index in which I write
a 2d matrix of data. Something like this:
import numpy
DARK = []
a = []
for i in range(0,3):
# create 3d numpy array
d = numpy.array([[1, 2], [3, 4]])
a.append(d)
stack = numpy.array(a)
# write it into the actual variable (here is the problem)
DARK[i] = numpy.median(stack)
I tried an approach with DARK.append but that gave me a list index out of
range error.
Answer: After 4 days of trying I found the answer myself. Thanks for the great help
guys...
import numpy
DARK = []
a = []
stack = []
for i in range(0,3):
# create 3d numpy array
d = numpy.array([[1, 2], [3, 4]])
a.append(d)
stack.append(numpy.array(a))
# write it into the actual variable
DARK.append(numpy.array(numpy.median(stack[i], 0)))
|
Altering dictionaries/keys in Python
Question: I have run the code below in Python to generate a list of words and their
count from a text file. How would I go about filtering out words from my
"frequency_list" variable that only have a count of 1?
In addition, how would I export the print statement loop at the bottom to a
CSV file?
Thanks in advance for any help provided.
import re
import string
frequency = {}
document_text = open('Words.txt', 'r')
text_string = document_text.read().lower()
match_pattern = re.findall(r'\b[a-z]{3,15}\b', text_string)
for word in match_pattern:
count = frequency.get(word,0)
frequency[word] = count + 1
frequency_list = frequency.keys()
for words in frequency_list:
print (words, frequency[words])
Answer: For the first part - you can use dict comprehension:
`frequency = {k:v for k,v in frequency.items() if v>1}`
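For the CSV part of the question, one straightforward option is the standard
`csv` module (a sketch; the filename is arbitrary):
    import csv

    with open('word_counts.csv', 'w') as f:  # add newline='' here on Python 3
        writer = csv.writer(f)
        writer.writerow(['word', 'count'])
        for word, count in frequency.items():
            writer.writerow([word, count])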
|
What is the easiest way to get a list of the keywords in a string?
Question: For example:
str = 'abc{text}ghi{num}'
I can then do
print(str.format(text='def',num=5))
> abcdefghi5
I would like to do something like
print(str.keywords) # function does not exist
> ['text','num']
What is the easiest way to do this? I can search character-by-character for
`{` and `}` but I wonder if there a built-in python function?
Thank you.
Answer: Check out the [`string.Formatter`
class](https://docs.python.org/3.5/library/string.html#string.Formatter):
>>> import string
>>> text = 'abc{text}ghi{num}'
>>> [t[1] for t in string.Formatter().parse(text)]
['text', 'num']
|
Scrollspy Navbar jumping over tab?
Question:
<body data-spy="scroll" data-target=".navbar" data-offset="50">
<nav class="navbar navbar-default navbar-fixed-top">
<div class="container-fluid">
<div class="navbar-header">
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#myNavbar">
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<a class="navbar-brand">Aakarsh Madhavan</a>
</div>
<div class="collapse navbar-collapse" id="myNavbar">
<ul class="nav navbar-nav navbar-right">
<li class="active" id="welcomeTab"><a class="tabs" href="#div1">Welcome</a></li>
<li><a class="tabs" href="#div2">About</a> </li>
<li><a class="tabs" href="#div3">Skills</a></li>
<li><a class="tabs" href="#div4">Projects</a></li>
<li><a class="tabs" href="#">Resume</a></li>
<li><a class="tabs" href="#">Connect!</a></li>
</ul>
</div>
</div>
</nav>
[...]
<script>
$(document).ready(function(){
$(".tabs").on('click', function(event) {
if (this.hash !== "") {
event.preventDefault();
var hash = this.hash;
$('html, body').animate({
scrollTop: $(hash).offset().top
}, 800, function(){
window.location.hash = hash;
});
}
});
});
</script>
[...]
<div class="deviconsContainer divs" id="div3">
<span id="javaIcon" class="devicons devicons-java"> </span>
<span id="pythonIcon" class="devicons devicons-python"> </span>
<span id="htmlIcon" class="devicons devicons-html5"> </span>
<span id="cssIcon" class="devicons devicons-css3"></span>
</div>
</p>
</div>
This is the code for the page basically. I am not sure why it isn't properly
working. Whenever I click on the **skills** tab, it just moves over to
projects? Why is this happening? I tried changing the ID but that did not do
anything either.
Answer: This should work:
.ui-widget-content[aria-hidden="true"] {
display: block !important;
height: 0px;
overflow: hidden;
}
.ui-widget-content[aria-hidden="false"] {
display: block !important;
height: auto;
}
|
How to resolve "_tkinter.TclError: unknown option"?
Question: I am learning python tkinter but I have an error whenever I tried to compile
it:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/dist-packages/spyderlib/
widgets/externalshell/sitecustomize.py", line 540, in runfile
execfile(filename, namespace)
File "/home/jason/.spyder2/.temp.py", line 14, in <module>
menu.config(menu=menu)
File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 1274,
inconfigure
return self._configure('configure', cnf, kw)
File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 1265,
in _configure
self.tk.call(_flatten((self._w, cmd)) + self._options(cnf))
_tkinter.TclError: unknown option "-menu").
My code is:
from tkinter import *
def hello():
print "hello"
root = Tk()
menu = Menu(root)
menu.config(menu=menu)
menu.add_command(label ="new",command = hello)
root.mainloop()
Answer: There is a small issue in your code: use `root.config` instead of
`menu.config`, and you will not get any such errors.
For more information and detailed tutorial, kindly visit [Tkinter Menu
Widget](http://effbot.org/tkinterbook/menu.htm).
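For reference, the corrected snippet would look something like this (only the
`config` call changes):
    root = Tk()
    menu = Menu(root)
    root.config(menu=menu)  # attach the menu to the window; was menu.config(menu=menu)
    menu.add_command(label="new", command=hello)
    root.mainloop()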
|
Pretty print a JSON in Python 3.5
Question: I want to pretty print a JSON file, but popular solutions: [How to Python
prettyprint a JSON file](http://stackoverflow.com/questions/12943819/how-to-
python-prettyprint-a-json-file) don't work for me.
Code:
import json, os
def load_data(filepath):
if not os.path.exists(filepath):
print("ACHTUNG! Incorrect path")
return None
with open(filepath, 'r') as file:
return json.load(file)
This function is okay - it loads the JSON properly. But when I want to
pretty-print it like this:
def pretty_print_json(data):
print(json.dumps(data, indent=4, sort_keys=True))
return None
if __name__ == '__main__':
pretty_print_json(load_data("data.json")) ,
it escapes all the non-ASCII characters in the values!:
[
{
"Cells": {
"Address": "\u0443\u043b\u0438\u0446\u0430 \u0410\u043a\u0430\u0434\u0435\u043c\u0438\u043a\u0430 \u041f\u0430\u0432\u043b\u043e\u0432\u0430, \u0434\u043e\u043c 10",
"AdmArea": "\u0417\u0430\u043f\u0430\u0434\u043d\u044b\u0439 \u0430\u0434\u043c\u0438\u043d\u0438\u0441\u0442\u0440\u0430\u0442\u0438\u0432\u043d\u044b\u0439 \u043e\u043a\u0440\u0443\u0433",
"ClarificationOfWorkingHours": null,
"District": "\u0440\u0430\u0439\u043e\u043d \u041a\u0443\u043d\u0446\u0435\u0432\u043e",
"IsNetObject": "\u0434\u0430",
"Name": "\u0410\u0440\u043e\u043c\u0430\u0442\u043d\u044b\u0439 \u041c\u0438\u0440",
"OperatingCompany": "\u0410\u0440\u043e\u043c\u0430\u0442\u043d\u044b\u0439 \u041c\u0438\u0440",
"PublicPhone": [
{
"PublicPhone": "(495) 777-51-95"
}
What's the problem? It's anaconda 3.5
Answer: `json.dumps()` produces ASCII-safe JSON by default. If you want to retain non-
ASCII data as Unicode codepoints, disable that default by setting
`ensure_ascii=False`:
print(json.dumps(data, indent=4, sort_keys=True, ensure_ascii=False))
which, for your sample data, then produces:
[
{
"Cells": {
"Address": "улица Академика Павлова, дом 10",
"AdmArea": "Западный административный округ",
"ClarificationOfWorkingHours": null,
"District": "район Кунцево",
"IsNetObject": "да",
"Name": "Ароматный Мир",
"OperatingCompany": "Ароматный Мир",
"PublicPhone": [
{
"PublicPhone": "(495) 777-51-95"
}
(cut off at the same point you cut things off).
|
Calling class that initiates UI class in python
Question: I have an issue while creating my small Python project. I am used to Java and
this is still quite new to me. I create a UI class from QtCreator, then convert
it to `.py` and import it into my project. I have a class that for now is
considered `main`, which initiates and runs the UI class. The problem is that I
need a class that is really the `main` class, and when the app is started this
class should call the second class that initiates the UI. To sum it up, right
now when the app is started I have `A > calls > B(UI)`. I need it to be `A >
calls > B > init > C(UI)`. Is that possible? My idea is that B must be only a
manager class that sets up the UI and runs it. Here is my working code so far:
1. Manager class:
class mainpanelManager(QtWidgets.QMainWindow, Ui_MainWindow):
def __init__(self):
super().__init__()
self.setupUi(self)
if __name__ == "__main__":
app = QtWidgets.QApplication(sys.argv)
window = mainpanelManager()
window.show()
sys.exit(app.exec_())
and the UI class is a typical generated UI class. I will post only the constructor:
class Ui_MainWindow(QtGui.QWindow):
def setupUi(self, MainWindow):
In this case `mainpanelManager` runs `Ui_MainWindow` after element initiation.
I want to have a third class that calls `mainpanelManager`. What do I do with
the `__name__` block? Move it? Thanks in advance!
Answer: I'm not exactly sure I understand what you're asking for, but it would seem to
be as simple as this:
class Main(object):
def __init__(self):
self.window = MainPanelManager()
self.window.show()
class MainPanelManager(QtWidgets.QMainWindow, Ui_MainWindow):
def __init__(self):
super().__init__()
self.setupUi(self)
self.settingsButton.clicked.connect(self.editSettings)
self.loadSettings()
def editSettings(self):
dialog = SettingsDialog(self)
if dialog.exec_() == QtWidgets.QDialog.Accepted:
self.loadSettings()
def loadSettings(self):
# do stuff to load settings...
if __name__ == "__main__":
app = QtWidgets.QApplication(sys.argv)
main = Main()
sys.exit(app.exec_())
|
How do I resolve builtins.ConnectionRefusedError error in attempting to send email using flask-mail
Question: I am making a simple web app using the Flask framework in Python. It takes user
inputs for email and name from my website
([www.anshulbansal.esy.es](http://www.anshulbansal.esy.es)) and checks whether
the email exists in the database (the database is just a dictionary for now).
If it does, nothing further happens; if it doesn't exist in the database, the
app sends a random link to the submitted email, and if the user clicks the
link, their info is added to my database.
It's almost done, but this error is getting in my way. Check out this code:
from flask import Flask, render_template, request, redirect, url_for
from flask_mail import Mail, Message
import random
import string
def random_generator(size=6, chars=string.ascii_letters + string.digits):
return ''.join(random.choice(chars) for x in range(size))
subscribers_d = {'[email protected]': 'Anshul Bansal', '[email protected]': 'Bansal', '[email protected]': 'Anshul',}
app = Flask(__name__)
mail = Mail(app)
app.config.update(
MAIL_SERVER='smtp.gmail.com',
MAIL_PORT=465,
MAIL_USE_TLS = False,
MAIL_USE_SSL=True,
MAIL_USERNAME='[email protected]',
MAIL_PASSWORD="It's Secret"
)
@app.route('/')
def index():
return render_template("index.html")
@app.route('/submit', methods=['POST'])
def submit():
if request.method == "POST":
v_name = request.form['vname']
v_email = request.form['vemail']
return send_mail(v_name, v_email)
else:
return redirect(url_for("/"))
random_link_sent = random_generator(20)
@app.route("/")
def send_mail(v_name, v_email):
if v_email in subscribers_d:
return "Oh! It seems that you have already registered."
else:
msg = Message('Confirm Subscription', sender=['[email protected]'], recipients=[v_email])
msg.html = "<h3>Confirm Subscription</h3>" \
"<p>Hi! </p>" + v_name + "<p> , Please click on below link to subscribe</p>" \
"Link: " + ' www.anshulbansal.esy.es/' + random_link_sent
mail.send(msg)
return 'Check Your Inbox For Confirmation Email'
@app.route("/<random_link_sent>")
def confirm(random_link_sent):
return "You have registered on " + random_link_sent
subscribers_d[v_email] = v_name
if __name__ == "__main__":
app.run(debug=True)
But this code is giving me a builtins.ConnectionRefusedError. The previous 2-3
attempts at sending email were successful without any error. How do I resolve
it?
[Here is the screenshot of error](http://i.stack.imgur.com/cJNKS.png)
Answer: You should update configuration before you initialize Mail:
app = Flask(__name__)
app.config.update(
DEBUG = True,
MAIL_SERVER = 'smtp.gmail.com',
MAIL_PORT = 587,
MAIL_USE_TLS = True,
MAIL_USE_SSL = False,
MAIL_USERNAME = '[email protected]',
MAIL_PASSWORD = 'your_password',
)
mail = Mail(app)
|
Pyvmomi get folders name
Question: I'm new to Python and Django and I need to list all my VMs. I used pyvmomi and
Django, but I can't get the folder names from vSphere; it shows a strange line.
> VMware list
>
> 'vim.Folder:group-v207'
>
> 'vim.Folder:group-v3177'
>
> 'vim.Folder:group-v188'
I have 3 folders on vSphere, so I think my connection is good, but those are
definitely not their names.
Here is my code :
views.py
from __future__ import print_function
from django.shortcuts import render
from pyVim.connect import SmartConnect, Disconnect
import ssl
def home(request):
s = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
s.verify_mode = ssl.CERT_NONE
try:
connect = SmartConnect(...)
except:
connect = SmartConnect(...)
datacenter = connect.content.rootFolder.childEntity[0]
vmsFolders = datacenter.vmFolder.childEntity
Disconnect(connect)
return render(request, 'vmware/home.html', {'vmsFolders':vmsFolders})
home.html
<h1>VMware list</h1>
{% for vmFolder in vmsFolders %}
<div>
<h3>{{ vmFolder }}</h3>
</div>
{% endfor %}
Can anybody help me to get the real names of my folders?
Answer: You need to specifically state you want the name, like this:
vmFolders = datacenter.vmFolder.childEntity
for folder in vmFolders:
print(folder.name)
|
Airflow DB session not providing any environment variable
Question: As an Airflow and Python newbie, I don't even know if I'm asking the right
question, but here goes. I've configured Airflow on a CentOS system and use a
remote MySQL instance as the backend. In my code I need to get a number of
Variables; the code looks like this:
import os
from airflow.models import Variable
print(os.environ['SHELL'])
local_env['SHELL'] = Variable.get('SHELL')
And I got following error:
> Traceback (most recent call last): File "test2.py", line 5, in
> local_env['SHELL'] = Variable.get('SHELL') File
> "/com/work/airflowenv/lib/python2.7/site-packages/airflow/utils/db.py", line
> 53, in wrapper result = func(*args, **kwargs) File
> "/com/work/airflowenv/lib/python2.7/site-packages/airflow/models.py", line
> 3134, in get raise ValueError('Variable {} does not exist'.format(key))
> ValueError: Variable SHELL does not exist
It is the Variable.get() method that throws the exception, in this piece of
code in models.py:
@classmethod
@provide_session
def get(cls, key, default_var=None, deserialize_json=False, session=None):
obj = session.query(cls).filter(cls.key == key).first()
if obj is None:
if default_var is not None:
return default_var
else:
raise ValueError('Variable {} does not exist'.format(key))
Here `session.query(...)` already yields None. I don't quite understand how the
session is injected here, or why these variables are not set. Should we set up
something on the remote MySQL instance?
BTW, we have another identical airflow instance on another machine with local
mysql instance. And running the script I provided stand alone has no problem:
> [2016-09-27 01:54:48,341] {**init**.py:36} INFO - Using executor
>
> LocalExecutor
>
> /bin/bash /bin/bash
Anything I missed when setting up the airflow? Thanks,
Answer: OK, I finally got the problem resolved. I printed out the query and figured out
that the variable must come from a relational database table called `variable`.
I then dug into the backend DB, compared it with the working instance, and
found that the `variable` table data was missing. Adding the variables is
simple:
    airflow variables -s SHELL /bin/bash
and so forth for the other variables.
|
python add array of hours to datetime
Question: I have a datetime and I want to add an array of hours to it:
    import timedelta as td
i.e.
Date[0]
datetime.datetime(2011, 1, 1, 0, 0)
Date[0] + td(hours=9)
datetime.datetime(2011, 1, 1, 9, 0)
hrs = [1,2,3,4]
Date[0] + td(hours=hrs)
But obviously it is not supported.
Date array above is a giant array of size 100X1 and I want to add hrs =
[1,2,3,4] to each row of Date to get a datetime array of size 100x4. So, a for
loop is not going to work in my case.
Answer: Use a nested _list comprehension_ and [`.replace()`
method](https://docs.python.org/2/library/datetime.html#datetime.datetime.replace).
Sample for a list with 2 datetimes:
In [1]: from datetime import datetime
In [2]: l = [datetime(2011, 1, 1, 0, 0), datetime(2012, 1, 1, 0, 0)]
In [3]: hours = [1, 2, 3, 4]
In [4]: [[item.replace(hour=hour) for hour in hours] for item in l]
Out[4]:
[[datetime.datetime(2011, 1, 1, 1, 0),
datetime.datetime(2011, 1, 1, 2, 0),
datetime.datetime(2011, 1, 1, 3, 0),
datetime.datetime(2011, 1, 1, 4, 0)],
[datetime.datetime(2012, 1, 1, 1, 0),
datetime.datetime(2012, 1, 1, 2, 0),
datetime.datetime(2012, 1, 1, 3, 0),
datetime.datetime(2012, 1, 1, 4, 0)]]
As a result a 2x4 list of lists.
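If the intent is to add the hours as offsets rather than set the hour field,
the same nested comprehension works with `timedelta` (a sketch):
    from datetime import datetime, timedelta

    l = [datetime(2011, 1, 1, 0, 0), datetime(2012, 1, 1, 0, 0)]
    hours = [1, 2, 3, 4]
    result = [[item + timedelta(hours=h) for h in hours] for item in l]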
|
python 27 - Creating and running instances of another script in parallel
Question: I'm attempting to build a multiprocessing script that retrieves dicts of
attributes from a MySQL table and then runs instances of my main script in
**parallel** , using each dict retrieved from the MySQL table as an argument
to each instance of the main script. The main script has a method called
queen_bee() that's responsible for ensuring that all the other methods have
the correct information and are executed in the proper order.
I have tried to iterate through the list of dicts in order to create/run
parallel processes of the main script using the multiprocessing library. But
they end up running consecutively, not concurrently:
from my_main_script import my_main_class as main
import multiprocessing as mp
def create_list_of_attribute_dicts():
...
return list_of_dicts
for each_dict in list_of_dicts:
instance = main(each_dict)
p = mp.Process(target=instance.queen_bee(),args=(each_dict,))
p.start()
...
I have also tried using the multiprocessing library's Pool.map() method. But I
can't figure out how to instantiate the main script one time for each dict
using Pool.map():
...
pool = mp.Pool()
jobs = pool.map(main.queen_bee(),list_of_dicts)
The Pool.map method seems to be the cleanest, most pythonic way to get these
instances to run in parallel, but I'm hung up on the proper way to do that in
this case. I know the above 'jobs' variable will fail because 'main' has not
been instantiated. However, I can't figure out how to pass each dict as an
argument to separate instances of the main class and then run those instances
using the map method. I'm open to trying a different approach. Thanks in
advance for your help.
Answer: You could store your dictionaries in a list and then try something like this:
    from multiprocessing import Pool as ThreadPool
    # ...
    def run_one(each_dict):
        # mirrors the question's call: one instance of the main class per dict
        return main(each_dict).queen_bee(each_dict)

    def parallel_function(list_of_dictionaries, threads=20):
        thread_pool = ThreadPool(threads)
        results = thread_pool.map(run_one, list_of_dictionaries)
        thread_pool.close()
        thread_pool.join()
        return results
Note that in your `mp.Process` version, `target=instance.queen_bee()` calls the
method immediately (which is why the runs were consecutive); pass the callable
itself, e.g. `target=instance.queen_bee, args=(each_dict,)`.
**Note:** I had problems using multiprocessing on my PC, so for me it worked
after replacing `multiprocessing` with `multiprocessing.dummy` in the
`import` statement.
|
python3 How to select two elements on either side of a random element in a list?
Question: I have finished this part of the code so far:
wedding = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
from random import randrange
random_index = randrange(0, len(wedding))
print('TV =', wedding[random_index])
I got stuck at a step that I need to find two elements on either side of the
element that I have randomly selected.
Answer: When using arrays, always check for your array bounds in your code. The below
code will output the value from the array that is to the left and right of the
randomly selected index value. If either of the indexes are out of bounds, it
will not output that value.
from random import randrange
wedding = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
random_index = randrange(0, len(wedding))
print('TV = ', wedding[random_index])
if random_index-1 >= 0:
print('Left of Index = ', wedding[random_index-1])
if random_index + 1 < len(wedding):
    print('Right of Index = ', wedding[random_index + 1])
|
Problems with pd.read_csv
Question: I have Anaconda 3 on Windows 10. I am using pd.read_csv() to load csv files
but I get error messages. To begin with I tried `df =
pd.read_csv('C:\direct_marketing.csv')` which worked and the file was
imported.
Then I tried `df = pd.read_csv('C:\tutorial.csv')` and I received the
following error message:
Traceback (most recent call last):
File "<ipython-input-3-ce208cc2684f>", line 1, in <module>
df = pd.read_csv('C:\tutorial.csv')
File "C:\Users\Alexandros_7\Anaconda3\lib\site-packages\pandas\io\parsers.py", line 562, in parser_f
return _read(filepath_or_buffer, kwds)
File "C:\Users\Alexandros_7\Anaconda3\lib\site-packages\pandas\io\parsers.py", line 315, in _read
parser = TextFileReader(filepath_or_buffer, **kwds)
File "C:\Users\Alexandros_7\Anaconda3\lib\site-packages\pandas\io\parsers.py", line 645, in __init__
self._make_engine(self.engine)
File "C:\Users\Alexandros_7\Anaconda3\lib\site-packages\pandas\io\parsers.py", line 799, in _make_engine
self._engine = CParserWrapper(self.f, **self.options)
File "C:\Users\Alexandros_7\Anaconda3\lib\site-packages\pandas\io\parsers.py", line 1213, in __init__
self._reader = _parser.TextReader(src, **kwds)
File "pandas\parser.pyx", line 358, in pandas.parser.TextReader.__cinit__ (pandas\parser.c:3427)
File "pandas\parser.pyx", line 628, in pandas.parser.TextReader._setup_parser_source (pandas\parser.c:6861)
OSError: File b'C:\tutorial.csv' does not exist
Then I moved the file to a new folder and renamed it and again used read.csv()
to import it:
df = pd.read_csv('C:\Users\test.csv')
This time I received a different error message:
File "<ipython-input-5-03c6d380c174>", line 1
df = pd.read_csv('C:\Users\test.csv')
^
SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape
Could you help me understand what is going on and how to handle this
situation?
Thanks a lot!
Answer: Try escaping the backslashes:
df = pd.read_csv('C:\\Users\\test.csv')
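The underlying problem is that `\t` and `\U` are interpreted as escape
sequences in a normal string literal, so a raw string or forward slashes work
as well:
    df = pd.read_csv(r'C:\Users\test.csv')  # raw string
    df = pd.read_csv('C:/Users/test.csv')   # forward slashes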
|
Display select Mysql query in python with output nice and readable
Question: I need a Python script that displays the results of a SQL query with nice,
readable output; the approach below is not readable for large tables...
cnx = mysql.connector.connect(user='root', password='*****',
host='127.0.0.1',
database='dietetique')
c = cnx.cursor()
sys.stdout = open('mysql_data.log', 'w')
c.execute("SELECT * FROM administrations;")
for row in c:
print row
Answer:
import pypyodbc
ID=2
ConnectionDtl='Driver={SQL Server};Server=WIN7-297;Database=AdventureWorks2014;trusted_connection=yes'
connection = pypyodbc.connect(ConnectionDtl)
print("Retrieve row based on [FirstName]='Mani'")
cursor = connection.cursor()
SQLCommand = ("SELECT [FirstName],[LastName] "
"FROM Person.SampleData "
"WHERE FirstName =?")
Values = ['Mani']
print(SQLCommand)
cursor.execute(SQLCommand,Values)
i = 1
for row in cursor:
    print str(i) + ". FirstName: " + row[0] + " LastName: " + row[1]
    i = i + 1
connection.close()
|
Django Standalone Script
Question: I am trying to access my Django (v1.10) app DB from another python script and
having some trouble doing so.
This is my file and folder structure:
store
store
__init.py__
settings.py
urls.py
wsgi.py
store_app
__init.py__
admin.py
apps.py
models.py
...
db.sqlite3
manage.py
other_script.py
In accordance with [Django's
documentation](https://docs.djangoproject.com/en/1.10/topics/settings/#calling-
django-setup-is-required-for-standalone-django-usage) my `other_script.py`
looks like this:
import django
from django.conf import settings
settings.configure(DEBUG=True)
django.setup()
from store.store_app.models import MyModel
But it generates a runtime error:
RunTimeError: Model class store.store_app.models.MyModel doesn't declare an explicit app_label and isn't in an application in INSTALLED_APPS.
I should note that my `INSTALLED_APPS` list contains `store_app` as its last
element.
If instead I try to pass a config like this:
import django
from django.conf import settings
from store.store_app.apps import StoreAppConfig
settings.configure(StoreAppConfig, DEBUG=True)
django.setup()
from store.store_app.models import MyModel
I get:
AttributeError: type object 'StoreAppConfig has no attribute 'LOGGING_CONFIG'.
If I edit `settings.py` and add `LOGGING_CONFIG=None` I get another error
about another missing attribute, and so on.
Any suggestions will be appreciated.
Answer: This sounds like a great use case for [Django Management
commands.](https://docs.djangoproject.com/en/1.10/howto/custom-management-
commands/) which has the added bonus you can run it scheduled from cron,
direct from the commandline, or call from inside django. This gives the script
full access to the same settings and environment variables as your main
project.
If you move this into the management/commands directory of an app that is in
`INSTALLED_APPS` - using store_app here as an example, not a suggestion - it
should work.
    store_app
        management
            __init__.py
            commands
                __init__.py
                otherscript.py
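A minimal command sketch, assuming the layout above (`otherscript` is just a
placeholder name), could look like:
    from django.core.management.base import BaseCommand
    from store_app.models import MyModel

    class Command(BaseCommand):
        help = "Standalone job that needs the project's models"

        def handle(self, *args, **options):
            # full ORM and settings access, no manual settings.configure() needed
            self.stdout.write(str(MyModel.objects.count()))
You would then run it with `python manage.py otherscript`.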
|
Apache poi multiline bullet point is working but not multiple paragraphs?
Question: I am generating a Word document using the Apache POI library. The bullet points
work, but when I try multiple paragraphs it does not work. I have pasted my
Java class code below:
package samplebuller;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.math.BigInteger;
import org.apache.poi.xwpf.usermodel.TextAlignment;
import org.apache.poi.xwpf.usermodel.XWPFAbstractNum;
import org.apache.poi.xwpf.usermodel.XWPFDocument;
import org.apache.poi.xwpf.usermodel.XWPFNumbering;
import org.apache.poi.xwpf.usermodel.XWPFParagraph;
import org.apache.poi.xwpf.usermodel.XWPFRun;
import org.apache.xmlbeans.XmlException;
import org.openxmlformats.schemas.wordprocessingml.x2006.main.CTAbstractNum;
public class Samplebullet {
String fileName="";
InputStream in = null;
CTAbstractNum abstractNum = null;
public Samplebullet() {
try {
InputStream in = new FileInputStream("numbering.xml");
abstractNum = CTAbstractNum.Factory.parse(in);
} catch (XmlException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
}
public String createDocument(String fileName, String content) {
this.fileName=fileName;
XWPFDocument doc = new XWPFDocument();
doc.createNumbering();
XWPFNumbering numbering=null;
numbering=doc.createNumbering();
XWPFParagraph para = doc.createParagraph();
para.setVerticalAlignment(TextAlignment.CENTER);
para.setNumID(addListStyle(abstractNum, doc, numbering));
XWPFRun run=para.createRun();
run.setText("JAVA Code");
run.addBreak();
XWPFParagraph para1 = doc.createParagraph();
para1.setVerticalAlignment(TextAlignment.CENTER);
para1.setNumID(addListStyle(abstractNum, doc, numbering));
XWPFRun run1=para.createRun();
run1.setText("PYTHON CODE");
run1.addBreak();
XWPFParagraph para2 = doc.createParagraph();
para2.setVerticalAlignment(TextAlignment.CENTER);
para2.setNumID(addListStyle(abstractNum, doc, numbering));
XWPFRun run2=para.createRun();
run2.setText("PHP CODE");
run2.addBreak();
try {
FileOutputStream out = new FileOutputStream(fileName);
doc.write(out);
out.close();
in.close();
} catch(Exception e) {}
return null;
}
private BigInteger addListStyle(CTAbstractNum abstractNum, XWPFDocument doc, XWPFNumbering numbering) {
try {
XWPFAbstractNum abs = new XWPFAbstractNum(abstractNum, numbering);
BigInteger id = BigInteger.valueOf(0);
boolean found = false;
while (!found) {
Object o = numbering.getAbstractNum(id);
found = (o == null);
if (!found)
id = id.add(BigInteger.ONE);
}
abs.getAbstractNum().setAbstractNumId(id);
id = numbering.addAbstractNum(abs);
return doc.getNumbering().addNum(id);
} catch (Exception e) {
e.printStackTrace();
return null;
}
}
public static void main(String[] args) throws Exception {
String fileName="Test.docx";
new Samplebullet().createDocument(fileName, "First Level@@Second Level@@Second Level@@First Level");
}
}
My xml numbering file code:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<w:numbering xmlns:wpc="http://schemas.microsoft.com/office/word/2010/wordprocessingCanvas" xmlns:mo="http://schemas.microsoft.com/office/mac/office/2008/main" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:mv="urn:schemas-microsoft-com:mac:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math" xmlns:v="urn:schemas-microsoft-com:vml" xmlns:wp14="http://schemas.microsoft.com/office/word/2010/wordprocessingDrawing" xmlns:wp="http://schemas.openxmlformats.org/drawingml/2006/wordprocessingDrawing" xmlns:w10="urn:schemas-microsoft-com:office:word" xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main" xmlns:w14="http://schemas.microsoft.com/office/word/2010/wordml" xmlns:wpg="http://schemas.microsoft.com/office/word/2010/wordprocessingGroup" xmlns:wpi="http://schemas.microsoft.com/office/word/2010/wordprocessingInk" xmlns:wne="http://schemas.microsoft.com/office/word/2006/wordml" xmlns:wps="http://schemas.microsoft.com/office/word/2010/wordprocessingShape" mc:Ignorable="w14 wp14">
<w:abstractNum w:abstractNumId="0">
<w:nsid w:val="3A6A237F"/>
<w:multiLevelType w:val="hybridMultilevel"/>
<w:tmpl w:val="5C9890C4"/>
<w:lvl w:ilvl="0" w:tplc="0409000F"><w:start w:val="1"/><w:numFmt w:val="decimal"/><w:lvlText w:val="%1."/><w:lvlJc w:val="left"/><w:pPr><w:ind w:left="720" w:hanging="360"/></w:pPr></w:lvl>
<w:lvl w:ilvl="1" w:tplc="04090019" w:tentative="1"><w:start w:val="1"/><w:numFmt w:val="lowerLetter"/><w:lvlText w:val="%2."/><w:lvlJc w:val="left"/><w:pPr><w:ind w:left="1440" w:hanging="360"/></w:pPr></w:lvl>
<w:lvl w:ilvl="2" w:tplc="0409001B" w:tentative="1"><w:start w:val="1"/><w:numFmt w:val="lowerRoman"/><w:lvlText w:val="%3."/><w:lvlJc w:val="right"/><w:pPr><w:ind w:left="2160" w:hanging="180"/></w:pPr></w:lvl>
<w:lvl w:ilvl="3" w:tplc="0409000F" w:tentative="1"><w:start w:val="1"/><w:numFmt w:val="decimal"/><w:lvlText w:val="%4."/><w:lvlJc w:val="left"/><w:pPr><w:ind w:left="2880" w:hanging="360"/></w:pPr></w:lvl>
<w:lvl w:ilvl="4" w:tplc="04090019" w:tentative="1"><w:start w:val="1"/><w:numFmt w:val="lowerLetter"/><w:lvlText w:val="%5."/><w:lvlJc w:val="left"/><w:pPr><w:ind w:left="3600" w:hanging="360"/></w:pPr></w:lvl>
<w:lvl w:ilvl="5" w:tplc="0409001B" w:tentative="1"><w:start w:val="1"/><w:numFmt w:val="lowerRoman"/><w:lvlText w:val="%6."/><w:lvlJc w:val="right"/><w:pPr><w:ind w:left="4320" w:hanging="180"/></w:pPr></w:lvl>
<w:lvl w:ilvl="6" w:tplc="0409000F" w:tentative="1"><w:start w:val="1"/><w:numFmt w:val="decimal"/><w:lvlText w:val="%7."/><w:lvlJc w:val="left"/><w:pPr><w:ind w:left="5040" w:hanging="360"/></w:pPr></w:lvl>
<w:lvl w:ilvl="7" w:tplc="04090019" w:tentative="1"><w:start w:val="1"/><w:numFmt w:val="lowerLetter"/><w:lvlText w:val="%8."/><w:lvlJc w:val="left"/><w:pPr><w:ind w:left="5760" w:hanging="360"/></w:pPr></w:lvl>
<w:lvl w:ilvl="8" w:tplc="0409001B" w:tentative="1"><w:start w:val="1"/><w:numFmt w:val="lowerRoman"/><w:lvlText w:val="%9."/><w:lvlJc w:val="right"/><w:pPr><w:ind w:left="6480" w:hanging="180"/></w:pPr></w:lvl>
</w:abstractNum>
<w:abstractNum w:abstractNumId="1">
<w:nsid w:val="5E7736F6"/>
<w:multiLevelType w:val="hybridMultilevel"/>
<w:tmpl w:val="F602653C"/>
<w:lvl w:ilvl="0" w:tplc="0409000F"><w:start w:val="1"/><w:numFmt w:val="decimal"/><w:lvlText w:val="%1."/><w:lvlJc w:val="left"/><w:pPr><w:ind w:left="720" w:hanging="360"/></w:pPr></w:lvl>
<w:lvl w:ilvl="1" w:tplc="04090001"><w:start w:val="1"/><w:numFmt w:val="bullet"/><w:lvlText w:val=""/><w:lvlJc w:val="left"/><w:pPr><w:ind w:left="1440" w:hanging="360"/></w:pPr><w:rPr><w:rFonts w:ascii="Symbol" w:hAnsi="Symbol" w:hint="default"/></w:rPr></w:lvl>
<w:lvl w:ilvl="2" w:tplc="04090003"><w:start w:val="1"/><w:numFmt w:val="bullet"/><w:lvlText w:val="o"/><w:lvlJc w:val="left"/><w:pPr><w:ind w:left="2340" w:hanging="360"/></w:pPr><w:rPr><w:rFonts w:ascii="Courier New" w:hAnsi="Courier New" w:hint="default"/></w:rPr></w:lvl>
<w:lvl w:ilvl="3" w:tplc="0409000F" w:tentative="1"><w:start w:val="1"/><w:numFmt w:val="decimal"/><w:lvlText w:val="%4."/><w:lvlJc w:val="left"/><w:pPr><w:ind w:left="2880" w:hanging="360"/></w:pPr></w:lvl>
<w:lvl w:ilvl="4" w:tplc="04090019" w:tentative="1"><w:start w:val="1"/><w:numFmt w:val="lowerLetter"/><w:lvlText w:val="%5."/><w:lvlJc w:val="left"/><w:pPr><w:ind w:left="3600" w:hanging="360"/></w:pPr></w:lvl>
<w:lvl w:ilvl="5" w:tplc="0409001B" w:tentative="1"><w:start w:val="1"/><w:numFmt w:val="lowerRoman"/><w:lvlText w:val="%6."/><w:lvlJc w:val="right"/><w:pPr><w:ind w:left="4320" w:hanging="180"/></w:pPr></w:lvl>
<w:lvl w:ilvl="6" w:tplc="0409000F" w:tentative="1"><w:start w:val="1"/><w:numFmt w:val="decimal"/><w:lvlText w:val="%7."/><w:lvlJc w:val="left"/><w:pPr><w:ind w:left="5040" w:hanging="360"/></w:pPr></w:lvl>
<w:lvl w:ilvl="7" w:tplc="04090019" w:tentative="1"><w:start w:val="1"/><w:numFmt w:val="lowerLetter"/><w:lvlText w:val="%8."/><w:lvlJc w:val="left"/><w:pPr><w:ind w:left="5760" w:hanging="360"/></w:pPr></w:lvl>
<w:lvl w:ilvl="8" w:tplc="0409001B" w:tentative="1"><w:start w:val="1"/><w:numFmt w:val="lowerRoman"/><w:lvlText w:val="%9."/><w:lvlJc w:val="right"/><w:pPr><w:ind w:left="6480" w:hanging="180"/></w:pPr></w:lvl>
</w:abstractNum>
<w:num w:numId="1"><w:abstractNumId w:val="1"/></w:num>
<w:num w:numId="2"><w:abstractNumId w:val="0"/></w:num>
</w:numbering>
Can anyone please help me with the multi-line bullet points? Thanks in advance.
Answer: There are mainly two issues with your code:
1. Your `run`s are all in `para`, which is the first paragraph. I don't think that is what you intended.
2. Your `addListStyle` creates a new abstract numbering each time it is called. So your `numbering.xml` gets unnecessarily big and contains multiple abstract numberings with the same IDs 0 and 1. `addListStyle` should be called only once, and the returned `BigInteger` should be used as the `numID` for all paragraphs that use the same numbering.
But I would not do the whole thing in such a complicated way. Consider the
following example:
import java.io.File;
import java.io.FileOutputStream;
import org.apache.poi.xwpf.usermodel.*;
import org.openxmlformats.schemas.wordprocessingml.x2006.main.CTAbstractNum;
import java.math.BigInteger;
public class CreateWordBulletOrDecimalList {
static String cTAbstractNumBulletXML =
"<w:abstractNum xmlns:w=\"http://schemas.openxmlformats.org/wordprocessingml/2006/main\" w:abstractNumId=\"0\">"
+ "<w:multiLevelType w:val=\"hybridMultilevel\"/>"
+ "<w:lvl w:ilvl=\"0\"><w:start w:val=\"1\"/><w:numFmt w:val=\"bullet\"/><w:lvlText w:val=\"\"/><w:lvlJc w:val=\"left\"/><w:pPr><w:ind w:left=\"720\" w:hanging=\"360\"/></w:pPr><w:rPr><w:rFonts w:ascii=\"Symbol\" w:hAnsi=\"Symbol\" w:hint=\"default\"/></w:rPr></w:lvl>"
+ "<w:lvl w:ilvl=\"1\" w:tentative=\"1\"><w:start w:val=\"1\"/><w:numFmt w:val=\"bullet\"/><w:lvlText w:val=\"o\"/><w:lvlJc w:val=\"left\"/><w:pPr><w:ind w:left=\"1440\" w:hanging=\"360\"/></w:pPr><w:rPr><w:rFonts w:ascii=\"Courier New\" w:hAnsi=\"Courier New\" w:cs=\"Courier New\" w:hint=\"default\"/></w:rPr></w:lvl>"
+ "<w:lvl w:ilvl=\"2\" w:tentative=\"1\"><w:start w:val=\"1\"/><w:numFmt w:val=\"bullet\"/><w:lvlText w:val=\"\"/><w:lvlJc w:val=\"left\"/><w:pPr><w:ind w:left=\"2160\" w:hanging=\"360\"/></w:pPr><w:rPr><w:rFonts w:ascii=\"Wingdings\" w:hAnsi=\"Wingdings\" w:hint=\"default\"/></w:rPr></w:lvl>"
+ "</w:abstractNum>";
static String cTAbstractNumDecimalXML =
"<w:abstractNum xmlns:w=\"http://schemas.openxmlformats.org/wordprocessingml/2006/main\" w:abstractNumId=\"0\">"
+ "<w:multiLevelType w:val=\"hybridMultilevel\"/>"
+ "<w:lvl w:ilvl=\"0\"><w:start w:val=\"1\"/><w:numFmt w:val=\"decimal\"/><w:lvlText w:val=\"%1\"/><w:lvlJc w:val=\"left\"/><w:pPr><w:ind w:left=\"720\" w:hanging=\"360\"/></w:pPr></w:lvl>"
+ "<w:lvl w:ilvl=\"1\" w:tentative=\"1\"><w:start w:val=\"1\"/><w:numFmt w:val=\"decimal\"/><w:lvlText w:val=\"%1.%2\"/><w:lvlJc w:val=\"left\"/><w:pPr><w:ind w:left=\"1440\" w:hanging=\"360\"/></w:pPr></w:lvl>"
+ "<w:lvl w:ilvl=\"2\" w:tentative=\"1\"><w:start w:val=\"1\"/><w:numFmt w:val=\"decimal\"/><w:lvlText w:val=\"%1.%2.%3\"/><w:lvlJc w:val=\"left\"/><w:pPr><w:ind w:left=\"2160\" w:hanging=\"360\"/></w:pPr></w:lvl>"
+ "</w:abstractNum>";
public static void main(String[] args) throws Exception {
XWPFDocument document = new XWPFDocument();
XWPFParagraph paragraph = document.createParagraph();
XWPFRun run=paragraph.createRun();
run.setText("The List:");
CTAbstractNum cTAbstractNum = CTAbstractNum.Factory.parse(cTAbstractNumBulletXML);
//CTAbstractNum cTAbstractNum = CTAbstractNum.Factory.parse(cTAbstractNumDecimalXML);
XWPFAbstractNum abstractNum = new XWPFAbstractNum(cTAbstractNum);
XWPFNumbering numbering = document.createNumbering();
BigInteger abstractNumID = numbering.addAbstractNum(abstractNum);
BigInteger numID = numbering.addNum(abstractNumID);
for (int i = 0; i < 3; i++) {
paragraph = document.createParagraph();
paragraph.setNumID(numID);
run = paragraph.createRun();
run.setText("List item " + i);
if (i < 2) paragraph.setSpacingAfter(0);
if (i == 1) {
paragraph = document.createParagraph();
paragraph.setNumID(numID);
paragraph.getCTP().getPPr().getNumPr().addNewIlvl().setVal(BigInteger.valueOf(1));
run = paragraph.createRun();
run.setText("Sub list item " + i + " a");
paragraph.setSpacingAfter(0);
}
}
paragraph = document.createParagraph();
run=paragraph.createRun();
run.setText("Paragraph after the list.");
FileOutputStream out = new FileOutputStream("CreateWordBulletOrDecimalList.docx");
document.write(out);
System.out.println("CreateWordBulletOrDecimalList written successully");
}
}
|
How do I execute multiple shell commands with a single python subprocess call?
Question: Ideally I would have a list of commands that I want to execute, and I would
execute all of them using a single subprocess call. I was able to do something
similar by storing all the commands as a shell script and calling that script
using subprocess, but I want a pure Python solution. I will be executing the
commands with shell=True and yes, I understand the risks.
Answer: Use semicolon to chain them if they're independent.
For example, (Python 3)
>>> import subprocess
>>> result = subprocess.run('echo Hello ; echo World', shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
>>> result
CompletedProcess(args='echo Hello ; echo World', returncode=0, stdout=b'Hello\nWorld\n')
But technically that's not a pure Python solution, because of `shell=True`.
The arg processing is actually done by shell. (You may think of it as of
executing `/bin/sh -c "$your_arguments"`)
If you want a somewhat more pure solution, you'll have to use `shell=False`
and loop over your several commands. As far as I know, there is no way to
start multiple subprocesses directly with subprocess module.
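A sketch of that `shell=False` variant, looping over a list of argument lists:
    import subprocess

    commands = [["echo", "Hello"], ["echo", "World"]]
    for cmd in commands:
        result = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        print(result.stdout.decode(), end="")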
|
Write hex code to text file from integer value in python
Question: **Details: Ubuntu 14.04(LTS), Python(2.7)**
I want to write hex code to a text file so I wrote this code:
import numpy as np
width = 28
height = 28
num = 10
info = np.array([num, width, height]).reshape(1,3)
info = info.astype(np.int32)
newfile = open('test.txt', 'w')
newfile.write(info)
newfile.close()
I expected like this:
00 00 00 0A 00 00 00 1C 00 00 00 1C
But this is my actual result:
0A 00 00 00 1C 00 00 00 1C 00 00 00
Why did this happen and how can I get my expected output?
Answer: If you want big endian binary data, call `astype(">i")` then `tostring()`:
import numpy as np
width = 28
height = 28
num = 10
info = np.array([num, width, height]).reshape(1,3)
info = info.astype(np.int32)
info.astype(">i").tostring()
If you want hex text (on Python 2.7, iterating over the byte string yields
one-character strings, hence the `ord`):
    " ".join("{:02X}".format(ord(x)) for x in info.astype(">i").tostring())
the output:
00 00 00 0A 00 00 00 1C 00 00 00 1C
|
matplotlib 2D plot from x,y,z values
Question: I am a Python beginner.
I have a list of X values
x_list = [-1,2,10,3]
and I have a list of Y values
y_list = [3,-3,4,7]
I then have a Z value for each couple. Schematically, this works like that:
X Y Z
-1 3 5
2 -3 1
10 4 2.5
3 7 4.5
and the Z values are stored in `z_list = [5,1,2.5,4.5]`. I need to get a 2D
plot with the X values on the X axis, the Y values on the Y axis, and for each
couple the Z value, represented by an intensity map. This is what I have
tried, unsuccessfully:
X, Y = np.meshgrid(x_list, y_list)
fig, ax = plt.subplots()
extent = [x_list.min(), x_list.max(), y_list.min(), y_list.max()]
im=plt.imshow(z_list, extent=extent, aspect = 'auto')
plt.colorbar(im)
plt.show()
How to get this done correctly?
Answer: Here is one way of doing it:
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.colors import LogNorm
x_list = np.array([-1,2,10,3])
y_list = np.array([3,-3,4,7])
z_list = np.array([5,1,2.5,4.5])
N = int(len(z_list)**.5)
z = z_list.reshape(N, N)
plt.imshow(z, extent=(np.amin(x_list), np.amax(x_list), np.amin(y_list), np.amax(y_list)), norm=LogNorm(), aspect = 'auto')
plt.colorbar()
plt.show()
[](http://i.stack.imgur.com/WIIRd.png)
I followed this link: [How to plot a density map in
python?](http://stackoverflow.com/questions/24119920/how-to-plot-a-density-
map-in-python)
|
Python - Filename validation help needed
Question: Bad Filename Example: `foo is-not_bar-3.mp4` What it should be:
`foo_is_not_bar-3.mp4`
I only want to keep a `-` near the end of the string when it is followed by a
digit and then the extension. The closest I have gotten so far is with the
following code:
fname = 'foo is-not_bar-3.mp4'
valchars = '-_. %s%s' % (string.ascii_letters, string.digits)
f = ''.join(c for c in fname if c in valchars).replace(' ', '_').replace('-', '_')
Answer: You can use regex replacement with a negative lookahead:
import re
fname = 'foo is-not_bar-3.mp4'
f = re.sub(r'\s|-(?!\d+)', '_', fname)
print(f)
>> 'foo_is_not_bar-3.mp4'
This will replace every space and every `-` with `_` **unless** the `-` is
immediately followed by one or more digits.
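If you want to be strict about the stated requirement (keep the `-` only when
the digits are immediately followed by the extension), one possible variant, an
assumption about the intent rather than part of the original answer, anchors the
lookahead to the end of the name:

import re

fname = 'foo is-not_bar-3.mp4'
# Keep '-' only when it is followed by digits plus the extension at the end.
f = re.sub(r'\s|-(?!\d+\.\w+$)', '_', fname)
print(f)  # foo_is_not_bar-3.mp4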
|
Printing the current minute in a loop with python
Question: I'm using Python 3 and trying to create a script that runs constantly and, at
some point, executes a specific piece of code. The code I have so far checks the
current minute: if it's above a given minute, it prints a message; otherwise, it
prints the current minute, waits 5 seconds and tries again. The problem is that
it only ever prints the minute at which the code started.
import time
from datetime import datetime
now = datetime.now()
hour = now.hour
minute = now.minute
L=1
while (L == 1):
if minute > 39:
print ("It's past "+str(hour)+":"+str(minute))
L = 2
else:
print(str(minute))
time.sleep(5)
Answer: Some programming pointers:
1) To make a loop that runs until you break out of it, use the following construct:
while (True):
if (...):
....
break
2) The time stored in your `now` variable is static; it must be updated with the
new time inside the loop:
while (True):
now = datetime.now()
So:
while True:
    now = datetime.now()
    if now.minute > 39:
        print("Hour, minute:", now.hour, now.minute)
        print("All done!")
        break
    else:
        print("Minute, second:", now.minute, now.second)
        time.sleep(5)
3) In "real life" (TM) calculate the time you want to wait until your event
and sleep that long.
now = time.now()
if now.minute >= 39:
minutesToEvent = 0
else:
minutesToEvent = 39 - now.minute
print "Sleep seconds to next event:", minutesToEvent * 60
sleep(minutesToEvent * 60)
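If you need the wait to be exact to the second, one option (a sketch, not part
of the original answer) is to build the target time with `datetime` arithmetic
and sleep for the precise difference:

from datetime import datetime, timedelta
import time

now = datetime.now()
target = now.replace(minute=39, second=0, microsecond=0)
if target <= now:
    target += timedelta(hours=1)  # minute 39 of this hour has already passed
time.sleep((target - now).total_seconds())
print("It's past", target.strftime("%H:%M"))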
|
Error 3: Renaming files in python
Question: Newbie Python question.
I'm trying to rename files in a directory...
the value of `path` is `C:\tempdir\1\0cd3a8asdsdfasfasdsgvsdfc1.pdf` while the
value of `newfile` is `C:\tempdir\1\newfilename.pdf`
origfile = path
newfile = path.split("\\")
newfile = newfile[0]+"\\"+newfile[1]+"\\"+newfile[2]+"\\"+text+".pdf"
os.rename(path, newfile)
print origfile
print newfile
im getting the following error...
os.rename(path, newfile)
WindowsError: [Error 3] The system cannot find the path specified
I know the directory and file are good because I can call `os.stats()` on it. I
have changed the value of `newfile` to include the new file name only, but I
receive the same error (after reading the Python documentation on `rename()`).
My imported libraries are....
import sys
import os
import string
from os import path
import re
from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter
from pdfminer.converter import TextConverter
from pdfminer.layout import LAParams
from pdfminer.pdfpage import PDFPage
from cStringIO import StringIO
I've read some other threads on this topic pertaining to absolute vs.
relative paths. Obviously, my intent is to use absolute paths. My variables
are string variables; in other words...
origfile = "C:\tempdir\1\0cd3a8asdsdfasfasdsgvsdfc1.pdf"
Is that enough? Or am I supposed to be using some other declaration to tell
Python this is a path?
Answer: Can you try the following instead? You might find that renaming is easier
while using a different API.
import pathlib
parent = pathlib.Path('C:/') / 'tempdir' / '1'
old = parent / '0cd3a8asdsdfasfasdsgvsdfc1.pdf'
new = parent / 'newfilename.pdf'
old.rename(new)
Using the `pathlib` module makes working with paths in a cross-platform
fashion somewhat simpler. Note that `pathlib` is in the standard library only
from Python 3.4 onwards; the `cStringIO` import in the question suggests
Python 2, where a backport such as `pathlib2` would be needed.
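Separately, a likely cause of the original error (this is a guess based on the
literal shown in the question): in a normal Python string, `\t`, `\1` and `\0`
are escape sequences, so `"C:\tempdir\1\0cd3a8asdsdfasfasdsgvsdfc1.pdf"` does
not contain the path you think it does. A minimal sketch using raw strings with
plain `os.rename`:

import os

# Raw strings (r"...") stop backslashes from being treated as escape sequences;
# forward slashes also work on Windows.
origfile = r"C:\tempdir\1\0cd3a8asdsdfasfasdsgvsdfc1.pdf"
newfile = r"C:\tempdir\1\newfilename.pdf"
os.rename(origfile, newfile)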
|