Q:
Program not terminating when score limit is reached?
I'm working on a simple text-based trivia game as my first python project, and my program won't terminate once the score limit is reached.
def game(quest_list):
    points = 0
    score_limit = 20
    x, y = info()
    time.sleep(2)
    if y >= 18 and y < 100:
        time.sleep(1)
        while points < score_limit:
            random.choice(quest_list)(points)
            time.sleep(2)
            print("Current score:", points, "points")
        print("You beat the game!")
        quit()
...
A:
It looks like the points variable is not increased. Something like this might work in your inner loop:
while points < score_limit:
    points = random.choice(quest_list)(points)
    time.sleep(2)
    print("Current score:", points, "points")
I'm assuming that quest_list is a list of functions and that you're passing the points value as an argument. To make this example work, each quest function will also need to return the updated points. A perhaps cleaner way to build this would be for each quest to return only the points it generates. Then you could do something like:
quest = random.choice(quest_list)
points += quest()
Because points is an int (an immutable type), reassigning it inside the called function won't change the caller's value. You can read more about that in this StackOverflow question.
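To make the difference concrete, here is a minimal runnable sketch of the returned-points pattern (the quest functions below are hypothetical stand-ins for your real ones):

```python
import random

# Hypothetical quest functions: each returns only the points it awards.
def easy_quest():
    return 1

def hard_quest():
    return 3

def game(quest_list, score_limit=20):
    points = 0
    while points < score_limit:
        quest = random.choice(quest_list)
        points += quest()  # accumulate the points returned by the quest
    return points

final_score = game([easy_quest, hard_quest])
```

Because the loop condition re-reads points on every iteration, the game terminates as soon as the accumulated total reaches score_limit.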
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How can I return the length of each link in a string using JavaScript?
I need to calculate length and number of links in a string in JavaScript.
Here's an example of what I'm looking to do:
var myString = 'Lorem ipsum dolor sit amet, www.google.com/abc consectetur adipiscing elit. http://stackoverflow.com/question/ask Donec sed magna ultricies.'
function getLinkLength(myString) {
    // do stuff. ha!
    return linkArray; // returns [0] => 18, [1] => 37
}
Output should tell me the length of all links in a string, like so:
www.google.com/abc = 18
http://stackoverflow.com/question/ask = 37
Can you help me parse a string for links and return the length of each one? Email addresses should also count as links (ex. [email protected] = 16).
This is for a character counter where I don't want to penalize characters for link length, so I need to subtract the length of all links in a string for my counter.
Here are some regexes I'm looking to use. I realize these aren't perfect, but if I can handle the basic links I'll sacrifice the corner cases.
regexes.email = /^(?:[\w\!\#\$\%\&\'\*\+\-\/\=\?\^\`\{\|\}\~]+\.)*[\w\!\#\$\%\&\'\*\+\-\/\=\?\^\`\{\|\}\~]+@(?:(?:(?:[a-zA-Z0-9](?:[a-zA-Z0-9\-](?!\.)){0,61}[a-zA-Z0-9]?\.)+[a-zA-Z0-9](?:[a-zA-Z0-9\-](?!$)){0,61}[a-zA-Z0-9]?)|(?:\[(?:(?:[01]?\d{1,2}|2[0-4]\d|25[0-5])\.){3}(?:[01]?\d{1,2}|2[0-4]\d|25[0-5])\]))$/;
regexes.url = /^(?:(?:ht|f)tp(?:s?)\:\/\/|~\/|\/)?(?:\w+:\w+@)?((?:(?:[-\w\d{1-3}]+\.)+(?:com|org|net|gov|mil|biz|info|mobi|name|aero|jobs|edu|co\.uk|ac\.uk|it|fr|tv|museum|asia|local|travel|[a-z]{2}))|((\b25[0-5]\b|\b[2][0-4][0-9]\b|\b[0-1]?[0-9]?[0-9]\b)(\.(\b25[0-5]\b|\b[2][0-4][0-9]\b|\b[0-1]?[0-9]?[0-9]\b)){3}))(?::[\d]{1,5})?(?:(?:(?:\/(?:[-\w~!$+|.,=]|%[a-f\d]{2})+)+|\/)+|\?|#)?(?:(?:\?(?:[-\w~!$+|.,*:]|%[a-f\d{2}])+=?(?:[-\w~!$+|.,*:=]|%[a-f\d]{2})*)(?:&(?:[-\w~!$+|.,*:]|%[a-f\d{2}])+=?(?:[-\w~!$+|.,*:=]|%[a-f\d]{2})*)*)*(?:#(?:[-\w~!$ |\/.,*:;=]|%[a-f\d]{2})*)?$/i;
regexes.cc = /^(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|6(?:011|5[0-9][0-9])[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|(?:2131|1800|35\d{3})\d{11})$/;
regexes.urlsafe = /^[^&$+,\/:=?@ <>\[\]\{\}\\^~%#]+$/;
A:
Your URL regex is probably serious overkill, and it also misses certain cases.
It is probably better to go with a far simpler URL regex (unless you have an explicit reason for needing that particular pattern).
Here is a JSFiddle which does the trick: http://jsfiddle.net/m5ny4/1/
var input = "http://google.com google.com/abc [email protected] [email protected] www.cookies.com ftps://a.b.c.d/cookies [email protected]";
var pattern = /(?:[^\s]+@[a-z]+(\.[a-z]+)+)|(?:(?:(?:[a-z]+:\/\/)|\s)[a-z]+(\.[a-z]+)+(\/[^\s]*)?)/g;
var matches = input.match(pattern);
for (var i = 0, len = matches.length; i < len; i++) {
    $('ul').append('<li>' + matches[i] + " = " + matches[i].length + '</li>');
}
The pattern I use matches both emails and URLs, but it is greatly simplified compared to the ones you showed above. It could be reduced a bit (the two parts combined more closely), but I chose to keep them separate and just pipe them together because it's easier to read.
The regex basically has two big blocks: (?:[^\s]+@[a-z]+(\.[a-z]+)+) and (?:(?:(?:[a-z]+:\/\/)|\s)[a-z]+(\.[a-z]+)+(\/[^\s]*)?)
The first block is for email. Ignore the (?: ) wrapping around it, and you have [^\s]+@[a-z]+(\.[a-z]+)+. The [^\s]+ matches any non-whitespace characters before an @ sign. Afterwards, it matches any domain with any number of sub- or top-level domains (e.g., google.com, google.co.uk).
The second one, (?:(?:(?:[a-z]+:\/\/)|\s)[a-z]+(\.[a-z]+)+(\/[^\s]*)?), is the URL one. The first meaningful section is (?:(?:[a-z]+:\/\/)|\s), which matches any protocol or a whitespace character (to tell it where the start is). If you want to restrict it to certain protocols, just replace [a-z]+ with the protocols you want.
Next is [a-z]+, which matches the first (sub)domain, followed by (\.[a-z]+)+, which matches one or more additional domains (since you need at least two to make a legitimate domain name). Finally, we have (\/[^\s]*)?, which optionally matches everything until it finds whitespace.
The rest is pretty simple. Do the match globally (the g at the end of the pattern) to get all occurrences, then just loop through them and use .length on the strings to get their length.
I just output them in to a list, but you can do whatever you want by replacing the for loop.
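For the character-counter use case in the question, here is a small sketch built on the simplified pattern above (this helper function is my addition, not part of the original answer):

```javascript
var pattern = /(?:[^\s]+@[a-z]+(\.[a-z]+)+)|(?:(?:(?:[a-z]+:\/\/)|\s)[a-z]+(\.[a-z]+)+(\/[^\s]*)?)/g;

// Count characters excluding any matched links/emails, so links
// don't cost the user characters.
function adjustedLength(text) {
  var matches = text.match(pattern) || [];
  var linkChars = matches.reduce(function (sum, m) {
    // trim: the URL branch may swallow a leading space into the match
    return sum + m.trim().length;
  }, 0);
  return text.length - linkChars;
}
```

The trim() call matters because the whitespace alternative used as a URL anchor becomes part of the match.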
Q:
Transformation between latin squares
Let $L\in R^{k\times k}$ a Latin square matrix.
Which is the most general form of $A\in R^{k\times k}$ such that
$$
A^TLA=L'
$$
with $L'$ another Latin square?
Thanks!
Fabio
A:
I doubt that there's a very nice characterization in general.
Let $L$ be any $k \times k$ Latin square matrix (whose entries are $k$ distinct reals). Let $M$ be a $k \times k$ Latin square with symbolic entries. For $B = A^T L A$ to be a Latin square with the same pattern as
$M$, what we need is $B_{i,j} = B_{i',j'}$ if and only if $M_{i,j} = M_{i',j'}$.
We might start by solving a set of $k(k-1)$ quadratic equations (e.g. taking one $(i',j')$ for each symbol) in the $k^2$ unknowns $a_{ij}$. We might hope that for a generic solution the entries of $B$ that are not required to be equal will not be equal. Of course, restricting to real solutions is an additional complication. In some cases, symmetry dictates there is no solution: if one of $L$ and $M$ is symmetric, the other must also be symmetric.
For example, I tried the case $k=3$ with $L = \pmatrix{1 & 2 & 0\cr 2 & 0 & 1\cr 0 & 1 & 2}$ and $M = \pmatrix{a & b & c\cr b & c & a\cr c & a & b\cr}$ (both symmetric).
We get a set of $3$ equations in $9$ unknowns, which turns out to have Hilbert dimension $6$. One family of solutions is
$$ A = \pmatrix{a_{1,1} & a_{1,1} & a_{1,1}\cr
a_{2,1} & a_{2,1} & a_{2,1}\cr
-2 a_{1,1} & -2 a_{1,1} & -2 a_{1,1}\cr} $$
but this doesn't work as $A^T L A$ has all entries equal.
One $5$-parameter family that does work is
$$ \pmatrix{a_{1,1} & a_{1,2} & a_{1,3}\cr
u/d & v/d & w/d\cr
-2 a_{1,1} & a_{3,2} & a_{3,3}\cr} $$
where
$$ \eqalign{
u &= 72\,a_{1,1}^{2}a_{1,2}a_{1,3}+36\,a_{1,1}^{2}a_{1,2}a_{3,3}+36\,a_{1,1}^{2}a_{1,3}a_{3,2}+18\,a_{1,1}^{2}a_{3,2}a_{3,3}-4\,a_{1,1}a_{1,2}^{3}+12\,a_{1,1}a_{1,2}^{2}a_{3,2}\cr
&\quad+15\,a_{1,1}a_{1,2}a_{3,2}^{2}-4\,a_{1,1}a_{1,3}^{3}+12\,a_{1,1}a_{1,3}^{2}a_{3,3}+15\,a_{1,1}a_{1,3}a_{3,3}^{2}+4\,a_{1,1}a_{3,2}^{3}+4\,a_{1,1}a_{3,3}^{3}\cr
&\quad+9\,a_{3,3}^{2}a_{1,2}^{2}-18\,a_{3,2}a_{1,3}a_{3,3}a_{1,2}+9\,a_{3,2}^{2}a_{1,3}^{2}\cr
v &= 36\,a_{1,1}^{2}a_{1,3}^{2}+36\,a_{1,1}^{2}a_{1,3}a_{3,3}+9\,a_{1,1}^{2}a_{3,3}^{2}-9\,a_{1,1}a_{1,2}^{2}a_{3,3}+9\,a_{1,1}a_{1,2}a_{1,3}a_{3,2}-\tfrac{9}{2}\,a_{1,1}a_{1,2}a_{3,2}a_{3,3}\cr
&\quad+\tfrac{9}{2}\,a_{1,1}a_{1,3}a_{3,2}^{2}-2\,a_{1,2}^{4}-2\,a_{1,2}^{3}a_{3,2}-\tfrac{9}{2}\,a_{1,2}^{2}a_{3,2}^{2}-2\,a_{1,2}a_{1,3}^{3}-3\,a_{1,2}a_{1,3}^{2}a_{3,3}+3\,a_{1,2}a_{1,3}a_{3,3}^{2}\cr
&\quad-4\,a_{1,2}a_{3,2}^{3}+2\,a_{1,2}a_{3,3}^{3}+a_{1,3}^{3}a_{3,2}-\tfrac{15}{2}\,a_{1,3}^{2}a_{3,2}a_{3,3}-6\,a_{1,3}a_{3,2}a_{3,3}^{2}-a_{3,2}^{4}-a_{3,2}a_{3,3}^{3}\cr
w &= 36\,a_{1,1}^{2}a_{1,2}^{2}+36\,a_{1,1}^{2}a_{1,2}a_{3,2}+9\,a_{1,1}^{2}a_{3,2}^{2}+9\,a_{1,1}a_{1,2}a_{1,3}a_{3,3}+\tfrac{9}{2}\,a_{1,1}a_{1,2}a_{3,3}^{2}-9\,a_{1,1}a_{1,3}^{2}a_{3,2}\cr
&\quad-\tfrac{9}{2}\,a_{1,1}a_{1,3}a_{3,2}a_{3,3}-2\,a_{1,2}^{3}a_{1,3}+a_{1,2}^{3}a_{3,3}-3\,a_{1,2}^{2}a_{1,3}a_{3,2}-\tfrac{15}{2}\,a_{1,2}^{2}a_{3,2}a_{3,3}+3\,a_{1,2}a_{1,3}a_{3,2}^{2}\cr
&\quad-6\,a_{1,2}a_{3,2}^{2}a_{3,3}-2\,a_{1,3}^{4}-2\,a_{1,3}^{3}a_{3,3}-\tfrac{9}{2}\,a_{1,3}^{2}a_{3,3}^{2}+2\,a_{1,3}a_{3,2}^{3}-4\,a_{1,3}a_{3,3}^{3}-a_{3,2}^{3}a_{3,3}-a_{3,3}^{4}\cr
d &= 8\,a_{1,2}^{3}+12\,a_{1,2}^{2}a_{3,2}+6\,a_{1,2}a_{3,2}^{2}+8\,a_{1,3}^{3}+12\,a_{1,3}^{2}a_{3,3}+6\,a_{1,3}a_{3,3}^{2}+a_{3,2}^{3}+a_{3,3}^{3}\cr}$$
where some polynomials in the $a_{ij}$ must be nonzero for $B$ to have three distinct entries.
Q:
How to open and display pdf using qt in raspberry pi
I have a Raspberry Pi 3, and I want to open and display a PDF file using Qt on it. I wrote the code, but when running it I get the error: no module named QtPoppler, even though I installed Poppler with: sudo apt-get install python-poppler. Does anyone know how to solve this error?
A:
I assume from your imports that you are using Qt4.
Install the correct version of Poppler for Qt4:
apt-get install python-poppler-qt4
Then import the python binding of Poppler in the proper way:
import popplerqt4
Source: https://pypi.python.org/pypi/python-poppler-qt4/
Q:
How to pick consecutive numbers from list?
Question is very simple.
If we have
tst = {2,3,4,6,7,9,11}
result must be
{{2,3,4}, {6,7}, {9}, {11}}
There are similar questions, but not exact.
My best solution is:
myFun[arr_] := Module[{prev = First@arr, tag = First@arr},
  Reap[
    Sow[prev, tag];
    Do[
      If[prev != e - 1, tag = e];
      Sow[e, tag];
      prev = e,
      {e, Rest@arr}]
  ]][[2]];
Is it possible to do it better?
A:
Split[] was meant for this:
Split[{2, 3, 4, 6, 7, 9, 11}, #2 - #1 == 1 &]
{{2, 3, 4}, {6, 7}, {9}, {11}}
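For comparison (my addition, not part of the original answer), the same run-splitting idea in Python uses the classic index-offset trick with itertools.groupby:

```python
from itertools import groupby

def split_consecutive(nums):
    """Group a sorted list of integers into runs of consecutive values."""
    # The key (value minus index) is constant within a run, because
    # both the value and the index increase by exactly 1.
    runs = []
    for _, group in groupby(enumerate(nums), key=lambda p: p[1] - p[0]):
        runs.append([v for _, v in group])
    return runs

print(split_consecutive([2, 3, 4, 6, 7, 9, 11]))  # → [[2, 3, 4], [6, 7], [9], [11]]
```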
Q:
Custom Media Types Used In Request's Body Content Type When Designing A REST Service?
When creating your own custom media type format (say application/vnd.myapp+xml), should the client when sending body content, do so in the custom media type?
For example you PUT a representation of an order to a uri. Should the content be application/vnd.myapp+xml, or just xml, since the client is not going to be including hypermedia controls like links?
The server will always respond with the custom media type if the user accepts it (which it should), but do clients have to use it in their request bodies?
A:
Clients don't necessarily have to send data to the server (e.g. via PUT or POST) using the same media type that the server sends back in a GET response. It's up to the service to decide what media types it can receive and what types it will return. And of course, it can be implemented so as to support multiple media types in both directions for the same resource.
Q:
Why am I getting downvoted when I downvote someone
Possible Duplicate:
How does “Reputation” work?
I downvoted an answer to a question and I got downvoted myself. Could somebody explain please?
A:
You are not getting downvoted.
When you downvote an answer, you lose one reputation point. This is explained in the FAQ.
A:
The reputation loss is because you downvoted, not because you were downvoted.
See the docs for more information.
Q:
iOS AVFoundation Video Capture Orientation Options
I have an app that I would like to have video capture for the front-facing camera only. That's no problem. But I would like the video capture to always be in landscape, even when the phone is being held in portrait.
I have a working implementation based on the AVCamDemo code that Apple published. And borrowing from the information in this tech note, I am able to specify the orientation. There's just one trick: while the video frame is oriented correctly, the contents still appear as though shot in portrait:
I'm wondering if I'm just getting boned by the physical constraints of the hardware: is the image sensor just oriented this way? The referenced tech note above makes this note:
Important: Setting the orientation on a still image output and movie
file output doesn't physically rotate the buffers. For the movie file
output, it applies a track transform (matrix) to the video track so
that the movie is rotated on playback, and for the still image output
it inserts exif metadata that image viewers use to rotate the image
properly when viewing later.
But my playback of that video suggests otherwise. Any insight or suggestions would be appreciated!
Thanks,
Aaron.
A:
To answer your question, yes, the image sensor is just oriented that way. The video camera is an approx 1-megapixel "1080p" camera that has a fixed orientation. The 5MP (or 8MP for 4S, etc) still camera also has a fixed orientation. The lenses themselves don't rotate nor do any of the other camera bits, and hence the feed itself has a fixed orientation.
"But wait!", you say, "pictures I take with the camera app (or API) get rotated correctly. Why is that?" That's because iOS looks at the orientation of the phone when a picture is taken and stores that information with the picture (as an Exif attachment). Yet video isn't flagged this way -- each frame would have to be individually flagged, and then there are issues about what to do when the user rotates the phone during video....
So, no, you can't ask a video stream or a still image what orientation the phone was in when the video was captured. You can, however, directly ask the phone what orientation it is in now:
UIDeviceOrientation currentOrientation = [UIDevice currentDevice].orientation;
If you do that at the start of video capture (or when you grab a still image from a video feed) you can then use that information to do your own rotation of playback.
Q:
Is it worth it to use Bullet for 2D physics instead of Box2D for the sake of learning Bullet?
There isn't much more to the question. I'm not concerned about overhead, as I'm sure they are both fine for my purposes. Basically, I am familiar with Box2D concepts because of the Farseer Physics Engine, but I want to use Bullet when I make the jump to 3D stuff. Perhaps Bullet has some educational value for me even in the 2D realm?
The generalized version of the question is: should I use a 3D physics engine for a 2D game if I plan to utilize a 3D physics engine in the future? Or is this a waste of time which would not provide educational value?
A:
My general feeling is that learning to use something in the wrong context is not a valuable exercise.
A:
Why not treat them separately?
You have a 2D game; use the right engine/tools to make that game the best it can be.
You want to mess around with a 3D engine to learn it; then mess around with it, make some simple 3D games or apps, but keep that separate from the other game you're working on.
Q:
Java JPanel & JScrollPane
I'm working on an application that manages image filters, etc.
I want to have scroll bars when the image is too big to be displayed.
I put my custom panel (which extends JPanel) in a JScrollPane and added it to my JFrame.
My image is displayed, but not the whole image, and the scroll bars are not there.
How to get the scroll-bars to appear?
Here is my code :
CustomePanel test = new ImagePanel(new File("test.jpg"));
test.setPreferredSize(new Dimension(400, 400));
JScrollPane tmp = new JScrollPane(test);
this.getContentPane().add(tmp);
A:
It is likely that your initial preferred size does not match that of your Image. Rather than using setPreferredSize, override getPreferredSize to reflect the size of the image in ImagePanel:
@Override
public Dimension getPreferredSize() {
return new Dimension(image.getWidth(this), image.getHeight(this));
}
A JLabel would be a better approach here if the panel is not required as a container.
Q:
PHP preg_match to grab text in between two HTML tags
I'm trying to use preg_match to grab the text in between two HTML tags.
Here's a simplified version of my code:
$sPattern = "/<li class=\"sample\">(.*?)<\/li>/s";
$sText = "blah blah blah <li class=\"sample\">hello world!</li> blah blah blah";
preg_match($sPattern,$sText,$aMatch);
echo '<pre>'.print_r($aMatch).'</pre>';
However, when I run this code, I get the full HTML string returned:
<li class=\"sample\">hello world!</li>
Does anyone know what changes I need to make to my regular expression?
Note: I'm aware of other ways to parse data from an HTML page. For various reasons, DOMDocument and DOMXPath are not an option--I'm sticking with RegEx.
A:
This should work how you want:
$sPattern = "/<li class=\"sample\">(.*?)<\/li>/s";
$sText = "blah blah blah <li class=\"sample\">hello world!</li> blah blah blah";
preg_match($sPattern,$sText,$aMatch);
echo '<pre>'.$aMatch[1].'</pre>';
A:
You need to access the capturing group output.
var_dump( $aMatch[1]);
Here is a demo showing that the regex is working fine; you're just accessing the resulting array incorrectly.
Q:
How to Dynamically set your viewport to the Page Limits
I've got a page that is 1200px wide (but this can change). I've got a viewport that I can't change that is:
<meta name="viewport" content="initial-scale=1, width=device-width">
I'm trying to write a script where, if you come to the page with a device that is more than 601px wide and less than 1025px wide the screen will zoom out to the page width. Currently the device will load the page and show the first 768px then the user has to zoom out to see the remaining 1200px.
This is somewhat what I want:
if (window.innerWidth > 600 && window.innerWidth < 1024) {
    var defaultDeviceWidth = window.innerWidth;
    var defaultPageWidth = document.getElementById('wrapper').offsetWidth;
    if (defaultDeviceWidth < defaultPageWidth) {
        document.querySelector('meta[name="viewport"]').content = 'initial-scale=' +
            (defaultDeviceWidth / defaultPageWidth) +
            ', maximum-scale=' + (defaultDeviceWidth / defaultPageWidth) +
            ', user-scalable=yes';
    }
}
Which will output:
<meta name="viewport" content="initial-scale=0.64, maximum-scale=0.64, user-scalable=yes">
Problem with this is, maximum-scale zooms the view out but doesn't allow the user to zoom in if they want to.
Essentially:
I want tablets to zoom out to the page limits, as if you did the pinch out as far as it'll go using javascript.
A:
I think one popular method to best fit the website layout for viewers on all devices is to use responsive techniques (media queries, Bootstrap, jQuery Mobile, etc.).
Another way to load dynamic CSS styles is to load a different *.css file for each device resolution.
Have you thought about using a dynamic if-else to echo a different viewport meta tag once you have detected the device width, before the HTML is sent to the client?
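Addressing the zoom-in limitation directly: one approach (a sketch of my own, not from the original answer, with a hypothetical helper name) is to set initial-scale for the zoom-out but omit maximum-scale, so pinch-zoom stays available:

```javascript
// Build a viewport content string that zooms out to fit the page width
// but leaves the user free to zoom back in (no maximum-scale).
function fitViewport(deviceWidth, pageWidth) {
  if (deviceWidth >= pageWidth) return 'initial-scale=1, width=device-width';
  var scale = (deviceWidth / pageWidth).toFixed(2);
  return 'initial-scale=' + scale + ', user-scalable=yes';
}

// Usage (in a browser):
// document.querySelector('meta[name="viewport"]').content =
//     fitViewport(window.innerWidth, document.getElementById('wrapper').offsetWidth);
```

Note that browsers differ in how they re-apply a viewport meta tag changed after load, so this should be tested on the target devices.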
Q:
Xcode switch to IBaction code on click
In Visual Studio it is possible to select a control on a form and, by clicking on a control event, switch to that event's code. Is it possible to do this in the Connections Inspector?
A:
There is no way to do it like Visual Studio does.
Q:
Preposition confusion: "in home" vs "at home"
I want to know the correct preposition to use in
this sentence, which I wrote for my writing class assignment.
I have already received my grade, so I'm not trying to cheat or something.
'...Freedom can also refer to being secure in home and hometown'
Is it correct to use 'in home', or is it wrong? I know there are alternatives that fit better (e.g. 'at'), but what about 'in'?
I looked this up on the internet but opinions seem to vary significantly.
A:
I can see your logic. If someone is financially secure, they are secure in their finances. You can be secure in your job and secure in the knowledge that someone, somewhere will agree with you. These are common phrases.
Dictionary.com gives the following definitions for "secure" (amongst others):
free from or not exposed to danger or harm; safe
sure; certain; assured: secure of victory; secure in religious belief
If something is "secure at home", that would imply the first definition: it is free from risk. I believe you mean the second definition: "home and hometown" are "assured". Therefore, I don’t think you are wrong to use "in" in this context.
Q:
Get the text content of an html element using xpath
This is the html content
<p class="ref Hover">
<b class="Hover Hover Hover Hover Hover">Mfr Part#:</b>
MC34063ADR2G<br>
<b>Mounting Method:</b>
Surface Mount<br>
<b>Package Style:</b>
SOIC-8<br>
<b>Packaging:</b>
REEL<br>
</p>
Using the XPath expressions below, I am able to get only "Mfr Part#:".
//div[@id='product-desc']/p[2]/b[1]/text()
//div[@id='product-desc']/p[2]/b[1]
But I want "Mfr Part#:MC34063ADR2G"
A:
Your MC34063ADR2G should be at
//div[@id='product-desc']/p[2]/text()[2]
Q:
Parallel.for causes different results
I am currently trying to improve a C# project I am working on. Specifically, my goal is to parallelize some operations to reduce processing time.
I am starting with small snippets just to get the hang of it.
The following code (not parallel) works correctly (as expected)
for (int i = 0; i < M; i++)
{
    double d;
    try
    {
        d = Double.Parse(lData[i]);
    }
    catch (Exception)
    {
        throw new Exception("Wrong formatting on data number " + (i + 1) + " on line " + (lCount + 1));
    }
    sg[lCount % N][i] = d;
}
By using the following (parallel) code I would expect to obtain the exact same results, but that is not the case.
Parallel.For(0, M, i =>
{
    double d;
    try
    {
        d = Double.Parse(lData[i]);
    }
    catch (Exception)
    {
        throw new Exception("Wrong formatting on data number " + (i + 1) + " on line " + (lCount + 1));
    }
    sg[lCount % N][i] = d;
});
The part of the program these snippets are from reads data from a file, one line at a time. Each line is a sequence of comma-separated double-precision numbers that I put in the vector lData[] using String.Split(). Every N lines, the data sequence starts over with a new data frame (hence the % N in the element index when I assign the values).
It is my understanding (clearly wrong) that by putting the code from the (serial) for-loop in the third parameter of Parallel.For I parallelize its execution. This shouldn't change the results. Is the problem in the fact that the threads are all accessing to lCount and M? Should I make thread-local copies?
Thanks.
(since I'm new I am not allowed to create the Parallel.For tag)
EDIT:
I ran some more tests, inspecting the output at an earlier point in the code than before. It would appear that the parallel version of my code does not fill the sg[][] array entirely; rather, some values are left at their defaults (0, in my case).
EDIT 2 (to answer some of the comments):
lData[] is a string[] obtained by using string.Split(). The original string I am splitting is read from my data files. I wrote the code that generates them, so they are generally well-formatted (I still used the try-catch construct out of habit). Just before the for-loop (either parallel or serial) I check that lData[] has the correct number of values (M). If it doesn't, I throw an exception that prevents the program from reaching the for-loop in question.
sg[][] is an N-by-M array of type double (there was a typo in the snippets, now corrected; in my original code this error was not present). After I read N lines from the file, the array sg[][] contains a whole data set. After the for-loop (either parallel or serial) there is a portion of code that looks like this:
lCount++; // counting the lines I have already read
if ((lCount % N) == 0)
{
    // do things with sg[][]
    // reset sg[][]
}
So, I am on purpose overwriting all lines of sg[][]. The for-loop's whole purpose is to update the values in sg[][].
A:
After doing some line-by-line debugging over the weekend, I managed to find where the problem was.
Basically, unbeknownst to me, the threads created by Parallel.For did not inherit the CultureInfo (this is the normal behaviour of threads, and I didn't know that). What was happening was that strings like 3.256 were being parsed as 3256.0. This caused the issues I found in the output.
(Note: the default locale on my computer uses a comma as the decimal separator, but I had set it to the full stop in program.cs for all my code. I had incorrectly assumed this would be inherited by new threads.)
The correct parallel snippet looks like this:
CultureInfo newCulture = (CultureInfo)CultureInfo.CurrentCulture.Clone();
newCulture.NumberFormat.NumberDecimalSeparator = ".";
Parallel.For(0, M, i =>
{
    Thread.CurrentThread.CurrentCulture = newCulture;
    double d;
    try
    {
        d = Double.Parse(lData[i]);
    }
    catch (Exception)
    {
        throw new Exception("Wrong formatting on data number " + (i + 1) + " on line " + (lCount + 1));
    }
    GlobalVar.sgData[lCount % N][i] = d;
});
Thanks to all who pitched in with comments and opinions. Good information to improve my programming.
I updated the question tags to reflect where the issue really was.
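The broader lesson applies beyond C#. As an illustrative sketch in Python (my addition, with a hypothetical helper name), parsing with an explicit decimal separator avoids depending on ambient locale or culture state entirely:

```python
# Parse decimal numbers with an explicit separator instead of relying
# on ambient locale/culture settings, which may differ per thread in
# some runtimes (as happened with Parallel.For above).
def parse_decimal(text, decimal_sep="."):
    normalized = text.strip().replace(decimal_sep, ".")
    return float(normalized)
```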
Q:
How to append several lines of text in a file using a shell script
I want to write several lines (5 or more) to a file I'm going to create in a script. I can do this with echo >> filename, but I would like to know the best way to do this.
A:
You can use a here document:
cat <<EOF >> outputfile
some lines
of text
EOF
A:
I usually use the so-called "here-document" Dennis suggested. An alternative is:
(echo first line; echo second line) >> outputfile
This should have comparable performance in bash, as (....) starts a subshell, but echo is 'inlined' - bash does not run /bin/echo, but does the echo by itself.
It might even be faster because it involves no exec().
This style is even more useful if you want to use output from another command somewhere in the text.
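Another common approach (added here for completeness, not from the original answers) is printf, which prints each of its arguments on its own line:

```shell
# Append several lines with a single printf call: '%s\n' is applied
# to each argument in turn, so each argument becomes one line.
printf '%s\n' 'first line' 'second line' 'third line' >> outputfile
```

Unlike echo, printf's behavior is specified by POSIX, so this is portable across shells.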
Q:
LiveCharts - Connect across missing points
I am using LiveCharts to plot several line charts on the same graph. Some of the charts have missing data points.
Current graph with gaps:
I would like to connect across these gaps:
The goal if possible:
MainWindow.xaml
<Window x:Class="LiveChartsTest.MainWindow"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
xmlns:lvc="clr-namespace:LiveCharts.Wpf;assembly=LiveCharts.Wpf"
mc:Ignorable="d"
Title="MainWindow" Height="450" Width="800">
<Grid>
<lvc:CartesianChart Series="{Binding Series}">
<lvc:CartesianChart.AxisX>
<lvc:Axis Title="Date" Labels="{Binding Labels}"/>
</lvc:CartesianChart.AxisX>
</lvc:CartesianChart>
</Grid>
</Window>
MainWindow.xaml.cs
using LiveCharts;
using LiveCharts.Wpf;
using System;
using System.Windows;
namespace LiveChartsTest
{
public partial class MainWindow : Window
{
public MainWindow()
{
InitializeComponent();
// Create date labels
Labels = new string[10];
for (int i = 0; i < Labels.Length; i++)
{
Labels[i] = DateTime.Now.Add(TimeSpan.FromDays(i)).ToString("dd MMM yyyy");
}
Series = new SeriesCollection
{
new LineSeries
{
Title = "Dataset 1",
Values = new ChartValues<double>
{
4,
5,
7,
double.NaN,
double.NaN,
5,
2,
8,
double.NaN,
6
}
},
new LineSeries
{
Title = "Dataset 2",
Values = new ChartValues<double>
{
2,
3,
4,
5,
6,
3,
1,
4,
5,
3
}
}
};
DataContext = this;
}
public SeriesCollection Series { get; set; }
public string[] Labels { get; set; }
}
}
Is there any way to do this with LiveCharts?
A:
This looks more like a trigonometry problem, where you have to work out the coordinates of the missing points and add them to the SeriesCollection, so that you end up with flat-looking joints between the fragments.
Consider the following explanatory picture based on your graph:
Between the X and Y we have to deduce two points, A and B (we know that we need two in between because we can deduce that from the interval between X and Y or we can simply count the NaNs in the initial collection).
A & B Y coordinates could be easily deduced using what we already know and the angle α.
We are looking to calculate the lengths |BB'| and |AA'| (each, added to Y's value at the corresponding index, gives the final A and B).
We know basically that: tan(α)= |BB'|/|B'Y| = |AA'|/|A'Y| = |XZ|/|ZY|
For simplicity now let's assume that all intervals in the X-axis and Y-axis are equal 1, I will come back to this later.
Now we do know |XZ|/|ZY| (xz is the difference between x and y, and zy is basically how many NaNs there are in between), so we can easily calculate |BB'| and |AA'|:
|BB'| = (|XZ|/|ZY|) * |B'Y| (Note that |B'Y| is equal to one since it's a one unit interval)
|AA'| = (|XZ|/|ZY|) * |A'Y| (Note that |A'Y| is equal to two-unit interval )
Here is how a basic implementation of what was explained above looks (the code should be self-explanatory):
public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();
        // Create date labels
        Labels = new string[10];
        for (int i = 0; i < Labels.Length; i++)
        {
            Labels[i] = DateTime.Now.Add(TimeSpan.FromDays(i)).ToString("dd MMM yyyy");
        }
        var chartValues = new ChartValues<double>
        {
            4, 5, 7, double.NaN, double.NaN, 5, 2, 8, double.NaN, 6
        };
        Series = new SeriesCollection
        {
            new LineSeries
            {
                Title = "Dataset 1",
                Values = ProcessChartValues(chartValues)
            },
            new LineSeries
            {
                Title = "Dataset 2",
                Values = new ChartValues<double>
                {
                    2, 3, 4, 5, 6, 3, 1, 4, 5, 3
                }
            }
        };
        DataContext = this;
    }

    private ChartValues<double> ProcessChartValues(ChartValues<double> chartValues)
    {
        var tmpChartValues = new ChartValues<double>();
        double bornLeft = 0, bornRight = 0;
        double xz = 0, zy = 0;
        bool gapFound = false;
        foreach (var point in chartValues)
        {
            if (!double.IsNaN(point))
            {
                if (gapFound)
                {
                    // a gap was found and it needs filling
                    bornRight = point;
                    xz = Math.Abs(bornLeft - bornRight);
                    for (double i = zy; i > 0; i--)
                    {
                        tmpChartValues.Add((xz / zy) * i + Math.Min(bornLeft, bornRight));
                    }
                    tmpChartValues.Add(point);
                    bornLeft = point; // the gap's right edge is the left edge of any next gap
                    gapFound = false;
                    zy = 0;
                }
                else
                {
                    tmpChartValues.Add(point);
                    bornLeft = point;
                }
            }
            else
            {
                zy += 1;
                gapFound = true;
            }
        }
        return tmpChartValues;
    }

    public SeriesCollection Series { get; set; }
    public string[] Labels { get; set; }
}
And here the output:
Now coming back to our interval size: notice how the fragments aren't sharp because of our interval=1 assumption, but on the other hand this assumption gave the graph some smoothness, which is most likely what anyone would be after. If you still need sharp fragments, you could explore the LiveCharts API to get that interval in pixels, which I am not sure it offers (then simply multiply it with the xz and zy sizes); otherwise you could deduce it from the ActualWidth and ActualHeight of the chart's drawn area.
As a final note, the code should be extended to handle the NaN points on the edges (you have to either neglect them or define a direction to which the graph should go).
|
{
"pile_set_name": "StackExchange"
}
|
Q:
AngularJS $location.search() not working
It's the first time I'm trying to use the $location service in AngularJS in order to check for query string arguments. I've been reading the docs and trying to play a bit with it in Plunkr to see how to use it, but so far I've failed to get it to retrieve any parameters from the query string.
I've been testing it using this plunk http://plnkr.co/edit/RIFdWa5ay2gmRa6Zw4gm?p=info
var app = angular.module('myApp', [])
.config(function($locationProvider) {
$locationProvider.html5Mode(true);
});
angular.module('myApp').controller('myCtrl', function($scope, $location){
$scope.name = "Andrei";
$scope.url = $location.host();
$scope.path = $location.path();
$scope._params = $location.search();
});
I've read that setting html5Mode(true) on the $locationProvider is required in order to get the $location service to work as "expected" - which I've done, but when setting this, nothing works anymore in my plunk (you can set it to false and you'll see the bindings are working again properly).
Am I missing something regarding the $location service?
Any help or suggestions are appreciated!
Thanks!
A:
In AngularJS 1.3, $location in HTML5 mode requires a <base> tag to be present so that it knows the path that all of the links are relative to. You can add <base href="/" /> to get it working again.
http://plnkr.co/edit/j9rd1PajNLQVJ8r4c8BZ?p=preview
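For reference, the fix is a one-line addition to the page's <head>; the href="/" value here assumes the app is served from the site root:

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- Required for $location in HTML5 mode (AngularJS 1.3+).
         href="/" assumes the app is served from the site root;
         adjust it if the app lives under a sub-path. -->
    <base href="/" />
  </head>
  <body ng-app="myApp">
    ...
  </body>
</html>
```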
|
{
"pile_set_name": "StackExchange"
}
|
Q:
how to add element of page to another page with greasemonkey
OK, I'm trying to add the black Google bar to the top of the Facebook page. Here is my attempt using jQuery:
// ==UserScript==
// @name GooggleBar Facebook
// @namespace nyongrand
// @require http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.js
// @include http://www.facebook.com/*
// ==/UserScript==
$('<div id="result">a</div>').load('http://www.google.com/index.html #gbg').insertBefore('#blueBarHolder');
but it's not working, so I tried using an iframe:
// ==UserScript==
// @name GooggleBar Facebook
// @namespace nyongrand
// @include http://www.facebook.com/*
// ==/UserScript==
var getRef = document.getElementById("blueBarHolder");
var makeIframe = document.createElement("iframe");
makeIframe.setAttribute("height", "150px");
makeIframe.setAttribute("src", "http://google.com");
var parentDiv = getRef.parentNode;
parentDiv.insertBefore(makeIframe, getRef);
With this, the iframe gets created, but there is no content in it; the iframe is blank although I see
<iframe width="325px" height="150px" src="http://google.com">
in Firebug.
A:
It's blank because the display is forbidden by the X-Frame-Options header.
You can read more about this in Overcoming display forbidden by X-Frame-Options.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
mySQL PHP separate terms in mysql table
I have multiple entries in a MySQL table field, separated by commas:
data1, data2, data3, etc. How do I, using PHP, separate out these terms and display them separately in a table like so:
Data
------------
Data 1
------------
Data 2
------------
Data 3
------------
etc
A:
First of all, you shouldn't have multiple values in a single column. However, regardless of why you have it, there is a function in PHP called explode which splits a string at a specific delimiter. You can have something like this: $myArray = explode(',', $mydata_coming_from_database);
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to make specific items only appear in recto pages in ConTeXt?
I would like for some items in my book to begin only on recto, or right-hand pages, such as, e.g.:
The first page.
The table of contents.
The beginning of a \part.
I believe such a configuration is a default in most LaTeX document classes. How can I ensure that all of these items are placed on recto pages in ConTeXt?
A:
As Aditya already pointed out, using a doublesided layout is necessary. Otherwise there is no concept of a right page. By default the first page, the table of contents (when \completecontent is used) and the parts already start on a right page.
However, the placement is customizable. All structure head commands have a page key that takes either left, right, yes or a page break definition as an argument. For more details see the ConTeXt wiki - Titles. Using a \definepagebreak definition is more powerful, since you can set the page headers as well.
And you can always use the \page command in your document. To separate content and layout I would advise avoiding direct instructions like \page if possible.
Example:
% switch from singlesided to doublesided layout
\setuppagenumbering [alternative=doublesided]
% you have lots of options to influence the appearance of the page
% \definepagebreak [chapterpagebreak] [yes, header, right]
\definepagebreak [chapterpagebreak] [right]
% either use a page break definition
\setuphead [chapter] [page=chapterpagebreak]
% or a direct instruction without page break definition
% \setuphead [chapter] [page=right]
\setuphead [part] [page=right, placehead=yes]
\starttext
\startstandardmakeup [align=middle, style=\bfd]
First page
\stopstandardmakeup
\completecontent
\startpart [title=Part one]
\startchapter [title=First]
\input ward
\stopchapter
\startchapter [title=Second]
\input ward
\stopchapter
\stoppart
\stoptext
|
{
"pile_set_name": "StackExchange"
}
|
Q:
compare and delete on an array in matlab
I am trying to write a short script to read a .m file (testin1.m) into an array and search for a particular word ('auto'). If a match is found, delete it. I have the following code; please help me figure out my mistake.
fid = fopen('testin1.m');
txt = textscan(fid,'%s');
fclose(fid);
m_file_idx = 1;
data=['auto'];
B=cellstr(data);
for idx = i : length(txt)
A=txt{i};
is_auto=isequal(A, B);
if is_auto==0
txt{i}=[];
end
end
If txt{i} equals 'auto', then it should delete that row.
A:
AK4749's answer is absolutely correct in showing where you went wrong. I'll just add an alternative solution to yours, which is shorter:
C = textread('testin1.m', '%s', 'delimiter', '\n');
C = C(cellfun(@isempty, regexp(C, 'auto')));
That's it!
EDIT #1: answer modified to remove the lines that contain the word 'auto', not just the word itself.
EDIT #2: answer modified to accept regular expressions.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
In pair programming, what is each role named, and why?
I've heard the person at the keyboard named the "driver", and the other person named the "navigator". I've imagined rally car racers, where the person at the wheel just cannot keep up with everything that's happening, and they must have the help of the "navigator", who's shouting volumes and volumes of directions at them to, literally, keep them "out of the weeds", much like a person at the keyboard is kept "on course" by the person in the second chair.
What are the more/most commonly accepted terms for the two roles in pair programming, and (if it's not obvious) are they borrowed from any earlier professions?
A:
The driver (or less commonly pilot) has hands on with the keyboard and is right there, banging out the code.
The navigator (or observer, or less commonly co-driver or co-pilot) is sitting alongside with the reference documents making sure the code is going the right way.
The navigator has a better perspective of what's coming up, and isn't just worrying about the mechanics of typing away.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
“List” property breaks retrieval of Send
The documentation on Send states I should be able to add a "List" property to learn from which list a user unsubscribed. However, when I use the List parameter, I receive "Error: The Request Property(s) List do not match with the fields of Send retrieve".
This appears to be an issue with the ExactTarget API. It would be great to have this corrected because currently I can only get all sends of a login without knowing which lists were affected.
Below, I have documented both a failed request with the "List" parameter, and a succeeding request where the only alteration is the removal of the "List" parameter. I have tried "List.ID" and other properties, to no avail.
Thank you for looking into this!
Service Reference URL: https://webservice.exacttarget.com/etframework.wsdl
var soapClient = CreateClient(settings.Username, settings.Password);
var retrieveRequest = new RetrieveRequest {ObjectType = "Send"};
retrieveRequest.Properties = new[] { "SentDate", "List" };
APIObject[] results;
String requestId;
soapClient.Retrieve(retrieveRequest, out requestId, out results);
*** Error: The Request Property(s) List do not match with the fields of Send retrieve
*** Now removing List from properties returns OK
var soapClient = CreateClient(settings.Username, settings.Password);
var retrieveRequest = new RetrieveRequest {ObjectType = "Send"};
retrieveRequest.Properties = new[] { "SentDate" };
APIObject[] results;
String requestId;
soapClient.Retrieve(retrieveRequest, out requestId, out results);
*** Returns OK
Yes this is similar to this question, but the ObjectTypes being requested are different. The answer to the stated question does NOT work for this ObjectType.
A:
Unfortunately, not all properties on an object are retrievable. The best source for the list of retrievable properties is the Describe method:
https://help.exacttarget.com/en/technical_library/web_service_guide/technical_articles/describing_a_subscriber/
For relationships like List to Send that are Many-to-Many, there is typically a junction object that represents that relationship. For this scenario, the object is ListSend.
https://help.exacttarget.com/en/technical_library/web_service_guide/technical_articles/retrieving_all_lists_for_a_send/
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Should I have two threads for input/output or use NIO?
I have been working on a (relatively) simple tcp client/server chat program for my networking class. The problem that I am running into is I am using blocking calls, such as read() and writeBytes(). So, whenever I try to send a message to my server, the server does not print it out until it writes one back. For this situation, would using one thread for input and one thread for output be the most sensible solution, or would using NIO serve me better? Just to give you an idea of what my code looks like now, my server is:
ServerSocket welcomeSocket = new ServerSocket(port);
DataOutputStream output;
BufferedReader inFromUser = new BufferedReader( new InputStreamReader(
System.in));
String sentence;
while ((sentence = inFromUser.readLine()) != null) {
Socket connectionSocket = welcomeSocket.accept();
output = new DataOutputStream( connectionSocket.getOutputStream());
output.writeBytes(sentence + "\n");
BufferedReader inFromServer = new BufferedReader( new InputStreamReader(
connectionSocket.getInputStream()));
System.out.println("Client said: " + inFromServer.readLine());
connectionSocket.close();
}
The client code is essentially the same. Thanks for your time!
A:
Just use two threads unless you want to learn about NIO. The Java tutorial has examples of spawning threads to handle client connections to a ServerSocket. Look toward the bottom of "Writing the Server Side of a Socket".
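To make the threaded approach concrete, here is a minimal, self-contained sketch (not the tutorial's exact code) of handling a connection on its own thread. Everything runs over loopback, and the EchoDemo/echoOnce names are invented for this example:

```java
import java.io.*;
import java.net.*;

public class EchoDemo {

    // Starts a throwaway server, lets one thread handle the connection,
    // sends a single message as the client, and returns the server's reply.
    static String echoOnce(String message) throws Exception {
        ServerSocket server = new ServerSocket(0); // bind to any free port

        // The handler thread does the blocking reads/writes, so the rest of
        // the program is never stuck waiting on this connection.
        Thread handler = new Thread(() -> {
            try (Socket client = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()));
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                String line;
                while ((line = in.readLine()) != null) {
                    out.println("Server said: " + line); // echo back immediately
                }
            } catch (IOException ignored) {
            }
        });
        handler.start();

        String reply;
        try (Socket socket = new Socket("localhost", server.getLocalPort());
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println(message);
            reply = in.readLine();
        }
        handler.join();
        server.close();
        return reply;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(echoOnce("hello"));
    }
}
```

The key point is that the blocking readLine() lives on the handler thread, so a chat server can keep a read loop and a write loop running independently.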
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Matlab - highlight a specific interval using patch
I would like to highlight a specific interval in my plot. I found a way using the patch object.
The problem is that my layout gets messy whenever i use it.
Here's an example of a plot:
x = 1:0.05:25;
plot(x,sin(x)+sin(100*x)/3);
xlabel('x axis');
ylabel('y axis');
title('\Omega \Delta test');
legend('sin \Sigma')
And to highlight a period:
yl = ylim;
patch([5 5 10 10],[yl(1) yl(2) yl(2) yl(1)],'k',...
'facecolor',[.4 .6 .4],'edgecolor',[.4 .6 .4],...
'facealpha',0.3,'edgealpha',0.3)
My results with and without the patch command:
Normal:
Messy:
Look at the fonts and the legend block. Any ideas on how to fix that?
Is there a better way to highlight an interval? I need to choose the color and set transparency.
Just one more question: Why do I have to use the third input (color) if it's not applied?
Thanks in advance!
A:
Edit: This answer is only valid for Matlab versions before 2014b, as the incredibly useful erasemode property has been removed from all HG2 graphic objects in later Matlab versions.
I ran into this problem countless times and I had to learn to live with it. Most times I can accept the glitches of the OpenGL renderer if it buys me nice transparency effects, but in some cases it is just not acceptable.
I use patch objects to highlight intervals in many applications, usually over several curves. There is a trick you can use when transparency is not an option, it is the EraseMode property of the patch object. If you set the EraseMode property to 'xor' the patch will not be transparent, yet anything under the patch will xor the patch pixel colours so you can still see the curves under the patch.
This not being a transparency rendering, you can use the default painter renderer and avoid all the occasional glitches of the OpenGL.
So for example with your data:
hp = patch([5 5 10 10],[yl(1) yl(2) yl(2) yl(1)],'k',...
'facecolor','g','edgecolor','g',...
'erasemode','xor') ;
And the nice advantage of this trick is it works with monochrome display/prints. If you cannot use multiple colours, you can use that with only one colour (if you plan black & white printing for publication for example)
hpx = patch([5 5 10 10],[yl(1) yl(2) yl(2) yl(1)],'b',...
'facecolor','b','edgecolor','b',...
'erasemode','xor') ;
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Wildfly: Jboss error on start-up
I am having trouble starting my jboss with
./standalone.sh
in the directory
/ali/wildfly-9.0.1.Final/bin.
It throws the following error:
15:01:37,824 ERROR [org.jboss.as.controller.management-operation]
(Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address:
([("deployment" => "HelloServlet.war")]) - failure description:
"WFLYCTL0212: Duplicate resource [(\"deployment\" => \"HelloServlet.war
\")]"
15:01:37,830 FATAL [org.jboss.as.server] (Controller Boot Thread)
WFLYSRV0056: Server boot has failed in an unrecoverable manner; exiting.
A:
If you are using a deploy tool, check the standalone.xml conf file or the HA conf file; at the end, you should delete the "deployments" tag:
<deployments>
<deployment name="xxx.war" runtime-name="xxx.war">
<content sha1="fb73e5d66e5184b1d4791a5cb5d61970f73c14b0"/>
</deployment>
</deployments>
A:
You have an error in your code.
In my case, this is what I did and it worked:
Recheck your project for errors and correct them, then clean, build and deploy to WildFly using the command: mvn wildfly:deploy
Then run your project on your server (localhost:8080/HelloServlet)
A:
You will need to delete all the files in /ali/wildfly-9.0.1.Final/standalone/deployments, then run the compile command again and deploy.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Swift Solid Metronome System
I am trying to build a reliable solid system to build a metronome in my app using SWIFT.
I have built what seems to be a solid system using NSTimer so far. The only issue I am having right now is that when the timer starts, the first 2 clicks are off time, but then it settles into a solid timeframe.
Now, after all my research, I have seen people mention that you should use other audio tools rather than relying on NSTimer, or that if you do choose NSTimer, it should run on its own thread. I see many confused by this, including myself, and I would love to get to the bottom of this metronome business, get it solved, and share it with all those who are struggling.
UPDATE
So I have implemented and cleaned up at this point after the feedback I last received. At this point, here is how my code is structured. It's playing back, but I am still getting 2 fast clicks in the beginning before it settles in.
I apologize for my ignorance on this one. I hope I am on the right path.
I am currently prototyping another method as well, where I have a very small audio file with one click and dead space at the end of it, padded to the correct duration for the loop point at a specific tempo. I loop this back and it works very well. The only thing is I don't get to detect the loop points for visual updates, so I have my basic NSTimer just detecting the timing intervals underneath the audio being processed, and it seems to match up very well throughout with no delay. But I would still rather get it all working with this NSTimer. If you can easily spot my error, one more kick in the right direction would be great, and I am sure it can work soon! Thanks so much.
//VARIABLES
//AUDIO
var clickPlayer:AVAudioPlayer = AVAudioPlayer()
let soundFileClick = NSBundle.mainBundle().pathForResource("metronomeClick", ofType: ".mp3")
//TIMERS
var metroTimer = NSTimer()
var nextTimer = NSTimer()
var previousClick = CFAbsoluteTimeGetCurrent() //When Metro Starts Last Click
//Metro Features
var isOn = false
var bpm = 60.0 //Tempo Used for beeps, calculated into time value
var barNoteValue = 4 //How Many Notes Per Bar (Set To Amount Of Hits Per Pattern)
var noteInBar = 0 //What Note You Are On In Bar
//********* FUNCTIONS ***********
func startMetro()
{
MetronomeCount()
barNoteValue = 4 // How Many Notes Per Bar (Set To Amount Of Hits Per Pattern)
noteInBar = 0 // What Note You Are On In Bar
isOn = true //
}
//Main Metro Pulse Timer
func MetronomeCount()
{
previousClick = CFAbsoluteTimeGetCurrent()
metroTimer = NSTimer.scheduledTimerWithTimeInterval(60.0 / bpm, target: self, selector: Selector ("MetroClick"), userInfo: nil, repeats: true)
nextTimer = NSTimer(timeInterval: (60.0/Double(bpm)) * 0.01, target: self, selector: "tick:", userInfo: ["bpm":bpm], repeats: true)
}
func MetroClick()
{
tick(nextTimer)
}
func tick(timer:NSTimer)
{
let elapsedTime:CFAbsoluteTime = CFAbsoluteTimeGetCurrent() - previousClick
let targetTime:Double = 60/timer.userInfo!.objectForKey("bpm")!.doubleValue!
if (elapsedTime > targetTime) || (abs(elapsedTime - targetTime) < 0.003)
{
previousClick = CFAbsoluteTimeGetCurrent()
//Play the click here
if noteInBar == barNoteValue
{
clickPlayer.play() //Play Sound
noteInBar = 1
}
else//If We Are Still On Same Bar
{
clickPlayer.play() //Play Sound
noteInBar++ //Increase Note Value
}
countLabel.text = String(noteInBar) //Update UI Display To Show Note We Are At
}
}
A:
A metronome built purely with NSTimer will not be very accurate, as Apple explains in their documentation.
Because of the various input sources a typical run loop manages, the effective resolution of the time interval for a timer is limited to on the order of 50-100 milliseconds. If a timer’s firing time occurs during a long callout or while the run loop is in a mode that is not monitoring the timer, the timer does not fire until the next time the run loop checks the timer.
I would suggest using an NSTimer that fires on the order of 50 times per desired tick (for example, if you would like 60 ticks per minute, you would set the NSTimeInterval to about 1/50 of a second).
You should then store a CFAbsoluteTime which stores the "last tick" time, and compare it to the current time. If the absolute value of the difference between the current time and the "last tick" time is less than some tolerance (I would make this about 4 times the number of ticks per interval, for example, if you chose 1/50 of a second per NSTimer fire, you should apply a tolerance of around 4/50 of a second), you can play the "tick."
You may need to calibrate the tolerances to get to your desired accuracy, but this general concept will make your metronome a lot more accurate.
Here is some more information on another SO post. It also includes some code that uses the theory I discussed. I hope this helps!
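The comparison logic itself is language-agnostic. As an illustration only, here is a deterministic sketch of the same check in Python; the function name and the 3 ms tolerance are assumptions for the example, not part of any framework:

```python
def make_tick_checker(bpm, start, tolerance=0.003):
    """Return a function that, given a timestamp, reports whether a
    metronome tick is due. Re-anchoring on each tick keeps small timing
    errors from accumulating over time."""
    target = 60.0 / bpm  # seconds per beat
    last_tick = start

    def check(now):
        nonlocal last_tick
        elapsed = now - last_tick
        # Tick if we are past the target, or within the tolerance of it.
        if elapsed > target or abs(elapsed - target) < tolerance:
            last_tick = now
            return True
        return False

    return check

# Deterministic walk-through at 60 BPM (one beat per second):
check = make_tick_checker(bpm=60, start=0.0)
print(check(0.5))    # False: far too early
print(check(0.999))  # True: within the 3 ms tolerance, re-anchors at 0.999
print(check(2.01))   # True: 1.011 s elapsed, past the 1 s target
```

In a real app, the timer would call check with the current timestamp on every fire, and only play the click when it returns True.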
Update
The way you are calculating your tolerances is incorrect. In your calculations, notice that the tolerance is inversely proportional to the square of the bpm. The problem with this is that the tolerance will eventually be less than the number of times the timer fires per second. Take a look at this graph to see what I mean. This will generate problems at high BPMs. The other potential source of error is your top bounding condition. You really don't need to check an upper limit on your tolerance, because theoretically the timer should have already fired by then. Therefore, if the elapsed time is greater than the theoretical time, you can fire it regardless. (For example, if the elapsed time is 0.1s and the actual time with the true BPM should be 0.05s, you should go ahead and fire the timer anyway, no matter what your tolerance is.)
Here is my timer "tick" function, which seems to work fine. You need to tweak it to fit your needs (with the downbeats, etc.) but it works in concept.
func tick(timer:NSTimer) {
let elapsedTime:CFAbsoluteTime = CFAbsoluteTimeGetCurrent() - lastTick
let targetTime:Double = 60/timer.userInfo!.objectForKey("bpm")!.doubleValue!
if (elapsedTime > targetTime) || (abs(elapsedTime - targetTime) < 0.003) {
lastTick = CFAbsoluteTimeGetCurrent()
        // Play the click here
}
}
My timer is initialized like so: nextTimer = NSTimer(timeInterval: (60.0/Double(bpm)) * 0.01, target: self, selector: "tick:", userInfo: ["bpm":bpm], repeats: true)
A:
Ok! You can't get things right basing purely on time, because somehow we need to deal with the DA converters and their frequency, the sample rate. We need to tell them the exact sample at which to start playing the sound. Create a single-view iOS app with two buttons, start and stop, and insert this code into ViewController.swift. I keep things simple and it's just an idea of how we can do this. Sorry for forcing try... This one is made with Swift 3. Also check out my project on GitHub https://github.com/AlexShubin/MetronomeIdea
Swift 3
import UIKit
import AVFoundation
class Metronome {
var audioPlayerNode:AVAudioPlayerNode
var audioFile:AVAudioFile
var audioEngine:AVAudioEngine
init (fileURL: URL) {
audioFile = try! AVAudioFile(forReading: fileURL)
audioPlayerNode = AVAudioPlayerNode()
audioEngine = AVAudioEngine()
audioEngine.attach(self.audioPlayerNode)
audioEngine.connect(audioPlayerNode, to: audioEngine.mainMixerNode, format: audioFile.processingFormat)
try! audioEngine.start()
}
func generateBuffer(forBpm bpm: Int) -> AVAudioPCMBuffer {
audioFile.framePosition = 0
let periodLength = AVAudioFrameCount(audioFile.processingFormat.sampleRate * 60 / Double(bpm))
let buffer = AVAudioPCMBuffer(pcmFormat: audioFile.processingFormat, frameCapacity: periodLength)
try! audioFile.read(into: buffer)
buffer.frameLength = periodLength
return buffer
}
func play(bpm: Int) {
let buffer = generateBuffer(forBpm: bpm)
self.audioPlayerNode.play()
self.audioPlayerNode.scheduleBuffer(buffer, at: nil, options: .loops, completionHandler: nil)
}
func stop() {
audioPlayerNode.stop()
}
}
class ViewController: UIViewController {
var metronome:Metronome
required init?(coder aDecoder: NSCoder) {
let fileUrl = Bundle.main.url(forResource: "Click", withExtension: "wav")
metronome = Metronome(fileURL: fileUrl!)
super.init(coder: aDecoder)
}
@IBAction func StartPlayback(_ sender: Any) {
metronome.play(bpm: 120)
}
@IBAction func StopPlayback(_ sender: Any) {
metronome.stop()
}
}
A:
Thanks to the great work already done on this question by vigneshv & CakeGamesStudios, I was able to put together the following, which is an expanded version of the metronome timer discussed here.
Some highlights:
It's updated for Swift v5
It uses a Grand Central Dispatch timer to run on a separate queue, rather than just a regular NSTimer (see here for more details)
It uses more calculated properties for clarity
It uses delegation, to allow for any arbitrary 'tick' action to be handled by the delegate class (be that playing a sound from AVFoundation, updating the display, or whatever else - just remember to set the delegate property after creating the timer). This delegate would also be the one to distinguish beat 1 vs. others, but that'd be easy enough to add within this class itself if desired.
It has a % to Next Tick property, which could be used to update a UI progress bar, etc.
Any feedback on how this can be improved further is welcome!
protocol BPMTimerDelegate: class {
func bpmTimerTicked()
}
class BPMTimer {
// MARK: - Properties
weak var delegate: BPMTimerDelegate? // The class's delegate, to handle the results of ticks
var bpm: Double { // The speed of the metronome ticks in BPM (Beats Per Minute)
didSet {
changeBPM() // Respond to any changes in BPM, so that the timer intervals change accordingly
}
}
var tickDuration: Double { // The amount of time that will elapse between ticks
return 60/bpm
}
var timeToNextTick: Double { // The amount of time until the next tick takes place
if paused {
return tickDuration
} else {
return abs(elapsedTime - tickDuration)
}
}
var percentageToNextTick: Double { // Percentage progress from the previous tick to the next
if paused {
return 0
} else {
return min(100, (timeToNextTick / tickDuration) * 100) // Return a percentage, and never more than 100%
}
}
// MARK: - Private Properties
private var timer: DispatchSourceTimer!
private lazy var timerQueue = DispatchQueue.global(qos: .utility) // The Grand Central Dispatch queue to be used for running the timer. Leverages a global queue with the Quality of Service 'Utility', which is for long-running tasks, typically with user-visible progress. See here for more info: https://www.raywenderlich.com/5370-grand-central-dispatch-tutorial-for-swift-4-part-1-2
private var paused: Bool
private var lastTickTimestamp: CFAbsoluteTime
private var tickCheckInterval: Double {
return tickDuration / 50 // Run checks many times within each tick duration, to ensure accuracy
}
private var timerTolerance: DispatchTimeInterval {
return DispatchTimeInterval.milliseconds(Int(tickCheckInterval / 10 * 1000)) // For a repeating timer, Apple recommends a tolerance of at least 10% of the interval. It must be multiplied by 1,000, so it can be expressed in milliseconds, as required by DispatchTimeInterval.
}
private var elapsedTime: Double {
return CFAbsoluteTimeGetCurrent() - lastTickTimestamp // Determine how long has passed since the last tick
}
// MARK: - Initialization
init(bpm: Double) {
self.bpm = bpm
self.paused = true
self.lastTickTimestamp = CFAbsoluteTimeGetCurrent()
self.timer = createNewTimer()
}
// MARK: - Methods
func start() {
if paused {
paused = false
lastTickTimestamp = CFAbsoluteTimeGetCurrent()
timer.resume() // A crash will occur if calling resume on an already resumed timer. The paused property is used to guard against this. See here for more info: https://medium.com/over-engineering/a-background-repeating-timer-in-swift-412cecfd2ef9
} else {
// Already running, so do nothing
}
}
func stop() {
if !paused {
paused = true
timer.suspend()
} else {
// Already paused, so do nothing
}
}
// MARK: - Private Methods
// Implements timer functionality using the DispatchSourceTimer in Grand Central Dispatch. See here for more info: http://danielemargutti.com/2018/02/22/the-secret-world-of-nstimer/
private func createNewTimer() -> DispatchSourceTimer {
let timer = DispatchSource.makeTimerSource(queue: timerQueue) // Create the timer on the correct queue
let deadline: DispatchTime = DispatchTime.now() + tickCheckInterval // Establish the next time to trigger
timer.schedule(deadline: deadline, repeating: tickCheckInterval, leeway: timerTolerance) // Set it on a repeating schedule, with the established tolerance
timer.setEventHandler { [weak self] in // Set the code to be executed when the timer fires, using a weak reference to 'self' to avoid retain cycles (memory leaks). See here for more info: https://learnappmaking.com/escaping-closures-swift/
self?.tickCheck()
}
timer.activate() // Dispatch Sources are returned initially in the inactive state, to begin processing, use the activate() method
// Determine whether to pause the timer
if paused {
timer.suspend()
}
return timer
}
private func cancelTimer() {
timer.setEventHandler(handler: nil)
timer.cancel()
if paused {
timer.resume() // If the timer is suspended, calling cancel without resuming triggers a crash. See here for more info: https://forums.developer.apple.com/thread/15902
}
}
private func replaceTimer() {
cancelTimer()
timer = createNewTimer()
}
private func changeBPM() {
replaceTimer() // Create a new timer, which will be configured for the new BPM
}
@objc private func tickCheck() {
if (elapsedTime > tickDuration) || (timeToNextTick < 0.003) { // If past or extremely close to correct duration, tick
tick()
}
}
private func tick() {
lastTickTimestamp = CFAbsoluteTimeGetCurrent()
DispatchQueue.main.sync { // Calls the delegate from the application's main thread, because it keeps the separate threading within this class, and otherwise, it can cause errors (e.g. 'Main Thread Checker: UI API called on a background thread', if the delegate tries to update the UI). See here for more info: https://stackoverflow.com/questions/45081731/uiapplication-delegate-must-be-called-from-main-thread-only
delegate?.bpmTimerTicked() // Have the delegate respond accordingly
}
}
// MARK: - Deinitialization
deinit {
cancelTimer() // Ensure that the timer's cancelled if this object is deallocated
}
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Angular conditional promises
For an Angular project, I have to nest promises, and I run into cases where I am not sure what I am doing.
Here is one piece of my code:
return Action1().then(function (data) {
var defer = $q.defer();
if (data.condition) {
$q.all([Action2(), Action3(), Action4()]).then(function () {
defer.resolve();
});
} else {
defer.reject("error_code");
}
return defer.promise;
});
Action1, Action2, Action3 and Action4 are working promise-returning functions. There are a lot of promises, and the actions depend on conditions.
Can I do that and be sure my main function will always be resolved or rejected?
I read that we can pass a promise into the resolve function.
Can I do that, and is this the same as above:
return Action1().then(function (data) {
var defer = $q.defer();
if (data.condition) {
        defer.resolve($q.all([Action2(), Action3(), Action4()]));
} else {
defer.reject("error_code");
}
return defer.promise;
});
A:
No, it is not. Your first function would stay forever pending if one of Action2(), Action3() or Action4() "threw", rejecting the $q.all(…) promise; your deferred would then never be resolved. This is the most common bug of the deferred antipattern you've used here.
Your second function does mitigate this, but is still unnecessarily complicated. You don't need a deferred here at all! Just return the promise directly, and use $q.reject:
return Action1().then(function (data) {
if (data.condition) {
return $q.all([Action2(), Action3(), Action4()]);
} else {
return $q.reject("error_code");
}
});
Or, as this happens inside a then handler, you can also use throw "error_code".
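For comparison, the same shape works with native JavaScript Promises. The ActionN stubs below are placeholders invented for this example, not the question's real services:

```javascript
// Placeholder stand-ins for the question's promise-returning functions.
const Action1 = () => Promise.resolve({ condition: true });
const Action2 = () => Promise.resolve(2);
const Action3 = () => Promise.resolve(3);
const Action4 = () => Promise.resolve(4);

function run() {
  return Action1().then((data) => {
    if (data.condition) {
      // Returning a promise from then() chains it; no deferred needed.
      return Promise.all([Action2(), Action3(), Action4()]);
    }
    // Rejecting propagates straight to the caller's catch handler.
    return Promise.reject(new Error("error_code"));
  });
}

run()
  .then((values) => console.log("resolved with", values))
  .catch((err) => console.log("rejected with", err.message));
```

Either way, the caller always ends up with exactly one settled promise: resolved with the three results, or rejected with the error code.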
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Implement single signon (SSO) using SAML in java and AzureAD as IDP
I want to implement single sign-on on my web application using SAML, where Azure AD will be the identity provider. I need to know what entityId and appId are, and how to create metadata.xml.
A:
The details you seek are available in this site https://docs.microsoft.com/en-us/azure/active-directory/develop/azure-ad-federation-metadata
All the details present in the FederationMetadata.xml file depend on what is configured in Azure AD.
As a web application developer, you don't need to create this XML file. It should be provided by the person who is managing Azure AD. If you are also the person managing Azure AD, then you can download it from there.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Ruby regex: ^ matches start of line even without m modifier?
Ruby 1.8.7. I'm using a regex with a ^ to match a pattern at the start of the string. The problem is that if the pattern is found at the start of any line in the string it still matches. This is the behaviour I would expect if I were using the 'm' modifier but I'm not:
$ irb
irb(main):001:0> str = "hello\ngoodbye"
=> "hello\ngoodbye"
irb(main):002:0> puts str
hello
goodbye
=> nil
irb(main):004:0> str =~ /^goodbye/
=> 6
What am I doing wrong here?
A:
start of the line: ^
end of the line: $
start of the string: \A
end of the string: \z
A:
Use \A instead of ^.
Ruby regex reference: http://www.zenspider.com/ruby/quickref.html#regexen
A:
Your confusion is justified. In most regex flavors, ^ is equivalent to \A and $ is equivalent to \Z by default, and you have to set the "multiline" flag to make them take on their other meanings (i.e. line boundaries). In Ruby, ^ and $ always match at line boundaries.
To add to the confusion, Ruby has something it calls "multiline" mode, but it's really what everybody else calls "single-line" or "DOTALL" mode: it changes the meaning of the . metacharacter, allowing it to match line-separator characters (e.g. \r, \n) as well as all other characters.
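A quick sketch illustrating the difference, reusing the string from the question:

```ruby
str = "hello\ngoodbye"

# ^ anchors at the start of any line, so this matches at index 6
p str =~ /^goodbye/     # => 6

# \A anchors only at the start of the whole string, so no match here
p str =~ /\Agoodbye/    # => nil

# \A does match a pattern that really starts the string
p str =~ /\Ahello/      # => 0
```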
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Outgoing client TCP ports get blocked when there are too many connections
I have a distribution system for doing file operations and running shell commands on target client machines on Windows, and I use a custom TCP endpoint for connecting to the Windows service which is resident on the server.
Now I've created a tool to create numerous instances of that agent (client) on one machine and run a certain job set against all of them from the server. The problem is that all outgoing TCP ports on the client machine get blocked after launching more than a few hundred agents. Each agent is using a dynamic port and is listening to a single server port.
Say I have 2000 agents running on ports 2000-3999 and all are listening to port 5111 on the server.
The error message i'm receiving in windows event log goes like this:
TCP/IP failed to establish an outgoing connection because the selected
local endpoint was recently used to connect to the same remote
endpoint. This error typically occurs when outgoing connections are
opened and closed at a high rate, causing all available local ports to
be used and forcing TCP/IP to reuse a local port for an outgoing
connection. To minimize the risk of data corruption, the TCP/IP
standard requires a minimum time period to elapse between successive
connections from a given local endpoint to a given remote endpoint.
When this occurs this machine cannot use any TCP port anymore. I did try changing some of the TCP default behavior in registry but to no avail. The interval between opening connections is between 1 to 5 seconds.
Any workaround for managing the optimal delay and/or somehow make windows trust the application irrespective of the aggressive network activity required for the test?
A:
It turns out that if you open connections without a proper gap in between, all ports on the client will get blocked due to the aggressive behavior. Finally I got all my agents to connect by increasing the delay between each connection from 1000 ms to 3000 ms. I have yet to figure out the dynamics of this though, and probably opening agents in parallel threads rather than processes would be a better solution after all. A high number of processes with the same name seemingly doesn't appeal to the OS.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Invalid conversion char to char* - Copying char in string array to another string array
I'm a beginner in the C++ programming language. I wanted to write a program that takes the alphabetic characters in a string array called str and copies them into a new array called str_alpha.
The same goes for numbers: the program copies them from the str array to the str_digit array.
Here's my humble code. It might be full of errors and stuff, but this is what I could do now with my very little experience.
#include <iostream>
#include <cstdio>
#include <cstring>
#include <cctype>
using namespace std;
char str[100], str_alpha[100], str_digit[100];
int main()
{
    gets(str);
    for (int i=0 ; str[i] ; i++)
    {
        if (isalpha(str[i]))
        {
            strcpy (str_alpha[i] , str[i]);
        }
        else if (isdigit(str[i]))
        {
            strcpy (str_digit[i] , str[i]);
        }
    }
    cout << "Alpha is " << str_alpha << endl ;
    cout << "Number is : " << str_digit << endl ;
    return 0;
}
And it gives my those errors :
F:\C++Progs\string\main.cpp||In function `int main()':|
F:\C++Progs\string\main.cpp|18|error: invalid conversion from `char' to `char*'|
F:\C++Progs\string\main.cpp|18|error: initializing argument 1 of `char* strcpy(char*, const char*)'|
F:\C++Progs\string\main.cpp|18|error: invalid conversion from `char' to `const char*'|
F:\C++Progs\string\main.cpp|18|error: initializing argument 2 of `char* strcpy(char*, const char*)'|
F:\C++Progs\string\main.cpp|22|error: invalid conversion from `char' to `char*'|
F:\C++Progs\string\main.cpp|22|error: initializing argument 1 of `char* strcpy(char*, const char*)'|
F:\C++Progs\string\main.cpp|22|error: invalid conversion from `char' to `const char*'|
F:\C++Progs\string\main.cpp|22|error: initializing argument 2 of `char* strcpy(char*, const char*)'|
||=== Build finished: 8 errors, 0 warnings ===|
Help me please.
Thanks in advance.
A:
I'm a beginner in C++ Programming language.
The do not use char* and the likes; use C++ predefined types for this, in your case string:
string str;
cin >> str;
string alpha;
string digit;
for (unsigned int i = 0; i < str.length(); ++i) {
    char chr = str[i];
    if (isalpha(chr))
        alpha.push_back(chr);
    else if (isdigit(chr))
        digit.push_back(chr);
}
Furthermore, strcpy is, as the name says, used to copy strings, not individual chars. You can copy those directly by assigning them. No function call needed.
A:
First of all, strcpy copies C strings (character arrays) not chars. Additionally, the lines strcpy(str_digit[i],str[i]) and strcpy(str_alpha[i], str[i]) would still probably be wrong even if this wasn't the case. Since you haven't initialised the arrays str_digit and str_alpha, you'll get a lot of garbage values while printing them and if any of those garbage values happen to be 0x00, the cout statements will fail to print the whole string. As already mentioned, you really should be using std::string rather than char[] or char*. Having said that, here are corrected versions of your code for both char[] and std::string.
Using gets is bad practice and you might consider using std::cin instead. And you might want to use an iterator rather than a simple for loop.
//using char[]
#include <iostream>
using namespace std;
int main()
{
    char str[100] , str_alpha[100] , str_digit[100] ;
    int alpha_counter=0, digit_counter=0;
    cin.get(str, 99);
    for (int i=0 ; str[i] ; i++)
    {
        if(isalpha(str[i]))
        {
            str_alpha[alpha_counter] = str[i];
            alpha_counter++;
        }
        else if (isdigit(str[i]))
        {
            str_digit[digit_counter] = str[i];
            digit_counter++;
        }
    }
    str_alpha[alpha_counter] = 0;
    str_digit[digit_counter] = 0;
    cout << "Alpha is " << str_alpha << endl ;
    cout << "Number is : " << str_digit << endl ;
    return 0;
}
And the version using std::string:
//using std::string
#include <iostream>
using namespace std;
int main()
{
    string str, str_alpha , str_digit;
    cin >> str ;
    for (string::iterator it = str.begin(); it < str.end(); it++)
    {
        if(isalpha(*it))
        {
            str_alpha += *it;
        }
        else if (isdigit(*it))
        {
            str_digit += *it;
        }
    }
    cout << "Alpha is " << str_alpha << endl ;
    cout << "Number is : " << str_digit << endl ;
    return 0;
}
Both versions compile without warnings on g++ 4.2.1 with -Wall and -pedantic.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Transform JavaScript Array, create tabular view
I've searched through similar questions and tried to solve the issue on my own, but my programming skills are weak.
I have an array like:
[{"city": "Amsterdam", "year": "2013", "amount": "450"},
{"city": "Rotterdam", "year": "2013", "amount": "620"},
{"city": "Geneva", "year": "2014", "amount": "530"},
{"city": "Rotterdam", "year": "2015", "amount": "350"}]
and I want to transform it using "city" and "year" to get "amounts", e.g. Rotterdam: [620, "N/A", 350]. "N/A" because the value for year 2014 is missing.
I was checking the map function etc. but my skills are too weak.
In the end I want to create a tabular view with years (horizontal) and cities (vertical).
Please advice. Thanks
A:
I don't think this is about programming skills but algorithm skills. Here is code doing what you want:
var amounts = [{"city": "Amsterdam", "year": "2013", "amount": "450"},{"city": "Rotterdam", "year": "2013", "amount": "620"},{"city": "Geneva", "year": "2014", "amount": "530"},{"city": "Rotterdam", "year": "2015", "amount": "350"}],
    formattedAmounts = {}
;
for (var i = 0; i < amounts.length; i++) {
    var amount = amounts[i];
    if (undefined === formattedAmounts[amount.city]) {
        formattedAmounts[amount.city] = {};
    }
    formattedAmounts[amount.city][amount.year] = amount.amount;
}
console.log(JSON.stringify(formattedAmounts));
alert(JSON.stringify(formattedAmounts));
function getCityAmount(city) {
    var years = [2013, 2014, 2015],
        cityAmounts = []
    ;
    for (var i = 0; i < years.length; i++) {
        cityAmounts.push(
            formattedAmounts[city] && formattedAmounts[city][years[i]]
                ? formattedAmounts[city][years[i]]
                : 'N/A'
        );
    }
    return cityAmounts;
}
console.log(JSON.stringify(getCityAmount('Amsterdam')));
alert(JSON.stringify(getCityAmount('Amsterdam')));
function getCitiesAmounts() {
    var citiesAmounts = [];
    for (var i = 0; i < amounts.length; i++) {
        citiesAmounts.push(getCityAmount(amounts[i].city));
    }
    return citiesAmounts;
}
console.log(JSON.stringify(getCitiesAmounts()));
alert(JSON.stringify(getCitiesAmounts()));
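For comparison, here's a more compact sketch of the same transformation (names like byCity and row are my own, and the years list is hardcoded for illustration):

```javascript
var amounts = [
  { city: "Amsterdam", year: "2013", amount: "450" },
  { city: "Rotterdam", year: "2013", amount: "620" },
  { city: "Geneva",    year: "2014", amount: "530" },
  { city: "Rotterdam", year: "2015", amount: "350" }
];
var years = ["2013", "2014", "2015"];

// Group amounts into { city: { year: amount } }
var byCity = amounts.reduce(function (acc, entry) {
  (acc[entry.city] = acc[entry.city] || {})[entry.year] = entry.amount;
  return acc;
}, {});

// One table row per city, with "N/A" for missing years
function row(city) {
  return years.map(function (y) {
    return byCity[city] && byCity[city][y] ? byCity[city][y] : "N/A";
  });
}

console.log(row("Rotterdam")); // [ '620', 'N/A', '350' ]
```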
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Python 2.7 server.socket timeout question
I have some Python code that sometimes blocks when a connection is opened but no data is sent. I understand why: it is waiting for 64 bytes or less of data, and it will wait forever. Is there a simple way to time out the connection if no data is received? Any help or suggestions would be greatly appreciated.
serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
serversocket.bind(('', port))
serversocket.listen(5)  # become a server socket, maximum 5 connections
while 1:
    try:
        while 1:
            # print "Waiting for connection..."
            connection, address = serversocket.accept()
            buf = connection.recv(64)
A:
You can use socket.settimeout(value) where value is number of seconds.
serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
serversocket.bind(('', port))
serversocket.listen(5)  # become a server socket, maximum 5 connections
while 1:
    try:
        while 1:
            # print "Waiting for connection..."
            connection, address = serversocket.accept()
            connection.settimeout(5)  # Set a 5 second timeout to receive 64 bytes of data
            buf = connection.recv(64)
    except socket.timeout:
        print("Timeout happened -- going back to accept another connection")
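A self-contained sketch of this behaviour (the stand-in server below is my own construction: it accepts a connection but never sends anything, so the client's recv() trips the timeout):

```python
import socket
import threading

# Stand-in server: accepts one connection but never sends anything
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0 = let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def silent_accept():
    conn, _ = server.accept()
    threading.Event().wait(2)   # hold the connection open, send nothing
    conn.close()

threading.Thread(target=silent_accept, daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.settimeout(0.5)          # recv() gives up after half a second

timed_out = False
try:
    client.recv(64)
except socket.timeout:
    timed_out = True

print(timed_out)  # True
```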
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Is this undefined behavior and how would you tell?
Motivated by answers to this post. Why is this NaN
I have the following code:
int main()
{
    const int weeks = 10;
    const int salespersons = 9;
    const int days = 30;
    double weekly_sales[weeks][salespersons][days];
    double total_weekly_sales[weeks];
    double total_overall_weekly_sales[salespersons];
    int a;
    cout << "a = " << a << endl;
    cout << total_weekly_sales[0] << endl;
    for (int w = 0; w < weeks; w++)
    {
        for (int d = 0; d < days; d++)
        {
            for (int s = 0; s < salespersons; s++)
            {
                total_weekly_sales[w] += weekly_sales[w][s][d];
                total_overall_weekly_sales[s] += weekly_sales[w][s][d];
            }
        }
    }
    cout << total_weekly_sales[0] << endl;
}
It will output the following:
a = 0
0
0
Under gcc 4.5.3, with the compile option -Wall.
I also compiled the code under here: http://liveworkspace.org/code/94SOj$2. Same output as above.
I also compiled the code under VS2010.
VS2010 gave the following warnings:
warning C4700: uninitialized local variable 'a' used
warning C4700: uninitialized local variable 'total_weekly_sales' used
When I ran it:
Run-Time Check Failure #3 - The variable 'a' is being used without being initialized.
I know that it is bad practice to NOT initialize local variables before using them. I also understand it will be problematic.
My question is:
In the C++ standard, is there any place saying that using an uninitialized local variable will result in undefined behavior? Why does it behave differently under different compilers? Does this mean the standard actually does not enforce that all compilers implement proper actions regarding the use of uninitialized local variables? How would you tell it is undefined behavior from compiler output then?
Thanks.
A:
Yes, the standard explicitly says that lvalue-to-rvalue conversion on an uninitialized object will result in undefined behaviour:
A glvalue (3.10) of a non-function, non-array type T can be converted to a prvalue. If T is an incomplete type, a program that necessitates this conversion is ill-formed. If the object to which the glvalue refers is not an object of type T and is not an object of a type derived from T, or if the object is uninitialized, a program that necessitates this conversion has undefined behavior.
Anything that requires using the value of object will invoke lvalue-to-rvalue conversion.
Undefined behavior is defined as:
behavior for which this International Standard imposes no requirements
So yes, a program with undefined behaviour can do anything, even if it appears to work correctly. So you cannot always identify undefined behaviour from a program's output. The real solution is to write correct, well-defined code. To do this, I highly recommend having a copy of the C++ standard by your side. Writing code and making assumptions about what it does is a very bad things, so if you ever write any C++ that you're not sure about, be sure to check it up.
Why does undefined behaviour exist in the standard? Firstly, it means you really only get what you ask for. If an uninitialized variable were instead defined to automatically get the value 0 (for example), every variable you declare that you don't initialize will have some extra operation to set the value to 0 that probably isn't needed. The standard simply says that using the value of an uninitialized variable is undefined, allowing it to leave the garbage value that already existed in that memory location. No extra cost.
Secondly, it allows the compiler to make optimizations based on the assumption that any C++ programmer will write sane, well-defined code.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Moss Covered Chests, do they respawn?
I know I am late to the "Pandarian Gate." However, I finally made it to the Timeless Isle and discovered moss-covered chests. These things are awesome for a casual gamer like me. Since I started looting them though, I can't find a definitive "they do" or "they do not" respawn. Can anyone shed light on this? I've seen all answers from "yes" and "no" to "here are a few that do respawn."
A:
They are single-use chests and are limited to level 90 characters. Also on the island you can find Sturdy Chest, Skull-Covered Chest, Smouldering Chest, and Blazing Chest. These are all single-use chests.
There are also chests that reset weekly. Near the bottom of this Wowpedia page, you can find a list of these chests.
That being said, the best source I can find for this is this comment on WoWHead. I would consider it a reliable source, as Perculia is the Site Director for WoWHead.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Read json payload from gitlab webhook in Jenkins
I followed this tutorial to setup a Jenkins job to run whenever a push is made to the gitlab repository. I tested the webhook and I can see that the job is triggered. However, I don't see anything in the payload.
Just wondering, if anyone has ever tried to read the payload received from gitlab webhook?
A:
Jenkins Gitlab Plugin sends these POST parameters to Jenkins whenever any event occurs in the Gitlab repo.
You can run env in the Jenkins console to see which GitLab parameters are exported to the Jenkins environment. Then you can print or use the required variables.
e.g
echo $gitlabSourceRepoURL
echo $gitlabAfter
echo $gitlabTargetBranch
echo $gitlabSourceRepoHttpUrl
echo $gitlabMergeRequestLastCommit
echo $gitlabSourceRepoSshUrl
echo $gitlabSourceRepoHomepage
echo $gitlabBranch
echo $gitlabSourceBranch
echo $gitlabUserEmail
echo $gitlabBefore
echo $gitlabSourceRepoName
echo $gitlabSourceNamespace
echo $gitlabUserName
A:
The tutorial you have mentioned talks about GitHub webhooks. GitLab and GitHub are two separate products. So, the documentation or links for GitHub webhooks will not apply to GitLab webhooks.
GitLab invokes the webhook URL with a JSON payload in the request body that carries a lot of information about the GitLab event that led to the webhook invocation. For example, the GitLab webhook push event payload carries the following information in it:
{
"object_kind": "push",
"before": "95790bf891e76fee5e1747ab589903a6a1f80f22",
"after": "da1560886d4f094c3e6c9ef40349f7d38b5d27d7",
"ref": "refs/heads/master",
"checkout_sha": "da1560886d4f094c3e6c9ef40349f7d38b5d27d7",
"user_id": 4,
"user_name": "John Smith",
"user_username": "jsmith",
"user_email": "[email protected]",
"user_avatar": "https://s.gravatar.com/avatar/d4c74594d841139328695756648b6bd6?s=80",
"project_id": 15,
"project":{
"id": 15,
"name":"Diaspora",
"description":"",
"web_url":"http://example.com/mike/diaspora",
"avatar_url":null,
"git_ssh_url":"[email protected]:mike/diaspora.git",
"git_http_url":"http://example.com/mike/diaspora.git",
"namespace":"Mike",
"visibility_level":0,
"path_with_namespace":"mike/diaspora",
"default_branch":"master",
"homepage":"http://example.com/mike/diaspora",
"url":"[email protected]:mike/diaspora.git",
"ssh_url":"[email protected]:mike/diaspora.git",
"http_url":"http://example.com/mike/diaspora.git"
},
"repository":{
"name": "Diaspora",
"url": "[email protected]:mike/diaspora.git",
"description": "",
"homepage": "http://example.com/mike/diaspora",
"git_http_url":"http://example.com/mike/diaspora.git",
"git_ssh_url":"[email protected]:mike/diaspora.git",
"visibility_level":0
},
"commits": [
{
"id": "b6568db1bc1dcd7f8b4d5a946b0b91f9dacd7327",
"message": "Update Catalan translation to e38cb41.",
"timestamp": "2011-12-12T14:27:31+02:00",
"url": "http://example.com/mike/diaspora/commit/b6568db1bc1dcd7f8b4d5a946b0b91f9dacd7327",
"author": {
"name": "Jordi Mallach",
"email": "[email protected]"
},
"added": ["CHANGELOG"],
"modified": ["app/controller/application.rb"],
"removed": []
},
{
"id": "da1560886d4f094c3e6c9ef40349f7d38b5d27d7",
"message": "fixed readme",
"timestamp": "2012-01-03T23:36:29+02:00",
"url": "http://example.com/mike/diaspora/commit/da1560886d4f094c3e6c9ef40349f7d38b5d27d7",
"author": {
"name": "GitLab dev user",
"email": "gitlabdev@dv6700.(none)"
},
"added": ["CHANGELOG"],
"modified": ["app/controller/application.rb"],
"removed": []
}
],
"total_commits_count": 4
}
The Jenkins GitLab plugin makes this webhook payload information available in the Jenkins Global Variable env. The available env variables are as follows:
gitlabBranch
gitlabSourceBranch
gitlabActionType
gitlabUserName
gitlabUserEmail
gitlabSourceRepoHomepage
gitlabSourceRepoName
gitlabSourceNamespace
gitlabSourceRepoURL
gitlabSourceRepoSshUrl
gitlabSourceRepoHttpUrl
gitlabMergeRequestTitle
gitlabMergeRequestDescription
gitlabMergeRequestId
gitlabMergeRequestIid
gitlabMergeRequestState
gitlabMergedByUser
gitlabMergeRequestAssignee
gitlabMergeRequestLastCommit
gitlabMergeRequestTargetProjectId
gitlabTargetBranch
gitlabTargetRepoName
gitlabTargetNamespace
gitlabTargetRepoSshUrl
gitlabTargetRepoHttpUrl
gitlabBefore
gitlabAfter
gitlabTriggerPhrase
Just as you would read Jenkins job parameters from Jenkins Global Variable params in your job pipeline script, you could read webhook payload fields from Jenkins Global Variable env:
echo "My Jenkins job parameter is ${params.MY_PARAM_NAME}"
echo "One of Jenkins job webhook payload field is ${env.gitlabTargetBranch}"
Hope, the above information helps solve your problem.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to get other metrics in Tensorflow 2.0 (not only accuracy)?
I'm new in the world of TensorFlow and I'm working on the simple example of MNIST dataset classification. I would like to know how I can obtain other metrics (e.g. precision, recall, etc.) in addition to accuracy and loss (and possibly show them). Here's my code:
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.callbacks import TensorBoard
import os
import time  # needed for time.time() in the TensorBoard log name below
#load mnist dataset
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
#create and compile the model
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.summary()
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
#model checkpoint (only if there is an improvement)
checkpoint_path = "logs/weights-improvement-{epoch:02d}-{accuracy:.2f}.hdf5"
cp_callback = ModelCheckpoint(checkpoint_path, monitor='accuracy',save_best_only=True,verbose=1, mode='max')
#Tensorboard
NAME = "tensorboard_{}".format(int(time.time())) #name of the model with timestamp
tensorboard = TensorBoard(log_dir="logs/{}".format(NAME))
#train the model
model.fit(x_train, y_train, callbacks = [cp_callback, tensorboard], epochs=5)
#evaluate the model
model.evaluate(x_test, y_test, verbose=2)
Since I get only accuracy and loss, how can I get other metrics?
Thank you in advance. I'm sorry if it is a simple question or if it was already answered somewhere.
A:
I am adding another answer because this is the cleanest way to compute these metrics correctly on your test set (as of 22nd of March 2020).
The first thing you need to do is to create a custom callback, in which you send your test data:
import tensorflow as tf
from sklearn.metrics import classification_report
from tensorflow.keras.callbacks import Callback
class MetricsCallback(Callback):
    def __init__(self, test_data, y_true):
        # Should be the label encoding of your classes
        self.y_true = y_true
        self.test_data = test_data

    def on_epoch_end(self, epoch, logs=None):
        # Here we get the probabilities
        y_pred = self.model.predict(self.test_data)
        # Here we get the actual classes
        y_pred = tf.argmax(y_pred, axis=1)
        # Actual dictionary
        report_dictionary = classification_report(self.y_true, y_pred, output_dict=True)
        # Only printing the report
        print(classification_report(self.y_true, y_pred, output_dict=False))
In your main, where you load your dataset and add the callbacks:
metrics_callback = MetricsCallback(test_data = my_test_data, y_true = my_y_true)
...
...
#train the model
model.fit(x_train, y_train, callbacks = [cp_callback, metrics_callback,tensorboard], epochs=5)
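As a side note, precision and recall for a single class reduce to simple counts over the predictions. A minimal stdlib-only sketch (independent of TensorFlow and scikit-learn; the function name is my own) for reference:

```python
def precision_recall(y_true, y_pred, positive=1):
    """Precision and recall for one class, from plain label lists."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if p == positive and t == positive)
    fp = sum(1 for t, p in pairs if p == positive and t != positive)
    fn = sum(1 for t, p in pairs if p != positive and t == positive)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# 2 true positives, 1 false positive, 1 false negative -> 2/3 and 2/3
print(precision_recall([1, 0, 1, 1, 0], [1, 0, 0, 1, 1]))
```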
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Do Rich Domain Models mean large domain classes are acceptable?
I have been reading a lot about SOLID and Domain Driven Design, then the debate on Anemic Domain Models and Rich Domain Models. I personally prefer the approach where an object will encapsulate its own domain knowledge, however as there is seemingly some difference of opinion I have some questions:
Depending on the type of system, the main domain classes could get quite big, even if the logic of the methods is in separate classes. Is it generally acceptable that the Single Responsibility Principle is ignored here, or is there a way of encapsulating, say, an Order with 50 fields and 50 methods into a nice structure that does not leave you with a 1 MB class? Or is this acceptable given the encapsulation approach?
Is there any guideline or rule of thumb on what should still go into a Domain Service or even Domain Factory, while trying to maintain a Rich Domain Model and encapsulation?
A:
SRP always applies. I would ask myself if that entity makes sense as a whole, or whether it would be easier to understand and work with if you are able to find some internal substructure and split it that way.
If you have a 50-fields order, it might actually be a classical case where bounded contexts apply, that is where an order can be viewed differently by different subsystems, and only parts of the order are needed by each subsystems.
For "Domain Factory" the rule of thumb is that it contains anything related to the object creation.
For "Domain Service" it seems to be a stateless pile of logic that doesn't fit logically in entities. see this.
P.S. I don't think that a 1 MB class (10K lines of code or more) is ever acceptable by any software design methodology (unless it is generated code, and thus is not intended for humans). Unfortunately sometimes the code reaches that state accidentally, due to lack of design planning, fear of refactoring, or deliberate omission (a decision to postpone the tech debt). That depends on the app and programming languages, but my personal rule of thumb is to start worrying and improve the design if the class reaches 1K lines, or even a bit before that.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
404 page when run as web application project
I want to create a basic Google Web Application project and I get a 404 Page not found error.
I followed several tutorials and what i do is this:
Open Eclipse - File - New - Other - Web Application Project (under Google) - name it - (it doesn't matter if I check "Use GWT" or not, the result is the same) - Finish.
Right now I have a newly created web application project, so I can test it on localhost. I right click it, Run As - Web Application Project (with the Google icon). After it finishes, in my console I have this:
Mar 22, 2017 10:13:58 AM com.google.apphosting.utils.jetty.JettyLogger info
INFO: Started [email protected]:8888
Mar 22, 2017 10:13:58 AM com.google.appengine.tools.development.AbstractModule startup
INFO: Module instance default is running at http://localhost:8888/
Mar 22, 2017 10:13:58 AM com.google.appengine.tools.development.AbstractModule startup
INFO: The admin console is running at http://localhost:8888/_ah/admin
Mar 22, 2017 12:13:58 PM com.google.appengine.tools.development.DevAppServerImpl doStart
INFO: Dev App Server is now running
I open Chrome and go to localhost and I get:
HTTP ERROR: 404
Problem accessing /.
Reason:
NOT_FOUND
Powered by Jetty://
I must specify that I am deploying behind a proxy server. I also tried to run it without the proxy; same result. All my settings are correct, because if I create a Dynamic Web Project and deploy it on a Tomcat server, that one works fine.
A:
The solution that worked for me is this: check your Java facet. It has to be 1.7. Mine was 1.8.
Thank you.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Governance methodology for micro services explosion
Microservices are the trend now and most of them are developed in the cloud. I have a situation where we are decomposing most of our monolithic services into domain-level microservices, with each problem domain having just a handful of services.
In the Amazon cloud, each individual service would be further realized as multiple Lambda functions. There would be hundreds of functions, each doing a specific kind of activity and each deployed by an individual pipeline job.
The volume of functions can potentially increase to the order of thousands in the very near future, compared to the 40 monolithic apps we have today. Is there any way to group them, visualize them, and account for usage metrics, cost, etc.?
The situation would be similar to, or more complex than, the XML hell we saw with earlier versions of the Spring framework.
A:
First off you may want to consider a framework such as Chalice if you’re moving to microservices on Lambda. This will help reduce the sprawl of services some but each case is different and all depends on where you draw your bounded contexts.
Speaking from a similar experience to what you’re embarking on, you will want to invest heavily in a few areas. First off having a consistent logging approach is key. You’ll want to ship logs consistently to a single log aggregation service so you can easily query across all services to get metrics. CloudWatch, Sumo Logic, etc can help with this. Also use X-Ray to get more detailed insight.
You will also want to consider adding some automation into your CI/CD pipeline to produce documentation in Swagger or something similar. This should be done in a way that the result is a searchable catalog of all services with all necessary documentation. My experience has involved doing this with Swagger UI and some custom HTML that gets generated and deployed on each build job.
One last recommendation is to invest in testing. Contract testing and backwards-compatibility testing are key to saving yourself from deploying breaking changes. I would also add feature toggles as another practice that can go hand in hand here.
Good luck with this effort!
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Harmonic functions on $\mathbf{Z}^2$
Problem 1: Find all functions $f:\mathbf{Z}^2 \to \mathbf{R}$ which are harmonic in the sense that $$f(x,y) = \frac{f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1)}{4}$$for all $(x,y)\in\mathbf{Z}^2$, and which are also Lipschitz in the sense that the gradients $$f(x+1,y)-f(x,y)\\f(x-1,y)-f(x,y)\\f(x,y+1)-f(x,y)\\f(x,y-1)-f(x,y)$$are all globally bounded.
Obvious examples: $f = 1$, $f=x$, $f=y$, and linear combinations of these. Is this all?
Problem 2: What further examples do we get if we weaken the Lipschitz condition, say by allowing the gradients to grow at most linearly (with respect to distance from $(0,0)$)?
Problem 3: How much do the character of our examples change if we replace the generating set $S = \lbrace (1,0),(-1,0),(0,1),(0,-1)\rbrace$ by another (symmetric) set $S$ which generates $\mathbf{Z}^2$? For example, what if we require that $$f(x) = \frac{1}{|S|}\sum_{s\in S}f(x+s),$$for, say, $S = \lbrace s: \|s\|_2\leq 100\rbrace$? Does the dimension of the space of Lipschitz harmonic functions change?
[Background: I'm trying to understand Kleiner's theorem, which states that if a finitely generated group $G$ has polynomial growth then the space of Lipschitz harmonic functions on $G$ has finite dimension. The simplest example $G=\mathbf{Z}$ is pretty simple, but the second simplest example $G=\mathbf{Z}^2$ already seems nonobvious to me.]
A:
Lemma. Let $f$ be a bounded harmonic function on $\mathbb Z^n$ with respect to a random walk that is not restricted to any proper subset of the grid. Then $f$ is constant.
Proof (from Principles of Random Walk by F. Spitzer). If $f$ is nonconstant, then the function $g(x)=f(x+a)-f(x)$ has positive supremum $M$ for some $a$. Let $x_n$ be a sequence such that $g(x_n)\to M$. Let $g_n(x)=g(x+x_n)=f(x+a+x_n)-f(x+x_n)$. Using the Cantor diagonal argument, pick a subsequence of $g_n$ that converges at every point of the grid. Let $h$ be the limit of this subsequence. Since $h$ is harmonic and attains its maximum at $0$, we have $h\equiv M$. Due to pointwise convergence, for any positive integer $N$ there exists $n$ such that $g_n>M/2$ at the points $0,a, 2a, \dots, Na$. It follows that $f(x_n+Na)-f(x_n)=\sum_{k=0}^{N-1} g_n(ka)>MN/2$ which contradicts the boundedness of $f$. $\Box$
Another, more probabilistic proof is here, but it uses the recurrence of random walk and therefore works only in two dimensions (not for general dimension as claimed there).
Answer 1: If a harmonic function on $\mathbb Z^2$ is Lipschitz, then it's of the form $f(x,y)=ax+by+c$. Indeed, $g(x,y)=f(x+1,y)-f(x,y)$ is bounded and harmonic, therefore constant by the Lemma. Similarly, $f(x,y+1)-f(x,y)$ is constant and thus $f$ is linear.
Partial Answer 2: Linear growth on derivatives allows for 2nd degree harmonic polynomials $f(x,y)=xy$ and $f(x,y)=x^2-y^2$. I think these are all (i.e., the space is 5-dimensional) but don't have a proof. Higher-order polynomial bounds will allow for higher degree polynomials, which are similar, but not identical to the harmonic polynomials on $\mathbb R^2$: see the expository article Discrete analytic functions by Lovász.
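These candidate polynomials are easy to check numerically against the discrete mean-value property; a small sketch (the helper name is my own):

```python
def is_discrete_harmonic(f, points, tol=1e-12):
    """Check f(x,y) == average of the four grid neighbours at each point."""
    return all(
        abs(f(x, y) - (f(x + 1, y) + f(x - 1, y) + f(x, y + 1) + f(x, y - 1)) / 4) < tol
        for x, y in points
    )

pts = [(x, y) for x in range(-5, 6) for y in range(-5, 6)]
print(is_discrete_harmonic(lambda x, y: x * y, pts))          # True
print(is_discrete_harmonic(lambda x, y: x * x - y * y, pts))  # True
print(is_discrete_harmonic(lambda x, y: x * x + y * y, pts))  # False
```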
Answer 3: The space will not change, because the proof from Answer 1 applies here as well.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
SharePoint 2010/Silverlight: Pass custom parameters programmatically
I need to pass the currently logged-in user's information to a Silverlight application through the SharePoint 2010 Silverlight web part. I am familiar with the custom "initialization parameters" in the Silverlight web part's properties, but that does not solve my problem because of the nature of the information: the "Initialization Parameters" value is fixed, while the logged-in user's information changes. I need to pass parameters programmatically to the Silverlight application.
Maybe someone could point me to a custom implementation of a Silverlight host web part. Any help would be much appreciated.
Thanks.
A:
To pass parameters dynamically to a Silverlight application, add a visual web part to your SharePoint 2010 project and add the following code to the markup.
<asp:Panel ID="SilverlightPanel" runat="server" >
<div id="silverlightControlHost" style="width:100%;height:150px">
<object id="SLServicesBanner"
data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="500" height="300">
<param name="source" value="your_xap_file_goes_here"/>
<param name="initParams" value="<%= InitParameters %>" />
<param name="background" value="white" />
<param name="minRuntimeVersion" value="4.0.50401.0" />
<param name="autoUpgrade" value="true" />
<a href="http://go.microsoft.com/fwlink/?LinkID=149156&v=4.0.50401.0" style="text-decoration:none">
<img src="http://go.microsoft.com/fwlink/?LinkId=161376" alt="Get Microsoft Silverlight" style="border-style:none"/>
</a>
</object>
<iframe id="_sl_historyFrame" style="visibility:hidden; height:0px; width:0px; border:0px">
</iframe>
</div>
</asp:Panel>
Observe the "initParams" attribute. You can set it to anything from the code-behind.
Thanks
|
{
"pile_set_name": "StackExchange"
}
|
Q:
select specific lines with the data.table package
I have the following (simplified) dataset:
df <- data.frame(a=c("A","A","B","B","B"),x=c(1,2,3,3,4))
df
a x
1 A 1
2 A 2
3 B 3
4 B 3
5 B 4
Since I'm working with large datasets, I use the data.table package.
Is there a way to get those lines in df, where x is minimal grouped by a. So in this case, I want to select lines 1,3 and 4.
Something like
df[,min(x),by=a]
But that doesn't give me the lines I want; it just shows me the minimum values of x grouped by a.
Any suggestions?
A:
library(data.table)
dt <- data.table(a=c("A","A","B","B","B"), x=c(1,2,3,3,4))
These give only unique rows:
dt[, .SD[which.min(x)], by=a]
Or alternatively:
setkeyv(dt, c("a","x"))
dt[unique(dt[,a]), mult="first"]
Since you want to have all ties:
dt[,.SD[x==min(x)], by=a]
You could also use:
setkeyv(dt,c("a","x"))
dt[dt[unique(dt[,a]), mult="first"]]
Which could be more efficient if you have very big groups.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Memcached dependent items
I'm using memcached (specifically the Enyim memcached client) and I would like to be able to make keys in the cache dependent on other keys, i.e. if Key A is dependent on Key B, then whenever Key B is deleted or changed, Key A is also invalidated.
If possible I would also like to make sure that data integrity is maintained in the case that a node in the cluster fails, i.e. if Key B is at some point unavailable, Key A should still be invalidated if Key B becomes invalid.
Based on this post I believe that this is possible, but I'm struggling to understand the algorithm enough to convince myself how / why this works.
Can anyone help me out?
A:
I've been using memcached quite a bit lately, and I'm sure what you're trying to do with dependencies isn't possible with memcached "as is"; it would need to be handled client side. Likewise, data replication should happen server side, not from the client; these are two different domains. (With memcached at least, given its lack of data storage logic. The point of memcached is just that: extreme minimalism for better performance.)
For data replication (protection against a physically failing cluster node) you should check out Membase instead: http://www.couchbase.org/get/couchbase/current
For the deps algorithm, I could see something like this in a client: for any given key there is an additional key holding the list/array of dependent keys.
# - delete a key, recursive:
function deleteKey( keyname ):
    deps = client.getDeps( keyname )
    foreach ( deps as dep ):
        deleteKey( dep )  # the recursive call already deletes dep itself
    endeach
    memcached.delete( keyname + "_deps" )  # clean up the dependency list too
    memcached.delete( keyname )
endfunction
# return the list of keynames or an empty list if the key doesn't exist
function client.getDeps( keyname ):
    return memcached.get( keyname + "_deps" ) or array()
endfunction
# Key "demokey1" and its counterpart "demokey1_deps". In the list of keys stored in
# "demokey1_deps" there is "demokey2" and "demokey3".
deleteKey( "demokey1" );
# this would first perform a memcached get on "demokey1_deps" then with the
# value returned as a list of keys ("demokey2" and "demokey3") run deleteKey()
# on each of them.
Cheers
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Putting method implementations of the same class in different object files
I'm working on a big project in C++.
I have many classes which have methods that do completely different things (like one dumps, another modifies the object, another checks it to see if it's valid and so on...).
Is it good style to put the implementations of one method for all the classes in one source file (or in a group of object files that may be archived), and all the implementations of another method in another archive?
Could this be beneficial when linking, e.g. when someone doesn't need the dumping methods, or is it better to keep the method implementations of the same class in the same source file, to avoid confusion?
A:
There are trade-offs.
When you change the implementation of any function, the entire translation unit must be re-compiled into a new object file.
If you write only a single function per translation unit, you minimize the length of compilation time caused by unnecessary rebuilds.
On the other hand, by writing a single function per translation unit you maximize the compilation time from scratch, because it is slower to compile many small TUs than a few big TUs.
The optimal solution is personal preference, but usually somewhere in between "single function per TU" and "one massive TU for entire program" (rather than exactly one of those). For member functions, one TU per class is a popular heuristic, but not necessarily always the best choice.
Another consideration is optimisation. Calls to non-inline functions can be expanded inline, but only within the same translation unit. Therefore, it is easier for the compiler to optimize a single massive TU.
Of course, you can choose to define the functions inline, in the header file, but that causes a re-building problem, because if any of the inline functions change, then all who include the header must re-build. This is worse problem than simply having bigger TUs but not as bad as having one massive TU.
So, defining related non-inline functions within the same TU allows the compiler to decide on optimization within that TU, while preventing a re-build cascade. This is advantageous if those related functions would benefit from inline expansion and call each other a lot.
This advantage is mitigated by whole program optimisation.
Third consideration is organisation. It is likely that a programmer who looks at a member function of a class will also be interested in other member functions of that class. Having them in the same source file allows them to spend less time searching for the correct file.
The organizational advantage of grouping all class functions into a common source file is somewhat mitigated by modern IDEs that allow for quickly jumping from source file to header and from there to the other function.
Fourth consideration is the performance of the editor. Parsing a file of tens of thousands of lines or more can be slow and may use a lot of memory depending on parsing technique. One massive TU doesn't necessarily cause this, because you can use separate files that are only included together.
On the other hand, massive number of files can be problematic for some file browsers (probably not much these days) and also for version control systems.
Finally, my opinion: I think that one source file per class is a decent heuristic. But it should not be followed religiously when it's not appropriate.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
What happens to a DWG file once it has been translated into an SVF file?
When I use the Model Derivative API to translate a DWG file into an SVF file, what happens to the DWG file?
Does it get stored in my bucket along with the SVF? Does it get stored somewhere else? Does it get thrown away?
A:
It depends on what policy you set for your bucket when you create it. You can choose a retention policy of either transient, temporary, or persistent, meaning that uploaded files (ie your DWGs before conversion, not the SVFs) will be kept for either 24 hours, 30 days, or permanently.
Read here for more information.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Attempting to write read and write files with java, results in blank output files
I have a file 'A.txt' which consists of 60 lines. Each line contains a single number from 1 to 60. I also have a folder DD that contains 2 files, labelled 'DD1.txt' and 'DD2.txt'
File DD1 has 3 lines, containing 1.1, 1.2 and 1.3. Similarly, file DD2 also has 3 lines, containing 2.1, 2.2 and 2.3.
What I am trying to do is read each line of A.txt, calculate it against all the lines of DD1 and write the calculated values, then calculate it against all the lines of DD2 and write those values. Thus I expect 60 x 2 = 120 files to be created. Each file should have 2 lines. But at the moment I am getting only 2 files, and they are blank inside.
This is what I have tried so far.
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.io.InputStreamReader;
public class Sample {
private static final String FILEA = "E:\\SP\\A.txt";
private static String FILEH = "E:\\SP\\DD\\DD";
private static String FILER = "E:\\SP\\Result\\Result";
private static final String FILEEXE = ".txt";
private static BufferedReader br = null, br1 = null, br2 = null;
private static BufferedWriter bw = null;
private static FileWriter fw = null;
public static void main(String[] args) throws NumberFormatException,IOException, InterruptedException
{
final double C1 = -3.75, C2 = 1.14;
double T = 0, R = 0, E = 0, Efficiency = 1;
System.out.println("Enter the value of Efficiency:");
br = new BufferedReader(new InputStreamReader(System.in));
Efficiency = Double.parseDouble(br.readLine());
try {
String sCurrentLine, sCurrentLine1, sCurrentLine2;
int i = 0;
br1 = new BufferedReader(new FileReader(FILEA));
while ((sCurrentLine = br1.readLine()) != null)
{
i++;
String FILEH1 = FILEH + i + FILEEXE;
String FILER1 = FILER + i + FILEEXE;
br2 = new BufferedReader(new FileReader(FILEH1));
bw = new BufferedWriter(new FileWriter(FILER1));
int count = 0;
while ((sCurrentLine1 = br2.readLine()) != null)
{
T = Double.parseDouble(sCurrentLine1);
R = Double.parseDouble(sCurrentLine);
E = R * 24 * (T * 0.024);
bw.write(Double.toString(E));
bw.newLine();
}
}
} catch (Exception e2) {
// TODO: handle exception
}//end trycatch
}
}
A:
After you make your calculation inside the while loop, try bw.write("\r\n");. Then, after the while loop, close the writer with bw.close(); (the output files stay blank when the BufferedWriter is never flushed or closed). Hope this works for you.
bw.write(Double.toString(E));
bw.write("\r\n");
}
bw.close();
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Is it possible to calculate this conditional probability?
I have this problem.
Suppose that 20% of the population is made of terrorists.
A company developed an alarm such that, when a terrorist goes through the alarm, the alarm always beeps. The problem is that sometimes the alarm beeps also when the person is not a terrorist.
We can improve the alarm in such a way that it still always beeps when faced with terrorist, but when faced with non-terrorists it commits less mistakes.
Let's call alpha the probability that the alarm beeps when faced with a non-terrorist. What is the value of alpha if P(terrorist|alarm_beeped) = 99.9%?
(P(terrorist|alarm_beeped) meaning the probability that the person is a terrorist given that the alarm beeped).
I'm working with Bayes Theorem, but can't find a solution.
Any help?
Thanks
A:
Bayes' theorem is enough. First, define the following events:
$T$ Person is a terrorist
$A$ Alarm beeps
From your statement, we have:
$$ P(T) = 0.2 $$
$$ P(A \vert T) = 1$$
$$ P(A \vert \bar{T}) = \alpha $$
Applying Bayes:
$$ P(T \vert A) = \frac{P(A \vert T) P(T)}{P(A)} = \frac{0.2}{0.2+0.8 \alpha}$$
Where $P(A)$ can be computed from total probability, as follow:
$$P(A) = P(A \vert T) P(T) + P(A \vert \bar{T}) P(\bar{T}) = 0.2 + 0.8 \alpha$$
So, to have $P(T \vert A) = 0.999$, we need:
$$\frac{0.2}{0.2+0.8 \alpha} = 0.999 $$
from which you can solve for $\alpha$.
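Carrying the algebra through, a one-line Python sketch (numbers straight from the problem statement):

```python
# P(T | A) = 0.2 / (0.2 + 0.8 * alpha) = 0.999  =>  solve for alpha
target = 0.999
alpha = (0.2 / target - 0.2) / 0.8
print(alpha)  # about 0.00025, i.e. a false-beep rate of roughly 0.025%
```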
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Hibernate warning about follow-on locking using Oracle 10g
I am using Hibernate 4.3.0.Final / JPA 2.1, Hibernate Search 4.5.0.Final running on WildFly 8.0.0.Final. My application works absolutely fine, but I am getting this hibernate warning when the indexes are being created.
WARN org.hibernate.loader.Loader - HHH000444: Encountered request for locking however dialect reports that database prefers locking be done in a separate select (follow-on locking); results will be locked after initial query execute
This is the method that creates the index:
public void createIndex() throws DAOException {
FullTextEntityManager fullTextEntityManager = Search.getFullTextEntityManager(this.entityManager);
try {
fullTextEntityManager.createIndexer(Colaborador.class)
.purgeAllOnStart(Boolean.TRUE)
.optimizeOnFinish(Boolean.TRUE)
.startAndWait();
}
catch (InterruptedException e) {
logger.error("Error creating index", e);
throw new DAOException(e);
}
}
I've done some searches and I found a "solution", or, better said, a way to suppress the warning. However, I don't know if this is the best solution. The solution proposes extending org.hibernate.dialect.Oracle10gDialect and overriding the method public boolean useFollowOnLocking() to return false.
Another important thing: this only happens after Hibernate version 4.2.0.Final; before this version, there was no useFollowOnLocking() method.
The new dialect:
import org.hibernate.dialect.Oracle10gDialect;
public class MyOracle10gDialect extends Oracle10gDialect {
@Override
public boolean useFollowOnLocking() {
return false;
}
}
I found this solution here and here. There is also a bug report about this warning that was rejected. I have not found any other solution for this warning.
A:
There is no reason to worry about this warning; logging it was a mistake. You should ignore it, or change the logger configuration to suppress it.
I've opened HHH-9097.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Problems encrypting and decrypting a byte array in VB.NET
I am having difficulty encrypting and decrypting a byte array in .NET. I would appreciate some help in understanding why it is not currently working.
Here is the code:
Public Shared Function GenerateKey(password As String, Size As Int32) As Byte()
Dim rfc As New Rfc2898DeriveBytes(password, Salt, iterations:=973)
Return rfc.GetBytes(Size)
End Function
Public Shared Function EncryptArray(ByRef data As Byte(), password As String) As Byte()
Dim key() As Byte = GenerateKey(password, 8)
Dim IV() As Byte = {18, 52, 86, 120, 144, 171, 205, 239}
Using cp As New DESCryptoServiceProvider
Using ms As New MemoryStream
Dim bf As New BinaryFormatter
Using cs As New CryptoStream(ms, cp.CreateEncryptor(key, IV), CryptoStreamMode.Write)
bf.Serialize(cs, data)
cs.FlushFinalBlock()
Return ms.GetBuffer
End Using
End Using
End Using
End Function
Public Shared Sub DecryptArray(ByRef data As Byte(), password As String)
Dim key() As Byte = GenerateKey(password, 8)
Dim IV() As Byte = {18, 52, 86, 120, 144, 171, 205, 239}
Using cp As New DESCryptoServiceProvider
Using ms As New MemoryStream(data)
Using cs As New CryptoStream(ms, cp.CreateDecryptor(key, IV), CryptoStreamMode.Read)
Using br As New BinaryReader(cs)
Dim bf As New BinaryFormatter
data = DirectCast(bf.Deserialize(cs), Byte())
End Using
End Using
End Using
End Using
End Sub
My calling routine:
Public Sub TestArrayEncryption()
Dim text As String = " imkkj r As ing = Hash.gHa.ing(s, ""sh2"", ""DGF&^***YHGJ&^*&(KI~@"")"""
Dim pw As String = "pasword12345678901234567890"
Dim arr As Byte() = Encryption.EncryptArray(Encryption.ConvertUTF8ToByteArray(text), pw)
Encryption.DecryptArray(arr, pw)
Dim txt2 As String = Encryption.ConvertByteArrayToUTF8(arr)
Assert.AreEqual(txt2, text)
End Sub
The code is failing on this line data = DirectCast(bf.Deserialize(cs), Byte()) with the Cryptographic Exception 'Bad Data'.
Addition:
I managed to get this working using other code found on this site. I noticed when using these new routines the encrypted array is larger than the input array. This was not the case in my original code. I would still be interested to know why the original code did not work.
Working code:
Public Shared Function DecryptArray2(ByRef data As Byte(), password As String) As Byte()
Dim key() As Byte = GenerateKey(password, 8)
Dim IV() As Byte = {18, 52, 86, 120, 144, 171, 205, 239}
Dim ddata As Byte()
Using ms As New MemoryStream
Using cp As New DESCryptoServiceProvider
Using cs As New CryptoStream(ms, cp.CreateDecryptor(key, IV), CryptoStreamMode.Write)
cs.Write(data, 0, data.Length)
cs.Close()
ddata = ms.ToArray
End Using
End Using
End Using
Return ddata
End Function
Public Shared Function EncryptArray2(ByRef data As Byte(), password As String) As Byte()
Dim key() As Byte = GenerateKey(password, 8)
Dim IV() As Byte = {18, 52, 86, 120, 144, 171, 205, 239}
Dim edata As Byte()
Using ms As New MemoryStream
Using cp As New DESCryptoServiceProvider
Using cs As New CryptoStream(ms, cp.CreateEncryptor(key, IV), CryptoStreamMode.Write)
cs.Write(data, 0, data.Length)
cs.Close()
edata = ms.ToArray
End Using
End Using
End Using
Return edata
End Function
A:
You shouldn't serialize the data through a BinaryFormatter into your CryptoStream. Just write the bytes into it and get the array from the underlying stream. (Also, ms.GetBuffer returns the stream's whole internal buffer, including unused trailing bytes; use ms.ToArray() instead. That trailing junk is what produces the 'Bad Data' exception on decryption.) For encryption:
Using ms As New MemoryStream
Using cs As New CryptoStream(ms, cp.CreateEncryptor(key, IV), CryptoStreamMode.Write)
cs.Write(data, 0, data.Length)
cs.FlushFinalBlock()
Return ms.ToArray()
End Using
End Using
and for decryption:
Using ms As New MemoryStream(data)
Using cs As New CryptoStream(ms, cp.CreateDecryptor(key, IV), CryptoStreamMode.Read)
Using ms2 As New MemoryStream()
cs.CopyTo(ms2)
data = ms2.ToArray()
End Using
End Using
End Using
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Objective C iOS drawing rectangles on an arc
I need to draw a custom arc in an iOS application. The picture demonstrates how I want to draw the arc, using small rectangles. I have tried to use CGContextAddArc, which draws a line, but I can't understand how to use objects (rectangles) and not just a plain line. Is this even possible, or should I choose another approach?
Here is the arc i want to draw:
A:
Rather than try to draw each element individually, try drawing a single arc with a large width and a dash pattern. For example, a line width of 10.0 and a dash pattern of 1,10 will draw an effect very similar to the one you are trying to achieve - a series of 1x10 rectangles, placed 10 points apart, on the path you specify.
The relevant CGContext functions are CGContextSetLineWidth and CGContextSetLineDash.
A:
I draw this kind of thing using UIBenizerCurve from QuartzCore framework. I do something demo code as following,
in class .h file
struct angleRanges{
float startRange;
float endRange;
}angle_Range;
Provide your ranges of angle.
And in .m file
#import <tgmath.h>
#import <QuartzCore/QuartzCore.h>
#define RADIUS 125.0f
#define METER_END_LIMIT 100.f
#define CIRCLE_STROKE_PATH_WIDTH 30.f
#define ANGAL -90.f
#define DEGREES_TO_RADIANS(x) ((x) * M_PI / 180.0)
NSInteger numberOfParts = METER_END_LIMIT/5;
float angleRange = 0;
for(int loopIndex = 0; loopIndex <= numberOfParts; loopIndex++){
angle_Range.startRange = 0;
angle_Range.endRange = 360 - (180.f * angleRange)/METER_END_LIMIT;
double actualLineAngle = angle_Range.endRange - angle_Range.startRange;
float startAngle = actualLineAngle - 0.5;
float endAngle = actualLineAngle + 0.5;
startAngle = DEGREES_TO_RADIANS(startAngle);
endAngle = DEGREES_TO_RADIANS(endAngle);
UIBezierPath *aPath = [UIBezierPath bezierPathWithArcCenter:CGPointMake(self.center.x, self.center.y + RADIUS/2)
radius:(RADIUS+CIRCLE_STROKE_PATH_WIDTH/3)
startAngle:startAngle
endAngle:endAngle
clockwise:YES];
CAShapeLayer *shapeLayer = [[CAShapeLayer alloc] init];
[shapeLayer setFrame: self.frame];
[shapeLayer setPath: [aPath CGPath]];
shapeLayer.lineWidth = 5;
[shapeLayer setStrokeColor:[[UIColor grayColor] CGColor]];
[shapeLayer setFillColor:[[UIColor clearColor] CGColor]];
[shapeLayer setMasksToBounds:YES];
[self.layer addSublayer:shapeLayer];
[aPath closePath];
angleRange = angleRange + 5.0f;
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Examples of how to transform a random variable with a distribution not Gaussian to a random variable with Gaussian distribution
My question is whether there exists a method to transform a random variable with a non-Gaussian distribution into one with a Gaussian distribution.
I have only found a random variable with the Birnbaum-Saunders distribution that I can transform into a Gaussian.
I would like to obtain other examples.
A:
If you have a continuous random variable $X$ with a known distribution function $F_X$, then the random variable
$$Y=\sigma \Phi^{-1}[F_X(X)] + \mu \sim N(\mu,\sigma^2)$$
where $\Phi^{-1}$ is the inverse standard Normal distribution function.
Since this is the standard way of generating draws from a Normal Distribution (since $F_X(X)\sim U(0,1)$), I wonder whether this is what you are really asking here.
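A small numerical sketch of this transform in Python (the non-Gaussian input is assumed to be Exp(1); the standard normal inverse CDF comes from the standard library):

```python
import math
import random
from statistics import NormalDist, fmean, stdev

random.seed(42)
phi_inv = NormalDist().inv_cdf      # inverse standard normal CDF

# Non-Gaussian input: X ~ Exp(1), drawn by inverse-CDF sampling.
xs = [-math.log(1.0 - random.random()) for _ in range(20_000)]

# Probability integral transform: F_X(X) ~ U(0,1), hence Phi^{-1}(F_X(X)) ~ N(0,1).
ys = [phi_inv(1.0 - math.exp(-x)) for x in xs]   # F_X(x) = 1 - e^{-x} for Exp(1)

print(round(fmean(ys), 2), round(stdev(ys), 2))  # should be close to 0 and 1
```

The reverse direction works the same way: $F_X^{-1}(\Phi(Y))$ has distribution $F_X$ again, which is the usual inverse-transform construction for non-uniform sampling.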
|
{
"pile_set_name": "StackExchange"
}
|
Q:
codeigniter : form validation jquery
I have a form field named nama kategori, and when someone inputs a nama kategori that already exists, I want the button to become unclickable, i.e. clicking the button should not do anything. The problem is that I manage to get the error message when inputting an existing nama kategori, but when I click the button it still sends and inserts the data. For more info, look at the images below
Success Display Error Message
Then i clicked button "Tambah"
It is still adding the data into the table. I want to prevent that: when the button is clicked, it should not do anything. Below is my code
JQUERY
$(document).ready(function(){
var check1=0;
$("#kategori").bind("keyup change", function(){
var nama = $(this).val();
$.ajax({
url:'cekData/kategori/nama_kategori/'+nama,
data:{send:true},
success:function(data){
if(data==1){
$("#report1").text("");
check1=1;
}else{
$("#report1").text("*choose another kategori");
check1=0;
}
}
});
});
});
VIEW
<div class="row">
<div class="col s12 m8 l6 offset-m2 offset-l3" align="center">
<form action="<?php echo site_url('kategori/insertKategori') ?>" method="post">
<div class="input-field">
<input id="kategori" name="kategori" type="text" maxlength="40" class="validate" required>
<label for="kategori">nama kategori</label> <span class="error" id="report1"></span>
</div>
<br/>
<button type="submit" class="waves-effect waves-light btn blue darken-1">Tambah</button>
</form>
<br/>
</div>
</div>
CONTROLLER
public function cekData($table, $field, $data){
$match = $this->MKategori->read($table, array($field=>$data), null, null);
if($match->num_rows() > 0){
$report = 2;
}else{
$report = 1;
}
echo $report;
}
A:
You need to make the following changes to your jQuery code:
$(document).ready(function(){
var check1=0;
$("#kategori").bind("keyup change", function(){
var nama = $(this).val();
$.ajax({
url:'cekData/kategori/nama_kategori/'+nama,
data:{send:true},
success:function(data){
if(data==1){
$("#report1").text("");
check1=1;
$('button[type="submit"]').prop('disabled', false);
}else{
$("#report1").text("*choose another kategori");
check1=0;
$('button[type="submit"]').prop('disabled',true);
}
}
});
});
});
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Is the Thief Race event in Tomba! undoable if done too late?
In the classic Playstation game Tomba!, there is a series of events involving a thief who's imprisoned in the Dwarf Village. First you go into his dungeon and the light goes out, so you restore the light using a torch, then he digs a tunnel to escape, then you find a broken vase in the dungeon, then you cover up the tunnel he's digging, so he's forced back into the dungeon, then he goes to sleep, then you come back and he's awake, then he breaks out again, then he challenges you to a footrace in the Watch Tower, and then after you win you get a Silver Powder that's essential to make candy with Mizuno the witch.
Now I completed all the events in this sequence up to and including "The Great Escape", where you find the broken vase and cover up the thief's tunnel, so the thief has gone back to the dungeon and fallen asleep. But for some reason, he seems to stay asleep indefinitely, rather than just waking up after you leave the dungeon and re-enter it, so I'm unable to initiate the footrace of the "Ready Set Go!" event.
So what's causing the problem, and how can I get him to wake up? I suspect the cause may be that I've progressed too far in the game, while I was supposed to do the thief events earlier on. I've already beaten 6 of the 7 Evil Pigs, including the one responsible for the Dwarf Forest curse. Could it be that the Thief events, specifically the footrace, can only be done when the Dwarf Forest is still under the curse?
A:
The explanation turned out to be trivial. The walkthroughs don't mention it, but it turns out that the thief won't wake up until you do another event - the telescope event. You have to take the telescope from the roof of the wooden observatory on the top of the watch tower (after climbing onto the roof by grapple), climb down and put the telescope on the platform at the top of the observatory ladder, and then take a look through the telescope. Only when you do that event will the thief be awake.
The game doesn't really make this point clear; right before going to sleep, the thief does tell you that the view from the watch tower telescope is really good, but it's hard to tell from that that you're required to look through the telescope before proceeding further with the thief.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
MySQL - Using an alias in a subquery with WHERE clause
I have a feeling I am completely borking this MySQL query, but I'll ask anyway. I am wondering why I get the error Unknown column 'FOO' in 'where clause' when I run the following, and how I can get this query to work properly:
SELECT sample_id AS FOO
FROM tbl_test
WHERE sample_id = 521
AND sample_id IN (SELECT sample_id
FROM tbl_test
WHERE sample_id = FOO
GROUP BY sample_id)
Edit This query works fine on a different server and fails as described above on the new server. The old one was v 5.0.45 and the new one is 5.0.75.
A:
SELECT sample_id
FROM tbl_test outter
WHERE sample_id = 521
AND sample_id IN (SELECT sample_id
FROM tbl_test
WHERE sample_id = outter.sample_id
GROUP BY sample_id)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
commenting code
What's the most professional and informative way of commenting code? Are there any standards out there?
P.S. It doesn't have to be Javadoc; just info on what to include, any common layouts, etc.
cheers guys
A:
There's a big difference between commenting internal method code and commenting APIs.
For code, I'm not familiar of specific practices or layouts. "Use common sense" is the best one. Don't document anything that is obvious from the code, etc, but document everything that is not immediately clear. And remember, the one thing worse than code without comments is code with outdated comments. More comments mean more stuff that needs to be updated.
For API documentation, there are two approaches: the document-everything-in-tons-of-detail approach (proposed by Sun), and the more agile one (document the important parts only). In many places, you are not expected to document API behaviour that is obvious from the signature.
While complete documentation of a method (the Sun approach) is important for a well fleshed-out spec, my research shows that it makes it more difficult to spot important caveats, likely leading to more errors.
For APIs, see also: Creating Great API Documentation: Tools and Techniques
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Are the measurement outcomes of an observable gaussian distributed?
Suppose in an experiment we perform $n$ independent measurements to find the true value of an observable $X$. Let the outcomes of $n$ measurement are denoted by $x_1,x_2,...x_n$. If $n$ is sufficiently large, will these measured outcomes $\{x_i\}$ be Gaussian distributed?
Please note that I am not asking whether the means of the measurements are Gaussian distributed. I know they are.
A:
It's certainly possible to identify an observable with a non-Gaussian distribution.
An example from my professional life was a detector which collected Cherenkov photons from relativistic electrons. We needed the same detector to be sensitive to a macroscopic electron current (nanoamps) but also, with a gain change, to be able to trigger on single electrons. In the counting mode we determined that each electron sent about ten photons to the photomultiplier --- a number small enough that we had to account for the asymmetry in the Poisson distribution ($10\pm\sqrt{10}$) rather than using the Gaussian approximation.
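To make the asymmetry concrete, here is a short Python sketch computing the skewness of a Poisson(10) count (for a Poisson($\lambda$) the skewness is $1/\sqrt\lambda$, whereas a Gaussian has skewness exactly $0$):

```python
import math

lam = 10.0
# Poisson(10) pmf, built iteratively (p_k = p_{k-1} * lam / k) to avoid factorials
p, pmf = math.exp(-lam), []
for k in range(100):          # mass beyond k = 100 is negligible for lam = 10
    if k > 0:
        p *= lam / k
    pmf.append(p)

mean = sum(k * q for k, q in enumerate(pmf))
var  = sum((k - mean) ** 2 * q for k, q in enumerate(pmf))
skew = sum((k - mean) ** 3 * q for k, q in enumerate(pmf)) / var ** 1.5

print(round(skew, 3))  # 0.316, i.e. 1/sqrt(10); a Gaussian would give exactly 0
```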
|
{
"pile_set_name": "StackExchange"
}
|
Q:
C++ setting pointer to function member in a struct from outside the class
I am trying to set, from outside the class, the function pointers contained in the str struct within the class, through the method SetPtr().
I get the error: invalid use of non-static member function.
class C {
public:
float v1, v2;
struct S {
float (*Read)();
void (*Write)(float);
} str;
float ReadV1() {
return v1;
}
void WriteV1(float value) {
v1 = value;
}
float ReadV2() {
return v2;
}
void WriteV2(float value) {
v2 = value;
}
void SetPtr(float (*read)(), void (*write)(float)) {
str.Read = read;
str.Write = write;
}
void F()
{
float f = str.Read();
str.Write(f);
}
};
int main()
{
C c;
c.SetPtr(c.ReadV1, c.WriteV2); // ERROR
c.v1 = 0;
c.F();
return 0;
}
I also tried to replace function pointers by pointer to member functions:
class C {
public:
float v1, v2;
struct S {
float (C::*Read)();
void (C::*Write)(float);
} str;
float ReadV1() {
return v1;
}
void WriteV1(float value) {
v1 = value;
}
float ReadV2() {
return v2;
}
void WriteV2(float value) {
v2 = value;
}
void SetPtr(float (C::*read)(), void (C::*write)(float)) {
str.Read = read;
str.Write = write;
}
void F()
{
float f = str.Read(); // ERROR
str.Write(f); // ERROR
}
};
int main()
{
C c;
c.SetPtr(&C::ReadV1, &C::WriteV2);
c.v1 = 0;
c.F();
return 0;
}
But this will move the error within the class:
error: must use ‘.’ or ‘->’ to call pointer-to-member function in
‘((C*)this)->C::str.C::S::Read (...)’, e.g. ‘(... ->*
((C*)this)->C::str.C::S::Read) (...)’
And whatever combination of this->, braces, *, ->, . I try, it doesn't work.
Any ideas?
Thanks!
A:
You need to use the second form (with pointer to class method), but when calling the methods you need to use:
float f = (this->*str.Read)();
(this->*str.Write) (f);
The first method cannot work as it is because a pointer to a member function won't convert to a pointer to an ordinary function (i.e. float (C::*)() cannot decay to float (*)()).
With C++11 you could store the method as std::function and use std::bind:
#include <functional>
class C {
public:
    float v1, v2;

    struct S {
        std::function<float()> Read;
        std::function<void(float)> Write;
    } str;

    float ReadV1() { return v1; }
    void WriteV2(float value) { v2 = value; }

    void SetPtr(float (C::*read)(), void (C::*write)(float)) {
        str.Read = std::bind(read, this);
        str.Write = std::bind(write, this, std::placeholders::_1);
    }

    void F() {
        float f = str.Read();
        str.Write(f);
    }
};

int main() {
    C c;
    c.SetPtr(&C::ReadV1, &C::WriteV2);
}
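Equivalently, lambdas that capture this can replace std::bind entirely; here is a minimal self-contained sketch (member names follow the question's class, but SetPtr is simplified to hard-wire the two members):

```cpp
#include <functional>

class C {
public:
    float v1 = 0, v2 = 0;

    struct S {
        std::function<float()> Read;
        std::function<void(float)> Write;
    } str;

    float ReadV1() { return v1; }
    void WriteV2(float value) { v2 = value; }

    void SetPtr() {
        // Capturing `this` in a lambda avoids std::bind and the
        // placeholders entirely.
        str.Read  = [this]() { return ReadV1(); };
        str.Write = [this](float f) { WriteV2(f); };
    }

    void F() {
        float f = str.Read();  // reads v1
        str.Write(f);          // stores it into v2
    }
};
```

After c.SetPtr() and c.F(), whatever was in v1 ends up in v2.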
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Problem with AJAX responses
I have the following problem. I programmed a Java servlet which responds to AJAX requests from my JavaScript application. The answer from the servlet is an XML-encoded message. Normally everything works well, but if I send "too many" (I think) AJAX requests, it happens that more than one response ends up within a single AJAX response, and as a consequence Firefox complains with the error message "junk after root document":
e.g.:
<root>
<node1></node1>
</root>
<root>
<node1></node1>
</root>
and that is not allowed (two times <root> in one message). Why does this happen? I always thought that for each AJAX call a new servlet instance is started. Is that wrong?
A:
The earlier answer got it right - your PrintWriter member variable named "writer" is the problem. Tomcat (or any other servlet container) might have one instance handle more than 1 request at the same time. This is a common cause of problems in servlet programming.
Here's some pseudo-code of what Tomcat might be doing (which is valid and expected in servlets, even though it can easily cause problems):
Servlet servlet = new ServletTest();
// in thread 1:
servlet.doPost(request1, response1);
// in thread 2:
servlet.doPost(request2, response2);
So both request1 and request2 might be running at the same time, and sharing the writer variable. If request1 sets the writer, then request2 sets the writer, then request1 writes XML, then request2 writes XML, then you'll get the output you show in your question.
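As an illustrative sketch (these handler classes are made up, not the actual servlet code), the difference between per-request state held in an instance field and per-request state held in a local variable looks like this:

```java
// The "unsafe" version keeps per-request state in an instance field, so
// two concurrent requests share (and can clobber) the same buffer; the
// "safe" version keeps it local to the method, one buffer per call.
class UnsafeHandler {
    private StringBuilder out;            // shared by all request threads

    String handle(String payload) {
        out = new StringBuilder();        // another request may reset this
        out.append("<root>").append(payload).append("</root>");
        return out.toString();            // may contain the other request's data
    }
}

class SafeHandler {
    String handle(String payload) {
        StringBuilder out = new StringBuilder();  // local: one per request
        out.append("<root>").append(payload).append("</root>");
        return out.toString();
    }
}
```

In a servlet the same fix means never storing the PrintWriter (or anything else request-specific) in a field: obtain it from the HttpServletResponse inside doPost and keep it in a local variable.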
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Finding the time difference between two specific SQL rows and populating a new table
I have a table; a shortened version is shown below. I need to find how long each ID stays offline. If an ID does not come back online, it should be ignored. Each ID can go offline multiple times, but needs to come online before it can go offline again.
|ID | Description | Time |
|---|-------------|--------------------|
|1 |Offline |'2017-09-07 12:53:02|
|---|-------------|--------------------|
|2 |Offline |'2017-09-07 12:54:00|
|---|-------------|--------------------|
|2 |Online |'2017-09-07 12:54:01|
|---|-------------|--------------------|
|3 |Offline |'2017-09-07 12:54:02|
|---|-------------|--------------------|
|1 |Online |'2017-09-07 12:55:21|
|---|-------------|--------------------|
|2 |Offline |'2017-09-07 12:57:21|
|---|-------------|--------------------|
|2 |Online |'2017-09-07 12:58:21|
This is the resulting table I need, the order doesn't matter(Time difference can be in seconds)
|ID |Time Difference |
|---|----------------|
|1 |141 |
|---|----------------|
|2 |1 |
|---|----------------|
|2 |60 |
A:
In your data, the online/offline states are interleaved. This makes it easy to use lead():
select id,
       datediff(second, time, next_t) as time_difference
from (select t.*,
             lead(description) over (partition by id order by time) as next_d,
             lead(time) over (partition by id order by time) as next_t
      from t
     ) t
where description = 'Offline' and next_d = 'Online';
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How can I display two arrays in the same fragment?
I am trying to make an array with different categories to display them in the same fragment; the problem is that only one array is shown, not both. I would like to know whether it can be done this way or if there is another approach.
This is my code where I want to put the two arrays:
public class FragmentNovedades extends Fragment {
public RecyclerView recyclerView;
public LinearLayoutManager linearLayout;
public FragmentNovedades() { }
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
Bundle savedInstanceState) {
View view = inflater.inflate(R.layout.fragment_novedades, container, false);
recyclerView = (RecyclerView) view.findViewById(R.id.recycler_view);
recyclerView.setHasFixedSize(true);
linearLayout = new LinearLayoutManager(getActivity());
recyclerView = (RecyclerView) view.findViewById(R.id.recycler_view);
recyclerView.setLayoutManager(linearLayout);
ListaAdaptador adapter = new ListaAdaptador(getContext(),ListaObjetos.getCourses());
recyclerView.setAdapter(adapter);
        // this is the second list I want to add, but only the one above is shown
ListaAdaptadorVideo adaptervideo=new
ListaAdaptadorVideo(getContext(),ListaObjetosVideo.getVideos());
recyclerView.setAdapter(adaptervideo);
return view;
}
}
When I put in the following two lines it does not show both lists (only one). How can I display both?:
ListaAdaptador adapter = new ListaAdaptador(getContext(),
ListaObjetos.getCourses());
ListaAdaptadorVideo adaptervideo=new ListaAdaptadorVideo(getContext(),
ListaObjetosVideo.getVideos());
A:
The problem is that you have a single RecyclerView and you are trying to set 2 different adapters on this one element. Note that you instantiate a single RecyclerView.
...public RecyclerView recyclerView, recyclerViewVideo;...
And then you bind them to the same view element, when you should have 2:
recyclerView = (RecyclerView) view.findViewById(R.id.recycler_view);
recyclerViewVideo = (RecyclerView) view.findViewById(R.id.recycler_view_video);
And then set each adapter on its corresponding RecyclerView.
I hope this helps.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Error generated when tried to call superclass method from the main method of the program
I'm new to Java and is trying to learn the concept of inheritance. When I tried to call the EggLayer's identifyMyself() method from the main class's identifyMyself method() using
System.out.println(EggLayer.super.identifyMyself());
it works as expected. However, when I tried to call the EggLayer's identifyMyself() method from the main class's main method() using the same statement, the compiler generate an error saying:"not a enclosing class: EggLayer".
Could someone please explain to me why this is the case?
interface Animal {
default public String identifyMyself() {
return "I am an animal.";
}
}
interface EggLayer extends Animal {
default public String identifyMyself() {
return "I am able to lay eggs.";
}
}
interface FireBreather extends Animal {
@Override
default public String identifyMyself(){
return "I'm a firebreathing animal";
}
}
public class Dragon implements EggLayer, FireBreather {
public static void main (String... args) {
Dragon myApp = new Dragon();
System.out.println(myApp.identifyMyself());
/**
*Not allowed, compiler says "not a enclosing class: EggLayer"
*System.out.println(EggLayer.super.identifyMyself());
*/
}
public String identifyMyself(){
//Call to EggLayer.super.identifyMyself() allowed
System.out.println(EggLayer.super.identifyMyself());
return "im a dragon egglayer firebreather";
}
}
A:
The error comes from where you make the call, not from the inheritance itself. The EggLayer.super.identifyMyself() syntax is only valid inside an instance method of a class that directly implements EggLayer; main is static, so there is no enclosing instance, which is what the compiler means by "not an enclosing class: EggLayer". Note also that EggLayer is not a class, it is an interface, and because your Dragon class implements two interfaces whose default public String identifyMyself() methods return different things, Dragon must override the method and may disambiguate there, exactly as you already do in your instance method.
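A trimmed, self-contained sketch of that rule (reduced from the question's hierarchy, with the Animal interface and method bodies cut down):

```java
interface EggLayer {
    default String identifyMyself() { return "I am able to lay eggs."; }
}

interface FireBreather {
    default String identifyMyself() { return "I'm a firebreathing animal"; }
}

class Dragon implements EggLayer, FireBreather {
    @Override
    public String identifyMyself() {
        // Legal only here: this is an instance method of the class that
        // directly implements EggLayer, so an enclosing instance exists.
        return EggLayer.super.identifyMyself();
    }
}
```

From main you then write new Dragon().identifyMyself() rather than EggLayer.super.identifyMyself().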
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Button "Add" behavior is very strange
Application has Add to Cart button, which creates new @line_item (code is based on Agile Development with Rails).
I added an Ajax-feature, so Add to Cart doesn't reload page, and adds @line_item to @cart.
But Rails does this three times per button push!
I push "Add to Cart", and it adds 3 items.
Also, when I empty the cart, it asks me three times "Are you sure?"
Have no idea, what may cause these, any ideas?
def create
@cart = current_cart
product = Product.find(params[:product_id])
@line_item = @cart.add_product(product.id)
respond_to do |format|
if @line_item.save
format.html { redirect_to(store_url) }
format.js
format.xml { render :xml => @line_item,
:status => :created, :location => @line_item }
else
format.html { render :action => "new" }
format.xml { render :xml => @line_item.errors,
:status => :unprocessable_entity }
end
end
end
button:
<%= button_to 'Add to Cart' , line_items_path(:product_id => product),:remote => true %>
create.js.erb:
$('#cart').html("<%= escape_javascript(render(@cart)) %>");
_cart.html.erb template:
<div class="cart_title" >Your Cart</div>
<table>
<%= render(cart.line_items) %>
<tr class="total_line" >
<td colspan="2" >Total</td>
<td class="total_cell" ><%= number_to_currency(cart.total_price) %></td>
</tr>
</table>
<%= button_to 'Empty cart' , cart, :method => :delete,
:confirm => 'Are you sure?' %>
_line_item.html.erb template:
<tr>
<td><%= line_item.quantity %>×</td>
<td><%= line_item.product.title %></td>
<td class="item_price" ><%= number_to_currency(line_item.total_price) %></td>
</tr>
A:
Problem solved.
In my application layout view I thought it was right to do this:
<!DOCTYPE html>
<html>
<head>
<title>Store</title>
<%= stylesheet_link_tag "store" %>
<%= javascript_include_tag 'application' %>
<%= csrf_meta_tags %>
<title>Cart</title>
<%= stylesheet_link_tag "carts" %>
<%= javascript_include_tag 'cart' %> #this
<%= csrf_meta_tags %>
Actually, I suppose that once you have included "application" on the main page, #this duplicates the execution of the JS code.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Wireless connection unstable on home network
I have a wireless home network based on a 3Com OfficeConnect Wireless 11g Cable device. A few laptops, my iPhone and my wife's iPod touch usually connect wirelessly and a printer plus a couple of NAS devices are connected by wire. Everything used to work fine until a few weeks ago. Since then, the wireless connection drops on the iPhone and iPod touch every so often, but never on the wireless laptops. The wired connection never drops either.
The latest changes I have made to the network are an upgrade from Windows Vista to to Windows 7 on a couple of machines, and a firmware update on a NAS device. I don't think this could be related, but I can't be sure anymore. Anyone has an idea of what the cause of this issue may be? Thank you.
A:
Try changing the channel or frequency on your wireless router (channels 1 through 11 in the US); someone else in the neighborhood may have set up a router using the same channel.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
AspNet Categorize the messages by UserName when asking for Query
My database is structured as below
public partial class AspNetUserMessage
{
public string From { get; set; }
public string To { get; set; }
public string Text { get; set; }
public System.DateTime SentDate { get; set; }
public Nullable<System.DateTime> ReadDate { get; set; }
public int Id { get; set; }
}
I have an API endpoint which returns all the messages sent by a user, e.g.
[
{
"From": "0617c281-5488-4009-bf37-761f6cfea2df",
"To": "f54b7bf4-c7b5-482c-83b8-5b61e78ea398",
"Text": "i love this backend!!",
"SentDate": "2020-04-10T00:00:00",
"ReadDate": "2020-04-10T00:00:00",
"Id": 1
},
{
"From": "0617c281-5488-4009-bf37-761f6cfea2df",
"To": "f54b7bf4-c7b5-482c-83b8-5b61e78ea398",
"Text": "this is a sample msg",
"SentDate": "0001-01-01T00:00:00",
"ReadDate": null,
"Id": 2
},
{
"From": "0617c281-5488-4009-bf37-761f6cfea2df",
"To": "f54b7bf4-c7b5-482c-83b8-5b61e78ea398",
"Text": "id test",
"SentDate": "0001-01-01T00:00:00",
"ReadDate": null,
"Id": 3
},
{
"From": "0617c281-5488-4009-bf37-761f6cfea2df",
"To": "f54b7bf4-c7b5-482c-83b8-5b61e78ea398",
"Text": "blocked test",
"SentDate": "0001-01-01T00:00:00",
"ReadDate": null,
"Id": 4
},
{
"From": "0617c281-5488-4009-bf37-761f6cfea2df",
"To": "f54b7bf4-c7b5-482c-83b8-5b61e78ea398",
"Text": "blocked other way test",
"SentDate": "0001-01-01T00:00:00",
"ReadDate": null,
"Id": 5
},
{
"From": "0617c281-5488-4009-bf37-761f6cfea2df",
"To": "f54b7bf4-c7b5-482c-83b8-5b61e78ea398",
"Text": "date test",
"SentDate": "2020-04-11T00:00:00",
"ReadDate": null,
"Id": 6
},
{
"From": "0617c281-5488-4009-bf37-761f6cfea2df",
"To": "f54b7bf4-c7b5-482c-83b8-5b61e78ea398",
"Text": "date test",
"SentDate": "2020-04-11T00:00:00",
"ReadDate": null,
"Id": 7
},
{
"From": "0617c281-5488-4009-bf37-761f6cfea2df",
"To": "f54b7bf4-c7b5-482c-83b8-5b61e78ea398",
"Text": "id test",
"SentDate": "0001-01-01T00:00:00",
"ReadDate": null,
"Id": 8
},
{
"From": "0617c281-5488-4009-bf37-761f6cfea2df",
"To": "f54b7bf4-c7b5-482c-83b8-5b61e78ea398",
"Text": "id test",
"SentDate": "0001-01-01T00:00:00",
"ReadDate": null,
"Id": 9
}
]
My code is the following:
public IEnumerable<AspNetUserMessage> GetAllMessages([FromBody]MessageRequest Request)
{
try
{
// Get all the messages sent or received by the user
var messages = db.AspNetUserMessages.Where(o => o.From == Request.From || o.To == Request.From).ToList();
return messages;
}
catch (Exception e)
{
// Log the error to the database
LogException(e);
return null;
}
}
What I want to do is instead of returning all the messages, I want to return a categorized version of the messages. E.g
User1 sent 4 messages to User2;
User1 sent 2 messages to User3
For user1 output should be
[
{
"SentTo": "User2"
"Messages": [All 4 Messages sent to User2 here]
},
{
"SentTo": "User3"
"Messages": [All 2 Messages sent to User3 here]
}
]
How can I achieve this modifying my GetAllMessages function
A:
You have to make a new model that wraps your AspNetUserMessage and fill it.
Make a new model, for now I named it SentMessagesModel
public class SentMessagesModel
{
    public string SentTo { get; set; }
    public List<AspNetUserMessage> Messages { get; set; }
}
Then make a new controller action that uses the new model. Use the code below
public List<SentMessagesModel> GetAllMessages([FromBody]MessageRequest Request)
{
try
{
// Get all the messages sent or received by the user
var messages = db.AspNetUserMessages.Where(o => o.From == Request.From).ToList();
var groupByTo = messages.GroupBy(m=>m.To);
List<SentMessagesModel> sentMessages = new List<SentMessagesModel>();
foreach(var currentGroup in groupByTo){
      sentMessages.Add(new SentMessagesModel(){ SentTo = currentGroup.Key, Messages = currentGroup.ToList() });
}
return sentMessages;
}
catch (Exception e)
{
// Log the error to the database
LogException(e);
return null;
}
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Finding number of points where a given function is continuous
I was thinking about the above problem.Can someone point me in the right direction?Thanks everyone in advance for your time.
A:
Hint: This function is continuous exactly at the solutions of $\frac{3x}4=\sin x$ (why?)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
postgres trigger creation - ERROR: no language specified SQL state: 42P13
I am new to triggers. I am trying to create a trigger by following this link - http://www.postgresqltutorial.com/creating-first-trigger-postgresql/. But it gives an error. The code block and the error are given below:
code block >>
CREATE OR REPLACE FUNCTION log_last_name_changes()
RETURNS trigger AS
$BODY$
BEGIN
IF NEW.last_name <> OLD.last_name THEN
INSERT INTO employee_audits(employee_id,last_name,changed_on)
VALUES(OLD.id,OLD.last_name,now());
END IF;
RETURN NEW;
END;
$BODY$
And the error >>
ERROR: no language specified
SQL state: 42P13
Can anyone help me please ?
A:
Try this way:
CREATE OR REPLACE FUNCTION log_last_name_changes()
RETURNS trigger AS
$BODY$
BEGIN
IF NEW.last_name <> OLD.last_name THEN
INSERT INTO employee_audits(employee_id,last_name,changed_on)
VALUES(OLD.id,OLD.last_name,now());
END IF;
RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql VOLATILE -- Says the function is implemented in the plpgsql language; VOLATILE says the function has side effects.
COST 100; -- Estimated execution cost of the function.
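Note that the function alone does nothing until it is attached to a table with CREATE TRIGGER. Following the linked tutorial, that step looks roughly like this (the employees table name comes from the tutorial; adjust it to your own table):

```sql
CREATE TRIGGER last_name_changes
  BEFORE UPDATE
  ON employees
  FOR EACH ROW
  EXECUTE PROCEDURE log_last_name_changes();
```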
|
{
"pile_set_name": "StackExchange"
}
|
Q:
passing shared array in julia using @everywhere
I found this post - Shared array usage in Julia, which is clearly close but I still don't really understand what to do in my case.
I am trying to pass a shared array to a function I define, and call that function using @everywhere. The following, which has no shared array, works:
@everywhere mat = rand(3,3)
@everywhere foo1(x::Array) = det(x)
Then this
@everywhere println(foo1(mat))
properly produces different results from each worker. Now let me include a shared array:
test = SharedArray(Float64,10)
@everywhere foo2(x::Array,y::SharedArray) = det(x) + sum(y)
Then this
@everywhere println(foo2(mat,test))
fails on the workers.
ERROR: On worker 2:
UndefVarError: test not defined
etc. I can get what I want like this:
for w in procs()
@spawnat w println(foo2(eval(:mat),test))
end
This works - but is it optimal? Is there a way to make it work with @everywhere?
A:
While it's tempting to use "named variables" on workers, it generally seems to work better if you access them via references. Schematically, you might do something like this:
mat = [@spawnat p rand(3,3) for p in workers()] # process 1 holds references to objects on workers
@sync for (i, p) in enumerate(workers())
@spawnat p foo(mat[i], sharedarray)
end
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to use PendingIntent on BroadcastReceiver
I am trying to set up a message application such that:
First we set up a message on PreferenceConnector and we receive the
message then
BroadcastReceiver checks that the message contents are equal then
I want to get a PendingIntent.
try {
if (PreferenceConnector.readString(context,"MSG","tempmsg2").equals(messages[0].getMessageBody())) {
Intent i=new Intent(context, SecureMobiActivity.class);
PendingIntent pi=PendingIntent.getBroadcast(context, 0, i, 0);
} else {
Toast.makeText(Remotelock.this, "message are not equal!", Toast.LENGTH_LONG).show();
}
} catch (Exception e) {
// TODO: handle exception
e.printStackTrace();
}
A:
The answer is quite simple: start the activity directly instead of building a PendingIntent.
try {
if (PreferenceConnector.readString(context,"MSG","tempmsg2").equals(messages[0].getMessageBody())) {
Intent i = new Intent(mContext,SecureMobiActivity.class);
i.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
mContext.startActivity(i);
} else {
Toast.makeText(Remotelock.this, "message are not equal!", Toast.LENGTH_LONG).show();
}
} catch (Exception e) {
// TODO: handle exception
e.printStackTrace();
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Closure of a Fundamental Weyl Chamber
Can someone explain what a "closure" of a Fundamental Weyl Chamber means? I assume it is related to an algebraic closure, but I don't see how. In addition, how does the Weyl group act on it and why does it act that way?
Thank you for the help
A:
It is the topological closure $\overline{C}$ ($C$ is the fundamental chamber) with respect to the topology of the ambient Euclidean space. The difference $\overline{C}\setminus C$ consists of walls, sections of the hyperplanes bordering $C$ (remember that those are in bijective correspondence with the simple roots).
The Weyl group does not act on $\overline{C}$ at all, in the sense that it can move points out of the closure. More or less the exact opposite is true (presumably you are really asking about this). Each orbit of the Weyl group intersects $\overline{C}$ in a single point. The points in $C$ have trivial stabilizers. Any point $x$ on the boundary of $C$ has a non-trivial stabilizer. The stabilizer $\operatorname{Stab}_W(x)$ is generated by the simple reflections that keep $x$ fixed. In other words, these stabilizers are themselves Weyl groups of a lower rank root system.
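A rank-one example makes this concrete. For the root system $A_1$ in $\mathbb{R}$ the Weyl group is $\{1,s\}$ with $s(x)=-x$; the fundamental chamber is $C=(0,\infty)$ and its closure is $\overline{C}=[0,\infty)$. The orbit $\{x,-x\}$ of any point meets $\overline{C}$ in exactly one point, every $x>0$ has trivial stabilizer, and the single wall point $x=0$ is fixed by the whole group.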
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to enrich the payload with an object from MongoDB (camel-mongodb)
I'm trying to pull object from MongoDb and ADD it to my current payload and save it in another database:
@Override
public void configure() throws Exception
{
from(kafkaEndpoint)
.convertBodyTo(DBObject.class)
.enrich("mongodb:mongoDb?database=myDbName1&collection=UserColl&operation=findOneByQuery",
(original, external) -> {
DBObject originalBody = original.getIn().getBody(DBObject.class);
DBObject externalBody = external.getIn().getBody(DBObject.class);
Map<String, DBObject> map = new HashMap<String, DBObject>();
map.put("original", originalBody);
map.put("external", externalBody);
original.getIn().setBody(map);
return original;
})
.to("mongodb:mongoDb?database=myDbName2&collection=UserColl&operation=insert");
}
The problem is that enrich takes its query from the In body, which holds my original object...
So how can I pass a query ({"entity.id": ""}) to enrich(mongodb:...) and preserve the original object to merge it with the results?
Thanks.
A:
@Override
public void configure() throws Exception
{
from(kafkaEndpoint)
.convertBodyTo(DBObject.class)
.enrich("direct:findOneByQuery", // <-------
(original, external) -> {
DBObject originalBody = original.getIn().getBody(DBObject.class);
DBObject externalBody = external.getIn().getBody(DBObject.class);
Map<String, DBObject> map = new HashMap<String, DBObject>();
map.put("original", originalBody);
map.put("external", externalBody);
original.getIn().setBody(map);
return original;
})
.to("mongodb:mongoDb?database=myDbName2&collection=UserColl&operation=insert");
}
from("direct:findOneByQuery")
.process(new Processor()
{
@Override
public void process(Exchange exchange) throws Exception
{
DBObject body = exchange.getIn().getBody(DBObject.class);
DBObject query = BasicDBObjectBuilder.start()
.append("entity._id", body.get("_id"))
.get();
exchange.getIn().setBody(query);
}
})
.to("mongodb:mongoDb?database=myDbName1&collection=UserColl&operation=findOneByQuery");
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Process on a lookup field
I have been trying to figure out a solution for a lookup field for a custom object. The lookup field looks up and grabs the customer number. I'm required to process further on it's Id.
The visualforce page has 2 item:
inputField for the lookup and
Execute button
The problem here I'm facing is that I'm not able to bind the lookup value that is manually entered by the user to the command button for further processing.
Here's what I tried:
Don't want to use extension as Id is not passed on the URL, so cant use getRecord();
Can't read from URL, so no getparameter().
Can do it with a String text, but not with a lookup field.
My VF:
<b>My Account</b>
<apex:inputField value="{!Obj.My_Account__c}"/>
<apex:commandButton value="Go" action="{!Process}"
Apex:
public with sharing class MyController {
public MyObject__c Obj{get;set;}
public MyController (ApexPages.StandardController controller) {
this.Obj= (MyObject__c )controller.getRecord();
System.debug('@@Obj'+this.Obj);
}
public PageReference Process(){
//obj.My_Account__c is null
MyInput items = new MyInput ();
items.customers = new List<ID>{'007ID123213Qdf0'}; //this is hardcoded but should be manually from type My_Account__c
System.debug('My Results' + MyInput.processMe(items) ); // Working perfectly for hardcoded values as I'm not able to pass lookup values to command button
return null;
}
}
Any suggestion how I can do it on my existing controller?
A:
When you're using a standard controller, you should bind to the standard controller object. In other words:
<apex:inputField value="{!MyObject__c.My_Account__c}"/>
This has several other perks as well, such as querying the value from the database when an Id parameter is provided to the page, and automatic loading of the value into the lookup field when you use a button on the related list for the field "My Account", etc.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
VueJS - Component inside of v-for
I am trying to render a list of objects from my Vue-Instance. Each object should use a component, so I put the component into the v-for-loop. But all I get is list.title and list.text instead of the correct values.
Is there a special way to use components in v-for-loops?
I found this thread in the Vue-Forum, but don't know how to use it or if it's the right way.
App:
<div id="app">
<div v-for="list in lists">
<listcard title="list.title" text="list.text"></listcard>
</div>
</div>
Template:
<template id="listcard-template">
<div class="card">
<h2>{{ title }}</h2>
<p>{{ text }}</p>
</div>
</template>
My component:
Vue.component('listcard', {
template: '#listcard-template',
props: ['title', 'text']
})
Vue-Instance:
new Vue({
el: "#app",
data: {
lists: [
{title: "title1", text: "text1"},
{title: "title2", text: "text2"},
...
]
}
})
Thanks!
A:
You should pass them as dynamic props using : in front of the parameters:
<listcard :title="list.title" :text="list.text"></listcard>
From documentation:
A common mistake beginners tend to make is attempting to pass down a number using the literal syntax:
<!-- this passes down a plain string "1" -->
<comp some-prop="1"></comp>
However, since this is a literal prop, its value is passed down as a plain string "1", instead of an actual number. If we want to pass down an actual JavaScript number, we need to use the dynamic syntax to make its value be evaluated as a JavaScript expression:
<!-- this passes down an actual number -->
<comp :some-prop="1"></comp>
https://vuejs.org/guide/components.html#Literal-vs-Dynamic
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Best way to make an article model on Django?
I'm making a site and I would like to be able to change the content from the Django's administration. What would be the best way in making something like a model called an Article and it has a title and content. I need to know how I would be able to choose a position for the article. Would the best way be something such as a CharField named for example home_main and just add those in the template. Or is there a better way to do this?
A:
I would add a CharField with choices to the Article model. I can imagine there will only be a set of fixed choices where an article can be positioned.
HOME_MAIN = "HOME_MAIN"
BOTTOM_MAIN = "BOTTOM_MAIN"
TOP_MAIN = "TOP_MAIN"

POSITION_CHOICES = (
    (HOME_MAIN, "Home main"),
    (BOTTOM_MAIN, "Bottom main"),
    (TOP_MAIN, "Top main"),
)

class Article(models.Model):
    position = models.CharField(max_length=50, choices=POSITION_CHOICES)
https://docs.djangoproject.com/en/dev/ref/models/fields/
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Neo4J ClientError.Statement.SyntaxError
I am getting Neo.ClientError.Statement.SyntaxError while loading data from CSV file.
Neo.ClientError.Statement.SyntaxError: Invalid input 'h': expected
'i/I' (line 5, column 3 (offset: 189)) "Merge (Zip_Code:Zip_Code
{code: row.zip_cd,type:'location'})"
Here is my Query:
Using Periodic Commit
LOAD CSV WITH HEADERS FROM "file:///DOL_data_whd_whisard_reduced.csv" AS row
Merge (State_Code:State_Code {code: row.st_cd})
where not row.st_cd is null
Merge (Zip_Code:Zip_Code {code: row.zip_cd,type:'location'})
where not row.zip_cd is null
Merge (Zip_Code)-[:located_in]->(State_Code)
There are some blank records in the CSV, hence I have used the null checks, but this is giving me the error shown above.
Can anyone help me out of it?
A:
You are getting an error because you are using WHERE with MERGE clause. WHERE can not be used with MERGE.
You can modify your query to remove the syntax error as follows:
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM "file:///DOL_data_whd_whisard_reduced.csv" AS row
WITH row
WHERE NOT row.st_cd IS NULL AND NOT row.zip_cd IS NULL
MERGE (state_code:State_Code {code: row.st_cd})
MERGE (zip_code:Zip_Code {code: row.zip_cd, type:'location'})
MERGE (zip_code)-[:located_in]->(state_code)
NOTE:
This will skip the record if one of st_cd or zip_cd is NULL.
It's not recommended to use more than one MERGE in a single query, consider writing 3 separate queries for this.
Recommended method:
Load State codes:
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM "file:///DOL_data_whd_whisard_reduced.csv" AS row
WITH row
WHERE NOT row.st_cd IS NULL
MERGE (state_code:State_Code {code: row.st_cd})
Load Zip codes:
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM "file:///DOL_data_whd_whisard_reduced.csv" AS row
WITH row
WHERE NOT row.zip_cd IS NULL
MERGE (zip_code:Zip_Code {code: row.zip_cd, type:'location'})
Create State-Zip relationships:
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM "file:///DOL_data_whd_whisard_reduced.csv" AS row
WITH row
WHERE NOT row.st_cd IS NULL AND NOT row.zip_cd IS NULL
MATCH (state_code:State_Code {code: row.st_cd})
MATCH (zip_code:Zip_Code {code: row.zip_cd, type:'location'})
MERGE (zip_code)-[:located_in]->(state_code)
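One further suggestion: each MERGE performs a lookup, so if these loads are slow it usually helps to create uniqueness constraints (which also create indexes) before loading. Something along these lines, using the Neo4j 3.x constraint syntax:

```cypher
CREATE CONSTRAINT ON (s:State_Code) ASSERT s.code IS UNIQUE;
CREATE CONSTRAINT ON (z:Zip_Code) ASSERT z.code IS UNIQUE;
```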
|
{
"pile_set_name": "StackExchange"
}
|
Q:
SQL Server 2008 Geometry.STBuffer(…) really slow
I’ve got a basic SQL query that looks like:
SELECT TOP 1
[geom].STBuffer(500)
FROM [db].[dbo].[boundaries]
Which essentially takes a map boundary from the database and buffers it by 500m. The problem I'm having is that it's incredibly slow and then the server runs out of memory! I'm sure something must be wrong as a simple operation like this in a GIS program takes seconds to run, whereas this runs for around a minute before giving up.
The boundary is fairly complicated, but it shouldn’t be so complicated as to cause the server to run out of memory, I’m sure of that.
If I reduce the buffer distance to say 100m, it runs and completes within around 14 seconds, which is still too slow to be useful in realtime.
Any idea as to why it might be so slow, and any tips as to how I can speed it up?
Thanks,
A:
This is a known limitation with STBuffer in Sql Server 2008 - it is prone to being slow and potentially running out of memory when the distance parameter is larger than the diameter of the object and the object has more than 1000 points. There is a connect item for this issue and it is fixed in Sql Server Denali.
As a workaround, you can try running Reduce on the object before invoking buffer to lower its complexity, and using BufferWithTolerance method to pass in a higher tolerance which will result in a less complex result.
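A rough sketch of that workaround against the original query (the Reduce and tolerance values here are placeholders to tune for your data, not recommendations):

```sql
SELECT TOP 1
    [geom].Reduce(10).BufferWithTolerance(500, 25, 0)
FROM [db].[dbo].[boundaries]
```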
|
{
"pile_set_name": "StackExchange"
}
|
Q:
What makes Knuth's "The Art of Computer Programming" books good?
Are they suitable for a beginner to read in order to understand algorithms?
A:
Because these books are basic and fundamental. They cover every question in great depth, and approach them not from the point of view of a specific language or a specific technology, but from the point of view of constructing algorithms and correct thinking, and they leave no gaps in the topics they cover.
They teach you how to program, with the proper approach of a scientist rather than a craftsman. Knuth's fundamental question is not how to solve a problem, but which way of solving it is best under the given circumstances.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Can Lean Six Sigma be implemented in a Service-Oriented Company?
In the past I've studied implementing Lean Six Sigma methodologies, but the literature and the use cases were related to product-oriented environments. Can Lean Six Sigma methodologies be implemented in a service-oriented environment?
A:
Convergys reported successful Six Sigma implementation
Convergys is a customer care company with call centers in many countries. I happen to know that they implemented Six Sigma in their call centers (a service-oriented environment) in India and reported major savings. Here is a list of Lean Six Sigma success stories, including the Convergys one. You might want to check these out and see whether any of them gives you additional info for the service-oriented environment.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Parsing and generating JSON
Mathematica's list of built-in formats is pretty extensive; however, JSON is not on that list. Is there an existing solution for generating and parsing JSON in Mathematica, or are we going to have to roll our own solution?
A:
UPDATE: As noted in Pillsy's answer, JSON is a built-in format for Import and Export as of Mathematica 8: http://reference.wolfram.com/mathematica/ref/format/JSON.html. But, as discussed in the comments, the following seems to be a more robust solution as of Mathematica 10.4.1:
WARNING: This involves doing an eval (ToExpression) so don't use this to parse strings from untrusted sources.
First, a really quick-and-dirty partial solution to JSON parsing would be this:
ToExpression[StringReplace[json, {"["->"{", "]"->"}", ":"->"->"}]]
I.e., just replace square brackets with curly braces and colons with arrows, then eval it.
All that remains is to not do those substitutions inside of strings.
(Also need a few more substitutions for null, true, false, and scientific notation.)
There's probably a more elegant solution to the not-within-strings problem, but the first thing to come to mind is to do substitutions like "{"->"(*MAGICSTRING*){" and then, after the eval (when comments outside of strings will have disappeared), reverse those substitutions.
(PS: Coming back to this later, I'm actually pretty pleased with the cleverness of that, and it seems to be perfectly robust. Magic strings FTW!)
That's slightly easier said than done but the following JSON parser seems to work:
cat = StringJoin@@(ToString/@{##})&; (* Like sprintf/strout in C/C++. *)
eval = ToExpression; (* Mathematica function names are too verbose! *)
parseJSON[json_String] := With[{tr = {"[" -> "(*_MAGIC__[__*){",
"]" -> "(*_MAGIC__]__*)}",
":" -> "(*_MAGIC__:__*)->",
"true" -> "(*_MAGIC__t__*)True",
"false" -> "(*_MAGIC__f__*)False",
"null" -> "(*_MAGIC__n__*)Null",
"e" -> "(*_MAGIC__e__*)*10^",
"E" -> "(*_MAGIC__E__*)*10^"}},
eval@StringReplace[cat@FullForm@eval[StringReplace[json, tr]], Reverse/@tr]]
(cat and eval are convenience functions. Simply cat = ToString would work in this case but I like this more general version that concatenates all its arguments into a string.).
Finally, here's a function to generate JSON (which does need the more general cat, as well as another utility function for displaying numbers in a JSON-appropriate way):
re = RegularExpression;
jnum[x_] := StringReplace[
ToString@NumberForm[N@x, ExponentFunction->(Null&)], re@"\\.$"->""]
genJSON[a_ -> b_] := genJSON[a] <> ":" <> genJSON[b]
genJSON[{x__Rule}] := "{" <> cat @@ Riffle[genJSON /@ {x}, ", "] <> "}"
genJSON[{x___}] := "[" <> cat @@ Riffle[genJSON /@ {x}, ", "] <> "]"
genJSON[Null] := "null"
genJSON[True] := "true"
genJSON[False] := "false"
genJSON[x_] := jnum[x] /; NumberQ[x]
genJSON[x_] := "\"" <> StringReplace[cat[x], "\""->"\\\""] <> "\""
A:
As of Mathematica 8, JSON is a built-in format supporting both Import and Export.
Q:
Is there any way of testing similar properties?
Let's assume I have a class with some similar properties:
public string First { get; set; }
public string Second { get; set; }
public string Third { get; set; }
I want to test them in the same way in my tests... So I write:
[Test]
public void TestFirst()
{
// Asserting strings here
}
Is there a way to avoid creating three tests (one for First, one for Second, and one for Third)?
I'm looking for something like [Values(First, Second, Third)], so i can then write one test that will iterate through the properties.
Cheers, and thanks in advance :)
A:
Thanks all for your answers and help. Learned plenty of things.
Here's what I've ended up doing. I've used reflection to get all the string properties, and then set to a value, check value is set, set to null, check to see it returns an empty string (logic in property's getter).
[Test]
public void Test_AllStringProperties()
{
// Linq query to get a list containing all string properties
var string_props= (from prop in bkvm.GetType()
.GetProperties(BindingFlags.Public | BindingFlags.Instance)
where
prop.PropertyType == typeof(string) &&
prop.CanWrite && prop.CanRead
select prop).ToList();
string_props.ForEach(p =>{
// Set value of property to a different string
string set_val = string.Format("Setting [{0}] to: \"Testing string\".", p.Name);
p.SetValue(bkvm, "Testing string", null);
Debug.WriteLine(set_val);
// Assert it was set correctly
Assert.AreEqual("Testing string", p.GetValue(bkvm, null));
// Set property to null
p.SetValue(bkvm,null,null);
set_val = string.Format("Setting [{0}] to null. Should yield an empty string.", p.Name);
Debug.WriteLine(set_val);
// Assert it returns an empty string.
Assert.AreEqual(string.Empty,p.GetValue(bkvm, null));
}
);
}
This way I don't need to worry when someone adds a property, since it'll be checked automatically without me needing to update the test code (as you might guess, not everybody updates or even writes tests :)
Any comments on this solution will be welcomed.
Q:
stop other users to browse to some pages of my website
On my website I use Ajax in many places. The problem is that the Ajax endpoint, let's say some_ajax_file.php, is visible in the source code. I don't want users to be able to type this URL directly and open the page; if they do, they should be redirected to another page.
I tried following code on that ajax page:
function curPageName() {
return substr($_SERVER["SCRIPT_NAME"],strrpos($_SERVER["SCRIPT_NAME"],"/")+1);
}
$cur_page=curPageName();
and checked it
if($cur_page=="some_ajax_file.php")
//then redirect...
It works if they type some_ajax_file.php in the URL, but the problem is that the Ajax calls that use some_ajax_file.php no longer work either. Please help, I'm stuck.
A:
You can check for $_SERVER['HTTP_X_REQUESTED_WITH']:
if (isset($_SERVER['HTTP_X_REQUESTED_WITH']) && strtolower($_SERVER['HTTP_X_REQUESTED_WITH']) === 'xmlhttprequest')
{
// I'm AJAX!
}
else
{
    header('Location: index.php'); // redirect non-Ajax requests (destination page is just an example)
die();
}
Q:
Lucene locking exceptions
I'm load testing a webservice which writes to a lucene index. If I make the same call repeatedly I get a
org.apache.lucene.store.LockObtainFailedException:
I assume this is because I'm trying to write to an index that is already locked by another thread, and my thread times out waiting for the lock.
My question is, what is the best way to solve this problem? Do I increase the waiting time or add the write requests to a queue?
Please advise, thanks.
A:
Why do you have multiple writers? IndexWriter is inherently thread-safe; you should have all your threads accessing the same writer. This will get rid of your locking issues.
Q:
NPOI and setCellFormula(): decimal-separator is cutoff
I've a serious problem copying this formula with NPOI 1.2.5 from one cell to another with C#:
The original cell contain this:
=IF(H21>(H23*0.9997);IF(H21<(H23*1.0003);"OK";"Errore");"Errore")
The resulting cell reports exactly this formula, but with the decimal separator stripped. So I get this:
=IF(H21>(H23*9997);IF(H21<(H23*10003);"OK";"Errore");"Errore")
This is my debugger view right after the setFormula():
Any help would be greatly appreciated.
A:
This issue has been identified as a bug by the developer. We can't do much more than keep an eye on it and wait for a fixed release.
Q:
Is there ANY cross-platform way of validating xml against an xsd in javascript?
As far as I can tell, the only way of doing it is to use the Microsoft DOM object, but as far as I'm aware this isn't universally available, if you're browsing with Firefox on Linux for example.
For reasons of security and minimizing network traffic I can't pass the xml to an external tool to validate (much as I wish I could). Is there any way of getting javascript to do this regardless of the browser/platform being used?
A:
Might be a tad late for you but maybe it'll help future searchers:
http://syssgx.github.io/xml.js/
A:
For browsers that provide it, you can use ActiveX and MSXML. This blog provides a tutorial on using it to do validation.
For Mozilla there is SchemaValidation developed as part of the XForms extension.
Beyond that, there was an SO user asking about his own validator. His question and information may be a place to start if you end up going that route.
See also the javascript and xsd tags on SO when used together.
However I'd suggest that you may want to look into an alternative - validating server-side, perhaps, or checking business logic by using XSLT to transform your XML and thereby prove that it meets your needs.
A:
Ok, after a fair amount of research, it seems the simple answer to this one is 'no', unless I write my own validator in javascript.
Q:
MySQL select distinct doesn't work
I have a database with 1 table with the following rows:
id name date
-----------------------
1 Mike 2012-04-21
2 Mike 2012-04-25
3 Jack 2012-03-21
4 Jack 2012-02-12
I want to extract only distinct values, so that I will only get Mike and Jack once.
I have this code for a search script:
SELECT DISTINCT name FROM table WHERE name LIKE '%$string%' ORDER BY id DESC
But it doesn't work. It outputs Mike, Mike, Jack, Jack.
Why?
A:
Because of the ORDER BY id DESC clause, the query is treated rather as if it was written:
SELECT DISTINCT name, id
FROM table
ORDER BY id DESC;
except that the id columns are not returned to the user (you). The result set has to include the id to be able to order by it. Obviously, this result set has four rows, so that's what is returned. (Moral: don't order by hidden columns — unless you know what it is going to do to your query.)
Try:
SELECT DISTINCT name
FROM table
ORDER BY name;
(with or without DESC according to whim). That will return just the two rows.
If you need to know an id for each name, consider:
SELECT name, MIN(id)
FROM table
GROUP BY name
ORDER BY MIN(id) DESC;
You could use MAX to equally good effect.
All of this applies to all SQL databases, including MySQL. MySQL has some rules which allow you to omit GROUP BY clauses with somewhat non-deterministic results. I recommend against exploiting the feature.
For a long time (maybe even now) the SQL standard did not allow you to order by columns that were not in the select-list, precisely to avoid confusions such as this. When the result set does not include the ordering data, the ordering of the result set is called 'essential ordering'; if the ordering columns all appear in the result set, it is 'inessential ordering' because you have enough data to order the data yourself.
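To see the difference concretely, here is the same data and the two corrected queries run against an in-memory SQLite database (SQLite stands in for MySQL here; the behavior of these particular queries is the same in standard SQL):

```python
import sqlite3

# Rebuild the question's table in an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, name TEXT, date TEXT)")
conn.executemany(
    "INSERT INTO t VALUES (?, ?, ?)",
    [(1, "Mike", "2012-04-21"), (2, "Mike", "2012-04-25"),
     (3, "Jack", "2012-03-21"), (4, "Jack", "2012-02-12")],
)

# DISTINCT works as expected when you order by a selected column.
distinct_names = conn.execute(
    "SELECT DISTINCT name FROM t ORDER BY name"
).fetchall()
print(distinct_names)  # [('Jack',), ('Mike',)]

# One row per name, keeping an id, ordered by that id descending.
rows = conn.execute(
    "SELECT name, MIN(id) FROM t GROUP BY name ORDER BY MIN(id) DESC"
).fetchall()
print(rows)  # [('Jack', 3), ('Mike', 1)]
```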
Q:
How to execute ng-xi18n without TypeError: Cannot read property 'create' of undefined?
I want to make use of the internationalization (i18n) support of Angular 2 in my Ionic 2 project. Unfortunately I can't create a translation source file with the ng-xi18n tool when trying the following command:
./node_modules/.bin/ng-xi18n
Following TypeError appears:
TypeError: Cannot read property 'create' of undefined
at Function.Extractor.create (/Users/user/Development/app/node_modules/@angular/compiler-cli/src/extractor.js:69:45)
at extract (/Users/user/Development/app/node_modules/@angular/compiler-cli/src/extract_i18n.js:7:34)
at Object.main (/Users/user/Development/app/node_modules/@angular/tsc-wrapped/src/main.js:47:16)
at Object.<anonymous> (/Users/user/Development/app/node_modules/@angular/compiler-cli/src/extract_i18n.js:14:9)
at Module._compile (module.js:570:32)
at Object.Module._extensions..js (module.js:579:10)
at Module.load (module.js:487:32)
at tryModuleLoad (module.js:446:12)
at Function.Module._load (module.js:438:3)
at Module.runMain (module.js:604:10)
Extraction failed
I tried an upgrade of my packages without success. Here my dependencies in the file package.json
"dependencies": {
"@angular/common": "2.0.0-rc.4",
"@angular/compiler": "2.0.0-rc.4",
"@angular/compiler-cli": "^2.4.0",
"@angular/core": "2.0.0-rc.4",
"@angular/forms": "0.2.0",
"@angular/http": "2.0.0-rc.4",
"@angular/platform-browser": "2.0.0-rc.4",
"@angular/platform-browser-dynamic": "2.0.0-rc.4",
"@angular/platform-server": "^2.4.0",
"es6-shim": "0.35.1",
"ionic-angular": "2.0.0-beta.11",
"ionic-native": "1.3.10",
"ionicons": "3.0.0",
"reflect-metadata": "0.1.8",
"rxjs": "5.0.0-beta.6",
"typescript": "^2.1.4",
"zone.js": "0.6.12"
},
Maybe someone has solved this issue in the past. Thanks in advance for any answers.
A:
According to Angular.io, you simply need to specify a translation format (do not rely on defaults).
For example:
./node_modules/.bin/ng-xi18n --i18nFormat xlf
You should get a message.xlf file.
A:
To answer my own question: It was an issue with the dependencies. Rewriting the package.json solves this.
"dependencies": {
"@angular/common": "2.2.1",
"@angular/compiler": "2.2.1",
"@angular/compiler-cli": "2.2.1",
"@angular/core": "2.2.1",
"@angular/forms": "2.2.1",
"@angular/http": "2.2.1",
"@angular/platform-browser": "2.2.1",
"@angular/platform-browser-dynamic": "2.2.1",
"@angular/platform-server": "2.2.1",
"ionic-angular": "^2.0.0-rc.4",
"ionic-native": "^2.2.12",
"ionicons": "^3.0.0",
"rxjs": "5.0.0-beta.12",
"typescript": "^2.1.4",
"zone.js": "^0.6.21"
},
Maybe only a short-term solution, until the next error appears.
Q:
Combustion chamber question on topic?
What Constitutes A Combustion Chamber?
We don't 'do' fuel questions, but what about this one?
Should it be closed or is it 'innocent' enough?
I lean towards accepting it.
Related meta: Why can't I ask my question about amateur space projects and development?
A:
This question is about how a highly space-related technology works. We answer such questions all the time.
We close questions about how to home-make stuff, most importantly fuel. The question does not ask how you would make a combustion chamber at home. It asks how one works in principle.
I do not see any reason why that question should be closed.
Q:
Interactive 2D map in Unity
I am pretty new to Unity but I have been able to find my way around. However, I have encountered a problem and no matter how much I google it I can't seem to find a solution that works. I am probably just doing something silly. Any help would be appreciated. I posted to the Unity forum but no one is responding so I figured I would give Stack Overflow a chance too.
So what I am trying to do is to create a 2D map in Unity. This map will be used to navigate through a 3D world. The map must be clickable, because I will start with a world map and then, based on the user's click, the map needs to change to that country's map. I have talked to different people, and the way I want to go is with shapefiles, since they already contain the information about where the country and state boundaries are. In addition, I know I will have to work with shapefiles in the future, so I figured I would just get some practice with them now. I wanted to start small, so I downloaded a zip file that contained the shapefile for the United States. I downloaded QGIS per a forum's instructions and opened the shapefile in that software. In QGIS I was able to highlight individual states, as shown in the photo below
Now I want this same functionality in Unity, except I don't just want to highlight it, I also want it to be clickable and to be able to identify that state as Montana.
So from here I was able to convert this file to a DXF and then convert that to an FBX. But after that I am lost; I am not sure I even had to do those conversions.
If anyone could help me in this endeavor I would great appreciate it! Again I am new to Unity so if you could keep that in mind I would greatly appreciate it.
A:
The answer below describes how to accomplish that. This was done with just 8 states for demonstration purposes.
There are four steps, though each step may contain sub-steps: create the model, export it and import it into Unity, add a Mesh Collider to each state, then detect clicks with OnPointerDown and change the color of the clicked GameObject.
You need Maya 2016 and Unity 5.5 for this.
First thing to note is that Maya has a simple menu and option menu.
If I say "(option menu)", I am talking about the box that is next to the menu on the right side. Take a look at the image below for the difference between menu and option menu:
Make sure to check if any of instruction below has "(option menu)" in it.
1.Creating Image Reference for modeling the states.
A.Get reference map image.
Find a black and white image reference of the U.S. states. I will be using this one so download it.
B.Make Image Plane with the reference file.
Open Maya, create a new project, and import the state image you downloaded.
To do this, go to Create --> Free Image Plane.
Under Image Plane Attribute, Select the icon in the Image Name then put the path of the image map reference you downloaded there.
C.Switch to Front view.
Select the Image Plane, position your mouse in the middle of the view then press the SPACE key on the keyboard. You will see four camera views. Move your mouse to the left-bottom view (Front View) then press the SPACE key again. Maya will switch to Front View. You will be working in Front View.
Now, Press F to zoom into the Image Plane.
D.Put the Image Plane in a layer then lock it.
Once the camera is zoomed in Image Plane, select the Image Plane again if not selected then click on Channel Box/Layer Editor.
Now, click on Layers --> Create Layer from Selected. There are three checkboxes on the created layer.
Continue to click on the third (last) checkbox until it says "R". The "R" will lock the Image Plane so that it can't be selected or interfere while modeling.
2.Creating each state Model.
A.Create curve
Go to Create --> Curve Tools --> Pencil Curve Tool
Draw each state one at a time by holding down the left mouse button and tracing around the chosen state. Leave a little space between the beginning and end of the line.
I will draw Nebraska as an Example.
B.Close the curve.
Select the curve then go to Curves --> Open/Close
C.Create a Nurbs Plane to cover the whole shape of curve you drew.
Select the curve then go to Modify --> Center Pivot
You will use this curve to cut out its shape from the Nurbs Plane.
Go to Create --> NURBS Primitives then make sure that Interactive Creation and Exit On Completion are both checked/enabled.
Go to Create --> NURBS Primitives --> Plane.
Hold down the Left mouse button then drag and move the mouse to create a Plane. Make sure that the size of the Plane covers the shape of the state drawn in step 2A.
D.Project the curve to the surface
Select both the Plane created in step 2C and the curve created in step 2A.
You do that by holding down the left mouse button and drag the mouse cursor over both of them then release.
Both Objects should now be selected.
Go to Surfaces --> Project Curve on Surface
E.Trim the selected object.
Select the Plane from Step D.
Go to Surfaces --> Trim Tool
Click inside(middle) of the curve then press Enter.
The shape of the curve should now be cut out of the Plane.
F.Delete History then Center the pivot.
Select everything created in this process(Left click and hold then move mouse around everything then release).
Go to Edit --> Delete by Type --> History.
Go to Modify --> Center Pivot
G.Convert the NURBS Plane to a polygon, because Unity and other game engines use polygons.
Select the Plane from Step E.
Go to Modify --> Convert -->NURBS To Polygons (option menu)
Use the Settings Below:
Type: Quads
Tessellation method: Standard fit
Chord height ratio: 0.54
Fractional tolerance: 0.01
Minimal Edge length: 0.001
3D delta: 0.0461
Click Apply.
Now, you have a polygon, rename it to the appropriate state name(Nebraska).
H.Delete Objects behind
With the polygon still selected.
Go to Modify --> Freeze Transformations
Go to Modify --> Center Pivot
Move the polygon away. After this, delete every other object (the NURBS surface and the curve) behind it.
Select the polygon again then go to Modify --> Reset Transformations. This will bring the polygon back to its original position.
Go to Modify --> Center Pivot
I.Add Material the state created.
Go to the Rendering tab.
Click on the Round Icon which is a Blinn material. It will create a Blinn Material. Name it "State_Material". If you have already created "State_Material", you don't have to create a new one.
Go to Rendering Mode.
Select the polygon.
Go to Lighting/Shading --> Assign Existing Material--> State_Material.
The goal is to assign one material (State_Material) to every state polygon. This will prevent having to export 50 materials to Unity.
J.Automatically Map the UV.
Go to UV --> Automatic
K.Delete History then Center the pivot of the converted Object.
Go to Edit --> Delete by Type --> History.
Go to Modify --> Center Pivot
Done! Jump back to 2.Creating each state Model. and do the same for the rest of the states.
3.Export as FBX.
A.Export as Fbx.
Select all the States. I have 8 in my scene.
Go to File --> Export Selection...(option menu).
Change File type to FBX export.
Click the Export Selection Button
Choose the Directory and file name then click Export Selection again.
4.Using Maps in Unity.
A.Import the Fbx into Unity.
Open Unity, go to Assets --> Import New Asset...
Choose the directory and FBX file saved through Maya.
B.Attach Mesh collider to each State/Plane.
Drag the Map to the Hierarchy.
Select all the states/models then go to Components --> Physics --> Mesh Collider.
C.Position the model to camera view.
I had to rotate the object 180 degrees on the y-axis, then scale the x and y axes to 100. If you followed this tutorial exactly, with the same image reference, you will have to rotate and scale it the same way.
D.Setup Event System
Go to GameObject --> Create Empty then name it "EventSystem".
Go to Component --> Event --> Event System.
Go to Component --> Event --> Standalone Input Module.
Select the "Main Camera" and go to Component --> Event --> Physics Raycaster. Note: Physics Raycaster, not Physics 2D Raycaster.
E.Attach test Scripts
Go to GameObject --> Create Empty then name it "MapManager".
Select the MapManager GameObject and attach the MapManager script to it.
Select all the states/Model except for the parent Object then attach MapClickDetector script to all of them. Doing it once is easier.
Description of the Script:
The MapClickDetector script detects mouse clicks, mouse-over, and other mouse actions, then forwards them to the MapManager script, where you can do whatever you want. It also passes along the map GameObject that was clicked, so that you can get the name of the state.
To detect a click, IPointerDownHandler is implemented and OnPointerDown is used to detect which state on the map is clicked. You could also use a raycast, but IPointerDownHandler is better since it avoids conflicts with the UI.
MapManager script:
using UnityEngine;
public class MapManager : MonoBehaviour
{
Color normalColor = Color.red;
Color mouseDownColor = Color.green;
Color mouseEnterColor = Color.yellow;
// Use this for initialization
void Start()
{
}
// Update is called once per frame
void Update()
{
}
public void mapclick(GameObject objClicked)
{
Debug.Log("Clicked: " + objClicked.name);
}
public void mapMouseDown(GameObject objClicked)
{
Debug.Log("Pointer Down: " + objClicked.name);
MeshRenderer mr = objClicked.GetComponent<MeshRenderer>();
mr.material.color = mouseDownColor;
}
public void mapMouseUp(GameObject objClicked)
{
Debug.Log("Pointer Up: " + objClicked.name);
MeshRenderer mr = objClicked.GetComponent<MeshRenderer>();
mr.material.color = normalColor;
}
public void mapMouseEnter(GameObject objClicked)
{
Debug.Log("Pointer Enter: " + objClicked.name);
MeshRenderer mr = objClicked.GetComponent<MeshRenderer>();
mr.material.color = mouseEnterColor;
}
public void mapMouseExit(GameObject objClicked)
{
Debug.Log("Pointer Exit: " + objClicked.name);
MeshRenderer mr = objClicked.GetComponent<MeshRenderer>();
mr.material.color = normalColor;
}
}
MapClickDetector script:
using UnityEngine;
using UnityEngine.EventSystems;
public class MapClickDetector : MonoBehaviour, IPointerClickHandler, IPointerDownHandler,
IPointerUpHandler, IPointerEnterHandler, IPointerExitHandler
{
MapManager mapManager;
void Start()
{
addPhysicsRaycaster();
mapManager = GameObject.Find("MapManager").GetComponent<MapManager>();
}
void addPhysicsRaycaster()
{
PhysicsRaycaster physicsRaycaster = GameObject.FindObjectOfType<PhysicsRaycaster>();
if (physicsRaycaster == null)
{
Camera.main.gameObject.AddComponent<PhysicsRaycaster>();
}
}
public void OnPointerClick(PointerEventData eventData)
{
mapManager.mapclick(gameObject);
}
public void OnPointerDown(PointerEventData eventData)
{
mapManager.mapMouseDown(gameObject);
}
public void OnPointerUp(PointerEventData eventData)
{
mapManager.mapMouseUp(gameObject);
}
public void OnPointerEnter(PointerEventData eventData)
{
mapManager.mapMouseEnter(gameObject);
}
public void OnPointerExit(PointerEventData eventData)
{
mapManager.mapMouseExit(gameObject);
}
}
Q:
Any good client-server data sync frameworks available for iPhone?
I'm just getting into the client-server data sync stage of my iPhone app project, and have managed to get my CoreData data model loading on both the iPhone client and my TurboGears server (which is good). I'm now beginning to tackle the problem of sync'ing data between the server and multiple clients, and while I could roll my own, this seems like one of those problems that is quite general and therefore there should be frameworks or libraries out there that provide a good deal of the functionality.
Does anyone know of one that might be applicable to this environment (e.g. Objective-C on iPhone, pyobjc / Python on the server)? If not, does anyone know of a design pattern or generally agreed-upon approach for this stuff that would be a good road to take for a self-implementation? I couldn't find a generally accepted term for this problem beyond "data synchronization" or "remote object persistence", neither of which turned up much useful on Google.
I did come across the Funambol framework which looks like it provides this exact type of functionality, however, it is C++ / Java based and therefore seems like it might not be a good fit for the specific languages in my project.
Any help much appreciated.
A:
Since you are using TurboGears already, take a look at the RestController documentation. Using RESTful services has become a widely adopted architecture with many implementations for both clients and servers. Matt Gemmell's MGTwitterEngine is a good example of the client implementation of a specific API, Twitter.
Q:
Under what conditions can I access the HTML-DOM of a child IFrame from the hosting website?
What can cause my website to not have access to a child IFrame's DOM via Javascript? Are there cross-domain restrictions in place? Does HTTPS play a role?
A:
You can only access an iframe's DOM if it comes from the same origin as the hosting page. If you are hosting www.mysite.com and the inserted iframe is from www.yahoo.com, you cannot access it; trying to do so raises an "access denied" JavaScript error. This is one of the browser's defenses against cross-site scripting, I believe. Note that the scheme is part of the origin, so HTTPS does play a role: an https:// page cannot access the DOM of an http:// iframe, even on the same host.
Q:
An expression subtly different from the one in Sherman-Morrison formula
Let $0\ne x\in\mathbb{R},\,\mathbf{y} \in \mathbb{R}^{n},\,\mathbf{1}=(1,1, \cdots, 1)^{T} \in \mathbb{R}^{n}$ and assume that $\Sigma=x I_{n}+\mathbf{y} \mathbf{1}^{T}+\mathbf{1} \mathbf{y}^{T}$ is positive definite. Prove that
$$
\Sigma^{-1}-\frac{\Sigma^{-1} \mathbf{1 1}^{T} \Sigma^{-1}}{\mathbf{1}^{T} \Sigma^{-1} \mathbf{1}}=\frac{1}{x}\left(I_{n}-\frac{1}{n} \mathbf{1 1}^{T}\right).
$$
A:
With the Sherman-Morrison identity,
$$
\Sigma^{-1} - \frac{\Sigma^{-1}\mathbf 1\mathbf 1^T\Sigma^{-1}}{\mathbf 1^T\Sigma^{-1} \mathbf 1} =
\lim_{m \to \infty}\left(
\Sigma^{-1} - \frac{\Sigma^{-1}(m\mathbf 1)(m\mathbf 1)^T\Sigma^{-1}}{1 + (m\mathbf 1)^T\Sigma^{-1} (m\mathbf 1)}\right)
\\ =
\lim_{m \to \infty}(\Sigma + (m \mathbf 1)(m \mathbf 1)^T)^{-1}.
$$
On the other hand, we can write
$$
\Sigma(m) := \Sigma + (m \mathbf 1)(m \mathbf 1)^T = x I_n + \pmatrix{\mathbf 1 & \mathbf y}
\pmatrix{m^2 & 1\\1 & 0}
\pmatrix{\mathbf 1 & \mathbf y}^T.
$$
To show that $\Sigma(m)^{-1} \to \frac 1x (I - \frac{11^T}{n})$, it suffices to block-diagonalize $\Sigma(m)$. We see that $\Sigma(m) \mathbf z = x\mathbf z$ for all $\mathbf z$ perpendicular to both $\mathbf 1$ and $\mathbf y$. On the other hand, let $v_1 = \mathbf 1/\sqrt{n}$, and let $v_2$ be the unit vector along the component of $\mathbf y$ orthogonal to $\mathbf 1$. We have
$$
v_1^T\pmatrix{\mathbf 1 & \mathbf y}
\pmatrix{m^2 & 1\\1 & 0}
\pmatrix{\mathbf 1 & \mathbf y}^Tv_1 = m^2n + 2\sqrt{n}\, v_1^T\mathbf y\\
$$
$$
v_1^T\pmatrix{\mathbf 1 & \mathbf y}
\pmatrix{m^2 & 1\\1 & 0}
\pmatrix{\mathbf 1 & \mathbf y}^Tv_2 = \sqrt{n}\, v_2^Ty.
$$
$$
v_2^T\pmatrix{\mathbf 1 & \mathbf y}
\pmatrix{m^2 & 1\\1 & 0}
\pmatrix{\mathbf 1 & \mathbf y}^Tv_2 =
\pmatrix{0 & v_2^Ty} \pmatrix{m^2 & 1\\1 & 0}\pmatrix{0 \\ v_2^Ty} = 0
$$
So, relative to an orthonormal basis that extends $v_1,v_2$, the matrix of $\Sigma(m)$ is
$$
x I_n + \pmatrix{m^2n + 2\sqrt{n}\, v_1^T\mathbf y & \sqrt{n}\, v_2^Ty\\
\sqrt{n}\, v_2^Ty & 0} \oplus 0_{(n-2) \times (n-2)}.
$$
The inverse of this matrix is
$$
\pmatrix{x + m^2n + 2\sqrt{n}\, v_1^T\mathbf y & \sqrt{n}\, v_2^Ty\\
\sqrt{n}\, v_2^Ty & x}^{-1} \oplus \frac 1x I = \\
\frac 1{xm^2 n + O(1)}\pmatrix{x & -\sqrt{n}\,v_2^Ty\\
-\sqrt{n}\, v_2^Ty & x + m^2n + 2\sqrt{n}\, v_1^T\mathbf y} \oplus \frac 1x I.
$$
As $m \to \infty$, this approaches
$$
\pmatrix{0 & 0\\
0 & 1/x} \oplus \frac 1x I,
$$
which is the matrix of $\frac 1x\left(I - \frac 1n \mathbf 1\mathbf 1^T\right)$ relative to the new basis. Thus, we have the desired conclusion.
Alternatively, it suffices to show that $\Sigma(m)^{-1} \mathbf z \to \frac 1x\left(I - \frac{\mathbf 1\mathbf 1^T}{n}\right) \mathbf z$ for all $\mathbf z$. In particular,
If $\mathbf z \perp \mathbf 1$, then $\Sigma(m)^{-1} \mathbf z \to \frac 1x \mathbf z$, and
$\Sigma(m)^{-1} \mathbf 1 \to 0$.
Alternatively, with the Woodbury identity, we have
$$
(x I_n)^{-1} - \Sigma(m)^{-1} =
\frac 1{x^2} \pmatrix{\mathbf 1 & \mathbf y}\left(\pmatrix{m^2&1\\1&0}^{-1} + \frac 1x\pmatrix{n & \mathbf 1^T\mathbf y\\ \mathbf 1^T\mathbf y & \mathbf y^T\mathbf y}\right)^{-1}\pmatrix{\mathbf 1 & \mathbf y}^T.
$$
It would therefore suffice to compute the limit of the middle term
$$
\left(\pmatrix{m^2&1\\1&0}^{-1} + \frac 1x\pmatrix{n & \mathbf 1^T\mathbf y\\ \mathbf 1^T\mathbf y & \mathbf y^T\mathbf y}\right)^{-1}
$$
as $m \to \infty$, and show that $(x I_n)^{-1} - \Sigma(m)^{-1} \to \frac 1{xn}\mathbf 1\mathbf 1^T$.
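As a quick numerical sanity check of the identity (an illustration, not part of the proof; the values of $x$, $n$, and $\mathbf y$ are chosen arbitrarily so that $\Sigma$ is positive definite):

```python
import numpy as np

# Numerical check (not a proof) of the claimed identity:
#   Sigma^{-1} - (Sigma^{-1} 1 1^T Sigma^{-1}) / (1^T Sigma^{-1} 1)
#     = (1/x) * (I - 1 1^T / n)
# for Sigma = x*I + y 1^T + 1 y^T positive definite.
rng = np.random.default_rng(0)
n, x = 4, 5.0
y = 0.1 * rng.random(n)  # small y keeps Sigma positive definite here
one = np.ones((n, 1))

Sigma = x * np.eye(n) + np.outer(y, np.ones(n)) + np.outer(np.ones(n), y)
assert np.all(np.linalg.eigvalsh(Sigma) > 0)  # positive definite

Sinv = np.linalg.inv(Sigma)
lhs = Sinv - (Sinv @ one @ one.T @ Sinv) / (one.T @ Sinv @ one).item()
rhs = (1.0 / x) * (np.eye(n) - one @ one.T / n)
assert np.allclose(lhs, rhs)
```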
Q:
Python 'float' object is not iterable error
Error message:
File "C:/Users/artisan/PycharmProjects/API connection/polo.py", line 12, in poloniexapi
    total = sum([int(num) for num in i["quoteVolume"]])
TypeError: 'float' object is not iterable
import requests
import json, requests
import _json
def poloniexapi(url):
response = requests.get(url)
json_obj = json.loads(response.text)
for i in json_obj:
print(i["quoteVolume"])
total = sum([int(num) for num in i["quoteVolume"]])
poloniexapi("https://poloniex.com/public?command=returnChartData&currencyPair=BTC_XMR&start=1405699200&end=9999999999&period=86400")
Appreciate all help :)
A:
for num in i['quoteVolume'] is trying to iterate over i['quoteVolume']. But this is just one number (the last number from the previous for loop), not a list. I think what you want is:
total = sum([int(i["quoteVolume"]) for i in json_obj])
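To see the fix in isolation, here is a minimal sketch with made-up data shaped like the Poloniex response (a list of dicts, each with a numeric "quoteVolume" field):

```python
# Hypothetical stand-in for json.loads(response.text): a list of candle dicts.
json_obj = [
    {"quoteVolume": 12.5},
    {"quoteVolume": 7.25},
    {"quoteVolume": 3.0},
]

# Iterate over the list of records, not over a single float.
total = sum(int(i["quoteVolume"]) for i in json_obj)
print(total)  # 12 + 7 + 3 = 22
```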
Q:
JSTL SQL Update rowCount?
Is it possible to count the rows affected by an SQL UPDATE statement inside JSTL?
I have tried to do this but it does not work.
A:
According to the JSTL TLD Docs, the var attribute specifies the name of the variable containing the row count, so as per your example
reqConfirmBooking != 0
should work.
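For reference, a minimal sketch of what that looks like in a JSP page (the data source, table, and parameter names here are made up for illustration): the var attribute of <sql:update> receives the affected-row count, which you can then test with <c:if>.

```jsp
<%@ taglib prefix="c"   uri="http://java.sun.com/jsp/jstl/core" %>
<%@ taglib prefix="sql" uri="http://java.sun.com/jsp/jstl/sql" %>

<%-- var="reqConfirmBooking" captures the number of rows the UPDATE affected --%>
<sql:update var="reqConfirmBooking" dataSource="${bookingDS}">
  UPDATE bookings SET confirmed = 1 WHERE id = ?
  <sql:param value="${param.bookingId}"/>
</sql:update>

<c:if test="${reqConfirmBooking != 0}">
  Booking confirmed.
</c:if>
```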
Q:
CartoDB storage and Mapviews
I have set up CartoDB locally on my server. I am curious about the free allowances (map views, tables, data storage) when using CartoDB from my own server for private views. Below is a snapshot of the CartoDB storage statistics after setting it up on my own server; it mentions that 10 GB of space and 10k map views are free to use. Are those 10k map views per month or per year? I would also like to know whether it is possible to store the data in an external source such as my own server and use the CartoDB API alone for customization, and whether that would be charged.
A:
If you have your own custom CartoDB installation, none of the limits of the CartoDB.com service apply.
In your screenshot I see that you are using a really old CartoDB version. The CartoDB.com service no longer charges per map view, nor does it limit the number of created tables. As for the storage quota, this is not a monthly limit; it's a general limit: in your account, you can store up to X MB of data.
If you are working in your own server, you can edit your own accounts settings in order to give yourself more quota or any other features.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
What is Economic Populism?
I saw an article this morning on NBC declaring that Economic Populism Will Be Front And Center for 2016 Elections. It goes into detail about why this will be such an important issue in the coming year.
It does not, however, do very much to accurately define what Economic Populism actually is, and my google search for an explanation of the term only brings up a similar article from a year ago, a Wiki article on Populism (but not specifically economic populism) and Huffington Post Articles Tagged As "Economic Populism".
It sounds like it might just be a buzzword without any real meaning, but I wouldn't know since this is the first I've heard of it. Is there an actual definition for the concept of "Economic Populism"? And if so, what is it?
A:
For economic populism as actual economists talk about it, you might want to see macroeconomic populism, usually as observed in Latin America.
Policies include: fiscal stimulus (as opposed to austerity), printing more money, and expanding deficit. This "emphasizes growth and income distribution and deemphasizes the risks of inflation and deficit finance".
There's debate (see this and this) over whether and when there's a right time for macroeconomic populism, but in general it's pretty risky and not a long term solution.
In the context of the 2016 election, "economic populism" refers to a general opposition to income inequality and globalization (i.e., trade agreements), as well as opposition to money in politics (per the effects of the Citizens United decision). Basically, the middle class is shrinking, jobs are disappearing overseas, and all the politicians are in the pockets of billionaires. Worrying, to say the least.
Economists don't really have a satisfying answer to these issues - they were a little late to the inequality question, they're firmly in favor of globalization, and money in politics is primarily a political issue, not an economic one.
So yes, economic populism as used in political writing is a buzzword with little grounding in actual economics. Probably the closest you get in economics is the work of Thomas Piketty on income inequality; he prescribes progressive taxation.
A:
There is a concept of economic populism, but with some points to take into account. First, what you call economic populism in Latin America is known as Economía Solidaria (solidarity economy), which refers to an economy based not on growing profits or increasing a company's value but on improving the quality of life through social activities in the community (concept taken from economiasolidaria.org). Later, both "populism" and "solidarity" came to be used to refer to the same thing.
The concept was first used in Brazil at the World Social Forum (2001), where this kind of economy was presented as one solution for closing the gap between social structures. The main institutions embodying the concept are cooperatives around the globe. In the internet world, the solidarity economy is practiced by most open source projects and by developers working on free software.
Paul Singer said about the solidarity economy:
"The solidarity economy has emerged as a way out of poverty and continues to do this."
There is also a good paper on economic populism, in Spanish, called "Economía Popular", in which the author explains the main characteristics of populism and its impact on the economy.
Spoiler alert: the two have several similarities, but there are differences in concepts and terminology.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Building an R package error with devtools::document
I am building R packages using devtools. I've built a package with some functions that I'd like to include. And I'd like to load the package and its documentation at startup. My package files are located at the location:
'~/global/Rcode/Startup Package'
My .Rprofile file looks like this:
.First <- function(){
library(devtools)
location <- '~/global/Rcode/Startup Package'
document(location)
}
However, when I open R, the functions from the package are loaded but the documentation is not.
If I run the same lines of code after startup myself, namely:
library(devtools)
location <- '~/global/Rcode/Startup Package'
document(location)
then everything works and the package correctly documents. This thus seems like a rather weird bug!
(As a partial fix I can run
install(location)
and treat it like a normal R package, and everything works fine; however, this takes time, and as I intend to update the package a lot, I do not really want to run this every time, especially as the devtools option should work.)
A:
Make sure utils is loaded before loading devtools; otherwise there's no help function for devtools to overwrite.
With .Rprofile:
.First = function(){
library(utils)
library(devtools)
document("./foo")
}
then R startup goes:
[stuff]
Type 'q()' to quit R.
Updating foo documentation
Loading foo
And help is devtools version:
> environment(help)
<environment: namespace:devtools>
Remove that library(utils) and you'll see the help function is the one in utils that won't find your package documentation.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Android - decodeBase64 crashes App
I have to encrypt a String, but the app doesn't reach the encrypt method; it crashes on load.
I'm using the Apache Commons Codec library.
private EditText txtPass = (EditText)findViewById(R.id.txtPass);
public String key = txtPass.getText().toString();
public byte[] key_Array = org.apache.commons.codec.binary.Base64.decodeBase64(key);
For some reason, the app crashes on the third line.
My logcat:
12-03 14:03:31.441 23420-23420/com.example.cristiano.automacao V/ActivityThread﹕ Class path: /data/app/com.example.cristiano.automacao-2.apk, JNI path: /data/data/com.example.cristiano.automacao/lib
12-03 14:03:31.591 23420-23420/com.example.cristiano.automacao W/dalvikvm﹕ VFY: unable to resolve static method 8974: Lorg/apache/commons/codec/binary/Base64;.decodeBase64 (Ljava/lang/String;)[B
12-03 14:03:31.601 23420-23420/com.example.cristiano.automacao W/dalvikvm﹕ VFY: unable to resolve static method 8984: Lorg/apache/commons/codec/binary/Base64;.encodeBase64String ([B)Ljava/lang/String;
12-03 14:03:31.611 23420-23420/com.example.cristiano.automacao W/dalvikvm﹕ threadid=1: thread exiting with uncaught exception (group=0x41c7b438)
Any clue about that?
Updated
I changed the code to this:
public static String key = "1234";
public static byte[] key_Array = decodeBase64(key);
But now I get another error:
java.lang.NoSuchMethodError: org.apache.commons.codec.binary.Base64.decodeBase64
A:
Try this:
// this is android.util.Base64, not the Apache Commons class
import android.util.Base64;

// decode data from Base64
private static byte[] decodeBase64(String dataToDecode)
{
    byte[] dataDecoded = Base64.decode(dataToDecode, Base64.DEFAULT);
    return dataDecoded;
}

// encode data in Base64
private static byte[] encodeBase64(byte[] dataToEncode)
{
    byte[] dataEncoded = Base64.encode(dataToEncode, Base64.DEFAULT);
    return dataEncoded;
}
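On a plain JVM the android.util.Base64 class above is unavailable, but the same round trip can be sketched with java.util.Base64 (standard since Java 8, and on Android since API 26); note the class-name clash with the Apache Commons version that caused the original NoSuchMethodError:

```java
import java.util.Base64;

public class Base64Demo {
    public static void main(String[] args) {
        String key = "1234";

        // encode the raw bytes and render the result as a string
        String encoded = Base64.getEncoder().encodeToString(key.getBytes());

        // decode back to the original bytes
        byte[] keyArray = Base64.getDecoder().decode(encoded);

        System.out.println(encoded);              // MTIzNA==
        System.out.println(new String(keyArray)); // 1234
    }
}
```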
|
{
"pile_set_name": "StackExchange"
}
|