Q:
My computer's not starting up without the Ubuntu usb
Possible Duplicate:
Can't boot without Flash Drive plugged in
I installed Ubuntu 12.04 LTS on my Dell Inspiron 1545 from a USB stick, but it does not boot when I remove the USB stick from the computer. It works when the USB stick is plugged in. I want Ubuntu to start without the USB stick. Can you please help me? Thanks.
A:
Sounds like grub has been installed to the usb.
From your booted Ubuntu system, open a terminal, then use the following commands to install grub to the internal drive :
sudo grub-install /dev/sdX
sudo update-grub
Replace sdX with the actual internal drive, which will probably be sda, but use Disk Utility to double-check if you are unsure.
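If you are unsure which device is the internal drive, listing the block devices first can help (lsblk ships with Ubuntu):
lsblk -d -o NAME,SIZE,MODEL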
Accidentally installed grub to usb
Q:
Comments in command-line Zsh
I switched quite recently from Bash to Zsh on Ubuntu and I'm quite happy about it. However, there is something I really miss and I did not find how to achieve the same thing.
In Bash, whenever I was typing a long command and noticed I had to run something else before, I just had to comment it out like in the following:
me@home> #mysuperlongcommand with some arguments
me@home> thecommandIhavetorunfirst #and then: then up up
me@home> #mysuperlongcommand with some arguments #I just need to uncomment it!
However, this quite recurrent situation is not as easy to address in zsh, given that #mysuperlongcommand will be run as such (resulting in: zsh: command not found: #mysuperlongcommand).
A:
Having just started trying out zsh, I ran into this problem too. You can do setopt interactivecomments to activate the bash-style comments.
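To make the option persistent, it can go in your ~/.zshrc (a minimal sketch):
# ~/.zshrc: allow '#' comments at the interactive prompt, like bash
setopt interactivecomments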
A:
I use
bindkey "^Q" push-input
From the zsh manual:
Push the entire current multiline construct onto the buffer stack and return to the top-level (PS1) prompt. If the current parser construct is only a single line, this is exactly like push-line. Next time the editor starts up or is popped with get-line, the construct will be popped off the top of the buffer stack and loaded into the editing buffer.
So it looks like this:
> long command
Ctrl+Q => long command disappears to the stack
> forgotten command
long command reappears from stack
> long command
Also, if you set the INTERACTIVE_COMMENTS option (setopt INTERACTIVE_COMMENTS), you will be able to use comments in interactive shells like you are used to.
A:
I find myself doing this often as well. What I do is cut the long command, execute the command that needs to go first and then paste the long command back in. This is easy: CTRL+U cuts the current command into a buffer, CTRL+Y pastes it. Works in zsh and bash.
Q:
How to install scikit-learn from github
According to forums, there is a bug somewhere in the scikit-learn library, and I saw that its GitHub repository has a more recent version of the library. I am hoping the bugs are fixed there, so I want to install scikit-learn from GitHub on my Ubuntu and Raspbian OS. I want to install the latest version, 0.18, of scikit-learn for Python 2.7. Can anybody help me? Thank you in advance.
A:
This is literally in the readme file on the scikit-learn GitHub page.
For the latest branch, i.e. 0.18.X, you have to do the following:
Clone the repo as in the link I have provided, or run the following:
git clone https://github.com/scikit-learn/scikit-learn.git
Check out the latest branch with git checkout origin/0.18.X.
You should now have the latest and greatest.
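Putting it together, the full sequence might look like this (a sketch; it assumes pip and the build dependencies such as numpy, scipy, cython and a C compiler are already installed):
git clone https://github.com/scikit-learn/scikit-learn.git
cd scikit-learn
git checkout origin/0.18.X
pip install --editable .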
Note that this branch may have functions that will break your current scripts that depend on sklearn.
Q:
Recognize Non-Digits
I've programmed a neural network for recognizing single digits which are pushed to my server. It worked pretty well, until customers started to push "empty digits".
At first I just iterated over them manually, checking for NOT WHITE. Now it gets even more complicated, as "dirty" blanks which have some noise in them are uploaded.
Additionally, some people started to push diagonal & horizontal lines or X's instead of writing 0 (zero).
I wonder how I am supposed to train a "pre" neural network which classifies these "not digits". In particular, I struggle to find a way of telling the zeroes apart from a noisy blank during training.
A:
Since you're using a neural network, what I'd suggest is to make it output the probability of the input being a given class, e.g. it may output digit 5, certainty 75%.
Once you have those probabilities, you can work on finding a "cutoff" value below which you'd consider the input as being just noise/empty.
I've linked above to a question about getting the classification probabilities out of a NN.
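As a minimal sketch of that cutoff idea (assuming the output layer is a softmax over the 10 digit classes; the threshold value is a placeholder to be tuned on held-out data):
import numpy as np

def classify_or_reject(probs, threshold=0.75):
    # probs: softmax output for one input, shape (10,)
    best = int(np.argmax(probs))
    if probs[best] < threshold:
        return None  # reject: likely noise, a blank, or a non-digit
    return best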
Q:
JShell /env command not allowing more than one JAR file in the classpath
Hi, I am using JDK 11 on the Windows 10 operating system, and the JShell version I am using is 11.0.1.
I am trying to execute various JShell commands and got stuck on the execution of the commands below.
I have a sample program which uses classes from more than one JAR file: Employee.jar and spring-context-5.1.3.jar.
After starting JShell, I use the command below to set the classpath, but it throws an error:
jshell> /env -class-path D:\JshellClassPath\Employee.jar:D:\JshellClassPath\spring-context-5.1.3.jar
| File 'D:\JshellClassPath\Employee.jar:D:\JshellClassPath\spring-context-5.1.3.jar' for '--class-path' is not found.
If I set only one JAR file and execute the above command it works fine, but why am I not able to set multiple JAR files with the /env command?
A:
On the Windows platform you have to use ; as the separator instead of :.
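So the command from the question becomes (same paths as above):
jshell> /env -class-path D:\JshellClassPath\Employee.jar;D:\JshellClassPath\spring-context-5.1.3.jar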
Q:
Query PHP json_encode result in JavaScript
I'm echoing json with php like so:
$jsonarray= array();
while($rs = mysqli_fetch_assoc($result)) {
$jsonarray["userName"]= $rs["userName"];
$jsonarray["UserGender"]= $rs["UserGender"];
$jsonarray["channel"]= $rs["channel"];
$jsonarray["channelName"]= $rs["channelName"];
$jsonarray["camonoff"]= $rs["camonoff"];
$jsonarray["rtccam"]= $rs["rtccam"];
$jsonarray["iswatching"]= $rs["iswatching"];
$jsonarray["iswatchingembed"] = $rs["iswatchingembed"];
$jsonarray["islisteningtoradio"]= $rs["islisteningtoradio"];
$jsonarray["statusmsg"] = $rs["statusmsg"];
$jsonarray["activity"]= $rs["activity"];
echo json_encode($jsonarray);
}
With an ajax call I get the string like:
$.get('lib/class/json.php', data, function(returnData) {
jsondata= returnData;
jsonupdateregion(jsondata);
});
I pass the received data to a function like:
function jsonupdateregion(jsondata) {
var regions = ["Lobby", "Noord-Brabant", "Groningen", "Friesland", "Gelderland", "Limburg", "Zeeland", "Overijssel", "Drenthe", "Noord-Holland", "Zuid-Holland", "Utrecht", "Belgie", "Duitsland"];
var i;
str= "";
for (i = 0; i < regions.length; i++) {
str += regions[i]
+ getCount(regions[i], jsondata);
}
console.log(str);
}
The above function has to call the following function for every region in the regions array and return the number of occurrences:
function getCount(regions, jsondata) {
var count = 0;
for (var i = 0; i < jsondata.length; i++) {
if (jsondata.channelName[i] == regions) {
count++;
}
}
return count;
}
The above results in an "Uncaught TypeError: Cannot read property '0' of undefined".
When I use JSON.parse on the data I get an error like: "Uncaught SyntaxError: Unexpected token {".
The php file itself sends a header with: "header('Content-Type: text/html; charset=utf-8');"
What am I doing wrong here?
When I use JSON.parse I get an error stating an unexpected token.
I've altered the query on the server and it's now definitely outputting valid JSON according to http://jsonlint.com/.
if(isset($test)){
$sql = 'SELECT
userName,
UserGender,
UserRegion,
channel,
channelName,
camonoff,
rtccam,
iswatching,
iswatchingembed,
islisteningtoradio,
statusmsg,
activity
FROM
users';
$result=mysqli_query($conn,$sql);
$json = array();
if(mysqli_num_rows($result)){
while($row=mysqli_fetch_assoc($result)){
$json[]=json_encode($row);
}
}
mysqli_close($mysqli);
echo json_encode($json);
}
UPDATE:
The fault was in the javascript:
Had to change:
for (i = 0; i < obj.length; i++) {
if (obj.channelName[i] == regions) {
count++;
}
TO:
for (i = 0; i < obj.length; i++) {
if (obj[i].channelName == regions) {
count++;
}
And in php revert back to echoing
echo json_encode($json);
A:
At first, try to give jQuery a hint that JSON is being sent, and see what it receives in your browser console
(http://api.jquery.com/jquery.get/):
$.get('lib/class/json.php', data, function(returnData) {
var jsondata = returnData;
console.log(jsondata);
jsonupdateregion(jsondata);
}, 'json');
It should output an object or just a string... possibly your PHP echoes some newlines or other crazy things before or after the answer.
What about the charset? Is your server maybe answering in iso-something? Then JS would fail decoding your JSON string if there are some special chars (ü, ß, ç).
Last thing
$jsonarray = array();
while($rs = mysqli_fetch_assoc($result)) {
$jsonarray[] = $rs; //add an element.. do not overwrite it
}
echo json_encode($jsonarray);
Q:
Hough circle detection accuracy very low
I am trying to detect a circular shape from an image which appears to have very good definition. I do realize that part of the circle is missing but from what I've read about the Hough transform it doesn't seem like that should cause the problem I'm experiencing.
Input:
Output:
Code:
// Read the image
Mat src = Highgui.imread("input.png");
// Convert it to gray
Mat src_gray = new Mat();
Imgproc.cvtColor(src, src_gray, Imgproc.COLOR_BGR2GRAY);
// Reduce the noise so we avoid false circle detection
//Imgproc.GaussianBlur( src_gray, src_gray, new Size(9, 9), 2, 2 );
Mat circles = new Mat();
/// Apply the Hough Transform to find the circles
Imgproc.HoughCircles(src_gray, circles, Imgproc.CV_HOUGH_GRADIENT, 1, 1, 160, 25, 0, 0);
// Draw the circles detected
for( int i = 0; i < circles.cols(); i++ ) {
double[] vCircle = circles.get(0, i);
Point center = new Point(vCircle[0], vCircle[1]);
int radius = (int) Math.round(vCircle[2]);
// circle center
Core.circle(src, center, 3, new Scalar(0, 255, 0), -1, 8, 0);
// circle outline
Core.circle(src, center, radius, new Scalar(0, 0, 255), 3, 8, 0);
}
// Save the visualized detection.
String filename = "output.png";
System.out.println(String.format("Writing %s", filename));
Highgui.imwrite(filename, src);
I have the Gaussian blur commented out because (counterintuitively) it was greatly increasing the number of equally inaccurate circles found.
Is there anything wrong with my input image that would cause Hough to not work as well as I expect? Are my parameters way off?
EDIT: first answer brought up a good point about the min/max radius hint for Hough. I resisted adding those parameters as the example image in this post is just one of thousands of images all with varying radii from ~20 to almost infinity.
A:
I've adjusted my RANSAC algorithm from this answer: Detect semi-circle in opencv
Idea:
choose randomly 3 points from your binary edge image
create a circle from those 3 points
test how "good" this circle is
if it is better than the previously best found circle in this image, remember it
loop 1-4 until some number of iterations is reached, then accept the best circle found
remove that accepted circle from the image
repeat 1-6 until you have found all circles
problems:
at the moment you must know how many circles you want to find in the image
tested only for that one image.
c++ code
result:
code:
inline void getCircle(cv::Point2f& p1,cv::Point2f& p2,cv::Point2f& p3, cv::Point2f& center, float& radius)
{
float x1 = p1.x;
float x2 = p2.x;
float x3 = p3.x;
float y1 = p1.y;
float y2 = p2.y;
float y3 = p3.y;
// PLEASE CHECK FOR TYPOS IN THE FORMULA :)
center.x = (x1*x1+y1*y1)*(y2-y3) + (x2*x2+y2*y2)*(y3-y1) + (x3*x3+y3*y3)*(y1-y2);
center.x /= ( 2*(x1*(y2-y3) - y1*(x2-x3) + x2*y3 - x3*y2) );
center.y = (x1*x1 + y1*y1)*(x3-x2) + (x2*x2+y2*y2)*(x1-x3) + (x3*x3 + y3*y3)*(x2-x1);
center.y /= ( 2*(x1*(y2-y3) - y1*(x2-x3) + x2*y3 - x3*y2) );
radius = sqrt((center.x-x1)*(center.x-x1) + (center.y-y1)*(center.y-y1));
}
std::vector<cv::Point2f> getPointPositions(cv::Mat binaryImage)
{
std::vector<cv::Point2f> pointPositions;
for(unsigned int y=0; y<binaryImage.rows; ++y)
{
//unsigned char* rowPtr = binaryImage.ptr<unsigned char>(y);
for(unsigned int x=0; x<binaryImage.cols; ++x)
{
//if(rowPtr[x] > 0) pointPositions.push_back(cv::Point2i(x,y));
if(binaryImage.at<unsigned char>(y,x) > 0) pointPositions.push_back(cv::Point2f(x,y));
}
}
return pointPositions;
}
float verifyCircle(cv::Mat dt, cv::Point2f center, float radius, std::vector<cv::Point2f> & inlierSet)
{
unsigned int counter = 0;
unsigned int inlier = 0;
float minInlierDist = 2.0f;
float maxInlierDistMax = 100.0f;
float maxInlierDist = radius/25.0f;
if(maxInlierDist<minInlierDist) maxInlierDist = minInlierDist;
if(maxInlierDist>maxInlierDistMax) maxInlierDist = maxInlierDistMax;
// choose samples along the circle and count inlier percentage
for(float t =0; t<2*3.14159265359f; t+= 0.05f)
{
counter++;
float cX = radius*cos(t) + center.x;
float cY = radius*sin(t) + center.y;
if(cX < dt.cols)
if(cX >= 0)
if(cY < dt.rows)
if(cY >= 0)
if(dt.at<float>(cY,cX) < maxInlierDist)
{
inlier++;
inlierSet.push_back(cv::Point2f(cX,cY));
}
}
return (float)inlier/float(counter);
}
float evaluateCircle(cv::Mat dt, cv::Point2f center, float radius)
{
float completeDistance = 0.0f;
int counter = 0;
float maxDist = 1.0f; //TODO: this might depend on the size of the circle!
float minStep = 0.001f;
// choose samples along the circle and count inlier percentage
//HERE IS THE TRICK that no minimum/maximum circle is used, the number of generated points along the circle depends on the radius.
// if this is too slow for you (e.g. too many points created for each circle), increase the step parameter, but only by factor so that it still depends on the radius
// the parameter step depends on the circle size, otherwise small circles will create more inlier on the circle
float step = 2*3.14159265359f / (6.0f * radius);
if(step < minStep) step = minStep; // TODO: find a good value here.
//for(float t =0; t<2*3.14159265359f; t+= 0.05f) // this one which doesnt depend on the radius, is much worse!
for(float t =0; t<2*3.14159265359f; t+= step)
{
float cX = radius*cos(t) + center.x;
float cY = radius*sin(t) + center.y;
if(cX < dt.cols)
if(cX >= 0)
if(cY < dt.rows)
if(cY >= 0)
if(dt.at<float>(cY,cX) <= maxDist)
{
completeDistance += dt.at<float>(cY,cX);
counter++;
}
}
return counter;
}
int main()
{
//RANSAC
cv::Mat color = cv::imread("HoughCirclesAccuracy.png");
// convert to grayscale
cv::Mat gray;
cv::cvtColor(color, gray, CV_RGB2GRAY);
// get binary image
cv::Mat mask = gray > 0;
unsigned int numberOfCirclesToDetect = 2; // TODO: if unknown, you'll have to find some nice criteria to stop finding more (semi-) circles
for(unsigned int j=0; j<numberOfCirclesToDetect; ++j)
{
std::vector<cv::Point2f> edgePositions;
edgePositions = getPointPositions(mask);
std::cout << "number of edge positions: " << edgePositions.size() << std::endl;
// create distance transform to efficiently evaluate distance to nearest edge
cv::Mat dt;
cv::distanceTransform(255-mask, dt,CV_DIST_L1, 3);
unsigned int nIterations = 0;
cv::Point2f bestCircleCenter;
float bestCircleRadius;
//float bestCVal = FLT_MAX;
float bestCVal = -1;
//float minCircleRadius = 20.0f; // TODO: if you have some knowledge about your image you might be able to adjust the minimum circle radius parameter.
float minCircleRadius = 0.0f;
//TODO: implement some more intelligent ransac without fixed number of iterations
for(unsigned int i=0; i<2000; ++i)
{
//RANSAC: randomly choose 3 point and create a circle:
//TODO: choose randomly but more intelligent,
//so that it is more likely to choose three points of a circle.
//For example if there are many small circles, it is unlikely to randomly choose 3 points of the same circle.
unsigned int idx1 = rand()%edgePositions.size();
unsigned int idx2 = rand()%edgePositions.size();
unsigned int idx3 = rand()%edgePositions.size();
// we need 3 different samples:
if(idx1 == idx2) continue;
if(idx1 == idx3) continue;
if(idx3 == idx2) continue;
// create circle from 3 points:
cv::Point2f center; float radius;
getCircle(edgePositions[idx1],edgePositions[idx2],edgePositions[idx3],center,radius);
if(radius < minCircleRadius)continue;
//verify or falsify the circle by inlier counting:
//float cPerc = verifyCircle(dt,center,radius, inlierSet);
float cVal = evaluateCircle(dt,center,radius);
if(cVal > bestCVal)
{
bestCVal = cVal;
bestCircleRadius = radius;
bestCircleCenter = center;
}
++nIterations;
}
std::cout << "current best circle: " << bestCircleCenter << " with radius: " << bestCircleRadius << " and nInlier " << bestCVal << std::endl;
cv::circle(color,bestCircleCenter,bestCircleRadius,cv::Scalar(0,0,255));
//TODO: hold and save the detected circle.
//TODO: instead of overwriting the mask with a drawn circle it might be better to hold and ignore detected circles and dont count new circles which are too close to the old one.
// in this current version the chosen radius to overwrite the mask is fixed and might remove parts of other circles too!
// update mask: remove the detected circle!
cv::circle(mask,bestCircleCenter, bestCircleRadius, 0, 10); // here the radius is fixed which isnt so nice.
}
cv::namedWindow("edges"); cv::imshow("edges", mask);
cv::namedWindow("color"); cv::imshow("color", color);
cv::imwrite("detectedCircles.png", color);
cv::waitKey(-1);
return 0;
}
A:
If you set the minRadius and maxRadius parameters properly, it'd give you good results.
For your image, I tried following parameters.
method - CV_HOUGH_GRADIENT
minDist - 100
dp - 1
param1 - 80
param2 - 10
minRadius - 250
maxRadius - 300
I got the following output
Note: I tried this in C++.
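Translated back into the asker's Java setup, the call might look like this (a sketch; the values are tuned to this one image and may need adjusting):
Imgproc.HoughCircles(src_gray, circles, Imgproc.CV_HOUGH_GRADIENT,
        1,    // dp
        100,  // minDist
        80,   // param1
        10,   // param2
        250,  // minRadius
        300); // maxRadius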
Q:
menu item button click then do imageswap in different location on webpage
I am trying to do something that seems relatively simple, I think. I have a webpage that uses a CSS-defined menu item list. When I click on one of the menu items, I'd like to change content in the middle of the page (in its own div container). If another menu item is clicked, I'd like the content to change again. The content area in the middle of the page is in its own div, so I think I'm just looking for code that would, on a button click, replace one item with another. I've looked at replaceWith from jQuery, but can't seem to see how that function would work in this situation. That is, from what I've read, it doesn't include a condition for when the menu item is clicked. I've also read through CSS styles and z-indexes, but can't seem to find what would work.
So, my question is: how can I change the content (a .gif in this situation) in its own div container to another .gif when a menu button item is clicked?
Here's my code for that portion:
<div id="nav_container">
<ul id="nav">
<li class="first active"><a href="/" class="index.ph">Home</a></li>
<li><a href="JPLCSubpages/aboutcoaching/about.html" class="about-coaching">About</a>
<ul class="about-coaching">
<li><div class="wicoaching"><a href="index.php" style="border-top:0px">What is Coaching?</a>
</div></li>
<li><div class="wdinacoach"><a href="why-do-i-need-a-coach.php">Why Do I Need a Coach?</a></li>
<li><div class="coachingtherapy"><a href="isnt-coaching-just-therapy.php">Isn't Coaching Just Therapy?</a></li>
<li><div class="myphilosophy"><a href="my-coaching-philosophy.php">My Coaching Philosophy</a>
</li>
</ul>
</li>
<li><a href="../coachingservices/services.html" class="services">Services</a>
<ul class="services">
<li><a href="../coachingservices/life-coaching.php" style="border-top:0px">Life Coaching</a></li>
<li><a href="../coachingservices/professional-executive-coaching.php">Professional & Executive Coaching</a></li>
<li><a href="../coachingservices/group-team-coaching.php">Group & Team Coaching</a>
</li>
<li><a href="../coachingservices/sports-performance-coaching.php">Sports Performance Coaching</a></li>
</ul>
</li>
<li><a href="../coachingseminars/seminars.html" class="seminars">Seminars</a></li>
<li><a href="../coachingcontact/contact.html" class="contact">Contact</a></li>
</ul>
<div id="content_home">
<div style="text-align: left;" class="textbox"><img width="722" height="518" src="../../Images/AboutCoaching Images/AboutMe.gif" alt="" /></div>
So, for instance, if someone clicks on the "What is Coaching?" menu item, I'd like the image in "textbox" to change. In this case, I'd like the image "AboutMe.gif" to change to some other image. How can I accomplish this? I'm open to using JavaScript, jQuery, or CSS.
Thanks in advance for your help.
A:
This is a fairly simple task; it might be a duplicate and the solution is easily found on the internet, but please read through my answer and let me know if you need further instructions.
Using javascript events and functions:
<li>
<div class="wicoaching">
<a href="index.php" style="border-top:0px" onclick="ChangeImage();">
What is Coaching?</a>
</div>
</li>
This means that whenever someone clicks on 'What is coaching?' a function called ChangeImage will be executed.
Now you need to declare the function.
<script>
function ChangeImage() {
var thePic = document.getElementById('MyImage');
// Note: you need to add a matching id to your img tag.
thePic.src = "SomeOtherImage.gif";
}
</script>
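For getElementById('MyImage') to work, the img inside your textbox div needs a matching id (the id name here is just an example):
<img id="MyImage" width="722" height="518" src="../../Images/AboutCoaching Images/AboutMe.gif" alt="" />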
If you instead want to remove the image and add another one:
var imageToBeRemoved = document.getElementById('MyImage');
var Parent = imageToBeRemoved.parentNode;
Parent.removeChild(imageToBeRemoved);
Now it's gone from your DOM. It would go inside your ChangeImage function, instead of changing the image source.
If you want to add another one with a different Source :
var newImage = document.createElement("img");
newImage.src = 'MyNewImage.jpg';
Parent.appendChild(newImage); // alternatively, you can get a reference to the parent by its ID or name using getElementById or a similar method.
Q:
Why is Abu Huraira considered unreliable by Shia fiqh?
I have seen that Abu Huraira is considered as an unreliable narrator by Shia:
One famous example would be the case of Abu Hurayrah, who narrated
over five thousand hadiths: Sunnis consider him a reliable source,
Shi'ites do not, and his case is not unique.
Why?
A:
[Not being a Shia myself, if someone sees a mistake here please edit and rectify it].
The Jafaris\Imamis do not consider Abu Hurairah to be a reliable narrator, because they have a narration from Imam Jafar to the effect:
سمعت جعفر بن محمد عليهما السلام يقول: ثلاثة كانوا يكذبون على رسول
الله أبو هريرة، وأنس بن مالك، وامرأة
Jafar bin Muhammad said: Three people told lies about the Messenger of
Allah: Abu Hurairah, Anas bin Malik and the woman.
— Al-Khisal by Sheikh Saduq, p190
Q:
How to read/write to HDFS from the driver in spark
I'd like to know whether it is possible to access HDFS from the driver in a Spark application. That is, how can I read/write a file from/to HDFS in the driver program? One possible solution is to read the file as an RDD (sc.textFile) and then collect it in the driver. However, this is not what I'm looking for.
A:
If you want to access HDFS directly from the driver you can simply do (in Scala):
import org.apache.hadoop.fs.FileSystem

val hdfs = FileSystem.get(sc.hadoopConfiguration)
Then you can use the so created variable hdfs to access directly HDFS as a file system without using Spark.
(In the code snippet I assumed you have a SparkContext called sc properly configured.)
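As a concrete illustration, reading and writing a small file directly from the driver might look like this (a sketch; the path and contents are hypothetical):
import org.apache.hadoop.fs.{FileSystem, Path}

val hdfs = FileSystem.get(sc.hadoopConfiguration)

// Write a small file from the driver
val out = hdfs.create(new Path("/tmp/driver-output.txt"))
out.write("hello from the driver".getBytes("UTF-8"))
out.close()

// Read it back
val in = hdfs.open(new Path("/tmp/driver-output.txt"))
val contents = scala.io.Source.fromInputStream(in).mkString
in.close()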
Q:
Is there always $k \in \Bbb N$ such that $g^{k+1} \equiv g^k+1 \pmod p$, where $p$ is a prime number?
Let g be a generator of the group $\Bbb Z_p^*$. Show that there is a $k \in \Bbb N$ such that $g^{k+1} \equiv g^k+1 \pmod p$, where $p$ is a prime number.
Excuse me please for bad interpretation of my problem for the first time. I was thinking as follows:
$g^{k+1} \equiv g^k+1 \pmod p$ is equivalent to $g^{k+1}-g^k\equiv 1 \pmod p$, which can be rewritten as $(g-1)g^{k}\equiv 1 \pmod p$. I have tried to find solutions in $\Bbb Z_{11}^*$, where the generators are $2,6,7,8$. The solutions are:
If $g=2$ then $k=10n$ ($n \geq 0$); if $g=6$ then $k=10n+4$; if $g=7$ then $k=10n+3$; if $g=8$ then $k=10n+1$.
The coefficient $10n$ is clear from the theorem that for any $a \in \Bbb Z_p^*$: $a^{p-1} \equiv 1 \pmod p$. Therefore we must find a solution of $(g-1)g^k \equiv 1 \pmod p$ with $k < p-1$ to prove the equation above. I have figured out that if $g$ is not one of the generators of $\Bbb Z_p^*$, a solution may not exist. The specific thing about generators is that they generate the whole group. But how can I prove that for generators the solution must exist?
A:
Write $G = ({\bf Z }/p{\bf Z})^*$. Because $g$ is its generator, the set $\{g^k | k \in {\bf Z } \}$ equals the underlying set of $G$.
Now, because $G$ is a group, left multiplication by any of its elements acts as a permutation on $G$. In particular, because $g \neq 1$ we have that $g - 1 \in G$, so multiplication by $g-1$ acts this way. Therefore the set $\{(g-1)g^k \mid k \in {\bf Z} \}$ is in bijection with $G$ and, in particular, contains $1$. That is, there is some $k$ with $(g-1)g^k \equiv 1 \pmod p$, which is exactly $g^{k+1} \equiv g^k + 1 \pmod p$.
Q:
Why does the following code work on an online IDE (gcc 7.2.0) but give errors on Ubuntu?
When I run the following code on Ubuntu (gcc (Ubuntu 5.4.0-6ubuntu1~16.04.4)):
#include<iostream>
#include<vector>
#include<list>
using namespace std;
int main(){
vector <int> v;
v.push_back(1);
v.push_back(2);
v.push_back(3);
v.push_back(4);
v.push_back(5);
list<int> temp;
for(auto i:v){
cout<<i<<" ";
temp.push_back(i);
}
for(auto i:temp){
cout<<i<<" ";
}
}
I get the following errors:
try.cpp: In function ‘int main()’:
try.cpp:13:10: error: ‘i’ does not name a type
for(auto i:v){
^
try.cpp:17:1: error: expected ‘;’ before ‘for’
for(auto i:temp){
^
try.cpp:17:1: error: expected primary-expression before ‘for’
try.cpp:17:1: error: expected ‘;’ before ‘for’
try.cpp:17:1: error: expected primary-expression before ‘for’
try.cpp:17:1: error: expected ‘)’ before ‘for’
try.cpp:17:10: error: ‘i’ does not name a type
for(auto i:temp){
^
try.cpp:20:1: error: expected ‘;’ before ‘}’ token
}
^
try.cpp:20:1: error: expected primary-expression before ‘}’ token
try.cpp:20:1: error: expected ‘;’ before ‘}’ token
try.cpp:20:1: error: expected primary-expression before ‘}’ token
try.cpp:20:1: error: expected ‘)’ before ‘}’ token
try.cpp:20:1: error: expected primary-expression before ‘}’ token
But when I run the code on the online IDE it works fine.
What is the problem with the code?
The link for the code on the online IDE: No errors
A:
Your code uses some C++11 features, such as range-based for loops and the auto specifier, but you don't compile for the C++11 standard. You need to enable C++11 support by including the -std=c++11 flag when compiling:
g++ -std=c++11 -o try try.cpp
The online compiler has this enabled by using the -std=gnu++1z flag.
Q:
change line endings in sublime
How can I change the line endings in an existing file to Sublime's default?
There is no problem with line endings when creating new files in Sublime (the default line endings setting works well).
A:
You can see and convert line endings in:
View -> Line Endings
Q:
Returning a Class that Stores on Heap
I'm having a memory-related crash in my project. I managed to reduce it down to the following "toy" project:
class Foo {
public:
Foo() : bar(nullptr) {
bar = new int(3);
}
~Foo() {
if (bar) {
delete bar;
bar = nullptr;
}
}
private:
int* bar;
};
Foo test() {
Foo foo;
return foo;
}
int main() {
test();
// <--- Crash!
return 0;
}
I cannot figure out why I'm crashing at the line specified. This is what I've gathered so far, please do correct me if I'm wrong:
Basically, I'm creating foo on the stack in test(). Foo allocates some memory on the heap. All is well. Then I try to return foo. foo is returned, but it is immediately destroyed. Then, when exiting, I'm again trying to destroy foo; and hence I'm calling Foo's destructor twice, and I crash. What I do not get is why this is a problem. I'm checking for Foo::bar being null before deleting it, and if I do delete it, I set it to null afterwards.
Why should this cause a crash? How can I "fix" this? What am I doing wrong? I'm so confused! :(
Update: The main reason why this is happening is the lack of a copy constructor, as stated in the answers below. However, my original project did have a copy constructor, or so I thought. It turns out that if your class is derived, you must explicitly create a copy constructor for the derived class; Base(const Derived& other) does not count as a copy constructor!
A:
You violated the rule of three.
When you're manually managing memory inside your constructor and destructor, you MUST also provide a copy constructor and assignment operator, otherwise your program will double-delete and use memory after delete.
In your particular example, you have the compiler-provided copy constructor. When foo is copied to the return value of test, you have two objects with the same bar pointer. The local foo goes out of scope and is destroyed, then its bar is set to nullptr. But the copy, the return value, still has a non-null pointer. Then it is destroyed also, and deletes the same memory again.
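A minimal fix following the rule of three might look like this (a sketch; modern C++ would prefer std::unique_ptr, or simply a plain int member, over manual new/delete):
class Foo {
public:
    Foo() : bar(new int(3)) {}

    // Copy constructor: give the copy its own allocation.
    Foo(const Foo& other) : bar(new int(*other.bar)) {}

    // Copy assignment: deep-copy the value, guarding against self-assignment.
    Foo& operator=(const Foo& other) {
        if (this != &other) {
            *bar = *other.bar;
        }
        return *this;
    }

    ~Foo() { delete bar; }

private:
    int* bar;
};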
Q:
Shelf life of open jarred anchovies and anchovy paste in the fridge?
I like having anchovies in some form or another around, and whenever I buy them (in paste or jarred form), I end up with extras and store them in the fridge. Searching around on the Web turns up a mix of opinions on their shelf life, with some saying days or weeks, and others saying months or years. The longer recommendations attribute the jarred anchovies lasting longer to the oil ("as long as they're completely covered"), and the paste being safe because of the salt, along with the metal tube not letting air ruin it.
These seem like reasonable claims, but I'd prefer a bit more information on this. Thanks!
A:
I would assume there is enough salt content to render these fairly stable in the fridge. I would just avoid fingers in the jar to avoid any potential surface contamination. In the long run, I would go with whole, salted anchovy, which have an indefinite shelf life. I keep an 800 gram can (opened, but covered with foil) in the fridge, using one or two anchovies at a time. It lasts me 8 - 12 months easily.
Q:
How to stream data that is generated by an IoT gateway application to predix cloud?
I have devices configured with an IoT gateway application (built with MEAN.JS). These devices send data to the IoT gateway, where I can access it. Now I wish to do some analytics on the same data. How can I send this data to the Predix cloud so that I can use Predix services for analytics?
A:
We can achieve this requirement using the predix-uaa-client NPM package.
All you have to do is:
1. Create a Predix account, a predix-uaa service, and a predix-time-series service, and attach the uaa service to the time-series service.
2. Make note of the UAA URL, client_id, client_secret, and the Predix time-series ingest zone id.
3. Pass the UAA URL, client_id, client_secret, and time-series ingest zone id to the predix-uaa-client package; it responds with token.access_token.
4. Use token.access_token as a Bearer token in the Authorization header of calls to secured services.
5. Once the call to the secured service succeeds, create a websocket connection and start sending data to the predix-time-series service.
Here is sample code to achieve it:
Q:
Could I use just one BroadcastReceiver in my android app?
I am kind of confused about BroadcastReceiver.
As the title says, I don't think I need more than one BroadcastReceiver in my app.
Or is there something wrong with using a bunch of BroadcastReceivers in my app?
I think it will affect my OS's memory and performance; am I right?
Thank you for your time and kind heart.
A:
It's all up to you. You can have multiple BroadcastReceivers for different sets of intent-filters, or use a single broadcast receiver to handle all the intent-filters.
Usually it's better to define different receivers based on the sets of intent-filters that offer functionality for a related group of tasks.
Like I said, it's all up to you. If you have a large set of intent-filters and you want your code to be handled properly (based on a similar classification of the tasks it performs), then go for multiple receivers. Otherwise it's easy and logical to handle a few filters in a single receiver, as the sketch below illustrates.
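As an illustration, a single receiver can dispatch on the incoming action (a sketch; the two actions are arbitrary examples):
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

public class MultiActionReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        String action = intent.getAction();
        if (action == null) return;
        switch (action) {
            case Intent.ACTION_BOOT_COMPLETED:
                // device finished booting
                break;
            case Intent.ACTION_POWER_CONNECTED:
                // charger plugged in
                break;
        }
    }
}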
Moreover, the performance of your app will not be obstructed, as it depends on the execution of the tasks within the receiver, not on the quantity of receivers or filters.
Tip: Try to introduce threads wherever you are expected to perform some heavy lifting :)
Q:
How to get switch case values
I was wondering if there is a way to get the values of every case in a switch statement?
When a not-implemented case is provided, I would like to throw an exception and give a list of the available case values.
switch (partName.Trim().ToLower())
{
case "engine":
//something
break;
case "door":
//something
break;
case "wheel":
//something
break;
default:
throw new NotImplementedException($"Available parts are {????}.");
}
A:
Short answer: no. There is no way of doing this programmatically.
Longer answer: you can work around this with an enum, eg
public enum Parts { engine, door, wheel }
...
if (Enum.TryParse(partName, out Parts part))
{
switch (part)
{
case Parts.engine:
//something
break;
case Parts.door:
//something
break;
case Parts.wheel:
//something
break;
}
}
else
{
var listOfValues = string.Join(", ", Enum.GetNames(typeof(Parts)));
throw new NotImplementedException($"Available parts are {listOfValues}.");
}
This isn't a complete solution as I might forget to add a case for one of the enum values and I'll get a confusing error telling me that the value I supplied is supported when it's not. But that limitation aside, it will work if the switch is correctly implemented.
A:
As has already been said, switch is of no help to you in this scenario. If you wanted, you could ditch the switch in favour of a configured map of actions, e.g. (non-language-specific pseudocode; a concrete C# sketch follows after it):
Map<String, Action> transformations = Map.of(
Pair("engine", EngineTransformation()),
Pair("door", DoorTransformation()),
Pair("wheel", WheelTransformation()),
);
var partName = partName.Trim().ToLower();
if (!transformations.contains(partName)) {
throw new NotImplementedException(
$"Available parts are {}.",
transformations.getKeys()
);
}
var someResult = transformations.get(partName).getValue().execute();
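In C#, that idea might look like this (a sketch, not the poster's exact code; the "piston" input is just an example):
using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        var transformations = new Dictionary<string, Action>
        {
            ["engine"] = () => { /* something */ },
            ["door"]   = () => { /* something */ },
            ["wheel"]  = () => { /* something */ },
        };

        var partName = "piston"; // example input
        var key = partName.Trim().ToLower();
        if (!transformations.TryGetValue(key, out var action))
        {
            var available = string.Join(", ", transformations.Keys);
            throw new NotImplementedException($"Available parts are {available}.");
        }
        action();
    }
}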
Q:
I've nested a ul in a ul but get an error. I've searched here but the solutions don't seem to apply
This is the error: Element ul not allowed as child of element ul in this context. (Suppressing further errors from this subtree.)
Here is the html:
<tr>
<td> Incorporated Business Accounts - additional requirements:
<ul>
<li> Business name and address </li>
<li> Nature of business and date of incorporation </li>
<li> BIN number</li>
<li> Certificate of Incorporation</li>
<li> Names of company directors</li>
<li> Names of directors </li>
<li> Proof of signing authority </li>
<ul>
<li> Ltd Companies: Memorandum and Articles of Incorporation/Bylaws </li>
<li> Registered Societies: Constitution and Bylaws or minutes</li>
<li> Strata Corporations: Bylaws or minutes</li>
</ul>
<li> Photo ID for all signers: if more than 3 signers, must ID at least 3 of those persons</li>
</ul>
</td></tr>
</tbody>
</table>
A:
You need to nest it under a <li> tag like so:
<ul>
<li>Item 1</li>
<li>Item 2
<ul>
<li>Nested item 1</li>
<li>Nested item 2</li>
</ul>
</li>
</ul>
A:
The ul should be inside an li. Check the following:
....
<li> Proof of signing authority
<ul>
<li> Ltd Companies: Memorandum and Articles of Incorporation/Bylaws </li>
<li> Registered Societies: Constitution and Bylaws or minutes</li>
<li> Strata Corporations: Bylaws or minutes</li>
</ul>
</li>
<li> Photo ID for all signers: if more than 3 signers, must ID at least 3 of those persons</li>
....
Q:
How to set foreign key ID in factory girl factory?
Here are my models:
class Section < ActiveRecord::Base
belongs_to :organization
end
class Organization < ActiveRecord::Base
has_many :sections
end
In my Section factory, I would like to automatically create an organization and set it. How can I accomplish this?
FactoryGirl.define do
factory :section do
organization_id???
title { Faker::Lorem.words(4).join(" ").titleize }
subtitle { Faker::Lorem.sentence }
overview { Faker::Lorem.paragraphs(5).join("\n") }
end
end
A:
It's possible to set up associations within factories. First, you need to have a factory for your organization:
FactoryGirl.define do
factory :organization do
...
end
end
Then you can just call organization, and FactoryGirl will take care of generating your organization:
FactoryGirl.define do
factory :section do
organization
title { Faker::Lorem.words(4).join(" ").titleize }
subtitle { Faker::Lorem.sentence }
overview { Faker::Lorem.paragraphs(5).join("\n") }
end
end
if you want to know more you can go here : http://rubydoc.info/gems/factory_girl/file/GETTING_STARTED.md
Q:
How can we fix Sprint Planning meetings that are unproductive?
Currently, we have split "Sprint Planning Meeting" into two parts:
SPM1 - We do this on the first day of the sprint. The Product Owner discusses the stories that have come into the current sprint from the backlog. All stories have already been discussed during backlog refinement, so we don't have much to do here. Mostly we discuss whether something was pending or unclear in refinement. After this meeting, the PO makes sure that the team has 100% clarity about the stories.
SPM2 - This is a purely technical discussion. We don't include the PO here. We break stories into testable tasks so that each team member gets a broad overview of what needs to be executed and what is expected of each task, and to facilitate parallel development.
The problems we are facing:
SPM1 Problem - There is little to discuss. The team is not convinced about the exact agenda of the meeting; they ask why we don't discuss everything during backlog refinement instead. (We have two backlog refinement sessions in a sprint.) The remaining dependencies, they suggest, can be discussed via email, etc.
SPM2 Problem - We struggle with creating tasks from stories. We don't have a clear idea of how deep the discussion should go. Should it cover which code layer (i.e. BL, DAL, UI) and what code would be written, or just how the different systems would connect with each other, i.e. breaking down the functionality and leaving the rest to the developer?
Some additional context:
Team size: 5 developers, 1 tester, 1 Product Owner
Sprint length: two weeks
Average production experience of developers/tester: 2 years
Are we doing something wrong here? Should we split "Sprint Planning Meeting" into two parts like this, is it standard scrum practice? How should we resolve these problems?
A:
TL;DR
You have several process problems. Process problems have a cost. Those costs should be visible, and the cost of fixing the problems should be charged against the project to ensure transparency.
You can address your current set of process problems by:
Making better use of Backlog Refinement.
Having clear Sprint Goals.
Writing user stories (aka Product Backlog Items) that meet INVEST criteria.
Improving your Sprint Planning process.
Practicing continuous improvement through Sprint Retrospectives.
Allocating team capacity to process improvement initiatives.
The rest of this really long answer explains why and how you should do these things.
Analysis
Secondary (Symptomatic) Problems
You have several different problems. As I see it, your secondary problems flow from these statements you made in your original post.
The team is not convinced on exact agenda of the meeting[.]
This is because Backlog Refinement and Sprint Planning are not being properly leveraged.
We struggle with creating tasks from stories.
This is likely because stories don't meet INVEST criteria, are improperly sized or estimated, or because your team and your sprints lack cohesion.
The rest dependencies they suggested can be discussed via email etc.
This is a huge project smell. It ignores the agile principle that "[t]he most efficient and effective method of conveying information to and within a development team is face-to-face conversation." It also indicates that the team lacks cohesion, and is failing to find value in the Planning Meeting as currently structured.
Primary Problems
However, the problems above are actually X/Y problems. They are symptoms of a larger process problem. Based on the information provided, I see several primary issues.
You're not using Backlog Refinement to set a clear Sprint Goal for the upcoming Sprint Planning Meeting.
You're allocating too much time for Sprint Planning based on your sprint length.
Your Product Owner is not available for questions during Sprint Backlog development.
Your team needs coaching in effective decomposition and planning techniques.
In short, if you unpack all the problems, it basically comes down to a failure to fully leverage the Scrum process, and an inability to gel as a cohesive team rather than a collection of individuals assigned to the same project. Let's look at some possible solutions.
Solutions
The following solutions address your underlying process and teaming problems, and should therefore also address the specific follow-on issues you raised in your original post. Situations vary, though, so adapt them as needed.
Backlog Refinement and Sprint Goals
Your entire team should be involved in Backlog Refinement. While more mature teams can sometimes get away with just the Scrum Master and Product Owner involved in the meeting, the lack of cohesion in your Sprint Planning sessions could be addressed by making sure everyone leaves Backlog Grooming with a clear goal in mind and a set agenda.
Backlog Refinement is the time for the team to huddle with the Product Owner to determine what features are likely to be in scope for the upcoming Sprint, and to help the Product Owner:
Identify a Sprint Goal that will act as a filter for selecting stories for the upcoming Sprint.
Decompose any themes or epics for Sprint Planning into high-level (not detailed) user stories, e.g. refining them until they are likely to be no more than one Sprint in length.
Prioritize stories on the Product Backlog, answering technical questions that may affect scope, dependency ordering, or other things that impact the Product Owner's perceived value of the stories.
If the team leaves the Backlog Refinement meeting with a Sprint Goal and a pool of potential user stories for the next Sprint, you have an agenda!
Sprint Planning: Story and Task Planning
Allocate less time for your Sprint Planning sessions. As a rule of thumb, I've found that most teams need about 4 hours per week of sprint length for planning. Obviously, this will vary based on project complexity and team maturity, but if you're spending more than 6-8 hours planning a two-week sprint then you may have other process, framework, or skill deficits in play.
At the very beginning of Sprint Planning, the team must estimate its capacity for the current sprint. This is often the velocity, adjusted for current conditions (e.g. vacations, changes in team composition, or complexity of the current phase of the project). This capacity estimate is used to limit the work that will be planned for the sprint, and to shape the forecast that is created through Sprint Planning.
The first half of Sprint Planning requires the whole team to review the Sprint Goal, and to work with the Product Owner to select stories from the top of the Product Backlog that will fit within a single sprint. This can involve some discussions, estimations, horse-trading, and on-the-fly reprioritization (by the PO) of the Product Backlog. Whether the team pulls one story or many, the team is responsible for popping stories off the top of the Product Backlog based on the team's estimated capacity for the sprint.
The second half of Sprint Planning also requires the whole team, including the Product Owner. This is the part where the team develops the Sprint Backlog, which are the tasks and dependencies needed to get each story to the Definition of Done. While very mature teams can often combine the two steps, most teams need this step to be explicit.
The goal here isn't to do a traditional work breakdown structure. Instead, it's to define the tasks needed to implement the story, or identify information gaps about scope or Definition of Done for a particular story. In other words, you need a rough outline of what needs to be done so that the acceptance criteria for the story are understood before you start working.
As an example, imagine you had a story like:
As a user,
I want to be able to change my username in the system
so that I don't have to call technical support to fix simple typos made during account creation.
A good story has a value consumer (the user), a feature (changing the username), and a context to constrain the scope of the implementation. A great story is often granular enough that the story's tasks are self-evident. If they aren't, you may need to revisit the INVEST criteria for story development.
During Sprint Backlog definition, the team needs to collaboratively figure out:
How they plan to test the feature.
What meetings or planning sessions they need to have to work out implementation details.
What dependencies they may have on other stories, tasks, or resources so that the Sprint Backlog can be prioritized.
Ask the Product Owner any clarifying questions about scope or context that will ensure the team is planning the right tasks.
As a sanity check, whether the identified tasks can still fit within a single sprint.
That's it! If the meeting starts sliding off into detailed discussions of how a developer plans to embiggen a widget, the team has likely missed the point and the Scrum Master needs to referee the process better.
Refine Your Process
If your team lacks cohesion, maturity, or experience with effective Scrum, then issues should be identified and discussed during the Sprint Retrospective. The Product Owner must then create user stories on the Product Backlog so that the team can allocate capacity to addressing the process issues.
For example, if the team struggles with writing stories that meet INVEST criteria, the problems or knowledge gaps should be called out in the retrospective. The Product Owner might then create a user story like:
As a Scrum Team Member,
I want to learn how to define Product Backlog Items (PBIs) that are more independent
so that there are fewer dependencies between PBIs during Sprint Planning.
The story should then be estimated, prioritized, and allocated to a future sprint as work. Process issues can't be treated as hidden work or overhead; they must be made explicit, and any tasks associated with them must be explicitly counted against the team's capacity as either estimated work or as unaddressed tech/process debt that reduces available capacity.
In other words, process problems have a cost. Those costs should be visible, and the cost of fixing the problems should be charged against the project to ensure transparency.
Q:
C - Nested linked lists
I'm trying to create a linked list of students, each with a linked list of grades, but I'm having trouble accessing the linked list of grades inside the linked list of students.
typedef struct student_data_struct{
char student[MAX];
struct grades_list_struct *gradeP;
} student_Data;
typedef struct student_list_struct{
student_Data studentData;
struct student_list_struct *next;
} StudentNode;
typedef struct grades_list_struct{
int grade;
struct grades_list_struct *next;
} GradeNode;
GradeNode *insertGrade(int grade, GradeNode *head){
GradeNode *newNode=NULL;
newNode=(GradeNode*)calloc(1, sizeof(GradeNode));
if(head!=NULL){
newNode->grade=grade;
newNode->next=head;
return newNode;
} else {
newNode->grade=grade;
newNode->next=NULL;
return newNode;
}
}
StudentNode *insertStudent(char studentName[MAX], int studentGrade, StudentNode *head){
StudentNode *newNode=NULL;
newNode=(StudentNode*)calloc(1, sizeof(StudentNode));
newNode->studentData->gradeP=(GradeNode*)calloc(1, sizeof(GradeNode));
if (head==NULL){
strcpy(newNode->studentData.student, studentName);
newNode->next=NULL;
newNode->studentData->gradeP=insertGrade(studentGrade, newNode->studentData->gradeP);
return newNode;
} else {
strcpy(newNode->student, studentName);
newNode->gradeP->grade=studentGrade;
newNode->studentData->gradeP=insertGrade(studentGrade, newNode->studentData->gradeP);
return newNode;
}
}
When I try to allocate memory to the grade pointer,
newNode->studentData->gradeP=(GradeNode*)calloc(1, sizeof(GradeNode));
I get the error:
error: invalid type argument of '->' (have 'student_Data' {aka 'struct student_data_struct'})
As well, when I try to insert a grade for a student,
newNode->studentData->gradeP=insertGrade(studentGrade, newNode->studentData->gradeP);
I get the error:
error: invalid type argument of '->' (have 'student_Data' {aka 'struct student_data_struct'})
Any help would be greatly appreciated.
A:
You are accessing a struct member with the pointer operator. studentData is an embedded struct, not a pointer, so it needs the . operator (-> is only for pointers). Write it as given below:
newNode->studentData.gradeP=(GradeNode*)calloc(1, sizeof(GradeNode));
newNode->studentData.gradeP=insertGrade(studentGrade, newNode->studentData.gradeP);
Q:
How does this Java code work? Remove duplicates from an unsorted linked list
I have a LinkedList Node class:
public static class Node {
int value;
Node next;
Node(int value){
this.value = value;
this.next = null;
}
}
And the delete duplicates method:
public static void deleteDups(Node n) {
Hashtable<Integer, Boolean> table = new Hashtable<Integer, Boolean>();
Node previous = null;
while (n != null) {
if (table.containsKey(n.value)) previous.next = n.next;
else {
table.put(n.value, true);
previous = n;
}
n = n.next;
}
}
public static void printList(Node list) {
Node currentNode = list;
while(currentNode != null) {
System.out.print(currentNode.value + ", ");
currentNode = currentNode.next;
}
System.out.println("");
}
It works. But how? In the method I delete from the list n, and at the end n is null. But why is it not null here:
Node list = new Node(5);
list.next = new Node(2);
list.next.next = new Node(3);
list.next.next.next = new Node(2);
list.next.next.next.next = new Node(1);
list.next.next.next.next.next = new Node(3);
printList(list);
deleteDups(list);
printList(list);
Output:
5, 2, 3, 2, 1, 3
5, 2, 3, 1
What I missed?
If I print n and previous:
public static void deleteDups(Node n) {
Hashtable<Integer, Boolean> table = new Hashtable<Integer, Boolean>();
Node previous = null;
while (n != null) {
if (table.containsKey(n.value)) previous.next = n.next;
else {
table.put(n.value, true);
previous = n;
}
n = n.next;
System.out.println("");
System.out.println("");
System.out.print("previous is ");
printList(previous);
System.out.print("n is ");
printList(n);
}
}
The output is:
previous is 5, 2, 3, 2, 1, 3,
n is 2, 3, 2, 1, 3,
previous is 2, 3, 2, 1, 3,
n is 3, 2, 1, 3,
previous is 3, 2, 1, 3,
n is 2, 1, 3,
previous is 3, 1, 3,
n is 1, 3,
previous is 1, 3,
n is 3,
previous is 1,
n is
At the end n and previous are null! How does it work then?
A:
Simply said, it iterates over the linked list and remembers each value it has encountered. If it encounters a value it has already seen, it breaks the duplicate entry out of the chain by connecting the previous node (the one it encountered before the duplicate) with the next one (the one after the duplicate).
Just imagine it like taking a link out of a chain.
This whole algorithm works in situ, so there is no need to recreate the list afterwards. You just throw away nodes that aren't needed anymore.
This is a bit dangerous as there could still be references to Nodes that aren't part of the filtered linked list anymore. It would be better to invalidate them somehow.
On the other hand, a removed Node's next property will still lead to a Node of the filtered linked list (after n steps) if there is still a value in its chain the algorithm has not yet seen, or to null if the duplicate Node was already at the end of the linked list. Nevertheless, the following Nodes will still contain duplicates until the chain rejoins a Node that is part of the duplicate-free linked list, but one could change the algorithm so that even this won't happen.
Edit, to answer why printList prints nothing/null at the end:
This is because n isn't your start node anymore at this point; it is reassigned to the "next" node on every iteration. So at the end n is actually null, because null is what follows the last node. Note that n is only a local copy of the reference you passed in, so reassigning it inside deleteDups never changes the list variable in main. You have to hold on to your starting Node (the one you passed into the deleteDups method), because that node is the start of your filtered list. Your printList method only prints the node you pass to it and the nodes after it. So the last calls that happen are printList(previous) (this still contains the last element) and printList(n), which prints nothing because n is null at that point.
Just try to create a linked list of your nodes. Then pass the first Node into deleteDups and afterwards call printList with the Node you passed into it.
Node start = new Node(...);
//Initialize and connect more nodes consecutively here
printList(start); //unfiltered List will be printed
deleteDups(start);
printList(start); //filtered List will be printed, all duplicates have been unhinged from your linked list
Q:
Ansible get first element from list
Suppose I have the following vars_file:
mappings:
- primary: 1.1.1.1
secondary: 2.2.2.2
- primary: 12.12.12.12
secondary: 11.11.11.11
and hosts file
1.1.1.1
12.12.12.12
5.5.5.5
and the following playbook task
- name: Extract secondary from list
debug:
msg: "{{ (mappings | selectattr('primary', 'search', inventory_hostname) | list | first | default({'secondary':None})).secondary }}"
The current task works and gives an empty string when no match is found, but I would like to know if there is a better/cleaner way of doing it without passing a dictionary to the default filter.
A:
An option would be to use json_query
- debug:
msg: "{{ mappings | json_query(\"[?primary=='\" + inventory_hostname + \"'].secondary\") }}"
, but selectattr works too
- debug:
msg: "{{ mappings | selectattr('primary', 'equalto', inventory_hostname) | map(attribute='secondary') | list }}"
Q:
.java file not running properly with the usual java commands in the macOS Terminal
I'm trying to run the program in this link (specifically Plotter.java). In the zip file there's an instruction file on how to run the programs, but the instructions don't work. I've read other questions on running a java file from the terminal and I've applied those solutions, but none worked on this file, even though I've run other code without any problems (javac, java).
How can I run this program?
I also want to run it (the plotter) in the Eclipse console or in a GUI made in Eclipse.
P.S.: I haven't included any code because the program has about 10 classes, and I'm new to Java.
A:
I looked at the code; it seems you need to pass doubles on the command line, but you don't, so it tries to read from an empty args array. Try writing three doubles on the command line after the name of the class you execute; it should work then.
If you want to run the same thing in Eclipse, use the Eclipse menu: Run -> Run Configurations -> Java Application -> right click -> New -> Arguments -> add the arguments you need.
And please read the Instructions file carefully; it explains everything.
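For example, from the directory containing the sources (a sketch; the three argument values are placeholders):
javac *.java
java Plotter 0.0 10.0 0.1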
Q:
F# vs OCaml: Stack overflow
I recently found a presentation about F# for Python programmers, and after watching it, I decided to implement a solution to the "ant puzzle" on my own.
There is an ant that can walk around on a planar grid. The ant can move one space at a time left, right, up or down. That is, from the cell (x, y) the ant can go to cells (x+1, y), (x-1, y), (x, y+1), and (x, y-1). Points where the sum of the digits of the x and y coordinates are greater than 25 are inaccessible to the ant. For example, the point (59,79) is inaccessible because 5 + 9 + 7 + 9 = 30, which is greater than 25. The question is: How many points can the ant access if it starts at (1000, 1000), including (1000, 1000) itself?
I implemented my solution in 30 lines of OCaml first, and tried it out:
$ ocamlopt -unsafe -rectypes -inline 1000 -o puzzle ant.ml
$ time ./puzzle
Points: 148848
real 0m0.143s
user 0m0.127s
sys 0m0.013s
Neat, my result is the same as that of leonardo's implementation, in D and C++. Comparing to Leonardo's C++ implementation, the OCaml version runs approx 2 times slower than C++. Which is OK, given that Leonardo used a queue to remove recursion.
I then translated the code to F# ... and here's what I got:
Thanassis@HOME /g/Tmp/ant.fsharp
$ /g/Program\ Files/FSharp-2.0.0.0/bin/fsc.exe ant.fs
Microsoft (R) F# 2.0 Compiler build 2.0.0.0
Copyright (c) Microsoft Corporation. All Rights Reserved.
Thanassis@HOME /g/Tmp/ant.fsharp
$ ./ant.exe
Process is terminated due to StackOverflowException.
Quit
Thanassis@HOME /g/Tmp/ant.fsharp
$ /g/Program\ Files/Microsoft\ F#/v4.0/Fsc.exe ant.fs
Microsoft (R) F# 2.0 Compiler build 4.0.30319.1
Copyright (c) Microsoft Corporation. All Rights Reserved.
Thanassis@HOME /g/Tmp/ant.fsharp
$ ./ant.exe
Process is terminated due to StackOverflowException
Stack overflow... with both versions of F# I have in my machine...
Out of curiosity, I then took the generated binary (ant.exe) and run it under Arch Linux/Mono:
$ mono -V | head -1
Mono JIT compiler version 2.10.5 (tarball Fri Sep 9 06:34:36 UTC 2011)
$ time mono ./ant.exe
Points: 148848
real 1m24.298s
user 0m0.567s
sys 0m0.027s
Surprisingly, it runs under Mono 2.10.5 (i.e. no stack overflow) - but it takes 84 seconds, i.e. 587 times slower than OCaml - oops.
So this program...
runs fine under OCaml
doesn't work at all under .NET/F#
works, but is very slow, under Mono/F#.
Why?
EDIT: Weirdness continues - Using "--optimize+ --checked-" makes the problem disappear, but only under ArchLinux/Mono; under Windows XP and Windows 7/64bit, even the optimized version of the binary stack overflows.
Final EDIT: I found out the answer myself - see below.
A:
Executive summary:
I wrote a simple implementation of an algorithm... that wasn't tail-recursive.
I compiled it with OCaml under Linux.
It worked fine, and finished in 0.14 seconds.
It was then time to port to F#.
I translated the code (direct translation) to F#.
I compiled under Windows, and run it - I got a stack overflow.
I took the binary under Linux, and run it under Mono.
It worked, but run very slowly (84 seconds).
I then posted to Stack Overflow - but some people decided to close the question (sigh).
I tried compiling with --optimize+ --checked-
The binary still stack overflowed under Windows...
...but run fine (and finished in 0.5 seconds) under Linux/Mono.
It was time to check the stack size: Under Windows, another SO post pointed out that it is set by default to 1MB. Under Linux, "ulimit -s" and a compilation of a test program clearly showed that it is 8MB.
This explained why the program worked under Linux and not under Windows (the program used more than 1MB of stack). It didn't explain why the optimized version ran so much better under Mono than the non-optimized one: 0.5 seconds vs 84 seconds (even though --optimize+ appears to be set by default; see the comment by Keith with the "Expert F#" extract). It probably has to do with the garbage collector of Mono, which was somehow driven to extremes by the first version.
The difference between Linux/OCaml and Linux/Mono/F# execution times (0.14 vs 0.5) is because of the simple way I measured it: "time ./binary ..." measures the startup time as well, which is significant for Mono/.NET (well, significant for this simple little problem).
Anyway, to solve this once and for all, I wrote a tail-recursive version - where the recursive call at the end of the function is transformed into a loop (and hence, no stack usage is necessary - at least in theory).
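As an illustration of the shape of that transformation (not the actual puzzle code), an accumulator parameter moves the recursive call into tail position, so the compiler can turn it into a loop instead of growing the stack:
let rec sumTo acc n =
    if n = 0 then acc
    else sumTo (acc + n) (n - 1)   // tail position: nothing left to do after the call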
The new version ran fine under Windows as well, and finished in 0.5 seconds.
So, moral of the story:
Beware of your stack usage, especially if you use lots of it and run under Windows. Use EDITBIN with the /STACK option to set your binaries to larger stack sizes, or better yet, write your code in a manner that doesn't depend on using too much stack.
OCaml may be better at tail-recursion elimination than F# - or its garbage collector is doing a better job on this particular problem.
Don't despair about ...rude people closing your Stack Overflow questions, good people will counteract them in the end - if the questions are really good :-)
P.S. Some additional input from Dr. Jon Harrop:
...you were just lucky that OCaml didn't overflow as well.
You already identified that actual stack sizes vary between platforms.
Another facet of the same issue is that different language implementations
eat stack space at different rates and have different performance
characteristics in the presence of deep stacks. OCaml, Mono and .NET
all use different data representations and GC algorithms that impact
these results... (a) OCaml uses tagged integers to distinguish pointers,
giving compact stack frames, and will traverse everything on the stack
looking for pointers. The tagging essentially conveys just enough information
for the OCaml run time to be able to traverse the heap (b) Mono treats words
on the stack conservatively as pointers: if, as a pointer, a word would point
into a heap-allocated block then that block is considered to be reachable.
(c) I do not know .NET's algorithm but I wouldn't be surprised if it ate stack
space faster and still traversed every word on the stack (it certainly
suffers pathological performance from the GC if an unrelated thread has a
deep stack!)... Moreover, your use of heap-allocated tuples means you'll
be filling the nursery generation (e.g. gen0) quickly and, therefore,
causing the GC to traverse those deep stacks often...
A:
Let me try to summarize the answer.
There are 3 points to be made:
problem: stack overflow happens on a recursive function
it happens only under Windows: on Linux, for the problem size examined, it works
same (or similar) code in OCaml works
the optimize+ compiler flag, for the problem size examined, works
It is very common that a Stack Overflow exception is the result of a recursive call. If the call is in tail position, the compiler may recognize it and apply tail-call optimization, so the recursive call(s) will not take up stack space.
Tail-call optimization may happen in F#, in the CLR, or in both:
CLR tail optimization1
F# recursion (more general) 2
F# tail calls 3
The correct explanation for "fails on windows, not in linux" is, as other said, the default reserved stack space on the two OS. Or better, the reserved stack space used by the compilers under the two OSes. By default, VC++ reserves only 1MB of stack space. The CLR is (likely) compiled with VC++, so it has this limitation. Reserved stack space can be increased at compile time, but I'm not sure if it can be modified on compiled executables.
EDIT: turns out that it can be done (see this blog post http://www.bluebytesoftware.com/blog/2006/07/04/ModifyingStackReserveAndCommitSizesOnExistingBinaries.aspx)
I would not recommend it, but in extreme situations at least it is possible.
The OCaml version may work because it was run under Linux.
However, it would be interesting to test the OCaml version under Windows too. I know that the OCaml compiler is more aggressive at tail-call optimization than F#'s... could it even extract a tail-recursive function from your original code?
My guess about "--optimize+" is that it will still cause the code to recur, hence it will still fail under Windows, but will mitigate the problem by making the executable run faster.
Finally, the definitive solution is to use tail recursion (by rewriting the code or by realying on aggressive compiler optimization); it is a good way to avoid stack overflow problem with recursive functions.
| {
"pile_set_name": "StackExchange"
} |
Q:
Beautiful Soup output none while parsing URLs
I have written a function to parse the article URLs from the archives of NDTV News. It returns None output instead of a list of URLs. Why is it returning None?
def parse_ndtv_archive_links():
url_count=0
links = []
url = makeURL()
while (url_count < len(url)):
page=requests.get(url[url_count]).text
soup=BeautifulSoup(page,'lxml')
section=soup.find('div', id="main-content")
for a in section.findAll('li'):
href=a.get('href')
links.append(href)
url_count += 1
return list(links)
print(parse_ndtv_archive_links())
So the parse function loops over each day's archives on NDTV and fetches the URLs, and the makeURL() function generates the list of archive URLs for a period of time.
A:
It is because your variable a stores a <li> tag, not an <a> tag, and the <li> tag doesn't have an href attribute. One way to solve this is like this.
for li in section.findAll('li'):
href=li.a.get('href')
links.append(href)
Edit: it separates days now
import requests
from bs4 import BeautifulSoup
urls = ['http://archives.ndtv.com/articles/2020-05.html']
for url in urls:
current_day = 1
page = requests.get(url).text
soup = BeautifulSoup(page, 'lxml')
days = soup.find('div', {'id': 'main-content'}).find_all('ul')
links = {day_num: [] for day_num in range(1, len(days)+1)}
for day in days:
for li in day.findAll('li'):
href = li.a.get('href')
links[current_day].append(href)
current_day += 1
print(links)
The result is stored in the dictionary links, where the key is the day of the month and the value is a list of links. This dictionary contains only the days of one month; if you wish to store data for more than one month you will need to tweak this code a little bit.
| {
"pile_set_name": "StackExchange"
} |
Q:
Unmarshal a specific XML file from zip without extracting
I have a zip file containing several XML files, which I read using the archive/zip and encoding/xml packages from the Go standard library. The thing I want to do is unmarshal only a.xml into a type, i.e. without looping over all the files inside:
test.zip
├ a.xml
├ b.xml
└ ...
a.xml would have a structure like:
<?xml version="1.0" encoding="UTF-8"?>
<root>
<app>
<code>0001</code>
<name>Some Test App</name>
</app>
<app>
<code>0002</code>
<name>Another Test App</name>
</app>
</root>
How do I select and unmarshal the file whose name is provided as a parameter, in place of the commented-out block below? For instance:
package marshalutils
import (
"archive/zip"
"log"
"fmt"
"encoding/xml"
)
type ApplicationRoot struct {
XMLName xml.Name `xml:"root"`
Applications []Application `xml:"app"`
}
type Application struct {
Code string `xml:"code"`
Name string `xml:"name"`
}
func UnmarshalApps(zipPath string, fileName string) {
// Open a zip archive for reading.
reader, err := zip.OpenReader(zipPath)
if err != nil {
log.Fatal(`ERROR:`, err)
}
defer reader.Close()
/*
* U N M A R S H A L T H E G I V E N F I L E ...
* ... I N T O T H E T Y P E S A B O V E
*/
}
A:
Well, here is the answer I have found, with the return type declaration added to the sample function (note that it also needs the bytes package imported):
func UnmarshalApps(zipPath string, fileName string) ApplicationRoot {
// Open a zip archive for reading.
reader, err := zip.OpenReader(zipPath)
if err != nil {
log.Fatal(`ERROR:`, err)
}
defer reader.Close()
/*
* START OF ANSWER
*/
var appRoot ApplicationRoot
for _, file := range reader.File {
// check if the file matches the name for application portfolio xml
if file.Name == fileName {
rc, err := file.Open()
if err != nil {
log.Fatal(`ERROR:`, err)
}
// Prepare buffer
buf := new(bytes.Buffer)
buf.ReadFrom(rc)
// Unmarshal bytes (error handling omitted here for brevity)
xml.Unmarshal(buf.Bytes(), &appRoot)
rc.Close()
}
}
/*
 * END OF ANSWER
 */
return appRoot
}
| {
"pile_set_name": "StackExchange"
} |
Q:
how to trigger form submission from within an asynchronous callback inside form.submit handler (jquery)
I am handling a form submit like this:
$('#some-form').submit(function(ev) {
var $form = $(this);
$.getJSON('/some/endpoint', function(data) {
if (data.somecondition) {
$form.submit(); // <- not doing what I want
}
});
return false;
});
So, I'm beginning an asynchronous getJSON call and then returning false to stop the form submission. Within the getJSON callback, under some condition, I want to actually submit the form. But triggering submit() just calls the handler again and repeats the process.
I know I can unbind the submit handler and then submit, but there's got to be a better way, right? If there isn't a better way, what's the best way to structure this code to unbind the submit handler?
A:
Turns out it's actually very simple: just call submit() on the non-extended version of the element. The native DOM submit() doesn't trigger the jQuery submit handlers, so there's no recursion:
$form.get(0).submit();
Example here: http://jsbin.com/eyuteh/7/edit#html
Caveat: See @Avi Pinto's comment.
| {
"pile_set_name": "StackExchange"
} |
Q:
Internet Protocol version 4 (IPv4) padding?
Where is the padding inserted in the IPv4 datagram format?
After the payload?
A:
The only possible padding in an IPv4 packet would be in the header after any options. IPv4 options really are not used any longer, but if there are any options, the header must be padded to be sure that it ends on a 32-bit boundary. There is no payload padding because IPv4 simply doesn't care what is in the payload.
This is all explained in RFC 791, Internet Protocol:
3.1. Internet Header Format
A summary of the contents of the internet header follows:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|Version| IHL |Type of Service| Total Length |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Identification |Flags| Fragment Offset |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Time to Live | Protocol | Header Checksum |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Source Address |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Destination Address |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Options | Padding |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Example Internet Datagram Header
-and-
Padding: variable
The internet header padding is used to ensure that the internet header
ends on a 32 bit boundary. The padding is zero.
-and-
The options might not end on a 32-bit boundary. The internet header
must be filled out with octets of zeros. The first of these would be
interpreted as the end-of-options option, and the remainder as
internet header padding.
-and-
Padding
The internet header Padding field is used to ensure that the data
begins on 32 bit word boundary. The padding is zero.
| {
"pile_set_name": "StackExchange"
} |
Q:
ggplot() error while using it in a function in R
I need a help on ggplot() which I am using for the very first time.
I have a function defined as follows :
myHist <- function(data, varName = "") {
gp <- ggplot(data, aes(data[, varName]))
gp <- gp + geom_histogram(alpha = 1, aes(fill=..count..))
gp <- gp + labs(title = paste("Histogram for ", varName, sep = " "))
gp <- gp + labs(x = varName, y = "N")
gp <- gp + scale_fill_gradient("", low = "blue", high = "red")
gp
}
then using it as follows :
myHist(data = iris, varName = "Petal.Width")
which gives the following error :
"Error in eval(expr, envir, enclos) : object 'varName' not found"
Can anyone help? While debugging, if I initialize the passed parameters manually, the inner part of the function works, but the function itself doesn't work.
A:
You need to use aes_string because your variable is an input value of the function stored as a character value.
myHist <- function(data, varName = "") {
gp <- ggplot(data=data, aes_string(varName))
gp <- gp + geom_histogram(alpha = 1, aes(fill=..count..))
gp <- gp + labs(title = paste("Histogram for", varName, sep = " "))
gp <- gp + labs(x = varName, y = "N")
gp <- gp + scale_fill_gradient("", low = "blue", high = "red")
gp}
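With this change, the call from the question works as expected:
myHist(data = iris, varName = "Petal.Width")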
| {
"pile_set_name": "StackExchange"
} |
Q:
Probability of reaching node A from node B in exactly X steps
I have a three-node graph with two edges (A-B and A-C). I would like to determine the probability of starting from B and ending at C in exactly 100 steps.
I have only written out probabilities:
P(A|B) = 1
P(B|A) = 0.5
P(A|C) = 1
P(C|A) = 0.5
But there are so many combinations of ways to get from B to C in exactly 100 steps using these probabilities. Any suggestions on how to continue this problem?
A:
After an odd number of steps you must be at A and after an even number of steps you will be in either B or C, each with probability 0.5, therefore after 100 steps the probability of being in C is 0.5
Edit
More formally we can define a Markov chain with transition matrix:
$$ T = \begin{pmatrix} 0 & \tfrac{1}{2} & \tfrac{1}{2} \\ 1 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix} $$
Now we can compute $T^2$ and $T^3$ to show that for $n\ge 1$, $T^{2n-1}=T$:
$$ T^2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \tfrac{1}{2} & \tfrac{1}{2} \\ 0 & \tfrac{1}{2} & \tfrac{1}{2} \end{pmatrix} $$
$$ T^3 = \begin{pmatrix} 0 & \tfrac{1}{2} & \tfrac{1}{2} \\ 1 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix} = T $$
Therefore we calculate that $T^{100}=T^2$ and that $x_0 T^{100} = x_0 T^2$
$$ x_0 T^2 = \begin{pmatrix} 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & \tfrac{1}{2} & \tfrac{1}{2} \\ 0 & \tfrac{1}{2} & \tfrac{1}{2} \end{pmatrix} = \begin{pmatrix} 0 & \tfrac{1}{2} & \tfrac{1}{2} \end{pmatrix} $$
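If you want to sanity-check this numerically, a quick sketch in Python/NumPy:
import numpy as np

T = np.array([[0, 0.5, 0.5],
              [1, 0,   0  ],
              [1, 0,   0  ]])
x0 = np.array([0, 1, 0])                       # start at B
print(x0 @ np.linalg.matrix_power(T, 100))     # [0.  0.5 0.5]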
| {
"pile_set_name": "StackExchange"
} |
Q:
Cannot use `systemctl --user` due to "Failed to get D-bus connection: permission denied"
I'm trying to set up user-level services, using this answer to a similar question. I have create the required files and rebooted.
I'm making progress because I now get "Failed to get D-bus connection: permission denied" when it was "Failed to get D-bus connection: connection refused", but I'm stumped because I don't know what object it is trying to access (file? socket?) and so cannot even check current permissions. Any ideas?
So far I have added:
loginctl enable-linger userservice
/usr/lib/systemd/user/dbus.service (-rw-r--r-- root root)
[Unit]
Description=D-Bus User Message Bus
Requires=dbus.socket
[Service]
ExecStart=/usr/bin/dbus-daemon --session --address=systemd: --nofork --nopidfile --systemd-activation
ExecReload=/usr/bin/dbus-send --print-reply --session --type=method_call --dest=org.freedesktop.DBus / org.freedesktop.DBus.ReloadConfig
[Install]
Also=dbus.socket
/usr/lib/systemd/user/dbus.socket (-rw-r--r-- root root)
[Unit]
Description=D-Bus User Message Bus Socket
[Socket]
ListenStream=%t/bus
ExecStartPost=-/bin/systemctl --user set-environment DBUS_SESSION_BUS_ADDRESS=unix:path=%t/bus
[Install]
WantedBy=sockets.target
Also=dbus.service
/home/userservice/.config/systemd/user/userservice.service
[Unit]
Description=Test user-level service
[Service]
Type=dbus
BusName=com.wtf.service
ExecStart=/home/userservice/userservice.py
Restart=on-failure
[Install]
WantedBy=default.target
Not added any links elsewhere...
To make it fail:
systemctl --user status
Edit 2018-10-25:
Added export XDG_RUNTIME_DIR=/run/user/$(id -u) to .bashrc. The variable is set and now I get: Failed to get D-Bus connection: no such file or directory. Strangely, neither man systemctl nor systemctl --help mentions the --user option, while both mention --system and specify that it is the default (so what are the other options?).
Using RHEL 7.4 (with systemd 219 as reported by systemctl --version) with SELinux.
A:
So there's a long standing issue where the XDG_RUNTIME_DIR environment variable doesn't get set properly, or at all, when users log in, and therefore can't access the user D-Bus. This happens when the user logs in via some other method than the local graphical console.
You can work around this by adding to the user's $HOME/.bashrc:
export XDG_RUNTIME_DIR=/run/user/$(id -u)
Then log out and back in.
| {
"pile_set_name": "StackExchange"
} |
Q:
How can I generate a random percentage between a defined range in C#?
I'm brand new to C# and programming in general and I'm having trouble generating a random percentage from a defined range (i.e. between .5 and 1.0)
I've had success with a generating random int variables, however, my code for random percentages is not working:
Random r = new Random();
double percentage = r.Next(.5, 1.0);
WriteLine(percentage);
I'm expecting an output between .5 and 1, but I'm getting the following errors:
Compilation error (line 9, col 23): The best overloaded method match for 'System.Random.Next(int, int)' has some invalid arguments
Compilation error (line 9, col 30): Argument 1: cannot convert from 'double' to 'int'
Compilation error (line 9, col 34): Argument 2: cannot convert from 'double' to 'int'
Line 9 is referring to the last line of code I provided where the percentage variable is written. Thanks in advance for any help!
A:
r.Next() just gives you integers. But there is r.NextDouble(), but it gives you only a value between 0.0 and 1.0. So you need to put it into the desired range yourself:
double percentage = min + (max - min) * r.NextDouble();
with min = 0.5 and max = 1.0.
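Putting it all together (min and max taken from the range in the question):
Random r = new Random();
double min = 0.5, max = 1.0;
double percentage = min + (max - min) * r.NextDouble();
Console.WriteLine(percentage);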
| {
"pile_set_name": "StackExchange"
} |
Q:
Are there any tools to assist in creating server-side traces?
Defining a new server-side trace in SQL Server using the available stored procedures (sp_trace_create, sp_trace_setevent, sp_trace_setfilter) seems to be quite a tedious process.
I'm looking for a tool that provides a nice GUI (probably similar to that in SQL Server Profiler) to help with definition of traces, but for those on the server side rather than the client side.
Does such a tool exist? I suppose it would be very similar to Profiler, but would set a file path for the trace file, rather than having the results returned directly to the tool.
I want to do a server-side trace as this is a production server with a very high throughput of transactional data, and I am concerned about the potential effect on performance if I use Profiler.
A:
As mentioned in my comment, there is an option in the profiler app to export a trace definition. The details of doing so are outlined on technet here: http://technet.microsoft.com/en-us/library/cc293613.aspx
The main caveat is that you have to start the trace before it can be scripted out, but you should be able to do this on a server separate from the one you actually mean to run the server side version of the trace on.
| {
"pile_set_name": "StackExchange"
} |
Q:
Arbitrary vs. Random
I'm currently assisting a basic course where the students have to write some proofs. Most of them use terminology like "Let $x$ be a random integer", instead of "Let $x$ be an arbitrary integer" or plainly "Let $x$ be an integer". Am I doing the right thing in correcting them?
My point of view is that a random integer is an integer chosen by some kind of random distribution. In that interpretation, there's some truth in saying that "Let $x$ be a random integer, then $x$ is not equal to 25", since the probability might be zero for that to happen, and in that sense the randomly chosen integer is not going to be 25. Meanwhile the sentence "Let $x$ be an integer, then $x$ is not equal to 25" is blatantly false, since we can take $x=25$ deterministically (but definitely not randomly).
A:
I originally wrote a more ambivalent response, but thinking about it further I've changed my mind.
It's clear that the phrase "let $x$ be a random integer" is mathematically . . . bad. What is at question is whether:
it is misleading to the student,
it is worth correcting,
and as a bonus, whether it is worth penalizing when repeated.
I think the answer to (3) is no (unless one is in a class dealing with probability), and the answer to (2) is yes, since if nothing else explaining why the phrase is wrong lets you preemptively address some of the usual confusions around quantifiers (e.g. we're allowed to pick a number that happens to be a counterexample "out of a hat").
I think the answer to (1) (and here's where I've changed my mind) is "yes" - or rather, it is "yes" enough that we should treat it as "yes." I think this is a case where poor use of language early on could set the student up for more confusion down the road, even if they are not being confused by the phrase at the moment. (And this is generally an argument for helping students with language use in mathematics.)
That said, I still think the answer to (3) is no (again, unless the class is dealing with probability).
A:
In common parlance, random and arbitrary are often used interchangeably. A quick check of on-line dictionaries confirms that the semantic overlap is well established in spite of the different origins of the two words.
The fledgling proof-writers need to be made aware that this is not the case in math, with random being used when probabilities are involved. On the other hand, "Let $x$ be an arbitrary integer; then $P(x)$ holds" translates $\forall x \in \mathbb{Z} \,.\, P(x)$ into English.
Next, it would probably help the aforementioned fledglings if they were shown why the distinction is useful. One practical reason is simplicity. If one deals with an arbitrary integer $x$, all that is assumed is that $x \in \mathbb{Z}$. Could $x = 25$ be true? Of course! Could $x = 25$ be false? Certainly!
If, however, $x$ is a randomly chosen integer, not much may be said without knowing the distribution from which $x$ was drawn. The probability of $x = 25$ may be greater than $0$, and the distribution cannot be uniform if the sample space is countably infinite. Besides, as you may well know, zero probability doesn't mean impossible. By avoiding the use of random all these issues are sidestepped.
In more advanced courses, students will be able to appreciate more reasons for keeping random and arbitrary, as well as probabilistic and nondeterministic, distinct. But the example above should be enough to get them started. At any rate, in framing my feedback to students at their first attempts with proofs, I'd assume that they had the right concept in mind, but didn't pick the correct mathematical term to express it.
| {
"pile_set_name": "StackExchange"
} |
Q:
Custom Listbox Items
I just started my WP7 programming journey. I need to add 2 text lines as an item to the listbox.
What are my options?
I read about user control and some others. What is the easiest way to go about doing it?
A:
The easiest way to do this would be based on the code that comes in the default project types.
Create a new "DataBound" application and you'll see that it shows multiple lines per item using a DataTemplate for the ListBox.Item.
Update
To retrieve the selected item, cast its datacontext to whatever the object is on the viewmodel.
| {
"pile_set_name": "StackExchange"
} |
Q:
Canonical 'simple project' makefile
Your small C/C++ project has reached a point where it's no longer practical to have all your code in one file. You want to split out a few components. So you make a src/ directory, and then... you have to write a real Makefile. Something more than hello: hello.o. Uh-oh... was it $@ or $< or $^? Crap. You don't remember (I never do).
Do you have a 'one-size fits all' simple Makefile that can deal with straightforward source trees? If so, what's in it and why? I'm looking for the smallest, simplest Makefile that can compile a directory full of C files nicely without me having to edit the Makefile every time I add a file. Here's what I have so far:
CXX = clang++
CXXFLAGS = ...
LDFLAGS = ...
EXENAME = main
SRCS = $(wildcard src/*.cc)
OBJS = $(patsubst src%.cc,build%.o, $(SRCS))
all: $(EXENAME)
build/%.o: src/%.cc
@mkdir -p $(dir $@)
$(CXX) -c -o $@ $^ $(CXXFLAGS)
$(EXENAME): $(OBJS)
$(CXX) -o $@ $^ $(LDFLAGS)
clean:
rm -rf $(EXENAME) build/
This Makefile builds all the .cc files in the src/ directory into .o files in the build/ directory, then links them up into the parent directory.
What would you do differently?
A:
I would reconsider your decision not to have an explicit list of sources; I think it may cause you trouble in the long run. But if that's your decision, this makefile is pretty good.
In the %.o rule I would use $< instead of $^, so that later you can add dependencies like
build/foo.o: bar.h
And when you're ready, you can take a look at Advanced Auto-Dependency Generation.
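A minimal sketch of that approach on top of the makefile above: the -MMD -MP flags ask the compiler to emit .d dependency files alongside the objects, which make then includes if they exist.
CXXFLAGS += -MMD -MP
-include $(OBJS:.o=.d)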
| {
"pile_set_name": "StackExchange"
} |
Q:
Setting center and resolution on initialization of view results in center going to [NaN, NaN] on zooming with mouse
I'm using the following to initialize my view for the map in openlayers 4.
var olview = new ol.View({
center: ol.proj.fromLonLat(coords),
resolution: resolution,
minResolution: 0.025,
maxResolution: 2500,
projection: 'EPSG:3857'
});
The coords and resolution are passed as arguments to the function that creates the view. When zooming with the mouse, the center of the map goes to [NaN, NaN]. Using the plus/minus buttons or the animate function on the view works as expected, and after animating the view the zoom functionality works as expected.
A:
The cause was that the resolution was being passed as a string. Openlayers doesn't check types or convert them, so passing types that aren't the expected type in the documentation can cause things to break (here's an issue discussing it). Setting resolution to parseFloat(resolution) fixed the problem.
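Applied to the snippet in the question, that means:
var olview = new ol.View({
    center: ol.proj.fromLonLat(coords),
    resolution: parseFloat(resolution), // must be a number, not a string
    minResolution: 0.025,
    maxResolution: 2500,
    projection: 'EPSG:3857'
});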
| {
"pile_set_name": "StackExchange"
} |
Q:
How to return sender from Jquery autocomplete
I use Jquery Autocomplete in my mvc3 application. I have a lot of textbox, and i try to do it smart :)
I need to return autocomplete field property to controler. like that:
<script type="text/javascript" >
$(document).ready(function () {
$(".AutoC[id]").autocomplete('@Url.Action("Liczba_wejsc", "Home")', { minChars: 1, selectFirst: true, extraParams: { "ID": $(this).attr('id')} });
});
</script>
<div class="editor-field">
@Html.ValidationMessageFor(m => m.some_prop)
<br/>@Html.TextBoxFor(m => m.some_prop, new {
id = "some_id", @class = "AutoC" })
</div>
But I always get null.
A:
OK, I got it. The problem was that $(this) inside the document-ready handler refers to the document, not to the input element, so the ID was never sent; capturing the id inside .each() fixes it:
</script>
<script type="text/javascript" >
$(document).ready(function () {
$(".AutoC").each(function() {
var id = $(this).attr("id");
$(this).autocomplete('@Url.Action("Liczba_wejsc", "Home")', { minChars: 1, selectFirst: true, extraParams: { "ID": id} });
});
});
</script>
| {
"pile_set_name": "StackExchange"
} |
Q:
How do I run an OS X app built with xCode to see if dependencies work
I am working on an OS X app in xCode, and I was wondering what the easiest way was to test the app in a "clean" environment without all of my developer's frameworks. I need to know if the app will work AS IS on other machines.
A:
What I do for my own projects is I have a second hard drive attached to my Macintosh on which I have installed various (clean) versions of MacOS with no Xcode or extra frameworks hidden in /usr/local/lib, /usr/lib, etc.
That way you can try your app out in a relatively clean environment, and if your app writes lots of files or installs things in System places, it's easy to reinstall MacOS and get a pristine set-up again without messing up your main development hard drives.
| {
"pile_set_name": "StackExchange"
} |
Q:
How do I change the color of a nokia.maps.map.StandardMarker without creating a new object?
I have been trying things like the following:
laststmarker is a nokia.maps.map.StandardMarker
ncolor is a string = "#0000FF"
laststmarker.brush=ncolor;
laststmarker.brush="{color:'"+ncolor+"'}";
laststmarker.brush={color:ncolor};
and other things. How do I change the color without removing the marker and adding it again to the map?
A:
The important thing to note here is that the brush is immutable - that means that you can't update the parameter directly - you need to use the setter e.g. marker.set("brush" , { color :"#FF0000"}); - this is usually followed by map.update(-1,0); in order to refresh the map.
The example below highlights a marker when the mouse pointer hovers over it. You need to use your own app id and token to get it to work.
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1" />
<meta http-equiv="X-UA-Compatible" content="IE=7; IE=EmulateIE9" />
<base href="http://www.wrc.com/" />
<title>Highlighing a marker</title>
<meta name="description" content="" />
<meta name="keywords" content="" />
<meta name="viewport" content="initial-scale=1.0, user-scalable=no" />
<script language="javascript" src="http://api.maps.nokia.com/2.2.4/jsl.js" type="text/javascript" charset="utf-8"></script>
</head>
<body>
<p> Place your pointer over the marker to highlight it.</p>
<div id="gmapcanvas" style="width:600px; height:600px;" > </div><br/><br/>
<script type="text/javascript">
// <![CDATA[
/////////////////////////////////////////////////////////////////////////////////////
// Don't forget to set your API credentials
//
// Replace with your appId and token which you can obtain when you
// register on http://api.developer.nokia.com/
//
nokia.Settings.set( "appId", "YOUR APP ID GOES HERE");
nokia.Settings.set( "authenticationToken", "YOUR AUTHENTICATION TOKEN GOES HERE");
/////////////////////////////////////////////////////////////////////////////////////
map = new nokia.maps.map.Display(document.getElementById('gmapcanvas'), {
'components': [
// Behavior collection
new nokia.maps.map.component.Behavior() ],
'zoomLevel': 5, // Zoom level for the map
'center': [41.0125,28.975833] // Center coordinates
});
// Remove zoom.MouseWheel behavior for better page scrolling experience
map.removeComponent(map.getComponentById("zoom.MouseWheel"));
var normalMarker = new nokia.maps.map.StandardMarker(new nokia.maps.geo.Coordinate(41.0125,28.975833), {brush: {color: "#FF0000"}});
normalMarker.addListener("mouseover" , function(evt) {
normalMarker.set("brush" , { color :"#0000FF"});
map.update(-1,0);
}, false);
normalMarker.addListener("mouseout" , function(evt) {
normalMarker.set("brush" , { color :"#FF0000"});
map.update(-1,0);
}, false);
map.objects.add(normalMarker);
// ]]>
</script>
</body>
</html>
| {
"pile_set_name": "StackExchange"
} |
Q:
Is it possible that AD forest have different versions of AD on different sites?
As I plan the upcoming AD upgrade (from 2008R2 to 2019) I face another problem: we actually have an AD forest that consists of several sites, each of them on the 2008R2 AD version. As we upgrade one site's AD version from 2008R2 to 2019, the other sites will remain on 2008R2 for quite some time until we see that the 2019 schema works well and no problems appear. And now I doubt whether the whole forest will be functional and problem-free when one site is on the 2019 schema version and the other sites are on 2008R2.
So the question is: in practice (that is, in real life) will the forest still be functional and usable as one site (and, eventually, one by one all the other sites too) is upgraded, while the remaining sites are still on the lower AD version? In general, the forest is only needed for cross-site auth, so no sophisticated features are required, but I'd better ask before jumping into the water :)
A:
When you say sites I assume you mean locations, and not domains... meaning you have different locations in a single domain that have Domain Controllers running different Windows Server versions, and not that you have different Domains in the same Forest.
If it is the case that you have different sites in the same Domain that have Domain Controllers running different Windows Server versions then you can add new Domain Controllers as long as they are supported in your current Domain Functional Level and Forest Functional Level. All Domain Controllers will operate at one DFL and FFL regardless of their Windows Server version.
Long story short, can you have Windows Server 2019 Domain Controllers running with older Windows Server version Domain Controllers? Yes, so long as you meet the requirements..
https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/active-directory-functional-levels
| {
"pile_set_name": "StackExchange"
} |
Q:
UDP socket errno in iOS
I am creating a UDP socket, and attempting to send to an existing server in the code below:
struct sockaddr_in servAddr;
memset(&servAddr, 0, sizeof(servAddr));
servAddr.sin_family = AF_INET;
servAddr.sin_addr.s_addr = inet_addr(SERVER IP ADDRESS GOES HERE);
servAddr.sin_port = htons(port);
int testSock = socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP);
unsigned char byteData;
int sent;
unsigned int servSize = sizeof(servAddr);
if((sent = sendto(testSock, &byteData, 1, 0, (struct sockaddr *)&servAddr, (socklen_t)&servSize)) < 0){
NSLog(@"Error sending to server: %d %d", errno, sent);
}
Every time "sendto" returns -1, and errno is set to 63. I have never encountered this error before.
I can say with complete confidence that there is nothing wrong with the server, or the IP address or port provided. It has to be client-side.
A:
63 is 'filename too long'. In this case it is the sockaddr that appears too long to the kernel, and that is because you are passing a pointer as the length, instead of the actual length. The final parameter to sendto() isn't a pointer, it is a value. Remove the '&'.
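A corrected call, keeping the rest of the question's setup (note the last argument is the size value itself, not a pointer to it):
sent = sendto(testSock, &byteData, 1, 0, (struct sockaddr *)&servAddr, sizeof(servAddr));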
| {
"pile_set_name": "StackExchange"
} |
Q:
Comparison of custom data type with parameters
I am learning Haskell and trying to implement this program. I have a custom data type
data CalculatorInput
= Exit
| Error String
| Operator (Int -> Int -> Int)
| Number Int
then I have a method getInput which returns a value of this type.
Now I am confused about how to dispatch on values of this type. I have a method
simpleCalculator :: Int -> (Int -> Int -> Int) -> IO ()
simpleCalculator ans op = do
input <- getInput -- here i got the type as result
if input == Exit then return()
else if input == Number x then ans = ans op x; print ans
else simpleCalculator ans op
I want to know whether the input is a Number x
I tried to use case as well:
simpleCalculator :: Int -> (Int -> Int -> Int) -> IO ()
simpleCalculator ans op = do
input <- getInput -- here i got the type as result
--case input of
-- Exit -> return ()
-- Error x -> print x
-- Number n -> ans = ans op x; print ans -- getting error here, multiple statement not possible
-- _ -> simpleCalculator ans op
I tried to create instance of Eq as well
instance Eq CalculatorInput where
(==) Exit Exit = True
(==) (Number x) (Number y) = x == y
(==) _ _ = False
How can I compare custom data types with parameters or have multiple statements in a case branch?
A:
You're almost on the right track with your non-working code:
simpleCalculator :: Int -> (Int -> Int -> Int) -> IO ()
simpleCalculator ans op = do
input <- getInput -- here i got the type as result
case input of
Exit -> return ()
Error x -> print x
Number n -> ans = ans op x; print ans
_ -> simpleCalculator ans op
You can nest do notations allowing you to write the following correct program:
simpleCalculator :: Int -> (Int -> Int -> Int) -> IO ()
simpleCalculator ans op = do
input <- getInput -- here i got the type as result
case input of
Exit -> return ()
Error x -> print x
Number n -> do
    let theAns = op ans n
    print theAns
_ -> simpleCalculator ans op
As for the Eq instance, derivation (adding deriving Eq to the data declaration) would normally let the compiler do the work for you, but it only compiles when every constructor field is itself an instance of Eq. Functions such as the Int -> Int -> Int inside Operator have no Eq instance, so for this particular type your hand-written instance is the right approach.
| {
"pile_set_name": "StackExchange"
} |
Q:
Interpreting multinomial logistic regression in scikit-learn
I am running a multinomial logistic regression for a classification problem involving 6 classes and four features.
Here is the code:
from sklearn.linear_model import LogisticRegression
from sklearn.cross_validation import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.20)
logreg = LogisticRegression(multi_class = 'multinomial', solver = 'newton-cg')
logreg = logreg.fit(X_train, Y_train)
output2 = logreg.predict(X_test)
logreg.intercept_
logreg.coef_
logreg.classes_
And I get the following output:
Intercept
array([-1.33803785, -1.55807614, -1.63809549, -0.05199907, 3.72777888, 0.85842968])
Coefficients
array([[ 3.59830486, 5.1370334 , 1.32336325, 4.89734568],
[ 3.5507364 , 5.2459697 , 1.48523684, 4.81653704],
[ 3.35193267, 5.40124363, 2.04869296, 3.885547 ],
[ -5.4930705 , 5.49483357, 1.96479926, -6.7624365 ],
[ -8.61513183, -3.77761893, -7.79363153, -11.72171457],
[ 3.6072284 , -17.50146139, 0.97153921, 4.88472135]])
Classes
array([u'Dropper', u'Flat', u'Grower', u'New User', u'Non User', u'Stopper'], dtype=object)
I am not able to interpret the models. As I understand multinomial logistic regression, for K possible outcomes, running K-1 independent binary logistic regression models, in which one outcome is chosen as a "pivot" and then the other K-1 outcomes are separately regressed against the pivot outcome.
As per this, there must be 5 equations for the 6 classes. But here there are 6. How come?
A:
As the probabilities of each class must sum to one, we can either define n-1 independent coefficient vectors, or n coefficient vectors that are linked by the equation \sum_c p(y=c) = 1.
The two parametrizations are equivalent.
See also in Wikipedia Multinomial logistic regression - As a log-linear model.
For a class c, we have a probability P(y=c) = e^{b_c.X} / Z, with Z a normalization that accounts for the equation \sum_c P(y=c) = 1.
These probabilities are the expected probabilities of a class given the coefficients. They can be computed with predict_proba
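For instance, with the model fitted in the question (a quick sketch):
probs = logreg.predict_proba(X_test)  # shape (n_samples, 6); each row sums to 1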
To have better insight of the coefficients, please consider the left plot in this example.
example http://scikit-learn.org/dev/_images/plot_logistic_multinomial_001.png
In this example there are 3 classes a, b, c and 2 features x0, x1. The class is noted y.
After the fit of a multinomial logistic regression, each class has a coefficient vector C with 2 components (for the 2 features): (C_a0, C_a1), (C_b0, C_b1), (C_c0, C_c1)
There is also an intercept (aka bias) I for each class, which is always one-dimensional: I_a, I_b, I_c
The dash line represents the hyperplane defined by C and I:
example: for class a, the hyperplane is defined by the equation x0 * C_a0 + x1 * C_a1 + I_a = 0
This is the hyperplane where P(y=a) = e^{x0 * C_a0 + x1 * C_a1 + I_a} / Z = 1 / Z.
If C_a0 is positive, when x0 increases P(y=a) increases.
If C_a0 is negative, when x0 increases P(y=a) decreases.
However this is not the decision boundary.
The decision boundary between classes a and b is defined by the equation:
p(y=a) = p(y=b) which is e^{x0 * C_a0 + x1 * C_a1 + I_a} = e^{x0 * C_b0 + x1 * C_b1 + I_b}
or again x0 * C_a0 + x1 * C_a1 + I_a = x0 * C_b0 + x1 * C_b1 + I_b.
This boundary hyperplane is visible in the plot by the background colors.
If C_a0 - C_b0 is positive, when x0 increases P(y=a) / P(y=b) increases.
If C_a0 - C_b0 is negative, when x0 increases P(y=a) / P(y=b) decreases.
| {
"pile_set_name": "StackExchange"
} |
Q:
Changing keyboard layout on application focus
As everybody knows, the en-US keyboard layout is the best one for programming, so I'd like to use it in my IDEs. But since I live in a non-en-US country I need the de-CH layout for all other applications. Now I wonder if it is possible to set the layout depending on which application currently has the focus. If that is possible, can a human brain adapt to such behaviour, or is it just confusing?
cheers,
AC
The operating system is Windows 7 and the IDEs are VisualStudio and Netbeans
A:
I thought about the same question some time ago and haven't found an easy solution, so I changed the layout of my PC (where I do mostly programming) to en-US and left my laptop on de-DE (I'm from Germany). After almost a week I changed my PC back to de-DE because I was confused all the time...
| {
"pile_set_name": "StackExchange"
} |
Q:
How to give restricted access to user
I have a home PC and I created a reverse port forwarding to a server. Now I would like to give some people access to the home PC through this server. I would like to control the access of the user on this server, so I added the following lines to /etc/ssh/sshd_config:
Match User restricteduser
ChrootDirectory /home/restricteduser
AllowAgentForwarding no
PermitOpen localhost:3333
but when I'm trying to connect to the server
ssh restricteduser@serverIP
restricteduser@serverIP's password:
I'm getting the following error:
Write failed: Connection reset by peer
A:
The logfiles for the ssh daemon should give you specific information on what's happening here. Check /var/log/auth.log.
However, I suspect that the ChrootDirectory is what is causing problems.
When restricteduser logs in, the ssh daemon tries to chroot to /home/restricteduser and start restricteduser's shell (probably /bin/bash). Because it's chrooted, the ssh daemon will be looking for /home/restricteduser/bin/bash.
Additionally, any libraries needed by the shell need to be present in the chroot (check with ldd /bin/bash), and the same applies to any files that the shell expects to be available when started. If the ssh daemon itself needs access to files, they will need to be present too.
If restricteduser is to run any programs once logged-in, they'll need to be in the chroot too, as well as their dependent libraries/files.
This can get quite complex. If you're simply looking to provide port-forwarding, check out the answer to How to create a restricted SSH user for port forwarding?
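If all the account needs is the forward itself, a common pattern (a sketch; adapt it to your setup, and note that ForceCommand only fires when a shell or command is requested, so clients should connect with ssh -N) is to drop the chroot and restrict the account to forwarding:
Match User restricteduser
    AllowTcpForwarding yes
    PermitOpen localhost:3333
    ForceCommand echo 'This account is limited to port forwarding'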
| {
"pile_set_name": "StackExchange"
} |
Q:
How can I merge several hashes into one hash in Perl?
In Perl, how do I get this:
$VAR1 = { '999' => { '998' => [ '908', '906', '0', '998', '907' ] } };
$VAR1 = { '999' => { '991' => [ '913', '920', '918', '998', '916', '919', '917', '915', '912', '914' ] } };
$VAR1 = { '999' => { '996' => [] } };
$VAR1 = { '999' => { '995' => [] } };
$VAR1 = { '999' => { '994' => [] } };
$VAR1 = { '999' => { '993' => [] } };
$VAR1 = { '999' => { '997' => [ '986', '987', '990', '984', '989', '988' ] } };
$VAR1 = { '995' => { '101' => [] } };
$VAR1 = { '995' => { '102' => [] } };
$VAR1 = { '995' => { '103' => [] } };
$VAR1 = { '995' => { '104' => [] } };
$VAR1 = { '995' => { '105' => [] } };
$VAR1 = { '995' => { '106' => [] } };
$VAR1 = { '995' => { '107' => [] } };
$VAR1 = { '994' => { '910' => [] } };
$VAR1 = { '993' => { '909' => [] } };
$VAR1 = { '993' => { '904' => [] } };
$VAR1 = { '994' => { '985' => [] } };
$VAR1 = { '994' => { '983' => [] } };
$VAR1 = { '993' => { '902' => [] } };
$VAR1 = { '999' => { '992' => [ '905' ] } };
to this:
$VAR1 = { '999:' => [
{ '992' => [ '905' ] },
{ '993' => [
{ '909' => [] },
{ '904' => [] },
{ '902' => [] }
] },
{ '994' => [
{ '910' => [] },
{ '985' => [] },
{ '983' => [] }
] },
{ '995' => [
{ '101' => [] },
{ '102' => [] },
{ '103' => [] },
{ '104' => [] },
{ '105' => [] },
{ '106' => [] },
{ '107' => [] }
] },
{ '996' => [] },
{ '997' => [ '986', '987', '990', '984', '989', '988' ] },
{ '998' => [ '908', '906', '0', '998', '907' ] },
{ '991' => [ '913', '920', '918', '998', '916', '919', '917', '915', '912', '914' ] }
]};
A:
I think this is closer than anybody else has gotten:
This does most of what you want. I did not store things in arrays of singular
hashes, as I don't feel that that is useful.
Your scenario is not a regular one. I've tried to genericize this to some extent,
but was not possible to overcome the singularity of this code.
First of all because it appears you want to collapse everything with the same
id into a merged entity (with exceptions), you have to descend through the structure
pulling the definitions of the entities. Keeping track of levels, because you
want them in the form of a tree.
Next, you assemble the ID table, merging entities as possible. Note that you
had 995 defined as an empty array one place and as a level another. So given
your output, I wanted to overwrite the empty list with the hash.
After that, we need to move the root to the result structure, descending that in order
to assign canonical entities to the identifiers at each level.
Like I said, it's not anything that regular. Of course, if you still want a list
of hashes which are no more than pairs, that's an exercise left to you.
use strict;
use warnings;
# subroutine to identify all elements
sub descend_identify {
my ( $level, $hash_ref ) = @_;
# return an expanding list that gets populated as we desecend
return map {
my $item = $hash_ref->{$_};
$_ => ( $level, $item )
, ( ref( $item ) eq 'HASH' ? descend_identify( $level + 1, $item )
: ()
)
;
} keys %$hash_ref
;
}
# subroutine to refit all nested elements
sub descend_restore {
my ( $hash, $ident_hash ) = @_;
my @keys = keys %$hash;
@$hash{ @keys } = @$ident_hash{ @keys };
foreach my $h ( grep { ref() eq 'HASH' } values %$hash ) {
descend_restore( $h, $ident_hash );
}
return;
}
# merge hashes, descending down the hash structures.
sub merge_hashes {
my ( $dest_hash, $src_hash ) = @_;
foreach my $key ( keys %$src_hash ) {
if ( exists $dest_hash->{$key} ) {
my $ref = $dest_hash->{$key};
my $typ = ref( $ref );
if ( $typ eq 'HASH' ) {
merge_hashes( $ref, $src_hash->{$key} );
}
else {
push @$ref, $src_hash->{$key};
}
}
else {
$dest_hash->{$key} = $src_hash->{$key};
}
}
return;
}
my ( %levels, %ident_map, %result );
#descend through every level of hash in the list
# @hash_list is assumed to be whatever you Dumper-ed.
my @pairs = map { descend_identify( 0, $_ ); } @hash_list;
while ( @pairs ) {
my ( $key, $level, $ref ) = splice( @pairs, 0, 3 );
$levels{$key} |= $level;
# if we already have an identity for this key, merge the two
if ( exists $ident_map{$key} ) {
my $oref = $ident_map{$key};
my $otyp = ref( $oref );
if ( $otyp ne ref( $ref )) {
# empty arrays can be overwritten by hashrefs -- per 995
if ( $otyp eq 'ARRAY' && @$oref == 0 && ref( $ref ) eq 'HASH' ) {
$ident_map{$key} = $ref;
}
else {
die "Uncertain merge for '$key'!";
}
}
elsif ( $otyp eq 'HASH' ) {
merge_hashes( $oref, $ref );
}
else {
@$oref = sort { $a <=> $b || $a cmp $b } keys %{{ @$ref, @$oref }};
}
}
else {
$ident_map{$key} = $ref;
}
}
# Copy only the keys that do not appear at higher levels to the
# result hash
if ( my @keys = grep { !$levels{$_} } keys %ident_map ) {
@result{ @keys } = @ident_map{ @keys } if @keys;
}
# then step through the hash to make sure that the entries at
# all levels are equal to the identity
descend_restore( \%result, \%ident_map );
| {
"pile_set_name": "StackExchange"
} |
Q:
Fixed length for buttons in css
Can someone help me with how to give all buttons a fixed width using CSS? Please find the jsFiddle link for the same. I have tried my best but am unable to get a clue. Thanks in advance.
HTML:
<div id="container">
<div class="button-row width">
<a href="#" class="button rounded red effect-3 width">Menu Organizer</a>
</div>
<div class="button-row">
<a href="#" class="button rounded red effect-3 width" name="menu2">Place Order</a>
</div>
<div class="button-row-submenu width">
<a href="#" class="subbutton shape-3 red effect-3">Category</a>
</div>
<div class="button-row-submenu">
<a href="#" class="subbutton shape-4 red effect-3">Building</a>
</div>
<div class="button-row">
<a href="#" class="button rounded red effect-3">User Preference</a>
</div>
<div class="button-row">
<a href="#" class="button rounded red effect-3">Ok</a>
</div>
</div>
CSS:
/* some styles */
div#container {
width: 800px;
margin: 50px auto;
}
div.button-row {
margin: 20px 0;
text-align: left;
}
/* button */
.button {
font-family: Helvetica, Arial, sans-serif;
font-size: 15px;
font-weight: bold;
width: 200px;
color: #FFFFFF;
padding: 6px 50px;
margin: 0 20px;
text-shadow: 2px 2px 1px #595959;
filter: dropshadow(color=#595959, offx=1, offy=1);
text-decoration: none;
}
/* button shapes */
.rounded {
-webkit-border-radius: 50px;
-moz-border-radius: 50px;
border-radius: 50px;
}
/* button colors */
.red {
border: solid 1px #720000;
background-color: #c72a2a;
background: -moz-linear-gradient(top, #c72a2a 0%, #9e0e0e 100%);
background: -webkit-linear-gradient(top, #c72a2a 0%, #9e0e0e 100%);
background: -o-linear-gradient(top, #c72a2a 0%, #9e0e0e 100%);
background: -ms-linear-gradient(top, #c72a2a 0% ,#9e0e0e 100%);
filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#9e0e0e', endColorstr='#9e0e0e',GradientType=0 );
background: linear-gradient(top, #c72a2a 0% ,#9e0e0e 100%);
-webkit-box-shadow: 0px 0px 1px #FF3300, inset 0px 0px 1px #FFFFFF;
-moz-box-shadow: 0px 0px 1px #FF3300, inset 0px 0px 1px #FFFFFF;
box-shadow: 0px 0px 1px #FF3300, inset 0px 0px 1px #FFFFFF;
}
.red:hover {
background-color: #b52f2f;
background: -moz-linear-gradient(top, #b52f2f 0%, #910b0b 100%);
background: -webkit-linear-gradient(top, #b52f2f 0%, #910b0b 100%);
background: -o-linear-gradient(top, #b52f2f 0%, #910b0b 100%);
background: -ms-linear-gradient(top, #b52f2f 0% ,#910b0b 100%);
filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#910b0b', endColorstr='#910b0b',GradientType=0 );
background: linear-gradient(top, #b52f2f 0% ,#910b0b 100%);
}
.red:active {
background-color: #8f2222;
background: -moz-linear-gradient(top, #8f2222 0%, #660808 100%);
background: -webkit-linear-gradient(top, #8f2222 0%, #660808 100%);
background: -o-linear-gradient(top, #8f2222 0%, #660808 100%);
background: -ms-linear-gradient(top, #8f2222 0% ,#660808 100%);
filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#660808', endColorstr='#660808',GradientType=0 );
background: linear-gradient(top, #8f2222 0% ,#660808 100%);
}
/* button effects */
.effect-3 {
transition: border-radius 1s;
-webkit-transition: border-radius 1s;
-moz-transition: border-radius 1s;
-o-transition: border-radius 1s;
-ms-transition: border-radius 1s;
}
.effect-3:hover {
border-radius: 5px;
-webkit-border-radius: 5px;
-moz-border-radius: 5px;
}
.width{
width: 550px;
}
A:
You can't set widths on inline elements. Solution: give them display:inline-block. Inline-blocks have widths.
/* some styles */
body {
background: url(../images/bg1.jpg) repeat;
}
div#container {
width: 800px;
margin: 50px auto;
}
div.button-row {
margin: 20px 0;
text-align: left;
}
div.button-row-submenu {
margin: 15px 50px;
text-align: left;
}
/* button */
.button {
display: inline-block;
font-family: Helvetica, Arial, sans-serif;
font-size: 15px;
font-weight: bold;
width: 200px;
color: #FFFFFF;
padding: 6px 50px;
margin: 0 20px;
text-shadow: 2px 2px 1px #595959;
filter: dropshadow(color=#595959, offx=1, offy=1);
text-decoration: none;
}
.subbutton {
font-family: Helvetica, Arial, sans-serif;
font-size: 13px;
font-weight: bold;
color: #FFFFFF;
padding: 5px 30px;
margin: 0 20px;
text-shadow: 2px 2px 1px #595959;
filter: dropshadow(color=#595959, offx=1, offy=1);
text-decoration: none;
}
/* button shapes */
.rounded {
-webkit-border-radius: 50px;
-moz-border-radius: 50px;
border-radius: 50px;
}
.shape-3 {
-webkit-border-radius: 40px 40px 5px 5px;
border-radius: 40px 40px 5px 5px;
-moz-border-radius-topleft: 40px;
-moz-border-radius-topright: 40px;
-moz-border-radius-bottomleft: 5px;
-moz-border-radius-bottomright: 5px;
}
.shape-4 {
-webkit-border-radius: 5px 5px 40px 40px;
border-radius: 5px 5px 40px 40px;
-moz-border-radius-topleft: 5px;
-moz-border-radius-topright: 5px;
-moz-border-radius-bottomleft: 40px;
-moz-border-radius-bottomright: 40px;
}
/* button colors */
.red {
border: solid 1px #720000;
background-color: #c72a2a;
background: -moz-linear-gradient(top, #c72a2a 0%, #9e0e0e 100%);
background: -webkit-linear-gradient(top, #c72a2a 0%, #9e0e0e 100%);
background: -o-linear-gradient(top, #c72a2a 0%, #9e0e0e 100%);
background: -ms-linear-gradient(top, #c72a2a 0%, #9e0e0e 100%);
filter: progid: DXImageTransform.Microsoft.gradient(startColorstr='#9e0e0e', endColorstr='#9e0e0e', GradientType=0);
background: linear-gradient(top, #c72a2a 0%, #9e0e0e 100%);
-webkit-box-shadow: 0px 0px 1px #FF3300, inset 0px 0px 1px #FFFFFF;
-moz-box-shadow: 0px 0px 1px #FF3300, inset 0px 0px 1px #FFFFFF;
box-shadow: 0px 0px 1px #FF3300, inset 0px 0px 1px #FFFFFF;
}
.red:hover {
background-color: #b52f2f;
background: -moz-linear-gradient(top, #b52f2f 0%, #910b0b 100%);
background: -webkit-linear-gradient(top, #b52f2f 0%, #910b0b 100%);
background: -o-linear-gradient(top, #b52f2f 0%, #910b0b 100%);
background: -ms-linear-gradient(top, #b52f2f 0%, #910b0b 100%);
filter: progid: DXImageTransform.Microsoft.gradient(startColorstr='#910b0b', endColorstr='#910b0b', GradientType=0);
background: linear-gradient(top, #b52f2f 0%, #910b0b 100%);
}
.red:active {
background-color: #8f2222;
background: -moz-linear-gradient(top, #8f2222 0%, #660808 100%);
background: -webkit-linear-gradient(top, #8f2222 0%, #660808 100%);
background: -o-linear-gradient(top, #8f2222 0%, #660808 100%);
background: -ms-linear-gradient(top, #8f2222 0%, #660808 100%);
filter: progid: DXImageTransform.Microsoft.gradient(startColorstr='#660808', endColorstr='#660808', GradientType=0);
background: linear-gradient(top, #8f2222 0%, #660808 100%);
}
/* button effects */
.effect-3 {
transition: border-radius 1s;
-webkit-transition: border-radius 1s;
-moz-transition: border-radius 1s;
-o-transition: border-radius 1s;
-ms-transition: border-radius 1s;
}
.effect-3:hover {
border-radius: 5px;
-webkit-border-radius: 5px;
-moz-border-radius: 5px;
}
.width {
width: 550px;
}
<div id="container">
<div class="button-row width">
<a href="#" class="button rounded red effect-3 width">Menu Organizer</a>
</div>
<div class="button-row">
<a href="#" class="button rounded red effect-3 width" name="menu2">Place Order</a>
</div>
<div class="button-row-submenu width">
<a href="#" class="subbutton shape-3 red effect-3">Category</a>
</div>
<div class="button-row-submenu">
<a href="#" class="subbutton shape-4 red effect-3">Building</a>
</div>
<div class="button-row">
<a href="#" class="button rounded red effect-3">User Preference</a>
</div>
<div class="button-row">
<a href="#" class="button rounded red effect-3">Ok</a>
</div>
</div>
| {
"pile_set_name": "StackExchange"
} |
Q:
Error with random Url image gallery View
I have made a random image gallery view with the Android Touch Gallery, but I want it to show random images. I have tried to generate a link with a random number, but I cannot run it and I have no idea how to solve this.
Please help.
Activity:
package com.ddd.fun1234;
import java.net.MalformedURLException;
import java.net.URL;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;
import ru.truba.touchgallery.GalleryWidget.GalleryViewPager;
import ru.truba.touchgallery.GalleryWidget.UrlPagerAdapter;
import ru.truba.touchgallery.GalleryWidget.BasePagerAdapter.OnItemChangeListener;
import android.app.Activity;
import android.graphics.Bitmap;
import android.os.Bundle;
public class GalleryUrlAvtivity extends Activity {
private GalleryViewPager mViewPager;
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
}
public Bitmap GetImage() {
Random rn = new Random();
int n = 200000 - 199000 + 1;
int i = rn.nextInt() % n;
URL tempURL = null;
try {
tempURL = new URL("http://miniz.co/RageToonApp/Images/" + rn + ".jpg");
} catch (MalformedURLException e1) {
e1.printStackTrace();
}
List<String> items = new ArrayList<String>();
Collections.addAll(items, tempURL);
UrlPagerAdapter pagerAdapter = new UrlPagerAdapter(this, items);
pagerAdapter.setOnItemChangeListener(new OnItemChangeListener()
{
@Override
public void onItemChange(int currentPosition)
{
}
});
mViewPager = (GalleryViewPager)findViewById(R.id.viewer);
mViewPager.setOffscreenPageLimit(3);
mViewPager.setAdapter(pagerAdapter);
}
}
The Collections.addAll call fails. What can I use instead of it?
If you have an idea of what I can do, or a good alternative, please write it.
Daniel
A:
The problem is in these lines:
Random rn = new Random();
...
int i = rn.nextInt() % n;
...
tempURL = new URL("http://miniz.co/RageToonApp/Images/" + rn + ".jpg");
You are calculating i, but you're using rn in the URL instead. This implicitly calls Random.toString(), which produces characters (in this case an @) that are illegal in URLs.
To fix this you should change the last line to:
tempURL = new URL("http://miniz.co/RageToonApp/Images/" + i + ".jpg");
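As an aside, the Collections.addAll(items, tempURL) line from the question won't compile either, because items is a List<String> and tempURL is a URL. Adding the string form directly works:
items.add(tempURL.toString());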
| {
"pile_set_name": "StackExchange"
} |
Q:
Is there an alternative to Dropbox which has an smaller CPU overhead
Ok, this is not necessarily strictly Mac, however it does affect my Macs differently.
I have a 5-year-old iMac which is OK with Dropbox running all the time, while my brand new MBP (2017) struggles with Dropbox's CPU overhead ("struggles" is an exaggeration, but it does get the fan running at full blast most of the time). I'm a heavy user on the higher plan, which means some files are constantly being modified, yet I find it extremely annoying.
Is there an alternative which uses less resources?
A:
This annoyed me too so I built Rsync Client. It's like an FTP app but uses rsync, and it only runs when you invoke it, so it's not relentlessly chewing resources. It works with many backend storage service providers, so you control where your data goes.
| {
"pile_set_name": "StackExchange"
} |
Q:
TensorFlow: Classify multiple images from a folder
The code I previously wrote works well on a single image, but now I want the program to run on multiple images; I just need to give the folder name as an argument.
I modified my code so that it opens a directory and loads the images, but I get an error
ValueError: cannot reshape array of size 98304 into shape
(1,128,128,3)
on x_batch=images.reshape(1,128,128,3):
images = []
for filename in os.listdir(folder):
    image = cv2.imread(os.path.join(folder,filename))
    image = cv2.resize(image, (128, 128))
    images = np.append(images, image)
    images = np.array(images, dtype=np.uint8)
    images = images.astype('float32')
    images = np.multiply(images, 1.0/255.0)
    x_batch=images.reshape(1,128,128,3) <------ ERROR HERE
    sess = tf.Session()
    saver = tf.train.import_meta_graph('/home/kubuntu/SimpleCode/.meta')
    saver.restore(sess, tf.train.latest_checkpoint('./'))
    graph = tf.get_default_graph()
    y_pred = graph.get_tensor_by_name("y_pred:0")
    x = graph.get_tensor_by_name("x:0")
    y_true = graph.get_tensor_by_name("y_true:0")
    y_test_images = np.zeros((1, 6))
    feed_dict_testing = {x: x_batch, y_true: y_test_images}
    result = sess.run(y_pred, feed_dict=feed_dict_testing)
    print("Up :"+str(result[0][0]))
    print("Down :"+str(result[0][1]))
    print("Left :"+str(result[0][2]))
    print("Right :"+str(result[0][3]))
    print("Forward :"+str(result[0][4]))
    print("Backward :"+str(result[0][5]))
Is this the correct way of reading images from a folder? How do I classify all the images inside a given folder and get the prediction for each image?
A:
Based on your answers you should do the following:
for filename in os.listdir(folder):
    image = cv2.imread(os.path.join(folder,filename))
    image = cv2.resize(image, (128, 128))
    image = np.array(image, dtype=np.uint8)
    image = image.astype('float32')
    image = np.multiply(image, 1.0/255.0)
    x_batch = image.reshape(1,128,128,3)
The code failed when you were reading the second image, because the images array had two images appended and you were trying to reshape it as if there were only one image.
Besides, it's extremely bad practice to iteratively create a tf.Session in a for loop and load the graph every time. I would change the whole code in the following way:
with tf.Session() as sess:
    saver = tf.train.import_meta_graph('/home/kubuntu/SimpleCode/.meta')
    saver.restore(sess, tf.train.latest_checkpoint('./'))
    graph = tf.get_default_graph()
    y_pred = graph.get_tensor_by_name("y_pred:0")
    x = graph.get_tensor_by_name("x:0")
    y_true = graph.get_tensor_by_name("y_true:0")
    y_test_images = np.zeros((1, 6))
    for filename in os.listdir(folder):
        image = cv2.imread(os.path.join(folder,filename))
        image = cv2.resize(image, (128, 128))
        image = np.array(image, dtype=np.uint8)
        image = image.astype('float32')
        image = np.multiply(image, 1.0/255.0)
        x_batch = image.reshape(1,128,128,3)
        feed_dict_testing = {x: x_batch, y_true: y_test_images}
        result = sess.run(y_pred, feed_dict=feed_dict_testing)
        print("Up :"+str(result[0][0]))
        print("Down :"+str(result[0][1]))
        print("Left :"+str(result[0][2]))
        print("Right :"+str(result[0][3]))
        print("Forward :"+str(result[0][4]))
        print("Backward :"+str(result[0][5]))
| {
"pile_set_name": "StackExchange"
} |
Q:
How to use MsTest in Continuous Integration without VS?
My problem is quite simple, I have a CI server which runs msbuild and mstest.
The problem is that the Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll doesn't exist (and I thinks other files related to mstest...) if we don't install VS in the server which is pretty stupid for a CI server...
How can I fix this?
A:
Unfortunately, there is no supported or easy way around having to install VS on the build agent machine in 2005 or 2008 (There will be a test agent installer in 2010). UPDATE: See this post from Steve Smith for more info which says pretty much the same thing
It's not just a matter of the assemblies being missing - if you want to run the tests, the runner is not just a separate little EXE and a DLL.
Yes, hard to believe! Needless to say, very few other test frameworks on the planet have this restriction, so unless you have a lot of tests, you could consider moving, for a variety of reasons which are covered in many places, for example: The fundamental problems and impracticality of using MSTest...
EDIT: Prompted by Rihan's reply, I binged up the following Running mstest without Visual Studio. - It's not fully supported, but it 'works'...
EDIT 2: Running MSTest without Visual Studio - Gallio to the rescue looks a lot more promising in terms of being supported and non-hacky. NB see @Johannes Rudolph's comment on this post tho'
EDIT 3: Added info re 2010 status on this question
NOTE: I have a similar question for 2008 regarding what's required to support the /publish parameter of MSTest:- Running MSTEST.exe /publish on a TeamBuild server, what are the prerequisites?
A:
@Ruben Bartelink: You can get mstest.exe on your machine by installing the test agents; you can find them at the link below. After installation, invoke mstest.exe with the /testcontainer and /test options. It runs successfully and creates .trx files. Then look for something that can process .trx files and generate reports.
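For example, an invocation along these lines (the container and test names here are hypothetical):
mstest.exe /testcontainer:MyTests.dll /test:MyTestMethod /resultsfile:results.trx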
http://www.microsoft.com/en-us/download/details.aspx?id=38186
| {
"pile_set_name": "StackExchange"
} |
Q:
Can torches cause fire?
I was just about to dig a new basement under my wooden home when I suddenly noticed that the ceiling was on fire (long story short, not even the cockroaches survived). The only thing I did was place a torch underneath a wooden block, as I've done many times before without such devastating effects.
So I wonder, can torches ignite fires? If yes, always or only under certain conditions?
A:
According to Minepedia:
Fire is a block that was first seen in Indev. It has an animated face on all four sides, and two faces on the inside at slants. Fire is not created naturally in a map, with the exception being in The Nether, it will only be created if the player or a Ghast causes it or Lava burns a flammable object.
At no point does the minepedia describe a torch causing a fire, suggesting that no one else has seen this happen.
Could it be that your roof is near to a lava flow?
Or, you've accidentally used a flint and steel on the wood?
A:
Torches can't cause a fire.
Are there burned trees in the neighbourhood? A forest fire might be the cause.
Did you hear a loud "flinging" sound? You could've been unlucky enough to have a ghast around.
A:
No, torches cannot start a fire. Period.
| {
"pile_set_name": "StackExchange"
} |
Q:
Why won't my ArrayList initialize properly?
I am having frustrating trouble getting my ArrayList to initialize. I am getting an error at the line binaryTreeList.set(1, root); saying
Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 1, Size: 0
at java.util.ArrayList.rangeCheck(Unknown Source)
at java.util.ArrayList.set(Unknown Source)
at BinaryTreeADT.<init>(BinaryTreeADT.java:18)
at Driver.main(Driver.java:7)
I'm trying to implement a simple binary tree using an ArrayList and I'd like the "root" element to be at ArrayList position 1. For some reason, the size of binaryTreeList is not growing, despite my attempt to populate it with nodes.
Here is my code in order of Driver, BinaryTreeADT and MyTreeNode
public class Driver {
public static void main(String[] args) {
MyTreeNode mtn = new MyTreeNode(3, 'R');
BinaryTreeADT bt = new BinaryTreeADT(mtn);
bt.printTree();
}
}
BinaryTreeADT:
import java.util.ArrayList;
import javax.swing.tree.TreeNode;
public class BinaryTreeADT {
private ArrayList<MyTreeNode> binaryTreeList;
private MyTreeNode nullNode = new MyTreeNode(true); //This creates a null node that initially populates the array.
//Constructor with no root
public BinaryTreeADT(){
binaryTreeList = new ArrayList<MyTreeNode>(10);
}
public BinaryTreeADT(MyTreeNode root){
binaryTreeList = new ArrayList<MyTreeNode>(10);
initializeList();
binaryTreeList.set(1, root);
}
private void initializeList(){
for (int i = 0; i < binaryTreeList.size(); i++){
binaryTreeList.add(nullNode);
}
}
public void add(){
}
public void printTree(){
for (int i = 0; i < binaryTreeList.size(); i++){
if (binaryTreeList.get(i) != null)
System.out.println(binaryTreeList.get(i).getNodeChar() + " | ");
}
}
}
MyTreeNode:
import java.util.Enumeration;
import javax.swing.tree.TreeNode;
public class MyTreeNode implements TreeNode {
private int nodeKey;
private char nodeChar;
private boolean isNull;
public MyTreeNode(int key, char letter){
nodeKey = key;
nodeChar = letter;
}
//Constructor for Null Node
public MyTreeNode(boolean setNull){
isNull = setNull;
}
public boolean isNull(){ //Tells if this is a null node
return isNull;
}
@Override
public Enumeration children() {
// TODO Auto-generated method stub
return null;
}
@Override
public boolean getAllowsChildren() {
// TODO Auto-generated method stub
return false;
}
@Override
public TreeNode getChildAt(int arg0) {
// TODO Auto-generated method stub
return null;
}
@Override
public int getChildCount() {
// TODO Auto-generated method stub
return 0;
}
@Override
public int getIndex(TreeNode arg0) {
// TODO Auto-generated method stub
return 0;
}
@Override
public TreeNode getParent() {
// TODO Auto-generated method stub
return null;
}
public int getNodeKey() {
return nodeKey;
}
public void setNodeKey(int nodeKey) {
this.nodeKey = nodeKey;
}
public char getNodeChar() {
return nodeChar;
}
public void setNodeChar(char nodeChar) {
this.nodeChar = nodeChar;
}
@Override
public boolean isLeaf() {
// TODO Auto-generated method stub
return false;
}
}
A:
The reason is this line:
binaryTreeList.set(1, root);
Because the size of binaryTreeList is zero. You constructed the ArrayList with an initial capacity of 10 using the constructor ArrayList(int initialCapacity), but since nothing is inside the ArrayList yet, ArrayList#size() returns 0. That's why the for loop in your initializeList method exits at the very first iteration, never populating binaryTreeList with 10 elements, so its size is still 0. This is why setting a value at index 1, which does not exist at all, throws an IndexOutOfBoundsException.
You should instead define initializeList as:
private void initializeList(){
for (int i = 0; i < 10; i++){
binaryTreeList.add(nullNode);
}
}
A:
You are attempting to set the element at position 1 when your ArrayList is empty:
binaryTreeList.set(1, root);
Instead just use:
binaryTreeList.add(root);
A:
Your reference is out of bounds. You should set the 0th index to your root node. However, since your ArrayList is empty (size = 0), you need to actually add the new element, which will increment the size of the array.
binaryTreeList.add(root);
With arrays, indices start at 0, so the element at index 0 of an array is the first element, the element at index 1 is the second, etc. If you have an array of size n, the last element will be at index n-1.
Later, if you want to change an element at a certain index, you can set the 0th element to root:
binaryTreeList.set(0, root);
This will work provided the first argument (0 in this case) is less than or equal to the binaryTreeList.size()-1.
| {
"pile_set_name": "StackExchange"
} |
Q:
SQL Server: Existing column and value incrementing
I'm trying to create values, incrementing by one, to put into an already existing (but empty) column. I'm currently trying to use IDENTITY, but I wouldn't mind a custom-made approach. Right now, SSMS is saying there's incorrect syntax near IDENTITY. Could anybody help me fix this syntax?
ALTER Table anthemID IDENTITY(1,1)
A:
First, you can't make a column identity after the fact: it has to be set that way at creation time.
Second, I'm not quite sure what you mean by "increment the value of an already existing column by one." You can only increment the value of rows within a column--perform a DML (Data Manipulation Language) query. The script you suggested above is a DDL (Data Definition Language) query that actually modifies the structure of the table, affecting the entire column--all rows.
If you just want to increment all the rows by 1, you'd do this:
UPDATE dbo.YourTable SET anthemID = anthemID + 1;
On the other hand, if you want the anthemID column to acquire the identity property so that new inserts to the table receive unique, autoincrementing values, you can do that with some juggling:
Back up your database and confirm it is a good backup.
Script out your table including all constraints.
Drop all constraints on your table or other tables that involve anthemID.
ALTER TABLE dbo.YourTable DROP CONSTRAINT PK_YourTable -- if part of PK
ALTER TABLE dbo.AnotherTable DROP CONSTRAINT FK_AnotherTable_anthemID -- FKs
Rename your table
EXEC sp_rename 'dbo.YourTable', 'YourTableTemp';
Modify the script you generated above to make anthemID identity (add in identity(1,1) after int);
Run the modified script to create a new table with the same name as the original.
Insert the data from the old table to the new one:
SET IDENTITY_INSERT dbo.YourTable ON;
INSERT dbo.YourTable (anthemID, AnotherColumn, RestOfColumns)
SELECT anthemID, AnotherColumn, RestOfColumns
FROM dbo.YourTableTemp;
SET IDENTITY_INSERT dbo.YourTable OFF;
Re-add all constraints that were dropped.
Drop the original, renamed table after confirming you don't need the data any more.
You may be able to do this from SSMS's GUI table designer, and it will take care of moving the data over for you. However, this has bitten some people in the past and if you don't have a good database backup, well, don't do it because you might encounter some regret in the process.
UPDATE
Now that I know the column is blank, it's even easier.
ALTER TABLE dbo.YourTable DROP COLUMN anthemID;
ALTER TABLE dbo.YourTable ADD anthemID int identity(1,1) NOT NULL;
This does have the drawback of moving the column to the end of the table. If that's a problem, you can follow much the same procedure as I outlined above to fix things yourself, or alternatively use the designer in SQL Server Management Studio.
I recommend in the strongest terms possible that you use an identity column and do not try to create your own means of making new rows get an incremented value.
For emphasis, I'll quote @marc_s's comment above:
The SELECT MAX(ID)+1 approach is highly unsafe in a concurrent environment - in a system under some load, you will get duplicates. Don't do this yourself - don't try to reinvent the wheel - use the proper mechanisms (here: IDENTITY) that your database gives you and let the database handle all the nitty-gritty details!
I wholeheartedly agree with him.
| {
"pile_set_name": "StackExchange"
} |
Q:
Ability to reload change in Magento site's configuration without clearing cache
Today I dealt with the task of loading a module's configuration into a running Magento site under heavy load. I copied the new module's config.xml file (and related files) to fix an issue.
Our Magento runs with a memcached caching backend.
To get the module running I had to clear the cache completely, and that had an impact on the site's performance; we had 500 concurrent users. So I'm looking for a way to deploy configuration changes without clearing the cache.
Is there one?
Thanks for any thoughts and ideas.
Jaro.
A:
Here is a method of updating the config cache rather than clearing it, thus avoiding race-conditions.
https://gist.github.com/2715268
| {
"pile_set_name": "StackExchange"
} |
Q:
What does it mean? "The red box is the bounding box for the full asset."
Here I read that: "... Android 2.3 ... The red box is the bounding box for the full asset."
and below it says that there are two sizes for one icon:
Full Asset: 24w x 38h px (preferred, width may vary)
Icon: 24w x 24h px (preferred, width may vary)
So what size should the icon image be in the end: "Full Asset" or "Icon"?
A:
That means you should use a 24x38 image,
but leave the top and bottom 7px transparent.
| {
"pile_set_name": "StackExchange"
} |
Q:
Electric shock from pedals and guitar. Jacks too long?
Since I bought a multipack of cheap patch cables for use between pedals I've been getting some small electric shocks from the guitar strings and pedals themselves, as well as no sound from the amp.
The first time, I just replaced a cable through a process of elimination and figured it was a faulty soldering job inside it... now, after the third cable to have caused it, I checked the continuity between the tips and sleeves of all three cables and all seem OK, which made me wonder why replacing these cables made any difference.
One thing I did notice is that the jacks on these cables are 2mm to 3mm longer than those on any other brand of cable I own and thus the sleeve is pushed further into the pedal. Could it be that the sleeve of the jack is coming into contact with the tip's contact in the socket in the pedal? This would be bridging ground and live and lead to no sound and a shock, wouldn't it? In the attached image the black cable is the offending cheap item versus another of my 'working' cables.
I'm just hoping someone can confirm that this could cause the problem, and whether anyone else has experienced it, before I make a journey to the shop (a well-established guitar shop) to complain.
Thanks in advance, internet!!
A:
Normally a shock from your guitar strings is caused by a very dangerous fault in your amp. In any case, no part of any cable should have enough of a voltage on it to shock you, so either your amp or one of your pedals is the problem. No matter how badly a cable is wired, it can't be producing any voltage. The shocking voltage is getting into the cable from some power supply in your amp or pedals.
Don't die.
Get your amp checked first, since that's the one with lethal voltages in it. Most pedals are much safer. If your amp checks out, then experiment with your pedals to find out which one is shocking you and get that repaired or replaced.
Ok, after looking more closely at the picture, definitely stop using those cables with the long barrel. That might be causing connections to parts of the pedals or amps that are not supposed to be touched.
| {
"pile_set_name": "StackExchange"
} |
Q:
Java: Splitting a string at set length
I know that if I have a String with something like Apples,oranges I can split it by saying I want to split wherever there's a comma. But what if I want to split a set part of a string into several strings? More specifically, let's say I have a String with a date input such as "28062013". How can I split this into three strings so I have 28 in one string, 06 in one, and 2013 in the last one? In other words, I want to split ddMMyyyy into three separate strings of dd, MM and yyyy.
Any help is appreciated. Thanks!
A:
You can use the substring method of String. For instance:
String input = "28062013";
String date = input.substring(0, 2); // "28"
String month = input.substring(2, 4); // "06"
String year = input.substring(4); // "2013" - one-arg version goes to the end
| {
"pile_set_name": "StackExchange"
} |
Q:
Executable using images in different directory
I have the following directory structure
(root)
/ | \
bin resources src
| | |
Debug dot.bmp .cpp
|
.exe
I would like the .exe file to use dot.bmp.
Here's the code from the .cpp file that loads dot.bmp
player_img = il->load_image( "dot.bmp" );
I feel like I need to modify this line of code, but after changing it to:
player_img = il->load_image( "../resources/dot.bmp" );
I still get an error saying that the image couldn't be loaded.
What do I need to change? Is this even possible, or should I just give up and put the image in the same directory as the .exe?
A:
You need to go down one further level in order to get to the root.
../../resources/dot.bmp
Your executable is in bin/Debug but I think you coded under the assumption that it is in bin.
Assuming you are on Windows, the relative path will be relative to the current working directory rather than the directory where the executable resides. Often they are the same thing, but not necessarily.
I would be inclined to use fully-qualified paths and pre-pend the directory where the executable lives. You can obtain this by calling GetModuleFileName passing NULL as the hModule argument. This will return the full path to the executable so you will need to strip off the file name portion.
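A sketch of that lookup (Unicode build, error handling omitted):
#include <windows.h>
#include <string>
std::wstring ExeDirectory() {
    wchar_t path[MAX_PATH];
    DWORD len = GetModuleFileNameW(NULL, path, MAX_PATH); // full path to the .exe
    std::wstring full(path, len);
    return full.substr(0, full.find_last_of(L"\\/"));     // strip the file name
}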
You will also need to think about deployment. This structure looks like your development structure but you may want a different organisation when you deploy the program. For example, I'd expect the executable to live in the bin directory when deployed.
One final thought. Assuming the images that your program needs is known at compile time it would be much easier to link them into the executable as resources. That way you simply don't have to worry about these issues at all and the executable can stand alone.
| {
"pile_set_name": "StackExchange"
} |
Q:
Prove the inequality $\{\sqrt[3]{n}\}>\frac1{3\sqrt[3]{n^2}}$
Prove that the inequality
$$\{\sqrt[3]{n}\}>\frac1{3\sqrt[3]{n^2}}$$
holds for every positive integer $n$ not equal to a cube of an integer.
My work so far:
$0\le\{\sqrt[3]{n}\}<1$
$\{\sqrt[3]{n}\}=\sqrt[3]{n}-\lfloor \sqrt[3]{n}\rfloor$. Let $\lfloor \sqrt[3]{n}\rfloor=k \in \mathbb Z$. Then
$$k<\sqrt[3]{n}-\frac1{3\sqrt[3]{n^2}}$$
A:
Let $k=\lfloor n^{1/3} \rfloor$. Then $k^3 \leq n-1$ so that $(n^{1/3}-k)(n^{2/3}+k^2+kn^{1/3}) \geq 1$.
Hope you can take it from here.
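Spelling out the rest: $k \le n^{1/3}$ gives $k^2 \le n^{2/3}$ and $kn^{1/3} \le n^{2/3}$, so
$$\{\sqrt[3]{n}\} = n^{1/3}-k \ge \frac{1}{n^{2/3}+k^2+kn^{1/3}} \ge \frac{1}{3n^{2/3}},$$
and the last inequality is strict because $k < n^{1/3}$ when $n$ is not a perfect cube.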
| {
"pile_set_name": "StackExchange"
} |
Q:
Searching for a word to pdf converter that will handle special fonts
I have a 500-page Word document made in MS Office 2007. While making this file I have imported 487 types of fonts. However, now it is really tough to convert the file to pdf. I have tried various converters including the built-in Office 'save as' option. There were some converters which were able to convert the document, but only one page of it.
Can anyone link me to a free full version (not trialware) of a Word to PDF converter that can keep my imported fonts intact?
I have tried many, such as Word2PDF, PDFonline, and many others. PDFCreator converted my file with 70% accuracy, but I'm still looking for something better.
A:
While I don't use it very often, when needed I have found PDFCreator to be nearly flawless. Unless I am misunderstanding what you mean by "keep the fonts intact". In all my experiences the appearance has been identical.
| {
"pile_set_name": "StackExchange"
} |
Q:
Linux read operations requesting duplicate bytes?
This is a bit of a strange question. I'm writing a FUSE module using the go-fuse library, and at the moment I have a "fake" file with a size of 6000 bytes which will output some unrelated data for all read requests. My read function looks like this:
func (f *MyFile) Read(buf []byte, off int64) (fuse.ReadResult, fuse.Status) {
log.Printf("Reading into buffer of len %d from %d\n",len(buf),off)
FillBuffer(buf, uint64(off), f.secret)
return fuse.ReadResultData(buf), fuse.OK
}
As you can see I'm outputting a log on every read containing the range of the read request. The weird thing is that when I cat the file I get the following:
2013/09/13 21:09:03 Reading into buffer of len 4096 from 0
2013/09/13 21:09:03 Reading into buffer of len 8192 from 0
So cat is apparently reading the first 4096 bytes of data, discarding it, then reading 8192 bytes, which encompasses all the data and so succeeds. I've tried with other programs too, including hexdump and vim, and they all do the same thing. Interestingly, if I do a head -c 3000 dir/fakefile it still does the two reads, even though the later one is completely unnecessary. Does anyone have any insights into why this might be happening?
A:
I suggest you strace your cat process to see for yourself. On my system, cat reads by 64K chunks, and does a final read() to make sure it read the whole file. That last read() is necessary to make the distinction between a reading a "chunk-sized file" and a bigger file. i.e. it makes sure there is nothing left to read, as the file size could have changed between the fstat() and the read() system calls.
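For instance (using the example path from the question):
strace -e trace=open,fstat,read cat dir/fakefile > /dev/null
This prints each fstat() and read() with its size and return value as it happens.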
Is your "fake file" size being returned correctly to FUSE by stat/fstat() system calls?
| {
"pile_set_name": "StackExchange"
} |
Q:
Why is one ans removed when assigning simulink/matlab?
Why is one ans removed when I assign y? I want to return the two descriptions.
x = rmi('get',gcs)
x =
2x1 struct array with fields:
doc
id
linked
description
keywords
reqsys
>> x.description
ans =
FirstReq
ans =
SecondRec
>> y = x.description
y =
FirstReq
>> y
y =
FirstReq
A:
You probably need to use the {}:
>> x.description
ans = FirstReq
ans = SecondRec
>> y = {x.description}
y =
{
[1,1] = FirstReq
[1,2] = SecondRec
}
You can then index into y using either () (output will be a cell array) or {} (output will be whatever the data type of the description field is):
>> y(1)
ans =
{
[1,1] = FirstReq
}
>> y{1}
ans = FirstReq
Note: I am using Octave, not MATLAB, but it should still apply.
| {
"pile_set_name": "StackExchange"
} |
Q:
Describing ES6 class fields
Please tell me: is it good practice to declare internal (ES6) class fields and closure functions like this?
class A {
a = 1;
print = () => {
console.log(this.a);
}
}
A:
From the point of view of both the 6th and 7th editions of the specification:
A class body has the following structure
ClassBody[Yield] :
ClassElementList[?Yield]
ClassElementList[Yield] :
ClassElement[?Yield]
ClassElementList[?Yield] ClassElement[?Yield]
ClassElement[Yield]:
MethodDefinition[?Yield]
static MethodDefinition[?Yield]
;
In turn, MethodDefinition looks like this
MethodDefinition[Yield] :
PropertyName[?Yield] ( StrictFormalParameters ) { FunctionBody }
GeneratorMethod[?Yield]
get PropertyName[?Yield] ( ) { FunctionBody }
set PropertyName[?Yield] ( PropertySetParameterList ) { FunctionBody }
As you can see, only method definitions are allowed in a class body, with no assignment operator, so the code in the question should raise a syntax error.
So, to the question
whether it is good practice to declare internal (ES6) class fields this way
the answer is an unambiguous no: from the specification's point of view, this code should not even run.
On the other hand, from the point of view of, say, TypeScript, such declarations are perfectly valid.
A:
As far as I know, the specification does not currently support declaring properties in a class body. But I have heard it is coming in future versions, and for now you can get this behavior with Babel.
Is it correct practice? Yes, it is! The compiler will collect these properties for you and write them into the constructor. So if it makes the code easier for you to read, write it this way.
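For reference, the class from the question effectively desugars to something like this under the Babel class-properties transform (a sketch, not Babel's literal output):
class A {
  constructor() {
    this.a = 1;
    this.print = () => {
      console.log(this.a);
    };
  }
}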
I would also like to say a few words about the get/set accessors mentioned earlier. Accessors are only worth using when they do not return an object by direct reference. That is, if you have a property this._object and you want to return it through a getter get object() { return this._object; }, that is pointless (unless, of course, you want to trigger something on access), since the point of getters is to restrict access to the object; here you hand over the whole object and anyone can do anything with it. The purpose of accessors is more like this: get object() { return this._object.props; }.
| {
"pile_set_name": "StackExchange"
} |
Q:
Why is answer A (遊べるために) wrong in this question?
この公園には[a. 子供が遊べる/b. 子供が遊ぶ]ために、ブランコが設置してある。
I thought of "A swing is installed in order for children to be able to play in this park", but apparently that's wrong but I can't figure out why.
I see why B is correct but I don't get why A would be wrong.
A:
子供が遊べるために sounds awkward to me. I think we usually say:
potential form + ように
plain form + ために
to mean "so that ~~ can do ~~" / "for the purpose of ~~".
So in your example I think you could say:
子供が遊べるように (lit. so that children can play with it)
子供が遊ぶために (lit. for children to play with it)
Some examples:
[東大]{とうだい}に[入]{はい}れるように[一生懸命]{いっしょうけんめい}[勉強]{べんきょう}する。
(I'll study hard so that I can enter Tokyo University.)
東大に入るために一生懸命勉強する。
(I'll study hard in order to enter Tokyo University.)
| {
"pile_set_name": "StackExchange"
} |
Q:
Scala Implicit Conversion for companion object of extended class
I am trying to create a customRDD in Java.
RDD converts RDD[(K,V)] to PairRDDFunctions[K,V] using Scala implicit function rddToPairRDDFunctions() defined in object RDD.
I am trying to do the same with my CustomJavaRDD which extends CustomRDD which extends RDD.
Now it should call implicit function rddToCustomJavaRDDFunctions() whenever it encounters CustomJavaRDD[(K,V)], but for some reason it still goes to rddToPairRDDFunctions().
What am I doing wrong?
RDD.scala
class RDD[T]
object RDD {
implicit def rddToPairRDDFunctions[K, V](rdd: RDD[(K, V)])
(implicit kt: ClassTag[K], vt: ClassTag[V], ord: Ordering[K] = null):
PairRDDFunctions[K, V] = {
new PairRDDFunctions(rdd)
}
}
CustomRDD.scala
abstract class CustomRDD[T] extends RDD[T]
object CustomRDD {
implicit def rddToCustomJavaRDDFunctions[K,V](rdd: CustomJavaRDD[(K,V)]):
PairCustomJavaRDDFunctions[K,V] = {
new PairCustomJavaRDDFunctions[K,V](rdd)
}
}
PairCustomJavaRDDFunctions.scala
class PairCustomJavaRDDFunctions[K: ClassTag, V: ClassTag](self: CustomRDD[(K, V)])
(implicit ord: Ordering[K] = null) {
def collectAsMap() = ???
}
There is no error; the program compiles successfully,
but let's say I have data: RDD which is an instance of CustomJavaRDD.
data.collectAsMap()
At runtime it converts data into PairRDDFunctions; i.e., it makes an implicit call to rddToPairRDDFunctions defined in RDD.scala.
But it should make a call to rddToCustomJavaRDDFunctions defined in CustomRDD.scala and convert it into PairCustomJavaRDDFunctions.
A:
But it should make a call to rddToCustomJavaRDDFunctions defined in CustomRDD.scala and convert it into PairCustomJavaRDDFunctions
No, Scala simply does not work this way. What you want, overriding an implicit conversion depending on the runtime type of an object, is simply not possible (without pre-existing machinery on both the library's part and yours).
Implicits are a strictly compile-time feature. When the compiler sees you using an RDD as if it were a PairRDDFunctions, it splices in a call to RDD.rddToPairRDDFunctions, as if you wrote it yourself. Then, when the code is translated to bytecode, that call has already been baked in and nothing can change it. There is no dynamic dispatch for this, it's all static. The only situation where rddToCustomJavaRDDFunctions will be called is when the static type of the expression in question is already CustomJavaRDD.
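A minimal sketch of that rule (getCustomJavaRDD is a hypothetical factory returning your subclass):
val asBase: RDD[(String, Int)] = getCustomJavaRDD()
asBase.collectAsMap()   // static type is RDD, so rddToPairRDDFunctions is spliced in
val asCustom: CustomJavaRDD[(String, Int)] = getCustomJavaRDD()
asCustom.collectAsMap() // only here does rddToCustomJavaRDDFunctions apply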
Really, this should not be necessary. Implicit conversions are really no more than glorified helper methods that save you keystrokes. (Implicit parameters, now those are interesting. ;) ) There should be no need to override them because the helper methods should already be polymorphic and work whether you have RDD, CustomRDD, or `RDD that travels through time to compute things faster`.
Of course, you can still do it, but it will only actually do anything under the above conditions, and that is probably not very likely, making the whole thing rather pointless.
| {
"pile_set_name": "StackExchange"
} |
Q:
How do I backup and restore GitLab Mattermost so it preserves usernames?
I would like to be able to back up and restore GitLab Mattermost so that it preserves the usernames along with the messages. So far I've got it to preserve the messages, but it replaces all the usernames with "Someone".
I have reviewed Mattermost's backup documentation:
I'm using PostgreSQL, so I followed their link to the PostgreSQL backup documentation.
It recommends pg_dump for backup and psql for restore, something I'm quite familiar with from automating backup+restore for Zotonic sites I've built.
I have a hourly cron that runs this command:
sudo -i -u gitlab-psql -- /opt/gitlab/embedded/bin/pg_dump -h \
/var/opt/gitlab/postgresql mattermost_production |
gzip > "mattermost-backup.sql.gz"
When I take mattermost-backup.sql.gz and restore it like this:
gitlab-ctl stop mattermost
zcat mattermost-backup.sql.gz |
sudo -i -u gitlab-psql -- /opt/gitlab/embedded/bin/psql \
-h /var/opt/gitlab/postgresql \
-d mattermost_production
gitlab-ctl start mattermost
gitlab-psql logs this (including a lot of conflict and constraint errors):
SET
SET
SET
SET
SET
set_config
------------
(1 row)
SET
SET
SET
SET
CREATE EXTENSION
COMMENT
SET
SET
ERROR: relation "audits" already exists
ALTER TABLE
ERROR: relation "bots" already exists
ALTER TABLE
ERROR: relation "channelmemberhistory" already exists
ALTER TABLE
ERROR: relation "channelmembers" already exists
ALTER TABLE
ERROR: relation "channels" already exists
ALTER TABLE
ERROR: relation "clusterdiscovery" already exists
ALTER TABLE
ERROR: relation "commands" already exists
ALTER TABLE
ERROR: relation "commandwebhooks" already exists
ALTER TABLE
ERROR: relation "compliances" already exists
ALTER TABLE
ERROR: relation "emoji" already exists
ALTER TABLE
ERROR: relation "fileinfo" already exists
ALTER TABLE
ERROR: relation "groupchannels" already exists
ALTER TABLE
ERROR: relation "groupmembers" already exists
ALTER TABLE
ERROR: relation "groupteams" already exists
ALTER TABLE
ERROR: relation "incomingwebhooks" already exists
ALTER TABLE
ERROR: relation "jobs" already exists
ALTER TABLE
ERROR: relation "licenses" already exists
ALTER TABLE
ERROR: relation "linkmetadata" already exists
ALTER TABLE
ERROR: relation "oauthaccessdata" already exists
ALTER TABLE
ERROR: relation "oauthapps" already exists
ALTER TABLE
ERROR: relation "oauthauthdata" already exists
ALTER TABLE
ERROR: relation "outgoingwebhooks" already exists
ALTER TABLE
ERROR: relation "pluginkeyvaluestore" already exists
ALTER TABLE
ERROR: relation "posts" already exists
ALTER TABLE
ERROR: relation "preferences" already exists
ALTER TABLE
ERROR: relation "publicchannels" already exists
ALTER TABLE
ERROR: relation "reactions" already exists
ALTER TABLE
ERROR: relation "roles" already exists
ALTER TABLE
ERROR: relation "schemes" already exists
ALTER TABLE
ERROR: relation "sessions" already exists
ALTER TABLE
ERROR: relation "status" already exists
ALTER TABLE
ERROR: relation "systems" already exists
ALTER TABLE
ERROR: relation "teammembers" already exists
ALTER TABLE
ERROR: relation "teams" already exists
ALTER TABLE
ERROR: relation "termsofservice" already exists
ALTER TABLE
ERROR: relation "tokens" already exists
ALTER TABLE
ERROR: relation "useraccesstokens" already exists
ALTER TABLE
ERROR: relation "usergroups" already exists
ALTER TABLE
ERROR: relation "users" already exists
ALTER TABLE
ERROR: relation "usertermsofservice" already exists
ALTER TABLE
COPY 160
COPY 4
COPY 126
COPY 116
COPY 29
COPY 0
COPY 0
COPY 0
COPY 0
COPY 0
COPY 4
COPY 0
COPY 0
COPY 0
COPY 0
COPY 94
COPY 0
COPY 0
COPY 0
COPY 0
COPY 0
COPY 0
ERROR: duplicate key value violates unique constraint "pluginkeyvaluestore_pkey"
DETAIL: Key (pluginid, pkey)=(com.mattermost.nps, ServerUpgrade-5.17.0) already exists.
CONTEXT: COPY pluginkeyvaluestore, line 4
COPY 171
COPY 110
COPY 14
COPY 6
ERROR: duplicate key value violates unique constraint "roles_name_key"
DETAIL: Key (name)=(system_guest) already exists.
CONTEXT: COPY roles, line 1
COPY 0
COPY 24
COPY 15
ERROR: duplicate key value violates unique constraint "systems_pkey"
DETAIL: Key (name)=(AsymmetricSigningKey) already exists.
CONTEXT: COPY systems, line 1
COPY 29
COPY 4
COPY 0
COPY 0
COPY 1
COPY 0
ERROR: duplicate key value violates unique constraint "users_username_key"
DETAIL: Key (username)=(surveybot) already exists.
CONTEXT: COPY users, line 1
COPY 0
ERROR: multiple primary keys for table "audits" are not allowed
ERROR: multiple primary keys for table "bots" are not allowed
ERROR: multiple primary keys for table "channelmemberhistory" are not allowed
ERROR: multiple primary keys for table "channelmembers" are not allowed
ERROR: relation "channels_name_teamid_key" already exists
ERROR: multiple primary keys for table "channels" are not allowed
ERROR: multiple primary keys for table "clusterdiscovery" are not allowed
ERROR: multiple primary keys for table "commands" are not allowed
ERROR: multiple primary keys for table "commandwebhooks" are not allowed
ERROR: multiple primary keys for table "compliances" are not allowed
ERROR: relation "emoji_name_deleteat_key" already exists
ERROR: multiple primary keys for table "emoji" are not allowed
ERROR: multiple primary keys for table "fileinfo" are not allowed
ERROR: multiple primary keys for table "groupchannels" are not allowed
ERROR: multiple primary keys for table "groupmembers" are not allowed
ERROR: multiple primary keys for table "groupteams" are not allowed
ERROR: multiple primary keys for table "incomingwebhooks" are not allowed
ERROR: multiple primary keys for table "jobs" are not allowed
ERROR: multiple primary keys for table "licenses" are not allowed
ERROR: multiple primary keys for table "linkmetadata" are not allowed
ERROR: relation "oauthaccessdata_clientid_userid_key" already exists
ERROR: multiple primary keys for table "oauthaccessdata" are not allowed
ERROR: multiple primary keys for table "oauthapps" are not allowed
ERROR: multiple primary keys for table "oauthauthdata" are not allowed
ERROR: multiple primary keys for table "outgoingwebhooks" are not allowed
ERROR: multiple primary keys for table "pluginkeyvaluestore" are not allowed
ERROR: multiple primary keys for table "posts" are not allowed
ERROR: multiple primary keys for table "preferences" are not allowed
ERROR: relation "publicchannels_name_teamid_key" already exists
ERROR: multiple primary keys for table "publicchannels" are not allowed
ERROR: multiple primary keys for table "reactions" are not allowed
ERROR: relation "roles_name_key" already exists
ERROR: multiple primary keys for table "roles" are not allowed
ERROR: relation "schemes_name_key" already exists
ERROR: multiple primary keys for table "schemes" are not allowed
ERROR: multiple primary keys for table "sessions" are not allowed
ERROR: multiple primary keys for table "status" are not allowed
ERROR: multiple primary keys for table "systems" are not allowed
ERROR: multiple primary keys for table "teammembers" are not allowed
ERROR: relation "teams_name_key" already exists
ERROR: multiple primary keys for table "teams" are not allowed
ERROR: multiple primary keys for table "termsofservice" are not allowed
ERROR: multiple primary keys for table "tokens" are not allowed
ERROR: multiple primary keys for table "useraccesstokens" are not allowed
ERROR: relation "useraccesstokens_token_key" already exists
ERROR: relation "usergroups_name_key" already exists
ERROR: multiple primary keys for table "usergroups" are not allowed
ERROR: relation "usergroups_source_remoteid_key" already exists
ERROR: relation "users_authdata_key" already exists
ERROR: relation "users_email_key" already exists
ERROR: multiple primary keys for table "users" are not allowed
ERROR: relation "users_username_key" already exists
ERROR: multiple primary keys for table "usertermsofservice" are not allowed
ERROR: relation "idx_audits_user_id" already exists
ERROR: relation "idx_channel_search_txt" already exists
ERROR: relation "idx_channelmembers_channel_id" already exists
ERROR: relation "idx_channelmembers_user_id" already exists
ERROR: relation "idx_channels_create_at" already exists
ERROR: relation "idx_channels_delete_at" already exists
ERROR: relation "idx_channels_displayname_lower" already exists
ERROR: relation "idx_channels_name" already exists
ERROR: relation "idx_channels_name_lower" already exists
ERROR: relation "idx_channels_team_id" already exists
ERROR: relation "idx_channels_update_at" already exists
ERROR: relation "idx_command_create_at" already exists
ERROR: relation "idx_command_delete_at" already exists
ERROR: relation "idx_command_team_id" already exists
ERROR: relation "idx_command_update_at" already exists
ERROR: relation "idx_command_webhook_create_at" already exists
ERROR: relation "idx_emoji_create_at" already exists
ERROR: relation "idx_emoji_delete_at" already exists
ERROR: relation "idx_emoji_name" already exists
ERROR: relation "idx_emoji_update_at" already exists
ERROR: relation "idx_fileinfo_create_at" already exists
ERROR: relation "idx_fileinfo_delete_at" already exists
ERROR: relation "idx_fileinfo_postid_at" already exists
ERROR: relation "idx_fileinfo_update_at" already exists
ERROR: relation "idx_groupchannels_channelid" already exists
ERROR: relation "idx_groupmembers_create_at" already exists
ERROR: relation "idx_groupteams_teamid" already exists
ERROR: relation "idx_incoming_webhook_create_at" already exists
ERROR: relation "idx_incoming_webhook_delete_at" already exists
ERROR: relation "idx_incoming_webhook_team_id" already exists
ERROR: relation "idx_incoming_webhook_update_at" already exists
ERROR: relation "idx_incoming_webhook_user_id" already exists
ERROR: relation "idx_jobs_type" already exists
ERROR: relation "idx_link_metadata_url_timestamp" already exists
ERROR: relation "idx_oauthaccessdata_client_id" already exists
ERROR: relation "idx_oauthaccessdata_refresh_token" already exists
ERROR: relation "idx_oauthaccessdata_user_id" already exists
ERROR: relation "idx_oauthapps_creator_id" already exists
ERROR: relation "idx_oauthauthdata_client_id" already exists
ERROR: relation "idx_outgoing_webhook_create_at" already exists
ERROR: relation "idx_outgoing_webhook_delete_at" already exists
ERROR: relation "idx_outgoing_webhook_team_id" already exists
ERROR: relation "idx_outgoing_webhook_update_at" already exists
ERROR: relation "idx_posts_channel_id" already exists
ERROR: relation "idx_posts_channel_id_delete_at_create_at" already exists
ERROR: relation "idx_posts_channel_id_update_at" already exists
ERROR: relation "idx_posts_create_at" already exists
ERROR: relation "idx_posts_delete_at" already exists
ERROR: relation "idx_posts_hashtags_txt" already exists
ERROR: relation "idx_posts_is_pinned" already exists
ERROR: relation "idx_posts_message_txt" already exists
ERROR: relation "idx_posts_root_id" already exists
ERROR: relation "idx_posts_update_at" already exists
ERROR: relation "idx_posts_user_id" already exists
ERROR: relation "idx_preferences_category" already exists
ERROR: relation "idx_preferences_name" already exists
ERROR: relation "idx_preferences_user_id" already exists
ERROR: relation "idx_publicchannels_delete_at" already exists
ERROR: relation "idx_publicchannels_displayname_lower" already exists
ERROR: relation "idx_publicchannels_name" already exists
ERROR: relation "idx_publicchannels_name_lower" already exists
ERROR: relation "idx_publicchannels_search_txt" already exists
ERROR: relation "idx_publicchannels_team_id" already exists
ERROR: relation "idx_sessions_create_at" already exists
ERROR: relation "idx_sessions_expires_at" already exists
ERROR: relation "idx_sessions_last_activity_at" already exists
ERROR: relation "idx_sessions_token" already exists
ERROR: relation "idx_sessions_user_id" already exists
ERROR: relation "idx_status_status" already exists
ERROR: relation "idx_status_user_id" already exists
ERROR: relation "idx_teammembers_delete_at" already exists
ERROR: relation "idx_teammembers_team_id" already exists
ERROR: relation "idx_teammembers_user_id" already exists
ERROR: relation "idx_teams_create_at" already exists
ERROR: relation "idx_teams_delete_at" already exists
ERROR: relation "idx_teams_invite_id" already exists
ERROR: relation "idx_teams_name" already exists
ERROR: relation "idx_teams_update_at" already exists
ERROR: relation "idx_user_access_tokens_token" already exists
ERROR: relation "idx_user_access_tokens_user_id" already exists
ERROR: relation "idx_user_terms_of_service_user_id" already exists
ERROR: relation "idx_usergroups_delete_at" already exists
ERROR: relation "idx_usergroups_remote_id" already exists
ERROR: relation "idx_users_all_no_full_name_txt" already exists
ERROR: relation "idx_users_all_txt" already exists
ERROR: relation "idx_users_create_at" already exists
ERROR: relation "idx_users_delete_at" already exists
ERROR: relation "idx_users_email" already exists
ERROR: relation "idx_users_email_lower_textpattern" already exists
ERROR: relation "idx_users_firstname_lower_textpattern" already exists
ERROR: relation "idx_users_lastname_lower_textpattern" already exists
ERROR: relation "idx_users_names_no_full_name_txt" already exists
ERROR: relation "idx_users_names_txt" already exists
ERROR: relation "idx_users_nickname_lower_textpattern" already exists
ERROR: relation "idx_users_update_at" already exists
ERROR: relation "idx_users_username_lower_textpattern" already exists
After that completes and GitLab Mattermost is back up and running, it shows "Someone" instead of actual usernames.
How do I back up and restore GitLab Mattermost so it preserves usernames?
A:
On the replica system you are restoring to, you will need to:
Install GitLab with Mattermost (I used the Omnibus install)
Delete the mattermost_production database
Create an empty mattermost_production database
Load the backup SQL into the empty database
Why does the original approach fail?
When GitLab is installed, it populates the Mattermost database with some records. Running psql to load the backup on top of those records leads to an inconsistent state where the database has both the fresh install data and the backup data.
How do you fix this?
In order to better ensure a clean restore, you'll need to delete the mattermost_production database and create a new empty database called mattermost_production, then run the backup SQL into it with psql as follows:
gitlab-ctl stop mattermost
sudo -i -u gitlab-psql -- /opt/gitlab/embedded/bin/dropdb \
-h /var/opt/gitlab/postgresql \
mattermost_production
sudo -i -u gitlab-psql -- /opt/gitlab/embedded/bin/createdb \
-h /var/opt/gitlab/postgresql \
mattermost_production
zcat mattermost-backup.sql.gz |
sudo -i -u gitlab-psql -- /opt/gitlab/embedded/bin/psql \
-h /var/opt/gitlab/postgresql \
-d mattermost_production
gitlab-ctl start mattermost
| {
"pile_set_name": "StackExchange"
} |
Q:
memory allocation for array of pointers
I need to allocate memory for a pointer which will be used as a 2D array. I know how to allocate memory for char pointers and int pointers, but I am confused about how memory is allocated for an array of pointers. A pictorial representation would be very helpful. Also, is the code below fine?
char *names[5];
for(i=0;i<5;i++)
{
names[i]=(*char)malloc(sizeof(char));
}
A:
No, it is not fine, because you are allocating each entry with room for just 1 element of the primitive type char (which is 1 byte).
I'm assuming you want to allocate 5 pointers to strings inside names, but just pointers.
You should allocate it according to the size of the pointer multiplied by the number of elements:
char **names = malloc(sizeof(char*)*5);
You don't need to allocate them one by one with a loop. Note that you need to specify that it is a pointer-of-pointers by using **
| {
"pile_set_name": "StackExchange"
} |
Q:
Logarithmic equation with variable both "free" and in logarithm
I am trying to calculate an area bordered by two functions and in the process I need to solve this equation:
$$e^{-10x}=-2x+1$$
I make it into a non-exponential form:
$$-10x=ln(-2x+1)$$
And now I am stuck. Every webpage and example I have found deals with cases where all the variables are inside logarithms, but this is not the case. Can you point me in the right direction please? I know what the results should be, but I'd like to know the steps.
Thanks
A:
You won't find a nice algebraic answer except with the Lambert W function. Numerically you can observe that at $x=\frac12$ the left side is small and positive (about $0.0067)$ and the right is zero. You might also notice that $x=0$ is a solution. There is a root just below $\frac 12$ and you can iterate $x_{i+1}=\frac 12 (1-e^{-10x_i})$ rapidly to convergence, finding $x \approx 0.496511$
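For the record, the Lambert W form: substituting $u = 1-2x$ into $e^{-10x}=1-2x$ gives $u\,e^{-5u}=e^{-5}$, i.e. $(-5u)e^{-5u}=-5e^{-5}$, so $-5u=W(-5e^{-5})$ and
$$x=\frac{1}{2}\left(1+\frac{W(-5e^{-5})}{5}\right),$$
where the $W_{-1}$ branch recovers $x=0$ and the principal branch gives $x\approx 0.496511$.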
| {
"pile_set_name": "StackExchange"
} |
Q:
Combo box drop-down list not initially drawn
As the title states my problem is with the expanded list of a WinAPI combo box showing up blank when opened. Any subsequent updates (as when moving the cursor around) will redraw the affected items however. In addition the list won't respond to any mouse input. This happens in both Windows XP as well as 7.
As near as I can tell in Spy++ the modal list receives WM_ERASEBKGND but stops short of processing WM_PAINT. Showing a combo box in a modal dialog box works nicely, by the way, but creating the control as part of a regular top-level window or spawning the same dialog template as a modeless child window does not.
I'm guessing I've forgotten something rather basic and embarrassing, e.g. not setting a clipping style or calling DoDialogMagic in the message loop or some such, but I just can't seem to figure it out on my own.
Anyway, here's a minimal repro case:
#include <windows.h>
#include <commctrl.h>
#include <tchar.h>
#pragma comment(lib, "user32.lib")
#pragma comment(lib, "comctl32.lib")
INT CALLBACK _tWinMain(HINSTANCE instance, HINSTANCE parent, LPTSTR commands, INT show) {
static const TCHAR title[] = _T("Combo Problem");
HWND hwnd;
HWND combo;
MSG msg;
/* First create our parent window */
const WNDCLASS cls = {
/* style */ 0,
/* lpfnWndProc */ DefWindowProc,
/* cbClsExtra */ 0,
/* cbWndExtra */ 0,
/* hInstance */ instance,
/* hIcon */ NULL,
/* hCursor */ LoadCursor(NULL, IDC_ARROW),
/* hbrBackground */ (HBRUSH) (COLOR_INACTIVEBORDER + 1),
/* lpszMenuName */ NULL,
/* lpszClassName */ title
};
RegisterClass(&cls);
hwnd = CreateWindow (
/* lpClassName */ title,
/* lpWindowName */ title,
/* dwStyle */ WS_OVERLAPPEDWINDOW | WS_VISIBLE,
/* x */ CW_USEDEFAULT,
/* y */ CW_USEDEFAULT,
/* nWidth */ 125,
/* nHeight */ 70,
/* hWndParent */ NULL,
/* hMenu */ NULL,
/* hInstance */ instance,
/* lpParam */ NULL
);
/* Now create and populate the combo box itself */
InitCommonControls();
combo = CreateWindow (
/* lpClassName */ _T("COMBOBOX"),
/* lpWindowName */ _T(""),
/* dwStyle */ CBS_DROPDOWNLIST | WS_CHILD | WS_VISIBLE,
/* x */ 10,
/* y */ 10,
/* nWidth */ 100,
/* nHeight */ 150,
/* hWndParent */ hwnd,
/* hMenu */ NULL,
/* hInstance */ instance,
/* lpParam */ NULL
);
SendMessage(combo, CB_ADDSTRING, 0, (LPARAM) _T("Alpha"));
SendMessage(combo, CB_ADDSTRING, 0, (LPARAM) _T("Beta"));
SendMessage(combo, CB_ADDSTRING, 0, (LPARAM) _T("Gamma"));
/* Finally run the message pump */
while(GetMessage(&msg, hwnd, 0, 0) > 0) {
TranslateMessage(&msg);
DispatchMessage(&msg);
}
return 0;
}
A:
You are passing an hwnd to GetMessage; this is usually not what you want. Just use NULL.
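Applied to the code in the question, the pump becomes:
while (GetMessage(&msg, NULL, 0, 0) > 0) {
    TranslateMessage(&msg);
    DispatchMessage(&msg);
}
With the hwnd filter, messages posted to any other window (such as the combo's drop-down list, which is a popup, not a child of your main window) are never retrieved and dispatched, which is why the list never paints or responds to the mouse.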
| {
"pile_set_name": "StackExchange"
} |
Q:
Replacing text in jQuery with the replace method
Can anyone tell me why this doesn't work?
window.onload = function() {
$('td').text().replace('1', 'Да');
}
<script src="http://code.jquery.com/jquery-1.8.3.js"></script>
<table>
<tr>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
</table>
A:
You have two problems here:
$('td') returns a collection of elements, i.e. there are several of them, so in this case each element has to be processed separately.
The result of replace is not assigned anywhere; it is simply lost.
window.onload = function() {
$('td').each((i, el) => { // iterate over each element of the collection
const $el = $(el);
const replacedText = $el.text().replace('1', 'Да'); // replace the value
$el.text(replacedText); // set the new value
});
}
<script src="https://code.jquery.com/jquery-1.8.3.js"></script>
<table>
<tr>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
</table>
| {
"pile_set_name": "StackExchange"
} |
Q:
Solution explorer of visual studio
In the Solution Explorer of an ASP.NET application we add items to the References section.
For example, in our project there are sample.Dal, sample.exeption, system.core, etc.
What does References actually mean? Can we add a reference with a 'using' statement?
A:
Using is used for namespace resolution. For example:
using System.Data;
lets you access the DataSet class without typing the fully qualified name, System.Data.DataSet.
This doesn't however tell the compiler what assembly (DLL) the DataSet class lies in. So you need to tell it. So you refer to System.Data.dll by adding it to the references section in solution explorer.
| {
"pile_set_name": "StackExchange"
} |
Q:
Problem acessing server on docker container
I am trying to build and run a docker image from the example on this site: https://kubernetes.io/docs/tutorials/hello-minikube/
//server.js
var http = require('http');
var handleRequest = function(request, response) {
console.log('Received request for URL: ' + request.url);
response.writeHead(200);
response.end('Hello World!');
};
var www = http.createServer(handleRequest);
www.listen(8080);
//Dockerfile
FROM node:6.14.2
EXPOSE 8080
COPY server.js .
CMD node server.js
I use commands
docker build -t nsj .
docker run nsj
They run without error but I cannot access the server on localhost:8080.
What is wrong?
A:
Seems like at least two things are wrong:
You need to map the port from your docker host
You need to bind your server to 0.0.0.0
So, probably these changes (untested):
In your code:
www.listen(8080, "0.0.0.0");
In your docker command (note that options like -p must come before the image name, otherwise Docker passes them as arguments to the container):
docker run -p 8080:8080 nsj
Note that having EXPOSE 8080 in your Dockerfile does not actually expose anything. It just "marks" this port in the docker engine's metadata and is intended for both documentation (so people reading the Dockerfile know what it does) and for tools that inspect the docker engine.
To quote from the reference:
The EXPOSE instruction does not actually publish the port. It
functions as a type of documentation between the person who builds the
image and the person who runs the container, about which ports are
intended to be published
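Putting both changes together, an untested end-to-end session (reusing the image name nsj from the question) would look like:
docker build -t nsj .
docker run -p 8080:8080 nsj
# in another shell; should now print "Hello World!"
curl http://localhost:8080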
| {
"pile_set_name": "StackExchange"
} |
Q:
Padding size set to 100%, but covers half of the page?
I have styled a div so that I can have a bar at the top of the page for site navigation. Everything goes fine, but the padding of the bar refuses to cover the whole page? When I reload the page, I can see the bar flickering to cover the whole page, but it is then re-sized to about 50% of my browser.
This is my code:
div.title_main
{
background-color:rgb(40,40,40);
position:fixed;
right:50%;
top:-1%;
padding:0.1% 100%;
}
Thanks!
A:
Remove right: 50%; then it should work.
See this jsFiddle
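For reference, a sketch of the rule with that declaration removed (your other values kept unchanged):
div.title_main
{
    background-color:rgb(40,40,40);
    position:fixed;
    top:-1%;
    padding:0.1% 100%;
}
Without the right offset, the fixed bar is no longer anchored to the middle of the viewport, so the 100% horizontal padding can span the full page width.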
| {
"pile_set_name": "StackExchange"
} |
Q:
Image error, not loading for S3 image retrieval
I have written code on my backend (hosted on Elastic Beanstalk) to retrieve a file from an S3 bucket and save it back to the bucket under a different name. I am using boto3 and have created an s3 client called 's3'.
bucketname is the name of the bucket and keyname is the name of the key. I am also using the tempfile module.
tmp = tempfile.NamedTemporaryFile()
with open(tmp.name, 'wb') as f:
    s3.download_fileobj(bucketname, keyname, f)
s3.upload_file(tmp, bucketname, 'fake.jpg')
I was wondering if my understanding was off (still debugging why there is an error) - I created a tempfile and opened and saved within it the contents of the object with the keyname and bucketname. Then I uploaded that temp file to the bucket under a different name. Is my reasoning correct?
A:
The upload_file() command is expecting a filename (as a string) in the first parameter, not a file object.
Instead, you should use upload_fileobj().
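A minimal sketch of the question's code with that change (same variable names as in the question; the temp file is reopened for reading before the upload):
tmp = tempfile.NamedTemporaryFile()
with open(tmp.name, 'wb') as f:
    s3.download_fileobj(bucketname, keyname, f)
with open(tmp.name, 'rb') as f:
    # upload_fileobj() takes the file object first, then bucket and key
    s3.upload_fileobj(f, bucketname, 'fake.jpg')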
However, I would recommend something different...
If you simply wish to make a copy of an object, you can use copy_object:
response = client.copy_object(
Bucket='destinationbucket',
CopySource='/sourcebucket/HappyFace.jpg',
Key='HappyFaceCopy.jpg',
)
| {
"pile_set_name": "StackExchange"
} |
Q:
Converting a excel field from a % to a decimal in a new formula
I have 4 different values being generated on a spreadsheet in different cells:
A=88.45%
B=88.45%
C=1.69%
D=95.67%
I need to add these values up and divide it by 4 to get the difference from a possibility of 100%. So, the functions I've written are:
A= =SUM(1-(D6/C6))
B= =SUM(1-(F6/C6))
C= =SUM(I6/H6)
D= =SUM(1-(K6/C6))
However, if I just have a formula that adds those cells up while they are displayed as a % and then divides by 4, it doesn't give me the number I need. The only way I've figured out to get the accurate result was to convert them into decimals, but I'm not sure how exactly to write that, if it's even possible. I still need them displayed as a % in the original cells, which is the issue.
A= 0.1155
B= 0.1155
C= 0.0169
D= 0.0433
Divide those by 4 and you get the correct 7.28%. Then I need to subtract that from 100% to generate the actual average of those 4 cells.
Any insight?
A:
**EDIT** 2nd formula added
The proper formula to obtain your desired result of 7.28% from the four values in Row 34 (January) is:
=SUM(1-E34,1-G34,J34,1-L34)/4
OR
=1-AVERAGE(E34,G34,1-J34,L34)
where:
E34:= 88.45%
G34:= 88.45%
J34:= 1.69%
L34:= 95.67%
As an aside, the SUM function in each of those cells is entirely superfluous.
=1-D34/C34
will give the same result as
=SUM(1-(D34/C34))
| {
"pile_set_name": "StackExchange"
} |
Q:
Only run JQuery on manual update
I have a script for populating some dropdown lists.
the functions call a url in Django.
This is the issue I have:
When I want to load the page, the start_dates dropdown should be filled and the end_date dropdown should be kept empty.
When a value is selected in the start_dates dropdown, another script will run filling the end_dates dropdown.
However, since I call the function to update the start_dates dropdown, the function to update the end_dates is automatically called, which results in an error because there is no value selected in start_date.
How would I be able to only call the function for end_dates when a date is selected manually?
<script>
$(document).ready(function ({}) {
json_to_select("{% url "get_start_dates_list" %}", "#id_start_date");
});
$(function () {
$("#id_start_date").change(function () {
json_to_select("{% url "get_end_dates_list" 4545645 %}".replace("4545645", $(this).val()), "#id_end_date")
})
});
function json_to_select(url, select_selector, oncomplete) {
var opt = $(select_selector);
opt.html("");
opt.append($("<option/>").val(0).text("loading"));
opt.change();
$.getJSON(url, function (data) {
var opt = $(select_selector);
var old_val = opt.val();
opt.html("");
opt.append($("<option/>").val("").text("---------"));
$.each(data, function () {
opt.append($("<option/>").val(this.id).text(this.value));
});
opt.val(old_val);
opt.change();
if (oncomplete) {
oncomplete();
}
})
}
</script>
A:
Just use a variable that indicates if this is the first time that start date value is being set. and ignore changing end date value at this first time:
var firstTime = true;
$("#id_start_date").change(function () {
    if (!firstTime) {
        json_to_select("{% url "get_end_dates_list" 4545645 %}".replace("4545645", $(this).val()), "#id_end_date");
    }
    firstTime = false; // clear the flag even on the first, programmatic change
});
Checking firstTime should be done when the value of the start date changes. It might not be in the best place in my code, but it's only to show the idea.
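An alternative that avoids the flag altogether is to bind the change handler only after the initial population of the start dates has finished, using the oncomplete parameter that json_to_select already accepts (a sketch based on the question's code; the value check also skips the empty "---------" choice):
$(document).ready(function () {
    json_to_select("{% url "get_start_dates_list" %}", "#id_start_date", function () {
        // bound only now, so the programmatic .change() calls fired
        // during the initial load can no longer reach this handler
        $("#id_start_date").change(function () {
            if ($(this).val()) { // ignore the empty "---------" option
                json_to_select("{% url "get_end_dates_list" 4545645 %}".replace("4545645", $(this).val()), "#id_end_date");
            }
        });
    });
});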
| {
"pile_set_name": "StackExchange"
} |
Q:
List Manipulation and Removing Cases
I've imported a large data set, manually removing headers in the .rtf file where it originates. However, when I import this data, I receive this output:
data = Import["//pathtofile//data.rtf"]
Notebook[{Cell["1 3.32 10 10 10 670 600 1
2 2.26 6 8 5 700 640 1
3 2.35 8 6 8 640 530 1
4 2.08 9 10 7 670 600 1
5 3.38 8 9 8 540 580 1
6 3.29 10 8 8 760 630 1
7 3.21 8 8 7 600 400 1
8 2.00 3 7 6 460 530 1
9 3.18 9 10 8 670 450 1
10 2.34 7 7 6 570 480 1
11 3.08 9 10 6 491 488 1
12 3.34 5 9 7 600 600 1
13 1.40 6 8 8 510 530 1
14 1.43 10 9 9 750 610 1
15 2.48 8 9 6 650 460 1
16 3.73 10 10 9 720 630 1
17 3.80 10 10 9 760 500 1
18 4.00 9 9 8 800 610 1
19 2.00 9 6 5 640 670 1
20 3.74 9 10 9 750 700 1
21 2.32 9 7 8 520 440 1
22 2.79 8 8 7 610 530 1
23 3.21 7 9 8 505 435 1
24 3.08 9 10 8 559 607 1
25 3.75 10 9 9 760 620 1
26 3.16 10 9 8 640 490 1
27 2.73 9 8 7 520 360 1
28 3.06 8 10 10 580 460 1
29 1.07 7 8 6 700 520 1
30 3.35 10 10 10 620 570 1
31 1.82 6 8 6 490 550 1
32 3.12 10 10 7 640 520 1
33 2.25 9 7 4 550 290 1
34 2.93 10 10 10 600 520 1
35 3.16 9 7 7 400 390 1
36 2.83 10 9 9 710 530 1
37 3.10 9 10 9 750 670 1
38 3.07 7 4 7 660 480 1
39 2.87 9 9 9 620 480 1
40 3.61 10 10 9 630 440 1
41 2.17 10 7 7 650 450 1
42 1.95 7 8 9 550 570 1
43 3.35 10 10 10 730 650 1
44 3.58 10 7 8 710 400 1
45 2.75 10 7 5 770 720 1
46 3.26 10 10 9 610 560 1
47 3.41 9 4 7 600 360 1
48 3.67 10 10 10 640 570 1
49 3.81 10 10 7 750 540 1
50 3.30 10 10 9 650 480 1
51 2.30 9 10 10 590 420 1
52 3.62 10 10 8 660 630 1
53 3.20 9 5 7 570 570 1
54 2.55 7 8 8 570 480 1
55 2.82 10 9 9 660 550 1
56 3.25 9 7 8 690 550 1
57 2.21 7 7 8 670 500 1
58 2.50 10 9 9 660 460 1
59 3.03 8 8 7 600 630 1
60 1.92 9 10 8 447 320 1
61 4.00 7 6 6 600 410 1
62 1.81 9 9 9 620 580 1
63 2.70 6 8 6 580 470 1
64 2.96 9 7 8 630 630 1
65 2.76 10 10 10 600 560 1
66 2.71 8 7 9 700 440 1
67 3.40 9 10 9 550 560 1
68 2.65 8 10 8 680 450 1
69 2.48 8 8 7 630 500 1
70 3.86 10 10 10 750 760 1
71 2.62 9 8 7 491 391 1
72 3.72 7 8 7 550 500 1
73 3.50 8 7 8 630 500 1
74 2.83 6 7 7 690 440 1
75 3.06 8 6 5 540 400 1
76 4.00 9 10 10 640 480 1
77 3.70 8 10 8 520 410 1
78 2.81 9 7 4 559 488 1
79 1.93 8 6 8 590 510 1
80 3.70 10 10 10 580 580 1
81 2.96 9 7 6 670 440 1
82 2.64 9 9 8 620 590 1
83 3.09 10 10 8 540 470 1
84 3.00 4 3 4 620 560 1
85 2.97 10 10 10 770 540 1
86 2.81 10 10 10 620 570 1
87 3.32 10 9 10 660 560 1
88 3.40 7 8 4 710 500 1
89 1.84 9 6 6 610 390 1
90 0.40 6 6 7 560 690 1
91 2.88 9 7 6 690 460 1
92 2.77 6 5 9 590 440 1
93 2.26 5 7 7 530 440 1
94 2.03 6 7 9 540 610 1
95 2.43 7 10 10 530 560 1
96 2.63 10 10 6 640 500 1
97 1.66 8 4 3 590 470 1
98 3.41 9 9 9 520 490 1
99 2.12 7 7 8 559 545 1
100 3.33 7 6 7 500 460 1
101 1.69 8 7 7 490 390 1
102 2.46 6 7 7 490 370 1
103 1.59 8 9 7 670 480 1
104 1.14 10 10 7 720 610 1
105 0.65 9 7 7 640 520 1
106 2.12 7 6 7 520 380 1
107 2.82 4 5 7 400 470 1
108 2.34 8 9 7 480 410 1
109 2.11 6 9 9 480 390 1
110 1.34 6 7 8 530 470 1
111 2.53 8 9 8 550 500 1
112 2.75 10 10 10 720 500 1
113 3.14 9 8 9 640 630 1
114 2.25 10 10 10 690 580 1
115 1.00 8 9 10 640 600 1
116 2.79 9 6 7 690 400 1
117 2.39 6 5 6 470 330 1
118 2.15 6 6 6 480 460 1
119 0.75 7 6 6 540 590 1
120 3.06 5 9 9 510 380 1
121 2.50 9 9 9 560 500 1
122 2.78 9 9 10 600 510 1
123 2.44 8 8 8 690 490 1
124 1.11 7 7 7 590 480 1
125 3.12 10 10 10 580 340 1
126 2.17 8 7 8 650 500 1
127 0.12 4 6 6 630 490 1
128 2.00 6 5 6 530 320 1
129 3.22 9 7 9 650 490 1
130 1.88 10 6 6 620 430 1
131 2.04 8 7 7 690 440 1
132 2.58 10 9 9 720 740 1
133 2.16 6 6 6 590 440 1
134 2.50 7 10 10 630 500 1
135 1.85 10 8 7 700 480 1
136 1.46 7 7 8 630 540 1
137 2.95 9 9 8 620 400 1
138 0.80 8 10 9 470 410 1
139 0.91 6 5 7 586 697 1
140 2.67 9 9 10 586 670 1
141 2.51 9 8 7 700 500 1
142 1.79 7 7 5 550 570 1
143 2.42 6 6 8 505 518 1
144 0.58 5 7 7 515 285 1
145 3.00 10 10 9 774 688 1
146 2.76 10 10 10 570 570 0
147 3.35 9 9 9 580 540 0
148 3.80 10 9 8 560 530 0
149 2.38 9 9 10 650 570 0
150 2.58 10 10 9 440 430 0
151 3.18 10 10 10 570 750 0
152 2.87 8 8 7 476 576 0
153 3.16 8 9 8 680 700 0
154 3.07 9 8 9 490 480 0
155 3.68 10 8 9 590 490 0
156 3.34 10 9 10 590 580 0
157 1.93 10 8 8 650 490 0
158 2.43 9 5 9 480 520 0
159 3.28 10 10 10 670 490 0
160 3.66 10 10 10 710 600 0
161 2.29 7 6 8 570 570 0
162 2.19 6 5 6 540 460 0
163 3.06 10 10 10 620 510 0
164 3.41 8 6 8 630 470 0
165 3.14 9 9 10 630 700 0
166 2.85 10 8 8 610 460 0
167 3.47 10 10 9 720 680 0
168 3.44 10 10 9 640 590 0
169 3.90 10 10 10 650 500 0
170 3.65 9 9 9 640 430 0
171 1.32 9 8 9 740 460 0
172 3.23 10 10 10 600 660 0
173 2.86 10 9 10 650 430 0
174 2.51 8 9 10 510 440 0
175 2.86 8 9 8 550 510 0
176 3.34 10 9 9 570 530 0
177 3.33 9 7 9 630 560 0
178 3.69 10 10 8 710 470 0
179 1.80 7 7 7 620 470 0
180 2.57 9 10 10 417 518 0
181 2.28 8 10 10 600 600 0
182 1.60 4 7 7 460 460 0
183 2.00 2 4 6 300 290 0
184 1.69 7 6 7 560 480 0
185 3.06 9 10 9 590 420 0
186 2.75 8 9 8 559 435 0
187 2.62 9 10 8 550 440 0
188 0.39 7 10 9 550 660 0
189 2.44 10 9 9 650 350 0
190 3.46 9 9 8 610 520 0
191 2.37 8 7 9 530 480 0
192 1.25 7 8 6 480 360 0
193 2.80 10 9 9 550 450 0
194 2.14 5 4 8 560 420 0
195 2.45 7 7 8 430 330 0
196 2.71 9 7 10 490 400 0
197 2.59 10 10 10 590 470 0
198 2.93 9 9 10 690 510 0
199 2.53 7 6 9 570 480 0
200 1.95 6 6 9 550 600 0
201 3.39 9 9 10 510 570 0
202 2.69 8 6 9 470 420 0
203 1.94 8 8 8 470 330 0
204 3.00 10 8 9 510 360 0
205 2.09 9 7 8 450 460 0
206 1.85 10 8 10 530 550 0
207 3.34 10 9 10 490 410 0
208 2.25 6 9 10 530 510 0
209 4.00 10 10 10 580 490 0
210 2.72 6 5 7 580 490 0
211 2.61 9 7 8 620 420 0
212 2.32 6 6 7 430 460 0
213 3.39 10 10 10 500 390 0
214 3.64 8 6 8 590 580 0
215 1.80 8 7 9 620 600 0
216 1.52 9 9 10 520 570 0
217 3.40 6 9 9 480 480 0
218 2.86 9 4 8 640 470 0
219 3.32 10 9 10 640 560 0
220 2.07 9 7 6 600 440 0
221 0.85 7 7 9 510 480 0
222 1.86 7 9 7 356 350 0
223 2.59 5 4 7 630 470 0
224 2.28 9 8 9 559 488 0
", "Input", CharacterEncoding -> "MacintoshRoman"]},
WindowSize -> {740, 597},
WindowMargins -> {{270, Automatic}, {Automatic, 50}},
FrontEndVersion ->
"9.0 for Mac OS X x86 (32-bit, 64-bit Kernel) (January 25, 2013)",
StyleDefinitions -> "Default.nb"]
And I'm not sure what to do; I'm pretty new to Mathematica. I've made numerous attempts to remove the cases at the end using the DeleteCases function,
DeleteCases[data,{_String},{-2}];
as well as using the Flatten function. My thought is that the reason they can't be deleted in this way is because they're part of a larger cell than what I've specified, but I'm not sure how to correct this. I can't begin data analysis with these elements at the end of the table.
A:
data2= StringSplit /@ StringSplit[Cases[data, Cell[t_, "Input", ___] :> t, Infinity], "\n"]
gives
You can convert the strings to numbers using
ToExpression/@data2
| {
"pile_set_name": "StackExchange"
} |
Q:
Edit icon next to a comment is confusing on iPad (and other touch tablets)
I was working on an iPad and noticed an edit icon next to a comment. I decided that I could edit these comments. I tapped the button but nothing happened. Later, on a PC, I found a tooltip (not visible on the touch device) explaining that the comment was edited some time in the past. iPads and other tablets, as touch devices, do not support tooltips, but have big enough screens not to use the mobile theme.
It is confusing. Who cares what happened a few years ago, in the first five minutes after the comment was created.
I suggest to remove the icon or only keep it in the first hour since the comment was created.
Also the tooltip can be moved to Timestamp where it logically belongs. Timestamp tooltip can provide additional info, was comment changed(and may be when it was changed last time). Current implementation the timestamp as a permalink to the comment is not good, because timestamp and permalink are not logically related
Instead of this it's better to show permalink icon as suggested in question Improve the usability/intuitiveness of direct comment links
Update: After Shadow Wizard's answer I understand, that problem is not applicable for mobile devices with small screen. Should be created another "tablet" theme, that will support bigger screens, but wouldn't rely on tooltips?
A:
Edit icon next to a comment is confusing
No, it's not confusing.
This icon is visible only on the full site theme, where it does have a clear tooltip. Users on mobile devices won't see it, unless switching to the full site by tapping the link in the footer, then they should know that things are going to look and act weird.
Unlike posts, comments don't have revisions. We can't see what changed and when, but knowing "something here changed" is still useful, same way it's useful to know something in a post changed.
I see no reason to remove this icon in the full site mode. (It's already removed in mobile web.)
As for the second request:
Also the tooltip can be moved to Timestamp where it logically belongs
That would be even worse, and much more confusing, since the timestamp is a permalink to the comment. If anything, it should reflect that.
| {
"pile_set_name": "StackExchange"
} |
Q:
Create a copy of the values inside of an array from the array itself in C
I have an array with the first x_size*y_size values filled, and now I need to duplicate them DATA_JUMP times later in the same array.
for(int i=1; i < DATA_JUMP; i++)
for(int k = 0; k < x_size*y_size; k++)
orig_depth[k + i*x_size*y_size] = orig_depth[k];
This works like I want it to, however this may slow down on large datasets. I wonder if there is a trick to do this with a little pointer arithmetic.
A:
You can use
memcpy(orig_depth + i*x_size*y_size, orig_depth, x_size*y_size * sizeof *orig_depth);
instead of the inner loop. You have to make sure that i*x_size*y_size is not 0, because in that case the source and destination would overlap.
sizeof *orig_depth is the size of 1 element of the array, while sizeof orig_depth is the size of the whole array.
for int orig_depth[100]; sizeof *orig_depth is sizeof(int) and sizeof orig_depth is 100 * sizeof(int).
The advantage of sizeof *orig_depth over sizeof(int) is that you can change the element type without having to change every sizeof. The best-practice name for this is Single Source of Truth: https://en.wikipedia.org/wiki/Single_source_of_truth
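Putting it together, the whole duplication could then be written as follows (a sketch; it assumes orig_depth points to at least DATA_JUMP * x_size * y_size elements):
#include <string.h>

size_t block = (size_t) x_size * y_size; /* elements per copy */
for (int i = 1; i < DATA_JUMP; i++)
    /* source and destination never overlap because i >= 1 */
    memcpy(orig_depth + i * block, orig_depth, block * sizeof *orig_depth);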
| {
"pile_set_name": "StackExchange"
} |
Q:
Switch from .bat to .py - duplicate logging
I have a suite of tests I run with Nose and Python 2.7.
I used to run the suite with runner.bat file.
Using that, I would get nice logged output, like so:
2015-02-10 16:28:28,759 - DEBUG - Firefox version: 35.0
2015-02-10 16:28:28,788 - DEBUG - Running against Production on firefox
etc.
I want to port my .bat to .py for a number of reasons (mainly added functionality)
I have made a runner.py file. It sits in the exact same dir as runner.bat. But now my logging is duplicated.
2015-02-10 17:04:57,315 - DEBUG - Firefox version: 35.0
2015-02-10 17:04:57,315 - DEBUG - Firefox version: 35.0
2015-02-10 17:04:57,355 - DEBUG - Running against Production on firefox
2015-02-10 17:04:57,355 - DEBUG - Running against Production on firefox
I tried adding logger.propagate=False to my logging object, but no luck. Anyone have ideas as to why I suddenly get duplicates when running with a .py?
LogManager.py
def configure_logging():
    # Log to file
    fileHandler = FileHandler(logging_path)
    formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
    fileHandler.setFormatter(formatter)
    logger.addHandler(fileHandler)
    logger.setLevel(logging.DEBUG)
    logger.propagate = False
configure_logging()
Runner.bat
@echo off
IF EXIST C:\TestOutput\version.txt del C:\TestOutput\version.txt
python C:\TestSuite\Utils\cleanup_logging_output_dir.py
nosetests -a level=gold
Runner.py
import os
import sys
import socket
import nose
import tempfile
import shutil
def prepare_tests():
    ... do lots of stuff ...
    os.system("python {} {}".format(clean_log_script, config_dir))
    nose_argv = [__file__, '-a', 'level=gold', '--with-id']
    ... augment nose_argv if needed based on other variables...
    return nose_argv
######################
### Run nosetests ###
######################
test_argv = prepare_tests()
result = None
if __name__ == '__main__':
    # don't run tests on current file
    test_argv.extend(['--ignore-files', os.path.basename(__file__)])
    result = nose.run(argv=test_argv)
A:
It appears there is a bug in Nose. The following produces duplicate logging:
logger = logging.getLogger("general_logger")
However the following does not:
logger = logging.getLogger(__name__)
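Independently of that bug, it can help to make configure_logging() idempotent, so that running it more than once (for example when a module gets imported under two different names) cannot stack duplicate handlers. A defensive sketch of the question's function:
def configure_logging():
    if logger.handlers:
        # a handler is already attached; don't add a second one
        return
    fileHandler = FileHandler(logging_path)
    formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
    fileHandler.setFormatter(formatter)
    logger.addHandler(fileHandler)
    logger.setLevel(logging.DEBUG)
    logger.propagate = False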
| {
"pile_set_name": "StackExchange"
} |
Q:
Are Questions About Treating Injuries Off Topic?
Thinking specifically of Whats the best way to stop a nose bleed quickly? which raises the question: At what point does the treatment of injuries (as opposed to the prevention of injuries) get to be off topic?
I feel that there's a bit of a fuzzy line here (especially since some martial arts include a treatment component), but I'd personally advocate that specifically treating injuries in general after the fact is off topic, while "how do I practice while injured" and "how can I help keep from getting injured" are both on topic.
A:
I think it's on topic for martial arts training. Injury prevention, treatment (which doesn't require a doctor), and training with injuries are a key part of many martial artists' training, especially in the more sports-oriented arts.
I think once the medical advice is at the point where you need to see a doctor, it's off topic. But for basic treatment of typical martial arts injuries, that's right on topic.
But then, it depends whether you think of this place as being about "martial arts themselves" or about people who practice martial arts and want to ask questions related to training (or both).
A:
Injuries are bound to happen in martial arts. Robert Cartaino posted a really nice answer on the Fitness & Nutrition meta a year ago where we had the same issue:
Like anything else you read on the internet, there's a degree of
responsibility and caution that falls on both the askers and
answerers. Don't ask us when it is okay for you to resume
weightlifting if you've just be diagnosed with heart problems. It
sounds trite to constantly hear "Talk to your doctor", but
sometimes, it is the only advice. At the same time, folks answering
questions shouldn't throw around wild generalities when the author has
not provided sufficient information.
The Fitness & Nutrition FAQ states
wellness, general health, medical advice and injuries unrelated to
exercise
as off-topic, but we consider "injury prevention" as on-topic so long as there is a connection to fitness. I think we should take a similar approach here. Preventing and treating injuries related to martial arts (and note these subjects should be handled with care) are on-topic. I think what's been discussed on the site about injury so far is appropriate (even though not necessarily in depth).
For this site, I would consider this example to be on-topic:
What's the recommended way to deal with an injured hamstring while still practicing?
and this as off-topic:
What's the best way to stop a nose bleed quickly?
That question is too general in its current format, but something like what Jack B. Nimble suggested could make it better and more appropriate. I agree with what David H. Clements proposed in his question, "that specifically treating injuries in general after the fact is off topic, while 'how do I practice while injured' and 'how can I help keep from getting injured' are both on topic."
| {
"pile_set_name": "StackExchange"
} |
Q:
Avoid nested for loop while iterating Map<Integer, Set<Integer>>
I want to get a List<Integer> out of Map<Integer, Set<Integer>>.
Map<Integer, Set<Integer>> MAPAgreementAdhoc = new Map<Integer, Set<Integer>>{1 => new set<Integer>{1,2,3}, 2 => new Set<Integer>{2,3,4}};
My code to get a list of values from the map's values is below, but it's not complete. I need help to move forward.
The motive of this code is to avoid a nested for loop.
Map<Integer, Set<Integer>> mapEnhancement = new Map<Integer, Set<Integer>>();
Map<Integer, Set<Integer>> MAPAgreementAdhoc = new Map<Integer, Set<Integer>>{1 => new set<Integer>{1,2,3}, 2 => new Set<Integer>{2,3,4}};
List<Integer> AgreementAttachment = new List<Integer>{1,2,3,4,5,6};
for(Integer oTemp : AgreementAttachment){
if(MAPAgreementAdhoc.containsKey(oTemp)){
Set<Integer> setEnhance = new Set<Integer>();
setEnhance = MAPAgreementAdhoc.get(oTemp);
mapEnhancement.put(oTemp, setEnhance);
}
}
//for(Integer tempId : mapEnhancement.values()){
system.debug(mapEnhancement.values());
//}
Output:
DEBUG|({1, 2, 3}, {2, 3, 4})
Expected Output:
{1,2,2,3,3,4}
A:
I'm typing this straight in the browser, but couldn't you just do:
List<Integer> results = new List<Integer>();
List<Integer> AgreementAttachment = new List<Integer>{1,2,3,4,5,6};
for(Integer index : AgreementAttachment)
{
if(MAPAgreementAdhoc.get(index) != null)
{
results.addAll(MAPAgreementAdhoc.get(index));
}
}
System.debug(results);
Of course, if you actually need the other Map of Sets you'll need to create & populate that too, but if it's just the list then this code will do what you need.
| {
"pile_set_name": "StackExchange"
} |
Q:
Magento 2 Layered Navigation in Advance Search Result
How can I add Layered Navigation to the Advanced Search result page (2 Column Left layout)?
How can I do it?
A:
Override catalogsearch_advanced_index.xml in your theme
From
vendor/magento/module-catalog-search/view/frontend/layout/catalogsearch_advanced_index.xml
To
app/design/frontend/Vendor/theme/Magento_CatalogSearch/layout/catalogsearch_advanced_index.xml
Now change layout 1column to 2columns-left
<page xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" layout="2columns-left" xsi:noNamespaceSchemaLocation="urn:magento:framework:View/Layout/etc/page_configuration.xsd">
Now add container sidebar.main with left navigation block
Your final catalogsearch_advanced_index.xml like this
<?xml version="1.0"?>
<page xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" layout="2columns-left" xsi:noNamespaceSchemaLocation="urn:magento:framework:View/Layout/etc/page_configuration.xsd">
<head>
<title>Advanced Search</title>
</head>
<update handle="page_calendar"/>
<body>
<referenceContainer name="sidebar.main">
<block class="Magento\LayeredNavigation\Block\Navigation\Category" name="catalog.leftnav" before="-" template="layer/view.phtml">
<block class="Magento\LayeredNavigation\Block\Navigation\State" name="catalog.navigation.state" as="state" />
<block class="Magento\LayeredNavigation\Block\Navigation\FilterRenderer" name="catalog.navigation.renderer" as="renderer" template="layer/filter.phtml"/>
</block>
</referenceContainer>
<referenceContainer name="content">
<block class="Magento\CatalogSearch\Block\Advanced\Form" name="catalogsearch_advanced_form" template="advanced/form.phtml"/>
<block class="Magento\Framework\View\Element\Html\Calendar" name="html_calendar" as="html_calendar" template="Magento_Theme::js/calendar.phtml"/>
</referenceContainer>
</body>
</page>
In the same way you need to override
1) catalogsearch_advanced_result.xml and
2) catalogsearch_result_index.xml
| {
"pile_set_name": "StackExchange"
} |
Q:
What is the phone icon with 4G written on it in the notification bar? (Image in description)
What is the phone icon with 4G written on it in the notification bar? I have a Moto G6 with Vodafone Gujarat India.
A:
That signifies "VoLTE" calling, because your call is now made over the LTE network, which is different from how calls used to be placed earlier. Vodafone has launched VoLTE in many states, and it has been quite a while since it was launched in Gujarat. Surprising that you're noticing it only now.
VoLTE brings HD voice quality and faster call connect times. So, you get a lot of improvement in the experience.
| {
"pile_set_name": "StackExchange"
} |
Q:
Why does Davos go with Jon?
From reading Why did Jon Snow choose someone he barely knew to this post?, over at Movies and TV, I came to realise that Ser Davos is Jon's Hand. If that is the case why would Jon leave Sansa in charge and take Davos with him when:
He goes to meet with Daenerys.
Surely it makes more sense to leave Davos in charge, or at least leave Davos to oversee Sansa.
As it stands, it looks like Littlefinger is going to use this to his advantage to manipulate Sansa into what he wants from her and that doesn't look like a sensible thing for Jon to have done.
A:
Davos is important to the journey not just as an advisor, but because of his knowledge and experience
Davos might be Jon's Hand or counselor, but more importantly he knows Dragonstone, how to get there and its geography, and, as Bailey points out in his comment, he is an accomplished seaman and smuggler.
Given that the primary objective of Jon's trip is stopping the White Walkers' army, and getting dragonglass is one of the ways to accomplish that, having Davos along on the journey is a great asset.
| {
"pile_set_name": "StackExchange"
} |
Q:
Insufficient Privileges to log in to api store after admin password change
I have changed the carbon super user admin's password in WSO2 API Manager. After doing that I can't log in to the API Store. It says "Unable to log you in! Insufficient Privileges." What is the problem here?
A:
You have to modify the AuthManager section in the api-manager.xml with the new admin password.
| {
"pile_set_name": "StackExchange"
} |
Q:
Automorphisms group of a structure is a closed subgroup of the permutations over $\mathbb{N}$
Here is the second part of Example $7)$ of Kechris' "Classical Descriptive Set Theory" (pp. $59-60$):
More generally, consider a structure
$$\mathcal{A}=(A,(R_i)_{i\in I},(f_j)_{j\in J},(c_k)_{k\in K})$$
(in the sense of model theory) consisting of a set $A$, a family of relations $(R_i)_{i\in I}$, operations $(f_j)_{j\in J}$ and distinguished elements $(c_k)_{k\in K}$ on $A$.
Assume $A$ is countably infinite.
Let $Aut(\mathcal{A})$ be the group of automorphisms of $\mathcal{A}$.
Thinking, without loss of generality, of $A$ as being $\mathbb{N}$, $Aut(\mathcal{A})$ is a closed subgroup of $S(\mathbb{N})$, the group of permutations of $\mathbb{N}$.
I don't see how to prove that $Aut(\mathcal{A})$ is closed in $S(\mathbb{N})$.
Thank you in advance for your help.
A:
A permutation is an automorphism when it preserves the interpretations of all the basic relation, function, and constant symbols. But preserving the interpretation of the symbols is the same as not failing to preserve the symbols. So the idea is to check that "for every symbol in the language, for every possible way $\sigma$ could fail to preserve the symbol, $\sigma$ doesn't do that" is a closed condition.
More precisely: for any finite tuples $\overline{a}$ and $\overline{b}$ in $\mathbb{N}^k$, there is an open set $$U(\overline{a},\overline{b}) = \{\sigma\in S(\mathbb{N})\mid \sigma(\overline{a}) = \overline{b}\}.$$
Its complement is a closed set $$C(\overline{a},\overline{b}) = \{\sigma\in S(\mathbb{N})\mid \sigma(\overline{a}) \neq \overline{b}\}.$$
Now the set of automorphisms of the structure $\mathcal{A}$ with domain $\mathbb{N}$ is the intersection of the following closed sets:
For every $k$-ary relation symbol $R$ in the language, and for any pair of tuples $\overline{a}$ and $\overline{b}$ in $\mathbb{N}^k$ such that $\mathcal{A}\models R(\overline{a})\not\leftrightarrow R(\overline{b})$, the closed set $C(\overline{a},\overline{b})$.
For every $k$-ary function symbol $f$ in the language, for any pair of tuples $\overline{a}$ and $\overline{b}$ in $\mathbb{N}^k$, and for any element $d$ in $\mathbb{N}$ such that $\mathcal{A}\models d\neq f(\overline{b})$, the closed set $C(\overline{a}f^{\mathcal{A}}(\overline{a}),\overline{b}d)$.
For every constant symbol $c$ in the language, and for any element $b$ in $\mathbb{N}$ such that $\mathcal{A}\models c\neq b$, the closed set $C(c^\mathcal{A},b)$.
More efficiently (but less explicitly), we can show that $\text{Aut}(\mathcal{A})$ is closed in $S(\mathbb{N})$ by showing that its complement is open, i.e. for every $\sigma\in S(\mathbb{N})\setminus \text{Aut}(\mathcal{A})$, $\sigma$ has an open neighborhood contained in
$S(\mathbb{N})\setminus \text{Aut}(\mathcal{A})$.
Well, if $\sigma$ fails to be an automorphism, this is already witnessed by some finite tuple $\overline{a}$, in the sense that there is some symbol in the language such that $\sigma$ fails to preserve this symbol because it maps $\overline{a}$ to $\overline{b} = \sigma(\overline{a})$. Then any permutation of $\mathbb{N}$ that maps $\overline{a}$ to $\overline{b}$ will fail to be an automorphism, so $U(\overline{a},\overline{b})$ is an open neighborhood of $\sigma$ in $S(\mathbb{N})\setminus \text{Aut}(\mathcal{A})$, as desired.
| {
"pile_set_name": "StackExchange"
} |
Q:
OpenRefine: select value based on a variable another column
I have a problem with OpenRefine. I am adding a new column based on a URL, and from there calling an API to get some terms from a controlled vocabulary (AAT).
I parse the results and I obtain a multivalued cells such as:
http://vocab.getty.edu/aat/300041366||aquatints (prints)::http://vocab.getty.edu/aat/300053242||aquatint (printing process)::http://vocab.getty.edu/aat/300191265||dust bags::http://vocab.getty.edu/aat/300191278||dust boxes::http://vocab.getty.edu/aat/300191278||dust boxes::http://vocab.getty.edu/aat/300191278||dust boxes::http://vocab.getty.edu/aat/300249564||aquatinters::http://vocab.getty.edu/aat/300249564||aquatinters::http://vocab.getty.edu/aat/300249564||aquatinters::http://vocab.getty.edu/aat/300249564||aquatinters::http://vocab.getty.edu/aat/300053242||aquatint (printing process)::http://vocab.getty.edu/aat/300041366||aquatints (prints)::http://vocab.getty.edu/aat/300041368||sandpaper aquatints::http://vocab.getty.edu/aat/300041368||sandpaper aquatints
Where I have the current structure:
URI||Corresponding_TERM::URI||Corresponding_TERM
etc.
I now need to choose one of those records. My solution is to use
value.split("::")[0]
in order to choose which element I want.
Unfortunately this solution has very evident drawbacks, because the order of the elements in the array is not constant, so if the first element [0] is the right one for one record, it will probably not be for the next one.
To explain myself better, I now have this kind of structure
-----------------------------------------------------------
|ID | Classification | Term_From_Thesaurus |
| 1 | Aquatints | uri||term1::uri||term2:: |
| 1 | Aquatints | uri||term1::uri||term2:: |
| 2 | Drypoints | uri||term3::uri||term4:: |
| 3 | Woodcut | uri||term5::uri||term6::uri||term7 |
-----------------------------------------------------------
And I need to associate term1 with Aquatints, term 4 with Drypoints and term 7 with Woodcut.
How can I do that?
A solution could be using facets and a lot of manual work, but maybe there is a better one?
What about going to each record: if ID = 1 it should use term1, if ID = 2 it should use term 4, etc. Would that be possible? I sincerely do not know how to use the value of another column as a variable to determine the result of an operation. cell.cross would help, but in that case I would need to split the data into two files, which doesn't seem to me a proper solution.
A:
So I'm not sure if I've understood your question correctly, but it is possible to "select value based on a variable in another column".
If you have:
-----------------------------------------------------------
|ID | Classification | Term_From_Thesaurus |
| 1 | Aquatints | uri||term1::uri||term2:: |
| 1 | Aquatints | uri||term1::uri||term2:: |
| 2 | Drypoints | uri||term3::uri||term4:: |
| 3 | Woodcut | uri||term5::uri||term6::uri||term7 |
-----------------------------------------------------------
Then if you split the 'Term_From_Thesaurus' column into an array, then you can use the number in the 'ID' column to select the relevant entry in the array. However, note that for this to work you need to have the number in the ID column to be converted into a Number type (if it isn't already). In this example I'll assume that the number in the ID column starts off as a String rather than Number.
So the formula:
value.split("::")[cells.ID.value.toNumber()-1]
Will find the first value in the first and second row, the second value in the third row and the 4th item in the 4th row. This is illustrated here:
The formula breaks down as follows:
value.split("::") = splits the list of URI/Term pairs into an array
cells.ID.value.toNumber() = converts the value in the ID column into
a number type
-1 = because array members are counted from zero
Hope this is clear
| {
"pile_set_name": "StackExchange"
} |
Q:
Creating a drought metric using precipitation data?
I'm currently working on a project in which I need to create a metric to quantify drought. For the state of Pennsylvania, I have daily precipitation data from many NOAA precipitation gauges (inches of rain per day). I am attempting to create a metric that can quantify how much rainfall certain areas received, hopefully boiling it down to one number. The goal of the metric is to describe the 1960's drought. What's the best way to do this?
I've looked into many scientific papers for similar metrics, but most use more in depth models, and not just daily precip data. What are some good ideas? Average daily/weekly/monthly? Average number of days without rainfall?
A:
One of the most widely used index to quantify the drought events is the standardized precipitation index (SPI) and it is actually based on precipitation time series only.
The index is used, among others, also by the NOAA.
I'll borrow the description of the index from a scientific article (Farinosi et al 2018)
Standardized precipitation Index (McKee et al., 1993)
The index is a measure expressed in standard deviation units of the
variation of the precipitation of a specific number of months respect
to the long run average (WMO, 2012). The number of months based on
which the SPI could be calculated usually varies between 3 and 48
months. Shorter time scales SPI is considered a good indicator of
variations of soil moisture, while on longer scales (up to 24 months),
it could be associated with groundwater or reservoir levels variation
(WMO, 2012). That for, a shorter SPI (3 months) is often utilized to
detect meteorological droughts; a medium SPI (6 months) is usually
associated with agricultural droughts; a longer SPI (12–24 months) is
associated with hydrological droughts. SPI was calculated using the R
package SPEI (Beguería and Vicente-Serrano, 2014).
Usually the minimum requirement for a robust estimation of the SPI is a precipitation time series of at least 30 years.
You could use the SPI to quantify the number of "droughts" (choosing a threshold below which you define the event - for instance see this) in a period for your area of interest, and then aggregate the number of "meteorological", "agricultural", and "hydrological" droughts.
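As an illustrative sketch in R (hypothetical data: monthly_totals stands for one gauge's daily readings already aggregated to monthly totals; spi() is from the SPEI package mentioned above):
library(SPEI)

# monthly precipitation totals for one gauge, e.g. 1950-1979
prec <- ts(monthly_totals, start = c(1950, 1), frequency = 12)

# 3-month SPI, commonly used for meteorological drought
spi3 <- spi(prec, scale = 3)

# count months at or below a chosen drought threshold, e.g. SPI <= -1
sum(spi3$fitted <= -1, na.rm = TRUE)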
| {
"pile_set_name": "StackExchange"
} |
Q:
Textbox maximum/minimum numeric values C#
I want to create an if statement that validates whether an input number in a textbox is between 0 and 100. For example:
NumericUpDown num = new NumericUpDown();
num.Maximum = 100;
num.Minimum = 0;
if (int.Parse(txtMid.Text) < num.Minimum && int.Parse(txtMid.Text) > num.Maximum)
{
MessageBox.Show("Please input 0 to 100 only.");
}
That's all. Thanks in advance.
A:
You need to parse the txtbox1.Text string into an integer:
int val = 0;
bool res = Int32.TryParse(txtbox1.Text, out val);
if(res == true && val > -1 && val < 101)
{
// add record
}
else
{
MessageBox.Show("Please input 0 to 100 only.");
return;
}
Also, do you need to test one or two textboxes? If it's only one and you need the interval 0, 100, then your condition is wrong, because it always returns false (a number cannot be at the same time <= -1 and >= 101).
VERY IMPORTANT: I have reversed your if/else: you have to print the error in the else and add the record in the if.
| {
"pile_set_name": "StackExchange"
} |
Q:
If HTML is sent through $.get(..), jquery data table is not properly formatted
If in content.php I do not fill the table with data, then I can see a properly formatted jQuery DataTable. However, if I fill it with data (I tried both DB data and manual input of some random numbers), it is not formatted any more and looks like hell. Could it be that $.get(..) (used in test.php) does not work properly in this example?
test.php
$(document).ready(function() {
loadContent();
});
function loadContent() {
$.get('modules/mod_scheduler/content.php', function(data) {
$('#table').html(data);
});
}
<div id="table"></div>
content.php
<?php
include_once '../../include/connect_db.php';
$query = "SELECT * FROM `TestTable`";
$result=execute_query($query);
?>
<table id="newspaper-b" border="0" cellspacing="2" cellpadding="2" width = "100%">
<thead>
<tr>
<th scope="col">Flight Num</th>
<th scope="col">Appearance Time</th>
<th scope="col">Target Time</th>
<th scope="col"></th>
</tr>
</thead>
<tbody>
<?php while($row=mysql_fetch_assoc($result)) {
$flightNum=$row['flightNum'];
$appearanceTime=$row['appearanceTime'];
$targetTime=$row['targetTime'];
?>
<tr>
<td><?php echo $flightNum; ?></td>
<td>
<?php echo $appearanceTime;?>
</td>
<td>
<?php echo $targetTime;?>
</td>
<td id="<?php echo $flightNum; ?>">
<div>
<img src='modules/images/edit.png' alt='Edit' />
</div>
</td>
</tr>
<?php }?>
</tbody>
</table>
Of course, I have also defined the following:
<link type="text/css" rel="stylesheet" href="modules/mod_scheduler/css/demo_table.css"/>
<link type="text/css" rel="stylesheet" href="modules/mod_scheduler/css/demo_page.css"/>
<link type="text/css" rel="stylesheet" href="modules/mod_scheduler/css/demo_table_jui.css"/>
<script type="text/javascript" src="modules/mod_scheduler/js/dataTable/jquery-ui.js"></script>
<script type="text/javascript" src="modules/mod_scheduler/js/dataTable/jquery.dataTables.js"></script>
<script language="javascript" type="text/javascript" src="modules/mod_scheduler/js/jqplot/plugins/jqplot.pointLabels.js"></script>
A:
You are returning a table, but as far as I can see you are not calling dataTable() on it. The way the DataTables plugin works, you typically call dataTable() on a table. I can't recall what should happen with "arbitrary" style sheets (whatever styles you've set for the table), but certainly if you're using jQuery UI (which it looks like you are), it won't look right until you call the function, thereby adding all the necessary classes for the jQuery UI theme. You could return those classes already in the table, but currently you are not.
Since you are doing this server-side, I would take it a step further and return JSON-formatted data for the table instead of a whole bunch of table markup. That's the more elegant and manageable way of using DataTables.
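For the first point, a minimal sketch of the fix in test.php is to initialize the plugin once the returned markup is in the DOM (the table in content.php already has the id newspaper-b):
function loadContent() {
    $.get('modules/mod_scheduler/content.php', function(data) {
        $('#table').html(data);
        // initialize DataTables only after the markup has been inserted
        $('#newspaper-b').dataTable();
    });
}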
| {
"pile_set_name": "StackExchange"
} |
Q:
how to create validator for lowercase user in Django 2.0?
My site is up and running. However, silly me, I didn't put in a validator to check that users and usernames can only be lowercase (I only realised later that Django allowed capitalized users and usernames). Although I have put up a warning, users usually ignore it and still write uppercase or at least capitalized letters in their signup forms. I then encounter the problem of slugs not working, after which I have to change their usernames manually. I do not want to change the behavior of slugs; instead, can I please ask someone for help in changing my views? I have tried .lower() and .format() as well and they didn't work. I am very weak on validators.
forms.py
from django.contrib.auth.models import User
from django import forms
from captcha.fields import CaptchaField
class SignUpForm(forms.ModelForm):
    captcha = CaptchaField()
    password = forms.CharField(max_length=15, widget=forms.PasswordInput)

    class Meta:
        model = User
        fields = ['username', 'email', 'password']
Views.py
from django.shortcuts import render, redirect
from django.http import HttpResponse
from django.contrib.auth import authenticate, login
from django.views import generic
from django.views.generic import View
from home.forms import SignUpForm
class SignUpFormView(View):
    form_class = SignUpForm
    template_name = 'home/signup.html'

    #if there is no sign up yet
    def get(self, request):
        form = self.form_class(None)
        return render(request, self.template_name, {'form': form})

    #if going to sign up
    def post(self, request):
        form = self.form_class(request.POST)
        if form.is_valid():
            #it takes the information but does not save it yet
            user = form.save(commit=False)
            #cleaned normalized data
            username = form.cleaned_data['username']
            password = form.cleaned_data['password']
            user.set_password(password)
            user.save()
            #returns if it is all correct
            user = authenticate(username=username, password=password)
            if user is not None:
                if user.is_active:
                    login(request, user)
                    return redirect("userprofile:newprofile")
        return render(request, self.template_name, {'form': form})
A:
You can add a validation in the SignupForm:
from django.contrib.auth.models import User
from django import forms
from captcha.fields import CaptchaField
class SignUpForm(forms.ModelForm):
    captcha = CaptchaField()
    password = forms.CharField(max_length=15, widget=forms.PasswordInput)

    def clean_username(self):
        data = self.cleaned_data['username']
        if not data.islower():
            raise forms.ValidationError("Usernames should be in lowercase")
        return data

    class Meta:
        model = User
        fields = ['username', 'email', 'password']
So in case the data.islower() check fails (the username contains uppercase characters), it will raise a ValidationError, and thus the form is not valid.
Note that islower() only checks cased characters, so if the string contains digits, it will still succeed. You thus might want to fine-tune the check. Or, as specified in the documentation of str.islower:
Return True if all cased characters in the string are lowercase and
there is at least one cased character, False otherwise.
An alternative might be to convert the data to lowercase, such that a user who enters 'FooBar' gets foobar as their username, although this can result in users being confused that the username they picked is not the username they get.
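If you prefer that behaviour, a sketch of the normalizing variant would be:
def clean_username(self):
    # silently normalize instead of rejecting
    return self.cleaned_data['username'].lower()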
| {
"pile_set_name": "StackExchange"
} |
Q:
Join two EditBoxes into one EditBox in Android
I am new to Android.
I want to join two EditBoxes into one EditBox.
Can anyone help me here?
Thanks in advance.
A:
Give a background to a linear layout with vertical orientation and add two transparent text boxes to it.
<LinearLayout android:layout_width="match_parent"
android:layout_height="match_parent"
android:background="@drawable/background_of_big_edittext"
android:orientation="vertical" >
<EditText
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:background="@android:color/transparent" />
<View
android:layout_width="match_parent"
android:layout_height="1dp"
android:background="@color/color_of_your_divider" />
<EditText
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:background="@android:color/transparent" />
</LinearLayout>
| {
"pile_set_name": "StackExchange"
} |
Q:
Poisson Integral relation
If $$ I_n(r) = \int_0^\pi \frac{\cos nx}{r^2-2r\cos x+1} \, dx $$
How to prove that
$$ I_{n-1}(r)+I_{n+1}(r)= \left(r+\frac{1}{r}\right)I_n(r)\text{ ?}$$
I only find that $$I_{n-1}(r)+I_{n+1}(r)= \int_0^\pi \frac{2\cos nx\cos x}{r^2-2r\cos x+1} \, dx$$
A:
$$\begin{aligned}
I_{n-1}(r)+I_{n+1}(r) &=\int_0^{\pi}\frac{2\cos(nx) \cos x}{r^2-2r\cos x+1}\,dx \\
&=-\frac{1}{r}\int_0^{\pi} \frac{\cos(nx)(r^2-2r\cos x+1-r^2-1)}{r^2-2r\cos x+1}\,dx \\
&=-\frac{1}{r}\int_0^{\pi}\cos(nx)\,dx+\frac{r^2+1}{r}\int_0^{\pi} \frac{\cos nx}{r^2-2r\cos x+1}\,dx\\
&=\left(r+\frac{1}{r}\right)I_n(r) \\
\end{aligned}$$
where the last step uses $\int_0^{\pi}\cos(nx)\,dx=0$ for $n\geq 1$.
| {
"pile_set_name": "StackExchange"
} |
Q:
Assign a link to a Bootstrap modal window
How do I get a link to a modal window?
I want to send this link so that when the user clicks it, the modal opens directly.
How do I do that?
A:
Possible solution
One option is to use a hash fragment to indicate that the client wants to open a modal, for example:
www.sitedevendas.com/produto#comprar
This way you would know that the client wants to open the "comprar" modal, so you should check window.location.href to see whether #comprar is present in that string.
Example
$(document).ready(function() {
if(window.location.href.indexOf('#comprar') != -1) {
$('#comprar').modal('show');
}
});
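A slightly simpler variant of the same check reads window.location.hash directly (it returns the fragment including the leading #):
$(document).ready(function() {
    if (window.location.hash === '#comprar') {
        $('#comprar').modal('show');
    }
});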
| {
"pile_set_name": "StackExchange"
} |
Q:
Hide child view if parent view's height is less than a certain limit
I have a few text elements, and under them there is a WebView.
I don't want the view to be scrollable.
So if the height of the device is very small (e.g. 3.5 inch screens), I don't want to display the WebView and only display the text elements.
Is there any way to hide the WebView if it cannot be displayed fully?
A:
You can control Visibility of any View by
webView.setVisibility(View.VISIBLE)
webView.setVisibility(View.INVISIBLE)
You can also use
webView.setVisibility(View.GONE)
Just try to use those functions under some condition. Check the size of the parent view and, if it's lower than some value, hide the webView. The difference between INVISIBLE and GONE is that the first one works similarly to making the view completely transparent (but it still takes up space in the layout), while the second one removes the View from the layout.
if (((View) webView.getParent()).getHeight() < minHeight)
    webView.setVisibility(View.GONE);
(getParent() returns a ViewParent, so it is cast to View here in order to call getHeight().)
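One caveat: getHeight() returns 0 before the first layout pass, so the check is best deferred until the view tree has been measured. A sketch using View.post() (parentView and minHeight are hypothetical names you would define yourself):
final View parentView = (View) webView.getParent();
parentView.post(new Runnable() {
    @Override
    public void run() {
        // runs after the layout pass, so getHeight() returns a real value
        if (parentView.getHeight() < minHeight) {
            webView.setVisibility(View.GONE);
        }
    }
});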
| {
"pile_set_name": "StackExchange"
} |