text (stringlengths, 3–1.74M) | label (class label, 2 classes) | source (stringclasses, 3 values)
---|---|---|
Are swimming pools breeding chlorine-resistant organisms?. Like the overuse of antibiotics. Are we breeding super microbes through the use of pool chlorine? | 0non-cybersec
| Reddit |
How to use spot instances with Amazon Elastic Beanstalk?. <p>I have an infrastructure that uses Amazon Elastic Beanstalk to deploy my application.
I need to scale my app by adding some spot instances, which EB does not support.</p>
<p>So I created a second Auto Scaling group from a launch configuration with spot instances.
The Auto Scaling group uses the same load balancer created by Beanstalk.</p>
<p>To bring up instances with the latest version of my app, I copy the user data from the original launch configuration (created by Beanstalk) to the launch configuration with spot instances (created by me).</p>
<p>This works fine, but:</p>
<ol>
<li><p>how do I update the spot instances brought up by the second Auto Scaling group when Beanstalk updates the instances it manages with a new version of the app?</p>
</li>
<li><p>is there another way, as easy and elegant, to use spot instances and still enjoy the benefits of Beanstalk?</p>
</li>
</ol>
<p><strong>UPDATE</strong></p>
<p>Elastic Beanstalk has supported spot instances since 2019... see:
<a href="https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2019-11-25-spot.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2019-11-25-spot.html</a></p>
| 0non-cybersec
| Stackexchange |
Thread 1: EXC_BREAKPOINT (code=EXC_i386_BPT, subcode=0x0) error. <p>I have an iPad app I am making, but it crashes on startup even though there are no errors or warnings; the console shows nothing besides "(lldb)", and the debugger highlights a pointer.</p>
<p>This is an image of the crash after building.</p>
<p><img src="https://i.stack.imgur.com/sVXYD.png" alt="Image Of Crash"></p>
<p>And here is the code:</p>
<pre><code>- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
/* UIImage *myImage = [UIImage imageNamed:@"de_dust2.png"];
myImageView = [[UIImageView alloc] initWithImage:myImage];
myScrollView.contentSize = CGSizeMake(myImageView.frame.size.width, myImageView.frame.size.height);
myScrollView.maximumZoomScale = 4.0;
myScrollView.minimumZoomScale = 0.75;
myScrollView.clipsToBounds = YES;
myScrollView.delegate = self;
// [myScrollView addSubview:myImageView];*/
self.window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
// Override point for customization after application launch.
self.viewController = [[ViewController alloc] initWithNibName:@"ViewController" bundle:nil];
self.window.rootViewController = self.viewController;
[self.window makeKeyAndVisible];
[self.viewController setRed:1];
[self.viewController setGreen:0];
[self.viewController setBlue:0];
[self.viewController setAlpha:1];
[self.viewController checkRotation];
return YES;
}
</code></pre>
<p>I now also noticed that the error was given at the <code>[self.window makeKeyAndVisible];</code> line.</p>
| 0non-cybersec
| Stackexchange |
Building solar farms above the clouds for uninterrupted power. | 0non-cybersec
| Reddit |
My dog has the creepiest smile.. | 0non-cybersec
| Reddit |
Movie tie-in book covers are hideous.. | 0non-cybersec
| Reddit |
Algorithm for best subset of items. <p>I have a matrix M with size NxN where each position M(i,j) is an integer representing the relationship between item i and j. If i and j are the same item then the positions M(i,j) and M(j,i) are 0.</p>
<p>What I'd need is to regroup these N items in subgroups of 5 elements each one. The value of each group would be Σ(M(i,j) for each i, j in the group).<br>
And I would need to maximize the total value of all groups.</p>
<p>I studied lots of algorithms more than 15 years ago and have forgotten most of them, and nowadays there are lots of new algorithms, so I'm a bit lost trying to find the best one for this case.</p>
<p>A friend told me to investigate Clustering algorithms but they have lots of different versions and specializations, so I don't know which one to look at first.</p>
<p>And just one more thing: besides this algorithm to maximize each group, would I need an algorithm to maximize the total value of all groups, discarding the non-optimal selections? I remember algorithms that did this but I don't even remember their names.</p>
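<p>As a rough sketch, one simple way to attack this is a local search; the snippet below is only an illustration, assuming <code>M</code> is a symmetric list of lists and that N is a multiple of 5, and scoring each unordered pair once:</p>
<pre><code>import random

def group_score(M, group):
    # Sum of pairwise relationships inside one group (each unordered pair once)
    return sum(M[i][j] for i in group for j in group if i < j)

def improve(M, groups, iters=10000):
    # Simple local search: swap items between two groups, keep swaps that help
    for _ in range(iters):
        a, b = random.sample(range(len(groups)), 2)
        i, j = random.randrange(5), random.randrange(5)
        before = group_score(M, groups[a]) + group_score(M, groups[b])
        groups[a][i], groups[b][j] = groups[b][j], groups[a][i]
        after = group_score(M, groups[a]) + group_score(M, groups[b])
        if after < before:  # revert swaps that do not improve the total
            groups[a][i], groups[b][j] = groups[b][j], groups[a][i]
    return groups

# Usage: start from consecutive groups of 5 and let the search improve them
# N = len(M); groups = [list(range(k, k + 5)) for k in range(0, N, 5)]
</code></pre>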
| 0non-cybersec
| Stackexchange |
Why am I getting an "execv(file, args)" error when using execl()?. <p>I am trying to use execl() to execute a new program but it keeps returning an execv() error saying that arg2 must not be empty.</p>
<pre><code>if pid == 0:
print("This is a child process")
print("Using exec to another program")
os.execl("example_prg_02.py")
</code></pre>
<p>Why would this be the case when using execl()? Does execl() require args too?</p>
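<p>A minimal sketch of how <code>os.execl</code> is typically called: it takes the path of the executable followed by its argument list, and the first argument (the program's own name, argv[0]) must be supplied. Since a .py file is not an executable image, one common pattern is to run it through the interpreter:</p>
<pre><code>import os
import sys

# argv[0] (the program's own name) must be passed explicitly after the path
os.execl(sys.executable, sys.executable, "example_prg_02.py")
</code></pre>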
| 0non-cybersec
| Stackexchange |
IPTables rule for neither of two interfaces. <p>I'm using iptables and I have a question I can't find an answer to. I want to apply a rule in the PREROUTING chain of the nat table. The rule is supposed to jump to a chain, but I want it to be executed for every interface except two of them. I can't use wildcards because I need all of the other interfaces regardless of their name (say I can't rely on it).</p>
<p>I have applied this rule:</p>
<pre><code>iptables -t nat -A PREROUTING -j my_chain ! -i eth0
</code></pre>
<p>That results into this:</p>
<pre><code>Chain PREROUTING (policy ACCEPT 19 packets, 3008 bytes)
pkts bytes target prot opt in out source destination
10 1538 my_chain all -- !eth0 * 0.0.0.0/0 0.0.0.0/0
</code></pre>
<p>But I need something like this:</p>
<pre><code>Chain PREROUTING (policy ACCEPT 19 packets, 3008 bytes)
pkts bytes target prot opt in out source destination
10 1538 my_chain all -- !(eth0 or tun0) * 0.0.0.0/0 0.0.0.0/0
</code></pre>
<p>The thing is, it cannot be done with two separate rules, because traffic coming in on one of these two interfaces would still match the rule written for the other one. I also tried something like:</p>
<pre><code>iptables -t nat -A PREROUTING -j my_chain ! -i eth0 ! -i tun0
</code></pre>
<p>But it returns: <code>multiple -i flags not allowed</code></p>
<p>Basically, I need a way to implement that <code>or</code> in the interface condition, i.e. <code>!eth0 and !tun0</code> (the logical equivalent).</p>
<p>I'm using debian with iptables v1.4.21.</p>
<p>Thanks for your help!</p>
| 0non-cybersec
| Stackexchange |
Prove that the sequence is convergent and find its limit.. <p>Prove the sequence:$$y(n) = (y(n-1) + 2y(n-2))/3 \text{ for } n > 2 \text{ and }y(1)<y(2)$$ is convergent and find its limit.</p>
<p><strong>My progress so far</strong></p>
<p>So far, I have been able to prove that the sequence is monotonically increasing by proving $y(3)>y(1)$, then $y(4)>y(3)$, and I proved the rest by PMI.</p>
<p>If we prove that the sequence is bounded above, by Monotone Convergence theorem, the sequence shall be convergent. I haven't been able to prove it though.</p>
<p>Also, once we prove that the limit exists, how do we find it? Generally we do so by taking limits on both sides of the defining recurrence (after proving that the limit exists) and solving for the value. However, that isn't working here.</p>
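<p>A sketch of one standard route, assuming the recurrence means $3y(n) = y(n-1) + 2y(n-2)$: adding $2y(n-1)$ to both sides gives the invariant</p>
<p>$$3y(n) + 2y(n-1) = 3y(n-1) + 2y(n-2) = \dots = 3y(2) + 2y(1).$$</p>
<p>Since each term is a weighted average of the two previous ones, every $y(n)$ stays in $[y(1), y(2)]$, which gives the required bound. Once convergence is established, letting $n \to \infty$ in the invariant yields $5L = 3y(2) + 2y(1)$, i.e. $L = \frac{3y(2)+2y(1)}{5}$.</p>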
| 0non-cybersec
| Stackexchange |
Homestead installation. <p>I could not figure out where I made a mistake here. My command <code>vagrant up</code> replies with the following lines</p>
<pre><code>$ vagrant up
Check your Homestead.yaml file, the path to your private key does not exist.
Check your Homestead.yaml file, the path to your private key does not exist.
</code></pre>
<p><a href="https://i.stack.imgur.com/AlJuQ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/AlJuQ.png" alt="enter image description here"></a></p>
| 0non-cybersec
| Stackexchange |
Concerning the feasibility of example-driven modelling techniques
S. Thorne, D. Ball, Z. Lawson
UWIC and Cardiff University
[email protected], [email protected]
1.0 Introduction
We report on a series of experiments concerning the feasibility of example-driven
modelling. The main aim was to establish experimentally, within an academic
environment, the relationship between error and task complexity using a) traditional
spreadsheet modelling, b) example-driven techniques. We report on the experimental
design, sampling, research methods and the tasks set for both control and treatment
groups. Analysis of the completed tasks allows comparison of several different
variables. The experimental results compare the performance indicators for the
treatment and control groups by comparing accuracy, experience, training, confidence
measures, perceived difficulty and perceived completeness. The various results are
thoroughly tested for statistical significance using: the Chi squared test, Fisher’s exact
test for significance, Cochran’s Q test and McNemar’s test on difficulty.
1.1 Example-Driven Modelling
The principal concept of Example-Driven Modelling (EDM) is to collect example
attribute classifications, provided by the user, to compute the mathematical function
of those examples and construct a generalised model via a machine learning
technique.
To clarify, figure 1 shows the concept from start to end. Firstly the user would have to
provide example attribute classifications for the problem they wish to model. The
examples are then formatted into a data set and fed through a learning algorithm. The
algorithm learns from the example data provided, which results in a general model
that is able to generalise to new, unseen examples in the problem domain.
[Flow diagram: the user produces an example data set, which is fed into a machine learning algorithm, which learns a general model.]
Figure 1 Example-Driven Modelling (EDM)
This approach eliminates the need for the user to produce formulae; the user only
gives example data for the problem they wish to model. This therefore eliminates
errors in constructing formulae, since the user is no longer required to produce them.
The burden of calculation is placed on the computer, which, using a machine learning
algorithm, computes the function of the examples. As the literature suggests, this may
be a more effective use of human and computer strengths (Michie, 1989).
However, in the case of example giving for EDM this is only a theory, and some
investigation into the feasibility of such an approach is required, i.e. how feasible is it
for humans to think up examples for a given problem?
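As a purely illustrative sketch of the pipeline in figure 1 (not the implementation used in
this study), the fragment below assumes scikit-learn's DecisionTreeClassifier as the learning
algorithm and a simple pass/fail grading domain:

# Illustrative sketch only: user-supplied examples in, general model out
from sklearn.tree import DecisionTreeClassifier

# Example attribute classifications supplied by the user: (exam mark, class)
examples = [(10, "Fail"), (25, "Fail"), (39, "Fail"),
            (40, "Pass"), (55, "Pass"), (70, "Pass")]

X = [[mark] for mark, _ in examples]   # attributes
y = [label for _, label in examples]   # classifications

model = DecisionTreeClassifier().fit(X, y)   # the learning step
print(model.predict([[38], [61]]))           # generalises to unseen marks

The point of the sketch is simply that the user supplies labelled examples and the learning
algorithm, not the user, induces the decision boundary.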
2.0 Investigating the feasibility of giving examples
To investigate if giving examples works in practice an experiment was designed to
compare traditional spreadsheet modelling techniques and the novel approach of
giving examples. The first group, the “treatment” group, were required to give
example data to complete the tasks. The other group, the control group, were given
the same tasks to complete using a spreadsheet application.
2.1 Experimentation
The experiment into feasibility was designed in accordance with guidelines cited by
Shadish et al. (2002) and Campbell and Stanley (1963). Also, published work using
experimental methodologies in spreadsheet research was considered (Hicks and
Panko 1995, Janvrin and Morrison 1996, Panko and Halverson 1998, Janvrin and
Morrison 2000, Howe and Simkin 2006).
2.2 Experiment aim
The main aim of the experiment was to establish experimentally within an academic
environment, using postgraduate students:
1. The relationship between error and task complexity using a) spreadsheet
modelling techniques, b) example giving
2. The (hypothesised) superiority of Example giving over traditional
spreadsheet modelling.
3. A satisfactory statistical measure of overconfidence.
4. The relationship between previous spreadsheet experience and accuracy for
both traditional spreadsheet modelling and example giving
From these aims and objectives, we will be able to determine the feasibility of Example
giving via three performance indicators
1. Whether the participants understand the instruction of giving examples, i.e.
can users understand the instructions of giving examples and generate valid
examples in the context of the experiment tasks.
2. The accuracy of the examples provided by the participants, i.e. what is the
error rate for examples provided by participants
3. The comparative error rate when compared to traditional modelling, i.e. how
does the error rate compare to that of traditional modelling and does this
warrant further investigation.
2.3 Experimental design
The experimental model chosen to evaluate the aims of the experiment is the
“Randomised two-group no posttest design”. Figure 2 shows the standard design of
such experiments; the diagram is read from left to right and shows the layout of the design.
Figure 2 Randomised two group no post test (Shadish et al. 2002)
The diagram shows the two randomised (R) groups, the treatment group (X), the
control group (which is left blank) and the two outcomes (O). In this case the control
group receive ‘standard’ treatment, i.e. they develop spreadsheet formulae using the
constructs and syntax in a spreadsheet application, such as Excel. The treatment group
receive the novel approach, this allows relative comparison between the control and
treatment groups.
2.4 Sampling
The sampling for this experiment is a cluster random sample as described by Shadish
et al. (2002) and Saunders et al. (2007). Cluster sampling identifies a suitable cluster
of participants and then randomly selects from within that group.
Considering similar development experiments in spreadsheets (Hicks and Panko
1995, Janvrin and Morrison 1996, Panko and Halverson 2001), postgraduate Masters
students were selected as an appropriate cluster.
Selection within the cluster was random, participants were not divided upon ability or
any other basis.
Participants were invited to attend a session arranged for the experiment. Upon
arriving participants were divided into two groups, the control and treatment groups.
The appropriate materials for each group were distributed and the experiment began.
2.5 Research materials
The research materials for this experiment comprise two different packs handed to the
participants.
Both packs contained a questionnaire gathering information such as age, sex,
experience, number of years using spreadsheets, and a personal rating of their skill.
This questionnaire was completed first, before the participants started the tasks. The
point of this questionnaire is to gather demographic information and to determine the
experience of spreadsheet use for a participant.
Once questionnaire 1 was completed, the participants started the tasks for the group
they were assigned to (control or treatment). The scenarios contained in tasks for the
participants, regardless of group, were identical. The manner in which the groups
completed the tasks differed, the control group produced formulae in a spreadsheet
using the syntax and functionality of the application (Microsoft Excel). The treatment
group produced example attribute classifications for each task.
After completing the tasks as best they could, the final questionnaire, questionnaire 2,
was completed. This questionnaire gathered information on the participant’s
perception of their own performance, i.e. they were asked how difficult they felt each
task was and then asked to indicate how confident they were that the provided
answers were correct.
2.6 Experiment tasks
The five tasks for the experiment were identical, the method of completing them
varied for each group. The control group submitted answers created using Microsoft
Excel, the treatment group submitted attribute classifications written on paper.
The experiment tasks were designed to be progressively more difficult, requiring
progressively more complex answers from both groups.
2.7 The tasks
The tasks given to the control and treatment group were identical, the method in
which they answered varied.
For example, in the control group task 1 was to create a formula that could give a
grade (Pass or Fail) based upon a single mark (Exam mark). The formula was
required to distinguish between pass and fail, where fail was < 40 and pass was >= 40.
For the same task, task 1, the treatment group were required to give attribute
classifications (examples) for every classification in the problem. The two
classifications are pass and fail, the participants therefore had to submit an attribute
classification of pass and fail.
The tasks were also designed to be progressively more complex. For example task one
uses one value (exam mark), 2 classifications (pass and fail) and two parameters for
those classes (<40 Fail, >= 40 Pass).
In contrast, task 5 uses 2 values (exam and coursework mark), 4 classifications (fail,
pass, merit and distinction) 4 parameters (< 40 fail, >= 40 pass, >= 55 merit and >= 70
distinction) and 1 conditional rule (Both exam and coursework values must fall in
same class to award that class).
2.8 Marking the control group
Determining the mark of participants in the control group was based upon whether the
answer provided was a valid formula in Excel and whether the formula satisfied the
specification in the task. If the formula fulfilled both criteria, it was deemed
correct; otherwise it was incorrect.
For incorrect formulae, degrees of incorrectness were measured by counting the
number of errors made in the submission. Errors can be Mechanical, Logic or
Omission errors; see Panko (1998) for a definition of these error types.
Once the number of errors was totalled, the submission was given a classification.
The classifications were as follows: 0 errors = 5, 1 error = 4, 2-3 errors =3, 4 or more
= 2, No attempt = 1.
These above classifications are used in the confidence calculation only, the other
statistics are generated from dichotomous data.
2.9 Marking the treatment group
Determining the mark of the participants in the treatment group was based upon
whether the attribute classifications were valid and whether the attribute
classifications provided satisfied the specification of the problem.
For incorrect attribute classifications, the number of errors per task was totalled and
then given a classification. The classifications were as follows: 0 errors = 5, 1 error =
4, 2-3 errors =3, 4 or more = 2, No attempt = 1.
These above classifications are used in the confidence calculation only, the other
statistics are generated from dichotomous data.
3.0 Summary statistics from experimentation
In this section performance indicators are compared between the treatment and
control groups. This indicates the usefulness of example giving in comparison to
spreadsheet modelling.
3.1 Accuracy
By comparing accuracy results gained from both the treatment and control groups, it
is evident that the treatment group were more accurate than the control group. See
Figure 3.
[Bar chart: % accuracy on each task (1 to 5) for the Treatment group and the Control group.]
Figure 3 Relative accuracy between Control and Treatment groups
As can be seen, the treatment task accuracy ranges between 78 and 60 percent, the
control group accuracy ranges between 66 and 30 percent. So comparatively,
producing examples is more accurate than producing formulae.
3.2 Confidence
The confidence calculation indicates whether the group were perfectly calibrated,
over or under confident. The formula for the confidence ratio is given in Figure 4 below.

Confidence ratio = Actual error rate / Perceived error rate
Figure 4 Confidence ratio calculation (Thorne et al. 2004)
Further details of this calculation are contained in Thorne et al. (2004)
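For example, under this reading a group whose actual error rate on a task is 0.30, but whose questionnaire answers imply an expected error rate of 0.40, scores 0.30 / 0.40 = 0.75, i.e. it is under confident; this is the region in which both groups sit in figure 5.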
[Line chart: confidence coefficient (roughly 0.75 to 1.05) on each task (1 to 5) for the Treatment group and the Control group, with a baseline drawn at 1.]
Figure 5 Confidence in Treatment and Control groups
The baseline on the graph shows the division between over and under confidence, a
value of less than 1 indicates under confidence, over 1 indicates overconfidence. A
value of 1 exactly indicates perfect calibration between expected outcome and
performance.
As can be seen, both groups were under confident in their work. This is an unusual
finding since the literature indicates that spreadsheet developers are usually
overconfident (Panko, 2003).
Although the data in figure 5 shows that both groups were mostly under confident,
there are some distinguishing features between them.
The treatment group’s data points are less erratic than the control group, indicating a
more consistent approach to evaluating their performance. This erratic grouping is
clearer if perceived difficulty (how difficult was this task?) and Perceived
completeness (did you complete the task successfully?) are mapped against each
other, see figure 6.
[Scatter plot: perceived difficulty (Very Hard, Average, Very Easy) against perceived completeness (No, Probably not, Don't know, Probably, Yes) for the Treatment group and the Control group.]
Figure 6 Difficulty and completeness
In figure 6, the treatment group’s data points are bunched together, suggesting the
values are similar. The values are responses to difficulty and completeness questions,
this suggests that the treatment group found the task’s difficulty and perceived
completeness didn’t change as the tasks progressed. In figure 6, the data points read
right to left as tasks 1 to 5.
The control group's data points are more dispersed, indicating that the values change as the tasks
progress, i.e. as the tasks progressed they were found to be harder and perceived to be less
complete.
4.0 Testing for statistical significance in the results
4.1 Introduction
The raw data for both experiments, when graphed, allows conclusions to be drawn
based upon some basic statistics such as the mean value. Whilst this serves a purpose, it
does not tell us if the results are statistically significant.
In order to see if the results are statistically significant a number of significance tests
have been applied to the accuracy data. For example, the Chi squared test is used to
determine if the differences in accuracy are statistically significant in the control and
treatment groups. One can then determine if the increased accuracy observed in the
treatment group was due to the treatment or not.
4.2 Chi squared test on accuracy data
The Chi squared test determines if the differences in accuracy for the treatment and
control groups are due to the treatment and not chance. Once calculated, chi squared
indicates if the “null hypothesis” should be accepted or rejected. The null hypothesis
is usually the opposite of what the researcher wants to find, i.e. the null hypothesis is
“There is no difference between the groups”.
The raw data consists of 1’s and 0’s, the tasks were either correct (1) or incorrect (0).
This characteristic of the data allows us to use the chi squared statistic in figure 7.
Figure 7 Chi squared statistic
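In its standard Pearson form (stated here for reference, with O and E denoting the observed and expected counts in each cell of the group-by-outcome contingency table), the statistic is

chi squared = sum over all cells of (O - E)^2 / E

and for a 2 x 2 table such as treatment/control against correct/incorrect it has one degree of freedom.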
In cases where the sample size is small, Fisher’s Exact test can be used to complement
or replace the chi squared test (Fisher, 1922).
4.3 Fisher’s exact test on accuracy data
Fisher’s exact test determines the probability of the scenario being tested, or one more
extreme, occurring. For clarity the test determines the probability of the same scenario
or a more favourable one arising. Fisher's is applied when sample sizes are small;
how small is unclear. Some cite fewer than 30 participants overall, some cite fewer
than 10 in a cell and some cite fewer than 4 in a cell.
4.4 Summary of chi squared and Fisher’s exact statistics
The combined results obtained from chi squared and Fisher’s exact are contained in
table 1 below.
        Chi squared test                               Fisher's exact
Task 1  1.396, 0.01 < P < 0.5, Accept Null             0.205 (80%)
Task 2  0.673, 0.01 < P < 0.5, Accept Null             0.301 (70%)
Task 3  2.03,  0.01 < P < 0.5, Accept Null             0.128 (88%)
Task 4  2.03,  0.01 < P < 0.5, Accept Null             0.128 (88%)
Task 5  4.22,  0.02 < P < 0.05, Reject null at 95%     0.038 (96%)
Table 1 Combined Chi squared and Fisher's exact statistics
The data in table 1 and the data graphed in figure 8, show that for both Chi squared
and Fisher’s exact, tasks 1 to 4 are not statistically significant, assuming that 95% is
the minimum level of significance.
However, both show on task 5 statistical significance which therefore rejects the null
hypothesis on that test. We can conclude that for task 5 the observed difference in
accuracy was due to the treatment not chance.
[Line chart: significance (confidence) level on each task (1 to 5) for the Chi squared and Fisher's exact tests, with reference lines at the 90% and 95% levels.]
Figure 8 Chi squared and Fisher's exact significance levels
Since the tasks were designed to be progressively more difficult, one could interpret
the results to show that the treatment is only effective in sufficiently complex
scenarios.
Cochran's Q test determines whether the difference in difficulty between tasks was statistically
significant.
4.5 Cochran’s Q test on difficulty
Cochran’s Q test allows us to test if the difficulty between all five tasks in a particular
group was significantly different. The test therefore has to be performed on both the
control and treatment group. The formula for Cochran's Q test is given in figure 9.
Figure 9 Cochran's Q
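Stated here for reference in its standard form (with notation introduced for exposition: k is the number of tasks, C_j the number of participants answering task j correctly, R_i the number of tasks answered correctly by participant i, and N the total number of correct answers), the statistic is

Q = (k - 1) * [k * sum_j C_j^2 - N^2] / [k * N - sum_i R_i^2]

The numerator is equivalent to k(k - 1) times the summed squared deviations of the C_j from their mean, which is the pattern the worked figures below follow with k = 5.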
4.5.1 Cochran’s Q for the Control group
The calculation for Cochran’s Q statistic in the control group is as follows:
Q = [5 * 4 * (16 + 4 + 1 + 1 + 16)] / (270 - 194) = 760 / 76 = 10.00
DOF = 4
0.02 < P < 0.05
This shows that there is a significant difference in difficulty between tasks for the
control group, we reject the null hypothesis at the 95% level.
4.5.2 Cochran’s Q for the Treatment group
The calculation for Cochran’s Q statistic for the treatment group is as follows:
Q = [5 * 4 * (10.24 + 0.04 + 0.64 + 0.64 + 3.24)] / (390 - 364) = 296 / 26 = 11.386
DOF = 4
Looking this up in a chi squared table gives 0.02 < P < 0.05
This shows that there is a significant difference in difficulty between tasks for the
treatment group, we reject the null hypothesis at the 95% level.
4.5.3 Conclusions on Cochran’s Q test
The calculations of Cochran's Q test show that, at the 95% confidence level, the null
hypothesis (that all the tasks are of the same difficulty) is rejected for both the control
and the treatment group. Therefore we can conclude that there is a significant
difference in difficulty between tasks.
This supports the theory that as the difficulty increases, the treatment effect becomes
significant.
However, tasks 3 and 4 both show the same result for chi squared and Fisher's exact,
see table 1. This might suggest that these two tasks were of similar difficulty based
on the results.
In order to establish if this is the case, we must compare the two sets of data for the
control and treatment group to see if there is statistical significance between them.
One method to compare two data sets for difference in difficulty is McNemar’s test on
difficulty (McNemar, 1947).
4.6 McNemar’s test on difficulty
The McNemar’s statistic allows us to test for significant difference in difficulty
between the two groups, in this case the results for task 3 and 4.
The test is X2 using 1 DOF, see figure 10 for the equation.
X² = (b - c)² / (b + c). (1)
Figure 10 McNemar's test on difficulty
McNemar’s Calculations:
M = (3 - 3)² / (3 + 3) = 0/6 = 0 (Control Group)
We therefore accept the null hypothesis, there is no difference between the two
groups, i.e. there is no significant difference in difficulty between tasks 3 and 4 for the
control group.
M = (2 - 2)² / (2 + 2) = 0/4 = 0 (Treatment group)
We therefore accept the null hypothesis, there is no difference between the two
groups, i.e. there is no significant difference in difficulty between tasks 3 and 4 for the
treatment group.
4.9 Conclusions on significance testing
The chi squared and Fisher’s tests indicate that in both the control and treatment
groups, for tasks 1 to 4, there is no statistically significant difference in accuracy.
However, both chi squared and Fisher's indicate that for task 5, in both control and
treatment groups, the observed increase in accuracy is statistically significant, i.e. the
difference in accuracy is due to the treatment and not chance, ergo giving examples in
task 5 is more accurate than producing the equivalent formula.
Cochran’s Q test indicates that between all five tasks, there is a significant difference
in difficulty. McNemar’s test on the observed accuracy in tasks 3 and 4, which have
the same values, demonstrates that there is no significant difference in difficulty
between the tasks.
One possible explanation lies in the design of the materials, i.e. tasks 3 and 4 were
not sufficiently different to yield a significant change in difficulty, hence the same
accuracy values.
To conclude, there is a relationship between difficulty and statistically significant
accuracy for the treatment. The results suggest that if the task or problem is
sufficiently difficult, there is a statistically significant accuracy advantage in using the
treatment over the control.
5.0 Conclusions
The conclusions of the experimental comparison between the treatment group, i.e.
giving examples, and the control group, i.e. producing formulae, are as follows.
5.1 Experimental Conclusions
1. The treatment group (giving examples) were considerably more accurate than the
control group (producing formulae), see figure 3. Only the accuracy on task 5 was
shown to be statistically significant, see table 1 and figure 8.
2. Both the treatment group (giving examples) and the control group (producing
formulae) were consistently under confident, see figure 5.
3. Both groups found the tasks progressively more difficult, as Cochran's Q test
indicated, except tasks 3 and 4, which showed no significant difference of this type, see
sections 4.5 and 4.6.
5.2 Limitations
Limitations to this experimental study include both general criticisms of experimental
work and specific conditions that relate to the experiment. Also, some criticism could be
made of the statistical significance tests due to the way that the answers were marked.
5.2.1 Criticisms of the experiment
Firstly, the sample of participants is from an academic environment, experimentation
with participants from a non academic environment would provide a broader view of
the usefulness of this method.
Although there was no time limit imposed on the participants to complete the tasks,
participants were not permitted to take the materials away from the venue. Some
might argue that this imposes a time pressure on the participants and that in reality
they are more likely to complete the tasks over a longer time period.
However, to keep control of the experimental conditions one must insist that
participants stay in the arranged venue until they have completed. Allowing them to
remove and complete materials at another venue may allow collusion and thus the
integrity of the experiment would be compromised.
It could be argued that the sampling approach taken in this experiment is not truly
random. A clustered random approach was taken, i.e. a cluster of individuals were
targeted and then randomly assigned to either the treatment or control group.
5.2.3 Criticisms of the significance testing
The significance tests show that only task 5 is statistically significant. The Cochran’s
Q statistic shows that the difficulty difference between the tasks is statistically
significant.
The tasks were designed to be progressively more difficult. The conclusion is
therefore that the treatment effect is only statistically significant in sufficiently
difficult tasks.
The statistics generated from the raw data are sensitive to the marking applied to the
answers provided to each question. The answers were dichotomous, i.e. attempts were
either correct or incorrect. In both the control and treatment group this mark was
based upon whether the solution provided was a valid solution that covered the
specification of the task.
If the method used to mark the answers provided for each task differed, one would
expect to see a change in the statistics. If the statistics were calculated from data that had
been processed according to invented marking criteria, the sensitivity of the
statistics would be greater.
However, since all of the statistics were strictly marked in a dichotomous fashion, this
sensitivity is not a limiting factor in this research.
5.3 Conclusion on the novel approach
The results of the experiment demonstrate that giving examples is more accurate,
easier and less prone to overconfidence than creating formulae. It is therefore feasible
to use “giving examples” as the basis for a modelling method.
References
Campbell. D and Stanley. J, (1963), ‘Experimental and Quasi experimental designs for research’,
Houghton Mifflin Company, 0-395-30787-2
Fisher, R.A. (1922). "On the interpretation of χ2 from contingency tables, and the calculation of P".
Journal of the Royal Statistical Society 85(1):87-94.
Hicks and Panko, (1995), ‘Capital Budgeting Spreadsheet Code Inspection at NYNEX’, Internet
http://panko.cba.hawaii.edu/ssr/Hicks/HICKS.HTM, 12.1.05, 12.00, Available.
Howe. H, Simkin. M, (2006), ‘Factors affecting the ability to detect spreadsheet errors’, Decision
Sciences Journal of Innovative Education, (4), 1, pp 101-122
Janvrin. D and Morrison. J, (1996), ‘Factors Influencing Risks and Outcomes in End-User
Development’ Proceedings of the Twenty-Ninth Hawaii International Conference on Systems Sciences,
Vol. II, Hawaii, IEEE Computer Society Press, pp. 346-355.
Janvrin. D and Morrison. J, (2000), ‘Using a structured design approach to reduce risks in End User
Spreadsheet development’, Information & management, 37, pp 1-12
McNemar Q., (1947), ‘Note on the sampling error of the difference between correlated proportions or
percentages’. Psychometrika, 12, 153-157.
Michie. D, Muggleton. S, Bain. M, Hayes-Michie.J, (1989), ‘An experimental comparison of human
and machine learning formalisms', Proceedings of the 6th International Conference on Machine Learning,
pp113-119
Panko. R and Halverson. R, (1998), ‘Are Two Heads Better than One? (At Reducing Errors in
Spreadsheet Modelling?’ Office Systems Research Journal, 15 (1), pp. 21-32.
Panko. R, (1998), ‘What we know about spreadsheet errors’, Journal of End User Computing, Special
issue: Scaling up End User Development, pp 15-22
Panko. R, (2003), ‘Reducing overconfidence in spreadsheet development’, Proceedings of EUSPRIG
2003 - Building better spreadsheets from the ad-hoc to the quality engineered’, 1-86166-199-1
Saunders. M, Thornhill. A, Lewis. P, (2007), ‘Research designs for business students’, 4th edition,
Pearson Education Limited, Edinburgh, UK, ISBN 9780273701484
Shadish. W, Cook. T, Campbell. D, (2002), ‘Experimental and Quasi experimental designs for
generalised causal inference’, 1st Edition, Houghton Mifflin Company, Boston, USA, 0-395-61556-9
Thorne. S. Ball. D. Lawson. Z., (2004), ‘A novel approach to spreadsheet formulae production and
overconfidence measurement to reduce risk in spreadsheet modelling’, Proceedings of EUSPRIG 2004
– Risk reduction in End User Computing, Klagenfurt, pp 71-85, ISBN 1 902724 94 1
| 0non-cybersec
| arXiv |
Oh...oh no.... | 0non-cybersec
| Reddit |
Here is mophie’s latest essential iPhone accessory. | 0non-cybersec
| Reddit |
Agile development methodology. | 0non-cybersec
| Reddit |
Algorithms: Find the best table to play (standing gambler problem). <p><em><strong>Preface</em></strong></p>
<p>This is not code golf. I'm looking at an interesting problem and hoping to solicit comments and suggestions from my peers. This question is not about <a href="http://en.wikipedia.org/wiki/Card_counting">card counting</a> (exclusively), rather, it is about determining the best table to engage based on observation. Assume if you will some kind of brain implant that makes worst case time / space complexity (on any given architecture) portable to the human mind. Yes, this is quite subjective. Assume a <a href="http://en.wikipedia.org/wiki/Playing_card#French">French deck</a> without the use of wild cards.</p>
<p><em><strong>Background</em></strong></p>
<p>I recently visited a casino and saw more bystanders than players per table, and wondered what selection process turned bystanders into betting players, given that most bystanders had funds to play (chips in hand).</p>
<p><em><strong>Scenario</em></strong></p>
<p>You enter a casino. You see n tables playing a variant of <a href="http://en.wikipedia.org/wiki/Blackjack">Blackjack</a>, with y of them playing <a href="http://en.wikipedia.org/wiki/Pontoon_%28game%29">Pontoon</a>. Each table plays with an indeterminate amount of card decks, in an effort to obfuscate the <a href="http://en.wikipedia.org/wiki/Casino_game#House_advantage">house advantage</a>.</p>
<p>Each table has a varying minimum bet. You have Z currency on your person. You want to find the table where:</p>
<ul>
<li>The least amount of card decks are in use</li>
<li>The minimum bet is higher than a table using more decks, but you want to maximize the amount of games you can play with Z.</li>
<li>Net losses, per player are lowest (I realize that this is, in most answers, considered to be incidental noise, but it could illustrate a broken shuffler)</li>
</ul>
<p><em><strong>Problem</em></strong></p>
<p>You can magically observe every table. You have X rounds to sample, in order to base your decision. For this purpose, every player takes no more than 30 seconds to play.</p>
<p>What algorithm(s) would you use to solve this problem, and what is their worst case complexity? Do you:</p>
<ul>
<li>Play Pontoon or Blackjack ?</li>
<li>What table do you select ?</li>
<li>How many rounds do you need to observe (what is the value of X), given that the casino can use no more than 8 decks of cards for either game? Each table has between 2 and 6 players.</li>
<li>How long did you stand around while finding a table?</li>
</ul>
<p>I'm calling this the "<strong>standing gambler problem</strong>" for lack of a better term. Please feel free to refine it.</p>
<p><em><strong>Additional</em></strong></p>
<p>Where would this be useful if not in a casino?</p>
<p><em><strong>Final</em></strong></p>
<p>I'm not looking for a magic gambling bullet. I just noticed a problem which became a bone that my brain simply won't stop chewing. I'm especially interested in applications way beyond visiting a casino.</p>
| 0non-cybersec
| Stackexchange |
"symlink" data to new database. <p>I am just trying to figure some ways to do manage archiving of some/most of our Application data within the database and wondering if something like this would be possible:</p>
<ol>
<li>Archive anything with a status of resolved, and a data updated over 3 years</li>
<li>Move MOST, but not all that data to the archive database, and replace the values within the current production database with a “SymLink” that points to the archived database?</li>
</ol>
<p>Here is a simplified example:</p>
<pre><code>#######################################################################################################
### Active_Prod ###
#######################################################################################################
# ALIASAPPTYPE ALIASAPPREASON PZINSKEY PZPVSTREAM #
# App_Type_1234 New Enrollee 132387Something6357997 <SYMLINKED to Archive_Prod.pzpvstream> #
# #
# #
#######################################################################################################
### Archive_Prod ###
#######################################################################################################
# ALIASAPPTYPE ALIASAPPREASON PZINSKEY PZPVSTREAM #
# App_Type_1234 New Enrollee 132387Something6357997 [BLOB Data] #
#######################################################################################################
</code></pre>
<p>So the query for <code>select * from Active_Prod</code> would return the following results:</p>
<pre><code>ALIASAPPTYPE ALIASAPPREASON PZINSKEY PZPVSTREAM
App_Type_1234 New Enrollee 132387Something6357997 [BLOB Data]
</code></pre>
<p>We would not be concerned with updating or inserting data as the <code>Archive_Prod</code> database would be set to read only anyway.
My thinking here is we could drastically reduce the <code>Active</code> DB2 instance by archiving most of the data (The bulk of the data resides in the BLOB anyway), but keep the “Key” fields in the “Active” database for speedier lookups.
But by creating a symlink of the data, we can improve the performance of PEGA, and improve the backup/restore times by dramatically reducing the overall size of the database.</p>
| 0non-cybersec
| Stackexchange |
Ghandhan statement on Characteristic of Fifth power of number $(n \cdot n \cdot n \cdot n \cdot n)$. <p>I have identified a few unique characteristics of the fifth power of a number, i.e. $n \cdot n \cdot n \cdot n \cdot n$.
Below are the 2 characteristics.</p>
<h2>For any integer number N,</h2>
<ol>
<li>Last digit of $N$ and its last digit of fifth power $N \cdot N \cdot N \cdot N \cdot N$ are same. </li>
<li>Value of $(N\cdot N\cdot N\cdot N \cdot N) - N$ is always divisible by $30$. </li>
</ol>
<p>Few Examples below, </p>
<ul>
<li><p>$N = 2$,</p>
<p>$$(N\cdot N\cdot N\cdot N \cdot N) = 32$$</p>
<p>$$\left(\frac{(N\cdot N\cdot N\cdot N \cdot N)-N}{30}\right) = 1$$ </p></li>
<li><p>$N = 4$</p>
<p>$$(N\cdot N\cdot N\cdot N \cdot N) = 1024$$ </p>
<p>$$\left(\frac{(N\cdot N\cdot N\cdot N \cdot N)-N)}{30}\right) = 34$$</p></li>
</ul>
<p>If these findings are not valid, please refute this statement with your examples.</p>
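<p>One standard way to see both properties at once is the factoring</p>
<p>$$N^5 - N = N(N^2-1)(N^2+1) = (N-1)\,N\,(N+1)\,(N^2+1).$$</p>
<p>The product of the three consecutive integers $(N-1)N(N+1)$ is divisible by $2$ and by $3$, and Fermat's little theorem gives $N^5 \equiv N \pmod 5$, so $30 \mid N^5 - N$. Divisibility by both $2$ and $5$ also gives $N^5 \equiv N \pmod{10}$, which is exactly the statement that $N$ and $N^5$ share the same last digit.</p>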
| 0non-cybersec
| Stackexchange |
iOS 8.1 available on Monday.. | 0non-cybersec
| Reddit |
Share an external module between lazy loaded modules in angular2. <p>My app has components that use a heavy-weight external package (ag-grid, about 1MB) that is provided as an angular2 module (<code>AgGridModule</code>). I would like to load the package only when the components using it are required, so my <code>ContentModule</code> and all of its submodules are lazy loaded. The whole structure looks like this:</p>
<p><a href="https://i.stack.imgur.com/w9GFj.png"><img src="https://i.stack.imgur.com/w9GFj.png" alt="enter image description here"></a></p>
<p>However, when I import <code>AgGridModule</code> into both <code>Submodule1</code> and <code>Submodule3</code>, it ends up being included into compiled JS twice, making both 1.chunk.js and 3.chunk.js large. I tried importing it into <code>ContentModule</code>, but then the submodules do not recognize the components that are included in <code>AgGridModule</code>, even if I list them in the <code>exports</code> property of <code>ContentModule</code>.</p>
<pre><code>@NgModule({
imports: [
ContentRoutingModule,
SomeOtherModule,
AgGridModule.withComponents([])
],
exports: [
// this is the component from AgGridModule that I use
AgGridNg2
]
})
export class ContentModule { }
</code></pre>
<p>Is there a way to share a module between lazy loaded modules, or to expose some components of an imported module to lazy loaded children?</p>
<p>UPD: Creating a shared module and importing it into submodules does not help, there are still two chunks with about 1MB each:
<a href="https://i.stack.imgur.com/mUjvm.png"><img src="https://i.stack.imgur.com/mUjvm.png" alt="enter image description here"></a></p>
<p>UPD2: I solved the problem temporarily by merging Submodule1 and Submodule3 into a single module.</p>
| 0non-cybersec
| Stackexchange |
Find minimum and maximum values of a function. <p>I have a function and I would like to find its maximum and minimum values. My function is this:</p>
<pre><code>import math

def function(x, y):
exp = (math.pow(x, 2) + math.pow(y, 2)) * -1
return math.exp(exp) * math.cos(x * y) * math.sin(x * y)
</code></pre>
<p>I have an interval for x [-1, 1] and y [-1, 1]. I would like to find a way, limited to this interval, to discover the max and min values of this function.</p>
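<p>A rough sketch of one numerical way to do this (assuming NumPy and SciPy are available; this is not the only approach): scan a grid of starting points inside the box and refine the best candidates with a bounded optimiser, once for the minimum and once for the maximum (by minimising <code>-f</code>).</p>
<pre><code>import numpy as np
from scipy.optimize import minimize

def f(p):
    x, y = p
    return np.exp(-(x**2 + y**2)) * np.cos(x * y) * np.sin(x * y)

bounds = [(-1, 1), (-1, 1)]
starts = [np.array([x, y]) for x in np.linspace(-1, 1, 21)
                           for y in np.linspace(-1, 1, 21)]

best_min = min((minimize(f, p0, bounds=bounds) for p0 in starts),
               key=lambda r: r.fun)
best_max = min((minimize(lambda p: -f(p), p0, bounds=bounds) for p0 in starts),
               key=lambda r: r.fun)

print(best_min.fun, best_min.x)    # approximate minimum value and its location
print(-best_max.fun, best_max.x)   # approximate maximum value and its location
</code></pre>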
| 0non-cybersec
| Stackexchange |
Not able to connect to network inside docker container. <p>I have a CentOS 7 host on which I am running Docker. When I do a ping from my host to 8.8.8.8, the ping is successful, whereas the same ping inside a Docker container is not working.</p>
<p>From Host</p>
<pre><code>[root@linux1 ~]# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=47 time=31.5 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=47 time=31.6 ms
^C
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 31.592/31.617/31.643/0.179 ms
</code></pre>
<p>From Docker Container (I am using basic ubuntu image):</p>
<pre><code>[root@linux1 ~]# docker run ubuntu ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
From 172.17.0.1 icmp_seq=1 Destination Host Unreachable
From 172.17.0.1 icmp_seq=2 Destination Host Unreachable
From 172.17.0.1 icmp_seq=3 Destination Host Unreachable
From 172.17.0.1 icmp_seq=4 Destination Host Unreachable
^C
--- 8.8.8.8 ping statistics ---
6 packets transmitted, 0 received, +4 errors, 100% packet loss, time 5000ms
pipe 4
</code></pre>
<p>Any suggestions would be helpful. Thanks</p>
| 0non-cybersec
| Stackexchange |
How can I change the voice used by Firefox Reader View (Narrator) in Ubuntu?. <p><a href="https://i.stack.imgur.com/6DYQpm.png" rel="noreferrer"><img src="https://i.stack.imgur.com/6DYQpm.png" alt="**how to change firefox reader view voice in ubuntu** "></a></p>
<p>The default voice as well as all alternative voices are very difficult to understand.</p>
<p>I cannot find any documentation about how this feature is wired up.</p>
| 0non-cybersec
| Stackexchange |
5 Terrifying Origin Stories Behind Popular Children's Songs. | 0non-cybersec
| Reddit |
Inequality involving $\lim \sup$. <blockquote>
<p>Let $\{a_n\}$ be sequence of positive terms. Prove that $\displaystyle \lim_{n\to\infty}\sup\left(\frac{a_1+a_{n+1}}{a_n}\right)^n\ge e$</p>
</blockquote>
<p>I'm trying to reduce the LHS to some form of the type $\displaystyle \lim_{n\to \infty}\left(1+\frac{1}{n}\right)^n$ and I have also tried using the fact that $\lim \sup a_n\ge \lim a_n$, but couldn't get much out of it.</p>
| 0non-cybersec
| Stackexchange |
Top comment determines my "ice breaker" on hot or not [Social]. Have to translate to swedish though | 0non-cybersec
| Reddit |
Has anyone defined a limit of a sequence of fields? In particular, what is the limit of finite fields?. <p>I'm curious about
$$ \lim_{n \rightarrow \infty} \mathbb{F}_n $$
Is it $\mathbb{Z}$? That seems reasonable if you consider it as a set but of course $\mathbb{Z}$ is not a field so that is confusing. I think the problem is probably how you define the limit in this case. Has anyone ever done so?</p>
<p>Edit: Another question. What is the smallest field that contains all finite fields? We think the answer to this is $\mathbb{Q}$, but again we don't have a formal definition of "containment", so this is a problem too. Maybe using subfields. Have either of my questions ever been studied?</p>
| 0non-cybersec
| Stackexchange |
Hibs vs Hearts - Scottish Cup Final 1896 (unusual pitch markings + man standing in awesome hurr durr pose) [http://scotlandspeople.gov.uk]. | 0non-cybersec
| Reddit |
Unable to boot windows after installing drivers for graphics card. <p>I have a Samsung Chronos 700z laptop (I don't remember the exact model number, and I don't have the laptop in front of me to check; I can update this later if required).</p>
<p>Recently, while I was browsing the internet, the laptop suddenly turned off. I tried to turn it on, but it shuts itself down while loading the OS. Initially I thought it was because it was overheated, so after it had cooled down I used compressed air to remove all the dust.</p>
<p>But this didn't solve the problem. It worked when I ran Windows in safe mode, so I thought it was some issue with the system, and I reinstalled it. After I reinstalled the new system on top of the old one, everything seemed to work fine until I installed the graphics card drivers. Then I got exactly the same issues as I had at the beginning.</p>
<p>Another thing: I tried to run Linux from a live CD, and it didn't work.</p>
<p>Does anyone have a suggestion for what I should do, or at least an idea of what happened?</p>
| 0non-cybersec
| Stackexchange |
From 1-1000, choosing at random, what is the probability that number is prime or composite with a prime factor p $\leq$ 29?. <p>An integer $k \in \{1,2, \dots, 999, 1000\}$ is selected at random. What is the probability that $k$ is a prime number or a composite number with a prime factor $p\leq29$?</p>
| 0non-cybersec
| Stackexchange |
How to convert from char* to id* with ARC enabled. <p>I'm trying to construct a "fake" variable arguments list, using the technique described <a href="http://cocoawithlove.com/2009/05/variable-argument-lists-in-cocoa.html" rel="noreferrer">here</a>, but for an ARC-enabled project, and I can't figure out how to get rid of the error I'm getting.</p>
<p>Here's the code in question:</p>
<pre><code>NSMutableArray* argumentsArray = [NSMutableArray array];
// ... Here I fill argumentsArray with some elements
// And then, I want to construct a "fake" variable argument list
char* fakeArgList = (char*) malloc( sizeof(NSString*) * [argumentsArray count]);
[argumentsArray getObjects: (id*) fakeArgList];
NSString* content = [[NSString alloc] initWithFormat: formatString arguments:fakeArgList];
</code></pre>
<p>XCode complains on the <em>(id</em>) fakeArgList* casting, saying:</p>
<blockquote>
<p>Cast of non-Objective-C pointer type 'char *' to '_autoreleasing id *'
is disallowed with ARC</p>
</blockquote>
<p>My initial theory was that I just need to add __unsafe_unretained to (id*) casting to tell ARC that I'm responsible for that block of memory and it shouldn't retain/release it, but that doesn't work and I can't figure out how to fix this problem.</p>
<p><strong>Update:</strong> Here's the full function. It should take a printf-style format string and a variable list of field names inside the .plist and output a formatted string with data loaded from .plist. I.e., if I have a .plist file with fields "field1" = "foo" and "field2" = 3 and I call <code>[loadStringFromFixture: @"?param1=%@&param2=%d", @"field1", @field2]</code> then I should get string "?param1=foo&param2=3"</p>
<pre><code>- (NSString*) loadStringFromFixture:(NSString*) format, ...
{
NSString* path = [[NSBundle mainBundle] bundlePath];
NSString* finalPath = [path stringByAppendingPathComponent:@"MockAPI-Fixtures.plist"];
NSDictionary* plistData = [NSDictionary dictionaryWithContentsOfFile:finalPath];
va_list argumentsList;
va_start(argumentsList, format);
NSString* nextArgument;
NSMutableArray* argumentsArray = [NSMutableArray array];
while((nextArgument = va_arg(argumentsList, NSString*)))
{
[argumentsArray addObject: [plistData objectForKey:nextArgument]];
}
NSRange myRange = NSMakeRange(0, [argumentsArray count]);
id* fakeArgList = (__bridge id *)malloc(sizeof(NSString *) * [argumentsArray count]);
[argumentsArray getObjects:fakeArgList range:myRange];
NSString * content = [[NSString alloc] initWithFormat:formatString
arguments:(__bridge va_list)fakeArgList];
free(fakeArgList);
return content;
}
</code></pre>
| 0non-cybersec
| Stackexchange |
13 meals, about $2.50 ea, 375 Cal. Feeling so pumped today!. | 0non-cybersec
| Reddit |
Data miners discover an entire chapter cut from Metal Gear Solid V: The Phantom Pain (Warning: Spoilers). | 0non-cybersec
| Reddit |
My new kitten!! His name is Goose. | 0non-cybersec
| Reddit |
Biochemistry Problem: Calculating the virtual volume/diameter of a protein from the peptide sequence. I am given a problem to calculate the virtual volume/diameter of the GFP protein found in Pacific Northwest jellyfish. My experience with the problem so far has been to visualize it in Jmol, and I have also determined it encompasses eleven beta sheets with a single alpha helix. Now I am confused about how to measure the distance from one of the beta strands to the other. Using first principles and the van der Waals radii of the side chain groups I can estimate the volume of any position in the protein.
Can any biochemistry guy help me understand a good model for how to approach any problem like this? | 0non-cybersec
| Reddit |
Unable to sudo apt upgrade properly on Ubuntu-Budgie, facing issues with Python-Samba upgrade. <p>Hi, I am new to Ubuntu and I am loving it already. I just checked for updates on Ubuntu-Budgie (Ubuntu 18.04.4 LTS) and during the update I encountered the following error.</p>
<p>Here is the scenario: I ran this command - <code>sudo apt upgrade</code> - and I get the following output <br></p>
<blockquote>
<pre><code>Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
python-samba
1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
Need to get 0 B/1,919 kB of archives.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n] y
(Reading database ... 365163 files and directories currently installed.)
Preparing to unpack .../python-samba_2%3a4.7.6+dfsg~ubuntu-0ubuntu2.16_amd64.deb ...
/var/lib/dpkg/info/python-samba.prerm: 6: /var/lib/dpkg/info/python-samba.prerm: pyclean: not found
dpkg: warning: old python-samba package pre-removal script subprocess returned error exit status 127
dpkg: trying script from the new package instead ...
/var/lib/dpkg/tmp.ci/prerm: 6: /var/lib/dpkg/tmp.ci/prerm: pyclean: not found
dpkg: error processing archive /var/cache/apt/archives/python-samba_2%3a4.7.6+dfsg~ubuntu-0ubuntu2.16_amd64.deb (--unpack):
new python-samba package pre-removal script subprocess returned error exit status 127
/var/lib/dpkg/info/python-samba.postinst: 6: /var/lib/dpkg/info/python-samba.postinst: pycompile: not found
dpkg: error while cleaning up:
installed python-samba package post-installation script subprocess returned error exit status 127
Errors were encountered while processing:
/var/cache/apt/archives/python-samba_2%3a4.7.6+dfsg~ubuntu-0ubuntu2.16_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
</code></pre>
</blockquote>
<p>I tried the following commands in addition to fix this based on google search <br></p>
<pre><code>sudo apt autoremove
sudo apt clean
sudo apt autoclean
sudo apt remove python-samba
sudo apt install --reinstall python-samba
sudo dpkg --configure -a
sudo apt --fix-broken install
</code></pre>
<p>But my bad nothing worked still I get the same output could you please help me in understanding and fixing the issue. Thanks.</p>
<h1>UPDATE 1</h1>
<p>As per the comments tried this command - <code>sudo apt install python-minimal</code> and I got the below errors</p>
<pre><code>Reading package lists... Done
Building dependency tree
Reading state information... Done
python-minimal is already the newest version (2.7.15~rc1-1).
python-minimal set to manually installed.
Suggested packages:
python-gpgme
The following packages will be upgraded:
python-samba
1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
Need to get 0 B/1,919 kB of archives.
After this operation, 0 B of additional disk space will be used.
(Reading database ... 365163 files and directories currently installed.)
Preparing to unpack .../python-samba_2%3a4.7.6+dfsg~ubuntu-0ubuntu2.16_amd64.deb ...
/var/lib/dpkg/info/python-samba.prerm: 6: /var/lib/dpkg/info/python-samba.prerm: pyclean: not found
dpkg: warning: old python-samba package pre-removal script subprocess returned error exit status 127
dpkg: trying script from the new package instead ...
/var/lib/dpkg/tmp.ci/prerm: 6: /var/lib/dpkg/tmp.ci/prerm: pyclean: not found
dpkg: error processing archive /var/cache/apt/archives/python-samba_2%3a4.7.6+dfsg~ubuntu-0ubuntu2.16_amd64.deb (--unpack):
new python-samba package pre-removal script subprocess returned error exit status 127
/var/lib/dpkg/info/python-samba.postinst: 6: /var/lib/dpkg/info/python-samba.postinst: pycompile: not found
dpkg: error while cleaning up:
installed python-samba package post-installation script subprocess returned error exit status 127
Errors were encountered while processing:
/var/cache/apt/archives/python-samba_2%3a4.7.6+dfsg~ubuntu-0ubuntu2.16_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
</code></pre>
<h1>UPDATE 2 and SOLUTION</h1>
<p>I tried the following commands and it worked for me based on provided solution.</p>
<pre><code>sudo apt-get -m --reinstall install python python-minimal dh-python
sudo apt-get -f install
sudo apt install --reinstall python-minimal
</code></pre>
| 0non-cybersec
| Stackexchange |
Intersecting great circles to find position. <p>Is it possible to find the intersection of two great circles when knowing the following:</p>
<p>A point $a$ on earth,</p>
<p>A point $b$ on earth, and</p>
<p>The bearings of $a$ and $b$ from an observer?</p>
| 0non-cybersec
| Stackexchange |
Dental training mannequin. | 0non-cybersec
| Reddit |
What anime do you regret completing and why?. It's a simple question that I felt could yield some interesting discussion. | 0non-cybersec
| Reddit |
Rails 5 default_url_options oddities. <p>I have a pretty simple rails app that I'm working on upgrading from Rails 4 to Rails 5, but I'm noticing some weirdness with <code>default_url_options</code></p>
<p>In <code>config/environments/test.rb</code> I have:</p>
<pre><code>Rails.application.routes.default_url_options[:host]= ENV["HTTP_HOST"] || "localhost"
Rails.application.routes.default_url_options[:port]= ENV["PORT"] || 3000
</code></pre>
<p>My application has a namespace called <code>api</code>. In my request specs, I'm seeing this:</p>
<pre><code>[1] pry> api_v3_sample_url
=> "http://www.example.com:3000/api/v3/sample"
[2] pry> Rails.application.routes.url_helpers.api_v3_sample_url
=> "http://localhost:3000/api/v3/sample"
</code></pre>
<p>What am I missing that is causing those URLs to be different?</p>
<p><strong>EDIT</strong></p>
<p>Per <a href="https://github.com/rspec/rspec-rails/issues/1275#issuecomment-69807351" rel="noreferrer">this thread</a> I set </p>
<pre><code>config.action_controller.default_url_options = {
host: ENV['HTTP_HOST'] || 'localhost'
}
</code></pre>
<p>in <code>config/environments/test.rb</code> but now I get this:</p>
<pre><code>> Rails.application.routes.url_helpers.api_v3_sample_url
ArgumentError: Missing host to link to! Please provide the :host parameter, set default_url_options[:host], or set :only_path to true
> api_v3_sample_url
=> "http://www.example.com/api/v3/sample"
</code></pre>
<p><strong>EDIT 2</strong></p>
<p>Probably worth noting that these are request specs and not feature specs (not using capybara).</p>
| 0non-cybersec
| Stackexchange |
40lbs down in 2 1/2 months - Before/After pics. Warning: MOOBS. Sorry, I couldn't figure out paragraphs. 24 years old. 5"9. 228 lbs - 188 lbs. First new years resolution that I've actually stuck to. I entered into a weight loss pact with some mutual fatties and I think that's what gave me the extra boost. I'm currently winning! My diet consists of a piece of fruit for breakfast, a small tin of sardines in tomato sauce (drained) on two pieces of toast and a small helping of kimchi for lunch, and stir fry or a smoothie for dinner. I live in China where not much food has nutritional info on the label so I couldn't calorie count for the most part. Exercise consists of me running up and down the stairs of my apartment complex for 30 mins and 20-30 minutes on a rowing machine five times a week. I've had 18 cheat days so far, ranging from getting totally wasted to simply adding another portion of food in a day. On the whole, I don't think it's affected my weight loss as much as I thought it would. My current goal weight is 160 lbs. Before/After pics: http://i.imgur.com/H7Ir1.jpg
| 0non-cybersec
| Reddit |
Mom and Dad, 1980. | 0non-cybersec
| Reddit |
How do I add information to an exception message in Ruby?. <p>How do I add information to an exception message without changing its class in ruby?</p>
<p>The approach I'm currently using is</p>
<pre><code>strings.each_with_index do |string, i|
begin
do_risky_operation(string)
rescue
raise $!.class, "Problem with string number #{i}: #{$!}"
end
end
</code></pre>
<p>Ideally, I would also like to preserve the backtrace.</p>
<p>Is there a better way?</p>
| 0non-cybersec
| Stackexchange |
Network applications with high % of system time. <p>I have an Windows 2003 (don't laugh) server with 10GbE connectivity processing data coming to it over the network and sending it back out.</p>
<p>Here's the graph of overall system performance and the particular application being examined:</p>
<p><img src="https://i.stack.imgur.com/BbxFt.png" alt="sexy graph 1">
<img src="https://i.stack.imgur.com/tCbKl.png" alt="sexy graph 2"></p>
<p>The second graph is zoomed into the momentary spike and is relevant to the data in my answer.</p>
<p>How should I interpret the high percentage of kernel time on these processes? Overall, they're doing a lot of network I/O (66K PPS in, 96K PPS out) and I'm wondering if the correct interpretation is that the time spent in privileged space is copying the data back and forth between buffers and application memory. Would that make sense?</p>
| 0non-cybersec
| Stackexchange |
Zach Anner answers reddit questions.. | 0non-cybersec
| Reddit |
Helpful plugin for IDA using MIPS binaries, replaces offsets in function calls with xref names. . | 1cybersec
| Reddit |
How can I post to Twitter an URL to a Tumblr-hosted page without Twitter changing the link to point to the Tumblr app on mobile?. <p>I'm trying to publish a Twitter post that includes the URL of a page hosted on the Tumblr platform. Something like this:</p>
<blockquote>
<p>[My custom text here] <a href="https://plaintextoffenders.com/faq/devs" rel="nofollow noreferrer">https://plaintextoffenders.com/faq/devs</a></p>
</blockquote>
<p>(That plaintextoffenders.com page is hosted on Tumblr.)</p>
<p>Once the Twitter post is published, the link to the Tumblr-hosted page works fine when viewed on my computer's Twitter.com web client. </p>
<p>However, when viewed on the Twitter native app for iPhone, the link instead points at the Tumblr app on the App Store. There's no way for a viewer to bypass that, and just view the linked article.</p>
<p>The article does display fine -- without the App Store redirect -- when the article URL is entered directly into Safari on iPhone. So it's evidently Twitter that is changing the link to Tumblr on the app store, and not a redirect on the Tumblr-hosted site itself.</p>
<p>I tried setting up a redirect to my target article using the tinyurl.com URL shortener, but Twitter still changed the link to point at the Tumblr app on the App Store when viewed on the Twitter iPhone client.</p>
<p>Is there a way to compose my Twitter post such that the link works as expected when followed by a client reading the post on Twitter's native iPhone app?</p>
| 0non-cybersec
| Stackexchange |
Allow users to access only a few websites. <p>I want to block my network users to access most of the external websites. Some users may need access to Facebook (like the users from marketing department), while others may need access to banks websites.</p>
<p>What I want to do is to control the access of these users, allowing them to access only the necessary websites.</p>
<p>To do that, I've been thinking about using a Captive Portal to control authentication (so I'll know 'who' is requesting the website). Also, I'll need a proxy to deny access to the blocked websites.</p>
<p>Doing some research I've not found any single software capable of doing both tasks. I tried PacketFence and Squid. The first handled very well the authentication steps. The other, the URL blocking. But could not make both talk nor do the desired job.</p>
<p>Anyone have ever implemented something like this? Is it possible with any of these softwares?</p>
<p><strong>EDIT:</strong></p>
<p>It is very important that the users are authenticated against an Active Directory server.</p>
| 0non-cybersec
| Stackexchange |
Guide to Drinking Alcohol. | 0non-cybersec
| Reddit |
Joomla 3.2.2 - frontend connection problems, backend working perfect. <p>First of all, I'd like to say hello to everybody in this great community :) Countless times before, I was able to find my way out of a problem thanks to you. But now, I can't find any similar topic to my problem.</p>
<p>It started 5 days ago with frontend errors - Internal 500 or connection was reset. Every time after very long waiting. I thought that it's some problem on the provider side, they checked, said everything was fine, they checked two times. The second time they suggested I clean my browser cache. I did, and my site worked great. For a minute. It allowed one or two clicks in articles and... again, same old story. Then, it became even worse - even after cleaning the browser cache, I have problems to launch the frontend. Backend works perfect, I can edit, save and so on every article, module, and so on. When frontend finally manages to open, it reflects changes. So... it's very strange to me.</p>
<p>That was the situation till two days ago, when things got a bit better. </p>
<p>Now - no 500 internals or "connection was reset" during loading. Presently it looks like that : there is longer than average "awaiting for connection" time, and when finally download starts, it's very fast. Sometimes this waiting is 5, sometimes 10 seconds, which is way too long. I tried disabling modules, compressing js & css, but nothing helped to come to real awaiting time which is normally below 1s on other sites.</p>
<p>Hosting keeps claiming that it's not their fault, and suggested it's some module or extension causing problems, but as I said, I tried disabling every module one by one, with no result.</p>
<p>Did anybody encounter something like that?</p>
<p>By the way, other websites on same hosting and same Joomla version work just fine. Template? For the first two days it was ok, so it's not template either. I tried to change options of SEF links, url rewriting - nothing. It's driving me mad.</p>
<p>Additionaly, I created an account on Pingdom, and they show that my page loads worse than 70-80% of other websites. I also set checking each minute and letting me know via sms (it's free up to 20 smses) if the site is offline more than 5 minutes in a row - and I get one-two such smses daily.</p>
<p>Thank you in advance for your imput, maybe somebody has to look at this with a clear head.</p>
<p>Joomla version is 3.2.2</p>
<p>Cheers and thank you in advance,</p>
<p>Artur</p>
| 0non-cybersec
| Stackexchange |
Why though. | 0non-cybersec
| Reddit |
How to install Android SDK?. <p>How to install <strong>Android SDK</strong> from <code>android-sdk_r10-linux_x86.tgz</code> in Ubuntu 10.10 for using in Eclipse IDE?</p>
| 0non-cybersec
| Stackexchange |
Xpost. | 0non-cybersec
| Reddit |
Appropriate statistical test to test if probabilities are accurate. <p>I have some data that looks like this:</p>
<pre><code>Prob Outcome
0.09 0
0.10 0
0.10 0
0.11 1
0.84 1
0.99 1
0.86 1
0.78 1
0.86 1
0.00 0
etc.
</code></pre>
<p>i.e. a bunch of probabilities, each with a single test. What statistical test should I use to test the hypothesis that the probabilities are correct?</p>
<p><strong>Further details</strong>: The data points are <em>combat probabilities</em> from the game Civilization IV, and I have over 3000 of them in my set. Thus, each probability is generated using some unknown formula from different input data, depending on the relative strengths of the units in that battle.</p>
<p>It has been suggested that the outcomes do not accurately reflect the probabilities given: for instance, the computer player wins more often than it should, based on the probabilities displayed, which is what we want to test.</p>
<p>So there is a link insofar as we assume the probabilities displayed are generated using the same formula for each line. It's this unknown formula that we want to test for consistency with the actual results.</p>
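<p>To make this concrete, here is a minimal sketch of one reasonable choice — a Hosmer–Lemeshow-style binned chi-square comparison of observed versus expected wins. It assumes the data sit in two arrays; the names <code>probs</code> and <code>outcomes</code> and the toy values are placeholders, not the real 3000-row set:</p>
<pre><code>import numpy as np
from scipy.stats import chi2

# Placeholder data; replace with the full set of displayed probabilities and 0/1 outcomes.
probs = np.array([0.09, 0.10, 0.10, 0.11, 0.84, 0.99, 0.86, 0.78, 0.86, 0.00])
outcomes = np.array([0, 0, 0, 1, 1, 1, 1, 1, 1, 0])

n_bins = 5  # with ~3000 rows, 10 quantile bins is a common choice
edges = np.quantile(probs, np.linspace(0, 1, n_bins + 1))
bins = np.clip(np.searchsorted(edges, probs, side="right") - 1, 0, n_bins - 1)

stat = 0.0
for b in range(n_bins):
    mask = bins == b
    if not mask.any():
        continue
    observed = outcomes[mask].sum()                  # wins actually seen in this bin
    expected = probs[mask].sum()                     # wins the displayed odds predict
    variance = (probs[mask] * (1 - probs[mask])).sum()
    if variance > 0:
        stat += (observed - expected) ** 2 / variance

p_value = chi2.sf(stat, n_bins - 2)  # small p-value -> displayed odds look mis-calibrated
print(stat, p_value)
</code></pre>
<p>A simpler alternative in the same spirit is a single z-test of total wins against the sum of the probabilities, but binning also shows <em>where</em> (e.g. at high displayed odds) any discrepancy lives.</p>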
| 0non-cybersec
| Stackexchange |
How to explicitly perform the circle eversion in the $3$-dimensional space?. <p>The following claim is a well-known consequence of the <a href="http://mathworld.wolfram.com/Whitney-GrausteinTheorem.html" rel="nofollow noreferrer">Whitney-Graustein theorem</a>:</p>
<blockquote>
<p><strong>Claim.</strong> It does not exist $H\colon\mathbb{S}^1\times[0,1]\overset{C^1}{\rightarrow}\mathbb{R}^2$ such that for all $t\in [0,1]$, $H(\cdot,t)\colon\mathbb{S}^1\rightarrow\mathbb{R}^2$ is an immersion, $H(\cdot,0)=(\cos(2\pi\cdot),\sin(2\pi\cdot))$ and $H(\cdot,1)=(\cos(2\pi \cdot),-\sin(2\pi\cdot))$.</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/XfPxS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XfPxS.png" alt="enter image description here"></a></p>
<p>In other words, it is impossible to perform a circle eversion in the plane, namely it is impossible to continuously and regularly change the orientation of the circle while sticking to the plane. </p>
<p>However, I want to illustrate that it is possible to realize the circle eversion in the $3$-dimensional space. </p>
<p>The idea is to thicken the circle into a cylinder, perform a $\pi$-twist on the cylinder in order to put its inside out and finally to retract the everted cylinder onto its equatorial circle.</p>
<p>My main concern is to graphically represent the above process using a mathematical software, e.g. SageMath. I tried in vain to write down explicit formulas for it and here I am stuck. Please note that the following homotopy did not seem to do any good:</p>
<p>$$\forall x\in\mathbb{S}^1\times [-1,1],\forall t\in [0,1],H(x,t)=\frac{x}{\|x\|^{2t}}.$$</p>
<p>Any enlightenment will be greatly appreciated!</p>
| 0non-cybersec
| Stackexchange |
Rainbow Six Siege now features a starter edition for $14.99. | 0non-cybersec
| Reddit |
Liverpool Star Salah Would Be Happy To See De Rossi Joining Anfield. | 0non-cybersec
| Reddit |
Hey MFA, I have a very specific question in regards to jeans and the versatility of certain washes as opposed to others. Care to help me out? . Alright so the Brand that I have been buying form in the last few years is Naked and Famous and I'm looking to get 2 new pairs before I head off for school in the fall.
I want to get a dark of jeans, which is better/more versatile
1. http://www.nakedandfamousdenim.com/collection/men/weirdguy/solid-black-selvedge.html
2. http://www.nakedandfamousdenim.com/collection/men/weirdguy/black-selvedge.html
And now I'm looking to get a pair of Indigo jeans which between these two are best?
1. http://www.nakedandfamousdenim.com/collection/men/weirdguy/indigo-selvedge.html
2. http://www.nakedandfamousdenim.com/collection/men/weirdguy/natural-indigo-organic-selvedge.html
I know the second pair of Indigo's is darker slightly and people seem to prefer that around here. Although, I felt as that the light brown stitching on the back of the second pair would stand out, this would in turn cause a lack in versatility with certain colors.
Question : I'd prefer non-raw if possible does anyone know if the first solid black pair I linked are. I could not find that info.
And finally these are boots I own. I will be attempting choose my denim according to the colors of each boot and hopefully some great input from you guys!
Pair #1 : http://www.redwingheritage.com/boots#&f=&m=/detail/8111-heritage-us/8111-red-wing-lifestyle-mens-iron-ranger-boot-amber
Pair #2 http://www.redwingheritage.com/boots#&f=&m=/detail/9014-heritage-us/9014-red-wing-lifestyle-mens-beckman-boot-black.
Hopefully with this info I can make my decision, THANK YOU MFA! | 0non-cybersec
| Reddit |
IAmA lifetime vegetarian. I have never tasted meat. AMA. No, I really have never eaten meat, and I still haven't/won't (the thought of it disgusts me). I'm a 19 yo male, have had 3 serious relationships with meat eaters (the third one I'm still in), I'm not an animal rights activist, I'm not religious, and I grew up in a predominantly agricultural community (Bakersfield, CA). | 0non-cybersec
| Reddit |
Postfix: User unknown in virtual alias table. <p>For some reason when I send an email from my self-hosted postfix server, it works, but can't receive due to this:</p>
<pre><code>Nov 3 18:30:06 pi postfix/qmgr[31993]: CB178142FAB: from=<[email protected]>, size=738, nrcpt=1 (queue active)
Nov 3 18:30:06 pi postfix/error[1173]: CB178142FAB: to=<[email protected]>, orig_to=<megver83>, relay=none, delay=4.7, delays=4.7/0/0/0.01, dsn=5.1.1, status=bounced (User unknown in virtual alias table)
Nov 3 18:30:06 pi postfix/error[1232]: D3CEC142FAD: to=<[email protected]>, relay=none, delay=0.03, delays=0.01/0.01/0/0.01, dsn=5.1.1, status=bounced (User unknown in virtual alias table)
Nov 3 18:30:07 pi postfix/qmgr[31993]: 0E1AC142FAB: from=<[email protected]>, size=734, nrcpt=1 (queue active)
Nov 3 18:30:07 pi postfix/error[1173]: 0E1AC142FAB: to=<[email protected]>, orig_to=<megver83>, relay=none, delay=4.9, delays=4.8/0/0/0.01, dsn=5.1.1, status=bounced (User unknown in virtual alias table)
Nov 3 18:30:07 pi postfix/error[1232]: 1685A142FAD: to=<[email protected]>, relay=none, delay=0.03, delays=0.02/0/0/0.01, dsn=5.1.1, status=bounced (User unknown in virtual alias table)
Nov 3 18:34:37 pi postfix/error[1292]: BC405142FAB: to=<[email protected]>, relay=none, delay=0.28, delays=0.27/0/0/0.01, dsn=5.1.1, status=bounced (User unknown in virtual alias table)
Nov 3 19:11:39 pi postfix/qmgr[31993]: EDC14142FAB: from=<[email protected]>, size=1238, nrcpt=1 (queue active)
Nov 3 19:11:43 pi postfix/smtp[2064]: 3D67B142FAD: to=<[email protected]>, relay=spool.mail.gandi.net[217.70.184.6]:25, delay=3.8, delays=0.02/0.1/3.5/0.27, dsn=5.7.1, status=bounced (host spool.mail.gandi.net[217.70.184.6] said: 554 5.7.1 Service unavailable; Client host [190.100.12.50] blocked using pbl.spamhaus.org; https://www.spamhaus.org/query/ip/190.100.12.50 (in reply to RCPT TO command))
Nov 3 19:13:02 pi postfix/qmgr[31993]: 899D4142FAB: from=<[email protected]>, size=1256, nrcpt=1 (queue active)
Nov 3 19:13:02 pi postfix/error[1958]: 899D4142FAB: to=<[email protected]>, relay=none, delay=0.26, delays=0.25/0/0/0.01, dsn=5.1.1, status=bounced (User unknown in virtual alias table)
Nov 3 19:13:05 pi postfix/smtp[2064]: CA3A8142FAD: to=<[email protected]>, relay=spool.mail.gandi.net[217.70.184.6]:25, delay=2.3, delays=0.02/0/2.1/0.25, dsn=5.7.1, status=bounced (host spool.mail.gandi.net[217.70.184.6] said: 554 5.7.1 Service unavailable; Client host [190.100.12.50] blocked using pbl.spamhaus.org; https://www.spamhaus.org/query/ip/190.100.12.50 (in reply to RCPT TO command))
</code></pre>
<p>I'm trying to send a mail from [email protected] to [email protected], but doesn't work. However, as I said, it works the other way around. This is my <code>postconf -n</code></p>
<pre><code>alias_database = $alias_maps
alias_maps = hash:/etc/postfix/aliases
append_dot_mydomain = no
biff = no
broken_sasl_auth_clients = yes
command_directory = /usr/bin
compatibility_level = 2
daemon_directory = /usr/lib/postfix/bin
data_directory = /var/lib/postfix
debug_peer_level = 2
debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5
home_mailbox = Maildir/
html_directory = no
inet_interfaces = all
inet_protocols = ipv4
mail_owner = postfix
mailbox_size_limit = 134217728
mailq_path = /usr/bin/mailq
manpage_directory = /usr/share/man
message_size_limit = 134217728
meta_directory = /etc/postfix
mydomain = megver83.ga
myhostname = pi.megver83.ga
myorigin = $mydomain
newaliases_path = /usr/bin/newaliases
queue_directory = /var/spool/postfix
readme_directory = /usr/share/doc/postfix
relay_domains = *
relayhost =
sample_directory = /etc/postfix
sendmail_path = /usr/bin/sendmail
setgid_group = postdrop
shlib_directory = /usr/lib/postfix
smtp_tls_note_starttls_offer = yes
smtp_tls_security_level = may
smtpd_helo_required = yes
smtpd_sasl_auth_enable = yes
smtpd_sasl_local_domain =
smtpd_sasl_path = private/auth
smtpd_sasl_security_options = noanonymous
smtpd_sasl_type = dovecot
smtpd_tls_cert_file = /etc/letsencrypt/live/megver83.ga/fullchain.pem
smtpd_tls_key_file = /etc/letsencrypt/live/megver83.ga/privkey.pem
smtpd_tls_loglevel = 1
smtpd_tls_received_header = yes
smtpd_tls_security_level = may
unknown_local_recipient_reject_code = 550
virtual_alias_domains = megver83.ga, eumela.ga, heckyel.ga
virtual_alias_maps = hash:/etc/postfix/virtual
</code></pre>
<p>/etc/postfix/virtual:</p>
<pre><code>megver83.ga megver83.ga
[email protected] megver83
</code></pre>
| 0non-cybersec
| Stackexchange |
[UK] Looking for certification advice.. Hi guys,
I trust you all had a great holiday? :)
I'm sad to say that I wasn't accepted to the SANS CyberAcademy as I mentioned in a previous thread I created; no clue why, but should be receiving more information from SANS by the end of Jan.
Due to not having been accepted, and also having received a great salary increase, I've decided on a New Years Resolution: to get either a OSCP cert and/or a few Cisco certifications.
That brings me to my question(s); if I want to get into penetration testing it would seem as the OSCP cert (PWK) is the one held in highest esteem, but what about the CREST (I live in the UK) certifications? I see a lot of job listings referencing them, but are they worth the paper they are written on compared to an OSCP cert?
Also, I'd love to have any more hands-on thoughts / experiences with the difference between Cisco and the equivalent CompTIA certs?
At a glance they seem very similar (albeit CompTIA being more overall tech-y), but CompTIA being slightly cheaper? | 1cybersec
| Reddit |
How to use a CFG to restrict a subset of a*b*c*d* so that there are at most as many a's and b's as d's?. <blockquote>
<p>Give Context-free Grammar for the language $\{a^i b^j c^k d^h \mid i,j,h \ge 0, k>0, i+j \le h\}$</p>
</blockquote>
<p>This is a training exercise, for which we don't get any answers, in a course I'm taking. I have found similar examples, but nothing that touches on the <code>i+j≤h</code> part of this. My biggest trouble is that it is ordered, so I have no idea how to add <strong><em>d</em></strong>'s to the end when I add <strong><em>a</em></strong>'s or <strong><em>b</em></strong>'s to the front. I haven't gotten very far because of this, but my thinking looks like this at the moment:</p>
<pre><code>S→ABcCD
A→aA | ϵ
B→bB | ϵ
C→cC | ϵ
D→dD | ϵ
</code></pre>
<p>I can't put things like <code>A→aAd</code> or <code>A→aBCD</code> because that would result in <strong><em>c</em></strong>'s and <strong><em>d</em></strong>'s before <strong><em>b</em></strong>'s in the end word/string. My conclusion is that I am probably on the wrong track, but all examples I find use some sort of partitioning like this.</p>
<p>So could anyone point me in the right direction?</p>
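<p>A hedged sketch of one grammar that seems to fit (worth checking independently): pair every generated $a$ and $b$ with a $d$ on the right, and let the middle part produce the mandatory $c$'s followed by any extra $d$'s:</p>
$$S \to aSd \mid T, \qquad T \to bTd \mid U, \qquad U \to CD, \qquad C \to cC \mid c, \qquad D \to dD \mid \epsilon$$
<p>Each $a$ or $b$ creates a matching $d$, so $i+j \le h$ holds automatically, $C$ forces $k>0$, and the extra $d$'s come from $D$.</p>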
| 0non-cybersec
| Stackexchange |
Oregon fires next to a golf course.. | 0non-cybersec
| Reddit |
Josephson junction with circuitikz. <p>I would like to draw a Josephson junction which looks like this:</p>
<p><a href="https://i.stack.imgur.com/oL8Su.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oL8Su.png" alt="enter image description here"></a></p>
<p>and make circuit with <code>circuitikz</code> in latex.</p>
<p>Is it possible to define it as an element of a circuit and use it like a <code>node</code>?</p>
| 0non-cybersec
| Stackexchange |
I'm learning so much with each drawing I do, here's Jeremy Renner aka Hawkeye. :). | 0non-cybersec
| Reddit |
I am unable to mentally enjoy sex due to my penis size. Help?. Throwaway obviously, but I'm having some issues.
I've been single for a long while and my sex life has mainly consisted of one-night stands or brief hook-up buddy situations for the past two years. I'm confident about my ability to get girls in bed. I consider myself pretty good looking and socially fluid. I play the part of being a confident guy on the outside.
But once things get back to the bedroom, I start having issues. Last time I measured my penis is about 4.8 inches length, ~4.2 inches circumference. I know it's on the small side, and I've read all the advice articles about how size doesn't really matter, etc. etc. so we don't need to rehash that. I've asked all my female friends where they rank size, stamina and technique and they usually put technique and stamina at the top. But I still can't get over the smallness.
Basically, I can't enjoy sex because I don't believe the girl is enjoying it and I'm just thinking the whole time how she's probably really disappointed about the whole thing. I haven't really had a scarring incident, I've never been laughed out of a bedroom. My first girlfriend noted that it was on the small side, but she liked it. But I don't believe her or any women that say size don't matter. I mean I believe there exist means to compensate for it, but I think women would prefer a certain bigger size if they had a choice. Girls who are quiet in bed make me really nervous because I don't think they like it, and girls who are really loud in bed freak me out because I'm convinced they're just faking it.
And once in a while I'll have a female friend say something like, "I hooked up with this guy and he was _huge,_ it was incredible." or something like "Nothing more disappointing than a hot guy with a small penis," and I try my best to ignore it, but it seems like they're just telling me technique is the most important because they sense I have a lot of insecurities about it or that size really is all that matters.
So I haven't enjoyed sex very much. I just find it mentally exhausting because the whole time I'm worried about how long I'm lasting, if she's enjoying it, if she's disappointed, if she's going to blab to her friends about it. I'm just paranoid the whole time and feel like I'm acting out something I should be doing and not really just enjoying the act of it and just being with a woman.
*tl;dr: My smaller-than-average penis size is making me incredibly insecure to the point I can no longer enjoy sex because I spend the whole time fretting about whether or not the girl is enjoying it.*
Help please :( | 0non-cybersec
| Reddit |
My contribution to the flowcharts. I present: The Fieri Mage. | 0non-cybersec
| Reddit |
What do Flags and Reqs mean in uTorrent?. <p>I'm seeding a torrent file in uTorrent, and under <strong>Peers</strong> tab it shows the following statistics:</p>
<p><img src="https://i.stack.imgur.com/PQrlK.png" alt="01">
<img src="https://i.stack.imgur.com/umFOQ.png" alt="02"></p>
<p>What do those <strong>Flags</strong> (some combinations of upper and lower case letters like u, h, i, x, e, p) mean? Secondly, what does <strong>Reqs</strong> (0|5, 0|7, 0|11, etc.) mean? It's not visible for every peer and its value changes every second.</p>
| 0non-cybersec
| Stackexchange |
Why is the Git .git/objects/ folder subdivided in many SHA-prefix folders?. <p>Git internally stores objects (Blobs, trees) in the <code>.git/objects/</code> folder. Each object can be referenced by a SHA1 hash that is computed from the contents of the object.</p>
<p>However, Objects are not stored inside the <code>.git/objects/</code> folder directly. Instead, each object is stored inside a folder that starts with the prefix of its SHA1 hash. So an object with the hash <code>b7e23ec29af22b0b4e41da31e868d57226121c84</code> would be stored at <code>.git/objects/b7/e23ec29af22b0b4e41da31e868d57226121c84</code></p>
<p>Why does Git subdivide its object storage this way?</p>
<p>The resources I could find, such as <a href="http://git-scm.com/book/en/v2/Git-Internals-Git-Objects">the page on Git's internals</a> on git-scm, only explained <em>how</em>, not <em>why</em>.</p>
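<p>For what it's worth, a minimal sketch (mine, not from the linked page) of the <em>how</em>, i.e. the mapping from hash to on-disk path — the first two hex characters become the directory and the remaining 38 the file name:</p>
<pre><code>def loose_object_path(sha1_hex: str) -> str:
    # First two hex digits -> subdirectory, remaining 38 -> file name.
    assert len(sha1_hex) == 40
    return f".git/objects/{sha1_hex[:2]}/{sha1_hex[2:]}"

print(loose_object_path("b7e23ec29af22b0b4e41da31e868d57226121c84"))
# -> .git/objects/b7/e23ec29af22b0b4e41da31e868d57226121c84
</code></pre>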
| 0non-cybersec
| Stackexchange |
Password Gorilla will not launch after my 18.04 update.. <p>Password Gorilla will not launch after 18.04 upgrade. No error messages apparent.</p>
| 0non-cybersec
| Stackexchange |
If you look closely you can see my dad's hummer.. | 0non-cybersec
| Reddit |
easy counting set. <p>Let $n$ be an integer, and consider the first $2n$ numbers. In how many ways can we arrange them so that the sum of any 2 adjacent numbers is an odd number? Attempted solution: for the first position we have $2n$ choices, for the second $n$, for the third $n-1$, for the fourth $n-1$, and so on, so the result is
$2n\cdot n\cdot((n-1)!)^2$?</p>
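<p>A quick sanity check on that reasoning: the sum of two adjacent numbers is odd exactly when their parities alternate, so an arrangement is a choice of which parity starts ($2$ ways), an ordering of the $n$ odd numbers ($n!$ ways) and an ordering of the $n$ even numbers ($n!$ ways), i.e.
$$2\cdot n!\cdot n! \;=\; 2n\cdot n\cdot\big((n-1)!\big)^2,$$
so the expression above does agree with $2(n!)^2$.</p>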
| 0non-cybersec
| Stackexchange |
Alternatives for Thermal paste. <p>I would like to know if anyone has had success using an alternative substance for thermal paste. (I heard wheel bearing grease was good)</p>
<p>I do appreciate the warnings, but I am not worried about the hardware and it will be fun to test.</p>
| 0non-cybersec
| Stackexchange |
How do you stick to your diet when cutting after a long day of working?. Recently, I got hired at the Amazon fulfillment center in Delaware and I love it here! But the only bad thing about it is that after I'm done working for the day, I am so tired and hungry that I gorge myself on whatever food is available. I try so hard not to gorge myself, but this job just leaves me so hungry, especially on days when I go to the gym. Currently I'm at 225 pounds but need to lose more, and eating only 1500 calories a day while working here is pretty hard. I don't know if the answer is right in front of my face or if someone can help me, I just really need some advice on how to stay within my calorie range with these bad cravings.
Edit: Thanks everyone! | 0non-cybersec
| Reddit |
Why does click package validation fail?. <p>I'm trying to publish an app for Ubuntu touch, but I can't get past the validation phase.
I'm using Ubuntu SDK. The current build configuration is for device (armhf). I was able to run the app on the device. From the "Publish" tab, I clicked "Build and validate click package", and I got 11 "Error" nodes, with no further information.
<a href="https://i.stack.imgur.com/iaOdO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iaOdO.png" alt="enter image description here"></a></p>
<p>The same if I select "Validate existing click package" and choose my click file from the build directory.</p>
<p>I did expand the "Log" node, but there's just a huge JSON with nothing suspect inside (not that I understand much of its content).</p>
<p>How could I find out what's wrong?</p>
<hr>
<p>Edit: On a closer look, I found this error in the <a href="http://pastebin.ubuntu.com/12018607/" rel="nofollow noreferrer">log</a>:</p>
<pre><code>"error": {
"security_policy_version_matches_framework (Trolly.apparmor)": {
"manual_review": false,
"text": "Invalid framework 'ubuntu-sdk-15.04-html'"
}
}
</code></pre>
<p>My <code>apparmor</code> file contains:</p>
<pre><code>{
"policy_groups": [
"networking",
"webview"
],
"policy_version": 1.3
}
</code></pre>
| 0non-cybersec
| Stackexchange |
My cat likes to get comfortable. | 0non-cybersec
| Reddit |
How do I map a property with no setter and no backing property fluently with NHibernate?. <p>Let's say I have the following entity:</p>
<pre><code>public class CalculationInfo
{
public virtual Int64 Id { get; set; }
public virtual decimal Amount { get; set; }
public virtual decimal SomeVariable { get; set; }
public virtual decimal SomeOtherVariable { get; set; }
public virtual decimal CalculatedAmount
{
get
{
decimal result;
// do crazy stuff with Amount, SomeVariable and SomeOtherVariable
return result;
}
}
}
</code></pre>
<p>Basically <strong>I want to read and write all of the fields to my database with NHibernate with the exception of <code>CalculatedAmount</code></strong>, which I simply want to write and not read back in.</p>
<p>Every similar issue and corresponding answer has dealt with specifying a backing store for the value, which I won't have in this scenario.</p>
<p>How can I accomplish this using Fluent NHibernate?</p>
<p>Thanks!</p>
<p><strong>UPDATE:</strong> Here's what I've tried, and the error it leads to:</p>
<p>Here's my mapping for the property...</p>
<pre><code>Map(x => x.CalculatedAmount)
.ReadOnly();
</code></pre>
<p>And the exception it yields...</p>
<p><em>Could not find a setter for property 'CalculatedAmount' in class 'xxx.CalculationInfo'</em></p>
| 0non-cybersec
| Stackexchange |
Finding degree of the extension. <p>Is it true that the degree of the extension $\mathbb Q(\sqrt {2},\sqrt {3},\sqrt {5},\dotsc,\sqrt {p_n}) / \mathbb Q$ is $2^n$, where $p_n$ is the $n$th prime number? If so, how can this be proved? My idea is to consider the chain of extensions $\mathbb Q\subset \mathbb Q(\sqrt{2}) \subset \mathbb Q(\sqrt{2},\sqrt{3}) \subset \dotsb \subset \mathbb Q(\sqrt {2},\sqrt {3},\sqrt {5},...,\sqrt {p_n})$ and use transitivity (multiplicativity of degrees). I am having problems finding the degrees of the intermediate extensions. Please help me.</p>
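<p>A hedged sketch of the standard route: write $K_k=\mathbb Q(\sqrt{p_1},\dots,\sqrt{p_k})$, so that by multiplicativity of degrees in the tower
$$[\,K_n:\mathbb Q\,]=\prod_{k=1}^{n}[\,K_k:K_{k-1}\,],$$
and each factor is $1$ or $2$ because $\sqrt{p_k}$ is a root of $x^2-p_k$ over $K_{k-1}$. The whole problem therefore reduces to showing $\sqrt{p_k}\notin K_{k-1}$, i.e. that $\sqrt{p_k}$ is not a $\mathbb Q$-linear combination of square roots of products of the earlier primes; this is usually done by induction on $k$.</p>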
| 0non-cybersec
| Stackexchange |
Name of the following summation: $\sum_{a=b}^{\infty}{\binom{a-1}{b-1}x^{a-b}}=(1-x)^{-b}$. <p>I was proving a formula when I met a summation that I couldn't solve.
After some efforts and investigations I've successfully recognized it in its generalized formula:
$$\sum_{a=b}^{\infty}{\binom{a-1}{b-1}x^{a-b}}=(1-x)^{-b}$$
that I saw online in a list of known series.</p>
<p>I've searched for a long time now, but I can't find information about it, not even its name; can you help me, please?</p>
<p>I would like to prove it.</p>
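<p>For reference, this looks like the negative binomial (generalized binomial) series. Substituting $k=a-b$ and using $\binom{-b}{k}=(-1)^k\binom{k+b-1}{k}=(-1)^k\binom{k+b-1}{b-1}$,
$$\sum_{k=0}^{\infty}\binom{k+b-1}{b-1}x^{k}=\sum_{k=0}^{\infty}\binom{-b}{k}(-x)^{k}=(1-x)^{-b},\qquad |x|<1,$$
by the generalized binomial theorem; induction on $b$ (differentiating the geometric series $b-1$ times) gives an alternative proof.</p>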
| 0non-cybersec
| Stackexchange |
Apple TV coming to India next week, to be priced at Rs 7,900. | 0non-cybersec
| Reddit |
Use find to find a directory and move it to a different path. <p>I have hundreds of thousands of files in hundreds of directories.</p>
<p>An example directory structure is</p>
<pre><code>./main/foo1/bar/*
./main/foo2/bar/*
./main/foo3/bar/*
./main/foo1/ran/*
./main/foo2/ran/*
</code></pre>
<p>For folders that have 'bar' directories, I want to move the contents to the following structure.</p>
<pre><code>./secondary/bar/foo1/*
./secondary/bar/foo2/*
./secondary/bar/foo3/*
</code></pre>
<p>Can this be accomplished using find and mv?</p>
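<p>In case a scripted alternative is acceptable, here is a minimal Python sketch of the same move (the paths mirror the layout above; it assumes it is run from the directory containing <code>main</code>, and should be treated as untested against your exact tree):</p>
<pre><code>import shutil
from pathlib import Path

main = Path("main")
secondary = Path("secondary")

# Move ./main/<foo>/bar/* to ./secondary/bar/<foo>/*
for bar_dir in main.glob("*/bar"):
    if not bar_dir.is_dir():
        continue
    foo = bar_dir.parent.name                  # e.g. foo1
    dest = secondary / "bar" / foo             # e.g. secondary/bar/foo1
    dest.mkdir(parents=True, exist_ok=True)
    for item in bar_dir.iterdir():
        shutil.move(str(item), str(dest / item.name))
</code></pre>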
| 0non-cybersec
| Stackexchange |
Size of the Universe - Animation HD. | 0non-cybersec
| Reddit |
Tattoo by Stefano at Frith Street Tattoo, London, UK. | 0non-cybersec
| Reddit |
One of my favorite behind the scenes photos from "The Dark Knight Rises". Close up on Bale. Cotillard waiting for her moment. Pfister on camera. Nolan watching from above. (x-post from r/batman) . | 0non-cybersec
| Reddit |
Prove that $f\in L^1(A)\Leftrightarrow \sum_{n}^{\infty}m(\{ x\in A : f(x)\geq n \}) < \infty$. <p>I'm stuck on a problem from my Integral Calculus in Several Variables course. The problem goes like this:</p>
<blockquote>
<p>Let <span class="math-container">$A\subset \mathbb{R}$</span> be a measurable set with <span class="math-container">$m(A)<\infty$</span>, and <span class="math-container">$f:A\longrightarrow [0,\infty)$</span> a Lebesgue-measurable function. Prove that:
<span class="math-container">$$f\in L^1(A)\Longleftrightarrow \sum_{n}^{\infty}m(\{ x\in A : f(x)\geq n \}) < \infty.$$</span></p>
</blockquote>
<p>The notation I used is:</p>
<ul>
<li><span class="math-container">$m$</span> as the Lebesgue measure function</li>
<li><span class="math-container">$L^1(A)=\{ f:A\rightarrow \mathbb{\overline{R}} : \int_{A}|f|\,\mathrm{d}m<+\infty \}$</span></li>
</ul>
<p>I've started by writing the set <span class="math-container">$A=f^{-1}([0,\infty))$</span> (which is measurable by assumption) as a countable union of disjoint measurable sets <span class="math-container">$\bigcup^{\infty}_{k=0} f^{-1}(I_k)$</span>, where each <span class="math-container">$I_k$</span> is the real interval <span class="math-container">$[k,k+1)$</span>. I imagine I should arrive at some conclusion such as: the sets where <span class="math-container">$f(x)$</span> is unboundedly large (for <span class="math-container">$x\in A$</span>) have measure <span class="math-container">$0$</span>.</p>
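<p>One worked inequality that may help (a standard layer-cake-style bound; here $\{f\ge n\}$ abbreviates $\{x\in A: f(x)\ge n\}$): since $\lfloor f(x)\rfloor=\sum_{n=1}^{\infty}\chi_{\{f\ge n\}}(x)$ and $\lfloor f(x)\rfloor \le f(x) < \lfloor f(x)\rfloor + 1$, integrating over $A$ (monotone convergence) gives
$$\sum_{n=1}^{\infty} m(\{f\ge n\})\;\le\;\int_A f\,\mathrm{d}m\;\le\; m(A)+\sum_{n=1}^{\infty} m(\{f\ge n\}),$$
and the hypothesis $m(A)<\infty$ is exactly what makes the right-hand bound useful for the $\Leftarrow$ direction.</p>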
| 0non-cybersec
| Stackexchange |
What was the turn of the century (Edwardian) life like for the upper class American?. What was life like for upper class American families during this time (Edwardian, 1901-1914)? Specifically upstate New York. Was it so different from the past Victorian Era and the Roaring 20's that came after? What were everyone's roles and way of life? | 0non-cybersec
| Reddit |
Why kernel modesetting, instead of privilege separation?. <p>Kernel modesetting was kind of painful to get on Linux at first, but now it's pretty awesome to have. I mean, X not need to run as root? High-res hardware accelerated consoles? Cool stuff.</p>
<p>Problem is, a lot of UNIX platforms don't have modesetting kernel drivers of any sort. So hardware that relies on KMS is now mostly limited to Linux.</p>
<p>My question: why actually implement this in the kernel?</p>
<p>If hardware access is needed to set the screen resolution, why not use a separate privileged daemon, or a small setuid binary? That would maintain the advantage of separating out the privileged code, and letting the display server run as limited user; while getting rid of the special driver requirement, and making cross-UNIX support easier. Right? Or am I missing something significant here?</p>
| 0non-cybersec
| Stackexchange |
One Man's Journey To Mexico For Heroin Addiction Treatment Using Ibogaine -- "It's not just [that] it gets you off the heroin, it's like, it hits the reset button". | 0non-cybersec
| Reddit |
understanding of hash code. <p>A hash function is important in implementing a hash table. I know that in Java,
Object has its own hash code, which might be generated by a weak hash function.</p>
<p>Following is one snippet that is a "supplemental hash function":</p>
<pre><code>static int hash(Object x) {
int h = x.hashCode();
h += ~(h << 9);
h ^= (h >>> 14);
h += (h << 4);
h ^= (h >>> 10);
return h;
}
</code></pre>
<p>Can anybody help explain the fundamental idea of such a hash algorithm?
Is it to generate non-duplicate integers? If so, how do these bitwise
operations achieve that?</p>
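<p>To see the effect, here is a small Python re-implementation of the same mixing (my port — masking to 32 bits to imitate Java's int wrap-around; only the shift/XOR structure comes from the snippet above). Hash codes that differ only in their upper bits stop colliding in their low bits, which matters because a hash table typically picks the bucket with <code>h & (length-1)</code>, i.e. from the low bits only:</p>
<pre><code>MASK = 0xFFFFFFFF  # imitate Java's 32-bit int arithmetic

def supplemental_hash(h: int) -> int:
    h &= MASK
    h = (h + (~(h << 9) & MASK)) & MASK   # h += ~(h << 9)
    h ^= h >> 14                          # h ^= h >>> 14 (h is already non-negative)
    h = (h + ((h << 4) & MASK)) & MASK    # h += h << 4
    h ^= h >> 10                          # h ^= h >>> 10
    return h

inputs = [0x10000, 0x20000, 0x30000]      # differ only in high bits
print([x & 0xF for x in inputs])                      # [0, 0, 0]: all in the same bucket
print([supplemental_hash(x) & 0xF for x in inputs])   # mixed: high bits now affect the low bits
</code></pre>
<p>So the aim is not so much to produce non-duplicate integers (collisions remain possible) as to spread the input's entropy across all bit positions before the table reduces the hash to a few low-order bits.</p>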
| 0non-cybersec
| Stackexchange |
Squid proxy between two firewalls, need iptables solution. <p>At the company I work for we need to implement what I think is called a transparent proxy.</p>
<p>How it's now:</p>
<p>A(lower secured area)--Cisco ASA-----Cisco ASA----B(higher secured area)</p>
<p>What we need:</p>
<p>A(lower secured area)--Cisco ASA---(eth0)Proxy(eth1)---Cisco ASA----B(higher secured area)</p>
<p>We've already set up an alpine linux with squid proxy, added two interfaces for both sides towards the firewalls but hit a wall with the iptables configuration.</p>
<p>The proxy just needs to log traffic and pass through everything, without change to packets on src/dst. We don't need any kind of filtering or blocking, all 1-65535 ports can be allowed.</p>
<p>Read about TPROXY, but couldn't find a good example to try.</p>
<p>I know that there are other design options for an implementation like this, but this is how it must be done.</p>
| 0non-cybersec
| Stackexchange |
List service as failing only after a certain duration. <p>My Debian systems have <code>unattended-upgrades</code> installed, which installs security upgrades automatically, once per day. I also have a Nagios check that reports whether upgrades need to be installed.</p>
<p>In this setup, it is normal that this check can report a failing state, but not for longer than 24h. Can I configure Nagios to consider a service as “up” unless it has been failing for more than 24h?</p>
<p>(<code>retry_interval</code> only seems to affect when I get a notification, but I also don't want the front-end to be red during these expected failures.)</p>
| 0non-cybersec
| Stackexchange |
Kindle Fire 7" Teardown (2015 5th Gen). | 0non-cybersec
| Reddit |
Partial upgrade - why remove MariaDB?. <p>My 12.04 system wants to run a partial upgrade, as part of which it proposes to remove certain MariaDB packages (see screenshot below). Attached is my <code>sources.list</code> file - I don't understand why the system should be proposing the removal of the MariaDB packages, given I have explicitly chosen MariaDB as a replacement for MySQL?</p>
<p><img src="https://i.stack.imgur.com/7D0p7.png" alt="enter image description here"></p>
<pre><code>clive@cooler-master:~$ cat /etc/apt/sources.list
# deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release i386 (20120423)]/ precise main restricted
# See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
# newer versions of the distribution.
deb http://gb.archive.ubuntu.com/ubuntu/ precise main restricted
deb-src http://gb.archive.ubuntu.com/ubuntu/ precise main restricted
## Major bug fix updates produced after the final release of the
## distribution.
deb http://gb.archive.ubuntu.com/ubuntu/ precise-updates main restricted
deb-src http://gb.archive.ubuntu.com/ubuntu/ precise-updates main restricted
## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team. Also, please note that software in universe WILL NOT receive any
## review or updates from the Ubuntu security team.
deb http://gb.archive.ubuntu.com/ubuntu/ precise universe
deb-src http://gb.archive.ubuntu.com/ubuntu/ precise universe
deb http://gb.archive.ubuntu.com/ubuntu/ precise-updates universe
deb-src http://gb.archive.ubuntu.com/ubuntu/ precise-updates universe
## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team, and may not be under a free licence. Please satisfy yourself as to
## your rights to use the software. Also, please note that software in
## multiverse WILL NOT receive any review or updates from the Ubuntu
## security team.
deb http://gb.archive.ubuntu.com/ubuntu/ precise multiverse
deb-src http://gb.archive.ubuntu.com/ubuntu/ precise multiverse
deb http://gb.archive.ubuntu.com/ubuntu/ precise-updates multiverse
deb-src http://gb.archive.ubuntu.com/ubuntu/ precise-updates multiverse
## N.B. software from this repository may not have been tested as
## extensively as that contained in the main release, although it includes
## newer versions of some applications which may provide useful features.
## Also, please note that software in backports WILL NOT receive any review
## or updates from the Ubuntu security team.
deb http://gb.archive.ubuntu.com/ubuntu/ precise-backports main restricted universe multiverse
deb-src http://gb.archive.ubuntu.com/ubuntu/ precise-backports main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu precise-security main restricted
deb-src http://security.ubuntu.com/ubuntu precise-security main restricted
deb http://security.ubuntu.com/ubuntu precise-security universe
deb-src http://security.ubuntu.com/ubuntu precise-security universe
deb http://security.ubuntu.com/ubuntu precise-security multiverse
deb-src http://security.ubuntu.com/ubuntu precise-security multiverse
## Uncomment the following two lines to add software from Canonical's
## 'partner' repository.
## This software is not part of Ubuntu, but is offered by Canonical and the
## respective vendors as a service to Ubuntu users.
deb http://archive.canonical.com/ubuntu precise partner
# deb-src http://archive.canonical.com/ubuntu precise partner
## This software is not part of Ubuntu, but is offered by third-party
## developers who want to ship their latest software.
deb http://extras.ubuntu.com/ubuntu precise main
deb-src http://extras.ubuntu.com/ubuntu precise main
# MariaDB 5.5 repository list - created 2012-09-21 09:23 UTC
# http://downloads.mariadb.org/mariadb/repositories/
deb http://ftp.heanet.ie/mirrors/mariadb/repo/5.5/ubuntu precise main
deb-src http://ftp.heanet.ie/mirrors/mariadb/repo/5.5/ubuntu precise main
# deb http://repository.spotify.com stable non-free
deb http://deb.opera.com/opera/ stable non-free
deb http://ppa.launchpad.net/yorba/ppa/ubuntu precise main
deb-src http://ppa.launchpad.net/yorba/ppa/ubuntu precise main
</code></pre>
| 0non-cybersec
| Stackexchange |
Man Survives Bear Mauling in Alaska. | 0non-cybersec
| Reddit |
Lake Brienz Switzerland 🇨🇭. | 0non-cybersec
| Reddit |