text | label | source
---|---|---|
Pgfplots labels jump between beamer slides. <p>Below is an MWE that demonstrates my problem. The problem is how the labels move from slide to slide: it seems as though they are first right-aligned and then become centered on the next slide. How can I have each label centered under its tick mark on every slide, so that the diagram doesn't move between slides?</p>
<pre><code>\documentclass{beamer}
\usepackage{pgfplots}
\begin{document}
\begin{frame}{Minimal Example}
\begin{tikzpicture}
\begin{axis}[
font=\tiny,
% enlarge y limits={value=0.2,upper},
% scaled ticks=false,
xticklabels={,
\only<4>{\phantom{$\mu-3\sigma$}} \only<4->{$\mu-3\sigma$},
\only<3>{\phantom{$\mu-2\sigma$}} \only<3->{$\mu-2\sigma$},
\only<2>{\phantom{$\mu-\sigma$}} \only<2->{$\mu-\sigma$},
$\mu$,
\only<2>{\phantom{$\mu+\sigma$}} \only<2->{$\mu+\sigma$},
\only<3>{\phantom{$\mu+2\sigma$}} \only<3->{$\mu+2\sigma$},
\only<4>{\phantom{$\mu+3\sigma$}} \only<4->{$\mu+3\sigma$},
},
yticklabels={,},
]
\addplot[blue] coordinates {(0,0) (0,1) (6,1) (6,0)};
\addplot[red] coordinates {(1,0) (1,2) (5,2) (5,0)};
\addplot[green] coordinates {(2,0) (2,3) (4,3) (4,0)};
\end{axis}
\end{tikzpicture}
\end{frame}
\end{document}
</code></pre>
<p>I took the construction for revealing labels across slides from this <a href="https://tex.stackexchange.com/a/44166/11162">solution</a>.</p>
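<p><strong>Edit:</strong> for anyone else landing here, a sketch of what I would try (untested inside an axis, but since <code>\only</code> works in tick labels, beamer's <code>\alt</code> should too): let <code>\alt</code> choose between the label and its phantom, so a box of identical width is typeset on every slide and no stray space is introduced by a pair of consecutive <code>\only</code> calls:</p>
<pre><code>xticklabels={,
  \alt<4->{$\mu-3\sigma$}{\phantom{$\mu-3\sigma$}},
  \alt<3->{$\mu-2\sigma$}{\phantom{$\mu-2\sigma$}},
  \alt<2->{$\mu-\sigma$}{\phantom{$\mu-\sigma$}},
  $\mu$,
  \alt<2->{$\mu+\sigma$}{\phantom{$\mu+\sigma$}},
  \alt<3->{$\mu+2\sigma$}{\phantom{$\mu+2\sigma$}},
  \alt<4->{$\mu+3\sigma$}{\phantom{$\mu+3\sigma$}},
},
</code></pre>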
| 0non-cybersec
| Stackexchange |
SQL Server Management Studio (SSMS) is way too slow in its GUI. <p>I know this question has been asked before <a href="https://dba.stackexchange.com/questions/20725/sql-server-management-studio-slow-opening-new-windows">here</a> and <a href="https://superuser.com/questions/7247/why-does-it-take-so-long-for-sql-management-studio-to-connect">here</a>. But none of them could solve my problem. I have this environment:</p>
<ol>
<li>Windows 10, build 1903 (freshly installed)</li>
<li>Microsoft SQL Server 2016 (SP2) (KB4052908) - 13.0.5026.0 (X64) Mar 18 2018 09:11:49 Copyright (c) Microsoft Corporation Enterprise Edition (64-bit) on Windows 10 Enterprise 10.0 (Build 18362: )</li>
<li>SSMS v17.3 (14.0.17199.0)</li>
</ol>
<p>Any activity I want to do in it (opening it, connecting to a database engine, right-clicking on a database, creating a new database, opening a new query window, browsing tables), in short any activity that is not a query, takes 5 to 10 seconds to perform. It's clearly apparent that SSMS is doing something for each activity, and it gets stuck somewhere.</p>
<p>Here are the things I've done so far, without effect:</p>
<ol>
<li>Blocked Microsoft's certificate URL (adding 127.0.0.1 crl.microsoft.com to hosts file)</li>
<li>Downloading certificate and installing it from <a href="http://crl.microsoft.com/pki/crl/products/MicrosoftRootAuthority.crl" rel="nofollow noreferrer">http://crl.microsoft.com/pki/crl/products/MicrosoftRootAuthority.crl</a></li>
<li>Connecting to "local" instead of "."</li>
<li>Resetting user-defined settings in "C:\Users\user\AppData\Roaming\Microsoft\SQL Server Management Studio"</li>
<li>No antivirus is installed (only Windows Defender, the default of Windows)</li>
</ol>
<p>It's a shame that a program from such a reputable company can't work smoothly out of the box, and that troubleshooting it is so difficult.</p>
<p>Could you please help? How can I diagnose what's wrong with SSMS?</p>
<p>Update: This problem exists even with SSMS v18.2 (15.0.18142.0)</p>
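<p><strong>Edit 2:</strong> one more thing on my list to try, hedged because I have not confirmed the value myself: the hosts-file trick above approximates disabling publisher certificate-revocation checking, but there is a per-user registry value behind the old Internet Options checkbox that does the same thing directly. The <code>0x23C00</code> state is the one I have seen cited for "unchecked"; verify it against a machine where the checkbox is already off before relying on it:</p>
<pre><code># PowerShell: disable "Check for publisher's certificate revocation" (current user)
Set-ItemProperty `
  -Path 'HKCU:\Software\Microsoft\Windows\CurrentVersion\WinTrust\Trust Providers\Software Publishing' `
  -Name 'State' -Value 0x23C00
</code></pre>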
| 0non-cybersec
| Stackexchange |
How to start with automated theorem proving?. <p>I'm interested in this question, but I'm not going to list my knowledge/demands but rather gear it to more general purpose; so the first thing concerns the prerequisites, i.e. </p>
<blockquote>
<p>How much theoretical knowledge (mathematical logic, programming and other) should one have prior to engaging with automated theorem proving (ATP)? Are there any fields of mathematical logic that aren't necessary prerequisites but still provide a deeper insight into ATP?</p>
</blockquote>
<p>After the prerequisities are done, one just needs to dive in:</p>
<blockquote>
<p>How does one start with ATP? Are there any books, lecture notes, which explain the crucial concepts? After one is done with the general idea of ATP, how does one proceed to <em>do</em> it?</p>
</blockquote>
<p>However, one might be concerned (at least that's what my main concern is) about the many different theorem-provers; how does one choose, and is there a chance that if one chooses the wrong one, they are going to be stuck with obsolete knowledge (even in terms of pure mathematics). In other words</p>
<blockquote>
<p>How concerned should one be with "aging" of the theorem-provers? Are there any language-agnostic approaches?</p>
</blockquote>
| 0non-cybersec
| Stackexchange |
Can NOT connect to local running virtual machine when dialed in to corporate VPN. <p>I have an Ubuntu virtual machine running on my local PC via NAT. I can use PuTTY to SSH into my virtual machine (I have a special reason to use PuTTY to connect to my locally running VM). </p>
<p>But when I dial in to the corporate VPN, PuTTY doesn't work; after some time it reports a timeout error. The VM is still running, but I can't even ping it.</p>
<p>Why is it not working?</p>
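<p><strong>Update:</strong> a first diagnostic I plan to run, sketched under the assumption of a VirtualBox-style NAT subnet such as 10.0.2.0/24 (substitute your VM's actual address): compare the route to the VM before and after connecting to the VPN. If the VPN adapter takes over that route (or installs a default route that captures everything), the tunnel is swallowing the traffic meant for the local NAT network.</p>
<pre><code># Windows host: which interface routes the VM subnet?
route print 10.0.2.*
# Linux host equivalent:
ip route get 10.0.2.15
</code></pre>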
| 0non-cybersec
| Stackexchange |
Hell House LLC is now free to stream for amazon prime members!. Just in case you guys didn't know - Hell House LLC is one of the scariest movies I've seen in a long time, and until this point wasn't on any streaming services. Definitely watch it if you're an amazon prime member. | 0non-cybersec
| Reddit |
When I install applications with snap, I always have to start them as root. How do I fix that?. <p>Additionally, I tried to install an application on Ubuntu using <code>snap install</code>; it is originally a Windows application that uses Wine in the background.<br>
This application was pretty huge, and I realized that it got installed on the root partition, because it filled up all of that partition's free disk space. How can I change the installation directory, and how do I fix having to execute applications installed by snap as root?</p>
| 0non-cybersec
| Stackexchange |
How to use spot instance with amazon elastic beanstalk?. <p>I have an infrastructure that uses Amazon Elastic Beanstalk to deploy my application.
I need to scale my app by adding some spot instances, which EB does not support.</p>
<p>So I created a second Auto Scaling group from a launch configuration with spot instances.
The Auto Scaling group uses the same load balancer created by Beanstalk.</p>
<p>To bring up instances with the latest version of my app, I copied the user data from the original launch configuration (created by Beanstalk) to the launch configuration with spot instances (created by me).</p>
<p>This works fine, but:</p>
<ol>
<li><p>how do I update the spot instances launched by the second Auto Scaling group when Beanstalk deploys a new version of the app to the instances it manages?</p>
</li>
<li><p>is there another way, just as easy and elegant, to use spot instances and still enjoy the benefits of Beanstalk?</p>
</li>
</ol>
<p><strong>UPDATE</strong></p>
<p>Elastic Beanstalk has supported spot instances since 2019; see:
<a href="https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2019-11-25-spot.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2019-11-25-spot.html</a></p>
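<p>For reference, a minimal sketch of enabling the native support through an <code>.ebextensions</code> config file (the file name is arbitrary and the option names come from the <code>aws:ec2:instances</code> namespace; double-check them against the current docs before use):</p>
<pre><code># .ebextensions/spot.config
option_settings:
  aws:ec2:instances:
    EnableSpot: true
    InstanceTypes: 't3.medium,t3a.medium'
    SpotFleetOnDemandBase: '1'
    SpotFleetOnDemandAboveBasePercentage: '0'
</code></pre>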
| 0non-cybersec
| Stackexchange |
confused about cisco firewall configuration when allowing all other traffic on certain ports (src dst). <p>I am trying to practice some scenarios in a GNS3 lab in my spare time (just doing a very, very basic firewall for now, since this is new to me).</p>
<p>Currently, the scenario I am trying to accomplish is: </p>
<pre><code>1. Allow SSH (tcp destined to port 22) from
10.0.0.0/8
131.11.11.11/32 (fake ip)
into my entire network (10.25.0.0/16).
2 Disallow all other SSH (tcp destined to port 22) to MY network.
3 Allow all other traffic inbound to my network.
and I am implementing this on my border routers
</code></pre>
<p>So on my Cisco switch (my ABR is called R2),
I am using this ACL format that I found online:</p>
<p><code>#SEQUENCENUM (permit/deny) PROTO SRCIPADDRESS SRCWILDCARD [OPERATOR] [PORT] DESTIPADDRESS DESTNETMASK [OPERATOR] [PORT]</code></p>
<p>I have in my 'show access-list'</p>
<p>Extended IP access list 100</p>
<pre><code>100 permit tcp 10.0.0.0 0.255.255.255 host 10.25.0.0 eq 22
200 permit tcp host 131.11.11.11 host 10.25.0.0 eq 22
300 deny tcp any host 10.25.0.0 eq 22
400 permit ip any host 10.25.0.0
999 permit ip any 10.25.0.0 0.0.255.255
</code></pre>
<p>for 400 - would this be the correct syntax to allow all other traffic inbound to my network?</p>
<p>and for 999 - I want to permit all other traffic (that is not TCP to port 22) to my network; is this correct?</p>
<p>thanks a bunch guys</p>
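<p><strong>Edit:</strong> one thing I noticed while re-reading my config (worth labbing up before trusting it): <code>host 10.25.0.0</code> matches only the single address 10.25.0.0, not the 10.25.0.0/16 network, so entries 100-400 as written would never match ordinary hosts on my network. A sketch of the same list using wildcard masks throughout:</p>
<pre><code>100 permit tcp 10.0.0.0 0.255.255.255 10.25.0.0 0.0.255.255 eq 22
200 permit tcp host 131.11.11.11 10.25.0.0 0.0.255.255 eq 22
300 deny   tcp any 10.25.0.0 0.0.255.255 eq 22
400 permit ip  any 10.25.0.0 0.0.255.255
</code></pre>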
| 0non-cybersec
| Stackexchange |
Installing just the JRE for Java 13. <p>I'm trying to run some software that relies on Java. Currently I have:</p>
<pre><code>~ » java --version jpage@LMDP-PJacob
java 9.0.4
Java(TM) SE Runtime Environment (build 9.0.4+11)
Java HotSpot(TM) 64-Bit Server VM (build 9.0.4+11, mixed mode)
</code></pre>
<p>I'm getting a runtime error for the application (Cassandra):</p>
<blockquote>
<p>Improperly specified VM option 'ThreadPriorityPolicy=42'</p>
</blockquote>
<p>...so I figured I may need to upgrade to a newer version of Java. I see there's a Java 13, but I seem to only be able to download the entire <em>JDK</em> from <a href="https://www.oracle.com/technetwork/java/javase/downloads/index.html" rel="nofollow noreferrer">Oracle's download page</a>. Since I do not plan on doing any Java <em>development</em>, I'd rather not clutter up my HD with a bunch of development crap.</p>
<p>Does the world of Java still have the concept of a distinct JRE versus JDK? If so, why can't I find the download for just the JRE? Or am I getting some stuff confused? Java versioning has long been confusing to me.</p>
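<p>For reference, since Java 11 Oracle no longer ships a standalone JRE download; the intended replacement is building a trimmed runtime from the JDK with <code>jlink</code> (some third-party distributions also publish JRE-style builds). A sketch, where the module list is an assumption; <code>jdeps --print-module-deps app.jar</code> can report what your application actually needs:</p>
<pre><code># Build a minimal runtime image containing only the named modules
jlink --add-modules java.base,java.sql,java.management \
      --strip-debug --no-header-files --no-man-pages \
      --output /opt/java13-runtime
/opt/java13-runtime/bin/java --version
</code></pre>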
| 0non-cybersec
| Stackexchange |
How to set log level for Firestore?. <p>How to set log level for Firestore?</p>
<p>According to the documentation <a href="https://firebase.google.com/docs/reference/js/firebase.firestore.Firestore#setLogLevel" rel="noreferrer">here</a>, I should use the setLogLevel method, but I can't see that method on Firestore client objects, like <code>FirestoreClient.getFirestore()</code>.</p>
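<p>For context, in the JavaScript SDK that the linked page documents, <code>setLogLevel</code> is a static function on the <code>firebase.firestore</code> namespace rather than a method on a Firestore instance, which would explain why it does not appear on client objects. A sketch for the web SDK (the server/admin SDKs expose logging differently, so this may not apply to <code>FirestoreClient.getFirestore()</code>):</p>
<pre><code>import firebase from 'firebase/app';
import 'firebase/firestore';

// Called on the namespace, not on an instance
firebase.firestore.setLogLevel('debug'); // 'debug' | 'error' | 'silent'
</code></pre>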
| 0non-cybersec
| Stackexchange |
Support Function and Mean Curvature. <p>I am working with surfaces in Euclidean 3-space. If we let $X = X(u,v)$ denote a parameterization of such a surface, then the mean curvature, $H = H(u,v)$, can be computed in terms of the coefficients for the first and second fundamental forms. </p>
<p>My question is this: Is it possible to express the mean curvature, $H(u,v)$, in terms of the support function for this surface? The support function is defined to be $h = h(u,v) = \langle X, N\rangle$ where $N$ is a unit normal. (This function measures the oriented distance from a tangent plane to the origin.)</p>
<p>For curves in the plane there is a nifty result along these lines. If the curve has non-vanishing curvature its unit normal can be used for a parameterization, and in this situation the curvature satisfies $1/k = \pm (h''+ h)$ where, again, $h$ is the support function for the curve.</p>
<p>I'm hoping there is a similar result for convex surfaces in space, but, sadly, have been unable to find such a relationship. Any help would be greatly appreciated.</p>
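<p><strong>Edit:</strong> for completeness, here is the form I would expect the analogue to take, based on what I recall from the theory of support functions of convex bodies; I have not verified it, so treat it as a sketch. With the surface parameterized by its Gauss map, so that <span class="math-container">$h$</span> is a function on <span class="math-container">$S^2$</span> and <span class="math-container">$\Delta_{S^2}$</span> is the spherical Laplacian, the sum of the principal radii of curvature should satisfy</p>
<p><span class="math-container">$$\frac{1}{\kappa_1}+\frac{1}{\kappa_2}=\Delta_{S^2}h+2h,\qquad\text{equivalently}\qquad \frac{2H}{K}=\Delta_{S^2}h+2h,$$</span></p>
<p>which reduces to the planar identity <span class="math-container">$1/k=h''+h$</span> one dimension down.</p>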
| 0non-cybersec
| Stackexchange |
Burger King is giving away 1-cent Whoppers, but you have to visit McDonald's first. | 0non-cybersec
| Reddit |
Minimizing distance fails?. <p>This may seem like a stupid question. Let's say I want to minimize the distance from $\left(0,0\right)$ to the function $f\left(x\right)=\sqrt{x^2-4}$. </p>
<p>I know how to do this using Calculus, but it always fails: $d=\sqrt{\left(x-0\right)^2+\left(y-0\right)^2}=\sqrt{x^2+\left(\sqrt{x^2-4}\right)^2}=\sqrt{2x^2-4}$. </p>
<p>When I minimize the resulting function, I receive $x=0$, which isn't even in the domain of $f$, and it even yields an imaginary distance of $2i$.</p>
<p>Obviously, the answers are $x=-2,2$, and the points are $\left(-2,0\right)$ and $\left(2,0\right)$ yielding a distance of $2$. </p>
<p>Is the reason why this is failing due to the fact that my point(s) in need is an endpoint of the function?</p>
<p>I've realized that similar functions do the same thing. For instance, if I want to find the minimum distance from $\left(0,0\right)$ to $f\left(x\right)=\sqrt{x-1}$ it fails, but if I choose a point like $\left(4,0\right)$, it works fine. Any rationale for this?</p>
<p>Thanks.</p>
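<p><strong>Edit:</strong> writing out the domain-restricted version of the computation above seems to resolve it, so I will leave this sketch here. The squared distance is <span class="math-container">$d^2(x)=2x^2-4$</span>, but it only makes sense on <span class="math-container">$\operatorname{dom} f=\{|x|\ge 2\}$</span>, where it is increasing in <span class="math-container">$|x|$</span>; so the minimum sits on the boundary:</p>
<p><span class="math-container">$$\min_{|x|\ge 2}\left(2x^2-4\right)=2\cdot 2^2-4=4\quad\Rightarrow\quad d=2,$$</span></p>
<p>while the stationary point <span class="math-container">$x=0$</span> of the unconstrained formula lies outside the domain, which is exactly why setting the derivative to zero fails.</p>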
| 0non-cybersec
| Stackexchange |
Almost couldn't find a parking spot/photo spot 2 hours before sunrise in the Maroon Bells! This photo (straight out of camera with no extra editing) explains why. [OC][3000*2000]. | 0non-cybersec
| Reddit |
Just checked for pokemon go at 12:02. I guess it's not out July 1. 30 more chances left.. | 0non-cybersec
| Reddit |
Nice try, other guy.. | 0non-cybersec
| Reddit |
Compare Two Audios (locally stored pre-recorded voice command and one recorded from the microphone in-app) in iOS. <p>In the app, I have to compare a live recording against a previously stored voice command; if it matches (not only the text but also the identified person's voice), the app then performs the necessary action.</p>
<p><strong>1 - match voice commands from the same person.</strong></p>
<p><strong>2 - match the command's text.</strong></p>
<p>I have tried many approaches, but none work as I expect.</p>
<p><strong><em>First:</em></strong>
I used speech-to-text libraries like <a href="http://www.politepix.com/openears/">OpenEars</a> and <a href="https://developer.nuance.com/public/index.php?task=relNotes">SpeechKit</a>, but these libraries only convert speech to text.</p>
<p><strong>Result: failed to meet my expectation.</strong></p>
<p><strong><em>Second:(Audio Finger printing)</em></strong></p>
<p><strong><a href="https://www.acrcloud.com/">acrcloud Library</a> :</strong> in this library, I record a command and stored that mp3file on acrcloud server and match with live recording(spoken by me) it doesn't match but when I play the same recording(recorded MP3 file of my voice ) which is uploaded to the acrcloud server then it matches.
<strong>Result: Failed As My expectation</strong></p>
<p><strong><a href="https://api.ai/">API.AI</a> :</strong> in this library,it is like speech to text ,I stored some text command on his server and then anyone speaks the same command the result get success.
<strong>Result: Failed As My expectation</strong></p>
<p>Please suggest how to solve this problem for an iOS application.</p>
| 0non-cybersec
| Stackexchange |
Maven and eclipse: a reliable way to add non-Maven or external jars to a project?. <p>Maven is great. It mostly keeps me out of jar dependency hell by specifying versions of dependent packages in the <code>pom</code> configuration, and applies them automatically. It also has great integration with Eclipse via m2e, so that things work seamlessly in an IDE.</p>
<p>This is all great for dependencies that are globally known to Maven. However, sometimes, there are libraries that need to be included in a project that is not available in the Maven repos. In this case, I usually add them to a <code>lib/</code> directory in my project. As long as they are in the classpath then things compile.</p>
<p>However, the problem is getting them to be included automatically when importing a project. I've been tolerating this problem with half-baked fixes and hacks for far too long. Every time someone installs this project, I have to tell them to manually add the jars in <code>lib/</code> to their Eclipse build path so that all the errors go away. Something like the following:</p>
<p><img src="https://i.stack.imgur.com/wyBh8.png" alt="enter image description here"></p>
<p>I'm searching for a way to automate this process in a way that works with both the <code>mvn</code> command line program and Eclipse: more an emphasis on Eclipse, because it's nice to have projects that just compile when you import them.</p>
<p><strong>I don't want to set up a repo server for this, nor do I have any in-house proprietary components that would warrant setting up anything locally. I just have some jar files where the developers don't use Maven; and I want to compile with them...I should just be able to include them in the distribution of my software, right?</strong></p>
<p>I'm really looking for a reasonable way to implement this that will also work in Eclipse with no fuss. <a href="http://charlie.cu.cc/2012/06/how-add-external-libraries-maven/" rel="noreferrer">This is one solution</a> I've found promising, but there definitely doesn't seem to be an authoritative solution to this problem. The only other thing that comes close is the <a href="http://code.google.com/p/addjars-maven-plugin/" rel="noreferrer">maven-addjars-plugin</a>, which works okay, but only on the command line. This plugin is not bad, and has a pretty reasonable configuration: </p>
<pre><code><plugin>
<groupId>com.googlecode.addjars-maven-plugin</groupId>
<artifactId>addjars-maven-plugin</artifactId>
<version>1.0.5</version>
<executions>
<execution>
<goals>
<goal>add-jars</goal>
</goals>
<configuration>
<resources>
<resource>
<directory>${project.basedir}/lib/java-aws-mturk</directory>
</resource>
<resource>
<directory>${project.basedir}/lib/not-in-maven</directory>
</resource>
</resources>
</configuration>
</execution>
</executions>
</plugin>
</code></pre>
<p>However, trying to get it to run in Eclipse involves adding the following mess about lifecycle mapping to your <code>pom.xml</code>, which I have never gotten to work; I don't even think it is configured to actually add anything to the Eclipse build path.</p>
<pre><code><pluginManagement>
<plugins>
<!--This plugin's configuration is used to store Eclipse m2e settings only. It has no influence on the Maven build itself.-->
<plugin>
<groupId>org.eclipse.m2e</groupId>
<artifactId>lifecycle-mapping</artifactId>
<version>1.0.0</version>
<configuration>
<lifecycleMappingMetadata>
<pluginExecutions>
<pluginExecution>
<pluginExecutionFilter>
<groupId>
com.googlecode.addjars-maven-plugin
</groupId>
<artifactId>
addjars-maven-plugin
</artifactId>
<versionRange>
[1.0.5,)
</versionRange>
<goals>
<goal>add-jars</goal>
</goals>
</pluginExecutionFilter>
<action>
<execute />
</action>
</pluginExecution>
</pluginExecutions>
</lifecycleMappingMetadata>
</configuration>
</plugin>
</plugins>
</pluginManagement>
</code></pre>
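<p>A lighter-weight alternative I have been considering (standard Maven, no extra plugin, though it does mean a one-time step per machine): install each jar into the local repository and then depend on it like any other artifact, which m2e resolves with no fuss. The coordinates below are placeholders:</p>
<pre><code>mvn install:install-file \
  -Dfile=lib/java-aws-mturk/aws-mturk.jar \
  -DgroupId=com.example -DartifactId=aws-mturk \
  -Dversion=1.0 -Dpackaging=jar
</code></pre>
<p>After that, a normal dependency entry on <code>com.example:aws-mturk:1.0</code> works in both the <code>mvn</code> command line and Eclipse.</p>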
| 0non-cybersec
| Stackexchange |
How would I implement a "self-destruct" feature into the free trial version of my software?. <p>There is the ongoing argument of a free trial versus a freemium model (that is, a free-for-life version of the software with restricted and/or stripped-down features) for allowing potential customers and users to test run a product. From my research, I conclude that the free trial is the way to go, both for the user experience of the individual using the software and for the vendor, in terms of both sales and maximizing usage. There are many factors in a free trial that can greatly maximize usage, like the length of the trial.</p>
<p>One keyword that recurs in my research on "freemium" is "frustrating". Many individuals chose to uninstall the software rather than keep using a piece of software where some features were unavailable to them. At the same time, these users never had the chance to use the "paid" features. Unbeknownst to them, and hidden by the very vendors selling the software, they cannot know what benefits the Pro features would bring. Without first having used them, a user will not know the feeling of "needing" something. Which brings me to my next point, the free trial model.</p>
<p>A typical opinion from a free trial user is "I cannot imagine using this software without the Pro features." This goes back to the point of a user not knowing they need something until they have first experienced having it. Those that had 14 days to use the "full" version features said they cannot imagine not having or using them. So when the fourteen days were over, they were more likely to dish out money than someone who had never experienced the full features. The length of the free trial is also an important factor in creating a lasting impression on users. In an experiment conducted by Visual Website Optimizer, they noticed that for a 14 day free trial versus a 30 day free trial, while the number of sign-ups and installs was the same, usage for the 14 day trial increased 102%. This, of course, in turn increased their revenue as well.</p>
<p>Another very important point: offering a useful and fully functional free trial of the product is essential. Fully functional free trials are effective in getting media coverage, and this publicity is fairly crucial for new software and/or software vendors.</p>
<p>One other relevant aspect is the importance for users to give feedback. Consider, in the fully functional time-limited free trial, the ability for users to give feedback.</p>
<p>One other feature important for our software is the need for telemetric data, that is, quantitative and comprehensive data on how a user uses our software. Some of usage statistics may fall into a legal grey area, as laws are different depending on the location in the United States, and the world. One way to combat this legal issue is to have an opt-in feature for gathering anonymous usage statistics. An opt-in feature would mean giving the user an option to turn off statistics gathering and at the same time, the user must be very well aware of what the gathering of anonymous usage information does. It is important to make it CLEAR to the user what data will be collected, what "we" will be doing with it, and make it easy to turn off any time, including allowing them to change their mind for turning it on or off. For more detailed statistics, like tracking individual activities of users, it could lead to legal issues. The Eclipse IDE logs detailed usage statistics, but it does it by the full consent of the user. We may have to potentially prepare a consent form with our legal team. </p>
<p>The Eclipse Usage Information Collection collects this information:
1. Plug-ins that are started by the system.
2. Commands accessed via the keyboard shortcuts and actions invoked via menus or toolbars.
3. When the "view" of the editor is given focus.
4. System information like the version of the software being used, the operating system being used.
5. Description of internal errors.</p>
<p>Kill Switch</p>
<p>A kill switch for our software can be managed by logging the initial date, encrypting it with a salt, and disabling the software whenever the date is invalid, that is, whenever the user has tried to change it. Another option is to require internet authentication on install, log that date to a central web database, and check the date every time the application is opened.</p>
<p>On disabling the software, we can delete vital DLLs. The option of having to pay to generate a report cannot be considered.</p>
<hr>
<p>I am interested in implementing a free trial version to my existing software. I plan on having the trial last 14 days. Upon the 14th day, my software would prompt the user to either pay for the paid version, or have the consequence of not being able to use it. The free trial version is entirely unlocked, meaning all paid features are there.</p>
<p>However, my dilemma is about the "best" way to implement an end-of-trial solution. Do I delete vital DLLs? Have a user authentication system upon installation or use? Encrypt the initial time and date of use with a salt, and if the date is invalid (i.e., they tried to change their initial date), disable the software?</p>
<p>I am interested in knowing what are some effective measures of disabling software.</p>
| 0non-cybersec
| Stackexchange |
I've been doing this thing and finally got an email regarding said thing! The thing shall continue!. | 0non-cybersec
| Reddit |
side effects include: possible warming of his cold heart. | 0non-cybersec
| Reddit |
Test for Localhost in Powershell?. <p>I have a powershell script that calls <code>Get-WmiObject</code> with <code>-Credential</code>. However, this errors out if I am running it against the local machine:</p>
<p><code>Get-WmiObject : User credentials cannot be used for local connections</code></p>
<p>What is the proper way to add an if localhost logic to avoid this error? Or is there a better way?</p>
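<p>A sketch of the test I would use, comparing the target against the usual local aliases (comparison is case-insensitive; <code>-in</code> needs PowerShell 3+, and <code>$cred</code> is assumed to be a <code>PSCredential</code> built earlier):</p>
<pre><code>$local = @('.', 'localhost', $env:COMPUTERNAME)
if ($ComputerName -in $local) {
    # Local connection: -Credential is not allowed
    Get-WmiObject Win32_OperatingSystem -ComputerName $ComputerName
} else {
    Get-WmiObject Win32_OperatingSystem -ComputerName $ComputerName -Credential $cred
}
</code></pre>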
| 0non-cybersec
| Stackexchange |
Arrow - A television series about DC's superhero "The Green Arrow".. | 0non-cybersec
| Reddit |
Full-size cutaway demonstration model of an M190 "Honest John" cluster bomb chemical warhead section containing demonstration M134 GB (Sarin) bomblets. Circa 1960. [4615x3631]. | 0non-cybersec
| Reddit |
Alexa Intent Schema: Random input being identified as intents. <p>I have two intents that use the same slot types. However, if the input is a random string, Alexa automatically identifies an intent in its JSON request even though the input is not part of the utterances. For example, if the user input was 'bla bla bla', <code>GetAccountBalance</code> is identified as the intent, with no slot value, even though that phrase is not among the provided utterances.</p>
<p>What is the way to error-check for these cases and what is the best practice to avoid cases like this when developing the intent schema? Is there a way to create an intent that can handle all random inputs?</p>
<p>Example Schema:</p>
<pre><code>{
"intents": [
{
"intent": "GetAccountBalance",
"slots": [
{
"name": "AccountType",
"type": "ACCOUNT_TYPE"
}
]
},
{
"intent": "GetAccountNumber",
"slots": [
{
"name": "AccountType",
"type": "ACCOUNT_TYPE"
}
]
}
]
}
</code></pre>
<p>Utterances:</p>
<pre><code>GetAccountBalance what is my account balance for {AccountType} Account
GetAccountBalance what is my balance for {AccountType} Account
GetAccountBalance what is the balance for my {AccountType} Account
GetAccountBalance what is {AccountType} account balance
GetAccountBalance what is my account balance
GetAccountBalance what is account balance
GetAccountBalance what is the account balance
GetAccountBalance what is account balance
GetAccountNumber what is my account number for {AccountType} Account
GetAccountNumber what is my number for {AccountType} Account
GetAccountNumber what is the number for my {AccountType} Account
GetAccountNumber what is {AccountType} account number
GetAccountNumber what is my account number
GetAccountNumber what is account number
GetAccountNumber what is the account number
GetAccountNumber what is account number
</code></pre>
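<p><strong>Update:</strong> one avenue I am checking: Alexa has since added a built-in <code>AMAZON.FallbackIntent</code> designed to catch out-of-domain utterances like 'bla bla bla' (it requires the updated interaction model, and its coverage is best-effort rather than guaranteed). In the schema format above, the addition would look like this sketch, with the skill handling it by reprompting:</p>
<pre><code>{
  "intent": "AMAZON.FallbackIntent"
}
</code></pre>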
| 0non-cybersec
| Stackexchange |
cursed_pig. | 0non-cybersec
| Reddit |
Why isn't my Bash script returning the correct answer to this Project Euler problem?. <p>I'm trying to use Bash to complete <a href="https://projecteuler.net/problem=13" rel="nofollow noreferrer">Project Euler 13</a>. Below is my code, and I just cannot figure out what's wrong with it.</p>
<pre><code>#!/bin/bash
sum=0
while read -r -d $'\r' line; do
sum=$(echo $sum + $line | bc)
done <<< "$(curl -s http://pastebin.com/raw/uHZ0PZjm)"
echo "${sum:0:10}"
exit
</code></pre>
<p>It used to result in two errors,</p>
<pre><code>(standard_in) 1: syntax error
</code></pre>
<p>and</p>
<pre><code>(standard_in) 1: illegal character: ^M
</code></pre>
<p>After some research, it seemed to be an issue with the line terminators. I then ran dos2unix on it; it no longer gives the second error, but it still gives the first repeatedly. It seems to be some issue with how I'm piping the data into bc, but I've no clue what it is or how to fix it.</p>
<p>The correct answer is 5537376230.
Thank you very much for anything you can help with!</p>
<p>System info is</p>
<blockquote>
<p>GNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu)</p>
</blockquote>
<p>I'm using cmder on Windows 10.</p>
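<p><strong>Edit:</strong> for comparison, a sketch of the same computation as a single pipeline, which sidesteps the read loop entirely by stripping the carriage returns before <code>bc</code> ever sees them:</p>
<pre><code>#!/bin/bash
# Strip CRs, join the numbers with '+', sum with bc, keep the first 10 digits
curl -s http://pastebin.com/raw/uHZ0PZjm \
  | tr -d '\r' \
  | paste -sd+ - \
  | bc \
  | cut -c1-10
</code></pre>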
| 0non-cybersec
| Stackexchange |
Using C++ Boost memory mapped files to create disk-back data structures. <p>I have been looking into using Boost.Interprocess to create a disk-backed data structure. The examples on Boost Documentation (<a href="http://www.boost.org/doc/libs/1_41_0/doc/html/interprocess.html" rel="nofollow noreferrer">http://www.boost.org/doc/libs/1_41_0/doc/html/interprocess.html</a>) are all for using shared memory even though they mention that memory mapped files can also be used. I am wondering whether anyone here has used memory mapped files? Any publicly available code samples to get started (say, a memory mapped file backed map or set)?</p>
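<p>In case it helps others, a minimal sketch of what I have pieced together so far using <code>managed_mapped_file</code> (file name and size are placeholders; I have compiled variants of this pattern but have not stress-tested it):</p>
<pre><code>#include <boost/interprocess/managed_mapped_file.hpp>
#include <boost/interprocess/containers/map.hpp>
#include <boost/interprocess/allocators/allocator.hpp>
#include <functional>
#include <utility>

namespace bip = boost::interprocess;

using Alloc = bip::allocator<std::pair<const int, int>,
                             bip::managed_mapped_file::segment_manager>;
using DiskMap = bip::map<int, int, std::less<int>, Alloc>;

int main() {
    // Creates (or reopens) a 64 KiB file-backed segment on disk
    bip::managed_mapped_file mfile(bip::open_or_create, "data.bin", 65536);
    // Find the named map if the file already has one, otherwise construct it
    DiskMap *m = mfile.find_or_construct<DiskMap>("MyMap")(
        std::less<int>(), mfile.get_segment_manager());
    (*m)[42] = 7; // persisted in data.bin across runs
    return 0;
}
</code></pre>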
| 0non-cybersec
| Stackexchange |
How to create a choropleth of the world using d3?. <p><a href="http://mbostock.github.com/d3/ex/choropleth.html">This tutorial</a> is a great intro to creating choropleths with d3, but it's data is US-centric. Where do I get the corresponding data for a world map?</p>
<p>I'm sure it's in the docs somewhere, but I can't find it. <a href="https://github.com/mbostock/d3/wiki/Geo-Projections">This</a> is the closest I've found, but the one world map on there specifically says it's not recommended for choropleths. Also, </p>
| 0non-cybersec
| Stackexchange |
Rcpp Parallel or openmp for matrixvector product. <p>I am trying to program a naive parallel version of conjugate gradient, so I started with the simple Wikipedia algorithm, and I want to replace the <code>dot-products</code> and <code>MatrixVector</code> products with their appropriate parallel versions. The RcppParallel documentation has code for the <code>dot-product</code> using parallelReduce; I think I'm going to use that version for my code, but I'm also trying to write the <code>MatrixVector</code> multiplication, and I haven't achieved good results compared to base R (not parallel).</p>
<p>Some versions of parallel matrix multiplication: using OpenMP, RcppParallel, a serial version, and a serial version with Armadillo, followed by the benchmark:</p>
<pre><code>// [[Rcpp::depends(RcppParallel)]]
#include <Rcpp.h>
#include <RcppParallel.h>
#include <numeric>
// #include <cstddef>
// #include <cstdio>
#include <iostream>
using namespace RcppParallel;
using namespace Rcpp;
struct InnerProduct : public Worker
{
// source vectors
const RVector<double> x;
const RVector<double> y;
// product that I have accumulated
double product;
// constructors
InnerProduct(const NumericVector x, const NumericVector y)
: x(x), y(y), product(0) {}
InnerProduct(const InnerProduct& innerProduct, Split)
: x(innerProduct.x), y(innerProduct.y), product(0) {}
// process just the elements of the range I've been asked to
void operator()(std::size_t begin, std::size_t end) {
product += std::inner_product(x.begin() + begin,
x.begin() + end,
y.begin() + begin,
0.0);
}
// join my value with that of another InnerProduct
void join(const InnerProduct& rhs) {
product += rhs.product;
}
};
struct MatrixMultiplication : public Worker
{
// source matrix
const RMatrix<double> A;
//source vector
const RVector<double> x;
// destination matrix
RMatrix<double> out;
// initialize with source and destination
MatrixMultiplication(const NumericMatrix A, const NumericVector x, NumericMatrix out)
: A(A), x(x), out(out) {}
// take the square root of the range of elements requested
void operator()(std::size_t begin, std::size_t end) {
for (std::size_t i = begin; i < end; i++) {
// rows we will operate on
//RMatrix<double>::Row rowi = A.row(i);
RMatrix<double>::Row rowi = A.row(i);
//double res = std::inner_product(rowi.begin(), rowi.end(), x.begin(), 0.0);
//Rcout << "res" << res << std::endl;
out(i,1) = std::inner_product(rowi.begin(), rowi.end(), x.begin(), 0.0);
//Rcout << "res" << out(i,1) << std::endl;
}
}
};
// [[Rcpp::export]]
double parallelInnerProduct(NumericVector x, NumericVector y) {
// declare the InnerProduct instance that takes a pointer to the vector data
InnerProduct innerProduct(x, y);
// call paralleReduce to start the work
parallelReduce(0, x.length(), innerProduct);
// return the computed product
return innerProduct.product;
}
//librar(Rbenchmark)
// [[Rcpp::export]]
NumericVector matrixXvectorRcppParallel(NumericMatrix A, NumericVector x) {
// // declare the InnerProduct instance that takes a pointer to the vector data
// InnerProduct innerProduct(x, y);
int nrows = A.nrow();
NumericVector out(nrows);
for(int i = 0; i< nrows;i++ )
{
out(i) = parallelInnerProduct(A(i,_),x);
}
// return the computed product
return out;
}
// [[Rcpp::export]]
arma::rowvec matrixXvectorParallel(arma::mat A, arma::colvec x){
arma::rowvec y = A.row(0)*0;
int filas = A.n_rows;
int columnas = A.n_cols;
#pragma omp parallel for
for(int j=0;j<columnas;j++)
{
//y(j) = A.row(j)*x(j))
y(j) = dotproduct(A.row(j),x);
}
return y;
}
arma::mat matrixXvector2(arma::mat A, arma::mat x){
//arma::rowvec y = A.row(0)*0;
//y=A*x;
return A*x;
}
arma::rowvec matrixXvectorParallel2(arma::mat A, arma::colvec x){
arma::rowvec y = A.row(0)*0;
int filas = A.n_rows;
int columnas = A.n_cols;
#pragma omp parallel for
for(int j = 0; j < columnas ; j++){
double result = 0;
for(int i = 0; i < filas; i++){
result += x(i)*A(j,i);
}
y(j) = result;
}
return y;
}
</code></pre>
<p><strong>Benchmark</strong></p>
<pre><code> test replications elapsed relative user.self sys.self user.child sys.child
1 M %*% a 20 0.026 1.000 0.140 0.060 0 0
2 matrixXvector2(M, as.matrix(a)) 20 0.040 1.538 0.101 0.217 0 0
4 matrixXvectorParallel2(M, a) 20 0.063 2.423 0.481 0.000 0 0
3 matrixXvectorParallel(M, a) 20 0.146 5.615 0.745 0.398 0 0
5 matrixXvectorRcppParallel(M, a) 20 0.335 12.885 2.305 0.079 0 0
</code></pre>
<p>My last attempt so far was using parallelFor with RcppParallel, but I'm getting memory errors and I have no idea where the problem is:</p>
<pre><code>// [[Rcpp::export]]
NumericVector matrixXvectorRcppParallel2(NumericMatrix A, NumericVector x) {
// // declare the InnerProduct instance that takes a pointer to the vector data
int nrows = A.nrow();
NumericMatrix out(nrows,1); //allocar mempria de vector de salida
//crear worker
MatrixMultiplication matrixMultiplication(A, x, out);
parallelFor(0,A.nrow(),matrixMultiplication);
// return the computed product
return out;
}
</code></pre>
<p>What I notice when I use htop in my terminal to check how the processors are working is that the conventional matrix-vector multiplication in base R already uses all the processors. So does matrix multiplication run in parallel by default? In theory, only one processor should be working if it is the serial version.</p>
<p><a href="https://i.stack.imgur.com/0mfdn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0mfdn.png" alt="Processors usage"></a></p>
<p>If someone knows which is the better path, OpenMP or RcppParallel, or another way that gives me better performance than the apparently serial version of base R, I would appreciate it.</p>
<p>The serial code for conjugate gradient at the moment:</p>
<pre><code>// [[Rcpp::export]]
arma::colvec ConjugateGradient(arma::mat A, arma::colvec xini, arma::colvec b, int num_iteraciones){
//arma::colvec xnew = xini*0 //inicializar en 0's
arma::colvec x= xini; //inicializar en 0's
arma::colvec rkold = b - A*xini;
arma::colvec rknew = b*0;
arma::colvec pk = rkold;
int k=0;
double alpha_k=0;
double betak=0;
double normak = 0.0;
for(k=0; k<num_iteraciones;k++){
Rcout << "iteracion numero " << k << std::endl;
alpha_k = sum(rkold.t() * rkold) / sum(pk.t()*A*pk); //sum de un elemento para realizar casting
(pk.t()*A*pk);
x = x+ alpha_k * pk;
rknew = rkold - alpha_k*A*pk;
normak = sum(rknew.t()*rknew);
if( normak < 0.000001){
break;
}
betak = sum(rknew.t()*rknew) / sum( rkold.t() * rkold );
//actualizar valores para siguiente iteracion
pk = rknew + betak*pk;
rkold = rknew;
}
return x;
}
</code></pre>
<p>I wasn't aware of R's use of BLAS; thanks Hong Ooi and tim18. So here is the new benchmark using options(matprod = "internal") and options(matprod = "blas"):</p>
<pre><code>options(matprod = "internal")
res<-benchmark(M%*%a,matrixXvector2(M,as.matrix(a)),matrixXvectorParallel(M,a),matrixXvectorParallel2(M,a),matrixXvectorRcppParallel(M,a),order="relative",replications = 20)
res
test replications elapsed relative user.self sys.self user.child sys.child
2 matrixXvector2(M, as.matrix(a)) 20 0.043 1.000 0.107 0.228 0 0
4 matrixXvectorParallel2(M, a) 20 0.069 1.605 0.530 0.000 0 0
1 M %*% a 20 0.072 1.674 0.071 0.000 0 0
3 matrixXvectorParallel(M, a) 20 0.140 3.256 0.746 0.346 0 0
5 matrixXvectorRcppParallel(M, a) 20 0.343 7.977 2.272 0.175 0 0
</code></pre>
<p>options(matprod="blas")</p>
<pre><code>options(matprod = "blas")
res<-benchmark(M%*%a,matrixXvector2(M,as.matrix(a)),matrixXvectorParallel(M,a),matrixXvectorParallel2(M,a),matrixXvectorRcppParallel(M,a),order="relative",replications = 20)
res
test replications elapsed relative user.self sys.self user.child sys.child
1 M %*% a 20 0.021 1.000 0.093 0.054 0 0
2 matrixXvector2(M, as.matrix(a)) 20 0.092 4.381 0.177 0.464 0 0
5 matrixXvectorRcppParallel(M, a) 20 0.328 15.619 2.143 0.109 0 0
4 matrixXvectorParallel2(M, a) 20 0.438 20.857 3.036 0.000 0 0
3 matrixXvectorParallel(M, a) 20 0.546 26.000 3.667 0.127 0 0
</code></pre>
| 0non-cybersec
| Stackexchange |
I cannot reach the website www.github.com. <p>Whenever I try to open github.com in my browser (tried with Chrome and Firefox), the site fails to load.</p>
<ul>
<li>I can reach every other web address except github.com</li>
<li>I cannot fetch project source code from the console via "git clone git:\..."</li>
<li><p>I can ping the site
<code>
kursat@kursat:~$ ping github . com
PING github.com (192.30.252.128) 56(84) bytes of data.
64 bytes from github.com (192.30.252.128): icmp_seq=1 ttl=49 time=125 ms
64 bytes from github.com (192.30.252.128): icmp_seq=2 ttl=49 time=131 ms
</code></p></li>
<li><p>when I check, the nslookup output is like this
<code>
kursat@kursat:~$ nslookup github.com
Server: 127.0.1.1
Address: 127.0.1.1#53
Non-authoritative answer:
Name: github.com
Address: 192.30.252.129
</code></p></li>
<li><p>when I try wget
<code>
kursat@kursat:~$ wget github .com
--2015-02-12 15:32:14-- http:// github .com/
Resolving github.com (github.com)... 192.30.252.128
Connecting to github.com (github.com)|192.30.252.128|:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: <a href="https://github" rel="nofollow noreferrer">https://github</a> .com/ [following]
--2015-02-12 15:32:14-- https:// github.com/
Connecting to github.com (github.com)|192.30.252.128|:443... connected.
HTTP request sent, awaiting response...
</code></p></li>
<li><p>when I bring the eth1 interface down and connect to the site via wlan0, I can reach github.com</p></li>
</ul>
<p>So I am really wondering: what stops my eth1 from reaching github.com?</p>
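<p><strong>Update:</strong> one classic cause that matches these symptoms exactly (ping and the TCP handshake succeed, but the HTTPS response never arrives) is an MTU/fragmentation problem on the eth1 path. A quick check, assuming a standard 1500-byte MTU:</p>
<pre><code># 1472 = 1500 - 28 bytes of IP+ICMP headers; lower -s until replies come back
ping -M do -s 1472 github.com
# If only smaller payloads get replies, clamp the interface MTU, e.g.:
sudo ip link set dev eth1 mtu 1400
</code></pre>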
| 0non-cybersec
| Stackexchange |
Javascript frameworks for large development teams. <p>My company is reevaluating what kind of web framework we want to use. We are currently using the Ext 4.0 framework but there are questions being raised that it may not be the right framework to use. I like what Ext has to offer (rich GUIs, data package and class system) are there other frameworks out there that are similar? Are there frameworks out there tailored to medium/large software companies? </p>
<p>Info:
Potentially hundreds of developers converting thick-client screens to the web. Data modeling is important, as well as rich GUI support. Maintainability and uniformity across multiple products are important as well.</p>
<p>Any info is greatly appreciated.</p>
| 0non-cybersec
| Stackexchange |
Problem understanding the proof: $\forall \varepsilon >0, \exists \delta >0: m(E)<\delta \implies \int_E|f|<\varepsilon $. <p>Let <span class="math-container">$f\in L^1(\mathbb R^d)$</span>. Then, for every <span class="math-container">$\varepsilon >0$</span> there is <span class="math-container">$\delta >0$</span> s.t. for every measurable set <span class="math-container">$E\subset \mathbb R^d$</span> with <span class="math-container">$m(E)<\delta $</span>, we have <span class="math-container">$$\int_E|f|<\varepsilon .$$</span></p>
<hr>
<p>The proof goes as follows: let <span class="math-container">$\varepsilon >0$</span> and set <span class="math-container">$F_N=\{x\mid |f(x)|\leq N\}$</span> and <span class="math-container">$f_N(x)=f(x)\boldsymbol 1_{F_N}(x)$</span>. Then <span class="math-container">$(|f_N|)_N$</span> is an increasing sequence of non-negative functions converging to <span class="math-container">$|f|$</span>. By the Monotone Convergence Theorem, there is <span class="math-container">$N$</span> s.t. <span class="math-container">$$\int (|f|-|f_N|)<\frac{\varepsilon }{2}.$$</span></p>
<p>Let <span class="math-container">$\delta >0$</span> s.t. <span class="math-container">$\delta N<\frac{\varepsilon }{2}$</span>. Then, if <span class="math-container">$m(E)<\delta $</span>,
<span class="math-container">$$\int_E|f|\leq \int(|f|-|f_N|)+\int_{E}|f_N|\leq \frac{\varepsilon }{2}+Nm(E)< \varepsilon .$$</span></p>
<hr>
<p>My problem: I have doubts about this proof since, as I see it, <span class="math-container">$\delta $</span> depends on <span class="math-container">$\varepsilon $</span> (of course) and on <span class="math-container">$N$</span> (which is not good, since <span class="math-container">$\delta $</span> should depend on <span class="math-container">$\varepsilon $</span> only). Don't we have a problem? (It's an official solution, so it should be correct.) Could someone explain?</p>
| 0non-cybersec
| Stackexchange |
Create a personal desktop app in Java. Hello. I have recently started following some Java tutorials for beginners, and I seem to really like programming in this language. I decided to try to do something more, so I thought about creating a personal desktop app connected to a database, so that every time I enter text, numbers, or dates/times into the app's fields, they are saved to the database. Is this hard? Also, an in-depth guide would be really appreciated. Thank you
| Reddit |
Luxury Limousine Hire in Southampton. | 0non-cybersec
| Reddit |
TIL when South Australia voted to give women the vote, the bill was amended by the opposition to also give women the right to run for parliament. They thought this was too preposterous to pass, but it did and it was the first place in the world to do so.. | 0non-cybersec
| Reddit |
Deadly Premonition: A cult classic that's more than just "so bad, it's good". | 0non-cybersec
| Reddit |
IF EXIST C:\directory\ goto a else goto b problems windows XP batch files. <p>whenever i run the <code>code</code> below it occurs to me I have made a mistake using the if exist lines, as no matter whether the directory exists or not, it acts as if the line was never there... either that or its not reading the else line.</p>
<hr>
<pre><code>echo off
echo
echo (c) Ryan Leach 2010
echo Stockmaster Backup System for exclusive use of Riverland Paper Supplies
echo
echo Please ensure that all computers are out of stock master to the windows xp screen
echo and that the backup usb with the day of the week labeled on it is inserted
pause
IF EXIST D:\RPS_BACKUP\backups_to_zip\ goto zipexist else goto zipexistcontinue
:zipexist
IF EXIST d:\RPS_BACKUP\backups_old\ rd /s /q D:\RPS_BACKUP\backups_old
echo backup did not complete last time, backup will restart from zip-usb phase.
pause
call zip
goto tidyup
:zipexistcontinue
IF EXIST D:\RPS_BACKUP\backups_old\ goto oldexists else oldexistscontinue
:oldexists
IF EXIST d:\RPS_BACKUP\backup_temp\ rename D:\RPS_BACKUP\backups_temp backups_to_zip
rd /s /q D:\RPS_BACKUP\backups_old
echo backup did not complete last time, backup will restart at the zip to usb phase.
pause
call zip
goto tidyup
:oldexistscontinue
IF EXIST D:\RPS_BACKUP\backups_temp\ goto tempexists else goto tempexistscontinue
:tempexists
IF EXIST D:\RPS_BACKUP\backups_old\ goto backupfailed else goto tempexistscontinue
:backupfailed
@rd /s /q D:\RPS_BACKUP\backups_temp
echo backup did not complete last time, backup will restart from start.
pause
:tempexistscontinue
md D:\RPS_BACKUPS\backups_temp
xcopy \\user1\c\* D:\RPS_BACKUP\backups_temp\user1\c /h /e /z /f /r /i /s /k
IF NOT ERRORLEVEL == 1 GOTO ErrorHandler
xcopy C:\* D:\RPS_BACKUP\backups_temp\user2\c /h /e /f /r /i /s /k
IF NOT ERRORLEVEL == 1 GOTO ErrorHandler
xcopy \\user3\c\* D:\RPS_BACKUP\backups_temp\user3\c /h /e /z /f /r /i /s /k
IF NOT ERRORLEVEL == 1 GOTO ErrorHandler
call sub
call zip
:tidyup
rename D:\RPS_BACKUP\backups_to_zip backups
pause
goto :eof
:ErrorHandler
echo xcopyerrorcode is ERRORLEVEL contact ryan
pause
</code></pre>
| 0non-cybersec
| Stackexchange |
17 year old films himself raping a 1 year old child, won't get prison time. | 0non-cybersec
| Reddit |
Bad Things Happen. | 0non-cybersec
| Reddit |
What's the capital of Greece?. About €2.50 | 0non-cybersec
| Reddit |
Double Integral, spectrum integrated density. <p>Good Afternoon,</p>
<blockquote>
<p>I am trying to understand this equality :</p>
<p><span class="math-container">$$ \mathbb E \left [ \int_{-1/2}^{1/2} \int_{-1/2}^{1/2} d Z^*(f) d
Z(f') \right ] = \int_{-1/2}^{1/2} d S^{(I)} $$</span></p>
</blockquote>
<p>where <span class="math-container">$*$</span> stands for the complex conjugate, and where the <span class="math-container">$dZ^*$</span> and <span class="math-container">$dZ$</span> are orthogonal processes (and <span class="math-container">$L_1$</span> obviously for convergence sake), in other words :</p>
<p><span class="math-container">$$ \mathbb E \left [ dZ^*(f) dZ(f') \right ] = 1_{f = f'} dS^{(I)}(f) $$</span></p>
<p>my concern is that I have the impression the above equality is "kind of" the same as:</p>
<p><span class="math-container">$$ \int_{-1/2}^{1/2} \int_{-1/2}^{1/2} 1_{x=y} dx dy ( = 0) = \int_{-1/2}^{1/2} dx (\neq 0) $$</span></p>
<p>Can someone explain to me why the first equality is true? If any bit of information is missing, please let me know.</p>
| 0non-cybersec
| Stackexchange |
How to find the header file where a c function is defined?. <p>Is there an easy way to find out which header file a C function declaration is in? <code>cd</code>ing into <code>/usr/include</code> and running (<code>grep -E 'system.*\(' *.h -R</code>) works with some trial and error, but isn't there an easier way to do this?</p>
| 0non-cybersec
| Stackexchange |
Angular 5: Passing dynamic class name to "select" attribute of ng-content. <p>I'm following <a href="https://dzone.com/articles/simplifying-content-projection-in-angular" rel="nofollow noreferrer">THIS</a> tutorial article to test how Angular Projection works. In this article I came across <code>select</code> attribute of <code>ng-content</code>, to which we can pass <code>class name</code> or <code>attribute</code> to select and target a particular <code>ng-content</code>.</p>
<p>Eg:</p>
<pre><code>@Component({
selector: 'greet',
template: `
<ng-content select=".headerText"></ng-content>
<ng-content select="btnp"></ng-content>
`
<greet>
<h1 class="headerText">Hello</h1>
</greet>
<greet>
<button btnp>Click Here</button>
</greet>
</code></pre>
<p>The above example works fine. But, now what I want is to dynamically pass a class name to select like:</p>
<pre><code><ng-content select=".headerText{{some_id}}"></ng-content>
</code></pre>
<p>But, when I attempt this, I get error:</p>
<blockquote>
<p>Can't bind to 'select' since it isn't a known property of
'ng-content'.</p>
</blockquote>
<p>How can I achieve this?</p>
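<p><strong>Update:</strong> from what I can tell, <code>select</code> is evaluated statically when the component is compiled, so it cannot take a binding at all (hence the error). A workaround sketch using <code>ng-template</code> plus <code>ngTemplateOutlet</code>, where the parent picks the template at runtime; <code>headerTpl</code> here is a hypothetical <code>@Input() headerTpl: TemplateRef&lt;any&gt;</code> on the greet component:</p>
<pre><code><!-- greet component template: render whatever template the parent hands in -->
<ng-container *ngTemplateOutlet="headerTpl"></ng-container>

<!-- usage: choose the template from the dynamic id at runtime -->
<greet [headerTpl]="some_id === 1 ? headerOne : headerTwo"></greet>
<ng-template #headerOne><h1 class="headerText1">Hello</h1></ng-template>
<ng-template #headerTwo><h1 class="headerText2">Hi there</h1></ng-template>
</code></pre>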
| 0non-cybersec
| Stackexchange |
Another OfferUp Allstar. | 0non-cybersec
| Reddit |
SQLiteConstraintException error showing after start of every activity. <p>I have this error popping up in logcat all the time. It always shows after every change of activity, and sometimes it disappears and a second later it shows again. There is no fatal error in logcat; all I see is this:</p>
<pre><code>2020-05-20 11:53:26.422 2940-8484/? E/SQLiteDatabase: Error inserting flex_time=3324000 job_id=-1 period=6650000 source=16 requires_charging=0 preferred_network_type=1 target_class=com.google.android.gms.measurement.PackageMeasurementTaskService user_id=0 target_package=com.google.android.gms tag=Measurement.PackageMeasurementTaskService.UPLOAD_TASK_TAG task_type=0 required_idleness_state=0 service_kind=0 source_version=201516000 persistence_level=1 preferred_charging_state=1 required_network_type=0 runtime=1589968406417 retry_strategy={"maximum_backoff_seconds":{"3600":0},"initial_backoff_seconds":{"30":0},"retry_policy":{"0":0}} last_runtime=0
android.database.sqlite.SQLiteConstraintException: UNIQUE constraint failed: pending_ops.tag, pending_ops.target_class, pending_ops.target_package, pending_ops.user_id (code 2067 SQLITE_CONSTRAINT_UNIQUE)
at android.database.sqlite.SQLiteConnection.nativeExecuteForLastInsertedRowId(Native Method)
at android.database.sqlite.SQLiteConnection.executeForLastInsertedRowId(SQLiteConnection.java:879)
at android.database.sqlite.SQLiteSession.executeForLastInsertedRowId(SQLiteSession.java:790)
at android.database.sqlite.SQLiteStatement.executeInsert(SQLiteStatement.java:88)
at android.database.sqlite.SQLiteDatabase.insertWithOnConflict(SQLiteDatabase.java:1599)
at android.database.sqlite.SQLiteDatabase.insert(SQLiteDatabase.java:1468)
at aplm.a(:com.google.android.gms@[email protected] (120406-309763488):76)
at aplb.a(:com.google.android.gms@[email protected] (120406-309763488):173)
at aplb.a(:com.google.android.gms@[email protected] (120406-309763488):21)
at aplb.a(:com.google.android.gms@[email protected] (120406-309763488):167)
at aphk.run(:com.google.android.gms@[email protected] (120406-309763488):8)
at sob.b(:com.google.android.gms@[email protected] (120406-309763488):12)
at sob.run(:com.google.android.gms@[email protected] (120406-309763488):7)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at sub.run(:com.google.android.gms@[email protected] (120406-309763488):0)
at java.lang.Thread.run(Thread.java:919)
</code></pre>
<p>But it does not point to anywhere in my code. Is there any solution to it?</p>
<p>Edit: Google libraries:</p>
<pre><code>implementation 'com.google.android.material:material:1.1.0'
implementation 'com.google.firebase:firebase-core:17.1.0'
implementation 'com.google.firebase:firebase-database:19.0.0'
implementation 'com.google.firebase:firebase-analytics:17.0.1'
implementation 'com.google.firebase:firebase-perf:19.0.0'
implementation 'com.google.android.gms:play-services-ads:18.1.1'
</code></pre>
| 0non-cybersec
| Stackexchange |
Researchers identify brain differences linked to insomnia. | 0non-cybersec
| Reddit |
How can I add new autostart programs in Lubuntu?. <p>In Lubuntu, there's no 'Add New Program...' button in Desktop Session Settings. Is there an easy way to add new autostart programs in Lubuntu? </p>
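<p>A sketch of the manual route, which is the standard XDG autostart mechanism that Lubuntu's LXDE session honors (the file name and Exec line are examples):</p>
<pre><code># ~/.config/autostart/myapp.desktop
[Desktop Entry]
Type=Application
Name=My App
Exec=myapp --minimized
</code></pre>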
| 0non-cybersec
| Stackexchange |
Prevention of devices using same OTP secret. <p>I have a requirement of OTP applications on mobile devices not sharing the same secret (even if the mobile devices are owned by the same user). A single secret must be present in a single device.</p>
<p>Open source applications that implement OTP (like Google Authenticator and FreeOTP) do not satisfy my requirement: the secret is not device-unique, due to the fact that I can scan the QR code with more than one device and the backend will never know about it. I think it is not something related to the applications themselves, but to RFC 4226, which does not specify this requirement.</p>
<p>So I thought about a process to mitigate the risk of users using an OTP secret in more than one device (it needs an internet connection; working offline is not a requirement). The steps:</p>
<ol>
<li>App generates a unique secret protection key on first execution</li>
<li>App sends the secret protection key to the server</li>
<li>Server generates a unique secret for the app</li>
<li>Server encrypts the secret using the secret protection key from the app and returns the blob to the app</li>
<li>App decrypts the info using the generated key and starts to generate OTPs</li>
<li>Both the encrypted secret and the secret protection key would be stored in the app</li>
</ol>
<p>I know that this approach is not tamper-proof and the secret could still be recovered from storage, but it would be more difficult.</p>
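<p>To make steps 1 and 5 concrete, here is a minimal, hypothetical sketch of the device side (class and method names are mine, not from any standard or framework): the app generates a device-unique AES key once, sends it to the server over TLS, and later decrypts the blob (IV plus ciphertext) that the server encrypted under that key.</p>
<pre><code>import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class SecretProtection {

    // Step 1: generated once on first run; ideally stored in the
    // Android Keystore so it never leaves the device in plain form.
    public static SecretKey generateProtectionKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256, new SecureRandom());
        return kg.generateKey();
    }

    // Step 5: decrypt the blob returned by the server (AES-GCM with a
    // 128-bit auth tag; the IV travels alongside the ciphertext).
    public static byte[] decryptOtpSecret(SecretKey key, byte[] iv,
            byte[] ciphertext) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return cipher.doFinal(ciphertext);
    }
}
</code></pre>
<p>Note that because the protection key itself travels to the server in step 2, TLS remains the only barrier in transit; an asymmetric variant (the app sends only a public key) would avoid exposing the key at all.</p>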
<p>About all explained here, my questions are:</p>
<ul>
<li>Would it be a good approach to exchange the OTP secret over the web, even if it is protected by TLS?</li>
<li>Does the unique secret protection key add security to the process, or is it a flaw?</li>
<li>Would it be possible to achieve a similar result with an offline sync?</li>
<li>Are there open source frameworks to achieve better protection of the secret key (<em>i.e.</em> not exposing it directly to the user, as a QR code does)?</li>
</ul>
| 0non-cybersec
| Stackexchange |
Saw these gardener eels at the aquarium today :). | 0non-cybersec
| Reddit |
My mediawiki page displayed index of/ after updating to apache 2.4.23. <p>I have this problem with MediaWiki after updating Apache.
At first, I installed it using yum and had Apache 2.4.6 and PHP 5.4 on my CentOS 7.</p>
<p>Then I wanted to update them to the latest versions, which are Apache 2.4.23 and PHP 7.0.
Until now, the only way to update Apache to that version is from source code, so I removed 2.4.6 and installed the new 2.4.23.</p>
<p>Btw, the old DocumentRoot was /var/www/html.
After updating, the new DocumentRoot is /usr/local/apache2/htdocs.
I copied all the MediaWiki files from the old DocumentRoot to the new one. But when I access the server's IP, I just see "Index of /" and a list of the MediaWiki files.
How should I fix this?</p>
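<p>A likely cause (assuming a stock source build) is that the compiled-in httpd.conf only lists index.html in DirectoryIndex and has no PHP handler, so Apache falls back to a directory listing instead of running MediaWiki's index.php. A hedged sketch of the relevant httpd.conf fragments, assuming PHP-FPM listens on 127.0.0.1:9000 (adjust to your setup):</p>
<pre><code># Make Apache serve MediaWiki's entry point first
<IfModule dir_module>
    DirectoryIndex index.php index.html
</IfModule>

# Hand *.php off to PHP-FPM (needs mod_proxy and mod_proxy_fcgi loaded)
<FilesMatch "\.php$">
    SetHandler "proxy:fcgi://127.0.0.1:9000"
</FilesMatch>

# Optionally disable directory listings entirely
<Directory "/usr/local/apache2/htdocs">
    Options -Indexes
</Directory>
</code></pre>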
| 0non-cybersec
| Stackexchange |
Finding the sample space and the event space.. <p>A number of n digits is randomly generated. Find the probability that the number is a numerical palindrome.</p>
<p>How do you denote the sample space of this problem?</p>
<p>My try.</p>
<p><span class="math-container">$\Omega = \{a_1a_2a_3...a_n\} $</span></p>
<p>a) I wasn't sure whether to write it with commas or not. I mean <span class="math-container">$\Omega = \{a_1,a_2,a_3...,a_n\}$</span></p>
<p>The event space, <span class="math-container">$F = P(\Omega)$</span>, where P denotes the power set. And its cardinality <span class="math-container">$$|P(\Omega)|=2^{10^n}$$</span></p>
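<p>For what it's worth, once the sample space is written as digit strings (no commas; each outcome <span class="math-container">$a_1a_2\ldots a_n$</span> is a single string with <span class="math-container">$a_1 \neq 0$</span>), the probability follows by counting: a palindrome is fully determined by its first <span class="math-container">$\lceil n/2 \rceil$</span> digits. A worked sketch:</p>
<p><span class="math-container">$$|\Omega| = 9 \cdot 10^{n-1}, \qquad |A| = 9 \cdot 10^{\lceil n/2 \rceil - 1} \quad\Longrightarrow\quad P(A) = \frac{9 \cdot 10^{\lceil n/2 \rceil - 1}}{9 \cdot 10^{n-1}} = 10^{\lceil n/2 \rceil - n} = 10^{-\lfloor n/2 \rfloor}.$$</span></p>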
| 0non-cybersec
| Stackexchange |
Creating a second EDMX file never runs the Entity Data Model Wizard. <p>I am trying to add a second connection in my project, and I went through the steps: got the connection string, confirmed I could access it, planned the location in the solution.</p>
<p>I did an Add .. New .. Data .. Entity Framework 6.x dbContext Generator.
The hourglass ran for a minute, then it returned to the solution without collecting any information, and no EDMX record was created. It did create all of the files that would normally go under the edmx file, though, including the file that has a pointer name back to the edmx file. </p>
<p>This is in Visual Studio 2015.
I also removed and reinstalled Entity Framework and tried using EF 5.x instead of 6.x in two separate projects with no EF in the main project. No change in behaviour.</p>
<p>So does anyone know what could cause this and how to get around it so that I can get this last step completed?</p>
<p>Thanks,
Michael</p>
| 0non-cybersec
| Stackexchange |
How to use spot instance with amazon elastic beanstalk?. <p>I have an infrastructure that uses Amazon Elastic Beanstalk to deploy my application.
I need to scale my app by adding some spot instances, which EB does not support.</p>
<p>So I created a second autoscaling group from a launch configuration with spot instances.
The autoscaling group uses the same load balancer created by Beanstalk.</p>
<p>To bring up instances with the latest version of my app, I copy the user data from the original launch configuration (created by Beanstalk) to the launch configuration with spot instances (created by me).</p>
<p>This works fine, but:</p>
<ol>
<li><p>how do I update the spot instances brought up by the second autoscaling group when Beanstalk updates the instances managed by it with a new version of the app?</p>
</li>
<li><p>is there another way, as easy and elegant, to use spot instances and still enjoy the benefits of Beanstalk?</p>
</li>
</ol>
<p><strong>UPDATE</strong></p>
<p>Elastic Beanstalk has added support for spot instances as of 2019... see:
<a href="https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2019-11-25-spot.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2019-11-25-spot.html</a></p>
| 0non-cybersec
| Stackexchange |
Add an Image from url into custom InfoWindow google maps v2. <p>I am working on an Android app. The user searches Google Maps for restaurants, and the map displays markers for all of the neighboring restaurants. If he taps a marker, a custom InfoWindow shows up. My problem is that I can't load the image that is returned from Google Places. I am getting the image URL correctly, but I can't show the image in the window.</p>
<p>InfoWindow</p>
<pre><code><?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:orientation="vertical"
android:background="@color/bg_color" >
<ImageView
android:id="@+id/place_icon"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:focusable="false"" />
<TextView
android:id="@+id/place_title"
android:layout_width="wrap_content"
android:layout_height="wrap_content" />
<TextView
android:id="@+id/place_vicinity"
android:layout_width="wrap_content"
android:layout_height="wrap_content" />
<LinearLayout
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:orientation="horizontal"
android:background="@color/bg_color" >
<RatingBar
android:id="@+id/place_rating"
style="?android:attr/ratingBarStyleSmall"
android:numStars="5"
android:rating="0"
android:isIndicator="false"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginLeft="5dip" />
<ImageView
android:id="@+id/navigate_icon"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:focusable="false"
android:src="@drawable/navigate" />
    </LinearLayout>
</LinearLayout>
</code></pre>
<p>In onCreate I have this:</p>
<pre><code>mGoogleMap.setInfoWindowAdapter(new InfoWindowAdapter() {
// Use default InfoWindow frame
@Override
public View getInfoWindow(Marker arg0) {
return null;
}
// Defines the contents of the InfoWindow
@Override
public View getInfoContents(Marker arg0) {
// Getting view from the layout file info_window_layout
View v = getLayoutInflater().inflate(R.layout.info_window_layout, null);
// Getting the snippet from the marker
String snippet = arg0.getSnippet();
// Getting the snippet from the marker
String titlestr = arg0.getTitle();
String cutchar1= "%#";
String cutchar2= "%##";
String ratingstr = snippet.substring(0,snippet.indexOf( cutchar1 ));
String vicinitystr = snippet.substring(snippet.indexOf( cutchar1 )+2, snippet.indexOf( cutchar2 ) );
String iconurl= snippet.substring(snippet.indexOf( cutchar2 )+3);
// Getting reference to the TextView to set latitude
TextView title = (TextView) v.findViewById(R.id.place_title);
TextView vicinity = (TextView) v.findViewById(R.id.place_vicinity);
ImageView image = (ImageView) v.findViewById(R.id.navigate_icon);
// Setting the latitude
title.setText(titlestr);
// declare RatingBar object
RatingBar rating=(RatingBar) v.findViewById(R.id.place_rating);// create RatingBar object
if( !(ratingstr.equals("null")) ){
rating.setRating(Float.parseFloat(ratingstr));
}
vicinity.setText(vicinitystr);
final DownloadImageTask download = new DownloadImageTask((ImageView) v.findViewById(R.id.place_icon) ,arg0);
download.execute(iconurl);
// Returning the view containing InfoWindow contents
return v;
    }
});
</code></pre>
<p>and the DownloadImage code is:</p>
<pre><code>private class DownloadImageTask extends AsyncTask<String, Void, Bitmap> {
ImageView bmImage;
Marker marker;
boolean refresh;
public DownloadImageTask(final ImageView bmImage, final Marker marker) {
this.bmImage = bmImage;
this.marker=marker;
this.refresh=false;
}
public void SetRefresh(boolean refresh ){
this.refresh=true;
}
/* @Override
protected void onPreExecute()
{
super.onPreExecute();
bmImage.setImageBitmap(null);
}*/
@Override
protected Bitmap doInBackground(String... urls) {
String urldisplay = urls[0];
Bitmap mIcon11 = null;
try {
InputStream in = new java.net.URL(urldisplay).openStream();
mIcon11 = BitmapFactory.decodeStream(in);
} catch (Exception e) {
Log.e("Error", e.getMessage());
e.printStackTrace();
}
return mIcon11;
}
@Override
protected void onPostExecute(Bitmap result) {
if(!refresh){
SetRefresh(refresh);
bmImage.setImageBitmap(result);
marker.showInfoWindow();
}
}
}
</code></pre>
<p>Finally, when I execute the code and tap the marker, getInfoContents doesn't stop executing and the icon does not appear.</p>
<p>Why does this happen?</p>
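<p>For what it's worth, this re-entry loop is a known quirk: the InfoWindow is rendered as a static snapshot, so getInfoContents runs before the image has been downloaded, and calling marker.showInfoWindow() in onPostExecute re-inflates the contents, which creates a brand-new DownloadImageTask whose refresh flag is false again. A hedged sketch of the usual workaround (cache the bitmap per marker and only refresh once; iconCache and iconurl are my names, not part of the Maps API):</p>
<pre><code>// Sketch only: serve the icon from a cache on the second rendering pass
// instead of starting a new download task every time.
// (Requires java.util.Map / java.util.HashMap.)
private final Map<Marker, Bitmap> iconCache = new HashMap<>();

@Override
public View getInfoContents(Marker marker) {
    View v = getLayoutInflater().inflate(R.layout.info_window_layout, null);
    ImageView image = (ImageView) v.findViewById(R.id.place_icon);

    Bitmap cached = iconCache.get(marker);
    if (cached != null) {
        image.setImageBitmap(cached);                   // second pass: ready
    } else {
        new DownloadImageTask(marker).execute(iconurl); // first pass only
    }
    return v;
}

// In DownloadImageTask.onPostExecute(Bitmap result):
//     iconCache.put(marker, result);
//     if (marker.isInfoWindowShown()) {
//         marker.showInfoWindow(); // re-render once, now served from cache
//     }
</code></pre>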
| 0non-cybersec
| Stackexchange |
ElGamal like encryption. <p>How can I approach the following exercise:</p>
<p><img src="https://i.stack.imgur.com/eWx3r.png" alt="enter image description here"></p>
<p>Source: An Introduction to Mathematical Cryptography by Hoffstein</p>
<p>This exercise describes an approach similar to the ElGamal cryptosystem with a numerical example, and in order to solve it, one should do some "reverse-engineering" and find a way to deduce a general algorithm from the example given.</p>
<p>I copied the entire text so that you get some extra context for this task.
I don't know what relationship the exponents are in.</p>
<p>The only conclusion I've managed to make is: $m ^{ a \cdot b \cdot a' \cdot b' } = m$ with $m, a$ and $b$ defined as above and $a'= 15619$ and $b'=31883$.</p>
<p>One can easily be trapped into thinking of an obvious solution, namely that $a$ and $a'$ are inverses in $\mathbb{Z}_p$, but they are not, because:
$\gcd(3589,32611) = 1 = 822 \cdot 32611 - 7469 \cdot 3589 \Rightarrow -7469 \equiv 25142 \pmod{32611}$ and $25142$ is not equal to $15619$. </p>
<p>(This also means that I was barking up the wrong tree saying that $aa'$ and $bb'$ are such numbers that $\exists k: m^{k \varphi(p) + 1} \equiv m \pmod p$, i.e. $m^{(aa')(bb')} \equiv m \pmod p \Rightarrow (aa')(bb') = k \varphi(p) + 1$, where $\varphi(n)$ is defined as in Euler's theorem. This is wrong because we should be able to calculate $a'$ without any knowledge about $b$.)</p>
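<p>For the record, the inverse does hold one level down: assuming $p = 32611$ (which is prime) as in the exercise, the exponents are inverses modulo $p-1$, not modulo $p$, because by Fermat's little theorem exponents of $m$ only matter mod $p-1$. A worked check:</p>
<p>$$a \cdot a' = 3589 \cdot 15619 = 56056591 = 1719 \cdot 32610 + 1 \equiv 1 \pmod{p-1},$$</p>
<p>so $m^{aa'} = m^{k(p-1)+1} \equiv m \pmod p$ for any $m$ not divisible by $p$.</p>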
| 0non-cybersec
| Stackexchange |
If eyes could kill. | 0non-cybersec
| Reddit |
When you die, legally your downloads will die with you. You can bequeath CDs, so why can't you pass on iTunes tracks and ebooks when you die?. | 0non-cybersec
| Reddit |
Use Javascript to get the Sentence of a Clicked Word. <p>This is a problem I'm running into and I'm not quite sure how to approach it.</p>
<p>Say I have a paragraph:</p>
<pre><code>"This is a test paragraph. I love cats. Please apply here"
</code></pre>
<p>And I want a user to be able to click any one of the words in a sentence, and then return the entire sentence that contains it.</p>
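<p>One minimal sketch in plain JavaScript (no libraries; note the sentence splitter is naive and will trip over abbreviations like "e.g."): split the paragraph into sentences, wrap each in a span, and read back the clicked span's text. Clicking any word inside a sentence then yields that whole sentence:</p>
<pre><code>// Wrap each sentence of the first <p> in a clickable <span>.
const p = document.querySelector('p');
const sentences = p.textContent.match(/[^.!?]+[.!?]*/g) || [];

p.innerHTML = '';
sentences.forEach(function (sentence) {
  const span = document.createElement('span');
  span.textContent = sentence;
  span.addEventListener('click', function () {
    console.log('Clicked sentence:', sentence.trim());
  });
  p.appendChild(span);
});
</code></pre>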
| 0non-cybersec
| Stackexchange |
Proof that every polygon has at least 2 ears. <p>I was asked this question a few weeks ago and gave an argument that involved finding an ear and clipping it; the professor said it was not quite the correct answer and that I lacked some insights.</p>
<p>How would you prove this? I'm not allowed to use the dual graph of a triangulation, because given that any polygon has at least 2 ears, then we can triangulate the polygon by ear clipping. </p>
<p><strong>So how can I prove that every polygon has at least 2 ears?</strong></p>
| 0non-cybersec
| Stackexchange |
What is your favorite voice line in the game thus far?. Maybe it's too soon, maybe I'm bored, but this was on my mind.
My favorite is Reinhardt's "Well done, my friends!" Whenever I heard it in the previews, it always put a smile on my face.
What about you guys? | 0non-cybersec
| Reddit |
Does anyone know what this octagon thing is?. My gym recently put this thing in, and I am not sure how to use it. Well, I am not sure of a lot of things. I just started working out and the staff are not knowledgeable or want me to pay for some personal training sessions.
http://m.imgur.com/g0oMWYX
Also I was wondering what this contraption is for: I assume you just stand in the middle and lift the weight up.
http://m.imgur.com/BIQyeIH
| 0non-cybersec
| Reddit |
When to write software project documentation on code. <p>I usually write my software projects in java, and I am still a bit confused as to when to document my classes, interfaces and methods.</p>
<p>There are two ways:</p>
<p><strong>1)</strong> Write documentation after declaring or coding a class/interface/method/constructor. This way I am sure documentation is handled immediately.</p>
<p><strong>Disadvantage:</strong> I might modify the arguments of a method/constructor or I might modify the functionality of the class or interface and forget to edit the documentation.</p>
<p><strong>2)</strong> Write documentation after finishing the project (or a major finish/version of the project), this way I am sure to document the full functionality/arguments of methods/constructors as well as documenting all exceptions thrown.</p>
<p><strong>Disadvantage:</strong> It usually becomes another great overwhelming task to go through hundreds of classes and methods at the end of the project, trying to write documentation code.</p>
<p>As you can see, both scenarios have their disadvantages, but I think one is more advantageous than the other; I am puzzled as to which. Also, I am not limiting this to Java alone; it can be applied to any programming language that requires documentation.</p>
| 0non-cybersec
| Stackexchange |
Fine-grained Code Coverage Measurement in
Automated Black-box Android Testing
A Preprint
Aleksandr Pilgun∗
SnT, University of Luxembourg
Luxembourg
Olga Gadyatskaya
SnT, University of Luxembourg
Luxembourg
Stanislav Dashevskyi
SnT, University of Luxembourg
Luxembourg
Yury Zhauniarovich
Qatar Computing Research Institute, HBKU
Qatar
Artsiom Kushniarou†
PandaDoc Inc.
Belarus
Abstract
Today, there are millions of third-party Android applications. Some of these applications
are buggy or even malicious. To identify such applications, novel frameworks for automated
black-box testing and dynamic analysis are being developed by the Android community,
including Google. Code coverage is one of the most common metrics for evaluating effec-
tiveness of these frameworks. Furthermore, code coverage is used as a fitness function for
guiding evolutionary and fuzzy testing techniques. However, there are no reliable tools for
measuring fine-grained code coverage in black-box Android app testing.
We present the Android Code coVerage Tool, ACVTool for short, that instruments Android
apps and measures the code coverage in the black-box setting at the class, method and
instruction granularities. ACVTool has successfully instrumented 96.9% of apps in our
experiments. It introduces a negligible instrumentation time overhead, and its runtime
overhead is acceptable for automated testing tools. We show in a large-scale experiment
with Sapienz, a state-of-the-art testing tool, that the fine-grained instruction-level code coverage
provided by ACVTool helps to uncover a larger amount of faults than coarser-grained code
coverage metrics.
Keywords Android · Automated black-box testing · Code coverage · Third-party applications
1 Introduction
Code coverage measurement is an essential element of the software development and quality assurance cycles
for all programming languages and ecosystems, including Android. It is routinely applied by developers,
testers, and analysts to understand the degree to which the system under test has been evaluated [3], to
generate test cases [59], to compare test suites [22], and to maximize fault detection by prioritizing test
cases [61]. In the context of Android application analysis, code coverage has become a critical metric. Fellow
researchers and practitioners evaluate the effectiveness of tools for automated testing and security analysis
using, among other metrics, code coverage [17, 43, 30, 34, 53]. It is also used as a fitness function to guide
application exploration in testing [43, 49, 35].
Unfortunately, the Android ecosystem introduces a particular challenge for security and reliability analysis:
Android applications (apps for short) submitted to markets (e.g., Google Play) have been already compiled
and packaged, and their source code is often unavailable for inspection, i.e., the analysis has to be performed
in the black-box setting. Measuring the code coverage achieved in this setting is not a trivial endeavor. This
∗Corresponding author
†This work has been done when Artsiom was with SnT, University of Luxembourg, Luxembourg
arXiv:1812.10729v1 [cs.CR] 27 Dec 2018
is why some black-box testing systems, e.g., [46, 16], use only open-source apps for experimental validation,
where the source code coverage could be measured by popular tools developed for Java, such as EMMA [45]
or JaCoCo [31].
In the absence of source code, code coverage is usually measured by instrumenting the bytecode of applica-
tions [36]. Within the Java community, the problem of code coverage measurement at the bytecode level is
well-developed and its solution is considered to be relatively straightforward [51, 36]. However, while An-
droid applications are written in Java, they are compiled into bytecode for the register-based Dalvik Virtual
Machine (DVM), which is quite different from the Java Virtual Machine (JVM). Thus, there are significant
disparities in the bytecode for these two virtual machines.
Since the arrangement of the Dalvik bytecode complicates the instrumentation process [30], there have been
so far only a few attempts to track code coverage for Android applications at the bytecode level [62], and they
all still have limitations. The most significant one is the coarse granularity of the provided code coverage
metric. For example, ELLA [21], InsDal [40] and CovDroid [60] measure code coverage only at the method
level. Another limitation of the existing tools is the low percentage of successfully instrumented apps. For
instance, the tools by Huang et al. [30] and Zhauniarovich et al. [64] support fine-grained code coverage
metrics, but they could successfully instrument only 36% and 65% of applications from their evaluation
samples, respectively. Unfortunately, such instrumentation success rates are prohibitive for these tools to
be widely adopted by the Android community. Furthermore, the existing tools suffer from limited empirical
evaluation, with a typical evaluation dataset being less than 100 apps. Sometimes, research papers do not
even mention the percentage of failed instrumentation attempts (e.g., [40, 12, 60]).
Remarkably, in the absence of reliable fine-grained code coverage reporting tools, some frameworks integrate
their own black-box code coverage measurement libraries, e.g., [43, 48, 12]. However, as code coverage
measurement is not the core contribution of these works, the authors do not provide detailed information
about the rates of successful instrumentation, as well as other details related to the code coverage performance
of these libraries.
In this paper, we present ACVTool – the Android Code coVerage measurement Tool that does not suffer
from the aforementioned limitations. The paper makes the following contributions:
• An approach to instrument Dalvik bytecode in its smali representation by inserting probes to track
code coverage at the levels of classes, methods and instructions. Our approach is fully self-contained
and transparent to the testing environment.
• An implementation of the instrumentation approach in ACVTool, which can be integrated with
any testing or dynamic analysis framework. Our tool presents the coverage measurements and
information about incurred crashes as handy reports that can be either visually inspected by an
analyst, or processed by an automated testing environment.
• Extensive empirical evaluation that shows the high reliability and versatility of our approach.
– While previous works [30, 64] have only reported the number of successfully instrumented apps3,
we also verified whether apps can be successfully executed after instrumentation. We report
that 96.9% have been successfully executed on the Android emulator – it is only 0.9% less than
the initial set of successfully instrumented apps.
– In the context of automated and manual application testing, ACVTool introduces only a neg-
ligible instrumentation time overhead. In our experiments ACVTool required on average
33.3 seconds to instrument an app. The runtime overhead introduced by ACVTool is also not
prohibitive. With the benchmark PassMark application [47], the instrumentation code added by
ACVTool introduces 27% of CPU overhead, while our evaluation of executions of original and
repackaged app versions by Monkey [24] shows that there is no significant runtime overhead
for real apps (mean difference of timing 0.12 sec).
– We have evaluated whether ACVTool reliably measures the bytecode coverage by comparing its
results with those reported by JaCoCo [31], a popular code coverage tool for Java that requires
the source code. Our results show that the ACVTool results can be trusted, as code coverage
statistics reported by ACVTool and JaCoCo are highly correlated.
– By integrating ACVTool with Sapienz [43], an efficient automated testing framework for An-
droid, we demonstrate that our tool can be useful as an integral part of an automated testing
3For ACVTool, it is 97.8% out of 1278 real-world Android apps.
or security analysis environment. We show that fine-grained bytecode coverage metric is bet-
ter in revealing crashes, while activity coverage measured by Sapienz itself shows performance
comparable to not using coverage at all. Furthermore, our experiments indicate that different
levels of coverage granularity can be combined to achieve better results in automated testing.
• We release ACVTool as an open-source tool to support the Android testing and analysis com-
munity. Source code and a demo video of ACVTool are available at https://github.com/pilgun/
acvtool.
ACVTool can be readily used with various dynamic analysis and automated testing tools, e.g., Intel-
liDroid [56], CopperDroid [50], Sapienz [43], Stoat [49], DynoDroid [42], CuriousDroid [14] and the like,
to measure code coverage. This work extends our preliminary results reported in [44, 20].
This paper is structured as follows. We give some background information about Android applications and
their code coverage measurement aspects in Section 2. The ACVTool design and workflow are presented in
Section 3. Section 4 details our bytecode instrumentation approach. In Section 5, we report on the exper-
iments we performed to evaluate the effectiveness and efficiency of ACVTool, and to assess how compliant
is the coverage data reported by ACVTool to the data measured by the JaCoCo system on the source code.
Section 6 presents our results on integrating ACVTool with the Sapienz automated testing framework, and
discusses the contribution of code coverage data to bug finding in Android apps. Then we discuss the limi-
tations of our prototype and threats to validity for our empirical findings in Section 7, and we overview the
related work and compare ACVTool to the existing tools for black-box Android code coverage measurement
in Section 8. We conclude with Section 9.
2 Background
2.1 APK Internals
Android apps are distributed as apk packages that contain the resources files, native libraries (*.so), com-
piled code files (*.dex), manifest (AndroidManifest.xml), and developer’s signature. Typical application
resources are user interface layout files and multimedia content (icons, images, sounds, videos, etc.). Na-
tive libraries are compiled C/C++ modules that are often used for speeding up computationally intensive
operations.
Android apps are usually developed in Java and, more recently, in Kotlin – a JVM-compatible language [18].
Upon compilation, code files are first transformed into Java bytecode files (*.class), and then converted into
a Dalvik executable file (classes.dex) that can be executed by the Dalvik/ART Android virtual machine
(DVM). Usually, there is only one dex file, but Android also supports multiple dex files. Such apps are called
multidex applications.
In contrast to most JVM implementations that are stack-based, DVM is a register-based virtual machine4.
It assigns local variables to registers, and the DVM instructions (opcodes) directly manipulate the values
stored in the registers. Each application method has a set of registers defined in its beginning, and all
computations inside the method can be done only through this register set. The method parameters are also
a part of this set. The parameter values sent into the method are always stored in the registers at the end
of method’s register set.
Since raw Dalvik binaries are difficult for human understanding, several intermediate representations have
been proposed that are more analyst-friendly: smali [32, 27] and Jimple [52]. In this work, we work
with smali, which is a low-level programming language for the Android platform. Smali is supported
by Google [27], and it can be viewed and manipulated using, e.g., the smalidea plugin for the IntelliJ
IDEA/Android Studio [32].
The Android manifest file is used to set up various parameters of an app (e.g., whether it has been compiled
with the “debug” flag enabled), to list its components, and to specify the set of declared and requested
Android permissions. The manifest provides a feature that is very important for the purpose of this paper:
it allows to specify the instrumentation class that can monitor at runtime all interactions between the
Android system and the app. We rely upon this functionality to enable the code coverage measurement, and
to intercept the crashes of an app and log their details.
4We refer the interested reader to the official Android documentation about the Dalvik bytecode internals [25]
and the presentation by Bornstein [10].
Before an app can be installed onto a device, it must be cryptographically signed with a developer’s certificate
(the signature is located under the META-INF folder inside an .apk file) [63]. The purpose of this signature
is to establish the trust relationship between the apps of the same signature holder: for example, it ensures
that the application updates are delivered from the same developer. Still, such signatures cannot be used to
verify the authenticity of the developer of an application being installed for the first time, as other parties
can modify the contents of the original application and re-sign it with their own certificates. Our approach
relies on this possibility of code re-signing to instrument the apps.
2.2 Code Coverage
The notion of code coverage refers to the metrics that help developers to estimate the portion of the source
code or the bytecode of a program executed at runtime, e.g., while running a test suite [3]. Coverage
metrics are routinely used in the white-box testing setting, when the source code is available. They allow
developers to estimate the relevant parts of the source code that have never been executed by a particular
set of tests, thus facilitating, e.g., regression-testing and improvement of test suites. Furthermore, code
coverage metrics are regularly applied as components of fitness functions that are used for other purposes:
fault localization [51], automatic test generation [43], and test prioritization [51]. In particular, security
testing of Android apps falls under the black-box testing category, as the source code of third-party apps is
rarely available: there is no requirement to submit the source code to Google Play. Still, Google tests all
submitted apps to ensure that they meet the security standards5. It is important to understand how well
a third-party app has been exercised in the black-box setting, and various Android app testing tools are
routinely evaluated with respect to the achieved code coverage [30, 34, 17, 53].
There exist several levels of granularity at which the code coverage can be measured. Statement coverage,
basic block coverage, and function (method) coverage are very widely used. Other coverage metrics exist
as well: branch, condition, parameter, data-flow, etc [3]. However, these metrics are rarely used within the
Android community, as they are not widely supported by the most popular coverage tools for Java and
Android source code, namely JaCoCo [31] and EMMA [45]. On the other hand, the Android community
often uses the activity coverage metric, that counts the proportion of executed activities [43, 6, 62, 14] (classes
of Android apps that implement the user interface), because this metric is useful and is relatively easy to
compute.
There is an important distinction in measuring the statement coverage of an app at the source code and at
the bytecode levels: the instructions and methods within the bytecode may not exactly correspond to the
instructions and methods within the original source code. For example, a single source code statement may
correspond to several bytecode instructions [10]. Also, a compiler may optimize the bytecode so that the
number of methods is different, or the control flow structure of the app is altered [51, 36]. It is not always
possible to map the source code statements to the corresponding bytecode instructions without having the
debug information. Therefore, it is practical to expect that the source code statement coverage cannot be
reliably measured within the black-box testing scenario, and we resort to measuring the bytecode instruction
coverage.
3 ACVTool Design
ACVTool allows to measure and analyze the degree to which the code of a closed-source Android app is
executed during testing, and to collect crash reports occurred during this process. We have designed the
tool to be self-contained by embedding all dependencies required to collect the runtime information into
the application under the test (AUT). Therefore, our tool does not require to install additional software
components, allowing it to be effortlessly integrated into any existing testing or security analysis pipeline.
For instance, we have tested ACVTool with the random UI event generator Monkey [24], and we have
integrated it with the Sapienz tool [43] to experiment with fine-grained coverage metrics (see details in
Section 6). Furthermore, for instrumentation ACVTool uses only the instructions available on all current
Android platforms. The instrumented app is thus compatible with all emulators and devices. We have tested
whether the instrumented apps work using an Android emulator and a Google Nexus phone.
Figure 1 illustrates the workflow of ACVTool that consists of three phases: offline, online and report gener-
ation. At the time of the offline phase, the app is instrumented and prepared for running on a device or an
emulator. During the online phase, ACVTool installs the instrumented app, runs it and collects its runtime
5https://www.android.com/security-center/
Figure 1: ACVTool workflow
information (coverage measurements and crashes). At the report generation phase, the runtime information
of the app is extracted from the device and used to generate a coverage report. Below we describe these
phases in detail.
3.1 Offline Phase
The offline phase of ACVTool is focused on app instrumentation. In a nutshell, this process consists of
several steps depicted in the upper part of Figure 1. The original Android app is first decompiled using
apktool [54]. Under the hood, apktool uses the smali/baksmali disassembler [32] to disassemble .dex
files and transform them into smali representation. To track the execution of the original smali instructions,
we insert special probe instructions after each of them. These probes are invoked right after the corresponding
original instructions, allowing us to precisely track their execution at runtime. After the instrumentation,
ACVTool compiles the instrumented version of the app using apktool and signs it with apksigner. Thus,
by relying upon native Android tools and the well-supported tools provided by the community, ACVTool is
able to instrument almost every app. We present the details of our instrumentation process in Section 4.
In order to collect the runtime information, we used the approach proposed in [64] by developing a special
Instrumentation class. ACVTool embeds this class into the app code, allowing the tool to collect the
runtime information. After the app has been tested, this class serializes the runtime information (represented
as a set of boolean arrays) into a binary representation, and saves it to the external storage of an Android
device. The Instrumentation class also collects and saves the data about crashes within the AUT, and
registers a broadcast receiver. The receiver waits for a special event notifying that the process collecting
the runtime information should be stopped. Therefore, various testing tools can use the standard Android
broadcasting mechanism to control ACVTool externally.
ACVTool makes several changes to the Android manifest file (decompiled from binary to normal xml format
by apktool). First, to write the runtime information to the external storage, we additionally request
the WRITE_EXTERNAL_STORAGE permission. Second, we add a special instrument tag that registers our
Instrumentation class as an instrumentation entry point.
After the instrumentation is finished, ACVTool assembles the instrumented package with apktool, re-signs
and aligns it with standard Android utilities apksigner and zipalign. Thus, the offline phase yields an
instrumented app that can be installed onto a device and executed.
It should be mentioned that we sign the application with a new signature. Therefore, if the application
checks the validity of the signature at runtime, the instrumented application may fail or run with reduced
functionality, e.g., it may show a message to the user that the application is repackaged and may not work
properly.
Figure 2: ACVTool html report
Along with the instrumented apk file, the offline phase produces an instrumentation report. It is a serialized
code representation saved into a binary file with the pickle extension that is used to map probe indices in
a binary array to the corresponding original bytecode instructions. This data along with the runtime report
(described in Section 3.2) is used during the report generation phase. Currently, ACVTool can instrument
an application to collect instruction-, method- and class-level coverage information.
3.2 Online Phase
During the online phase, ACVTool installs the instrumented app onto a device or an emulator using the
adb utility, and initiates the process of collecting the runtime information by starting the Instrumentation
class. This class is activated through the adb shell am instrument command. Developers can then test
the app manually, run a test suite, or interact with the app in any other way, e.g., by running tools, such
as Monkey [24], IntelliDroid [56], or Sapienz [43]. ACVTool’s data collection does not influence the app
execution. If the Instrumentation class has not been activated, the app can still be run in a normal way.
After the testing is over, ACVTool generates a broadcast that instructs the Instrumentation class to stop
the coverage data collection. Upon receiving the broadcast, the class consolidates the runtime information
into a runtime report and stores it on the external storage of the testing device. Additionally, ACVTool keeps
the information about all crashes of the AUT, including the timestamp of a crash, the name of the class
that crashed, the corresponding error message and the full stack trace. By default, ACVTool is configured
to catch all runtime exceptions in an AUT without stopping its execution – this can be useful for collecting
the code coverage information right after a crash happens, helping to pinpoint its location.
3.3 Report Generation Phase
The runtime report is a set of boolean vectors (with all elements initially set to False), such that each of
these vectors corresponds to one class of the app. Every element of a vector maps to a probe that has been
inserted into the class. Once a probe has been executed, the corresponding vector’s element is set to True,
meaning that the associated instruction has been covered. To build the coverage report, which shows what
original instructions have been executed during the testing, ACVTool uses data from the runtime report,
showing what probes have been invoked at runtime, and from the instrumentation report that maps these
probes to original instructions.
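Conceptually, a probe and its slot in the runtime report behave like the following Java sketch (the names are illustrative; the real implementation emits the equivalent smali, as Listing 3 in Section 4 shows):
// Illustrative model of ACVTool's runtime report, not the actual code.
public final class StorageClass {
    // One boolean vector per instrumented class; all elements start false.
    public static final boolean[] Activity1267 = new boolean[64];
}
// The probe inserted after the i-th tracked instruction of that class
// compiles down to a single aput-boolean, i.e. effectively:
//     StorageClass.Activity1267[i] = true;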
Currently, ACVTool generates reports in the html and xml formats. These reports have a structure similar to
the reports produced by the JaCoCo tool [31]. While html reports are convenient for visual inspection, xml
reports are more suitable for automated processing. Figure 2 shows an example of a html report. Analysts
can browse this report and navigate the hyperlinks that direct to the smali code of individual files of the
app, where the covered smali instructions are highlighted (as shown in Figure 3).
4 Code Instrumentation
Huang et al. [30] proposed two different approaches for measuring bytecode coverage: (1) direct instrumen-
tation by placing probes right after the instruction that has to be monitored for coverage (this requires using
additional registers); (2) indirect instrumentation by wrapping probes into separate functions. The latter
instrumentation approach introduces significant overhead in terms of added methods, that could potentially
Figure 3: Covered smali instructions highlighted by ACVTool
lead to reaching the upper limit of method references per .dex file (65536 methods, see [26]). Thus, we built
ACVTool upon the former approach.
1 private void updateElements() {
2 boolean updated = false;
3 while (!updated) {
4 updated = updateAllElements();
5 }
6 }
Listing 1: Original Java code example.
1 .method private updateElements()V
2 .locals 1
3 const/4 v0, 0x0
4 .local v0, "updated":Z
5 :goto_0
6 if-nez v0, :cond_0
7 invoke-direct {p0}, Lcom/demo/Activity;->updateAllElements()Z
8 move-result v0
9 goto :goto_0
10 :cond_0
11 return-void
12 .end method
Listing 2: Original smali code example.
4.1 Bytecode representation
To instrument Android apps, ACVTool relies on the apkil library [58] that creates a tree-based structure of
smali code. The apkil’s tree contains classes, fields, methods, and instructions as nodes. It also maintains
relations between instructions, labels, try–catch and switch blocks. We use this tool for two purposes: (1)
apkil builds a structure representing the code that facilitates bytecode manipulations; (2) it maintains links
to the inserted probes, allowing us to generate the code coverage report.
Unfortunately, apkil has not been maintained since 2013. Therefore, we adapted it to enable support for
more recent versions of Android. In particular, we added the annotation support for classes and methods,
which has appeared in the Android API 19, and has been further extended in the API 22. We plan to
support the new APIs in the future.
Tracking the bytecode coverage requires not only to insert the probes while keeping the bytecode valid,
but also to maintain the references between the original and the instrumented bytecode. For this purpose,
when we generate the apkil representation of the original bytecode, we annotate the nodes that represent
the original bytecode instructions with additional information about the probes we inserted to track their
execution. We then save this annotated intermediate representation of the original bytecode into a separate
serialized .pickle file as the instrumentation report.
1 .method private updateElements()V
2 .locals 4
3 move-object/16 v1, p0
4 sget-object v2, Lcom/acvtool/StorageClass;->Activity1267:[Z
5 const/16 v3, 0x1
6 const/16 v4, 0x9
7 aput-boolean v3, v2, v4
8 const/4 v0, 0x0
9 goto/32 :goto_hack_4
10 :goto_hack_back_4
11 :goto_0
12 goto/32 :goto_hack_3
13 :goto_hack_back_3
14 if-nez v0, :cond_0
15 goto/32 :goto_hack_2
16 :goto_hack_back_2
17 invoke-direct {v1}, Lcom/demo/Activity;->updateAllElements()Z
18 move-result v0
19 goto/32 :goto_hack_1
20 :goto_hack_back_1
21 goto :goto_0
22 :cond_0
23 goto/32 :goto_hack_0
24 :goto_hack_back_0
25 return-void
26 :goto_hack_0
27 const/16 v4, 0x4
28 aput-boolean v3, v2, v4
29 goto/32 :goto_hack_back_0
30 :goto_hack_1
31 const/16 v4, 0x5
32 aput-boolean v3, v2, v4
33 goto/32 :goto_hack_back_1
34 :goto_hack_2
35 const/16 v4, 0x6
36 aput-boolean v3, v2, v4
37 goto/32 :goto_hack_back_2
38 :goto_hack_3
39 const/16 v4, 0x7
40 aput-boolean v3, v2, v4
41 goto/32 :goto_hack_back_3
42 :goto_hack_4
43 const/16 v4, 0x8
44 aput-boolean v3, v2, v4
45 goto/32 :goto_hack_back_4
46 .end method
Listing 3: Instrumented smali code example. The yellow lines highlight the added instructions.
4.2 Register management
To exemplify how our instrumentation works, Listing 1 gives an example of a Java code fragment, Listing 2
shows its smali representation, and Listing 3 illustrates the corresponding smali code instrumented by
ACVTool.
The probe instructions that we insert are simple aput-boolean opcode instructions (e.g., Line 7 in Listing 3).
These instructions put a boolean value (the first argument of the opcode instruction) into an array identified
by a reference (the second argument), to a certain cell at an index (the third argument). Therefore, to store
these arguments we need to allocate three additional registers per app method.
The addition of these registers is not a trivial task. We cannot simply use the first three registers in the
beginning of the stack because this will require modification of the remaining method code and changing
the corresponding indices of the registers. Moreover, some instructions can address only 16 registers [26],
therefore the addition of new registers could make them malformed. Similarly, we cannot easily use new
registers in the end of the stack because method parameters registers must always be the last ones.
To overcome this issue, we use the following hack. We allocate three new registers, however, in the beginning
of a method we copy the values of the argument registers to their corresponding places in the original method.
For instance, in Listing 3 the instruction at Line 3 copies the value of the parameter p0 into the register
v1 that has the same register position as in the original method (see Listing 2). Depending on the value
type, we use different move instructions for copying: move-object/16 for objects, move-wide/16 for paired
registers (Android uses register pairs for long and double types), move/16 for others. Then we update all
occurrences of parameter registers through the method body from p names to their v aliases (compare the
Line 7 in Listing 2 with Line 17 in Listing 3). Afterwards, the last 3 registers in the stack are safe to use for
the probe arguments (for instance, see Lines 4-6 in Listing 3).
4.3 Probes insertion
Apart from moving the registers, there are other issues that must be addressed for inserting the probes
correctly. First, it is impractical to insert probes after certain instructions that change the execution
flow of a program, namely return, goto (line 21 in listing 3), and throw. If a probe was placed right after
these instructions, it would never be reached during the program execution.
Second, some instructions come in pairs. For instance, the invoke-* opcodes, which are used to invoke a
method, must be followed by the appropriate move-result* instruction to store the result of the method
execution [26] (see Lines 17-18 in Listing 3). Therefore, we cannot insert a probe between them. Similarly,
in case of an exception, the result must be immediately handled. Thus, a probe cannot be inserted between
the catch label and the move-exception instruction.
These aspects of the Android bytecode mean that we insert probes after each instruction, but not after
the ones modifying the execution flow, and the first command in the paired instructions. These excluded
instructions are untraceable for our approach, and we do not consider them to be a part of the resulting code
coverage metric. Note that in case of a method invocation instruction, we log each invoked method, so that
the computed method code coverage will not be affected by this.
The VerifyChecker component of the Android Runtime that checks the code validity at runtime poses
additional challenges. For example, the Java synchronized block, which allows a particular code section to
be executed by only one thread at a time, corresponds to a pair of the monitor-enter and monitor-exit
instructions in the Dalvik bytecode. To ensure that the lock is eventually released, this instruction pair
is wrapped with an implicit try–catch block, where the catch part contains an additional monitor-exit
statement. Therefore, in case of an exception inside a lock, another monitor-exit instruction will unlock
the thread. VerifyChecker ensures that the monitor-exit instruction will be executed only once, so it
does not allow to add any instructions that may potentially raise an exception. To overcome this limitation,
we insert the goto/32 statement to redirect the flow to the tracking instruction, and a label to go back
after the tracking instruction was executed. Since VerifyChecker examines the code sequentially, and the
goto/32 statement is not considered as a statement that may throw exceptions, our approach allows the
instrumented code to pass the code validity check.
5 Evaluation
Our code coverage tracking approach modifies the app bytecode by adding probes and repackaging the orig-
inal app. This approach could be deemed too intrusive to use with the majority of third-party applications.
To prove the validity and the practical usefulness of our tool, we have performed an extensive empirical
evaluation of ACVTool with respect to the following criteria:
Effectiveness. We report the instrumentation success rate of ACVTool, broken down in the following
numbers:
• Instrumentation success rate. We report how many apps from our datasets have been successfully
instrumented with ACVTool.
• App health after instrumentation. We measure percentage of the instrumented apps that can run
on an emulator. We call these apps healthy6. To report this statistic, we installed the instrumented
apps on the Android emulator and launched their main activity. If an app is able to run for 3 seconds
without crashing, we count it as healthy.
Efficiency. We assess the following characteristics:
• Instrumentation-time overhead. Traditionally, the preparation of apps for testing is considered to
be an offline activity that is not time-sensitive. Given that the black-box testing may be time-
demanding (e.g., Sapienz [43] tests each application for hours), our goal is to ensure that the instru-
6To the best of our knowledge, we are the first to report the percentage of instrumented apps that are healthy.
mentation time is insignificant in comparison to the testing time. Therefore, we have measured the
time ACVTool requires to instrument apps in our datasets.
• Runtime overhead. Tracking instructions added into an app introduce their own runtime overhead,
what may be a critical issue in testing. Therefore, we evaluate the impact of the ACVTool in-
strumentation on app performance and codebase size. We quantify runtime overhead by using the
benchmark PassMark application [47], by comparing executions of original and instrumented app
versions, and by measuring the increase in .dex file size.
Compliance with other tools. We compare the coverage data reported by ACVTool with the cover-
age data measured by JaCoCo [31] which relies upon white-box approach and requires source code. This
comparison allows us to draw conclusions about the reliability of the coverage information collected by
ACVTool.
To the best of our knowledge, this is the largest empirical evaluation of a code coverage tool for Android done
so far. In the remainder of this section, after presenting the benchmark application sets used, we report on
the results obtained in dedicated experiments for each of the above criteria. The experiments were executed
on an Ubuntu server (Xeon 4114, 2.20GHz, 128GB RAM).
5.1 Benchmark
We downloaded 1000 apps from the Google Play sample of the AndroZoo dataset [2]. These apps were
selected randomly among apps built after Android API 22 was released, i.e., after November 2014. These
are real third-party apps that may use obfuscation and anti-debugging techniques, and could be more difficult
to instrument.
Among the 1000 Google Play apps, 168 could not be launched: 12 apps were missing a launchable activity,
1 had encoding problem, and 155 that crashed upon startup. These crashes could be due to some miscon-
figurations in the apps, but also due to the fact that we used an emulator. Android emulators lack many
features present in real devices. We have used the emulator, because we subsequently test ACVTool together
with Sapienz [43] (these experiments are reported in the next section). We excluded these unhealthy apps
from the consideration. In total, our Google Play benchmark contains 832 healthy apps. The apk sizes
in this set range from 20KB to 51MB, with the average apk size 9.2MB.
As one of our goals is to evaluate the reliability of the coverage data collected by ACVTool comparing to
JaCoCo as a reference, we need to have some apps with the available source code. To collect such apps, we
use the F-Droid7 dataset of open source Android apps (1330 application projects as of November 2017). We
managed to git clone 1102 of those, and found that 868 apps used Gradle as a build system. We have
successfully compiled 627 apps using 6 Gradle versions8.
To ensure that all of these 627 apps can be tested (healthy apps), we installed them on an Android emulator
and launched their main activity for 3 seconds. In total, out of these 627 apps, we obtained 446 healthy
apps that constitute our F-Droid benchmark. The size of the apps in this benchmark ranges from 8KB
to 72.7MB, with the average size of 3.1MB.
5.2 Effectiveness
5.2.1 Instrumentation success rate
Table 1 summarizes the main statistics related to the instrumentation success rate of ACVTool.
Before instrumenting applications with ACVTool, we reassembled, repackaged, rebuilt (with apktool,
zipalign, and apksigner) and installed every healthy Google Play and F-Droid app on a device. In
Google Play set, one repackaged app had crashed upon startup, and apktool could not repackage 22 apps,
raising AndrolibException. In the F-Droid set, apktool could not repackage only one app. These apps
were excluded from subsequent experiments, and we consider them as failures for ACVTool (even though
ACVTool instrumentation did not cause these failures).
7https://f-droid.org/
8Gradle versions 2.3, 2.9, 2.13, 2.14.1, 3.3, 4.2.1 were used. Note that the apps that failed to build and launch
correctly are not necessarily faulty, but they can, e.g., be built with other build systems or they may work on
older Android versions. Investigating these issues is out of the scope of our study, so we did not follow up on the
failed-to-build apps.
Table 1: ACVTool performance evaluation
Parameter                  | Google Play benchmark | F-Droid benchmark | Total
Total # healthy apps       | 832                   | 446               | 1278
Instrumented apps          | 809 (97.2%)           | 442 (99.1%)       | 1251 (97.8%)
Healthy instrumented apps  | 799 (96.0%)           | 440 (98.7%)       | 1239 (96.9%)
Avg. instrumentation time  | 36.6 sec              | 27.4 sec          | 33.3 sec
Besides the 24 apps that could not be repackaged in both app sets, ACVTool has instrumented all remaining
apps from the Google Play benchmark. Yet, it failed to instrument 3 apps from the F-Droid set. The
found issues were the following: in 2 cases apktool raised an exception ExceptionWithContext declaring an
invalid instruction offset, in 1 case apktool threw ExceptionWithContext stating that a register is invalid
and must be between v0 and v255.
5.2.2 App health after instrumentation
From all successfully instrumented Google Play apps, 10 applications crashed at launch and generated
runtime exceptions, i.e., they became unhealthy after instrumentation with ACVTool (see the third row
in Table 1). Five cases were due to the absence of a Retrofit annotation (four IllegalStateException and one
IllegalArgumentException), 1 case to ExceptionInInitializerError, 1 case to
NullPointerException, and 1 case to RuntimeException in a background service. In the F-Droid dataset, 2
apps became unhealthy due to the absence of Retrofit annotation, raising IllegalArgumentException.
Upon investigation of the issues, we suspect that they could be due to faults in the ACVTool implementation.
We are working to properly identify and fix the bugs, or to identify a limitation in our instrumentation
approach that leads to a fault for some type of apps.
Conclusion: we can conclude that ACVTool is able to process the vast majority of apps in our dataset,
i.e., it is effective for measuring code coverage of third-party Android apps. For our total combined dataset
of 1278 originally healthy apps, ACVTool has instrumented 1251, what constitutes 97.8%. From the instru-
mented apps, 1239 are still healthy after instrumentation. This gives us the instrumentation survival rate of
99%, and the total instrumentation success rate of 96.9% (of the originally healthy population). The instru-
mentation success rate of ACVTool is much better than the instrumentation rates of the closest competitors
BBoxTester [64] (65%) and the tool by Huang et al. [30] (36%).
5.3 Efficiency
5.3.1 Instrumentation-time overhead
Table 1 presents the average instrumentation time required for apps from our datasets. It shows that
ACVTool generally requires less time for instrumenting the F-Droid apps (on average, 27.4 seconds per app)
than the Google Play apps (on average, 36.6 seconds). This difference is due to the smaller size of apps, and,
in particular, the size of their .dex files. For our total combined dataset the average instrumentation time
is 33.3 seconds per app. This time is negligible comparing to the testing time usual in the black-box setting
that could easily reach several hours.
5.3.2 Runtime overhead
Running two copies with Monkey To assess the runtime overhead induced by our instrumentation
in a real world setting, we ran the original and instrumented versions of 50 apps randomly chosen from
our dataset with Monkey [24] (same seed, 50ms throttle, 250 events), and timed the executions. This
experiment showed that our runtime overhead is insignificant: mean difference of timing was 0.12 sec,
standard deviation 0.84 sec. While this experiment does not quantify the overhead precisely, it shows that
our overhead is not prohibitive in a real-world test-case scenario. Furthermore, we have not observed any
significant discrepancies in execution times, indicating that instrumented apps’ behaviour was not drastically
different from the original ones’ behaviour, and there were no unexpected crashes. Note that in some cases
two executions of the same app with the same Monkey script can still diverge due to the reactive nature of
Android programs, but we have not observed such cases in our experiments.
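For illustration, such a paired timing run can be scripted as follows (a minimal sketch; the package name is a placeholder, and in practice the original and the instrumented apk are installed and exercised one after the other):

```python
import subprocess
import time

def time_monkey_run(package, seed=42, throttle_ms=50, events=250):
    """Exercise an installed app with Monkey and return the wall-clock time."""
    cmd = ["adb", "shell", "monkey", "-p", package, "-s", str(seed),
           "--throttle", str(throttle_ms), str(events)]
    start = time.monotonic()
    subprocess.run(cmd, check=True, capture_output=True)
    return time.monotonic() - start

# Time the original apk, then reinstall the ACVTool-instrumented apk and
# time it again; the same seed makes Monkey replay the same event sequence.
t_orig = time_monkey_run("com.example.app")   # original version installed
t_instr = time_monkey_run("com.example.app")  # after reinstalling instrumented apk
print(f"runtime overhead: {t_instr - t_orig:+.2f} s")
```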
Table 2: PassMark overhead evaluation

| Granularity of instrumentation | CPU overhead | .dex size overhead |
|---|---|---|
| Only class and method | +17% | +11% |
| Class, method, and instruction | +27% | +249% |
Table 3: Increase of .dex files for the Google Play benchmark

| Summary statistics | Original file size | Instrumented (method) | Instrumented (instruction) |
|---|---|---|---|
| Minimum | 4.9KB | 17.6KB (+258%) | 19.9KB (+304%) |
| Median | 2.8MB | 3.1MB (+10%) | 7.7MB (+173%) |
| Mean | 3.5MB | 3.9MB (+11%) | 9.0MB (+157%) |
| Maximum | 18.8MB | 20MB (+7%) | 33.6MB (+78%) |
PassMark overhead To further estimate the runtime overhead we used a benchmark application called
PassMark [47]. Benchmark applications are designed to assess the performance of mobile devices. The PassMark
app is freely available on Google Play, and it contains a number of benchmarks assessing CPU
and memory access performance, the speed of writing to and reading from internal and external drives, graphics
subsystem performance, etc. These tests do not require user interaction. The research community has previously
used this app to benchmark Android-related tools (e.g., [7]).
For our experiment, we used the PassMark app version 2.0 from September 2017. This version of the app
is the latest that runs tests in the managed runtime (Dalvik and ART) rather than on bare metal using
native libraries. We prepared two versions of the PassMark app instrumented with ACVTool: one
version collects full coverage information at the class, method and instruction level; the other
logs only class and method-level coverage.
Table 2 summarizes the performance degradation of the instrumented PassMark versions in comparison to the
original app. When instrumented, the size of the PassMark .dex file increased from 159KB (the original version)
to 178KB (method-granularity instrumentation) and to 556KB (instruction-granularity instrumentation).
We ran the PassMark application 10 times for each level of instrumentation granularity and against the original
version of the app. In the CPU tests that utilize high-intensity computations, PassMark slows down, on
average, by 17% and 27% when instrumented at the method and instruction levels, respectively. The other
subsystem benchmarks did not show significant changes.
Evaluation with PassMark is artificial with respect to a common app testing scenario, as the PassMark app stress-tests
the device. However, from this evaluation we can conclude that the performance degradation under ACVTool
instrumentation is not prohibitive, especially on modern hardware.
Dex size inflation As another metric of overhead, we analysed how much ACVTool enlarges Android
apps. We measured the size of .dex files in both the instrumented and original apps of the Google Play
benchmark. As shown in Table 3, the .dex file increases on average by 157% when instrumented
at the instruction level, and by 11% at the method level. Among existing tools for code coverage
measurement, InsDal [40] introduced a .dex size increase of 18.2% (on a dataset of 10 apks; average .dex
size 3.6MB) when instrumenting apps for method-level coverage. Thus, ACVTool shows smaller code size
inflation than InsDal.
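Since an apk is a zip archive with the dex code stored in classes.dex, classes2.dex, etc., the inflation numbers can be computed with a short script (a minimal sketch):

```python
import zipfile

def dex_size(apk_path):
    """Total size in bytes of all classes*.dex entries inside an apk."""
    with zipfile.ZipFile(apk_path) as apk:
        return sum(info.file_size for info in apk.infolist()
                   if info.filename.startswith("classes")
                   and info.filename.endswith(".dex"))

def dex_inflation_percent(original_apk, instrumented_apk):
    """Relative growth of the dex code after instrumentation, in percent."""
    before = dex_size(original_apk)
    after = dex_size(instrumented_apk)
    return 100.0 * (after - before) / before
```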
Conclusion: ACVTool introduces an off-line instrumentation overhead that is negligible considering the
total duration of testing, which can last hours. The run-time overhead in live testing with Monkey is
negligible. In the stress-testing with the benchmark PassMark app, ACVTool introduces 27% overhead in
CPU. The increase in code base size introduced by the instrumentation instructions, while significant, is not
prohibitive. Thus, we can conclude that ACVTool is efficient for measuring code coverage in Android app
testing pipelines.
5.4 Compliance with JaCoCo
When the source code is available, developers can log code coverage of Android apps using the JaCoCo
library [31] that could be integrated into the development pipeline via the Gradle plugin. We used the
[Figure 4: Compliance of coverage data reported by ACVTool and JaCoCo. (a) Boxplot of the Pearson correlation between code coverage data computed by ACVTool and JaCoCo. (b) Scatterplot of the number of instructions in app methods, as computed by ACVTool (x-axis) and JaCoCo (y-axis).]
coverage data reported by this library to evaluate the correctness of code coverage metrics reported by
ACVTool.
For this experiment, we used only the F-Droid benchmark because it contains open-source applications. We put
the new jacocoTestReport task in the Gradle configuration file and added our Instrumentation class to
the app source code. In this way we avoided creating app-specific tests and could run any automatic testing
tool. Due to the diversity of project structures and Gradle versions, there were many faulty builds. In
total, we obtained 141 apks correctly instrumented with JaCoCo, i.e., apks for which we could generate JaCoCo
reports.
We ran two copies of each app (one instrumented with ACVTool and one with JaCoCo) on the Android emulator
using the same Monkey [24] scripts for both versions. Figure 4a shows the boxplot of the correlation of code
coverage measured by ACVTool and JaCoCo. Each data point corresponds to one application, and its value
is the Pearson correlation coefficient between the percentages of executed code, over all methods included in the
app. The minimal correlation is 0.21, the first quartile is 0.94, the median is 0.99, and the maximum is 1.00. This
means that for more than 75% of the tested applications, the code coverage measurements have a
correlation of 0.94 or higher, i.e., they are strongly correlated. Overall, the boxplot demonstrates that
code coverage logged by ACVTool is strongly correlated with code coverage logged by JaCoCo.
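For illustration, the per-app correlation can be computed as follows (a sketch assuming scipy is available; the coverage dictionaries are hypothetical structures mapping each method signature to the percentage of its instructions executed, one dictionary per tool):

```python
import numpy as np
from scipy.stats import pearsonr

def per_app_correlation(acv_coverage, jacoco_coverage):
    """Pearson correlation between per-method coverage percentages
    reported by ACVTool and JaCoCo for one application."""
    methods = sorted(set(acv_coverage) & set(jacoco_coverage))
    x = np.array([acv_coverage[m] for m in methods])
    y = np.array([jacoco_coverage[m] for m in methods])
    r, _p_value = pearsonr(x, y)
    return r  # one data point of the boxplot in Figure 4a
```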
The boxplot in Figure 4a contains a number of outliers. Discrepancies in code coverage measurement
appear for several reasons. First of all, as mentioned in Section 4, ACVTool does not track some
instructions. It is our choice not to count those instructions as covered. In our F-Droid dataset, about
half of the app methods consist of 7 smali instructions or less. Evidently, the correlation of covered instructions
for such small methods can be perturbed by these untraceable instructions.
The second reason for the slightly different reported code coverage is the differences between smali code and
Java bytecode. Figure 4b shows a scatterplot of method instruction numbers in smali code (measured by
ACVTool, including the “untraceable” instructions) and in Java bytecode (measured by JaCoCo). Each point
in this figure corresponds to an individual method of one of the apks. The line in the figure is the linear
regression line. The data shape demonstrates that the number of instructions in smali code is usually
slightly smaller than the number of instructions in Java bytecode.
Figure 4b also shows some outliers, i.e., methods that have low instruction counts in smali
but many instructions in Java bytecode. We manually inspected all these methods and found that the
outliers are constructor methods that contain array declarations. Smali (and the Dalvik VM) allocates such
arrays with a single pseudo-instruction (.array-data), while the equivalent Java bytecode is much longer [10].
Table 4: Crashes found by Sapienz in 799 apps

| Coverage metric | # unique crashes | # faulty apps | # crash types |
|---|---|---|---|
| Activity coverage | 547 (47%) | 381 | 30 |
| Method coverage | 578 (50%) | 417 | 31 |
| Instruction coverage | 612 (53%) | 429 | 30 |
| Without coverage | 559 (48%) | 396 | 32 |
| Total | 1151 | 574 | 37 |
Conclusion: overall, we can summarize that code coverage data reported by ACVTool generally agree with
data computed by JaCoCo. The discrepancies in code coverage appear due to the different approaches that
the tools use, and the inherent differences in the Dalvik and Java bytecodes.
6 Contribution of Code Coverage Data to Bug Finding
To assess the usefulness of ACVTool in practical black-box testing and analysis scenarios, we integrated
ACVTool with Sapienz [43] – a state-of-the-art automated search-based testing tool for Android. Its fitness function
looks for Pareto-optimal solutions using three criteria: code coverage, the number of crashes found, and the length
of a test suite. This experiment had two main goals: (1) to ensure that ACVTool fits into a real automated
testing/analysis pipeline; (2) to evaluate whether the fine-grained code coverage measure provided by ACVTool
can help to automatically uncover diverse types of crashes with a black-box testing strategy.
Sapienz integrates three approaches to measure the code coverage achieved by a test suite: EMMA [45] (reports
source code statement coverage); ELLA [21] (reports method coverage); and its own plugin that measures
coverage in terms of launched Android activities. EMMA does not work without the source code of apps,
so in the black-box setting only ELLA and Sapienz's own plugin can be used. The original Sapienz
paper [43] did not evaluate the impact of the chosen code coverage metric on the discovered crashes.
Our previously reported experiment with JaCoCo suggests that ACVTool can be used to replace EMMA, as
the coverage data reported for Java instructions and smali instructions are highly correlated and commensurable.
Furthermore, ACVTool integrates the capability to measure coverage in terms of classes and methods, and thus
it can also replace ELLA within the Sapienz framework. Note that the code coverage measurement itself
does not interfere with the search algorithms used by Sapienz.
As our dataset, we used the healthy instrumented apks from the Google Play dataset described in the previous
section. We ran Sapienz against each of these 799 apps, using its default parameters. Each app has been
tested using the activity coverage provided by Sapienz, and the method and instruction coverage supplied
by ACVTool. Furthermore, we also ran Sapienz without coverage data, i.e., substituting the coverage of each
test suite with 0.
On average, each app has been tested by Sapienz for 3 hours for each coverage metric. After each run,
we collected the crash information (if any), which included the components of apps that crashed and Java
exception stack traces.
In this section we report on the results of crash detection with different coverage metrics and draw conclusions
about how different coverage metrics contribute to bug detection.
6.1 Descriptive statistics of crashes
Table 4 shows the numbers of crashes, grouped by coverage metric, that Sapienz found in the 799 apps.
We consider a unique crash to be a unique combination of an application, the component where the crash occurred,
the line of code that triggered the exception, and the specific Java exception type.
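This deduplication criterion is easy to express in code; the sketch below assumes hypothetical record field names:

```python
def crash_key(record):
    """Identity of a crash: app, crashed component, crash site, exception type."""
    return (record["app"], record["component"],
            record["line"], record["exception_type"])

def unique_crashes(crash_records):
    """Collapse raw crash logs into the set of unique crashes."""
    return {crash_key(r) for r in crash_records}
```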
In total, Sapienz has found 574 apps out of 799 to be faulty (at least one crash detected), and it has logged
1151 unique crashes with the four coverage conditions. Figure 5a summarizes the crash distribution for the
coverage metrics. The intersection of all code coverage conditions’ results contains 191 unique crashes (18%
of total crash population). Individual coverage metrics have found 53% (instruction coverage), 50% (method
coverage), 48% (without coverage), and 47% (activity coverage) of the total found crashes.
[Figure 5: Crashes found by Sapienz. (a) Venn diagram of crashes found with Sapienz using different coverage metrics in 799 apps (the intersection of all four conditions contains 191 unique crashes). (b) Barplot of crashes found by coverage metrics individually and jointly (a stands for activity, m for method, i for instruction coverage, and n for no coverage).]
Our empirical results suggest that coverage metrics of different granularities can find distinct crashes. In particular,
we note the tendency of instruction and method coverage, and of activity coverage and Sapienz
without coverage data, to find similar crashes. This result indicates that activity coverage could be too
coarse-grained, and comparable in effect to ignoring coverage data altogether. Instruction and method-level coverage
metrics, on the contrary, are relatively fine-grained and are able to drive the genetic algorithms in
Sapienz towards different crash populations. Therefore, it is possible that a combination of a coarse-grained
metric (or some executions without coverage data) and a fine-grained metric measured by ACVTool could
provide better results in testing with Sapienz.
We now set out to investigate how multiple runs affect detected crashes, and whether a combination of
coverage metrics could detect more crashes than a single metric.
6.2 Evaluating behavior on multiple runs
Before assessing whether a combination of metrics could be beneficial for evolutionary testing, we assess
the impact of randomness on Sapienz's results. Like many other automated testing tools for Android,
Sapienz is non-deterministic, and our findings may be affected by this. To determine the average impact of coverage
metrics on finding crashes, we need to investigate how crash detection behaves over multiple runs.
Thus, we performed the following two experiments on a set of 150 apks randomly selected from the 799
healthy instrumented apks.
6.2.1 Performance in 5 runs
We ran Sapienz 5 times with each coverage metric and without coverage data, for each of the 150 apps.
This gives us two crash populations: P1, which contains the unique crashes detected in the 150 apps during the
first experiment, and P5, which contains the unique crashes detected in the same apps when running Sapienz 5 times.
Table 5 summarizes the populations of crashes found by Sapienz with each of the coverage metrics and
without coverage.
As expected, running Sapienz multiple times increases the number of crashes found. In this experiment, we
are interested in the proportion of crashes contributed by each coverage metric individually. If the coverage metrics
were interchangeable, i.e., if they did not differ in their capability of finding crashes and would eventually find
the same crashes, then the proportion of crashes found by each individual metric relative to the total crash population could
be expected to increase significantly: each metric, given more attempts, would find a larger share of the
total crash population.
As shown in Table 5, the activity coverage found a significantly larger proportion of the total crash population
(52%, up from 42%). Sapienz without coverage data also shows better performance over multiple runs (51%,
up from 43%). Yet, the instruction coverage only slightly increased its share (54%, up from 50%), while the
Table 5: Crashes found in 150 apps with 1 and 5 runs

| Coverage metric | P1: 1 run | P5: 5 runs |
|---|---|---|
| Activity coverage | 86 (42%) | 184 (52%) |
| Method coverage | 104 (51%) | 174 (49%) |
| Instruction coverage | 103 (50%) | 190 (54%) |
| No coverage | 89 (43%) | 180 (51%) |
| Total | 203 | 351 |
[Figure 6: Boxplots of crashes detected per app, for all metrics together and for each metric individually (a stands for activity, m for method, and i for instruction, respectively).]
Table 6: Summary statistics for crashes found per apk, in 150 apks

| Statistics | 1 run × 3 metrics | 3 runs × 1 metric (activity) | 3 runs × 1 metric (method) | 3 runs × 1 metric (instruction) |
|---|---|---|---|---|
| Min | 0 | 0 | 0 | 0 |
| 1st quartile | 0 | 0 | 0 | 0 |
| Mean | 1.20 | 1.02 | 0.97 | 1.08 |
| Median | 1 | 0 | 0.5 | 1 |
| 3rd quartile | 2 | 1 | 1 | 1.75 |
| Max | 11 | 11 | 8 | 8 |
method coverage fared worse (49%, down from 51%). These findings suggest that the coverage metrics are not
interchangeable: even with 5 repetitions, no metric was able to find all the crashes detected by the other
metrics. Our findings in this experiment are consistent with the previously reported, smaller experiment that
involved only 100 apps (see [20] for more details).
6.2.2 The Wilcoxon signed-rank test
The previous experiment indicates that even repeating the runs multiple times does not allow any single
code coverage metric to find the same number of bugs as all metrics together. The instruction coverage
seems to perform slightly better than the rest in the repeated runs, but not by much. We now fix the time
that Sapienz spends on each apk9, and we want to establish whether the number of crashes that Sapienz
can find in an apk with 3 metrics is greater than the number of crashes found with just one metric but with
3 attempts. This would suggest that the combination of 3 metrics is more effective in finding crashes than
each individual metric. For each apk from the chosen 150 apps, we compute the number of crashes detected
by Sapienz with each of the three coverage metrics executed once. We then executed Sapienz 3 times
against each apk with each coverage metric individually.
9 In these testing scenarios, Sapienz spends the same amount of time per app (3 runs), but the coverage conditions are different.
Table 6 summarizes the basic statistics for the per-apk crash counts, and the data shapes are shown as
boxplots in Figure 6. The summary statistics show that Sapienz equipped with 3 coverage metrics found,
on average, more crashes per apk than Sapienz using only one metric but executed 3 times. To verify this,
we apply the Wilcoxon signed-rank test [55]. This statistical test is appropriate, as our data is paired but
not necessarily normally distributed.
The null hypothesis for the Wilcoxon test is that it does not matter which metric Sapienz uses: on average,
Sapienz with 3 metrics will find the same number of crashes in an app as Sapienz with 1 metric
run 3 times. The alternative hypothesis is that Sapienz with several coverage metrics will consistently find
more crashes. At the standard significance level of 5%, on our data, the Wilcoxon
test rejected the null hypothesis for the activity and method coverage metrics (p-values 0.002 and 0.004,
respectively), but not for the instruction coverage (p-value 0.09, above the 0.05 threshold).
Cohen's d effect sizes are 0.297, 0.282, and 0.168 for the activity, method, and instruction coverage metrics, respectively.
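A sketch of how this comparison can be computed (assuming scipy; the one-sided alternative requires a reasonably recent scipy version, and Cohen's d is computed here with the common paired-samples convention of mean difference over the standard deviation of the differences):

```python
import numpy as np
from scipy.stats import wilcoxon

def compare_conditions(crashes_three_metrics, crashes_one_metric_3runs):
    """Paired comparison of per-apk crash counts under two testing budgets."""
    a = np.asarray(crashes_three_metrics, dtype=float)
    b = np.asarray(crashes_one_metric_3runs, dtype=float)
    # H1: the combination of three metrics finds more crashes per apk.
    _stat, p_value = wilcoxon(a, b, alternative="greater")
    diff = a - b
    cohens_d = diff.mean() / diff.std(ddof=1)  # paired-samples effect size
    return p_value, cohens_d
```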
These results show that a combination of coverage metrics likely achieves better results than
activity or method coverage alone. The outcome for the instruction coverage is inconclusive. We posit that a
larger-scale experiment could provide a clearer picture.
Indeed, our findings from this last experiment are also not fully consistent with the previously reported
experiment on a smaller set of 100 apps [20]. The difference can be explained by two factors. First, we
used only healthy instrumented apps in this experiment (the ones that did not crash upon installation).
The experiment reported in [20] did not involve the check for healthiness, and the crashing apps could have
affected the picture. For an unhealthy app, Sapienz always reports one single crash, irrespective
of which coverage metric is used. Note that in our Google Play sample approximately 17% of apps are unhealthy,
i.e., they cannot be executed on an emulator, as required by Sapienz. Second, the new apps tested in this
experiment could have behaved slightly differently from the previously tested cohort, and the instruction
coverage was able to find more bugs in them.
6.3 Analysis of results
Our experiments show that ACVTool can be integrated into an automated testing pipeline and used
in conjunction with available testing tools such as Sapienz. Furthermore, our results establish that a fine-grained
code coverage measurement tool, such as ACVTool, can help to improve automated testing
tools that rely on code coverage. Indeed, in our experiment with 799 Google Play apps, the instruction-level
coverage identified more faults in a single Sapienz run than the other considered coverage metrics.
Moreover, we compared the three coverage metrics executed once against individual metrics executed 3 times:
the method and activity coverage metrics were found less effective by the Wilcoxon test of means, while the
instruction coverage alone could be reasonably effective in finding bugs.
We can also conclude that better investigation and integration of different coverage granularities is warranted
in the automated Android testing domain, just as in software testing in general [15]. In our experiment
with 799 apps, Sapienz without coverage data showed results most comparable to Sapienz equipped
with activity coverage. This finding could indicate that activity coverage is too coarse-grained to be a
useful fitness function in automated Android testing. On the other hand, our experiment with 5 repeated
executions shows that no coverage metric is able to find the vast majority of the total found crash
population. This result indicates that different granularities of coverage are not directly interchangeable.
Further investigation of these aspects could be a promising line of research.
7 Discussion
ACVTool addresses the important problem of measuring code coverage of closed-source Android apps. Our
experiments show that the proposed instrumentation approach works for the majority of Android apps,
the measured code coverage is reliable, and the tool can be integrated with security analysis and testing
tools. We have already shown that integration of the coverage feed produced by our tool into an automated
testing framework can help to uncover more application faults. Our tool can further be used, for example,
to compare code coverage achieved by dynamic analysis tools and to find suspicious code regions.
In this section, we discuss limitations of the tool design and current implementation, and summarize the
directions in which the tool can be further enhanced. We also review threats to validity regarding the
conclusions we make from the Sapienz experiments.
7.1 Limitations of ACVTool
ACVTool's design and implementation have several limitations. An inherent limitation of our approach is that
apps must first be instrumented before their code coverage can be measured. Indeed, in our experiments,
there was a fraction of apps that could not be instrumented. Furthermore, apps can employ various means
to prevent repackaging; e.g., they can check their signature at start-up and stop executing if the
signature check fails. This limitation is common to all tools that instrument applications (e.g., [64, 21, 30, 60,
40]). Considering this, ACVTool successfully instrumented 96.9% of our total original dataset selected
randomly from F-Droid and Google Play. Our instrumentation success rates are significantly higher than
those of the related works where this aspect has been reported (e.g., [30, 64]). Therefore, ACVTool is highly
practical and reliable. We examine the related work and compare ACVTool to the available tools in the
subsequent Section 8.
We investigated the runtime overhead introduced by our instrumentation, which could be another potential
limitation. Our results show that ACVTool does not introduce a prohibitive runtime overhead. For
example, the very resource-intensive computations performed by the PassMark benchmark app degrade by
27% in the instruction-level instrumented version. This is an extreme scenario, and the overhead for an average
app will be much smaller, which is confirmed by our experiments with Monkey.
We assessed that the code coverage data from ACVTool is compliant with the measurements from the well-known
JaCoCo [31] tool. We found that, even though there can be slight discrepancies in the number
of instructions measured by JaCoCo and ACVTool, the coverage data obtained by both tools is highly
correlated and commensurable. Therefore, the fact that ACVTool does not require the source code makes
it, in contrast to JaCoCo, a very promising tool for simplifying the work of Android developers, testers, and
security specialists.
One of the reasons for the slight difference between the JaCoCo and ACVTool instruction counts is the fact
that we do not track several instructions, as specified in Section 4. Technically,
nothing precludes us from adding probes right before the “untraceable” instructions. However, we consider
this solution methodologically inconsistent, because we deem the right place for a
probe to be immediately after the executed instruction. In the future we plan to extend our approach to also compute
basic block coverage, which will fully and consistently eliminate the “untraceable” instruction issue.
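To make the probe-placement idea concrete, here is a simplified, hypothetical line-level inserter (not ACVTool's actual implementation): after every tracked smali instruction it emits a probe that marks one cell of a boolean array, assuming that the array reference, an index register, and a register preloaded with the constant 1 have been reserved for the method beforehand.

```python
def is_trackable(line):
    """Hypothetical filter: skip directives, labels, comments and blank lines."""
    stripped = line.strip()
    return bool(stripped) and not stripped.startswith((".", ":", "#"))

def insert_probes(method_body_lines, array_reg="v0", index_reg="v1", true_reg="v2"):
    """Emit an 'aput-boolean' probe after each trackable smali instruction."""
    out, probe_id = [], 0
    for line in method_body_lines:
        out.append(line)
        if is_trackable(line):
            out.append(f"    const/16 {index_reg}, {probe_id}")
            out.append(f"    aput-boolean {true_reg}, {array_reg}, {index_reg}")
            probe_id += 1
    return out, probe_id  # probe_id == number of probes inserted
```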
Another limitation of our current approach is the limit of 255 registers per method. While this limitation
could potentially affect the success rate of ACVTool, we encountered only one app in which this limit
was exceeded after instrumentation. This limitation can be addressed either by switching to another
instrumentation approach that inserts probes as specific method calls, or by splitting big methods.
Both approaches may require reassembling an app that has more than 64K methods into a multidex
apk [23]. We plan this extension as future work.
Our current ACVTool prototype does not fully support multidex apps. It is possible to improve the prototype
by adding full support for multidex files, as the instrumentation approach itself extends to multiple dex
files. In our dataset, we have 46 multidex apps, which constitutes 3.5% of the total population. In particular,
the Google Play benchmark contained 35 apks with 2 dex files and 9 apks with 3 to 9 dex files
(44 multidex apps overall). The F-Droid benchmark contained two multidex apps, with 2 and
42 dex files, respectively. The current version of the ACVTool prototype is able to instrument multidex apks
and log coverage data for them, but coverage is reported only for one dex file. While we counted
correctly instrumented multidex apks as successes for ACVTool, after excluding them the total
instrumentation success rate becomes 93.1%, which is still much higher than that of other tools.
Also, the current implementation still has a few issues (3.3% of apps have not survived instrumentation)
that we plan to fix in subsequent releases.
7.2 Threats to validity
Our experiments with Sapienz reported in Section 6 allow us to conclude that black-box code coverage
measurement provided by ACVTool is useful for state-of-art automated testing frameworks. Furthermore,
these experiments suggest that different granularities of code coverage could be combined for achieving
time-efficient and effective bug finding.
At this point, it is not yet clear which coverage metric works best. However, the fine-grained instruction-
level coverage provided with ACVTool has been able to uncover more bugs than other coverage metrics, on
our sample. Further investigation of this topic is required to better understand exactly how granularity of
code coverage affects the results, and whether there are other confounding factors that may influence the
performance of Sapienz and other similar tools.
We now discuss the threats to validity for the conclusions we draw from our experiments. These threats to
validity could potentially be eliminated by a larger-scale experiment.
Internal validity. Threats to internal validity concern the experiment’s aspects that may affect validity of
the findings. First, our preliminary experiment involved only a sample of 799 Android apps. It is, in theory,
possible that on a larger dataset we will obtain different results in terms of amount of unique crashes and
their types. A significantly larger experiment involving thousands of apps could lead to more robust results.
Second, Sapienz relies on the random input event generator Monkey [24] as the underlying test engine, and
thus it is not deterministic. It is possible that this indeterminism influences our current results,
and that the results obtained with the different coverage metrics would converge over many runs. Our experiments
with 5 repeated executions indicate that this is unlikely. However, the success of Sapienz in finding
bugs without coverage information shows that Monkey is powerful enough to explore significant proportions
of code, even without evolution of test suites following a coverage-based fitness function. This threat warrants
further investigation.
Third, we perform our experiment using the default parameters of Sapienz. It is possible that their values,
e.g., the length of a test sequence, may also have an impact on the results. In our future work, we plan to
investigate this threat further.
Last but not least, we acknowledge that tools measuring code coverage may introduce additional
bugs during the instrumentation process. In our experiments, results for the method and instruction-level
coverage were collected from app versions instrumented with ACVTool, while data for the activity coverage
and for the no-coverage condition were gathered from the original apk versions. If ACVTool introduced bugs during
instrumentation, this difference might explain why the corresponding populations of crashes for instrumented
(method and instruction coverage) and original (activity coverage and no coverage) apps tend to be close.
We have tried to address this threat to validity in two ways. First, we manually inspected a selection of
crashes to evaluate whether they appeared due to instrumentation; we found no such evidence. Second,
we ran original and instrumented apks with Monkey to assess the run-time overhead (as reported in Section 5),
and we saw no discrepancies in the executions of these apps. If an instrumented version
crashed unexpectedly while the original one continued running under the same Monkey test, this would
be evident from the timings.
External validity. Threats to external validity concern the generalization of our findings. To test the
viability of our hypothesis, we experimented with only one automated test design tool. It is
possible that other similar tools that rely on code coverage metrics, such as Stoat [49], AimDroid [29] or
QBE [35], would not obtain better results when using the fine-grained instruction-level coverage. We plan to
investigate this further by extending our experiments to more automated testing tools that rely on
code coverage.
It should also be stressed that we used apps from Google Play for our experiment. While preparing an
app for delivery to this market, developers usually apply various post-processing tools, e.g., obfuscators
and packers, to hinder potential reverse engineering. Some crashes in our experiment may have been introduced by
these tools. In addition, obfuscators may introduce additional dead code and alter the control flow of
apps. These features may also impact the code coverage measurement, especially for the more fine-grained
metrics. Therefore, we also plan to investigate this issue in future work.
To conclude, our experiment with Sapienz [43] has demonstrated that well-known search-based testing al-
gorithms that rely on code coverage metrics can benefit from the fine-grained code coverage provided by
ACVTool. Automated testing tools may be further improved by including several code coverage metrics
with different levels of measurement granularity. This finding confirms the downstream value of our tool.
8 Related work
8.1 Android app testing
Automated testing of Android applications is a very prolific research area. Today, there are many frameworks
that combine UI and system events generation, striving to achieve better code coverage and fault detection.
E.g., Dynodroid [42] is able to randomly generate both UI and system events. Interaction with the system
components via callbacks is another facet, which is addressed by, e.g., EHBDroid [48]. Recently, the survey
by Choudhary et al. [17] has compared the most prominent testing tools that automatically generate app
input events in terms of efficiency, including code coverage and fault detection. Two recent surveys, by
Linares et al. [39] and by Kong et al. [34], summarize the main efforts and challenges in the automated
Android app testing area.
8.2 Coverage measurement tools in Android
White-box coverage measurement Tools for white-box code coverage measurement are included in
the Android SDK maintained by Google [28]. The supported coverage libraries are JaCoCo [31], EMMA [45],
and the IntelliJ IDEA coverage tracker [33]. These tools are capable of measuring fine-grained code coverage,
but they require the source code of an app. This makes them suitable only for testing apps at
the development stage.
Table 7: Summary of black-box coverage measuring tools
Tool
Tool details Results of empirical evaluation
Coverage
granularity
Target
representation
Code
avail-
able
Sample
size
Instrumentation
success rate (%)
Overhead Compli-
ance
evalu-
ated
Instru-
mented
Executed Instr.
time
(sec/app)
Run
time
(%)
ELLA [21, 53] method Dalvik bytecode Y 68 [53] 60% [53] 60% [53] N/A N/A N/A
Huang et al. [30] class, method,
basic block,
instruction
Dalvik bytecode
(smali)
N 90 36% N/A N/A N/A Y
BBoxTester [64] class, method,
basic block
Java bytecode Y 91 65% N/A 15.5 N/A N
Asc [48] basic block,
instruction
Jimple Y 35 N/A N/A N/A N/A N
InsDal [40, 57, 41] class, method Dalvik bytecode N 10 N/A N/A 1.5 N/A N
Sapienz [43] activity Dalvik bytecode Y 1112 N/A N/A N/A N/A N
DroidFax [12, 11, 13] instruction Jimple Y 195 N/A N/A N/A N/A N
AndroCov [9, 37] method,
instruction
Jimple Y 17 N/A N/A N/A N/A N
CovDroid [60] method Dalvik bytecode
(smali)
N 1 N/A N/A N/A N/A N
ACVTool (this paper) class, method,
instruction
Dalvik
bytecode
(smali)
Y 1278 97.8% 96.9% 33.3 up to
27% on
Pass-
Mark
Y
Black-box coverage measurement Several frameworks for measuring black-box code coverage of Android
apps already exist; however, they are inferior to ACVTool. Notably, these frameworks often measure
code coverage at a coarser granularity. For example, ELLA [21], InsDal [40], and CovDroid [60] measure code
coverage only at the method level.
ELLA [21] is arguably one of the most popular tools for measuring Android code coverage in the black-box
setting; however, it is no longer supported. An empirical study by Wang et al. [53] evaluated the performance
of the Monkey [24], Sapienz [43], Stoat [49], and WCTester [62] automated testing tools on large and popular
industry-scale apps, such as Facebook, Instagram and Google. They used ELLA to measure method
code coverage, and they reported a total success rate for ELLA of 60% (41 apps) on their sample of 68
apps.
Huang et al. [30] proposed an approach to measure code coverage for dynamic analysis tools for Android
apps. Their high-level approach is similar to ours: an app is decompiled into smali files, and these files are
instrumented by placing probes at every class, method and basic block to track their execution. However, the
authors report a low instrumentation success rate of 36%, and only 90 apps have been used for evaluation.
Unfortunately, the tool is not publicly available, and we were unable to obtain it or the dataset by contacting
the authors. Because of this, we cannot compare its performance with ACVTool, although we report a much
higher instrumentation rate, evaluated against a much larger dataset.
BBoxTester [64] is another tool for measuring black-box code coverage. Its workflow includes app disassem-
bling with apktool and decompilation of the dex files into Java jar files using dex2jar [1]. The jar files are
instrumented using EMMA [45], and assembled back into an apk. The empirical evaluation of BBoxTester
showed the successful repackaging rate of 65%, and the instrumentation time has been reported to be 15
seconds per app. We were able to obtain the original BBoxTester dataset. Out of 91 apps, ACVTool failed to
instrument just one. This error was not due to our own instrumentation code: apktool could not repackage
this app. Therefore, ACVTool successfully instrumented 99% of this dataset, against 65% of BBoxTester.
The InsDal tool [40] instruments apps for class and method-level coverage logging by inserting probes into the
smali code; its workflow is similar to ACVTool's. The tool has been applied to measure code coverage
in the black-box setting with the AppTag tool [57], and to log the number of method invocations when
measuring the energy consumption of apps [41]. Information about InsDal's instrumentation success rate is not
available, and it has been evaluated on a limited dataset of 10 apps. The authors reported an
average instrumentation time overhead of 1.5 sec per app, and an average instrumentation code overhead of
18.2% of the dex file size. ACVTool introduces a smaller code size overhead of 11% on average, but requires more
time to instrument an app: on our dataset, the average instrumentation time is 24.1 seconds per app when
instrumenting at the method level only. It is worth noting that half of this time is spent on repackaging
with apktool.
CovDroid [60], another black-box code coverage measurement system for Android apps, transforms apk code
into smali-representation using the smali disassembler [32] and inserts probes at the method level. The
coverage data is collected using an execution monitor, and the tool is able to collect timestamps for executed
methods. While the instrumentation approach of ACVTool is similar in nature to that of CovDroid, the
latter tool has been evaluated on a single application only.
Alternative approaches to Dalvik instrumentation focus on performing detours via other languages, e.g.,
Java or Jimple. For example, Bartel et al. [8] worked on instrumenting Android apps for improving their
privacy and security via translation to Java bytecode. Zhauniarovich et al. [64] translated Dalvik into Java
bytecode in order to use EMMA’s code coverage measurement functionality. However, the limitation of such
approaches, as reported in [64], is that not all apps can be retargeted into Java bytecode.
The instrumentation of apps translated into the Jimple representation has been used in, e.g., Asc [48],
DroidFax [12], and AndroCov [37, 9]. Jimple is a suitable representation for subsequent analysis with
Soot [5]; yet, unlike smali, it does not belong to the “core” Android technologies maintained by Google.
Moreover, Arnatovich et al. [4], in their comparison of different intermediate representations for Dalvik
bytecode, advocate that smali is the most accurate alternative to the original Java source code and is therefore
the most suitable for security testing.
Remarkably, in the absence of reliable fine-grained code coverage reporting tools, some frameworks [43, 48,
12, 38, 14] integrate their own black-box coverage measurement libraries. Many of these papers do note that
they have to design their own code coverage measurement means in the absence of a reliable tool. ACVTool
addresses this need of the community. As the coverage measurement is not the core contribution of these
works, the authors have not provided enough information about the rates of successful instrumentation, and
other details related to the performance of these libraries, so we are not able to compare them with ACVTool.
App instrumentation Among the Android application instrumentation approaches, the most relevant for
us are the techniques discussed by Huang et al. [30], InsDal [40], and CovDroid [60]. ACVTool shows a much
better instrumentation success rate because our instrumentation approach deals with many peculiarities of
the Dalvik bytecode. A similar instrumentation approach has also been used in the DroidLogger [19] and
SwiftHand [16] frameworks, which do not report their instrumentation success rates.
Summary Table 7 summarizes the performance of ACVTool and the code coverage granularities it supports,
in comparison to other state-of-the-art tools. ACVTool significantly outperforms every other tool that
measures black-box code coverage of Android apps. Our tool has been extensively tested with real-life applications,
and it has an excellent instrumentation success rate, in contrast to other tools, e.g., [30] and [64]. We
attribute the reliable performance of ACVTool to our very detailed investigation of smali instructions,
which is missing in the literature. ACVTool is available as open source to share our insights with
the community, and to replace the outdated tools (ELLA [21] and BBoxTester [64]) or publicly unavailable
tools ([30, 60]).
9 Conclusions
In this paper, we presented an instrumentation technique for Android apps. We incorporated this technique
into ACVTool – an effective and efficient tool for measuring precise code coverage of Android apps. We were
able to instrument and execute 96.9% of the 1278 apps used for evaluation, showing that ACVTool is
practical and reliable.
The empirical evaluation that we have performed allows us to conclude that ACVTool will be useful for
both researchers who are building testing, program analysis, and security assessment tools for Android, and
practitioners in industry who need reliable and accurate coverage information.
To better support the automated testing community, we are working to add support for multidex apps,
to extend the set of available coverage metrics with branch coverage, and to alleviate the limitation caused by the
fixed number of registers in a method. As an interesting line of future work, we also consider on-the-fly dex
file instrumentation, which would make ACVTool even more useful in the context of analyzing highly complex
applications and malware.
Furthermore, our experiments with Sapienz have led to the interesting conclusion that the granularity of
coverage matters when coverage is used as a component of the fitness function in black-box app testing. Our
second line of future work is to expand our experiments to more apps and more testing tools, thus
establishing better guidelines on which coverage metric is more effective and efficient in bug finding.
Acknowledgements
This work has been partially supported by Luxembourg National Research Fund through grants
C15/IS/10404933/COMMA and AFR-PhD-11289380-DroidMod.
References
[1] dex2jar, 2017.
[2] K. Allix, T. F. Bissyande, J. Klein, and Y. L. Traon. Androzoo: Collecting millions of android apps for
the research community. In 2016 IEEE/ACM 13th Working Conference on Mining Software Repositories
(MSR), pages 468–471, May 2016.
[3] Paul Ammann and Jeff Offutt. Introduction to Software Testing. Cambridge University Press, 2 edition,
2016.
[4] Yauhen Leanidavich Arnatovich, Hee Beng Kuan Tan, and Lwin Khin Shar. Empirical comparison of
intermediate representations for android applications. In 26th International Conference on Software
Engineering and Knowledge Engineering, 2014.
[5] Steven Arzt, Siegfried Rasthofer, and Eric Bodden. The soot-based toolchain for analyzing android
apps. In Proceedings of the 4th International Conference on Mobile Software Engineering and Systems,
MOBILESoft ’17, pages 13–24, Piscataway, NJ, USA, 2017. IEEE Press.
[6] Tanzirul Azim and Iulian Neamtiu. Targeted and depth-first exploration for systematic testing of
android apps. In Proceedings of the 2013 ACM SIGPLAN International Conference on Object Oriented
Programming Systems Languages & Applications, OOPSLA ’13, pages 641–660, New York, NY,
USA, 2013. ACM.
[7] M. Backes, S. Bugiel, O. Schranz, P. v. Styp-Rekowsky, and S. Weisgerber. Artist: The android
runtime instrumentation and security toolkit. In 2017 IEEE European Symposium on Security and
Privacy (EuroS P), pages 481–495, April 2017.
[8] Alexandre Bartel, Jacques Klein, Martin Monperrus, Kevin Allix, and Yves Le Traon. In-vivo bytecode
instrumentation for improving privacy on android smartphones in uncertain environments, 2012.
[9] Nataniel P. Borges, Jr., Maria Gómez, and Andreas Zeller. Guiding app testing with mined interaction
models. In Proceedings of the 5th International Conference on Mobile Software Engineering and Systems,
MOBILESoft ’18, pages 133–143, New York, NY, USA, 2018. ACM.
[10] D. Bornstein. Google I/O 2008 - Dalvik Virtual Machine Internals, 2008.
[11] H. Cai, N. Meng, B. Ryder, and D. Yao. Droidcat: Effective android malware detection and catego-
rization via app-level profiling. IEEE Transactions on Information Forensics and Security, pages 1–1,
2018.
[12] H. Cai and B. G. Ryder. Droidfax: A toolkit for systematic characterization of android applications. In
2017 IEEE International Conference on Software Maintenance and Evolution (ICSME), pages 643–647,
Sep. 2017.
[13] H. Cai and B. G. Ryder. Understanding android application programming and security: A dynamic
study. In 2017 IEEE International Conference on Software Maintenance and Evolution (ICSME), pages
364–375, Sep. 2017.
[14] Patrick Carter, Collin Mulliner, Martina Lindorfer, William Robertson, and Engin Kirda. Curiousdroid:
Automated user interface interaction for android application analysis sandboxes. In Jens Grossklags and
Bart Preneel, editors, Financial Cryptography and Data Security, pages 231–249, Berlin, Heidelberg,
2017. Springer Berlin Heidelberg.
[15] Thierry Titcheu Chekam, Mike Papadakis, Yves Le Traon, and Mark Harman. An empirical study
on mutation, statement and branch coverage fault revelation that avoids the unreliable clean program
assumption. In Proceedings of the 39th International Conference on Software Engineering, ICSE ’17,
pages 597–608, Piscataway, NJ, USA, 2017. IEEE Press.
[16] Wontae Choi, George Necula, and Koushik Sen. Guided gui testing of android apps with minimal
restart and approximate learning. In Proceedings of the 2013 ACM SIGPLAN International Conference
on Object Oriented Programming Systems Languages & Applications, OOPSLA ’13, pages 623–640,
New York, NY, USA, 2013. ACM.
[17] Shauvik Roy Choudhary, Alessandra Gorla, and Alessandro Orso. Automated test input generation for
android: Are we there yet? (e). In Proceedings of the 2015 30th IEEE/ACM International Conference
on Automated Software Engineering (ASE), ASE ’15, pages 429–440, Washington, DC, USA, 2015.
IEEE Computer Society.
[18] Mike Cleron. Android Announces Support for Kotlin, May 2017.
[19] Shuaifu Dai, Tao Wei, and Wei Zou. Droidlogger: Reveal suspicious behavior of android applications
via instrumentation. In 2012 7th International Conference on Computing and Convergence Technology
(ICCCT), pages 550–555, Dec 2012.
[20] Stanislav Dashevskyi, Olga Gadyatskaya, Aleksandr Pilgun, and Yury Zhauniarovich. The influence
of code coverage metrics on automated testing efficiency in android. In Proceedings of the 2018 ACM
SIGSAC Conference on Computer and Communications Security, CCS ’18, pages 2216–2218, New York,
NY, USA, 2018. ACM.
[21] ELLA. A tool for binary instrumentation of Android apps, 2016.
[22] Milos Gligoric, Alex Groce, Chaoqiang Zhang, Rohan Sharma, Mohammad Amin Alipour, and Darko
Marinov. Guidelines for coverage-based comparisons of non-adequate test suites. ACM Trans. Softw.
Eng. Methodol., 24(4):22:1–22:33, September 2015.
[23] Google. Enable Multidex for Apps with Over 64K Methods.
[24] Google. UI/Application Exerciser Monkey.
[25] Google. Dalvik Executable format, 2017.
[26] Google. Dalvik bytecode, 2018.
[27] Google. smali, 2018.
[28] Google. Test your app, 2018.
[29] T. Gu, C. Cao, T. Liu, C. Sun, J. Deng, X. Ma, and J. Lü. AimDroid: Activity-insulated multi-
level automated testing for android applications. In 2017 IEEE International Conference on Software
Maintenance and Evolution (ICSME), pages 103–114, Sep. 2017.
[30] C. Huang, C. Chiu, C. Lin, and H. Tzeng. Code coverage measurement for android dynamic analysis
tools. In 2015 IEEE International Conference on Mobile Services, pages 209–216, June 2015.
[31] JaCoCo. Java code coverage library, 2018.
[32] JesusFreke. smali/backsmali.
[33] JetBrains. Code coverage, 2017.
[34] P. Kong, L. Li, J. Gao, K. Liu, T. F. Bissyande, and J. Klein. Automated testing of android apps: A
systematic literature review. IEEE Transactions on Reliability, pages 1–22, 2018.
[35] Y. Koroglu, A. Sen, O. Muslu, Y. Mete, C. Ulker, T. Tanriverdi, and Y. Donmez. Qbe: Qlearning-based
exploration of android applications. In 2018 IEEE 11th International Conference on Software Testing,
Verification and Validation (ICST), pages 105–115, April 2018.
[36] N. Li, X. Meng, J. Offutt, and L. Deng. Is bytecode instrumentation as good as source code instrumen-
tation: An empirical study with industrial tools (experience report). In 2013 IEEE 24th International
Symposium on Software Reliability Engineering (ISSRE), pages 380–389, Nov 2013.
[37] Y. Li. AndroCov. measure test coverage without source code, 2016.
[38] Yuanchun Li, Ziyue Yang, Yao Guo, and Xiangqun Chen. Droidbot: a lightweight ui-guided test input
generator for android. In 2017 IEEE/ACM 39th International Conference on Software Engineering
Companion (ICSE-C), pages 23–26, May 2017.
[39] M. Linares-Vásquez, K. Moran, and D. Poshyvanyk. Continuous, evolutionary and large-scale: A new
perspective for automated mobile app testing. In 2017 IEEE International Conference on Software
Maintenance and Evolution (ICSME), pages 399–410, Sep. 2017.
[40] J. Liu, T. Wu, X. Deng, J. Yan, and J. Zhang. Insdal: A safe and extensible instrumentation tool on
dalvik byte-code for android applications. In 2017 IEEE 24th International Conference on Software
Analysis, Evolution and Reengineering (SANER), pages 502–506, Feb 2017.
[41] Q. Lu, T. Wu, J. Yan, J. Yan, F. Ma, and F. Zhang. Lightweight method-level energy consumption
estimation for android applications. In 2016 10th International Symposium on Theoretical Aspects of
Software Engineering (TASE), pages 144–151, July 2016.
[42] Aravind Machiry, Rohan Tahiliani, and Mayur Naik. Dynodroid: An input generation system for
android apps. In Proceedings of the 2013 9th Joint Meeting on Foundations of Software Engineering,
ESEC/FSE 2013, pages 224–234, New York, NY, USA, 2013. ACM.
[43] Ke Mao, Mark Harman, and Yue Jia. Sapienz: Multi-objective automated testing for android appli-
cations. In Proceedings of the 25th International Symposium on Software Testing and Analysis, ISSTA
2016, pages 94–105, New York, NY, USA, 2016. ACM.
[44] Aleksandr Pilgun, Olga Gadyatskaya, Stanislav Dashevskyi, Yury Zhauniarovich, and Artsiom Kush-
niarou. An effective android code coverage tool. In Proceedings of the 2018 ACM SIGSAC Conference
on Computer and Communications Security, CCS ’18, pages 2189–2191, New York, NY, USA, 2018.
ACM.
[45] V. Rubtsov. Emma: Java code coverage tool, 2006.
[46] Alireza Sadeghi, Reyhaneh Jabbarvand, and Sam Malek. Patdroid: Permission-aware gui testing of
android. In Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering, ES-
EC/FSE 2017, pages 220–232, New York, NY, USA, 2017. ACM.
[47] PassMark Software. Passmark. interpreting your results from performancetest, 2018.
[48] Wei Song, Xiangxing Qian, and Jeff Huang. Ehbdroid: Beyond gui testing for android applications.
In Proceedings of the 32Nd IEEE/ACM International Conference on Automated Software Engineering,
ASE 2017, pages 27–37, Piscataway, NJ, USA, 2017. IEEE Press.
[49] Ting Su, Guozhu Meng, Yuting Chen, Ke Wu, Weiming Yang, Yao Yao, Geguang Pu, Yang Liu, and
Zhendong Su. Guided, stochastic model-based gui testing of android apps. In Proceedings of the 2017
11th Joint Meeting on Foundations of Software Engineering, ESEC/FSE 2017, pages 245–256, New
York, NY, USA, 2017. ACM.
[50] K. Tam, S. Khan, A. Fattori, and L. Cavallaro. CopperDroid: Automatic reconstruction of Android
malware behaviors. In Proceedings of the Network and Distributed System Security Symposium (NDSS),
2015.
[51] D. Tengeri, F. Horváth, Á. Beszédes, T. Gergely, and T. Gyimóthy. Negative effects of bytecode
instrumentation on java source code coverage. In 2016 IEEE 23rd International Conference on Software
Analysis, Evolution, and Reengineering (SANER), volume 1, pages 225–235, March 2016.
[52] R. Vallée-Rai and L. Hendren. Jimple: Simplifying Java bytecode for analyses and transformations.
2004.
[53] Wenyu Wang, Dengfeng Li, Wei Yang, Yurui Cao, Zhenwen Zhang, Yuetang Deng, and Tao Xie.
An empirical study of android test generation tools in industrial cases. In Proceedings of the 33rd
ACM/IEEE International Conference on Automated Software Engineering, ASE 2018, pages 738–748,
New York, NY, USA, 2018. ACM.
[54] R. Wiśniewski and C. Tumbleson. Apktool - A tool for reverse engineering 3rd party, closed, binary
Android apps, 2017.
[55] Claes Wohlin, Per Runeson, Martin Höst, Magnus C. Ohlsson, Björn Regnell, and Anders Wesslén. Exper-
imentation in Software Engineering. Springer Publishing Company, Incorporated, 2012.
[56] M. Y Wong and D. Lie. IntelliDroid: A targeted input generator for the dynamic analysis of Android
malware. In Proceedings of the Network and Distributed System Security Symposium (NDSS), 2016.
[57] Jiwei Yan, Tianyong Wu, Jun Yan, and Jian Zhang. Target directed event sequence generation for
android applications, 2016.
[58] K. Yang. APK Instrumentation library, 2018.
[59] Q. Yang, J. J. Li, and D. M. Weiss. A survey of coverage-based testing tools. The Computer Journal,
52(5):589–597, Aug 2009.
[60] C. Yeh and S. Huang. Covdroid: A black-box testing coverage system for android. In 2015 IEEE 39th
Annual Computer Software and Applications Conference, volume 3, pages 447–452, July 2015.
[61] S. Yoo and M. Harman. Regression testing minimization, selection and prioritization: A survey. Softw.
Test. Verif. Reliab., 22(2):67–120, March 2012.
[62] Xia Zeng, Dengfeng Li, Wujie Zheng, Fan Xia, Yuetang Deng, Wing Lam, Wei Yang, and Tao Xie. Au-
tomated test input generation for android: Are we really there yet in an industrial case? In Proceedings
of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering,
FSE 2016, pages 987–992, New York, NY, USA, 2016. ACM.
[63] Yury Zhauniarovich, Olga Gadyatskaya, Bruno Crispo, Francesco La Spina, and Ermanno Moser.
Fsquadra: fast detection of repackaged applications. In IFIP Annual Conference on Data and Ap-
plications Security and Privacy, pages 130–145. Springer, 2014.
[64] Yury Zhauniarovich, Anton Philippov, Olga Gadyatskaya, Bruno Crispo, and Fabio Massacci. Towards
Black Box Testing of Android Apps. In 2015 Tenth International Conference on Availability, Reliability
and Security, pages 501–510, August 2015.
1 Introduction
2 Background
2.1 APK Internals
2.2 Code Coverage
3 ACVTool Design
3.1 Offline Phase
3.2 Online Phase
3.3 Report Generation Phase
4 Code Instrumentation
4.1 Bytecode representation
4.2 Register management
4.3 Probes insertion
5 Evaluation
5.1 Benchmark
5.2 Effectiveness
5.2.1 Instrumentation success rate
5.2.2 App health after instrumentation
5.3 Efficiency
5.3.1 Instrumentation-time overhead
5.3.2 Runtime overhead
5.4 Compliance with JaCoCo
6 Contribution of Code Coverage Data to Bug Finding
6.1 Descriptive statistics of crashes
6.2 Evaluating behavior on multiple runs
6.2.1 Performance in 5 runs
6.2.2 The Wilcoxon signed-rank test
6.3 Analysis of results
7 Discussion
7.1 Limitations of ACVTool
7.2 Threats to validity
8 Related work
8.1 Android app testing
8.2 Coverage measurement tools in Android
9 Conclusions
| 1cybersec
| arXiv |
The soccer equivalent of a knuckleball. | 0non-cybersec
| Reddit |
How to use spot instances with Amazon Elastic Beanstalk?. <p>I have an infrastructure that uses Amazon Elastic Beanstalk to deploy my application.
I need to scale my app by adding some spot instances, which EB does not support.</p>
<p>So I created a second Auto Scaling group from a launch configuration that uses spot instances.
The Auto Scaling group uses the same load balancer created by Beanstalk.</p>
<p>To bring up instances with the latest version of my app, I copied the user data from the original launch configuration (created by Beanstalk) to the launch configuration with spot instances (created by me).</p>
<p>This works fine, but:</p>
<ol>
<li><p>how do I update the spot instances launched by the second Auto Scaling group when Beanstalk updates the instances it manages with a new version of the app?</p>
</li>
<li><p>is there another way so easy as, and elegant, to use spot instances and enjoy the benefits of beanstalk?</p>
</li>
</ol>
<p><strong>UPDATE</strong></p>
<p>Elastic Beanstalk add support to spot instance since 2019... see:
<a href="https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2019-11-25-spot.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2019-11-25-spot.html</a></p>
| 0non-cybersec
| Stackexchange |
Recon with the USB Rubber Ducky. | 1cybersec
| Reddit |
NCIS's Pauley Perrette (Abby Sciutto) was attacked by a "psychotic homeless man" Thursday night. She's OK.. | 0non-cybersec
| Reddit |
Person Spends 50K on Plastic Surgery to Become ‘Sexless Alien’. | 0non-cybersec
| Reddit |
Analysis of Coupon Collector variant with diminishing probabilities of finding a new coupon. <p>The variant of this Coupon Collector problem can be defined like so. Suppose you initially have exactly one of <span class="math-container">$n$</span> different types of coupons in a pile. The probability of grabbing a coupon of type <span class="math-container">$i$</span> at any moment is equal to the number of coupons of type <span class="math-container">$i$</span> in the pile, divided by the total number of coupons. Clearly, at the start, each type of coupon might be obtained with probability <span class="math-container">$1/n$</span>. However, suppose at some turn you select coupon <span class="math-container">$i$</span>. That coupon of type <span class="math-container">$i$</span> will then be put back in the pile <em>and</em> you will add into the pile a new coupon of type <span class="math-container">$i$</span>. This implies that there's one more coupon of type <span class="math-container">$i$</span> in the pile than before and the total number of coupons in the pile has increased by one.</p>
<p>The goal, given these dynamics, is to find the number of turns until you have seen all the coupons at least once. Interestingly, it can be shown that the expected number of turns is unbounded, which makes sense since you become more likely to see a coupon you've seen before and less likely to see a coupon you've never seen. So instead of bounding the expected number of turns, we want to know the number of turns such that we see all the coupons with probability <span class="math-container">$\geq 1/2$</span>. More specifically, define <span class="math-container">$X_n$</span> as the number of turns needed to find the <span class="math-container">$n^{th}$</span> distinct coupon. Our goal is to find some values <span class="math-container">$t_l$</span> and <span class="math-container">$t_u$</span> such that <span class="math-container">$P(X_n \leq t_u ) \geq 1/2$</span> and <span class="math-container">$P(X_n \geq t_l) \geq 1/2$</span>.</p>
<p>If one goes through the work to model these dynamics, one can find that the probability that <span class="math-container">$X_n = s$</span> for some <span class="math-container">$s$</span> can be described by</p>
<p><span class="math-container">\begin{align}
P(X_n = s) &= \frac{n(n-1)}{s(s+1)} \prod_{i=2}^{(n-1)} \frac{(s-i)}{(s+i)}
\end{align}</span></p>
<p>This distribution can be checked against a simple Monte Carlo (MC) styled approach, and so below is the analytical distribution versus the MC computed distribution for <span class="math-container">$n=5$</span> (with the domain cropped for visualization purposes).</p>
<p><a href="https://i.stack.imgur.com/fPY2I.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fPY2I.png" alt="Monte Carlo Results vs Analytical Distribution"></a></p>
<p>Now using this distribution, we can readily obtain the probability functions we need to find values for <span class="math-container">$t_l$</span> and <span class="math-container">$t_u$</span>. More precisely, we have that</p>
<p><span class="math-container">\begin{align}
P(X_n \leq t_u ) &= n(n-1)\sum_{s=n}^{t_u} \frac{\prod_{i=2}^{(n-1)}(s-i)}{s \prod_{i=1}^{(n-1)} (s+i)}\\
P(X_n \geq t_l) &= n(n-1) \sum_{s=t_l}^{\infty} \frac{\prod_{i=2}^{(n-1)}(s-i)}{s \prod_{i=1}^{(n-1)} (s+i)}
\end{align}</span></p>
<p>What I've tried to do from here is find some tight enough lower bounds on the above quantities and then find values for <span class="math-container">$t_u$</span> and <span class="math-container">$t_l$</span> such that those lower bounds equal <span class="math-container">$1/2$</span>. However, I've been finding that most of the lower bounds I try to use are not tight enough, so I end up not being able to find some value for say <span class="math-container">$t_u$</span> such that the lower bound I find can even be equal to <span class="math-container">$1/2$</span>. For example, let me show one bound I obtained that is unable to work for finding <span class="math-container">$t_u$</span>. First notice that <span class="math-container">$f(s) = \frac{(s-i)}{(s+i)}$</span> is an increasing function for at least <span class="math-container">$s > i$</span>. Using this fact, we can bound things in the following way</p>
<p><span class="math-container">\begin{align}
P(X_n \leq t_u ) &= n(n-1)\sum_{s=n}^{t_u} \frac{\prod_{i=2}^{(n-1)}(s-i)}{s \prod_{i=1}^{(n-1)} (s+i)} \\
&\geq \frac{n(n-1) \prod_{i=2}^{(n-1)}(n-i)}{\prod_{i=2}^{(n-1)} (n+i)} \sum_{s=n}^{t_u} \frac{1}{s(s+1)} \\
&= \frac{n(n-1)(n-2)!}{(2n-1)\cdots(n+2)} \left(\frac{t_u}{1 + t_u} - \frac{(n-1)}{n}\right) \tag{since $\sum_{i=1}^t \frac{1}{i(i+1)} = \frac{t}{1+t}$} \\
&= \underbrace{\frac{2n(n+1)}{\binom{2n}{n}}}_{\dagger}\underbrace{\left(\frac{t_u}{1 + t_u} - \frac{(n-1)}{n}\right)}_{\ddagger}
\end{align}</span></p>
<p>Sadly, this result is unusable because the <span class="math-container">$\dagger$</span> term is very small for large <span class="math-container">$n$</span>, while the <span class="math-container">$\ddagger$</span> term grows very slowly, so a value for <span class="math-container">$t_u$</span> that allows this overall bound to grow above <span class="math-container">$1/2$</span> is enormous. I also tried to approach this lower bound by investigating the product terms and finding the partial fraction decomposition. One interesting fact I proved by induction was that</p>
<p><span class="math-container">\begin{align}
\frac{\prod_{i=2}^m(s-i)}{\prod_{i=1}^m(s+i)} &= \sum_{i=1}^m \frac{a_i}{(s+i)}
\end{align}</span></p>
<p>with <span class="math-container">$\sum_{i=1}^m a_i = 1$</span> for all <span class="math-container">$m>1$</span>. I also proved to myself the following claim:</p>
<p><span class="math-container">\begin{align}
\sum_{s=1}^t \frac{1}{s(s+i)} = \frac{1}{i}\left(H_t + H_i - H_{(t+i)}\right) \end{align}</span></p>
<p>where <span class="math-container">$H_n$</span> is the Harmonic number. These results can allow us to obtain that</p>
<p><span class="math-container">\begin{align}
P(X_n \leq t_u ) &= n(n-1)\sum_{s=n}^{t_u} \frac{\prod_{i=2}^{(n-1)}(s-i)}{s \prod_{i=1}^{(n-1)} (s+i)} \\
&= n(n-1)\sum_{s=n}^{t_u} \frac{1}{s} \sum_{i=1}^{(n-1)} \frac{a_i}{(s+i)} \\
&= n(n-1) \sum_{i=1}^{(n-1)} a_i \sum_{s=n}^{t_u} \frac{1}{s(s+i)} \\
&= n(n-1) \sum_{i=1}^{(n-1)} \frac{a_i}{i} \left(H_{t_u} - H_{n-1} + H_{(n-1+i)} - H_{(t_u+i)}\right)
\end{align}</span></p>
<p>The dilemma is that I had not worked out exact representations for <span class="math-container">$a_i$</span>, but when you start to do so, you realize that about half the terms are positive and half negative, with a wide range of magnitudes. I am not sure how best to approach bounding this.</p>
<p>This leaves me with my question. Does anyone have any tips or thoughts on how to approach getting a relatively tight bound for these probabilities? Or better, any ideas for how to find <span class="math-container">$t_u$</span> and <span class="math-container">$t_l$</span> that require no bounding of their corresponding probability distributions?</p>
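<p>For anyone who wants to reproduce the Monte Carlo comparison above, here is a minimal sketch (the choices of <code>n</code> and <code>trials</code> are arbitrary; <code>math.prod</code> needs Python 3.8+):</p>
<pre><code>import math
import random
from collections import Counter

def sample_X(n, rng=random):
    """One draw of X_n under the Polya-urn dynamics described above."""
    counts = [1] * n              # one coupon of each type in the pile
    total, seen, turns = n, set(), 0
    while len(seen) < n:
        r = rng.randrange(total)  # uniform over all coupons in the pile
        acc = 0
        for i, c in enumerate(counts):
            acc += c
            if r < acc:
                break
        counts[i] += 1            # put the coupon back plus one duplicate of type i
        total += 1
        seen.add(i)
        turns += 1
    return turns

def p_exact(n, s):
    """Analytical P(X_n = s) from the displayed formula."""
    prod = math.prod((s - i) / (s + i) for i in range(2, n))  # i = 2..n-1
    return n * (n - 1) / (s * (s + 1)) * prod

n, trials = 5, 200_000
freq = Counter(sample_X(n) for _ in range(trials))
for s in range(n, n + 8):
    print(s, freq[s] / trials, p_exact(n, s))
</code></pre>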
| 0non-cybersec
| Stackexchange |
Camera Man for The Gold.. | 0non-cybersec
| Reddit |
Failure to upgrade or update. <p>Could someone help me figure out what is wrong with my laptop? I am unable to update or upgrade, either from the terminal or from Update Manager. After trying to upgrade, the terminal returns an "unable to fetch some archives" error.</p>
| 0non-cybersec
| Stackexchange |
String comparison with list/array of variables. <p>Is it possible to do a string comparison with a list/array of variables?</p>
<p>Here is my example code, ticking the defined <code>\currentbox</code>:</p>
<pre><code>\documentclass{standalone}
\usepackage[most]{tcolorbox}
\makeatletter
% DEFINE BOX WITH GIVEN NAME
\newcommand\mybox[1]{%
\ifnum\pdf@strcmp{\unexpanded{#1}}{\currentbox}=0 %
\expandafter\@firstoftwo
\else
\expandafter\@secondoftwo
\fi
{\setlength\fboxsep{0pt}\colorbox{teal}{\framebox(18,10){\textbf{\color{white}#1}}}}
{\framebox(18,10){\textbf{#1}}}%
}
\makeatother
% define current box
\def\currentbox{B}
\begin{document}
\begin{tcbitemize}[size=fbox,
colframe=white,
colback=white,
raster equal height,
raster force size=false,
raster equal skip=0pt,
raster columns=4]
\tcbitem[width=0.20\linewidth]
\mybox{A}
\tcbitem[width=0.20\linewidth]
\mybox{B}
\tcbitem[width=0.20\linewidth]
\mybox{C}
\tcbitem[width=0.20\linewidth]
\mybox{D}
\end{tcbitemize}%
\end{document}
</code></pre>
<p>So if <code>\currentbox</code> were a list/array containing 'A' and 'B', and it were possible to <code>strcmp</code> against each member of that list/array, this would generate:</p>
<p><a href="https://i.stack.imgur.com/c583J.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/c583J.png" alt="enter image description here"></a></p>
| 0non-cybersec
| Stackexchange |
I was sent from the future to prevent the McPotG. | 0non-cybersec
| Reddit |
I figured that strawpoll is a pretty good site. | 0non-cybersec
| Reddit |
To validate her overreacting to a burrito.. | 0non-cybersec
| Reddit |
Help with first credit card. So I'm 27 and don't have a credit card. Graduating college within the next year and will be moving out. So I need to build credit. Thinking about using one for gas and food - paying monthly. I use Bank of America and would like to get a card from them, but I was overwhelmed on their website. So many options for cards!
Maybe a student card with cash rewards?
Also I live at home at the moment and don't pay rent. Currently work part-time and school full-time and I bet I make like only 6-8k a year.
Do I seriously put 8k for wage and $0 for rent? I'm planning and hoping to land a job within the next year, and possibly sooner, that pays much more than my current situation. Also I'll probably be paying at least $500 rent by next year.
I'm really new to credit cards, but I've been told I need to have at least one. Maybe I should just talk with someone at my bank for help?
| Reddit |
Please don't upvote -- just some noob questions. So I'm in my mid 20s and I've never cared to watch any games before, but this year is totally changed. I guess I got some kind of bug.
Anyway, I'm from Dallas but I don't feel like I want to make the cowboys "my team." So who do ya'll think would be more fun to watch.
Additionally, I like to read up on stuff, so what are your go-to resources to pick up info/stats/plays etc for the season?
**Edit:** Holy shit dudes. Thanks so much, this has been a great help.
Also, I told yall not to upvote. Assholes.
**P.S.** --I am going to be watching the Boys tonight to see how they do, but I do really like the idea of rooting for the Skins. Also, the Texans and Packers are high in my esteem, so I'll be keeping an eye on all of thems. But who knows, maybe I'll wind up liking the Raiders :P Thanks again everyone. | 0non-cybersec
| Reddit |
Applying a function in each row of a big PySpark dataframe?. <p>I have a big dataframe (~30M rows). I have a function <code>f</code>. The job of <code>f</code> is to run through each row, check some logic, and feed the outputs into a dictionary. The function needs to be applied row by row.</p>
<p>I tried:</p>
<p><code>
dic = dict()
for row in df.rdd.collect():
f(row, dic)
</code></p>
<p>But I always hit an OOM error. I set Docker's memory to 8GB.</p>
<p>How can I do this efficiently?</p>
<p>Thanks a lot</p>
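<p>Two possible sketches that avoid pulling all ~30M rows into driver memory at once; both assume <code>f</code> only mutates the dictionary passed to it, and reuse the question's own <code>df</code> and <code>f</code>:</p>
<pre><code># Option 1: stream rows to the driver one partition at a time,
# instead of collect(), which materializes every row on the driver.
dic = {}
for row in df.rdd.toLocalIterator():
    f(row, dic)

# Option 2: build one partial dictionary per partition on the executors,
# then merge on the driver. This assumes the per-key updates made by f
# can be combined with a simple dict.update().
def run_partition(rows):
    local = {}
    for row in rows:
        f(row, local)
    yield local

dic = {}
for partial in df.rdd.mapPartitions(run_partition).collect():
    dic.update(partial)
</code></pre>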
| 0non-cybersec
| Stackexchange |
TIFU by suffering explosive diarrhea while my car drove itself into a fast food restaurant. Obligatory "this didn't actually happen today." I don't remember when it was exactly, but it had to have been around 2001.
I'm posting this because I posted it as a comment on an earlier post, and a few people told me they thought my story was WAY worse than the original and deserved its own TIFU... And yes, what follows is verbatim what I wrote over there.
I was in the fast food drive-thru (building on one side, curb and hedge row on the other - i.e. I can't pull out of line) when my stomach started its death moans... Apparently it was anticipating what was to come and trying to warn me off.
The car in front of me finally moves, I gun it past the window without picking up (or paying for) my food, and almost literally skid sideways into the handicapped spot (illegally) RIGHT outside the door closest to their bathroom. I'm struggling to keep it in at this point, but if I don't get out NOW I'm gonna fertilize the interior of my truck. I get out of the truck and am doing the best I can to sprint for the door while simultaneously clenching my ass muscles as tightly as humanly possible...
I made it to the door before the dam broke... There was a massive trail of liquid shit from the door all the way into the bathroom, and by the time I got my pants down and sat on the toilet I was empty. Not one drop ended up in the toilet. I did the best I could to clean myself and the bathroom up (let's face it, there's no "cleaning up" in a public restroom after an event like this) and make a bee-line for the door.
I opened the bathroom door and there stood a very unhappy store manager and a police officer. Behind them was my truck. Sitting inside the entry of the restaurant. In my desperation I forgot to put it in park before I got out and it drove itself through the doors just a few seconds behind me.
Not only did I get tickets for parking in the handicapped spot illegally, and for careless operation of a motor vehicle (for not putting it in park), but I also then had to explain to a judge (and my insurance company) that it all happened while I was shitting myself running through a restaurant, and not because I'm a careless idiot or drunk. The judge actually laughed at me and said "I don't even care if it's made up, that's the best excuse I've ever heard!" He dismissed the tickets, which saved me a few hundred dollars. My insurance company was not as amused...
**TL;DR** I shit my pants and all over the floor of a restaurant while my truck drove itself through the front doors. I got tickets, the judge laughed at me before dismissing them, and my insurance company fought hard to avoid paying for the damages.
**Edit:** formatting
**Edit 2:** This is now my top post... Glad I could make a few people smile, even though it was at the expense of my dignity.
**Edit 3:** You did it Reddit! Thank you for completely ruining my internet anonymity! Apparently this hit the front page some time in the early morning hours while I was sleeping. I woke up to a text from a good friend who recognized the story, and its taken a considerable bribe to convince him not to tell EVERYONE I know. In any case, thanks everyone for enjoying my most embarrassing moment, and thanks for the support. I laugh about it now (and you all probably will too for a long time), but those were definitely the most mortifying moments of my life. | 0non-cybersec
| Reddit |
Bachelor party dancer. | 0non-cybersec
| Reddit |
Prove that if $p\mid ab$ where $a$ and $b$ are positive integers and $a\lt p$ then $p\le b$. <p>I have found an old textbook called "Real Variables" by Claude W. Burrill and John R. Knudsen. In the first chapter, this textbook uses 15 axioms to derive many of the well-known basic facts about the integers. I have been reading and solving all the exercises, and so far so good, until exercise 1-27, which asks the following: "Prove that if $p$ is prime and divides $ab$ where $a$ and $b$ are positive and $a\lt p$, then $p\le b$." This would be very easy if we could assume Euclid's lemma, but it hasn't been proven yet, and the very next exercise asks for its proof, so I believe there is a way to prove it without Euclid's lemma. But how? Is there even a way to prove this without Euclid's lemma? I also believe I'm not allowed to use Bézout's identity, because its proof is exercise 1-29.</p>
<p>I have been thinking about this problem since yesterday, and I searched online for exercise solutions for this textbook, but there were no results.</p>
<p>As another question: does the theorem above imply Euclid's lemma in a straightforward way?</p>
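<p>One possible route, sketched below in LaTeX. It leans only on well-ordering and the division algorithm, which fits the axiomatic setting described, though whether it matches the book's intended solution is an assumption:</p>
<pre><code>\textbf{Sketch.} Let $S=\{x\in\mathbb{Z}^{+} : p\mid xb\}$. Since $p\mid ab$,
we have $a\in S$, so $S\neq\emptyset$; by well-ordering let $d=\min S$.
Then $d\le a<p$. Write $p=qd+r$ with $0\le r<d$ (division algorithm).
From $p\mid db$ we get $p\mid qdb$, and trivially $p\mid pb$, hence
$p\mid (p-qd)b=rb$. If $r>0$ then $r\in S$ with $r<d$, contradicting the
minimality of $d$; so $r=0$ and $d\mid p$. Since $p$ is prime and
$1\le d<p$, we get $d=1$, i.e.\ $p\mid b$, and $b>0$ gives $p\le b$.

\textbf{On the follow-up.} The theorem does yield Euclid's lemma: suppose
$p\mid ab$ and $p\nmid a$. Reduce modulo $p$: $a=kp+a'$, $b=mp+b'$ with
$1\le a'<p$ and $0\le b'<p$. Then $ab\equiv a'b'\pmod{p}$, so $p\mid a'b'$.
If $b'>0$, the theorem forces $p\le b'<p$, a contradiction; hence $b'=0$
and $p\mid b$.
</code></pre>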
| 0non-cybersec
| Stackexchange |
ELI5: Why are humans predominantly right handed?. | 0non-cybersec
| Reddit |
[NO SPOILERS] BOOM! Nailed it. My excitement runneth over!. | 0non-cybersec
| Reddit |
Do you think biweekly is a good time to weigh yourself? If I'm sticking to my diet and exercise. [SW: 241lbs GW: 180lbs]. i used to check weight more frequently. But I figure if I wait 2 weeks after a full two weeks of good diet and exercise, Then weigh day would be like a reward day! Lol because hopefully I lose more weight in two weeks than in a day. Thoughts | 0non-cybersec
| Reddit |
Wack. | 0non-cybersec
| Reddit |
Find unique pairs in list of pairs. <p>I have a (large) list of lists of integers, e.g.,</p>
<pre><code>a = [
[1, 2],
[3, 6],
[2, 1],
[3, 5],
[3, 6]
]
</code></pre>
<p>Most of the pairs will appear twice, where the order of the integers doesn't matter (i.e., <code>[1, 2]</code> is equivalent to <code>[2, 1]</code>). I'd now like to find the pairs that appear only <em>once</em>, and get a Boolean list indicating that. For the above example,</p>
<pre><code>b = [False, False, False, True, False]
</code></pre>
<p>Since <code>a</code> is typically large, I'd like to avoid explicit loops. Mapping to <code>frozenset</code>s may be advised, but I'm not sure if that's overkill.</p>
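<p>A vectorized sketch along those lines: canonicalize each pair by sorting within rows, then count duplicates with <code>numpy.unique</code>. Using NumPy here is an assumption; a <code>collections.Counter</code> over <code>frozenset</code>s works too, just with a Python-level loop:</p>
<pre><code>import numpy as np

a = np.array([[1, 2], [3, 6], [2, 1], [3, 5], [3, 6]])

canon = np.sort(a, axis=1)   # [2, 1] becomes [1, 2], so order is ignored
# Count how often each canonical pair occurs, and map counts back to rows.
_, inverse, counts = np.unique(canon, axis=0,
                               return_inverse=True, return_counts=True)
inverse = inverse.ravel()    # guard against NumPy versions returning 2-D inverse
b = counts[inverse] == 1
print(b.tolist())            # [False, False, False, True, False]
</code></pre>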
| 0non-cybersec
| Stackexchange |
Woodland mansion didn't spawn in my world, it spawned in the copy I made, can I make it appear in my original save?. I traded for a woodland explorer map and went off to find it. Two hours later I had travelled over 11000 blocks just to find - no mansion. I created a copy of my world and tp'd to the mansion, and there it was. Is there any way I can make the mansion spawn in my original save? I don't have cheats turned on so I can't even tp back to my house lol.
I'm playing on PC, version 1.13.2
The seed is -1814936794799467066 and the coordinates for the mansion are
X: 11818.300
Y: 7100000
Z: 6875.269
My options now are to just accept defeat and travel the 11000 blocks home again, this was such a let down. | 0non-cybersec
| Reddit |
While using Intel GPU, Ubuntu 16.04 freezes on shutdown. <p>My laptop has an <strong>Intel 6700HQ</strong> CPU and an <strong>Nvidia 960M</strong> graphics card. I have the <strong>nvidia-361</strong> driver installed.</p>
<p>I can use the nvidia program provided with the driver to switch between the Intel GPU and the Nvidia GPU. The Nvidia GPU actually works fine and gives me no problems most of the time. But it heats up a lot and drains and heats the battery too, so it is not practical to use.</p>
<p>Switching from the Nvidia GPU to the Intel GPU gives me no problems. When Nvidia is on, I can easily reboot, suspend, sleep, and power off. But when I'm on the Intel GPU, I can do none of those. It always freezes, and I have to hold down the power button to force it off. If it sleeps, it won't wake up, and so on. The same problem occurs if I use terminal commands (such as <code>sudo reboot</code>) to do these.</p>
<p>It stays like one of these. And nothing happens:
<img src="https://i.stack.imgur.com/EtC49.jpg" alt="It stays like one of these. And nothing happens."></p>
<p>The cursor doesn't blink. Screen is completely frozen.</p>
<p>I've removed "quiet splash" from grub to see what's wrong in more details, but it gave something like <img src="https://i.stack.imgur.com/gJZ3X.jpg" alt="this, which, doesn't tell much."></p>
| 0non-cybersec
| Stackexchange |