Q:
InputStreamReader throws NullPointerException when launching via cmd
I wrote some code to open and read the content of a csv file:
BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(this.getClass().getResourceAsStream(fileName)));
String line;
try {
line = bufferedReader.readLine();
while (line != null) {
line = bufferedReader.readLine();
}
} catch (IOException e) {
e.printStackTrace();
} finally {
// close buffered reader
}
The code works fine in unit tests; no exception is raised. However, when I launch the program via cmd it throws an NPE coming from InputStreamReader:
Exception in thread "main" java.lang.NullPointerException
at exercise.FileLoader.loader(FileLoader.java:28)
at exercise.Application.main(Application.java:22)
The program actually takes the file name as a parameter:
public static void main(String[] args) {
if (args.length > 1) {
System.out.println("Too many input arguments.");
System.exit(-1);
}
String fileName = args[0];
//here runs the method who reads the csv file above
}
Could you tell me what is happening?
A:
The following does not read a file on the file system, but a resource on the class path (which is in principle read-only).
BufferedReader bufferedReader = new BufferedReader(
new InputStreamReader(getClass().getResourceAsStream(fileName)));
Also, the encoding used is that of the current platform, which might differ on another PC.
And I see no close(), which was probably dropped while preparing the question.
For the file system:
Path path = Paths.get(filename);
try (BufferedReader bufferedReader =
Files.newBufferedReader(path, Charset.defaultCharset())) {
line = ...
...
} // Automatic close.
Some care must be taken when the path is not absolute: it is then resolved against the working directory from which the application was started.
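On the original classpath approach: getResourceAsStream() returns null when the resource cannot be found, and wrapping that null is exactly what produces the NullPointerException. Here is a minimal defensive sketch (class and method names mirror the stack trace; the UTF-8 charset is an assumption):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class FileLoader {
    public void loader(String fileName) throws IOException {
        // getResourceAsStream returns null (not an exception) when the
        // resource is not on the class path, so check before wrapping it
        InputStream in = getClass().getResourceAsStream(fileName);
        if (in == null) {
            throw new IOException("Resource not found on class path: " + fileName);
        }
        // try-with-resources closes the reader automatically
        try (BufferedReader reader =
                new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                // process the line here
            }
        }
    }
}
```

With this check in place, a missing resource produces a clear error message instead of an NPE deep inside InputStreamReader.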
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Trying to setup a Maven Nexus Server but getting "Could not resolve dependencies"
Trying to setup a Maven Nexus Server but getting "Could not resolve dependencies" only on some of the packages.
Here is the output of my mvn package
[INFO] ------------------------------------------------------------------------
[INFO] Building School Visit 1.0.0-BUILD-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[WARNING] The POM for org.springframework.webflow:spring-faces:jar:2.3.1.BUILD-SNAPSHOT is missing, no dependency information available
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 4.414s
[INFO] Finished at: Tue Sep 25 12:42:45 EDT 2012
[INFO] Final Memory: 6M/100M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal on project SchoolVisit: Could not resolve dependencies for project org.uftwf.schoolvisit:SchoolVisit:war:1.0.0-BUILD-SNAPSHOT: Failure to find org.springframework.webflow:spring-faces:jar:2.3.1.BUILD-SNAPSHOT in http://localhost:8080/nexus/content/groups/public was cached in the local repository, resolution will not be reattempted until the update interval of nexus has elapsed or updates are forced -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
Here is my settings.xml file:
<settings>
<mirrors>
<mirror>
<!--This sends everything else to /public -->
<id>nexus</id>
<mirrorOf>*</mirrorOf>
<url>http://localhost:8080/nexus/content/groups/public</url>
</mirror>
</mirrors>
<profiles>
<profile>
<id>nexus</id>
<activation>
<activeByDefault>true</activeByDefault>
</activation>
<repositories>
<repository>
<id>release</id>
<url>http://127.0.0.1:8080/nexus-2.1.2/content/groups/public</url>
<releases>
<enabled>true</enabled>
</releases>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
<repository>
<id>snapshots</id>
<url>http://127.0.0.1:8080/nexus-2.1.2/content/groups/public-snapshots</url>
<releases>
<enabled>false</enabled>
</releases>
<snapshots>
<enabled>true</enabled>
</snapshots>
</repository>
</repositories>
</profile>
</profiles>
<activeProfiles>
<!--make the profile active all the time -->
<activeProfile>nexus</activeProfile>
</activeProfiles>
</settings>
and here is my pom.xml file
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>org.uftwf.schoolvisit</groupId>
<artifactId>SchoolVisit</artifactId>
<name>School Visit</name>
<packaging>war</packaging>
<version>1.0.0-BUILD-SNAPSHOT</version>
<properties>
<java-version>1.5</java-version>
<springframework-version>3.1.2.RELEASE</springframework-version>
<springwebflow-version>2.3.1.BUILD-SNAPSHOT</springwebflow-version>
<springsecurity-version>3.1.2.RELEASE</springsecurity-version>
<org.slf4j-version>1.6.6</org.slf4j-version>
<hibernate.version>4.1.1.Final</hibernate.version>
</properties>
<dependencies>
<!-- Spring -->
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-orm</artifactId>
<version>${springframework-version}</version>
<!-- will come with all needed Spring dependencies such as spring-core
and spring-beans -->
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-aspects</artifactId>
<version>${springframework-version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-webmvc</artifactId>
<version>${springframework-version}</version>
<exclusions>
<exclusion>
<groupId>commons-logging</groupId>
<artifactId>commons-logging</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.springframework.webflow</groupId>
<artifactId>spring-faces</artifactId>
<version>${springwebflow-version}</version>
</dependency>
<dependency>
<groupId>org.springframework.security</groupId>
<artifactId>spring-security-core</artifactId>
<version>${springsecurity-version}</version>
</dependency>
<dependency>
<groupId>org.springframework.security</groupId>
<artifactId>spring-security-config</artifactId>
<version>${springsecurity-version}</version>
</dependency>
<dependency>
<groupId>org.springframework.security</groupId>
<artifactId>spring-security-web</artifactId>
<version>${springsecurity-version}</version>
</dependency>
<dependency>
<groupId>org.springframework.security</groupId>
<artifactId>spring-security-ldap</artifactId>
<version>${springsecurity-version}</version>
</dependency>
<dependency>
<groupId>org.springframework.security</groupId>
<artifactId>spring-security-cas</artifactId>
<version>${springsecurity-version}</version>
</dependency>
<dependency>
<groupId>org.springframework.security</groupId>
<artifactId>spring-security-cas-client</artifactId>
<version>3.0.7.RELEASE</version>
<exclusions>
<exclusion>
<artifactId>cas-client-core</artifactId>
<groupId>org.jasig.cas</groupId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>javax.transaction</groupId>
<artifactId>jta</artifactId>
<version>1.1</version>
</dependency>
<dependency>
<groupId>javax.mail</groupId>
<artifactId>mail</artifactId>
<version>1.4.5</version>
</dependency>
<dependency>
<groupId>cglib</groupId>
<artifactId>cglib</artifactId>
<version>2.2.2</version>
</dependency>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>5.1.21</version>
</dependency>
<dependency>
<groupId>c3p0</groupId>
<artifactId>c3p0</artifactId>
<version>0.9.1.2</version>
</dependency>
<dependency>
<groupId>org.apache.httpcomponents</groupId>
<artifactId>httpcore</artifactId>
<version>4.2.1</version>
</dependency>
<dependency>
<groupId>org.apache.httpcomponents</groupId>
<artifactId>httpclient</artifactId>
<version>4.2.1</version>
<exclusions>
<exclusion>
<artifactId>commons-logging</artifactId>
<groupId>commons-logging</groupId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.aspectj</groupId>
<artifactId>aspectjtools</artifactId>
<version>1.6.2</version>
</dependency>
<dependency>
<groupId>jstl</groupId>
<artifactId>jstl</artifactId>
<version>1.2</version>
</dependency>
<!-- Logging -->
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-ext</artifactId>
<version>${org.slf4j-version}</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
<version>${org.slf4j-version}</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>${org.slf4j-version}</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>jcl-over-slf4j</artifactId>
<version>${org.slf4j-version}</version>
</dependency>
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>1.2.16</version>
</dependency>
<!-- JSF-303 Dependency Injection -->
<dependency>
<groupId>javax.inject</groupId>
<artifactId>javax.inject</artifactId>
<version>1</version>
</dependency>
<!-- Servlet -->
<dependency>
<groupId>javax.servlet</groupId>
<artifactId>servlet-api</artifactId>
<version>2.5</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>javax.servlet.jsp</groupId>
<artifactId>jsp-api</artifactId>
<version>2.1</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>javax.servlet</groupId>
<artifactId>jstl</artifactId>
<version>1.2</version>
</dependency>
<!-- Sun Mojarra JSF 2 runtime -->
<dependency>
<groupId>com.sun.faces</groupId>
<artifactId>jsf-api</artifactId>
<version>2.1.7</version>
</dependency>
<dependency>
<groupId>com.sun.faces</groupId>
<artifactId>jsf-impl</artifactId>
<version>2.1.7</version>
</dependency>
<!-- JSR 303 validation -->
<dependency>
<groupId>javax.validation</groupId>
<artifactId>validation-api</artifactId>
<version>1.0.0.GA</version>
</dependency>
<!-- **********************************************************************
** HIBERNATE DEPENDENCIES ** ********************************************************************** -->
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-entitymanager</artifactId>
<version>${hibernate.version}</version>
</dependency>
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-validator</artifactId>
<version>4.0.2.GA</version>
</dependency>
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-validator</artifactId>
<version>4.0.2.GA</version>
</dependency>
<!-- Test -->
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.7</version>
<scope>test</scope>
</dependency>
</dependencies>
<repositories>
<!-- For testing against latest Spring snapshots -->
<repository>
<id>org.springframework.maven.snapshot</id>
<name>Spring Maven Snapshot Repository</name>
<url>http://maven.springframework.org/snapshot</url>
<releases>
<enabled>false</enabled>
</releases>
<snapshots>
<enabled>true</enabled>
</snapshots>
</repository>
<!-- For developing against latest Spring milestones -->
<repository>
<id>org.springframework.maven.milestone</id>
<name>Spring Maven Milestone Repository</name>
<url>http://maven.springframework.org/milestone</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
<!-- For Hibernate Validator -->
<repository>
<id>org.jboss.repository.releases</id>
<name>JBoss Maven Release Repository</name>
<url>https://repository.jboss.org/nexus/content/repositories/releases</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
<!-- For Sun Mojarra JSF 2 implementation -->
<repository>
<id>maven2-repository.dev.java.net</id>
<name>Java.net Repository for Maven</name>
<url>http://download.java.net/maven/2/</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
<!-- For PrimeFaces JSF component library -->
</repositories>
<build>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>tomcat-maven-plugin</artifactId>
<configuration>
<mode>both</mode>
</configuration>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>tomcat-maven-plugin</artifactId>
<configuration>
<server>6dvews01</server>
</configuration>
</plugin>
</plugins>
<finalName>SchoolVisit</finalName>
</build>
</project>
Can someone please tell me why I am getting this error?
A:
The problem is not in your project or in your settings.xml file.
Maven contacts your localhost Nexus repository to look for org.springframework.webflow:spring-faces:jar:2.3.1.BUILD-SNAPSHOT and apparently is not finding it.
Does this artifact exist in your Nexus repository? Do you have other repositories configured in Nexus as proxies, so it can fetch artifacts it does not host itself? Also note that, as the error message says, the failed resolution has been cached locally, so once the artifact is available you will need to force an update (run Maven with the -U flag) or wait for the update interval to elapse.
Q:
Differential equation solved by $y(x) = c_1e^x + c_2xe^x + c_3x^2e^x + c_4\cos(x) + c_5\sin(x)$
Consider the following solution: $$y(x) = c_1e^x + c_2xe^x + c_3x^2e^x + c_4\cos(x) + c_5\sin(x)$$
where all the $c_i$ are real constants ($c_i \in \Bbb R$).
How do I find a differential equation that has the previous general solution?
I re-wrote the solution to the following:
$$y(x)=(c_1 + c_2x + c_3x^2)e^x + c_4\cos(x) + c_5\sin(x)$$
I can see that the solution looks like: $y(x) = Q(x)e^x + a\cos(x) + b\sin(x)$ where $Q$ is a second degree polynomial, but I don't really know how to take it from here.
A:
\begin{eqnarray}
\text{Operator} &\quad& \text{ Annihilates}\\
D^{n+1} && Q_n(x)\text{ a degree n polynomial}\\
D-a && e^{ax}\\
(D-a)^{n+1} && Q_n(x)e^{ax}\\
D^2+b^2 && c_1\cos(bx)+c_2\sin(bx)\\
(D^2+b^2)^{n+1} && Q_n(x)(c_1\cos(bx)+ c_2\sin(bx))\\
D^2-2aD+(a^2+b^2) && e^{ax}(c_1\cos(bx)+ c_2\sin(bx))\\
\left(D^2-2aD+(a^2+b^2)\right)^{n+1} &&Q_n(x)e^{ax}(c_1\cos(bx)+ c_2\sin(bx))
\end{eqnarray}
If operator $L_1$ annihilates $f(x)$ and operator $L_2$ annihilates $g(x)$ then operator $L_1L_2$ annihilates $f(x)+g(x)$.
$(D-1)^3$ annihilates $(c_1+c_2x+c_3x^2)e^x$
$(D^2+1)$ annihilates $c_4\cos(x)+c_5\sin(x)$
so $(D-1)^3(D^2+1)$ annihilates their sum.
So $$y = c_1e^x + c_2xe^x + c_3x^2e^x + c_4\cos(x) + c_5\sin(x)$$ is the general solution of
$$ (D-1)^3(D^2+1)y=0 $$
which is
$$ y^{(5)}-3y^{(4)}+4y^{(3)}-4y^{\prime\prime}+3y^\prime-y=0 $$
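Expanding the operator confirms this equation term by term:
$$(D-1)^3 = D^3 - 3D^2 + 3D - 1$$
$$(D-1)^3(D^2+1) = D^5 - 3D^4 + 3D^3 - D^2 + D^3 - 3D^2 + 3D - 1 = D^5 - 3D^4 + 4D^3 - 4D^2 + 3D - 1,$$
and applying this to $y$ gives $y^{(5)}-3y^{(4)}+4y^{(3)}-4y^{\prime\prime}+3y^\prime-y=0$.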
Q:
Why expression inside map() function executed only when using count() function?
When I use the count() function, the "inside test" message is printed three times as expected, but when I remove the count() call, the test() function isn't called at all. From the count() documentation I understand that it returns the count of elements in the given stream.
import java.util.Arrays;
import java.util.List;
public class Start {
public static int test(int input) {
System.out.println("inside processRecord");
return input;
}
public static void main(String[] args) throws InterruptedException {
List<Integer> data = Arrays.asList(1,2,3);
data.parallelStream().map(Start::test).count();
}
}
A:
Because count() is a terminal operation, and a stream pipeline is executed only when a terminal operation is present; streams are said to be lazy...
Also note that on Java 9 and above, your example would not print those statements even with count(): all you care about is how many elements there are, and since map() cannot change that number, the implementation may skip executing it entirely.
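The laziness can be demonstrated directly. The sketch below (class and method names are just for illustration) builds the same kind of pipeline as in the question and counts how often the mapping function actually runs, with and without a terminal operation:

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;

public class LazyDemo {

    // Counts how often map()'s function runs for a 3-element stream.
    static int countMapCalls(boolean withTerminal) {
        List<Integer> data = Arrays.asList(1, 2, 3);
        AtomicInteger calls = new AtomicInteger();
        Stream<Integer> mapped = data.stream()
                .map(n -> { calls.incrementAndGet(); return n; });
        if (withTerminal) {
            mapped.forEach(n -> {}); // terminal operation: the pipeline executes
        }
        // without a terminal operation, the pipeline is only set up, never run
        return calls.get();
    }

    public static void main(String[] args) {
        System.out.println(countMapCalls(false)); // 0: map() never ran
        System.out.println(countMapCalls(true));  // 3: one call per element
    }
}
```

forEach is used here instead of count() so the example behaves the same on Java 9+, where count() may skip the map step.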
Q:
Fill Array with HashMap
Updated: I filled an array through a HashMap. I am using AsyncTask for the HTTP request and, after filling the array, I put that array in a dialog box. When I first run my app it gives me an empty dialog box and doesn't give any error, but when I re-run my app it shows all array elements in the dialog box perfectly. What's the reason?
//JsonResponse Inner Class in main class
private class JsonResponse extends AsyncTask<String, Void, String> {
String response = "";
private ArrayList<HashMap<String, String>> prServices_resultList = new ArrayList<HashMap<String, String>>();
protected void onPreExecute()
{
}
protected void onPostExecute(String result)
{
if(response.equalsIgnoreCase("Success"))
{
ResultList_List = prServices_resultList;
int z=0;
for (HashMap<String, String> hashList : prServices_resultList)
{
Av_List[z] = hashList.get(android_Av_ID);
Av_Lat[z] = Double.parseDouble(hashList.get(android_Av_LAT));
Av_Lng[z] = Double.parseDouble(hashList.get(android_Av_LONG));
z++;
}
}
}
protected String doInBackground(final String... args)
{
JSONParser jParser = new JSONParser();
JSONArray jArrayServices = jParser.getJSONFromUrl(url_Services);
try{
for (int i = 0; i < jArrayServices.length(); i++)
{
JSONObject jsonElements = jArrayServices.getJSONObject(i);
String S_id = jsonElements.getString(android_S_ID);
String S_name = jsonElements.getString(android_S_NAME);
HashMap<String, String> hashServices = new HashMap<String, String>();
// adding each child node to HashMap key
hashServices.put(android_S_ID, S_id);
hashServices.put(android_S_NAME, S_name);
// adding HashList to ArrayList
prServices_resultList.add(hashServices);
}
response = "Success";
}
catch(JSONException e)
{
e.printStackTrace();
}
return response;
}
}
In my main class when i press a button:
new JsonResponse().execute();
In the main class, above onCreate, I declare:
static ArrayList<HashMap<String, String>> ResultList_Services = new ArrayList<HashMap<String, String>>();
String[] Db_Services = new String[ResultList_Services.size()];
String[] Db_ServicesID = new String[ResultList_Services.size()];
Now I get an error: java.lang.ArrayIndexOutOfBoundsException: length=0; index=0
A:
First of all, adding values from the background thread to the static field ResultList_Services, which the UI thread reads, is very bad practice. You should rewrite your code to be thread-safe. There are several options; here is one:
private class JsonResponse extends AsyncTask<String, Void, String> {
private ArrayList<HashMap<String, String>> resultList = new ArrayList<HashMap<String, String>>();
...
protected String doInBackground(final String... args)
{
...
// adding HashList to private JsonResponse's field
resultList.add(hashServices);
...
}
protected void onPostExecute(String result)
{
if (response.equalsIgnoreCase("Success"))
{
ResultList_Services = resultList;
//TODO: notify your Activity/Dialog on completion
}
}
}
Concerning your question: the reason for not seeing the new records is that you show the dialog while there are no values in the result list yet. You should request to show it from onPostExecute, for example.
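The hand-off pattern above can be illustrated in plain Java, outside Android (the names and the CompletableFuture stand-in are illustrative, not Android API): the background thread fills a private list, and the caller only sees the completed result, never a half-filled one.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class BackgroundLoad {

    // Build the list entirely on a background thread; publish it only when done.
    static List<String> loadInBackground() {
        return CompletableFuture.supplyAsync(() -> {
            List<String> result = new ArrayList<>();
            result.add("service-1"); // stand-ins for the parsed JSON entries
            result.add("service-2");
            return result;
        }).join(); // join() plays the role of onPostExecute's hand-off here
    }

    public static void main(String[] args) {
        List<String> services = loadInBackground();
        System.out.println(services.size()); // only now is it safe to show the dialog
    }
}
```

The key point is the same as in the AsyncTask version: the shared reference is assigned exactly once, after the background work has fully completed.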
Q:
Stop multiple PCs running same script for generic mailbox
I made a script to auto-forward messages (with a custom response) and, from what I gathered, Outlook has to be running for it to work.
The issue is: if a couple of machines are running that script, will it "go off" multiple times?
from specific sender
containing XYZ in subject
except when it contains ABC in subject
Public Sub FW(olItem As Outlook.MailItem)
Dim olForward As Outlook.MailItem
Set olForward = olItem.Forward
With olForward
'Stuff happens here that works properly
End With
'// Clean up
Set olItem = Nothing
Set olForward = Nothing
End Sub
A:
As @Barney's comment is absolutely correct and multiple machines running the script will each trigger a forward of the item, I would like to add what you should do to perform your action only once.
In the script, right after a successful forward of the message, add a custom property to the item. The property simply indicates that the message was already forwarded (or parsed/touched by your script). Then make the entire item handling conditional on whether this property exists: if it does, do not perform any actions. The following resource will help with custom properties: How To: Add a custom property to the UserProperties collection of an Outlook e-mail item
Q:
iOS - Passing blocks to functions
I have a method call (it's a call from AFNetworking):
AFJSONRequestOperation *operation = [AFJSONRequestOperation JSONRequestOperationWithRequest:request success:^(NSURLRequest *request, NSHTTPURLResponse *response, id JSON)
{
NSLog(@"IP Address: %@", [JSON valueForKeyPath:@"origin"]);
} failure:^(NSURLRequest *request , NSURLResponse *response , NSError *error , id JSON)
{
NSLog(@"Failed: %@",[error localizedDescription]);
}];
and I'm trying to pull the success and failure blocks out into separate variables that I can later pass into the method. But I can't figure out how to declare the blocks as variables. I'm looking to do something like this:
IDontKnowWhatKindOfDataTypeGoesHere successBlock = ^(NSURLRequest *request, NSHTTPURLResponse *response, id JSON)
{
NSLog(@"IP Address: %@", [JSON valueForKeyPath:@"origin"]);
}
and the same for the failureBlock.
So then I'm looking to make the AFJSONRequestOperation call like:
AFJSONRequestOperation *operation = [AFJSONRequestOperation JSONRequestOperationWithRequest:request success:successBlock failure:failureBlock];
But I can't figure out what the data types of successBlock and failureBlock should be.
This is more for organization, I suppose. I have a lot of code in my success block, and Xcode's auto-formatting pushes it all to the right side of the screen, which is totally annoying. So if I can pull it out (which should be possible, right?), then I can organize my code better.
Thanks!
A:
It's awkward until you get used to it. The variable name appears mixed in with the type...
void (^success)(NSURLRequest *, NSHTTPURLResponse *, id) = ^(NSURLRequest *request, NSHTTPURLResponse *response, id JSON) {
NSLog(@"request is %@, response is %@ and json is %@", request, response, JSON);
};
The stack variable name in this case is success. You can now refer to it in subsequent expressions that take the same type, like ...
AFJSONRequestOperation *operation = [AFJSONRequestOperation JSONRequestOperationWithRequest:request
success:success];
You can also make a block a property, like this:
@property (copy, nonatomic) void (^success)(NSURLRequest *, NSHTTPURLResponse *, id);
And call it like this:
self.success(aRequest, aResponse, someJSON);
Remember to nil it out when you're done calling it, so the caller has less to worry about with retain cycles.
Edit: there is a good suggestion in the comments about using a typedef to make this easier on the eyes and fingers:
typedef void (^SuccessBlock)(NSURLRequest *, NSHTTPURLResponse *, id);
So the stack variable looks like this:
SuccessBlock success = ^(NSURLRequest *request, NSHTTPURLResponse *response, id JSON) {
NSLog(@"request is %@, response is %@ and json is %@", request, response, JSON);
};
and the property looks like this:
@property (copy, nonatomic) SuccessBlock success;
Q:
Which is the missing number?
I am working on this puzzle since last week.
01 ?? 05 12
10 13 15 09
14 02 08 03
04 07 11 06
which is the missing number ??
note : answer is not 16
A:
A valid answer is
00
Explanation:
The existing numbers in the 4x4 matrix range from 1 through 15. From my perspective, there seems to be no logical coherence in their placement. Hence the remaining possible number in the sequence is either one more than the last number, 16, or one less than the first number, 00. As 16 is ruled out in the question, 00 is the remaining option.
Credits to @SayedMohdAli for discussing this in the comments.
Q:
Spinner in Dialog - NullPointerException - Android
I want to show a custom dialog with a spinner in it, but when I do, I get a NullPointerException at the setAdapter() method. I have been trying for over a week now and couldn't figure out how to get this right. Here's my code:
AlertDialog alertDialog;
LayoutInflater inflater =
(LayoutInflater) this.getSystemService(LAYOUT_INFLATER_SERVICE);
View layout = inflater.inflate(R.layout.form,
(ViewGroup) findViewById(R.id.layout_root));
ArrayAdapter<String> spinnerAdapter = new ArrayAdapter<String>(this,
android.R.layout.simple_spinner_item, new String[] {"0","1","2"});
Spinner spinner = (Spinner) findViewById(R.id.spinner1);
//I get the error in the following line:
try{
spinner.setAdapter(spinnerAdapter);
}catch(Exception exception){
Toast.makeText(getApplicationContext(),
"Exception: "+exception,Toast.LENGTH_SHORT).show();
}
AlertDialog.Builder builder = new AlertDialog.Builder(this);
builder.setView(layout);
alertDialog = builder.create();
alertDialog.setTitle("Security");
alertDialog.show();
}
Here's the xml file form.xml:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:orientation="vertical"
android:id="@+id/layout_root" >
<Spinner
android:id="@+id/spinner1"
android:layout_width="fill_parent"
android:layout_height="wrap_content" />
</LinearLayout>
Please help me out. I've followed this link: Spinner in dialog - NullPointerException,
which discusses the same problem, but I still couldn't get it working.
A:
Try this:
Spinner spinner = (Spinner) layout.findViewById(R.id.spinner1);
Calling findViewById directly on the Activity searches the Activity's own content view, where spinner1 does not exist, so it returns null. You have to look the Spinner up in the inflated dialog layout instead.
Q:
javascript group unique items in array
I have an array in the below format.
var produce = [
{'supplierID':1,'produceID':1, 'name':'apple', 'qty': 10},
{'supplierID':1,'produceID':2, 'name':'carrot', 'qty': 10},
{'supplierID':1,'produceID':2, 'name':'bean', 'qty': 20},
{'supplierID':1,'produceID':1, 'name':'bananna', 'qty': 30},
{'supplierID':1,'produceID':1, 'name':'orange', 'qty': 65},
{'supplierID':2,'produceID':2, 'name':'pumpkin', 'qty': 120},
{'supplierID':2,'produceID':2, 'name':'cucumber', 'qty': 18},
{'supplierID':2,'produceID':1, 'name':'strawberry', 'qty': 130},
{'supplierID':2,'produceID':1, 'name':'mango', 'qty': 60},
{'supplierID':2,'produceID':1, 'name':'grapes', 'qty': 140}
];
//produceID 1 = fruit
//produceID 2 = veg
I want it in this sort of format.
{
'id': 1,
'fruit': [
{
'name': 'apple',
'qty': 10
},
{
'name': 'bananna',
'qty': 30
},
{
'name': 'orange',
'qty': 65
}
],
'veg': [
{
'name': 'carrot',
'qty': 10
},
{
'name': 'bean',
'qty': 20
},
]
},
{
'id': 2,
'fruit': [
{
'name': 'strawberry',
'qty': 130
},
{
'name': 'mango',
'qty': 60
},
{
'name': 'grapes',
'qty': 140
}
],
'veg': [
{
'name': 'pumpkin',
'qty': 120
},
{
'name': 'cucumber',
'qty': 18
},
]
}
So that I can group my items first by supplier then by produce type (fruit/veg) (using angular js)
<div style="width:100%" ng-repeat="res in results">
<h2>Supplier - {{res.id}}</h2>
<h3>Fruit</h3>
<ul>
<li ng-repeat="f in res.fruit">{{f.name}}</li>
</ul>
<h3>Veg</h3>
<ul>
<li ng-repeat="v in res.veg">{{v.name}}</li>
</ul>
</div>
A working eg of this can be seen in this codepen. http://codepen.io/anon/pen/rVVJjg
How can I achieve this?
So far I have found that I need to find all suppliers which I can with the below.
//get unique suppliers
var unique = {};
var distinct = [];
for( var i in produce ){
if( typeof(unique[produce[i].supplierID]) == "undefined"){
distinct.push(produce[i].supplierID);
}
unique[produce[i].supplierID] = 0;
}
console.log(distinct);
I can also get the right format for supplierID == 1, as shown below, but I'm not sure how I could scale this to handle multiple suppliers:
var fruit = [];
var veg = [];
var res = [];
for (var i in produce) {
if (produce[i].supplierID == 1) {
if (produce[i].produceID == 1) {
fruit.push(produce[i]);
}
else {
veg.push(produce[i]);
}
}
}
res.push(fruit);
res.push(veg);
console.log(res);
How can I achieve this?
A:
Check this out:
var produce = [
{'supplierID':1,'produceID':1, 'name':'apple', 'qty': 10},
{'supplierID':1,'produceID':2, 'name':'carrot', 'qty': 10},
{'supplierID':1,'produceID':2, 'name':'bean', 'qty': 20},
{'supplierID':1,'produceID':1, 'name':'bananna', 'qty': 30},
{'supplierID':1,'produceID':1, 'name':'orange', 'qty': 65},
{'supplierID':2,'produceID':2, 'name':'pumpkin', 'qty': 120},
{'supplierID':2,'produceID':2, 'name':'cucumber', 'qty': 18},
{'supplierID':2,'produceID':1, 'name':'strawberry', 'qty': 130},
{'supplierID':2,'produceID':1, 'name':'mango', 'qty': 60},
{'supplierID':2,'produceID':1, 'name':'grapes', 'qty': 140}
];
var res = [];
var suppliers = {};
for (var i in produce) {
    var product = produce[i];
    var supplier = suppliers[product.supplierID];
    if (!supplier) {
        supplier = {id: product.supplierID, fruit: [], veg: []};
        suppliers[product.supplierID] = supplier;
        res.push(supplier);
    }
    if (product.produceID == 1) {
        supplier.fruit.push({name: product.name, qty: product.qty});
    } else if (product.produceID == 2) {
        supplier.veg.push({name: product.name, qty: product.qty});
    }
}
console.log(res);
Q:
How can I make a grid with more than 12 columns?
How can I make a grid with more than 12 columns? I'd like to make a grid to represent 24 hours in a day in half-hour increments (a total of 48 columns).
<div class="row display">
<div class="small-2 large-1 columns">1</div>
<div class="small-2 large-1 columns">1</div>
<div class="small-2 large-1 columns">1</div>
<div class="small-2 large-1 columns">1</div>
<div class="small-2 large-1 columns">1</div>
<div class="small-2 large-1 columns">1</div>
<div class="small-2 large-1 columns">1</div>
<div class="small-2 large-1 columns">1</div>
<div class="small-2 large-1 columns">1</div>
<div class="small-2 large-1 columns">1</div>
<div class="small-2 large-1 columns">1</div>
<div class="small-2 large-1 columns">1</div>
</div>
A:
You can simply use a nested grid.
First divide your row into 2 columns, then place your 12 hours into each section:
<div class="row">
<div class="small-6 large-6 columns">
<div class="row">
<div class="small-2 large-1 columns">1</div>
<div class="small-2 large-1 columns">2</div>
<div class="small-2 large-1 columns">3</div>
<div class="small-2 large-1 columns">4</div>
<div class="small-2 large-1 columns">5</div>
<div class="small-2 large-1 columns">6</div>
<div class="small-2 large-1 columns">7</div>
<div class="small-2 large-1 columns">8</div>
<div class="small-2 large-1 columns">9</div>
<div class="small-2 large-1 columns">10</div>
<div class="small-2 large-1 columns">11</div>
<div class="small-2 large-1 columns">12</div>
</div>
</div>
<div class="small-6 large-6 columns">
<div class="row">
<div class="small-2 large-1 columns">13</div>
<div class="small-2 large-1 columns">14</div>
<div class="small-2 large-1 columns">15</div>
<div class="small-2 large-1 columns">16</div>
<div class="small-2 large-1 columns">17</div>
<div class="small-2 large-1 columns">18</div>
<div class="small-2 large-1 columns">19</div>
<div class="small-2 large-1 columns">20</div>
<div class="small-2 large-1 columns">21</div>
<div class="small-2 large-1 columns">22</div>
<div class="small-2 large-1 columns">23</div>
<div class="small-2 large-1 columns">24</div>
</div>
</div>
</div>
A:
You can customize Foundation on this subpage of Zurb's site and set 48 columns for yourself. Then you can use large classes from .large-1 to .large-48, and small classes from .small-1 to .small-48.
Q:
How to update user profile photo with msgraph-sdk-java?
Here is what I have so far:
ProfilePhoto photo = new ProfilePhoto();
photo.???
IProfilePhotoRequest request = graphServiceClient.users(userId).photo().buildRequest();
request.patch(photo, new ICallback<ProfilePhoto>(){
@Override
public void success(final ProfilePhoto result) {
}
@Override
public void failure(ClientException e) {
}
});
However I don't know how to set the 'Binary data for the image':
PUT https://graph.microsoft.com/v1.0/me/photo/$value
Content-type: image/jpeg
Binary data for the image
A:
You need to use BaseProfilePhotoStreamRequest, not the ProfilePhoto patch request in your code. Something like this:
IBaseProfilePhotoStreamRequest request = graphServiceClient.users(userId).photo().getContent().buildRequest();
request.put(imageBytes);
Reference Code to get image binary:
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.IOException;
import java.util.Base64;
import javax.imageio.ImageIO;

public class TestImageBinary {

    public static void main(String[] args) {
        System.out.println(getImageBinary());
    }

    static String getImageBinary() {
        File f = new File("c://20090709442.jpg");
        try {
            BufferedImage bi = ImageIO.read(f);
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            ImageIO.write(bi, "jpg", baos);
            byte[] bytes = baos.toByteArray();
            // java.util.Base64 replaces the unsupported sun.misc.BASE64Encoder
            return Base64.getEncoder().encodeToString(bytes);
        } catch (IOException e) {
            e.printStackTrace();
        }
        return null;
    }
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Oracle shows 701 weird table instead of default ones
I just installed OracleDB10 because we use it in school.
I connected to the database via:
hr/password as sysdba;
Then I executed the following code in order to show the few default tables that we used to work with in class (jobs, employees, department and similar tables):
select table_name from user_tables;
As a result I got 701 tables, many of them with dollar signs. I looked thoroughly through the results and found that the tables I need are within the results, such as the table 'COUNTRIES'.
However, if I try to do
desc countries;
Or
SELECT * FROM COUNTRIES;
It echoes an "inexistant table" error.
Any idea what's causing this and how to fix it?
A:
Most of the tables you are seeing are system tables that can be used to query meta information about Oracle. Other tables probably contain sample data.
The error message indicates that the tables are either in a different schema or that you have no right to access them.
If I'm not mistaken, COUNTRIES is a sample table in the HR schema. So if you connect with the HR user, you should be able to access them. Try either to connect as a regular user (without as sysdba):
hr/password
Or put the schema name in front of the table name:
select * from HR.COUNTRIES;
BTW: The schema is directly linked to the user. Therefore user and schema are more or less the same, and it is often called owner as well, e.g. in DBA_TABLES.
If you have insufficient rights, then you would need to grant it (using the SYSDBA user):
GRANT SELECT, INSERT, UPDATE, DELETE ON HR.COUNTRIES TO ABDEL;
(ABDEL or whatever your username is.)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Elegant ways to return multiple values from a function
It seems like in most mainstream programming languages, returning multiple values from a function is an extremely awkward thing.
The typical solutions are to make either a struct or a plain old data class and return that, or to pass at least some of the parameters by reference or pointer instead of returning them.
Using references/pointers is pretty awkward because it relies on side effects and means you have yet another parameter to pass.
The class/struct solution is also IMHO pretty awkward because you then end up with a million little classes/structs that are only used to return values from functions, generating unnecessary clutter and verbosity.
Furthermore, a lot of times there's one return value that is always needed, and the rest are only used by the caller in certain circumstances. Neither of these solutions allow the caller to ignore unneeded return types.
The one language I'm aware of that handles multiple return values elegantly is Python. For those of you who are unfamiliar, it uses tuple unpacking:
a, b = foo(c) # a and b are regular variables.
myTuple = foo(c) # myTuple is a tuple of (a, b)
Does anyone have any other good solutions to this problem? Both idioms that work in existing mainstream languages besides Python and language-level solutions you've seen in non-mainstream languages are welcome.
A:
Pretty much all ML-influenced functional languages (which is most of them) also have great tuple support that makes this sort of thing trivial.
For C++ I like boost::tuple plus boost::tie (or std::tr1 if you have it)
typedef boost::tuple<double,double,double> XYZ;
XYZ foo();
double x,y,z;
boost::tie(x,y,z) = foo();
or a less contrived example
MyMultimap::iterator lower,upper;
boost::tie(lower,upper) = some_map.equal_range(key);
A:
A few languages, notably Lisp and JavaScript, have a feature called destructuring assignment or destructuring bind. This is essentially tuple unpacking on steroids: rather than being limited to sequences like tuples, lists, or generators, you can unpack more complex object structures in an assignment statement. For more details, see here for the Lisp version or here for the (rather more readable) JavaScript version.
Other than that, I don't know of many language features for dealing with multiple return values generally. However, there are a few specific uses of multiple return values that can often be replaced by other language features. For example, if one of the values is an error code, it might be better replaced with an exception.
While creating new classes to hold multiple return values feels like clutter, the fact that you're returning those values together is often a sign that your code will be better overall once the class is created. In particular, other functions that deal with the same data can then move to the new class, which may make your code easier to follow. This isn't universally true, but it's worth considering. (Cpeterso's answer about data clumps expresses this in more detail).
A:
PHP example:
function my_funct() {
    $x = "hello";
    $y = "world";
    return array($x, $y);
}
Then, when run:
list($x, $y) = my_funct();
echo $x.' '.$y; // "hello world"
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Custom deserializer or different class design Retrofit
Retrofit makes things so easy for a noob like me. However, the API response structure that I'm requesting for my current project doesn't follow the same format as I have used before. I am unsure of whether I need to rewrite my POJO or make a custom deserializer in GSON. I cannot change the JSON structure and a custom deserializer seems daunting to me.
Here is the JSON:
{
"Green Shirt": [
{
"id": "740",
"name": "Nice Green Shirt",
"quantity": "0",
"make": "",
"model": "",
"price": "15.00",
"size": "XXS",
"sku": null,
"image": "https:\/\/google.com\/green_shirt.jpg",
"new_record": false,
"category_name": "",
"bar_code": ""
},
{
"id": "743",
"name": "Green Shirt",
"quantity": "68",
"make": "",
"model": "",
"price": "20.00",
"size": "XS",
"sku": null,
"image": "https:\/\/google.com\/green_shirt.jpg",
"new_record": false,
"category_name": "",
"bar_code": ""
}
],
"Dark Blue Jeans": [
{
"id": "1588",
"name": "Dark Blue Jeans",
"quantity": "0",
"make": "",
"model": "",
"price": "0.00",
"size": "S",
"sku": null,
"image": "https:\/\/google.com\/dark_blue_jeans.jpg",
"new_record": false,
"category_name": "",
"bar_code": "",
"category": null
},
{
"id": "1559",
"name": "Dark Blue Jeans",
"quantity": "4",
"make": "",
"model": "",
"price": "0.00",
"size": "XL",
"sku": null,
"image": "https:\/\/google.com\/dark_blue_jeans.jpg",
"new_record": false,
"category_name": "",
"bar_code": "",
"category": null
}
],
"White Belt": [
{
"id": "1536",
"name": "White Belt",
"quantity": "37",
"make": "",
"model": "",
"price": "0.00",
"size": "One Size",
"sku": null,
"image": "https:\/\/google.com\/white_belt.jpg",
"new_record": false,
"category_name": "",
"bar_code": "",
"category": null
}
]
}
Here is the POJO:
public class Product
{
private String model;
private String bar_code;
private String image;
private null sku;
private String new_record;
private String size;
private String id;
private null category;
private String price;
private String category_name;
private String name;
private String quantity;
private String make;
public String getModel ()
{
return model;
}
public void setModel (String model)
{
this.model = model;
}
public String getBar_code ()
{
return bar_code;
}
public void setBar_code (String bar_code)
{
this.bar_code = bar_code;
}
public String getImage ()
{
return image;
}
public void setImage (String image)
{
this.image = image;
}
public null getSku ()
{
return sku;
}
public void setSku (null sku)
{
this.sku = sku;
}
public String getNew_record ()
{
return new_record;
}
public void setNew_record (String new_record)
{
this.new_record = new_record;
}
public String getSize ()
{
return size;
}
public void setSize (String size)
{
this.size = size;
}
public String getId ()
{
return id;
}
public void setId (String id)
{
this.id = id;
}
public null getCategory ()
{
return category;
}
public void setCategory (null category)
{
this.category = category;
}
public String getPrice ()
{
return price;
}
public void setPrice (String price)
{
this.price = price;
}
public String getCategory_name ()
{
return category_name;
}
public void setCategory_name (String category_name)
{
this.category_name = category_name;
}
public String getName ()
{
return name;
}
public void setName (String name)
{
this.name = name;
}
public String getQuantity ()
{
return quantity;
}
public void setQuantity (String quantity)
{
this.quantity = quantity;
}
public String getMake ()
{
return make;
}
public void setMake (String make)
{
this.make = make;
}
@Override
public String toString()
{
return "ClassPojo [model = "+model+", bar_code = "+bar_code+", image = "+image+", sku = "+sku+", new_record = "+new_record+", size = "+size+", id = "+id+", category = "+category+", price = "+price+", category_name = "+category_name+", name = "+name+", quantity = "+quantity+", make = "+make+"]";
}
}
Here is the request and Retrofit interface:
public static void requestData(String username, String password) {
    RestAdapter.Builder builder = new RestAdapter.Builder()
            .setClient(new OkClient(new OkHttpClient()))
            .setEndpoint(ENDPOINT);

    if (username != null && password != null) {
        // concatenate username and password with colon for authentication
        final String credentials = username + ":" + password;
        builder.setRequestInterceptor(new RequestInterceptor() {
            @Override
            public void intercept(RequestFacade request) {
                // create Base64 encoded string
                String string = "Basic " + Base64.encodeToString(credentials.getBytes(), Base64.NO_WRAP);
                request.addHeader("Accept", "application/json");
                request.addHeader("Authorization", string);
            }
        });
    }

    RestAdapter adapter = builder.build();
    ProductAPI api = adapter.create(ProductAPI.class);
    api.getInventory(new Callback<List<Product>>() {
        @Override
        public void success(List<Product> products, Response response) {
            Log.d(TAG, response.getUrl());
            Log.d(TAG, response.getReason());
            mInventory = products;
        }

        @Override
        public void failure(RetrofitError error) {
            Log.d(TAG, error.getMessage());
        }
    });
}

public interface ProductAPI {
    @GET("/v2/get-inventory")
    public void getInventory(Callback<List<Product>> response);
}
This is the error I get because the JSON starts with '{' instead of '['
com.google.gson.JsonSyntaxException: java.lang.IllegalStateException: Expected BEGIN_ARRAY but was BEGIN_OBJECT at line 1 column 2 path $
A:
I told Retrofit (GSON?) to look for a Map<String,List<Product>> instead of just a List<Product> and it figured it out, how convenient.
api.getInventory(new Callback<Map<String,List<Product>>>() {
    @Override
    public void success(Map<String,List<Product>> products, Response response) {
        mInventory = products;
    }

    @Override
    public void failure(RetrofitError error) {
        Log.d(TAG, error.getMessage());
    }
});

public interface ProductAPI {
    @GET("/v2/get-inventory")
    public void getInventory(Callback<Map<String,List<Product>>> response);
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Newtonsoft.Json.Linq.JObject 'does not contain a definition for X prop
I am trying to store the one property of my object in a variable and it gives me an error referring to my object does not have a definition for my property.
Error: 'Newtonsoft.Json.Linq.JObject' does not contain a definition for 'str'.
This only happens within my project, if I do it within a separate compiler, everything runs correctly and it should be, but I don't understand why I get this error in the project.
https://dotnetfiddle.net/Jf5xQf
A:
The result you are receiving for JObject.Parse is of type JObject. To fetch the str value, you need to use
valorcito = d["str"];
You would be interested to read on querying Json with Linq with your current approach.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Non-graphical output from pycallgraph
I've started writing a small Python utility to cache functions. The available caching tools (lru_cache, Beaker) do not detect changes of sub-functions.
For this, I need a Call Graph. There exists an excellent tool in pycallgraph by Gerald Kaszuba. However, so far I've only got it to output function-name strings. What I need are either function-objects or function-code-hashes.
What I mean with these two terms: Let def foo(x): return x, then foo is the function-object, and hash(foo.__code__.co_code) is the function-code-hash.
What I have
You can see what I have here. But below is a minimal example. The problem I have in this example, is that I can't go from a function name (the string) to the function definition again. I'm trying with eval(func).
So, I guess there are two ways of solving this:
Proper pycallgraph.output, or some otherway to get what I want directly from Pycallgraph.
Dynamically loading the function from the function.__name__ string.
import unittest
from pycallgraph import PyCallGraph
from pycallgraph.output import GraphvizOutput

class Callgraph:
    def __init__(self, output_file='callgraph.png'):
        self.graphviz = GraphvizOutput()
        self.graphviz.output_file = output_file

    def execute(self, function, *args, **kwargs):
        with PyCallGraph(output=self.graphviz):
            ret = function(*args, **kwargs)
        self.graph = dict()
        for node in self.graphviz.processor.nodes():
            if node.name != '__main__':
                f = eval(node.name)
                self.graph[node.name] = hash(f.__code__.co_code)
        return ret

    def unchanged(self):
        '''Checks each function in the callgraph whether it has changed.

        Returns True if all the functions have their original code-hash. False otherwise.
        '''
        for func, codehash in self.graph.iteritems():
            f = eval(func)
            if hash(f.__code__.co_code) != codehash:
                return False
        return True

def func_inner(x):
    return x

def func_outer(x):
    return 2*func_inner(x)

class CallgraphTest(unittest.TestCase):
    def testChanges(self):
        cg = Callgraph()
        y = cg.execute(func_outer, 3)
        self.assertEqual(6, y)
        self.assertTrue(cg.unchanged())
        # Change one of the functions
        def func_inner(x):
            return 3+x
        self.assertFalse(cg.unchanged())
        # Change back!
        def func_inner(x):
            return x
        self.assertTrue(cg.unchanged())

if __name__ == '__main__':
    unittest.main()
A:
I've solved this by patching tracer.py with the appropriate hashes.
# Work out the current function or method
func_name = code.co_name
+ func_hash = hash(code.co_code)
I am calculating the value just where the function name is saved. Later on, you'd obviously also need to save that value. I am doing that with a dictionary where the func_name is the key and the hash is the value. In the function where nodes are created I am then assigning this to a new field in stat_group.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Quotation marks in Word macros
I am trying to use a Word macro to find and replace some text. Part of my original text is in italics (Our Notebook), and I want the replacement text to instead enclose those italicized words in quotation marks ("Our Notebook") and remove the italics. My 'bad' code is show below. Is there a simple fix for this?
Selection.Find.ClearFormatting
Selection.Find.Replacement.ClearFormatting
With Selection.Find
.Text = "From Our Notebook"
.Replacement.Text = "From "Our NoteBook""
.Forward = True
.Wrap = wdFindContinue
.Format = False
.MatchCase = False
.MatchWholeWord = False
.MatchWildcards = False
.MatchSoundsLike = False
.MatchAllWordForms = False
End With
A:
I believe I have figured this out. The solution is to use two double quotation marks to indicate an output of single double quotation mark. Since these have to go inside a set of double quotation marks (e.g., .Replacement.Text = "From ""Our NoteBook"""), you end up with a lot of quotation marks!
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Pandas updating values in a column using a lookup dictionary
I have a column in a Pandas dataframe that I want to use to look up a value of cost in a lookup dictionary.
The idea is that I will update an existing column if the item is there and if not the column will be left blank.
All the methods and solutions I have seen so far seem to create a new column, such as apply and assign methods, but it is important that I preserve the existing data.
Here is my code:
lookupDict = {'Apple': 1, 'Orange': 2,'Kiwi': 3,'Lemon': 8}
df1 = pd.DataFrame({'Fruits':['Apple','Banana','Kiwi','Cheese'],
'Pieces':[6, 3, 5, 7],
'Cost':[88, 55, 65, 55]},)
What I want to achieve is lookup the items in the fruit column and if the item is there I want to update the cost column with the dictionary value multiplied by the number of pieces.
For example for Apple the cost is 1 from the lookup dictionary, and in the dataframe the number of pieces is 6, therefore the cost column will be updated from 88 to (6*1) = 6. The next item is banana which is not in the lookup dictionary, therefore the cost in the original dataframe will be left unchanged. The same logic will be applied to the rest of the items.
The only way I can think of achieving this is to separate the lists from the dataframe, iterate through them and then add them back into the dataframe when I'm finished. I am wondering if it would be possible to act on the values in the dataframe without using separate lists??
From other responses I image I have to use the loc indicators such as the following: (But this is not working and I don't want to create a new column)
df1.loc[df1.Fruits in lookupDict,'Cost'] = lookupDict[df1.Fruits] * lookupD[df1.Pieces]
I have also tried to map but it overwrites all the content of the existing column:
df1['Cost'] = df1['Fruits'].map(lookupDict)*df1['Pieces']
EDIT*******
I have been able to achieve it with the following using iteration, however I am still curious if there is a cleaner way to achieve this:
#Iteration method
for i, x in zip(df1['Fruits'], xrange(len(df1.index))):
    fruit = (df1.loc[x, 'Fruits'])
    if fruit in lookupDict:
        newCost = lookupDict[fruit] * df1.loc[x, 'Pieces']
        print(newCost)
        df1.loc[x, 'Cost'] = newCost
A:
If I understood correctly:
mask = df1['Fruits'].isin(lookupDict.keys())
df1.loc[mask, 'Cost'] = df1.loc[mask, 'Fruits'].map(lookupDict) * df1.loc[mask, 'Pieces']
Result:
In [29]: df1
Out[29]:
Cost Fruits Pieces
0 6 Apple 6
1 55 Banana 3
2 15 Kiwi 5
3 55 Cheese 7
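For completeness, a self-contained version of the mask-and-map approach, using the sample frame and lookup dict from the question, can be run as-is:

```python
import pandas as pd

lookupDict = {'Apple': 1, 'Orange': 2, 'Kiwi': 3, 'Lemon': 8}
df1 = pd.DataFrame({'Fruits': ['Apple', 'Banana', 'Kiwi', 'Cheese'],
                    'Pieces': [6, 3, 5, 7],
                    'Cost': [88, 55, 65, 55]})

# Rows whose fruit has a known unit price; isin iterates the dict's keys.
mask = df1['Fruits'].isin(lookupDict)
df1.loc[mask, 'Cost'] = df1.loc[mask, 'Fruits'].map(lookupDict) * df1.loc[mask, 'Pieces']

print(df1['Cost'].tolist())  # [6, 55, 15, 55]
```

Apple and Kiwi get updated (1*6 and 3*5) while Banana and Cheese keep their original costs.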
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Coordinate system for shapefile, no .prj file, using R
I am trying to use the shapefiles found on this page for oil & gas data from the Netherlands: https://www.nlog.nl/en/files-interactive-map from table at the bottom with "ARC_grid" and "Google Earth (WGS84)" columns and icons.
Specifically the "Oil and gas fields" ARC_grid files (https://www.nlog.nl/sites/default/files/nlog_velden_ed_1950_utm_31n_20170829.zip)
I am using R to download, read and plot the shapefile. I am able to do this OK, but the shapefile does not seem to have a coordinate reference system, I believe due to the absence of the .prj file.
The coordinates I get when I read the file in look like this:
[1,] 683985.7 5931987
[2,] 684138.5 5931975
I would like them in Lat/Long, but do not know how to find out what conversion to use initially.
I am looking for an R-based solution.
Using R, this is how I have loaded the file:
temp <- tempfile(fileext = ".zip")
download.file("https://www.nlog.nl/sites/default/files/nlog_velden_ed_1950_utm_31n_20170829.zip", destfile = temp)
filepaths <- unzip(temp)
map <- readOGR(".")
map@proj4string
# CRS arguments: NA
A:
From a quick look, most other datasets on the website have a specific PRJ file - perhaps that particular one just wasn't copied over properly. I'd try copying one from a related dataset, put it in the same directory as the shapefile you need and rename it to: nlog_velden_ed_1950_utm_31n_20170829.prj.
Then you can reproject as needed in ArcGIS. I'd double check to make sure everything seems right, but I'd bet that would do it.
A:
Go to projfinder, type in one of your coordinate points and zoom the map to where you think the point should be:
http://projfinder.com/
projfinder will then try lots of coordinate systems and list the ones that place your point where you indicated on the map.
You then need to use your knowledge to figure out which one it's most likely to be. In this case WGS84 UTM zone 31N, unless it's old data that might have used a previous geodetic reference like WGS 72, but they're pretty close together.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
What is the purpose of group "staff"?
What is the purpose of group "staff"? Interestingly, note that the set user or group ID on execution (s) bit is set.
michael@greenbeanDev:~ $ ls -l /usr
total 72
drwxr-xr-x 2 root root 28672 May 26 13:10 bin
drwxr-xr-x 2 root root 4096 Jan 7 2015 games
drwxr-xr-x 35 root root 20480 May 21 15:36 include
drwxr-xr-x 49 root root 4096 May 26 13:06 lib
drwxrwsr-x 11 root staff 4096 May 26 15:55 local
drwxr-xr-x 2 root root 4096 May 26 13:06 sbin
drwxr-xr-x 99 root root 4096 Apr 5 15:59 share
drwxr-xr-x 2 root root 4096 Jan 7 2015 src
michael@greenbeanDev:~ $ cat /etc/group | grep staff
staff:x:50:
michael@greenbeanDev:~ $ cat /etc/passwd | grep staff
michael@greenbeanDev:~ $ cat /etc/os-release
PRETTY_NAME="Raspbian GNU/Linux 8 (jessie)"
NAME="Raspbian GNU/Linux"
VERSION_ID="8"
VERSION="8 (jessie)"
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
michael@greenbeanDev:~ $
A:
According to the Debian Wiki:
staff: Allows users to add local modifications to the system (/usr/local) without needing root privileges (note that executables in /usr/local/bin are in the PATH variable of any user, and they may "override" the executables in /bin and /usr/bin with the same name). Compare with group "adm", which is more related to monitoring/security.
A:
This isn't peculiar to Raspbian or even GNU/Linux; evidently it's used on OSX too, although perhaps not the same way. Both operating systems are a form of unix -- I found that OSX question by quickly searching "unix staff group". OSX actually aims for (and receives) certification from SUS and POSIX; the use of staff there may be to comply with the former.
Linux distros are not so certified, however. There's no staff by default on Fedora, so it is probably just the Debian side of the family. The explanation of purpose from their wiki is:
staff: Allows users to add local modifications to the system (/usr/local) without needing root privileges (note that executables in /usr/local/bin are in the PATH variable of any user, and they may "override" the executables in /bin and /usr/bin with the same name). Compare with group "adm", which is more related to monitoring/security.
Which also explains why /usr/local is set that way.
note that the set user or group ID on execution (s) bit is set
This means that files created in that directory will inherit that gid:
GNU Coreutils: Directories and the Set-User-ID and Set-Group-ID Bits
The only difference this makes with a normal umask of 022 (meaning by default, files are created with group write permission masked out) is that it means subdirectories created there (and files, but this is not so relevant) will inherit that gid -- and the standard ones that are there from the beginning are also set 2775 (group writable with gid bit set).
This means anyone in staff should be able to install anything to /usr/local with write access to the standard hierarchy of subdirectories (etc, bin, lib, share).
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Iterating over Observable values and subscribe to new Observable
Let's say I have an Array inside an Observable. For each value of that array, I want to make an API call (which returns an Observable again). I'll break it down to a simple example. Because I need to iterate over the first Observable's values, how do I make sure that data contains the actual data, and not another Observable?
I tried switchMap, mergeMap, etc..
const observables = Observable.of([{ id: 1 }, { id: 2 }]);
const data = Observable.of('data');

observables.pipe(
  // I tried a lot, something like this
  map(values => {
    if (Array.isArray(values)) {
      values.map(value => value.data = data); // data will be an Observable
    }
    return values;
  })
).subscribe(result => {
  console.log(result); // I want: [{ id: 1, data: 'data' }, { ... }]
});
A:
Depending upon your requirement of sending API requests you could use any of mergeMap, concatMap, forkJoin, etc.
I will be giving an example using forkJoin and mergeMap
const observableData: Observable<{ id: number; data?: any }[]> = of([
  { id: 11 },
  { id: 12 }
]);

return observableData.pipe(
  mergeMap(values => {
    // first map all the observables to make an array for API calls
    let apiArray = values.map(eachValue => {
      return this.yourApiService.getData(eachValue.id);
    });
    // now you have to make API calls
    return forkJoin(...apiArray).pipe(
      map(apiData => {
        // now modify your result to contain the data from API
        // apiData will be an array containing results from API calls
        // **note:** forkJoin returns the data in the same sequence the requests were sent, so doing a `forEach` works here
        values.forEach((eachOriginalValue, index) => {
          eachOriginalValue.data = apiData[index].name; // use the key in which you get data from API
        });
        return values;
      }),
      catchError(e => {
        console.log("error", e);
        return of(e);
      })
    );
  })
);
See a working example here: https://stackblitz.com/edit/forkjoinwithmergemap?file=src%2Fapp%2Fapp.component.ts
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Is there any other simpler method to calculate the number of lines passing through the following dots?
Given a diagram as follows.
Only dots on AD are collinear, so are those on BC.
The objective is to find the numbers of lines passing through the dots.
Note that any overlapped lines are considered as a single line.
My attempt:
The four non-collinear dots (on AB and CD) contribute $C_2^4=6$ lines.
Every collinear dot on BC can be paired with a single dot on CD. It contributes $3\times 2=6$ lines.
Every collinear dot on BC can be paired with a single dot on AB. It contributes $3\times 2=6$ lines.
Every collinear dot on AD can be paired with a single dot on CD. It contributes $4\times 2=8$ lines.
Every collinear dot on AD can be paired with a single dot on AB. It contributes $4\times 2=8$ lines.
A single dot on AD can paired with a single dot on BC. It contributes $4\times 3=12$ lines.
There is a line passing through dots on BC.
There is a line passing through dots on AD.
There are 48 lines in total.
Question
Is there any simpler method?
A:
A simpler approach would be to count all pairs of dots, and then subtract multiply counted lines.
If no two dots were collinear, we'd have ${11 \choose 2} = 55$ lines.
Now, this counts the line $AD$ ${4 \choose 2} = 6$ times, so we need to subtract 5.
Also, we've counted the line $BC$ ${3 \choose 2} = 3$ times, so we need to subtract 2.
Overall, we have ${11 \choose 2} - ({4 \choose 2} - 1) - ({3 \choose 2} - 1) = 55 - 5 - 2 = 48$ lines.
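The same count can be double-checked with a few lines of Python; the only inputs assumed from the figure are the group sizes (11 dots total, 4 collinear on AD, 3 collinear on BC):

```python
from math import comb

total_dots = 11
collinear_groups = [4, 3]  # dots on AD and on BC

# Count every pair of dots, then collapse each collinear group's
# C(k, 2) pairs into the single line they all lie on.
lines = comb(total_dots, 2) - sum(comb(k, 2) - 1 for k in collinear_groups)
print(lines)  # 48
```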
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Thesis publication
I have been trying to publish my Master's thesis; I am sure it's an idea worth spreading. I would like to know if I can publish the thesis here on this site, and if so, how can I go about it?
A:
This is a question-and-answer site, not a publishing platform.
If you want to publish your master’s thesis, the best way is to convert it to a proper journal article (or conference paper, if that is a thing in your field). Your supervisor is probably the best person to advise you on this.
A:
In addition to the amazing answer by @Wrzlprmft, I would like to make you aware of a few things:
Many publishers, mostly 'unknown' and 'business oriented' ones, take advantage of graduate students' vulnerable situations by claiming that they will publish the whole thesis as a book.
One such example is Lambert Academic Publishing (LAP), whose status as a legitimate publisher is highly questionable. Even I was about to get trapped by them. Have a look at the following questions:
Is Lambert Academic Publishing a reputable company?
Be alert: LAP s[c|p]am
Your Thesis and the Predatory Publisher (you must read this)
Another example is OmniScriptum. Read the blog here.
Be careful, it is your hard work produced as a thesis. You might lose copyright, ownership, and then left with nothing. Look how much they are earning from your hard work (a sample e-shopping site).
If you want slightly faster publication (i.e. the time to get published), target high tier conferences in your field. (I am assuming here that you belong to Engineering fields)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Dynamic output from python subprocess module
how can I achieve output dynamically using the subprocess module (while the external program keeps running) in Python? The external program from which I want to get output dynamically is ngrok.
ngrok keeps running as long as my program is running, but I need output while the process is running so that I can extract the newly generated "forwarding url".
When I try to do:
cmd = ['ngrok', 'http', '5000']
output = subprocess.Popen(cmd, stdout=subprocess.PIPE, bufsize=1)
it keeps storing output in buffers
A:
I know this is a duplicate, but I can't find any relevant threads about this now. All I get is output.communicate().
So here's a snippet that might be useful:
import subprocess

cmd = ['ngrok', 'http', '5000']
process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)

while process.poll() is None:
    print(process.stdout.readline())

print(process.stdout.read())
process.stdout.close()
This would output anything the process outputs, through your script into your output. It does so by looking for a newline character before printing.
This piece of code would work, if it weren't for the fact that ngrok uses ncurses and/or hogs the output for its own user/thread, much like when SSH asks for a password when you do ssh user@host.
process.poll() checks if the process has an exit code (if it's dead); if not, it continues to loop and print anything from the process's stdout.
There are other (better) ways to go about it, but this is the bare minimum I can give you without it getting complicated really fast.
For instance, process.stdout.read() could be used in conjunction with select.select() to achieve better buffered output where newlines are scarce. Because if a \n never comes, the above example might hang your entire application.
There are a lot of buffer traps here that you need to be aware of before you do manual things like this. Otherwise, use process.communicate() instead.
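When the child writes ordinary newline-terminated output (which, as noted, ngrok's ncurses UI does not), a simpler line-buffered loop is enough. This is a sketch with a small Python child standing in for ngrok so it runs anywhere:

```python
import subprocess
import sys

# Stand-in for the real command, e.g. ['ngrok', 'http', '5000']
cmd = [sys.executable, '-c', 'print("line 1"); print("line 2")']

proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT, text=True, bufsize=1)
for line in proc.stdout:   # yields each line as soon as it arrives
    print(line, end='')
proc.wait()
proc.stdout.close()
```

Iterating over `proc.stdout` in text mode with `bufsize=1` delivers lines as the child flushes them, without buffering the whole output first.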
Edit: To get around the hogging/limitation of I/O used by ngrok, you could use pty.fork() and read the child's stdout via os.read:
#!/usr/bin/python
## Requires: Linux
## Does not require: Pexpect
import errno
import pty, os
from os import fork, waitpid, execv, read, write, kill

def pid_exists(pid):
    """Check whether pid exists in the current process table."""
    if pid < 0:
        return False
    try:
        kill(pid, 0)
    except OSError as e:
        return e.errno == errno.EPERM
    else:
        return True

class exec():
    def __init__(self):
        self.run()

    def run(self):
        command = [
            '/usr/bin/ngrok',
            'http',
            '5000'
        ]

        # PID = 0 for child, and the PID of the child for the parent
        pid, child_fd = pty.fork()

        if not pid:  # Child process
            # Replace child process with our ngrok process
            execv(command[0], command)

        while True:
            output = read(child_fd, 1024)
            print(output.decode('UTF-8'))

            lower = output.lower()
            # example input (if needed)
            if b'password:' in lower:
                write(child_fd, b'some response\n')

        waitpid(pid, 0)

exec()
There's still a problem here, and I'm not quite sure what or why that is.
I'm guessing the process is waiting for a signal/flush somehow.
The problem is that it's only printing the first "setup data" of ncurses, meaning it wipes the screen and sets up the background color.
But this would at least give you the output of the process; the print(output.decode('UTF-8')) line is where you can inspect what that output is.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Can't parse json data
I am making a phonegap app for android. I am receiving a json string and trying to parse it . It's a very small data.
{"result":{ "node":"32"}
}
If I use alert(request.responseText),
the result is displayed, but if I return this request.responseText to the calling function and collect it there in a variable like var x = somefunction();, then x contains undefined.
var jsonObj = sendPostRequest(url,nurl);
console.log(jsonObj+""); // GIVES UNDEFINED HERE AT THIS LINE BUT SAME STATEMENT WORKS IN sendPostRequest()
if(jsonObj){
var json = JSON.parse(jsonObj);
console.log(json);
document.getElementById("gnodei").innerHTML = json.result.wEventId;
}
I can collect this response in a variable y inside somefunction(), but on returning this data to the calling function nothing reaches there. I use the above json data x just below it, but it doesn't work.
Please suggest.
edit: `function sendPostRequest(url,nurl){
var request = new XMLHttpRequest();
request.onreadystatechange = function() {
if (request.readyState == 4) {
if (request.status == 200 || request.status == 0){
console.log(request.status);
//alert(request.responseText);
var txt= request.responseText;
console.log(txt);
return txt;
}
}
}
request.open("POST",url, true);
request.setRequestHeader("Content-type","application/x-www-form-urlencoded");
request.send(nurl);
}`
The ajax call. Also there are multiple functions calling this ajax call after preparing their url and nurl values, so I need this function as it is. I only need to know how to get back the response in my calling function.
Thanks
A:
Solved it: I created a function to be passed as a callback to the sendPostRequest(url, nurl) method. This callback is called only after the ajax call finishes with a success status. The callback function assigns the data, which can be used thereafter.
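A sketch of that approach (a minimal illustration; the element id and function names follow the original code, and the exact callback body is up to you):

```javascript
// Parsing is pulled into its own small function so it can run once the
// data actually exists.
function parseNode(jsonText) {
  var json = JSON.parse(jsonText);
  return json.result.node;
}

// sendPostRequest now takes a callback; nothing useful can be *returned*
// from here, because send() returns before the server has answered.
function sendPostRequest(url, nurl, callback) {
  var request = new XMLHttpRequest();
  request.onreadystatechange = function () {
    if (request.readyState == 4 && (request.status == 200 || request.status == 0)) {
      callback(request.responseText); // hand the data back when it arrives
    }
  };
  request.open("POST", url, true);
  request.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
  request.send(nurl);
}

// Usage:
// sendPostRequest(url, nurl, function (txt) {
//   document.getElementById("gnodei").innerHTML = parseNode(txt);
// });
```

The key change is that the response is only ever consumed inside the callback, never returned from the ajax wrapper.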
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Does the question reputation increase apply to Area 51?
Recently, the reputation for question upvotes to the asker increased from 5 to 10 reputation per question.
On Area 51, I still seem to have 5 Internet points per question. Area 51 doesn't have answers. The Area 51 FAQ states that question upvotes gain 5 reputation points.
Is this reputation increase meant to cover Area 51?
A:
As far as I know the answer to this question is no. Area 51 runs under a different software and is not affected by this change.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Reuse the Action Bar in all the activities of app
I am a newbie to Android and I was wondering if someone could guide me on how to reuse the action bar in all of my Android activities. As far as I have explored, I found out that we have to make a BaseActivity class and extend it in the Activity where we want to reuse it, and also make an xml layout and include it in our activity xml file. I have finished the BaseActivity part. Now I am somewhat confused about framing the xml part and including it. I know how to merge and include a layout, but in the case of the Action Bar, what necessary steps need to be taken? Any help would be appreciated.
This is my BaseMenuActivity:
public class BaseMenuActivity extends Activity{
ActionBar actionBar;
@Override
protected void onCreate(Bundle savedInstanceState) {
// TODO Auto-generated method stub
super.onCreate(savedInstanceState);
actionBar = getActionBar();
actionBar.setDisplayHomeAsUpEnabled(true);
actionBar.setDisplayShowCustomEnabled(true);
actionBar.setIcon(R.drawable.ic_social_share);
LayoutInflater inflator = (LayoutInflater) this
.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
View v = inflator.inflate(R.layout.apptitle, null);
actionBar.setDisplayShowTitleEnabled(false);
actionBar.setCustomView(v);
}
}
Manifest part for the same:
<activity
android:name="com.example.travelplanner.MenuActivity"
android:screenOrientation="portrait" android:configChanges="orientation|keyboardHidden"
android:uiOptions="splitActionBarWhenNarrow"
android:label="WeTrip"
android:theme="@style/MyTheme" >
Style.xml part:
<style name="MyTheme" parent="@android:style/Theme.Holo.Light">
<item name="android:actionBarStyle">@style/MyActionBar</item>
</style>
<style name="MyActionBar" parent="@android:style/Widget.Holo.Light.ActionBar">
<item name="android:background">#F0F1F1</item>
<item name="android:backgroundSplit">#000000</item>
</style>
MenuActivity.java
public class MenuActivity extends BaseMenuActivity implements OnItemClickListener{
ActionBar actionBar;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
//requestWindowFeature(Window.FEATURE_NO_TITLE);
setContentView(R.layout.activity_menu);
}
@Override
public boolean onCreateOptionsMenu(Menu menu) {
// Inflate the menu; this adds items to the action bar if it is present.
getMenuInflater().inflate(R.menu.menu, menu);
SearchView searchView = (SearchView) menu.findItem(R.id.menu_action_search).getActionView();
return true;
}
@Override
public boolean onOptionsItemSelected(MenuItem item) {
// TODO Auto-generated method stub
switch(item.getItemId()){
case R.id.menu_action_search:
{}
case R.id.menu_action_locate:
{}
case R.id.menu_action_mail:
{}
case R.id.menu_action_call:
{}
}
return super.onOptionsItemSelected(item);
}
}
A:
Well, your code looks good. If you want to reuse exactly the same ActionBar, with the same icons, menus, and generally the same functionality, in every activity, you could add the code:
@Override
public boolean onCreateOptionsMenu(Menu menu) {
// Inflate the menu; this adds items to the action bar if it is present.
getMenuInflater().inflate(R.menu.menu, menu);
SearchView searchView = (SearchView) menu.findItem(R.id.menu_action_search).getActionView();
return true;
}
@Override
public boolean onOptionsItemSelected(MenuItem item) {
// TODO Auto-generated method stub
switch(item.getItemId()){
case R.id.menu_action_search:
{}
case R.id.menu_action_locate:
{}
case R.id.menu_action_mail:
{}
case R.id.menu_action_call:
{}
}
return super.onOptionsItemSelected(item);
}
in your BaseMenuActivity class and your actionbar will be populated the same for every activity that extends from it.
Update:
To create a menu layout you should create a folder 'menu' in your resources folder res/menu.
Then create a xml file inside called : some_title.xml
A typical example of a menu xml file is like below:
<?xml version="1.0" encoding="utf-8"?>
<menu xmlns:android="http://schemas.android.com/apk/res/android" >
<item
android:id="@+id/menu_search"
android:actionViewClass="com.actionbarsherlock.widget.SearchView"
android:icon="@drawable/abs__ic_search"
android:showAsAction="ifRoom|withText|collapseActionView"
android:title="@string/menu_action_search"/>
<item
android:id="@+id/menu_sort"
android:icon="@drawable/content_sort_icon"
android:showAsAction="always"
android:title="@string/menu_action_sort">
</item>
</menu>
and then inflate that file :
@Override
public boolean onCreateOptionsMenu(Menu menu) {
// Inflate the menu; this adds items to the action bar if it is present.
getMenuInflater().inflate(R.menu.some_title, menu);
SearchView searchView = (SearchView) menu.findItem(R.id.menu_action_search).getActionView();
return true;
}
For some more reading this tutorial is very very good on using ActionBar:
http://www.vogella.com/tutorials/AndroidActionBar/article.html
|
{
"pile_set_name": "StackExchange"
}
|
Q:
how to prevent duplicate child record to database for each parent using codeigniter?
I have a product variant table in MySQL, and I want to prevent duplicate child records for each parent product id:
--------------------------------------------------------
id | product_id | category_id | variant_value_id | title
--------------------------------------------------------
1  | 11         | 2           | 7                |
2  | 11         | 3           | 7                |
This is my MySQL table structure.
I want to have a unique variant id for each category id.
This is my controller:
foreach($this->input->post('product_variant') as $value){
$variant_data = array(
'product_id' => $id,
'category_id' => $this->input->post('product_category'),
'variant_group_id' => $this->Product_model->get_variant_group_by_variant_id($value)[0]->group_id,
'variant_value_id' => $value,
'product_variant_title' => $this->input->post('product_name').' '.$this->Product_model->get_variant_group_by_variant_id($value)[0]->value,
'mrp_price' => '',
'price' =>'',
'slug' => url_title($this->input->post('product_name').'-'.$this->Product_model->get_variant_group_by_variant_id($value)[0]->value, 'dash', true),
'status' =>'',
);
if($this->Product_model->add_product_variant($variant_data)){
$this->session->set_flashdata('product_variant_added', 'Product Variant Created Succesfully');
}
}
Please help. If you need more info, I will provide it.
A:
If you want to prevent a duplicate productId, then do a check first:
$q = $this->db->select('ProductName')
->from('Table')
->where(array('ProductId' => $ProductId, 'variant_value_id' => $variant_value_id))->get(); //Select query to check the productId
if($q->num_rows() == 0){ //Finally checks if the Id doesn't exist
//Insert goes here
}
else
{
//Already exists
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
ValueError: need more than 3 values to unpack
I'm struggling with this ValueError. It happens when I define an init_db function that resets the database and adds some data from a local file (hardwarelist.txt). The code was:
def init_db():
"""Initializes the database."""
db = get_db()
with app.open_resource('schema.sql', mode='r') as f:
db.cursor().executescript(f.read())
with open('hardwarelist.txt') as fl:
for eachline in fl:
(model,sn,user,status)=eachline.split(',')
db.execute('insert into entries (model,sn,user,status) values (?, ?, ?, ?)',
(model,sn,user,status))
fl.close()
db.commit()
And the error was:
File "/home/ziyma/Heroku_pro/flaskr/flaskr/flaskr.py", line 48, in init_db
(model,sn,user,status)=eachline.split(',')
ValueError: need more than 3 values to unpack
What should I do?
A:
One of my mentors told me "If half your code is error handling, you aren't doing enough error handling." But we can leverage python's exception handling to make the job easier. Here, I've reworked your example so that if an error is detected, a message is displayed, and nothing is committed to the database.
When you hit the bad line, its printed and you can figure out what's wrong from there.
import sys
def init_db():
"""Initializes the database."""
db = get_db()
with app.open_resource('schema.sql', mode='r') as f:
db.cursor().executescript(f.read())
with open('hardwarelist.txt') as fl:
try:
for index, eachline in enumerate(fl):
(model,sn,user,status)=eachline.strip().split(',')
db.execute('insert into entries (model,sn,user,status) values (?, ?, ?, ?)',
(model,sn,user,status))
db.commit()
except ValueError as e:
print("Failed parsing {} line {}: {} ({})".format('hardwarelist.txt',
index, eachline.strip(), e), file=sys.stderr)
# TODO: Your code should have its own exception class
# that is raised. Your users would catch that exception
# with a higher-level summary of what went wrong.
raise
You should expand that exception handler to catch exceptions from your database code so that you can catch more errors.
As a side note, you need to strip the line before splitting to remove the \n newline character.
UPDATE
From the comments, here's an example on splitting multiple forms of the comma. In this case its Unicode FULLWIDTH COMMA U+FF0C. Whether you can enter unicode directly into your python scripts depends on your text editor and etc..., but that comma could be represented by "\uff0c" or ",". Anyway could can use a regular expression to split on multiple characters.
I create the text using unicode escapes
>>> text='a,b,c\uff0cd\n'
>>> print(text)
a,b,c,d
and I can write the regex with excapes
>>> re.split('[,\uff0c]', text.strip())
['a', 'b', 'c', 'd']
or by copy/paste of the alternate comma character
>>> re.split('[,,]', text.strip())
['a', 'b', 'c', 'd']
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How do I reset the application data after each test with Xcode 7 UI Testing?
Apple introduced new UI Testing in Xcode 7, but I have a problem: whenever the test launches the app, it starts with the data that the application had before. This means tests cannot be independent and can be influenced by other tests.
It is not possible to access user defaults and other data, because the application that is running the tests has no access to the bundle of the tested application. Scripts are also out of the question because they can only be run before or after testing. And there is no way to execute an NSTask on iOS to run a script before each test suite.
Is there a way to reset the application data before each test suite?
A:
Not in a straightforward manner. But there are some workarounds.
The XCUIApplication can set command line arguments and environment variables that can alter your application’s behavior.
A simple example of your main.m file:
int main(int argc, char * argv[]) {
#if DEBUG
// Reset all data for UI Testing
@autoreleasepool {
for (int i = 1; i < argc; ++i) {
if (0 == strcmp("--reset-container", argv[i])) {
NSArray *folders = NSSearchPathForDirectoriesInDomains(NSLibraryDirectory, NSUserDomainMask, YES);
NSFileManager *fm = [[NSFileManager alloc] init];
for (NSString *path in folders) {
[fm removeItemAtPath:path error:nil];
}
// Also remove documents folder if necessary...
}
}
}
#endif
@autoreleasepool {
return UIApplicationMain(argc, argv, nil,
NSStringFromClass([AppDelegate class]));
}
}
And in -[XCTestCase setUp] add:
XCUIApplication *app = [[XCUIApplication alloc] init];
app.launchArguments = @[@"--reset-container"];
[app launch];
A:
If preparing the app for UITests inside application:didFinishLaunchingWithOptions: is ok in your case, then you can do the following:
In setUp() method of your test class extending XCTestCase add following code:
let application = XCUIApplication()
application.launchEnvironment = ["UITESTS":"1"]
application.launch()
Then, in application:didFinishLaunchingWithOptions: you can check for the flag using following code:
func application(_ application: UIApplication,
didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey : Any]? = nil) -> Bool {
let env = ProcessInfo.processInfo.environment
if let uiTests = env["UITESTS"], uiTests == "1" {
// do anything you want
}
// further set up code
}
Of course if that is an option for you.
NOTE: Instead of setting "1" as argument for "UITESTS" flag, you can specify different values for different test cases - or even test methods (but in such case, you should launch the application from test method, not setUp())
NOTE 2: I suggest wrapping the code dealing with the flag into #if DEBUG block.
A:
I managed to reset the application data by using some private headers to access the springboard and the settings app.
First I added a Run Script phase to remove it when the tests start:
/usr/bin/xcrun simctl uninstall booted com.mycompany.bundleId
And after that I use the solution I wrote here to remove it using a test script that runs on the tearDown calls to reset it after every test.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How can you sort a const vector?
If I have a const vector defined in my class, how can I go about sorting it?
Attempting to sort a const vector will give errors since I'm changing the contents of a const vector.
A:
You don't. If you need to modify it... well then it shouldn't be const. The two goals are in direct conflict with one another.
Instead of asking for a solution to a problem that doesn't make sense, tell us what you are actually trying to accomplish here. Are you trying to return a vector from a method that you don't want the caller to be able to modify? In that case, create a getter method and return a const vector&
#include <vector>
class Foo
{
public:
// clients can't change this vector directly
const std::vector<int>& get_vector() const { return _vec; }
// you can still create an interface that allows
// mutation of the vector in a safe way, or mutate
// the vector internally.
void push_back( int i ) { _vec.push_back( i ); }
private:
std::vector<int> _vec;
};
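If the goal is just to read the elements in sorted order, one common pattern (a sketch, independent of the class above) is to copy the const vector and sort the copy:

```cpp
#include <algorithm>
#include <vector>

// Return a sorted copy; the const source vector is never modified.
std::vector<int> sorted_copy_of(const std::vector<int>& v) {
    std::vector<int> copy(v);             // copying is allowed on const data
    std::sort(copy.begin(), copy.end());  // sorting mutates only the copy
    return copy;
}
```

Calling something like sorted_copy_of(foo.get_vector()) then gives the caller ordered data without ever touching the class's internal state.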
|
{
"pile_set_name": "StackExchange"
}
|
Q:
PDO - Iteratively Binding Variables
I am trying to create a function to iteratively bind variables. This is what I have so far:
function prepareQuery($db, $query, $args) {
// Returns a prepared statement
$stmt = $db->prepare($query);
foreach ($args as $arg => $value) {
$stmt->bindParam($arg, $value);
}
return $stmt;
}
This is how I'm using it:
$stmt = prepareQuery($db, "SELECT * FROM `Licenses` WHERE `verCode`=:verCode", Array(":verCode" => $verCode));
$verCode = "some_string";
$stmt->execute();
while ($info = $stmt->fetch()) {
print_r($info);
}
Though it doesn't print anything. I know the database entry exists, and the same query works from PHPMyAdmin. So, I think it's just a problem in how my function tries to create the bindings. How can I fix this? Thanks!
A:
Do not create a function to iteratively bind variables. PDO can do it already
function prepareQuery($db, $query, $args) {
$stmt = $db->prepare($query);
$stmt->execute($args);
return $stmt;
}
If it doesn't print anything, then it didn't find anything. As simple as that. (In your original function the likely culprit is that bindParam() binds by reference: the placeholder gets tied to the foreach loop variable $value, and $verCode was only assigned after prepareQuery() ran, so the statement executed with a stale value.)
You don't even need this prepare query function actually. Just amend PDO very little like this
class myPDOStatement extends PDOStatement
{
function execute($data = array())
{
parent::execute($data);
return $this;
}
}
$user = 'root';
$pass = '';
$dsn = 'mysql:charset=utf8;dbname=test;host=localhost';
$opt = array(
PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,
PDO::ATTR_EMULATE_PREPARES => TRUE,
PDO::ATTR_STATEMENT_CLASS => array('myPDOStatement'),
);
$pdo = new PDO($dsn, $user, $pass, $opt);
and you'll be able to write such a neat chain:
$sql = "SELECT * FROM `Licenses` WHERE `verCode`=:verCode";
$code = "some_string";
$data = $pdo->prepare($sql)->execute([':verCode' => $code])->fetchAll();
foreach ($data as $info) {
print_r($info);
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Display result of function in notification title (Swift 3, iOS10)
I'm triggering a local notification from my Swift 3 app in ios10 and, while I can get the notification to fire ok, I'm trying to pass a variable (which happens to be a value returned from a function) as part of the title of the notification. Instead of displaying the contents of that variable it just shows as "(Function)".
I'm using that same variable as the text of a label within the app itself without any problems.
I've created the following class and class function to schedule the notification:
class notificationController {
class func scheduleNotification(at date: Date, header: String, body: String) {
let calendar = Calendar(identifier: .gregorian)
let components = calendar.dateComponents(in: .current, from: date)
let newComponents = DateComponents(calendar: calendar, timeZone: .current, month: components.month, day: components.day, hour: components.hour, minute: components.minute)
let trigger = UNCalendarNotificationTrigger(dateMatching: newComponents, repeats: false)
let content = UNMutableNotificationContent()
content.title = NSString.localizedUserNotificationString(forKey: header, arguments: nil)
content.body = body
content.sound = UNNotificationSound.default()
let request = UNNotificationRequest(identifier: "textNotification", content: content, trigger: trigger)
UNUserNotificationCenter.current().removeAllPendingNotificationRequests()
UNUserNotificationCenter.current().add(request) {(error) in
if let error = error {
print("error: \(error)")
}
}
}
}
I'm then defining the variables to pass to this function in the viewDidLoad() function of the view controller associated with the main screen, and then calling the scheduleNotification function:
let dateFormatter = DateFormatter()
dateFormatter.dateFormat = "dd/MM/yyyy HH:mm:ss"
let strDate = "25/01/2017 21:21:00"
let notificationDate = dateFormatter.date(from: strDate)
let dayNumber = String(describing: (playerArray[0]?.calculateDays)!)
let header = "Welcome to day \(dayNumber) of your life"
var body : String = ""
if(infoArray.count > 0) {
body = (infoArray[0]?.text)!
} else {body = "Doesn't it feel great to be alive?"}
notificationController.scheduleNotification(at: notificationDate!, header: header, body: body)
PlayerArray is an array containing one Player CoreData object. I've created the following function within the Player class to work out how many days old the player is:
public class Player: NSManagedObject {
func calculateDays() -> Int {
let currentCalendar = Calendar.current
guard let dob = currentCalendar.ordinality(of: .day, in: .era, for: self.dob as! Date) else {
return 0
}
guard let today = currentCalendar.ordinality(of: .day, in: .era, for: Date()) else {
return 0
}
let days = today - dob
return days
}
}
Any reason that the dayNumber variable won't display? I feel like it's something obvious that I've missed. Any ideas would be magic.
A:
you need to call the function with ()
change this:
let dayNumber = String(describing: (playerArray[0]?.calculateDays)!)
to this:
let dayNumber = String(describing: (playerArray[0]?.calculateDays())!)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Inner join on a meta_value field
I would like to do an inner join on a meta_value field.
In the following code, I need to join 'enrolment' to the 'wp_woocommerce_order_itemmeta' (OIM) table where oim.meta_key = 'enrolment_id' and oim.meta_value equals the id field in the 'enrolment' table.
The structure of the wp_woocommerce_order_itemmeta is
id
meta_key
meta_value
The structure of the enrolment table is:
id
firstname
surname
etc
Note in my example I pivot the OIM table fields from rows to columns.
This is what I have so far:
SELECT
O.order_item_name,
MAX(CASE WHEN OIM.meta_key = 'class' THEN OIM.meta_value END) AS 'class',
MAX(CASE WHEN OIM.meta_key = 'fee-type' THEN OIM.meta_value END) AS 'fees',
MAX(CASE WHEN OIM.meta_key = 'enrolment_id' THEN OIM.meta_value END) AS 'eid'
FROM wp_woocommerce_order_items O
INNER JOIN wp_woocommerce_order_itemmeta OIM ON O.order_item_id = OIM.order_item_id
INNER JOIN enrolment E ON ...
A:
Include the meta key in the ON clause.
...
INNER JOIN enrolment e
ON e.id = oim.meta_value
AND oim.meta_key = 'enrolment_id'
...
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Clojure import versus :import
Reading the Clojure API for import, I see that :import in the ns macro is preferred to import; however, when I'm coding using swank/slime/emacs, I can't C-x C-e the (ns ...) s-expression to get the deps into the repl, but using (import ...) I can.
What's the reason why :import is preferred over import, and is there a fast way to import the deps from a (ns ...) s-expr in my .clj file into the repl? (The same question can be generalized to :use and :refer... thanks)
A:
Here is my preferred workflow:
Start Swank/Slime
Open the file I want to work on in Emacs
Do C-c C-k to compile and load the file in question
Do , followed by i, then type the name of the namespace you're working on and press Enter
Now your Slime REPL should be in the namespace you're working on, and you can add to the ns declaration at the top and just C-c C-k as you change things (including your ns imports).
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Equivalent form of derivative as limit?
I was traditionally taught the formula for the derivative to be:
$$ \dfrac{df}{dx} = \lim_{\Delta x \to 0} \dfrac{f(x + \Delta x) - f(x)}{\Delta x}$$
Is this an equally valid form? How can I see one way or the other?
$$ \dfrac{df}{dx} \overset{?}{=} \lim_{\Delta x \to 0} \dfrac{f(x) - f(x - \Delta x)}{\Delta x}$$
A:
Let $\Delta y=-\Delta x$. Then
$$\lim_{\Delta x \to 0} \dfrac{f(x) - f(x - \Delta x)}{\Delta x}= \lim_{\Delta y \to 0} \dfrac{f(x) - f(x + \Delta y)}{-\Delta y}$$
Now move the $-$ to the numerator.
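Carrying out that step:
$$\lim_{\Delta y \to 0} \dfrac{f(x) - f(x + \Delta y)}{-\Delta y} = \lim_{\Delta y \to 0} \dfrac{f(x + \Delta y) - f(x)}{\Delta y} = \dfrac{df}{dx},$$
which is exactly the first definition with $\Delta y$ in the role of $\Delta x$, so the two forms agree.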
|
{
"pile_set_name": "StackExchange"
}
|
Q:
What is Map.Entry interface?
I came across the following code :
for(Map.Entry<Integer,VmAllocation> entry : allMap.entrySet()) {
// ...
}
What does Map.Entry<K,V> mean? What is the entry object?
I read that the method entrySet returns a set view of the map. But I do not understand this initialization in the for-each loop.
A:
Map.Entry is a key/value pair that forms one element of a Map. See the docs for more details.
You typically would use this with:
Map<A, B> map = . . .;
for (Map.Entry<A, B> entry : map.entrySet()) {
A key = entry.getKey();
B value = entry.getValue();
}
If you need to process each key/value pair, this is more efficient than iterating over the key set and calling get(key) to get each value.
A:
Go to the docs: Map.Entry
Map.Entry is an object that represents one entry in a map. (A standard map has 1 value for every 1 key.) So, this code will iterator over all key-value pairs.
You might print them out:
for(Map.Entry<Integer,VmAllocation> entry : allMap.entrySet()) {
System.out.print("Key: " + entry.getKey());
System.out.println(" / Value: " + entry.getValue());
}
A:
An entry is a key/value pair. In this case, it is a mapping of Integers to VmAllocation objects.
As the javadoc says
A map entry (key-value pair). The Map.entrySet method returns a collection-view of the map, whose elements are of this class. The only way to obtain a reference to a map entry is from the iterator of this collection-view. These Map.Entry objects are valid only for the duration of the iteration; more formally, the behavior of a map entry is undefined if the backing map has been modified after the entry was returned by the iterator, except through the setValue operation on the map entry.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to make list items clickable upon append?
(I'll try to keep the question simple for now, since I don't really know how to go about this.)
I have to change unordered list elements inside an html file by
1.) Create a DOM fragment ( for each of the new <li> ) in javascript.
2.) Remove the original batch of <li> in the ul.
3.) Append each new DOM fragment into the ul
4.) Make each of the new <li> elements clickable again
Clicking on a <li> element removes the contents of the ul and replaces it with a new batch
(the batch varies from 0 <li> to 6 <li> elements (depending on which element you clicked))
The first problem I'm having is that there is no way to make multiple elements clickable in one go (for loops don't work), so at the moment I'm just repeating the functions I have.
$( "#choices" ).children().eq(0).click(function() {
selectChoice(allChoices, $( "#choices" ).children().eq(0).text());
});
$( "#choices" ).children().eq(1).click(function() {
selectChoice(allChoices, $( "#choices" ).children().eq(1).text());
});
// more li elements to make clickable
Second, when you remove a li element and put back a new element, you have to make it clickable again, but since you don't know the number of elements and for loops don't work, I can't add a function upon creation. Also, not all elements follow the same format, so simply modifying the existing li isn't an option.
Is there a simple way that I don't know about that does what I want it to do..?
A:
You're using jQuery, right?
Look into the on method. You should be able to do something like
$('#choices').on('click', 'li', function(ev) {
... do your thing ...
});
As new li's are added under #choices, this event binding will be automatically added to the new li. http://api.jquery.com/on/
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Supremum and infimum of series
The question asks to show the equality/inequality of supremum/infimum of series.
Suppose that $\{a_{n,m}\}$ are nonnegative real numbers for all $n,m \in \mathbb{N}$.
Suppose that for each $n$, $m \mapsto a_{n,m}$ is a nondecreasing function of $m$, i.e., $a_{n,m_1} \leq a_{n,m_2}$ when $m_1 \leq m_2$. Show that
$$\sup_{m \in \mathbb{N}} \sum_{n = 1}^\infty a_{n,m} = \sum_{n =1 }^\infty \sup_{m \in \mathbb{N}} a_{n,m} $$
Regardless of whether or not the sides are finite or infinite.
Suppose that for each $n$, $m \mapsto a_{n,m}$ is nonincreasing, i.e., $a_{n,m_1} \leq a_{n,m_2}$ for all $n \in \mathbb{N}$ and all $m_2 \leq m_1$. Show that
$$ \sum_{n =1 }^\infty \inf_{m \in \mathbb{N}} a_{n,m} \leq \inf_{m \in \mathbb{N}} \sum_{n = 1}^{\infty} a_{n,m} $$
Give an example to show that the inequality is strict.
My attempt in part 1 is to show $\sup_{m \in \mathbb{N}} \sum_{n = 1}^\infty a_{n,m} \leq \sum_{n =1 }^\infty \sup_{m \in \mathbb{N}} a_{n,m}$ and also $\sum_{n =1 }^\infty \sup_{m \in \mathbb{N}} a_{n,m} \leq \sup_{m \in \mathbb{N}} \sum_{n = 1}^\infty a_{n,m}$. I am not sure how to apply the concept of supremum/infimum to equality/inequality of series. Any hints would be very helpful.
A:
For any $n,m$, $a_{n,m} \leq \sup_r\,a_{n,r}$. Therefore $\sum_n{a_{n,m}} \leq \sum_n{\sup_r\,a_{n,r}}$ and the rest is textbook definition of sup to get the first inequality.
For the reverse inequality, show that for every $N \geq 1$, $\sum_{n=1}^N{\sup_r\,a_{n,r}} \leq \sup_r\,\sum_{n=1}^N{a_{n,r}}$ (show that in this case the sups are limits), then that the LHS is not greater than $\sup_r\,\sum_{n \geq 1}{a_{n,r}}$, and conclude.
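Spelling that step out: for fixed $N$, since each $r \mapsto a_{n,r}$ is nondecreasing, its sup is its limit (possibly $+\infty$), and limits pass through the finite sum of nonnegative terms:
$$\sum_{n=1}^{N}\sup_r\,a_{n,r} = \sum_{n=1}^{N}\lim_{r\to\infty} a_{n,r} = \lim_{r\to\infty}\sum_{n=1}^{N} a_{n,r} = \sup_r\,\sum_{n=1}^{N} a_{n,r} \leq \sup_r\,\sum_{n \geq 1} a_{n,r}.$$
Letting $N \to \infty$ on the left then gives $\sum_{n \geq 1}\sup_r\,a_{n,r} \leq \sup_r\,\sum_{n \geq 1} a_{n,r}$.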
For 2), you just need to show as above that the RHS is greater than any finite sum in the LHS.
For the counter-example, find for instance an example where all $a_{n,m}$ converge to zero, but all the $\sum_n{a_{n,m}}$ are infinite.
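One concrete choice (among many): take $a_{n,m} = \frac{1}{m}$ for all $n$. Each $m \mapsto a_{n,m}$ is nonincreasing, $\inf_m a_{n,m} = 0$ for every $n$, yet $\sum_n a_{n,m} = \infty$ for every $m$, so
$$\sum_{n=1}^\infty \inf_{m \in \mathbb{N}} a_{n,m} = 0 < \infty = \inf_{m \in \mathbb{N}} \sum_{n=1}^\infty a_{n,m}.$$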
|
{
"pile_set_name": "StackExchange"
}
|
Q:
node.js handling of post req.body is not working
I spent so much time looking at this... I am just following a Udemy tutorial where the instructor used exactly what is below. BUT when I run it, req.body is empty EVEN though I am sending it from the source (I tried from both nodeman and Insomnia). I am just posting { "name":"test" }... and it's not getting the req.body...
I console logged just req and do not see the param 'body'... Can someone please shed some light here? SO frustrated.
const express = require('express');
const app = express();
const port = 8002;
app.post('/', (req, res) => {
console.log(req.body);
});
app.listen(port, () => {
console.log(`port : ${port}`);
})
A:
Try using body-parser for your req.body.
First install the dependency npm install body-parser and then try executing the below code:
const express = require('express');
const app = express();
const bodyParser= require('body-parser')
const port = 8002;
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({extended: false}));
app.post('/', (req, res) => {
console.log(req.body);
});
app.listen(port, () => {
console.log(`port : ${port}`);
})
For more documentation refer: body-parser-documentation
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Cancel gwt rpc call
In this example there is a nice description of how to implement timeout logic using Timer#schedule. But there is a pitfall. We have 2 rpc requests: the first makes a lot of computation on the server (or maybe retrieves a large amount of data from the database), and the second is a tiny request that returns results immediately.
If we make the first request, we will not receive results immediately; instead we will hit the timeout, and after the timeout we make the second tiny request. Then the abortFlag from the example will be true, so we can retrieve the results of the second request, but we can also still receive the results of the first request that timed out earlier (because the AsyncCallback object of the first call was not destroyed).
So we need some way of cancelling the first rpc call after the timeout occurs. How can I do this?
Let me give you an analogy.
You, the boss, made a call to a supplier, to get some product info. Supplier say they need to call you back because the info would take some time to be gathered. So, you gave them the contact of your foreman.
Your foreman waits for the call. Then you told your foreman to cancel the info request if it takes more than 30 minutes.
Your foreman thinks you are bonkers because he cannot cancel the request, because he does not have an account that gives him privilege to access the supplier's ordering system.
So, your foreman simply ignores any response from the supplier after 30 minutes. Your ingenious foreman sets up a timer in his phone that ignores the call from the supplier after 30 minutes. Even if you killed your foreman, cut off all communication links, the vendor would still be busy servicing your request.
There is nothing on the GWT client-side to cancel. The callback is merely a javascript object waiting to be invoked.
To cancel the call, you need to tell the server-side to stop wasting cpu resources (if that is your concern). Your server-side must be programmed to provide a service API which when invoked would cancel the job and return immediately to trigger your GWT callback.
You can refresh the page, and that would discard the page request and close the socket, but the server side would still be running. And when the server side completes its tasks and tries to perform a http response, it would fail, saying in the server logs that it had lost the client socket.
It is a very straight forward piece of reasoning.
Therefore, it falls into the design of your servlet/service, how a previous request can be identified by a subsequent request.
Cascaded Callbacks
If request 2 is dependent on the status of request 1, you should perform a cascaded callback: if request 2 should run only after request 1 succeeds, place request 2 inside the onSuccess block of request 1's callback, rather than submitting the two requests one after another.
Otherwise, your timer should trigger request 2, and request 2 would have two responsibilities:
tell the server to cancel the previous request
get the small piece of info
|
{
"pile_set_name": "StackExchange"
}
|
Q:
PHP Youshido GraphQL issue with nested fields
I am using version v1.4.2.18. The library can be found here: https://github.com/Youshido/GraphQL
I am trying to accomplish the following:
query {
articleSummary(id:1) {
title,
body,
article {
id
}
}
}
I have an ArticleSummaryField.php:
class ArticleSummaryField extends AbstractField
{
public function build(FieldConfig $config)
{
$config->addArgument('id', new NonNullType(new StringType()));
}
public function getType()
{
return new ArticleSummaryType();
}
public function resolve($value, array $args, ResolveInfo $info)
{
return [
'title' => 'test title',
'body' => 'test body',
'article' => $args['id']
];
}
}
Then the ArticleSummaryType.php:
class ArticleSummaryType extends AbstractObjectType
{
public function build($config)
{
$config
            ->addField('title', new StringType())
            ->addField('body', new StringType())
            ->addField('article', new ArticleField());
}
}
Then the ArticleField.php has the getType method return the ArticleType which has the id field.
However what i am getting is an error:
Fatal error: Uncaught Error: Call to undefined method ArticleField::getNullableType() in .../vendor/youshido/graphql/src/Execution/Processor.php on line 135
What seems to be happening is that when $targetField->getType() on line 135 in src/Execution/Processor.php is called its returning the ArticleField class, not the ArticleType class.
I would expect that to return the class as declared in the 'getType' method on the ArticleField class.
Am i going about this wrong for nesting fields? Or is there a bug in the library?
A:
To accomplish this you only pass the Field class as the first argument.
class ArticleSummaryType extends AbstractObjectType
{
public function build($config)
{
$config
            ->addField('title', new StringType())
            ->addField('body', new StringType())
            ->addField(new ArticleField());
}
}
Then in the field class you can override getName to set the name for the field as needed or it will use the class name as the field name.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Why might open source companies use crowd funding?
Often when I see open source companies I notice one of their sources of funding is crowd funding? My question is why is this?
Are there not any better methods of income that they could use?
A:
Why might open source companies use crowd funding?
1. Open Source company
First, what is an open source company? In this context, I presume it means a company that develops and distributes open source software. Red Hat is considered to be a well-known open source company, right? Canonical (of Ubuntu fame) is another. But does it end there? Google is an open source company by that definition (Android, Angular, and thousands of other things are open source... right?). So is IBM (Eclipse, OSGi, parts of Java, parts of its z/OS code; IBM donated much code to Apache, etc.). Microsoft is open source - it's opening the .NET platform, etc.
I presume, at this point, that you are starting to disagree with me.... how can Oracle be an open source company - it does own MySQL and distribute it, though... so, it qualifies - even Java is open - and that's Oracle.
This concept of an open source company is broken.
Should the concept be narrowed? An open source company cannot sell closed-source products? Well, there goes Red Hat, Pentaho, Mandriva, SuSE, Canonical, etc. All the poster children are gone... leaving things like... Apache, Eclipse, hmmm, not much else.
Then again, those are not companies, they are non-profits, or foundations...
2. Crowd Funding
What is crowd funding? Kickstarting is crowd-funding, but it is actually an investment, and a contract. A Kickstarter campaign is a pre-order with a specified delivery, and a penalty if it fails. It is not exactly a donation. This sort of seed money is a common thing to raise, and is a form of distributed venture capital. A lot of people invest, and they have preferential returns on that investment (early/cheap access to cool things).
Kickstarting is crowdfunding, but it is almost always associated with something tangible, a book, a device, a bowl-of-soup, or whatever. Not software.
Further, companies have complicated rules about accepting donations (but not so complicated when accepting investments or pre-orders). Companies are not charitable organisations, so they can't just say "donate here" without first making it clear that they are "for profit", and they cannot (in the US, at least) issue tax benefits, etc.
On the other hand, individuals, and not-for-profit charitable organizations can accept donations (like Mediawiki, EFF, OSI, Apache, Eclipse, etc.). Then again, they are not companies.
3. The Common case
Most companies whose primary business is related to open source software distribution (like Red Hat, Pentaho, SuSE, Canonical, Elastic, etc.) make their money (and profits) from value-add - whether it be support, a premium experience, an enhanced management system, better scalability, or whatever.
Where they don't make their money primarily from that, they make it from advertising, selling space on their pages.
4. So, who does use crowd-funding for software?
Individuals and non-profits. It is easy to throw up a "donate" paypal button. But, you should be aware, that most of those people get good value from advertising too, and also from esoteric things like "wishlists", commissions, contract-work, and so on. If you are the lead developer on an open source project, you will likely be the person contracted to apply a custom hack for some company, or train people, etc. It is not a direct payment, but indirect.
Or, you become an "evangelist" at Google, etc.
5. Conclusion
No company makes significant money from crowd funding.
Individuals and non-profits may, but there is no reason to believe it is significant... the bulk of value comes in from being able to put it on your resume, and selling your knowledge and skills, not your product.
A:
Why is crowdfunding used? It works pretty well. Open sourcing something nearly always means there is a community interested in it, and that community can be enthusiastic enough to fund the open sourced thing through crowdfunding.
But there are a lot of other methods for making money. My examples for these methods are non-companies, but companies can make money with these methods as organization or persons can.
Most open source projects have a donate button or something similar. That is not far from actual organized crowdfunding. Some projects run fundraisers from time to time; Wikipedia does, for instance.
Some classic open source projects release the open sourced software, but offer paid services around it: support, documentation, installation and configuration help and so on.
Some projects sell premium versions of the product. That is common for open books or music: the digital version is free, while a printed version or a CD costs something. This is one way Cory Doctorow makes his money.
Merchandise is another source of income for an open source project. Selling coffee cups, T-shirts, posters, mousepads and so on with the logo of the project can bring in a lot of money. XKCD finances itself mostly with merchandise.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Power sets : Do the relations between P(A) and P(B) always mirror the relations between the sets A and B?
If I am correct it is true that:
(1) "$P(A)$ is included in $P(B)$" implies "$A$ is included in $B$".
(2) "$P(A) = P(B)$" implies "$A = B$".
Might I conclude from this that the power sets of two sets always have the same relations as these two sets have with one another?
Are there classical counterexamples to this (hasty) generalization?
I can think of this as a counterexample :
The fact that $A$ and $B$ are disjoint does NOT imply that $P(A)$ and $P(B)$ are disjoint.
Attempt to prove (1) using the theorem: "$\{ x \}$ belongs to $P(S)$" $\Longleftrightarrow$ "$x$ belongs to $S$".
Let's admit that : $P(A)$ is included in $P(B)$.
Now, suppose (in view of refutation) that $A$ is not included in $B$.
It means that there exists an $x$ such that $x$ belongs to $A$ but not to $B$, and consequently that there is an $x$ such that $\{ x \}$ belongs to $P(A)$ but not to $P(B)$. Hence there would be a set $S$ such that $S$ belongs to $P(A)$ but not to $P(B)$. This contradicts our hypothesis according to which $P(A)$ is included in $P(B)$.
Conclusion: "$P(A)$ is included in $P(B)$" implies "$A$ is included in $B$".
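For comparison, (1) also has a direct one-line proof that avoids contradiction, using the fact that every set is an element of its own power set:

```latex
A \in P(A) \subseteq P(B) \implies A \in P(B) \implies A \subseteq B
```

Applying this inclusion argument in both directions immediately gives (2) as well.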
A:
A very natural relation to consider would be that of the cardinality of the sets. We certainly have that $|A| = |B|$ implies that $|P(A)| = |P(B)|$. You may ask whether or not the converse is true, so does $|P(A)| = |P(B)|$ imply $|A| = |B|$?
This turns out to be not necessarily true, it is in fact independent from ZFC. That means that there can be set-theoretic universes where $|P(A)| = |P(B)|$ implies $|A| = |B|$ (e.g. when GCH holds), but there can also be set-theoretic universes where this fails. So then there are $A$ and $B$ such that $|P(A)| = |P(B)|$ while $|A| \neq |B|$. See also this answer: https://math.stackexchange.com/a/244873/661457.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
I uninstalled a package but not all disk space was cleared. What do I do?
So I went to install a LaTeX distribution using sudo apt-get install texlive-full, and hit yes without really looking at the size of the package. It told me it required 3,550MB, but when all was said and done my /root partition was 5.6GB smaller, so I went to uninstall it using sudo apt-get purge tex*.
After that, for some reason, my /root partition is still 1.8GB smaller than it was immediately before installing the package. What gives? What can I do to get that 1.8GB back?
Let me know if more info is needed!
Thanks :)
A:
Try running:
sudo apt clean && sudo apt autoclean && sudo apt autoremove
apt clean empties the local package cache in /var/cache/apt/archives (the downloaded .deb files stay there after installation, which accounts for the extra space), apt autoclean removes only obsolete cached packages, and apt autoremove uninstalls dependencies that were pulled in automatically but are no longer needed.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to prevent funny characters on Home Page
This is a challenge for the sharpest and most experienced WordPress specialists
I am building a site in the WP Twenty Fifteen theme.
I created a bare-bones minimum home page at http://bit.ly/1zlmpKS
The first time you hit the page, it looks fine. When you refresh the browser you get a series of funny characters:
How can I fix this?
Amendment:
Not sure how or if this relates, but when I am logged into WordPress (as Super Admin) the home page never displays the funky characters. I can refresh dozens of times and it displays fine every time. But the minute I log out of WP, the home page shows the funny characters. I have an inner page on the new site as well, and it renders fine, consistently, all the time.
Background & History:
I started out installing the Quark theme. Then I created my own "starter" child theme for the site. I could never get the child theme to take until, say hours later, all of a sudden my child theme started working. So I continued trying to tweak the child theme just to see if I could use it as a building block for my new site. Admittedly, this is my first crack at using the Quark starter theme and my first attempt to learn and utilize a child theme. Well after days of frustration, I thought maybe the W3 Total Cache plugin was the issue. So I network-deactivated it. Still no luck. Then I pulled out the child theme and the Quark theme and simply went with the WP out-of-the box Twenty Fifteen theme. That's when the funny characters surfaced. So I believe somehow and someway the W3 Total Cache plugin is messing with this network and the network site. I have dozens of other sites on my Multisite network install and none of them are misbehaving. W3 Total Cache is a standard plugin I activate for each new network I setup on my server. I have a shortlist of standard plugins I use across all networks and W3 Total Cache is one of them. Oh well...I am at a total loss.
Something else that's peculiar:
When you hit the home page - it's fine. When you refresh the home page it renders the funny characters. But then if you wait for 5-10 mins and hit the home page again it renders fine. What might that mean in the WordPress world?
Next Trial & Error Test:
I just re-installed Quark theme and network activated my Child Theme. The theme works -- so it is not a theme issue. The funny characters still appear on the home page on refresh under the Quark Child Theme. So it is some sort of unexplained caching issue, I think.
A:
Solved it thanks to @OnethingSimple and @milo.
It was the W3 Total Cache plugin. I removed it from my server and everything works fine.
Here's the path I had to take.
Step 1 - went into my network admin on the site having the problem to confirm that W3TC was not active or network-activated -- and it was not.
Step 2 - walked through each of my other networks (over a dozen) to Network Deactivate the W3TC plugin -- again it is one of the standard plugins I install on every new network.
Here was an interesting finding:
Each network site showed this error screen. But I proceeded to Network Deactivate the W3TC plugin anyway on each and every one of my networks.
Step 4 - After network deactivating the plugin, I proceeded to Delete it from the network admin panel on my primary site. When I did, I got this error on screen:
W3 Total Cache Error: some files appear to be missing or out of place. Please re-install plugin or remove /home/abcdefg/public_html/wp-content/advanced-cache.php.W3 Total Cache Error: some files appear to be missing or out of place. Please re-install plugin or remove /home/abcdefg/public_html/wp-content/db.php.W3 Total Cache Error: some files appear to be missing or out of place. Please re-install plugin or remove /home/abcdefg/public_html/wp-content/object-cache.php.
So I looked at the Drop-ins plugins as shown below:
Step 5 - next I FTP'd onto my server and deleted these 3 files:
./wp-content/advanced-cache.php
./wp-content/db.php
./wp-content/object-cache.php
Step 6 - I tried to delete these 2 directories via FTP and could not. So I telnet'd into server as root and deleted them:
./wp-content/cache
./wp-content/w3tc-config
Step 7 - I then removed this line of code from wp-config.php:
define('WP_CACHE', true); // Added by W3 Total Cache
Step 8 - I then removed these blocks of directives from .htaccess:
# 2015-02-05 updated as part of install of W3 Total Cache Plugin
# BEGIN W3TC Page Cache core
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteCond %{HTTP_COOKIE} w3tc_preview [NC]
RewriteRule .* - [E=W3TC_PREVIEW:_preview]
RewriteCond %{REQUEST_METHOD} !=POST
RewriteCond %{QUERY_STRING} =""
RewriteCond %{REQUEST_URI} \/$
RewriteCond %{HTTP_COOKIE} !(comment_author|wp\-postpass|w3tc_logged_out|wordpress_logged_in|wptouch_switch_toggle) [NC]
RewriteCond "%{DOCUMENT_ROOT}/wp-content/cache/page_enhanced/%{HTTP_HOST}/%{REQUEST_URI}/_index%{ENV:W3TC_PREVIEW}.html" -f
RewriteRule .* "/wp-content/cache/page_enhanced/%{HTTP_HOST}/%{REQUEST_URI}/_index%{ENV:W3TC_PREVIEW}.html" [L]
</IfModule>
# END W3TC Page Cache core
# 2014-09-18 added as part of install of W3 Total Cache Plugin
# BEGIN W3TC Browser Cache
<IfModule mod_deflate.c>
<IfModule mod_headers.c>
Header append Vary User-Agent env=!dont-vary
</IfModule>
AddOutputFilterByType DEFLATE text/css text/x-component application/x-javascript application/javascript text/javascript text/x-js text/html text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon application/json
<IfModule mod_mime.c>
# DEFLATE by extension
AddOutputFilter DEFLATE js css htm html xml
</IfModule>
</IfModule>
# END W3TC Browser Cache
Step 9 - refreshed the home page on the site numerous, numerous times. NO MORE FUNNY CHARACTERS !!!
|
{
"pile_set_name": "StackExchange"
}
|
Q:
JavaScript Creating array in object and push data to the array
I'm new to programming. I'm trying React and have a function addComment which is executed when a user adds a comment to a news item. At that moment I need to create a comments property (an array) and push the inputCommentValue value to it. But right now I only rewrite element 0 of the array and can't add a new element.
Can you please tell me where to put push method? Thank you!
var ARTICLES = [{
title: "sit amet erat",
text: "nam dui proin leo odio porttitor id consequat in consequat ut nulla sed accumsan"
}, {
title: "pulvinar sed",
text: "velit id pretium iaculis diam erat fermentum justo nec condimentum"
}]
addComment(index, inputCommentValue){
ARTICLES = [...ARTICLES], ARTICLES[index].comments=[inputCommentValue];
this.setState({ARTICLES:ARTICLES});
}
A:
assuming that data exist in component's state , then handler will look something like that
addComment(index, inputCommentValue){
// copy array , for not mutate state
let ARTICLES = [...this.state.ARTICLES];
// check if comments not exist
if(!ARTICLES[index].comments) ARTICLES[index].comments=[];
// add new comment to array
ARTICLES[index].comments.push(inputCommentValue);
// update component with new articles
this.setState({ARTICLES:ARTICLES});
}
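For completeness, the core of the fix (copy the array, ensure comments exists, then append) can also be written as a pure function and tried outside React. This is an illustrative sketch in plain Node, not tied to the component above:

```javascript
// Framework-free sketch of the same immutable update:
// returns a new articles array instead of mutating the input.
function addComment(articles, index, comment) {
  return articles.map((article, i) =>
    i === index
      ? { ...article, comments: [...(article.comments || []), comment] }
      : article
  );
}

const articles = [{ title: 'a' }, { title: 'b' }];
const next = addComment(addComment(articles, 0, 'first!'), 0, 'second!');
console.log(next[0].comments); // ['first!', 'second!']
console.log(articles[0].comments); // undefined -- original untouched
```

In the component you would then call this.setState({ARTICLES: addComment(this.state.ARTICLES, index, inputCommentValue)}).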
|
{
"pile_set_name": "StackExchange"
}
|
Q:
What are these containers called for waste?
There are so many ways to call these containers for waste. (correct me if some of them might sound weird/unnatural to use)
garbage can, trash can, rubbish can, pedal can, garbage bin, trash bin, rubbish bin, pedal bin
I'm totally confused. I know some of them might mean the same thing.
But please tell me which is which based on the following pictures by order? Please also let me know where you are from. (US, UK, Canada, Australia, etc...)
1:
2:
3:
4:
5:
A:
Let's just consider the container where we throw our garbage.
In the Continental U.S., the two most common generic terms for these containers are trash can and garbage can. If you don't want to specify, these will always be understood for what they are: a place to throw your garbage.
Whether you will hear garbage or trash is a regional and age-related matter (ref 1).
Garbage can is most likely to be heard in Southwestern New England (All of New York state and Connecticut), New Jersey, parts of Pennsylvania, Michigan and Illinois, and then all Northern States from Wisconsin to Oregon as well as parts of Utah and Nevada.
In all other parts of the U.S., including all Southern States, most people will say trash can.
In addition, according to Josh Katz¹
Since the 1950s, trash can has become increasingly common in American speech. Two in three people born in the 1990s would say trash can over garbage can.
As for the several pictures shown by the OP, 1,2 and 3 are trash cans, Number 4 can be found in supermarkets and retail stores under the name of roller bins
Number 5 is a trash cart.
Different models abound so it's not always easy to tell 4 from 5.
In Britain, it's a completely different matter and dustbin is one of the generic terms.
Katz, Josh. Speaking American.
A:
US native speaker here, East Coast mostly. Everything below is from my personal experience, not published sources.
Trash is assorted unwanted debris, but garbage includes food waste and other things that start to smell or attract vermin/germs if they sit; garbage pails or cans usually have a lid to contain odors. When I was a kid in the 1960s we had both a trash can and a garbage pail, because, I think, the city collected them separately.
Rubbish is mostly a British term.
(I now live in a city where there are separate collections for trash, recyclables, food scraps for compost, and yard waste, but most American cities are doing well to separate trash and recyclables.)
As for your examples:
Trash can, depicted with a plastic trash bag or can liner. Indoor or outdoor use, but too big and awkward for most home uses. Also called a trash barrel.
Trash can, garbage can, or garbage pail. Indoor use; I'd expect to see this type in a kitchen, a doctor's office, or perhaps a bathroom.
This might also be a "diaper pail" used to collect soiled diapers.
And a term you didn't mention: "disposal bin"; for example when discussing the proper location for used feminine hygiene products in a public restroom.
Trash can (with swing top.)
(Wheeled) Trash bin or maybe trash cart. I'd expect to see this type outdoors next to a house; you empty the indoors trash can into this as needed and then wheel it to the curb once a week to be picked up by the automated garbage truck. (The bar on the opposite side from the wheels is hooked by a claw from the truck's mechanism.)
Trash bin or trash cart. You might see something like this in an office or apartment building, or it might also be a utility cart that people use when working on their garden.
(Note: I've added terms from the comments where it was immediately obvious from my personal experience that I should have put them in without being prompted.)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
ABBYY OCR SDK: I am trying a sample script for recognizing business cards but not getting any output
I am trying to use the OCR SDK in PHP from ABBYY.com for recognizing business cards. I have the following code just to check out how it works. When I execute the code I get a blank output. Where could I be going wrong in the code?
$applicationId = "MyBusinessCardReader";
$password = "password";
$filename = "businesscard.jpg";
$localDir = dirname(__FILE__);
$url = "http://cloud.ocrsdk.com/processBusinessCard";
$c = curl_init();
curl_setopt($c, CURLOPT_URL, $url);
curl_setopt($c, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($c, CURLOPT_USERPWD, "$applicationId:$password");
curl_setopt($c, CURLOPT_POST, 1);
$post_array = array(
"my_file" => "@$localDir$filename"
);
curl_setopt($c, CURLOPT_POSTFIELDS, $post_array);
$response = curl_exec($c);
curl_close($c);
echo "<pre>";
echo $response;
echo "</pre>";
The sample business card image can be seen at http://test.goje87.com/vangal/businesscard.jpg
A:
I don't know much about the Abbyy SDK. But before you try any OCR engine on an image, you should always make sure to...
...crop all borders with different coloring,
...scale the image so you get your text to a (virtual) size of at least 10 pt per 300 DPI.
I tried Tesseract v3.01 against your original sample, and it didn't find anything.
Then I applied an ImageMagick command to crop the borders and scale the image up like this:
convert \
businesscard.jpg \
-crop 440x200+30+120 \
-scale 180% \
cropped+scaled-businesscard.jpg
to get this picture:
This already lets Tesseract's command line recognize most of the text (it fails on @ and .):
tesseract cropped+scaled-businesscard.jpg bcard && cat bcard.txt
Tesseract Open Source OCR Engine v3.01 with Leptonica
Fe/<70"
MIKE FARAG
PH 913 284 6455
EM milzeocreatefervoncom
Tw 0mil<efarag01
createfervoncom
One could most likely get Tesseract's recognition rate close to 100% if one were to...
... enhance the picture quality for OCR purposes: increase contrast and convert to pure grayscale ('binarization');
...'train' Tesseract on the specific font used in this document.
I assume that you can make Abbyy's life easier by similar measures...
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Joined-True cuts data to be plotted
I have imported data (which are so many numbers and if necessary please see http://pastebin.com/RgWLJmTS) and I plotted them with
ListLogPlot[{data}, Frame -> True, Joined -> True]
I get this figure:
But when I use
ListLogPlot[{data}, Frame -> True, Joined -> False]
I have
There is a problem with Joined which causes a cut in the plot. How can I have joined data in ListLogPlot without anything being cut off?
A:
I cannot reproduce your problem with only ListLogPlot but addition of PlotRange -> All option should solve it:
ListLogPlot[data, Frame -> True, Joined -> True, PlotRange -> All]
I can reproduce your problem only using Show:
Show[ListLogPlot[data, Frame -> True, Joined -> True], PlotRange -> {Automatic, 10^-3}]
This behavior is not a bug: it is documented (under the "Details" section) behavior of the default PlotRange -> Automatic option:
Options[ListLogPlot, PlotRange]
{PlotRange -> Automatic}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Google Maps OpenGL 3D makes my system unstable (hangup)
Running Ubuntu 12.04LTS. I tried to load Google Maps with the OpenGL 3D new functionality. My system does weird things graphically and still reports system errors after reboot. What shall I do?
BTW, I cannot access maps any more, if I do the system crashes again.
A:
I had the same problem. I am on 12.04 as well. Visiting the URL below will restore your access to Google Maps.
http://maps.google.com/?vector=0
See here for more details:
|
{
"pile_set_name": "StackExchange"
}
|
Q:
basic haskell : Converting Type to Int
If I have a type, e.g. an insurance number, which is an Int.
Is there any way I can convert the insurance number to an Int for use in a comparing function?
intNI :: NI -> Int
intNI x = Int (x)
A:
If, as I suspect, NI is defined as
type NI = Int
then you can just say
intNI :: NI -> Int
intNI x = fromIntegral x
or, after eta-conversion:
intNI :: NI -> Int
intNI = fromIntegral
On the other hand, it seems that
data NI = NI Int
in which case the right way to go is pattern matching, like so:
intNI (NI x) = x
This will extract the x bit out of NI x and return it.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Ubuntu 18.04: Cannot extract downloaded tar.xz file with a single command
I am trying to download and extract tar.xz file with a single line command. However, it doesn't consistently work for all of the links. I can manually download and extract it.
I am able to download glibc and extract without any issue.
curl https://ftp.gnu.org/gnu/glibc/glibc-2.26.tar.xz | tar -xJ -C ${PWD} --strip-components 1
When it comes to download the following file, it surprisingly fails. I couldn't find out what makes the difference. This is the single link I have seen so far which fails.
Any idea why the following command fails?
curl https://releases.linaro.org/components/toolchain/binaries/7.4-2019.02/arm-linux-gnueabihf/gcc-linaro-7.4.1-2019.02-x86_64_arm-linux-gnueabihf.tar.xz | tar -xJ -C ${PWD} --strip-components 1
It fails with the following message.
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
xz: (stdin): File format not recognized
tar: Child returned status 1
tar: Error is not recoverable: exiting now
Any help would be appreciated.
A:
Use curl -L to follow redirects:
curl -L https://releases.linaro.org/components/toolchain/binaries/7.4-2019.02/arm-linux-gnueabihf/gcc-linaro-7.4.1-2019.02-x86_64_arm-linux-gnueabihf.tar.xz | tar -xJ --strip-components 1
To debug such issues, you can run the curl command by itself and observe that in this case, it doesn't return any data:
$ curl https://releases.linaro.org/components/toolchain/binaries/7.4-2019.02/arm-linux-gnueabihf/gcc-linaro-7.4.1-2019.02-x86_64_arm-linux-gnueabihf.tar.xz
$
You can then further run curl -v to see what's going on. It shows a HTTP/1.1 302 Found with an empty body and a redirection to a different file, which is how you know to add -L to follow the redirection.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Identify story wherein special glass slows time
Sorry I don't recall much about this at all. A friend, who had recommended many other books/authors I ended up liking, told me about a story in which a scientist develops some kind of special glass, that had some property like slowing down light by a factor of a trillion, or something else odd like that, adventure ensuing. He's since moved away, and now I'm wondering what this book might have been.
A:
Yes, that's certainly Bob Shaw's Slow Glass stories, starting with “Light of Other Days” and collected in Other Days, Other Eyes.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to scroll to a desired location in a view?
Since my ScrollView is very long, and users need to scroll down for a long time to arrive at the desired TableLayout, I would like to ask whether there is a function such that if I design a button, pressing it will automatically scroll down to the desired TableLayout location.
I do not know what is this named in technical term. Could somebody tell me how to achieve it? Would there be examples of codes? Many thanks in advance!
A:
Refer to Can I scroll a ScrollView programmatically in Android?
In your activity:
Button b = (Button) findViewById(R.id.button1);
b.setOnClickListener(new View.OnClickListener() {
public void onClick(View v) {
myScrollView.scrollTo(0, y_index);
}
});
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Javascript code loading issue with IE7 and IE8
The website in question is cpavalet.com
My javascript loads fine on all browsers except ie7 and ie8. For a few hours now I have been trying to debug the issue to no avail.
I am using jQuery and modernizr, and using the supersized library for the full-sized background images.
The weird thing is, sometimes when I load the page it loads correctly, other times the javascript doesn't work on the page. I think it has to do with the order the scripts are loaded. I am using document.ready for my jquery scripts.
Can anyone shed some light as to why it's not working correctly in ie7 and ie8? I am currently using ie8 for testing purposes.
I am using javascript for: image slider on home page, full-size background images, back to top link to slide to the top, and form validation.
Thanks!
Corey
A:
My guess as to why it wasn't loading correctly was partly correct. I thought that it was because of the order I had my scripts being run on the page.
The answer is that I was using the defer attribute when loading my scripts. When I removed the defer attributes, the scripts started working correctly again.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Firebase ReyclerView adapter not displaying on activity start
I have a RecyclerView that displays a list of messages. When the activity starts the RecyclerView is not populated by the Firebase database. However if I click the back button or an EditText the RecyclerView displays all the items correctly. I have tried manually updating it by using the notifyDataSetChanged method. I have seen other threads, but didn't see anyone with a final solution.
Similar Problem
This is the code called during onStart.
RecyclerView menu = (RecyclerView) findViewById(R.id.chat_view);
LinearLayoutManager manager = new LinearLayoutManager(this);
manager.setReverseLayout(false);
menu.setAdapter(messageAdapter);
menu.setLayoutManager(manager);
messageAdapter.notifyDataSetChanged();
A:
You need to call messageAdapter.notifyDataSetChanged(); inside the Firebase request callback, not outside it, because the request is asynchronous: Firebase does not block the main thread until it finishes.
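As a sketch (not runnable standalone; it assumes the Firebase Android SDK, and messagesRef, messages, and messageAdapter are hypothetical names for your DatabaseReference, backing list, and adapter), moving the notify call inside the callback looks like this:

```java
// Sketch only -- assumes the Firebase Android SDK is on the classpath and a
// Message model class exists; all identifiers here are hypothetical.
messagesRef.addValueEventListener(new ValueEventListener() {
    @Override
    public void onDataChange(DataSnapshot snapshot) {
        messages.clear();
        for (DataSnapshot child : snapshot.getChildren()) {
            messages.add(child.getValue(Message.class));
        }
        // Notify here, inside the async callback, once the data has arrived.
        messageAdapter.notifyDataSetChanged();
    }

    @Override
    public void onCancelled(DatabaseError error) {
        Log.w("ChatActivity", "load failed", error.toException());
    }
});
```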
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Why is TensorExpand so slow for vector operations?
I would like to expand the following tensor expression:
$Assumptions = {(a1 | b | c | d | e | f | g | h) ∈
Vectors[3,
Reals], (m | Ε1 | Ε2) ∈
Reals};
exp=-32 I m^2 a1.(-b + d)\[Cross](-b + f) (b + c).(b + e) +
16 I m Ε2 a1.(b + e) (b +
c).(-b + d)\[Cross](-b + f) +
16 I Ε1 Ε2 a1.(b + e) (b +
c).(-b + d)\[Cross](-b + f) -
16 I m Ε2 (b + c).a1\[Cross](-b + f) (-b + d).(b +
e) - 16 I Ε1 Ε2 (b +
c).a1\[Cross](-b + f) (-b + d).(b + e) +
32 I m^2 a1.(b + c)\[Cross](b + e) (-b + d).(-b + f) +
32 I m Ε1 a1.(b + c)\[Cross](b + e) (-b + d).(-b +
f) - 32 I m Ε2 (b + c).a1\[Cross](b + e) (-b +
d).(-b + f) -
32 I Ε1 Ε2 (b +
c).a1\[Cross](b + e) (-b + d).(-b + f) +
16 I m Ε2 (b + c).(-b + f) (-b +
d).a1\[Cross](b + e) +
16 I Ε1 Ε2 (b + c).(-b + f) (-b +
d).a1\[Cross](b + e) +
16 I Ε1 Ε2 (b + c).(b + e) (-b +
d).a1\[Cross](-b + f) +
16 I m Ε2 a1.(b + c) (-b +
d).(b + e)\[Cross](-b + f) +
16 I Ε1 Ε2 a1.(b + c) (-b +
d).(b + e)\[Cross](-b + f) +
16 I m Ε2 (b + c).a1\[Cross](-b + d) (b + e).(-b +
f) + 16 I Ε1 Ε2 (b +
c).a1\[Cross](-b + d) (b + e).(-b + f) +
16 I m Ε2 (b + c).(-b + d) (b +
e).a1\[Cross](-b + f) +
16 I Ε1 Ε2 (b + c).(-b + d) (b +
e).a1\[Cross](-b + f) -
16 I a1.(g + h) (b + c).(b + e) (-g + h).(-b + d)\[Cross](-b + f) -
16 I a1.(-g + h)\[Cross](-b + f) (-b + d).(b + e) (g + h).(b + c) -
32 I a1.(-g + h)\[Cross](b + e) (-b + d).(-b + f) (g + h).(b + c) +
32 I a1.(-g + h) (-b + d).(b + e)\[Cross](-b + f) (g + h).(b + c) +
16 I a1.(-g + h)\[Cross](-b + d) (b + e).(-b + f) (g + h).(b + c) +
16 I a1.(-b + d)\[Cross](-b + f) (-g + h).(b + e) (g + h).(b + c) -
16 I a1.(b + e) (-g + h).(-b + d)\[Cross](-b + f) (g + h).(b + c) +
16 I a1.(-g + h)\[Cross](-b + f) (b + c).(b + e) (g + h).(-b + d) +
16 I a1.(-g + h)\[Cross](-b + f) (b + c).(-b + d) (g + h).(b + e) -
16 I a1.(-g + h)\[Cross](-b + d) (b + c).(-b + f) (g + h).(b + e) +
32 I a1.(-g + h) (b + c).(-b + d)\[Cross](-b + f) (g + h).(b + e) +
32 I a1.(-g + h)\[Cross](b + c) (-b + d).(-b + f) (g + h).(b + e) -
16 I a1.(-b + d)\[Cross](-b + f) (-g + h).(b + c) (g + h).(b + e) +
16 I a1.(b + c) (-g + h).(-b + d)\[Cross](-b + f) (g + h).(b + e) -
16 I a1.(-g + h)\[Cross](-b + d) (b + c).(b + e) (g + h).(-b + f) -
16 I a1.(-b + d)\[Cross](-b + f) (b + c).(b +
e) (Ε1 Ε2 - (g + h).(-g + h));
TensorExpand[exp]
However, it is really slow on my laptop. It takes over 12 hours. I think even doing it manually could be faster.
Does anyone know the reason?
A:
I also don't know the reason for the poor performance of TensorExpand, but as a possible workaround I may suggest using FeynCalc. The package has its roots in the field of the High Energy Physics, that is, it is not a toolbox for tensor algebra like xAct and company. Yet, the current development version already has a built-in support for 3-vectors, which was added there to accomodate for nonrelativistic field theories.
After having installed the development version according to the wiki via
Import["https://raw.githubusercontent.com/FeynCalc/feyncalc/master/install.m"]
InstallFeynCalc[InstallFeynCalcDevelopmentVersion -> True]
we can do the following
vecs = {a1, b, c, d, e, f, g, h};
expTmp = (exp /. Dot -> dot /. Cross -> cross /. {
dot[x_, cross[y_, z_]] /; SubsetQ[vecs, Variables[{x, y, z}]] :>
CLC[][x, y, z],
dot[x_, y_] /; SubsetQ[vecs, Variables[{x, y}]] :> CSP[x, y]
}) // ExpandScalarProduct[#, EpsEvaluate -> True] & // FCE
Here I converted your original expression into the FeynCalc notation using CLC (a shortcut for the Cartesian Levi-Civita tensor) and CSP (a shortcut for the Cartesian scalar product). Mathematically CLC[][a,b,c] corresponds to $\varepsilon^{ijk} a^i b^j c^k$, while CSP[a,b] stands for $a^i b^i$. The explicit Cartesian indices are suppressed for technical reasons, to avoid the expensive canonicalization. However, you can also define a standalone $\varepsilon^{ijk}$ via CLC[i,j,k] and 3-vector $a^i$ as CV[a,i]. ExpandScalarProduct is FeynCalc function for expanding scalar product, while FCE converts the result from the internal notation used by the package (FeynCalcInternal) to the more concise external notation (FeynCalcExternal).
Then we can convert the result back into your original notation via
res = expTmp /. {
CSP[x_, y_] /; SubsetQ[vecs, Variables[{x, y}]] :> dot[x, y],
CLC[][x_, y_, z_] /; SubsetQ[vecs, Variables[{x, y, z}]] :>
dot[x, cross[y, z]]
} /. cross -> Cross /. dot -> Dot
which yields
(16*I)*m*\[CapitalEpsilon]2*(-a1 . Cross[b, e] - a1 . Cross[b, f] - a1 . Cross[e, f])*
(-b . b - b . c + b . d + c . d) + (16*I)*\[CapitalEpsilon]1*\[CapitalEpsilon]2*
(-a1 . Cross[b, e] - a1 . Cross[b, f] - a1 . Cross[e, f])*
(-b . b - b . c + b . d + c . d) + (16*I)*\[CapitalEpsilon]1*\[CapitalEpsilon]2*
(-a1 . Cross[b, d] + a1 . Cross[b, f] - a1 . Cross[d, f])*
(b . b + b . c + b . e + c . e) -
(32*I)*m^2*(a1 . Cross[b, d] - a1 . Cross[b, f] + a1 . Cross[d, f])*
(b . b + b . c + b . e + c . e) +
(16*I)*m*\[CapitalEpsilon]2*(a1 . Cross[b, d] + a1 . Cross[b, e] - a1 . Cross[d, e])*
(-b . b - b . c + b . f + c . f) + (16*I)*\[CapitalEpsilon]1*\[CapitalEpsilon]2*
(a1 . Cross[b, d] + a1 . Cross[b, e] - a1 . Cross[d, e])*
(-b . b - b . c + b . f + c . f) + (16*I)*m*\[CapitalEpsilon]2*(a1 . b + a1 . e)*
(-b . Cross[c, d] + b . Cross[c, f] + b . Cross[d, f] + c . Cross[d, f]) +
(16*I)*\[CapitalEpsilon]1*\[CapitalEpsilon]2*(a1 . b + a1 . e)*(-b . Cross[c, d] + b . Cross[c, f] +
b . Cross[d, f] + c . Cross[d, f]) -
(16*I)*m*\[CapitalEpsilon]2*(-a1 . Cross[b, c] - a1 . Cross[b, f] - a1 . Cross[c, f])*
(-b . b + b . d - b . e + d . e) - (16*I)*\[CapitalEpsilon]1*\[CapitalEpsilon]2*
(-a1 . Cross[b, c] - a1 . Cross[b, f] - a1 . Cross[c, f])*
(-b . b + b . d - b . e + d . e) -
(16*I)*(-a1 . Cross[b, g] + a1 . Cross[b, h] + a1 . Cross[f, g] -
a1 . Cross[f, h])*(b . g + b . h + c . g + c . h)*
(-b . b + b . d - b . e + d . e) -
(32*I)*m*\[CapitalEpsilon]2*(a1 . Cross[b, c] - a1 . Cross[b, e] - a1 . Cross[c, e])*
(b . b - b . d - b . f + d . f) - (32*I)*\[CapitalEpsilon]1*\[CapitalEpsilon]2*
(a1 . Cross[b, c] - a1 . Cross[b, e] - a1 . Cross[c, e])*
(b . b - b . d - b . f + d . f) +
(32*I)*m^2*(-a1 . Cross[b, c] + a1 . Cross[b, e] + a1 . Cross[c, e])*
(b . b - b . d - b . f + d . f) +
(32*I)*m*\[CapitalEpsilon]1*(-a1 . Cross[b, c] + a1 . Cross[b, e] + a1 . Cross[c, e])*
(b . b - b . d - b . f + d . f) -
(32*I)*(a1 . Cross[b, g] - a1 . Cross[b, h] + a1 . Cross[e, g] -
a1 . Cross[e, h])*(b . g + b . h + c . g + c . h)*
(b . b - b . d - b . f + d . f) +
(16*I)*(-a1 . Cross[b, g] + a1 . Cross[b, h] + a1 . Cross[f, g] -
a1 . Cross[f, h])*(b . b + b . c + b . e + c . e)*
(-b . g - b . h + d . g + d . h) + (16*I)*m*\[CapitalEpsilon]2*(a1 . b + a1 . c)*
(-b . Cross[d, e] - b . Cross[d, f] - b . Cross[e, f] + d . Cross[e, f]) +
(16*I)*\[CapitalEpsilon]1*\[CapitalEpsilon]2*(a1 . b + a1 . c)*(-b . Cross[d, e] - b . Cross[d, f] -
b . Cross[e, f] + d . Cross[e, f]) + (32*I)*(-a1 . g + a1 . h)*
(b . g + b . h + c . g + c . h)*(-b . Cross[d, e] - b . Cross[d, f] -
b . Cross[e, f] + d . Cross[e, f]) - (16*I)*(a1 . g + a1 . h)*
(b . b + b . c + b . e + c . e)*(-b . Cross[d, g] + b . Cross[d, h] +
b . Cross[f, g] - b . Cross[f, h] - d . Cross[f, g] + d . Cross[f, h]) -
(16*I)*(a1 . b + a1 . e)*(b . g + b . h + c . g + c . h)*
(-b . Cross[d, g] + b . Cross[d, h] + b . Cross[f, g] - b . Cross[f, h] -
d . Cross[f, g] + d . Cross[f, h]) +
(16*I)*m*\[CapitalEpsilon]2*(-a1 . Cross[b, c] - a1 . Cross[b, d] - a1 . Cross[c, d])*
(-b . b - b . e + b . f + e . f) + (16*I)*\[CapitalEpsilon]1*\[CapitalEpsilon]2*
(-a1 . Cross[b, c] - a1 . Cross[b, d] - a1 . Cross[c, d])*
(-b . b - b . e + b . f + e . f) +
(16*I)*(-a1 . Cross[b, g] + a1 . Cross[b, h] + a1 . Cross[d, g] -
a1 . Cross[d, h])*(b . g + b . h + c . g + c . h)*
(-b . b - b . e + b . f + e . f) +
(16*I)*(a1 . Cross[b, d] - a1 . Cross[b, f] + a1 . Cross[d, f])*
(b . g + b . h + c . g + c . h)*(-b . g + b . h - e . g + e . h) +
(16*I)*(-a1 . Cross[b, g] + a1 . Cross[b, h] + a1 . Cross[f, g] -
a1 . Cross[f, h])*(-b . b - b . c + b . d + c . d)*
(b . g + b . h + e . g + e . h) -
(16*I)*(-a1 . Cross[b, g] + a1 . Cross[b, h] + a1 . Cross[d, g] -
a1 . Cross[d, h])*(-b . b - b . c + b . f + c . f)*
(b . g + b . h + e . g + e . h) -
(16*I)*(a1 . Cross[b, d] - a1 . Cross[b, f] + a1 . Cross[d, f])*
(-b . g + b . h - c . g + c . h)*(b . g + b . h + e . g + e . h) +
(32*I)*(-a1 . g + a1 . h)*(-b . Cross[c, d] + b . Cross[c, f] +
b . Cross[d, f] + c . Cross[d, f])*(b . g + b . h + e . g + e . h) +
(32*I)*(a1 . Cross[b, g] - a1 . Cross[b, h] + a1 . Cross[c, g] -
a1 . Cross[c, h])*(b . b - b . d - b . f + d . f)*
(b . g + b . h + e . g + e . h) + (16*I)*(a1 . b + a1 . c)*
(-b . Cross[d, g] + b . Cross[d, h] + b . Cross[f, g] - b . Cross[f, h] -
d . Cross[f, g] + d . Cross[f, h])*(b . g + b . h + e . g + e . h) -
(16*I)*(-a1 . Cross[b, g] + a1 . Cross[b, h] + a1 . Cross[d, g] -
a1 . Cross[d, h])*(b . b + b . c + b . e + c . e)*
(-b . g - b . h + f . g + f . h) -
(16*I)*(a1 . Cross[b, d] - a1 . Cross[b, f] + a1 . Cross[d, f])*
(b . b + b . c + b . e + c . e)*(\[CapitalEpsilon]1*\[CapitalEpsilon]2 + g . g - h . h)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
MyBatis multiple resultsets with procedure
I tried the following mapping:
<select id="getRequestDetail" statementType="CALLABLE" parameterType="test.domain.RequestDetailRequest" resultMap="ExternalManagersMap, SubjectServicesMap">
{call pop.dbo.getRequestDetail ( #{uid, mode=IN, jdbcType=VARCHAR},
#{requestId, mode=IN, jdbcType=INTEGER},
#{resultStatus, mode=OUT, jdbcType=INTEGER},
#{resultMsg, mode=OUT, jdbcType=VARCHAR} )}
</select>
<resultMap type='test.domain.User' id="ExternalManagersMap">
<result property="name" column="externalManager"/>
</resultMap>
<resultMap type='test.domain.Service' id="SubjectServicesMap">
<result property="name" column="serviceName"/>
<result property="id" column="serviceId"/>
</resultMap>
But I have error:
org.apache.ibatis.exceptions.TooManyResultsException: Expected one result (or null) to be returned by selectOne(), but found: 2
My interface is:
Map<String, Object> getRequestDetail(RequestDetailRequest detailRequest);
Can you please help me with how to map multiple resultsets when calling a procedure?
My DB is Sybase.
A:
sqlSession.selectOne indicates you are only expecting one row returned from the procedure.
Instead you should use sqlSession.select
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Cannot see editText
So I'm trying to place an edit text above a textview but I'm having trouble accomplishing this. When I open the activity the keyboard opens and when I type I cannot see any text input. Also when the keyboard opens it focuses to halfway down my imageview instead of where I'm trying to place the edit text.
here's my xml
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:id="@+id/activity_route_details"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:paddingBottom="@dimen/activity_vertical_margin"
android:paddingLeft="@dimen/activity_horizontal_margin"
android:paddingRight="@dimen/activity_horizontal_margin"
android:paddingTop="@dimen/activity_vertical_margin"
tools:context="com.example.zach.BirdsboroClimbing.RouteDetails">
<ScrollView xmlns:android="http://schemas.android.com/apk/res/android"
android:id="@+id/scrollView1"
android:layout_width="match_parent"
android:layout_height="match_parent">
<RelativeLayout
android:layout_width="match_parent"
android:layout_height="match_parent">
<CheckBox
android:text="Route Climbed"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentLeft="true"
android:layout_alignParentStart="true"
android:id="@+id/routeCheckBox"
android:gravity="center" />
<CheckBox
android:text="Add Note"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentRight="true"
android:layout_alignParentEnd="true"
android:id="@+id/noteCheckBox"
android:gravity="center"
/>
<EditText
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:inputType="text"
android:text="Name"
android:layout_centerVertical="true"
android:layout_centerHorizontal="true"
android:id="@+id/noteEditText"
android:layout_below="@id/routeCheckBox"
android:background="#008000"
/>
<TextView
android:text="TextView"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:id="@+id/routeDetailsView"
android:textSize="18sp"
android:textAlignment="center"
android:textColor="@android:color/black"
android:layout_below="@id/noteEditText"/>
<ImageView
android:layout_below="@id/routeDetailsView"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:id="@+id/routeImage"
android:scaleType="fitCenter"
android:layout_alignParentBottom="true"
android:layout_alignParentLeft="true"
android:layout_alignParentStart="true"
android:adjustViewBounds="true" />
</RelativeLayout>
</ScrollView>
</LinearLayout>
Thanks for any help!
A:
use softInputMode with adjustPan in your manifest. This will make the OS scroll the entire screen by the minimum amount needed to make the cursor visible above the keyboard.
<activity
android:name=".YourActivity"
android:windowSoftInputMode="adjustPan">
</activity>
Edit: I answered the question as asked, but you cannot say it's not working when you have mistakes in your layout! Remove the unnecessary attributes.
You have layout_alignParentBottom, layout_below, and layout_alignParentStart all on that ImageView (use exactly the one you want, not all of them).
replace your imageView like this
<ImageView
android:src="@drawable/yourImage"
android:layout_below="@id/routeDetailsView"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:id="@+id/routeImage"
android:scaleType="fitCenter"
android:adjustViewBounds="true" />
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Running python in template
I have an Applicant model containing title, first_name and surname and I am passing a list of them to a template here:
{% for applicant in applicants %}
<tr>
<td>{{ applicant.id }}</td>
<td>{{ applicant.title.replace('^','') }} {{ applicant.first_name }} {{ applicant.surname }}</td>
</tr>
{% endfor %}
The problem is that the titles contain dodgy characters (^) that I need to replace in python
applicant.title.replace('^','')
But this causes the template to break
Could not parse the remainder: '('^','')' from 'applicant.title.replace('^','')'
How can I run python on a template variable without causing this error?
A:
For this simple case, you can use the builtin cut filter like so:
<td>{{ applicant.title|cut:"^" }} ...
If you need something more complicated, you could write a Custom Template Filter.
cut, for example, is implemented like this:
def cut(value, arg):
"""Removes all values of arg from the given string"""
return value.replace(arg, '')
you could easily implement something like:
def myfilter(value):
"""sanitizes my output"""
for c in "_^/\\":
value = value.replace(c, '')
return value
and apply with
<td>{{ applicant.title|myfilter }} ...
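For reference, the sanitizing logic itself is plain Python and can be tried standalone; in a real project it would live in a templatetags module and be registered with a template.Library (the file name below is hypothetical):

```python
# Hypothetical location: myapp/templatetags/my_filters.py
def myfilter(value):
    """Remove a few troublesome characters from a string."""
    for c in "_^/\\":
        value = value.replace(c, '')
    return value

# In a real Django project you would additionally register it:
#   from django import template
#   register = template.Library()
#   myfilter = register.filter(myfilter)

print(myfilter("Mr^s_ Smith"))  # -> Mrs Smith
```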
|
{
"pile_set_name": "StackExchange"
}
|
Q:
command line parsing using apache commons cli
I'm trying to use Apache Commons CLI. My use case is a variable number of arguments with some options.
Say
-p str1 str2;
It can be
-p str1 str2 str3 .. strn
Another is
-m str1
-h
with
cmdline.getOptionValues("p");
It fetches only the last string. How can I fetch all the values of a particular option?
Edit:
if(cmdline.hasOption("p")){
String[] argsList = cmdline.getOptionValues(p);
String strLine = Arrays.toString(argsList);
argsList = strLine.split(",");
}
Am I doing it right? Will the string consist of exactly the data I want, or something unexpected like white spaces or anything else?
A:
Use hasArgs() with a value separator set to a comma, so the option becomes
-p str1,str2,str3,...,strn
This is how multi-valued options are handled in CLI
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to get execution time of c program?
I am using the clock function in my C program to print the execution time of the current program. I am getting the wrong time in the output. I want to display the time in seconds, milliseconds, and microseconds.
#include <stdio.h>
#include <unistd.h>
#include <time.h>
int main()
{
clock_t start = clock();
sleep(3);
clock_t end = clock();
double time_taken = (double)(end - start)/CLOCKS_PER_SEC; // in seconds
printf("time program took %f seconds to execute \n", time_taken);
return 0;
}
time ./time
time program took 0.081000 seconds to execute
real 0m3.002s
user 0m0.000s
sys 0m0.002s
I expect output of around 3 seconds; however, it displays the wrong value.
As you can see, if I run this program using the Linux time command I get the correct time; I want to display the same time using my C program.
A:
Contrary to popular belief, the clock() function retrieves CPU time, not elapsed clock time as the name confusingly may induce people to believe.
Here is the language from the C Standard:
7.27.2.1 The clock function
Synopsis
#include <time.h>
clock_t clock(void);
Description
The clock function determines the processor time used.
Returns
The clock function returns the implementation’s best approximation to the processor time used by the program since the beginning of an implementation-defined era related only to the program invocation. To determine the time in seconds, the value returned by the clock function should be divided by the value of the macro CLOCKS_PER_SEC. If the processor time used is not available, the function returns the value (clock_t)(−1). If the value cannot be represented, the function returns an unspecified value.
To retrieve the elapsed time, you should use one of the following:
the time() function with a resolution of 1 second
the timespec_get() function which may be more precise, but might not be available on all systems
the gettimeofday() system call available on linux systems
the clock_gettime() function.
See What specifically are wall-clock-time, user-cpu-time, and system-cpu-time in UNIX? for more information on this subject.
Here is a modified version using gettimeoday():
#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>
int main() {
struct timeval start, end;
gettimeofday(&start, NULL);
sleep(3);
gettimeofday(&end, NULL);
double time_taken = end.tv_sec + end.tv_usec / 1e6 -
start.tv_sec - start.tv_usec / 1e6; // in seconds
printf("time program took %f seconds to execute\n", time_taken);
return 0;
}
Output:
time program took 3.005133 seconds to execute
|
{
"pile_set_name": "StackExchange"
}
|
Q:
jQuery UI multiselect rendering components far away from actual control
I am using the jquery.multiselect library, which works perfectly in chrome.
I am using css styles which can be viewed here
Usage:
$("#component").multiselect(
{
multiple: true,
height: '30px',
selectedText: "# selected",
noneSelectedText: "Select Items",
checkAllText: "All",
uncheckAllText: "None"
});
This works perfectly in chrome, but when I attempt to view in firefox or IE, upon opening the select, it renders way down to the bottom left of the screen and it seems like the z-index isn't working at all. I've researched this a bit and it sounds like there may be a bug with jQuery 1.8.1 (which I am using) however, the hotfix
didn't seem to work.
Any ideas as to why this would work in chrome, but not in other browsers?
The issue reported here seems very similar, as well.
It also may be valid to note that I am placing the combobox inside an accordion, but since it works in chrome, I am confident that this shouldn't be an issue. I'm concerned that position:absolute (in some of the styles) may be causing issues in some browsers, but it may be a red herring.
I'm lost on this one, please help! Thanks.
A:
If you are using jQuery 1.8+, it could be caused by the new implementation of outerHeight(false).
In the jquery-ui-multiselect-widget version 1.13 source line 573:
top: pos.top + button.outerHeight(),
changing this to:
top: pos.top + button.outerHeight(false),
fixed my problem.
A:
One trick you can try is to use firebug or chrome inspector to figure out the offending element -- the outermost element that's not where it's supposed to be. Try adding "position: relative" to the parent of this element. That will probably get it at least closer to where you need it to be. If that doesn't work, try adding "position: relative" to the parent's parent and so on until the problem is fixed.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Does the word "challenge" here mean something more than its regular meaning?
Does the word challenge here has a latent meaning of anger and obstinacy? If so is that a common expression or something Jumpa Lahiri came up with as a writer?
...He offered these details as if responding diligently to questions I was not asking. “I don’t ask you to care for her, even to like her,” my father said. “You are a grown man, you have no need for her in your life as I do. I only ask, eventually, that you understand my decision.” It was clear to me that he had prepared himself for my outrage—harsh words, accusations, the slamming down of the phone. But no turbulent emotion passed through me as he spoke, only a diluted version of the nauseous sensation that had taken hold the day that I learned my mother was dying, a sensation that had dropped anchor in me and never fully left.
Is she there with you?” I asked. “Would you like me to say something?” I said this more as a challenge than out of politeness, not entirely believing him. Since my mother’s death, I frequently doubted things my father said in the course of our telephone conversations: that he had eaten dinner on any given night, for example, and not simply polished off another can of almonds and a few Johnnie Walkers in front of the television. “They arrive in two weeks. You will see them when you come home for Christmas,”my father said, adding, “Her English is not so good.”...
A:
There's no unusual or metaphoric use of challenge here.
The narrator explains exactly the sense in which he challenges his father: having experienced his father's penchant for embroidering the truth he declines to accept the claim of having found a new wife at face value and asks for some verifiable evidence that Chitra actually exists. A very similar sense is employed when we speak of academics or scientists challenging the claims or theories advanced in a publication.
Challenge doesn't have to involve anger or even mild hostility, merely the presentation of some obstacle to be overcome; for instance, you may in a very supportive spirit challenge someone to strive for a new achievement.
I know of no inherent connection of challenge with obstinacy.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
R - original colours of georeferenced raster image using ggplot2- and raster-packages
I would like to use the original colortable of a >>georeferenced raster image<< (tif-file) as coloured scale in a map plotted by ggplot/ggplot2.
Due to not finding an easier solution, I accessed the colortable-slot from the legend-attribute of the loaded raster image (object) raster1 like so:
raster1 <- raster(paste(workingDir, "/HUEK200_Durchlaessigkeit001_proj001.tif", sep="", collapse=""))
raster1.pts <- rasterToPoints(raster1)
raster1.df <- data.frame(raster1.pts)
colTab <- attr(raster1, "legend")@colortable
Ok, so far so good. Now I simply need to apply colortable as a colored scale to my existing plot:
(ggplot(data=raster1.df)
+ geom_tile(aes(x, y, fill=raster1.df[[3]]))
+ scale_fill_gradientn(values=1:length(colTab), colours=colTab, guide=FALSE)
+ coord_fixed(ratio=1)
)
Unfortunately, this does not work as expected. The resulting image does not show any colors beside white and the typical ggplot-grey which often appears when no custom values are defined. At the moment, I am a little clueless what is actually wrong here. I assumed that the underlying band values stored in raster1.df[[3]] are indices for the color table. This might be wrong. If it is wrong, then how are the band values connected with the colortable? And even if my assumption would be right: The parameters which I have given to scale_fill_gradientn() should still result in a more colorful plot, shouldn't they? I checked out what the unique values are:
sort(unique(raster1.df[[3]]))
This outputs:
[1] 0 1 2 3 4 5 6 7 8 9 10 11 12
Apparently, not all of the 256 members of colortable are used which reminds me that the color does not always need to reflect the underlying band-data distribution (especially when including multiple bands).
I hope, my last thoughts didn't confuse you about the fact that the objective is quite straight forward.
Thank you for your help!
A:
Ok, I have found an answer which might not apply to every georeferenced raster image out there, but maybe almost.
First, my assumption that the data values do not exactly represent the color selection was wrong. There are 15 unique colors in the colortable of the spatial raster object. However, not all of them are used (14 and 15). Ok, now I know I have to map my values to the corresponding colors in a way that scale_fill_gradientn understands. For this I am using my previous initial code snippet and define a new variable valTab which stores all unique data values of the given band:
raster1 <- raster(paste(workingDir, "/HUEK200_Durchlaessigkeit001_proj001.tif", sep="", collapse=""))
raster1.pts <- rasterToPoints(raster1)
raster1.df <- data.frame(raster1.pts)
raster1.img <- melt(raster1)
colTab <- attr(raster1, "legend")@colortable
names(colTab) <- 0:(length(colTab) - 1)
valTab <- sort(unique(raster1.df[[3]]))
Notice, how index names are defined for colTab - this will be important soon. With this, I am able to automatically relate all active colors with their respective value while plotting:
(ggplot(data=raster1.df)
+ geom_tile(aes(x, y, fill=raster1.df[[3]]))
+ scale_fill_gradientn(colours=colTab[as.character(valTab)])
+ coord_fixed(ratio=1)
)
Using valTab-members as references to the corresponding color-indices helps to always pick only the colors which are needed. I don't know if defining the values-parameter of scale_fill_gradientn() is necessary in some cases.
I am not sure if the raster images read by raster() always define their values starting from 0. If not, names(colTab) <- 0:(length(colTab) - 1) needs to be adjusted.
I hope, this helps somebody in the future. At least, I finally have a solution!
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to multiply in C++
SO basically I am stuck on this:
#include <iostream>
#include <string>
using namespace std;
int main() {
cout << "please enter your first name and age\n"; //prompts user to enter name and age
string first_name; //string variable
int age; //integer variable
cin first_name; //reads first_name
cin age; //reads age
cout << "Hello " << first_name << " (age" << age << ")\n";
}
Hi, I just started C++ two days ago, and I would like help. I want to multiply the int age by 12; where do I do it? Like this?
cin age*12
or do I put it here
int age*12
Again, sorry that it's a newb question, but I just started two days ago. I'm not even sure if there are errors in it, but I only need that part answered.
A:
Your program should be like this:
int main() {
cout << "please enter your first name and age\n"; //prompts user to enter name and age
string first_name; //string variable
int age; //integer variable
cin >> first_name; //reads first_name
cin >> age; //reads age
int result = age * 12;
cout << "Hello " << first_name << " (age" << age << ")\n";
cout << "Your age multiplied by 12 is " << result << "\n";
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
bash + getting the headers from a file and numbering them
This gets all the headers in a file
$head -n 1 basicFile.csv | tr ',' '\n'
header1
header2
header3
header4
header5
header6
header7
header8
header9
header10
what I want is to add the header number to the left
to get something like:
1:header1
...
10:header10
How do I do this?
A:
head -n 1 basicFile.csv | tr ',' '\n' | cat -n
Not exactly the output you specified, but pretty close.
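If you want the exact N:header format from the question (a colon and no padding), awk can prepend its built-in line-number variable NR itself:

```shell
# With the question's file:
#   head -n 1 basicFile.csv | tr ',' '\n' | awk '{print NR ":" $0}'
# Self-contained demonstration; awk's NR is the current line number:
printf 'header1,header2,header3\n' | tr ',' '\n' | awk '{print NR ":" $0}'
# prints:
# 1:header1
# 2:header2
# 3:header3
```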
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How do I create a custom entity that can be read by all users?
Is there a quick way to mark a custom entity as readable by all users via a Customizations.xml entry?
I have been successful in creating the custom entity I need, but cannot seem to make it readable by newly created users without creating a security role (with read permissions for the entity) and applying it to ALL users.
Is there a way I can ensure that everyone (even newly created users) have read access to a custom entity?
A:
Sorry for the question, but why do you need an entity for each user that is readable by everyone?
I mean, isn't it the same to create a record in one entity and filter it with custom views if you want to?
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Knapsack algorithm in JavaScript - Integer weights and values
My version of Knapsack works only when the weights or values of items are whole numbers.
Restrictions
You are given an array of objects each of which contains a weight and value.
You are also given a bag which has a maximum capacity.
Both the values and weights for every item are integers. This example will not cover cases where the weights/values contain decimals
The maximum capacity will also be a whole number, again no decimals.
Here is my code with commentary
var result = document.getElementById("result");
function knapsack(items, capacity) {
var totalWeight = 0;
var totalValue = 0;
Here I initialized the total weight and value of our bag, which is empty at first, so both are zero.
var sorted = items.sort(function(a,b) {
return (b.value / b.weight) - (a.value / a.weight);
});
To get the most bang for the buck, I'm taking a greedy algorithm and first choosing the item with the highest value to weight ratio. I have to sort the array based on the item's value per cost. I will then set the index to zero, to start at the best value.
var index = 0;
while (totalWeight < capacity) {
var ratio = sorted[index].value / sorted[index].weight;
totalValue += ratio;
totalWeight++;
if (totalWeight === sorted[index].weight) {
index++;
}
}
The loop is run until the weight is equal to the capacity. For every item, I will get the value per 1 unit of weight. This will be added to the value of the bag, whereas the weight of the bag will increase by one unit.
If the weight of the bag equals the weight of the item, I will move on to the next item and continue the while loop.
return totalValue.toFixed(2);
}
var array = [
{"value": 15, "weight": 10},
{"value": 24, "weight": 15},
{"value": 25, "weight": 18}
];
result.innerHTML = knapsack(array, 20);
This will not work if the item weights or values have decimals, but that's beyond the scope of this problem. This problem assumes that all are whole numbers.
What would the complexity be of this algorithm? Also, which type of knapsack does my algorithm solve?
A:
I don't think your algorithm actually works.
First...
while (totalWeight < capacity) {
...is not the proper criteria for determining when loading into the knapsack should be completed. What if the knapsack capacity is 10 and you got passed 4 objects each having weight of 3? The most you could ever get in the knapsack is a weight of 9. With your condition as is, you would get an infinite loop.
Second...
totalValue += ratio;
...makes no sense. Why are you adding the value/weight ratio to the total value summation?
Third...
totalWeight++;
...also makes no sense. Why are you incrementing the total weight?
Probably something like the following is what you would need
function knapsack(items, capacity) {
let totalValue = 0;
let totalWeight = 0;
let remainingItems = items.sort( (a, b) => {
return (b.value / b.weight) - (a.value / a.weight);
});
while (remainingItems.length > 0) {
const remainingCapacity = capacity - totalWeight;
remainingItems = remainingItems.filter( (item) => {
return (item.weight <= remainingCapacity);
});
if (remainingItems.length === 0) continue;
const addedItem = remainingItems.shift();
totalValue = totalValue + addedItem.value;
totalWeight = totalWeight + addedItem.weight;
}
return totalValue.toFixed(2);
}
Note that this would work even with floats; however, it does not validate the input against weight = 0 items, which would cause problems. In production code, you might want to validate the items being passed against this (and possibly other things like negative values/weights, negative/zero capacity, etc.)
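As a quick sanity check, here is the corrected function run against the asker's sample data. The function is repeated so the snippet is self-contained; note that this greedy whole-item strategy is an approximation (for this particular data the true 0/1 optimum is 25, from taking only the third item), not an exact solver:

```javascript
// Corrected greedy knapsack from the answer above, repeated here so the
// example is self-contained. slice() avoids mutating the caller's array.
function knapsack(items, capacity) {
  let totalValue = 0;
  let totalWeight = 0;
  let remainingItems = items.slice().sort((a, b) =>
    (b.value / b.weight) - (a.value / a.weight));

  while (remainingItems.length > 0) {
    const remainingCapacity = capacity - totalWeight;
    // Drop anything that no longer fits in the bag.
    remainingItems = remainingItems.filter(item => item.weight <= remainingCapacity);
    if (remainingItems.length === 0) continue;
    // Take the best remaining value/weight item whole.
    const addedItem = remainingItems.shift();
    totalValue += addedItem.value;
    totalWeight += addedItem.weight;
  }
  return totalValue.toFixed(2);
}

const items = [
  { value: 15, weight: 10 },
  { value: 24, weight: 15 },
  { value: 25, weight: 18 }
];

// The item with ratio 24/15 = 1.6 is taken first; after that, nothing
// with weight <= 5 remains, so the bag holds a single item.
console.log(knapsack(items, 20)); // "24.00"
```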
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Adding columns & index to a huge table
I'm working with a Rails app backed by a MySQL database. One of the more-frequently used models is backed by a table with over 200,000,000 rows (and growing). We've been aware for a while that the way it's set up isn't scalable.
The feature I'm working on at the moment requires me to add a couple new columns to that table, as well as a compound index. My coworker did a test-run on her local dev machine and it took about an hour per column (didn't even test adding the index yet).
We're considering taking a different approach, building a new table with the needed columns, dumping the data from the old table to the new one, then pointing the Rails model at the new table instead. Feels like a short-term solution for a large looming problem, but I'm curious if anyone has insight on how to navigate this problem.
We've also talked about archiving the table's old records. It's a table that gets written to and read from a lot, but only with recent records (the app probably never needs to do anything with records more than a month old).
A:
First of all, use a single ALTER statement to add both columns and the compound index: ALTER TABLE table1 ADD col1 INT, ADD col2 CHAR(2), ADD INDEX (col1, col5);. More about ALTER
Archiving old records is a good idea. If the archived data is only rarely needed, then some slowness when querying it is affordable (I think).
Depending on the storage engine you use, dumping the data and importing it into a new table may be slow if you can't disable the keys. If you drop all indexes and recreate them, that is slow again. If you can afford some downtime, any method is OK.
If the table keeps growing and you don't want to archive, you may consider "sharding" the table, but that would require some extra effort at the development level.
A last idea, which I don't strongly recommend unless you can't afford downtime, is to create a new table with a primary key and the new columns that you want to add, and use this PK to join to the original table.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
"Uncaught Cannot extend unknown button type: copyHtml5" - How to use `datatables.net-buttons-bs4`
I have installed Datatables via npm:
npm install --save datatables.net-bs4
npm install --save datatables.net-buttons-bs4
and want to use the buttons.html5 js file too.
Before I started working with npm packages I used datatables CDNs like this:
<script src="//cdn.datatables.net/1.10.19/js/jquery.dataTables.min.js" crossorigin="anonymous"></script>
<script src="https://cdn.datatables.net/buttons/1.5.2/js/dataTables.buttons.js" crossorigin="anonymous"></script>
<script src="https://cdn.datatables.net/buttons/1.0.0/js/buttons.html5.min.js" crossorigin="anonymous"></script>
Now I import it like this:
// Datatables
import 'datatables.net-bs4';
// Datatables - Buttons
import 'datatables.net-buttons-bs4';
My script uses the buttons.js HTML5 feature (the file exists at node_modules/datatables.net-buttons/js/buttons.html5.js), but it seems like it's not imported properly using import 'datatables.net-buttons-bs4';,
therefore resulting in the error:
Uncaught Cannot extend unknown button type: copyHtml5
in the console, pointing to the row using the feature:
this.tableDownload = new $.fn.dataTable.Buttons(this[this.tableDisplayed], { ... }
Which worked fine when using the CDNs.
How do I get buttons.html5.js cooperate with my code?
A:
You need to add
import 'datatables.net-buttons/js/buttons.html5.js'
like you can see in the download builder if you pick what you need and switch to the npm tab on the bottom.
https://datatables.net/download/
|
{
"pile_set_name": "StackExchange"
}
|
Q:
ionic app blocked by CORS policy, jsonp not working neither
I'm developing a Ionic app with angular 5, I run it in localhost with the command ionic serve -l
I have a get call to an external web https://example.com/api?req1=foo&req2=bar
But I get the CORS denied message and no response.
Failed to load https://example.com/api?req1=foo&req2=bar: Redirect from 'https://example.com/api?req1=foo&req2=bar' to 'http://example.com/api?req1=foo&req2=bar' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:8100' is therefore not allowed access.
The request code is simple:
getSomethingList(req1: number, req2: number): Observable<Item[]> {
const requestUrl = 'https://example.com/api?req1=foo&req2=bar';
return this._http.get(requestUrl).map((res: Response) => res.json());
}
I tried installing the Chrome extension Moesif Origin & CORS changer. I activated it and set the origin to http://localhost:8100/ (where my Ionic app is running). Still no results, but now I don't see the CORS denial error; I see no error at all.
I tried a proxy:
// file ionic.config.json
{
"name": "movielovers",
"app_id": "",
"type": "ionic-angular",
"integrations": {
"cordova": {}
},
"proxies": [{
"path": "/api",
"proxyUrl": "https://example.com/api"
}]
}
and then changed const requestUrl = 'https://example.com/api?req1=foo&req2=bar' to const requestUrl = '/api?req1=foo&req2=bar'. I still don't know what I'm doing with this; to me it makes no sense, but it is what I understood from reading some tutorials.
This way obviously, I get the error:
GET http://localhost:8100/api?req1=foo&req2=bar 404 (Not Found)
I tried to install the npm corsproxy package: npm install -g corsproxy, and ran corsproxy on the command line. This starts a server on a port, but it isn't working. I even tried to combine corsproxy with the proxy in the ionic.config.json file, but no way. I never get an answer.
and at last I tried with jsonp:
first I import { JsonpModule } from '@angular/http'; in app.module.ts and I declare the JsonpModule in the imports.
then in the provider where I have the request I inject the Jsonp library constructor(private jsonp: Jsonp) {}
I add a callback at the end of the request url: const requestUrl = 'https://example.com/api?req1=foo&req2=bar&callback=JSONP_CALLBACK'
change the request method:
return this.jsonp.request(requestUrl).map((res: Response) => res.json());
And with the jsonp the error in console is:
Uncaught SyntaxError: Unexpected token :
I don't know what else to try; I can't get any response because of CORS. The API I'm requesting is not restricted, as it responds to Postman and even to typing the request URL directly in the browser. But when running from localhost it is blocked, and I can't get past it, and I'm losing a lot of time trying to get a response. I need some help.
ionic: 3.20.0
Cordova
Angular CLI: 1.7.3
Node: 8.9.4
FYI: the api I'm using is https://www.comicvine.com/api
edit:
installed cordova-plugin-whitelist
added <meta http-equiv="Content-Security-Policy" content="script-src * 'unsafe-inline' 'unsafe-eval'"> to www/index.html
the file config.xml has the lines:
<content src="index.html" />
<access origin="*" />
<allow-intent href="http://*/*" />
<allow-intent href="https://*/*" />
<allow-navigation href="*" />
A:
Finally I solved it. First I returned all my code to the initial point (deleting all the proxy references, jsonp, etc.), back where I got the CORS denial. Then I removed the Chrome extension Moesif Origin & CORS changer, installed the Chrome extension Allow-Control-Allow-Origin: *, turned it on, and now it's working fine.
Maybe this helps somebody in the future and saves them a lot of time.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to remove the parent directory of a subdirectory from the URL (.htaccess)
I have been trying every possible way of removing the subdirectory's parent directory from the URL.
Let's assume that this is my url: https://example.com/accounts/login. I need the url to look like this: https://example.com/login, so without the accounts directory.
And usually when I try to go to https://example.com/login I get "The server encountered an internal error or misconfiguration and was unable to complete your request." (aka Error 500 - internal server error).
My current .htaccess:
RewriteEngine On
RewriteCond %{SERVER_PORT} 80
RewriteRule ^(.*)$ https://example.com/$1 [R,L]
RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME}\.html -f
RewriteRule ^(.*)$ $1.html [NC,L]
RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME}\.php -f
RewriteRule ^(.*)$ $1.php [NC,L]
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ html/$1
<IfModule mod_expires.c>
ExpiresActive On
ExpiresByType image/jpg "access 1 year"
ExpiresByType image/jpeg "access 1 year"
ExpiresByType image/gif "access 1 year"
ExpiresByType image/png "access 1 year"
ExpiresByType text/css "access 1 month"
ExpiresByType text/html "access 1 month"
ExpiresByType application/pdf "access 1 month"
ExpiresByType text/x-javascript "access 1 month"
ExpiresByType application/x-shockwave-flash "access 1 month"
ExpiresByType image/x-icon "access 1 year"
ExpiresDefault "access 1 month"
</IfModule>
<IfModule mod_deflate.c>
AddOutputFilterByType DEFLATE application/javascript
AddOutputFilterByType DEFLATE application/rss+xml
AddOutputFilterByType DEFLATE application/vnd.ms-fontobject
AddOutputFilterByType DEFLATE application/x-font
AddOutputFilterByType DEFLATE application/x-font-opentype
AddOutputFilterByType DEFLATE application/x-font-otf
AddOutputFilterByType DEFLATE application/x-font-truetype
AddOutputFilterByType DEFLATE application/x-font-ttf
AddOutputFilterByType DEFLATE application/x-javascript
AddOutputFilterByType DEFLATE application/xhtml+xml
AddOutputFilterByType DEFLATE application/xml
AddOutputFilterByType DEFLATE font/opentype
AddOutputFilterByType DEFLATE font/otf
AddOutputFilterByType DEFLATE font/ttf
AddOutputFilterByType DEFLATE image/svg+xml
AddOutputFilterByType DEFLATE image/x-icon
AddOutputFilterByType DEFLATE image/jpg
AddOutputFilterByType DEFLATE image/jpeg
AddOutputFilterByType DEFLATE text/css
AddOutputFilterByType DEFLATE text/html
AddOutputFilterByType DEFLATE text/javascript
AddOutputFilterByType DEFLATE text/plain
AddOutputFilterByType DEFLATE text/xml
BrowserMatch ^Mozilla/4 gzip-only-text/html
BrowserMatch ^Mozilla/4\.0[678] no-gzip
BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
Header append Vary User-Agent
</IfModule>
Options -Indexes
A:
Replace all of your rewrite rules with this block:
RewriteEngine On
RewriteCond %{SERVER_PORT} 80
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,NE,L]
RewriteCond %{REQUEST_FILENAME} -d [OR]
RewriteCond %{REQUEST_FILENAME} -f
RewriteRule ^ - [L]
RewriteCond %{DOCUMENT_ROOT}/html/$1.php -f
RewriteRule ^(.+?)/?$ html/$1.php [L]
RewriteCond %{DOCUMENT_ROOT}/html/$1.html -f
RewriteRule ^(.+?)/?$ html/$1.html [L]
RewriteCond %{DOCUMENT_ROOT}/accounts/$1.php -f
RewriteRule ^(.+?)/?$ accounts/$1.php [L]
RewriteCond %{DOCUMENT_ROOT}/accounts/$1.html -f
RewriteRule ^(.+?)/?$ accounts/$1.html [L]
RewriteCond %{REQUEST_FILENAME}.html -f
RewriteRule ^(.+?)/?$ $1.html [L]
RewriteCond %{REQUEST_FILENAME}.php -f
RewriteRule ^(.+?)/?$ $1.php [L]
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Git branch command behaves like 'less'
When I use the git branch command to list all branches, I see the output as if it had been piped through a pager, i.e. git branch | less.
The command git branch is supposed to show a list of branches, like ls does for files.
This is the output I get (screenshot of the paged output omitted):
How do I get the default behaviour of git branch? What causes the paged output?
I am using ZSH with oh_my_zsh (nothing for Git in there), and my .gitconfig looks like this:
[user]
email = [email protected]
name = Dennis Haegler
[push]
default = simple
[merge]
tool = vimdiff
[core]
editor = nvim
excludesfile = /Users/dennish/.gitignore_global
[color]
ui = true
[alias]
br = branch
ci = commit -v
cam = commit -am
co = checkout
df = diff
st = status
sa = stash
mt = mergetool
cp = cherry-pick
pl = pull --rebase
[difftool "sourcetree"]
cmd = opendiff \"$LOCAL\" \"$REMOTE\"
[mergetool "sourcetree"]
cmd = /Applications/SourceTree.app/Contents/Resources/opendiff-w.sh
\"$LOCAL\" \"$REMOTE\" -ancestor \"$BASE\" -merge \"$MERGED\"
trustExitCode = true
A:
As mentioned in comments to Mark Adelsberger's answer, this was a default behavior change introduced in Git 2.16.
You can turn paged output for git branch back off by default with the pager.branch config setting:
git config --global pager.branch false
A:
As other answers pointed out, Git defaults to piping itself into a pager (less by default) for most commands.
An important point, though, is that when the LESS environment variable is unset, Git sets it to FRX, and the consequence is that the user-visible behavior is the same as if the pager was not used when the command's output is short (i.e. if you have only few branches). See man less:
-F or --quit-if-one-screen
Causes less to automatically exit if the entire file can be displayed on the first screen.
-R or --RAW-CONTROL-CHARS
[...]ANSI "color" escape sequences are output in "raw" form.
-X or --no-init
Disables sending the termcap initialization and deinitialization strings to the terminal. This is sometimes desirable if the
deinitialization string does something unnecessary, like clearing the
screen.
If you get the behavior you describe, you most likely have $LESS set to something else, and unsetting it (unset LESS) would get rid of the issue while keeping the "pager" behavior for long output. Alternatively, you can activate the behavior for while keeping $LESS as-is by adding this to your .gitconfig file:
[core]
pager = less -FRX
If you really dislike the pager thing, you can deactivate it globally or on a per-command basis (see other answers).
A:
Not to argue semantics, but the behavior you're getting is the default. That's why you get it when you don't ask for something different. By default, branch (and numerous other Git commands) use a pager when sending output to the terminal.
You can override this default by using the --no-pager option:
git --no-pager branch
Or if you redirect the output to a file, Git should detect that it isn't writing to a terminal and so should not use a pager anyway. (On the other hand, that suggests a scripting use case, in which case you should consider using a plumbing command like git for-each-ref in preference to git branch.)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Understanding the GROUP BY statement's behaviour
The question is this..
Table is this..
+--------------------------+---------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+--------------------------+---------+------+-----+---------+----------------+
| facility_map_id | int(10) | NO | PRI | NULL | auto_increment |
| facility_map_facility_id | int(10) | NO | MUL | NULL | |
| facility_map_listing_id | int(10) | NO | | NULL | |
+--------------------------+---------+------+-----+---------+----------------+
Data is this..
+-----------------+--------------------------+-------------------------+
| facility_map_id | facility_map_facility_id | facility_map_listing_id |
+-----------------+--------------------------+-------------------------+
| 248 | 1 | 18 |
| 259 | 1 | 19 |
| 206 | 1 | 20 |
| 244 | 1 | 21 |
| 249 | 2 | 18 |
| 207 | 2 | 20 |
| 208 | 3 | 20 |
| 245 | 3 | 21 |
| 260 | 4 | 19 |
| 261 | 5 | 19 |
| 246 | 6 | 21 |
| 250 | 7 | 18 |
| 247 | 8 | 21 |
+-----------------+--------------------------+-------------------------+
I run the this query :
SELECT facility_map_listing_id
FROM facility_map
WHERE facility_map_facility_id IN(1, 2)
GROUP BY facility_map_listing_id
HAVING count(DISTINCT facility_map_facility_id) >= 2
and get this..
+-------------------------+
| facility_map_listing_id |
+-------------------------+
| 18 |
| 20 |
+-------------------------+
2 rows in set (0.00 sec)
Which is correct! But can anyone explain why the GROUP BY needs to be in the statement?
If it isn't, and I run the same query leaving out the GROUP BY, I get:
+-------------------------+
| facility_map_listing_id |
+-------------------------+
| 18 |
+-------------------------+
1 row in set (0.00 sec)
Can anyone explain this to me? Thank you!
A:
Without a group by, an aggregate like count works on the set as a whole. So this query returns either zero or one row:
SELECT facility_map_listing_id
FROM facility_map
WHERE facility_map_facility_id IN(1, 2)
HAVING count(DISTINCT facility_map_facility_id) >= 2
It will return one row if the having condition is met, and an empty set otherwise.
Now, with the group by, it evaluates the having condition for each value of facility_map_listing_id. That can return up to as many rows as there are distinct values of facility_map_listing_id.
A:
I think this should explain things:
If you omit group by, all the rows not excluded by the where clause
return as a single group.
So, basically, you are still using a group by...just by the entire set.
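To make that concrete, here is a small JavaScript sketch (just an illustration of what the engine does, not SQL) using the asker's data restricted to facility ids 1 and 2, i.e. after the WHERE clause. Picking the first row's listing in the no-GROUP-BY case mimics MySQL's arbitrary choice of a non-aggregated column value:

```javascript
// The asker's rows, already filtered to facility ids 1 and 2.
const rows = [
  { facility: 1, listing: 18 }, { facility: 1, listing: 19 },
  { facility: 1, listing: 20 }, { facility: 1, listing: 21 },
  { facility: 2, listing: 18 }, { facility: 2, listing: 20 }
];

// With GROUP BY: the HAVING count(DISTINCT facility) >= 2 test is
// applied once per listing id, so several listings can qualify.
function withGroupBy(rows) {
  const groups = new Map();
  for (const r of rows) {
    if (!groups.has(r.listing)) groups.set(r.listing, new Set());
    groups.get(r.listing).add(r.facility);
  }
  return [...groups.entries()]
    .filter(([, facilities]) => facilities.size >= 2)
    .map(([listing]) => listing);
}

// Without GROUP BY: the whole filtered set is one group, the HAVING
// condition is checked once, and at most one row comes back.
function withoutGroupBy(rows) {
  const facilities = new Set(rows.map(r => r.facility));
  return facilities.size >= 2 ? [rows[0].listing] : [];
}

console.log(withGroupBy(rows));    // [ 18, 20 ]
console.log(withoutGroupBy(rows)); // [ 18 ]
```

This reproduces the asker's two observed results: [18, 20] with the GROUP BY, and the single row 18 without it.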
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Ruby webscript preserving values between calls
I have this big problem in Ruby:
class Finances
@@sum = 0
def self.add(amount)
@@sum += amount
end
def self.total
@@sum
end
end
basically, an accumulator.
Problem is, it's working the first time, but it preserves values every time I refresh the page that uses the script. (Rails + ActiveAdmin)
ActiveAdmin.register Order, as: 'FinanceOrders' do
idx = 0
index do |x|
column :id do
idx += 1
end
column :contractor do |order|
amount = order.contractor_payment_amount
Finances.add amount
amount.to_money
end
summary = Finances.get_summary collection
tfoot do
tr do
column :contractor do |order|
Finances.total.to_money
end
column :profit do |order|
(order.sum_cost / 100).to_money * (1 - FEE) - Finances.total.to_money
end
end
end
td
end
end
end
end
I suppose it's some kind of class caching in Rails. The question is: since there is caching, how can I accomplish what I need? (Note that even the idx counter (idx += 1) is not working. It works fine the first time, but accumulates the value of idx on every page refresh.)
A:
I was initializing the counter and the accumulator in the module/class definition, not in the code that is actually executed at request time.
The solution was this:
index do |x|
Finances.start
idx = 0
I created a Finances.start method that initializes @@sum, and moved the call into the index do block, along with the idx counter, so those values actually get initialized on every execution of the index do block.
It works like a charm.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to transparent element in element css
Today I'm working on a project where I have to subtract one element from another element, but how can you do that?
The green element should be subtracted. But how can you do that?
Thanks in advance.
A:
Using some CSS magic you can create the shape you want:
.container {
width: 300px;
height: 750px;
position: relative;
background-color: lightgray;
}
.phone {
position: absolute;
border-radius: 25px;
width: 250px;
height: 700px;
left: 25px;
top: 25px;
background-color: black;
}
.screen-top {
width: 250px;
height: 40px;
left: 45px;
top: 45px;
position: relative;
overflow: hidden;
border-radius: 30px 30px 0 0;
}
.screen-top:before {
z-index: 1;
content: '';
position: absolute;
left: 15%;
bottom: calc(100% - 30px);
width: 70%;
height: 100%;
border-radius: 0 0 30px 30px;
box-shadow: 0px 300px 0px 300px white;
}
.screen-bottom {
position: absolute;
top: 85px;
left: 45px;
width: 250px;
height: 660px;
border-radius: 0 0 30px 30px;
background-color: white;
}
<div class="container">
<div class="phone"></div>
<div class="screen-top"></div>
<div class="screen-bottom"></div>
<div class="screen-content"></div>
</div>
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to return the response of Node.js mysql query connection
I am new at Node.js and I want to find something from database by using select query.
Here is my code.
var address = socket.request.client._peername.address;
var ip_addrss = address.split("::ffff:");
let mine = ip_addrss[1];
var location = iplocation_find(mine);
connection.connect( function () {
// insert user data with IP, location --- has got a status.
let stranger = "";
var values = [];
if (mine == null){
mine = "local server";
}
values.push(mine);
values.push('location');
var sql = "INSERT INTO user_list (IP_address, location) VALUES (?)";
connection.query(sql, [values], function (err, res){
if (err) throw err;
});
// control chatting connection between users
connection.query("SELECT IP_address FROM user_list WHERE status = ? AND location = ?", [0, "location"], function (err, res){
if (err) throw err;
stranger = res[0].IP_address;
console.log(stranger);
});
var room_users = [];
room_users.push(mine);
room_users.push(stranger);
console.log(room_users);
connection.query("INSERT INTO chatting_status (IP_client_1, IP_client_2) VALUES (?)", [room_users], function (err, res){
if (err) throw err;
console.log('inserted');
});
});
Now the problem is "stranger". It is not working anymore. Just always null.
Please tell me how I can return value in mysql query statement.
on my console, shows this.
[ 'local server', '' ]
127.0.0.1
inserted
[ '192.168.1.100', '' ]
127.0.0.1
inserted
Above, 'local server' and '192.168.1.100' are values of mine, and '127.0.0.1' is the value of stranger, but only inside the query callback. Outside the callback it is just null.
A:
You are using asynchronous operations with your .connect() and .query() calls. To sequence code with asynchronous callbacks like this, you have to continue the flow of control inside the callback and then communicate back errors or result via a callback.
You could do that like this:
let address = socket.request.client._peername.address;
let ip_addrss = address.split("::ffff:");
let mine = ip_addrss[1];
let location = iplocation_find(mine);
function run(callback) {
connection.connect( function () {
// insert user data with IP, location --- has got a status.
let values = [];
if (mine == null){
mine = "local server";
}
values.push(mine);
values.push('location');
var sql = "INSERT INTO user_list (IP_address, location) VALUES (?)";
connection.query(sql, [values], function (err, res){
if (err) return callback(err);
// control chatting connection between users
connection.query("SELECT IP_address FROM user_list WHERE status = ? AND location = ?", [0, "location"], function (err, res){
if (err) return callback(err);
let stranger = res[0].IP_address;
console.log(stranger);
let room_users = [];
room_users.push(mine);
room_users.push(stranger);
console.log(room_users);
connection.query("INSERT INTO chatting_status (IP_client_1, IP_client_2) VALUES (?)", [room_users], function (err, res){
if (err) return callback(err);
console.log('inserted');
callback(null, {stranger: stranger, room_users: room_users});
});
});
});
});
}
run((err, result) => {
if (err) {
console.error(err);
} else {
console.log(result);
}
});
Personally, this continually nesting callback style is a drawback of writing sequenced asynchronous code with plain callbacks. I would prefer to use the promise interface to your database and write promise-based code using async/await, which allows you to write more linear-looking code.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How a file is read when sent to an API end point
Question: I want to know how a file is read if sent to an end point.
After reading multiple articles and doing some research, I am able to send a file to an Amazon S3 bucket. Below is the working code, but I don't understand how a file is sent to my API via Postman, how the processing happens, and how the file is being read in the code. Can someone please help me decode this code?
I have added line numbers to the code I want to understand.
Line
Number
1 [httppost]
2 public async Task<bool> Upload()
{
try
{
3 var filesReadToProvider = await Request.Content.ReadAsMultipartAsync();
4 foreach (var content in filesReadToProvider.Contents)
{
5 var stream = await content.ReadAsStreamAsync();
6 using (StreamReader sr = new StreamReader(stream))
{
string line = "";
7 while ((line = sr.ReadLine()) != null)
{
8 using (MemoryStream outputStream = new MemoryStream())
9 using (StreamWriter sw = new StreamWriter(outputStream))
{
sw.WriteLine(line);
10 sw.Flush();
PutRecordRequest putRecord = new PutRecordRequest();
putRecord.DeliveryStreamName = myStreamName;
Record record = new Record();
11 outputStream.Position = 0;
record.Data = outputStream;
putRecord.Record = record;
try
{
await kinesisClient.PutRecordAsync(putRecord);
}
catch (Exception ex)
{
Console.WriteLine("Failed to send record to Kinesis. Exception: {0}", ex.Message);
}
}
}
}
}
}
catch (Exception e)
{
}
return true;
}
P.S: I am sending a file via Postman for testing the working of this end point.
My undertanding of above code.
Line Number 1: It's a HTTPPOST request, so that means the content I am sending will be in the body of the request and not in the headers or the URL. Since I am sending a file, the file content will be sent in the body of the request.
Line Number 2: asyn so it will be run on a separate thread so my UI (if any) will be responsive.
Line Number 3: I am guessing Request is basically a handle to read the body content for any API Requests, so I am reading the content using Request.Content,
Question: What is the meaning of ReadAsMultipartAsync? It says: "Reads all body parts within a MIME multipart message and produces a set of System.Net.Http.HttpContent instances as a result." What does this mean? Does it mean that if my file has 10 lines, it reads all 10 lines and kind of stores them in memory (in layman's terms)?
Line Number 4: I am guessing filesReadToProvider.Contents is again a handle for all those 10 lines that I have in the file. Does that mean that if I have 10 lines in my file, this loop will run 10 times? (I think I am wrong here.)
Line Number 5: ReadAsStreamAsync: it says: "Serialize the HTTP content and return a stream that represents the content as an asynchronous operation." Does this mean that it is reading one line at a time and serializing it? But why is it serializing the lines?
Line Number 6: Since I have streams to deal with, I am creating a StreamReader instance and passing the stream to it (in layman's terms: am I passing one line of the file that I am reading?).
Beyond this I am lost. I am totally confused after this. What is a stream? Is it one line in my file?
Question: Please someone help me in understading what is happening in above code. I don't think I can directly link a line of file with a stream. Please guide me.
A:
Line 1: Setup the method to respond to HttpPost
Line 2: async means go off and perform the task (which can be on a separate thread and will be in this case), and re-call this method when that task has completed or failed.
Line 3: Multipart HTTP is a standard often used for sending files What is http multipart request?
Line 4: This is the different contents of the Multipart request the user has sent.
Line 5: Get the contents of this part as a stream. A stream performs byte reading only and isn't very user friendly. Hence the StreamReader in Line 6.
Line 6: Creates a StreamReader, which is a helper class to allow line by line processing of a Stream.
Line 7: do a loop, getting line's as a string, until we hit a null.
Line 8: Construct a memory stream. A memory stream is a stream where the read/write operations are performed in memory rather than from a network or disk.
Line 9: Use a StreamWriter around the memory stream. StreamWriter is a helper class that will allow easier line based operations.
Line 10: Use the StreamWriter to write the current lines from the loop started in line 7. We Flush it so that it has the full line inside of it. If we don't do this potentially it won't have all the content. Streams by default will buffer, meaning they will delay their read/writes until enough data has been added to them as a performance optimisation. This forces that read/write operation.
Line 11: If we don't set the location of the MemoryStream back to 0, it'll attempt to read AFTER the line has been inserted. This sets it back to the beginning of the line. MemoryStream are basically a byte buffer with what you put inside of them, along with the current Read/Write location information.
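The rewind in line 11 can be illustrated in any language with a buffer that has a single read/write position. Here is a toy JavaScript sketch (an illustration only, not real .NET semantics) showing why reading without rewinding yields nothing:

```javascript
// A toy in-memory stream with one shared read/write position, mimicking
// the relevant behaviour of .NET's MemoryStream.
class ToyMemoryStream {
  constructor() { this.data = []; this.position = 0; }
  write(bytes) {
    for (const b of bytes) this.data[this.position++] = b;
  }
  readToEnd() {
    const out = this.data.slice(this.position);
    this.position = this.data.length;
    return out;
  }
}

const ms = new ToyMemoryStream();
ms.write([104, 105]); // write "hi"; position is now 2, i.e. past the data

// Reading immediately returns nothing, because reads start at the
// current position, which sits after what was just written.
console.log(ms.readToEnd()); // []

// Rewinding first (the "outputStream.Position = 0" step) recovers the data.
ms.position = 0;
console.log(ms.readToEnd()); // [ 104, 105 ]
```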
|
{
"pile_set_name": "StackExchange"
}
|
Q:
php return a specific row from query
Is it possible in php to return a specific row of data from a mysql query?
None of the fetch statements that I've found return a 2 dimensional array to access specific rows.
I want to be able to return only 1 specific row, kinda like mysql_result... except for entire row instead of 1 cell in a row.
I don't want to loop through all the results either; I already know how to do that. I just thought there might be a better way that I'm not aware of. Thanks.
A:
For example, mysql_data_seek() and mysqli_stmt_data_seek() allow you to skip forward in a query result to a certain row.
If you are interested in one certain row only, why not adapt the query to return only the row you need (e.g. via a more specific WHERE clause, or LIMIT)? This would be more resource-effective.
A:
You should add LIMIT to your mysql statement. And it will return only data you need. Like following:
-- skips the first 2 rows, then returns 1 row
SELECT * FROM table LIMIT 2, 1
|
{
"pile_set_name": "StackExchange"
}
|
Q:
ajax function and Grails controller
i'm just trying the ajax Jquery function in GSP, here is the GSP:
<%@ page contentType="text/html;charset=UTF-8"%>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1" />
<meta name="layout" content="main" />
<title>Insert title here</title>
<g:javascript library='jquery' plugin='jquery' />
<script type="text/javascript">
function callAjax(){
$(document).ready(function(){
$('button').click(function(){
var URL="${createLink(controller:'book',action:'checkJquery')}"
$.ajax({
url:URL,
data: {id:'1'},
success: function(resp){
console.log(resp);
$("#author").val(resp.author)
$("#book").val(resp.bookName)
}
});
});
});
}
</script>
</head>
<body>
<button class="testMe" onclick="callAjax();">test</button>
<div class="body" id="divBody">
<g:textField name="author" id="author"/>
<g:textField name="book" id="book"/>
</div>
</body>
</html>
here is the checkJquery action in the controller :
def checkJquery() {
def s=Book.get(params.id)
render s as JSON
}
The problem is that when I click the button, it doesn't do anything, but if I click it again it prints the output below in the Chrome console. The questions are: why didn't the first click work, and why is the response printed twice?
Object {class: "test.Book", id: 1, author: "a1", bookName: "book1"}
Object {class: "test.Book", id: 1, author: "a1", bookName: "book1"}
A:
So there are a couple things to point out here.
function callAjax(){
$(document).ready(function(){
$('button').click(function(){
var URL="${createLink(controller:'book',action:'checkJquery')}";
$.ajax({
url:URL,
data: {id:'1'},
success: function(resp){
console.log(resp);
$("#author").val(resp.author);
$("#book").val(resp.bookName);
}
});
});
});
}
Let's start with just the logic. This creates a function that contains a document-ready handler. That means when the function executes, it hands the inner function to the document-ready method, which delays its execution until the page's body is parsed and in the DOM.
Now let's look at the HTML.
<button class="testMe" onclick="callAjax();">test</button>
This defines a button that calls the callAjax() method when it is clicked. So let's follow the logic: you create your function to be executed later, your page is rendered, and the button exists.
You click the button, which executes the method. That method then gives the inner function to document-ready, to wait until the page is parsed. But we already know it is, because you called it through an interaction with the page. So the document-ready wrapper is pointless.
Another point: that call is going to happen every time the button is clicked, meaning your method runs multiple times, your binding happens multiple times, and so on and so forth.
You should really consider binding in your javascript instead of inline in order to separate your concerns and to minimize/eliminate the redundancy.
So first off the html would change to be something like..
<button class="testMe">test</button>
And your javascript...
$(document).ready(function(){
$('.testMe').click(function(){
var URL="${createLink(controller:'book',action:'checkJquery')}";
$.ajax({
url:URL,
data: {id:'1'},
success: function(resp){
console.log(resp);
$("#author").val(resp.author);
$("#book").val(resp.bookName);
}
});
});
});
Now your markup would be only your markup, and your bindings would happen after the page loads, and only once.
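To see the multiple-binding problem in isolation, here is a minimal sketch in plain Node (no DOM or jQuery; the tiny emitter below is made up for illustration and stands in for jQuery's .click() binding):

```javascript
// A stand-in for a DOM button: .click(fn) binds a handler, .press()
// simulates a user click by running every bound handler.
function makeButton() {
  const handlers = [];
  return {
    click(fn) { handlers.push(fn); },          // like $('button').click(fn)
    press()  { handlers.forEach(fn => fn()); } // like a user click
  };
}

const button = makeButton();
let ajaxCalls = 0;

// Mimics the original callAjax(): every invocation binds ANOTHER handler.
function callAjax() {
  button.click(() => { ajaxCalls += 1; });
}

// Wire callAjax itself as the inline onclick, as in the original markup.
button.click(callAjax);

button.press(); // 1st click: only binds the ajax handler; ajaxCalls stays 0
button.press(); // 2nd click: runs that handler once, and binds a second one
button.press(); // 3rd click: runs two handlers

console.log(ajaxCalls); // 3
```

This matches the symptom in the question: the first click appears to do nothing (it only binds), and every later click fires one more handler than the last.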
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Android delete from listview, user experience
I've seen how iPhone users delete from lists: it's generally a swipe action that then shows a minus/remove button.
I realize that it is counterproductive to implement iPhone idioms on Android, because Android users don't know this stuff.
I don't want to do that; I just don't know of a better, more intuitive way to delete from a ListView.
I've previously opted for long-clicks on ListView items, which show an AlertDialog asking if you want to delete or do other things, but this is never an obvious thing to do.
I've seen delete buttons shown in each row, but that messes with the layout of the ListView in a way that wasn't considered in the wireframes.
What is a good, intuitive way to allow the user to remove items from ListViews on Android?
A:
Here's my two cents before I pitch my answer. Anyone who has an Android phone is going to know, or eventually find out, that long-clicks often lead to another menu. Yes, it's not immediately obvious, but they are going to figure it out, just as iPhone users have figured out that the swipe action is to delete.
If you really want a foolproof way for a user to know how to delete, I would implement checkboxes. (More on checkboxes here.) If the user checks an item, bring up a "soft menu" at the bottom with the options normally associated with long-clicks.
If you look at the Gmail application and check a box, you'll see what I mean by "soft menu".
Another way you could go would be to implement checkboxes and then use menu options. Every Android user should be able to find the menu button on their device; all devices have one. Make one of the menu options "delete" and you're all set.
http://developer.android.com/guide/practices/ui_guidelines/menu_design.html#options_menu
Q:
OpenXML: Allow editing of Content Controls in locked Word document
I want to create a Word document that works as a template, where all the document is locked from editing except the Content Controls (<sdt/> elements) in the document that the user can edit.
What I've seen is that if I lock the document editing (right now I'm using the _markAsFinal property), there's no way to unlock a single Content Control.
Am I missing something? Or is this by design?
A:
In your settings.xml file, you'll want under <w:settings/> an element like this:
<w:documentProtection w:edit="forms" w:enforcement="1" w:cryptProviderType="rsaFull"
w:cryptAlgorithmClass="hash" w:cryptAlgorithmType="typeAny" w:cryptAlgorithmSid="4"
w:cryptSpinCount="100000" w:hash="UrgUnH3e8g+JF+pZ0azudEQQUYY="
w:salt="dKkOT11EOm/O3alLt8NBbQ=="/>
The hash and salt you'll need to set on your own, you can refer to the Ecma specs and implementation notes for those details, but this is a really good tutorial to just jump right in. But what this does is limit all editing to only content controls.
Q:
Spring SAML: Error decrypting encrypted key, No installed provider supports this key
I have referred to the Spring SAML manual to create the private key and import the public certificate, but I am still facing issues with encryption/decryption.
I have created a JKS file with the following commands, as mentioned in the manual:
Command used to import the public certificate of the IdP:
keytool -importcert -alias adfssigning -keystore samlKeystore.jks -file testIdp.cer
Command used for the private key:
keytool -genkeypair -alias myprivatealias -keypass changeit -keystore samlKeystore.jks
The passwords of both the private key and the keystore are defined as 'changeit'.
I have configured the securityContext as follows
<bean id="keyManager" class="org.springframework.security.saml.key.JKSKeyManager">
<constructor-arg value="classpath:security/samlKeystore.jks"/>
<constructor-arg type="java.lang.String" value="changeit"/>
<constructor-arg>
<map>
<entry key="myprivatealias" value="changeit"/>
</map>
</constructor-arg>
<constructor-arg type="java.lang.String" value="myprivatealias"/>
</bean>
I am able to see the idpDiscovery page where I can select the IdP, and I can view the IdP's login page as well. But when I provide the user credentials, I get the following exception.
This exception occurs when a saml2:EncryptedAssertion is sent along with the saml2p:Status in the SAML response (class WebSSOProfileConsumerImpl of the spring-saml jar):
ERROR org.opensaml.xml.encryption.Decrypter - Error decrypting encrypted key
org.apache.xml.security.encryption.XMLEncryptionException: No installed provider supports this key: sun.security.provider.DSAPrivateKey
Original Exception was java.security.InvalidKeyException: No installed provider supports this key: sun.security.provider.DSAPrivateKey
at org.apache.xml.security.encryption.XMLCipher.decryptKey(XMLCipher.java:1479)
at org.opensaml.xml.encryption.Decrypter.decryptKey(Decrypter.java:697)
at org.opensaml.xml.encryption.Decrypter.decryptKey(Decrypter.java:628)
at org.opensaml.xml.encryption.Decrypter.decryptUsingResolvedEncryptedKey(Decrypter.java:783)
Caused by: java.security.InvalidKeyException: No installed provider supports this key: sun.security.provider.DSAPrivateKey
at javax.crypto.Cipher.a(DashoA13*..)
at javax.crypto.Cipher.init(DashoA13*..)
at javax.crypto.Cipher.init(DashoA13*..)
at org.apache.xml.security.encryption.XMLCipher.decryptKey(XMLCipher.java:1475)
... 46 more
740323 [http-8080-2] ERROR org.opensaml.xml.encryption.Decrypter - Failed to decrypt EncryptedKey, valid decryption key could not be resolved
740324 [http-8080-2] ERROR org.opensaml.xml.encryption.Decrypter - Failed to decrypt EncryptedData using either EncryptedData KeyInfoCredentialResolver or EncryptedKeyResolver + EncryptedKey KeyInfoCredentialResolver
740325 [http-8080-2] ERROR org.opensaml.saml2.encryption.Decrypter - SAML Decrypter encountered an error decrypting element content
Can anyone let me know where I am going wrong?
Alternate command used for private key generation, instead of the one mentioned above:
keytool -genkey -alias privatekeyalias -keyalg RSA -keystore samlKeystore.jks
If I use this command and update the JKS file, then I get a different exception, InvalidKeyException: Key is too long for unwrapping.
Caused by: java.security.InvalidKeyException: Key is too long for unwrapping
at com.sun.crypto.provider.RSACipher.engineUnwrap(DashoA13*..)
at javax.crypto.Cipher.unwrap(DashoA13*..)
at org.apache.xml.security.encryption.XMLCipher.decryptKey(XMLCipher.java:1477)
... 46 more
41 [http-8080-1] ERROR org.opensaml.xml.encryption.Decrypter - Failed to decrypt EncryptedKey, valid decryption key could not be resolved
42 [http-8080-1] ERROR org.opensaml.xml.encryption.Decrypter - Failed to decrypt EncryptedData using either EncryptedData KeyInfoCredentialResolver or EncryptedKeyResolver + EncryptedKey KeyInfoCredentialResolver
42 [http-8080-1] ERROR org.opensaml.saml2.encryption.Decrypter - SAML Decrypter encountered an error decrypting element content
Can anyone help me out with this problem?
A:
The problem was caused by using a different keystore in the application than the one generated with:
keytool -genkeypair -alias privatekeyalias -keypass samplePrivateKeyPass -keystore samlKeystore.jks -keyalg RSA -sigalg SHA1WithRSA
Q:
Testing GWT application in firefox - how to prevent firefox to cache css
I develop a GWT application and use Mozilla Firefox for testing. I am looking for a way to configure Firefox not to cache the CSS files of my application, because it often happens that I forget to clear the Firefox cache and end up working on my application with old CSS styles.
Is it possible to configure Mozilla Firefox not to cache CSS files?
A:
The Web Developer extension in Firefox lets you disable all caching. This should work for CSS, as well. It'll also help prevent those "doh" moments when the app starts working after you clear the cache.
Q:
403 Forbidden Response to a JSONP request
First time posting to Stack Overflow.
I'm just trying to get the data from a JSON URL using jQuery.
The first problem was the cross-origin request block. Even with libraries that are supposed to stop this issue, like ajax cross origin js (sorry not to provide this link, I'm too new to have more than 2 links on here), I was still having no luck: the same cross-origin error.
So I moved to JSONP.
url = "http://take-home-test.herokuapp.com/api/v1/works.json?callback=?"
$.ajaxSetup({ dataType: "jsonp" });
$.getJSON(url, function(json) {
console.log(json);
});
(I also tried the $.ajax syntax for a JSONP request.)
Now I can see in the network tab that the data is coming back, but the status is 403 Forbidden.
[Screenshot: the response in the Network tab of Chrome]
I'm using the http-server package (installable with npm) to avoid Chrome having issues with the JSON MIME type.
A similar Stack Overflow answer says I need to integrate JSONP support in my framework, but it referred to Sinatra for Ruby:
Why Does JSONP Call Return Forbidden 403 Yet URL can be accessed in a browser
So I tried out the npm jsonpclient package and still got the forbidden response.
Any ideas? This has taken me over a day.
A:
The problem: the server (http://take-home-test.herokuapp.com) does not have the 'Access-Control-Allow-Origin' header set. If you have access to the server, start it with the '--cors' option, i.e. node bin/http-server --cors ... This will enable CORS via the Access-Control-Allow-Origin header and should resolve your problem.
If you do not have access to the server. Here's a quick solution: proxy your request through http://cors.io. See below.
url = 'http://take-home-test.herokuapp.com/api/v1/works.json?callback=?';
new_url = "http://cors.io/?u=" + encodeURIComponent( url );
$.ajaxSetup({ dataType: "jsonp" });
$.getJSON(new_url, function(json) {
console.log(json);
});
JSFiddle: http://jsfiddle.net/davemeas/4rt3s7ta/1/ (note: you have to add jQuery to that fiddle :) )
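For context, here is a minimal sketch (plain Node, no browser; the server function and callback name are made up) of what a JSONP exchange actually does, and why server-side cooperation is required: the server must reply with JavaScript that calls the named callback, so a 403 error page, or even a plain JSON body, breaks the scheme.

```javascript
// What a cooperating JSONP server would send back as the script body
// for a request like ...?callback=cb123:
function fakeJsonpServer(callbackName) {
  return callbackName + '({"works": [1, 2, 3]})';
}

let received = null;
const callbacks = { cb123: (data) => { received = data; } };

// The browser would inject <script src="...?callback=callbacks.cb123">
// and execute the response; eval-ing the returned text simulates that.
const scriptBody = fakeJsonpServer('callbacks.cb123');
eval(scriptBody);

console.log(received); // { works: [ 1, 2, 3 ] }
```

If the server instead returns a 403 page, the "script" never calls the callback, which is why the data can be visible in the network tab yet never reach your success handler.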
Q:
Packaging a program containing images
I'm having massive issues packaging my Java program, which contains images, into a jar for conversion into an executable file. The images are used in the background of the program and in its buttons. Please see the diagram below, which shows the program I want to convert to a jar.
IMAGE
As you can see above, the program runs OK. I created the same program with no custom background and with plain buttons containing no images, and I successfully packaged it into a jar and subsequently into an .exe file.
As for drawing my background, I do the following:
public void paintComponent(Graphics g) {
Image img = new ImageIcon("imgs/Bgnd1.jpg").getImage();
Dimension size = new Dimension(img.getWidth(null), img.getHeight(null));
setPreferredSize(size);
setMinimumSize(size);
setMaximumSize(size);
setSize(size);
setLayout(null);
g.drawImage(img, 0, 0, null);
}
With regards to creating my 4 custom buttons with images, I'm doing the following:
// Prepare rollover images
ImageIcon F1 = new ImageIcon("imgs/btn_f1_not_selected.jpg");
ImageIcon F1rollOver = new ImageIcon("imgs/btn_f1_selected.jpg");
// Create F1 button
final JButton btnF1 = new JButton(F1);
//btnF1.setOpaque(false);
btnF1.setContentAreaFilled(false);
btnF1.setBorder(null);
btnF1.setBorderPainted(false);
btnF1.setFocusPainted(false);
btnF1.setRolloverIcon(F1rollOver);
I attempted placing the images in the bin folder, and for the creation of the background I altered the above method with regard to how the image is declared/fetched:
public void paintComponent(Graphics g) {
String path = "Bgnd11.jpg";
java.net.URL imgURL = getClass().getResource(path);
Image img = new ImageIcon(imgURL).getImage();
Dimension size = new Dimension(img.getWidth(observer), img.getHeight(observer));
setPreferredSize(size);
setMinimumSize(size);
setMaximumSize(size);
setSize(size);
setLayout(null);
g.drawImage(img, 0, 0, null);
}
I also attempted fetching the images needed for the creation of my buttons as indicated below, and then passing them to my buttons, but this did not work.
String path = "Bgnd11.jpg";
java.net.URL imgURL = getClass().getResource(path);
Image img = new ImageIcon(imgURL).getImage();
How do I locate and load the images?
A:
In your first attempt, you're loading images from the file system, in the current directory, which is the directory from which the java or javaw command is started. That's what prevents you from bundling the images with your application. Obviously, the end user of your app won't have the images in his current directory, and his current directory will change depending on how he launches the application.
You should instead have the images packaged inside the jar file, and thus be present in the classpath, and thus load them using the ClassLoader as you're doing in your second attempt.
Let's say they're in the folder /resources/images of the jar, which thus corresponds to the package resources.images.
Using getClass().getResource("Bgnd11.jpg"), as the javadoc indicates, tries to find Bgnd11.jpg in the same package as the class returned by getClass(). So, it would work in our example if the class was in the package resources.images. If it's not, you should use the absolute path of the resource:
URL imgURL = getClass().getResource("/resources/images/Bgnd11.jpg");
Also, don't mess with the bin folder. This is the destination folder of Eclipse, and doing a clean build will remove everything from this directory. Just add the images to the appropriate package in the source directory, and Eclipse will automatically copy them to the destination directory when building the project.
Q:
Presentations of PSL(2, Z/p^n)
As is well known, the group $PSL(2,\mathbb Z)$ is isomorphic to the free product $C_2 \ast C_3$ of cyclic groups of order $2$ and $3$. Call the generators of the cyclic groups $S$ and $T$.
Problem: Given a prime number $p$ and a natural number $n$, write a presentation of the quotient $PSL(2, \mathbb Z/p^n\mathbb Z)$ with the images of $S$ and $T$ as generators.
A:
A method to do this for the group $\textrm{PSL}_2 (\mathbb{F}_{p^n})$ can be found in the papers by Glover and Sjerve:
Representing $PSl_2(p)$ on a Riemann surface of least genus, L'Enseignement Mathématique 31 (1985)
The genus of $PSl_2(q)$, Journal für die reine und angewandte Mathematik 380 (1987).
A:
I usually use Sunday's presentation: see MR0311782. His T has order 2 but your S will be what he denotes ST.
A:
The group $PSL_2(\mathbb{Z}/p^n)$ is the automorphism group of the $(p+1)$-regular tree of depth $n$, where at level $m$ of the tree you have the points of $\mathbb{P}(\mathbb{Z}/p^m)$. The main benefit of this view is that you can understand the relations at each level, and then move inductively to the next one.
Q:
Why is another commit created when nothing has changed?
In the script below, a new project is created. One file is committed. A change is made, but it is removed from the stage. Doing a commit at this point should do nothing. Why is another commit created?
++ git init
Initialized empty Git repository in C:/src/newproject/.git/
++ echo asdf
++ git status
On branch master
No commits yet
Untracked files:
(use "git add <file>..." to include in what will be committed)
file1.txt
nothing added to commit but untracked files present (use "git add" to track)
++ git add file1.txt
warning: LF will be replaced by CRLF in file1.txt.
The file will have its original line endings in your working directory
++ git status
On branch master
No commits yet
Changes to be committed:
(use "git rm --cached <file>..." to unstage)
new file: file1.txt
++ git commit '--message=this is the message'
[master (root-commit) c3f5d0f] this is the message
1 file changed, 1 insertion(+)
create mode 100644 file1.txt
++ git log
commit c3f5d0f7da49b4eacc8df2b6e3e1efda4fc33cad (HEAD -> master)
Author: lit <[email protected]>
Date: Tue Dec 17 17:04:30 2019 -0600
this is the message
++ echo another line
++ git status
On branch master
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)
modified: file1.txt
no changes added to commit (use "git add" and/or "git commit -a")
++ git add file1.txt
warning: LF will be replaced by CRLF in file1.txt.
The file will have its original line endings in your working directory
++ git status
On branch master
Changes to be committed:
(use "git reset HEAD <file>..." to unstage)
modified: file1.txt
++ git rm --cached file1.txt
rm 'file1.txt'
++ git status
On branch master
Changes to be committed:
(use "git reset HEAD <file>..." to unstage)
deleted: file1.txt
Untracked files:
(use "git add <file>..." to include in what will be committed)
file1.txt
++ git commit '--message=this is the second message'
[master a28bb98] this is the second message
1 file changed, 1 deletion(-)
delete mode 100644 file1.txt
++ git status
On branch master
Untracked files:
(use "git add <file>..." to include in what will be committed)
file1.txt
nothing added to commit but untracked files present (use "git add" to track)
++ git log
commit a28bb987b69c69fabe92154b5f6929fd65819bfd (HEAD -> master)
Author: lit <[email protected]>
Date: Tue Dec 17 17:04:36 2019 -0600
this is the second message
commit c3f5d0f7da49b4eacc8df2b6e3e1efda4fc33cad
Author: lit <[email protected]>
Date: Tue Dec 17 17:04:30 2019 -0600
this is the message
A:
The change is not removed from the staging area. The entire file is removed from the staging area.
git rm --cached file1.txt
rm 'file1.txt'
++ git status
On branch master
Changes to be committed:
(use "git reset HEAD <file>..." to unstage)
deleted: file1.txt
Note that this deletion is showing up under changes to be committed. That means the file is in the HEAD commit (see the last section, on git status).
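The whole transcript can be reduced to a few commands in a throwaway repository (a sketch; the identity values are made up) to confirm that the second commit is a real, non-empty commit recording the file's deletion, while the file itself stays on disk as an untracked file:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

echo asdf > file1.txt
git add file1.txt
git commit -qm 'this is the message'

echo 'another line' >> file1.txt
git add file1.txt
git rm --cached file1.txt      # removes the ENTIRE file from the index

git status --short             # shows "D  file1.txt" and "?? file1.txt"
git commit -qm 'this is the second message'   # commits the staged deletion
git rev-list --count HEAD      # 2 -- two real commits
test -f file1.txt              # ...but the file is still on disk
```

The staged deletion is exactly why the second `git commit` has something to do: the index differs from HEAD, even though the working tree still contains the file.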
Long
The way to think about this is:
Git commits store whole files, always. They do not store changes.
Each commit has its own independent set of files, quite apart from every other commit. (However, since the files in a commit are completely read-only, frozen for all time, any commit can share files with any other commit, if the content of those files match. The fact that you can never change any commit, not one single bit, enables this.)
The files that go into your next commit are the files that are currently stored in the index.
The index is so important—and/or so poorly named—that Git actually has three names for it. Sometimes Git calls it the index. Sometimes Git calls it the staging area. Occasionally—rarely these days—Git calls it the cache. These different names reflect the different ways that this thing—this index/staging-area/cache—is used, but for the most part, it's all just the one thing.
Despite its importance, though, Git rarely lets you see what is in it—at least, not directly. You can easily see what is in your work tree (or working tree or any number of similar terms—again these all refer to the same thing), because your work-tree—I like to hyphenate it—holds ordinary files in their everyday format, so that every program on your computer can see them and work with them. This is not the case for files that are in commits, nor for files that are in the index.
Normally, when Git shows you a commit, it shows it by comparing the commit to some other commit. The most common comparison is between a child commit and its immediate parent. When you have a pretty-new repo with just two commits in it, one is the parent and the other is the child, and git show shows you what's in the child by:
extracting all the files from the parent into a temporary work area;1
extracting all the files from the child into a temporary work area; and
comparing all the files in these two work areas.
It then merely tells you about files that are different, and by default, shows you what it sees as the difference as well.
The files that are in commits are in a special, read-only, frozen, Git-only format that Git calls a blob object. You don't really need to know this (it won't be on any quiz) to use Git. But it helps, because you do need to know about the index to use Git. The files stored in Git's index are in this same read-only, Git-only format. This means that you literally can't see them—at least, not without having Git extract them somewhere.
When you git checkout a commit, Git copies that commit's files into the index (but see footnote 2 for technical strictness again). Then it copies—and de-Git-ifies—the frozen-format file into your work-tree, so that you can see it and work with it.
You can now work with the work-tree files. If you change one in any way—whether that's a total replacement, or a modification in place—this has no effect on the index. You probably want the changed file in your new commit, though, so now you should run git add on that file. What git add does is package up the work-tree copy of the file into the internal Git-only format, and write that into the index (and see footnote 2 again for technical accuracy).
When you make a new commit, Git packages up the index's files as a new commit. So now the new commit and the index match. The new commit becomes the current commit. If you updated the index as you went along, all three storage areas match: the current commit, the index, and your work-tree.
If you like, you can remove a file from the index. You can do this while also removing it from your work-tree, or while keeping it in your work-tree. Either way, what you've done is arrange for the next commit you make to just not have the file at all.
1This temporary work area is not your work-tree, which is mostly reserved for you to mess with. In fact, given the way commits are stored internally, Git can usually get away with not bothering to extract very much at all: it's easy for Git to tell that file F in commit P is exactly the same as file F in commit C, for instance, so for all unchanged files, Git can just do nothing at all.
2Technically, the index simply holds the file's name and a reference to the internal blob object that Git is using to store the file's content. But you can use Git without knowing this: it's OK to imagine the index holding the entire file's content, at least until you start getting deep into Git internals and using git ls-files --stage and git update-index directly.
Summary of the above
The short version of all of the above is that the index acts as where you build your next commit. It has a copy of every file—or more precisely, a reference to such a copy—in the form that the file would or does have in a new or an existing commit.
When you run git commit, Git packages up the index into a new commit. The new commit becomes the current commit as soon as possible after the new commit has been created.3 So, now the index and the commit match. That's also the normal case right after git checkout: the index and commit normally match. You make them not-match using git add and/or git rm. Then you make a new commit from the index, and they match again. The index starts out as a copy of the current commit. Then you change it—put entire new files in, or take entire files out—to build up your proposed new commit. Then you commit and they match.4 All of this happens mostly-invisibly, because the only files you can see and work with are the ones in your work-tree.
3This is so fast that it's almost impossible not to see it as a single operation. But it is actually separate operations: "write out commit", then "update some reference". The reference update requires adding to the reference's reflog, in most cases, and that's where you could—at least in theory, if you're fast enough—see these various steps unfold.
4There are some exceptions to this rule. See, e.g., Checkout another branch when there are uncommitted changes on the current branch. Eventually, look into git commit --only too. But it's at least relatively dependable.
Viewing the index with git status
Remember that the index (or staging area, if you prefer that name) sits, in effect, between your current commit—which Git calls HEAD—and your work-tree. That is, you can draw the current commit on the left, the index in the middle, and your work-tree on the right:
HEAD index work-tree
--------- --------- ---------
README.md README.md READNE.md
file.txt file.txt file.txt
The HEAD copy is read-only. You can copy from it, to the index and/or the work-tree, but you can't copy to it. The index copy can be replaced wholesale (git add) or removed entirely (git rm). The work-tree copy is a regular file, so you can do anything that your computer can do, without even using Git at all.
You can't see the index copy of the file directly, but git status will do comparisons and tell you what's different. In fact, git status runs two comparisons:
First, it compares HEAD vs the index. For every file that is the same, it says nothing at all. For a file that is different, it reports something staged for commit.
Then, it compares the index vs your work-tree. For every file that is the same, it says nothing at all. For a file that is different, it reports something not staged for commit.
This tells you, in a very efficient way, what's in your index: i.e., what will be in the next commit. If it's different from what's in the current commit, you see a change staged for commit. If it's different from what's in your work-tree, you see a change not staged for commit.
There's one last wrinkle here. Because your work-tree is yours, to do whatever you want with it, you can put files into it that aren't in the index. Or, you can take a file that's in all three places—HEAD, the index, and your work-tree—and remove it from the index, without removing it elsewhere. You can't remove it from the commit—no commit can ever be changed—so it remains there, but it can also remain in the work-tree, and/or you can change the file in the work-tree.
Any file that is not in the index, but is in your work-tree, is what Git calls an untracked file. This is the actual definition of untracked file: it's just a file that exists in your work-tree but not in the index.
Because you can change the index (put files in, or git rm --cached to take them out), you can change the untracked-ness of any file at any time. Untracked-ness is always relative to what's in the index.
In any case, though, when you do have untracked files, git status normally complains about them. To shut it up—make it not complain that all your build artifacts are untracked, for instance—you can list file names, or glob patterns, in .gitignore files. These entries in .gitignore do not make files untracked. They just tell git status to shut up about them, and tell git add not to add them to the index by default. If a file that would match a .gitignore line is already tracked, though, it stays tracked.
Q:
Jquery Add/Remove Class for label and input text
Using jQuery, I'm trying to show an input text box in place of the label when I click the edit button, and to change it back from input to label when I click the cancel button.
MyCode
<label id="labelId" class="labelclass">label</label>
<input value="input" type="text" class="inputclass" id="inputId" style="display:none;"/>
<button id="edit">edit</button>
<button id="cancel">cancel</button>
<script type="text/javascript">
$(document).click(function(){
$('#edit').addClass('inputclass').removeClass('labelclass');
$('#cancel').addClass('labelclass').removeClass('inputclass');
});
</script>
Any help is appreciated. Thanks!
A:
You need to bind the click event on the buttons instead of the document. Then, since your input, label, and buttons are at the same level, you can use .siblings(selector) and just hide/show them.
Try this:
$(document).ready(function() {
$('#edit').click(function() {
$(this).siblings('.labelclass').hide();
$(this).siblings('.inputclass').show();
});
$('#cancel').click(function() {
$(this).siblings('.labelclass').show();
$(this).siblings('.inputclass').hide();
});
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<label id="labelId" class="labelclass">label</label>
<input value="input" type="text" class="inputclass" id="inputId" style="display:none;" />
<br/>
<button id="edit">edit</button>
<button id="cancel">cancel</button>
Q:
Block some characters from being typed in a text box?
I'd like to block users from typing certain characters in a text box (I want to allow only [a-z], [A-Z], and underscore). I took this from a previous question, but if I press the arrow keys (in FF 3.6), I get a JS error:
"window.event is undefined"
This is the original code; what can I change window.event to in this case?
$("#myTextBox").bind("keypress", function(event) {
var charCode = (event.which) ? event.which : window.event.keyCode;
if (charCode <= 13) {
return true;
}
else {
var keyChar = String.fromCharCode(charCode);
var re = /[a-zA-Z_]/
return re.test(keyChar);
}
});
Thanks
A:
You can do just this:
$("#myTextBox").bind("keypress", function(event) {
var charCode = event.which;
if (charCode <= 13) return true;
var keyChar = String.fromCharCode(charCode);
return /[a-zA-Z_]/.test(keyChar);
});
jQuery normalizes event.which already...and you can test it here :)
If it didn't, the fix would just be event.keyCode, you want to refer to the event passed as the parameter to this function, not a property on window. In this case, if the if condition was true, it'd return...so there's no need for an if/else, you can simplify it like I have above.
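The decision logic can also be pulled out into a pure function and checked outside the browser (a sketch; the function name is mine):

```javascript
// Pure version of the keypress filter: returns true when the key
// should be allowed through, false when it should be blocked.
function allowKey(charCode) {
  if (charCode <= 13) return true;  // control keys (backspace, enter, ...)
  return /[a-zA-Z_]/.test(String.fromCharCode(charCode));
}

console.log(allowKey('a'.charCodeAt(0))); // true
console.log(allowKey('_'.charCodeAt(0))); // true
console.log(allowKey('7'.charCodeAt(0))); // false -- digits are blocked
console.log(allowKey(8));                 // true  -- backspace passes
```

Inside the handler you would then just `return allowKey(event.which);`, keeping the regex logic testable on its own.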
Q:
Django on a Mac
I have experience using Python, and I want to start learning Django. I recently got a MacBook Air, and am a complete novice in all things concerning OS X.
I just cannot get Django to run. I have Python installed, and have downloaded the Django module and followed the installation instructions from several different places. I am really sorry that I don't have any specific errors or problems: no matter what I try, importing Django never works (that is, no module named django can be found).
I am probably overlooking something fairly obvious, or missing some key component that no tutorial bothers to mention because it's so trivial. If you can direct me to a place where I can download Django, and maybe go over the basic essentials of installing it, that would be great.
Thanks
A:
There is unfortunately no equivalent of MAMP for Python. That being said, once you get virtualenv set up you won't have any issues at all, and you'll be able to manage several projects with different versions of Django easily.
I'm not sure what you've tried, but it seems that you've installed Django into the wrong Python setup (maybe the wrong virtual env?).
There are a few steps:
Install homebrew
Install PIP
Install virtualenv & virtualenvwrapper
create a vritual env
install Django in it
run Django in it
Then you'll work within Python virtual envs and have no problem. Google "homebrew django mac": http://hackercodex.com/guide/python-development-environment-on-mac-osx/ for example.
Enjoy :)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Check if the number is even or odd
My program gives me error(not exactly an error but it just prints error instead of even or odd) even if I put a number or letters. The code works if I remove the isdigit checker(3rd line). I do no know what am I doing wrong. Can someone please help me. Thanks in advance. Here is my code.
int main()
{
int n;
printf("Input an integer\n");
scanf("%d", &n);
if(!isdigit(n))
{
print("error");
return 0;
}
n%2 == 0 ? printf("Even\n") : printf("Odd\n");
return 0;
}
A:
isdigit is not for this purpose.
If you want to check if the input is vaild, one method is to load with %s and use strtol.
#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>
void print(const char *s) {
puts(s);
}
int main()
{
char nstr[100] = {0};
int n;
char *e;
printf("Input an integer\n");
scanf("%99s", nstr);
n=(int)strtol(nstr, &e, 10);
if(nstr[0] == '\0' || *e != '\0')
{
print("error");
return 0;
}
n%2 == 0 ? printf("Even\n") : printf("Odd\n");
return 0;
}
A:
man -a isdigit
isdigit()
checks for a digit (0 through 9).
Thus isdigit fails if ascii value of n is not anything but
Oct Dec Hex Char
--------------------------
060 48 30 0
061 49 31 1
062 50 32 2
063 51 33 3
064 52 34 4
065 53 35 5
066 54 36 6
067 55 37 7
070 56 38 8
071 57 39 9
man -a ascii
thus,
if(!isdigit(n))
{
print("error");
return 0;
}
is not an appropriate option. you should probably find some other option to validate n.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How do I make multiple folders in a single location using relative path to the location?
What I'm trying to do is create a number of folders in the "~/Labs/lab4a/" location (~/Labs/lab4a/ already exists).
Say I want folder1, folder2, folder3 all in the lab4a folder.
This isn't about making nested folders all at one go using the mkdir -p command or going in to lab4a and just making multiple folders at one go. I'm wondering is there a faster way using mkdir to create multiple folders in the same location using relative path.
i.e
prompt~/: mkdir Labs/lab4a/folder1 folder2 folder3 To create all those folders in lab4a at once.
A:
In Bash and other shells that support it, you can do
mkdir ~/Labs/lab4a/folder{1..3}
or
mkdir ~/Labs/lab4a/folder{1,2,3}
Other options:
mkdir $(seq -f "$HOME/Labs/lab4a/folder%03g" 3)
mkdir $(printf "$HOME/Labs/lab4a/folder%03g " {0..3})
Which will give you leading zeros which make sorting easier.
This will do the same thing in Bash 4:
mkdir ~/Labs/lab4a/folder{001..3}
A:
Use shell expansion :
mkdir Labs/lab4a/{folder1,myfolder,foofolder}
That such an underestimated possibility :)
my2c
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Break Up stackoverflow?
I think it might be time to consider breaking up stackoverflow into several different sites. I just posted a JS question. I never even saw it on the first page. As soon as it's off the first page, the views just stop. There's simply way too much traffic. You guys really need to do something because honestly the site is basically useless to me. The quality of posts can only go down from here. If you just broke-out JavaScript alone that would cut the traffic in half.
A:
I just posted a JS question. I never even saw it on the first page.
Staying on the front page is not the only way to ensure that your question gets visibility.
As soon as it's off the first page, the views just stop. There's simply way too much traffic.
If that's your line of thinking, shouldn't this problem apply to every single one of the hundreds of thousands of people posting on Stack Overflow?
Stack Overflow has features built in to the site so that this problem can be tackled.
Sitting on the other side of the table, I come to Stack Overflow to post answers. However, I am not, and can never become an expert in all the topics that get asked on Stack Overflow. So how do I get to see the questions matching my area of expertise?
Also, while your concern regarding your question not getting enough visibility is correct, using some mechanism to keep it on the front page does not necessarily ensure that it will get the desired visibility.
Stack Overflow and other Stack Exchange sites offer features that enable a variety of ways to view and browse questions.
Users can follow tags, use filters, and subscribe to RSS feeds to discover posted questions that are relevant to them. Also, the front page could differ for every other user, some could be viewing the interesting questions, currently hot questions, newest questions or currently unanswered questions.
Instead of attempting to get your question to stay put on the front-page, you should focus on asking a good question, meaning you post a question after you have done the due diligence of using Stack Overflow's search feature to see if a similar question is already answered, searching for a solution by reading the platform documentation, searching the Web for a solution existing elsewhere. Resist the urge to ask a question as soon as you find yourself stuck.
It's only natural for contributors to the site to refrain from answering the same question for the 100th time. And despite their best intentions of helping, they can't answer a not well-formed question.
On the other hand, a well-formed question attracts a lot of attention and other users are more than willing to point you to an appropriate solution.
Breaking up the site into different sites in not the appropriate solution for many reasons. Fluent users of Stack Overflow, do break up Stack Overflow by creating a custom filter and only follow the questions that interest them.
To understand more about how filters work, you can refer to this wonderful blog post on The Overflow, Stack Exchange/Stack Overflow official blog:
Introducing Custom Filters
I am an Apple developer and have a custom filter where I only follow tags that are relevant to Apple development ecosystem. My Stack Overflow front page looks different from the All Questions view.
So, generally speaking, every user has a different view of the front page, and a sure shot way to ensure getting an answer on Stack Overflow (or any Stack Exchange site for that matter) is to write good question.
It is expected of users to read up, or at least skim through the relevant articles from the Help Center before posting a question.
And if despite doing your best attempt at writing a good question, you find that it isn't getting the due traction, you can always offer a bounty (needs a minimum number of reputation points) or even use the share button located below the question to share in other social networks where you think the question can get visibility by relevant people.
The designers of Stack Overflow (and by extension other Stack Exchange sites) continuously work hard to cover several such issues and keep introducing features that can make the platform as useful for as many numbers of people as possible.
Remember, good posts make everyone happy on Stack Exchange sites, whether they are writing questions, answers, or even developing the platform.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to get the IEclipseContext in an activator
I got stuck on one problem with an Eclipse 4 RCP application. I need to log some events. I need obtain somehow a reference to the logger. I know, how to do that using IEclipseContext, but I've nowhere found, how to obtain IEclipseContext without the dependency injection, which I cannot use in the activator. Do you anybody know, how to sort it out this problem, please?
Thanks a lot
A:
You can get a specialized IEclipseContext by calling EclipseContextFactory.getServiceContext(bundleContext) which will allow access to OSGi services.
A:
It seems regretably, that there is no way to obtain IEclipseContext without using injection.
There is written in an answer to How to use eclipse 4 DI in classes that are not attached to the application model:
The problem is, however, that the IEclipseContext already needs to be
injected into a class that can access the object that needs injection.
Nevertheless I have already sorted out the problem of logging and I thing, the principle works generally. There is always some service providing things you need. If you cannot use the dependency injection, you've to get somehow (Internet and experiments are very often) an appropriate service class name. If you have got the service class name, then you can obtain an instance reference from the bundle context. Fortunately, the bundle context is accessible without using injection.
Back to our logging problem. The class being searched is org.osgi.service.log.LogService:
public class Activator implements BundleActivator {
...
private static BundleContext context;
...
public static BundleContext getContext() {
return context;
}
...
public void start(BundleContext bundleContext) throws Exception {
ServiceReference<?> logser = bundleContext.getServiceReference(LogService.class);
LogService ls = (LogService)bundleContext.getService(logser);
//print an error to test it (note, that info can be below the threshold)
ls.log(LogService.LOG_ERROR, "The bundle is starting...");
Activator.context = bundleContext;
}
...
}
Et voilà!
!ENTRY eu.barbucha.rcp-experiment.kernel 4 0 2013-08-20 07:32:32.347
!MESSAGE The bundle is starting...
That's all. Later you can obtain the bundle context using Activator.getContext(), if it would be needed.
Important note: Regretably you cannot decrease the threshold now. The JVM argument -Declipse.log.level does not affect the OSGI log service and you're using just the OSGI logger now. Unfortunately they (may have provisionally) hardcoded the logging threshold (see How to log warnings and infos in eclipse 3.7). I found out, that they haven't repair it yet. Neither in the Kepler release. However you can make a compromise. You can do that injection-way, where possible.
Final solution (to catch exceptions globally as well)
I extended my activator:
ServiceReference<?> logreser = bundleContext.getServiceReference(LogReaderService.class);
LogReaderService lrs = (LogReaderService) bundleContext.getService(logreser);
lrs.addLogListener(new LogListener() {
@Override
public void logged(LogEntry entry) {
System.err.println("Something was logged: " + entry.getMessage());
}
});
The text beginning with Something was logged really appears, whenewer is something somewhere logged. But the very advantage is, that this class is mine. I can control it. The log entry contains also the level. I can also easily set the threshold. For example on the command line.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Why XYFocusKeyboardNavigation not working?
I want to prevent handling of onKeyDown events for keyboard arrows in CommandBar control.
I disabled XYFocusKeyboardNavigation, but it doesn't work - I am still able to navigate between buttons using "Left"/"Right" arrows. Why is it ?
<CommandBar XYFocusKeyboardNavigation="Disabled">
<AppBarButton Label="menu">
<AppBarButton.Icon>
<BitmapIcon UriSource="/Help/home.png"/>
</AppBarButton.Icon>
</AppBarButton>
<AppBarButton x:Name="hideLeavesButton" Label="hide leaves" Click="HideLeavesButton_Click">
<AppBarButton.Icon>
<BitmapIcon UriSource="/Help/hideLeaves.png"/>
</AppBarButton.Icon>
</AppBarButton>
</CommandBar>
A:
Why XYFocusKeyboardNavigation not working?
It looks a bug, and I will report it, currently we have a workaround that prevent handling of onKeyDown events for keyboard arrows. For the detail please refer the following.
Window.Current.Content.PreviewKeyDown += Content_PreviewKeyDown;
private void Content_PreviewKeyDown(object sender, KeyRoutedEventArgs e)
{
if (e.Key == VirtualKey.Left | e.Key == VirtualKey.Right | e.Key == VirtualKey.Up | e.Key == VirtualKey.Down)
{
e.Handled = true;
}
else
{
e.Handled = false;
}
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
selenium webdriver - waiting until outerHTML attribute of last class in list contains "X"
Using c# and selenium webdriver, how can I wait until the last class in a list contains a specific attribute?
In my AUT, I have three classes on a page (they are all called paragraph). I need to get the last paragraph specifically (I'm using a list but feel free to suggest better method) then wait until the last class on the page contains an outerHTML attribute of "X".
This is what I have so far:
I create a list to store all classes, get the last class and finally, get the outerHTML attribute for the last class.
IList<IwebElement> Element = driver.FindElements(By.ClassName("Paragraph"));
var GetLastElement = Element.Last();
var LastElementAttribute = GetLastElement.GetAttribute("outerHTML");
Based on my code above, how can I add a wait condition that will check the last class in the list contains an outerHTML attribute of "X"?
A:
If I understood you right - lambda for X appearance in the last Paragraph outherHTML would look like this:
Wait().Until(driver => driver
.FindElements(By.ClassName("Paragraph"))
.Last().GetAttribute("outerHTML")
.Contains("X"));
|
{
"pile_set_name": "StackExchange"
}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.