Q:
MS Access information before a date
I'm trying to find employees who were hired before 1991. When I run my query I get "Data type mismatch in criteria expression". What does that mean?
This is my query:
SELECT EMP_NUM, EMP_LNAME, EMP_FNAME, EMP_INITIAL, JOB_CODE, EMP_PCT, PROJ_NUM
FROM employee
where emp_hiredate < '01/01/1991';
I've also tried 01-01-1991, January 1, 1991, and Tuesday, January 1, 1991.
The format of the hire date in the table is day of week, month, day, year, e.g. Tuesday, November 8, 1994.
A:
From "10 tips for working with dates in Microsoft Access", tip 6: the correct character to use when including a literal date value is the pound character (#).
Your query should be
SELECT EMP_NUM, EMP_LNAME, EMP_FNAME, EMP_INITIAL, JOB_CODE, EMP_PCT, PROJ_NUM
FROM employee
where emp_hiredate < #01/01/1991#
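As a side note, the same filter can be written without a date literal at all by using Access's built-in DateSerial function (a sketch, not part of the original answer):
where emp_hiredate < DateSerial(1991, 1, 1)
This sidesteps any ambiguity about date formats.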
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Is a permutation of coordinates or labels really equivalent?
To construct an N-body anti-symmetric wave function, some derivations start with the requirement that the N-body wave function should be anti-symmetric under a permutation of coordinates; other derivations start with the requirement that the total wave function should be anti-symmetric under a permutation of labels or states, for example this derivation. I can see the equivalence is trivial if you assume you can freely change the order of the wavefunctions. This is true in the case where the wavefunctions are just scalars, but what if you are dealing with multicomponent wavefunctions, for example relativistic wavefunctions, which are 4x1 matrices?
A simple example of my problem is the following, suppose we want to write down a general 2 body wavefunction in terms of single particle wavefunctions,
\begin{align}
\Psi(x_1,x_2) \propto \varphi_{a}(x_1)\varphi_b(x_2)
\end{align}
Where $x_i$ denotes the spatial, spin, ... coordinates, and $a,b$ denote single particle eigenstates. If we permute the coordinates in order to derive an antisymmetric wavefunction we get,
\begin{align}
\frac{1}{\sqrt{2}} \left(\varphi_{a}(x_1)\varphi_b(x_2) - \varphi_{a}(x_2)\varphi_b(x_1) \right)
\end{align}
while a permutation in states gives,
\begin{align}
\frac{1}{\sqrt{2}} \left(\varphi_{a}(x_1)\varphi_b(x_2) - \varphi_{b}(x_1)\varphi_a(x_2) \right)
\end{align}
Obviously the above expressions are identical for scalar wavefunctions.
When I try to check the normalisation of the two particle wavefunction (assuming the single particle states are orthonormal and general $N \times 1$ matrices) I get for the label permutation,
\begin{align}
\int | \Psi(x_1,x_2) |^{2} &= \frac{1}{2} \int \varphi_{b}^{\dagger}(x_2) \varphi_{a}^{\dagger}(x_1) \varphi_{a}(x_1)\varphi_b(x_2) - \frac{1}{2} \int \varphi_{b}^{\dagger}(x_2) \varphi_{a}^{\dagger}(x_1)\varphi_{b}(x_1)\varphi_a(x_2) \\
&- \frac{1}{2} \int \varphi_{a}^{\dagger}(x_2) \varphi_{b}^{\dagger}(x_1) \varphi_{a}(x_1)\varphi_b(x_2) + \frac{1}{2} \int \varphi_{a}^{\dagger}(x_2) \varphi_{b}^{\dagger}(x_1)\varphi_{b}(x_1)\varphi_a(x_2) \\
&= \frac{1}{2} - 0 - 0 + \frac{1}{2} = 1
\end{align}
Which is the expected result (note that the integral is assumed to integrate over all continuous degrees of freedom, or sum over the discrete ones).
For the coordinate permutation, however, I get,
\begin{align}
\int | \Psi(x_1,x_2) |^{2} &= \frac{1}{2} \int \varphi_{b}^{\dagger}(x_2) \varphi_{a}^{\dagger}(x_1) \varphi_{a}(x_1)\varphi_b(x_2) - \frac{1}{2}\int \varphi_{b}^{\dagger}(x_2) \varphi_{a}^{\dagger}(x_1)\varphi_{a}(x_2)\varphi_b(x_1) \\
&- \frac{1}{2}\int \varphi_{b}^{\dagger}(x_1) \varphi_{a}^{\dagger}(x_2) \varphi_{a}(x_1)\varphi_b(x_2) + \frac{1}{2}\int \varphi_{b}^{\dagger}(x_1) \varphi_{a}^{\dagger}(x_2)\varphi_{a}(x_2)\varphi_b(x_1) \\
\end{align}
Where it is not clear to me why the cross terms,
\begin{align}
\propto \int \textrm{d}x_1 \textrm{d}x_2 \left[ \varphi_{b}^{\dagger}(x_1) \varphi_b(x_2) \right] \left[ \varphi_{a}^{\dagger}(x_2) \varphi_{a}(x_1) \right]
\end{align}
should vanish. (Even if they did, my main question still stands, namely whether you can always freely change the order of the wavefunctions.) Note that the choice of permutation in coordinates or labels is equivalent to the choice of whether to expand along rows or columns in a Slater determinant.
EDIT: upon request, a more elaborate version. Suppose we want to write down a general wave function for 2 identical fermions $\Psi$. This wavefunction has to be antisymmetric. Suppose the single particle states are given by
$$
\varphi_{\alpha_i}(r_i,s_i), \qquad i \in \{1,2\}
$$
Where $\alpha_i$ denotes the quantum numbers of a state, while $r_i,s_i$ denote the spatial and spin coordinates. A way to antisymmetrize an $N$-body wavefunction in terms of single particle wavefunctions is often introduced via the concept of a Slater determinant. In the case of two particles a Slater determinant looks like
\begin{align}
\Psi_{\alpha_1,\alpha_2}(r_1,s_1,r_2,s_2) = \frac{1}{\sqrt{2}} \left| \begin{array}{c c} \varphi_{\alpha_1}(r_1,s_1) & \varphi_{\alpha_2}(r_1,s_1) \\ \varphi_{\alpha_1}(r_2,s_2) & \varphi_{\alpha_2}(r_2,s_2) \end{array} \right|.
\end{align}
Now we can calculate this determinant by expanding along a row or along a column, which is equivalent to the choice of whether to permute the quantum numbers of the single particle states or the coordinates of the single particle states. More explicitly, expanding the matrix along the first row (or equivalently, permuting the labels of the single particle states) we get,
$$
\frac{1}{\sqrt{2}} \left(\varphi_{\alpha_1}(r_1,s_1)\varphi_{\alpha_2}(r_2,s_2) - \varphi_{\alpha_2}(r_1,s_1)\varphi_{\alpha_1}(r_2,s_2) \right).
$$
While an expansion along the first column gives (equivalent to a coordinate permutation),
$$
\frac{1}{\sqrt{2}} \left(\varphi_{\alpha_1}(r_1,s_1)\varphi_{\alpha_2}(r_2,s_2) - \varphi_{\alpha_1}(r_2,s_2)\varphi_{\alpha_2}(r_1,s_1) \right).
$$
The two above expressions certainly look to be equivalent. However, if we consider the wave functions to be $N$-component vectors I get some strange results. Suppose we want to calculate the normalization of the two particle wave function. We would then write the following
$$
\sum_{s_1} \sum_{s_2} \int \textrm{d}r_1 \textrm{d}r_2 \Psi^{\dagger}(r_1,s_1,r_2,s_2) \Psi(r_1,s_1,r_2,s_2)
$$
Note that I have introduced the $\dagger$ instead of the complex conjugate $*$ as the normalization should give a scalar. If we now replace $\Psi$ with the expression for the "label" permutation case we get,
\begin{align*}
\sum_{s_1} \sum_{s_2} & \int \textrm{d}r_1 \textrm{d}r_2 \Psi^{\dagger}(r_1,s_1,r_2,s_2) \Psi(r_1,s_1,r_2,s_2) \\
&= \frac{1}{2} \sum_{s_1} \sum_{s_2} \int \textrm{d}r_1 \textrm{d}r_2 \left(\varphi_{\alpha_1}(r_1,s_1)\varphi_{\alpha_2}(r_2,s_2) - \varphi_{\alpha_2}(r_1,s_1)\varphi_{\alpha_1}(r_2,s_2) \right)^{\dagger} \\
& \times \left(\varphi_{\alpha_1}(r_1,s_1)\varphi_{\alpha_2}(r_2,s_2) - \varphi_{\alpha_2}(r_1,s_1)\varphi_{\alpha_1}(r_2,s_2) \right) \\
&= \frac{1}{2} \sum_{s_1} \sum_{s_2} \int \textrm{d}r_1 \textrm{d}r_2 \left( \varphi^{\dagger}_{\alpha_2}(r_2,s_2) \varphi^{\dagger}_{\alpha_1}(r_1,s_1) \varphi_{\alpha_1}(r_1,s_1)\varphi_{\alpha_2}(r_2,s_2) \right. \\
& \,\, - \varphi^{\dagger}_{\alpha_2}(r_2,s_2) \varphi^{\dagger}_{\alpha_1}(r_1,s_1) \varphi_{\alpha_2}(r_1,s_1)\varphi_{\alpha_1}(r_2,s_2) \\
& \,\, - \varphi^{\dagger}_{\alpha_1}(r_2,s_2) \varphi^{\dagger}_{\alpha_2}(r_1,s_1) \varphi_{\alpha_1}(r_1,s_1)\varphi_{\alpha_2}(r_2,s_2) \\
& \,\, \left. + \varphi^{\dagger}_{\alpha_1}(r_2,s_2) \varphi^{\dagger}_{\alpha_2}(r_1,s_1) \varphi_{\alpha_2}(r_1,s_1)\varphi_{\alpha_1}(r_2,s_2) \right) \\
&= \frac{1}{2} \sum_{s_1} \int \textrm{d}r_1 \varphi^{\dagger}_{\alpha_1}(r_1,s_1) \varphi_{\alpha_1}(r_1,s_1) \sum_{s_2} \int \textrm{d}r_2 \varphi^{\dagger}_{\alpha_2}(r_2,s_2) \varphi_{\alpha_2}(r_2,s_2) \\
&-\frac{1}{2}\sum_{s_1} \int \textrm{d}r_1 \varphi^{\dagger}_{\alpha_1}(r_1,s_1) \varphi_{\alpha_2}(r_1,s_1) \sum_{s_2} \int \textrm{d}r_2 \varphi^{\dagger}_{\alpha_2}(r_2,s_2) \varphi_{\alpha_1}(r_2,s_2) \\
&-\frac{1}{2}\sum_{s_1} \int \textrm{d}r_1 \varphi^{\dagger}_{\alpha_2}(r_1,s_1) \varphi_{\alpha_1}(r_1,s_1) \sum_{s_2} \int \textrm{d}r_2 \varphi^{\dagger}_{\alpha_1}(r_2,s_2) \varphi_{\alpha_2}(r_2,s_2) \\
&+\frac{1}{2}\sum_{s_1} \int \textrm{d}r_1 \varphi^{\dagger}_{\alpha_2}(r_1,s_1) \varphi_{\alpha_2}(r_1,s_1) \sum_{s_2} \int \textrm{d}r_2 \varphi^{\dagger}_{\alpha_1}(r_2,s_2) \varphi_{\alpha_1}(r_2,s_2)
\end{align*}
Note that we have used the general expression $(AB)^{\dagger} = B^{\dagger} A^{\dagger}$ as well as the fact that the product $\varphi^{\dagger} \varphi$ contracts to a scalar and can be safely dragged through the expressions. Now if one assumes the single particle states are orthonormal we have the following relation,
\begin{align}
\sum_{s_i}\int \textrm{d} r_i \varphi^{\dagger}_{\alpha_m}(r_i,s_i) \varphi_{\alpha_n}(r_i,s_i) = \delta_{m,n}
\end{align}
So we get,
\begin{align}
\sum_{s_1} \sum_{s_2} \int \textrm{d}r_1 \textrm{d}r_2 &\Psi^{\dagger}(r_1,s_1,r_2,s_2) \Psi(r_1,s_1,r_2,s_2) \\
& =\frac{1}{2} \delta_{\alpha_1,\alpha_1} \delta_{\alpha_2,\alpha_2} - \frac{1}{2} \delta_{\alpha_1,\alpha_2} \delta_{\alpha_2,\alpha_1} - \frac{1}{2} \delta_{\alpha_2,\alpha_1} \delta_{\alpha_1,\alpha_2} + \frac{1}{2} \delta_{\alpha_2,\alpha_2} \delta_{\alpha_1,\alpha_1} \\
&=1
\end{align}
Which is to be expected. If we start from the second expression for $\Psi$ however we get the following,
\begin{align*}
\sum_{s_1} \sum_{s_2} & \int \textrm{d}r_1 \textrm{d}r_2 \Psi^{\dagger}(r_1,s_1,r_2,s_2) \Psi(r_1,s_1,r_2,s_2) \\
&= \frac{1}{2} \sum_{s_1} \sum_{s_2} \int \textrm{d}r_1 \textrm{d}r_2 \left(\varphi_{\alpha_1}(r_1,s_1)\varphi_{\alpha_2}(r_2,s_2) - \varphi_{\alpha_1}(r_2,s_2)\varphi_{\alpha_2}(r_1,s_1) \right)^{\dagger} \\
& \times \left(\varphi_{\alpha_1}(r_1,s_1)\varphi_{\alpha_2}(r_2,s_2) - \varphi_{\alpha_1}(r_2,s_2)\varphi_{\alpha_2}(r_1,s_1) \right) \\
&= \frac{1}{2} \sum_{s_1} \sum_{s_2} \int \textrm{d}r_1 \textrm{d}r_2 \left( \varphi^{\dagger}_{\alpha_2}(r_2,s_2) \varphi^{\dagger}_{\alpha_1}(r_1,s_1) \varphi_{\alpha_1}(r_1,s_1)\varphi_{\alpha_2}(r_2,s_2) \right. \\
& \,\, - \varphi^{\dagger}_{\alpha_2}(r_2,s_2) \varphi^{\dagger}_{\alpha_1}(r_1,s_1) \varphi_{\alpha_1}(r_2,s_2)\varphi_{\alpha_2}(r_1,s_1) \\
& \,\, - \varphi^{\dagger}_{\alpha_2}(r_1,s_1) \varphi^{\dagger}_{\alpha_1}(r_2,s_2) \varphi_{\alpha_1}(r_1,s_1)\varphi_{\alpha_2}(r_2,s_2) \\
& \,\, \left. + \varphi^{\dagger}_{\alpha_2}(r_1,s_1) \varphi^{\dagger}_{\alpha_1}(r_2,s_2) \varphi_{\alpha_1}(r_2,s_2)\varphi_{\alpha_2}(r_1,s_1) \right) \\
\end{align*}
It is clear that the first and the fourth term will give $1/2 + 1/2 = 1$ as they are exactly the same as in the previous expression. However, I do not see why the cross terms should necessarily be zero. So it would seem that starting from the different antisymmetrization expressions one gets different results. Hence my phrase "order matters", as the following appears to be true
$$
\frac{1}{\sqrt{2}} \left(\varphi_{\alpha_1}(r_1,s_1)\varphi_{\alpha_2}(r_2,s_2) - \varphi_{\alpha_2}(r_1,s_1)\varphi_{\alpha_1}(r_2,s_2) \right)
\neq
\frac{1}{\sqrt{2}} \left(\varphi_{\alpha_1}(r_1,s_1)\varphi_{\alpha_2}(r_2,s_2) - \varphi_{\alpha_1}(r_2,s_2)\varphi_{\alpha_2}(r_1,s_1) \right)
$$
A final note about the Dirac notation. If one starts from the Dirac formalism and introduces $ | \Psi \rangle$ as
$$
| \Psi \rangle = \frac{1}{\sqrt{2}} \left( | \alpha_1 \rangle | \alpha_2 \rangle - | \alpha_2 \rangle | \alpha_1 \rangle \right)
$$
A projection into coordinate space gives the "label" permutation expression,
$$
\langle r_1,s_1 ; r_2,s_2 | \Psi \rangle = \frac{1}{\sqrt{2}} \left( \langle r_1,s_1| \alpha_1 \rangle \langle r_2,s_2 | \alpha_2\rangle - \langle r_1,s_1| \alpha_2 \rangle \langle r_2,s_2 | \alpha_1\rangle \right) \\
= \frac{1}{\sqrt{2}} \left(\varphi_{\alpha_1}(r_1,s_1)\varphi_{\alpha_2}(r_2,s_2) - \varphi_{\alpha_2}(r_1,s_1)\varphi_{\alpha_1}(r_2,s_2) \right)
$$
The expression for the normalization starting from the Dirac formalism gives,
$$
\langle \Psi| \Psi \rangle = \frac{1}{2} \left( \langle \alpha_1 | \alpha_1 \rangle \langle \alpha_2 | \alpha_2 \rangle - \langle \alpha_1 | \alpha_2 \rangle \langle \alpha_2 | \alpha_1 \rangle - \langle \alpha_2 | \alpha_1 \rangle \langle \alpha_1 | \alpha_2 \rangle + \langle \alpha_2 | \alpha_2 \rangle \langle \alpha_1 | \alpha_1 \rangle \right)
$$
Using the unit identities: $1 = \sum_s | s \rangle \langle s |$ and, $1 = \int \textrm{d} r | r \rangle \langle r |$ we get,
$$
\langle \Psi| \Psi \rangle = \frac{1}{2} \sum_{s_1} \sum_{s_2} \int \textrm{d} r_1 \int \textrm{d} r_2 \\
\left( \langle \alpha_1 | r_1,s_1 \rangle \langle r_1,s_1 | \alpha_1 \rangle \langle \alpha_2 | r_2,s_2 \rangle \langle r_2,s_2| \alpha_2 \rangle - \langle \alpha_1 | r_1,s_1 \rangle \langle r_1,s_1 | \alpha_2 \rangle \langle \alpha_2 | r_2,s_2 \rangle \langle r_2,s_2| \alpha_1 \rangle - \langle \alpha_2 | r_1,s_1 \rangle \langle r_1,s_1| \alpha_1 \rangle \langle \alpha_1 | r_2,s_2 \rangle \langle r_2,s_2| \alpha_2 \rangle + \langle \alpha_2 | r_1,s_1 \rangle \langle r_1,s_1| \alpha_2 \rangle \langle \alpha_1 | r_2,s_2 \rangle \langle r_2,s_2| \alpha_1 \rangle \right)
$$
Which, using $ \langle r_i, s_i | \alpha_i \rangle = \varphi_{\alpha_i}(r_i,s_i)$, is exactly the same as the normalization expression I get for the label permutation case, and works out nicely to $1$ as expected.
A:
I suspect that the root of this (quite standard) confusion is that one thinks that wavefunctions are the basic notion of quantum mechanics, while in fact you have to think about Hilbert spaces, vectors, scalar and tensor products.
If, say, you use Dirac notation:
$$\mid\!\Psi\rangle = \frac{1}{\sqrt2}\left(\mid\!a\rangle\otimes\mid\!b\rangle\,-\mid\!b\rangle\otimes\mid\!a\rangle\right)$$
Then there is no "label/argument ambiguity" in the first place.
So your question doesn't make much sense if you think about the basics of QM.
Finally, talking about normalization (I hope that you'll vaguely recognize yours in it):
$$\langle\Psi\mid\!\Psi\rangle=\frac12\left(\langle a\mid\!a\rangle\langle b\mid\!b\rangle-\langle a\mid\!b\rangle\langle b\mid\!a\rangle-\langle b\mid\!a\rangle\langle a\mid\!b\rangle+\langle b\mid\!b\rangle\langle a\mid\!a\rangle\right)$$
This depends on the mutual properties of $\mid\!a\rangle$ and $\mid\!b\rangle$. Usually they form an orthonormal set (like in your case, since you called them "eigenstates"):
$$\langle i\mid\!j\rangle = \delta_{ij}$$
Which leads to a unit normalization.
A:
You had the following cross term for the case of coordinate permutation:
\begin{align}
\propto \int \textrm{d}x_1 \textrm{d}x_2 \left[ \varphi_{b}^{\dagger}(x_1) \varphi_b(x_2) \right] \left[ \varphi_{a}^{\dagger}(x_2) \varphi_{a}(x_1) \right].
\end{align}
However, this is not how you take the inner product of $\varphi_{a}(x_{1})\varphi_{b}(x_{2})$ and $\varphi_{a}(x_{2})\varphi_{b}(x_{1})$.
$\varphi_{a}(x_{1})$ and $\varphi_{b}(x_{1})$ belong to the Hilbert space of particle 1, which I denote $\mathcal{H}_{1}$, and similarly, $\varphi_{a}(x_{2}), \varphi_{b}(x_{2}) \in \mathcal{H}_{2}$. The objects $\varphi_{a}(x_1) \varphi_{b}(x_2)$ and $\varphi_{a}(x_2) \varphi_{b}(x_1)$ are members of $\mathcal{H}_{1}\otimes\mathcal{H}_{2}$, the tensor product of $\mathcal{H}_{1}$ and $\mathcal{H}_{2}$. Furthermore, as $\varphi_{a}(x_{1})$ and $\varphi_{b}(x_{2})$ belong to different Hilbert spaces, $\varphi_{a}(x_1) \varphi_{b} (x_2)$ and $\varphi_{b} (x_2)\varphi_{a}(x_1)$ really mean the same thing: there is no ordering issue even if $\varphi$ is an $N$-component vector.
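To make the contraction rule concrete, here is a small worked identity (not from the original answer): for column vectors $u, w \in \mathcal{H}_{1}$ and $v, z \in \mathcal{H}_{2}$,
\begin{align}
(u \otimes v)^{\dagger} (w \otimes z) = (u^{\dagger} \otimes v^{\dagger})(w \otimes z) = (u^{\dagger} w)\,(v^{\dagger} z),
\end{align}
so each factor contracts inside its own Hilbert space and the result is a product of two scalars, independent of the order in which the factors were written down.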
When taking the inner product of the two objects $\varphi_{a}(x_{1})\varphi_{b}(x_{2})$ and $\varphi_{a}(x_{2})\varphi_{b}(x_{1})$ in $\mathcal{H}_{1}\otimes\mathcal{H}_{2}$, you should contract the parts in $\mathcal{H}_{1}$ together, and the parts in $\mathcal{H}_{2}$ together. Then,
$$
\int dx_{1} dx_{2} \big[\varphi_{a}(x_{1})\varphi_{b}(x_{2})\big]^{\dagger} \varphi_{a}(x_{2})\varphi_{b}(x_{1}) = \left[\int dx_{1} \varphi_{a}(x_{1})^{\dagger} \varphi_{b}(x_{1})\right]\left[\int dx_{2} \varphi_{b}(x_{2})^{\dagger} \varphi_{a}(x_{2})\right] = 0.
$$
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Verifying that $\operatorname{Res}_{E/F} \mathbb G_a$ is a split unipotent group
Let $G$ be a unipotent connected linear algebraic group over a field $F$. Then $G$ is called split if there is a series of closed subgroup schemes $1 = G_0 \subset G_1 \subset \cdots \subset G_t = G$ with each $G_i$ normal in $G_{i+1}$ and $G_{i+1}/G_i \cong_F \mathbb G_a$.
I have read that for $F$ perfect, every such $G$ is split, provided it is connected. And for $F$ of characteristic zero, every such $G$ is moreover automatically connected.
I was trying to verify this in the case of $G = \operatorname{Res}_{E/F} \mathbb G_a$, where $E/F$ is a quadratic extension. Does $G$ really have such a composition series?
I tried considering the diagonal embedding $\Delta$ of $\mathbb G_a$ into $G$ and looking at the quotient $G/\Delta$. It should be the case that $G/\Delta \cong \mathbb G_a$, but I'm not able to verify this. The natural map $G/\Delta \rightarrow \mathbb G_a$ given by $(x,y)\Delta \mapsto x- y $ is not defined over $F$.
A:
It's not that complicated. I believe you have all the needed knowledge, but just have to unpack the definitions.
By fixing a basis of $E/F$, we know that $E$ is isomorphic to $F^2$ as an $F$-vector space, hence for any $F$-algebra $R$ one has $\mathbb{G}_a(E\otimes_F R) = E\otimes_F R \simeq R^2 = \mathbb{G}_a^2(R)$, where the middle isomorphism is functorial in $R$.
Hence by the definition of restriction of scalars, we see that $\operatorname{Res}_{E/F}\mathbb{G}_a \simeq \mathbb{G}_a^2$.
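To address the quotient you were trying to compute, here is an explicit sketch (assuming $\operatorname{char} F \neq 2$, so we may write $E = F(\sqrt{d})$):
$$
(\operatorname{Res}_{E/F}\mathbb{G}_a)(R) = E \otimes_F R \ni x + y\sqrt{d} \longmapsto (x,y) \in R^2 = \mathbb{G}_a^2(R).
$$
In these coordinates the diagonal copy $\Delta(\mathbb{G}_a)$ is the subgroup $\{(x,0)\}$, and the quotient map $(x,y) \mapsto y$ is visibly defined over $F$, so $1 \subset \Delta(\mathbb{G}_a) \subset G$ is a composition series with both quotients isomorphic to $\mathbb{G}_a$ over $F$.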
|
{
"pile_set_name": "StackExchange"
}
|
Q:
dense subspace of $L^2(\Omega\times(0,T))$
I am trying to prove that the functions $f(\omega,t)=g(\omega)h(t)$, where $g\in G,\: h\in H,$ are dense in $L^2(\Omega\times(0,T))$ if $G$ is dense in $L^2(\Omega)$ and $H$ is dense in $L^2((0,T))$. If I fix one of the variables, then I find a sequence of approximating functions in the other variable; thus for all $t$ I have a sequence $g_n(\omega)\to f(\omega,t)$ in $L^2(\Omega)$, but how can I make it work with the other variable as well? I would appreciate it if anyone could help me. Thank you in advance.
A:
Hints: Here, I'll assume that the measure $\mu$ on $(\Omega,\mathcal{A})$ is finite.
Since elementary functions are dense in $L^2(\Omega \times (0,T))$ it suffices to show the claim for any elementary function, i.e. a function of the form $$f(\omega,t) = \sum_{j=1}^n c_j 1_{C_j}(\omega,t)$$ where $C_j \in \mathcal{A} \otimes \mathcal{B}([0,T])$ are elements of the product $\sigma$-algebra on $\Omega \times (0,T)$. Using linearity, we can restrict ourselves to indicator functions: $$f(\omega,t) = 1_{C}(\omega,t)$$ for some $C \in \mathcal{A} \otimes \mathcal{B}([0,T])$.
Show that $$\mathcal{D} := \{C \in \mathcal{A} \otimes \mathcal{B}((0,T)): \text{the claim holds for} \, f=1_C\}$$ defines a Dynkin system which is stable under intersections.
By assumption, $\mathcal{A} \times \mathcal{B}((0,T)) \subseteq \mathcal{D}$. Consequently, $$\mathcal{A} \otimes \mathcal{B}((0,T)) = \sigma(\mathcal{A} \times \mathcal{B}((0,T))) \subseteq \sigma(\mathcal{D}) = \mathcal{D}.$$
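For completeness, a sketch of why the base case $\mathcal{A} \times \mathcal{B}((0,T)) \subseteq \mathcal{D}$ holds: for a rectangle $C = A \times B$ one has $1_C(\omega,t) = 1_A(\omega) 1_B(t)$. Pick $g_n \in G$ with $g_n \to 1_A$ in $L^2(\Omega)$ and $h_n \in H$ with $h_n \to 1_B$ in $L^2((0,T))$; since $\|g h\|_{L^2(\Omega \times (0,T))} = \|g\|_{L^2(\Omega)} \|h\|_{L^2((0,T))}$ for product functions, $$\|1_A 1_B - g_n h_n\| \le \|1_A - g_n\| \, \|1_B\| + \|g_n\| \, \|1_B - h_n\| \to 0.$$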
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How do you spot the left side pedal on egg beater pedals?
I've just got a new set of Crank Brothers Egg Beater pedals and I can't tell which pedal is for the left side and which is for the right just from looking at them. Is there an obvious way to tell without simply trying to screw them in?
Update: There isn't an obvious L and R that I (or another pair of eyes) could see. The only text I could find was really fine print giving the torque for installing the pedals. I couldn't see anything in the instructions that appeared to say which one was which either.
A:
This is from the Crank Brothers pdf available on their site or in the package:
Eggbeater pedals have either a 6mm hex, an 8mm hex, and/or 15mm wrench flats. Note that the right pedal has a standard right-handed thread and the left pedal has a left-handed thread. For identification, the left pedal has a small "L" on the spindle or a small groove around the spindle flange. The right pedal has a small "R" stamped in the spindle or no special markings.
In my case there was no "L" on the spindle, but there is a small groove around the left spindle flange.
The link: http://www.crankbrothers.com/support/product_documentation/instructions_eggbeater.pdf
A:
If they aren't marked with a big L and R, you could just inspect the threads. Whichever one has reverse threads is the left pedal. You can tell reverse threads because they are oriented toward the north-west rather than the north-east.
A:
Looking from the top down standing over the bike and holding the pedals in the way they should go into the cranks, the threads of the left pedal angle like this (towards the left side crank): /////
The threads of the right pedal angle like this (towards the right side crank): \\\\\
Both pedals thread in to the front and out to the rear.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
cycle thru each parent category and show 3 entries from each category
I am looking to cycle through all the parent categories of a category group, show three random entries (from the products channel) for each category, then move on to the next category in the group.
I'd share some code, but I don't even know where to start.
My final code that works:
{exp:channel:categories
style="linear"
category_group="2"
parent_only="yes"
}
<h1>{category_name}</h1>
{exp:channel:entries
channel="products"
category="{category_id}"
limit="3"
orderby="title"
sort="random"
status="open|featured"
cache="yes"
refresh="5"
disable="category_fields|member_data|pagination"
}
<p>{title}</p>
{/exp:channel:entries}
{/exp:channel:categories}
A:
Alec's method works, but it might cause very high query counts due to the way EE handles nested Channel Entries tags. Take a look at the alternative method outlined in this post from EE add-on superhero (and parse order master) Low. It achieves the same thing, but is much more performance-friendly:
http://gotolow.com/blog/nesting-tags-and-performance-in-ee
And if you're like me and not excited about futzing with PHP, have no fear. This is actually a pretty simple and flexible technique to implement even if it might not appear so on the surface.
A:
I needed this for a recipe website once and the code below seemed to do the trick:
{exp:channel:categories channel="products" style="linear" show_empty="no"}
<h2>{category_name}</h2>
{exp:channel:entries channel="products" dynamic="no" limit="3" orderby="random" category="{category_id}" }
{title}
{/exp:channel:entries}
{/exp:channel:categories}
The channel param on the categories tag is required unless you only have a single channel. Multiple channels may also be specified.
see: http://ellislab.com/expressionengine/user-guide/modules/channel/categories.html#channel
Hope this helps!
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Chrome not responding to css animation/transform, but Firefox works fine
I'm trying to spin an SVG element. It works fine in Firefox 25, but not in Chrome 31 (both the latest release versions at the time of writing). Here is the code: http://codepen.io/zshift/pen/Fvibj, also shown below for easy reading:
<!DOCTYPE html>
<html>
<head>
<style>
@-webkit-keyframes spinners {
from {
transform: rotate(0deg);
}
to {
transform: rotate(360deg);
}
}
@keyframes spinners {
from {
transform: rotate(0deg);
}
to {
transform: rotate(360deg);
}
}
.spinner {
-webkit-animation: spinners 1s infinite linear;
-moz-animation: spinners 0.75s infinite linear;
-o-animation: spinners 1s infinite linear;
/* animation: spinners 1s infinite;*/
}
</style>
</head>
<body>
<div>
<svg class="spinner" xmlns="http://www.w3.org/2000/svg" version="1.1">
<rect width="300" height="100" style="fill:rgb(0,0,255);stroke-width:1;stroke:rgb(0,0,0)" />
</svg>
</div>
</body>
I don't see any errors or warnings in Chrome, and I'm following the guides I've read at http://www.w3schools.com/css/css3_animations.asp, but no luck. Any idea what I might be doing wrong?
A:
You're missing the -webkit- prefix for transform:
-webkit-transform: rotate(0deg);
@-webkit-keyframes spinners {
from {
-webkit-transform: rotate(0deg);
}
to {
-webkit-transform: rotate(360deg);
}
}
The demo http://codepen.io/anon/pen/fJCki
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Is my child's cooperativity a trait or a choice?
My wife and I are about to have a child. We wanted to know, through experience or studies, whether we can influence how cooperative our child will be. She believes that it is something he can learn through playing with others, while I believe it is something that is passed on, based on my stubbornness. Anything can help, so the overall questions are (if they are within scope):
Is this a nature vs. nurture argument (nature loads the gun, nurture pulls the trigger)?
Is there a way to change my child's cooperativeness?
Different way of saying it: Is it a trait or choice?
Thank you! :)
A:
There have been incredible recent advances in the research on how genes and experience influence development. One of the most important things to take away from the newer research is that it's not a question of two independent forces (nature and nurture) exerting influence in different proportions --- your genetic make up affects the way experience affects you, and your experience has the potential to change the genetic component itself. Yes, nurture affects nature.
We learn about the genetics of eye color or blood type in our high school biology classes, and then we're left to assume that there are similar processes in place for the rest of our genetic inheritance --- unfortunately, it really doesn't work that way. Once you get past the most basic traits, the genetic part is actually very difficult to understand, in part because its expression often depends on environmental circumstance. Even something like height, which seems like it should be pretty straightforward, is surprisingly complex (if you read the linked article, don't stop after the first paragraph; it opens with a simple assertion, but the rest of the article explains all of the main exceptions and caveats to that pattern). In fact, even eye color is more complex genetically than you were probably taught. If you start talking about cognitive and behavioral traits, like intelligence, linguistic ability, or cooperation, there is literally no trait that has a clear, simple genetic cause. For example, IQ has been rigorously studied for decades, including a tremendous amount of research on its genetic component. We know from this research that genetics definitely plays a part in IQ, but no one has been able to identify the individual genes that control it. (Some relevant reading, if you want to learn more: article on genetics of race and IQ; summary of evidence on genes and IQ).
All of this is to say you can't really "pass on" cooperativeness to your offspring the way you might pass on your blood type. To address your question directly,
She believes that it is something he can learn through playing with others, while I believe it something that is passed on based on my stubbornness.
Your child will have roughly 50% of his genetic material from you. Any complex trait like cooperativeness will be influenced by a huge number of individual genes, each contributing only a tiny bit to the behavior. Your child will likely have some but not all of your constellation of genes that contributes to your behavior as cooperative or not --- since genes work together in complex ways, you have no idea whether the pieces of cooperativeness genetics your child will inherit will work together the same way yours do. This is all before we even consider the fact that cooperativeness might be influenced by experience/environment. As it turns out, environment is probably more important than genetics in the development of cooperative behavior: I was able to find one study on this, and they found that only 38% of cooperativeness appears to have a genetic link. So put together the fact that most of cooperativeness is predicted by environment rather than genes and that the genetic component is likely very complex, and you can see how tenuous your claim that you can "pass on" cooperativeness is.
To sum up...
Complex behaviors (and, actually, even many simple traits) can't adequately be described as "inherited or learned", or even as "inheritance plus experience" --- your genetic inheritance is complex, flexible, and it can change over time depending on your environment.
Any cognitive or behavioral trait has complex genetic determinants. There's no one gene, or small set of genes, that will determine whether you do or don't show some behavior (with the exception of inherited forms of psychopathology, which is outside the scope of this question).
The little we know about the inheritability of cooperativeness suggests that genes play a relatively small role (compared to something more heritable like IQ, roughly 70% of which may be influenced by genes).
Although you're right that your genes might make a difference in some subtle ways to your child's cooperativeness, on the whole your wife wins this argument. :)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Varnish. how to clean all cache over curl
curl command:
curl -X PURGE <URL>
so it is possible to clear only one page at a time.
How do I clear the entire cache using curl?
A:
(Varnish 3) I was able to clear the entire cache as follows:
if (req.request == "BAN") {
if (!client.ip ~ purge) {
# Not from an allowed IP? Then die with an error.
error 405 "This IP is not allowed to send PURGE requests.";
}
ban("req.http.host == " +req.http.host+" && req.url ~ "+req.url);
error 200 "Ban added";
}
curl -X BAN http://domain.com/.
(Requesting the path /. makes the ban expression req.url ~ "/." a regular expression that matches essentially every URL, so the whole cache is invalidated.)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Spring YAML profile configuration
I'm not sure I fully understand how Spring profiles work with YAML and properties files.
I was trying to mix those two types of configuration (the two files do not share any settings), but I'm having problems reading profiles from the YAML configuration.
I'm using Spring 4.1.1
Here is the code.
This is the context:property-placeholder configuration:
<context:property-placeholder location="classpath:/job-config.properties" order="1"
ignore-unresolvable="true" ignore-resource-not-found="false"/>
<context:property-placeholder properties-ref="yamlProperties" order="2"
ignore-resource-not-found="false" ignore-unresolvable="true"/>
where yamlProperties is the following bean
<bean id="yamlProperties"
class="org.springframework.beans.factory.config.YamlPropertiesFactoryBean">
<property name="resources"
value="file:${catalina.home}/properties/test.yml"/>
</bean>
Here is the test.yml
spring:
profiles.default: default
---
spring:
profiles: default
db:
url: jdbc:oracle:thin:@##hostname##:##port##:##SID##
usr: ##USER##
pwd: ##PWD##
---
spring:
profiles: development
db:
url: jdbc:oracle:thin:@##hostname##:##port##:##SID_DEVELOPMENT##
usr: ##USER_DEVELOPMENT##
pwd: ##PWD_DEVELOPMENT##
My problem is that when I try to configure (via xml) my datasources by doing this:
<property name="url" value="${db.url}"/>
<property name="username" value="${db.usr}"/>
<property name="password" value="${db.pwd}"/>
Spring always uses the last configuration in the YAML file, ignoring the profile. I tried passing the active profile through a context parameter in web.xml and directly to the JVM (I implemented a bean that implements the EnvironmentAware interface to get the active/default profiles, and it is correct) and it all seems good, but when values are injected the profile is ignored.
I believe that by using property-placeholder contexts (with orders) I get one property placeholder that is an instance of PropertySourcesPlaceholderConfigurer, and thus has access to the Environment, but I cannot understand why the profile is ignored and Spring picks up the last configuration in the YAML file.
I add a reference to the documentation (Spring Boot), section 63.6:
http://docs.spring.io/spring-boot/docs/current/reference/html/howto-properties-and-configuration.html
Thanks in advance
A:
Not sure if this helps at this point, but here is what I did.
I used the SpringProfileDocumentMatcher class [by Dave Syer, from Spring Boot] as my base matcher, implemented EnvironmentAware to get the active profiles, and passed this bean to the YamlPropertiesFactoryBean bean.
Here is the code:
applicationContext.xml
<bean id="yamlProperties" class="org.springframework.beans.factory.config.YamlPropertiesFactoryBean">
<property name="resources" value="classpath:application.yml" />
<property name="documentMatchers">
<bean class="com.vivastream.quant.spring.SpringProfileDocumentMatcher" />
</property>
</bean>
SpringProfileDocumentMatcher.java
import java.util.Arrays;
import java.util.Collections;
import java.util.LinkedHashSet;
import java.util.Properties;
import org.springframework.beans.factory.config.YamlProcessor.DocumentMatcher;
import org.springframework.beans.factory.config.YamlProcessor.MatchStatus;
import org.springframework.boot.yaml.ArrayDocumentMatcher; // assumed package; this class moved between Spring Boot 1.x versions
import org.springframework.context.EnvironmentAware;
import org.springframework.core.env.Environment;
public class SpringProfileDocumentMatcher implements DocumentMatcher, EnvironmentAware {
private static final String[] DEFAULT_PROFILES = new String[] {
"^\\s*$"
};
private String[] activeProfiles = new String[0];
public SpringProfileDocumentMatcher() {
}
@Override
public void setEnvironment(Environment environment) {
if (environment != null) {
addActiveProfiles(environment.getActiveProfiles());
}
}
public void addActiveProfiles(String... profiles) {
LinkedHashSet<String> set = new LinkedHashSet<String>(Arrays.asList(this.activeProfiles));
Collections.addAll(set, profiles);
this.activeProfiles = set.toArray(new String[set.size()]);
}
@Override
public MatchStatus matches(Properties properties) {
String[] profiles = this.activeProfiles;
if (profiles.length == 0) {
profiles = DEFAULT_PROFILES;
}
return new ArrayDocumentMatcher("spring.profiles", profiles).matches(properties);
}
}
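With this matcher in place, the active profile can then be selected in the usual Spring way, for example as a JVM argument:
-Dspring.profiles.active=development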
|
{
"pile_set_name": "StackExchange"
}
|
Q:
OpenLDAP 2.4 Chain Overlay Minimal LDIF Configuration
There's almost no information about how chain overlays are configured in the OpenLDAP LDIF backend. What's the minimal configuration required?
A:
The only way to work this out is by converting an old-style configuration file into LDIF style. This shows quite a complex structure which isn't well documented.
The structure creates LDAP database entries in the frontend to intercept referrer responses.
To complicate matters, schema validation conflicts with OpenLDAP's own configuration requirements (olcDbURI cannot be used in the first entry). To work around this, the modification must be made offline/directly, but remember that editing the LDIF directly with a text editor is strongly discouraged - see Working with OpenLDAP 2.4 LDIF config backend.
If you're on Ubuntu/Debian, ensure you load the back_ldap module - OpenLDAP Chain not found
Create "chainoverlay.ldif":
dn: olcOverlay=chain,olcDatabase={-1}frontend,cn=config
objectClass: olcOverlayConfig
objectClass: olcChainConfig
olcOverlay: chain
olcChainCacheURI: FALSE
olcChainMaxReferralDepth: 1
olcChainReturnError: TRUE
As root, import indirectly:
# ldapadd -Y EXTERNAL -H ldapi:/// -f chainoverlay.ldif
Create "defaultldap.ldif":
dn: olcDatabase=ldap,olcOverlay={0}chain,olcDatabase={-1}frontend,cn=config
objectClass: olcLDAPConfig
objectClass: olcChainDatabase
olcDatabase: ldap
Import defaultldap.ldif offline (This is to work around schema validation):
# service slapd stop
# slapadd -b cn=config -l defaultldap.ldif
Fix a stray entry and permissions:
# rm "/etc/ldap/slapd.d/cn=config/olcDatabase={-1}over.ldif"
# chown -R openldap:openldap "/etc/ldap/slapd.d/cn=config"
Start slapd:
# service slapd start
Create a chain intercept configuration - chainedserver.ldif:
dn: olcDatabase=ldap,olcOverlay={0}chain,olcDatabase={-1}frontend,cn=config
objectClass: olcLDAPConfig
objectClass: olcDatabaseConfig
objectClass: olcConfig
objectClass: top
objectClass: olcChainDatabase
olcDatabase: ldap
olcDbURI: ldap://areferredserver.com
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How can I convert a Unicode path to a C string?
How can I convert from a Unicode path name (LPWSTR) to the ASCII equivalent? The library that gets called understands only C strings.
Edit:
Okay, I took the GetShortPathName and WideCharToMultiByte suggestions and created this piece of code; I tested it with some folders containing Unicode characters in the path and it worked flawlessly:
wlength = GetShortPathNameW(cpy,0,0);
LPWSTR shortp = (LPWSTR)calloc(wlength,sizeof(WCHAR));
GetShortPathNameW(cpy,shortp,wlength);
clength = WideCharToMultiByte(CP_OEMCP, WC_NO_BEST_FIT_CHARS, shortp, wlength, 0, 0, 0, 0);
LPSTR cpath = (LPSTR)calloc(clength,sizeof(CHAR));
WideCharToMultiByte(CP_OEMCP, WC_NO_BEST_FIT_CHARS, shortp, wlength, cpath, clength, 0, 0);
A:
GetShortPathName() Function
http://msdn.microsoft.com/en-us/library/aa364989%28VS.85%29.aspx
Will give you an equivalent 8.3 filename, pointing to the same file, for use with legacy code.
[EDIT] This is probably the best you can do, although theoretically the 8.3 filenames may contain non-ASCII characters, depending on a registry setting. In this case, you don't have an easy way of getting the proper char*, and GetShortPathNameA() will not do it either if the codepage setting during file creation does not match the current setting.
See http://technet.microsoft.com/en-us/library/cc781607%28WS.10%29.aspx about the setting. There's a consensus here (see below) that this case is reasonable to neglect.
Thanks Moron, All, for contribution to this post.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Get a property from the array element that has a certain id in AngularJS
With the following code, I am saying that the second element of the array contained in the object $scope.listademercadoria has the property "quantidade", and I am setting it to 0.
$scope.listademercadoria[1].quantidade = 0;
Is there a way to select only the element that has a certain id as a property?
E.g.: $scope.listademercadoria[id = 2].quantidade = 0;
If this worked, it would pick only the elements that have id = 2 and set their quantity to 0.
Is there any way to do this without using $filter? If not, what would it look like with $filter?
A:
Without $filter:
var filtrado = $scope.listademercadoria.filter(function(item) {
return item.id === 2;
})[0];
With $filter:
var filtrado = $filter('filter')($scope.listademercadoria, {id: 2})[0];
Then you use the attribute you want:
filtrado.quantidade = 0;
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Removing unwanted fields when extending Django UserCreationForm
I just started getting into Django and have become stuck when extending the classic UserRegistrationForm. I have followed the tutorial here, which is great, but the HTML form in the browser shows me fields I do not want. I only want to add email for now, but want to add first name and surname later.
Please note I have no CSS yet; I just want to see the basic information in the browser for now. Can anybody explain why I am seeing all the other fields in addition to username, password1, password2 and email?
forms.py
from django import forms
from django.contrib.auth.models import User
from django.contrib.auth.forms import UserCreationForm
class MyRegistrationForm(UserCreationForm):
email = forms.EmailField(required=True)
class Meta:
model = User
field = ('username','email','first_name','last_name','password1', 'password2')
def save(self, commit=True):
user = super(MyRegistrationForm, self).save(commit=False)
user.email = self.cleaned_data['email'] #validated before committing to database
if commit:
user.save()
return user
views.py
from django.shortcuts import render_to_response #allows you to render a template back to the browser
from django.http import HttpResponseRedirect #allows the browser to redirect to another url
from django.contrib import auth
from django.core.context_processors import csrf # method to stop hackers submitting requests
from fantasymatchday_1.forms import MyRegistrationForm #A user registration form I created that inherits the UserCreationForm
def register_user(request):
if request.method == 'POST':
form = MyRegistrationForm(request.POST) #create a form object
if form.is_valid(): #if the form is valid, save the form
form.save()
return HttpResponseRedirect('/register_success')
args = {}
args.update(csrf(request))
args['form'] = MyRegistrationForm()
#print args
return render_to_response('register.html', args)
def register_success(request):
return render_to_response('register_success.html')
register.html
<h2> Register </h2>
<form action="/register/" method="post">{% csrf_token %}
{{form}}
<input type="submit" value="Register" />
</form>
Why are all the others turning up? Any help on this would be greatly appreciated :)
A:
You have a typo in your MyRegistrationForm's Meta definition:
field = ('username','email','first_name','last_name','password1', 'password2')
It should be fields rather than field.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Crystal sqlite3 create new database
How do a create a new sqlite database using Crystal?
When trying:
DB.open "sqlite3://.data.db" do |db|
end
I get no errors.
But when I attempt to do it in the home directory:
DB.open "sqlite3://~/.data.db" do |db|
end
I get a DB::ConnectionRefused exception.
How do I get to open a database in the home directory ?
A:
When you do
DB.open "sqlite3://.data.db" do |db|
end
you create a file named .data.db in the current directory. Try running the usage example and you will get some output.
To create/open a db in the home directory, you simply need to expand the path (~ does not work in the sqlite URI):
db_path = File.expand_path "~/data.db"
DB.open "sqlite3://#{db_path}" do |db|
end
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Using a Pandas DataFrame as Lookup
I have 2 pandas DataFrames, this one:
item inStock description
Apples 10 a juicy treat
Oranges 34 mediocre at best
Bananas 21 can be used as phone prop
<...many other fruits...>
Kiwi 0 too fuzzy
and a lookup table with only a subset of the items above:
item Price
Apples 1.99
Oranges 6.99
I would like to scan through the first table and fill in a price column for the DataFrame when the fruit in the first DataFrame matches the fruit in the second:
item inStock description Price
Apples 10 a juicy treat 1.99
Oranges 34 mediocre at best 6.99
Bananas 21 can be used as phone prop
<...many other fruits...>
Kiwi 0 too fuzzy
I've looked at examples using the built-in lookup function, as well as using a where-in type function, but I cannot seem to get the syntax to work. Can someone help me out?
A:
import pandas as pd
df_item= pd.read_csv('Item.txt')
df_price= pd.read_csv('Price.txt')
df_final=pd.merge(df_item,df_price ,on='item',how='left' )
print df_final
output
item inStock description Price
0 Apples 10 a juicy treat 1.99
1 Oranges 34 mediocre at best 6.99
2 Bananas 21 can be used as phone prop NaN
3 Kiwi 0 too fuzzy NaN
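As a side note, when the lookup table's items are unique, the same result can be obtained without a merge by mapping against an indexed Series (a sketch, not part of the original answer):
df_item['Price'] = df_item['item'].map(df_price.set_index('item')['Price'])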
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Webcam doesn't work in browser...?
I'm running the latest version of Ubuntu, and I have a Dell Sp2009wfd monitor with a built-in microphone and webcam. I've gotten the microphone working in Mumble and in Sound Recorder, and I've gotten the webcam to work in Cheese... but it WON'T work through Flash-based website webcam pages. I have the Flash plugin, and I have drivers installed for my video card. Could anyone help me figure this out? It's really frustrating.
Thank you...!
A:
If your problem is something like the image below, follow the steps given.
This problem usually occurs in Firefox where you can't click on Allow.
Open the website which requires the webcam.
Go to this page: http://www.macromedia.com/support/documentation/en/flashplayer/help/settings_manager06.html;
There will be a list of websites which you've visited and which required the webcam.
Select the website to which you want to allow webcam access and select Always Allow.
Refresh the website again and you're done!
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Print data about people whose first names repeat
I ran into difficulty solving the following problem: given an array of strings containing users' first and last names.
I need to print the users with identical first names (their first and last name).
To extract the first name from a string I wrote a dedicated function, since strtok won't work here - it modifies the string itself.
template <typename T>
int find(int length, T *arr, T element)
{
for (int i = 0; i < length; i++)
if (arr[i] == element)
return i;
return -1;
}
char *getWord(char *letter, int delimiterIndex) // get the word up to the delimiter (its index)
{
char *wrds = new char[delimiterIndex + 1]; // +1 for the terminating '\0'
strncpy(wrds, letter, delimiterIndex);
wrds[delimiterIndex] = '\0';
return wrds;
}
int main()
{
const int SIZE = 5;
char users[SIZE][30] =
{
"James Smith",
"George Williams",
"James Brown",
"Ann Johnson",
"James Miller"
};
for (int i = 0; i < SIZE - 1; i++) {
for (int j = i + 1; j < SIZE; j++) {
// get the current and next first names from the strings;
// first and last name are separated by a space, so take the word before the first space
char *curName = getWord(users[i], find(strlen(users[i]), users[i], ' '));
char *nextName = getWord(users[j], find(strlen(users[j]), users[j], ' '));
if (!strcmp(curName, nextName)) // if the names are equal
{ // print them
cout << users[i] << endl;
cout << users[j] << endl;
}
}
}
return 0;
}
To print users with identical first names, I compare the name from the current string and the next one in a loop. If they are equal, I print both strings.
The problem is the following: with an array like mine, the program prints this result:
James Smith
James Brown
James Smith
James Miller
James Brown
James Miller
But it should print this:
James Smith
James Brown
James Miller
I understand why this happens. Please advise how to implement this program correctly.
Thanks!
A:
The best approach is to use a container of the form
std::map<std::string, std::vector<std::string>> res;
Use the first name as the key, and store the full name of every matching person in the vector.
The result should look something like this:
res = {
"James" => ["James Smith", "James Brown", "James Miller"],
"George" => ["George Williams"],
"Ann" => ["Ann Johnson"]
}
Then print only those vectors that contain more than one element.
Here is the code (C++17):
#include <string>
#include <map>
#include <vector>
#include <iostream>
int main()
{
std::vector<std::string> users {
"James Smith",
"George Williams",
"James Brown",
"Ann Johnson",
"James Miller",
"Ann Turing",
};
std::map<std::string, std::vector<std::string>> res;
for (auto & name: users) {
auto first_name = name.substr(0, name.find(' '));
auto it = res.find(first_name);
if (it == res.end()) {
res[first_name] = {name};
} else {
it->second.push_back(name);
}
}
for (auto & [name, vec]: res) {
if (vec.size() <= 1)
continue;
std::cout << "People with name " << name << ":\n";
for (auto & full_name: vec) {
std::cout << "\t" << full_name << "\n";
}
}
return 0;
}
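For the input above this prints (keys come out in std::map order, i.e. sorted by first name; "George" is skipped because his vector has only one element):
People with name Ann:
        Ann Johnson
        Ann Turing
People with name James:
        James Smith
        James Brown
        James Miller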
|
{
"pile_set_name": "StackExchange"
}
|
Q:
how to highlight string in a string in a laravel blade view
Somewhere in my template I have this:
{{ $result->someText }}
Now in this text I want to highlight all words that are in the string
{{ $searchString }}
So I thought I create a new blade directive:
{{ @highlightSearch($result->someText, $searchString) }}
Blade::directive('highlightSearch', function($input, $searchString)...
error: missing argument 2
I found out that directives do not accept two arguments. I tried every workaround that I could find, but none worked. They always receive the arguments as a single plain string, without the actual values being passed.
I tried adding a helper function like explained here: https://stackoverflow.com/a/32430258/928666. Did not work:
error: unknown function "highlightSearch"
So how do I do this super easy task in Laravel? I don't care about the highlighting function itself; that's almost a one-liner.
A:
The reality is that Blade directives can't do what you need them to do (whether or not they should is another topic). However, you can instead do this in your service provider:
use Illuminate\Support\Str;
/* ... */
Str::macro('highlightSearch', function ($input, $searchString) {
return str_replace($searchString, "<mark>$searchString</mark>", $input);
//Or whatever else you do
});
Then in blade you can just do:
{!! \Illuminate\Support\Str::highlightSearch($result->someText, $searchString) !!}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to Add Java VM Arguments in JBoss EAP 6.1.1 Management Console
Does anyone know how to add Java VM arguments for a web application (*.war file) via the JBoss EAP 6.1.1 Management Console?
For example, I want to specify the location of a truststore at C:\truststore\truststore.jks, which is used in my web application for SSL. The java options I would normally run with a Java application would be:
-Djavax.net.ssl.trustStore=c:/truststore/truststore.jks
Is there a specific field that I can enter this into in the console for web apps running in JBoss EAP? Or a setting somewhere in standalone.xml (or some other configuration file, which in theory should get picked up and displayed in the console like all the other settings)?
Thanks in advance.
A:
Under the Profiles tab in the console, there is a "System Properties" section where you can add key/value pairs. Click the "Add" button, enter your VM arguments as a key/value pair, save them, then restart the server for the new properties to take effect.
For example:
key: javax.net.ssl.trustStore
value: C:\mydirectory\keystore\cacerts
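For reference, the same system property can also be defined in standalone.xml (a sketch; in EAP 6 the system-properties element sits right after the extensions section):
<system-properties>
    <property name="javax.net.ssl.trustStore" value="C:/truststore/truststore.jks"/>
</system-properties>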
You can also add JNDI properties and Maven commands in your pom.xml to do the same thing.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
git-merge is not respecting mergeoptions
I'm trying to configure my local git to always use --squash when merging into develop branch. As many recommendations suggest I tried git config branch.develop.mergeoptions "–-squash" but it seems to have no effect:
> git checkout -b test_branch
> touch 1 2
> git add 1
> git commit -m "Added 1"
> git add 2
> git commit -m "Added 2"
> git checkout develop
> git merge test_branch
Updating e258d21..90a41ec
Fast-forward
1 | 0
2 | 0
2 files changed, 0 insertions(+), 0 deletions(-)
create mode 100644 1
create mode 100644 2
My expectation is that the result of git merge test_branch would be exactly the same as that of git merge --squash test_branch. But that is obviously not true: the merge pulls in both commits from the test branch without any attempt to squash them.
What am I missing?
Dump of local config looks like this
$ git config --local --list
core.repositoryformatversion=0
core.filemode=true
core.bare=false
core.logallrefupdates=true
core.ignorecase=true
remote.origin.fetch=+refs/heads/*:refs/remotes/origin/*
[email protected]:repo/repo.git
gitflow.branch.master=master
gitflow.branch.develop=develop
gitflow.prefix.feature=feature/
gitflow.prefix.release=release/
gitflow.prefix.hotfix=hotfix/
gitflow.prefix.support=support/
gitflow.prefix.versiontag=
branch.master.remote=origin
branch.master.merge=refs/heads/master
branch.develop.remote=origin
branch.develop.merge=refs/heads/develop
branch.develop.rebase=true
branch.develop.mergeoptions=--ff-only –-squash
user.name=User Name
[email protected]
log.mailmap=true
push.default=simple
A:
You probably made a typo when passing options to git config - your config dump shows branch.develop.mergeoptions=--ff-only –-squash, where –-squash starts with an en-dash instead of a double hyphen. Try this: git config branch.develop.mergeoptions "--squash". Did it help?
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How can i show the modal in the twitter bootstrap just once?
this is my code now:
<script type="text/javascript">
$(document).ready(function() {
if($.cookie('msg') == 0)
{
$('#myModal').modal('show');
$.cookie('msg', 1);
}
});
</script>
On page load the modal shows, but when I refresh the page it keeps showing, even though it should only show once. The $.cookie plugin is from https://github.com/carhartl/jquery-cookie
update:
this worked (the 'hide' didn't work for some reason):
<script type="text/javascript">
$(document).ready(function() {
if($.cookie('msg') == null)
{
$('#myModal').modal('show');
$.cookie('msg', 'str');
}
else
{
$("div#myModal.modal").css('display','none');
}
});
</script>
A:
@SarmenB's update worked in most browsers (FF, IE9) but not IE8.
I modified his updated solution to get it to work in IE8...
This was @SarmenB's solution:
<script type="text/javascript">
$(document).ready(function() {
if($.cookie('msg') == null)
{
$('#myModal').modal('show');
$.cookie('msg', 'str');
}
else
{
$("div#myModal.modal").css('display','none');
}
});
</script>
This is the modified solution I came up with that works is IE8 as well:
<script type="text/javascript">
$(document).ready(function() {
if($.cookie('msg') != null && $.cookie('msg') != "")
{
$("div#myModal.modal, .modal-backdrop").hide();
}
else
{
$('#myModal').modal('show');
$.cookie('msg', 'str');
}
});
</script>
Basically, to get it to work in IE8 I had to reverse what was in the if/else statements.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Problem to install latest version of shellcheck from tar.xz
I downloaded the latest version of ShellCheck and tried to install it on my system, taking the following steps:
I downloaded shellcheck-latest.linux.x86_64.tar.xz
then I ran as root:
# extracting the tar
tar --directory=/opt -xvf shellcheck-latest.linux.x86_64.tar.xz
# added the directory to path
export PATH=$PATH:/opt/shellcheck/bin
# created a symlink:
cd /usr/bin
ln -s /opt/shellcheck shellcheck
If I try to run the program I always get the following error (translated from German):
bash: /usr/bin/shellcheck: Can not run binary file: wrong format.
In the README it says:
This is a precompiled ShellCheck binary.
so I thought I would run fine with this steps.
Here is some of my system information:
Operating System: Debian GNU/Linux buster/sid
Kernel: Linux 4.11.0-1-686-pae
Architecture: x86
A:
You need to download the 32-bit version of ShellCheck. Your kernel is not an x86_64 64-bit kernel.
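A quick way to confirm the mismatch (the binary path below is assumed from the question's extraction step; adjust it to wherever the binary actually landed):
uname -m                               # prints i686 here, i.e. a 32-bit kernel
file /opt/shellcheck/bin/shellcheck    # will report a 64-bit (x86-64) ELF executable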
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Web3 not defined
I am following this tutorial here:
https://medium.com/@merunasgrincalaitis/the-ultimate-end-to-end-tutorial-to-create-and-deploy-a-fully-descentralized-dapp-in-ethereum-18f0cf6d7e0e
After running webpack I try to view the page, but it comes up blank.
When I check the developer console, I get the following error:
Uncaught ReferenceError: web3 is not defined
at new App (build.js:10665)
at constructClassInstance (build.js:19093)
at updateClassComponent (build.js:20577)
at beginWork (build.js:20963)
at performUnitOfWork (build.js:22962)
at workLoop (build.js:23026)
at HTMLUnknownElement.callCallback (build.js:13280)
at Object.invokeGuardedCallbackDev (build.js:13319)
at invokeGuardedCallback (build.js:13176)
at renderRoot (build.js:23104) build.js:22485 The above error occurred in the component:
in App
Consider adding an error boundary to your tree to customize error
handling behavior. Visit https://fb.me/react-error-boundaries to learn
more about error boundaries. logCapturedError @ build.js:22485
build.js:23732 Uncaught Error: A cross-origin error was thrown. React
doesn't have access to the actual error object in development. See
https://fb.me/react-crossorigin-error for more information.
at Object.invokeGuardedCallbackDev (build.js:13326)
at invokeGuardedCallback (build.js:13176)
at renderRoot (build.js:23104)
at performWorkOnRoot (build.js:23752)
at performWork (build.js:23705)
at requestWork (build.js:23616)
at scheduleWorkImpl (build.js:23470)
at scheduleWork (build.js:23427)
at scheduleTopLevelUpdate (build.js:23931)
at Object.updateContainer (build.js:23969)
This is strange, as my index.js (the file I am building from) has
import Web3 from 'web3'
I have even tried swapping it for
Web3 = require('web3')
but to no avail.
These are the references to web3 in the js file:
import React from 'react'
import ReactDOM from 'react-dom'
import Web3 from 'web3'
import './../css/index.css'
class App extends React.Component {
/*
Constructor and set the initial state of the application
*/
constructor(props){
super(props)
this.state = {
lastWinner: 0,
numberOfBets: 0,
minimumBet: 0,
totalBet: 0,
maxAmountOfBets: 0,
}
/*
Checking to see if Web3 variable we imported is defined or not
*/
if(typeof web3 != 'undefined'){
console.log("Using web3 detected from external source like Metamask")
this.web3 = new Web3(web3.currentProvider)
}else{
this.web3 = new Web3(new Web3.providers.HttpProvider("http://localhost:8545"))
}
What am I doing wrong?
A:
Try it this way:
import React, { Component } from 'react';
import web3 from './web3';
class App extends Component {
state = { lastWinner: 0, numberOfBets: 0, minimumBet: 0, totalBet: 0, maxAmountOfBets: 0 };
async componentDidMount() {
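// note: 'blackjack' is assumed to be a web3 Contract instance defined elsewhere (it is not shown in this answer)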
const lastWinner = await blackjack.methods.lastWinner().call();
const numberOfBets = await blackjack.methods.getNumberOfBets().call();
this.setState({ lastWinner, numberOfBets });
}
I would create a separate file to house all configuration code around the web3 library that would look like this:
import Web3 from 'web3';
const web3 = new Web3(window.web3.currentProvider);
export default web3;
Now, I used the class-properties (ES2016+) way of initializing state, where you no longer need the constructor() function.
You want to set up your own local instance of web3 and rip out the provider from the injected copy that comes from MetaMask. This will allow your instance of web3 to automatically connect to the Rinkeby test network and make use of all the accounts assigned to the MetaMask extension.
So I created a new instance of web3 and simultaneously ripped out the injected copy of web3 provided by MetaMask. Ripping out the provider is where you see the reference to the window global variable.
MetaMask injects web3 onto the window.web3 global variable. The currentProvider has been pre-configured for the Rinkeby test network.
You can then import it to the App file as I displayed above and then console it out to ensure it works like so:
import React, { Component } from 'react';
import web3 from './web3';
class App extends Component {
state = { lastWinner: 0, numberOfBets: 0, minimumBet: 0, totalBet: 0, maxAmountOfBets: 0 };
async componentDidMount() {
// blackjack is assumed to be your contract instance, set up elsewhere
const lastWinner = await blackjack.methods.lastWinner().call();
const numberOfBets = await blackjack.methods.getNumberOfBets().call();
this.setState({ lastWinner, numberOfBets });
}
render() {
console.log(web3.version);
return null; // render must return something; swap in your real UI
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to invoke an aws lambda function from another aws lambda in Java?
I have a code in java that returns an integer value. for example:
public class Hello {
public int myHandler(Object name, Context context) {
int count = 5;
return count;
}
}
I have uploaded this code on AWS with the handler name as example.Hello::myHandler and function name as AWSLAMBDA under the us-east-2 region.
Now I want to write another piece of Java code that invokes this function and reads its output value (count). Note that count is an integer. Since I am a novice in both Java and AWS, kindly help me with this and please provide a simple explanation if possible.
A:
This is an example code snippet using the AWS SDK.
To invoke a function asynchronously, set InvocationType to Event
To invoke a function synchronously, set InvocationType to RequestResponse (which is the default value).
The calling lambda should have a role with attached policy having lambda:InvokeFunction action.
import java.nio.ByteBuffer;
import com.amazonaws.services.lambda.AWSLambda;
import com.amazonaws.services.lambda.AWSLambdaClientBuilder;
import com.amazonaws.services.lambda.model.InvokeRequest;
import com.amazonaws.services.lambda.model.InvokeResult;
AWSLambda client = AWSLambdaClientBuilder.standard().build();
// The payload must be the raw JSON input for the handler; the "fileb://..." syntax works only in the AWS CLI, not the SDK
InvokeRequest request = new InvokeRequest()
        .withFunctionName("MyFunction")
        .withInvocationType("RequestResponse")
        .withLogType("Tail")
        .withPayload(ByteBuffer.wrap("{}".getBytes()))
        .withQualifier("1"); // optional: invoke a specific published version
InvokeResult response = client.invoke(request);
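To read the integer returned by the first function, decode the response payload; a minimal sketch (the payload is the handler's return value serialized as JSON, so here it is simply the text "5"):
import java.nio.charset.StandardCharsets;
String payload = StandardCharsets.UTF_8.decode(response.getPayload()).toString();
int count = Integer.parseInt(payload.trim()); // 5, from the Hello handler above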
|
{
"pile_set_name": "StackExchange"
}
|
Q:
total class specialization for a template
Let's say I have a templated class
template <typename T>
struct Widget
{
//generalized implementation
}
but I wanted to totally specialize it
for a template that accepts a parameter:
template <>
struct Widget< TemplateThatAcceptsParameter<N> >
{
//implementation for Widget for TemplateThatAcceptsParameterN
//which takes parameter N
}
How does one go about doing this?
A:
This is called a partial specialization and can be coded like this:
template <typename T>
struct Widget
{
//generalized implementation
};
template <typename N>
struct Widget< TemplateThatAcceptsParameter<N> >
{
//implementation for Widget for TemplateThatAcceptsParameterN
//which takes parameter N
};
It works just like a regular specialization, but has an extra template argument.
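To see it end to end, here is a compilable sketch (TemplateThatAcceptsParameter stands in for whatever class template you are matching):
#include <iostream>

template <typename N>
struct TemplateThatAcceptsParameter {};

template <typename T>
struct Widget {
    void describe() const { std::cout << "generalized implementation\n"; }
};

// Partial specialization: matches Widget<TemplateThatAcceptsParameter<N>> for any N
template <typename N>
struct Widget< TemplateThatAcceptsParameter<N> > {
    void describe() const { std::cout << "specialized implementation\n"; }
};

int main() {
    Widget<int>().describe();                                    // generalized
    Widget< TemplateThatAcceptsParameter<double> >().describe(); // specialized
}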
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to get all group content from group?
I am using the OG, Flag and Rules modules. I would like to do the following:
When a user clicks the Flag link in a group (the flag name is follow_group), all group content of that group should be flagged as followed.
For now, I'm stuck at getting all the group content of a specific group. Is there a way to get the group content of the current group?
A:
All the content associations are stored in the og_membership table. If you can get the gid (group node id), then it is easy to get the content associated with that group.
$og_contents = db_select('og_membership', 'ogm')
->fields('ogm', array('etid'))
->condition('ogm.gid', $gid, '=')
->condition('ogm.entity_type', 'node', '=')
->execute()
->fetchAll();
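To then flag each piece of content, you can loop over the result; a minimal sketch assuming Flag 7.x and a hypothetical content flag named follow_group_content (substitute the machine name of your own flag):
<?php
// $og_contents holds the rows from the db_select() above; etid is the node id.
foreach ($og_contents as $record) {
  // flag($action, $flag_name, $entity_id) is the Flag module's programmatic API.
  flag('flag', 'follow_group_content', $record->etid);
}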
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Using Node's request module to upload files via REST API
I am using the request module for communicating with the rest API.
So far everything was perfect, and now I have problems with uploading files.
This is my code:
var url = "www.targetsite.com";
var options = {
method: 'post',
json: true,
body: {
parameter: 'param'
},
formData: {
file: fs.createReadStream("pic.jpg")
}
}
request(url, options, function(err, res, res_body){
console.log(err);
...
})
Here I receive the error Error: write after end
If I then remove the "json" and "body" from the options, it makes the request, and returns the error from the other side (missing parameter).
So, how can I send both "body" and upload file in the same call?
A:
This should work pretty well:
var request = require('request');
var fs = require('fs');
var url = "www.targetsite.com";
var options = {
parameter: 'param',
file: fs.createReadStream(__dirname + "/pic.jpg")
}
request.post({url: url, formData: options}, function (err, httpResponse, body) {
// done
});
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to render multiple columns with Markdown in GitHub README?
To render items in three columns, I attempted to add the following CSS3 directives to my project's README.md file, but the styling was stripped out:
<div style="-webkit-column-count: 3; -moz-column-count: 3; column-count: 3; -webkit-column-rule: 1px dotted #e0e0e0; -moz-column-rule: 1px dotted #e0e0e0; column-rule: 1px dotted #e0e0e0;">
<div style="display: inline-block;">
<!-- first column's content -->
</div>
<div style="display: inline-block;">
<!-- second column's content -->
</div>
<div style="display: inline-block;">
<!-- third column's content -->
</div>
</div>
This styling works correctly outside of GitHub's processing of Markdown. How can I put data into multiple columns in a Markdown document? Note that I am not concerned about support for IE browsers and don't care if IE renders a single column (my software project does not work on Windows clients, anyway).
A:
GitHub-Flavored Markdown only permits certain whitelisted tags and attributes in inline HTML:
HTML
You can use a subset of HTML within your READMEs, issues, and pull requests.
A full list of our supported tags and attributes can be found in the README for github/markup.
Regarding <div> tags, that README says that only the itemscope and itemtype attributes are whitelisted, in addition to the general attribute whitelist:
abbr, accept, accept-charset, accesskey, action, align, alt, axis, border, cellpadding, cellspacing, char, charoff, charset, checked, cite, clear, cols, colspan, color, compact, coords, datetime, dir, disabled, enctype, for, frame, headers, height, hreflang, hspace, ismap, label, lang, longdesc, maxlength, media, method, multiple, name, nohref, noshade, nowrap, prompt, readonly, rel, rev, rows, rowspan, rules, scope, selected, shape, size, span, start, summary, tabindex, target, title, type, usemap, valign, value, vspace, width, itemprop
No tags support the style attribute.
Unless you can hack something together with the tags and attributes listed in that README I think you'll find that you're out of luck.
An alternative would be to put together a GitHub Pages site, which seems to be much more flexible.
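One hack that does survive the sanitizer is a plain Markdown table, since table markup is supported; it gives a rough three-column layout, though not true CSS columns:
| First column | Second column | Third column |
| ------------ | ------------- | ------------ |
| item 1       | item 4        | item 7       |
| item 2       | item 5        | item 8       |
| item 3       | item 6        | item 9       |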
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to find the largest value of a column in mysql?
I am writing a forum application. I have a script that creates a board. In addition to the autoincremented board_id column, all boards have an integer column called position that is used to order the boards on the home page. When a new board is created, I want the default position to be the largest value within the rows of the boards table with the given category_id. Positions can have duplicates because they are positioned within their category. I hope that makes sense.
So if I have the following boards...
b_id | c_id | pos |
-------------------
1 | 1 | 1 |
-------------------
2 | 1 | 2 |
-------------------
3 | 2 | 1 |
-------------------
And I am creating a new board in c_id 2, the position should be 2. If the new board is in c_id 1, the position would be 3. How can I do this?
The query below is what I am currently using, but the position always ends up being 0.
INSERT INTO `forum_boards` (
`title`,
`description`,
`category_id`,
`position`
) VALUES (
'Suggestion Box',
'Have an idea that will help us run things better? Let us know!',
'1',
'(SELECT MAX(position), category_id FROM forum_boards WHERE category_id = 1)+1'
)
A:
You can take the approach you are using. You need to drop the single quotes:
INSERT INTO `forum_boards` (`title`, `description`, `category_id`, `position`
)
VALUES ('Suggestion Box',
'Have an idea that will help us run things better? Let us know!',
1,
(SELECT MAX(position) + 1 FROM forum_boards WHERE category_id = 1)
);
However, this doesn't take into account categories that are initially empty. And, I would write this using insert . . . select:
INSERT INTO `forum_boards` (`title`, `description`, `category_id`, `position`
)
SELECT 'Suggestion Box',
'Have an idea that will help us run things better? Let us know!',
1,
COALESCE(MAX(position) + 1, 1)
FROM forum_boards
WHERE category_id = 1;
Note that I dropped the single quotes around '1'. Numbers should be passed in as numbers, not strings.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
UTF-8 issue in Java code
I'm getting a string 'ÐалендаÑÐ' instead of getting 'Календари' in Java code. How can I convert 'ÐалендаÑÐ' to 'Календари'?
I used
String convert = new String(convert.getBytes("iso-8859-1"), "UTF-8");
String convert = new String(convert.getBytes(), "UTF-8");
A:
I believe your code is okay. It appears that your problem is that you need to do a specific character conversion, and maybe your "real" input is not being encoded correctly. To test, I would do a standard step-by-step Charset encoding/decoding to see where things are breaking.
Your encodings look fine, http://docs.oracle.com/javase/1.6/docs/guide/intl/encoding.doc.html
And the following seems to run normally:
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CharsetEncoder;
//i suspect your problem is here - make sure you're encoding the string correctly from the byte/char stream. That is, make sure that you want "iso-8859-1" as your input characters.
Charset charsetE = Charset.forName("iso-8859-1");
CharsetEncoder encoder = charsetE.newEncoder();
//i believe from here to the end will probably stay the same, as per your posted example.
Charset charsetD = Charset.forName("UTF-8");
CharsetDecoder decoder = charsetD.newDecoder();
ByteBuffer bbuf = encoder.encode(CharBuffer.wrap(inputString)); // inputString: the text under test
CharBuffer cbuf = decoder.decode(bbuf);
final String result = cbuf.toString();
System.out.println(result);
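For reference, the garbled string in the question is the classic symptom of UTF-8 bytes decoded as Latin-1; a minimal sketch reproducing and then reversing it:
import java.nio.charset.StandardCharsets;

public class MojibakeDemo {
    public static void main(String[] args) {
        String original = "Календари";
        // Encode as UTF-8, then wrongly decode as ISO-8859-1: produces mojibake
        String garbled = new String(original.getBytes(StandardCharsets.UTF_8),
                                    StandardCharsets.ISO_8859_1);
        // Reversing the mistake recovers the original text
        String fixed = new String(garbled.getBytes(StandardCharsets.ISO_8859_1),
                                  StandardCharsets.UTF_8);
        System.out.println(garbled + " -> " + fixed);
    }
}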
|
{
"pile_set_name": "StackExchange"
}
|
Q:
org-mode exclude fullpage.sty from latex preview preamble
Using fullpage.sty in a document preamble breaks LaTeX preview by generating full-page preview images with page numbers. I.e., this code:
#+LATEX_CLASS: article
#+LATEX_CLASS_OPTIONS: [a4paper]
#+LATEX_HEADER: \usepackage{fullpage}
\begin{equation}
E=mc^2
\end{equation}
will generate a full-page preview image.
How to exclude fullpage package from LaTeX preview preamble? Also, is it a bug and needs a bug report?
A:
According to org-latex-preview#Minted, if you add the fullpage package to org-latex-packages-alist in your Emacs configuration file like so:
(add-to-list 'org-latex-packages-alist '("" "fullpage" nil))
it should use it for exporting but not for the preview.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Can we put verification while schema creation in mongoose
I'm creating a new schema in mongoose and trying to verify different inputs, e.g. ip and pwd.
We can have the verification in the UI part, but can we do it again in the back-end?
Below is the example code:-
var mongoose = require('mongoose'),
Schema = mongoose.Schema;
var ServerSchema = new Schema({
serverIp: { type : String , required : true },
Name: { type: String, required : true },
serverType: {type: String , required : true },
created_date: {type: Date, default: Date.now},
updated_date: {type: Date, default: Date.now}
});
I have never worked on back-end part, so please excuse if this is a novice question.
A:
You can even use custom regular expressions to perform your validation, for example like this:
var userSchema = new Schema({
phone: {
type: String,
validate: {
validator: function(v) {
return /\d{3}-\d{3}-\d{4}/.test(v);
},
message: '{VALUE} is not a valid phone number!'
},
required: [true, 'User phone number required']
}
});
Check out mongoose validation docs
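Applied to the ServerSchema from the question, an IP check could look like this; a minimal sketch with a deliberately loose IPv4 pattern (tighten it for production use):
var ServerSchema = new Schema({
  serverIp: {
    type: String,
    required: true,
    validate: {
      validator: function(v) {
        // Four dot-separated groups of 1-3 digits: a loose IPv4 shape check
        return /^\d{1,3}(\.\d{1,3}){3}$/.test(v);
      },
      message: '{VALUE} is not a valid IP address!'
    }
  }
});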
|
{
"pile_set_name": "StackExchange"
}
|
Q:
maven-plugin-api com.thoughtworks.qdox.parser.ParseException
Getting a com.thoughtworks.qdox.parser.ParseException when building my project. This error first occurred when I changed:
<packaging>jar</packaging>
to:
<packaging>maven-plugin</packaging>
Before that, the entire project built and ran cleanly. My maven-plugin-api is already the newest version available in Maven, so upgrading per "GWT, Maven, Spring - Getting com.thoughtworks.qdox.parser.ParseException: syntax error on Maven Build" won't work. I've also tried downgrading; no change.
The exception:
com.thoughtworks.qdox.parser.ParseException: syntax error @[38,1] in file:/home/blablahbla/MyClass.java
at com.thoughtworks.qdox.parser.impl.Parser.yyerror(Parser.java:716)
at com.thoughtworks.qdox.parser.impl.Parser.yyparse(Parser.java:826)
at com.thoughtworks.qdox.parser.impl.Parser.parse(Parser.java:697)
at com.thoughtworks.qdox.JavaDocBuilder.addSource(JavaDocBuilder.java:300)
at com.thoughtworks.qdox.JavaDocBuilder.addSource(JavaDocBuilder.java:316)
at com.thoughtworks.qdox.JavaDocBuilder.addSource(JavaDocBuilder.java:312)
at com.thoughtworks.qdox.JavaDocBuilder$1.visitFile(JavaDocBuilder.java:369)
at com.thoughtworks.qdox.directorywalker.DirectoryScanner.walk(DirectoryScanner.java:43)
at com.thoughtworks.qdox.directorywalker.DirectoryScanner.walk(DirectoryScanner.java:34)
at com.thoughtworks.qdox.directorywalker.DirectoryScanner.walk(DirectoryScanner.java:34)
at com.thoughtworks.qdox.directorywalker.DirectoryScanner.walk(DirectoryScanner.java:34)
at com.thoughtworks.qdox.directorywalker.DirectoryScanner.walk(DirectoryScanner.java:34)
at com.thoughtworks.qdox.directorywalker.DirectoryScanner.walk(DirectoryScanner.java:34)
at com.thoughtworks.qdox.directorywalker.DirectoryScanner.walk(DirectoryScanner.java:34)
at com.thoughtworks.qdox.directorywalker.DirectoryScanner.scan(DirectoryScanner.java:52)
at com.thoughtworks.qdox.JavaDocBuilder.addSourceTree(JavaDocBuilder.java:366)
at org.apache.maven.tools.plugin.extractor.java.JavaMojoDescriptorExtractor.discoverClasses(JavaMojoDescriptorExtractor.java:628)
at org.apache.maven.tools.plugin.extractor.java.JavaMojoDescriptorExtractor.execute(JavaMojoDescriptorExtractor.java:592)
at org.apache.maven.tools.plugin.scanner.DefaultMojoScanner.populatePluginDescriptor(DefaultMojoScanner.java:105)
at org.apache.maven.plugin.plugin.AbstractGeneratorMojo.execute(AbstractGeneratorMojo.java:171)
at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490)
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694)
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalWithLifecycle(DefaultLifecycleExecutor.java:556)
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:535)
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387)
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:348)
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:362)
at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315)
at org.codehaus.classworlds.Launcher.launch(Launcher.java:255)
at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430)
at org.codehaus.classworlds.Launcher.main(Launcher.java:375)
The pom file:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>foo.bar</groupId>
<artifactId>foobar</artifactId>
<packaging>maven-plugin</packaging>
<version>1.0.1</version>
<name>foobar</name>
<properties>
<org.springframework.version>3.1.0.M1</org.springframework.version>
<org.hibernate.version>3.6.0.Final</org.hibernate.version>
</properties>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>2.0.2</version>
<configuration>
<source>1.6</source>
<target>1.6</target>
<encoding>UTF-8</encoding>
</configuration>
</plugin>
</plugins>
</build>
<dependencies>
<dependency>
<groupId>org.apache.maven</groupId>
<artifactId>maven-plugin-api</artifactId>
<version>2.2.1</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-context</artifactId>
<version>${org.springframework.version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-beans</artifactId>
<version>${org.springframework.version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-core</artifactId>
<version>${org.springframework.version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-tx</artifactId>
<version>${org.springframework.version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-test</artifactId>
<version>${org.springframework.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-entitymanager</artifactId>
<version>${org.hibernate.version}</version>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.8.2</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>com.oracle</groupId>
<artifactId>ojdbc6</artifactId>
<version>11.1.0.7.0</version>
</dependency>
<dependency>
<groupId>postgresql</groupId>
<artifactId>postgresql</artifactId>
<version>9.0-801.jdbc4</version>
</dependency>
<dependency>
<groupId>commons-io</groupId>
<artifactId>commons-io</artifactId>
<version>2.0.1</version>
</dependency>
<dependency>
<groupId>commons-cli</groupId>
<artifactId>commons-cli</artifactId>
<version>1.2</version>
</dependency>
<dependency>
<groupId>foo.bar.internal</groupId>
<artifactId>internal-artifact</artifactId>
<version>0.1.9-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
<version>1.6.1</version>
</dependency>
</dependencies>
</project>
The class in question:
import javax.persistence.*;
@Table(name = "MY_TABLE")
@SecondaryTables({
@SecondaryTable(name = "MY_TABLE2"),
@SecondaryTable(name = "MY_TABLE3"),
@SecondaryTable(name = "MY_TABLE4")
})
@Entity
@NamedQueries({
...
})
@AttributeOverrides({ // line 37
// @AttributeOverride( //line 38
// name = "metadataCheckOutFlag",
// column = @Column(
// name = "COMMENTED_OUT_FIELD",
// table = "MY_TABLE2"
// )
// ),
})
public class MyClass extends SimpleMyClass {
}
All JPA annotations have previously functioned without issue.
A:
Well, it turns out I had an older version of another Maven plugin, the maven-compiler-plugin. When I upgraded it to version 2.3.2, a new version of the qdox library was downloaded and my problems disappeared, even when I tested downgrading to 2.0.2 again. Relevant section of pom.xml:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>2.3.2</version>
<configuration>
<source>1.6</source>
<target>1.6</target>
<encoding>UTF-8</encoding>
</configuration>
</plugin>
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Why do we set $u_n=r^n$ to solve recurrence relations?
This is something I have never found a convincing answer to; maybe I haven't looked in the right places. When solving a linear difference equation we usually put $u_n=r^n$, solve the resulting polynomial, and then use these to form a complete solution. This is how to get $F_n=\frac{1}{\sqrt{5}}(\phi^n-(-\phi)^{-n})$ from $F_{n+1}=F_n+F_{n-1}$ and $F_0=0, F_1=1$ for example.
I can see that this works, but I don't understand where it comes from or why it does indeed give the full solution. I have read about eigenfunctions and other approaches, but they didn't explain the mechanics behind this very clearly. Could someone help me understand it?
A:
This form of the solution comes out rather naturally if one works in terms of generating functions. This reduces the linear difference equation to a matter of careful algebra, in much the same way as the Laplace transform does for linear differential equations.
The Fibonacci sequence provides a nice example of this. We have $F_{n+2}=F_{n+1}+F_n$ for $n\geq 0$ and $F_0=F_1=1$ as boundary values. We may introduce the generating function / formal power series $\mathcal{F}(x)=\sum_{n=0}^\infty F_n x^n$ where $x$ is a formal variable. Then
\begin{align}\mathcal{F}(x)
&=1+x+\sum_{n=0}^\infty F_{n+2} x^{n+2}\\
&=1+x+\sum_{n=0}^\infty F_{n+1} x^{n+2}+\sum_{n=0}^\infty F_{n} x^{n+2}\\
&=1+x+x(\mathcal{F}(x)-1)+x^2 \mathcal{F}(x)\\&=1+(x+x^2)\mathcal{F}(x)\hspace{2.5 cm}\implies \mathcal{F}(x)=\frac{1}{1-x-x^2}
\end{align}
One may verify that series expansion of $\mathcal{F}(x)$ produces the Fibonacci sequence as coefficients. But we can also express this generating function via partial fractions as $$\mathcal{F}(x)=\dfrac{A_+}{1-r_+ x}+\dfrac{A_-}{1-r_- x}$$ where $A_\pm$ are appropriate coefficients and $r_\pm$ are the roots of $r^2=r+1$ (i.e. the characteristic equation; equivalently, the reciprocals of the roots of $1-x-x^2=0$). Expanding these as geometric series, we find coefficients $F_n = A_+ (r_+)^n+A_- (r_-)^n$. This is precisely the form we had expected.
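Making the last step explicit, with the convention $F_0=F_1=1$ used above: the characteristic equation $r^2=r+1$ has roots $r_\pm=\frac{1\pm\sqrt{5}}{2}$, and the boundary values fix the coefficients via
\begin{align}
A_+ + A_- = F_0 = 1,\qquad A_+ r_+ + A_- r_- = F_1 = 1 \quad\implies\quad A_\pm=\pm\frac{r_\pm}{\sqrt{5}},
\end{align}
so that $F_n=\frac{r_+^{n+1}-r_-^{n+1}}{\sqrt{5}}$, which is the Binet-type formula (shifted by one index relative to the convention $F_0=0$, $F_1=1$ in the question).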
|
{
"pile_set_name": "StackExchange"
}
|
Q:
JSON Serialization Issue: The argument type 'Tracks' can't be assigned to the parameter type 'Map'
I have written a custom type "Tracks", and am reading the tracks from a JSON file and then converting them into a List of <Tracks>.
Here is a portion of the code (which throws the error on the 4th line):
Future loadTrackList() async {
String content = await rootBundle.loadString('data/titles.json');
List<Tracks> collection = json.decode(content);
List<Tracks> _tracks = collection.map((json) => Tracks.fromJson(json)).toList();
setState(() {
tracks = _tracks;
}); }
Additionally, here is the tracks.dart file where I have serialized the JSON.
class Tracks {
  final String title;
  final String subtitle;

  Tracks(this.title, this.subtitle);

  Tracks.fromJson(Map<String, dynamic> json)
      : title = json['title'],
        subtitle = json['subtitle'];
}
And in my original usage scenario, I use this list of tracks in this manner:
body: ListView.builder(
itemCount: tracks.length,
itemBuilder: (BuildContext context, int index) {
var rnd = new Random();
var totalUsers = rnd.nextInt(600);
Tracks trackTitles = tracks[index];
return PrimaryMail(
iconData: OMIcons.supervisorAccount,
title: trackTitles.title,
count: "$totalUsers active",
colors: Colors.lightBlueAccent,
);
},
),
Now, firstly - in the first bunch of code inside the async loader, I am getting the error "The argument type 'Tracks' can't be assigned to the parameter type 'Map<String, dynamic>'." for the line
List<Tracks> _tracks = collection.map((json) => Tracks.fromJson(json)).toList();
Additionally, when I use the track here:
title: trackTitles.title, // inside listView builder
I get the error (when running the app, not before): type '_InternalLinkedHashMap<String, dynamic>' is not a subtype of type 'Tracks'
Requesting any kind of help on how to get rid of this issue. You can find the important part of the whole code in this link for a good look at the implementation.
A:
Try this - json.decode returns a List<dynamic> of maps rather than a List<Tracks>, so keep the decoded value dynamic and map each entry through Tracks.fromJson:
import 'dart:async';
import 'dart:convert';
import 'package:flutter/material.dart';
import 'package:flutter/services.dart';
import 'package:news_app/tracks.dart';
class TrackList extends StatefulWidget {
@override
State<StatefulWidget> createState() => _TrackListState();
}
class _TrackListState extends State<TrackList> {
List<dynamic> tracks = List<dynamic>();
Future loadTrackList() async {
String content = await rootBundle.loadString('data/titles.json');
var collection = json.decode(content);
print(collection);
List<dynamic> _tracks =
collection.map((json) => Tracks.fromJson(json)).toList();
setState(() {
tracks = _tracks;
});
}
void initState() {
loadTrackList();
super.initState();
}
@override
Widget build(BuildContext context) => Scaffold(
backgroundColor: Colors.white,
body: Container(
child: ListView.builder(
itemCount: tracks.length,
itemBuilder: (BuildContext context, int index) {
Tracks item = tracks[index];
return Text(item.title);
},
),
),
);
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
ASP.NET Application misses the Breakpoints
I'm facing a severe problem - my ASP.NET application doesn't hit the breakpoints.
I've read this Why Arent Breakpoints Working In Web Application Project. But it didn't help.
I'm using VS2010 and the page is loaded in IE.
Here's my web.config file
<configuration>
<connectionStrings>
<add name="ApplicationServices"
connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|\aspnetdb.mdf;User Instance=true"
providerName="System.Data.SqlClient" />
</connectionStrings>
<system.web>
<siteMap enabled="true">
<providers>
<clear/>
<add siteMapFile="Web.sitemap" name="AspNetXmlSiteMapProvider" type="System.Web.XmlSiteMapProvider" securityTrimmingEnabled="true"/>
</providers>
</siteMap>
<compilation debug="true" targetFramework="4.0" />
<authentication mode="Forms">
<forms loginUrl="~/Account/Login.aspx" timeout="2880" />
</authentication>
<membership>
<providers>
<clear/>
<add name="AspNetSqlMembershipProvider" type="System.Web.Security.SqlMembershipProvider" connectionStringName="ApplicationServices"
enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="false" requiresUniqueEmail="false"
maxInvalidPasswordAttempts="5" minRequiredPasswordLength="6" minRequiredNonalphanumericCharacters="0" passwordAttemptWindow="10"
applicationName="/" />
</providers>
</membership>
<profile>
<providers>
<clear/>
<add name="AspNetSqlProfileProvider" type="System.Web.Profile.SqlProfileProvider" connectionStringName="ApplicationServices" applicationName="/"/>
</providers>
</profile>
<roleManager enabled="false">
<providers>
<clear/>
<add name="AspNetSqlRoleProvider" type="System.Web.Security.SqlRoleProvider" connectionStringName="ApplicationServices" applicationName="/" />
<add name="AspNetWindowsTokenRoleProvider" type="System.Web.Security.WindowsTokenRoleProvider" applicationName="/" />
</providers>
</roleManager>
</system.web>
<system.webServer>
<modules runAllManagedModulesForAllRequests="true"/>
</system.webServer>
</configuration>
I start my application simply clicking F5 in VisualStudio.
Does anybody know what the mistake is, please?
This is my Login.aspx.cs
public partial class Login : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
RegisterHyperLink.NavigateUrl = "Register.aspx?ReturnUrl=" + HttpUtility.UrlEncode(Request.QueryString["ReturnUrl"]);
}
protected void LoginButton_Click(object sender, EventArgs e)
{
var userNameTextBox = LoginUser.FindControl("UserName") as TextBox;
var passwordTextBox = LoginUser.FindControl("Password") as TextBox;
if (userNameTextBox != null && passwordTextBox != null)
{
bool succes = UserDAL.Login(userNameTextBox.Text, passwordTextBox.Text);
if (succes == true)
Server.Transfer("~/Account/Succes.aspx");
}
}
}
And here's its interface.
<asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent">
<center>
<p>
Enter your user name and password.
<asp:HyperLink ID="RegisterHyperLink" runat="server" EnableViewState="false">Register</asp:HyperLink> if you don't have an account.
</p>
<asp:Login ID="LoginUser" runat="server" EnableViewState="false" RenderOuterTable="false">
<LayoutTemplate>
<span class="failureNotification">
<asp:Literal ID="FailureText" runat="server"></asp:Literal>
</span>
<asp:ValidationSummary ID="LoginUserValidationSummary" runat="server" CssClass="failureNotification"
ValidationGroup="LoginUserValidationGroup"/>
<div class="accountInfo">
<fieldset class="login">
<legend>text</legend>
<p>
<asp:Label ID="UserNameLabel" runat="server" AssociatedControlID="UserName">Login:</asp:Label>
<asp:TextBox ID="UserName" runat="server" CssClass="textEntry"></asp:TextBox>
<asp:RequiredFieldValidator ID="UserNameRequired" runat="server" ControlToValidate="UserName"
CssClass="failureNotification" ErrorMessage="" ToolTip=""
ValidationGroup="LoginUserValidationGroup">*</asp:RequiredFieldValidator>
</p>
<p>
<asp:Label ID="PasswordLabel" runat="server" AssociatedControlID="Password">Password:</asp:Label>
<asp:TextBox ID="Password" runat="server" CssClass="passwordEntry" TextMode="Password"></asp:TextBox>
<asp:RequiredFieldValidator ID="PasswordRequired" runat="server" ControlToValidate="Password"
CssClass="failureNotification" ErrorMessage="" ToolTip=""
ValidationGroup="LoginUserValidationGroup">*</asp:RequiredFieldValidator>
</p>
</fieldset>
<p class="submitButton">
<asp:Button ID="LoginButton" runat="server" CommandName="Login" Text="Выполнить вход" ValidationGroup="LoginUserValidationGroup"/>
</p>
</div>
</LayoutTemplate>
</asp:Login>
</center>
</asp:Content>
A:
You need to add an OnClick="LoginButton_Click" attribute to your <asp:Button...> item to hook the button to the event.
Read more here, or use CommandName / CommandArgument.
You are doing a bit of both and that is why it is not working as you expect.
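Per the first suggestion, the button from the question's LayoutTemplate would become (a minimal sketch, dropping CommandName so only one mechanism is in play):
<asp:Button ID="LoginButton" runat="server" OnClick="LoginButton_Click" Text="Log In" ValidationGroup="LoginUserValidationGroup"/>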
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Why do Young-Earth Creationists make such a big deal about the YEC view
For anyone who's unfamiliar with what the Young-Earth view, here's a starting point.
I want to start out by saying that this question is not about whether or not the YEC view is true. It's not about whether there is scientific evidence for and/or against, or even neutral. This is in no way, shape, or form, a question about the validity of the YEC view, or any opposing view.
Specifically, I'm asking why the Kent Hovinds, Ken Hams, and other prominent YEC creationists, and so many fundamentalist conservative Christians, are so married to the YEC view, and so vocal about it.
Most of the people that self-identify as Christians couldn't care less about the issue. The idea of creation/evolution is a non-issue. Debating the Genesis account of creation and the flood has no bearing on doctrines they care about, from their perspective. As a matter of fact, many of them consider Creationists to be an embarrassment, and the idea of speaking out on the topic as counter-productive at best, and some think it's downright dangerous.
They open themselves, and as a result all of Christianity, to ridicule because they "refuse to believe science" and stick to ancient stories.
For the record (for the one or two people that don't already know it) I am one of those people. I'm a Young-Earth Creationist, so even though the above may sound like I'm attacking the YEC view, that's not why I put all that in there.
What I'm looking for is an answer as to why the "Creation Science Evangelists" - the Kent Hovinds, Ken Hams, the ICR, creationtoday.org, and others are willing to go around preaching something that gets them laughed at, ridiculed, and even other Christians wish they'd just knock it off. Why is it so important to them? Is there a doctrinal reason, or are they just that stuck in their ways that they're unwilling to change?
A:
Again... This is not about the validity of the YEC view. The point of this is not to reveal "Truth", the point is to accurately explain the doctrinal significance of the view, from the perspective of those who believe the view, so that we have it on record on site.
The answer is quite simple, actually, and laid out very well on the Answers in Genesis website. The short version is that in the minds of Young Earth Creationists, the issue is not whether the earth is young or old. It's not whether or not we evolved. Those are distractions from the real question, which is "Can we trust Scripture?", and extending it further, "Was Jesus Himself a liar?"
Remember - just explaining the position/doctrinal view, not debating the validity.
Every one of the Young Earth Creationists listed, and those of us that follow them, are among those that believe that the Bible is the divinely inspired, inerrant, infallible Word of God as described here, here, and in the AIG article I linked to above. Since the original manuscripts were given by God, it is impossible for them to be erroneous in any way. And since we have such overwhelming manuscript evidence, we have every reason to believe that the Bible we have today is reliable.
All of them subscribe to a historical-grammatical method of interpretation, which provides guidelines for determining what content is to be taken literally, and where a non-literal interpretation is warranted. Of course, there is variance in how this is applied; the view held by Young Earth Creationists is that a simple reading of the Genesis account, without any external, non-Biblical evidence, clearly gives six literal days of creation, one day of rest, a bit of history, and then a global flood. The age of the earth is based strictly on adding up the genealogies (when so-and-so was x years old, he begat y) and so on.
Since the method of interpretation includes the following:
Extra-Biblical resources, such as language helps, commentaries, the
writings of the so-called church fathers, and archaeological and
scientific evidences, can be useful resources in correctly
interpreting Scripture. But since they are the words and works of
fallible men they are not authoritative.
when determining what a passage says, we can't refer to things like common knowledge, radiometric dating, currently accepted geology, paleontology, or any other branch of science. When determining what Scripture says, only the context given within Scripture can be used. In other words, no "knowledge" of fallible man can possibly equal the revealed Truth given by God in Scripture.
Therefore, the only measurement we have to glean the age of the earth is the genealogies. Without modern scientific knowledge, there would be no need for a gap theory, or a day-age theory. That's why they refer to these as "compromise" theories - because they are attempting to use external evidence - man's fallible evidence - and make it fit into God's word.
In Kent Hovind's words, "if you gave someone a Bible, with no idea about the controversy, and said 'read this - tell me what this says', not one of them would say 'Oh, there were millions of years between those days.' or gap theory, or anything of the sort. They'd say 7 days". Dr. Hovind goes on to re-state that you can't use fallible man's ideas to re-interpret Scripture.
So, for Young-Earth Creationists, the reason they view the Day-Age theory, or the Gap Theory, or anything else that tries to tie billions of years into the Creation account as invalid is pretty straightforward.
But how about the idea that the Genesis account is an allegory? Plenty of Christians believe that.
Again, back to the "rules" of interpretation:
Scripture is intelligible. God meant for us to understand it.
Because it is infallible, the Bible is internally consistent. It can't contradict itself.
Because God meant to communicate truth, and because Scripture is internally consistent, the words of Scripture have only one meaning in context. There may be multiple legitimate applications of a passage of Scripture, but a passage has only one meaning in context. This is what it means to interpret Scripture according to its literal, or normal, sense.
None of that rules out an allegorical Genesis account, but how did Jesus treat the Genesis account? Did He speak of it as if it were real, or did He speak of it as if it were an allegory?
Borrowing from the Answers in Genesis article:
Another way that Jesus revealed His complete trust in the Scriptures
was by treating as historical fact the accounts in the Old Testament
which most contemporary people think are unbelievable mythology. These
historical accounts include Adam and Eve as the first married couple
(Matt. 19:3-6, Mark 10:3-9), Abel as the first prophet who was
martyred (Luke 11:50-51), Noah and the Flood (Matt. 24:38-39), Moses
and the serpent (John 3:14), Moses and the manna (John 6:32-33, 49),
the experiences of Lot and his wife (Luke 17:28-32), the judgment of
Sodom and Gomorrah (Matt. 10:15), the miracles of Elijah (Luke
4:25-27), and Jonah and the big fish (Matt. 12:40-41). As Wenham has
compellingly argued,7 Jesus did not allegorize these accounts but took
them as straightforward history, describing events that actually
happened just as the Old Testament describes. Jesus used these
accounts to teach His disciples that the events of His death,
resurrection and second coming would likewise certainly happen in
time-space reality.
So, in summary, for the Young-Earth Creationist, because of the doctrines of an inspired, infallible, inerrant Word of God, combined with the approach they use toward interpreting Scripture, there is no breathing room left.
Jesus didn't seem to treat the Genesis account as allegorical. He presented it as actual historical fact.
By extension, if it were not actual, historical fact, then Jesus was either wrong or a liar. Both options end up with Christianity being untrue.
The rules of interpretation of the Genesis account provide no reason not to take the creation period as six literal days, with one day of rest. And there is nothing in Scripture, anywhere else, that provides even a hint that the account is to be taken as allegory. The only evidence to that effect is external, non-Biblical evidence. Therefore, per the rules of the historical-grammatical method of interpretation, we must accept that the account is a literal one, not allegorical.
Again, this leaves us with two options: The Bible is wrong, or the historical-grammatical method of interpretation is wrong.
If the Bible is wrong, then we have no reliable record of history, and nothing upon which to base our faith, other than man's fallible teachings, which means Christianity is no more or less valid that Buddhism, or the worship of trees.
If the historical-grammatical method of interpretation is wrong, then we lack a framework for correctly divining the meaning of Scripture, and again, nothing solid upon which to place our faith.
So for the Young-Earth Creationist, all of these doctrines combine to force us into a corner where our only option is to believe in a young-earth view. Any other view would render the book of Genesis as unreliable. Since Jesus referred to Genesis as real history several times, it would make Jesus unreliable. Since many of Scripture's doctrines can be traced back to Genesis (original sin, marriage, and others), if Genesis falls, the rest of Scripture falls, Jesus is fallible, and therefore not God, so all of Christianity is a sham.
Again, one last time. This is not about the validity of the YEC view. The point of this is not to reveal "truth", the point is to accurately explain the view, so that we have it on record on site. I am fully aware that not all of Christianity holds these views. I am fully aware that you don't have to be a YEC believer to be a Christian. I'm not stating that any other view is wrong. I am merely presenting the thought process, and doctrinal importance of the Young-Earth Creationist view to those of us that hold this view.
A:
Rephrasing what you said slightly more succinctly: it isn't about history, it's about trustworthiness.
Creationists see all theories that attempt to explain origins as inherently matters of faith. One either trusts that matter could have somehow been there, packed so densely together that it caused a universe-creating explosion, and then developed strictly through natural processes against literally astronomical odds, or else one believes there is a Creator. There are no other possibilities.
Given then that there is a Creator (and here is where Intelligent Designers and YECs technically depart) the YEC'er says that there is no reason to assume that the Bible is not literally a first hand account with complete and sufficient information.
By tabulating results from a complete record, it is merely a matter of math rather than science, because it falls from the postulate of:
The Bible is complete and sufficient
That a YEC arrives at the rest of the narrative.
Indeed any successful attack on the YEC position must necessarily be one on the main postulate - otherwise it is only the expression of the science that is dismissed. ICR and the ID movement are scientists - they hypothesize and then test hypotheses. The only question is the framework in which the hypotheses are developed.
A:
What I'm looking for is an answer as to why the "Creation Science Evangelists" ... are willing to go around preaching something that gets them laughed at...Why is it so important to them? Is there a doctrinal reason, or are they just that stuck in their ways that they're unwilling to change?
They see a willingness to yield on this point as similar to Christians who "bent the knee" (offered an act of worship) to Caesar to avoid persecution in ancient times.
They see putting worldly wisdom ahead of the truth of the Bible as an act of betrayal of God.
Luke 9:26 For whosoever shall be ashamed of me and of my words, of
him shall the Son of man be ashamed, when he shall come in his own
glory, and in his Father's, and of the holy angels.
They recognize what appears to be an inconsequential act of compromise as a major act of undermining the word of God.
1 Corinthians 5:6 Your glorying is not good. Know ye not that a
little leaven leaveneth the whole lump?
They see humanity as being brought to such a state of collectivism that most people think in terms of social control through bullying and intimidation (like getting a school class to laugh at a boy who is not compliant).
Luke 7:32 They are like unto children sitting in the marketplace, and
calling one to another, and saying, We have piped unto you, and ye
have not danced; we have mourned to you, and ye have not wept.
They see truth as something absolute and worth defending.
John 3:19 And this is the condemnation, that light is come into the
world, and men loved darkness rather than light, because their deeds
were evil.
They see social pressure to deny what they see as absolute as a type of martyrdom.
John 15:19 If ye were of the world, the world would love his own: but
because ye are not of the world, but I have chosen you out of the
world, therefore the world hateth you.
They see compromise with what is popular as something evil.
James 4:4 Ye adulterers and adulteresses, know ye not that the
friendship of the world is enmity with God? whosoever therefore will
be a friend of the world is the enemy of God.
I consider myself one of "them". Speaking as one of them, the idea of changing my beliefs to suit the preferences of others would be poor service to my Savior.
Galatians 1:10 For do I now persuade men, or God? or do I seek to
please men? for if I yet pleased men, I should not be the servant of
Christ.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
HttpServletRequest JAXP DOM: reading POST data
I have an HttpServletRequest object in my servlet which obtains an XML document posted to it. I would like to use JAXP (not JAXB, because for one it uses too much disk space for my particular use case). I need to parse the document into a DOM object in memory where it will be processed.
Any idea of how to parse the POST XML from the request object?
Thanks,
John Goche
A:
That depends on how the client has sent it.
If it conforms to the HTTP multipart/form-data standard (like is used together with the HTML <input type="file">), then use Apache Commons FileUpload or Servlet 3.0's HttpServletRequest#getParts() to extract the desired part from the multipart request. You can find some concrete examples here: How to upload files to server using JSP/Servlet? You'd ultimately like to end up with an InputStream.
If it's the raw request body (i.e. the entire request body is actually the whole XML file; you see this often in homegrown low-level applications which are abusing the HTTP protocol for transferring files), then you can get it as an InputStream by just calling HttpServletRequest#getInputStream().
Whatever way you use/choose, you'd need to ensure that you somehow end up with an InputStream referring the XML file. This way you can feed it to the JAXP API the usual way, which has methods taking an InputStream.
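Once you have that InputStream, parsing it into an in-memory DOM with JAXP is a few lines; a minimal sketch for the raw-body case, with exception handling omitted for brevity:
import java.io.InputStream;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

// Inside doPost(HttpServletRequest request, HttpServletResponse response):
InputStream in = request.getInputStream();
DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
factory.setNamespaceAware(true); // usually what you want for XML with namespaces
DocumentBuilder builder = factory.newDocumentBuilder();
Document doc = builder.parse(in); // the in-memory DOM, ready for processing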
|
{
"pile_set_name": "StackExchange"
}
|
Q:
MySQL LIKE - partial match with up to 2 different characters
Trying to look for partial matches:
SELECT * FROM watches_check WHERE modelNumber LIKE
'URL'
sample data:
URL: http://www.domain.com/s/keywords=AK-1470GBST&id=123456
modelNumber: AK-1470GBST
I'd like to find if the modelNumber is in the URL field, but it returns 0 results all the time.
Where did I go wrong?
Also, would it be possible to find partial matches where there are, say, 1 or 2 differing characters? For example, AK-1470GBST vs AK/1470GBST ('/' instead of '-') would be considered a match.
A:
You can use
SELECT * FROM watches_check WHERE INSTR(URL, modelNumber) > 0
to check if your URL contains the modelNumber. This will even work if your modelNumber contains the wildcards for LIKE patterns: % and _.
It won't return partial matches. Have a look at Gordon's answer for that.
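For the specific '/' versus '-' case, one crude option is to normalize the separator on both sides before comparing; a sketch that handles only known substitutions, not arbitrary one- or two-character differences:
SELECT *
FROM watches_check
WHERE INSTR(REPLACE(URL, '/', '-'), REPLACE(modelNumber, '/', '-')) > 0;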
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Automatically prepending Github issue number to all commits
This has already been asked here but was not answered as far as I can see.
I am writing a script to automate git workflow. When a topic branch is created it automatically creates a github issue referencing it - is there any way to automatically prepend this issue number to all commits made in that branch? I am thinking there may be a way using git hooks but I'm unable to find it.
I am aware that manually adding #xxx to the beginning of each commit message will do this; what I am interested in (and what was never answered in the original question) is if there is a way for this to be added automatically.
A:
Have a look at how Henrik Nyh did a similar thing, by extracting issue numbers from branch names. You might be able to get something out of that.
Henrik's article is here.
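The core of that technique is a prepare-commit-msg hook; a minimal sketch, assuming branch names like issue-123-topic (adjust the pattern to your own naming scheme; GNU sed assumed):
#!/bin/sh
# .git/hooks/prepare-commit-msg -- must be executable; $1 is the commit message file
BRANCH=$(git symbolic-ref --short HEAD 2>/dev/null)
ISSUE=$(echo "$BRANCH" | grep -oE '[0-9]+' | head -n 1)
if [ -n "$ISSUE" ]; then
    # Prepend "#123 " to the commit message (on macOS use: sed -i '' ...)
    sed -i "1s/^/#$ISSUE /" "$1"
fi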
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How i can change the text of the next span
I have the following markup inside my SharePoint web page (which I cannot control):
<span dir="none">
<select id="OrderAssignToApprover_9002e96d-1276-4355-9a2a-0c565d8079db_$DropDownChoice" title="Assign To Approver Required Field" class="ms-RadioText">
<option value="f**">f**</option>
<option value="m**">m**</option>
<option value="m**">m**</option>
<option value="t**">t**</option>
<option value=""></option>
</select>
</span>
<br>
<span class="ms-metadata">AddSourceHere</span>
Now I am trying to find a way, using JavaScript or jQuery, to dynamically change the text of the span which comes after the select list from "AddSourceHere" to something else.
What I am trying to do:
Select the select list which has the specific id
Select the next span
Change the span text.
A:
You can just find the dropdown by id and use nextAll to find the span.
$('#OrderAssignToApprover_9002e96d-1276-4355-9a2a-0c565d8079db_\\$DropDownChoice')
.parent('span')
.nextAll('span.ms-metadata:eq(0)')
.text("Some new value");
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<span dir="none">
<select id="OrderAssignToApprover_9002e96d-1276-4355-9a2a-0c565d8079db_$DropDownChoice" title="Assign To Approver Required Field" class="ms-RadioText">
<option value="f**">f**</option>
<option value="m**">m**</option>
<option value="m**">m**</option>
<option value="t**">t**</option>
<option value=""></option>
</select>
</span>
<br>
<span class="ms-metadata">AddSourceHere</span>
Note the one sticky point is that you have a $ in your select id, which needs to be escaped when used as a selector in jQuery.
You can also do this with straight JavaScript, but it's a bit nasty.
var select = document.getElementById('OrderAssignToApprover_9002e96d-1276-4355-9a2a-0c565d8079db_$DropDownChoice');
select.parentNode.nextSibling.nextSibling.nextSibling.nextSibling.innerHTML = "Some text here"
<span dir="none">
<select id="OrderAssignToApprover_9002e96d-1276-4355-9a2a-0c565d8079db_$DropDownChoice" title="Assign To Approver Required Field" class="ms-RadioText">
<option value="f**">f**</option>
<option value="m**">m**</option>
<option value="m**">m**</option>
<option value="t**">t**</option>
<option value=""></option>
</select>
</span>
<br>
<span class="ms-metadata">AddSourceHere</span>
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to generate multiple formatted files in Talend open studio based on MySQL tables?
I need to pull data from a MySQL DB and create formatted files depending on the data. How can I do that with Talend Open Studio?
The MySQL DB has one table (user_id, order_id, purchase_date) and I need to generate CSV files for each user containing his orders. File names should contain the user_id (output files could be like user_id.csv).
Thanks
A:
You can try the flow below -
tMysqlInput--->tFlowToIterate---(iterate)-->tMysqlInput--->tFileOutputDelimited
More details given below -
tmysqlInput(select user_id from table group by user_id) --- row Main ---> tFlowToIterate (uncheck the "use the default key" option, create a new key called user_id and set its value to user_id in the dropdown) ----- Iterate -----> tmysqlInput(sql = "select user_id, order_id, purchase_date from table where user_id='" + (String)globalMap.get("user_id") + "'") ----- row main ----> tFileOutputDelimited(set filename = (String)globalMap.get("user_id") + ".csv").
To summarize - you first get the list of all distinct user_ids, then you iterate through each of them, fetch the orders for each user_id by applying a filter, and use the user_id value from the global variable in the filename.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
partition count reduced at sink during kafka stream record forward
I am using Kafka Streams to process a few Kafka records. I have two nodes: one does some transformation and the other is a final sink.
My topics INTER_TOPIC and FINAL_TOPIC have 20 partitions each, and my producer writing to INTER_TOPIC writes key/value records using a round-robin partitioner.
Below is the code at my intermediate transform node.
public void streamHandler() {
Properties props = getKafkaProperties();
StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> processStream = builder.stream("INTER_TOPIC",
Consumed.with(Serdes.String(), Serdes.String()));
//processStream.peek((key,value)->System.out.println("key :"+key+" value :"+value));
processStream.map((key, value) -> getTransformer().transform(key, value)).filter((key,value)->filteroutFailedRequest(key,value)).to("FINAL_TOPIC", Produced.with(Serdes.String(), Serdes.String()));
KafkaStreams IStreams = new KafkaStreams(builder.build(), props);
IStreams.setUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
@Override
public void uncaughtException(Thread t, Throwable e) {
logger.error("Thread Name :" + t.getName() + " Error while processing:", e);
}
});
IStreams.cleanUp();
IStreams.start();
try {
System.in.read();
} catch (IOException e) {
logger.error("Failed streaming ",e);
}
}
But my sink is getting data in only 2 partitions, even though I have 20 stream threads configured, and I verified my producer is writing to all 20 partitions. How can I verify that my transform node is forwarding to all 20 partitions of my FINAL_TOPIC?
30 Sep 2019 10:39:41,416 INFO c.j.m.s.StreamHandler [289] [streams-user-61a77203-9afc-4c66-843d-94c20a509793-StreamThread-3] Received
30 Sep 2019 10:39:41,416 INFO c.j.m.s.StreamHandler [289] [streams-user-61a77203-9afc-4c66-843d-94c20a509793-StreamThread-4] Received
30 Sep 2019 10:39:41,416 INFO c.j.m.s.StreamHandler [289] [streams-user-61a77203-9afc-4c66-843d-94c20a509793-StreamThread-3] Received
30 Sep 2019 10:39:41,416 INFO c.j.m.s.StreamHandler [289] [streams-user-61a77203-9afc-4c66-843d-94c20a509793-StreamThread-4] Received
30 Sep 2019 10:40:57,427 INFO c.j.m.s.StreamHandler [289] [streams-user-61a77203-9afc-4c66-843d-94c20a509793-StreamThread-3] Received
30 Sep 2019 10:40:57,427 INFO c.j.m.s.StreamHandler [289] [streams-user-61a77203-9afc-4c66-843d-94c20a509793-StreamThread-4] Received
30 Sep 2019 10:40:57,427 INFO c.j.m.s.StreamHandler [289] [streams-user-61a77203-9afc-4c66-843d-94c20a509793-StreamThread-3] Received
30 Sep 2019 10:40:57,427 INFO c.j.m.s.StreamHandler [289] [streams-user-61a77203-9afc-4c66-843d-94c20a509793-StreamThread-4] Received
A:
and my producer writing to INTER_TOPIC writes key/value records using a round-robin partitioner
Why do you think that the partitioner is round-robin? By default, Kafka Streams applies a hash-based partitioning based on the key.
If you want to change the default partitioner, you can implement interface StreamPartitioner and pass it via:
Produced.with(Serdes.String(), Serdes.String())
.withStreamPartitioner(...)
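For example, a true round-robin partitioner is a small class; a minimal sketch against the Kafka Streams 2.x StreamPartitioner interface (older versions omit the topic parameter):
import java.util.concurrent.atomic.AtomicInteger;
import org.apache.kafka.streams.processor.StreamPartitioner;

public class RoundRobinPartitioner implements StreamPartitioner<String, String> {
    private final AtomicInteger counter = new AtomicInteger();

    @Override
    public Integer partition(String topic, String key, String value, int numPartitions) {
        // Ignore the key entirely and cycle through every partition of the sink topic
        return Math.floorMod(counter.getAndIncrement(), numPartitions);
    }
}
You would then pass new RoundRobinPartitioner() in place of the ... above.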
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Safari ignore CSS :not() rule
I have a problem with Safari 13.0.5 (14608.5.12).
While trying to modify some of the WordPress Divi Builder plugin styles, I came across an unexpected problem: I was unable to apply a negation rule that would prevent the plugin from styling a particular place when necessary.
Modifying file:
src/wordpress/wp-content/plugins/divi-builder/includes/builder/styles/frontend-builder-plugin-style.min.css
So I went from this:
#et-boc .et-l .hentry,
#et-boc .et-l a,
#et-boc .et-l a:active,
#et-boc .et-l blockquote,
#et-boc .et-l div:not(.woocommerce-message,.star-rating),
#et-boc .et-l em,
#et-boc .et-l form,
#et-boc .et-l h1,
#et-boc .et-l h2,
#et-boc .et-l h3,
#et-boc .et-l h4,
#et-boc .et-l h5,
#et-boc .et-l h6,
#et-boc .et-l hr,
#et-boc .et-l iframe,
#et-boc .et-l img,
#et-boc .et-l input,
#et-boc .et-l label,
#et-boc .et-l li,
#et-boc .et-l object,
#et-boc .et-l ol,
#et-boc .et-l p,
#et-boc .et-l span,
#et-boc .et-l strong,
#et-boc .et-l textarea,
#et-boc .et-l ul,
#et-boc .et-l video {background: 0 0;
border: none;
-moz-border-radius: 0;
-webkit-border-radius: 0;
border-radius: 0;
-webkit-box-shadow: none;
-moz-box-shadow: none;
box-shadow: none;
color: inherit;
letter-spacing: normal;
margin: 0;
outline: 0;
padding: 0;
text-align: inherit;
text-shadow: inherit;
-moz-transition: none;
-o-transition: none;
-webkit-transition: none;
transition: none;
vertical-align: baseline;
}
To this:
#et-boc .et-l *:not(.no-divi-styles) .hentry,
#et-boc .et-l *:not(.no-divi-styles) a,
#et-boc .et-l *:not(.no-divi-styles) a:active,
#et-boc .et-l *:not(.no-divi-styles) blockquote,
#et-boc .et-l *:not(.no-divi-styles) div:not(.woocommerce-message,.star-rating),
#et-boc .et-l *:not(.no-divi-styles) em,
#et-boc .et-l *:not(.no-divi-styles) form,
#et-boc .et-l *:not(.no-divi-styles) h1,
#et-boc .et-l *:not(.no-divi-styles) h2,
#et-boc .et-l *:not(.no-divi-styles) h3,
#et-boc .et-l *:not(.no-divi-styles) h4,
#et-boc .et-l *:not(.no-divi-styles) h5,
#et-boc .et-l *:not(.no-divi-styles) h6,
#et-boc .et-l *:not(.no-divi-styles) hr,
#et-boc .et-l *:not(.no-divi-styles) iframe,
#et-boc .et-l *:not(.no-divi-styles) img,
#et-boc .et-l *:not(.no-divi-styles) input,
#et-boc .et-l *:not(.no-divi-styles) label,
#et-boc .et-l *:not(.no-divi-styles) li,
#et-boc .et-l *:not(.no-divi-styles) object,
#et-boc .et-l *:not(.no-divi-styles) ol,
#et-boc .et-l *:not(.no-divi-styles) p,
#et-boc .et-l *:not(.no-divi-styles) span,
#et-boc .et-l *:not(.no-divi-styles) strong,
#et-boc .et-l *:not(.no-divi-styles) textarea,
#et-boc .et-l *:not(.no-divi-styles) ul,
#et-boc .et-l *:not(.no-divi-styles) video {background: 0 0;
border: none;
-moz-border-radius: 0;
-webkit-border-radius: 0;
border-radius: 0;
-webkit-box-shadow: none;
-moz-box-shadow: none;
box-shadow: none;
color: inherit;
letter-spacing: normal;
margin: 0;
outline: 0;
padding: 0;
text-align: inherit;
text-shadow: inherit;
-moz-transition: none;
-o-transition: none;
-webkit-transition: none;
transition: none;
vertical-align: baseline;
}
This solution works fine at Chrome, Firefox, etc!
...except Safari =/
What's the problem with this browser?
A:
In any case, it is better to avoid such complex selectors in your styles and not stumble over browser bugs. I did not manage to find out why Safari behaves this way; as far as the Divi plugin is concerned, such a workaround would only complicate the code even more. I am sure the bug will be fixed in future versions.
However, I found another simple solution: since the Divi plugin is quite aggressive towards any code placed inside it, a simple way out was to put the apps and the affected parts of the code into an iframe, which completely isolates the code you need from third-party interference.
For example you can do so:
interest.pug
extends index
block content
.interest
.container.container--content.interest__container
script.
function interestIframeOnLoad () {
var interestIframe = document.getElementById('interest__iframe')
interestIframe.contentWindow.postMessage({
selector: '#interest',
//...your_app_init_options_here
}, '*')
}
iframe(
src='./interest__iframe.pug'
class='interest__iframe'
id='interest__iframe'
frameborder='0'
onload='javascript: interestIframeOnLoad()'
)
interest__iframe.pug
doctype html
html(lang='en')
head
meta(charset='UTF-8')
meta(name='viewport', content='width=device-width, initial-scale=1.0')
title Interest inner app
script(src='../../assets/js/interest.js')
body
#interest
script.
let appWasLaunched = false;
const iframe = window.top.document.getElementById('interest__iframe');
function launchApp(options) {
window.app.interest(options);
}
window.addEventListener('message', e => {
if (!appWasLaunched) {
appWasLaunched = true;
var data = e.data;
launchApp(data);
}
}, false);
interest.js
import Vue from 'vue'
import App from './App.vue'
export default function interest (options) {
new Vue({
data: options,
render: (h) => h(App)
}).$mount(options.selector);
}
window.app = window.app || {};
window.app.interest = interest;
The code above uses the Parcel bundler, so you can use the .pug extension in the iframe src attribute, and you can even use .scss and ES6 inside that iframe. This basically solved the problem with the Divi plugin in my case. I hope my answer will be useful to someone.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How can i use Prompt to enter number of loops for macros
I am getting this error: SyntaxError: missing ; before statement, line 11 (Error code: -991).
This is my macro:
play = "PROMPT 'How many times to play macros?:'" !VAR1 + "\n";
for (i = 0; i < play; i++) {
start = iimPlay(macros);
}
A:
You have to use the JavaScript prompt dialog like this:
var play = 1; // default number
play = prompt("How many times to play macros?:", play);
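Putting it together, a sketch of the full flow (macros is assumed to hold your macro code, as in the question):
var play = 1; // default number of runs
play = parseInt(prompt("How many times to play macros?:", play), 10);
for (var i = 0; i < play; i++) {
    var start = iimPlay(macros);
}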
|
{
"pile_set_name": "StackExchange"
}
|
Q:
C double value not stored using scanf
So what is happening is I ask the user for input, then store that into a variable using scanf.
But when I try to retrieve the value, it's just 0.000000. I'm sure I'm just making a stupid noob mistake, but any help you can provide is appreciated.
printf("Enter the radius of your circle/sphere: ");
scanf("%f", &radius);
printf("\n%f", radius);
Ex:
Enter the radius of your circle/sphere: 10
0.0000000
Thanks for your time!
A:
As answered by @Chris McGrath, you must use the %lf format specifier for doubles. However, this doesn't explain why the %f format works just fine for printf.
%lf stands for long float, i.e. double. When values of type float are passed to variadic functions (those accepting a variable number of arguments, such as printf and scanf), they are implicitly promoted to double:
float val;
printf("%f", val); /* printf receives val cast to double, the same
as printf("%f", (double) val) */
printf("%lf", val); /* printf also receives val cast to double, the
same as printf("%lf", (double) val) */
Since both printf("%lf", ...) and printf("%f", ...) end up receiving a double, the two are completely equivalent.
On the other hand, all arguments to scanf are pointers. scanf("%f", ...) expects to receive a float *, and scanf("%lf", ...) a double *:
float floatval;
double dblval;
scanf("%f", &floatval); /* stores a float value to address received */
scanf("%lf", &dblval); /* stores a double value to address received */
The two pointers point to different types, so one cannot be promoted to the other. If they received the same treatment by scanf, it would end up storing the same value into addresses allocated for types of different size, which cannot work.
This is why scanf strictly requires the use of the %lf specifier when used with double values.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Getting started with calling APIs via XMLHttpRequest
I've just started to learn how to use APIs, but I'm having a little bit of trouble understanding exactly how to utilize them.
I was able to write the following code thanks to some online documentation, but I have some questions about how to add to it:
var xhr = new XMLHttpRequest();
xhr.open("GET", "http://api.openweathermap.org/data/2.5/weather?q=London&mode=xml", false); xhr.send();
console.log(xhr);
Now, when I run this code in my browser and open the console, I get a dropdown with a whole bunch of stuff under it. First of all, how can I get the code to display JUST the response? I want the console to show just the XML when I run my code. Second, how do I parse the XML? Is there any way to get a value from an XML element and assign it to a JavaScript variable?
Thanks!
A:
What you are printing is the XMLHttpRequest object itself, while what you actually want is the response from the request you've made. To get the response, you use
xhr.responseXML;
This is an object of type Document (see the MDN docs).
To display the response, you can just
console.log(xhr.responseXML);
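If you do want to stay with XML, you can also query the returned Document with standard DOM methods. A minimal sketch (the temperature element and its value attribute are assumptions based on OpenWeatherMap's XML format; inspect the real response first):
var doc = xhr.responseXML;
var tempNode = doc.getElementsByTagName("temperature")[0];
if (tempNode) {
    var temperature = tempNode.getAttribute("value"); // now a plain JavaScript variable
    console.log("Temperature: " + temperature);
}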
But to actually work with the response, you'll probably want to do as Josh suggested and request a JSON-formatted response from the API (instead of an XML-formatted one).
You will probably want to do that asynchronously, too (see this). The page has a more robust example. Let's adapt it to show London's temperature:
var xhr = new XMLHttpRequest();
xhr.open("GET", "http://api.openweathermap.org/data/2.5/weather?q=London&mode=json", true);
xhr.onload = function (e) {
if (xhr.readyState === 4) {
if (xhr.status === 200) {
console.log(xhr.responseText);
var response = JSON.parse(xhr.responseText);
console.log("Temperature(K): " + response.main.temp);
} else {
console.error(xhr.statusText);
}
}
};
xhr.onerror = function (e) {
console.error(xhr.statusText);
};
xhr.send(null);
Being async means that the xhr.onload handler will be executed as soon as the response is received. The rest of your code runs without blocking, and only when the response arrives is the onload handler executed.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Thick orange ProgressBar instead of thin blue one
I have a ProgressBar widget in my app, and no matter what I do, it always shows as a fat orange one (2.x style) instead of a thin blue one (4.x style).
Project build target is set to Google API 4.0.3 (API Level 15)
SDK version requirements are set as follows:
<uses-sdk android:minSdkVersion="14" android:targetSdkVersion="14"/>
Application theme is set to Theme.Holo
The app is run on several devices and emulators (all running KitKat or other ICS-based versions)
The progress bar itself is defined as follows:
<ProgressBar
android:id="@+id/progressBar1"
style="@android:style/Widget.ProgressBar.Horizontal"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_alignParentLeft="true"
android:layout_alignParentTop="true"
android:layout_marginRight="16dp"
android:layout_marginLeft="16dp"
android:theme="@android:style/Theme.Holo" />
(explicitly set theme is the latest attempt to turn it to the new style).
Is there anything I am missing?
A:
What you need is style="@android:style/Widget.Holo.ProgressBar.Horizontal", not style="@android:style/Widget.ProgressBar.Horizontal".
And, android:theme should be set in <Activity> tag, not in a specific view.
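For example, a corrected declaration might look like this (a sketch based on the layout in the question; the other attributes stay as they were):
<ProgressBar
    android:id="@+id/progressBar1"
    style="@android:style/Widget.Holo.ProgressBar.Horizontal"
    android:layout_width="match_parent"
    android:layout_height="wrap_content" />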
Hope it helps
A:
For a quick solution I recommend the small Android library SmoothProgressBar, which gives you a smooth and customizable horizontal indeterminate ProgressBar.
Credits: Johnkil
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Download .pdf file to R, getting error message
I'm having trouble downloading a .pdf from the internet into RStudio. I would like to analyse the .pdf using the pdftools package. I have a directory called files that I want the .pdf to go to. I'm using this code.
download.file('https://www2.gov.scot/Resource/Doc/352649/0118638.pdf', 'files')
I get this error:
Warning messages:
1: In download.file("https://www2.gov.scot/Resource/Doc/352649/0118638.pdf", :
URL https://www2.gov.scot/Resource/Doc/352649/0118638.pdf: cannot open destfile 'files', reason 'Is a directory'
2: In download.file("https://www2.gov.scot/Resource/Doc/352649/0118638.pdf", :
download had nonzero exit status
Is there way to get around this message?
A:
The destfile has to be the filename (not the directory name) for the downloaded file.
For example, if we were to download the file above and save it as "Commission.pdf" in the files folder we would do the following:
download.file(url='https://www2.gov.scot/Resource/Doc/352649/0118638.pdf',
destfile="files/Commission.pdf")
You're passing the directory name 'files' as destfile, which prompts R to warn that the argument you specified is a directory.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Use a normal link to submit a form
I want to submit a form, but I am not going the basic way of using an input button with submit type; I want to use a link instead.
The image below shows why. I am using image links to save/submit the form. Because I have standard CSS markup for image links, I don't want to use input submit buttons.
I tried to apply onClick="document.formName.submit()" to the a element but I would prefer a html method.
Any ideas?
A:
Two ways. Either create a button and style it with CSS so it looks like a link, or create a link and use onclick="this.closest('form').submit();return false;".
A:
You can't really do this without some form of scripting to the best of my knowledge.
<form id="my_form">
<!-- Your Form -->
<a href="javascript:{}" onclick="document.getElementById('my_form').submit(); return false;">submit</a>
</form>
Example from Here.
A:
Just styling an input type="submit" like this worked for me:
.link-button {
background: none;
border: none;
color: #0066ff;
text-decoration: underline;
cursor: pointer;
}
<input type="submit" class="link-button" />
Tested in Chrome, IE 7-9, Firefox
|
{
"pile_set_name": "StackExchange"
}
|
Q:
A doubt about complex variable.
Find the radius of convergence of the power series:$$\sum_{n\geq0}\frac{1}{(n+3)^2}z^n$$
Describe the convergence domain $\Omega$ and study the convergence on the boundary of $\Omega$.
A:
Since
$$
\lim_{n\to \infty}\Big|\frac{z^{n+1}}{(n+4)^2}\cdot\frac{(n+3)^2}{z^n}\Big|=\lim_{n\to \infty}\Big|\frac{n+3}{n+4}\Big|^2|z|=|z|,
$$
therefore, by the ratio test, the series converges for $|z|<1$ and diverges for $|z|>1$, so the radius of convergence is $1$. For $|z|=1$ the series is absolutely convergent (and therefore convergent) because
$$
\sum_{n=0}^\infty\frac{|z|^n}{(n+3)^2}=\sum_{n=3}^\infty\frac{1}{n^2}=\frac{\pi^2}{6}-1-\frac14.
$$
Hence the series converges in $\Omega=\{z \in \mathbb{C}:\ |z|\le 1\}$.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
PHP how to create JSON with localization arguments for APNS
I want to send JSON in an APNS with the following:
{
"aps" : {
"alert" : {
"loc-key" : "GAME_PLAY_REQUEST_FORMAT",
"loc-args" : [ "Jenna", "Frank"]
},
"sound" : "default"
}
}
Can anyone explain how I can create this in PHP?
I have the following for JSON without the key/args:
$body['aps'] = array(
'alert ' => 'This is my messsage',
'sound' => 'default'
);
$payload = json_encode($body);
I have tried to replace the 'This is my message' with an array for loc-key and loc-args, but that does not work. Also, just putting in the data as a string does not work.
Hope someone can help me. I have tried multiple options and variations but nothing works.
A:
$body = array(
"aps" => array(
"alert" => array(
"loc-key" => "GAME_PLAY_REQUEST_FORMAT",
"loc-args" => array( "Jenna", "Frank" )
),
"sound" => "default",
),
);
echo json_encode($body);
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Swift Unit Test Error: symbol(s) not found for architecture x86_64 (Swift Package Manager)
I am having trouble getting unit tests to run in Swift projects created with the Swift Package Manager (that is, any unit tests created by the Package Manager... those I create from within Xcode work fine from within Xcode). I am getting the same error on all projects generated from the Package Manager. To keep it simple, I tried on a very basic test project with as little modification from the default setup as possible, but still getting the error. Here are the steps to reproduce:
Create a new project with swift package init --type executable (module name is Hello)
Add the Xcode Project: swift package generate-xcodeproj
In Xcode build settings, ensure Enable Testability is Yes
In main.swift enter this simple test code:
import Foundation
let message = "Hello, world!"
print(message)
In HelloTests.swift:
import XCTest
@testable import Hello
class HelloTests: XCTestCase {
func testExample() {
XCTAssert(message == "Hello, world!")
}
static var allTests = [
("testExample", testExample),
]
}
Package.swift and XCTestManifests.swift left as-is.
It builds and runs fine with swift build and swift run Hello (also from within Xcode).
However, when running swift test or running any test in Xcode, the build fails with the following error message:
Undefined symbols for architecture x86_64:
"Hello.message.unsafeMutableAddressor : Swift.String", referenced from:
implicit closure #1 : @autoclosure () throws -> Swift.Bool in HelloTests.HelloTests.testExample() -> () in HelloTests.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Somehow, it seems like it's failing to link the main module, so the symbols are not recognized. However, I can't tell what's wrong or how to fix it.
I downloaded one of the sample projects from GitHub, and generated the Xcode project. The tests for this project run perfectly in Xcode and the terminal. I've carefully compared the sample project to mine and can't tell what's different. Almost all setup code (Package.swift, file structure, etc.) and project settings are nearly identical. The only meaningful difference I can tell is that the sample project is a library/framework and mine is an executable (but it seems like linking should work the same for both types). Otherwise, I can't tell what they are doing right and I am doing wrong.
A:
I figured it out (thanks to Cristik's help). Executable modules are not testable (at least for now), so the solution was to move all definitions to a library module and leave just the main.swift file in the executable module. That way, all unit tests run against the library as a dependency instead of the executable. The Package.swift now looks like this:
let package = Package(
name: "HighestNumberPairing",
products: [
.executable(name: "HighestNumberPairing", targets: ["HighestNumberPairing"]),
.library(name: "NumberPairing", targets: ["NumberPairing"]),
],
dependencies: [],
targets: [
.target(
name: "HighestNumberPairing",
dependencies: ["NumberPairing"]),
.target(
name: "NumberPairing",
dependencies: []),
.testTarget(
name: "NumberPairingTests",
dependencies: ["NumberPairing"]),
]
)
The full program is here on Github.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Pytorch sending inputs/targets to device
Currently using the pycocotools 2.0 library.
My train_loader is:
train_loader, test_loader = get_train_test_loader('dataset', batch_size=16, num_workers=4)
However the training lines of code:
for i, data in enumerate(train_loader, 0):
images, targets = data
images = images.to(device)
targets = targets.to(device)
result in an error. The variables data, images, and targets are all of class tuple:
Traceback (most recent call last):
File "train.py", line 40, in <module>
images = images.to(device)
AttributeError: 'tuple' object has no attribute 'to'
How can I properly send these to the CUDA device?
Edit:
I can send images[0].to(device) no problem. How can I send the rest?
A:
You should open the for loop with as many items as your dataset returns at each iteration.
Here is an example to illustrate my point:
Consider the following dataset:
class CustomDataset:
def __getitem__(self, index):
...
return a, b, c
Notice that it returns 3 items at each iteration.
Now let us make a dataloader out of this:
from torch.utils.data import DataLoader
train_dataset = CustomDataset()
train_loader = DataLoader(train_dataset, batch_size=50, shuffle=True)
Now when we use train_loader we should open the for loop with 3 items:
for i, (a_tensor, b_tensor, c_tensor) in enumerate(train_loader):
...
Inside the for loop's context, a_tensor, b_tensor, c_tensor will be tensors with 1st dimension 50 (batch_size).
So, based on the example that you have given, it seems that whatever dataset class your get_train_test_loader function is using has some problem. It is always better to separately instantiate the dataset and then create the dataloader, rather than use a blanket function like the one you have.
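As a side note, detection-style datasets (like those built on pycocotools) often return images and targets that cannot be collated into single tensors, in which case each element has to be moved individually. A minimal sketch (the dict structure of the targets is an assumption; check what your dataset actually returns):
images, targets = data
images = [img.to(device) for img in images]
targets = [{k: v.to(device) for k, v in t.items()} for t in targets]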
|
{
"pile_set_name": "StackExchange"
}
|
Q:
CSS Style switcher in JavaScript on an Asp .Net server
I am trying to create a mobile website and can do it with just CSS, but the server master pages MUST be in their current locations. I am planning to use a JavaScript "style switcher" to change the stylesheets, but most mobile browsers don't have JavaScript support. So is there a way, using JavaScript (or any other "language" that is either supported by mobile phones or can be run server-side), to run server-side code that can also stop some of the page elements from loading (for mobile bandwidth issues) and change a few lines of the master code?
I was thinking of using this (with some edits):
How to Use JavaScript to Change a Cascading Style Sheet (CSS) Dynamically
I cannot use PHP as our hosting provider will not install it.
I do not have Asp .Net access (as a user) but do have FTP access.
A:
var isMobile = function() {
var w = window.innerWidth || document.body.clientWidth;
var o = 900;
if(w < o) {
return true;
}
else {
return false;
}
};
and an init func
var init = function(){
if(!isMobile()){
// all your desktop stuff goes here
var desktopcss = document.createElement('link');
desktopcss.rel = 'stylesheet';desktopcss.type = 'text/css';
desktopcss.href='/desktop.css';
document.getElementsByTagName('head')[0].appendChild(desktopcss);
}
else{
// all your mobile stuff goes here
var mobilecss= document.createElement('link');
mobilecss.rel = 'stylesheet';mobilecss.type = 'text/css';
mobilecss.href='/mobile.css';
document.getElementsByTagName('head')[0].appendChild(mobilecss);
}
};
and keep in mind that if you set something to
display:none;
via CSS, it will not be displayed; note, however, that some resources inside a hidden element (such as <img> tags) may still be downloaded by the browser, so don't rely on display:none alone to save bandwidth!
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Exception in thread "main" org.pdfclown.util.parsers.ParseException: 'name' table does NOT exist
I am trying to run the Java code written by Stefano Chizzolini (awesome guy: creator of PDF Clown) to parse a PDF using the PDF Clown library. I am getting this error and I don't know what I can do to fix it.
Exception in thread "main" org.pdfclown.util.parsers.ParseException: 'name' table does NOT exist.
at org.pdfclown.documents.contents.fonts.OpenFontParser.getName(OpenFontParser.java:570)
at org.pdfclown.documents.contents.fonts.OpenFontParser.load(OpenFontParser.java:221)
at org.pdfclown.documents.contents.fonts.OpenFontParser.<init>(OpenFontParser.java:205)
at org.pdfclown.documents.contents.fonts.TrueTypeFont.loadEncoding(TrueTypeFont.java:91)
at org.pdfclown.documents.contents.fonts.SimpleFont.onLoad(SimpleFont.java:118)
at org.pdfclown.documents.contents.fonts.Font.load(Font.java:738)
at org.pdfclown.documents.contents.fonts.Font.<init>(Font.java:351)
at org.pdfclown.documents.contents.fonts.SimpleFont.<init>(SimpleFont.java:62)
at org.pdfclown.documents.contents.fonts.TrueTypeFont.<init>(TrueTypeFont.java:68)
at org.pdfclown.documents.contents.fonts.Font.wrap(Font.java:253)
at org.pdfclown.documents.contents.FontResources.wrap(FontResources.java:72)
at org.pdfclown.documents.contents.FontResources.wrap(FontResources.java:1)
at org.pdfclown.documents.contents.ResourceItems.get(ResourceItems.java:119)
at org.pdfclown.documents.contents.objects.SetFont.getResource(SetFont.java:119)
at org.pdfclown.documents.contents.objects.SetFont.getFont(SetFont.java:83)
at org.pdfclown.documents.contents.objects.SetFont.scan(SetFont.java:97)
at org.pdfclown.documents.contents.ContentScanner.moveNext(ContentScanner.java:1330)
at org.pdfclown.tools.TextExtractor.extract(TextExtractor.java:626)
at org.pdfclown.tools.TextExtractor.extract(TextExtractor.java:296)
at PDFReader.FullExtract.run(FullExtract.java:71)
at PDFReader.FullExtract.main(FullExtract.java:142)
I know the class OpenFontParser in the library package is throwing this error. Is there anything I can do to fix this?
This code works for most PDFs. I have a PDF that it does not parse. I am guessing it is because of this symbol below in the PDF.
public class PDFReader extends Sample {
@Override
public void run()
{
String filePath = new String("C:\\Users\\XYZ\\Desktop\\SomeSamplePDF.pdf");
// 1. Open the PDF file!
File file;
try
{file = new File(filePath);}
catch(Exception e)
{throw new RuntimeException(filePath + " file access error.",e);}
// 2. Get the PDF document!
Document document = file.getDocument();
// 3. Extracting text from the document pages...
for(Page page : document.getPages())
{
extract(new ContentScanner(page)); // Wraps the page contents into a scanner.
}
close(file);
}
private void close(File file) {
// TODO Auto-generated method stub
}
/**
Scans a content level looking for text.
*/
/*
NOTE: Page contents are represented by a sequence of content objects,
possibly nested into multiple levels.
*/
private void extract(
ContentScanner level
)
{
if(level == null)
return;
while(level.moveNext())
{
ContentObject content = level.getCurrent();
if(content instanceof ShowText)
{
Font font = level.getState().getFont();
// Extract the current text chunk, decoding it!
System.out.println(font.decode(((ShowText)content).getText()));
}
else if(content instanceof Text
|| content instanceof ContainerObject)
{
// Scan the inner level!
extract(level.getChildLevel());
}
}
}
private boolean prompt(Page page)
{
int pageIndex = page.getIndex();
if(pageIndex > 0)
{
Map<String,String> options = new HashMap<String,String>();
options.put("", "Scan next page");
options.put("Q", "End scanning");
if(!promptChoice(options).equals(""))
return false;
}
System.out.println("\nScanning page " + (pageIndex+1) + "...\n");
return true;
}
public static void main(String args[])
{
new PDFReader().run();
}
}
A:
The issue
As the stacktrace indicates, the problem is that some TrueType font embedded in the PDF does not contain a name table even though it is a required table:
org.pdfclown.util.parsers.ParseException: 'name' table does NOT exist.
...
at org.pdfclown.documents.contents.fonts.TrueTypeFont.loadEncoding(TrueTypeFont.java:91)
Thus, strictly speaking, that embedded font is invalid and, consequently, so is the embedding PDF. PDFClown then runs into an exception due to this validity issue.
Some backgrounds
A TrueType font file consists of a sequence of concatenated tables. ...
The first of the tables is the font directory, a special table that facilitates access to the other tables in the font. The directory is followed by a sequence of tables containing the font data. These tables can appear in any order. Certain tables are required for all fonts. Others are optional depending upon the functionality expected of a particular font.
Tables that are required must appear in any valid TrueType font file. The required tables and their tag names are shown in Table 2.
Table 2: The required tables
Tag Table
'cmap' character to glyph mapping
'glyf' glyph data
'head' font header
'hhea' horizontal header
'hmtx' horizontal metrics
'loca' index to location
'maxp' maximum profile
'name' naming
'post' PostScript
(Section TrueType Font files: an overview in chapter 6 The TrueType Font File in the TrueType Reference Manual)
On the other hand, though, there are a number of PDF generators cutting down embedded TrueType fonts to the bare essentials required by PDF viewers (foremost Adobe Reader), and the name table does not seem to be strictly required.
Furthermore, the name table is only used for one purpose in PDFClown: to determine the name of the font in question, even though the font name could also be determined from the BaseFont entry of the associated font dictionary. Actually, the latter entry is required by the PDF specification, while the PostScript name-of-the-font entry in the name table is optional according to the TTF manual.
Thus, using the BaseFont entry in the PDF font dictionary would be a better alternative to this name table access.
Fixing it
Is there anything I can do to fix this?
You can either fix the not entirely valid PDF by adding a name table to the embedded TTF in question, or you can patch PDFClown to ignore the missing table: in the class org.pdfclown.documents.contents.fonts.OpenFontParser edit the method getName:
private String getName(
int id
) throws EOFException, UnsupportedEncodingException
{
// Naming Table ('name' table).
Integer tableOffset = tableOffsets.get("name");
if(tableOffset == null)
throw new ParseException("'name' table does NOT exist.");
Replace that throw new ParseException("'name' table does NOT exist.") by return null.
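After that change, the patched check would read as follows (a sketch showing only the affected lines):
Integer tableOffset = tableOffsets.get("name");
if(tableOffset == null)
    return null; // was: throw new ParseException("'name' table does NOT exist.");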
PS
While the problem could be analyzed using merely the information given by the OP, the sample file provided by @akarshad in his now deleted answer gave more motivation to start the analysis at all.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
stop blur event from triggering as a result of keypress
When the event 'keydown' is detected, the event 'blur' is also automatically triggered. How can I prevent the 'blur' event on the element if 'keydown' event is triggered?
Here is the jsFiddle and the code:
HTML:
<div id="delegatee">
<span id="ctrl" contenteditable="true"></span>
</div>
CSS:
#ctrl {
width:100px;
height:50x;
border:1px solid black;
display:inline-block;
}
JavaScript:
function saveOrDelete() {
$("#ctrl").attr('contentEditable', false);
alert('Triggered'); // alerts 2 times
}
$("#delegatee").on('keydown blur', '#ctrl', saveOrDelete)
Please note, that this solution doesn't work:
function saveOrDelete() {
$("#delegatee").off('keydown blur', '#ctrl', saveOrDelete)
$("#ctrl").attr('contentEditable', false);
alert('Trigger');
$("#delegatee").on('keydown blur', '#ctrl', saveOrDelete)
}
A:
This line is causing this behavior:
$("#ctrl").attr('contentEditable', false);
As soon as the span is no longer contentEditable it will lose focus and hence blur is raised.
You will have to re-think your logic.
Edit (After your comment):
In order to trap the blur which immediately follows a keydown because of the above reason, you may change your logic to first check if the span is editable and then take action based on that.
e.g.
var mode = $("#ctrl").prop('contentEditable');
if (mode == 'true') {
$("#ctrl").prop('contentEditable', false);
// your other processes here
} else {
return false;
}
Check the update on your fiddle: http://jsfiddle.net/CxFaq/1/
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Run deeplearning4j without GPU?
The documentation says the library runs on GPUs. If my powerful laptop doesn't have a GPU, can I still run Deeplearning4J?
A:
We don't imply anywhere in our docs that we only run on gpus. If for some reason you found that to be the case, could you notify us in a github issue?
https://github.com/deeplearning4j/deeplearning4j/issues
You only need an nd4j backend:
http://nd4j.org/backend.html
DL4J, unlike a lot of other libraries, decouples the hardware implementation from the algorithms. The ticket here is our tensor library ND4J: it handles all the computation.
Think of it as "tensorflow/theano" if you will.
We deploy all of our native binaries to maven central.
Usually all you need is:
http://search.maven.org/#search%7Cga%7C1%7Cnd4j-native-platform
for cpu.
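For example, the Maven dependency for the CPU backend looks roughly like this (the version number is illustrative; check Maven Central for the current release):
<dependency>
    <groupId>org.nd4j</groupId>
    <artifactId>nd4j-native-platform</artifactId>
    <version>0.9.1</version>
</dependency>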
GPU, we support the 2 latest cuda versions:
http://search.maven.org/#search%7Cga%7C1%7Cnd4j-cuda-7.5-platform
The exact same code also works on Android and Raspberry Pis:
http://deeplearning4j.org/android
Same idea, you don't compile from source, just specify the right backend.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
If $K:F$ is a finite extension and $F\subseteq E \subseteq K$, is $K:E$ also finite?
Suppose that $F\subseteq E\subseteq K$ are field extensions and $K:F$ is finite. Then is $K:E$ necessarily finite?
Since $K:F$ is finite, $K$ has a basis $\{k_1,...,k_n\}$ over $F$. Indeed this set still spans $K$ over $E$, but now it could be linearly dependent; is there a way to be sure the vectors are independent?
A:
If we have a basis $\{\alpha_i\}_{i=1}^n \subset K$ for the extension $K/F$, then of course we can also write every element of $K$ in the form $\displaystyle \sum_k c_k \alpha_k$ where $c_k \in E$, since $E \supset F$, meaning this is also a spanning set for $K/E$. As you noted, this set $\{\alpha_i\}_{i=1}^n$ may no longer be linearly independent when viewed as vectors over $E$. However, you can prove that some subset of $\{\alpha_i\}_{i=1}^n$ is both linearly independent over $E$ and spans $K$. The cardinality of this subset is finite and equal to the degree of $K/E$.
These considerations lay the groundwork for proving the multiplicativity formula for degrees. If $K/F$ is a finite extension, and if we have an intermediate field $E$ between $F$ and $K$, then $[K:F] = [K:E] \cdot [E:F]$.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Firefox makes links non-clickable
I am using some online reporting pages from my company's web site. After logging in to the related pages, I cannot click on the links that produce the reports. The links appear as plain text and are non-clickable. When I open the same pages in IE8, there is no problem: the links work and reports are generated. I've looked at the security settings in the options menu, but found nothing. How can I make Firefox trust this site and work properly?
Note: The web pages are in asp format, and the links are supposed to open the reports in Crystal Report Viewer. There are also some Flash graphs in some pages, and they don't work either.
Source code of one frame:
<SCRIPT LANGUAGE="JavaScript" TYPE="text/javascript">
function go_there(url)
{
window.open(url + '&prompt0=1&prompt1=' + [..]);
}
</SCRIPT>
[..]
<td style="cursor:hand; [..]"
onclick="go_there('/webreports/[..]/dpp_zmo_bayi_dd.rpt?apsuser=[..]');">
<img [..] src="[..]">&nbsp;&nbsp;Envanter inceleme linki (zmo_bayi_dd)
</td>
After logging into the site, Error Console displays the following errors:
After opening the problematic page, the following errors are displayed:
Finally, when I click on the links (although they don't look like links), these error messages are created:
A:
style="cursor:hand; [..]"
The standards for CSS cursor do not define "hand", and hence that value is only understood by some browsers (like Internet Explorer, and in Safari if no strict DOCTYPE is set). Firefox doesn't support it.
So: bad design by the creators of the site. However, the CSS only defines how things are shown; clicking in your source code sample should still work, even though the mouse pointer might not indicate something is clickable!
function go_there(url)
{
window.open(url
+ '&prompt0=1&prompt1='
+ parent.detail.ust.form1.donem.value, [..]
);
}
Error: parent.detail.ust.form1 is undefined
Error: parent.ust.form1 is undefined
Too bad, this is caused by the way the web site tries to get information from the other frames. Maybe the things named "detail" and "ust" just don't exist and Internet Explorer ignores that. Or maybe this is just non-standard, IE-only. Bad implementation.
(I'm sure someone could create a Greasemonkey script to replace the CSS hand on the fly, as a workaround. Some script might also fix the bad JavaScript, but as IE works I guess that's a bit too much.)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Which is a good practice & choice :Sending a Hash Map object or a JSON from java servlet to jsp?
I am developing a Java web application with MongoDB. In this app, I'm querying a MongoDB database from a Java servlet. I am just confused whether to convert the query result to JSON or a HashMap and send it to the JSP, or whether there is any other way to achieve this; in the JSP file I can get the HashMap data and send it to the JavaScript in the same file. (I need to use the data in JavaScript.)
My question is:
Will it be good practice to send a HashMap with a large amount of data (somewhere in the range of 300,000 to 700,000 records for now) from the servlet to the JSP?
Or should I create the JSON in the servlet with the same amount of data, send it to the JSP file, then parse the JSON in the JSP and somehow access the parsed JSON in the JavaScript code?
Another way I could think of is to send the JSON directly from the servlet to the JavaScript and parse the JSON in JavaScript itself.
Which one will be a good solution in terms of Security, Performance?
I am new to this whole web application thing but I really need the help to clear the doubts in my mind. Also if you guys could direct me to some links/tips for optimizing the code for my web application that would be a huge help.
A:
Definitely keep using the HashMap.
Serializing from HashMap to JSON and back is very expensive, and should only be used when you are transferring data "across-the-wire" to the browser or the database.
When passing a HashMap, you are sharing a simple pointer that references the HashMap from your servlet to the JSP page.
There isn't any difference between these two options in terms of security, but in terms of performance the difference is a few orders of magnitude.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
array of integers vs. pointer to integer in c++
If I have int x[10] and int *y, how can I tell the difference between the two?
I have two ideas:
sizeof() is different.
&x has a different type --- int (*p)[10] = &x works but not int **q = &x.
Any others?
In some template library code, I need to determine whether a pointer is a "real" pointer or has decayed from an array. I can't look at source code, as the library user does not exist until I write the library. ... I can work around this by rewriting the code, so now this is only a theoretical exercise.
A:
There is no general method - you either already know the type because you have just declared the object, or the type will have decayed to a pointer and been lost. Please explain what problem you are trying to solve by differentiating between them.
A:
The sizeof idea is not very good, because if the array happens to have a single element, and the element type happens to be the same size as a pointer, then the array will be the same size as a pointer.
The type matching approach looks more promising, and could presumably be used to pick a template specialization (if that's what you're up to).
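For instance, a minimal sketch of such a trait (the demo uses C++11 decltype/static_assert for brevity; the trait itself is plain C++):
#include <cstddef>

// true only for genuine array types T[N], false for pointers
template <class T>
struct is_bounded_array { static const bool value = false; };

template <class T, std::size_t N>
struct is_bounded_array<T[N]> { static const bool value = true; };

int main() {
    int x[10];
    int* y = x;
    static_assert(is_bounded_array<decltype(x)>::value, "x is an array");
    static_assert(!is_bounded_array<decltype(y)>::value, "y is a pointer");
}
Note that this only works while the static type is still known; once the array has decayed to a pointer, the information is gone, as the other answer points out.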
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Delay before first touch position changes on iPhone
I have a problem using Input.touches[0].position on Unity3D. Sorry if the problem is unclear, I find it somewhat hard to describe.
When I start touching the screen and very slowly start to move my finger, the position returned by Input.touches[0].position doesn't change immediately; I have to move a little, then it jumps to the current touch position.
This happens on iPhone, I don't have any other touch devices.
Here is some testable code illustrating it:
using System;
using UnityEngine;
// attach this to a game object on the scene
public class TestTouch : MonoBehaviour
{
private void Update()
{
if (Input.touchCount == 1)
{
// you have to add a GUIText component to the gameobject
this.guiText.text = "pos: " + Input.touches[0].position;
}
}
}
A:
Finally, I just ignore the first delta (that is, the first change in position); this solves my problem pretty well.
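A minimal sketch of that idea (class name is illustrative):
using UnityEngine;

public class TouchFilter : MonoBehaviour
{
    private bool firstMoveSeen;

    private void Update()
    {
        if (Input.touchCount != 1) return;
        Touch touch = Input.touches[0];

        if (touch.phase == TouchPhase.Began)
        {
            firstMoveSeen = false;
        }
        else if (touch.phase == TouchPhase.Moved)
        {
            if (!firstMoveSeen)
            {
                firstMoveSeen = true; // drop the first reported movement
                return;
            }
            // use touch.position / touch.deltaPosition from here on
        }
    }
}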
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to use Fail2Ban to protect ProxyPass(ed) applications
So I recently configured an application that uses WebSockets and I am using Apache proxy pass to pass on the application via domain.com/application
The problem is that the application is very bare-bones. It does come with a login screen, but I have gone a step further and implemented an htpasswd login via Apache that is located in the ProxyPass config.
(I feel like I am answering my own question as I type, but I will re-word/re-ask.)
<Location /application>
AllowOverride AuthConfig
AuthUserFile /home/[USERNAME]/.htpasswd
AuthName "Authorization Required"
AuthType Basic
require user [USERNAME]
ProxyPass wss://192.168.1.50:443/application
ProxyPassReverse wss://192.168.1.50:443/application
</Location>
Can I configure Fail2Ban to monitor the actual application login screen from the ProxyPass server? [The application cannot install Fail2Ban.]
Workaround: by implementing the htpasswd authorization page at the proxy server, will fail2ban block based upon failed attempts to log in via the initial Apache authentication prompt?
A:
You are right that in a pure ProxyPass config fail2ban won't work here, for several reasons:
fail2ban works by looking at logs and blocking failed authentication attempts; you have none to look at, since the backend is the one doing the authentication and the authentication logs live there.
fail2ban has a series of filter definitions for known protocols; your case may not be covered, and you would have to develop your own filter when that happens.
fail2ban needs to be running on the same machine where the authentication is done.
First and foremost, when you select tools, I recommend investigating a little whether they apply to your particular situation; understanding that is more important than using the tools.
It is indeed possible to put fail2ban to work blocking abused web basic auths, as this link shows. I just think your ProxyPass takes precedence and the auth is never done. You may need a landing page for the authentication, and do the ProxyPass not at the root of that vhost.
https://ileriseviye.wordpress.com/2010/08/21/fail2ban-defending-apache-against-brute-force-attacks-to-digest-authentication-protected-pages/
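For reference, enabling the stock apache-auth jail is usually as simple as a jail.local entry like this (a sketch; the filter ships with fail2ban, but the log path depends on your distribution):
[apache-auth]
enabled  = true
port     = http,https
logpath  = /var/log/apache2/error.log
maxretry = 5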
I usually use mod_evasive at least as an additional layer of security in Apache for controlling DoS/abusive requests. Have a look at my reply on the thread (it is not the one marked as correct) Editing my /etc/hosts.deny. Take note that in Apache you can rate limit the requests, and that helps a lot with people abusing your service, and not only when asking for passwords.
Please do consider using captchas as a layer-7 measure to rate limit attempts. Coordinated botnets can render your fail2ban rules ineffective by using several different IPs to bypass your measures.
Have a look at this:
"Google reCAPTCHA - Protect your website from spam and abuse while letting real people pass through with ease"
https://www.google.com/recaptcha/intro/index.html
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to specify a local working directory for threading.Thread and multiprocessing.Pool?
Just like subprocess.Popen(..., cwd=...), which can specify its own local working directory.
I don't want to specify the absolute path every time, because simple is better than complex.
os.chdir() doesn't work at all, since it changes process-global state (right?). Whenever there are multiple threads, os.chdir() will affect all of them.
Any suggestions? Thank you!
I just tried jorgenkg's code and modified it a little bit, and you can see why I had to ask this question. Here is the code.
import os
import threading
import time
class child(threading.Thread):
def run(self ):
for i in range(1,3):
print "I am " + str(threading.current_thread())[7:15] + " in " + os.getcwd() + '\r\n'
time.sleep(2)
child().start() # prints "/username/path"
os.chdir('C://') # The process is changing directory
child().start() # prints "/"
Here is the result:
I am Thread-1 in C:\Python27\Projects
I am Thread-2 in C:\
I am Thread-1 in C:\
I am Thread-2 in C:\
You can see that Thread-2 is no longer working in its original working directory after os.chdir() is invoked.
A:
As you state, the current directory path belongs to the process that owns the threads.
Before you create your threads
You will have to set the path before initializing the child threads, since they all share os.getcwd().
A simple code example:
import os
import threading
import time
class child(threading.Thread):
def __init__(self, initpath=None):
# initpath could be a string fed to many initializations
time.sleep(0.05) # print() doesn't seem thread safe, this delays it.
super(child, self).__init__()
if initpath is not None:
os.chdir(initpath)
def run(self):
print(os.getcwd())
time.sleep(2)
print("slept "+os.getcwd()) # These will all show the last path.
child().start() # Prints your initial path.
# Both print "/home/username/path/somefolder/".
child(initpath="/home/username/path/somefolder/").start()
child().start()
os.chdir("/") # The process is changing directory
child().start() # prints "/"
As above, once the directory is changed, all threads change with it. Hence, you cannot use os.chdir() concurrently across multiple threads.
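A common alternative is to skip chdir entirely and give each thread its own base path, building absolute paths with os.path.join. A minimal sketch (paths and file name are illustrative):
import os
import threading

class Worker(threading.Thread):
    def __init__(self, basepath):
        super(Worker, self).__init__()
        self.basepath = basepath

    def run(self):
        # Equivalent of open('data.txt') relative to this thread's own directory.
        with open(os.path.join(self.basepath, 'data.txt')) as f:
            print(f.read())

Worker('/home/username/path/a').start()
Worker('/home/username/path/b').start()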
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Module Build Failed - Webpack, React, Babel
I was following a video tutorial from Pluralsight. The course name is "Building a Real-time App with React, Flux, Webpack, and Firebase".
Please see the code below and the attached screenshot of the issue I am having. Webpack is failing whenever I try to rebuild the files. Can someone please advise what the issue could be? I'm currently using all the latest libraries.
/*webpack.config.js*/
module.exports = {
entry: {
main: [
'./src/main.js'
]
},
output: {
filename: './public/[name].js'
},
module: {
loaders: [
{
test: /\.jsx?$/,
exclude: /node_modules/,
loader: 'babel'
}
]
}
}
/*App.jsx*/
import React from 'react';
class App extends React.Component {
constructor() {
super();
this.state = {
messages: [
'hi there how are you ?',
'i am fine, how are you ?'
]
}
}
render() {
var messageNodes = this.state.messages.map((message)=> {
return (
<div>{message}</div>
);
});
return (
<div>{messageNodes}</div>
);
}
}
export default App;
/*main.js*/
import React from 'react';
import ReactDOM from 'react-dom';
import App from './components/App.jsx';
ReactDOM.render(<App/>, document.getElementById('container'));
/*index.html*/
<!DOCTYPE html>
<html>
<head>
<title></title>
<meta charset="utf-8" />
</head>
<body>
<div id="container"></div>
<script src="public/main.js"></script>
</body>
</html>
/*package.json */
{
"name": "reatapp",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"author": "",
"license": "ISC",
"dependencies": {
"babel-core": "^6.1.2",
"babel-loader": "^6.0.1",
"babel-preset-react": "^6.1.2",
"babelify": "^7.2.0",
"react": "^0.14.2",
"react-dom": "^0.14.2",
"webpack": "^1.12.3"
}
}
A:
It was solved. The answer was installing the presets with npm i --save babel-preset-env babel-preset-react, then adding another key to the loader in webpack.config.js: query: { presets: ['env', 'react'] }. Should be good to go.
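For reference, the updated loader entry would look roughly like this (the rest of webpack.config.js stays unchanged):
module: {
    loaders: [
        {
            test: /\.jsx?$/,
            exclude: /node_modules/,
            loader: 'babel',
            query: {
                presets: ['env', 'react']
            }
        }
    ]
}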
|
{
"pile_set_name": "StackExchange"
}
|
Q:
integer pointer buffer skipping indices
This is weird.
I have a buffer of integer pointers. I take an input, and append it to the buffer. Only, when I debug, I see that it is putting a 0 in between each input.
int *buf[BUFLEN];
char input[BUFLEN];
int temp;
printf("To move to the next step, enter q or quit\n");
printf("Please enter an integer then press Enter to insert integer into the list:\n");
while(1){
scanf("%s", input);
if (!strcmp(input, "q") || !strcmp(input, "quit"))
break;
buf[list_count] = atoi(input);
list_count++;
}
Here's what lldb is showing me
(lldb) p *(int(*)[20])buf
(int [20]) $19 = {
[0] = 9
[1] = 0
[2] = 8
[3] = 0
[4] = 7
[5] = 0
[6] = 6
[7] = 0
[8] = 5
[9] = 0
[10] = 4
[11] = 0
[12] = 3
[13] = 0
[14] = 2
[15] = 0
which is weird because when I execute p buf, it's correct -- but my overall program isn't working, so I think my bug is here
(lldb) p buf
(int *[256]) $17 = {
[0] = 0x0000000000000009
[1] = 0x0000000000000008
[2] = 0x0000000000000007
[3] = 0x0000000000000006
[4] = 0x0000000000000005
[5] = 0x0000000000000004
[6] = 0x0000000000000003
[7] = 0x0000000000000002
[8] = 0x0000000000000001
Also, how can I assign an integer returned by atoi() into an array of integer pointers?
that is, I have...
int *buf[BUFLEN];
scanf("%s", input);
buf[list_count] = atoi(input);
A:
You are confusing int and int*. I'd venture a guess that on your system sizeof(int) == 4 and sizeof(int*) == 8
That means that your buffer looks something like this:
0x00000000 0x00000009
0x00000000 0x00000008
0x00000000 0x00000007
0x00000000 0x00000006
If you are printing them as int* that will be four entries, but if you are printing them as int it will be 8, every second one being zero.
Also, how can I assign an integer returned by atoi() into an array of integer pointers?
I'm not sure what you are trying to do. Are you trying to read a pointer (i.e. an address) from the user? In that case you're doing it okay-ish; you just need a cast:
buf[list_count] = (int*)atoi(input);
The compiler should have given you a warning there. Don't ignore those.
If you are trying to get the address of the int the user is giving you.. it doesn't exactly work that way. First you need to store the int somewhere, in a separate int intbuffer[BUFSIZE];, and then use the address of that element.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Functions with the domain having greater cardinality than $\aleph_1$
Are there any examples of an algebra on elements belonging to a set of greater cardinality than that of the real numbers? Can a reference to their use be given?
A:
It is consistent that the real numbers have cardinality $\aleph_2$, in which case a function $f\colon\Bbb{R\to R}$ satisfies your requirements.
If you want a set whose cardinality is explicitly $\aleph_2$ then you will have to resort, in one way or another, to $\omega_2$ which is the second uncountable ordinal (i.e. the cardinal $\aleph_2$). The function can be anything, from the successor ordinal, to anything crazy.
However, I feel that you're not talking about $\aleph_2$, but rather $\beth_2$ i.e. $|\mathcal{P(P(}\Bbb R))|$. In that case you can consider the Lebesgue measure as a function whose domain has cardinality $\beth_2$, and its range is $\Bbb R$.
(Also related: Functions on P(R) - are there examples?)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Route voyager posts index not defined
I have just installed Voyager on Laravel 5.4 and the tables migrated, but at the end it shows the following error:
Route [voyager.posts.index] not defined
A:
Re-running php artisan voyager:install will resolve the issue.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Why is $\mathfrak{c}$ the cardinality of the lower limit topology on $\mathbb{R}$?
Why is $\mathfrak{c} = |\mathbb R|$ the cardinality of the lower limit topology on $\mathbb{R}$?
An open set in the lower limit topology is of the form $[a,b)$. I can clearly see why the cardinality has to be at least $\mathfrak{c}$, but why is it bounded by $\mathfrak{c}$?
A:
There are open sets in the lower limit topology that are not of the form $[a,b)$: the lower limit topology consists of the subsets of $\Bbb R$ that are unions of sets of the form $[a,b)$, so, for instance, every set of the form $(a,b)$ is open in the lower limit topology:
$$(a,b)=\bigcup_{a<x<b}[x,b)\;.$$
Let $\tau$ be the lower limit topology on $\Bbb R$; there are several ways to see that $|\tau|\le\mathfrak{c}$. One is to observe that $\langle\Bbb R,\tau\rangle$ is hereditarily Lindelöf. Now let $U\in\tau$, and let $\mathscr{U}=\{[a,b):[a,b)\subseteq U\}$; clearly $\mathscr{U}$ is an open cover of $U$, so it has a countable subcover $\mathscr{U}_0$. Thus, every open set in the lower limit topology is the union of countably many sets of the form $[a,b)$. There are $\mathfrak{c}$ such sets, so there are $\mathfrak{c}^{\aleph_0}=(2^{\aleph_0})^{\aleph_0}=2^{\aleph_0\cdot\aleph_0}=2^{\aleph_0}=\mathfrak{c}$ countable families of them and hence at most $\mathfrak{c}$ members of $\tau$.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Laravel submitting blank form
I'm receiving the following error when submitting a blank form in Laravel 4.
Undefined index: fields
There are currently no issues outside of Laravel. Users should be able to submit the form even without selections.
I can certainly check with isset and act accordingly, but I'm just wondering if it's something known with Laravel.
$submission = $_POST['fields'];
Form:
{{ Form::open(array('url' => 'results')) }}
<table>
<tbody>
<tr>
<td><span>text 1</span>
{{ Form::checkbox('fields[]', 'value_1', false, array('class'=>'checkbox_style')) }}
</td>
<td><span>text 2</span>
{{ Form::checkbox('fields[]', 'value_2', false, array('class'=>'checkbox_style')) }}
</td>
</tr>
</tbody>
</table>
{{ Form::submit('Submit', array('class'=>'btn')) }}
{{ Form::close() }}
A:
It is always a good practice to double check variables that could be empty or undefined.
$submission = empty($_POST['fields']) ? [] : $_POST['fields'];
This way, if $_POST['fields'] is undefined, $submission will be set to an empty array.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Can't add a new Cassandra datacenter due to streaming errors
Using DSE 4.8.6 (C* 2.1.13.1218)
When I try adding a new node in a new datacenter, bootstrapping / node rebuild is always interrupted by streaming errors.
Error example from system.log:
ERROR [STREAM-IN-/172.31.47.213] 2016-04-19 12:30:28,531 StreamSession.java:621 - [Stream #743d44e0-060e-11e6-985c-c1820b05e9ae] Remote peer 172.31.47.213 failed stream session.
INFO [STREAM-IN-/172.31.47.213] 2016-04-19 12:30:30,665 StreamResultFuture.java:180 - [Stream #743d44e0-060e-11e6-985c-c1820b05e9ae] Session with /172.31.47.213 is complete
There is about 500GB of data to be streamed to the new node. The bootstrap or rebuild operation streams it from 4 different nodes in the other (main) DC.
When a streaming error occurs, all synced data is wiped (and I have to start over).
What I tried so far:
bootstrapping the node
setting auto_bootstrap: false in cassandra.yaml and manually running nodetool rebuild
disabling streaming_socket_timeout_in_ms and setting up more aggressive TCP keep-alive values in my Linux conf (following advice in the CASSANDRA-9440 ticket)
increasing phi_convict_threshold (to the max)
not bootstrapping the node and using repair to stream the data (I stopped the repair with a nearly full disk and 80K SSTables; after 3 days of trying to compact them, I gave up)
Any other things I should try? I'm in the process of running nodetool scrub on every failing node to see if this helps...
On the stream out node, these are the error messages:
ERROR [STREAM-IN-/172.31.45.28] 2016-05-11 13:10:43,842 StreamSession.java:505 - [Stream #ecfe0390-1763-11e6-b6c8-c1820b05e9ae] Streaming error occurred
java.net.SocketTimeoutException: null
at sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:229) ~[na:1.7.0_80]
at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103) ~[na:1.7.0_80]
at java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385) ~[na:1.7.0_80]
at org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:51) ~[cassandra-all-2.1.14.1272.jar:2.1.14.1272]
at org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:257) ~[cassandra-all-2.1.14.1272.jar:2.1.14.1272]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
and then:
INFO [STREAM-IN-/172.31.45.28] 2016-05-10 07:59:14,023 StreamResultFuture.java:180 - [Stream #ea1271b0-1679-11e6-917a-c1820b05e9ae] Session with /172.31.45.28 is complete
WARN [STREAM-IN-/172.31.45.28] 2016-05-10 07:59:14,023 StreamResultFuture.java:207 - [Stream #ea1271b0-1679-11e6-917a-c1820b05e9ae] Stream failed
ERROR [STREAM-OUT-/172.31.45.28] 2016-05-10 07:59:14,024 StreamSession.java:505 - [Stream #ea1271b0-1679-11e6-917a-c1820b05e9ae] Streaming error occurred
java.lang.AssertionError: Memory was freed
at org.apache.cassandra.io.util.SafeMemory.checkBounds(SafeMemory.java:97) ~[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at org.apache.cassandra.io.util.Memory.getLong(Memory.java:249) ~[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at org.apache.cassandra.io.compress.CompressionMetadata.getTotalSizeForSections(CompressionMetadata.java:247) ~[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at org.apache.cassandra.streaming.messages.FileMessageHeader.size(FileMessageHeader.java:112) ~[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at org.apache.cassandra.streaming.StreamSession.fileSent(StreamSession.java:546) ~[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:50) ~[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:41) ~[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:45) ~[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:358) ~[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:338) ~[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
A:
As answered in the Cassandra ticket CASSANDRA-11345, this issue was due to a big SSTable file (40GB) being transferred.
The transfer of said file takes more than 1 hour and by default streaming operations time out if an outgoing transfer takes more than 1 hour.
To change this default behavior you can set the streaming_socket_timeout_in_ms in the cassandra.yaml configuration file to a large value (eg: 72000000 ms or 20 hours)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
std::tuple_size and partially specialized templates
this small program
https://ideone.com/dqVJbN
#include <iostream>
#include <tuple>
#include <string>
using namespace std;
class MetaData
{
public:
template<int ID, class T>
void addVar(string varNames)
{
// do smth
}
template<int ID, class MSGT>
struct addVarDesc
{
static void exec(MetaData& md, string varNames)
{
typedef typename std::tuple_element<ID, typename MSGT::values_type>::type varType;
md.addVar<ID, varType>(varNames);
addVarDesc<ID+1, MSGT>::exec(md, varNames);
}
};
template<class MSGT>
struct addVarDesc<std::tuple_size<typename MSGT::values_type>::value, MSGT>
{
static void exec(MetaData& md, string varNames)
{
}
};
template<class MSGT>
static MetaData createMetaData(string varNames)
{
MetaData md;
MetaData::addVarDesc<0, MSGT>::exec(md, varNames);
return md;
}
};
template<typename... Types>
class Message
{
public:
tuple<Types...> m_values;
typedef tuple<Types...> values_type;
static MetaData m_meta;
};
typedef Message<string, double> MyMessageType;
template<>
MetaData MyMessageType::m_meta = MetaData::createMetaData<MyMessageType>("name\nmass");
int main() {
// your code goes here
return 0;
}
compiles well in gcc, yet produces an "error C2755: 'MetaData::addVarDesc::value,MSGT>' : non-type parameter of a partial specialization must be a simple identifier" in MS Visual Studio 2013.
I wonder, what is the smallest/best change required here for this code to work in VS 2013.
EDIT Trying to rephrase it differently: how can I get the tuple size as a compile-time constant, eligible for use as a template parameter?
EDIT Basically, using integral_constant<int, ID> instead of int ID resolved the issue.
A:
I have been asked to create an answer. You need to wrap the number in a type
template<typename ID, class MSGT>
struct addVarDescImpl;
template<int ID, class MSGT>
struct addVarDesc : addVarDescImpl<std::integral_constant<int, ID>, MSGT>
{};
template<typename ID, class MSGT>
struct addVarDescImpl
{
static void exec(MetaData& md, string varNames)
{
typedef typename std::tuple_element<ID::value, typename MSGT::values_type>::type varType;
md.addVar<ID::value, varType>(varNames);
addVarDesc<ID::value+1, MSGT>::exec(md, varNames);
}
};
template<class MSGT>
struct addVarDescImpl<
std::integral_constant<int, std::tuple_size<typename MSGT::values_type>::value>,
MSGT>
{
static void exec(MetaData& md, string varNames)
{
}
};
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Detect LinearLayout's selected item in the HorizontalScrollView
I have a LinearLayout in a HorizontalScrollView which is horizontally scrollable.
<HorizontalScrollView
android:id="@+id/scrollMessageFiles"
android:layout_width="fill_parent"
android:layout_height="65dp"
android:layout_below="@+id/editMessage"
android:orientation="horizontal"
android:weightSum="1.0" >
<LinearLayout
android:id="@+id/panelMessageFiles"
android:orientation="horizontal"
android:layout_width="wrap_content"
android:layout_height="match_parent"
android:background="#FFFFFF"
>
</LinearLayout>
</HorizontalScrollView>
I add TextViews to my LinearLayout and ScrollView programmatically in the following way:
public void addFiles()
{
if(!FileManagerActivity.getFinalAttachFiles().isEmpty())
{
for (File file: FileManagerActivity.getFinalAttachFiles())
{
View line = new View(this);
line.setLayoutParams(new LayoutParams(1, LayoutParams.MATCH_PARENT));
line.setBackgroundColor(0xAA345556);
informationView = new TextView(this);
informationView.setId(102);
informationView.setTextColor(Color.BLACK);
informationView.setTextSize(12);
informationView.setCompoundDrawablesWithIntrinsicBounds(
0, R.drawable.file_icon, 0, 0);
informationView.setText(file.getName().toString());
layout.addView(informationView, 0);
layout.addView(line, 1);
}
}
}
It works properly. I want to detect which item of the LinearLayout was selected in the HorizontalScrollView. I do it this way:
layout.setOnLongClickListener(new View.OnLongClickListener() {
public boolean onLongClick(View v) {
final CharSequence[] items = {"Open", "Delete", "Details"};
final AlertDialog.Builder builder = new AlertDialog.Builder(NewMessageActivity.this);
int childCount = layout.getChildCount()/2;
for (int i = 0; i < childCount; i++)
{
final View child = layout.getChildAt(i);
if (child instanceof TextView)
builder.setTitle(((TextView)child).getText().toString());
}
builder.setItems(items, new DialogInterface.OnClickListener() {
public void onClick(DialogInterface dialog, int item) {
}
});
AlertDialog alert = builder.create();
alert.show();
return false;
}
});
But the title of the AlertDialog.Builder is set to the first item from the list, not the one I had chosen. How can I detect which item was selected and set its name as the AlertDialog title?
A:
I solved this by changing how I add TextViews to the LinearLayout, in this way:
public void addFiles()
{
if(!FileManagerActivity.getFinalAttachFiles().isEmpty())
{
TextView tv[] = new TextView[FileManagerActivity.getFinalAttachFiles().size()];
for (int i = 0; i< FileManagerActivity.getFinalAttachFiles().size();i++)
{
View line = new View(this);
line.setLayoutParams(new LayoutParams(1, LayoutParams.MATCH_PARENT));
line.setBackgroundColor(0xAA345556);
tv[i] = new TextView(this);
tv[i].setId(i);
tv[i].setTextColor(Color.BLACK);
tv[i].setTextSize(12);
tv[i].setCompoundDrawablesWithIntrinsicBounds(
0, R.drawable.file_icon, 0, 0);
tv[i].setText(FileManagerActivity.getFinalAttachFiles().get(i).getName().toString());
tv[i].setOnLongClickListener(onclicklistener);
layout.addView(tv[i], 0);
layout.addView(line, 1);
}
}
}
I added a setOnLongClickListener to each TextView in the array, whose length is equal to my list's length.
My Listener:
OnLongClickListener onclicklistener = new OnLongClickListener() {
public boolean onLongClick(View arg0) {
final CharSequence[] items = {"Open", "Delete", "Details"};
final AlertDialog.Builder builder = new AlertDialog.Builder(NewMessageActivity.this);
builder.setTitle(((TextView)arg0).getText().toString());
builder.setItems(items, new DialogInterface.OnClickListener() {
public void onClick(DialogInterface dialog, int item) {
if (item == 1)
{
FileManagerActivity.getFinalAttachFiles().remove(item);
layout.invalidate();
}
}
});
AlertDialog alert = builder.create();
alert.show();
return false;
}
};
That solved my problem.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Failed to load aspnetcorev2.dll hosting ASP.NET Core 2.2 on IIS7
I am struggling with deploying an ASP.NET Core 2.2 site to Windows 7 SP1 IIS7.5.
The server has dotnet-hosting-2.2.1-win installed. Following are the Programs and Features entries:
Registered IIS modules:
The application pool is configured in the following way:
The app pool is used only by one web application and is running under a windows account.
I am deploying an ASP.NET Core 2.2 website using the following publish settings:
This is the deployed web.config file:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<location path="." inheritInChildApplications="false">
<system.webServer>
<handlers>
<add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModuleV2" resourceType="Unspecified" />
</handlers>
<aspNetCore processPath=".\App.exe" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" hostingModel="InProcess" />
</system.webServer>
</location>
</configuration>
Whatever I do, the app pool will stop and the following error will show up in the Event Viewer:
The Module DLL C:\Program Files\IIS\Asp.Net Core Module\V2\aspnetcorev2.dll failed to load. The data is the error.
I tried deploying with:
Any CPU/x86/x64
win-x64/win-x86/Portable
Self-Contained/Framework-dependent
hostingModel="InProcess"/hostingModel="OutOfProcess"
Enable 32-bit Applications="true"
Also attempted the following workarounds:
https://github.com/aspnet/AspNetCore/issues/6118
https://github.com/aspnet/AspNetCore/issues/4206
Whatever I do, I can't make the application pool run. Does anyone know what could be causing those problems?
UPDATE For some reason even the other IIS sites on .NET Framework that used to work before, now can't start with the same error - The Module DLL C:\Program Files\IIS\Asp.Net Core Module\V2\aspnetcorev2.dll failed to load. The data is the error.
ANSWER After a lot of digging it turned out that the installer for the hosting bundle failed to download the Microsoft Visual C++ 2015 Redistributable. That is why all websites stopped working. I installed it manually and reinstalled the hosting bundle, and everything worked.
A:
After a lot of digging it turned out that the installer for the hosting bundle failed to download the Microsoft Visual C++ 2015 Redistributable. That is why all websites stopped working. I installed it manually and reinstalled the hosting bundle, and everything worked.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Pareto optimality in game theory
Consider the classic Game Theory game Prisoner Dilemma:
        C          D
C |  -1, -1  |  -4,  0
D |   0, -4  |  -3, -3
Now in this game there are 3 Pareto optimals: (-4,0), (0,-4) and (-1,-1).
I have had some trouble considering (-1,-1) as a Pareto optimum, maybe because there is a better outcome for both players (though not at the same time) among the other outcomes.
Is the following reasoning correct?
initialize the set of Pareto optimals with all the outcomes (a,b);
if there is another outcome (a',b') such that a'>a AND b'>b, then remove (a,b) from the set of Pareto optimals.
Is this correct?
Another question: is the outcome where a player has the maximum of his/her utility always a Pareto optimal?
A:
I have had some trouble considering (-1,-1) as a Pareto optimum, maybe because there is a better outcome for both players (though not at the same time) among the other outcomes.
No. It does have to be at the same time. None of the outcomes is better than $(-1,-1)$ for both players at the same time, so it is Pareto optimal.
You say this yourself: for some outcome $(a,b)$ to not be Pareto Optimal, you must have some other outcome $(a',b')$ where both $a'>a$ AND $b'>b$. This is not true for $(-4,0)$ (worse for player 1), $(0,-4)$ (worse for player 2), and $(-3,-3)$ (worse for both players).
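A minimal sketch of that filter in Python (the outcome list below is just the matrix above written out):
outcomes = [(-1, -1), (-4, 0), (0, -4), (-3, -3)]
# (a, b) stays Pareto optimal unless some other outcome is strictly
# better for BOTH players at the same time.
pareto = [(a, b) for (a, b) in outcomes
          if not any(a2 > a and b2 > b for (a2, b2) in outcomes)]
print(pareto)  # [(-1, -1), (-4, 0), (0, -4)]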
p.s. in explaining your reasoning you say:
assume every outcome (a,b) is Pareto Optimal.
I don't understand why you would be assuming that. It's typically not true.
Another question: is the outcome where a player has the maximum of his/her utility always a Pareto optimal?
Yes, because for some outcome to not be a Pareto Optimal, there must be some other outcome where that player can do better, but that is not the case if the player is already at his/her maximum.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Why can't Selenium RC see a site on virtual machine
I'm having trouble getting Selenium to see sites hosted on a virtual
machine. The following test causes an error, and I have no idea why:
<?php
require_once 'PHPUnit/Extensions/SeleniumTestCase.php';
class ActualTest extends PHPUnit_Extensions_SeleniumTestCase
{
protected function setUp()
{
$this->setBrowser("*firefox");
$this->setBrowserUrl("http://10.48.77.48/"); // IP of virtual machine
$this->setHost('192.168.101.1'); // IP of my Mac
}
public function testGetHomePage()
{
$this->open("/", true);
}
}
It returns the following error message, indicating that it couldn't
find the virtual machine:
$ phpunit ActualTest.php
PHPUnit 3.5.6 by Sebastian Bergmann.
E
Time: 7 seconds, Memory: 6.75Mb
There was 1 error:
1) ActualTest::testGetHomePage
PHPUnit_Framework_Exception: Response from Selenium RC server for testComplete().
XHR ERROR: URL = http://10.48.77.48/ Response_Code = 404 Error_Message = Page Not Found.
/home/craiga/ombudsman/app/systemtests/ActualTest.php:16
FAILURES!
Tests: 1, Assertions: 0, Errors: 1.
I can access this site from any browser anywhere on the network without any
problem, but for some reason the browser launched by Selenium can't.
This error occurs whether I launch the test from the virtual machine
or the Mac.
I can get the following test to connect to Google without any problem:
<?php
require_once 'PHPUnit/Extensions/SeleniumTestCase.php';
class VanitySearchTest extends PHPUnit_Extensions_SeleniumTestCase
{
protected function setUp()
{
$this->setBrowser("*firefox");
$this->setBrowserUrl("http://www.google.com.au/");
$this->setHost('192.168.101.1'); // IP of my Mac
}
public function testSearchForSelf()
{
$this->open("/");
$this->type("q", "craig anderson");
$this->click("btnG");
$this->waitForPageToLoad("30000");
try {
$this->assertTrue($this->isTextPresent("craiga.id.au"));
} catch (PHPUnit_Framework_AssertionFailedError $e) {
array_push($this->verificationErrors, $e->toString());
}
}
}
This test, which connects to my Mac's default page, also passes
without any problems:
<?php
require_once 'PHPUnit/Extensions/SeleniumTestCase.php';
class MacTest extends PHPUnit_Extensions_SeleniumTestCase
{
protected function setUp()
{
$this->setBrowser("*firefox");
$this->setBrowserUrl("http://192.168.101.1/");
$this->setHost('192.168.101.1'); // IP of my Mac
}
public function testMacHomePage()
{
$this->open("/");
try {
$this->assertTrue($this->isTextPresent("It works!"));
} catch (PHPUnit_Framework_AssertionFailedError $e) {
array_push($this->verificationErrors, $e->toString());
}
}
}
Does anyone have any idea why this might be happening? I'm happy to
provide whatever information I can about my setup. I'm using Selenium
Server 1.0.3, and the latest phpunit from pear.
A:
Solved. My application wasn't responding correctly to HEAD requests. To test your application, run the following:
curl --head http://10.48.77.48/
If it returns a 404, your application probably isn't handling HEAD requests properly.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Is the composition of irreducible polynomials again irreducible
I've been pondering this since yesterday. Is it true that, given two irreducible polynomials $f(x)$ and $g(x)$, $f(g(x))$ or $g(f(x))$ will be irreducible?
A:
This need not be true. Note that in $\mathbb{R}[x]$ any irreducible polynomial has degree either $1$ or $2$. So you can take two irreducible polynomials of degree $2$ and compose them: the composition has degree $4$, and hence must be reducible.
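For a concrete instance of this construction:
$$f(x) = g(x) = x^2 + 1 \implies f(g(x)) = (x^2+1)^2 + 1 = x^4 + 2x^2 + 2,$$
which has degree $4$ and is therefore reducible in $\mathbb{R}[x]$, even though $f$ and $g$ are both irreducible.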
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Return track-list using musicbrainzngs.search_releases()
I'm getting acquainted with musicbrainzngs and have run into a snag. All of the track-lists which are returned from the following are empty. Are there additional parameters I need to provide or is this a bug?
releases = musicbrainzngs.search_releases(
query='arid:' + musicbrainz_arid
)
A:
This is expected. You have three ways of retrieving data from the MusicBrainz web service (using musicbrainzngs or directly):
lookup/get info for one entity by id: lots of info for that id
browse a list of entities: possibility to get long list, medium amount of information
search for entities: powerful to find things, but not much data given
When you know an entity by id you can look it up directly. You can even add includes to get very detailed information.
When you want not just one entity but a list (like a list of releases for one artist), you can browse. Even for these you can add includes.
And only when you don't know the id of the entity (or an attached entity), or when you want to cut down the list of entities, should you use search.
In your case you know the artist id and want to get the list of releases. In that case you should use browse_releases and set an include for recordings:
releases = musicbrainzngs.browse_releases(artist=musicbrainz_arid,
inc=["recordings"])
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Certification logos on Careers site
I would like to make a feature request that would allow us to upload or link to a logo for a certification.
I love the idea of using Careers 2.0 to generate my resume, but I would really like to see my cert logos on there.
For example:
Would this be possible? Thanks for considering!
A:
While the profiles are meant to fit your (the candidate's) standards, we also want to make sure that employers have an easy time reading them. That's why even though we offer a lot more than a standard resume (SO answers, Open Source projects etc...) the layout is similar to a standard resume.
I worry that including these images, or other images/logos for companies you worked for, will turn some profiles into a NASCAR* monstrosity and turn off employers from otherwise good candidates.
*for those unfamiliar with NASCAR
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Get and parse a JSON-String from URL
How can I get this JSON data string into a variable and parse it into some useful format?
The url for the string:
http://intranet.ooelfv.at/webext2/getjson.php?scope=laufend&callback=?
I am pretty new to JS and JSON, so some advice on how to get into this topic would be great.
Thank you,
Stoani
A:
Here's a simple runnable example that just logs the content of the data to your console:
<!DOCTYPE html>
<html lang="de">
<head>
<meta charset="UTF-8">
<title>Title</title>
</head>
<body>
<script>
function getData(jsonp){
console.log(jsonp);
}
var scripts = document.createElement('script');
scripts.src = 'http://intranet.ooelfv.at/webext2/getjson.php?scope=laufend&callback=getData';
document.body.appendChild(scripts);
</script>
</body>
</html>
You can access the data by modifying the getData function. For example, to get the title, you can use
function getData(jsonp){
console.log(jsonp.title);
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Differentiate complex function?
$$f(z)=3z^2+\bar{z}$$
I want to show the function is either differentiable or not so I can state if it is holomorphic or not. What is the method for this ?
Edit - Can someone give an example of how to apply the Cauchy-Riemann equations to this question?
A:
If $f(z)$ were differentiable, then $f(z)-3z^2=\bar{z}$ would be differentiable since $3z^2$ is. However, $\bar{z}$ is not complex differentiable. Check this with the Cauchy-Riemann equations.
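For a worked check: write $\bar{z} = x - iy$ as $u(x,y) + iv(x,y)$, so $u(x,y) = x$ and $v(x,y) = -y$. Then
$$\frac{\partial u}{\partial x} = 1 \neq -1 = \frac{\partial v}{\partial y},$$
so the first Cauchy-Riemann equation fails at every point, and $\bar{z}$ is nowhere complex differentiable.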
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Java7 Reserving and Assigning random positions
I am making a basic color guessing game (my first one) where you get a hex code and some color alternatives to choose from. However, I've run into some problems. The way I have it set up is that I create some ovals to which I assign random colors (one being the correct one), and that is all working fine.
The problem is that, in order for the answer not to be put in the same spot every time, I'm trying to randomly generate positions for these ovals, but my code doesn't work. The code (which I thought would reserve and assign spots) doesn't reserve spots correctly. I thought it would only go into the else statement if the position wasn't taken (takenPos[tempRandomNum] == false), but it seems that it always goes into the else statement, even though my print confirms that it generated the same spot multiple times.
Another problem is that if it enters the if statement (which it doesn't right now) and generates a new value, and that value is taken, it uses the value anyway.
Print:
ELSE false 0
ELSE false 540
ELSE false 0
ELSE false 360
ELSE false 360
ELSE false 450
ELSE false 180
ELSE false 360
ELSE false 540
Code:
public int randomOvalPos() {
//An array of booleans to keep track of if the position is taken.
boolean[] takenPos = new boolean[difficulty];
//Temporary variable to return if it gets through the if statement.
int tempRandomNum = randomNum(0 , difficulty - 1);
//Check if the position is taken (set before return).
if(takenPos[tempRandomNum]){
//If the position is taken I want it to get a new random position.
//The problem is that if this spot is taken as
//well I can't just keep redoing it.
tempRandomNum = randomNum(0, difficulty - 1);
takenPos[tempRandomNum] = true;
return tempRandomNum * 90;
}else{
//If it isn't taken set the boolean to true and return the value.
takenPos[tempRandomNum] = true;
return tempRandomNum * 90;
}
}
A:
What I would do (before displaying the ovals to the user) is use the code that currently works and places the answer in the same spot every time, and then shuffle the array using a Fisher-Yates shuffle.
Read this question for an implementation on how to do that: Random shuffling of an array
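(Note that the original randomOvalPos can never see a spot as taken, because takenPos is a local array recreated on every call.) A minimal sketch of the shuffle itself in Java (method and variable names hypothetical):
import java.util.Random;

// Shuffles an int array of oval positions in place (Fisher-Yates).
static void shufflePositions(int[] positions, Random rng) {
    for (int i = positions.length - 1; i > 0; i--) {
        int j = rng.nextInt(i + 1); // uniform index in 0..i
        int tmp = positions[i];
        positions[i] = positions[j];
        positions[j] = tmp;
    }
}
Fill positions with 0, 90, 180, ... once per round, shuffle, and hand them out in order; every oval then gets a distinct spot and the answer's position is random.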
|
{
"pile_set_name": "StackExchange"
}
|
Q:
MariaDB using all available ram
On a computer with 64G of RAM I have a MariaDB server that uses all available RAM, even though it is (I think, at least) configured to use much less:
SHOW VARIABLES
aria_block_size 8192
aria_checkpoint_interval 30
aria_checkpoint_log_activity 1048576
aria_encrypt_tables OFF
aria_force_start_after_recovery_failures 0
aria_group_commit none
aria_group_commit_interval 0
aria_log_file_size 1073741824
aria_log_purge_type immediate
aria_max_sort_file_size 9223372036853727232
aria_page_checksum ON
aria_pagecache_age_threshold 300
aria_pagecache_buffer_size 134217728
aria_pagecache_division_limit 100
aria_pagecache_file_hash_size 512
aria_recover NORMAL
aria_repair_threads 1
aria_sort_buffer_size 268434432
aria_stats_method nulls_unequal
aria_sync_log_dir NEWFILE
aria_used_for_temp_tables ON
auto_increment_increment 1
auto_increment_offset 1
autocommit ON
automatic_sp_privileges ON
back_log 152
basedir /usr
big_tables OFF
binlog_annotate_row_events OFF
binlog_cache_size 32768
binlog_checksum NONE
binlog_commit_wait_count 0
binlog_commit_wait_usec 100000
binlog_direct_non_transactional_updates OFF
binlog_format STATEMENT
binlog_optimize_thread_scheduling ON
binlog_row_image FULL
binlog_stmt_cache_size 32768
bulk_insert_buffer_size 8388608
character_set_client utf8mb4
character_set_connection utf8mb4
character_set_database utf8mb4
character_set_filesystem binary
character_set_results utf8mb4
character_set_server utf8mb4
character_set_system utf8
character_sets_dir /usr/share/mysql/charsets/
collation_connection utf8mb4_unicode_ci
collation_database utf8mb4_general_ci
collation_server utf8mb4_general_ci
completion_type NO_CHAIN
concurrent_insert AUTO
connect_timeout 10
core_file OFF
datadir /var/lib/mysql/
date_format %Y-%m-%d
datetime_format %Y-%m-%d %H:%i:%s
deadlock_search_depth_long 15
deadlock_search_depth_short 4
deadlock_timeout_long 50000000
deadlock_timeout_short 10000
debug_no_thread_alarm OFF
default_master_connection
default_regex_flags
default_storage_engine InnoDB
default_tmp_storage_engine
default_week_format 0
delay_key_write ON
delayed_insert_limit 100
delayed_insert_timeout 300
delayed_queue_size 1000
div_precision_increment 4
encrypt_binlog OFF
encrypt_tmp_disk_tables OFF
encrypt_tmp_files OFF
enforce_storage_engine
error_count 0
event_scheduler OFF
expensive_subquery_limit 100
expire_logs_days 10
explicit_defaults_for_timestamp OFF
external_user
extra_max_connections 1
extra_port 0
flush OFF
flush_time 0
foreign_key_checks ON
ft_boolean_syntax + -><()~*:""&|
ft_max_word_len 84
ft_min_word_len 4
ft_query_expansion_limit 20
ft_stopword_file (built-in)
general_log OFF
general_log_file prod4.log
group_concat_max_len 1024
gtid_binlog_pos
gtid_binlog_state
gtid_current_pos
gtid_domain_id 0
gtid_ignore_duplicates OFF
gtid_seq_no 0
gtid_slave_pos
gtid_strict_mode OFF
have_compress YES
have_crypt YES
have_dynamic_loading YES
have_geometry YES
have_openssl NO
have_profiling YES
have_query_cache YES
have_rtree_keys YES
have_ssl DISABLED
have_symlink YES
histogram_size 0
histogram_type SINGLE_PREC_HB
host_cache_size 628
hostname prod4
identity 0
ignore_builtin_innodb OFF
ignore_db_dirs
in_transaction 0
init_connect
init_file
init_slave
innodb_adaptive_flushing ON
innodb_adaptive_flushing_lwm 10.000000
innodb_adaptive_hash_index ON
innodb_adaptive_hash_index_partitions 1
innodb_adaptive_max_sleep_delay 150000
innodb_additional_mem_pool_size 8388608
innodb_api_bk_commit_interval 5
innodb_api_disable_rowlock OFF
innodb_api_enable_binlog OFF
innodb_api_enable_mdl OFF
innodb_api_trx_level 0
innodb_autoextend_increment 64
innodb_autoinc_lock_mode 1
innodb_background_scrub_data_check_interval 3600
innodb_background_scrub_data_compressed OFF
innodb_background_scrub_data_interval 604800
innodb_background_scrub_data_uncompressed OFF
innodb_buf_dump_status_frequency 0
innodb_buffer_pool_dump_at_shutdown OFF
innodb_buffer_pool_dump_now OFF
innodb_buffer_pool_dump_pct 100
innodb_buffer_pool_filename ib_buffer_pool
innodb_buffer_pool_instances 8
innodb_buffer_pool_load_abort OFF
innodb_buffer_pool_load_at_startup OFF
innodb_buffer_pool_load_now OFF
innodb_buffer_pool_populate OFF
innodb_buffer_pool_size 12884901888
innodb_change_buffer_max_size 25
innodb_change_buffering all
innodb_checksum_algorithm INNODB
innodb_checksums ON
innodb_cleaner_lsn_age_factor HIGH_CHECKPOINT
innodb_cmp_per_index_enabled OFF
innodb_commit_concurrency 0
innodb_compression_algorithm zlib
innodb_compression_failure_threshold_pct 5
innodb_compression_level 6
innodb_compression_pad_pct_max 50
innodb_concurrency_tickets 5000
innodb_corrupt_table_action assert
innodb_data_file_path ibdata1:12M:autoextend
innodb_data_home_dir
innodb_default_encryption_key_id 1
innodb_default_row_format compact
innodb_defragment OFF
innodb_defragment_fill_factor 0.900000
innodb_defragment_fill_factor_n_recs 20
innodb_defragment_frequency 40
innodb_defragment_n_pages 7
innodb_defragment_stats_accuracy 0
innodb_disable_sort_file_cache OFF
innodb_disallow_writes OFF
innodb_doublewrite ON
innodb_empty_free_list_algorithm BACKOFF
innodb_encrypt_log OFF
innodb_encrypt_tables OFF
innodb_encryption_rotate_key_age 1
innodb_encryption_rotation_iops 100
innodb_encryption_threads 0
innodb_fake_changes OFF
innodb_fast_shutdown 1
innodb_fatal_semaphore_wait_threshold 600
innodb_file_format Antelope
innodb_file_format_check ON
innodb_file_format_max Antelope
innodb_file_per_table ON
innodb_flush_log_at_timeout 1
innodb_flush_log_at_trx_commit 0
innodb_flush_method O_DIRECT
innodb_flush_neighbors 1
innodb_flushing_avg_loops 30
innodb_force_load_corrupted OFF
innodb_force_primary_key OFF
innodb_force_recovery 0
innodb_foreground_preflush EXPONENTIAL_BACKOFF
innodb_ft_aux_table
innodb_ft_cache_size 8000000
innodb_ft_enable_diag_print OFF
innodb_ft_enable_stopword ON
innodb_ft_max_token_size 84
innodb_ft_min_token_size 3
innodb_ft_num_word_optimize 2000
innodb_ft_result_cache_limit 2000000000
innodb_ft_server_stopword_table
innodb_ft_sort_pll_degree 2
innodb_ft_total_cache_size 640000000
innodb_ft_user_stopword_table
innodb_idle_flush_pct 100
innodb_immediate_scrub_data_uncompressed OFF
innodb_instrument_semaphores OFF
innodb_io_capacity 400
innodb_io_capacity_max 2000
innodb_kill_idle_transaction 0
innodb_large_prefix OFF
innodb_lock_schedule_algorithm fcfs
innodb_lock_wait_timeout 50
innodb_locking_fake_changes ON
innodb_locks_unsafe_for_binlog OFF
innodb_log_arch_dir ./
innodb_log_arch_expire_sec 0
innodb_log_archive OFF
innodb_log_block_size 512
innodb_log_buffer_size 8388608
innodb_log_checksum_algorithm INNODB
innodb_log_compressed_pages ON
innodb_log_file_size 1073741824
innodb_log_files_in_group 2
innodb_log_group_home_dir ./
innodb_lru_scan_depth 1024
innodb_max_bitmap_file_size 104857600
innodb_max_changed_pages 1000000
innodb_max_dirty_pages_pct 75.000000
innodb_max_dirty_pages_pct_lwm 0.001000
innodb_max_purge_lag 0
innodb_max_purge_lag_delay 0
innodb_mirrored_log_groups 1
innodb_monitor_disable
innodb_monitor_enable
innodb_monitor_reset
innodb_monitor_reset_all
innodb_mtflush_threads 8
innodb_old_blocks_pct 37
innodb_old_blocks_time 1000
innodb_online_alter_log_max_size 134217728
innodb_open_files 400
innodb_optimize_fulltext_only OFF
innodb_page_size 16384
innodb_prefix_index_cluster_optimization OFF
innodb_print_all_deadlocks OFF
innodb_print_lock_wait_timeout_info OFF
innodb_purge_batch_size 300
innodb_purge_threads 1
innodb_random_read_ahead OFF
innodb_read_ahead_threshold 56
innodb_read_io_threads 64
innodb_read_only OFF
innodb_replication_delay 0
innodb_rollback_on_timeout OFF
innodb_rollback_segments 128
innodb_sched_priority_cleaner 19
innodb_scrub_log OFF
innodb_scrub_log_speed 256
innodb_show_locks_held 10
innodb_show_verbose_locks 0
innodb_simulate_comp_failures 0
innodb_sort_buffer_size 1048576
innodb_spin_wait_delay 6
innodb_stats_auto_recalc ON
innodb_stats_include_delete_marked OFF
innodb_stats_method nulls_equal
innodb_stats_modified_counter 0
innodb_stats_on_metadata OFF
innodb_stats_persistent ON
innodb_stats_persistent_sample_pages 20
innodb_stats_sample_pages 8
innodb_stats_traditional ON
innodb_stats_transient_sample_pages 8
innodb_status_output OFF
innodb_status_output_locks OFF
innodb_strict_mode OFF
innodb_support_xa ON
innodb_sync_array_size 1
innodb_sync_spin_loops 30
innodb_table_locks ON
innodb_thread_concurrency 0
innodb_thread_sleep_delay 10000
innodb_tmpdir
innodb_track_changed_pages OFF
innodb_undo_directory .
innodb_undo_logs 128
innodb_undo_tablespaces 0
innodb_use_atomic_writes OFF
innodb_use_fallocate OFF
innodb_use_global_flush_log_at_trx_commit ON
innodb_use_mtflush OFF
innodb_use_native_aio ON
innodb_use_stacktrace OFF
innodb_use_sys_malloc ON
innodb_use_trim OFF
innodb_version 5.6.42-84.2
innodb_write_io_threads 4
insert_id 0
interactive_timeout 28800
join_buffer_size 262144
join_buffer_space_limit 2097152
join_cache_level 2
keep_files_on_create OFF
key_buffer_size 33554432
key_cache_age_threshold 300
key_cache_block_size 1024
key_cache_division_limit 100
key_cache_file_hash_size 512
key_cache_segments 0
large_files_support ON
large_page_size 0
large_pages OFF
last_gtid
last_insert_id 0
lc_messages en_US
lc_messages_dir /usr/share/mysql
lc_time_names en_US
license GPL
local_infile ON
lock_wait_timeout 31536000
locked_in_memory OFF
log_bin OFF
log_bin_basename
log_bin_index
log_bin_trust_function_creators OFF
log_error /var/log/mysql/error.log
log_output FILE
log_queries_not_using_indexes OFF
log_slave_updates OFF
log_slow_admin_statements OFF
log_slow_filter admin,filesort,filesort_on_disk,full_join,full_sca...
log_slow_rate_limit 1
log_slow_slave_statements OFF
log_slow_verbosity
log_tc_size 24576
log_warnings 1
long_query_time 10.000000
low_priority_updates OFF
lower_case_file_system OFF
lower_case_table_names 0
master_verify_checksum OFF
max_allowed_packet 16777216
max_binlog_cache_size 18446744073709547520
max_binlog_size 104857600
max_binlog_stmt_cache_size 18446744073709547520
max_connect_errors 100
max_connections 512
max_delayed_threads 20
max_digest_length 1024
max_error_count 64
max_heap_table_size 16777216
max_insert_delayed_threads 20
max_join_size 18446744073709551615
max_length_for_sort_data 1024
max_long_data_size 16777216
max_prepared_stmt_count 16382
max_relay_log_size 104857600
max_seeks_for_key 4294967295
max_session_mem_used 9223372036854775807
max_sort_length 1024
max_sp_recursion_depth 0
max_statement_time 0.000000
max_tmp_tables 32
max_user_connections 0
max_write_lock_count 4294967295
metadata_locks_cache_size 1024
metadata_locks_hash_instances 8
min_examined_row_limit 0
mrr_buffer_size 262144
multi_range_count 256
myisam_block_size 1024
myisam_data_pointer_size 6
myisam_max_sort_file_size 9223372036853727232
myisam_mmap_size 18446744073709551615
myisam_recover_options BACKUP
myisam_repair_threads 1
myisam_sort_buffer_size 134216704
myisam_stats_method NULLS_UNEQUAL
myisam_use_mmap OFF
mysql56_temporal_format ON
net_buffer_length 16384
net_read_timeout 30
net_retry_count 10
net_write_timeout 60
old OFF
old_alter_table OFF
old_mode
old_passwords OFF
open_files_limit 8551
optimizer_prune_level 1
optimizer_search_depth 62
optimizer_selectivity_sampling_limit 100
optimizer_switch index_merge=on,index_merge_union=on,index_merge_so...
optimizer_use_condition_selectivity 1
performance_schema ON
performance_schema_accounts_size 100
performance_schema_digests_size 10000
performance_schema_events_stages_history_long_size 10000
performance_schema_events_stages_history_size 10
performance_schema_events_statements_history_long_... 10000
performance_schema_events_statements_history_size 10
performance_schema_events_waits_history_long_size 10000
performance_schema_events_waits_history_size 10
performance_schema_hosts_size 100
performance_schema_max_cond_classes 80
performance_schema_max_cond_instances 3348
performance_schema_max_digest_length 1024
performance_schema_max_file_classes 50
performance_schema_max_file_handles 32768
performance_schema_max_file_instances 3077
performance_schema_max_mutex_classes 200
performance_schema_max_mutex_instances 10072
performance_schema_max_rwlock_classes 40
performance_schema_max_rwlock_instances 5024
performance_schema_max_socket_classes 10
performance_schema_max_socket_instances 1044
performance_schema_max_stage_classes 150
performance_schema_max_statement_classes 178
performance_schema_max_table_handles 4000
performance_schema_max_table_instances 12500
performance_schema_max_thread_classes 50
performance_schema_max_thread_instances 1124
performance_schema_session_connect_attrs_size 512
performance_schema_setup_actors_size 100
performance_schema_setup_objects_size 100
performance_schema_users_size 100
pid_file /var/run/mysqld/mysqld.pid
plugin_dir /usr/lib/x86_64-linux-gnu/mariadb18/plugin/
plugin_maturity unknown
port 3306
preload_buffer_size 32768
profiling OFF
profiling_history_size 15
progress_report_time 5
protocol_version 10
proxy_user
pseudo_slave_mode OFF
pseudo_thread_id 16370
query_alloc_block_size 16384
query_cache_limit 16777216
query_cache_min_res_unit 4096
query_cache_size 0
query_cache_strip_comments OFF
query_cache_type OFF
query_cache_wlock_invalidate OFF
query_prealloc_size 24576
rand_seed1 357956615
rand_seed2 975103936
range_alloc_block_size 4096
read_buffer_size 131072
read_only OFF
read_rnd_buffer_size 262144
relay_log
relay_log_basename
relay_log_index
relay_log_info_file relay-log.info
relay_log_purge ON
relay_log_recovery OFF
relay_log_space_limit 0
replicate_annotate_row_events OFF
replicate_do_db
replicate_do_table
replicate_events_marked_for_skip REPLICATE
replicate_ignore_db
replicate_ignore_table
replicate_wild_do_table
replicate_wild_ignore_table
report_host
report_password
report_port 3306
report_user
rowid_merge_buff_size 8388608
secure_auth ON
secure_file_priv
server_id 0
skip_external_locking ON
skip_name_resolve OFF
skip_networking OFF
skip_parallel_replication OFF
skip_replication OFF
skip_show_database OFF
slave_compressed_protocol OFF
slave_ddl_exec_mode IDEMPOTENT
slave_domain_parallel_threads 0
slave_exec_mode STRICT
slave_load_tmpdir /tmp
slave_max_allowed_packet 1073741824
slave_net_timeout 3600
slave_parallel_max_queued 131072
slave_parallel_mode conservative
slave_parallel_threads 0
slave_run_triggers_for_rbr NO
slave_skip_errors OFF
slave_sql_verify_checksum ON
slave_transaction_retries 10
slave_type_conversions
slow_launch_time 2
slow_query_log OFF
slow_query_log_file prod4-slow.log
socket /var/run/mysqld/mysqld.sock
sort_buffer_size 2097152
sql_auto_is_null OFF
sql_big_selects ON
sql_buffer_result OFF
sql_log_bin ON
sql_log_off OFF
sql_mode NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
sql_notes ON
sql_quote_show_create ON
sql_safe_updates OFF
sql_select_limit 18446744073709551615
sql_slave_skip_counter 0
sql_warnings OFF
ssl_ca
ssl_capath
ssl_cert
ssl_cipher
ssl_crl
ssl_crlpath
ssl_key
storage_engine InnoDB
stored_program_cache 256
strict_password_validation ON
sync_binlog 0
sync_frm ON
sync_master_info 10000
sync_relay_log 10000
sync_relay_log_info 10000
system_time_zone UTC
table_definition_cache 400
table_open_cache 4000
thread_cache_size 8
thread_concurrency 10
thread_handling one-thread-per-connection
thread_pool_idle_timeout 60
thread_pool_max_threads 1000
thread_pool_oversubscribe 3
thread_pool_size 8
thread_pool_stall_limit 500
thread_stack 196608
time_format %H:%i:%s
time_zone SYSTEM
timed_mutexes OFF
timestamp 1561288673.757972
tmp_table_size 16777216
tmpdir /tmp
transaction_alloc_block_size 8192
transaction_prealloc_size 4096
tx_isolation REPEATABLE-READ
tx_read_only OFF
unique_checks ON
updatable_views_with_limit YES
use_stat_tables NEVER
userstat OFF
version 10.1.38-MariaDB-0ubuntu0.18.10.2
version_comment Ubuntu 18.10
version_compile_machine x86_64
version_compile_os debian-linux-gnu
version_malloc_library system jemalloc
version_ssl_library YaSSL 2.4.4
wait_timeout 28800
warning_count 0
wsrep_osu_method TOI
wsrep_auto_increment_control ON
wsrep_causal_reads OFF
wsrep_certification_rules strict
wsrep_certify_nonpk ON
wsrep_cluster_address
wsrep_cluster_name my_wsrep_cluster
wsrep_convert_lock_to_trx OFF
wsrep_data_home_dir /var/lib/mysql/
wsrep_dbug_option
wsrep_debug OFF
wsrep_desync OFF
wsrep_dirty_reads OFF
wsrep_drupal_282555_workaround OFF
wsrep_forced_binlog_format NONE
wsrep_gtid_domain_id 0
wsrep_gtid_mode OFF
wsrep_load_data_splitting ON
wsrep_log_conflicts OFF
wsrep_max_ws_rows 0
wsrep_max_ws_size 2147483647
wsrep_mysql_replication_bundle 0
wsrep_node_address
wsrep_node_incoming_address AUTO
wsrep_node_name prod4
wsrep_notify_cmd
wsrep_on OFF
wsrep_patch_version wsrep_25.24
wsrep_provider none
wsrep_provider_options
wsrep_recover OFF
wsrep_reject_queries NONE
wsrep_replicate_myisam OFF
wsrep_restart_slave OFF
wsrep_retry_autocommit 1
wsrep_slave_fk_checks ON
wsrep_slave_uk_checks OFF
wsrep_slave_threads 1
wsrep_sst_auth
wsrep_sst_donor
wsrep_sst_donor_rejects_queries OFF
wsrep_sst_method rsync
wsrep_sst_receive_address AUTO
wsrep_start_position 00000000-0000-0000-0000-000000000000:-1
wsrep_sync_wait 0
Output of tuning-primer also seems fine:
INNODB STATUS
Current InnoDB index space = 42.05 G
Current InnoDB data space = 48.17 G
Current InnoDB buffer pool free = 40 %
Current innodb_buffer_pool_size = 12.00 G
Depending on how much space your innodb indexes take up it may be safe
to increase this value to up to 2 / 3 of total system memory
MEMORY USAGE
Max Memory Ever Allocated : 12.16 G
Configured Max Per-thread Buffers : 1.40 G
Configured Max Global Buffers : 12.04 G
Configured Max Memory Limit : 13.45 G
Physical Memory : 62.79 G
Max memory limit seem to be within acceptable norms
After a few hours the output of top looks like this:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
26350 mysql 20 0 64.3g 55.3g 19300 S 29.9 88.0 354:20.10 /usr/sbin/mysqld
Shortly after this, oom_reaper would kill mysqld:
[3308788.693609] Out of memory: Kill process 30421 (mysqld) score 915 or sacrifice child
[3308788.693727] Killed process 30421 (mysqld) total-vm:78894468kB, anon-rss:64508740kB, file-rss:0kB, shmem-rss:0kB
[3308790.493095] oom_reaper: reaped process 30421 (mysqld), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
I'm using 10.1.38-MariaDB-0ubuntu0.18.10.2 on Ubuntu 18.10.
Pretty much all of the tables are InnoDB; the result of SHOW GLOBAL STATUS is here: https://pastebin.com/7ayJBpgC
New settings, after changing some settings as Rick James suggested:
https://pastebin.com/N55AzWFw
A:
After Rick James suggested that some default settings were weird, I decided to upgrade to the official packages (1:10.2.25+maria~cosmic) instead of the Ubuntu-distributed ones, and the memory problem seems to have gone away without any changes to the config.
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2736 mysql 20 0 27.4g 15.0g 22624 S 141.7 23.9 384:24.58 /usr/sbin/mysqld
current variables: https://pastebin.com/7C96E6J4
current status: https://pastebin.com/G55ydpJy
Additional data as requested by Wilson Hauck:
We are using 2x Samsung SSD 850 EVO 500GB in RAID 1 on an i7-6700K.
iostat -xm 5 3:
Linux 4.18.0-20-generic (prod4) 07/04/2019 _x86_64_ (8 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
7.10 0.08 1.43 1.53 0.00 89.86
Device r/s w/s rMB/s wMB/s rrqm/s wrqm/s %rrqm %wrqm r_await w_await aqu-sz rareq-sz wareq-sz svctm %util
sdb 6.35 205.97 0.41 4.21 1.79 63.35 21.94 23.52 0.33 1.85 0.67 66.31 20.92 1.19 25.22
sda 14.63 205.95 0.70 4.17 2.58 62.76 14.98 23.36 0.19 1.43 0.58 48.67 20.73 1.06 23.46
md0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 3.47 489.75 0.00 0.00
md2 19.86 232.03 0.84 4.23 0.00 0.00 0.00 0.00 0.00 0.00 0.00 43.43 18.65 0.00 0.00
md1 1.33 2.51 0.01 0.01 0.00 0.00 0.00 0.00 0.00 0.00 0.00 4.00 4.00 0.00 0.00
avg-cpu: %user %nice %system %iowait %steal %idle
9.38 0.00 1.61 2.18 0.00 86.83
Device r/s w/s rMB/s wMB/s rrqm/s wrqm/s %rrqm %wrqm r_await w_await aqu-sz rareq-sz wareq-sz svctm %util
sdb 0.00 352.80 0.00 6.88 0.00 57.60 0.00 14.04 0.00 0.26 0.40 0.00 19.97 0.88 31.12
sda 0.00 353.00 0.00 6.88 0.00 57.40 0.00 13.99 0.00 0.26 0.39 0.00 19.96 0.86 30.48
md0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
md2 0.00 370.80 0.00 6.87 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18.98 0.00 0.00
md1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
avg-cpu: %user %nice %system %iowait %steal %idle
8.00 0.00 1.69 2.16 0.00 88.14
Device r/s w/s rMB/s wMB/s rrqm/s wrqm/s %rrqm %wrqm r_await w_await aqu-sz rareq-sz wareq-sz svctm %util
sdb 0.00 387.60 0.00 7.49 0.00 76.20 0.00 16.43 0.00 0.24 0.42 0.00 19.78 0.83 32.16
sda 0.00 387.60 0.00 7.49 0.00 76.20 0.00 16.43 0.00 0.31 0.42 0.00 19.78 0.82 31.60
md0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
md2 0.00 422.20 0.00 7.48 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18.15 0.00 0.00
md1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Identification of a young tree in the alpine region of southeastern France
I have a young tree growing in my backyard (in Grenoble, France), and I was not able to identify it, despite searching with many identifier tools.
It's 2 years old now and must be 2 meters high. It has a grey trunk, its leaves grow on brown/red stems that protrude irregularly from the trunk.
The leaves are pointed, gently serrated, covered with a light fuzz, with a distinct lattice of veins.
Could you help me identify it?
A:
The tree grew its first fruits this spring, so I could definitively identify it: it is a goat willow! This is consistent with its spontaneous sprouting in my yard, as it is a common tree in my region.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Problem getting a delete button on an UpdateView to redirect
Python and Django newbie here
I've been trying to get a delete button onto a standard UpdateView form from Django and then get that to redirect to the DeleteView if that button is pressed instead of the submit button. I have that working, but I'm not sure how to redirect to the corresponding delete page on click.
I expect I need to change the reverse_lazy('app:submission_delete') to include the id somehow, but I'm a bit lost here, and my google fu isn't helping much either.
views.py
class AssessmentUpdate(UpdateView):
model = Submission
fields = '__all__'
success_url = reverse_lazy('app:index')
def form_valid(self, request):
if 'Delete' in self.request.POST:
reverse_lazy('app:Submission_delete')
else:
self.object = request.save()
return HttpResponseRedirect(self.get_success_url())
urls.py
app_name = 'app'
urlpatterns = [
re_path(r'^$', views.index, name='index'),
path('<int:submission_id>/', views.detail, name='detail'),
path('<int:pk>/update/', views.AssessmentUpdate.as_view(), name='Submission_update'),
path('<int:pk>/delete/', views.AssessmentDelete.as_view(), name='Submission_delete'),
]
A:
If a URL needs args or kwargs:
reverse_lazy('app:Submission_delete', kwargs={'pk': object.pk})
where object is the object which would be deleted.
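Applied to the view in the question, a sketch (the 'Delete' check and imports are taken from the question's code; get_object is standard on UpdateView):
def form_valid(self, form):
    if 'Delete' in self.request.POST:
        # redirect to the delete page for the object being edited
        return HttpResponseRedirect(
            reverse_lazy('app:Submission_delete',
                         kwargs={'pk': self.get_object().pk}))
    return super().form_valid(form)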
ALTERNATIVELY:
Just use a regular hyper link to the delete page. Use django's template tag url for that:
{% url 'app:Submission_delete' pk=object.pk %}
You can still style that link like a button via CSS, so users wouldn't even know the difference.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to calculate numerical trend lines in python
I have a monitoring application in Python 2.6 that calculates the number of entries in a queue (queue_len). I want to create a simple function that takes these queue_len values over time, computes their rate of change, and extracts a trend from them.
The goal is to start up or shut down queue-processing nodes depending on the positive or negative trend over time, without flip-flopping.
MACD from the financial arena looks like sort of what I need, or does it? Looking for any help here.
A:
Isn't that just a simple derivative?
def derivs(l):
return [l[i + 1] - l[i] for i in range(len(l) - 1)]
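For example (a hypothetical series of queue_len samples):
>>> derivs([20, 23, 25, 24])
[3, 2, -1]
A run of positive values means the queue is growing; smoothing these differences (e.g. averaging the last few) would help avoid flip-flopping on noise.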
|
{
"pile_set_name": "StackExchange"
}
|
Q:
ZODB not able to commit
I am using ZODB for the first time, just trying to commit data with FileStorage. But when I run the same script a second time, I am not able to commit the new object. Here is my script:
from ZODB import FileStorage,DB
import transaction
storage = FileStorage.FileStorage('/tmp/test.fs')
db = DB(storage)
conn = db.open()
root = conn.root()
#root['layer']={}
root['layer']['2b']={"id":'2b','name':'some name'}
transaction.commit()
conn.close()
db.close()
storage.close()
When I repeat the code once again, just changing the id to root['layer']['2c'], and exit Python, the second object is not committed; I only have the first object. What could be the reason?
A:
The ZODB persistence layer detects changes by hooking into the python __setattr__ hook, marking the persistent object as changed every time you set an attribute.
But if you use a primitive mutable object like a python dictionary, then there is no way for the persistence machinery to detect the changes, as there is no attribute being written. You have three options to work around this:
Use a persistent mapping
The persistent package includes a persistent mapping class, which is basically a python dictionary implementation that is persistent and detects changes directly by hooking into __setitem__ and other mapping hooks. The root object in your example is basically a persistent mapping.
To use, just replace all dictionaries with persistent mappings:
from persistent.mapping import PersistentMapping
root['layer'] = PersistentMapping()
Force a change detection by triggering the hook
You could just set the key again, or on a persistent object, set the attribute again to force the object to be changed:
root['layer'] = root['layer']
Flag the persistent object as changed
You can set the _p_changed flag on the nearest persistent object. Your root object is the only persistent object you have, everything else is python dictionaries, so you need to flag that as changed:
root._p_changed = 1
|
{
"pile_set_name": "StackExchange"
}
|
Q:
System freezes completely with Intel Bay Trail
My system freezes completely at random, frequent intervals. I started having the same problem in Ubuntu 14.04, but after a recent upgrade to 16.04 there is no improvement; in fact, it seems worse.
When it happens, it's impossible to do anything. I've tried everything in this thread: What to do when Ubuntu freezes but nothing works, I have to hard reset. I have read all the system logs and journalctl but there is never any information that could help diagnose the problem.
This is a dual-boot system with Windows 10 and there's no problem there, so it's not defective hardware.
My laptop has an Intel Bay Trail processor (Pentium N3540)
A:
Your processor is affected by the c-state bug
This causes total freezes when the CPU tries to enter an unsupported sleep state. It's a problem for many Bay Trail devices especially with newer (4.*) kernels.
Affected processors AFAIK:
Atom Z3735F (Asus X205TA, Acer Aspire Switch 10, Lenovo MIIX 3 1030)
Atom Z3735G
Celeron J1900 (Asus ET2325IUK, shuttle XS35V4)
Celeron N2940 (Acer Aspire ES1-711, Chromebook)
Celeron N2840 (Acer Aspire ES1-311)
Celeron N2930 (Jetway JBC311U93, Zotac Nano CI320)
Pentium N3520
Pentium N3530 (Acer V3-111P)
Pentium N3540 (Dell Inspiron 15 3000, Lenovo G50, ASUS X550MJ)
(please (suggest an) edit to add your own device if affected)
Complete list of Bay Trail processors may be found here
There is a simple workaround for this until it gets properly fixed upstream.
You just need to pass a kernel boot parameter and the random freezing stops completely. The parameter may increase battery consumption slightly, but it will give you a usable system.
You do this by editing the configuration file for GRUB:
Boot Ubuntu and open a terminal by pressing Ctrl+Alt+T then type
sudo nano /etc/default/grub
Find the line that starts GRUB_CMDLINE_LINUX_DEFAULT=
This needs to be changed to include intel_idle.max_cstate=1
So after your edit it reads something like
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_idle.max_cstate=1"
quiet and splash are default parameters for Ubuntu Desktop - no need to change them, or any other pre-existing parameters
Now save the file by pressing Ctrl+O then Enter, and exit by pressing Ctrl+X
Now run
sudo update-grub
Then reboot.
What to do if you don't have enough time to do this before the system hangs
No problem. As explained on the help page I linked to earlier, you can add the parameter to GRUB before booting. Note that this only passes the parameter for the current boot, so you still have to edit /etc/default/grub once you have booted to make the change permanent.
You need to get to the GRUB menu. If you are dual booting this will appear anyway, if not you have to press and hold (or tap) shift after pressing the power button to turn on.
When you get to this screen select Advanced Options for Ubuntu. You can move the cursor to a different kernel, or leave it in place to edit options for the default. Instead of pressing enter, press e and you will go into edit mode, looking vaguely like this.
Move the cursor down to where it says quiet splash, put a space after splash and carefully type intel_idle.max_cstate=1 making sure there is a space after it as well.
Now press F10 or Ctrl+x to boot.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to access user's video or other media files stored in iphone?
I do not mean the assets inside the Photos app.
I want to know if it is possible at all to access media files a user has in Music or Videos on an iPhone.
If it is possible, which framework I should use?
A:
It seems impossible without jailbreaking.
P.S. Just now I found an app on my iPad that can access the Videos app list. My iPad is on iOS 5.1.1. Maybe it is possible now!
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How can I read multiple csvs and retain the number in the file name for each?
I have multiple csv files in a folder none of which have a header. I want to preserve the order set out by the number at the end of the file. The file names are "output-1.csv", "output-2.csv" and so on. Is there a way to include the file name of each csv so I know which data corresponds to which file. The answer [here][1] gets close to what I want.
library(tidyverse)
#' Load the data ----
mydata <-
list.files(path = "C:\\Users\\Documents\\Manuscripts\\experiment1\\output",
pattern = "*.csv") %>%
map_df( ~ read_csv(., col_names = F))
mydata
A:
You can use:
library(tidyverse)
mydata <- list.files("C:\\Users\\Documents\\Manuscripts\\experiment1\\output",
pattern = ".csv$", full.names = T) %>%
set_names(str_sub(basename(.), 1, -5)) %>%
map_dfr(read_csv, .id = "file")
|
{
"pile_set_name": "StackExchange"
}
|
Q:
MySQL Left Join Query not working
The history table contains dates that I'm trying to match against the participation table. If the date doesn't exist in the participation table, then I want the record(s) pulled out so I can enter the participation data. But what I have doesn't work. Here's a rundown of what I'm using:
MariaDB [sotp]> describe history;
+--------------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+--------------+------------------+------+-----+---------+----------------+
| historyid | int(10) unsigned | NO | PRI | NULL | auto_increment |
| amount | float | NO | | NULL | |
| subsidy | char(1) | NO | | NULL | |
| last_payment | date | NO | | NULL | |
| amount_paid | float | NO | | NULL | |
| balance | float | NO | | NULL | |
| attend | char(1) | NO | | N | |
| attend_date | date | NO | | NULL | |
| groupid | int(11) unsigned | NO | | NULL | |
| clientid | int(10) unsigned | NO | MUL | NULL | |
| memberid | int(10) unsigned | NO | MUL | NULL | |
+--------------+------------------+------+-----+---------+----------------+
MariaDB [sotp]> select clientid, attend_date
-> from history
-> where memberid = "1"
-> AND MONTH(attend_date) = "10"
-> AND YEAR(attend_date) = "2016"
-> AND attend_date <> "0000-00-00"
-> ORDER BY attend_date ASC;
+----------+-------------+
| clientid | attend_date |
+----------+-------------+
| 3 | 2016-10-11 |
| 1 | 2016-10-11 |
| 7 | 2016-10-11 |
| 2 | 2016-10-11 |
| 4 | 2016-10-11 |
| 5 | 2016-10-11 |
| 8 | 2016-10-11 |
| 9 | 2016-10-11 |
+----------+-------------+
MariaDB [sotp]> describe participation;
+-----------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-----------+------------------+------+-----+---------+----------------+
| partid | int(11) | NO | PRI | NULL | auto_increment |
| notes | varchar(255) | NO | | NULL | |
| groupdate | date | NO | | NULL | |
| clientid | int(10) unsigned | NO | MUL | NULL | |
| memberid | int(10) unsigned | NO | MUL | NULL | |
+-----------+------------------+------+-----+---------+----------------+
MariaDB [sotp]> select clientid, groupdate
-> from participation
-> where memberid = "1"
-> AND MONTH(groupdate) = "10"
-> AND YEAR(groupdate) = "2016"
-> AND groupdate <> "0000-00-00"
-> ORDER BY groupdate ASC;
+----------+------------+
| clientid | groupdate |
+----------+------------+
| 2 | 2016-10-11 |
| 4 | 2016-10-11 |
+----------+------------+
And my left join query:
SELECT historyid, p.groupdate, h.attend_date, h.clientid, h.memberid
FROM history AS h
LEFT JOIN participation AS p ON p.groupdate = h.attend_date
WHERE h.memberid = "1"
AND MONTH(h.attend_date) = "10"
AND YEAR(h.attend_date) = "2016"
AND h.attend_date <> "0000-00-00"
ORDER BY h.attend_date ASC;
+-----------+------------+-------------+----------+----------+
| historyid | groupdate | attend_date | clientid | memberid |
+-----------+------------+-------------+----------+----------+
| 58 | 2016-10-11 | 2016-10-11 | 3 | 1 |
| 61 | 2016-10-11 | 2016-10-11 | 2 | 1 |
| 59 | 2016-10-11 | 2016-10-11 | 1 | 1 |
| 62 | 2016-10-11 | 2016-10-11 | 4 | 1 |
| 60 | 2016-10-11 | 2016-10-11 | 7 | 1 |
| 63 | 2016-10-11 | 2016-10-11 | 5 | 1 |
| 61 | 2016-10-11 | 2016-10-11 | 2 | 1 |
| 64 | 2016-10-11 | 2016-10-11 | 8 | 1 |
| 62 | 2016-10-11 | 2016-10-11 | 4 | 1 |
| 65 | 2016-10-11 | 2016-10-11 | 9 | 1 |
| 63 | 2016-10-11 | 2016-10-11 | 5 | 1 |
| 64 | 2016-10-11 | 2016-10-11 | 8 | 1 |
| 65 | 2016-10-11 | 2016-10-11 | 9 | 1 |
| 58 | 2016-10-11 | 2016-10-11 | 3 | 1 |
| 59 | 2016-10-11 | 2016-10-11 | 1 | 1 |
| 60 | 2016-10-11 | 2016-10-11 | 7 | 1 |
+-----------+------------+-------------+----------+----------+
The groupdate field should be NULL except for clientid 2 and 4. Plus it gives the data twice. What am I doing wrong?
Best regards.
UPDATE
Per the request of kasparg:
MariaDB [sotp]> select *
-> from participation;
+--------+-----------------------------------------------+------------+----------+----------+
| partid | notes | groupdate | clientid | memberid |
+--------+-----------------------------------------------+------------+----------+----------+
| 824 | aaaaaaaaaaaaaaaaaaaaaazzzzzzzzzzzzzzzzzzzzzzz | 2016-01-26 | 3 | 1 |
| 825 | lol hahaha and stuff | 2016-01-26 | 4 | 1 |
| 826 | aaaaaaaaaaaaaaaaaaaaaa | 2016-01-26 | 2 | 1 |
| 827 | zzzzzzzzzzzzzzaaaaaaaaaaaaaaaaaa | 2016-01-26 | 1 | 1 |
| 828 | llllllllllllllllllllllllllllllllllll | 2016-01-28 | 3 | 1 |
| 829 | bbbbbbbbbbbbbbbbbbb | 2016-01-28 | 1 | 1 |
| 830 | Absent | 2016-01-28 | 4 | 1 |
| 831 | Absent | 2016-01-28 | 2 | 1 |
| 832 | llllkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk | 2016-01-29 | 5 | 1 |
| 833 | xxxxxxxxxxxxxxxzzzzzzzzzzzzzzzzzz | 2016-01-29 | 4 | 1 |
| 834 | xxxxxxxxxxxxxxxxxxxxxxxx | 2016-01-29 | 2 | 1 |
| 835 | ccccccccccccccccccccccc | 2016-01-29 | 1 | 1 |
| 836 | llllkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk | 2016-01-29 | 3 | 1 |
| 1063 | zzzzzzzzzzzzzzzzzzzzzzzzzzzz | 2016-01-30 | 3 | 1 |
| 1064 | ddddddddddddddddddddddddd | 2016-01-30 | 1 | 1 |
| 1065 | No entry made. | 2016-01-30 | 4 | 1 |
| 1066 | No entry made. | 2016-01-30 | 2 | 1 |
| 1075 | 2016-02-26: car wreck | 2016-10-11 | 2 | 1 |
| 1076 | 2016-02-26: broken legs | 2016-10-11 | 4 | 1 |
+--------+-----------------------------------------------+------------+----------+----------+
UPDATE
MariaDB [sotp]> SELECT historyid, p.groupdate, h.attend_date, p.clientid, h.clientid, h.memberid
-> FROM history AS h
-> LEFT JOIN participation AS p ON p.groupdate = h.attend_date and p.clientid = h.clientid
-> WHERE h.memberid = "1"
-> AND h.clientid = "1"
-> AND MONTH(h.attend_date) = "10"
-> AND YEAR(h.attend_date) = "2016"
-> AND h.attend_date <> "0000-00-00"
-> AND p.groupdate = "NULL"
-> ORDER BY h.attend_date ASC;
Empty set, 1 warning (0.00 sec)
I hard-coded h.clientid = "1" and still get nothing. And that record should return a NULL value for groupdate.
A:
If I understand you correctly, you want to get records where the participation date is missing. Add an additional criterion to the WHERE clause: AND p.groupdate IS NULL
Also, note that you are joining participation only on groupdate; join it also on clientid, like this: LEFT JOIN participation AS p ON p.groupdate = h.attend_date and p.clientid = h.clientid
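Putting both changes together, the query would look roughly like this (a sketch based on the columns shown above):
SELECT h.historyid, p.groupdate, h.attend_date, h.clientid, h.memberid
FROM history AS h
LEFT JOIN participation AS p
  ON p.groupdate = h.attend_date
  AND p.clientid = h.clientid
WHERE h.memberid = 1
  AND MONTH(h.attend_date) = 10
  AND YEAR(h.attend_date) = 2016
  AND h.attend_date <> '0000-00-00'
  AND p.groupdate IS NULL
ORDER BY h.attend_date ASC;
Note that p.groupdate IS NULL (the SQL null test) is not the same as p.groupdate = "NULL", which compares against the literal string 'NULL' and explains the empty set in the update above.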
|
{
"pile_set_name": "StackExchange"
}
|
Q:
WPF ComboBoxItem Style gets loaded after Combobox gets focus
I am developing an application to learn MVVM. I have an issue now, but I could not find a similar case to mine.
First my code:
<ComboBox Width="100" DisplayMemberPath="Name" ItemsSource="{Binding MyList}">
<ComboBox.Resources>
<Style TargetType="ComboBoxItem">
<Setter Property="IsSelected" Value="{Binding IsSelected}"/>
</Style>
</ComboBox.Resources>
</ComboBox>
I have a list of simple objects which are structured like below:
Property: string Name
Property: bool IsSelected
I bind this list of objects to my ComboBox and bind the IsSelected property of my object to the IsSelected property of the ComboBoxItem. The bindings work fine: my objects are in the ComboBox, and if I select an item the IsSelected property gets updated.
BUT the issue is that at the start of the application no selected item is visible. I have to click on the ComboBox before the selected item becomes visible. I think the style of the ComboBoxItem gets loaded after its parent gets focus.
Are there any solutions?
A:
You should set or bind the SelectedItem property of the ComboBox to an instance of the item to be selected:
<ComboBox Width="100" DisplayMemberPath="Name" ItemsSource="{Binding MyList}" SelectedItem="{Binding Selected}">
...
Selected = MyList.FirstOrDefault(x => x.IsSelected == true);
This is how you select an item in the ComboBox using MVVM. You don't define a ComboBoxItem style.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Help transform recommendations questions, desirable?
Not everyone is a hardcore gamer
But almost everyone plays cards and boardgames once in a while. So they can be a bit "lazy" and ask a question like:
I can't decide between Risk and Axis&Allies, which one is better?
If I have understood correctly, this kind of question would be closed. However, I think there is a legitimate "ask the experts" element there, and I doubt that this kind of player is willing to read the FAQs, research different games, and post a more elaborate, objective question; it's just a game, right?
If the poster is asked about what he likes and wants, the question can be transformed [edited] a bit. Yes, it goes somewhat against the SE model and coddles the asker a bit, but..
Is a casual gamer (almost anyone) a regular member of B&CG?
I mean, how many copies of Monopoly is he going to buy in his life anyway?
So then, what's the attitude towards that kind of questions here?
A:
No, Boardgames.SE has few if any regular casual gamers
A regular user would be someone that logs in once a day, or at least once a week. Someone who is only casually interested in board games is unlikely to search out information on Boardgames.SE, or on any other site for that matter.
Recommendation questions do have an element of expert knowledge that can determine whether a game would be a good fit, or with a large enough sample size a poll could reflect which game is the best (like RottenTomatoes with movies and critics/people ratings). The problem with those sorts of questions is that they don't fit the Stack Exchange Q&A format:
Open recommendations result in long lists, with no definitive answer (e.g. "What is a good game for 3 players?").
Open recommendations quickly become out of date, and are difficult to maintain.
They are too localized (e.g. "What is a better game [for me], Risk or Axis&Allies?").
They become a poll of not necessarily the best answer, but the most popular answer.
There are other resources on the net that can provide better answers for these sorts of questions. BGG has a Gift Guide 20xx, and Games Magazine has Game of the Year. SE users are also free to ask for suggestions in chat, which they should be directed to. There, they can be prodded to provide more information that would be necessary to pin down which games would be best for them.
A:
I'm not clear precisely what your question is, but it seems to include the assumptions: (1) casual game-players will find this site and ask questions on it; (2) a casual game-player won't have the time and intelligence needed to ask a good Stack Exchange question; and (3) a bad question is better than no question. I find all of them factually dubious, and in any case you are ignoring the reasons why the B&CG rules were created in the first place. Stack Exchange members are not random visitors: they are expected to have some expertise, which they freely give for the sake of improving (the hobby, in this case). I for one would stop visiting if the front page were overwhelmed by "what game should I buy as a Christmas present for my 12-year-old nephew?" questions.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Calculate distance traveled with opposite force being applied to item travelling
An item moves for $20$ seconds on a straight path with an initial velocity of $20\,\mathrm{m/s}$. For this entire duration, a force $(X)$ is applied to the item, which makes it decelerate at $6\,\mathrm{m/s^2}$. What is the item's distance from its initial point after the entire 20 seconds have passed?
I think, distance = (speed)(time) = (20)(20) = 400m.
And also, distance = (speed)(time) = (6)(20) = 120m.
Therefore, distance traveled over 20 seconds is 400m – 120m = 280m.
Is this correct? I am worried that I might be oversimplifying this.
Thanks.
A:
Let us apply
$$s=v_0t+\frac{1}{2}at^2$$
with
$v_0=20\ \mathrm{m/s}$
$a=-6\ \mathrm{m/s^2}$
$t=20\ \mathrm{s}$
$$\implies s=v_0t+\frac{1}{2}at^2=20\cdot 20-\frac{1}{2}\cdot 6\cdot 20^2=400-1200=-800\ \mathrm{m}$$
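The negative sign deserves a remark: under a constant deceleration of $6\ \mathrm{m/s^2}$ the item does not merely slow down for the whole 20 s; it stops and reverses. A quick check (a worked step added here for clarity, not part of the original answer):
$$t_{\text{stop}}=\frac{v_0}{|a|}=\frac{20}{6}\approx 3.33\ \mathrm{s},\qquad s=\underbrace{\frac{v_0^2}{2|a|}}_{\approx\,33.3\ \mathrm{m\ forward}}-\underbrace{\frac{1}{2}|a|\,(t-t_{\text{stop}})^2}_{\approx\,833.3\ \mathrm{m\ backward}}=-800\ \mathrm{m}$$
So the item ends up $800\ \mathrm{m}$ behind its starting point, which is also why the naive (speed)(time) subtraction in the question cannot work.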
|
{
"pile_set_name": "StackExchange"
}
|
Q:
could to describe past ability
Someone was lost in the desert for 3 days. When we found him, we said to him:
"How could you survive in desert those 3 days?"
Is the usage of "could" correct in this sentence?
A:
'Could' is not correct in this context because it is a modal auxiliary verb, which implies capability. For example, "He could have reached the airport earlier had there not been any traffic." However, since the person has already been found and the event actually happened in the past, the correct verb to use is 'did'. Also, the word 'those' is superfluous in your case. Hence the sentence should read as follows.
"How did you survive in the desert for 3 days?"
For your reference, have a look at these lists of verbs. You can also try to form sentences of your own with them.
Irregular verbs
Regular verbs
|
{
"pile_set_name": "StackExchange"
}
|
Q:
select option - how to use change event to show news in selected category
I have 2 tables like this:
Category {categoryId, categoryName}
News {newsId, newsTitle, categoryId}
In my asp.net mvc project, I have a View:
<script type="text/javascript" src="../Scripts/jquery-1.7.1.min.js"></script>
<script type="text/javascript">
$(document).ready(
function ()
{
$("#slCategory").change(
function ()
{
var value = $(this).val();
if (value == "All")
location.href = "GetAllByCategory";
else
location.href = "GetAllByCategory/?category=" + value;
}
);
}
);
</script>
<h2>Get All By Category</h2>
Select Category
<select name="slCategory" id="slCategory">
<option value="All">All</option>
<option value="1">Sport</option>
<option value="2">Social</option>
<option value="3">Economy</option>
</select>
@foreach (var item in Model)
{
<p>@item.newsTitle</p>
}
It doesn't work!
How do I use the change event to show the news in the selected category?
A:
Try this,
<script type="text/javascript">
$(document).ready(
function ()
{
$("#slCategory").change(
function ()
{
var value = $("#slCategory").val();
if (value == "All")
window.location.href = '@Url.Action("GetAllByCategory", "YourController")';
else
window.location.href = '@Url.Action("GetAllByCategory", "YourController")?category=' + value;
}
);
}
);
</script>
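For the redirect to actually filter the news, the controller action has to accept the category value. A minimal sketch of what that action might look like (hypothetical: the controller name, the db data context, and the view are placeholders, not taken from the original post):
public class NewsController : Controller
{
    // GET: /News/GetAllByCategory?category=2
    // A missing category parameter means "All".
    public ActionResult GetAllByCategory(int? category)
    {
        // db stands in for whatever data context the project uses,
        // e.g. an Entity Framework DbContext exposing the News table.
        var news = category.HasValue
            ? db.News.Where(n => n.categoryId == category.Value).ToList()
            : db.News.ToList();
        return View(news);
    }
}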
|
{
"pile_set_name": "StackExchange"
}
|
Q:
issue implementing an AsyncTask thread
I have the following issue: I'm implementing a QR code game in my Android application, which is done by launching this intent:
Intent intent = new Intent(
"com.google.zxing.client.android.SCAN");
intent.setPackage("com.google.zxing.client.android");
intent.putExtra("SCAN_MODE", "QR_CODE_MODE");
startActivityForResult(intent, 0);
In other words I start the Barcode Scanner application.
What I want to do is launch this application and, whether or not the user starts scanning a barcode, have it close automatically after two minutes.
That is, of course, if the user doesn't close it himself in the meantime.
I tried implementing an AsyncTask thread like this:
in onCreate()
initTask=new InitTask();
initTask.execute();
outside of onCreate()
private class InitTask extends AsyncTask<Void,Void,Void>{
protected Void doInBackground(Void...unused){
new Runnable() {
@Override
public void run() {
try {
Thread.sleep(20000);
}
catch (InterruptedException e) {
e.printStackTrace();
}
Intent intent = new Intent(
"com.google.zxing.client.android.SCAN");
intent.setPackage("com.google.zxing.client.android");
intent.putExtra("SCAN_MODE", "QR_CODE_MODE");
startActivityForResult(intent, 0);
}
}.run();
this.cancel(true);
return null;
}
}
outside of onCreate()
public void onActivityResult(int requestCode, int resultCode, Intent intent) {
//..........
}
The problem is that the app gets opened, but it doesn't close automatically after 2 minutes.
Does someone know how I could achieve that?
A:
Update
private class InitTask extends AsyncTask<Void,Void,Void>{
@Override
protected void onPreExecute()
{
Intent intent = new Intent("com.google.zxing.client.android.SCAN");
intent.setPackage("com.google.zxing.client.android");
intent.putExtra("SCAN_MODE", "QR_CODE_MODE");
startActivityForResult(intent, 1515);
}
protected Void doInBackground(Void...unused){
try
{
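// 20000 ms is only 20 s; use 120000 for the two minutes asked for in the question.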
Thread.sleep(20000);
}
catch(Exception e)
{
System.out.println(e);
}
return null;
}
@Override
protected void onPostExecute(Void result)
{
finishActivity(1515);
}
}
Activity Result
public void onActivityResult(int requestCode, int resultCode, Intent intent) {
if (requestCode == 1515) {
if (resultCode == RESULT_OK) {
String contents = intent.getStringExtra("SCAN_RESULT");
String format = intent.getStringExtra("SCAN_RESULT_FORMAT");
System.out.println("it is ok");
// Handle successful scan
} else if (resultCode == RESULT_CANCELED) {
// Handle cancel
System.out.println("it is cancel");
}
}
}
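A side note on the design: the AsyncTask above occupies a background thread just to sleep. A Handler with postDelayed gives the same timeout without the extra thread; a minimal sketch, assuming it runs inside the same Activity and that android.os.Handler is imported (the request code 1515 is taken from the code above):
// Start the scanner as before.
Intent intent = new Intent("com.google.zxing.client.android.SCAN");
intent.setPackage("com.google.zxing.client.android");
intent.putExtra("SCAN_MODE", "QR_CODE_MODE");
startActivityForResult(intent, 1515);

// Close the scanner after two minutes unless the user finished earlier.
new Handler().postDelayed(new Runnable() {
    @Override
    public void run() {
        finishActivity(1515);
    }
}, 120000); // 120000 ms = 2 minutes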
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How can I make a sunken electrical outlet line up with an outlet cover?
I recently replaced drywall in my bathroom but made the mistake of not checking to see if the existing electrical outlet boxes lined up properly. The result is that the electrical outlet box (and more importantly the receptacle itself) are sunken in about 1/4 inch and don't line up correctly with the outlet cover wall plate. At this point the drywall is already set so I can't (or more accurately, don't want to) tear down the drywall.
Is there some sort of product that I can safely use as a spacer to have the outlet stick out an extra amount of space from the electrical box?
I'll try to get a picture or two up soon...
A:
If you use a spacer, the inset must be equal to or less than 1/4 of an inch to be NEC compliant.
See this question for relevant information: How do I extend outlets after installing a backsplash?
Personally, I would just get a box extender like the one in the answer to the question above. They are fairly cheap and better by design (in my opinion).
A:
I have used a plastic spacer from a Meccano set on the screws that hold the receptacle in the work box. That one was about 1/4 inch long and 1/4 inch round.
They look like this, although the one in the picture is not the same size.
|
{
"pile_set_name": "StackExchange"
}
|