Q:
How do I disable filesystem checking on boot in 20.04?
Whenever I boot Ubuntu 20.04 on my computer, I get a message saying "Press Ctrl+C to cancel all filesystem checks in progress." Is there any way to stop automatic disk checking?
This happens with and without the Surface Linux kernel. If you want more info (such as my GRUB config) I'll answer it in the comments.
This did not happen when I used 18.04.
A:
Hope this works for you, in brief:
tune2fs -c 0 /dev/yourdevicehere
Information came from this source: How to force file system check (fsck) after system reboot on Linux
| {
"pile_set_name": "StackExchange"
} |
Q:
What's the most pythonic way of formatting a huge set of strings while maintaining good readability and white space?
I'm working on a Mad Lib project. I have a whole story that I would like to print, but I get unwanted indentation whenever I use triple quotes to define my strings. When I try to get around this by removing the tabs and spaces that I use to keep it neat and readable, it just looks unpleasant. I ended up going with the code below, but I was thinking that maybe there's a better way to do this. Is there a more pythonic way of formatting this?
The code that I ended up with:
name, place1, place2, adj1, adj2, adj3, pNoun1, pNoun2, pNoun3, pNoun4,
aVerb1, aVerb2, aVerb3, noun = None
print ('Last summer, my mom and dad took me and %s on a trip to %s. ', % name, place1,
'The weather there is very %s! Northern %s has many %s, and ', % adj1, place1, pNoun1
'they make %s %s there. Many people also go to %s to %s or see ', % adj2, pNoun2, place2, aVerb1
'the %s. The people that live there love to eat %s and are very ', % pNoun3, pNoun4
'proud of their big %s. They also like to %s in the sun and in the ', % noun, aVerb2
'%s! It was a really %s trip!' % aVerb3, adj3)
At first I was doing it like this, but it ended up with unwanted new lines and indentation:
print('''Last summer, my mom and dad took me and %s on a trip to %s.
The weather there is very %s! Northern %s has many %s, and they
make %s %s there. Many people also go to %s to %s or see the %s.
The people that live there love to eat %s and are very proud of
their big %s. They also like to %s in the sun and in the %s! It
was a really %s trip!''' % (name, place1, adj1, place1,
pNoun1,adj2, pNoun2, place2, aVerb1, pNoun3, pNoun4, noun, aVerb2,
aVerb3, adj3))
A:
You can use format, accessing arguments by name; see the documentation.
Try:
infos = {
'name': 'name',
'noun': 'noun',
'adj1': 'adj1',
'adj2': 'adj2',
'adj3': 'adj3',
'aVerb1': 'aVerb1',
'aVerb2': 'aVerb2',
'aVerb3': 'aVerb3',
'place1': 'place1',
'place2': 'place2',
'pNoun1': 'pNoun1',
'pNoun2': 'pNoun2',
'pNoun3': 'pNoun3',
'pNoun4': 'pNoun4',
}
print('''Last summer, my mom and dad took me and {name} on a trip to {place1}.
The weather there is very {adj1}! Northern {place1} has many {pNoun1}, and they
make {adj2} {pNoun2} there. Many people also go to {place2} to {aVerb1} or see the {pNoun3}.
The people that live there love to eat {pNoun4} and are very proud of
their big {noun}. They also like to {aVerb2} in the sun and in the {aVerb3}! It
was a really {adj3} trip!'''.format(**infos))
And you can reuse a named argument in format to be more flexible:
print('{pNoun1} {aVerb1} {pNoun1}'.format(**infos))
=> pNoun1 aVerb1 pNoun1
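As a side note (this goes beyond the original answer): on Python 3 you can also combine str.format with textwrap.dedent from the standard library. dedent strips the common leading whitespace from the literal, so the triple-quoted string can stay indented along with the surrounding code. A minimal sketch with made-up values:

```python
import textwrap

# Illustrative values; in the real Mad Lib these come from user input.
infos = {'name': 'Alice', 'place1': 'Norway'}

# dedent() removes the common leading whitespace from every line,
# so the literal can be indented to match the surrounding code.
story = textwrap.dedent('''\
    Last summer, my mom and dad took me and {name} on a trip to {place1}.
    The weather there is very nice!''').format(**infos)

print(story)
```

The trailing backslash after the opening quotes suppresses the leading blank line.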
Q:
Setting up shared development machine
I have a server machine that contains 10 users. OS: Ubuntu 12.04.3 LTS 64bit
These users are developers (mainly web developers: HTML+JS+PHP+MYSQL)
I allowed them to remote login through XRDP and VNC.
Now everything works fine, but the problem is in /var/www.
What is the suitable permission for this directory so they don't have problems while sharing some code between them, although they aren't root? All of them are simple users.
I need to know if there is a secure way of letting them share source code while at the same time each one has their own repository. I welcome any idea.
A:
Another Method (due to the bounty :))
You can do it this way if you don't want to keep all users in the same webroot. You can make different directories serve as webroot aliases for the /var/www/ directory. Suppose you have two users, u1 and u2.
I assume you have already installed apache2; if not, install it with sudo apt-get install apache2.
I'll make the directories /home/u1/web and /home/u2/web to be the webroot of the users u1 and u2.
sudo mkdir -p /home/u1/web /home/u2/web
sudo chmod 775 /home/u1/web
sudo chmod 775 /home/u2/web
sudo gedit /etc/apache2/sites-available/default
Create an alias (add the following to the file) for the user u1:
Alias /u1 /home/u1/web
<Directory /home/u1/web>
Options All
AllowOverride All
order allow,deny
allow from all
</Directory>
Create an alias (add the following to the file) for the user u2:
Alias /u2 /home/u2/web
<Directory /home/u2/web>
Options All
AllowOverride All
order allow,deny
allow from all
</Directory>
Now each user can access his webroot by pointing a web browser to localhost/u1 (for user u1) or localhost/u2 (for user u2).
Q:
Usage before initialization of const member, is this expected behaviour of gcc and clang?
Consider the following snippet. Class test has a const member a and a member function fun which returns a. An initialization list is used to initialize a in the constructor. However, in the initialization list a lambda is used to initialize a with the returned value of fun. This leads to different behaviors of clang and gcc at compile and runtime, depending on the optimization level. Below, the snippet and the different outputs at compile and runtime are listed. Is this expected behavior of gcc and clang?
#include <iostream>
class test{
public:
const int a;
test(): a([this](){return fun();}()) {}
int fun()
{
return a;
}
};
int main()
{
auto t = test();
std::cout << t.a << '\n';
return 0;
}
Compiletime:
clang++-5.0 -std=c++17 -Wall -Wextra -Weverything
lambda_in_initializer_list.cpp:7:15: warning: lambda expressions are incompatible with C++98
[-Wc++98-compat]
test(): a([this](){return fun();}()) {}
^
warning: 'auto' type specifier is incompatible with C++98 [-Wc++98-compat]
lambda_in_initializer_list.cpp:17:5: warning: 'auto' type specifier is incompatible with C++98
[-Wc++98-compat]
auto t = test();
^~~~
3 warnings generated.
clang++-5.0 -std=c++17 -Wall -Wextra -Weverything -O1
lambda_in_initializer_list.cpp:7:15: warning: lambda expressions are incompatible with C++98
[-Wc++98-compat]
test(): a([this](){return fun();}()) {}
^
warning: 'auto' type specifier is incompatible with C++98 [-Wc++98-compat]
lambda_in_initializer_list.cpp:17:5: warning: 'auto' type specifier is incompatible with C++98
[-Wc++98-compat]
auto t = test();
^~~~
g++ -std=c++17 -Wall -Wextra -Wpedantic
No output
g++ -std=c++17 -Wall -Wextra -Wpedantic -O1
lambda_in_initializer_list.cpp: In function ‘int main()’:
lambda_in_initializer_list.cpp:18:20: warning: ‘t.test::a’ is used uninitialized in this function [-Wuninitialized]
std::cout << t.a << '\n';
~~^
Runtime:
clang++-5.0 -std=c++17 -Wall -Wextra -Weverything
0
clang++-5.0 -std=c++17 -Wall -Wextra -Weverything -O1
4196112
g++ -std=c++17 -Wall -Wextra -Wpedantic
Non-deterministic output.
g++ -std=c++17 -Wall -Wextra -Wpedantic -O1
0
A:
I didn't quite understand the question, but it seems like you are actually asking "why didn't gcc warn me until I turned up optimization?".
This is a known thing. Detecting undefined behavior in complex cases requires quite a lot of effort on the compiler's side, and is often only done when you are optimizing code (since the compiler is doing a lot of analysis anyway). Just something to keep in mind when you are dealing with real-life compilers.
Q:
When bottom half is called with respect to interrupt handlers
When referring to Linux kernel interrupt handlers, as I understand it there are two stages of interrupt execution: first the Top Half and second the Bottom Half.
I know the Top Half will be executed immediately on occurrence of a hardware interrupt, but my doubt is: when and how is the Bottom Half executed?
A:
When and how is the bottom half executed?
When: it is executed AFTER the interrupt handler; in fact, its execution is triggered by the interrupt handler itself. Sometimes it executes right after the interrupt handler, sometimes not.
How: if your bottom half is implemented by a tasklet, its execution is scheduled by using the tasklet_schedule() function, normally called from inside the interrupt handler. This function does not execute the tasklet, but informs the kernel to queue the tasklet function for later execution.
A:
The bottom halves are implemented as tasklets (deferred interrupt context), workqueues (process context) and softirqs (rarely, only 9 of those in Linux kernel).
The timer interrupt handler checks which of the 9 softirqs are to be executed (scheduler, hrtimers, network rx/tx, tasklets, etc.). If there is any pending softirq (say, a list of tasklets that the top half has notified), then those get executed. As this is true for tasklets, it is true for any other softirq too. Also, because a tasklet is a kind of softirq, it can only be executed on the same CPU core.
On the contrary, the workqueues are executed when the corresponding process subsequently context switches in. Hence, unlike tasklets, these can sleep and can be scheduled on other CPU cores too.
Q:
Compact surface with constant strictly positive curvature is a sphere
I'm following Cartan's Differential forms. I'm trying to do exercise 8 on page 161. The chapter is about moving frames and differential forms in surface theory.
Consider the frame of Ex. 2 (principal frame), show that if $dk_1 = dk_2 = 0$ at the point M, then at M $k_1 = k_2$ or $\omega_{12} = 0$. Deduce that on a surface S which has constant strictly positive gaussian curvature K, the principal curvature cannot have a relative maximum or minimum at a point which is not umbilical.
For the first part all ok. In fact we have
$\omega_{13} = k_1\omega_1 \\ \omega_{23} = k_2\omega_2 \\ d\omega_1 = -\omega_2\wedge\omega_{12} \\ d\omega_2 = \omega_1\wedge\omega_{12} \\ d\omega_{12} = -k_1k_2\omega_1\wedge\omega_2 \\ d\omega_{13} = k_2d\omega_1 \\ d\omega_{23} = k_1d\omega_2$
Differentiating the first two and substituting the last two
$ d\omega_{13} = dk_1\wedge\omega_1 + k_1d\omega_1 = k_2d\omega_1 \\ d\omega_{23} = dk_2\wedge\omega_2 + k_2d\omega_2 = k_1d\omega_2$
we obtain
$dk_1\wedge\omega_1 = (k_2 - k_1)d\omega_1 \\ dk_2\wedge\omega_2 = (k_1 - k_2)d\omega_2 $
So if $dk_1 = dk_2 = 0$ we have either $k_1 = k_2$ or $d\omega_1 = d\omega_2 = 0$, and so either $k_1 = k_2$ or $\omega_{12} = 0$.
Now suppose that M is not umbilical, so $k_1 \neq k_2$ and $\omega_{12} = 0$. The frame becomes at M
$\omega_{12} = 0 \\ \omega_{13} = k_1\omega_1 \\ \omega_{23} = k_2\omega_2 \\ d\omega_1 = 0 \\ d\omega_2 = 0 \\ d\omega_{12} = -k_1k_2\omega_1\wedge\omega_2 \\ d\omega_{13} = 0 \\ d\omega_{23} = 0$
Now I don't know how to continue to get a contradiction. I know I have to show that $k_1k_2 \leq 0$, against the hypothesis. I also know the solution working in local coordinates, but I don't know how can I translate this in the language of differential forms. The proof here is from Shifrin's book
I don't know how to get second derivatives with differential forms (because $d^2\omega = 0$), but I suppose (and also I prefer) I have to work avoiding local coordinates.
Thanks in advance
A:
Even with the moving frames computation, you're going to have to do something analogous to the local computation with second-order partial derivatives. How else can we check that a critical point is a local maximum/minimum?
Here's how you should start: Write $dk_i = \sum\limits_j k_{ij}\omega_j$ (so we know that $k_{ij} = 0$ at $M$ for $i,j=1,2$). Then write $dk_{ij} = \sum\limits_\ell k_{ij\ell}\omega_\ell$. If $k_1>k_2$ locally, then we know that $k_{1jj} \le 0$ and $k_{2jj}\ge 0$ at $M$ for $j=1,2$.
I would rather write your third displayed equations as
\begin{align*}
dk_1\wedge\omega_1 &= (k_2-k_1)\omega_{12}\wedge\omega_2 \\
dk_2\wedge\omega_2 &= (k_1-k_2)\omega_1\wedge\omega_{12}.
\end{align*}
Solve these to obtain $(k_1-k_2)\omega_{12} = A\omega_1+B\omega_2$. Now can you proceed?
Q:
Connect another database in library in Codeigniter
I use two databases, Oracle and PostgreSQL. The PostgreSQL database is the default, and I want to connect to Oracle only where it is needed. So how can I do this?
Now I want to connect to the Oracle database in my library.
My Code in Library:
function find_all_subjects($id1,$id2)
{
$DB1 = $this->load->database('database2', TRUE);
$sql="sql here";
$query=$DB1->query($sql);
return $query->result();
}
But it's giving the error below:
Severity: Notice
Message: Undefined property: Somefunction::$load
Does anyone know what's wrong with the code, or how to connect another database?
A:
Something like this (Not Tested):
class Classname {

    protected $CI;
    protected $database2;

    public function __construct()
    {
        $this->CI =& get_instance();
        $this->database2 = $this->CI->load->database('database2', TRUE);
    }

    public function function_name()
    {
        $this->database2->query('...');
    }
}
Q:
Using one class's properties in another OOP PHP
I have the following class
namespace PG\Referrer\Single\Post;
class Referrer implements ReferrerInterface
{
/**
* @var $authorReferrer = null
*/
protected $isAuthorReferrer = null;
/**
* @var $dateReferrer = null
*/
protected $isDateReferrer = null;
/**
* @var $searchReferrer = null
*/
protected $isSearchReferrer = null;
/**
* @var $taxReferrer = null
*/
protected $isTaxReferrer = null;
/**
* @param array $values = null;
*/
public function __construct(array $values = null)
{
if ($values)
$this->setBulk($values);
}
/**
* Bulk setter Let you set the variables via array or object
*/
public function setBulk($values)
{
if (!is_array($values) && !$values instanceof \stdClass) {
throw new \InvalidArgumentException(
sprintf(
'%s needs either an array, or an instance of \\stdClass to be passed, instead saw %s',
__METHOD__,
is_object($values) ? get_class($values) : gettype($values)
)
);
}
foreach ($values as $name => $value) {//create setter from $name
global $wp_query;
if (array_key_exists($value, $wp_query->query_vars)) { //Check that user don't set a reserved query vars
throw new \InvalidArgumentException(
sprintf(
'%s is a reserved query_vars and cannot be used. Please use a unique value',
$value
)
);
}
$setter = 'set' . $name;
$condition = isset($_GET[$value]);
if ($setter !== 'setBulk' && method_exists($this, $setter)) {
$this->{$setter}($condition);//set value (bool)
}
}
return $this;
}
/**
* @param bool $authorReferrer
* @return $this
*/
public function setAuthorReferrer($isAuthorReferrer)
{
$this->isAuthorReferrer = $isAuthorReferrer;
return $this;
}
/**
* @param bool $dateReferrer
* @return $this
*/
public function setDateReferrer($isDateReferrer)
{
$this->isDateReferrer = $isDateReferrer;
return $this;
}
/**
* @param bool $searchReferrer
* @return $this
*/
public function isSearchReferrer($isSearchReferrer)
{
$this->isSearchReferrer = $isSearchReferrer;
return $this;
}
/**
* @param bool $taxReferrer
* @return $this
*/
public function setTaxReferrer($isTaxReferrer)
{
$this->isTaxReferrer = $isTaxReferrer;
return $this;
}
}
with its interface
namespace PG\Referrer\Single\Post;
interface ReferrerInterface
{
/**
* @param array $values
* @return $this
*/
public function setBulk($values);
/**
* @param bool $authorReferrer
* @return $this
*/
public function setAuthorReferrer($isAuthorReferrer);
/**
* @param bool $dateReferrer
* @return $this
*/
public function setDateReferrer($isDateReferrer);
/**
* @param bool $searchReferrer
* @return $this
*/
public function isSearchReferrer($isSearchReferrer);
/**
* @param bool $taxReferrer
* @return $this
*/
public function setTaxReferrer($isTaxReferrer);
}
This class sets up 4 conditionals that I need to use in another class. The values used in this class are also set from the other class, so basically the user sets values in the other class (let's call it class b) that are then used by class Referrer, which returns the 4 conditionals that are then used by class b.
The reason why I'm doing it this way is that there will be two other classes that will need to do the same, but will return different info.
What is the more correct way to achieve this?
EDIT
To clear this up
class Referrer
The properties $isAuthorReferrer, $isDateReferrer, etc. will either have a value of null or a boolean value, depending on what is set by the user.
Example:
$q = new Referrer(['authorReferrer' => 'aq']);
In the code above, $isAuthorReferrer is set via the setBulk() method to true when the variable aq is present in the URL, or false when it is not. The three other properties will remain null because they are not set in the example.
The above works as expected, but I need to do this in another class; let's again call it class b. The arguments will be passed to class b, and in turn class b will pass these arguments to class Referrer; class Referrer will use these arguments and set its properties accordingly, and class b will use the results to do something else.
Example:
$q = new b(['authorReferrer' => 'aq']);
Where class b could be something like this (it is this part that I'm not sure how to code):
class b implements bInterface
{
protected $w;
protected $other;
public function __construct($args = [])
{
//Do something here
// Do something here so that we can use $other in other classes or functions
}
public function a()
{
$w = new Referrer($args);
}
public function b()
{
// use $w properties here
// return $other for usage in other classes and functions
}
}
A:
The best way is to inject the referrer into your classes in order to achieve loose coupling between them and the referrer (this pattern uses the benefit of your ReferrerInterface):
class b implements bInterface
{
protected $referrer;
public function __construct(ReferrerInterface $referrer, array $values = array())
{
$this->referrer = $referrer;
$this->referrer->setBulk($values);
}
public function getReferrer()
{
return $this->referrer;
}
public function b()
{
// use $this->referrer properties here
}
}
// Instantiation (use your dependency injection if you have one):
$referrer = new Referrer();
$b = new b($referrer, ['authorReferrer' => 'aq']);
I do not understand what $other is, so I removed it; but tell me if you want me to add it again.
If you need to use the properties of the referrer in b, you should add some getters to your ReferrerInterface to allow that. I would use setAuthorReferrer($isAuthorReferrer) to set the value and isAuthorReferrer() to get it, for instance.
Q:
Custom Ribbon command greyed out in IE8
I have created a simple Ribbon command that hides the header, footer, and ribbon via jQuery. It then calls window.print() and unhides the previously hidden divs. It works great on my development machine using IE9, and works everywhere using Chrome or Firefox. Unfortunately it does not work in IE8 in production. I tried reverting to IE8 to test but was unable to.
Can you think of anything that could cause my ribbon to be greyed out? My simple code can be found below:
<?xml version="1.0" encoding="utf-8"?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
<CustomAction
Id="SharePoint.Ribbon.NewGroupInExistingTab"
Location="CommandUI.Ribbon">
<CommandUIExtension>
<CommandUIDefinitions>
<CommandUIDefinition Location="Ribbon.Templates._children">
<GroupTemplate Id="Ribbon.Templates.NewGroupInExistingTab.OneLargeExample">
<Layout Title="NewGroupInExistingTabOneLarge" LayoutTitle="NewGroupInExistingTabOneLarge">
<Section Alignment="Top" Type="OneRow">
<Row>
<ControlRef DisplayMode="Large" TemplateAlias="Button1" />
</Row>
</Section>
</Layout>
</GroupTemplate>
</CommandUIDefinition>
<CommandUIDefinition Location="Ribbon.WikiPageTab.Scaling._children">
<MaxSize
Id="SharePoint.Ribbon.NewGroupInExistingTab.NotificationGroup.MaxSize"
Sequence="15"
GroupId="SharePoint.Ribbon.NewGroupInExistingTab.NotificationGroup"
Size="NewGroupInExistingTabOneLarge" />
</CommandUIDefinition>
<CommandUIDefinition Location="Ribbon.WikiPageTab.Groups._children">
<Group
Id="SharePoint.Ribbon.NewGroupInExistingTab.NotificationGroup"
Sequence="15"
Description="Print Page"
Title="Print"
Template="Ribbon.Templates.NewGroupInExistingTab.OneLargeExample">
<Controls Id="SharePoint.Ribbon.NewGroupInExistingTab.NotificationGroup.Controls">
<Button
Id="SharePoint.Ribbon.NewGroupInExistingTab.NotificationGroup.Print"
Command="NewGroupInExistingTab.Command.Print"
Sequence="10"
Image16by16="/_layouts/Redacted/Images/Print16.png"
Image32by32="_layouts/Redacted/Images/Print32.png"
Description="Print the current page."
LabelText="Print the current page"
TemplateAlias="Button1" />
</Controls>
</Group>
</CommandUIDefinition>
</CommandUIDefinitions>
<CommandUIHandlers>
<CommandUIHandler
Command="NewGroupInExistingTab.Command.Print"
CommandAction="javascript:
$('div#s4-ribbonrow').hide();
$('div#Redacted-header').hide();
$('div#Redacted-footer').hide();
$('div#grid-gutter').css('margin','0');
$('div#Redacted-wrapper').css('margin','0');
window.print();
$('div#s4-ribbonrow').show();
$('div#Redacted-header').show();
$('div#Redacted-footer').show();
$('div#grid-gutter').css('margin','0 auto');
$('div#Redacted-wrapper').css('margin','0 auto');
"/>
</CommandUIHandlers>
</CommandUIExtension>
</CustomAction>
</Elements>
A:
An IIS reset and clearing the browser cache (Dev Tools) could work.
Q:
Show $f(w)=\frac{1}{2\pi i}\int_{\partial \Omega} f(z)\frac{g'(z)}{g(z)-g(w)}\,dz$ for $w\in\Omega$
Let $f, \Omega$ be as in Cauchy's formula (i.e. $\Omega\subset\mathbb{C}$ is bounded, open, $\partial\Omega=\amalg (\text{rectifiable Jordan curves})$, $f$ is holomorphic on an open set $\supset \overline{\Omega}$), $g$ holomorphic on open $\supset \overline{\Omega}$, injective. Show for $w\in\Omega$
$$f(w)=\dfrac{1}{2\pi i}\int_{\partial \Omega} f(z)\dfrac{g'(z)}{g(z)-g(w)}\,dz.$$ Also show when $g(z)=z$, we get the result as stated in Cauchy's representation formula $\left(\text{i.e. } f(w)=\dfrac{1}{2\pi i}\int_{\partial \Omega} \dfrac{f(z)}{z-w}\,dz \right)$.
Proof: Recall Cauchy's representation formula
\begin{equation*}
\begin{aligned}
f(w) & =\dfrac{1}{2\pi i}\int_{\partial \Omega} \dfrac{f(z)}{z-w}\,dz \\
& =\dfrac{1}{2\pi i}\int_{\partial \Omega} \dfrac{f(z)}{z-w}\cdot \dfrac{g(z)-g(w)}{g(z)-g(w)}\,dz \\
& =\dfrac{1}{2\pi i}\int_{\partial \Omega} f(z) \cdot \dfrac{g(z)-g(w)}{z-w} \cdot \dfrac{1}{g(z)-g(w)}\,dz \\
& =\dfrac{1}{2\pi i}\int_{\partial \Omega} f(z) \cdot \dfrac{g'(w)}{g(z)-g(w)}\,dz
\end{aligned}
\end{equation*}
I know that I leapt from the third line to the fourth, but I know there are some intermediate steps. Note I am not allowed to use anything about residues.
EDIT: Steps between lines 3 and 4 of the equation above.
Note that we can expand $g(z)$ in its Taylor series around $z=w$ $$\dfrac{g(z)-g(w)}{z-w}=\dfrac{g'(w)(z-w)+\sum\limits_{n=2}^\infty \frac{g^{(n)}(w)}{n!}(z-w)^n}{z-w}$$
Thus,
\begin{equation*}
\begin{aligned}
f(w)
& =\dfrac{1}{2\pi i}\int_{\partial \Omega} f(z) \cdot \dfrac{g'(w)(z-w)}{z-w} \cdot \dfrac{1}{g(z)-g(w)}\,dz
+ \dfrac{1}{2\pi i}\int_{\partial \Omega} f(z) \cdot \dfrac{\sum\limits_{n=2}^\infty \frac{g^{(n)}(w)}{n!}(z-w)^n}{z-w} \cdot \dfrac{1}{g(z)-g(w)}\,dz \\
& =\dfrac{1}{2\pi i}\int_{\partial \Omega} f(z) \cdot \dfrac{g'(w)}{g(z)-g(w)}\,dz
+ \dfrac{1}{2\pi i}\int_{\partial \Omega} f(z) \cdot \sum\limits_{n=2}^\infty \frac{g^{(n)}(w)}{n!}(z-w)^{n-1} \cdot \dfrac{1}{g(z)-g(w)}\,dz \\
& =\dfrac{1}{2\pi i}\int_{\partial \Omega} f(z) \cdot \dfrac{g'(w)}{g(z)-g(w)}\,dz
+ \dfrac{1}{2\pi i}\sum\limits_{n=2}^\infty \dfrac{g^{(n)}(w)}{n!}\int_{\partial \Omega} f(z) \cdot (z-w)^{n-1} \cdot \dfrac{1}{g(z)-g(w)}\,dz \\
\end{aligned}
\end{equation*}
A:
For fixed $w\in\Omega$ the function
$$h(z):={f(z)g'(z)\over g(z)-g(w)}$$
is analytic in $\overline{\Omega}$, apart from an isolated singularity at $w$, because $g$ is assumed injective. In addition this singularity is a simple pole. We then can write $g(z)-g(w)=(z-w)g_1(z)$ with $g_1$ analytic in a neighborhood of $w$ and $g_1(w)=g'(w)\ne0$. It follows that
$${1\over2\pi i}\int_{\partial\Omega}{f(z)g'(z)\over g(z)-g(w)}={1\over2\pi i}\int_{\partial\Omega}{f(z)g'(z)\over g_1(z)}{dz\over z-w}=f(w)\ ,$$
using standard rules of residue calculus.
Q:
Is it possible to save state of Web application
I build a Web application using GWT (and/or SmartGWT) that uses a number of forms to collect data. Is it possible to save the progress so that the user can leave the application and when comes back continues with the data (s)he has already entered?
If yes, do I have to use a database?
A:
Of course.
One way is to use GWT's RPC framework to make calls to Java servlets.
You do not need a database to store the form data, as you can store it inside the HttpSession object provided by the Java servlets. You can think of HttpSession as a built-in data store that allows you to store any number of key/value pairs; each client has their own HttpSession.
You can refer to the following links/tutorials to get the basic ideas.
References
Using Servlet Sessions in GWT – Tutorial
Google Web Toolkit (GWT) & Servlets - Web application tutorial
Google official document - Communicate with a Server
Q:
How to make a file-field automatically show the file-chooser just before the form is submitted?
I am developing a web application using CakePHP, and I will be providing one button to the user: "Upload". Once the user clicks this button, I want to display a dialogue box (browse file) so that the user can choose a file located on his/her local machine.
After successfully uploading the file, I want to post the file's data into one <textarea> which will be used for further data processing.
Below is the cakephp code which I used earlier:
echo $form->create('MyFile', array('action' => 'getTranslation', 'type' => 'file'));
echo $form->file('File');
echo $form->submit('Upload');
echo $form->end();
Here, I was getting two buttons: 'Upload' and 'Browse'. I don't want to use two buttons here. I want to use only one button, i.e. 'Upload', which achieves selecting a file as well as posting to some action.
Please provide me your suggestions.
Thanks
-Pravin
A:
I guess you could do that with some jQuery: add a listener to the file field, and once it's filled, post the form.
I also found this: Uploadify. If you look at the second demo, you'll see something that fits your needs. Check it out.
Q:
How to add exception in SELinux?
When SELinux is disabled, I have no issues,
but when it's Enforcing then I'm facing this:
[systemd] failed to get d-bus session: Failed to connect to socket /run/dbus/system_bus_socket: Permission denied
Audit.log
sealert -a /var/log/audit/audit.log
100% done
found 2 alerts in /var/log/audit/audit.log
--------------------------------------------------------------------------------
SELinux is preventing /usr/sbin/zabbix_agentd from connectto access on the unix_stream_socket /run/dbus/system_bus_socket.
***** Plugin catchall (100. confidence) suggests **************************
If you believe that zabbix_agentd should be allowed connectto access on the system_bus_socket unix_stream_socket by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'zabbix_agentd' --raw | audit2allow -M my-zabbixagentd
# semodule -i my-zabbixagentd.pp
I created a policy as suggested above and restarted zabbix-agent; now I am getting the following from the zabbix agent log:
[systemd] failed to get d-bus session: An SELinux policy prevents this sender from sending this message to this recipient, 0 matched rules; type="method_call", sender="(null)" (inactive) interface="org.freedesktop.DBus" member="Hello" error name="(unset)" requested_reply="0" destination="org.freedesktop.DBus" (bus)
sealert -a /var/log/audit/audit.log
39% donetype=AVC msg=audit(1534885076.573:250): avc: denied { connectto } for pid=10654 comm="zabbix_agentd" path="/run/dbus/system_bus_socket" scontext=system_u:system_r:zabbix_agent_t:s0 tcontext=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 tclass=unix_stream_socket
**** Invalid AVC allowed in current policy ***
A:
Well, first you have to identify the denial you are getting from SELinux. The easiest (in my opinion) way to do that is via the sealert utility.
First install the setroubleshoot-server package with:
yum install setroubleshoot-server
Then run:
sealert -a /var/log/audit/audit.log
You will probably get a lot of output, look for your specific denial, and follow the recommendations. But be sure to NOT allow things that shouldn't be allowed!
Here is an example of a denial, and the suggested workaround from sealert (my emphasis):
SELinux is preventing /usr/libexec/postfix/qmgr from using the rlimitinh access on a process.
***** Plugin catchall (100. confidence) suggests **************************
If you believe that qmgr should be allowed rlimitinh access on processes labeled postfix_qmgr_t by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'qmgr' --raw | audit2allow -M my-qmgr
# semodule -i my-qmgr.pp
Additional Information:
Source Context system_u:system_r:postfix_master_t:s0
Target Context system_u:system_r:postfix_qmgr_t:s0
Target Objects Unknown [ process ]
Source qmgr
Source Path /usr/libexec/postfix/qmgr
Port
Host
Source RPM Packages postfix-2.10.1-6.el7.x86_64
Target RPM Packages
Policy RPM selinux-policy-3.13.1-102.el7_3.16.noarch
Selinux Enabled True
Policy Type targeted
Enforcing Mode Enforcing
Host Name centos
Platform Linux centos 3.10.0-514.26.2.el7.x86_64 #1 SMP Tue
Jul 4 15:04:05 UTC 2017 x86_64 x86_64
Alert Count 5
First Seen 2018-04-18 18:02:32 CEST
Last Seen 2018-08-22 09:11:22 CEST
Local ID 855f168c-1e47-4c6b-8a1e-f8fddce5d426
The example above concerns Postfix, again; look for your denial, and insert a local policy.
Q:
An MVVM button event with a ListView not getting the selected item
I am trying to get the SelectedItem from my ListView. I am using the MVVM Light Toolkit and EventToCommand on a button.
My ListView is bound to an ObservableCollection, which is binding correctly. Here is the ListView XAML:
<ListView Name="serverListView"
Grid.Row="3"
Grid.Column="0"
Grid.ColumnSpan="2"
ItemsSource="{Binding Servers}"
ItemTemplate="{StaticResource ServerList}"
SelectionMode="Single"
BorderThickness="0"/>
I then have a button using Interaction.Triggers with an MVVM Light EventToCommand; I am not sure if the SelectedItem binding is correct. The event is firing correctly through a RelayCommand (MVVM Light Toolkit) but I am getting null every time.
Here is my button XAML:
<Button x:Name="LoadButton"
Content="Load Server"
Grid.Column="0"
Grid.Row="4"
Grid.ColumnSpan="2">
<i:Interaction.Triggers>
<i:EventTrigger EventName="Click">
<mvvm:EventToCommand Command="{Binding ButtonClick, Mode=OneWay}"
CommandParameter="{Binding SelectedItem, ElementName=serverListView}"
MustToggleIsEnabledValue="True"/>
</i:EventTrigger>
</i:Interaction.Triggers>
</Button>
Relay Command:
this.ButtonClick = new RelayCommand<object>(new Action<object>(this.GetClickEvent));
A:
You should also bind the SelectedItem property of the ListView to a property (SelectedServer) of your viewmodel, and change your EventToCommand's
CommandParameter="{Binding SelectedItem, ElementName=serverListView}"
to
CommandParameter="{Binding SelectedServer}"
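For this to work, the ListView itself should expose its selection to the viewmodel as well; a sketch of the extra binding (the SelectedServer property name is an assumption, matching the suggestion above):

```xml
<ListView Name="serverListView"
          ItemsSource="{Binding Servers}"
          SelectedItem="{Binding SelectedServer, Mode=TwoWay}"
          SelectionMode="Single" />
```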
Q:
How to connect to postgres through docker-compose network?
I use docker-compose, and I am trying to connect to the Postgres database from the web container.
I use this URI:
postgresql://hola:hola@postgres/holadb
I get this error:
Connection refused
Is the server running on host "postgres" (172.18.0.2) and accepting
TCP/IP connections on port 5432?
docker-compose.yml
version: '2'
services:
web:
restart: always
build: ./web
expose:
- "8000"
volumes:
- /usr/src/app/project/static
command: /usr/local/bin/gunicorn -w 2 -b :8000 project:app
depends_on:
- postgres
postgres:
image: postgres:9.6
ports:
- "5432:5432"
environment:
- POSTGRES_USER=hola
- POSTGRES_PASSWORD=hola
- POSTGRES_DB=holadb
volumes:
- ./data/postgres:/var/lib/postgresql/data
I remove ./data/postgres before building and running.
Logs
postgres_1 | The files belonging to this database system will be owned by user "postgres".
postgres_1 | This user must also own the server process.
postgres_1 |
postgres_1 | The database cluster will be initialized with locale "en_US.utf8".
postgres_1 | The default database encoding has accordingly been set to "UTF8".
postgres_1 | The default text search configuration will be set to "english".
postgres_1 |
postgres_1 | Data page checksums are disabled.
postgres_1 |
postgres_1 | fixing permissions on existing directory /var/lib/postgresql/data ... ok
postgres_1 | creating subdirectories ... ok
postgres_1 | selecting default max_connections ... 100
postgres_1 | selecting default shared_buffers ... 128MB
postgres_1 | selecting dynamic shared memory implementation ... posix
postgres_1 | creating configuration files ... ok
postgres_1 | running bootstrap script ... ok
web_1 | [2017-06-03 16:54:14 +0000] [1] [INFO] Starting gunicorn 19.7.1
web_1 | [2017-06-03 16:54:14 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)
web_1 | [2017-06-03 16:54:14 +0000] [1] [INFO] Using worker: sync
web_1 | [2017-06-03 16:54:14 +0000] [7] [INFO] Booting worker with pid: 7
web_1 | [2017-06-03 16:54:14 +0000] [8] [INFO] Booting worker with pid: 8
postgres_1 | performing post-bootstrap initialization ... ok
postgres_1 |
postgres_1 | WARNING: enabling "trust" authentication for local connections
postgres_1 | You can change this by editing pg_hba.conf or using the option -A, or
postgres_1 | --auth-local and --auth-host, the next time you run initdb.
postgres_1 | syncing data to disk ... ok
postgres_1 |
postgres_1 | Success. You can now start the database server using:
postgres_1 |
postgres_1 | pg_ctl -D /var/lib/postgresql/data -l logfile start
postgres_1 |
postgres_1 | waiting for server to start....LOG: database system was shut down at 2017-06-03 16:54:16 UTC
postgres_1 | LOG: MultiXact member wraparound protections are now enabled
postgres_1 | LOG: database system is ready to accept connections
postgres_1 | LOG: autovacuum launcher started
postgres_1 | done
postgres_1 | server started
postgres_1 | CREATE DATABASE
postgres_1 |
postgres_1 | CREATE ROLE
postgres_1 |
postgres_1 |
postgres_1 | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
postgres_1 |
postgres_1 | LOG: received fast shutdown request
postgres_1 | LOG: aborting any active transactions
postgres_1 | LOG: autovacuum launcher shutting down
postgres_1 | LOG: shutting down
postgres_1 | waiting for server to shut down....LOG: database system is shut down
postgres_1 | done
postgres_1 | server stopped
postgres_1 |
postgres_1 | PostgreSQL init process complete; ready for start up.
postgres_1 |
postgres_1 | LOG: database system was shut down at 2017-06-03 16:54:18 UTC
postgres_1 | LOG: MultiXact member wraparound protections are now enabled
postgres_1 | LOG: database system is ready to accept connections
postgres_1 | LOG: autovacuum launcher started
I don't understand why it does not work. Thank you in advance for your help.
A:
You need to setup a network to allow communication between containers. Something like this should work:
version: '2'
services:
web:
restart: always
build: ./web
expose:
- "8000"
volumes:
- /usr/src/app/project/static
command: /usr/local/bin/gunicorn -w 2 -b :8000 project:app
depends_on:
- postgres
networks: ['mynetwork']
postgres:
restart: always
build:
context: ./postgresql
volumes_from:
- data
ports:
- "5432:5432"
networks: ['mynetwork']
networks: {mynetwork: {}}
More info here and here.
A:
The web container tries to connect while postgres is still initializing...
Waiting a short delay before connecting solved my issue.
EDIT: I use Docker Compose Healthcheck to do this.
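For reference, a sketch of that approach (hypothetical values; the postgres:9.6 image ships pg_isready, and the condition form of depends_on requires compose file format 2.1 or later):

```yaml
version: '2.1'
services:
  postgres:
    image: postgres:9.6
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U hola -d holadb"]
      interval: 5s
      timeout: 5s
      retries: 5
  web:
    build: ./web
    depends_on:
      postgres:
        condition: service_healthy
```

With this, docker-compose delays starting web until the healthcheck passes, instead of merely until the postgres container starts.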
Q:
jQuery change elements within div
I'm stuck with this kind of stupid problem. I've seen examples of similar problems, but it seems I can't find the correct loop. In any case, I'd better provide examples:
<table border="1" id="inline" class="<?=$id?>" style="background:none;">
<tr id="border<?=$id?>">
<td rowspan="2" style="max-width:420; width:420" valign="top">
<form id="feedback<?=$id?>" <? if(!$check){?>style="display:none"<? }?>>
<textarea cols="40" class="editor" id="editor<?=$id?>" name="editor1" rows="10"><?=$text?></textarea>
<table style="background:none"><tr><td><input type="button" value="Save" id="submit<?=$id?>"></td><td><img id="spinner<?=$id?>" height="25" style="display:none"></td></tr></table>
</form>
<div id="content<?=$id?>"<? if($check){?> style="display:none"<? }?>><?=$text?></div>
</td>
<td style="border-width:0">
Title:<br>
<div id="title_div<?=$id?>"<? if($check){?> style="display:none"<? }?>><?=$title?></div><input type="text" id="title<?=$id?>" value="<?=$title?>"<? if(!$check){?> style="display:none"<? }?>>
</td>
</tr>
<tr>
<td style="border-width:0" valign="top">
<div id="uploader<?=$id?>"<? if(!empty($img)){?> style="display:none<? }?>">
<input id="upload<?=$id?>" name="upload" type="file" />
</div>
<div id="div_holder<?=$id?>">
<? draw_buttons($id);?>
<a title="<?=$title?>" <? if(!empty($img)){?> href="images/people/<?=$img?>"<? }?> id="feedback_img<?=$id?>" class="lightbox"><img border="0"<? if(!empty($img)){?> src="images/people/timthumb.php?src=<?=$img?>&w=200&q=100"<? }else{?> style="display:none"<? }?> id="img_holder<?=$id?>"></a></div><img id="jcrop<?=$id?>" style="display:none" />
</td>
</tr>
</table>
This is a part of my php script where $id is taken from database. Ids are numeric, so all table ids differ only with a number, eg.:
<tr id="border1">
//next table tr
<tr id="border2">
All tables are taken from database and shown within a loop. Ids can be deleted, so their order could be 1,3,4,6 and so on. But there's one hidden table with known $id = 'zero', eg.:
<tr id="borderzero">
There's also a div element with id zero, within which the table shown above is situated. So my problem is: I need to change the id of each element within that div with id zero, eg.:
<tr id="borderzero">
//change to
<tr id="border5">
Of course I could just type them one by one, but I'm trying to do it with the .each function, though so far I've failed and I hope I'll get some help. Here's what I came up with:
$("div#zero").clone().attr({'id' : feedback_id}).appendTo("#temp_holder").fadeIn('slow');
$('#' + feedback_id + ":not(td)").contents().each(function() {
if($("[id*='zero']").length > 0){
var new_id = $(this).attr('id');
new_id = new_id.replace('zero', some_id);
$(this).attr({'id' : new_id});
}
});
Var feedback_id is taken from the database through AJAX, and its value is the last table id + 1.
A:
Okay, I found a way to actually get all the elements. The first answer was helpful, thank you, but I also had to change .contents() to .find('*'). So the working script looks like this, in case somebody needs it:
$("div#zero").clone().attr({'id' : feedback_id}).appendTo("#temp_holder").fadeIn('slow');
$('#' + feedback_id + ":not(td)").find('*').each(function() {
if(this.id.match(/zero$/ig)){
var new_id = $(this).attr('id');
new_id = new_id.replace('zero', some_id);
$(this).attr({'id' : new_id});
}
});
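The suffix-replacement logic itself can be sketched without the DOM (plain JS; the function name is my own, not from the question):

```javascript
// Replace a trailing "zero" in an element id with the new numeric id,
// leaving ids that don't end in "zero" untouched.
function rewriteId(id, newId) {
  return /zero$/i.test(id) ? id.replace(/zero$/i, String(newId)) : id;
}

console.log(rewriteId('borderzero', 5)); // "border5"
console.log(rewriteId('border3', 5));    // "border3"
```

Note the regex here uses only the i flag; combining g with .test() makes the regex stateful across calls, which is a subtle source of bugs.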
Q:
jetty: dynamically changing idle time
I have a jetty server which is configured to expire requests after 30 seconds, this is the configuration line in the xml config file:
<Set name="maxIdleTime">30000</Set>
There are two kinds of requests that are accepted by this server: requests which have to be served in real time and requests that come from batch scripts that can take their time to be answered.
One request in particular is expected to take up to some minutes.
Now I would like to allow this request to execute in full without timing out, while keeping expiration time for "normal" real time requests low, in order to avoid potential congestions.
My guess is that I would have to do something like this:
public class MyServlet extends HttpServlet {
...
public void doGet(HttpServletRequest pRequest, HttpServletResponse pResponse)
throws IOException, ServletException {
if (pRequest is of the type allowed to be slow) {
set max idle time for this request very high (or infinite)
}
server.execute(pRequest, pResponse);
}
}
I'm using jetty 6.1.2rc4
A:
The maxIdleTime parameter doesn't configure the time that a request is allowed to take. The value is used to remove idle threads from the thread pool when Jetty decides to shrink the pool. See the javadoc for QueuedThreadPool#setMaxIdleTime().
If requests are timing out, it is probably due to the socket timeout parameter on one or both sides.
Q:
Apple ID two-factor authentication in VisualStudio
As of February 27, 2019, Apple forces developer accounts to use an extra security layer called Two-Factor Authentication.
When I try to login to my Apple Developer account through Visual Studio 2019, it seems 2FA is not supported; Visual Studio doesn't ask me for a security code after I enter my username/password.
Is 2FA supported in Visual Studio?
A:
Visual Studio 2019 Preview 4.2 supports two-factor auth for the Apple ID login
Q:
Store variable data after query with mongoose and node.js
I have code in app.js
var Song = db.model('Song');
var Album = db.model('Album');
I want to render index.jade with 2 variables: a list of songs and a list of albums.
I use query like this
Song.find({}, function( err, docs ){
// .........
}
Album.find({}, function( err, docs ){
// .........
}
So, what should I do to store the list of songs and the list of albums in variables and render index.jade with the 2 lists?
A:
I think you mean something like this:
function( req, res ) { // whatever your "controller" function is
Song.find({}, function( err, songs ){
Album.find({}, function( err, albums ){
res.render('index', { song_list: songs, album_list: albums });
});
});
}
Then just iterate and markup your song_list and album_list arrays in the template.
Note that this runs the queries sequentially (the second query doesn't start until the first completes) and is therefore slower than running them in parallel, but it should do what you want. To go the parallel route, consider using a promise library like this to defer res.render until both queries are done: https://github.com/kriszyp/promised-io
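For reference, a sketch of the parallel version using plain promises rather than the promised-io library (the findSongs/findAlbums functions below are hypothetical stand-ins for Song.find/Album.find):

```javascript
// Wrap a node-style find(callback) function in a promise.
function promisify(find) {
  return new Promise((resolve, reject) => {
    find((err, docs) => (err ? reject(err) : resolve(docs)));
  });
}

// Hypothetical stand-ins for Song.find({}, cb) and Album.find({}, cb).
const findSongs  = cb => cb(null, ['song A', 'song B']);
const findAlbums = cb => cb(null, ['album A']);

// Both queries run concurrently; render once both have resolved.
Promise.all([promisify(findSongs), promisify(findAlbums)])
  .then(([songs, albums]) => {
    // res.render('index', { song_list: songs, album_list: albums });
    console.log(songs.length, albums.length);
  });
```

With real Mongoose models the same shape applies; the render callback only fires after both result sets are available.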
Q:
Running JavaScript function to gives "Uncaught SyntaxError: Unexpected token }"
I am creating a personal website and decided to add a "skills div" where the content of the div changes depending on which tab is selected. To do this I have created a function that changes the innerHTML of the div to a string containing all the <p>, <img> and <script> elements. This works fine for the paragraphs and images, but when I run the function in the script it gives me an error.
here is the code that changes the innerHTML (only including the because it is just a big string with html elements and tags):
var tabContents = ['', '<img src="Skills/3DModelling/Night_Swarz.png" alt="ModelImage1.png" style="position: absolute; top: 10px; left: 10px; height: 200px;">' +
'<img src="Skills/3DModelling/Night_Swarz_AllClothes.png" alt="ModelImage2.png" style="position: absolute; bottom: 10px; left: 10px; height: 200px;">' +
'<p style="position: absolute; top: calc(50% - 50px); left: 0px; width: 170px; text-align: center; font-size: 180%;">Character 1</p>' +
'<img src="Skills/3DModelling/Samurai_Render.png" alt="ModelImage3.png" style="position: absolute; top: 10px; right: 30px; height: 200px;">' +
'<img src="Skills/3DModelling/Samurai_Posed.png" alt="ModelImage4.png" style="position: absolute; bottom: 10px; right: 10px; height: 200px;">' +
'<p style="position: absolute; top: calc(50% - 50px); right: 0px; width: 170px; text-align: center; font-size: 180%;">Character 2</p>' +
'<p style="position: absolute; top: 0px; left: 175px; height: calc(100% - 20px); width: calc(100% - 330px); text-align: center;"> I have been 3D modelling since January of 2014. I have been doing so as a recreational activity in order to provide myself with objects to move around in a gaming environment. I enjoy the coding challanges I face more so than 3D modelling, however through developing my skills I have acquired a talent and satasfaction through the models I have created. I started with very basic designs and now I am confident in my abilities to create models based on 2D sketches, texture and paint the models as well as pose and render them to create images. The tools I have always used are the <i>Blender</i> 3D Modelling program. This is a free program that is mainly used in low budget game making.</p>' +
'<p style="position: absolute; top: 220px; left: 175px; height: calc(100% - 20px); width: calc(100% - 330px); text-align: center; font-size: 200%;">Some More Models</p>' +
'<img id="ChangingImage" style="position: absolute; top: 300px; left: calc(50%); height: 200px;">' +
'<img src="Images/LeftArrow.png" style="position: absolute; top: 350px; left: calc(50% - 200px); height: 50px; cursor: pointer;" onclick="ChangeToNext("false"");">' +
'<img src="Images/RightArrow.png" style="position: absolute; top: 350px; right: calc(50% - 200px); height: 50px; cursor: pointer;" onclick="ChangeToNext("true");">' +
'' +
'<script>' +
'' +
'var currentImage = 0;' +
'var imageID = ["Night_Swarz.png", "Night_Swarz_AllClothes.png", "Samurai_Posed.png", "Samurai_Render.png"];' +
'' +
'function ChangeImageTo(number){' +
'img = document.getElementById("ChangingImage");' +
'currentImage = number;' +
'img.src = "Skills/3DModelling/" + imageID[number];' +
'img.setAttribute("style", "height: 200px;");' +
'var width = img.clientWidth/2;' +
'img.setAttribute("style", "position: absolute; top: 300px; left: calc(50% - " + width.toString() + "px); height: 200px;");' +
'}' +
'' +
'function ChangeToNext(forward){' +
'if(forward == "true"){' +
'if(currentImage == imageID.length-1){' +
'ChangeImageTo(0);' +
'} else {' +
'ChangeImageTo(currentImage+1);' +
'}' +
'} else {' +
'if(currentImage == 0){' +
'ChangeImageTo(imageID.length-1);' +
'} else {' +
'ChangeImageTo(currentImage-1);' +
'}' +
'}' +
'}' +
'' +
'ChangeImageTo(0);' +
'</script>','','','',''];
function PopulateWithTab(tab){
div = document.getElementById("SkillsPane");
div.innerHTML = tabContents[tab];
}
This function is meant to cycle an image through a list; however, once the function is run, this message is displayed in the console of the webpage:
Index.html:2 Uncaught SyntaxError: Unexpected token }
This is line 2 in index.html:
<html>
A:
InnerHtml Security considerations
Although this may look like a cross-site scripting attack, the result is harmless. HTML5 specifies that a <script> tag inserted via innerHTML should not execute.
Note: script elements inserted using innerHTML do not execute when they are inserted.
InnerHtml wc3/specs
Conditional (ternary) Operators
Your if/else implementation is hard to read and debug; use an inline ternary instead. (You probably have an extra })
condition ? expr1 : expr2
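As a sketch, here is the cycling logic from the question rewritten with ternaries, as pure logic with no DOM (the function name is mine):

```javascript
const imageID = ["Night_Swarz.png", "Night_Swarz_AllClothes.png",
                 "Samurai_Posed.png", "Samurai_Render.png"];

// Next index when moving forward or backward, wrapping at both ends.
function nextIndex(current, forward) {
  return forward
    ? (current === imageID.length - 1 ? 0 : current + 1)
    : (current === 0 ? imageID.length - 1 : current - 1);
}

console.log(nextIndex(3, true));  // wraps forward to 0
console.log(nextIndex(0, false)); // wraps backward to 3
```

Collapsing the nested branches this way makes a stray } much easier to spot.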
Q:
How to call block defined as property in another class?
I am calling a block from a second class; the block is declared and maintained in the first class.
In ViewController.h
@property (copy) void (^simpleBlock)(NSString*);
In View Controller.m
- (void)viewDidLoad {
[super viewDidLoad];
self.simpleBlock = ^(NSString *str)
{
NSLog(@"Hello My Name is: %@",str);
};
}
In SecondViewController.m
In ViewDidload
ViewController *VC = [[ViewController alloc]init];
VC.simpleBlock(@"Harjot");//bad execution error
Please suggest me some solutions because the code is giving me bad execution error.
How can i call the block in any another way?
A:
It's the correct way to run the block. However, if you try to run a block that is nil you'll get a crash - so you should always check that it's not nil before calling it:
ViewController *vc = [[ViewController alloc] init];
if (vc.simpleBlock) {
vc.simpleBlock(@"Harjot");//this will not get called
}
The reason why in your case the block is nil is because you set it in viewDidLoad - however viewDidLoad is not called until its view is ready to go on screen. For testing purposes try to move the assignment from viewDidLoad to init and this should work:
- (instancetype)init
{
self = [super init];
if (self) {
_simpleBlock = ^(NSString *str)
{
NSLog(@"Hello My Name is: %@",str);
};
}
return self;
}
Q:
VS2008 to VS2010 - Problematic Configuration and Upgrade - Newbie
I have updated this question with an executive summary at the start below. Then, extensive details follow, if needed. Thanks for the suggestions.
Exec Summary:
I am a novice with VS. I have a problem with some inherited code. Code builds and executes fine on VS2008 (XP64). Same code will either build and not run, or fail to build on XP64 or W7 with VS2008 and/or VS2010. After changing some compiler options, I managed to get it to run without an issue on VS2010 on XP64; however, on W7, no luck.
I eventually discovered that the heap is getting corrupted.
Unhandled exception at 0x76e540f2 (ntdll.dll) in ae312i3.3.exe: 0xC0000374: A heap has been corrupted.
I am not familiar with how to consider fixing a heap problem; perhaps there is an issue with the pointers in the existing code that points to memory in use by another thread or program, corrupted ntdll.dll file, other?
Rebooting PC to check if ntdll.dll was corrupted didn't help. Changed debug settings, and received the following feedback:
HEAP[ae312i3.3.exe]: Invalid address specified to RtlSizeHeap( 0000000000220000, 000000002BC8BE58 )
Windows has triggered a breakpoint in ae312i3.3.exe.
This may be due to a corruption of the heap, which indicates a bug in ae312i3.3.exe or any of the DLLs it has loaded. This may also be due to the user pressing F12 while ae312i3.3.exe has focus.
It appears that when it crashes, C++ is returning a boolean variable to an expression of the form
While (myQueryFcn(inputvars))
QUESTIONS:
So, is it not returning a C++ boolean to a VB boolean? I do believe that the two are different representations (one uses True/False, the other an integer?) Could this be an issue? If so, why was it NOT an issue in VB2008?**
Or, perhaps it is that the C++ code has written to allocated memory, and upon returning to VB, it crashes???
** I have recently learned of 'Insure++', and will be trying to use it to track down the issue. Any suggestions on its use, other possible insight? **
I would appreciate any suggestions. Thanks again.
.
.
.
.
.
DETAILS THAT LED TO THE ABOVE SUMMARY (below):
I am a novice with VS2010; familiar with programming at an engineering application level (Python, Fortran, but been decades since I used C++ extensively), but not a professional programmer.
I have a solution that consists of multiple projects, all in VS2008. Projects are:
Reader (C++ project; utilizes 3rd party DLLs)
Query (C++ project; depends upon Reader)
Main (VB; depends upon Reader and Query).
The following applies to XP64 OS.
The solution and projects were written, built, and released by someone other than myself.
I have taken the existing files, and made a copy, placed in a directory of my choice, and simply opened in VS2010 (VS2008 is not installed on my PC). I was able to successfully build (with many warnings though - more on that later) ; but when I ran the executable, it would reach a point and crash. After much trial and error, I discovered that modification of compiler settings resolved the issue for me as follows:
It would build and execute in the Debug configuration, but not the Release. I found that in the Query project's Property Page / Configuration Properties / C++ / Optimization / Optimization, the Release (x64) configuration used 'Maximize Speed (/O2)' while the Debug used 'Disabled (/Od)', so I switched to 'Disabled (/Od)'.
Also, Query's project Property Page / Configuration Properties / General / Whole Program Optimization --> needed to be set to 'Use Link Time Code Generation'.
The above build and ran successfully on XP64 in VS2010.
Next, I simply copied the files and placed a copy on a W7 machine with VS2010. Opened the solution via 2010, and it 'upgraded' the files automatically. When I launch VS2010, it automatically indicates the 4 following warnings. They are:
Operands of type Object used for operator '&'; runtime errors could occur. In file 'CobraIFile.vb', Line 1845, Column 37.
identical error completely
Accesss of shared member, constant member, enum member or nested type through an instance; qualifying expression will not be evaluated. In file 'FileWriter.vb', Lines 341, Columns 51
Operands of type Object used for operator '='; use the 'Is' operator to test object identity. In file 'FormMain.vb'; Line 4173, Column 32.
Code for warnings in 1 & 2 are as follows
ValueStr = String.Empty
For iCols = 0 To DGrid.Columns.Count - 1
ValueStr &= DGrid.Item(iCols, iRows).Value & ";" // THIS IS WARNING LINE!!!
Next
Code for warning 3:
With FormMain
WriteComment("")
WriteComment("Generated by :")
WriteComment("")
WriteComment(" Program : " & .PROGRAM.ToUpper) // THIS IS WARNING LINE!!!
Code for warning 4:
' Compare material against the material table
For iRowMat As Integer = 0 To matCount - 1
' Ignore new row
If Not .Rows(iRowMat).IsNewRow Then
' Check material description
// LINE BELOW IS WARNING LINE!!!
If .Item("ColMatDesc", iRowMat).Value = matDesc Then
DataGridMatProp.Item("ColMatIdx", iRow).Value = .Item("ColMatFile", iRowMat).Value
Exit For
End If ' Check description
End If ' Check new row
Next iRowMat
When I build the solution, it will successfully build without errors (but many warnings), and when I run the executable, it successfully loads the GUI, but at some point crashes while executing either the Query or Reader projects (after taking actions with gui buttons) with the following information:
C:\Users\mcgrete\AppData\Local\Temp\WER5D31.tmp.WERInternalMetadata.xml
C:\Users\mcgrete\AppData\Local\Temp\WER68E6.tmp.appcompat.txt
C:\Users\mcgrete\AppData\Local\Temp\WER722A.tmp.mdmp
I was unable to utilize the information in the three files above (I don't know how to go about doing so).
The warnings I receive in W7 are very similar, if not identical, to those in XP64; they are along the lines of the following types, and there are over 1,600 of them. Add to the warning types below the original 4 warnings listed earlier above. With my success in running on XP64, and not in W7, I was assuming/hoping that these would not need to be addressed individually, as they are only warnings.
Warning C4267: 'argument' : conversion from 'size_t' to 'int', possible loss of data. C:\Users\mcgrete\Documents\iCOBRA\pts\p312\exec\win64\6111\include\atr_StringBase.h 351 1 Reader
Warning C4018: '<' : signed/unsigned mismatch C:\Users\mcgrete\Documents\iCOBRA\pts\p312\exec\win64\6111\include\omi_BlkBitVectTrav.h 69 1 Reader
Warning C4244: 'initializing' : conversion from 'double' to 'float', possible loss of data. C:\Users\mcgrete\Documents\iCOBRA\pts\p312\exec\win64\6111\include\g3d_Vector.h 76 1 Reader
Warning C4244: 'initializing' : conversion from 'double' to 'float', possible loss of data. C:\Users\mcgrete\Documents\iCOBRA\pts\p312\exec\win64\6111\include\g3d_Vector.h 76 1 Reader
Warning C4800: 'int' : forcing value to bool 'true' or 'false' (performance warning). C:\Users\mcgrete\Documents\iCOBRA\pts\p312\exec\win64\6111\include\rgnC_Region.h 219 1 Reader
Warning LNK4006: "public: class ddr_ShortcutImpl const & __cdecl cow_COW,struct cow_Virtual > >::ConstGet(void)const " (?ConstGet@?$cow_COW@V?$ddr_ShortcutImpl@VkmaC_Material@@@@U?$cow_Virtual@V?$ddr_ShortcutImpl@VkmaC_Material@@@@@@@@QEBAAEBV?$ddr_ShortcutImpl@VkmaC_Material@@@@XZ) already defined in ABQDDB_Odb_import.lib(ABQDDB_Odb.dll); second definition ignored C:\Users\mcgrete\Documents\iCOBRA\pts\p312\source\312i3.3\Reader\ABQSMAOdbCore_import.lib(ABQSMAOdbCore.dll) Reader
Warning LNK4221: This object file does not define any previously undefined public symbols, so it will not be used by any link operation that consumes this library. C:\Users\mcgrete\Documents\iCOBRA\pts\p312\source\312i3.3\Reader\ABQSMAOdbCore_import.lib(ABQSMAOdbCore.dll) Reader
Warning C4996: 'sprintf': This function or variable may be unsafe. Consider using sprintf_s instead. To disable deprecation, use _CRT_SECURE_NO_WARNINGS. See online help for details. C:\Users\mcgrete\Documents\iCOBRA\pts\p312\source\312i3.3\Query\Query.cpp 271 1 Query
Warning MSB8004: Output Directory does not end with a trailing slash. This build instance will add the slash as it is required to allow proper evaluation of the Output Directory. C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\Microsoft.CppBuild.targets 299 6 Query
Now to my request for help:
I must clarify, I am willing to dig into the warnings above in detail; however, I have not done so as before investing that effort and not having written code to begin with, I am simply trying to understand what might be the true root cause, then focus efforts in that direction.
I was disappointed with the XP64 issues I experienced, and was unsure if the changes required to the configuration were required, or if the changes that I made were only actually a 'work-around' to an unidentified problem?
I expected that once the XP64 VS2010 version of the solution was operable, that it would transfer to W7 without an issue, as the software build and ran fine with VS2008 and XP64. Is that a poor assumption? What might I be missing?
Should I consider attempting to modify the configurations again, or is the root cause likely associatd with the warnings indicated above? If the warnings, why were they apparently non-issues in VS2008 - did changes in VS2010 effectively lead to generation of actual runtime errors where in VS2008 I was luckily 'spared' the pain?
I appreciate any guidance and insight on how to proceed, as from my limited experience, it appears from searches on the web that there were numerous compiler bugs or related in VS2010. Not sure if any are related to my issues, if the numerous warnings are actually a problem and the code needs quite a bit of cleaning up, or if there are simply some configuration issues that I may have to deal with.
FYI - The latest update/SP to VS2010 that I have installed is VS10SP1-KB2736182.exe. I have also tried to use the debugger, but was unable to get it to stop at breakpoints in my Query or Reader project codes, even while running VS2010 as administrator. W7 does have .NET Framework 4.0 Multi-Targeting Pack installed, and my solution is configured to use .NET Framework 4.0 Client Profile.
Thanks in advance!
UPDATE March 18, 2013
I didn't know how to reply to my own question, so here is an update.
I still could not manage to get the debugger working; so, I did it the old fashioned way - added various MessageBoxs to find where it was crashing.
A. The Main.vb program calls a function in the 'Query' project
OdbQueryGetIncrement(str_out, vec_ptr)
B. Then, the function executes through 100%, attempting to return a boolean...here is code with some old fashioned debugging code added...
//Gets the next item in a list.
// Returns false if there is the vector is empty.
// NOTE: Once an element is returned it is removed from the list.
bool __stdcall OdbQueryGetItem(
char* &str_out, // RETURN Next item in list.
void * vec_ptr, // Pointer to the vector of pointers.
int index) // Index of pointers vector to return next item of.
{
// Cast the point into an array of pointers
std::vector<std::string>* *vec_temp = (std::vector<std::string>* *) vec_ptr;
bool bool_out = false;
char vectempsize[1000];
int TEM1;
char temp[1000];
TEM1 = vec_temp[index]->size();
// Check vector is valid
if (vec_temp) {
if(vec_temp[index]->size() >= index)
{
sprintf(temp,"value: %d\n",(int)bool_out);
::MessageBoxA(0, (LPCSTR) temp, (LPCSTR) "OdbQuery.dll - bool_out", MB_ICONINFORMATION);
sprintf(temp,"value: %d\n",(int)index);
::MessageBoxA(0, (LPCSTR) temp, (LPCSTR) "OdbQuery.dll - index", MB_ICONINFORMATION);
sprintf(vectempsize,"value: %d\n",(int)TEM1);
::MessageBoxA(0, (LPCSTR) temp, (LPCSTR) "OdbQuery.dll - index", MB_ICONINFORMATION);
}
if (!vec_temp[index]->empty()) {
// Get the next item in the list
std::string item = vec_temp[index]->front();
// Initialise ouput string
str_out = (char*)malloc( item.size()*sizeof(char) );
sprintf(str_out, "%s", item.c_str());
::MessageBoxA(0,(LPCSTR) str_out, (LPCSTR) "hello", 0);
// Remove first item from the vector
vec_temp[index]->erase(vec_temp[index]->begin());
bool_out = true;
}
}
sprintf(temp,"value: %d\n",(int)bool_out);
::MessageBoxA(0, (LPCSTR) temp, (LPCSTR) "OdbQuery.dll - bool_out", MB_ICONINFORMATION);
return bool_out;
}
The code starts out with bool_out=false as expected (verified with MessageBox value=0 output)
The code reads and outputs index = 2 with the MessageBox...
The code reads and outputs TEM1=vec_temp[index]->size() as a value=2 with the MessageBox...
The code outputs bool_out as true (value=1) with the MessageBox...
Then, the code crashes. A MessageBox that was placed immediately after the line that calls the code above never is executed.
The output from VS2010 is "The program '[6892] ae312i3.3.exe: Managed (v4.0.30319)' has exited with code -2147483645 (0x80000003)."
I am lost as to why the execution would die while returning from this function.
Is there some possible issue with compiler settings or bugs?
Any help is appreciated!
MORE INFORMATION
Hello, I modified some settings on the Properties Page to attempt to get the debugger to give me more information. This has resulted in more information as follows:
Unhandled exception at 0x76e540f2 (ntdll.dll) in ae312i3.3.exe: 0xC0000374: A heap has been corrupted.
I am not familiar with how to consider fixing a heap problem; perhaps there is an issue with the pointers in the existing code that points to memory in use by another thread or program, corrupted ntdll.dll file, other?
I will try rebooting PC to see if that helps, though I have little hope for that...didn't help.
Found option in Debugger to 'Enable unmanaged code debugging', checked it; cleaned; rebuild; run with debug...
Output more descriptive --
HEAP[ae312i3.3.exe]: Invalid address specified to RtlSizeHeap( 0000000000220000, 000000002BC8BE58 )
Windows has triggered a breakpoint in ae312i3.3.exe.
This may be due to a corruption of the heap, which indicates a bug in ae312i3.3.exe or any of the DLLs it has loaded. This may also be due to the user pressing F12 while ae312i3.3.exe has focus.
It appears that when it crashes, C++ is returning a boolean variable to an expression of the form
While (myQueryFcn(inputvars))
So, is it not returning a C++ boolean to a VB boolean? I do believe that the two are different representations (one uses True/False, the other an integer?) Could this be an issue? If so, why was it NOT an issue in VB2008?
A:
I solved my own problem; the root cause of the problem was as follows.
Root Cause:
VisualBasic (VB) called C++.
VB created a string and sent to C++. Previous developer/coder allocated memory in C++ for the same string.
When execution of the C++ code ended, C++ appears to have freed the memory allocation established by VB and C++.
Solution:
1. Removed memory allocation in C++ code (below).
str_out=(char*)malloc( (item.size()+1)*sizeof(char) );
2. Modified VB code to use a StringBuilder type, rather than String.
Dim str_out As StringBuilder = New StringBuilder(5120)
See: return string from c++ function to VB .Net
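The same caller-owns-the-buffer pattern can be sketched on the C++ side (a simplified stand-in for illustration, not the project's actual signature):

```cpp
#include <cassert>   // handy for quick sanity checks
#include <cstring>
#include <string>

// The caller (e.g. VB's StringBuilder(5120)) owns buf; the callee only
// copies into it, so no allocation crosses the VB/C++ boundary and no
// heap frees memory that another heap allocated.
bool get_item(char* buf, std::size_t buf_size, const std::string& item) {
    if (item.size() + 1 > buf_size) return false;     // won't fit with '\0'
    std::memcpy(buf, item.c_str(), item.size() + 1);  // copy incl. terminator
    return true;
}
```

Because the buffer is allocated and freed on one side only, the cross-runtime free that corrupted the heap here cannot occur.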
Q:
For a cloud machine, is it good practice to reboot periodically?
As a desktop Windows user, one often finds better performance after rebooting a machine that has been running for a long period of time. Is this also the case for virtual machines in the cloud, like EC2, Azure, or a VPS?
A:
For the most part, I only have to reboot my virtual machines when applying security patches that require a reboot.
If you're using stable applications/services on a stable OS, you shouldn't have to do any sort of "weekly/monthly reboot" just to ensure better performance.
Q:
What could cause bubbling toilet when the shower or sink is running?
One of the two toilets in my rental house bubbles when the shower or sink in the master bath area is running. In the past, about 6 months ago, I have had plumbers and Roto-Rooter out and they could find no blockage in the line. Before I spend more money on an out-of-state house, could someone please tell me if they have suggestions?
A:
It sounds like the toilet, shower and sink share a vent. This is pretty normal; no plumber in his right mind would run separate vented stacks for each drain in the house. The drains are instead tied into one vent stack, and then stacks are combined as they flow into the main sanitary drain. However, the shower or sink may be upstream of the toilet, and are pushing air in front of water which might be finding relief by bubbling up the toilet's drain. The plumbing can still pass code, but the intent of the applicable plumbing code is to prevent a drain being too far from its vent, which causes air to get trapped "downstream" of water in the line, resulting in problems like this (and slow drains).
The design of the toilet may have something to do with it. Toilets, like other drains, have U-bends; for a toilet this has the dual purpose of keeping water in the bowl, and also keeping sewer gases from pushing out into the room (similar to J-traps on sink/shower drains). However, "low-flow" toilets which use 1.6GPF or less are often designed with a shallower U-bend, so that it doesn't take as much water flow to induce the siphon that makes the contents of the bowl go away. Depending on other aspects of the design, like the relative order of the sink, toilet and shower in the drain line, water pushing past the tee to the toilet may be enough to force some air through the toilet's U-bend.
You may also have some issues with tee junctions in the plumbing. Specifically, I'm thinking of a tee joint being installed backwards. Drain tees are not true T-shapes; the perpendicular end instead curves into the straight section. The curve should direct water from the tee joint "downstream" towards the main stack, but if installed backwards it will force drain water (and air) towards "upstream" drains before gravity then pulls it back down the main line. This causes a backwash that slows drains, and yes it can force air in the drain lines past traps like the toilet U-bend. If the bathroom was ever renovated and the plumbing changed, and the work was not inspected (or the inspector missed the problem), this is plausible.
Q:
jQuery plugin: invoke callback method that can then invoke plugin functions based on response
I'm getting into writing jQuery plugins and integrating them with AJAX. I'm cutting the fat and focusing on the basics:
(function($)
{
function MyPlugin(el, options)
{
this.element = $(el);
this.myInput = false;
this.options = $.extend(
{
onSave:function(){}
}, options);
this.initialize();
}
$.fn.myPlugin = function(options)
{
var control = new MyPlugin(this.get(0), options);
return control;
};
MyPlugin.prototype =
{
initialize:function()
{
var me = this;
//specifics shouldn't really matter
//it creates some HTML to go inside the element
//within that HTML is an input, which I can successfully invoke keyup:
me.myInput.keyup(function(event)
{
if(keyPressIsEnter(event))
me.saveFunction();
});
},
saveFunction:function()
{
var value = this.myInput.val();
this.options.onSave(value);
//CURRENTLY I CALL SUCCESS ANIMATION
this.successAnimation();
},
successAnimation:function()
{
//do something to the markup within element for success
},
failureAnimation:function()
{
//do something to the markup within element for failure
}
};
})(jQuery);
and let's say I have a div and set it to use my plugin like so:
$("#myDiv").myPlugin({onSave:function(value){myAjaxSaveFunction(value);}});
With this setup, the plugin, save function and animation all work (assuming there is never an error). However, the myAjaxSaveFunction is asynchronous and might return either a success or a failure. I currently have the save function within the plugin calling the success animation. If I had myAjaxSaveFunction return a true/false, I could (in theory) use that return value within the plugin to determine whether to run the success or failure, but not if the function is asynchronous.
So, how is this scenario typically handled? For reuse, I need to be able to customize the function that handles the data as an optional callback, but I need the plugin to wait on whether it runs the success/fail animation based on the result of the function (which might be asynchronous, or might not be).
@Kevin B: Are you suggesting this?
saveFunction:function()
{
var value = this.myInput.val();
var success = this.options.onSave(value);
if(success)
successAnimation();
else
failureAnimation();
},
wouldn't this just fall through to the if statement immediately while the this.options.onSave function is executing?
A:
I'd suggest allowing the onSave function to return either a boolean value or a promise object.
saveFunction:function()
{
    var me = this;
    var value = this.myInput.val();
    var success = this.options.onSave(value);
    if(success && success.promise)
        success.promise().done(function(){ me.successAnimation(); })
                         .fail(function(){ me.failureAnimation(); });
    else if (success)
        this.successAnimation();
    else
        this.failureAnimation();
},
Now, you can use it like this:
$("#myDiv").myPlugin({onSave:function(value){return myAjaxSaveFunction(value);}});
function myAjaxSaveFunction(value) {
return $.ajax({...});
}
Q:
Why does my program behave unexpectedly when I use sigaction?
I'm writing my own simple shell as an exercise. I need to register to the SIGCHLD signal in order to handle zombie-processes. For some reason, when I add the handler using sigaction the program exits, and I don't understand why.
You can see in main() that we exit if process_arglist() returned 0 but I return 1 and I don't see how the signal handling could affect that.
Here's my code. It should handle a command ending with an & (we fork() and use execvp in the child code).
For example: ping 127.0.0.1 -c 5 &.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <pthread.h>
#include <signal.h>
#include <sys/wait.h>
#include <stdlib.h>
#include <unistd.h>
void sigchld_handler(int signal) {
int my_errno = errno;
while (waitpid(-1, 0, WNOHANG) > 0); // WNOHANG so handler will be non-blocking.
errno = my_errno;
}
int handle_background_command(int count, char** arglist) {
pid_t pid;
arglist[count - 1] = NULL; // remove "&" from arglist
//Handle SIGCHLD signal
struct sigaction sa, sa_dft;
sa.sa_handler = sigchld_handler;
sa.sa_flags = SA_NOCLDSTOP;
if (sigaction(SIGCHLD, &sa, &sa_dft) == -1) {
perror("error when trying to set signal action");
exit(-1);
}
if((pid = fork()) == -1) {
perror("error when trying to fork() from handle_background_command()");
exit(1);
}
if(pid == 0) {
// Child code
sigaction(SIGCHLD, &sa_dft, NULL);
if(execvp(arglist[0], arglist) == -1) {
perror("error when trying to execvp() from handle_background_command()");
exit(1);
}
}
// Parent code
return 1;
}
int process_arglist(int count, char** arglist)
{
return handle_background_command(count, arglist);
}
int main(void)
{
while (1)
{
char** arglist = NULL;
char* line = NULL;
size_t size = 0; // must be initialized before calling getline()
int count = 0;
if (getline(&line, &size, stdin) == -1) {
printf("out!");
break;
}
arglist = (char**) malloc(sizeof(char*));
if (arglist == NULL) {
printf("malloc failed: %s\n", strerror(errno));
exit(-1);
}
arglist[0] = strtok(line, " \t\n");
while (arglist[count] != NULL) {
++count;
arglist = (char**) realloc(arglist, sizeof(char*) * (count + 1));
if (arglist == NULL) {
printf("realloc failed: %s\n", strerror(errno));
exit(-1);
}
arglist[count] = strtok(NULL, " \t\n");
}
if (count != 0) {
int result = process_arglist(count, arglist);
printf("result = %d\n", result);
if (!result) {
free(line);
free(arglist);
printf("out\n");
break;
}
}
free(line);
free(arglist);
}
pthread_exit(NULL);
return 0;
}
Again, if I get rid of the signal handling code then it works.
What's the reason?
EDIT
Here's the output (last rows) of strace utility:
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=2818, si_status=0, si_utime=0, si_stime=0} ---
wait4(-1, NULL, WNOHANG, NULL) = 2818
wait4(-1, NULL, WNOHANG, NULL) = -1 ECHILD (No child processes)
rt_sigreturn() = -1 EINTR (Interrupted system call)
write(1, "out!", 4out!) = 4
exit_group(0) = ?
+++ exited with 0 +++
A:
Your program exits because the getline function fails with EINTR (interrupted system call): getline is interrupted by the signal handler.
Check this: How to handle EINTR (interrupted System Call)
Q:
Change Optional from within itself
I have a variable oneTimeTask. Its type is:
Optional<Runnable> oneTimeTask=...;
Do you think this example is a dirty way to 'reset' the Optional<Runnable>:
oneTimeTask = Optional.of(() -> {
/*Run task
...*/
oneTimeTask = Optional.empty();
});
... what do you think?
oneTimeTask will get an Optional(someRunnable) value many times while the app is running, but most of the time the value is empty.
Should I maybe use a Supplier<Runnable> or Supplier<Optional<Runnable>> instead?
If yes, how would you implement it? (I'm not so familiar with the Supplier class)
I'm open to any alternative (better) ways to achieve the same result.
A:
As you want to control what is returned from one call to the next, you want a Supplier.
AtomicBoolean once = new AtomicBoolean();
Supplier<Runnable> oneTimeTask = () -> once.getAndSet(true)
? () -> {}
: () -> { /* do one */ };
or more simply you can have a Runnable
Runnable runs = () -> {
if (once.getAndSet(true)) return;
/* do once */
};
Q:
More equations than unknowns in amplifier circuit
I am trying to find the Magnitude Response of the gain of this amplifier circuit.
The gain formula is: $$ H(\omega) = \frac{\tilde{V_{out}}}{\tilde{V_{in}}}$$
My amplifier circuit is as follows:
I am trying to find the magnitude response:
$$ |H(\omega)| = \frac{|\tilde{V_{out}}|}{|\tilde{V_{in}}|}$$
The end goal: to get both V_out and V_in as functions of ω (with the resistor and capacitor values treated as constants). I will then use a tool (i.e. MATLAB, Maple, or other graphing software) to plot the magnitude response as a function of ω, and I will keep adjusting the values for the resistors and capacitors until the plot shows that the cutoff frequencies at both sides of the pass band are right where I want them.
How I am trying to get the equation: Before working with the absolute value, I am trying to get the equation V_out/V_in as one fraction with the only variable being ω and the constants being the impedances of the resistors and capacitors (ZR1, ZR2, ZR3, ZC1, ZC2).
The problem: I have way more equations than unknowns! The circuit is way over-defined. I have tried to use substitution to solve the problem, and was taken in circles. I tried to plug the equations into a matrix, but the calculator returned an error. How can I solve this over-defined system of equations? For now, please treat the impedances ZR1, ZR2, ZR3, ZC1, and ZC2 as constants (i.e. don't plug in the capacitor formula ZC=1/jωC or the resistor formula ZR=R just yet, I'd like to get an expression with just Z's first to keep things simple).
What I'm stuck trying to get: An expression V_out/V_in = [expression with only Z's]. This means that Vm, I1, I2, I3, and I4 have all been substituted out.
Equations:
$$\tilde{V_{out}} - 0V = (\tilde{I_{1}})(Z_{R2})$$
$$\tilde{I_{1}} + \tilde{I_{2}} - \tilde{I_{3}} - \tilde{I_{4}} = 0$$
$$\tilde{V_{out}} - \tilde{V_{m}} = (\tilde{I_{2}})(Z_{C2})$$
$$\tilde{V_{m}} - \tilde{V_{in}} = (\tilde{I_{3}})(Z_{R1})$$
$$\tilde{V_{m}} = (\tilde{I_{4}})(Z_{R3})$$
$$0V - \tilde{V_{m}} = (\tilde{I_{1}})(Z_{C1})$$
To reiterate: I want to find ( V_out / V_in ) = [expression with only Z's]. All Vm, I1, I2, I3, and I4 have been substituted out. Then I can finally plug in the capacitor and resistor impedance equations and get an expression with R (resistance) and C (capacitance) constants as a function of ω. But this hasn't been working (6 equations, only 5 unknowns: Vm, I1, I2, I3, and I4). V_out and V_in are not unknowns since they will be shown as a fraction on the left hand side of the equation.
Thanks in advance.
A:
Why go through a complicated analysis with KVL and KCL, only to end up stuck with a system of equations to solve? The fast analytical circuits techniques, or FACTs, are an interesting alternative to follow. They are described in the book I published in 2016.
The principle is to chop this 2nd-order circuit into a succession of smaller sketches you can solve almost by inspection, without writing a single line of algebra. You first determine the time constants involving each capacitors by "looking" into the connecting terminals as the component is temporarily removed from the circuit. When you do this exercise, the remaining capacitors are left in their dc state which is an open circuit. Then, you alternatively short one capacitor while you "look" through the connecting terminals of the other ones. This is what I have done below where a dc operating point from SPICE confirms the analysis. In these simple cases, no need to write a line of algebra, just inspect the circuit and confirm the response with SPICE by reading the bias points:
For instance, \$\tau_1\$ is simply capacitor \$C_1\$ multiplied by \$R_1||R_3\$. The SPICE bias point confirms this as the right-side terminal of the current source is virtually grounded and the upper connection biases the two paralleled resistors. Same for \$\tau_2\$ where the right-side connection of the current source is also grounded by the op-amp delivering 0 V. Finally, \$\tau_{12}\$ shows that shorting \$C_1\$ for this exercise naturally excludes the two paralleled resistors and \$R_2\$ remains alone. When the time constants are determined, simply assemble them to form the denominator of your transfer function:
\$D(s)=1+s(\tau_1+\tau_2)+s^2(\tau_1\tau_{12})\$
Once we have all the time constants we need for the denominator, we can determine the zeroes using the generalized expression involving high-frequency gains H. These gains are determined when capacitors are set in their high-frequency states (short circuit). Use SPICE and bias the input with a 1-V source and check what the output is. This is the gain you want. Again, inspection is easy here as most of these gains are 0 except the first one which involves a simple inverting configuration from which \$R_3\$ is excluded considering the virtual ground at the (-) pin:
You can form the numerator by combining these gains with the time constants already determined:
\$N(s)=H_0+s(H^1\tau_1+H^2\tau_2)+s^2(H^{12}\tau_1\tau_{12})\$
Capture all these information in a Mathcad sheet and there you go, you have the transfer function:
However, the exercise ends - in my opinion - when the transfer function is rearranged in a low-entropy way where the band-pass gain appears, together with a quality factor and a resonant frequency. These extra steps are part of the design-oriented analysis or DOA as promoted by Dr. Middlebrook: you format your equation to gain insight on what it does and how you select the filter elements to meet a design goal like a desired gain at the resonance for instance.
The response for the arbitrarily-selected components values is here:
A:
If you are familiar with the function could you please explain in an
answer instead of simply saying that solutions might "exist somewhere
on the internet"? Thank you!
Not "might exist" but "do exist". Try this site's simulator: -
The end goal: to get both V_out and V_in as functions of ω (with the
resistor and capacitor values treated as constants). I will then use a
tool (i.e. MATLAB, Maple, or other graphing software) to plot the
magnitude response as a function of ω, and I will keep adjusting the
values for the resistors and capacitors until the plot shows that the
cutoff frequencies at both sides of the pass band are right where I
want them.
Looks like you need a tool to keep plugging in values to get the response you want i.e. that is your end goal. The Okawa electric tool is just that.
A:
Ok, I thought I'd come back here and assure everyone that it is possible to find the formula for H(ω) with (1) ω being the only variable and (2) the only constants being the Z's and the complex number i. The system of equations can be solved by substitution. Here's what I was doing wrong:
The equations haven't changed:
$$\tilde{V_{out}} - 0V = (\tilde{I_{1}})(Z_{R2})$$
$$\tilde{I_{1}} + \tilde{I_{2}} - \tilde{I_{3}} - \tilde{I_{4}} = 0$$
$$\tilde{V_{out}} - \tilde{V_{m}} = (\tilde{I_{2}})(Z_{C2})$$
$$\tilde{V_{m}} - \tilde{V_{in}} = (\tilde{I_{3}})(Z_{R1})$$
$$\tilde{V_{m}} = (\tilde{I_{4}})(Z_{R3})$$
$$0V - \tilde{V_{m}} = (\tilde{I_{1}})(Z_{C1})$$
The situation: There are actually 7 unknowns and 6 equations. The unknowns are Vout, Vin, Vm, I1, I2, I3, and I4
What this means: Not all the unknowns will be fully defined. It will come down to two of the unknowns being dependent on each other (being left in an equation with each other) while the rest of the variables are fully defined (and will not be seen in the H(ω) formula). And obviously, since the H(ω) formula is equal to V_out / V_in, we choose the two underdefined variables to be V_out and V_in. They will be a ratio, so in a way, together they will be treated as one variable.
How to solve: We want two different equations. The first one we will obtain will take the form of "V_in = [...]" and the second will take the form of "V_out = [...]". For the "V_in = [...]" equation, first take the equation on top, isolate the V_out, and plug it into the other V_out term in equation #3 from the top. All the V_out's will disappear for the time being (which is fine). Then use substitution and the rest of the equations (you'll need ALL of them) to isolate V_in. You now have the "V_in = [...]" equation. To get the "V_out = [...]" equation, simply grab another copy of the equation #1 from the top and (again) isolate V_out. Put the expression for V_out in the numerator and the expression for V_in in the denominator, and that will get you the expression for V_out / V_in. You're finished!
The final result will be:
$$
\begin{split}
\frac{\tilde{V_{out}}}{\tilde{V_{in}}} = \frac{
(-1)*(\frac{Z_{R2}}{Z_{C1}})
}{
(Z_{R1})*(\frac{Z_{R2}}{Z_{C1}*Z_{C2}} + \frac{1}{Z_{C2}} + \frac{1}{Z_{R1}} + \frac{1}{Z_{R3}})
}
\end{split}
$$
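Substituting \$Z_{R}=R\$ and \$Z_{C}=1/(sC)\$ (with \$s=j\omega\$) into this result and simplifying gives:
$$
\frac{\tilde{V_{out}}}{\tilde{V_{in}}} = \frac{-sR_{2}C_{1}}{s^{2}R_{1}R_{2}C_{1}C_{2} + sR_{1}C_{2} + 1 + \frac{R_{1}}{R_{3}}}
$$
The single zero at \$s=0\$ together with the second-order denominator is exactly the band-pass shape described in the first answer: the gain rolls off at both low and high frequency.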
Just FYI: I did not make the MATLAB script for quickly and repeatedly adjusting the impedance values and re-plotting the Magnitude as a function of frequency. It was enough to know that solving this is possible. When I want to design an amplifier/filter to certain specs I will simply use a known transfer function (like Butterworth for example), plug in the parameters, plot/test as necessary, and then (and only then) use THAT transfer function to build a circuit. I hate software that doesn't give you the math solution but only gives you the circuit. If it did not do this, I wouldn't have had this problem in the first place! Also, for frequencies higher than audio (i.e. RF, IR, etc) I don't even think you can use an op amp. Since it has internal capacitance, I don't think you could get an op amp with a slew rate high enough for sufficient gain. You'd have to use other components like transistors (correct me if I'm wrong with any of that, I'm still trying to learn). Thank you
Q:
How to add KVO to synchronized class?
In my app I have the Restaurant class that you can see below. I'd like to attach a KVOController to it. But I'm having no luck. When I attach it with the code below, it crashes.
FBKVOController *KVOController = [FBKVOController controllerWithObserver:self];
self.KVOController = KVOController;
[self.KVOController observe:self keyPath:@"[Restaurant current].name.asString" options:NSKeyValueObservingOptionInitial|NSKeyValueObservingOptionNew block:^(id observer, id object, NSDictionary *change) {
DDLogDebug(@"Restaurant changed");
}];
What's the best way to add KVO to a class like this?
@implementation Restaurant
static Restaurant *current = nil;
+ (Restaurant *)current {
@synchronized(self) {
if (current == nil) {
current = [[Restaurant alloc] initWithId:0];
}
}
return current;
}
- (id)initWithId:(NSInteger)number {
self = [super init];
if (self)
{
...
}
return self;
}
@end
A:
The problem is not @synchronized. There are several issues with your code:
Do you want to observe when the current restaurant changes? Or when the current restaurant's name changes (without +[Restaurant current] pointing to a different restaurant instance)? Or any kind of name change, whether triggered by a change of current or a change of name?
Depending on the answer, you'll either want to observe observe:[Restaurant class] or observe:[Restaurant instance], but definitely not observe:self (unless you're setting this up inside the Restaurant class implementation, in which case [self class] would be an alternative to [Restaurant class]).
For any change to be observable, you must ensure that the class is implemented in a KVO-compliant way. This goes both for changes to +[Restaurant current] as well as for changes to -[Restaurant name], depending on what you want to be able to observe.
[Restaurant current].name.asString is not a valid key path. Valid key paths may only contain property names (ASCII, begin with a lowercase letter, no whitespace) and dots to separate them (see Key-value coding for details). Once you're telling the KVOController to observe:[Restaurant class], all that remains for the key path is current.name.asString.
What is name if not a string? Do you really need to convert it to a string for observing it? If your intention is to watch for name changes, observing current.name is probably sufficient.
You'll likely end up with one of the following two options:
FBKVOController *kvoController = [FBKVOController controllerWithObserver:self];
[kvoController observe:[Restaurant class] keyPath:@"current.name" ...];
// or
[kvoController observe:[Restaurant current] keyPath:@"name" ...];
And again, for any changes to be observable, they need to be KVO-compliant.
Q:
Why should you use ACL in CakePHP instead of authentication with a user level feature?
Having difficulty seeing the advantage of going through all the trouble with ACL when I could do it myself.
A:
The best reason to use an ACL system is that access requirements may change. A check for user_level > 5 is what is needed now, but it may change in the future. There is a higher level of abstraction in checking if a role is allowed a certain action because it is not tied to a condition. This condition is thus allowed to change over time.
Q:
Post request in curl throwing error
I execute this curl request from command line:
curl -X POST https://www.googleapis.com/bigquery/v2/projects/projname/queries?key={AIzaSyB740elm45sh9AkpuaekZW8eJbRi_oDDAc} \
After this command, I have a list of parameters to be passed:
{
"query": "SELECT * FROM [red-road-574:TestSridevi.Trucks] LIMIT 20",
"defaultDataset": {
"datasetId": "TestSridevi",
"projectId": "red-road-574"
}
}
But after executing the first command itself, I get the error:
<!DOCTYPE html>
<html lang=en>
<meta charset=utf-8>
<meta name=viewport content="initial-scale=1, minimum-scale=1, width=device-width">
<title>Error 411 (Length Required)!!1</title>
<a href=//www.google.com/><span id=logo aria-label=Google></span></a>
<p><b>411.</b> <ins>That’s an error.</ins>
<p>POST requests require a <code>Content-length</code> header. <ins>That’s all we know.</ins>
Any advice on how to resolve this would be appreciated.
A:
This will probably work:
curl -X POST -d "" https://www.googleapis.com/bigquery/v2/projects/projname/queries?key={AIzaSyB740elm45sh9AkpuaekZW8eJbRi_oDDAc}
The error message said that POST requests need their Content-length header field filled out, which you did not provide. The -d "" switch fills that out for you. You could also just send a GET request instead, for which you don't need to provide the switch.
Q:
Is coupling property file values with data in a database an anti-pattern?
I'm working on an application where there are a few configuration values defined in properties files whose values or valid ranges are in some way dependent on data stored in a database.
This seems wrong for a few reasons but I'm struggling to find a name for this or indeed any published article that might suggest it is bad practice. Could anyone advise?
A:
Your pain has a name, and it is coupling.
The property file -- which I gather is either part of a server's configuration or is bundled with the application -- is coupled to the database. Changes in the database will ripple to the property file.
It's not the worst kind of coupling -- that circle of hell is Pathological/Content coupling -- but coupling it still is.
Q:
@RefreshScope and /refresh not working
I have tried to implement Spring external configuration using Config Server. It is working fine the very first time the application is started, but any changes to the properties file are not being reflected. I tried to use the /refresh endpoint to refresh my properties on the fly, but it doesn't seem to be working. Any help on this would be greatly appreciated.
I tried POSTing to localhost:8080/refresh but got a 404 Error response.
Below is the code of my application class
@SpringBootApplication
public class Config1Application {
public static void main(String[] args) {
SpringApplication.run(Config1Application.class, args);
}
}
@RestController
@RefreshScope
class MessageRestController {
@Value("${message:Hello default}")
private String message;
@RequestMapping("/message")
String getMessage() {
return this.message;
}
}
and POM file is
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.0.0.RELEASE</version>
<relativePath/> <!-- lookup parent from repository -->
</parent>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
<java.version>1.8</java.version>
<spring-cloud.version>Finchley.M8</spring-cloud.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-config</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
<scope>compile</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-dependencies</artifactId>
<version>${spring-cloud.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
<repositories>
<repository>
<id>spring-milestones</id>
<name>Spring Milestones</name>
<url>https://repo.spring.io/milestone</url>
<snapshots>
<enabled>false</enabled>
</snapshots>
</repository>
</repositories>
and bootstrap.properties
spring.application.name=xxx
spring.cloud.config.uri=https://xxxxxx.com
management.security.enabled=false
endpoints.actuator.enabled=true
A:
The endpoint is now /actuator/refresh for Spring 2 and greater
From the comments:
You do need to have the management.endpoints.web.exposure.include=refresh set in the bootstrap.properties/bootstrap.yml
Note: If you're new to Spring-Cloud and not quite sure of what all keywords can go in web.exposure you can set it to * (management.endpoints.web.exposure.include=*) to have all exposed and you can get to know the endpoints and their restrictions later.
A:
It worked for me after adding the property "management.endpoints.web.exposure.include=*" in bootstrap.properties and changing the URL to /actuator/refresh for Spring versions above 2.0.0.
For Spring version 1.0.5 the URL is /refresh.
Q:
Kivy: Use a toggle button to change the state of another toggle button
For example, in Kivy language:
<MainToggle@ToggleButton>:
on_state: # something that will change the state of the sub-toggle
<SubToggle@ToggleButton>:
on_state: self.background_color = 0,0,0,1 # the sub-toggle button changes color
A:
You can refer to other Widgets using the kivy id system. Observe the following code:
from kivy.base import runTouchApp
from kivy.lang import Builder
runTouchApp(Builder.load_string("""
<MainToggle@ToggleButton>:
<SubToggle@ToggleButton>:
on_state: self.background_color = 0,0,0,1 # the sub-toggle button changes color
BoxLayout:
MainToggle:
id: my_toggle1 # an id allows us to refer to this widget
text: "Main Toggle"
# change the other toggle's state using its id
on_state: my_toggle2.state = "down" if my_toggle2.state == "normal" else "normal"
SubToggle:
id: my_toggle2
text: "Sub Toggle"
"""))
Here's a superb video tutorial that uses the kivy id system in a practical example. Reply if you are having trouble wrapping your head around this.
Q:
Can WCF MemoryStream return type be written to the Http Response object for downloading as an Excel file?
I built a parsing application that reads xml files and populates an Excel workbook using the NPOI library. Originally, I had that as part of my .net web app and would get the MemoryStream from NPOI and write that to the Response object so the browser would download it. I've since moved the parsing to a WCF netTcp service hosted in a Windows Service. The communication works great, but when I return the MemoryStream from the WCF service and write it to the response, I get the following error:
Microsoft JScript runtime error:
Sys.WebForms.PageRequestManagerParserErrorException:
The message received from the server
could not be parsed.
My question is: What happens to the stream when it gets passed from the wcf service to my client? The stream is (theoretically) the exact same stream from NPOI that I was writing to the response originally. Is there any special processing that I need to do on the client to make this work?
Here is my client code (the exception gets thrown at Response.End()):
string filename = "test.xls";
ms = client.GetExportedFile();
byte[] b = ms.ToArray();
ms.Flush();
ms.Close();
Response.Clear();
Response.ContentType = "application/vnd.ms-excel";
Response.AddHeader( "Content-Disposition", string.Format( "attachment;filename={0}", filename ) );
Response.AddHeader( "Content-Length", b.Length.ToString() );
Response.BinaryWrite( b );
Response.End();
A:
You seem to return the stream to a request for a partial page update with an UpdatePanel (search for Sys.WebForms.PageRequestManagerParserErrorException to find more details about the exception).
Make sure that you are returning the stream only to a full page request (a GET/POST issued by the browser itself, not by some script on the page that expects a particular type of response).
Q:
compare different number-types (int, float, double, ...) in .NET
I have a function which takes two parameters and returns true if they are equal or false if they are not:
private bool isequal(object a, object b)
{
if (a != null)
return a.Equals(b);
if (b != null)
return b.Equals(a);
//if (a == null && b == null)
return true;
}
Now I want to extend this function. It should also return true if a and b are two equal numbers but of different types.
For example:
int a = 15;
double b = 15;
if (isequal(a,b)) //should be true; right now it's false
{ //...
}
I already found a similar question (with an answer), best way to compare double and int, but a and b could be any type of number, or something other than numbers. How can I check whether a and b are numeric at all? I hope there is a better way than checking all existing numeric types of .NET (Int32, Int16, Int64, UInt32, Double, Decimal, ...).
//update: I managed to write a method which works pretty well. However there might be some issues with numbers of the type decimal (have not tested it yet). But it works for every other numeric type (including high numbers of Int64 or UInt64). If anybody is interested: code for number equality
A:
You could use Double.TryParse on both a and b. It will handle int, long, etc.
private bool isequal(object a, object b)
{
if (a == null || b == null)
return (a == b);
double da, db;
if (Double.TryParse(a.ToString(), out da) && Double.TryParse(b.ToString(), out db))
return YourIsDoubleEqualEnough(da, db);
return false;
}
Q:
Fast memory allocation for real time data acquisition
I have a range of sensors connected to a PC that measure various physical parameters, like force, rotational speed and temperature. These sensors continuously produce samples at some sample rate. A sample consists of a timestamp and the measured dimension itself; the sample rates are in magnitudes of single-digit kilohertz (i.e., somewhere between 1 and 9000 samples per second).
The PC is supposed to read and store these samples during a given period of time. Afterwards the collected data is further treated and evaluated.
What would be a sensible way to buffer the samples? In a realistic setup the acquisition could easily gather a couple of megabytes per second. Also, paging could be critical in case memory is allocated quickly but needs swapping upon write.
I could think of a threaded approach where a separate thread allocates and manages a pool of (locked, so non-swappable) memory chunks. Given there are always enough of these chunks pre-allocated, further allocation would only block (in case other processes' pages have to be swapped out before) this memory pool's thread and the acquisition could proceed without interruption.
This basically is a conceptual question. Yet, to be more specific:
It should only rely on portable features, like POSIX. Features from Qt's universe are fine, too.
The sensors can be interfaced in various ways. IP is one possibility. Usually the sensors are directly connected to the PC via local links (RS232, USB, extension cards and such). That is, fast enough.
The timestamps are mostly applied by the acquisition hardware itself if it is capable in doing so, to avoid jitter over network etc.
Thinking it over
Should I really worry? Apparently the problem diverts into three scenarios:
There is only little data collected at all. It can easily be buffered in one large pre-allocated buffer.
Data is collected slowly. Allocating the buffers on the fly is perfectly fine.
There is so much data acquired at high sample rates. Then allocation is not the problem because the buffer will eventually overflow anyway. The problem is rather how to transfer the data from the memory buffer to permanent storage fast enough.
A:
The idea for solving this type of problem can be as follows:
Separate the problem into 2 or more processes, depending on what you need to do with your data:
Acquirer
Analyzer (if you want to process data in real time)
Writer
Store data in a circular buffer in shared memory (I recommend using boost::interprocess).
The Acquirer will continuously read data from the device and store it in shared memory. In the meantime, once enough data has been read for doing any analysis, the Analyzer will start processing it. It can store results into another circular buffer in shared memory if needed. Also in the meantime, the Writer will read the data from shared memory (acquired or already processed) and store it in the output file.
You need to make sure all the processes are synchronized properly so that they do their jobs simultaneously and you don't lose data (the data is not overwritten before it is processed or saved into the output file).
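The answer's design is C++/Boost, but the core idea of the buffer can be sketched in a few lines. Below is a single-process Python illustration (the class name and API are invented for this sketch); the key point is that `put()` refuses to overwrite unconsumed samples, which models the synchronisation requirement that data must not be overwritten before it is processed or written out.

```python
class RingBuffer:
    """Single-producer/single-consumer circular buffer (illustrative sketch)."""

    def __init__(self, capacity):
        self._buf = [None] * capacity
        self._head = 0   # next write slot
        self._tail = 0   # next read slot
        self._size = 0

    def put(self, sample):
        # refuse to overwrite data the consumer has not drained yet
        if self._size == len(self._buf):
            raise OverflowError("buffer full: consumer is falling behind")
        self._buf[self._head] = sample
        self._head = (self._head + 1) % len(self._buf)
        self._size += 1

    def get(self):
        if self._size == 0:
            raise IndexError("buffer empty")
        sample = self._buf[self._tail]
        self._tail = (self._tail + 1) % len(self._buf)
        self._size -= 1
        return sample


rb = RingBuffer(4)
for sample in (1, 2, 3):
    rb.put(sample)
print(rb.get(), rb.get(), rb.get())  # 1 2 3
```

In the real multi-process version the head/tail indices would live in shared memory, guarded by an interprocess mutex and condition variable (boost::interprocess provides both).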
Q:
Recursion: sum digits of a number until there is a single digit left
How can I sum digits of a number in a recursive manner until there is only a single digit left?
Example: with the input 9234, the result would be 9 because 9 + 2 + 3 + 4 = 18 and then 1 + 8 = 9.
This is my code at the moment, but I want to keep summing until there is only a single digit left:
int getsum(int n) {
return n == 0 ? 0 : n % 10 + getsum(n/10);
}
A:
There are several possibilities, here is one of them:
public static int getSum(int n) {
int s = getSumHelper(n); // your original (private) method
while (s > 9)
s = getSumHelper(s);
return s;
}
EDIT: Your original code for completeness, because there seems to be confusion.
private static int getSumHelper(int n) {
return n == 0 ? 0 : n % 10 + getSumHelper(n/10);
}
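The same repeated-summing idea, sketched in Python for brevity (the question is Java; this is only an illustration). A handy cross-check is the classical digital-root identity `1 + (n - 1) % 9` for positive n:

```python
def digit_sum(n):
    # one pass: sum the decimal digits of n
    return 0 if n == 0 else n % 10 + digit_sum(n // 10)

def digital_root(n):
    # repeat until a single digit remains
    while n > 9:
        n = digit_sum(n)
    return n

print(digital_root(9234))  # 9 + 2 + 3 + 4 = 18, then 1 + 8 = 9
```

The closed form makes a good unit test for either the Java or the Python version.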
Q:
Term for the opposite of salami publishing
Question
Salami publishing refers to the practice of splitting scientific work into overly small pieces (least publishable units) and publishing a separate paper about each.
I am looking for a term for the opposite practice, i.e., lumping together a lot of or too much scientific work into one paper.
I prefer terms that can be understood (by a suitable audience) without further explanation. Essentially, I want something less cumbersome than opposite of salami publishing.
I have no strong preference regarding the tone of the term. E.g., it can be derogatory (but doesn’t need to be).
I am open to neologisms, but please consider the first point.
Background
This term would be useful for me to talk about the publishing culture in biology (or certain subfields thereof), where new relevant methods often do not get papers on their own, but are only published as the appendix to some paper that is primarily about findings achieved with that method.
A:
I call it "kitchen sink publishing." I apply the term to papers which contain redundant methods for determining the result.
https://en.wiktionary.org/wiki/everything_but_the_kitchen_sink
A:
Disclaimer: This answer is full of tongue-in-cheek neologisms ;-)
I would like to stay in the context of food and suggest
Gluttony publishing (more derogatory)
or
Banquet publishing (less derogatory).
You could even go as far as labeling the publication itself as banquet as in the following sentence.
He has just written a banquet paper of 40 pages. He could have easily salami sliced it into 5 papers. Doesn't he know that for most grants only the number of publications counts?
A slightly different term would be a buffet paper denoting a paper that is a collection of not necessarily tightly coupled topics where everyone can pick what he likes. It can also refer to a paper that is written to suit everybody.
A:
This depends on what you mean by "opposite". If you treat "salami publishing" as an extreme, then the more ordinary case (by far) is just "publishing". It needs no adjective.
However, if you mean the opposite extreme, consider the following. Sometimes a field will have a period of intense work with a large number of (possibly) relatively small results. After that period ends or at least the rate of advancement slows, someone may decide to "consolidate" what has been recently learned in a summative paper that will have many references and a new top-level view of the field as it is then known. "Consolidation" and "Unification" are good terms for that sort of publication. Such a publication is a great resource for new researchers in the field (say, new PhD students).
However, if you require a food metaphor, try paella. Of course it is best savored in Andalusia. And I guess that if you need an explanation about why this is a good metaphor you haven't tried to make (or eat) it.
Q:
Delphi SQL subselect error in MS-Access
I'm using Delphi 7 with ADO components, and MS Access 2003. The SQL sentence
SELECT CMCB.Name,
(SELECT SUM(amount) FROM movement MCB
WHERE MCB.movement_classification_id=CMCB.movement_classification_id
AND MCB.operation_date >= #01/01/2013#
AND MCB.operation_date < #01/01/2014#
) AS MyYear
FROM movement_classification CMCB
is working fine in the MS Access console, but through a Delphi application it throws the following error when I open the DataSet (TADOQuery):
Data provider or other service returned an E_FAIL status
Any idea why this happens? Is it related to the ADO component (TADOQuery in this case)?
I tried a similar query on the database dbdemos.mdb (Program Files\Common Files\Borland Shared\Data) and it works:
SELECT CustNo,
(SELECT SUM(AmountPaid) FROM orders O
WHERE O.CustNo = C.CustNo
AND O.SaleDate >= #01/01/1994#
AND O.SaleDate < #01/01/1995#
) AS AmountPaid
FROM customer C
The code I used in Delphi is the following:
procedure TForm1.Button1Click(Sender: TObject);
begin
ADOConnection1.Connected := False;
ADOConnection1.ConnectionString := 'Provider=Microsoft.Jet.OLEDB.4.0;User ID=Admin;Data Source=D:\Xiber\Delphi\StackOverflow\Subquerys\dbdemos.mdb';
ADOConnection1.Connected := True;
ADOQuery1.SQL.Text := 'SELECT CustNo, (SELECT SUM(AmountPaid) FROM orders O WHERE O.CustNo = C.CustNo AND O.SaleDate >= #01/01/1994# AND O.SaleDate < #01/01/1995#) AS AmountPaid FROM customer C';
ADOQuery1.Open;
end;
procedure TForm1.Button2Click(Sender: TObject);
var
sSQL: string;
begin
ADOConnection1.Connected := False;
ADOConnection1.ConnectionString := 'Provider=Microsoft.Jet.OLEDB.4.0;User ID=Admin;Data Source=D:\Xiber\Delphi\StackOverflow\Subquerys\XiGest-CASA.mdb';
ADOConnection1.Connected := True;
sSQL := ' SELECT CMCB.Name, ' +
' (SELECT SUM(amount) FROM movement MCB ' +
' WHERE MCB.movement_classification_id=CMCB.movement_classification_id ' +
' AND MCB.operation_date >= #01/01/2013# ' +
' AND MCB.operation_date < #01/01/2014# ' +
' ) AS MyYear ' +
' FROM movement_classification CMCB ';
ADOQuery1.SQL.Text := sSQL;
ADOQuery1.Open;
end;
A:
Finally I realised the difference between the two SUM fields was that in dbdemos.mdb the field AmountPaid is double and in my case it is decimal(8,2).
It seems to be an ADO bug. You can reproduce it yourself.
So, if you change the field AmountPaid in dbdemos.mdb (provided by Borland; you can find it at Program Files\Common Files\Borland Shared\Data) to decimal(8,2) and execute the query through Delphi 7 (with an ADOConnection and an ADOQuery), you'll get the error mentioned above.
SELECT CustNo,
(SELECT SUM(AmountPaid) FROM orders O
WHERE O.CustNo = C.CustNo
AND O.SaleDate >= #01/01/1994#
AND O.SaleDate < #01/01/1995#
) AS AmountPaid
FROM customer C
But if you execute this query inside MS Access, it works fine.
Q:
Show that $F(G(x))=G(F(x)).$
Let $V$ be a finitely dimensional linear space, and $F:V\rightarrow V$
and $G:V\rightarrow V$ linear transformations that are diagonalizable,
that is: there exists a base $\mathbf{e}$ for $V$ such that the matrix
to $F$ in the basis $\mathbf{e}$ is diagonal and a base $\mathbf{f}$
for $V$ such that the matrix to $G$ is diagonal.
i) Show that if $F$ and $G$ are simultaneously diagonalizable, that is if $\mathbf{e}=\mathbf{f},$ then it follows that
$$F(G(x))=G(F(x)), \quad \forall x\in V. \tag{1}$$
ii) Show that if $(1)$ holds, then there exists a basis $\mathbf{g}$ (possibly different from both $\mathbf{e}$ and $\mathbf{f}$) such that
the matrices to both $F$ and $G$ in the basis $\mathbf{g}$ are
diagonal.
Okay, I usually post good questions where I've done my own work before asking for help. But this one has thrown me off. I seriously don't even know how to begin or what I should prove.
This is not homework or an assignment; I'm just doing practice problems for my exam at the end of this month. Any tips/tricks/translation of the problem statement are welcome!
A:
For 1, if $e\in\mathbf e$, we have $Fe=\alpha_e e$, $Ge=\beta_e e$ for some coefficients $\alpha_e,\beta_e$. Then
$$
FGe=\beta_e Fe=\alpha_e\beta_e e=\alpha_e Ge=GFe.
$$
So $FG$ and $GF$ agree on all elements of a basis, and thus are equal.
For 2, write $\mathbf e=\{e_1,\ldots,e_n\}$. By hypothesis, $Fe_j=\alpha_j e_j$, $j=1,\ldots,n$. We have
$$
FGe_j=GFe_j=\alpha_jGe_j.
$$
So $Ge_j$ is an eigenvector for $F$ with eigenvalue $\alpha_j$. That is, the eigenspaces of $F$ are invariant for $G$. That is, if $E_1,\ldots,E_r$ are the eigenspaces of $F$ corresponding to distinct eigenvalues, $GE_j\subset E_j$. So we may consider $G$ as a linear transformation on $E_j$.
Now we want to show that $G$ is diagonalizable on $E_j$. The key fact is that "diagonalizable" is equivalent to the fact that the minimal polynomial has no repeated roots. So, since $G$ is diagonalizable, $q(G)=0$, where $q(t)=(t-\beta_1)\cdots(t-\beta_n)$. Now let $p_j$ be the minimal polynomial of $G|_{E_j}$. Since $q(G)=0$, we must have $p_j|q$, and so the roots of $p_j$ are simple. Thus, $G|_{E_j}$ is diagonalizable.
So, for each $E_j$ we have a basis $e_{1j},\ldots,e_{m_jj}$ of eigenvectors for $G$. As $e_{kj}\in E_j$, it is also an eigenvector for $F$. So if we let
$$
\mathbf g=\{e_{11},\ldots,e_{m_11},e_{21},\ldots,e_{m_22},\ldots,e_{r1},\ldots,e_{m_rr}\},
$$
we have a basis where its elements are both eigenvectors for $F$ and for $G$.
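A quick numeric sanity check of part (i), sketched in Python (illustrative only): if $F$ and $G$ are both diagonal in the same basis, their matrices commute, while a diagonalizable $G'$ with a different eigenbasis generally does not commute with $F$.

```python
def matmul(A, B):
    # naive n x n matrix product, enough for a sanity check
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# F and G share the eigenbasis (here: the standard basis),
# i.e. both are diagonal in it -- so they must commute.
F = [[2, 0], [0, 5]]
G = [[7, 0], [0, 3]]
print(matmul(F, G) == matmul(G, F))  # True

# A non-example: G2 is diagonalizable (distinct eigenvalues 1 and 2)
# but in a different basis, and indeed F*G2 != G2*F.
G2 = [[1, 1], [0, 2]]
print(matmul(F, G2) == matmul(G2, F))  # False
```

This is of course not a proof, just a concrete check of the statement with invented example matrices.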
Q:
How does one tell if a new month has started? php/mysql
I have the dates stored in a mySQL table from 2010 - 2040 in the following format:
2012-01-02
I SELECT each date from the table and display them in a while - mysql_fetch_assoc loop
while($row=mysql_fetch_array($result)){
echo $row["date"];
}
How can I check that a new month has started? I suspect I need to place the current month in a variable using explode()... but how can a comparison be made, since the variable will be overwritten in the while loop?
if(old month != new month){
do something..
}
thanks
A:
$old_month = '';
while($row=mysql_fetch_array($result)){
list($year, $month, $day) = explode('-', $row["date"]);
if($old_month != $month) {
$old_month = $month;
echo 'new month';
}
}
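The same "detect a month boundary while streaming sorted dates" idea, sketched in Python (the question is PHP; this is only an illustration, and the function name is invented). Tracking the previous (year, month) pair avoids the subtle bug of comparing the month alone, which would miss a January-to-January transition across years:

```python
def month_boundaries(dates):
    """Yield (date, is_new_month) for dates given as 'YYYY-MM-DD' strings."""
    prev = None
    for d in dates:
        year, month, _day = d.split("-")
        key = (year, month)       # compare year AND month, not month alone
        yield d, key != prev
        prev = key

dates = ["2012-01-30", "2012-01-31", "2012-02-01", "2013-02-01"]
for d, new in month_boundaries(dates):
    print(d, "new month" if new else "")
```

Note that the PHP answer above compares only `$month`, which works within a single year but would miss the boundary between, say, 2012-02 and 2013-02.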
Q:
How do I align these two shapes to form a tree?
I need to print a tree shape where the user inputs 4 different parameters: branch height, branch width, stem height and stem width. I have two shapes that form the top part and bottom part of the tree, but I can't seem to figure out how to put them together so that it looks like a tree. I figured that I need to calculate the width of the branches and deduct the stem width from that, but I'm not exactly sure. My output currently looks like this:
Any suggestions?
Enter height of the branches: 5
*
***
*****
*******
*********
Enter width of the stem: 5
*****
*****
*****
*****
*****
def pyramid(height):
for row in range(height):
for count in range(height - row):
print(end=" ")
for count in range(2 * row + 1):
print(end="*")
print()
def square(width):
for i in range(width):
for j in range(width):
print('*', end='')
print()
height = int(input("Enter height of the branches: "))
pyramid(height)
width = int(input("Enter width of the stem: "))
square(width)
A:
You are looking for str.center(width[, fillchar]):
def pyramid(height):
for row in range(height):
print(('*' * (2 * row + 1)).center((2 * height + 1)))
def square(width, height):
for i in range(width):
print(('*' * (width)).center((2 * height + 1)))
height = int(input("Enter height of the branches: "))
pyramid(height)
width = int(input("Enter width of the stem: "))
square(width, height)
Out:
C:\_\Python363-64\python.exe C:/Users/MrD/.PyCharm2018.2/config/scratches/scratch_75.py
Enter height of the branches: 5
*
***
*****
*******
*********
Enter width of the stem: 5
*****
*****
*****
*****
*****
Process finished with exit code 0
Q:
Convert json dict to row in pandas df
I have pulled JSON data from a URL. The result is a dictionary. How can I transform this dictionary so that each key is a column, and the timestamp is the index for each row, where the dict values correspond to the row entries each time the URL is called?
Here is the data:
with urllib.request.urlopen('https://api.blockchain.info/stats') as url:
block_data = json.loads(url.read().decode())
# Convert to Pandas
block_df = pd.DataFrame(block_data)
I tried:
block_df = pd.DataFrame(block_data)
block_df = pd.DataFrame(block_data, index = 'timestamp')
block_df = pd.DataFrame.from_dict(block_data)
block_df = pd.DataFrame.from_dict(block_data, orient = 'columns')
But all attempts give different errors:
ValueError: If using all scalar values, you must pass an index
and
TypeError: Index(...) must be called with a collection of some kind, 'timestamp' was passed
A:
Wrap the block_data in a list
pd.DataFrame([block_data]).set_index('timestamp')
blocks_size difficulty estimated_btc_sent estimated_transaction_volume_usd hash_rate market_price_usd miners_revenue_btc miners_revenue_usd minutes_between_blocks n_blocks_mined n_blocks_total n_btc_mined n_tx nextretarget total_btc_sent total_fees_btc totalbc trade_volume_btc trade_volume_usd
timestamp
1504121943000 167692649 888171856257 24674767461479 1.130867e+09 7.505715e+09 4583.09 2540 11645247.85 7.92 170 482689 212500000000 281222 483839 174598204968248 41591624963 1653361250000000 43508.93 1.994054e+08
With datetime index.
df = pd.DataFrame([block_data]).set_index('timestamp')
df.index = pd.to_datetime(df.index, unit='ms')
df
blocks_size difficulty estimated_btc_sent estimated_transaction_volume_usd hash_rate market_price_usd miners_revenue_btc miners_revenue_usd minutes_between_blocks n_blocks_mined n_blocks_total n_btc_mined n_tx nextretarget total_btc_sent total_fees_btc totalbc trade_volume_btc trade_volume_usd
timestamp
2017-08-30 19:39:03 167692649 888171856257 24674767461479 1.130867e+09 7.505715e+09 4583.09 2540 11645247.85 7.92 170 482689 212500000000 281222 483839 174598204968248 41591624963 1653361250000000 43508.93 1.994054e+08
Q:
Undefined symbols for architecture x86_64 on Mac OS
I am trying to install stunnel software on Mac OS 10.10 and I am getting the following error
Undefined symbols for architecture x86_64
while executing make command from terminal.
below are the detailed logs:
Making all in src
/Applications/Xcode.app/Contents/Developer/usr/bin/make all-am
/bin/sh ../libtool --tag=CC --mode=link gcc -g -O2 -D_THREAD_SAFE -pthread -Wall -Wextra -Wpedantic -Wformat=2 -Wconversion -Wno-long-long -Wno-deprecated-declarations -fstack-protector -fPIE -D_FORTIFY_SOURCE=2 -L/usr/local/openssl/lib64 -L/usr/local/openssl/lib -lssl -lcrypto -fPIE -pie -o stunnel stunnel-tls.o stunnel-str.o stunnel-file.o stunnel-client.o stunnel-log.o stunnel-options.o stunnel-protocol.o stunnel-network.o stunnel-resolver.o stunnel-ssl.o stunnel-ctx.o stunnel-verify.o stunnel-sthreads.o stunnel-fd.o stunnel-dhparam.o stunnel-cron.o stunnel-stunnel.o stunnel-pty.o stunnel-libwrap.o stunnel-ui_unix.o -lz
libtool: link: gcc -g -O2 -D_THREAD_SAFE -pthread -Wall -Wextra -Wpedantic -Wformat=2 -Wconversion -Wno-long-long -Wno-deprecated-declarations -fstack-protector -fPIE -D_FORTIFY_SOURCE=2 -fPIE -pie -o stunnel stunnel-tls.o stunnel-str.o stunnel-file.o stunnel-client.o stunnel-log.o stunnel-options.o stunnel-protocol.o stunnel-network.o stunnel-resolver.o stunnel-ssl.o stunnel-ctx.o stunnel-verify.o stunnel-sthreads.o stunnel-fd.o stunnel-dhparam.o stunnel-cron.o stunnel-stunnel.o stunnel-pty.o stunnel-libwrap.o stunnel-ui_unix.o -L/usr/local/openssl/lib64 -L/usr/local/openssl/lib -lssl -lcrypto -lz -pthread
clang: warning: argument unused during compilation: '-pthread'
clang: warning: argument unused during compilation: '-pie'
clang: warning: argument unused during compilation: '-pthread'
ld: warning: directory not found for option '-L/usr/local/openssl/lib64'
ld: warning: directory not found for option '-L/usr/local/openssl/lib'
Undefined symbols for architecture x86_64:
"_CRYPTO_THREADID_set_callback", referenced from:
_sthreads_init in stunnel-sthreads.o
"_CRYPTO_THREADID_set_numeric", referenced from:
_threadid_func in stunnel-sthreads.o
"_ERR_remove_thread_state", referenced from:
_client_run in stunnel-client.o
"_SSL_CTX_set_psk_client_callback", referenced from:
_context_init in stunnel-ctx.o
"_SSL_CTX_set_psk_server_callback", referenced from:
_context_init in stunnel-ctx.o
"_TLSv1_1_client_method", referenced from:
_parse_service_option in stunnel-options.o
"_TLSv1_1_server_method", referenced from:
_parse_service_option in stunnel-options.o
"_TLSv1_2_client_method", referenced from:
_parse_service_option in stunnel-options.o
"_TLSv1_2_server_method", referenced from:
_parse_service_option in stunnel-options.o
"_X509_STORE_get1_certs", referenced from:
_verify_callback in stunnel-verify.o
"_X509_check_email", referenced from:
_verify_callback in stunnel-verify.o
"_X509_check_host", referenced from:
_verify_callback in stunnel-verify.o
"_X509_check_ip_asc", referenced from:
_verify_callback in stunnel-verify.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [stunnel] Error 1
make[1]: *** [all] Error 2
make: *** [all-recursive] Error 1
A:
This problem occurs because the installable depends on OpenSSL and could not find the OpenSSL-related libraries.
There can be two possible reasons for this problem:
1. OpenSSL is not installed on the system. In this case, OpenSSL needs to be installed before installing stunnel.
2. OpenSSL is not installed in a standard path.
Q:
Chained fat arrow in mapDispatchToProps get me an error
Error: Actions must be plain objects. Use custom middleware for async actions.
I am trying to chain fat arrow functions in mapDispatchToProps, but it doesn't seem to work.
Container and Action creator :
// Actions Creator
const inputChange = (name: string) => (
e: React.FormEvent<HTMLInputElement>
) => ({
type: INPUT_CHANGE,
name,
value: e.currentTarget.value
});
// Dispatch
const mapDispatchToProps = {
inputChange
};
// Connect
export default connect(
mapStateToProps,
mapDispatchToProps
)(SignUp);
Dumb component
const SignUp = ({ inputChange }) => (
<input
type="password"
placeholder="Password"
onChange={inputChange('password')}
/>
);
Maybe some parts of this code seem a little strange because I removed some of my types so as not to add extra pointless code.
Anyway, the error comes from mapDispatchToProps. It seems to be OK with a single fat arrow, but when I start to chain them I get this error (even though they return an object).
A:
As the error says, actions must be plain objects, but when you chain the functions you are actually dispatching a function (the next one in the chain), not an object.
The problem here is that you want to pass an extra argument in addition to the actual DOM event, like the name of the input, e.g. "password", "user", etc.
Then why not just give the input a name attribute and grab it in the action creator (the same way you do with the value attribute)?
Your form can look something like this:
const Form = ({ inputChange, form }) => (
<div>
<input name="user" onChange={inputChange} type="text" value={form.user} />
<input name="password" onChange={inputChange} type="password" value={form.password} />
</div>
);
const mapState = state => ({
form: state
});
const mapDispatch = {
inputChange
};
const ConnectedForm = connect(
mapState,
mapDispatch
)(Form);
And inside your action creator:
const inputChange = ({ target }) => ({
type: INPUT_CHANGE,
payload: {
value: target.value,
inputName: target.name
}
});
and your reducer can handle it, something like this:
const reducer = (state = {user: '', password: ''}, action) => {
switch (action.type) {
case INPUT_CHANGE: {
const { inputName, value } = action.payload;
return {
...state,
[inputName]: value
};
}
default:
return state;
}
};
Running example:
// mimic imports
const { createStore } = Redux;
const { Provider, connect } = ReactRedux;
const INPUT_CHANGE = "INPUT_CHANGE";
const reducer = (state = {user: '', password: ''}, action) => {
switch (action.type) {
case INPUT_CHANGE: {
const { inputName, value } = action.payload;
const nextState = {
...state,
[inputName]: value
};
console.clear();
console.log('store', nextState);
return nextState;
}
default:
return state;
}
};
const inputChange = ({ target }) => ({
type: INPUT_CHANGE,
payload: {
value: target.value,
inputName: target.name
}
});
const store = createStore(reducer);
const Form = ({ inputChange, form }) => (
<div>
<input name="user" onChange={inputChange} type="text" value={form.user} />
<input name="password" onChange={inputChange} type="password" value={form.password} />
</div>
);
const mapState = state => ({
form: state
});
const mapDispatch = {
inputChange
};
const ConnectedForm = connect(
mapState,
mapDispatch
)(Form);
class App extends React.Component {
render() {
return (
<div>
<ConnectedForm />
</div>
);
}
}
const root = (
<Provider store={store}>
<App />
</Provider>
);
const rootElement = document.getElementById("root");
ReactDOM.render(root, rootElement);
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/16.6.3/umd/react.production.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/16.6.3/umd/react-dom.production.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/redux/4.0.1/redux.min.js
"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react-redux/4.4.10/react-redux.min.js"></script>
<div id="root"/>
Q:
How is it okay for sendmail to send emails from any domain?
I just tried my hand at the sendmail function as documented in Mail::Sendmail.
I saw that I was able to send mail with a userid from any domain as long as I have an SMTP server running on localhost. How is this okay? Or am I missing something?
For instance, I was able to deliver emails with a from address such as <myname>@microsoft.com, and they arrived in my Gmail inbox just the same. They did not even land in any junk folder.
A:
Congrats: you've just discovered email spoofing! :)
SMTP does not perform authentication of the sort you imply that it should, e.g. verifying that someone is authorized to send mail from a certain domain -- so anyone with a machine who knows how to run sendmail can do this.
Most anti-spoofing measures rely on the owner of a domain (e.g. microsoft.com) doing something which amounts to authenticating whether a message is really from them. For example, they may publish a list of every server that normally sends mail on the domain's behalf; that's roughly what the Sender Policy Framework (SPF) does.
If the recipient's server gets a message purporting to be from microsoft.com, it can check to see if that domain lists the server that sent the message. If it doesn't, it will likely increase the probability that it's rated as spam.
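To make the SPF idea concrete, here is a minimal, hedged sketch (Python, purely illustrative) that parses the mechanisms out of an SPF TXT record string. A real receiver would fetch this record via DNS for the sender's domain and evaluate the connecting IP against the listed mechanisms:

```python
def parse_spf(record):
    """Split an SPF TXT record into (qualifier, mechanism) pairs.

    Illustrative only -- real SPF evaluation (RFC 7208) also handles
    macros, redirect modifiers, recursive DNS lookups and lookup limits.
    """
    parts = record.split()
    if not parts or parts[0].lower() != "v=spf1":
        raise ValueError("not an SPF record")
    result = []
    for term in parts[1:]:
        qualifier = "+"  # default qualifier: pass
        if term[0] in "+-~?":
            qualifier, term = term[0], term[1:]
        result.append((qualifier, term))
    return result

print(parse_spf("v=spf1 ip4:192.0.2.0/24 include:_spf.example.com ~all"))
```

The `~all` at the end ("softfail everything else") is why a spoofed message often lands in spam rather than being rejected outright.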
Q:
What tools can be used to find which DLLs are referenced?
This is an antique problem with VB6 DLL and COM objects but I still face it day to day. What tools or procedures can be used to see which DLL file or version another DLL is referencing?
I am referring to compiled DLLs at runtime, not from within VB6 IDE.
It's DLL hell.
A:
Dependency Walker shows you all the files that a DLL links to (or is trying to link to) and it's free.
A:
ProcessExplorer shows you all the DLLs that are currently loaded in a process at a particular moment. This gives you another angle on Dependency Walker which I believe does a static scan and can miss some DLLs that are dynamically loaded on demand. Raymond says that's unavoidable.
Q:
CSS in App_Theme folder gets Cached in Browser
The stylesheet in the App_Theme folder gets cached in the browser. What should the approach be, so that whenever there is a new deployment the browser takes the latest stylesheets and not the ones cached in the browser?
This was happening for other CSS files (which are not in the theme folder) too, so I used a custom control as mentioned in this link:
http://blog.sallarp.com/asp-net-automatic-css-javascript-versioning/
How this could be done for the CSS in the Theme folder?
Edit: The theme name is mentioned in the web.config as shown below, so it's not just the HTML link tag, which I had already addressed using the method mentioned in the link.
<pages styleSheetTheme="Default">
<controls>
</controls>
</pages>
A:
I too have come across this, and the solution I came up with is to add a version to your CSS filename. Not pretty, but without disabling caching on IIS I could think of no other way:
Rename the CSS file to, say, mycss-V1.0.css, which will force your users' web browsers to reload the CSS.
A:
When deploying the web application, include the version number in the themes path. For example, App_Themes/Default/v1.2.0.4321/, where v1.2.0.4321 is the folder added at deployment for version 1.2.0.4321. This preserves both the theme name (e.g., "Default") and the file names, which makes source code control and path references much easier. ASP.NET loads all of the CSS files in the current theme folder regardless of subfolders. This not only solves the problem referencing CSS files, but images that are referenced from within the CSS files (e.g., background-image).
Additionally, the browser cache duration for App_Themes may be increased to improve performance while ensuring that the next time the web application is updated, all the theme files will be updated.
Add this to the <configuration> section of the Web.Config to have the browsers cache for 120 days.
<location path="App_Themes">
<system.webServer>
<staticContent>
<clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="120.00:00:00" />
</staticContent>
</system.webServer>
</location>
Q:
What to do with your answer when no one, including OP, is interested in the post
Example: Link Removed, sorry
Sometimes I go ahead post an answer to a question and then wait endlessly for OP to show any interest in the post itself, let alone my answer.
If there are other answers, one can say that the OP might be looking at those answers and that is absolutely fine, but I am talking about situations where that answer is the only answer. No comments/votes whatsoever by the OP or anyone for that matter.
I feel like I wasted time on such answers and sometimes I feel like deleting those answers because no one is interested in that question/answer post.
But I don't want to just go ahead and delete stuff after a couple of days with no reactions, because this site does not focus on short-term problem solving for the OP, and someone can come along in the future and learn something from that post.
So is it fine if I go ahead and delete such answers (mine)? Shouldn't the OP usually be willing to spend a few seconds of their time to take part in activity related to their question when others have spent some of their time answering what they asked?
Please don't go and start voting on the answer thinking I brought it up because I want some votes; that's not what I mean.
Edit
I just removed the link because I don't feel all those up-votes that the answer is getting are well deserved. That post is getting all the attention just because of this question and I sort of feel bad about getting up-votes this way.
A:
Don't delete it - it could (and probably will) be useful to someone else. Many new users aren't in the habit of up-voting or accepting answers once their problem is solved. It might take more time before someone else googles that and your answer starts showing up in results. I also have such answers, but after some time it seems they've been helpful to someone else, as they start gaining up-votes.
PS: Great answer to the timezone question BTW :)
A:
No, you should never delete an answer solely for the reason of it not getting accepted or up-voted. The fact that nobody has taken the time to give feedback on your answer does not mean it's not (been) useful to anyone.
Your answer might help future readers with the same problem. If you had deleted your answer, they would have to ask the same question again. So, by deleting your answer you're basically slamming the door in the face of people having the same issue. Just leave it there.
The only legitimate reason (IMHO) to delete an answer is if you made a mistake in it (like misinterpreting the question the first time you read it and giving the wrong solution) or if you overlooked something potentially harmful or of no added value to the question asked. For example, if your answer uses a deprecated function that is strongly discouraged, you could delete that answer (although it would be better yet to improve it to use the proper code).
SO is a Q&A community whose archive is more valuable than any separate question/answer. By keeping answers out there, you contribute to the quality of that archive.
A:
I don't think you should delete it - we've all been there, and it's all too easy to have that initial reaction but the point to remember is that this is a community and that answer could be silently helping others in a big way.
If the OP has a low rep, or is a new member - it might be worth putting a comment with a reminder "Don't forget to accept the answer that helped you" etc...
This type of comment isn't a plea for votes or rep, this is simply a gentle nudge to encourage that person to become an active part of the community.
| {
"pile_set_name": "StackExchange"
} |
Q:
Specific modular multiplication algorithm
I have 3 large 64 bit numbers: A, B and C. I want to compute:
(A x B) mod C
considering my registers are 64 bits, i.e. writing a * b actually yields (A x B) mod 2⁶⁴.
What is the best way to do it? I am coding in C, but don't think the language is relevant in this case.
After getting upvotes on the comment pointing to this solution:
(a * b) % c == ((a % c) * (b % c)) % c
let me be specific: this isn't a solution, because ((a % c) * (b % c)) may still be bigger than 2⁶⁴, and the register would still overflow and give me the wrong answer. I would have:
(((A mod C) x (B mod C)) mod 2⁶⁴) mod C
A:
As I have pointed in comment, Karatsuba's algorithm might help. But there's still a problem, which requires a separate solution.
Assume
A = (A1 << 32) + A2
B = (B1 << 32) + B2.
When we multiply those we get:
A * B = ((A1 * B1) << 64) + ((A1 * B2 + A2 * B1) << 32) + A2 * B2.
So we have 3 numbers we want to sum; one of them is definitely larger than 2^64 and another could be.
But it could be solved!
Instead of shifting by 64 bits once we can split it into smaller shifts and do modulo operation each time we shift. The result will be the same.
This will still be a problem if C itself is larger than 2^63, but I think it could be solved even in that case.
| {
"pile_set_name": "StackExchange"
} |
Q:
How soon should I "vote to close" a question?
I am reproducing this post from MSE here, because I think it is a valuable and relevant topic. Of course we can make our own policies and have our own values here, but I do think this is worth posting:
Since it's currently impossible to delete a "vote to close",* when I see a poorly asked question, should I immediately vote to close, or should I comment, and give the OP a chance to improve his question?
Waiting increases the chance that I'll forget, and never vote to close, potentially leading to a cluttered site.
Voting to close immediately increases the chance that the question will be closed, and the author will be forced to re-post his (hopefully improved) question, which leads to a cluttered site. :)
*It is now possible to retract a close vote, but that doesn't change the fundamental nature of this question.
I'd also like to propose adding this to the FAQ (although we don't really have a FAQ here yet, perhaps starting one is in order; it'd be a good place to put all the e.g. "if your CEL is on, give us your engine codes" stuff, too).
A:
My approach (and hopefully what I usually do) with the "bad" questions is to start with an encouraging comment that hopefully elicits an edit or clarification from the OP. Sometimes, when it seems like the OP is not a native English writer, I'll go so far as to try to reshape the question into a more fluent version. That makes me a bit nervous, as it treads a line of being paternalistic / patronizing.
A:
On Meta or SO, having closeable questions closed is no big deal. They get plenty of traffic, and it does tend to clutter up the site. Those two are full-fledged sites that don't need more people or more questions. I get that you would want to close bad questions immediately there.
Here it's a different story. While I know we are not anywhere close to being closed (in fact, we are very close to becoming a full-fledged or graduated site), we still aren't there yet. We need all of the questions & answers we can get. To that end, getting questions from people in whatever form isn't necessarily a bad thing, and housing the bad questions is not terrible if we can get the owners to come back and fill in the blanks. If we can give them a little time, if the question is salvageable, if there is actually something there, what is the harm in leaving it open?
There is also the long-standing Stack Exchange idea of being nice. If the first thing that happened when you came onto a site was having your question closed, mainly because you don't know how to utilize the site, I'm sure you'd feel like people are not being nice. In fact, it might seem quite rude. Instead, if we can get people to understand how to ask questions and flesh out their needs, we may have someone who produces good questions for life. That is far more important than shutting down a bad question. Regardless of some people's ideas of how we deal with things, most are relatively nice on this site. We need to direct people to the newbie thread here on Meta. This will ensure they have a clue how to write questions, understand what's expected of them, and know that we, the normal people on this site, are not a bunch of a-holes.
We were all new once ... even I, the esteemed (cough, cough). Give people a chance to understand. There is always time to close down bad questions, though it may be a little bit more difficult finding them after they've sat for a few days. Let's help others create better questions and not shut them down at the outset of just getting here.
| {
"pile_set_name": "StackExchange"
} |
Q:
Optimization of sum $f(x_i, x_j)$ for all $i < j$ pairs through permutation only?
$\DeclareMathOperator*{\argmin}{arg\,min}$Can something be said about the difficulty of minimizing the quantity
$$g(x) = \sum_{i=1}^n\sum_{j=i+1}^n f(x_i, x_j)$$
of some string of symbols $x \in \Sigma^n$ solely through permuting $x$? That is, finding
$$\underset{\sigma\in S_n}{\argmin}\ g(\sigma(x))$$
$f: \Sigma \times \Sigma \to \mathbb{R}$ is here a black-box function with no other properties.
A:
Your problem is at least as hard as the NP-hard problem called Minimum Feedback Arc Set.
Consider a directed graph and set $f(u,v)=1$ if it contains an arc from $u$ to $v$, and $0$ otherwise. Then your problem corresponds to finding a minimum feedback arc set: a minimum set of arcs whose removal yields a directed acyclic graph (DAG). From such a DAG, one can obtain the desired permutation using a (reversed) topological sort, and the corresponding value of $g$ will be the size of the feedback arc set.
Moreover, the problem of deciding whether there is a permutation $\sigma$ with $g(\sigma(x))\leq k$ is in NP: just guess the permutation and compute $g$.
| {
"pile_set_name": "StackExchange"
} |
Q:
RecyclerView not showing under TextView for ConstraintLayout
I have this layout and I used ConstraintLayout. The code is below
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
android:id="@+id/main_content"
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".ui.main.MainActivity">
<TextView
android:id="@+id/txtVwCount"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:text="the count is: 5"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toTopOf="parent"
/>
<view
android:id="@+id/rclrVw"
class="androidx.recyclerview.widget.RecyclerView"
android:layout_width="match_parent"
android:layout_height="match_parent"
app:layout_constraintTop_toBottomOf="@id/txtVwCount"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
/>
<com.google.android.material.floatingactionbutton.FloatingActionButton
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:src="@drawable/ic_plus"
android:layout_margin="16dp"
app:layout_anchor="@id/rclrVw"
android:clickable="true"
android:onClick="addNewTodo"
app:layout_constraintRight_toRightOf="parent"
app:layout_constraintBottom_toBottomOf="parent"/>
</androidx.constraintlayout.widget.ConstraintLayout>
Screenshot:
Isn't it supposed to show the TextView at the top, with the RecyclerView underneath it? Why are they overlapping?
A:
You need to set the android:layout_height of the RecyclerView to 0dp (also known as ConstraintLayout.MATCH_CONSTRAINT). It's currently match_parent, so it's filling the entire parent size.
| {
"pile_set_name": "StackExchange"
} |
Q:
Is there a way to make components (not elements) draggable in angular?
I'm using the Angular CDK drag and drop module. It works well on HTML elements like div, p and such, but for some reason when I put a cdkDrag directive on a component, it doesn't work.
<!-- WORKS -->
<div cdkDrag>content</div>
<!-- DOESNT WORK -->
<my-component cdkDrag></my-component>
Another thing I noticed is that every component in Angular has width and height set to auto (basically 0x0), unless I edit the CSS and put display: block on the component style.
A:
A component is a custom tag. Within a browser this is treated as an 'unknown' tag, and made to have the default display of inline. This will also cause the dimensions to be 0x0 if you add block elements in there.
To overcome this, you should make it display: block or inline-block or flex (or whatever suits you) to make it also draggable. You can make a global class if this doesn't break the layout of the rest of your draggables:
.cdkDrag {
display: inline-block;
}
| {
"pile_set_name": "StackExchange"
} |
Q:
powershell script to add text to existing aspx file
I'm trying to find resources to help me write a powershell script to add text to an existing aspx file but can't seem to find anything. Does anybody have any suggestions or is there anybody that can help me some other way?
Update: So it's out there, I'm using version 1.0.
I have:
$lines = Get-Content foo.aspx
$lines = "<head>asfdsfsafd</head>" + $lines
$lines | Out-File "c:\documents and settings\...\foo.aspx" -Encoding utf8
but it wipes out everything originally in foo.aspx and creates <head>asfdsfsafd</head><head>asfdsfsafd</head>. How do I fix it so it keeps the original stuff in foo and adds stuff to the beginning of the file rather than the end?
Update:
I've figured out how to add text with:
$lines = add-content -path "C:\Documents and Settings\..\foo.aspx" -value "Warning...."
but want the text to go at the beginning of the aspx file
Update: I found a function that does what I want and I'm all set.
A:
You can get the content (array of strings) of the ASPX file using Get-Content:
$lines = Get-Content foo.aspx
Or you can get the content as a single string which is sometimes more useful if you want to use a regex that spans lines:
$content = Get-Content foo.aspx -raw
As far as changing the content, you have all sorts of options:
$content = "text before " + $content
$content += "text after"
$content = $content -replace 'regex pattern','replacement text'
And then to write back out to the file:
$content | Out-File foo.aspx -Encoding <UTF8 or ASCII or UNICODE>
| {
"pile_set_name": "StackExchange"
} |
Q:
ZFS System on Production
Do you know/have any system which works on ZFS, such as an RDBMS? If yes, what is your setup? FreeBSD, ZFS on Linux, etc.
A:
Yes, ZFS can support production workloads, such as hosting virtualization systems, running an Oracle database or general NAS or block-level storage presentation.
What are you interested in doing?
Edit:
This depends on your implementation, but ZFS on Solaris, NexentaStor, OpenIndiana, FreeBSD and even on Linux, have been extremely stable and solid for me so far.
Some other user experiences here: ZFS Data Loss Scenarios
| {
"pile_set_name": "StackExchange"
} |
Q:
PHP Variable in Select Statement
I've written this PHP script, which is working, and now I want to change the column name into a variable too (not sure if "column" is the correct term); I mean the "name" from the SELECT name...
I've tried nearly everything, but nothing gave me the right result.
I know that the usual way of using variables in a statement, like ("'. $var .'"), won't work.
<?php
require_once 'config.php';
$id = $_GET["id"]; //ID OF THE CURRENT CONTACT
$user = $_GET["user"]; //ID OF THE CURRENT USERS
$query = mysql_query("SELECT name FROM contacts WHERE contact_id='". mysql_real_escape_string( $id ) ."' and user_id='1';");
$retval = mysql_fetch_object($query)->name;
$retval = trim($retval);
echo $retval;
?>
A:
This is much easier isn't it?
$sql_insert =
"INSERT INTO customers (
`name`,
`address`,
`email`,
`phone`
)
VALUES (
'$name',
'$address',
'$email',
'$phone'
)";
A:
Is this what you're looking for? Even your question in German isn't that clear to me:
$field = 'name';
$query = mysql_query("SELECT $field FROM contacts WHERE contact_id='". mysql_real_escape_string( $id ) ."' and user_id='1';");
$retval = mysql_fetch_object($query)->$field;
A:
You can use it something like this. Currently I assume you get only one row back and want to use only one field.
<?php
require_once 'config.php';
$id = $_GET["id"]; //ID OF THE CURRENT CONTACT
$user = $_GET["user"]; //ID OF THE CURRENT USER
//Use the variable inside backticks `` and escape it just in case; depends on how you get the variable
$query = mysql_query("SELECT `".mysql_real_escape_string($variable)."` FROM contacts WHERE contact_id='". mysql_real_escape_string( $id ) ."' and user_id='1';");
if (!$query) {
echo 'Could not run query: ' . mysql_error();
exit;
}
$row = mysql_fetch_row($query); //Retrieve first row; with multiple rows use mysql_fetch_assoc
$retval = $row[0]; //Retrieve first field
$retval = trim($retval);
echo $retval;
?>
| {
"pile_set_name": "StackExchange"
} |
Q:
Pulling information from one row cursor - Android SQLite
I have this issue where I am trying to fill a String[] with stuff, but the way I am doing it is not working. My app so far takes XML data, parses it, shoves it into separate tables, and displays the items in a list. I'm working on the detailed view, and this is where I am stuck. I keep getting a NullPointerException because, even though the String[] has the correct number of 'slots' (I've checked this), they are null.
This is the function:
public String[] getDetails(String id, int param){
Cursor cursor = getDetailsCursor(id, param);
String[] details = new String[cursor.getColumnCount()];
int itty=0;
if (cursor.moveToFirst()){
do{
details[itty] = cursor.getString(itty);
itty++;
}while(cursor.moveToPosition(itty));
}
cursor.close();
return details;
}
Before you ask: it is the right cursor. From the information that sometimes comes through and the column count, I know it is the right cursor.
I've ask you guys questions before and you seem to know what it is pretty much instantly. This would probably be the last question I have for this little project.
Update
public String[] getDetails(String id, int param){
Cursor cursor = getDetailsCursor(id, param);
String[] details = new String[cursor.getColumnCount()] , names = new String[cursor.getColumnCount()];
int i=0;
cursor.moveToFirst();
names = cursor.getColumnNames();
cursor.moveToFirst();
for(i=0;i < names.length;i++){
details[i] = cursor.getString(cursor.getColumnIndex(names[i]));
}
cursor.close();
return details;
}
This is the code I currently have. I am going to try your solutions and see if they have the same result.
A:
The Cursor moveTo... methods change which row you're dealing with, not the column. Passing an int to a call on getString(...) specifies the column number. You basically need to do something like this...
if (cursor.moveToFirst()) {
for (int itty = 0; itty < cursor.getColumnCount(); itty++) {
details[itty] = cursor.getString(itty);
}
}
| {
"pile_set_name": "StackExchange"
} |
Q:
Make equilateral triangle from two hexagons
Given two congruent regular hexagons, we should partition them into a total of $n$ pieces. What is the smallest value of $n$ so that the $n$ pieces together can be formed into an equilateral triangle?
If we start with only one hexagon, it is possible to use five pieces. But we can't combine an equilateral triangle and a hexagon, or two equilateral triangles together. In addition, from a regular hexagon we can make two equilateral triangles by cutting segments $AC,CE,EA$ if the hexagon is $ABCDEF$.
A:
Six pieces: the green hexagon is cut into five pieces and the red one is a single piece:
| {
"pile_set_name": "StackExchange"
} |
Q:
Does IE fail when removing tag attribs with jquery?
I'm using a different box for the title attribute on elements. I've developed some simple jQuery to do it, but... the "best friend ever", IE, doesn't work correctly. It simply doesn't remove the title attribute as the other browsers do. The result: I have the box showing the title attribute and the browser's own box over it. How can I resolve this? (code next).
NOTE: works on Chrome, Safari, Firefox, Opera. But IE doesn't.
<title>Box</title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<style type="text/css">
#dbox {
background: #003767;
padding: 5px 8px;
position: absolute;
margin: 20px;
color: white;
font-size: 14px;
display: inline;
/* border radius specif browsers */
-webkit-border-radius: 3px; /* safari, chrome */
-khtml-border-radius: 3px; /* ktml browsers */
-moz-border-radius: 3px; /* firefox, mozila found. */
-o-border-radius: 3px; /* opera, opmob */
}
</style>
<script type="text/javascript" src="js/jquery-1.5.2.min.js"></script>
<script type="text/javascript">
var dbox_time = 120;
var delay_time = 500;
$(document).ready(function(){
$('<div id="dbox"></div>').insertAfter(('body'));
$("[title != '']").each(function() {
$(this).addClass('lkBox');
$(this).css({border: '1px solid red'});
});
$('#dbox').empty();
$('#dbox').css({
display: 'none',
opacity: '0'
})
$(document).mousemove(function(e){
$('#dbox').css({
'left': e.pageX,
'top' : e.pageY
})
})
$('.lkBox').mouseover(function(){
$('#dbox').text($(this).attr('title'));
$('#dbox').css({display: 'block'})
.stop().animate({opacity: '1'}, dbox_time)
})
$('.lkBox').mouseout(function(){
$(this).attr('title', $('#dbox').text());
$('#dbox').css({display: 'none'})
.stop().animate({
opacity: '0'
}, dbox_time)
})
})
</script>
Body:
<div style="float: left; width: 70%; padding: 50px;">
<h1>Examples: (mouse over the links)</h1>
<p>Curabitur lacus tortor, pellentesque eget <a href="#">interdum in</a>, auctor et lorem. In in quam lorem, vel <a href="#" title="i´m am a title =). must show">sagittis lec</a>. Donec felis leo, id fermentum nibh porttitor. Vestibulum ante <a href="#" title="">empy title (dont need to work)<span></span></a> primis. Lorem ipsum dolor sit amet, <a href="#" title="another title to show">lorem ipsum</a> elit.</p>
</div>
INFO: if an alert is placed after
$(this).attr('title', $('#dbox').text());
it works on IE, but I can't use an alert.
A:
I tested your code in FF4 and I'm getting the same double title tooltip behavior you are in IE (your tooltip pops up and then the built-in FF one pops up about a second later after continuing to hover over the link). From the code you've posted this is absolutely expected behavior.
What you need to do is either leverage a different attribute (ie: use 'dboxtitle' rather than 'title') or you're going to need to dynamically remove and re-add the title on mouseover/mouseout so that the browser doesn't see it (ref: Disabling browser tooltips on links and <abbr>s).
| {
"pile_set_name": "StackExchange"
} |
Q:
asp.net core web api published in IIS after moved to different IIS server pc gives error 500.19 (0x8007000d)
I developed a web API in ASP.NET Core version 1.0.1 using Visual Studio 2015. When I published the web API to IIS 10 on the same PC where it was developed, everything works correctly. The problem arises when I copy and paste the publication folder of the web API to a different PC: the browser shows the error 500.19 Internal Server Error, error code 0x8007000d, "The requested page cannot be accessed because the related configuration data for the page is invalid", which points to some problem in the web.config.
I do not think the version of IIS is the problem, because moving from IIS 10 to IIS 8 or from IIS 8 to IIS 10 gives the same error, and the same happens between two PCs with IIS 10.
I have already reviewed several related issues, like The element 'system.webServer' has invalid child element 'aspNetCore', and others related to the web.config file, where it seems the error is found. The web.config file in the development environment is:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<!--
Configure your application settings in appsettings.json. Learn more at http://go.microsoft.com/fwlink/?LinkId=786380
-->
<system.webServer>
<handlers>
<add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified"/>
</handlers>
<aspNetCore processPath="%LAUNCHER_PATH%" arguments="%LAUNCHER_ARGS%" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" forwardWindowsAuthToken="false"/>
</system.webServer>
</configuration>
After publish the web api, the web.config file looks like:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<!--
Configure your application settings in appsettings.json. Learn more at http://go.microsoft.com/fwlink/?LinkId=786380
-->
<system.webServer>
<handlers>
<add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified" />
</handlers>
<aspNetCore processPath="dotnet" arguments=".\buildingSecureWebApi.dll" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" forwardWindowsAuthToken="false" />
</system.webServer>
</configuration>
This web.config file has the same content no matter which computer it was published on.
Any idea what the solution to my problem may be? I need to deploy the web API on any version of Windows, and until now it only works correctly on the PC where it was developed.
A:
To get a more detailed error message:
Verify that the log directory exists at the path referenced by the web config. If it does not, create it. The path shown in your config would place the "logs" directory in the root folder of the deployed site.
Verify that the application pool has write access to the logs directory and
Verify that `stdoutLogEnabled="true"`.
If you have verified all 3 of these items, then you should get log entries that will contain a more detailed description of the error
What could cause a "500.19 Internal Server Error, error code 0x8007000d" error?
.NET Core hosting bundle is not installed on the web server where the site is deployed. To remedy this, obtain the .NET Core Windows Server Hosting bundle download
You may also want to verify that the path to the dotnet executable exists in the deployment machine's environment variables. To check this, first find the path where dotnet.exe is installed. It is generally located in either C:\Program Files\dotnet or C:\Program Files (x86)\dotnet. Once you know the path, ensure that the path exists in your Environment Variables.
Control Panel > System > Advanced System Settings > Environment Variables. highlight "Path", click 'Edit' and verify that the path to the dotnet folder is present. If it isn't, add it. Do the same for both the User variables and System variables. Restart your machine and try again.
| {
"pile_set_name": "StackExchange"
} |
Q:
Trying to compile YouCompleteMe with mingw-64 and clang support on Windows 7
I have tried many different configuration options, I've built llvm/clang with windows and with mingw-64, but no matter what I set I am always stopped here. Since there isn't official support, the only help is the wiki documentation that hasn't been updated in a long time.
Has anyone gotten this to work?
C:\mingw64\bin\g++.exe -shared -o C:\Users\Daddy007\vimfiles\bundle\YouCompl
eteMe\third_party\ycmd\ycm_core.pyd -Wl,--out-implib,libycm_core.dll.a -Wl,--maj
or-image-version,0,--minor-image-version,0 -Wl,--whole-archive CMakeFiles\ycm_co
re.dir/objects.a -Wl,--no-whole-archive ..\BoostParts\libBoostParts.a C:\Python2
7\libs\libpython27.a -lkernel32 -luser32 -lgdi32 -lwinspool -lshell32 -lole32 -l
oleaut32 -luuid -lcomdlg32 -ladvapi32
CMakeFiles\ycm_core.dir/objects.a(ClangCompleter.cpp.obj):ClangCompleter.cpp:(.t
ext+0x328): undefined reference to `clang_createIndex'
CMakeFiles\ycm_core.dir/objects.a(ClangCompleter.cpp.obj):ClangCompleter.cpp:(.t
ext+0x353): undefined reference to `clang_toggleCrashRecovery'
CMakeFiles\ycm_core.dir/objects.a(ClangCompleter.cpp.obj):ClangCompleter.cpp:(.t
ext+0x3ea): undefined reference to `clang_disposeIndex'
c:/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/4.8.1/../../../../x86_64-w64-mingw3
2/bin/ld.exe: CMakeFiles\ycm_core.dir/objects.a(ClangCompleter.cpp.obj): bad rel
oc address 0x0 in section `.data'
collect2.exe: error: ld returned 1 exit status
A:
Steps that worked for me were the following.
Make sure you use either 32-bit or 64-bit for all the steps, but never mix them. In the instructions there will be some paths that depend on your installation. Make sure you adapt them and don't just try to copy-paste.
Get GVim (built against Python; you can check this in the version output, there has to be an entry +python/dyn)
(For always up to date builds, I can recommend: https://tuxproject.de/projects/vim/)
Get the mingw-w64 toolchain.
Because of your question I am not exactly sure what version you got, but mingw-w64 is in my point of view one of the better toolchains available.
Online installer available here (mingw-w64-install.exe):
http://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win32/Personal%20Builds/mingw-builds/installer/
My versions used during building YCM:
(Once you launch the installer you will see what these names mean.)
x86_64-5.2.0-posix-seh-rt_v4-rev0
x86_64-5.3.0-posix-seh-rt_v4-rev0
Get cmake:
https://cmake.org/
Get Python 2.7.x
https://www.python.org/downloads/
The best would be to take 2.7.10, because 2.7.11 works but needs a fix in the registry because of https://bugs.python.org/issue25824
As you already compiled LLVM/Clang you may skip this step. If you use above mentioned toolchain, rebuild it.
(This is required to get libclang.dll for semantic support.)
Get LLVM/Clang sources:
http://llvm.org/docs/GettingStarted.html
(I had to define M_PI in llvm\lib\Target\AMDGPU\SIISelLowering.cpp; everything else was pretty straightforward)
Get YouCompleteMe sources
git clone https://github.com/Valloric/YouCompleteMe
cd YouCompleteme
git submodule update --init --recursive
Generate libpython27.a
C:\Python27\libs\libpython27.a was missing, so I had to create this. If you have it, you may still want to create this just to be sure.
In your toolchain's ...\mingw-w64\x86_64-5.2.0-posix-seh-rt_v4-rev0\mingw64\bin folder there should be gendef and dlltool.
Go to your python27.dll and run (from command line):
gendef python27.dll
dlltool --dllname python27.dll --def python27.def --output-lib libpython27.a
Make sure ...\mingw-w64\x86_64-5.2.0-posix-seh-rt_v4-rev0\mingw64\bin is added to PATH environment variable, to save you some hassle.
Launch cmake-gui and configure
Generator will be: MinGW Makefiles
Where is the source code: .../YouCompleteMe/third_party/ycmd/cpp
Where to build the binaries: .../build
Uncheck BUILD_SHARED_LIBS
CMAKE_BUILD_TYPE: Release
Make sure every path concerning the toolchain is correct.
(Paths to ld.exe, g++.exe, mingw32-make, objcopy, ...)
Where to put the built files:
CMAKE_INSTALL_PREFIX: wherever you want, you will not find the necessary files there :).
The next variables depend on your installation of Clang.
EXTERNAL_LIBCLANG_PATH: point to the libclang.dll you built earlier with the same toolchain
(.../mingw-w64/x86_64-5.2.0-posix-seh-rt_v4-rev0/mingw64/bin/libclang.dll)
PATH_TO_LLVM_ROOT: .../mingw-w64/x86_64-5.2.0-posix-seh-rt_v4-rev0/mingw64
PYTHON_EXECUTABLE: C:/python27/python.exe
PYTHON_INCLUDE_DIR: C:/python27/include
PYTHON_LIBRARY: C:/python27/libs/libpython27.a (the one you created earlier)
Check USE_CLANG_COMPLETER (for semantic support)
Press Configure and Generate.
Now you should find the Makefile in the path specified at the top of cmake.
(Where to build the binaries:)
Build YCM
Open command line and navigate to the directory and enter mingw32-make.
The build will most likely fail before hitting 100%; the only thing you need is to get to around 90%. I think it tried to compile the tests too and failed.
If you navigate to ...\YouCompleteMe\third_party\ycmd there should be the following files
ycm_core.pyd
ycm_client_support.pyd
libclang.dll
If they are there, lucky you.
You can now copy the folders in ...\YouCompleteMe\* to the gvim folder, to check if it works.
In your _vimrc you can specify:
let g:ycm_path_to_python_interpreter = 'C:\python27\python.exe'
This points YCM to the right interpreter; if you have more than one installation (e.g. 3.5), it may produce problems, depending on which one is on the PATH.
Well this is about it, there are quite some steps where something can go wrong, or I may have missed something. If you face difficulties, just ask I may can help.
Just a side note. I can also recommend building with Visual Studio 2015, a snapshot build of LLVM/Clang from http://llvm.org/builds/ and Python 2.7.11, because VS 2015 supports Clang and is compatible with VS 2015's VC++ (http://clang.llvm.org/docs/MSVCCompatibility.html).
Works well too.
| {
"pile_set_name": "StackExchange"
} |
Q:
QT how to save to file QPoint 2d array
I have a board where I can "draw" a number.
This is the code of the board:
void PrintRectangle::paintEvent(QPaintEvent *)
{
for(int i=0; i<5; i++)
{
ypos=20;
for(int j=0; j<5; j++)
{
QColor color = Qt::white;
for(int k=0; k<points.size(); k++){
if( i == points[k].x() && j == points[k].y() )
{
color = Qt::black;
}
}
p.fillRect(xpos,ypos,recWidth,recHeight,color);
ypos+=60;
}
xpos+=60;
}
}
And the next function, which updates the points in the list:
QVector<QPoint> points;
void PrintRectangle::updateIndexFromPoint(const QPoint &point)
{
int x = point.x() - 20;
int y = point.y() - 20;
bool removed = false;
if( ( (x >= 0 ) && ( x <= 300) ) && ( (y >= 0 ) && ( y <= 300) ) )
{
mXIndex = x / 60; //rec width + spacing
mYIndex = y / 60; //rec height + spacing
for(int k=0; k<points.size(); k++){
qDebug("%d %d", points[k].x(), points[k].y());
if(points[k].x() == mXIndex && points[k].y() == mYIndex){
points.remove(k);
removed = true;
}
}
if(!removed){
points.append(QPoint(mXIndex,mYIndex));
}
}
}
My question is: how can I save the numbers of the selected rectangles from the QPoint list to a file?
E.g. the number 1 in a file:
0 0 1 0 0
0 1 1 0 0
1 0 1 0 0
0 0 1 0 0
0 0 1 0 0
A:
Simply use QDataStream to store your points in file:
void savePoints(QVector<QPoint> points)
{
QFile file("points.bin");
if(file.open(QIODevice::WriteOnly))
{
QDataStream out(&file);
out.setVersion(QDataStream::Qt_4_0);
out << points;
file.close();
}
}
QVector<QPoint> loadPoints()
{
QVector<QPoint> points;
QFile file("points.bin");
if(file.open(QIODevice::ReadOnly))
{
QDataStream in(&file);
in.setVersion(QDataStream::Qt_4_0);
in >> points;
file.close();
}
return points;
}
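If the goal is the plain 0/1 text grid shown in the question (rather than a binary file), you can render the board yourself. Below is a Qt-free sketch of that formatting step — with Qt you would loop over the QVector<QPoint> the same way and write the string through QTextStream; the helper name here is made up:

```cpp
#include <string>
#include <utility>
#include <vector>

// Render a rows x cols board as the "0 0 1 0 0" text grid from the question:
// cells listed in `points` become 1, everything else 0.
std::string gridText(const std::vector<std::pair<int, int>>& points,
                     int rows = 5, int cols = 5) {
    std::string out;
    for (int i = 0; i < rows; ++i) {
        for (int j = 0; j < cols; ++j) {
            bool filled = false;
            for (const auto& p : points)
                if (p.first == i && p.second == j) filled = true;
            out += filled ? '1' : '0';
            out += (j + 1 < cols) ? ' ' : '\n';
        }
    }
    return out;
}
```

Writing the returned string to a file then gives exactly the layout from the question.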
Q:
customization of the camera overlay
My first question about customization is here, and with help from Mat I was able to customize it. Now I would like to customize the overlay by loading a view from a nib file...
What I am trying to do is
Create another UIViewController called myCustomVIew with its xib
Add a toolbar on top and some buttons onto it
Set controller.cameraOverlay = aView.view
Please take a look here, so that you can picture what I am doing so far.
However, when I run my app, the view is shown here.
I know I have just screwed up somewhere; please advise me about this issue.
A:
You have to set a clearColor on the view, otherwise you can't see the camera layer.
Q:
Reload page when a certain width is passed
I want the page to reload only if the browser window goes above or below 768px.
This was my attempt which failed.
if ($(window.width() > "769") {
$(window).resize(function () {
if ($(window).width() < "769") {
location.reload();
}
});
}
elseif($(window.width() < "769") {
$(window).resize(function () {
if ($(window).width() > "769") {
location.reload();
}
});
}
I'm sure there's a really simple way of doing this.
A:
demo jsFiddle
The proof that the page is reloaded is the wait icon in the tab :D and the Math.random call that generates a random number (in the demo).
var ww = $(window).width();
var limit = 769;
function refresh() {
ww = $(window).width();
var w = ww<limit ? (location.reload(true)) : ( ww>limit ? (location.reload(true)) : ww=limit );
}
var tOut;
$(window).resize(function() {
var resW = $(window).width();
clearTimeout(tOut);
if ( (ww>limit && resW<limit) || (ww<limit && resW>limit) ) {
tOut = setTimeout(refresh, 100);
}
});
The timeout function will help on window resize to wait 100ms before calling the refresh function.
You can increase the timeout value to improve usability.
A:
There are probably other and much better ways of doing what you really need, but:
if ($(window.width() > "769"){
Should be:
if ($(window).width() > 769){
Full code:
var width = $(window).width();
$(window).resize(function() {
if (width > 769 && $(window).width() < 769) {
location.reload();
}
else if (width < 769 && $(window).width() > 769) {
location.reload();
}
});
Live DEMO
It could be made with one if statement, but I preferred splitting it into two so it'll be easier to follow.
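A small refinement on both answers: isolate the "did we cross the breakpoint?" test into a helper, so the reload only fires on an actual crossing (a sketch — the 769 limit is taken from the question, and the wiring names are illustrative):

```javascript
// True only when the width moved from one side of the breakpoint to the other.
function crossedBreakpoint(prevWidth, currWidth, limit) {
  return (prevWidth < limit) !== (currWidth < limit);
}

// In the page, wiring it up would look roughly like (browser-only, not run here):
// var prev = $(window).width();
// $(window).resize(function () {
//   var curr = $(window).width();
//   if (crossedBreakpoint(prev, curr, 769)) location.reload();
//   prev = curr;
// });
```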
Q:
codeigniter jquery status update - prior updates not showing
I'm working on a Facebook-like status update using jquery in Codeigniter. So far, I've got the database, models, controllers, views, and jquery.
My status updates are posting to the database. The problem is that I can't see the prior status updates in my view (they should be in the #content div); instead, when I press submit I get a blank screen (I have PHP errors turned on, so it's not that). I suspect the problem is in the jQuery or the message list view.
DATABASE
CREATE TABLE IF NOT EXISTS `messages` (
`id` tinyint(4) NOT NULL AUTO_INCREMENT,
`message` VARCHAR(200) NOT NULL,
PRIMARY KEY (`id`)
)
CONTROLLER
<?php
class Message extends CI_Controller
{
function index () {
$this->load->view('default');
}
function add()
{
if ($this->input->post('submit')) {
$id = $this->input->post('id');
$message = $this->input->post('message');
// Add the post
$this->load->model('message_model');
$this->message_model->addPost($id, $message);
}
}
function view($type = NULL)
{
$data['messages'] = $this->db->get('message');
if ($type == "ajax")
$this->load->view('messages_list', $data);
else // load the default view
$this->load->view('default', $data);
}
}
MODEL
<?php
class Message_model extends CI_Model {
function addPost($id, $message) {
$data = array(
'id' => $id,
'message' => $message
);
$this->db->insert('messages', $data);
}
function get($limit=5, $offset=0)
{
$this->db->orderby('id', 'DESC');
$this->db->limit($limit, $offset);
return $this->db->get('messages')->result();
}
}
HTML/JQUERY
<!DOCTYPE html>
<head>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.js" type="text/javascript"></script>
<script type="text/javascript">
$(document).ready(function() {
$('#submit').click(function() {
var msg = $('#message').val();
$.post("<?= site_url('message') ?>", {message: msg}, function() {
$('#content').load("<?= site_url('message/view/ajax') ?>");
$('#message').val('');
});
});
});
</script>
</head>
<body>
<div id="form">
<?php echo form_open('message/add'); ?>
<input type="text" id="message" name="message" />
<?php echo form_submit('submit', 'Update', "class='button'"); ?>
<?php echo form_close(); ?>
</div>
<br />
<br />
<div id="content">
<?php $this->load->view('messages_list') ?>
</div>
</body>
</html>
MESSAGE LIST VIEW
(this is what should load in the #content div; a list of the previous messages--limited to 5 by the model)
<ol>
<?php
if (!empty($message) and (is_array($message)))
foreach ($message as $message):
?>
<li><?= $message->message ?></li>
<?php endforeach ?>
</ol>
A:
Do you get redirected to a completely different page (blank), or is it just the JS that blanks the current page?
And, does it blank just #content or the whole page?
does the callback function of $.post() get called? (Try with an alert())
Are you sure that message/view/ajax is returning the expected html? -> try to see how the request goes by using Firebug console
Update: avoiding the redirect
It looks like you are not stopping the normal form submit, and thus the page gets reloaded; to prevent this, you should add event.preventDefault() to your event handler:
$('#your-form-id').submit(function(event) {
event.preventDefault();
// Your code here..
});
(or, in your case, in the .click() handler for #submit, but it would be better to use .submit() for the form, if you can uniquely determine its id.. or you can just use $('#form form').submit( ... ))
Q:
Does there exist a ``continuous measure'' on a metric space?
Let $X$ be a separable complete metrizable space. Does there exist a complete metric $d$ and a Borel measure $\mu$ such that
(a)
$\mu(B_r(x))<\infty$ for every open ball $B_r(x)$ of radius $r>0$ around $x\in X$,
(b) for each $r>0$ the map
$
x\mapsto \mu(B_r(x))
$
is continuous on $X$,
(c) $\mathrm{supp}(\mu)=X$?
A:
By continuing the line of thought from Nate Eldredge's comment you can handle finitely many separated parts of the space by a distance having values at most one and exactly one between points in different connected components. However, a possible counter-example (I did not yet check the details) should follow by continuing this to countably many separated parts that accumulate. For instance:
Take $$X = [-1,0] \cup \{\frac1n\,:\,n \in \mathbb N\}$$ with the induced topology from $\mathbb R$. Each isolated point $\frac1n$ has to have positive measure. Since they accumulate to $0$ you cannot put a distance on $X$ separating them from the interval $[-1,0]$. This should force a discontinuity for a suitable $r>0$ when $x$ travels along $[-1,0]$.
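To make the first step of that sketch precise: every singleton $\{\frac1n\}$ is open in $X$, so condition (c) (full support) forces
$$\mu(\{\tfrac1n\}) = \varepsilon_n > 0 \quad \text{for all } n \in \mathbb N,$$
and since $\frac1n \to 0$ in every metric compatible with the topology, each ball $B_r(0)$ contains all but finitely many of these atoms, so (a) gives $\sum_{n \ge N} \varepsilon_n < \infty$ for some $N$. The (still unchecked) claim above is that no compatible complete metric can then keep $x \mapsto \mu(B_r(x))$ continuous for every $r$ as $x$ moves along $[-1,0]$ toward $0$.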
Q:
Windows Server 2016 (Backup - opinion)
I'm looking for a simple answer.
I have a server here that's identical to our primary server. I want to make this server our backup server.
Can someone point me to a resource or mention the services I need?
If the primary server fails, I want our backup server to pick up DHCP / Directory / DNS services and keep copies of the files from the primary server.
Would that mean simply failover clustering? Can you do that on two servers or do you need at least three?
A:
Failover clustering, as you mention, is probably your best bet here. You need a minimum of 2 servers (not 3, thankfully).
Your best options would be either to:
Individually cluster the services running atop the 2 physical servers (i.e. install AD, DHCP, DNS on each of the physical servers and make them aware of one another using their native functionality) or my preferred option:
Build a Windows Failover Cluster atop your two physical servers, and then create clustered roles on that Failover Cluster (either directly, or inside of VMs). This approach has a number of advantages (I advise you to research it in more detail), but automated failover and live migration are two of my favourites.
The latter option will require you to have (among other pre-requisites), a shared storage medium, and identical server hardware, as well as multiple NICs, etc.
The MS docs on the topic are dense, but very helpful - I highly recommend that you read through them.
As an aside - most of the roles you've mentioned are actually best deployed as pairs (or more) such as AD-DS, DNS, DHCP, etc - rather than having an active/backup relationship - but I still recommend that you familiarise yourself with failover clustering as an option.
Q:
With full data journaling, why does data appear in the directory immediately?
I've got a question regarding full data journaling on ext3 filesystems. The man page states the following:
data=journal
All data is committed into the journal prior to being written into
the main filesystem.
It seems to me that that means that a file is first saved to the journal and then copied to the filesystem.
I assumed that if I download something, it should first be saved in the journal and, once complete, moved to the FS. But after starting the download, the file appears in the directory (FS) right away. What's wrong with that?
Edit: Maybe it's wrong to think of "all data" = the whole size of the file? If "all data" means only a block or so at a time, then it would make sense that I can't see things being written to the journal first?!
A:
First, you're right to suspect that “all data” doesn't mean the whole file. In fact, that layer of the filesystem operates on fixed-size file blocks, not on whole files. At that level, it's important to keep a bounded amount of data, so working on whole files (which can be arbitrarily large) wouldn't work.
Second, there's a misconception in your question. The journaling behavior isn't something you can observe by looking at the directory contents with ls, it works at a much lower level. With normal tools, you'll always see that the file is there. (It would be catastrophic if creating a file didn't appear to, y'know, create it.) What happens under the hood is that the file can be stored in different ways. At first, the first few blocks are saved in the journal. Then, as soon as efficiently possible, the data is moved to its final location. It's still the same file in the same directory, just stored differently.
The only way you can observe journaling behavior is if you go and see exactly what the kernel is writing to the disk, or if you analyse the disk content after a crash. In normal operation, the journal is an implementation detail: if you could see it in action (other than performance-wise), it would be severely broken.
For more information about filesystem journals, I recommend starting with the Wikipedia article. In ext3 terms, a data=journal ensures that if the system crashes, each file is in a state that it had at some point before the crash (it's not always the latest state because of buffering). The reason this doesn't happen automatically is that the kernel reorders disk writes for efficiency (it can make a big difference). This is called a “physical journal” in the Wikipedia article. The other two modes (data=ordered and data=writeback) are forms of “logical journal”: they're faster, but they can lead to corrupted files. The journal limits the risk of corruption to a few files containing garbage; ext3 always uses a full journal for metadata. Without a journal for metadata, metadata can get lost, leading to major filesystem corruption. Furthermore, without a journal, recovery after a crash requires a full filesystem integrity check, whereas with a journal recovery means replaying a few journal entries.
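For concreteness, selecting this mode is just a mount option or fstab entry (a sketch — device and mount point below are placeholders):

```shell
# Mount an ext3 filesystem with full data journaling:
mount -o data=journal /dev/sdXN /mnt/data

# Or persistently, as an /etc/fstab entry:
# /dev/sdXN  /mnt/data  ext3  defaults,data=journal  0  2
```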
Note that even with a journal, typical unix filesystems don't guarantee global filesystem consistency, only per-file consistency at most. That is, suppose you write to file foo, then you write to file bar, then the system crashes. It's possible for bar to have the new contents but foo to still have the old contents. To have complete consistency, you need a transactional filesystem.
Q:
Clustering an image using Gaussian mixture models
I want to use GMM (Gaussian mixture models) for clustering a binary image, and I also want to plot the cluster centroids on the binary image itself.
I am using this as my reference:
http://in.mathworks.com/help/stats/gaussian-mixture-models.html
This is my initial code
I=im2double(imread('sil10001.pbm'));
K = I(:);
mu=mean(K);
sigma=std(K);
P=normpdf(K, mu, sigma);
Z = norminv(P,mu,sigma);
X = mvnrnd(mu,sigma,1110);
X=reshape(X,111,10);
scatter(X(:,1),X(:,2),10,'ko');
options = statset('Display','final');
gm = fitgmdist(X,2,'Options',options);
idx = cluster(gm,X);
cluster1 = (idx == 1);
cluster2 = (idx == 2);
scatter(X(cluster1,1),X(cluster1,2),10,'r+');
hold on
scatter(X(cluster2,1),X(cluster2,2),10,'bo');
hold off
legend('Cluster 1','Cluster 2','Location','NW')
P = posterior(gm,X);
scatter(X(cluster1,1),X(cluster1,2),10,P(cluster1,1),'+')
hold on
scatter(X(cluster2,1),X(cluster2,2),10,P(cluster2,1),'o')
hold off
legend('Cluster 1','Cluster 2','Location','NW')
clrmap = jet(80); colormap(clrmap(9:72,:))
ylabel(colorbar,'Component 1 Posterior Probability')
But the problem is that I am unable to plot the cluster centroids received from GMM on the primary binary image. How do I do this?
**Now suppose I have 10 such images in a sequence, and I want to store the information about their mean positions in two cell arrays. How do I do that? This is my code for my new question:**
images=load('gait2go.mat');%load the matrix file
for i=1:10
I{i}=images.result{i};
I{i}=im2double(I{i});
%determine 'white' pixels, size of image can be [M N], [M N 3] or [M N 4]
Idims=size(I{i});
whites=true(Idims(1),Idims(2));
df=I{i};
%we add up the various color channels
for colori=1:size(df,3)
whites=whites & df(:,:,colori)>0.5;
end
%choose indices of 'white' pixels as coordinates of data
[datax datay]=find(whites);
%cluster data into 10 clumps
K = 10; % number of mixtures/clusters
cInd = kmeans([datax datay], K, 'EmptyAction','singleton',...
'maxiter',1000,'start','cluster');
%get clusterwise means
meanx=zeros(K,1);
meany=zeros(K,1);
for i=1:K
meanx(i)=mean(datax(cInd==i));
meany(i)=mean(datay(cInd==i));
end
xc{i}=meanx(i); %cell array containing the mean position for each of the 10 images
xb{i}=meany(i);
figure;
gscatter(datay,-datax,cInd); %funky coordinates for plotting according to image
axis equal;
hold on;
scatter(meany,-meanx,20,'+'); %same funky coordinates
end
I am able to get the 10 images segmented, but the values of the means are not stored in the cell arrays xc and xb. They only store [] in place of the mean values.
A:
I decided to post an answer to your question (where your question was determined by a maximum-likelihood guess:P), but I wrote an extensive introduction. Please read carefully, as I think you have difficulties understanding the methods you want to use, and you have difficulties understanding why others can't help you with your usual approach of asking questions. There are several problems with your question, both code-related and conceptual. Let's start with the latter.
The problem with the problem
You say that you want to cluster your image with Gaussian mixture modelling. While I'm generally not familiar with clustering, after a look through your reference and the wonderful SO answer you cited elsewhere (and a quick 101 from @rayryeng) I think you are on the wrong track altogether.
Gaussian mixture modelling, as its name suggests, models your data set with a mixture of Gaussian (i.e. normal) distributions. The reason for the popularity of this method is that when you do measurements of all sorts of quantities, in many cases you will find that your data is mostly distributed like a normal distribution (which is actually the reason why it's called normal). The reason behind this is the central limit theorem, which implies that the sum of reasonably independent random variables tends to be normal in many cases.
Now, clustering, on the other hand, simply means separating your data set into disjoint smaller bunches based on some criteria. The main criterion is usually (some kind of) distance, so you want to find "close lumps of data" in your larger data set. You usually need to cluster your data before performing a GMM, because it's already hard enough to find the Gaussians underlying your data without having to guess the clusters too. I'm not familiar enough with the procedures involved to tell how well GMM algorithms can work if you just let them work on your raw data (but I expect that many implementations start with a clustering step anyway).
To get closer to your question: I guess you want to do some kind of image recognition. Looking at the picture, you want to get more strongly correlated lumps. This is clustering. If you look at a picture of a zoo, you'll see, say, an elephant and a snake. Both have their distinct shapes, and they are well separated from one another. If you cluster your image (and the snake is not riding the elephant, neither did it eat it), you'll find two lumps: one lump elephant-shaped, and one lump snake-shaped. Now, it wouldn't make sense to use GMM on these data sets: elephants, and especially snakes, are not shaped like multivariate Gaussian distributions. But you don't need this in the first place, if you just want to know where the distinct animals are located in your picture.
Still staying with the example, you should make sure that you cluster your data into an appropriate number of subsets. If you try to cluster your zoo picture into 3 clusters, you might get a second, spurious snake: the nose of the elephant. With an increasing number of clusters your partitioning might make less and less sense.
Your approach
Your code doesn't give you anything reasonable, and there's a very good reason for that: it doesn't make sense from the start. Look at the beginning:
I=im2double(imread('sil10001.pbm'));
K = I(:);
mu=mean(K);
sigma=std(K);
X = mvnrnd(mu,sigma,1110);
X=reshape(X,111,10);
You read your binary image, convert it to double, then stretch it out into a vector and compute the mean and deviation of that vector. You basically smear your entire image into 2 values: an average intensity and a deviation. And then you generate 111*10 standard normal points with these parameters and try to do GMM on the first two sets of 111, which are both independently normal with the same parameters. So you probably get two overlapping Gaussians around the same mean with the same deviation.
I think the examples you found online confused you. When you do GMM, you already have your data, so no pseudo-normal numbers should be involved. But when people post examples, they also try to provide reproducible inputs (well, some of them do, nudge nudge wink wink). A simple method for this is to generate a union of simple Gaussians, which can then be fed into GMM.
So, my point is, that you don't have to generate random numbers, but have to use the image data itself as input to your procedure. And you probably just want to cluster your image, instead of actually using GMM to draw potatoes over your cluster, since you want to cluster body parts in an image about a human. Most body parts are not shaped like multivariate Gaussians (with a few distinct exceptions for men and women).
What I think you should do
If you really want to cluster your image, like in the figure you added to your question, then you should use a method like k-means. But then again, you already have a program that does that, don't you? So I don't really think I can answer the question saying "How can I cluster my image with GMM?". Instead, here's an answer to "How can I cluster my image?" with k-means, but at least there will be a piece of code here.
%set infile to what your image file will be
infile='sil10001.pbm';
%read file
I=im2double(imread(infile));
%determine 'white' pixels, size of image can be [M N], [M N 3] or [M N 4]
Idims=size(I);
whites=true(Idims(1),Idims(2));
%we add up the various color channels
for colori=1:Idims(3)
whites=whites & I(:,:,colori)>0.5;
end
%choose indices of 'white' pixels as coordinates of data
[datax datay]=find(whites);
%cluster data into 10 clumps
K = 10; % number of mixtures/clusters
cInd = kmeans([datax datay], K, 'EmptyAction','singleton',...
'maxiter',1000,'start','cluster');
%get clusterwise means
meanx=zeros(K,1);
meany=zeros(K,1);
for i=1:K
meanx(i)=mean(datax(cInd==i));
meany(i)=mean(datay(cInd==i));
end
figure;
gscatter(datay,-datax,cInd); %funky coordinates for plotting according to image
axis equal;
hold on;
scatter(meany,-meanx,20,'ko'); %same funky coordinates
Here's what this does. It first reads your image as double like yours did. Then it tries to determine "white" pixels by checking that each color channel (of which can be either 1, 3 or 4) is brighter than 0.5. Then your input data points to the clustering will be the x and y "coordinates" (i.e. indices) of your white pixels.
Next it does the clustering via kmeans. This part of the code is loosely based on the already cited answer of Amro. I had to set a large maximal number of iterations, as the problem is ill-posed in the sense that there aren't 10 clear clusters in the picture. Then we compute the mean for each cluster, and plot the clusters with gscatter, and the means with scatter. Note that in order to have the picture facing in the right directions in a scatter plot you have to shift around the input coordinates. Alternatively you could define datax and datay correspondingly at the beginning.
And here's my output, run with the already processed figure you provided in your question:
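If MATLAB is unavailable, the clustering-plus-clusterwise-means step can be sketched in plain Python — a stand-in for the kmeans call above, using fixed initial centers and plain Lloyd iterations (all names here are illustrative):

```python
# Minimal Lloyd's algorithm over 2-D points, followed by the per-cluster
# means that the MATLAB code computes with meanx/meany.
def kmeans(points, centers, iters=100):
    for _ in range(iters):
        # Assign each point to its nearest current center.
        clusters = [[] for _ in centers]
        for p in points:
            best = min(range(len(centers)),
                       key=lambda c: (p[0] - centers[c][0]) ** 2
                                   + (p[1] - centers[c][1]) ** 2)
            clusters[best].append(p)
        # Recompute each center as the mean of its cluster.
        centers = [(sum(p[0] for p in cl) / len(cl),
                    sum(p[1] for p in cl) / len(cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters
```

Feeding it the white-pixel coordinates extracted from the image would mirror the [datax datay] input above.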
Q:
How to connect to Azure Table Service REST API?
I'm working on trying to access an Azure table storage resource with REST API only from a .net application (without using the azure cloud libraries) ...
Just looking at the MSDN instructions, I gather that my URL should be https://.table.core.windows.net/Tables to enumerate all the tables in the storage account, but when I enter the proper URL it gives me 404s... every URL I build according to the documentation to try and test functionality comes back 404.
I don't see where I can make the tables anonymous access in Azure, so I'm assuming I'm missing an authentication step somewhere, it's just not readily documented on MSDN.
Thanks for the help
A:
You're correct - Anonymous table access is not possible.
For listing tables, the request needs to be authenticated. In order to have an authenticated request, you would need to create an authorization header and pass that header in your request. To create an authorization header, please see this link: https://msdn.microsoft.com/en-us/library/azure/dd179428.aspx
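For illustration, here is a hedged Python sketch of the SharedKeyLite scheme for the Table service — my reading of the linked docs, so verify the exact string-to-sign and header format against MSDN before relying on it:

```python
import base64
import hashlib
import hmac

def table_auth_header(account, key_b64, date, canonical_resource):
    """Build a SharedKeyLite Authorization header for the Table service.

    For tables, the string-to-sign is the Date header value plus the
    canonicalized resource (e.g. "/myaccount/Tables"), signed with
    HMAC-SHA256 using the base64-decoded storage account key.
    """
    string_to_sign = date + "\n" + canonical_resource
    key = base64.b64decode(key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    signature = base64.b64encode(digest).decode("ascii")
    return "SharedKeyLite {}:{}".format(account, signature)
```

You would send this header, together with the same Date value, on the GET to your `.../Tables` URL.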
Q:
Why do I get unresolved reference in setOnClickListener?
I have this code:
class ViewHolder(view: View) : RecyclerView.ViewHolder(view){
fun bindItem(items: Item) {
itemView.name.text = items.name
Glide.with(itemView.context).load(items.image).into(itemView.image)
view.setOnClickListener {
view.context.startActivity(view.context.intentFor<DetailsActivity>("image" to items.image, "name" to items.name))
}
}
}
This keeps giving me an error that shows unresolved reference: view.
How to solve this? Thanks.
A:
It happens because you don't keep any reference to this view inside your class; in Java terms, all you do is a super(view) call. You can get access to the RecyclerView.ViewHolder#itemView field instead:
fun bindItem(items: Item) {
itemView.name.text = items.name
Glide.with(itemView.context).load(items.image).into(itemView.image)
itemView.setOnClickListener {
itemView.context.startActivity(itemView.context.intentFor<DetailsActivity>("image" to items.image, "name" to items.name))
}
}
Q:
How to solve morse code without spaces?
My friend got a note that has this morse code:
.--......-.----....-
She asked for help solving it.
It doesn't have any spaces, therefore we don't know what it means.
It is in English.
It would be really great if you were willing to help... Thank you and have a great day!
A:
I wrote a program to find all subdivisions of the morse code into english words. The minimum number of words possible is 3. There are 96 subdivisions into 3 words in my dictionary. None of them look like a solution to me.
The most plausible might be "at hero test", which would make sense if the writer of the message is telling your friend that the writer is off being tested to see if they're a hero. Another possibility is "wise no test", indicating that if the puzzle solver is wise, this puzzle should be "no test" for them, e.g. easy.
As I said, none of them look right.
ad saw zit .--.. ....-.-- --....-
ads rob a .--..... .-.----... .-
ads rod it .--..... .-.----.. ..-
ads rode a .--..... .-.----... .-
ads rot set .--..... .-.---- ....-
ads rots a .--..... .-.----... .-
an haw zit .--. .....-.-- --....-
an hey zit .--. .....-.-- --....-
ani snob a .--... ...-.----... .-
ani snot set .--... ...-.---- ....-
ani snots a .--... ...-.----... .-
ani stem zit .--... ...-.-- --....-
ani vat zit .--... ...-.-- --....-
ani veto set .--... ...-.---- ....-
at hero bet .-- ......-.--- -....-
at hero test .-- ......-.--- -....-
ate haw zit .--. .....-.-- --....-
ate hey zit .--. .....-.-- --....-
egis no bet .--...... -.--- -....-
egis no test .--...... -.--- -....-
egis nod it .--...... -.----.. ..-
egis node a .--...... -.----... .-
egis none it .--...... -.----.. ..-
egis not set .--...... -.---- ....-
egis tam bet .--...... -.--- -....-
egis tam test .--...... -.--- -....-
egis tat zit .--...... -.-- --....-
em hero bet .-- ......-.--- -....-
em hero test .-- ......-.--- -....-
ems snob a .--... ...-.----... .-
ems snot set .--... ...-.---- ....-
ems snots a .--... ...-.----... .-
ems stem zit .--... ...-.-- --....-
ems vat zit .--... ...-.-- --....-
ems veto set .--... ...-.---- ....-
pee snob a .--... ...-.----... .-
pee snot set .--... ...-.---- ....-
pee snots a .--... ...-.----... .-
pee stem zit .--... ...-.-- --....-
pee vat zit .--... ...-.-- --....-
pee veto set .--... ...-.---- ....-
pees no bet .--...... -.--- -....-
pees no test .--...... -.--- -....-
pees nod it .--...... -.----.. ..-
pees node a .--...... -.----... .-
pees none it .--...... -.----.. ..-
pees not set .--...... -.---- ....-
pees tam bet .--...... -.--- -....-
pees tam test .--...... -.--- -....-
pees tat zit .--...... -.-- --....-
peeve ode a .--......-. ----... .-
peeve one it .--......-. ----.. ..-
peeve to set .--......-. ---- ....-
pi snob a .--... ...-.----... .-
pi snot set .--... ...-.---- ....-
pi snots a .--... ...-.----... .-
pi stem zit .--... ...-.-- --....-
pi vat zit .--... ...-.-- --....-
pi veto set .--... ...-.---- ....-
pie erode a .--.... ..-.----... .-
pie fob a .--.... ..-.----... .-
pie into set .--.... ..-.---- ....-
pie item zit .--.... ..-.-- --....-
pis no bet .--...... -.--- -....-
pis no test .--...... -.--- -....-
pis nod it .--...... -.----.. ..-
pis node a .--...... -.----... .-
pis none it .--...... -.----.. ..-
pis not set .--...... -.---- ....-
pis tam bet .--...... -.--- -....-
pis tam test .--...... -.--- -....-
pis tat zit .--...... -.-- --....-
we haw zit .--. .....-.-- --....-
we hey zit .--. .....-.-- --....-
wee saw zit .--.. ....-.-- --....-
wees rob a .--..... .-.----... .-
wees rod it .--..... .-.----.. ..-
wees rode a .--..... .-.----... .-
wees rot set .--..... .-.---- ....-
wees rots a .--..... .-.----... .-
whit am bet .--......- .--- -....-
whit am test .--......- .--- -....-
whit at zit .--......- .-- --....-
whit em zit .--......- .-- --....-
white ode a .--......-. ----... .-
white one it .--......-. ----.. ..-
white to set .--......-. ---- ....-
wise no bet .--...... -.--- -....-
wise no test .--...... -.--- -....-
wise nod it .--...... -.----.. ..-
wise node a .--...... -.----... .-
wise none it .--...... -.----.. ..-
wise not set .--...... -.---- ....-
wise tam bet .--...... -.--- -....-
wise tam test .--...... -.--- -....-
wise tat zit .--...... -.-- --....-
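For anyone curious, the brute-force search described above can be sketched in a few lines of Python (the word list here is a tiny stand-in for a real dictionary):

```python
# Sketch of the subdivision search: encode each dictionary word in Morse,
# then depth-first split the input wherever a word's encoding is a prefix.
MORSE = {"a": ".-", "e": ".", "h": "....", "i": "..", "n": "-.",
         "o": "---", "r": ".-.", "s": "...", "t": "-", "w": ".--"}

def decode_all(code, words):
    enc = {w: "".join(MORSE[c] for c in w) for w in words}
    results = []

    def go(rest, acc):
        if not rest:
            results.append(tuple(acc))
            return
        for w, m in enc.items():
            if rest.startswith(m):
                go(rest[len(m):], acc + [w])

    go(code, [])
    return results
```

With this tiny word list, decoding ".--......-.----....-" finds both ("at", "hero", "test") and ("wise", "no", "test"), matching two of the entries above.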
Q:
How can I use CSS transitions to animate a responsive layout's vertical orientation?
I am trying to figure out a way to smoothly animate a responsive change to some elements' display property when the browser size reaches a certain breakpoint. I would like to use CSS transitions, but they do not apply to the display property so I may have to figure out a workaround. To be clear, I am only having trouble animating changes to the vertical orientation of elements that were previously arranged horizontally. Other, simple, responsive animations have been set up without issue.
Here is a simple example
In that example, I have set up effective transitions for the div dimensions that activate at given breakpoints. The final (smallest window) transition causes the divs to line up vertically. At first, this was achieved by simply changing the divs from display:inline-block; to display:block;. However, this could not be animated using CSS transitions, so I tried an alternative method. The alternative involved changing the divs from position:relative; to position:absolute; and adjusting their top properties. I thought CSS transitions would be able to effectively animate the change in top but that does not seem to happen.
Does anyone have any suggestions?
A:
Your problem is that you change from relative to absolute. That cannot be transitioned in any way.
Just try to keep your styles and change only numeric properties.
For instance, you can keep using relative position, and adjust the left and top values accordingly:
@media (max-width: 680px) {
.box {
width:150px;
height:150px;
}
#box1 {
left: 165px;
top:10px;
}
#box2 {
left: 0px;
top:170px;
}
#box3 {
left: -165px;
top:330px;
}
}
demo
Change the style a little to avoid the ugly behaviour on smaller screens:
@media (max-width: 680px) {
.box {
width:150px;
height:150px;
margin-left: -77px;
margin-right: -77px;
left: 0px;
}
#box1 {
top:10px;
}
#box2 {
top:170px;
}
#box3 {
top:330px;
}
}
new demo
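For completeness, the transition itself is just a rule on the boxes; something like the following (the exact duration is whatever the demo uses, 1s here is an assumption):

```css
.box {
  position: relative;
  display: inline-block;
  /* animates the numeric properties (left, top, width, height) at the breakpoints */
  transition: all 1s ease;
}
```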
The problem came when the container width could no longer hold the 3 divs, and they begin to flow to another row.
Q:
Extract a value from a SOAP Response in PHP
I need to extract a value from this SOAP response. The value is in the loginresponse / return element. Here's the response:
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/">
<SOAP-ENV:Body SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" xmlns:NS1="urn:DBCentralIntf-IDBCentral">
<NS1:LoginResponse>
<return xsi:type="xsd:string"><**THIS IS THE VALUE I NEED**></return>
</NS1:LoginResponse>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
Here's how I'm trying to parse:
$response = curl_exec($ch);
curl_close($ch);
$xml = simplexml_load_string($response, NULL, NULL, "http://schemas.xmlsoap.org/soap/envelope/");
$ns = $xml->getNamespaces(true);
$soap = $xml->children($ns['SOAP-ENV']);
$res = $soap->Body->children($ns['NS1']);
print_r($res->LoginResponse->Return);
But I get an empty object.
Thanks for your help!
A:
UPDATE:
Removing the namespaces clears things up a bit (although it is a hack). Here is my new code:
$response = curl_exec($ch);
curl_close($ch);
$cleanxml = str_ireplace(['SOAP-ENV:', 'SOAP:'], '', $response);
$cleanxml = str_ireplace('NS1:','', $cleanxml);
$xml = simplexml_load_string($cleanxml);
echo $xml->Body->LoginResponse->return[0];
A:
Instead of using cURL and attempting to parse the XML response, consider using the PHP SOAP client. You may need to install PHP SOAP or enable it in your PHP configuration. (I'm using PHP on Windows, so I just had to uncomment extension=php_soap.dll in php.ini.)
If you have SOAP installed, you can get the WSDL from the provider of the web service you're using. Based on Googling this value in the XML you showed: xmlns:ns1="urn:DBCentralIntf-IDBCentral", I'm guessing you can find it here, but you'll probably have better luck finding it since you know for sure what web service you're using.
After you have the WSDL, using the PHP SOAP client is super easy:
$client = new SoapClient('path/to/your.wsdl');
$response = $client->Login(['username', 'password']);
$theValueYouNeed = $response->loginresponse->return;
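If you would rather not strip the namespaces, the general technique is to address elements through their namespace URIs instead of their prefixes. PHP's SimpleXML supports this via children()/xpath() with registered namespaces; as an illustration of the idea, here is the same lookup in Python's ElementTree (the sample response and token value below are made up):

```python
import xml.etree.ElementTree as ET

response = """<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns:NS1="urn:DBCentralIntf-IDBCentral">
  <SOAP-ENV:Body>
    <NS1:LoginResponse>
      <return xsi:type="xsd:string">session-token-123</return>
    </NS1:LoginResponse>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>"""

root = ET.fromstring(response)
ns = {
    "soap": "http://schemas.xmlsoap.org/soap/envelope/",
    "ns1": "urn:DBCentralIntf-IDBCentral",
}
# The prefixes used here are local aliases; only the URIs have to match the document.
value = root.find("soap:Body/ns1:LoginResponse/return", ns).text
print(value)  # session-token-123
```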
Q:
Do mortgage lenders ever prefer bad credit?
In the US, If you want to put $400,000 down and borrow $100,000 with a mortgage to buy a $500,000 home, is it really so important to have good credit with so much collateral? If you were to default the bank will get all that equity so it seems to my uninformed mind that banks might desire you to have bad credit in such a situation as above were %80 of the house is already put down.
A:
is it really so important to have good credit with so much collateral
Yes, it is important to have good credit; the bank may not lend at all, or may charge more, for bad credit.
If you were to default the bank will get all that equity so
You are missing a fundamental point. The bank cannot take more than what it is owed. When it takes possession of the house, it auctions it off, takes what was due from the sale, and returns any surplus to the owner. This entire process takes time, and hence the bank wants to avoid giving a loan to someone it feels is risky.
Edit:
There are different aspects of risk that the bank factors in.
Whether someone will default on repayments. This is established by income and credit score. Based on this, they would assign low, moderate, or high risk.
In case of a default, will the bank lose money? This is determined by the equity in the house. The more equity, the more the bank is safeguarded so that even in adverse conditions it will not lose money. The only advantage in your example is that the bank may not lose money even if the price crashes by more than 50%.
Q:
Why my GPU load is low when is rendering a scene with Blender?
I have a scene in Blender and I render it with GPU Compute, Cycles.
The problem is that when the scene is rendering, my GPU load stays under 5% while the CPU does all the render work.
My GPU is an Nvidia GTX 745 and my CPU an Intel Core i5. The Blender version that I am using for the render is 2.82a.
Anybody knows why is this happening?
A:
You could go into the Blender preferences (Edit > Preferences > System > Cycles Render Devices) and look for your GPU there, as you may have 'None' selected.
It might be listed under a different path-tracing backend option (CUDA, OptiX, or OpenCL in 2.82).
Q:
main.c|4|error C2059: syntax error : 'type' when defining a max() function
#include <stdio.h>
#include <stdlib.h>
int max(int num1, int num2) {
int result;
if(num1 > num2) {
result =num1;
} else {
result = num2;
}
return result;
}
int main() {
printf("%d",max(4,3));
return 0;
}
I do not understand why this is not working; the compiler tells me:
main.c|4|error C2059: syntax error : 'type'|
A:
It would appear that one of the library header files (most likely stdlib.h) is providing a macro definition for max, and this is conflicting with your own function definition.
To resolve this, either rename your function (say mymax) or add the following line after the two #include lines:
#undef max
Or, if you want to be more thorough:
#ifdef max
#undef max
#endif
Another option, as you aren't actually using anything from the stdlib.h header, is just to remove (or comment out) the #include <stdlib.h> line (but that may affect code you later add to your program).
NOTE: Compilers (such as MSVC) that give macro definitions for max and min are, strictly speaking, not conforming to the C Language Standard. However, that doesn't seem to prevent their implementers from doing it. If you are using MSVC, then you can also prevent this error by including the following line immediately before including the stdlib.h header:
#undef _CRT_INTERNAL_NONSTDC_NAMES
#include <stdlib.h>
You can also fix the problem without changing your code, by adding the /Za complier option (or set "Disable Language Extensions" to "Yes" in the project's C/C++ properties).
Q:
How to use parallel computing for missRanger in imputation of missing values?
I am imputing missing values by missRanger and it takes too long as I have 1000 variables. I tried to use parallel computing, but it does not make the process faster. Here is the code
library(doParallel)
cores=detectCores()
cl <- makeCluster(cores[1]-1)
registerDoParallel(cl)
library(missRanger)
train[1:lengthvar] <- missRanger(train[1:lengthvar], pmm.k = 3, num.trees = 100)
stopCluster(cl)
I am not sure what to add to this code to make it work.
A:
missRanger is based on ranger, a parallelized random forest implementation in R. Thus the code is already running on all cores, and wrappers like doParallel just render the code clumsy.
Try to speed up the calculations by passing relevant arguments to ranger via the ... argument of missRanger, e.g.
num.trees = 20 or
max.depth = 8
instead.
Disclaimer: I am the author of missRanger.
Q:
What is SQL Connection
What is the SQL Connection object, and what actually happens when we open/close it? What resources does it consume, and why is it necessary to dispose of it (in terms of C#/.NET)?
A:
SqlConnection:
Take a look at the MSDN page for SqlConnection. It is stated that:
A SqlConnection object represents a unique session to a SQL Server
data source. With a client/server database system, it is equivalent to
a network connection to the server.
SqlConnection.Open: In the MSDN page on SqlConnection.Open, it is stated that:
The SqlConnection draws an open connection from the connection pool if
one is available. Otherwise, it establishes a new connection to an
instance of SQL Server.
SqlConnection.Close(and Dispose):
The MSDN page on SqlConnection.Close says that:
The Close method rolls back any pending transactions. It then releases the connection to the connection pool, or closes the connection if connection pooling is disabled.
Also, in the SqlConnection page it is stated that:
If the SqlConnection goes out of scope, it won't be closed. Therefore, you must explicitly close the connection by calling Close or Dispose. Close and Dispose are functionally equivalent. If the connection pooling value Pooling is set to true or yes, the underlying connection is returned back to the connection pool. On the other hand, if Pooling is set to false or no, the underlying connection to the server is actually closed.
and:
To ensure that connections are always closed, open the connection inside of a using block, as shown in the following code fragment. Doing so ensures that the connection is automatically closed when the code exits the block.
This should answer your questions.
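The using block mentioned in that last quote looks like this (connectionString and the work done inside the block are placeholders):

```csharp
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    // ... run commands against the connection here ...
}   // Dispose() is called automatically here, returning the connection to the pool
```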
EDIT:
For further readings (also seen in your comments) you can read about Connection-Pooling and of course check out the source code for SqlConnection.
Q:
Source crop & Frame values in dumpsys SurfaceFlinger output
I am working on a project on SurfaceFlinger. So, when is the "source-crop" area different from "frame" area(these are the values that appear in the dumpsys SurfaceFlinger output). In other words, when/why are the layer-contents (rendered by the app) scaled. Or are they rendered by the app itself after scaling. If not, does SurfaceFlinger take the layer-content, scales them and then composites them? Also, who decides the "frame" rectangle, the app or SurfaceFlinger?
A:
See the Android System-Level Graphics document. In particular, the SurfaceView section has a sub-section on use of the hardware scaler that addresses this, but I recommend reading the full thing if you want to understand the details.
Most content is rendered 1:1 for best quality. Apps can choose to scale up a SurfaceView for performance reasons. The sizes are set by the Window Manager.
One common situation in which scaling is performed is video playback. You generally want the video to fill as much of the screen as possible, regardless of whether it's SD or HD content.
Q:
Razor Page Ranking
I am trying to make a ranking page using razor page.
I will be getting my data from API (which was linked to the db).
The sample data in the DB is
CREATE TABLE Member (Id int, Points int);
INSERT INTO Member VALUES (1, 200);
INSERT INTO Member VALUES (2, 100);
INSERT INTO Member VALUES (3, 20);
INSERT INTO Member VALUES (4, 50);
INSERT INTO Member VALUES (5, 300);
I signed in as memberid = 2, so my current ranking is 3.
I need to display out in a table the data of rank 2, 3, and 4.
Sample display is
Rank MemberID Points
2 1 200
3 2 100
4 4 50
How do I achieve this?
using (var client = new HttpClient())
{
var id = Request.Cookies["MemberId"].ToString();
client.BaseAddress = new Uri(baseUrl);
var responseTask = client.GetAsync("api/members");
responseTask.Wait();
var result = responseTask.Result;
if (result.IsSuccessStatusCode)
{
var memberResponse = result.Content.ReadAsStringAsync().Result;
Member = JsonConvert.DeserializeObject<IList<Member>>(memberResponse);
var member = Member.OrderByDescending(x => x.Points);
Member = member.ToList().GetRange(member.ToList().FindIndex(x => x.Id == Int32.Parse(id)) - 1, 3);
}
else
{
Member = (IList<Member>)Enumerable.Empty<Member>();
ModelState.AddModelError(string.Empty, "Server Error. Please contact administrator.");
}
}
This is what I have got so far. I am able to display only the members one above and one below the user, but the rank I get is wrong.
<table class="table">
<thead>
<tr>
<th>
Rank
</th>
<th>
@Html.DisplayNameFor(model => model.Member[0].UserName)
</th>
<th>
@Html.DisplayNameFor(model => model.Member[0].Points)
</th>
</tr>
</thead>
<tbody>
@{int rank = 0;}
@foreach (var item in Model.Member)
{
<tr>
<td>
@{rank++;}<label># @rank</label>
</td>
<td>
@Html.DisplayFor(modelItem => item.UserName)
</td>
<td>
@Html.DisplayFor(modelItem => item.Points)
</td>
</tr>
}
</tbody>
This is what I got for my cshtml page.
Thanks!
A:
Just so I get the correct understanding: are you after a leaderboard of 3 people?
That is, the member who is logged in, plus the members who sit above and below them in the leaderboard?
Looking at the API, it looks like it takes in a memberId and returns that member only. Are there any other APIs? One that can get members by rank, or a list of members?
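Whatever the API offers, the windowing-plus-rank logic the question is after can be sketched in Python for clarity: ranks are just positions in the points-sorted list, so the displayed rank must start from the window's offset, not from 1 (which is the bug in the Razor loop above):

```python
members = [(1, 200), (2, 100), (3, 20), (4, 50), (5, 300)]  # (id, points)

# Sort by points descending; list index + 1 is the rank.
ranked = sorted(members, key=lambda m: m[1], reverse=True)
me = 2
idx = next(i for i, (mid, _) in enumerate(ranked) if mid == me)

# Take one above, the member, and one below; clamp at the top of the list.
start = max(idx - 1, 0)
window = ranked[start:start + 3]
rows = [(start + offset + 1, mid, pts) for offset, (mid, pts) in enumerate(window)]

print(rows)  # [(2, 1, 200), (3, 2, 100), (4, 4, 50)]
```

The same offset idea carries over to the Razor view: initialize the rank counter to the window's starting rank instead of 0.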
Q:
Type Mismatch Error while looping the array
I'm trying to build the array by looping the data set.
Values inside are Object/Range type. (e.g. 34FF544)
I get the "type mismatch" error.
Dim arr2 As Variant
Dim y As Long
Dim eil As Long
eil = 1
y = 1
Do Until Sheets(2).Range("A" & eil) = "" 'looping until the blank cell
arr2(y) = Range("A" & eil) 'storing the value in an array
y = y + 1 'next array element
eil = eil + 1 'next row to take value from
Loop
A:
The way you are currently populating an array is dynamic and for that you need to adjust two things:
"By declaring a dynamic array, you can size the array while the code
is running. Use a Static, Dim, Private, or Public statement to declare
an array, leaving the parentheses empty."Office Dev Center
So make sure you start your code with Dim arr2() As Variant
The second thing is that because you use a dynamic array, you have the option to resize the array before your loop, however you can also resize your array on the go, which is your route:
Do Until Sheets(2).Range("A" & eil) = ""
ReDim Preserve arr2(y) 'This is your key!
arr2(y) = Range("A" & eil)
y = y + 1
eil = eil + 1
Loop
Now that you know the culprit, it's also good to have a look at how efficient your code actually is. A few things that come to mind:
You have both y and eil going on the same count, why not just use one of them?
You can just load your array from a range in one go
Also, you should look into naming your worksheet and run code through a With... End With
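The second bullet, as a sketch (the sheet and column are taken from the code in the question):

```vba
Dim arr2 As Variant
With Sheets(2)
    ' One assignment loads the used part of column A into a 1-based 2-D array.
    arr2 = .Range("A1", .Range("A" & .Rows.Count).End(xlUp)).Value
End With
```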
Q:
EaselJS way to change default pivot?
Pivot of stage is set to Left Top corner of canvas but I want Left Bottom corner.
So then up is +y and right is +x
Is this possible?
A:
It is not advisable to transform the stage -- there are some issues with how mouse coordinates are transformed.
Put your contents in a Container instead
Set the coordinates of the container to the stage width/height
Move your contents into negative x/y
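A sketch of those three steps (the canvas id and positions are assumptions, not EaselJS defaults):

```javascript
var stage = new createjs.Stage("demoCanvas");
var world = new createjs.Container();

// The container's origin sits at the bottom-left corner of the canvas.
world.y = stage.canvas.height;
stage.addChild(world);

// Contents go at negative y to appear above the bottom edge.
// (Setting world.scaleY = -1 instead would make +y point up, but it also
// mirrors any text or bitmaps drawn inside the container.)
var dot = new createjs.Shape();
dot.graphics.beginFill("red").drawCircle(0, 0, 5);
dot.x = 30;   // 30px right of the left edge
dot.y = -50;  // 50px above the bottom edge
world.addChild(dot);
stage.update();
```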
If you absolutely must transform your stage, you can set the regX and regY to the stage width/height. It will move the contents, so you will have to counter-position the contents so they display properly.
Hope that helps!
Q:
How to make single array in between dates
I have one table called task
id t_title t_started_on t_due_on
1 Test 1 2018-01-18 01:00 PM 2018-01-20 01:00 PM
2 Test 2 2018-01-25 01:00 PM 2018-01-27 01:00 PM
From here I have to select dates: the first row's start date is 2018-01-18 01:00 PM (t_started_on) and its end date is 2018-01-20 01:00 PM (t_due_on), a total of 3 days:
2018-01-18
2018-01-19
2018-01-20
The second row is the same, also 3 days:
2018-01-25
2018-01-26
2018-01-27
Expected Result
Array
(
[allocatedDate] => 2018-01-18
[allocatedDate] => 2018-01-19
[allocatedDate] => 2018-01-20
[allocatedDate] => 2018-01-25
[allocatedDate] => 2018-01-26
[allocatedDate] => 2018-01-27
)
How do I write a select query for the above case?
A:
Like I said in your previous question, you should not store your dates as strings, but in date datatype: that will make your data less error prone and your queries simpler. Now you'll have to convert those strings to dates each time you need to do date/time calculations with them.
To generate the dates inside periods, you need a helper table, which can be useful also for many other purposes: a table with one column that has natural numbers starting from 0 up to some large n. You could create it like this:
create table nums (i int);
insert into nums values (0), (1), (2), (3);
insert into nums select i+4 from nums;
insert into nums select i+8 from nums;
insert into nums select i+16 from nums;
insert into nums select i+32 from nums;
insert into nums select i+64 from nums;
insert into nums select i+128 from nums;
insert into nums select i+256 from nums;
You can see how you double the number of records by adding a similar insert statement, but this will already generate 512 records, which would be more than enough for your purposes: it should have the highest number of days that a period can have in your tasks table.
Then you can use this query to get the desired output:
SELECT DISTINCT date_add(d_started_on, interval i day)
FROM (
SELECT date(STR_TO_DATE(t_started_on, '%Y-%m-%d')) as d_started_on,
datediff(
date(STR_TO_DATE(t_due_on, '%Y-%m-%d')),
date(STR_TO_DATE(t_started_on, '%Y-%m-%d'))
) as days
FROM tasks
) as base
INNER JOIN nums ON i <= days
ORDER BY 1
See also SQLfiddle
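If you would rather expand the periods on the application side instead of in SQL, the walk over each period is straightforward; here is a sketch in Python using the question's sample rows (PHP's DatePeriod class offers the equivalent):

```python
from datetime import date, timedelta

tasks = [
    (date(2018, 1, 18), date(2018, 1, 20)),
    (date(2018, 1, 25), date(2018, 1, 27)),
]

allocated = []
for start, due in tasks:
    d = start
    while d <= due:                  # inclusive of both endpoints
        allocated.append(d.isoformat())
        d += timedelta(days=1)

print(allocated)
# ['2018-01-18', '2018-01-19', '2018-01-20',
#  '2018-01-25', '2018-01-26', '2018-01-27']
```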
Q:
Search in MySQL using only day, month, or year with PHP
I created a table with a field in DATETIME format to store the day, month, year, and time at which a record is made.
I would like to know how I can search using only the day, month, or year.
Example:
I have 4 records:
2017-02-03
2017-02-13
2017-05-03
2018-01-04
When searching for the records made on day 03, it should show:
2017-02-03
2017-05-03
When searching for the records made in the year 2017, it should show:
2017-02-03
2017-02-13
2017-05-03
The same goes for the month. Is it possible to do this with PHP?
Utilize: YEAR(), MONTH() e DAY():
SELECT * FROM tabela
WHERE YEAR(data) = '2017'
AND MONTH(data) = '07'
AND DAY(data) IN ('1', '25' , '30')
Directly in PHP with explode:
$data = '2017-01-15';
$arrayData = explode("-", $data);
echo "Ano: ".$arrayData[0];
echo "<br>Mes: ".$arrayData[1];
echo "<br>Dia: ".$arrayData[2];
Q:
How to implement prototype function with 2 dimensional array in javascript?
I need to implement DataTable struct ,that is in c#, in javascript.
For example
function Servers(name)
{
this.Name = name;
this.Columns = new Array(5);
var rows = new Array(3);
for (var i=0;i<3;i++)
rows[i]=new Array(5);
this.Rows = rows;
}
I simply access the jth element of the ith Row by typing:
Servers.Rows[i][j]
This works well, but I need to call my object like this:
Servers.Rows[i]["ServerUrl"]
But I don't know how to implement a prototype for this to work.
Is there any way to achieve this?
Note: the Columns array holds column names like in C#, and its size always equals that of each Row's sub-array.
A:
Live demo
function create2Array(d1, d2, fn) {
var arr = [],
d = function(x, y) {},
f = fn || d;
for (var i = 0; i < d1; i++) {
for (var j = 0, curr = []; j < d2; j++) {
curr[j] = f.call(window, i, j);
};
arr[i] = curr;
};
return arr;
};
function createArrayOfObjects(d1) {
var arr = [];
for (var i = 0; i < d1; i++) {
arr[i] = {};
};
return arr;
};
function print2DArray(arr) {
document.body.innerHTML += "<p><b>Array:</b></p>";
for (var i = 0, len = arr.length; i< len; i++) {
document.body.innerHTML += "<p><b>" + i + "</b>: " + arr[i].join(" ") + "</p>";
};
};
function printArrayOfObj(arr) {
document.body.innerHTML += "<p><b>Array:</b></p>";
for (var i = 0, len = arr.length; i< len; i++) {
document.body.innerHTML += "<p><b>" + i + "</b>: " + JSON.stringify(arr[i]) + "</p>";
};
};
var Server = {};
Server.Rows = createArrayOfObjects(10);
Server.Rows[0]["something"] = "test";
printArrayOfObj(Server.Rows);
Use it like this:
Server.rows = create2Array(10, 10);
Or you can even specify a custom init function which takes the index as param.
Say if you want to init your matrix with 0 by default:
Server.rows = create2Array(10, 10, function(x, y) { return 0;});
Or if you use an object.
Server.rows = create2Array(10, 10);
Server.rows[0]["ServerUrl"] = "test";
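To get the DataTable-style Rows[i]["ServerUrl"] lookup, each row only needs to be a plain object keyed by the column names. A self-contained sketch of that idea (the column names here are made up):

```javascript
function createTable(columns, rowCount) {
  // Each row is an object with one property per column name, initialized to null.
  const rows = [];
  for (let i = 0; i < rowCount; i++) {
    const row = {};
    for (const name of columns) {
      row[name] = null;
    }
    rows.push(row);
  }
  return { Columns: columns, Rows: rows };
}

const servers = createTable(["ServerUrl", "Port"], 3);
servers.Rows[0]["ServerUrl"] = "http://example.com";

console.log(servers.Rows[0]["ServerUrl"]); // http://example.com
console.log(servers.Rows[1]["ServerUrl"]); // null
```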
Q:
Following definite integral
Here is the integral:
$$\int_{0}^{2}\sqrt{x+\sqrt{x+\sqrt{x+\sqrt{x+\ldots}}}} dx$$
Here is my work:
$$\sqrt{x+\sqrt{x+\sqrt{x+\sqrt{x+\ldots}}}} := y \implies x=y^2-y$$
By implicit differentiation, $$1 = 2y\frac{dy}{dx}-\frac{dy}{dx} \implies dx=dy(2y-1)$$.
So the integral is $$\int_{0}^{2}y(2y-1)dy = \frac{10}{3}$$.
(The limits of the integral stay at $0$ and $2$)
However, Wolfram Alpha is giving me approximately $19/6$; http://www.wolframalpha.com/input/?i=int%28%28x%2B%28x%2B%28x%2B%28x%2B%28x%2B%28x%2B%28x%2B%28x%2B%28x%2B%28x%2B%28x%2B%28x%2B%28x%2B%28x%2B%28x%29%5E%280.5%29%29%5E%280.5%29%29%5E%280.5%29%29%5E%280.5%29%29%5E%280.5%29%29%5E%280.5%29%29%5E%280.5%29%29%5E%280.5%29%29%5E%280.5%29%29%5E%280.5%29%29%5E%280.5%29%29%5E%280.5%29%29%5E%280.5%29%29%5E%280.5%29%29%5E%280.5%29%2C0%2C2%29.
Is there something wrong with my work?
A:
Is there something wrong with my work?
Yes, you didn't change the integral limits correctly.
You have (see below)
$$\lim_{x\downarrow 0} \sqrt{x+\sqrt{x+\sqrt{x+\sqrt{x+\dotsc}}}} = 1,$$
so the integral should be
$$\int_1^2 y(2y-1)\,dy.$$
With $f(x) = \sqrt{x+\sqrt{x+\sqrt{x+\dotsc}}} = \sqrt{x+f(x)}$, for $x > 0$ we have $f(x) \geqslant 0$, whence $f(x) = \sqrt{x+f(x)} \geqslant \sqrt{x}$. Then $f(x) = \sqrt{x+f(x)} \geqslant \sqrt{f(x)} \geqslant \sqrt[4]{x}$, and iterating $f(x) \geqslant x^{1/2^k}$ for all $k \in\mathbb{N}$, which implies $f(x) \geqslant 1$.
From $x = y^2-y$ we can directly compute $y = f(x)$ with the quadratic formula and obtain
$$f(x) = \frac{1}{2} + \sqrt{x+ \frac{1}{4}},$$
which we can integrate to check the result.
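Carrying out that check, both routes agree:

$$\int_1^2 y(2y-1)\,dy = \left[\frac{2y^3}{3}-\frac{y^2}{2}\right]_1^2 = \frac{10}{3}-\frac{1}{6} = \frac{19}{6},$$

and, integrating $f$ directly,

$$\int_0^2\left(\frac{1}{2}+\sqrt{x+\frac{1}{4}}\right)dx = \left[\frac{x}{2}+\frac{2}{3}\left(x+\frac{1}{4}\right)^{3/2}\right]_0^2 = 1+\frac{2}{3}\cdot\frac{27}{8}-\frac{2}{3}\cdot\frac{1}{8} = \frac{19}{6},$$

which matches Wolfram Alpha's value of $19/6$.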
Q:
Getting 'Consent Required' error despite successful permission grant using Oauth URL
I am using the example from https://github.com/docusign/docusign-python-client (docusign python SDK). Despite granting access by logging in using the URL given by oauth_login_url below, the subsequent "api_client.configure_jwt_authorization_flow" call always results in 'consent_required' error. The corresponding integrator key is set up with right redirect URI and key pair (of which I use the private key as private_key_filename below). Note that the account is not associated to any Organization yet. I am not yet there. But I would expect this basic flow to work as is. Any idea what could be causing this error?
oauth_login_url = api_client.get_jwt_uri(integrator_key, redirect_uri, oauth_base_url)
print(oauth_login_url)
https://account-d.docusign.com/oauth/auth?response_type=code&client_id=<integrator_key>&scope=signature%2Bimpersonation&redirect_uri=https%3A%2F%2Fwww.docusign.com%2Fapi
integrator_key = "<My INTEGRATOR_KEY1 from Docusign>"
redirect_uri = "https://www.docusign.com/api" <== same as in the Integrator Key
oauth_base_url = "account-d.docusign.com"
private_key_filename = "/Users/myname/Desktop/private.key"
user_id = "46933ecb-9aec-4fe3-8efe-7d5777ac9b54" <== Silly me, anonymized:) but to indicate I am not using email
api_client.configure_jwt_authorization_flow(private_key_filename, oauth_base_url, integrator_key, user_id, 3600)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/userme/projects/docudocu/lib/python2.7/site-packages/docusign_esign/api_client.py", line 118, in configure_jwt_authorization_flow
post_params=self.sanitize_for_serialization({"assertion": assertion, "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer"}))
File "/Users/userme/projects/docudocu/lib/python2.7/site-packages/docusign_esign/api_client.py", line 418, in request body=body)
File "/Users/userme/projects/docudocu/lib/python2.7/site-packages/docusign_esign/rest.py", line 244, in POST body=body)
File "/Users/userme/projects/docudocu/lib/python2.7/site-packages/docusign_esign/rest.py", line 200, in request
raise ApiException(http_resp=r)
docusign_esign.rest.ApiException: (400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'X-DocuSign-Node': 'CH1DFE2', 'Content-Length': '28', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains; preload', 'X-Content-Type-Options': 'nosniff', 'Content-Type': 'application/json; charset=utf-8', 'Expires': '-1', 'Content-Security-Policy-Report-Only': "script-src 'unsafe-inline' 'unsafe-eval' 'self';style-src 'unsafe-inline' 'self';img-src https://docucdn-a.akamaihd.net/ 'self';font-src 'self';connect-src 'self';object-src 'none';media-src 'none';frame-src 'none';frame-ancestors 'none';report-uri /client-errors/csp", 'X-XSS-Protection': '1; mode=block', 'X-DocuSign-TraceToken': 'b009b0a6-19ad-4e58-844e-76fc5b509cbb', 'Pragma': 'no-cache', 'Cache-Control': 'no-cache', 'Date': 'Wed, 29 Nov 2017 22:52:10 GMT', 'X-Frame-Options': 'SAMEORIGIN', 'X-AspNetMvc-Version': '5.2'})
HTTP response body: {"error":"consent_required"}
A:
If you are using user consent then you do not need an Organization. The only things you need to configure in your DocuSign account are: an Integrator Key, a Redirect URI, a Secret Key (only required if you want to look up the userId using the OAuth APIs), and a public/private RSA key pair.
And your Authorization Code Grant URL used to obtain user consent should refer to the same Integrator Key and Redirect URI that were configured in your DocuSign account. The same Integrator Key should also be used when generating the JWT. Also, oauth_base_path should be account-d.docusign.com for the demo environment. I would recommend creating a JWT and testing the flow using Postman, and using an epoch time converter to generate the iat and exp claims.
Details for JWT-related OAuth are also explained here: JWT OAUTH. Ignore Admin Consent if you are not using an Organization.
Q:
Must GnuPG to have installed keys for every username profile?
I'm running (on my local machine) the GPG (wingpg ) - command line version.
My login name at win7 - is RoyiN. ( so I have a profile for it)
When I logged in - I've installed the keys (using PKA.exe) both private and public.
All fine.
Then I wrote this code ( which is working )
Process proc = new Process();
proc.StartInfo.FileName = cfg.PGP_Program_FullPath;
proc.StartInfo.UserName = "Royin";
proc.StartInfo.Domain = ...;
proc.StartInfo.Password = ...
proc.StartInfo.Verb = "runas";
proc.Start();
...
However, if I put another user (who is also an Administrator on my local machine) in the UserName field, it says:
gpg: decryption failed: No secret key
Then I swapped back to RoyiN and it worked.
Are keys installed per user? Is there a way to change that so they are global, so every user on the machine can use these keys without having to install them under each and every profile?
It also implies that if I want to allow others to connect to my computer, I must stay logged on as RoyiN 24/7...
Is there any workaround for this?
A:
Yes, they are installed on per-user basis
Simple answer - just export the private/public key pair, and install it for the Administrator account as well.
Although, it'd be better to create a separate key for your automated system with own public key - whoever has your key with a high level of trust, will accept this one as well.
A:
There are two different things happening here that are related to the "person" running gpg.
GPG searches for keys in the default keyring files, which are installed in your user profile directory (under a folder named .gnupg). This will be a set of files like pubring.gpg and secring.gpg. This part is easy to work around: pass --secret-keyring "path\to\file" as one of the parameters and it will add that keyring file to its search path. You may want to move it to a publically readable location, like %ALLUSERSPROFILE%, first.
Apart from that, GnuPG keys are generated for and tied to an identity, which is usually your email address. When receiving files, the data will specify the identify of the person who's key is needed to decrypt and/or verify the integrity. When encrypting or signing files, you have to tell GPG who's key to use. Your secret key is used when you sign things for others, or when you decrypt data sent to you. You need to make sure the appropriate keys are in whatever keyring file you use, regardless of where it is.
There's no need for you to actually stay logged in when you run gpg, if you give it an explicit location for the data. It's simply that gpg, by default, reads the current environment variables, set at login, to determine where those things are.
You'll probably need to specify a keyring file path, a secret keyring file path, and a configuration file path if you want to run GPG unattended. The entire list of options you can specify is on the GPG Configuration Options page.
(You may want to try starting with just the --homedir option, which I think will override the default paths for everything else in one go, but you'd need to test that to make sure.)
Q:
Overlapping vertices on sphere?
I've started doing some programming in XNA, as I've been doing C# for several years and would like to start doing some 3D work with C# and the XNA framework.
Right now I'm trying to build a sphere in code. It renders fine, but as soon as I apply some light it looks like it is drawn twice, with and without normals for shading. It gets very flickery, alternating between light-shaded and unshaded.
basicEffect.EnableDefaultLighting();
basicEffect.DirectionalLight0.Direction = new Vector3(1, -1, 1);
basicEffect.DirectionalLight0.Enabled = true;
The code for generating the sphere data looks like this.
public class Sphere : Component, I3DComponent
{
float spehereRes = 10.0f;
// I3DComponent values
Vector3 position = Vector3.Zero;
Matrix rotation = Matrix.Identity;
Vector3 scale = new Vector3(1, 1, -1);
BoundingBox boundingBox = new BoundingBox(new Vector3(-1), new Vector3(1));
public Vector3 Position { get { return position; } set { position = value; } }
public Vector3 EulerRotation
{
get { return MathUtil.MatrixToVector3(Rotation); }
set { this.Rotation = MathUtil.Vector3ToMatrix(value); }
}
public Matrix Rotation { get { return rotation; } set { rotation = value; } }
public Vector3 Scale { get { return scale; } set { scale = value; } }
public BoundingBox BoundingBox { get { return boundingBox; } }
// Effect
BasicEffect basicEffect;
//Sphere variables
short[] indices;
int nvertices, nindices;
VertexPositionColorNormal[] vertices;
VertexBuffer vbuffer;
IndexBuffer ibuffer;
public Sphere(float radius)
: base()
{
basicEffect = new BasicEffect(Engine.GraphicsDevice);
SetupEffect();
Setup(radius);
}
public Sphere(float radius, GameScreen Parent)
: base(Parent)
{
basicEffect = new BasicEffect(Engine.GraphicsDevice);
SetupEffect();
}
private void Setup(float radius)
{
nvertices =Convert.ToInt32( spehereRes) * Convert.ToInt32(spehereRes); // nr of vertices in a circle, nr of circles in a sphere
nindices = Convert.ToInt32(spehereRes) * Convert.ToInt32(spehereRes) * 6;
vbuffer = new VertexBuffer(Engine.GraphicsDevice, typeof(VertexPositionNormalTexture), nvertices, BufferUsage.WriteOnly);
ibuffer = new IndexBuffer(Engine.GraphicsDevice, IndexElementSize.SixteenBits, nindices, BufferUsage.WriteOnly);
CreateIndices();
CreateSphereVertices(radius);
CalculateNormals();
vbuffer.SetData<VertexPositionColorNormal>(vertices);
ibuffer.SetData<short>(indices);
}
#region// Setup BasicEffect
/// <summary>
/// Setsup basic effect parameters
/// </summary>
private void SetupEffect()
{
//basicEffect.VertexColorEnabled = true;
//basicEffect.TextureEnabled = true;
basicEffect.EnableDefaultLighting();
basicEffect.DirectionalLight0.Direction = new Vector3(1, -1, 1);
basicEffect.DirectionalLight0.Enabled = true;
//basicEffect.AmbientLightColor = new Vector3(0.3f, 0.3f, 0.3f);
//basicEffect.DirectionalLight1.Enabled = false;
//basicEffect.DirectionalLight2.Enabled = false;
//basicEffect.SpecularColor = new Vector3(0, 0, 0);
}
#endregion
public override void Draw()
{
// Look for a camera in the service container
Camera camera = Engine.Services.GetService<Camera>();
// Throw an exception if one isn't present
if (camera == null)
{
throw new Exception("Camera not found in engine's"
+ " service container, cannot draw");
}
// Set effect values
basicEffect.World = MathUtil.CreateWorldMatrix(position, rotation, scale);
basicEffect.View = camera.View;
basicEffect.Projection = camera.Projection;
// For each pass..
foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes)
{
pass.Apply();
// Draw the terrain vertices and indices
Engine.GraphicsDevice.SetVertexBuffer(vbuffer);
Engine.GraphicsDevice.Indices = ibuffer;
Engine.GraphicsDevice.DrawUserIndexedPrimitives(PrimitiveType.TriangleList, vertices, 0, nvertices, indices, 0, indices.Length / 3, VertexPositionColorNormal.VertexDeclaration);
}
}
#region //CreateIndices()
/// <summary>
/// Creates Sphere Indices
/// </summary>
private void CreateIndices()
{
indices = new short[nindices];
int i = 0;
for (int x = 0; x < spehereRes; x++)
{
for (int y = 0; y < spehereRes; y++)
{
int s1 = x == spehereRes - 1 ? 0 : x + 1;
int s2 = y == spehereRes - 1 ? 0 : y + 1;
short upperLeft = (short)(x * spehereRes + y);
short upperRight = (short)(s1 * spehereRes + y);
short lowerLeft = (short)(x * spehereRes + s2);
short lowerRight = (short)(s1 * spehereRes + s2);
indices[i++] = upperLeft;
indices[i++] = upperRight;
indices[i++] = lowerLeft;
indices[i++] = lowerLeft;
indices[i++] = upperRight;
indices[i++] = lowerRight;
}
}
}
#endregion
#region //CreateSphereVertices(float radius)
/// <summary>
/// Setup Sphere object
/// </summary>
/// <param name="radius"></param>
void CreateSphereVertices(float radius)
{
vertices = new VertexPositionColorNormal[nvertices];
Vector3 center = new Vector3(0, 0, 0);
Vector3 rad = new Vector3((float)Math.Abs(radius), 0, 0);
for (int x = 0; x < spehereRes; x++) //nr of circles, difference between each is 4 degrees
{
float difx = 360.0f / spehereRes;
for (int y = 0; y < spehereRes; y++) //nr of veritces, difference between each is 4 degrees
{
float dify = 360.0f / spehereRes;
Matrix zrot = Matrix.CreateRotationZ(MathHelper.ToRadians(y * dify)); //rotate vertex around z
Matrix yrot = Matrix.CreateRotationY(MathHelper.ToRadians(x * difx)); //rotate circle around y
Vector3 point = Vector3.Transform(Vector3.Transform(rad, zrot), yrot);//transformation
vertices[x + y * Convert.ToInt32(spehereRes)].Position = point;
vertices[x + y * Convert.ToInt32(spehereRes)].Color = Color.Black;
}
}
}
#endregion
#region //CalculateNormals()
/// <summary>
/// Calculates normals for vertices
/// </summary>
private void CalculateNormals()
{
for (int i = 0; i < vertices.Length; i++)
vertices[i].Normal = new Vector3(0, 0, 0);
for (int i = 0; i < indices.Length / 3; i++)
{
int index1 = indices[i * 3];
int index2 = indices[i * 3 + 1];
int index3 = indices[i * 3 + 2];
Vector3 side1 = vertices[index1].Position - vertices[index3].Position;
Vector3 side2 = vertices[index1].Position - vertices[index2].Position;
Vector3 normal = Vector3.Cross(side1, side2);
vertices[index1].Normal += normal;
vertices[index2].Normal += normal;
vertices[index3].Normal += normal;
}
for (int i = 0; i < vertices.Length; i++)
vertices[i].Normal.Normalize();
}
#endregion
}
Any ideas about this would be appreciated. I lowered the sphere's "resolution" to 10 to try to find an answer while doing the drawing, but it looks kind of tricky.
A:
Solved. Typical... :) I'd been struggling with this for a while, and now that I posted for tips I finally found the solution.
I had these rasterizer settings for rendering:
RasterizerState rs = new RasterizerState();
rs.CullMode = CullMode.None;
rs.FillMode = FillMode.Solid;
Engine.GraphicsDevice.RasterizerState = rs;
If I removed them and only ran
Engine.GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;
It worked as it should. Well, if anyone else has a problem with overlapped drawing, I hope this helps: check how the RasterizerState is set.
| {
"pile_set_name": "StackExchange"
} |
Q:
How can I return custom http header in asp.net core?
Hi,
I'm developing a series of microservices with ASP.NET Core. I want to return a custom HTTP header on 500 responses.
I tried to create custom ASP.NET Core middleware that updates the context.Response.Headers property, but it only works when the response is 200.
This is my custom middleware:
namespace Organizzazione.Progetto
{
public class MyCustomMiddleware
{
private readonly RequestDelegate _next;
public MyCustomMiddleware(RequestDelegate next)
{
_next = next;
}
public async Task InvokeAsync(HttpContext context)
{
context.Response.Headers.Add("X-Correlation-Id", Guid.NewGuid().ToString());
await _next.Invoke(context);
return;
}
}
}
This is my configure method:
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
app.UseMiddleware<MyCustomMiddleware>();
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
else
{
app.UseHsts();
}
app.UseHttpsRedirection();
app.UseMvc();
app.UseSwagger();
app.UseSwaggerUI(c =>
{
c.SwaggerEndpoint("/swagger/v1/swagger.json", "My API");
});
}
How can I return my custom header on a 500 response caused by an unhandled exception (or, ideally, on all responses)?
Thank you a lot
A:
You have to subscribe to httpContext.Response.OnStarting:
public class CorrelationIdMiddleware
{
private readonly RequestDelegate _next;
public CorrelationIdMiddleware(RequestDelegate next)
{
this._next = next;
}
public async Task Invoke(HttpContext httpContext)
{
httpContext.Response.OnStarting((Func<Task>)(() =>
{
httpContext.Response.Headers.Add("X-Correlation-Id", Guid.NewGuid().ToString());
return Task.CompletedTask;
}));
try
{
await this._next(httpContext);
}
catch (Exception)
{
//add additional exception handling logic here
//...
httpContext.Response.StatusCode = 500;
}
}
}
And register it in your Startup:
app.UseMiddleware(typeof(CorrelationIdMiddleware));
| {
"pile_set_name": "StackExchange"
} |
Q:
How to add @ symbol before column value and separate with ' , '?
I want to add an @ symbol before each column name and separate them with ','.
I tried with SUBSTRING() as follows:
declare @tmp varchar(250) SET @tmp = ''
select @tmp = @tmp + COLUMN_NAME + ', @' FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME ='tbl_emp'
select SUBSTRING(@tmp, 0, LEN(@tmp)) as new column
The column names are:
+--------------+
| COLUMN_NAME |
+--------------+
| empName |
| workinhDate |
| Workinghour |
+--------------+
The output should be like:
+-------------------------------------+
| ColumnNames |
+-------------------------------------+
| @empName,@workinhDate,@Workinghour |
+-------------------------------------+
A:
I think you meant:
DECLARE @tmp VARCHAR(250)
SET @tmp = ''
SELECT @tmp = @tmp + '@' + COLUMN_NAME + ', '
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'tbl_emp'
SELECT LEFT(@tmp, LEN(@tmp) - 1)
Also, SUBSTRING in SQL Server is 1-based, not 0-based.
| {
"pile_set_name": "StackExchange"
} |
Q:
Undecidable unary languages (also known as Tally languages)
An exercise that was in a past session is the following:
Prove that there exists an undecidable subset of $\{1\}^*$
This exercise looks very strange to me, because I think that all subsets are decidable.
Is there a topic that I should read to find a possible answer?
A:
Note that $\{1\}^*$ is isomorphic to $\Bbb N$. There are uncountably many subsets of both $\{1\}^*$ and $\Bbb N$.
Perhaps you are confused with the fact that there are only countably many finite subsets of $\{1\}^*$ (and $\Bbb N$).
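The answer's cardinality hint can be completed with a standard counting sketch (nothing here is specific to $\{1\}^*$ beyond its being infinite):

```latex
% Counting sketch:
% (1) Every decidable language is decided by some Turing machine, and
%     there are only countably many Turing machines (each has a finite
%     description over a finite alphabet).
% (2) So the decidable subsets of {1}^* form a countable family.
% (3) By Cantor's theorem the power set of {1}^* is uncountable:
\[
\bigl|\{\, L \subseteq \{1\}^* : L \text{ decidable} \,\}\bigr|
\;\le\; \aleph_0
\;<\; 2^{\aleph_0}
\;=\; \bigl|\mathcal{P}(\{1\}^*)\bigr|,
\]
% hence some subset of {1}^* must be undecidable.
```

Note this is pure counting: it proves existence without exhibiting a concrete undecidable unary language (a unary encoding of the halting problem would do that).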
| {
"pile_set_name": "StackExchange"
} |
Q:
Enzyme simulate('change') on select does not increase func coverage
I've got a component with a select inside it
<select onChange={this.handleChange}>
{this.props.options.map(this.renderOption)}
</select>
and the following function
handleChange(e) {
const element = e.target;
this.props.onChange.call(null, e, {
value: element.value,
label: element.options[element.selectedIndex].textContent,
});
}
I'm wrote a test in Jest and Enzyme, like this:
const onChange = jest.fn();
// ....
component.find('select').simulate('change');
expect(onChange).toHaveBeenCalled();
The problem is that my % Funcs coverage is at 83.33% for this test because it thinks that the handleChange function is not being called. All the other coverages (% Stmts, % Branch, % Lines) are at 100%, except for this one. Is this an enzyme/jest bug, or am I doing something wrong?
PS: I tried writing a dummy test where I would call handleChange manually and the coverage goes to 100%. So, it's definitely something to do with that.
Edit, to clarify: the handleChange function is being called and the test passes. The problem is that the coverage doesn't count the function as having been called.
A:
I found out what the case was for me.
SelectComponent.defaultProps = {
options: [],
onChange: () => {},
};
It was because of the default prop function. When I unit tested, I replaced it with a mock function, so the default function was never called, which is why coverage didn't count it.
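The effect is easy to reproduce without React or Enzyme at all; the sketch below (hypothetical names, plain JavaScript) mimics how a defaultProps callback is shadowed by a test's mock, so the default body never executes and coverage never records it:

```javascript
// Minimal sketch: a defaults object standing in for SelectComponent.defaultProps.
const defaults = { onChange: () => {} }; // this body never runs in the test below

function handleChange(props, event) {
  // Merge defaults the way React merges defaultProps with supplied props.
  const { onChange } = { ...defaults, ...props };
  onChange(event);
}

// The unit test supplies its own mock, so `defaults.onChange` is shadowed
// and a coverage tool rightly reports it as uncovered.
let called = 0;
handleChange({ onChange: () => { called += 1; } }, { target: { value: 'x' } });
console.log(called); // 1 — the mock ran; the default did not
```

To cover the default as well, add one extra test that renders (or here, calls) without supplying `onChange` at all.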
| {
"pile_set_name": "StackExchange"
} |
Q:
Do people respond more to $ off promos or % off promos?
In the past we've typically used promotional codes on our site, either in banners or other advertising images, and I was wondering if anyone has information on how banners perform with dollar amounts off versus percentage amounts off.
For example...
Save $10 off orders of $100 or more
As opposed to
Save 10% off your next order*
A disclaimer at the bottom of the page would then state:
*10% discount applies to first $100 in your order
I see a few companies in our competition using the 10% with a disclaimer at the bottom and my personal feeling is that it's misleading, but I do believe that with the use of 10% more people are going to click on it and order.
A:
There's a lot of marketing chatter on this topic, but there is one main thing to consider: which sounds like the more attractive deal in your *specific case*?
There's an interesting Donut Hole graph from Dealicacy showing how both can offer better value to the consumer:
The relative (%) benefit remains constant, while the absolute benefit ($ off) has a complex curve. Logically with the relative benefit it's easy to tell how much of a benefit you're getting; it's always the same, it just scales to what you buy. But with the absolute benefit you always know what you're getting. Never underestimate the power of less thinking.
A marketing study conducted by Evo A/B tested relative vs absolute benefits and found the following results:
$50-Off Coupon generated 170% more revenue than the 15%-Off Coupon.
$50-Off Coupon had 72% higher conversion rate.
I think the issue here is twofold; as mentioned, it's easy to understand how much I'm saving if you say I save $50: I save $50. Additionally, 15% really doesn't sound like much.
Consider the following bargains, both yielding equal savings on a certain product: 75% off or $5 off. Clearly the value of the relative benefit seems higher. Roughly speaking, a benefit of 40% or more starts to sound pretty significant. A dollar amount of $50 or more sounds significant.
If one value clearly appears to be a better value, it's probably best to offer that option as long as it's not misleading (like the terrible, lying asterisk in your question). Otherwise it would probably be best to A/B test your specific case if there's no clear winner.
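As a quick arithmetic anchor for the comparison above: a flat discount of $D and a percentage discount of p% yield identical savings only at the price D / (p/100); below that price the flat amount saves the customer more, above it the percentage does. A small illustrative sketch (numbers taken from this answer's examples):

```javascript
// Price at which "$D off" and "p% off" produce identical savings.
function breakEvenPrice(flatOff, pct) {
  return flatOff / (pct / 100);
}

// The Evo test above compared $50 off with 15% off: they are equal
// at roughly a $333 cart; cheaper carts save more with the flat $50,
// pricier ones with the 15%.
const be = breakEvenPrice(50, 15);
console.log(be.toFixed(2)); // "333.33"
```

For the $5-vs-75% example earlier in the answer, the breakeven sits near $6.67, which is why the percentage sounds so much larger there.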
A:
Behavioral economics looks at this type of question, and would be a good area for you to investigate. http://en.wikipedia.org/wiki/Hyperbolic_discounting talks about how people perceive discounts. My immediate thought for your specific issue is that people will minimise cognitive load - a % discount requires them to think a bit more than a $ discount, so the greater response is likely to be from a $ discount.
A:
I work in the financial industry and during tax season, this always comes up. Which do we offer:
20% off, or
$25 off tax prep?
Typically our software will calculate the most advantageous option and offer that price to the client. With verbal confirmation that they are receiving the best "deal", clients always seem satisfied.
Of course, tax preparation can get pricey, so the % off generally wins. But this assures the client of the most savings either way, without their having to calculate each discount on their own to find the better deal.
| {
"pile_set_name": "StackExchange"
} |
Q:
Extracting the Number of Pixels within Buffers around Various Points in Google Earth Engine?
I have a Feature Collection of various points, each of which I've drawn a 2.5km buffer around using the following:
var bufferBy = function(size) {
return function(feature) {
return feature.buffer(size);
};
};
After running a supervised SVM classification with only two possible output classes, either 0 or 1, I have determined the total number of pixels in both classes within the overlapping area of the buffers using the following:
var sigP = classified_SVM_Park.reduceRegion({
reducer: ee.Reducer.countEvery().group({
groupName: 'signature',
}),
geometry: pBuffer.geometry(),
scale: 30,
maxPixels: 1e8
});
How would I go about extracting the number of pixels assigned 0 and 1 within each buffer--in other words, the number of pixels within the buffer around each individual point?
Link to the full code is here: https://code.earthengine.google.com/54d0806b66fd31476a7a314f2da11df6
A:
You can actually do it quite simply with reduceRegions
Add this to your code. It will create a featureCollection with the result of the reduction added to the properties of each feature.
var sigPColl = classified_SVM_Park.reduceRegions({
collection: pBuffer,
reducer: ee.Reducer.countEvery().group({
groupName: 'signature',
}),
scale: 30
});
print(sigPColl,'Park Collection: Signature Count');
var sigRColl = classified_SVM_Ranch.reduceRegions({
collection: rBuffer,
reducer: ee.Reducer.countEvery().group({
groupName: 'signature',
}),
scale: 30
});
print(sigRColl, 'Ranch Collection: Signature Count');
| {
"pile_set_name": "StackExchange"
} |
Q:
Where in the Windows phone device is SQLite db file located/stored?
I'm working on a WP8 app involving SQLite. So when the code for creating and populating the database is executed, I want to be sure the db file is really created on the device and also verify its contents using an SQLite browser.
In creating the database, I specified the location ApplicationData.Current.LocalFolder but I've no idea where exactly in the device that points to.
Which location in the device can I find the SQLite db file?
A:
Go to Program Files (x86)\Microsoft SDKs\Windows Phone\v8.0\Tools\IsolatedStorageExplorerTool and run one of the commands below:
ISETool.exe ts xd 8a40681d-98fc-4069-bc13-91837a6343ca c:\data\myfiles command, if you are running the application on emulator.
ISETool.exe ts de 8a40681d-98fc-4069-bc13-91837a6343ca c:\data\myfiles command, if you are running the application on device.
The third argument is the application product ID, you can get it in the WMAppManifest.xml file, App tag, ProductId attribute. The product ID of the given example is 8a40681d-98fc-4069-bc13-91837a6343ca.
Now, if everything is okay, you should have a copy of the Isolated Storage content in the c:\data\myfiles; And the database file should be there too.
Then you can add your database file in your project with Add Existing Item option.
Remember to change the Copy to Output Directory property of the added .sqlite file to Copy if newer.
| {
"pile_set_name": "StackExchange"
} |
Q:
Easy Frontend/GUI for Python
I have seen a few solutions to this, but I want to know what the absolute best front-end solution is for my application. I'm making a basic web app that needs a front end capable of taking in input, storing it, and displaying it later. I need an easy-to-learn solution that works with Python 3. I have tried tkinter, but that was a little confusing and most of the documentation is for Python 2. I have also seen Flask and Django, but I'm not 100% sure what exactly they are. Any help would be much appreciated.
A:
There's no "absolute best front-end"; the choice depends entirely on what you're trying to achieve. First and foremost you should do more reading; tkinter is certainly not a web development framework. Flask and Django are. If you're not sure what they do, then again there's some reading ahead of you.
My recommendation would be to go with Flask, as it's relatively nice & easy. Plus, deployment of Flask apps is made quite easy with popular cloud providers.
You might want to check out Dash, as it makes development of analytical web applications exceptionally easy. Is it the best? All depends on your application. Again, there's no "best" - there's just a good fit. I can tell you Django isn't (yet!) for you.
| {
"pile_set_name": "StackExchange"
} |