text (string, lengths 3–1.74M) | label (class label, 2 classes) | source (string, 3 classes)
---|---|---|
Cornish pasty now has protected status. . If you want to try an authentic recipe, [this one](http://www.greenchronicle.com/connies_cornish_kitchen/cornish_pasty.htm) is pretty good. | 0non-cybersec
| Reddit |
#Doctorbedancing is an anesthesiologist in Boston that dances on the streets in his spare time to raise money for charity.. | 0non-cybersec
| Reddit |
Concept of shared preference in ionic. <p>Does anyone know how the concept of shared preferences in Android is used in Ionic? I tried a lot but couldn't understand how it is used in Ionic.</p>
| 0non-cybersec
| Stackexchange |
Brocade FCX terminal length. <p>I have a Brocade FCX-4XG and I'm connected to the serial console. I would like to change the terminal length of this console to facilitate scripting, but unlike a cisco or juniper, it's not terribly obvious how to do this...</p>
<pre><code>FCX648 Switch#term
monitor
FCX648 Switch#
FCX648 Switch(config)#term
Unrecognized command
FCX648 Switch(config)#
FCX648 Switch(config)#console
timeout Idle timeout
FCX648 Switch(config)#
</code></pre>
<p>Any ideas how to do this? The manual doesn't seem to say either.</p>
| 0non-cybersec
| Stackexchange |
WTF does this mean?. | 0non-cybersec
| Reddit |
How to use spot instance with amazon elastic beanstalk?. <p>I have an infrastructure that uses Amazon Elastic Beanstalk to deploy my application.
I need to scale my app by adding some spot instances, which EB does not support.</p>
<p>So I created a second autoscaling group from a launch configuration with spot instances.
The autoscaling group uses the same load balancer created by Beanstalk.</p>
<p>To bring up instances with the latest version of my app, I copy the user data from the original launch configuration (created by Beanstalk) to the launch configuration with spot instances (created by me).</p>
<p>This works fine, but:</p>
<ol>
<li><p>how do I update the spot instances launched by the second autoscaling group when Beanstalk updates the instances it manages with a new version of the app?</p>
</li>
<li><p>is there another way, as easy and elegant, to use spot instances and still enjoy the benefits of Beanstalk?</p>
</li>
</ol>
<p><strong>UPDATE</strong></p>
<p>Elastic Beanstalk has added support for spot instances since 2019... see:
<a href="https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2019-11-25-spot.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2019-11-25-spot.html</a></p>
| 0non-cybersec
| Stackexchange |
i pray to god daily, please god stop all the ruthless suffering and have some kind of goodness in your heart. | 0non-cybersec
| Reddit |
Can we extend Young's convolution inequality with $BMO$ instead of $L^\infty$. <p>Obviously $\|f*g\|_{L^\infty}\leq\|f\|_{L^1}\|g\|_{L^\infty}$. Do we have the stronger bound $\|f*g\|_{L^\infty}\leq C\|f\|_{L^1}\|g\|_{BMO}$? Or almost as good, $\|f*g\|_{L^\infty}\leq C\|f\|_{H^1}\|g\|_{BMO}$? I think this might follow from the fact that interpolation still works when you replace $L^\infty$ with $BMO$.</p>
<p>Edit: It seems to me that the first statement is false for example if you take $f=1_{[0,1]}\in L^1$ and $g(x)=\log|x|\in BMO$. Then for $x>1$,
\begin{align*}
f*g(x)=x\log x-(x-1)\log(x-1)-1
\end{align*}
is not $L^\infty$.</p>
<p>For the second claim, I'm tempted to use the duality inequality for $H^1$ and $BMO$ to say something like
$$|\int f(t)g(x-t)dt|\leq\|f\|_{H^1}\|g\|_{BMO}$$
but I know that this is only really supposed to hold for $f\in H_0^1$.</p>
| 0non-cybersec
| Stackexchange |
I am currently googling divorce lawyers after my wife did this.. | 0non-cybersec
| Reddit |
Digital Photocopiers Loaded With Secrets - CBS Evening News. | 0non-cybersec
| Reddit |
cannot read property swing of undefined. <p>I am using the CDN version of <a href="http://materializecss.com/getting-started.html" rel="noreferrer">materializecss</a></p>
<pre><code><html>
<head>
<!-- css -->
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.98.0/css/materialize.min.css">
</head>
<body>
<!-- page body -->
<!-- scripts -->
<script src="https://code.jquery.com/jquery-3.1.1.slim.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.98.0/js/materialize.min.js"></script>
<script src="scripts/scripts.js"></script>
</body>
</html>
</code></pre>
<p>and getting the following error in console</p>
<blockquote>
<p>Uncaught TypeError: Cannot read property 'swing' of undefined</p>
<p>at materialize.min.js:6</p>
<p>(anonymous) @ materialize.min.js:6</p>
</blockquote>
| 0non-cybersec
| Stackexchange |
Would using GTG on pullup negatives help? or is it too difficult to try since it's an eccentric exercise?. I was thinking of doing a set of 3 pullup negs every half hour or so. I was planning on stopping once I can do 3 pull ups with perfect form, or 2 weeks whichever comes first. Thoughts? | 0non-cybersec
| Reddit |
How to change boot device on vm installed on Citrix Xen Server to DVD. <p>I have installed CentOS 6 on my new Xen Server (XenServer release 6.2.0-70446c). Everything went smoothly. The system is working fine, but now I want to change the boot device on that VM so it boots from DVD (an ISO from NFS storage). However, in the properties there is only a Hard Disk option... I was looking for a way to add a new DVD device but didn't find where (on VMware it's very simple, and maybe it is here too, but I searched for quite a long time and it became frustrating)... It's strange to me because when I was creating the VM, the CentOS system had to start from DVD and everything went well, but now that I want to change the boot order I can't (I don't know how).
<img src="https://i.stack.imgur.com/rzNqE.jpg" alt="enter image description here"></p>
<p>What am I doing wrong... I don't believe that it is impossible on Xen.
How can I accomplish this?</p>
| 0non-cybersec
| Stackexchange |
Spark Dataframe distinguish columns with duplicated name. <p>As far as I know, in a Spark Dataframe multiple columns can have the same name, as shown in the dataframe snapshot below:</p>
<pre><code>[
Row(a=107831, f=SparseVector(5, {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0}), a=107831, f=SparseVector(5, {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0})),
Row(a=107831, f=SparseVector(5, {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0}), a=125231, f=SparseVector(5, {0: 0.0, 1: 0.0, 2: 0.0047, 3: 0.0, 4: 0.0043})),
Row(a=107831, f=SparseVector(5, {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0}), a=145831, f=SparseVector(5, {0: 0.0, 1: 0.2356, 2: 0.0036, 3: 0.0, 4: 0.4132})),
Row(a=107831, f=SparseVector(5, {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0}), a=147031, f=SparseVector(5, {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0})),
Row(a=107831, f=SparseVector(5, {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0}), a=149231, f=SparseVector(5, {0: 0.0, 1: 0.0032, 2: 0.2451, 3: 0.0, 4: 0.0042}))
]
</code></pre>
<p>The result above is created by joining a dataframe with itself; you can see that there are <code>4</code> columns: two <code>a</code> and two <code>f</code>.</p>
<p>The problem is that when I try to do further calculations with the <code>a</code> column, I can't find a way to select it. I have tried <code>df[0]</code> and <code>df.select('a')</code>; both returned the error message below:</p>
<pre><code>AnalysisException: Reference 'a' is ambiguous, could be: a#1333L, a#1335L.
</code></pre>
<p><strong>Is there any way in the Spark API to distinguish the columns with duplicated names? Or maybe some way to let me change the column names?</strong></p>
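<p>One common way to disambiguate such a self-join in PySpark is to alias each side before joining and then address the duplicated columns through the aliases. The following is only an illustrative sketch (the toy data and alias names are assumptions, not taken from the snapshot above):</p>
<pre><code>from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(107831, 0.0), (125231, 0.0047)], ["a", "f"])

# Alias both sides of the self-join so each copy of 'a' and 'f' stays addressable.
left = df.alias("left")
right = df.alias("right")
joined = left.join(right, col("left.a") == col("right.a"), "inner")

# Select and rename the duplicated columns explicitly.
result = joined.select(
    col("left.a").alias("a_left"),
    col("right.a").alias("a_right"),
    col("left.f").alias("f_left"),
    col("right.f").alias("f_right"),
)
result.show()
</code></pre>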
| 0non-cybersec
| Stackexchange |
[gif] what happens when popcorn 'pops'. From /r/gifs, but i thought that it would be better off here. . | 0non-cybersec
| Reddit |
Legs by Kevin Marr at Resolution SF San Francisco. | 0non-cybersec
| Reddit |
Speeding motorcyclist knocks a truck's mirror and gets smashed.. | 0non-cybersec
| Reddit |
Number of non-negative integer solutions for linear equations with constants. <p>How do we find the number of non-negative integer solutions for linear equation of the form: </p>
<p>$$a \cdot x + b \cdot y = c$$</p>
<p>Where $a, b, c$ are constants and $x,y$ are the variables ?</p>
| 0non-cybersec
| Stackexchange |
Bernoulli's representation of Euler's number, i.e $e=\lim \limits_{x\to \infty} \left(1+\frac{1}{x}\right)^x $. <blockquote>
<p><strong>Possible Duplicates:</strong><br>
<a href="https://math.stackexchange.com/questions/28476/finding-the-limit-of-n-sqrtnn">Finding the limit of $n/\sqrt[n]{n!}$</a><br>
<a href="https://math.stackexchange.com/questions/39170/how-come-such-different-methods-result-in-the-same-number-e">How come such different methods result in the same number, $e$?</a> </p>
</blockquote>
<p>I've seen this formula several thousand times: $$e=\lim_{x\to \infty} \left(1+\frac{1}{x}\right)^x $$</p>
<p>I know that it was discovered by Bernoulli when he was working with compound interest problems, but I haven't seen the proof anywhere. Does anyone know how to rigorously demonstrate this relationship?</p>
<p>EDIT:
Sorry for my lack of knowledge in this, I'll try to state the question more clearly. How do we prove the following?</p>
<p>$$ \lim_{x\to \infty} \left(1+\frac{1}{x}\right)^x = \sum_{k=0}^{\infty}\frac{1}{k!}$$</p>
| 0non-cybersec
| Stackexchange |
How to import a scss file inside a scss class. <p>I want to add a different theme when i add "dark-theme" class to body. My implementation looks like this:</p>
<pre><code>@import '../../../../node_modules/angular-grids/styles/material.scss';
.app-dark {
@import '../../../../node_modules/angular-grids/styles/material-dark.scss';
}
</code></pre>
<p>Without any luck. Any clue on how to do this?</p>
| 0non-cybersec
| Stackexchange |
Go go power... Oh.. | 0non-cybersec
| Reddit |
My good friend's tattoo, done by Jessica McDermot of Santa Cruz.. | 0non-cybersec
| Reddit |
Soda can stove (X-Post r/gifs). | 0non-cybersec
| Reddit |
how to calculate the log of the likelihood ratio given a semi log plot of data and two predictions. <p>I was looking at the Luria–Delbrück experiment and its semi-log plot of experimental data along with Poisson and Luria-Delbrück models (referred to as P1 and P2 respectively from now on). </p>
<p>With P1(m) and P2(m) and n(m), the total number of trials that resulted in m mutants given in the semi-log plot, I was asked to calculate the log of the likelihood ratio, which means the following:</p>
<p>log( p(data|Lamarckian)/p(data|Darwinian) ) = log( p(data|Lamarckian) ) - log(p(data|Darwinian) )</p>
<p>Then, the book claims that $$\log(p(data|theory)) = \textrm{A}\Sigma_m n(m) \log(P_{theory}(m)) $$ for some constant A for normalization</p>
<p>So to calculate log( p(data|Lamarckian)/p(data|Darwinian) ), the book suggests that I just take the difference of P1 and P2 in the semi-log plot and multiply the difference by n(m), then sum over all m</p>
<p>which confuses me because if the above is true, then it seems to imply the following</p>
<p>$$\log(p(data|theory)) = \textrm{A}\Sigma_m n(m) \log(P_{theory}(m))= \textrm{A}\Sigma_m\log(P_{theory}(m)^{n(m)}) $$</p>
| 0non-cybersec
| Stackexchange |
Do I(my gateway) have a public ip?. <p>when I search for "my public ip", google returns <code>103.12.15.1</code> (changed). My router's WAN side IP is <code>10.5.184.23</code> and its gateway is <code>10.5.184.12</code>, not a public ip. When I traceroute to <code>8.8.8.8</code> from my router, this is what I get: </p>
<pre><code>traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 38 byte packets
1 10.5.184.12 (10.5.184.12) 4.327 ms 6.880 ms 2.860 ms
2 103.12.15.1 (103.12.15.1) 6.807 ms 4.739 ms 3.904 ms
3 103.12.15.201 (103.12.15.201) 6.369 ms 6.849 ms 15.123 ms
4 150.107.206.250 (150.107.206.250) 10.626 ms 15.744 ms 13.094 ms
.
.
</code></pre>
<p><code>10.5.184.12</code> is my ISP's AP to which I connect via PPPoE.<br>
So, what can I make of these observations?</p>
| 0non-cybersec
| Stackexchange |
Your body is a temple. | 0non-cybersec
| Reddit |
Undercover officer dressed as giant traffic cone helps nab motorists near school zone (x-post r/JusticePorn??). | 0non-cybersec
| Reddit |
DIY Social media buttons Vs. Ready built (e.g. AddThis). <p>Is there any benefit from rolling-your-own set of social media 'like' buttons rather than using a pre-packaged widget, such as that provided by AddThis? Are there any downsides from using out of the box services? </p>
| 0non-cybersec
| Stackexchange |
My very first post to reddit. I thought I'd share my house.. | 0non-cybersec
| Reddit |
WCD attacks still a significant issue. | 1cybersec
| Reddit |
How to install GD on Heroku. <p>I am running Laravel 5.3 and trying to do some image manipulation. I get this error: GD Library extension not available with this PHP installation.</p>
<p>I've tried putting gd in my composer.json</p>
<pre><code>"require": {
"php": ">=5.6.4",
"laravel/framework": "5.3.*",
"mews/purifier": "~2.0",
"vinkla/hashids": "^2.4",
"barryvdh/laravel-debugbar": "^2.2",
"fzaninotto/faker": "~1.4",
"intervention/image": "^2.3",
"gd": "*"
},
</code></pre>
<p>and it didn't work. I also tried:</p>
<pre><code>"ext-gd": "*"
</code></pre>
<p>and that didn't work either. I looked at this page <a href="https://devcenter.heroku.com/articles/php-support" rel="noreferrer">https://devcenter.heroku.com/articles/php-support</a> and it says: </p>
<p>The following built-in extensions have been built “shared” and can be enabled through composer.json (internal identifier names given in parentheses)</p>
<p>GD (gd)</p>
| 0non-cybersec
| Stackexchange |
How to talk to my bf (M 23) when he does not respond to me (F 22)?. Hello Reddit,
I have been a lurker here for a while, trying to figure out the lay of the land, and of course to try and figure out some answers to my overwhelming relationship problems. But here is my story—buckle up! It’s a doozy. Also, throwaway account because my SO knows my username/reads my comments and I’d love some input before I move forward with him.
A bit of background:
My boyfriend, we’ll call him Gary, and I (F 22) have been in a relationship for almost 3 and a half years. We met in college, and have been living together in a small studio apartment for two years. Things were fine before we moved in together, but all of our issues started to arise afterwards.
Gary is my absolute best friend. We are very close, and have many similar interests. However, he treats me like I am *only* his best friend. He does not kiss me, compliment me, hug me, f*ck me, nothing. He will do some of these things, but I have to ask him. It hurts me to ask him, my own boyfriend, to give me a hug! I hope that makes sense where I am coming from.
Naturally, partners communicate with one another on their feelings. And of course, this is something of concern on my end because I did not notice his lack of intimacy with me before we moved in together. It might have been the switch from living alone to living with someone in such a small space, but his ways of not ever touching me or emotionally connecting with me on a gf level became glaringly obvious. Anyhow, this is the issue: when I try to speak with him about how he is feeling or how I am feeling…he literally shuts down. He does not look at me. He does not respond. If he does respond, I wait in silence for upwards of 5 minutes to get a response, and it is usually “I don’t know”/some variant that does not mean anything. It is absolutely infuriating.
I have been going through these types of “talks” (and emotionally suffering in-between them) for years. I have tried every avenue. He has tried speaking with therapists, just for them to tell me he does not communicate with them either. I have tried to communicate with him through letters, texts, having a few drinks to loosen up, my parents, his parents, our friends, his brother…everything. And every time, he shuts down, does not say anything, and never changes his behaviors.
I do not know how much longer I can live so close with someone who does not value me above being a casual friend. I wish I could speak with him and have an actually meaningful conversation where we can openly discuss both his feelings and mine, and how to move forward to making our relationship what it was. I’d love anyone’s input, as I love him dearly and do not wish to end the relationship!
Tl;dr Have a boyfriend who treats me like a best friend. He shuts down whenever a serious topic of conversation comes up, and I’ve tried almost every avenue to try to mitigate this issue. I’d love some help so we can salvage our relationship. | 0non-cybersec
| Reddit |
Made my heart melt...💖. | 0non-cybersec
| Reddit |
Last Saturday at Crabtree Falls - Blue Ridge Parkway, NC [OC][1364x2048]. | 0non-cybersec
| Reddit |
Why TypeTag doesnt have method runtimeClass but Manifest and ClassTag do. <p>I have this code to generically transform a String into a Dto. With both Manifest and ClassTag I can use the method <strong>runtimeClass</strong> to get the runtime class, but TypeTag does not have this method</p>
<pre><code>class ObjectMapper[T] {
def readValue(x: String, t: Class[T]): T = ???
}
class Reader {
def read[W: Manifest](x: String): W = {
val mapper = new ObjectMapper[W]
mapper.readValue(x, implicitly[Manifest[W]].runtimeClass.asInstanceOf[Class[W]])
}
}
</code></pre>
<p>May I know why TypeTag does not have the method runtimeClass? </p>
<p>Many thanks in advance</p>
| 0non-cybersec
| Stackexchange |
Do I need to setup NAT or ACL configurations for my DMZ setup?. <p>I'm trying to setup a network configuration in my company like the one in the picture below. A web server publicly accessible in the DMZ but insulated from the internal LAN.</p>
<p>For the firewall I'm using the Cisco RVS4000 4-Port Gigabit Security Router with VPN router.</p>
<p><a href="https://i.stack.imgur.com/v2zXo.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/v2zXo.jpg" alt="enter image description here"></a></p>
<p>Before setting the DMZ configuration on the router, I was able to ping and ssh back and forth from the LAN host to the server, which I understand as "the server is in the LAN".</p>
<p>After configuring the router to set the server into the DMZ, I'm still able to ping and ssh in both directions on both hosts and I expected to not being able to ping from the server to the host on LAN, which leads to my question:</p>
<ol>
<li><p>Do I need to configure ACL or NAT rules to insulate the server to create connections to the LAN host? If yes, then what is the DMZ setting doing?</p></li>
<li><p>The DMZ was not supposed to setup the router to block access to the internal LAN?</p></li>
</ol>
<p>Any help would be appreciated. Thanks in advance.</p>
| 0non-cybersec
| Stackexchange |
IPv6 data transfer between two connected clients on same modem. <p>I have a modem which gives its clients public IPv6 addresses. If I scp a large file from client 1 connected to same router to client 2 connected to same router will the data transferred be chargeable, or will it be counted as LAN traffic?</p>
<p>Also can someone please explain to me, if my modem is giving me IP address using a /64 prefix (seems I cannot change this to /56 on modem), can I use another router connected to this modem as WAN, to distribute IPv6 public addresses.</p>
| 0non-cybersec
| Stackexchange |
Half cut crease using huda beauty mauve obsessions 💗. | 0non-cybersec
| Reddit |
Is the sum of a closed set and a subspace closed?. <p>We define the sum of two sets $A$ and $B$ to be $$A+B=\{x+y ~|~ x \in A, y \in B \}.$$ Now let's suppose $A,B$ are subsets of $\mathbb{R}^n$ and $A$ is closed and $B$ is a subspace. Does it follow that $A+B$ closed?</p>
| 0non-cybersec
| Stackexchange |
puppetlab 'file_line' type not working in one puppet apply run. <p>When i run puppet apply policy1.pp , it does not apply all the file_line resource type written in policy1.pp. So when i run again puppet apply policy1.pp it will apply remaining file_line resource written in policy1.pp.</p>
<p>why this behaviour? Can't puppet apply all the resources in one run. This does not happen if it is file resource.</p>
| 0non-cybersec
| Stackexchange |
James Gunn Says No Humans As Main Characters In Guardians Of The Galaxy Vol. 2. | 0non-cybersec
| Reddit |
Why is it easy to use ‘smart’ devices as weapons for cyber attacks?. | 1cybersec
| Reddit |
limitations of handsfree wireless headset. <p>So I recently bought the sony WH 1000XM3 headset, the audio is great as well as the noice canceling, however I have one big problem. I use the stereo profile when listening to music, however when I want to play games I need my microphone so I have to switch to the handsfree profile. The audio of the handsfree is really bad, is there a way so I can use the microphone and get better sound quality ?</p>
<p>I have done some research, and the reason why the sound quality is so bad is the HSP/HFP limitations.
<a href="https://superuser.com/questions/1101560/bluetooth-handsfree-better-quality">Bluetooth handsfree better quality</a>
Is there already a solution to make the sound quality better when using the microphone ?</p>
| 0non-cybersec
| Stackexchange |
Pasta salad. | 0non-cybersec
| Reddit |
Big Brother and the Holding Company - Ball and Chain [Psychedelic]. | 0non-cybersec
| Reddit |
Perfect recreation. | 0non-cybersec
| Reddit |
The edge precoloring extension problem for complete graphs. <p>Consider coloring the edges of a complete graph on even order. This can be seen as the completion of an order <span class="math-container">$n$</span> symmetric Latin square except the leading diagonal. My question pertains to whether we can always complete the edge coloring in <span class="math-container">$n-1$</span> colors given a certain set of colors? The number of colors I fix is exactly equal to <span class="math-container">$\frac{(k)(k+2)}{2}$</span>, where <span class="math-container">$k=\frac{n}{2}$</span> and form <span class="math-container">$4$</span> distinct consecutive last four subdiagonals (and, by symmetry, superdiagonals) in the partial Latin square.</p>
<p>For example, in the case of <span class="math-container">$K_8$</span>, I fix the following colors:
<span class="math-container">\begin{bmatrix}X&&&&1&3&7&4\\&X&&&&2&4&1\\&&X&&&&3&5\\&&&X&&&&6\\1&&&&X&&&\\3&2&&&&X&&\\7&4&3&&&&X&\\4&1&5&6&&&&X\end{bmatrix}</span></p>
<p>A completion to a proper edge coloring in this case would be:</p>
<p><span class="math-container">\begin{bmatrix}X&5&6&2&1&3&7&4\\5&X&7&3&6&2&4&1\\6&7&X&4&2&1&3&5\\2&3&4&X&7&5&1&6\\1&6&2&7&X&4&5&3\\3&2&1&5&4&X&6&7\\7&4&3&1&5&6&X&2\\4&1&5&6&3&7&2&X\end{bmatrix}</span></p>
<p>Can the above be always done if the colors I fix follow the same pattern for all even order complete graphs? Note that the pattern followed in the precoloring consists of two portions-</p>
<p>i) the last <span class="math-container">$k-1$</span> subdiagonals are actually taken from a canonical <span class="math-container">$n$</span>-edge coloring of the complete graph on <span class="math-container">$n-1$</span> vertices, where <span class="math-container">$n$</span> is even. By canonical, I mean the commutative idempotent 'anti-circulant' latin square. Like in the example above, the canonical coloring of the complete graph on <span class="math-container">$7$</span> vertices is
<span class="math-container">\begin{bmatrix}1&5&2&6&3&7&4\\5&2&6&3&7&4&1\\2&6&3&7&4&1&5\\6&3&7&4&1&5&2\\3&7&4&1&5&2&6\\7&4&1&5&2&6&3\\4&1&5&2&6&3&7\end{bmatrix}</span>
ii)The <span class="math-container">$k$</span>-th subdiagonal just consists of entries in the pattern <span class="math-container">$1-2-3-$</span> so on and takes into account the previous entries to create an appropriate entry. Like in the example above the last diagonal I took was <span class="math-container">$1-2-3-6$</span>. It could also have been <span class="math-container">$1-2-3-7$</span>.</p>
<p>And, if the completion exists, would the completion be unique? Any hints? Thanks beforehand.</p>
| 0non-cybersec
| Stackexchange |
EARN 10 ETH EVERY MONTH, THE NEWLY LAUNCHED ETHEREUM SMART CONTRACT BETTER THAN FORSAGE AND MILLION MONEY. | 1cybersec
| Reddit |
Characterize all real-valued $2\times 2$ matrices with eigenvalues $\pm c$, for $c > 0$.. <blockquote>
<p>Characterize all real-valued <span class="math-container">$2\times 2$</span> matrices that have as eigenvalues <span class="math-container">$\lambda_1 = c$</span> and <span class="math-container">$\lambda_2 = −c$</span>, for <span class="math-container">$c > 0$</span>. Use your result to generate a matrix that has its eigenvalues <span class="math-container">$-1$</span> and <span class="math-container">$1$</span> and does not contain any zero elements.</p>
</blockquote>
<p>Where do I even start with this? I know how to compute eigenvalues/vectors and everything, but am I finding the matrix that these eigenvalues came from like matrix <span class="math-container">$A$</span> from <span class="math-container">$(A-\lambda I)x=0$</span>? Or am I finding <span class="math-container">$\lambda_i$</span>?</p>
| 0non-cybersec
| Stackexchange |
what is cinnamon --replace process?. <p>I use Cinnamon 3.6.7.
<code>cinnamon --replace</code> became <code>cinnamon --replace --replace</code> when I restarted it:
<img src="https://i.stack.imgur.com/tMFuC.png" alt="output of <code>htop</code>"></p>
| 0non-cybersec
| Stackexchange |
Over 570 Groups Endorse Sanders and Ocasio-Cortez's Fracking Ban Act as 'Essential and Urgent Climate Action'. | 0non-cybersec
| Reddit |
Get Schwifty!. | 0non-cybersec
| Reddit |
Is there a way to find all Cyrillic typewriter fonts on CTAN?. <p>DejaVu and Droid both provide good T2A typewriter fonts. I wanted to check if there are alternatives, but couldn't find a good way to do so using either <a href="http://www.tug.dk/FontCatalogue" rel="noreferrer">http://www.tug.dk/FontCatalogue</a>, CTAN, or MikTeX Console.</p>
<p>Example document:</p>
<pre><code>\documentclass[12pt]{article}
\usepackage[utf8]{inputenc}
\usepackage[T1,T2A]{fontenc}
\usepackage{listings}
%\usepackage[ttdefault=true]{AnonymousPro}
\usepackage{sourcecodepro}
\lstset{
language=Haskell,
inputencoding=utf8,
extendedchars=true,
breaklines=true,
escapeinside=!!,
tabsize=4,
breakatwhitespace=true,
keepspaces=true
}
\lstset{
literate={а}{{\selectfont\char224}}1
{б}{{\selectfont\char225}}1
{в}{{\selectfont\char226}}1
{г}{{\selectfont\char227}}1
{д}{{\selectfont\char228}}1
{е}{{\selectfont\char229}}1
{ё}{{\"e}}1
{ж}{{\selectfont\char230}}1
{з}{{\selectfont\char231}}1
{и}{{\selectfont\char232}}1
{й}{{\selectfont\char233}}1
{к}{{\selectfont\char234}}1
{л}{{\selectfont\char235}}1
{м}{{\selectfont\char236}}1
{н}{{\selectfont\char237}}1
{о}{{\selectfont\char238}}1
{п}{{\selectfont\char239}}1
{р}{{\selectfont\char240}}1
{с}{{\selectfont\char241}}1
{т}{{\selectfont\char242}}1
{у}{{\selectfont\char243}}1
{ф}{{\selectfont\char244}}1
{х}{{\selectfont\char245}}1
{ц}{{\selectfont\char246}}1
{ч}{{\selectfont\char247}}1
{ш}{{\selectfont\char248}}1
{щ}{{\selectfont\char249}}1
{ъ}{{\selectfont\char250}}1
{ы}{{\selectfont\char251}}1
{ь}{{\selectfont\char252}}1
{э}{{\selectfont\char253}}1
{ю}{{\selectfont\char254}}1
{я}{{\selectfont\char255}}1
{А}{{\selectfont\char192}}1
{Б}{{\selectfont\char193}}1
{В}{{\selectfont\char194}}1
{Г}{{\selectfont\char195}}1
{Д}{{\selectfont\char196}}1
{Е}{{\selectfont\char197}}1
{Ё}{{\"E}}1
{Ж}{{\selectfont\char198}}1
{З}{{\selectfont\char199}}1
{И}{{\selectfont\char200}}1
{Й}{{\selectfont\char201}}1
{К}{{\selectfont\char202}}1
{Л}{{\selectfont\char203}}1
{М}{{\selectfont\char204}}1
{Н}{{\selectfont\char205}}1
{О}{{\selectfont\char206}}1
{П}{{\selectfont\char207}}1
{Р}{{\selectfont\char208}}1
{С}{{\selectfont\char209}}1
{Т}{{\selectfont\char210}}1
{У}{{\selectfont\char211}}1
{Ф}{{\selectfont\char212}}1
{Х}{{\selectfont\char213}}1
{Ц}{{\selectfont\char214}}1
{Ч}{{\selectfont\char215}}1
{Ш}{{\selectfont\char216}}1
{Щ}{{\selectfont\char217}}1
{Ъ}{{\selectfont\char218}}1
{Ы}{{\selectfont\char219}}1
{Ь}{{\selectfont\char220}}1
{Э}{{\selectfont\char221}}1
{Ю}{{\selectfont\char222}}1
{Я}{{\selectfont\char223}}1
}
\lstset{
basicstyle=\ttfamily\footnotesize,
commentstyle=\color{green}\itshape,
keywordstyle= % TODO https://tex.stackexchange.com/questions/415777/avoid-highlighting-keywords-following-certain-words-in-listings
}
\begin{document}
\lstinline|АБВ|
\end{document}
</code></pre>
| 0non-cybersec
| Stackexchange |
Lucky McKee's "The Woman": The Most Disturbing Film You'll See This Year. | 0non-cybersec
| Reddit |
WAYWT - Dec. 27th. WAYWT = What Are You Wearing Today (or a different day, whatever). Think of this as your chance to share your personal taste in fashion with the community. Most users enjoy knowing where you bought your pieces, so please consider including those in your post. Want to know how to take better WAYWT pictures? Read the guide [here](http://www.reddit.com/r/malefashionadvice/comments/16rwft/how_to_take_better_self_pics_for_mfa/).
If you're looking for feedback on an outfit instead of just looking to share, consider using Outfit Feedback & Fit Check thread instead.
**Important: Downvotes are strongly discouraged in this thread. Sorting by new is strongly encouraged.**
| 0non-cybersec
| Reddit |
How does clear command work?. <p>I was recently trying to learn more about how the shell works and was looking at how the <code>clear</code> command works. The executable is located in <code>/usr/bin/clear</code> and it seems to print out a bunch of blank lines (equal to the height of the terminal) and puts the cursor at the top-left of the terminal.</p>
<p>The output of the command is always the same, regardless of the size of the terminal:</p>
<pre><code>$ clear | hexdump -C
00000000 1b 5b 48 1b 5b 32 4a |.[H.[2J|
00000007
</code></pre>
<p>and can be replicated with the echo having the exact same effect:</p>
<pre><code>$ /bin/echo -e "\x1b\x5b\x48\x1b\x5b\x32\x4a\c"
</code></pre>
<p>I was really curious how this output of this command translates to clearing the console.</p>
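<p>For reference, those seven bytes are two ANSI escape sequences: <code>ESC [ H</code> (cursor home, i.e. move to row 1, column 1) and <code>ESC [ 2 J</code> (erase the entire display). A small illustrative check in Python (an example added here, not part of the original question):</p>
<pre><code>seq = b"\x1b[H\x1b[2J"                      # the bytes `clear` printed above
print(" ".join(f"{b:02x}" for b in seq))    # -> 1b 5b 48 1b 5b 32 4a
# 1b 5b 48    = ESC [ H  : CUP, move the cursor to the home position
# 1b 5b 32 4a = ESC [ 2J : ED 2, erase the whole screen
</code></pre>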
| 0non-cybersec
| Stackexchange |
Fast Algorithm for Blind Independence-Based
Extraction of a Moving Speaker
Jakub Janský, Zbyněk Koldovský, Jiřı́ Málek, Tomáš Kounovský, and Jaroslav Čmejla
Acoustic Signal Analysis and Processing Group, Faculty of Mechatronics, Informatics, and Interdisciplinary
Studies, Technical University of Liberec, Studentská 2, 461 17 Liberec, Czech Republic.
E-mail: [email protected], fax:+420-485-353112, tel:+420-485-353534
Abstract—Independent Vector Extraction (IVE) is a modifi-
cation of Independent Vector Analysis (IVA) for Blind Source
Extraction (BSE) to a setup in which only one source of interest
(SOI) should be separated from a mixture of signals observed
by microphones. The fundamental assumption is that the SOI is
independent of the other signals. IVE shows reasonable results;
however, its basic variant is limited to static sources. To extract
a moving source, IVE has recently been extended by considering
the Constant Separating Vector (CSV) mixing model. It enables
us to estimate a separating filter that extracts the SOI from
a wider spatial area through which the source has moved.
However, only slow gradient-based algorithms were proposed
in the pioneering papers on IVE and CSV. In this paper,
we experimentally verify the applicability of the CSV mixing
model and propose new IVE methods derived by modifying
the auxiliary function-based algorithm for IVA. Piloted Variants
are proposed as well for the methods with partially controllable
global convergence. The methods are verified under reverberant
and noisy conditions using model-based as well as real-world
acoustic impulse responses. They are also verified within the
CHiME-4 speech separation and recognition challenge. The
experiments corroborate the applicability of the CSV mixing
model for the blind moving source extraction as well as the
improved convergence of the proposed algorithms.
I. INTRODUCTION
A. Standard Independence-based BSS
The goal of Blind Source Separation (BSS) is to separate
individual signals from their mixture that is observed through
several sensors [1]. The standard linear instantaneous mixing
model considered in BSS is given by
x = As, (1)
where x is an r × 1 vector representing r observed (mixed)
signals, s is a d×1 vector of original source signals, and A is an
r×d mixing matrix. Let the number of available samples of the
observed data be N . In this paper, we will consider complex-
valued signals and parameters, which is a setup necessary for
applications in audio signal processing in the time-frequency
domain.
When r = d or r < d, the model is referred to as determined
and underdetermined, respectively. The advantage of a deter-
mined problem compared to an underdetermined one is that
This work was supported by The Czech Science Foundation through
Project No. 17-00902S and by the United States Department of the Navy,
Office of Naval Research Global, through Project No. N62909-19-1-2105.
the inverse matrix of A exists provided that A is nonsingular.
The BSS problem can then be solved through finding a d× d
square de-mixing matrix W such that y = Wx correspond
to the original signals s up to their order and scaling factors,
which cannot be determined without additional information.
The rows of the de-mixing matrix and the columns of the
mixing matrix will be referred to as separating and mixing
vectors, respectively.
Independent Component Analysis (ICA) [2], [3] has been
a popular BSS method based on the assumption that the
original signals s are statistically independent. Later, the idea
was extended in Independent Vector Analysis (IVA) to the
joint BSS problem (jBSS) where K > 1 standard linear
instantaneous mixtures (k corresponds to the kth frequency
bin in the frequency-domain BSS [4])
xk = Aksk, k = 1, . . . ,K, (2)
are separated jointly. Here, the source signals in sk are
assumed to be statistically independent for every k, as in ICA.
In addition, the elements of the ith vector component, defined
as si = [s1i , . . . , s
K
i ]
T , i = 1, . . . , d, are allowed to be mutually
dependent. This dependence is used for separating the original
sources so that their order is the same in all mixtures, which
helps us solve the permutation problem (a different order of
separated components for each k) [5]. Independent Low Rank
Matrix Analysis (ILRMA) is a recent extension of IVA where
samples of vector components are assumed to obey a low-
rank model. For example, ILRMA combines the IVA and
Nonnegative Matrix Factorization (NMF) in [6], [7].
Independence-based BSS methods can be classified accord-
ing to the statistical model of signals. Basically, ICA, IVA and
ILRMA assume that the original signals have independently
distributed samples drawn from non-Gaussian distributions.
Here, the independence of separated signals is measured
through contrast functions that involve higher-order statistics
[8], [9]. In IVA, it is additionally assumed that signals from
different mixtures (the elements of vector components) are
uncorrelated but dependent and that their dependence can be
presented through higher-order statistics [9]. Another class of
BSS methods, which we do not consider here, is based on
Gaussian statistical models of signals that exploit only second-
order statistics of signals; see, e.g., [10]–[16].
arXiv:2002.12619v1 [eess.AS] 28 Feb 2020
B. Mixing models for dynamic conditions
The standard mixing models (1) and (2) are not suitable for
describing dynamic situations; for example, when a source
is moving and the mixing matrix is varying in time. There
have been few time-varying mixing models considered in the
previous BSS literature; see, e.g., [17], [18] for BSS models
with a linearly changing mixing matrix. Recently, Piecewise
Determined Mixing models (PDM) assume that the mixture
is determined and locally obeys the standard mixing model
within specified time intervals [19]. The mixing matrix can
be changing from interval to interval, which approximates the
dynamic mixing. In PDM, the tth sample or interval of the
mixture is described by1
xt = Atst, t ∈ T , (3)
where T is the set of possible indices, and At is square
(r = d). For a set of dynamic mixtures, we introduce the
joint Piecewise Determined Mixing model (jPDM) described
by
xk,t = Ak,tsk,t, k = 1, . . . ,K, t ∈ T . (4)
Here the mixing matrices Ak,t are also square.
The dimensions in the joint mixing models (2) and (4) can
be dependent on k. Nevertheless, for the practical purposes of
this paper, we will consider only the same dimension d for all
mixtures. When T contains only one possible value of t, the
PDM models coincide with the standard ones, (1) and (2).
The general PDM models correspond to a sequential appli-
cation of the standard mixing model to short intervals (or even
samples) of data, which is a straightforward approach used to
cope with dynamic mixing conditions, e.g., in either online
or batch-online implementations of BSS algorithms [20]. In
this paper, we will consider a special case of the jPDM model
that involves a reduced number of parameters. The model is,
however, formulated for the Blind Source Extraction (BSE)
problem.
C. Blind Source Extraction
BSE aims at the blind extraction of one particular source
of interest (SOI) and could be seen as a subtask of BSS.
Indeed, some ICA and IVA algorithms, such as FastICA,
actually perform sequential or parallel BSE; see, e.g., [21]–
[23]. BSE within the framework of ICA and IVA has recently
been revised in [24]. Here, the problem to extract the SOI
based on its independence from the remaining signals, called
background, is referred to as Independent Component/Vector
Extraction (ICE/IVE).
In ICE/IVE, the mixing matrix is assumed to have a special
parameterization involving only the mixing and separating
vectors corresponding to the SOI. It was shown that this
structure is sufficient for the BSE task under the standard
mixing models without bringing any limitation in terms of
the achievable accuracy given by the Cramér-Rao bound [25],
1The formal descriptions of the mixing models (3) and (2) coincide.
Therefore, we will accept a convention that t denotes the index of a time
instant or interval, while k stands for the index of the mixture.
[26]. Moreover, close relationships between ordinary gradient-
based algorithms derived on the basis of a structured mixing
matrix and One-Unit FastICA2 were shown.
D. Contribution
The structured mixing matrix parameterization can straight-
forwardly be applied within the (j)PDM models. However,
the number of parameters can further be reduced, e.g., by
assuming that some parameters are constant over the inter-
vals of data. This way, Constant Mixing/Separating Vector
(CMV/CSV) models have been considered in [19].³ The
methods designed with CSV and CMV have been shown to
be capable of extracting moving sources or static sources
from a dynamic background, respectively. Usefulness of the
algorithms in [19] has been shown in audio applications;
however, since the gradient-based optimization is used, they
suffer from slow convergence and are prone to getting stuck
in local extremes of the contrast function.
In this paper, we therefore focus on the development of
fast algorithms for ICE/IVE assuming that the CSV model
is suitable for the blind extraction of a moving speaker. The
contribution here is three-fold. First, a BSE variant of the
AuxIVA algorithm is derived for the standard (static) mixing
model (2) using the IVE framework; the resulting algorithm
is named AuxIVE. Second, AuxIVE is extended for the CSV
model, whose modification is referred to as Block AuxIVE.
The third contribution is a piloted version of Block AuxIVE
using the idea from [28].
It features a partially controlled convergence through relying
on a pilot signal that carries information about which source
should be extracted, that is, the SOI. Therefore, it is assumed
to be statistically dependent on the SOI.
This article is organized as follows. In the following section,
the problem of the blind extraction of a moving speaker is
formulated, and its solution through IVE is described. In Sec-
tion III, the AuxIVE algorithm and its variants Block AuxIVE
and piloted Block AuxIVE are derived based on the original
AuxIVA by Ono [29]. Section IV is devoted to experimental
evaluations based on simulated as well as real-world data. The
paper is concluded in Section V.
II. PROBLEM DESCRIPTION
A. Notation
Throughout this paper, we use the following notation:
plain letters denote scalars, bold letters denote vectors, and
bold capital letters denote matrices. Upper indices such as
·T , ·H , or ·∗ denote, respectively, transposition, conjugate
transpose, or complex conjugate. The Matlab convention for
matrix/vector concatenation and indexing will be used, e.g.,
$[1; \mathbf{g}] = [1, \mathbf{g}^T]^T$, $(\mathbf{A})_{j,:}$ is the $j$th row of $\mathbf{A}$, and $(\mathbf{a})_i$ is
the $i$th element of $\mathbf{a}$. $\mathrm{E}[\cdot]$ stands for the expectation operator,
and Ê[·] is the average taken over all available samples of the
argument.
2The variant of FastICA designed for the BSE assuming an unstructured
mixing matrix [21]
3To the best of our knowledge, these mixing models have not yet been
studied in the BSS literature; our preliminary studies in [19] and in [27] were
the first.
B. Frequency-domain BSS
Audio sources propagate with delays and reflections in a
typical room [4]. The mixtures observed on the microphones
are therefore described by the convolutive model
$$x_i(n) = \sum_{j=1}^{d}\sum_{\tau=0}^{L-1} h_{ij}(\tau)\, s_j(n-\tau), \quad i = 1, \dots, r, \qquad (5)$$
where xi(n) is the observed signal on the ith microphone
at time n, s1(n), . . . , sd(n) are the original signals, and hij
denotes the impulse response between the jth source and ith
microphone of length L. In the Short-Time Fourier Transform
(STFT) domain, the convolutive model can be approximated
by the instantaneous one. Specifically, for the kth frequency
and the ℓth frame, the STFT coefficients of the observed
signals are described by
$$\mathbf{x}_k(\ell) = \mathbf{A}_k\mathbf{s}_k(\ell), \quad k = 1, \dots, K, \qquad (6)$$
where $\mathbf{s}_k(\ell)$ denotes the coefficient vector of the original
signals. The ijth element of the mixing matrix Ak corresponds
to the kth Fourier coefficient of the impulse response hij .
Now, we can see that the joint mixing models (2) and (4)
can be applied to the frequency domain signals. The data
for the kth frequency corresponds to the kth mixture in the
joint model. Dynamic mixing can be handled by the PDM
model (4) under the assumption that the impulse responses
are approximately constant within the selected intervals (of
frames) and that the number of sources is the same as that
of the microphones in each interval. For simplicity, we will
consider only the standard model (2) in this section and will
get back to the CSV model later in the paper.
Let Wk be the de-mixing matrix for the kth frequency bin.
The separated sources are obtained through
$$\mathbf{u}_k(\ell) = \mathbf{W}_k\mathbf{x}_k(\ell) = \mathbf{W}_k\mathbf{A}_k\mathbf{s}_k(\ell). \qquad (7)$$
It holds that Wk separates the signals perfectly whenever
WkAk = PkΛk where Pk is a permutation matrix deter-
mining the order of separated signals at the kth frequency,
and Λk is diagonal with non-zero diagonal entries determining
their scales.
The fact that Pk and Λk can be arbitrary provided that
they have the above-specified properties follows from the
indeterminacies of BSS. The permutation problem appears
when Pk is different in each frequency bin [5], which hampers
the reconstruction of the separated signals in the time domain.
Once this problem is resolved (e.g., through IVA) and Pk = P
is independent of k, the ordering of the separated sources given
by P is called global permutation.
The scaling ambiguity enables us to set the scales of the
separated signals to arbitrary values. In algorithms, these scales
must be prevented from growing to infinity or being reduced
to zero, which is typically solved by fixing the scale, for
example, to unity. In the frequency-domain BSS, however,
the random/normalized scalings result in modified magnitude
spectra of the separated signals, which are unacceptably differ-
ent from the original signals. This problem is typically solved
by reconstructing the spectra of signals as they appear on
sensors (microphones), which can be done using the estimated
mixing matrix [30] or through least squares projections; these
two approaches are mutually equivalent under the orthogonal
constraint, as is shown in [31].
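To make (7) and the rescaling step concrete, the following is a minimal numpy sketch of per-frequency demixing followed by the usual projection-back rescaling onto a reference microphone; the array layout and the choice of the first microphone as reference are our illustrative assumptions, not taken from the paper.

```python
import numpy as np

def demix_and_project_back(X, W, ref_mic=0):
    """Apply the per-frequency demixing (7) and fix the scaling ambiguity
    by projecting each separated signal back onto a reference microphone.

    X : (K, d, L) complex STFT of the d observed channels (K bins, L frames)
    W : (K, d, d) demixing matrices, one per frequency bin
    """
    U = np.einsum("kij,kjl->kil", W, X)          # u_k(l) = W_k x_k(l)
    for k in range(W.shape[0]):
        A_hat = np.linalg.inv(W[k])              # estimated mixing matrix A_k = W_k^{-1}
        # rescale the i-th separated signal as it would appear on the reference mic
        U[k] *= A_hat[ref_mic, :][:, None]
    return U

# toy usage with random data, just to show the shapes involved
rng = np.random.default_rng(0)
K, d, L = 257, 3, 100
X = rng.standard_normal((K, d, L)) + 1j * rng.standard_normal((K, d, L))
W = np.stack([np.eye(d, dtype=complex)] * K)
U = demix_and_project_back(X, W)
print(U.shape)    # (257, 3, 100)
```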
C. Independent Vector Extraction
Without any loss of generality, let the SOI be the first vector
component in (2). Then, we can rewrite the mixing model for
the purposes of the BSE problem as
$$\mathbf{x}_k = \mathbf{A}_k\mathbf{s}_k = \mathbf{a}_k s_k + \mathbf{y}_k, \qquad (8)$$
where $\mathbf{a}_k$ is the mixing vector corresponding to the SOI (the
first source), which is equal to the first column of $\mathbf{A}_k$. Next,
$s_k$ denotes the SOI's $k$th component; that is, the first element
of $\mathbf{s}_k$, and $\mathbf{y}_k$ consists of the remaining background signals:
$\mathbf{y}_k = \mathbf{x}_k - \mathbf{a}_k s_k$. The vector component corresponding to the
SOI will be denoted by $\mathbf{s} = [s_1, \dots, s_K]^T$.
The IVE approach to extract the SOI is based on the
assumption that $s_{k_1}$ is independent of $\mathbf{y}_{k_2}$ for every $k_1, k_2 \in
\{1, \dots, K\}$. The elements of $\mathbf{s}$ are allowed to be dependent
but uncorrelated. Next, Ak is assumed to be square (the
determined mixture), which also means that yk belongs to a
d− 1 dimensional subspace. Under these assumptions, it was
shown in [24] that it is sufficient to parameterize the mixing
and de-mixing matrices, respectively, as
$$\mathbf{A}_k = \begin{pmatrix} \mathbf{a}_k & \mathbf{Q}_k \end{pmatrix} = \begin{pmatrix} \gamma_k & \mathbf{h}_k^H \\ \mathbf{g}_k & \frac{1}{\gamma_k}(\mathbf{g}_k\mathbf{h}_k^H - \mathbf{I}_{d-1}) \end{pmatrix}, \qquad (9)$$
and
$$\mathbf{W}_k = \begin{pmatrix} \mathbf{w}_k^H \\ \mathbf{B}_k \end{pmatrix} = \begin{pmatrix} \beta_k & \mathbf{h}_k^H \\ \mathbf{g}_k & -\gamma_k\mathbf{I}_{d-1} \end{pmatrix}, \qquad (10)$$
where $\mathbf{I}_d$ denotes the $d \times d$ identity matrix, $\mathbf{w}_k$ denotes the
separating vector such that $\mathbf{w}_k^H\mathbf{x}_k = s_k$, which is partitioned as
$\mathbf{w}_k = [\beta_k; \mathbf{h}_k]$, and where the mixing vector $\mathbf{a}_k$ is partitioned
as $\mathbf{a}_k = [\gamma_k; \mathbf{g}_k]$. The vectors $\mathbf{a}_k$ and $\mathbf{w}_k$ are linked through
the so-called distortionless constraint $\mathbf{w}_k^H\mathbf{a}_k = 1$. $\mathbf{B}_k$ is called
the blocking matrix as it satisfies $\mathbf{B}_k\mathbf{a}_k = \mathbf{0}$. The background
noise signals are defined as $\mathbf{z}_k = \mathbf{B}_k\mathbf{x}_k = \mathbf{B}_k\mathbf{y}_k$, and it holds
that $\mathbf{y}_k = \mathbf{Q}_k\mathbf{z}_k$.
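As a quick numerical sanity check of the parameterization (9)–(10) (an illustrative numpy sketch with arbitrary values, not code from the paper), one can verify the distortionless and blocking properties directly:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
gamma = 0.8 + 0.3j
g = rng.standard_normal(d - 1) + 1j * rng.standard_normal(d - 1)
h = rng.standard_normal(d - 1) + 1j * rng.standard_normal(d - 1)
# the distortionless constraint w^H a = 1 fixes beta once (gamma, g, h) are chosen
beta = np.conj((1 - h.conj() @ g) / gamma)

a = np.concatenate(([gamma], g))                        # mixing vector a_k = [gamma; g]
w = np.concatenate(([beta], h))                         # separating vector w_k = [beta; h]
B = np.hstack([g[:, None], -gamma * np.eye(d - 1)])     # blocking matrix B_k
Q = np.vstack([h.conj()[None, :],
               (np.outer(g, h.conj()) - np.eye(d - 1)) / gamma])
A = np.hstack([a[:, None], Q])                          # mixing matrix as in (9)
W = np.vstack([w.conj()[None, :], B])                   # de-mixing matrix as in (10)

print(np.isclose(w.conj() @ a, 1.0))     # distortionless constraint w^H a = 1
print(np.allclose(B @ a, 0))             # blocking property B a = 0
print(np.allclose(W @ A, np.eye(d)))     # with this beta, W is the inverse of A
```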
D. Statistical model
Let $p(\mathbf{s})$ denote the joint pdf of $\mathbf{s}$ and $p_{\mathbf{z}_k}(\mathbf{z}_k)$ denote the
pdf⁴ of $\mathbf{z}_k$. The joint pdf of the observed signals reads
$$p_{\mathbf{x}}(\{\mathbf{x}_k\}_{k=1}^K) = p(\{\mathbf{w}_k^H\mathbf{x}_k\}_{k=1}^K) \cdot \prod_{k=1}^{K} p_{\mathbf{z}_k}(\mathbf{B}_k\mathbf{x}_k)\,|\det \mathbf{W}_k|^2. \qquad (11)$$
Hence, the corresponding log-likelihood function for one sam-
ple (frame) of the observed signals is given by
$$\mathcal{L}(\{\mathbf{w}_k\}_{k=1}^K, \{\mathbf{a}_k\}_{k=1}^K \mid \{\mathbf{x}_k\}_{k=1}^K) = \log p(\{\mathbf{w}_k^H\mathbf{x}_k\}_{k=1}^K) + \sum_{k=1}^{K}\bigl[\log p_{\mathbf{z}_k}(\mathbf{B}_k\mathbf{x}_k) + \log|\det \mathbf{W}_k|^2\bigr] + \text{const.} \qquad (12)$$
4We might consider a joint pdf of z1, . . . , zK that could possibly involve
higher-order dependencies between the background components. However,
since pzk (·) is assumed Gaussian in this paper and since signals from different
mixtures (frequencies) are assumed to be uncorrelated as in the standard IVA,
we can directly consider z1, . . . , zK to be mutually independent.
In BSS and BSE, the true pdfs of the original sources are not
known, so suitable model densities have to be chosen. The rule
of thumb says that the mismatch between the true and model
densities mainly has an influence on the separation/extraction
accuracy [32]. Therefore, the aim is to select model densities
that reflect the true properties of the source signals as much as
possible. In BSE, it is typical to assume that the background
signals are Gaussian as these are not subject to extraction [24].
The concrete choice of the model pdf for SOI will be discussed
in Section III-E.
Let $f(\mathbf{s})$ be the model pdf, replacing $p(\mathbf{s})$. The background
pdf will be assumed to be circular Gaussian with zero mean and
(unknown) covariance matrix $\mathbf{C}_{\mathbf{z}_k} = \mathrm{E}[\mathbf{z}_k\mathbf{z}_k^H]$,
i.e., $\mathcal{CN}(\mathbf{0}, \mathbf{C}_{\mathbf{z}_k})$. Disregarding the constant terms and using
$|\det \mathbf{W}_k|^2 = |\gamma_k|^{2(d-2)}$, which follows from (10), the contrast
function, as derived from (12) assuming $N$ i.i.d. samples and
replacing the unknown $\mathbf{C}_{\mathbf{z}_k}$ with its sample-based estimate
$\hat{\mathbf{C}}_{\mathbf{z}_k} = \hat{\mathrm{E}}[\mathbf{z}_k\mathbf{z}_k^H]$, has the form
$$\mathcal{C}(\{\mathbf{w}_k\}_{k=1}^K, \{\mathbf{a}_k\}_{k=1}^K) = \hat{\mathrm{E}}[\log f(\{\mathbf{w}_k^H\mathbf{x}_k\}_{k=1}^K)] - \sum_{k=1}^{K} \hat{\mathrm{E}}[\mathbf{x}_k^H\mathbf{B}_k^H\hat{\mathbf{C}}_{\mathbf{z}_k}^{-1}\mathbf{B}_k\mathbf{x}_k] + (d-2)\sum_{k=1}^{K} \log|\gamma_k|^2. \qquad (13)$$
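As a reminder of where the middle term in (13) comes from (a standard step that the text does not spell out): the circular Gaussian model density gives
$$\log p_{\mathbf{z}_k}(\mathbf{z}_k) = -\mathbf{z}_k^H\mathbf{C}_{\mathbf{z}_k}^{-1}\mathbf{z}_k - \log\det(\pi\mathbf{C}_{\mathbf{z}_k}),$$
so, with $\mathbf{z}_k = \mathbf{B}_k\mathbf{x}_k$ and $\mathbf{C}_{\mathbf{z}_k}$ replaced by $\hat{\mathbf{C}}_{\mathbf{z}_k}$, the sample average of the quadratic form is exactly the middle term of (13); the $\log\det$ contribution is among the terms the derivation disregards.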
E. Orthogonally Constrained Gradient Algorithm: OGIVEw
In [24], gradient-based algorithms were proposed for esti-
mation of the mixing and separating vectors that search for the
maximum of the contrast function (13). They iterate in small
steps in the direction of a constrained gradient of (13).
Specifically, the orthogonal constraint (OG) is imposed
between each pair of the parameter vectors ak and wk as
$$\mathbf{a}_k = \frac{\hat{\mathbf{C}}_k\mathbf{w}_k}{\mathbf{w}_k^H\hat{\mathbf{C}}_k\mathbf{w}_k}, \quad (14a) \qquad \mathbf{w}_k = \frac{\hat{\mathbf{C}}_k^{-1}\mathbf{a}_k}{\mathbf{a}_k^H\hat{\mathbf{C}}_k^{-1}\mathbf{a}_k}, \quad (14b)$$
where Ĉk is the sample-based estimate of the covariance
matrix Ck = E[xkxHk ]. The constrained gradient of (13)
is the gradient taken with respect to wk or ak when the
other parameter vector is dependent through (14a) or (14b),
respectively. The OG must be imposed, because updating ak
and wk as independent parameters (linked only through the
distortionless constraint) in the directions of unconstrained
gradients has been shown to be highly unstable.
The constrained gradient of (13) with respect to wk is equal
to
$$\left.\frac{\partial \mathcal{C}}{\partial \mathbf{w}_k^H}\right|_{\text{w.r.t. (14a)}} = \mathbf{a}_k - \hat{\mathrm{E}}[\mathbf{x}_k\,\phi_k(\{\mathbf{w}_k^H\mathbf{x}_k\}_{k=1}^K)], \qquad (15)$$
where $\phi_k(\mathbf{s}) = -\frac{\partial}{\partial s_k}\log f(\mathbf{s})$ is the score function
corresponding to the model pdf $f(\cdot)$. It is readily seen that,
for $N \to +\infty$, the true separating vectors $\{\mathbf{w}_k\}_{k=1}^K$ are the
stationary points of the contrast function (the gradient is zero)
only if $\hat{\mathrm{E}}[\mathbf{w}_k^H\mathbf{x}_k\,\phi_k(\{\mathbf{w}_k^H\mathbf{x}_k\}_{k=1}^K)] = 1$. Therefore, a modified
(normalized) gradient equals
$$\boldsymbol{\Delta}_k = \mathbf{a}_k - \frac{\hat{\mathrm{E}}[\mathbf{x}_k\,\phi_k(\{\mathbf{w}_k^H\mathbf{x}_k\}_{k=1}^K)]}{\hat{\mathrm{E}}[\mathbf{w}_k^H\mathbf{x}_k\,\phi_k(\{\mathbf{w}_k^H\mathbf{x}_k\}_{k=1}^K)]}, \qquad (16)$$
and the rule for updating $\mathbf{w}_k$, $k = 1, \dots, K$, is
$$\mathbf{w}_k \leftarrow \mathbf{w}_k + \mu\,\boldsymbol{\Delta}_k, \qquad (17)$$
where µ > 0 is a step size parameter. After each update, the
scaling ambiguity can be fixed through normalizing the scale
of the extracted signal or by normalizing the current mixing
or separating vector (while preserving the distortionless con-
straint wHk ak = 1). The resulting algorithm is referred to as
OGIVEw, which is an acronym of “Orthogonally-Constrained
IVE” and the subscript means that the optimization proceeds
in variables {wk}Kk=1.
Alternatively, the optimization can also proceed in variables
{ak}Kk=1 under the constraint (14b). The corresponding algo-
rithm is referred to as OGIVEa; see [24].
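For concreteness, here is a compact numpy sketch of one OGIVEw iteration, i.e., (14a), (16) and (17) followed by a scale normalization; the array layout, the fixed step size, and the particular super-Gaussian nonlinearity $\phi_k(\mathbf{s}) = s_k^*/\|\mathbf{s}\|_2$ are illustrative choices on our part, not prescriptions from the paper:

```python
import numpy as np

def ogive_w_iteration(X, W, mu=0.2):
    """One OGIVEw update of the separating vectors.

    X : (K, d, N) complex STFT observations (freq. bins, channels, frames)
    W : (K, d)    current separating vectors w_k
    """
    K, d, N = X.shape
    W = W.astype(complex, copy=True)
    S = np.einsum("kd,kdn->kn", W.conj(), X)            # s_k(n) = w_k^H x_k(n)
    # score of the illustrative model f(s) ~ exp(-||s||_2): phi_k(s) = s_k* / ||s||_2
    Phi = S.conj() / (np.linalg.norm(S, axis=0) + 1e-12)
    for k in range(K):
        C = X[k] @ X[k].conj().T / N                    # sample covariance C_k
        a = (C @ W[k]) / (W[k].conj() @ C @ W[k])       # mixing vector via OG (14a)
        nu = np.mean(S[k] * Phi[k])                     # E[w_k^H x_k phi_k(s)]
        delta = a - np.mean(X[k] * Phi[k], axis=1) / nu # normalized gradient (16)
        W[k] += mu * delta                              # gradient step (17)
        W[k] /= W[k][0]                                 # fix the scale: (w_k)_1 = 1
    return W
```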
F. CSV Mixing Model
We now consider the jPDM mixing model (4). Let the
samples of the observed signals be divided into T intervals;
for the sake of simplicity, we assume that they have the
same length Nb = N/T (let this number be an integer);
the intervals will be called blocks and will be indexed by
t ∈ T = {1, . . . , T}. The Constant Separating Vector (CSV)
mixing model comes from the jPDM model (4) where the
mixing matrices Ak,t obey a structure similar to the one given
by (9). In addition, the separating vectors are independent
of the block index t (i.e., are constant over the blocks);
specifically,
$$\mathbf{A}_{k,t} = \begin{pmatrix} \mathbf{a}_{k,t} & \mathbf{Q}_{k,t} \end{pmatrix} = \begin{pmatrix} \gamma_{k,t} & \mathbf{h}_k^H \\ \mathbf{g}_{k,t} & \frac{1}{\gamma_{k,t}}(\mathbf{g}_{k,t}\mathbf{h}_k^H - \mathbf{I}_{d-1}) \end{pmatrix}, \qquad (18)$$
and
$$\mathbf{W}_{k,t} = \begin{pmatrix} \mathbf{w}_k^H \\ \mathbf{B}_{k,t} \end{pmatrix} = \begin{pmatrix} \beta_k & \mathbf{h}_k^H \\ \mathbf{g}_{k,t} & -\gamma_{k,t}\mathbf{I}_{d-1} \end{pmatrix}. \qquad (19)$$
The idea behind the CSV model is that the SOI can
change its position from block to block, because the position
is determined by the mixing vectors ak,t, which in turn
depend on t. The separating vectors do not depend on t,
so they are forced to extract the speaker’s voice from all
positions visited during its movement; see the illustration in
Fig. 1. One advantage is given by the reduced number of
mixing model parameters, as confirmed by the theoretical
study on Cramér-Rao bounds in [27]; however, the model
also brings some limitations. In theory, the mixture must obey
the condition that, for each k, a separating vector exists such
that sk,t = wHk xk,t holds for every t; this condition seems
to be quite restrictive. Nevertheless, preliminary experiments
have shown that CSV is useful in practical situations [19]. An
efficient BSE can be achieved through CSV; especially, when
a sufficient number of microphones is used, which increases
the number of the degrees of freedom. Then, the existence
of the desired constant separation vectors follows from the
existence of linearly constrained minimum variance (LCMV)
beamformers; see [33].
The first part of our experimental study in Section IV
provides practical evidence of this capability of CSV, as well
as of the BSE algorithms based on it.
Fig. 1: An illustration of how the blind extraction of a moving
speaker can be solved based on CSV. The narrow area (in grey)
stands for a typical focus of a separating filter obtained by the
conventional methods. It is able to extract the speaker only
from a particular position. The green area denotes the focus
of a separating filter obtained through CSV: it covers the entire
area of the speaker’s movement.
G. Block OGIVEw for the CSV mixing model
We will now modify OGIVEw for the CSV mixing
model. This method will be referred to as BOGIVEw (Block
OGIVEw). A similar algorithm was derived in [19]5 for the
CMV variant of the jPDM model (Constant Mixing Vector),
which is referred to as BOGIVEa.
The derivation of BOGIVEw is straightforward by following
Sections II-D and II-E. Samples of the observed signals are
assumed to be i.i.d. within each block and independently
distributed across the blocks. Hence, the log-likelihood and
contrast functions (12) and (13) and, consequently, also the
gradient (16), have the same form in each block. The differ-
ence is that the block-dependent parameters and statistics must
be taken into account. Therefore, the block index t must be
included into the notation; namely, xk → xk,t, Ĉk → Ĉk,t,
ak → ak,t, Bk → Bk,t, etc. It is important to note that wk,
k = 1, . . . ,K, are independent of t in the CSV model. For
simplicity, the same nonlinear function φk(·) is assumed for all
blocks; nevertheless, its dependence on t could be considered
as well (we do not go that way in this paper).
The contrast function for the entire batch of the data is hence
given by
$$\mathcal{C}\Bigl(\{\mathbf{w}_k,\mathbf{a}_{k,t}\}_{k=1,\dots,K;\;t=1,\dots,T}\Bigr) = \frac{1}{T}\sum_{t=1}^{T}\Biggl\{\hat{\mathrm{E}}\bigl[\log f(\{\mathbf{w}_k^H\mathbf{x}_{k,t}\}_{k=1}^{K})\bigr] - \sum_{k=1}^{K}\hat{\mathrm{E}}\bigl[\mathbf{x}_{k,t}^H\mathbf{B}_{k,t}^H\hat{\mathbf{C}}_{\mathbf{z}_k,t}^{-1}\mathbf{B}_{k,t}\mathbf{x}_{k,t}\bigr] + (d-2)\sum_{k=1}^{K}\log|\gamma_{k,t}|^2\Biggr\}, \qquad (20)$$
Footnote 5: BOGIVEw is briefly mentioned in [19] as an algorithm similar to BOGIVEa and is experimentally compared with others in that paper. However, a detailed derivation of BOGIVEw, which assumes a mixing model different from BOGIVEa, has not yet been published.
Its gradient, computed under the OG (14a) applied separately in each block, that is,
$$\mathbf{a}_{k,t} = \frac{\hat{\mathbf{C}}_{k,t}\mathbf{w}_k}{\mathbf{w}_k^H\hat{\mathbf{C}}_{k,t}\mathbf{w}_k}, \qquad (21)$$
is equal to
$$\frac{\partial \mathcal{C}}{\partial \mathbf{w}_k^H}\Bigg|_{\text{w.r.t. }(21)} = \frac{1}{T}\sum_{t=1}^{T}\Bigl\{\mathbf{a}_{k,t} - \hat{\mathrm{E}}\bigl[\mathbf{x}_{k,t}\,\phi_k(\{\mathbf{w}_k^H\mathbf{x}_{k,t}\}_{k=1}^{K})\bigr]\Bigr\}. \qquad (22)$$
Similarly to (16), the normalized gradient reads
$$\Delta_k^{\mathrm{avg}} = \frac{1}{T}\sum_{t=1}^{T}\Bigl\{\mathbf{a}_{k,t} - \hat{\mathrm{E}}\bigl[\mathbf{x}_{k,t}\,\phi_k(\{\mathbf{w}_k^H\mathbf{x}_{k,t}\}_{k=1}^{K})\bigr]/\nu_{k,t}\Bigr\}, \qquad (23)$$
where $\nu_{k,t} = \hat{\mathrm{E}}[\mathbf{w}_k^H\mathbf{x}_{k,t}\,\phi_k(\{\mathbf{w}_k^H\mathbf{x}_{k,t}\}_{k=1}^{K})]$. The rule for updating $\mathbf{w}_k$, $k = 1,\dots,K$, is, similarly to (17), given by $\mathbf{w}_k \leftarrow \mathbf{w}_k + \mu\,\Delta_k^{\mathrm{avg}}$.
A detailed summary of BOGIVEw is given in Algorithm 1,
in which the method is started from the initial values of the
separating vectors. After each iteration, the separating vectors
are normalized so that their first elements are equal to one in
order to resolve the scaling ambiguity problem. (Alternatively,
the normalization of the scales of the extracted signals is
possible.) It is worth noting that the normalization of mixing
vectors ak,t is not possible here as compared to OGIVEw,
because these parameters are block-dependent. For T = 1,
BOGIVEw corresponds with OGIVEw.
Algorithm 1: BOGIVEw: Block-wise orthogonally constrained independent vector extraction
Input: x_{k,t}, w_k^{ini} (k, t = 1, 2, ...), µ, tol
Output: a_{k,t}, w_k
1:  foreach k = 1, ..., K, t = 1, ..., T do
2:      Ĉ_{k,t} = Ê[x_{k,t} x_{k,t}^H];
3:      w_k = w_k^{ini} / (w_k^{ini})_1;
4:  end
5:  repeat
6:      foreach k = 1, ..., K, t = 1, ..., T do
7:          a_{k,t} ← (w_k^H Ĉ_{k,t} w_k)^{-1} (Ĉ_{k,t} w_k);
8:          s_{k,t} ← w_k^H x_{k,t};
9:      end
10:     foreach k = 1, ..., K, t = 1, ..., T do
11:         ν_{k,t} ← Ê[s_{k,t} φ_k(s_{1,t}, ..., s_{K,t})];
12:     end
13:     foreach k = 1, ..., K do
14:         compute Δ_k^{avg} according to (23);
15:         w_k ← w_k + µ Δ_k^{avg};
16:         w_k ← w_k / (w_k)_1;
17:     end
18: until max{‖Δ_1^{avg}‖, ..., ‖Δ_K^{avg}‖} < tol;
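As a complement to Algorithm 1, the block-averaged gradient (23) for one source index k might be computed as in the following sketch; the block-wise data layout and the pre-computed score values Phi_k are assumptions of this example (they would be obtained from the current outputs of all K bins, as in the OGIVEw sketch above).

    import numpy as np

    def bogive_w_delta(w_k, X_k, Phi_k):
        """Block-averaged normalized gradient (23) for one k (sketch).
        w_k   : (d,) current separating vector
        X_k   : (T, d, Nb) observations split into T blocks
        Phi_k : (T, Nb) score values of the extracted signal in each block"""
        T, d, Nb = X_k.shape
        delta = np.zeros(d, dtype=complex)
        for t in range(T):
            C_t = X_k[t] @ X_k[t].conj().T / Nb             # \hat{C}_{k,t}
            a_t = (C_t @ w_k) / (w_k.conj() @ C_t @ w_k)    # mixing vector via (21)
            s_t = w_k.conj() @ X_k[t]                       # extracted signal in block t
            nu_t = np.mean(s_t * Phi_k[t])                  # \nu_{k,t}
            delta += a_t - (X_k[t] @ Phi_k[t]) / (Nb * nu_t)
        return delta / T                                    # \Delta_k^{avg} of (23)

The update then proceeds as in lines 15 and 16 of Algorithm 1: w_k is moved by µ times this delta and divided by its first element.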
III. AUXILIARY FUNCTION-BASED IVE
In [29], N. Ono derived the AuxIVA algorithm using an
auxiliary function-based optimization (AFO) technique. This
method provides a much faster and more stable alternative to
the natural gradient-based algorithm from [9]. In this section,
we briefly describe the main principles of the optimization
approach and its application within AuxIVA. Further we derive
a simple modification of AuxIVA for solving the problem
of IVE, which yields the AuxIVE algorithm. Finally, Block-
AuxIVE and its piloted variant assuming the CSV mixing
model are derived.
A. Original AuxIVA
In a general optimization problem, the goal is to find an
optimum point
$$\theta = \arg\min_{\theta} J(\theta), \qquad (24)$$
where J(θ) is a real-valued objective function. In AFO,
an auxiliary function Q(θ, ξ) is assumed to be known that
satisfies
$$J(\theta) = \min_{\xi} Q(\theta, \xi), \qquad (25)$$
where ξ is called auxiliary variable. The minimum of J(θ) is
then sought in two alternating steps, respectively,
$$\xi_i = \arg\min_{\xi} Q(\theta_i, \xi), \qquad (26)$$
$$\theta_{i+1} = \arg\min_{\theta} Q(\theta, \xi_i), \qquad (27)$$
where i is the iteration index. In particular, AFO can be very
effective when the closed-form solution of (27) is available.
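Schematically, the AFO scheme (26)-(27) is the following simple alternating loop; the two solver callbacks are placeholders of this sketch and are problem specific.

    def afo_minimize(theta0, argmin_xi, argmin_theta, num_iter=100):
        """Generic auxiliary-function optimization loop, cf. (26)-(27) (sketch)."""
        theta = theta0
        for _ in range(num_iter):
            xi = argmin_xi(theta)       # (26): make Q(theta, .) tight at the current theta
            theta = argmin_theta(xi)    # (27): minimize Q(., xi), ideally in closed form
        return theta

Each pass cannot increase J(θ), since J(θ_{i+1}) ≤ Q(θ_{i+1}, ξ_i) ≤ Q(θ_i, ξ_i) = J(θ_i); this monotonicity is what makes a closed-form step (27) so attractive.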
In IVA, the set of fully parameterized de-mixing matrices $\{\mathbf{W}_k\}_{k=1}^K$ plays the role of θ and the contrast function (see footnote 6) is given by [9], [29]
$$J(\{\mathbf{W}_k\}_{k=1}^K) = -\sum_{i=1}^{d}\hat{\mathrm{E}}\bigl[\log f(\mathbf{u}^i)\bigr] - \sum_{k=1}^{K}\log|\det\mathbf{W}_k|^2, \qquad (28)$$
where $\mathbf{u}^i = [(\mathbf{w}_1^i)^H\mathbf{x}_1, \dots, (\mathbf{w}_K^i)^H\mathbf{x}_K]^T$ denotes the $i$th separated vector component; $(\mathbf{w}_k^i)^H$ denotes the $i$th row in $\mathbf{W}_k$. It is seen that the algebraic form of (28) mainly depends on the model density $f(\cdot)$.
In [29], Theorem 1 formulates an assumption that a scalar real-valued function $G_R(\cdot)$ exists such that $-\log f(\mathbf{u}) = G_R(\|\mathbf{u}\|_2)$ and that $G_R(r)$ is continuous and differentiable in $r$ such that $G_R'(r)/r$ is positive and continuous everywhere and is monotonically decreasing in the wider sense for $r \geq 0$.
It is then shown that the auxiliary function can be
$$Q(\{\mathbf{W}_k\}_{k=1}^K, \mathbf{r}) = \frac{1}{2}\sum_{i=1}^{d}\sum_{k=1}^{K}(\mathbf{w}_k^i)^H\mathbf{V}_k^i\mathbf{w}_k^i - \sum_{k=1}^{K}\log|\det\mathbf{W}_k|^2 + R, \qquad (29)$$
where
$$\mathbf{V}_k^i = \hat{\mathrm{E}}\bigl[\varphi(r_i)\,\mathbf{x}_k\mathbf{x}_k^H\bigr], \qquad (30)$$
$\varphi(r) = G_R'(r)/r$, and $\mathbf{r} = [r_1, \dots, r_d]^T$ plays the role of the auxiliary variable. The remaining part of $Q(\{\mathbf{W}_k\}_{k=1}^K, \mathbf{r})$, denoted by $R$, is independent of $\{\mathbf{W}_k\}_{k=1}^K$. It holds that
$$J(\{\mathbf{W}_k\}_{k=1}^K) \leq Q(\{\mathbf{W}_k\}_{k=1}^K, \mathbf{r}), \qquad (31)$$
where the equality holds if and only if
$$r_i = \|\mathbf{u}^i\|_2 = \sqrt{\sum_{k=1}^{K}\bigl|(\mathbf{w}_k^i)^H\mathbf{x}_k\bigr|^2}, \quad i = 1, \dots, d. \qquad (32)$$
Footnote 6: Unlike (13), the contrast (28) has a negative sign; hence the latter is to be minimized while the former is to be maximized.
To realize the minimization step (27), the derivatives of $Q(\{\mathbf{W}_k\}_{k=1}^K, \mathbf{r})$ with respect to the separating vectors are put equal to zero, by which a set of equations is obtained. For every $k$ and $i$, the derivative reads
$$\frac{\partial Q(\{\mathbf{W}_k\}_{k=1}^K, \mathbf{r})}{\partial(\mathbf{w}_k^i)^H} = \frac{1}{2}\mathbf{V}_k^i\mathbf{w}_k^i - \frac{\partial}{\partial(\mathbf{w}_k^i)^H}\log|\det\mathbf{W}_k|^2. \qquad (33)$$
The system of equations can be decoupled and solved independently for each $k$. Using the identity $\frac{\partial\log|\det\mathbf{W}_k|}{\partial\mathbf{W}_k} = \mathbf{W}_k^{-H}$, the set of equations obtained for a given $k$ is
$$(\mathbf{w}_k^j)^H\mathbf{V}_k^i\mathbf{w}_k^i = \lambda^i\delta_{ij}, \quad 1 \leq i, j \leq d, \qquad (34)$$
where $\delta_{ij}$ is the Kronecker delta, and $\lambda^i$, $i = 1, \dots, d$, are arbitrary positive constants which reflect the scaling ambiguity of the separating vectors. Here, we put all λs equal to one.
The problem defined by (34) has been known as Hy-
brid Exact-Approximate Joint Diagonalization (HEAD) [34],
whose closed-form solution poses an open problem. Therefore,
instead of updating (34) for all $\mathbf{w}_k^i$ simultaneously, it is proposed in [29] to update $\mathbf{w}_k^i$ while the other $\mathbf{w}_k^j$, $(j \neq i)$, are fixed. This leads to the following problem:
$$(\mathbf{w}_k^i)^H\mathbf{V}_k^i\mathbf{w}_k^i = 1, \qquad (35)$$
$$(\mathbf{w}_k^j)^H\mathbf{V}_k^i\mathbf{w}_k^i = 0, \quad (j \neq i). \qquad (36)$$
Equations (36) determine the directions of $\mathbf{w}_k^i$ while (35) determines their scales. Therefore, (35) can temporarily be replaced by a dummy equation $\mathbf{b}^H\mathbf{V}_k^i\mathbf{w}_k^i = 1$ where $\mathbf{b}$ is put equal to $\mathbf{w}_k^i$ obtained in the previous iteration of AuxIVA. A simple update rule is obtained:
$$\mathbf{w}_k^i \leftarrow \bigl(\mathbf{W}_k\mathbf{V}_k^i\bigr)^{-1}\mathbf{e}_i, \qquad (37)$$
where $\mathbf{e}_i$ is the $i$th column of $\mathbf{I}_d$. The result of (37) is then re-scaled to satisfy (35).
To summarize, the complete update rules of AuxIVA for each $k$ and $i$ are as follows:
$$r_i = \sqrt{\sum_{k=1}^{K}\bigl|(\mathbf{w}_k^i)^H\mathbf{x}_k\bigr|^2}, \qquad (38)$$
$$\mathbf{V}_k^i = \hat{\mathrm{E}}\bigl[\varphi(r_i)\,\mathbf{x}_k\mathbf{x}_k^H\bigr], \qquad (39)$$
$$\mathbf{w}_k^i \leftarrow \bigl(\mathbf{W}_k\mathbf{V}_k^i\bigr)^{-1}\mathbf{e}_i, \qquad (40)$$
$$\mathbf{w}_k^i \leftarrow \mathbf{w}_k^i \big/ \sqrt{(\mathbf{w}_k^i)^H\mathbf{V}_k^i\mathbf{w}_k^i}. \qquad (41)$$
For a brief overview of AuxIVA, see also [35].
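A minimal NumPy sketch of one sweep of (38)-(41) over all sources and frequency bins is given below; the array layout and the choice ϕ(r) = 1/r (which matches the source model (59) adopted later) are assumptions of the example rather than a reference implementation.

    import numpy as np

    def auxiva_sweep(W, X, eps=1e-12):
        """One AuxIVA sweep (sketch).
        W : (K, d, d) de-mixing matrices; row i of W[k] is (w_k^i)^H, so Y[k] = W[k] @ X[k]
        X : (K, d, N) observed STFT frames per frequency bin"""
        K, d, N = X.shape
        for i in range(d):
            Y_i = np.einsum('kd,kdn->kn', W[:, i, :], X)          # i-th output in every bin
            r = np.sqrt(np.sum(np.abs(Y_i) ** 2, axis=0)) + eps   # (38) auxiliary variable per frame
            phi = 1.0 / r                                         # assumed varphi(r) = G'_R(r)/r
            for k in range(K):
                V = (X[k] * phi) @ X[k].conj().T / N              # (39) weighted covariance
                w = np.linalg.solve(W[k] @ V, np.eye(d)[:, i])    # (40) w_k^i = (W_k V_k^i)^{-1} e_i
                w = w / np.sqrt((w.conj() @ V @ w).real + eps)    # (41) rescaling
                W[k, i, :] = w.conj()                             # store as the i-th row (w_k^i)^H
        return W

The sweep is typically repeated for a fixed number of iterations or until the de-mixing matrices stop changing.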
B. AuxIVE
In IVE, the contrast function is given by (13), which should
be maximized in variables wk and ak, k = 1, . . . ,K. We can
apply the AFO technique in a way similar to the previous
subsection, because the first term in (13) corresponds to one
term of the first sum in (28). Hence, following the same
assumption about the model density f(·) as in Theorem 1
in [29], the auxiliary function for (13) can have the form
$$Q(\{\mathbf{w}_k\}_{k=1}^K, \{\mathbf{a}_k\}_{k=1}^K, r) = -\frac{1}{2}\sum_{k=1}^{K}\mathbf{w}_k^H\mathbf{V}_k\mathbf{w}_k - \sum_{k=1}^{K}\hat{\mathrm{E}}\bigl[\mathbf{x}_k^H\mathbf{B}_k^H\mathbf{C}_{\mathbf{z}_k}^{-1}\mathbf{B}_k\mathbf{x}_k\bigr] + (d-2)\sum_{k=1}^{K}\log|\gamma_k|^2 + R, \qquad (42)$$
where
$$\mathbf{V}_k = \hat{\mathrm{E}}\bigl[\varphi(r)\,\mathbf{x}_k\mathbf{x}_k^H\bigr], \qquad (43)$$
and $r$ is the auxiliary variable, which is scalar in this case; $R$ depends purely on $r$. The equality between the contrast (13) and (42) holds if and only if $r = \sqrt{\sum_{k=1}^{K}|\mathbf{w}_k^H\mathbf{x}_k|^2}$.
In a way similar to Section II-E, the OG is imposed between the pairs of vector variables $\mathbf{w}_k$ and $\mathbf{a}_k$, $k = 1, \dots, K$, and the optimization proceeds in $\mathbf{w}_k$. The constrained derivative of (42) with respect to $\mathbf{w}_k^H$ has the form (see footnote 7)
$$\frac{\partial Q(\{\mathbf{w}_k\}_{k=1}^K, r)}{\partial\mathbf{w}_k^H}\Bigg|_{\text{w.r.t. }(14a)} = \mathbf{a}_k - \mathbf{V}_k\mathbf{w}_k. \qquad (44)$$
Putting the derivative equal to zero, we can derive a closed-form solution:
$$\mathbf{w}_k = \mathbf{V}_k^{-1}\mathbf{a}_k. \qquad (45)$$
It means that, in contrast to IVA, the HEAD problem (34) need not be solved, and the update rules for AuxIVE are
$$r = \sqrt{\sum_{k=1}^{K}\bigl|\mathbf{w}_k^H\mathbf{x}_k\bigr|^2}, \qquad (46)$$
$$\mathbf{V}_k = \hat{\mathrm{E}}\bigl[\varphi(r)\,\mathbf{x}_k\mathbf{x}_k^H\bigr], \qquad (47)$$
$$\mathbf{a}_k = \frac{\hat{\mathbf{C}}_k\mathbf{w}_k}{\mathbf{w}_k^H\hat{\mathbf{C}}_k\mathbf{w}_k}, \qquad (48)$$
$$\mathbf{w}_k = \mathbf{V}_k^{-1}\mathbf{a}_k. \qquad (49)$$
The pseudocode of AuxIVE corresponds to Algorithm 2 when
T = 1.
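In NumPy, one iteration of (46)-(49) could be sketched as follows; the shapes and the nonlinearity ϕ(r) = 1/r are, again, assumptions of the example.

    import numpy as np

    def auxive_iteration(w, X, eps=1e-12):
        """One AuxIVE iteration (sketch).
        w : (K, d) separating vectors, X : (K, d, N) observations per frequency bin"""
        K, d, N = X.shape
        S = np.einsum('kd,kdn->kn', w.conj(), X)              # s_k = w_k^H x_k
        r = np.sqrt(np.sum(np.abs(S) ** 2, axis=0)) + eps     # (46) auxiliary variable per frame
        phi = 1.0 / r                                         # assumed varphi for the model (59)
        for k in range(K):
            C = X[k] @ X[k].conj().T / N                      # \hat{C}_k
            V = (X[k] * phi) @ X[k].conj().T / N              # (47)
            a = (C @ w[k]) / (w[k].conj() @ C @ w[k])         # (48) orthogonal constraint
            w[k] = np.linalg.solve(V, a)                      # (49) closed-form update
        return w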
Very recently, a similar modification of AuxIVA for the
blind extraction of m sources, where m < d, has been
proposed in [36]; the algorithm is named OverIVA. AuxIVE
could be seen as a special variant of OverIVA designed for
m = 1.
Footnote 7: The form of (44) easily follows from the fact that the second and third terms in (42) are the same as in (13), whose constrained derivative equals $\mathbf{a}_k$; cf. (44) and (15).
C. Block AuxIVE
We can now modify AuxIVE for the CSV mixing model
following the results described in Section II-G. The contrast
function for CSV is given by (20). Comparing (20) with (13)
and using the same approach and assumptions to derive (42),
we obtain the auxiliary function for the CSV model in the
form
$$Q\Bigl(\{\mathbf{w}_k, \mathbf{a}_{k,t}, r_t\}_{k=1,\dots,K;\;t=1,\dots,T}\Bigr) = \frac{1}{T}\sum_{t=1}^{T}\Biggl\{-\frac{1}{2}\sum_{k=1}^{K}\mathbf{w}_k^H\mathbf{V}_{k,t}\mathbf{w}_k - \sum_{k=1}^{K}\hat{\mathrm{E}}\bigl[\mathbf{x}_{k,t}^H\mathbf{B}_{k,t}^H\mathbf{C}_{\mathbf{z}_k,t}^{-1}\mathbf{B}_{k,t}\mathbf{x}_{k,t}\bigr] + (d-2)\sum_{k=1}^{K}\log|\gamma_{k,t}|^2\Biggr\} + R, \qquad (50)$$
where
$$\mathbf{V}_{k,t} = \hat{\mathrm{E}}\bigl[\varphi(r_t)\,\mathbf{x}_{k,t}\mathbf{x}_{k,t}^H\bigr], \qquad (51)$$
$\mathbf{r} = [r_1, \dots, r_T]^T$ is the auxiliary variable, and $R$ depends purely on $\mathbf{r}$. When $r_t = \sqrt{\sum_{k=1}^{K}|\mathbf{w}_k^H\mathbf{x}_{k,t}|^2}$ for every $t = 1, \dots, T$, (50) and (20) are equal.
An OG similar to the one used in Section II-G is imposed
between the pairs wk and ak,t in each block according to
the relationship (21). The constrained derivative of (50) with
respect to $\mathbf{w}_k$ then takes on the form
$$\frac{\partial Q\bigl(\{\mathbf{w}_k, r_t\}_{k=1,\dots,K;\;t=1,\dots,T}\bigr)}{\partial\mathbf{w}_k^H}\Bigg|_{\text{w.r.t. }(21)} = \frac{1}{T}\sum_{t=1}^{T}\bigl\{\mathbf{a}_{k,t} - \mathbf{V}_{k,t}\mathbf{w}_k\bigr\}. \qquad (52)$$
Putting the derivative equal to zero, we obtain the closed-form solution as $\mathbf{w}_k = \bigl(\sum_{t=1}^{T}\mathbf{V}_{k,t}\bigr)^{-1}\sum_{t=1}^{T}\mathbf{a}_{k,t}$. The separating vectors $\mathbf{w}_k$ are then normalized so that their first elements are equal to one.
To summarize, the complete update rules of Block AuxIVE are as follows:
$$r_t = \sqrt{\sum_{k=1}^{K}\bigl|\mathbf{w}_k^H\mathbf{x}_{k,t}\bigr|^2}, \qquad (53)$$
$$\mathbf{V}_{k,t} = \hat{\mathrm{E}}\bigl[\varphi(r_t)\,\mathbf{x}_{k,t}\mathbf{x}_{k,t}^H\bigr], \qquad (54)$$
$$\mathbf{a}_{k,t} = \frac{\hat{\mathbf{C}}_{k,t}\mathbf{w}_k}{\mathbf{w}_k^H\hat{\mathbf{C}}_{k,t}\mathbf{w}_k}, \qquad (55)$$
$$\mathbf{w}_k = \Biggl(\sum_{t=1}^{T}\mathbf{V}_{k,t}\Biggr)^{-1}\sum_{t=1}^{T}\mathbf{a}_{k,t}. \qquad (56)$$
The pseudo-code of the proposed method is described in
Algorithm 2.
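A compact sketch of one iteration of (53)-(56), with the observations pre-split into T blocks, is given below; the data layout, the nonlinearity, and the final first-element normalization (taken over from Algorithm 2) are assumptions of this example.

    import numpy as np

    def block_auxive_iteration(w, X, eps=1e-12):
        """One Block AuxIVE iteration (sketch).
        w : (K, d) constant separating vectors
        X : (T, K, d, Nb) observations split into T blocks"""
        T, K, d, Nb = X.shape
        V_sum = np.zeros((K, d, d), dtype=complex)
        a_sum = np.zeros((K, d), dtype=complex)
        for t in range(T):
            S = np.einsum('kd,kdn->kn', w.conj(), X[t])              # w_k^H x_{k,t}
            r = np.sqrt(np.sum(np.abs(S) ** 2, axis=0)) + eps        # (53) per frame of block t
            phi = 1.0 / r
            for k in range(K):
                C = X[t, k] @ X[t, k].conj().T / Nb                  # \hat{C}_{k,t}
                a_sum[k] += (C @ w[k]) / (w[k].conj() @ C @ w[k])    # (55)
                V_sum[k] += (X[t, k] * phi) @ X[t, k].conj().T / Nb  # (54)
        for k in range(K):
            w[k] = np.linalg.solve(V_sum[k], a_sum[k])               # (56)
            w[k] = w[k] / w[k][0]                                    # scale fix as in Algorithm 2
        return w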
D. Piloted Block AuxIVE
Owing to the indeterminacy of the ordering for the original
signals in BSS, it is not, in general, known which source is
currently being extracted through BSE. The crucial problem is
to ensure that the signal being extracted actually corresponds
to the SOI. Therefore, several approaches ensuring the global
convergence have been proposed, most of which are based on
Algorithm 2: Block AuxIVE: Auxiliary function based IVE for the CSV Mixing Model
Input: x_{k,t}, w_k^{ini} (k, t = 1, 2, ...), NumIter
Output: a_{k,t}, w_k
1:  foreach k = 1, ..., K, t = 1, ..., T do
2:      Ĉ_{k,t} = Ê[x_{k,t} x_{k,t}^H];
3:      w_k = w_k^{ini} / (w_k^{ini})_1;
4:  end
5:  Iter = 0;
6:  repeat
7:      foreach t = 1, ..., T do
8:          r_t ← sqrt( Σ_{k=1}^K |w_k^H x_{k,t}|^2 );
9:          foreach k = 1, ..., K do
10:             a_{k,t} ← (Ĉ_{k,t} w_k) / (w_k^H Ĉ_{k,t} w_k);
11:             V_{k,t} ← Ê[φ(r_t) x_{k,t} x_{k,t}^H];
12:         end
13:     end
14:     foreach k = 1, ..., K do
15:         w_k^H ← ( Σ_{t=1}^T a_{k,t}^H ) ( Σ_{t=1}^T V_{k,t} )^{-1};
16:         w_k ← w_k / (w_k)_1;
17:     end
18:     Iter ← Iter + 1;
19: until Iter ≥ NumIter;
additional constraints assuming prior knowledge, e.g., about
the source position or a reference signal [37]–[40]. Recently,
an unconstrained supervised IVA using the so-called pilot
signals has been proposed in [28], where each pilot signal
is dependent on the source signals, so they have a joint pdf
that cannot be factorized into a product of marginal pdfs. This
idea has been extended to IVE in [24], where only the pilot
signal related to the SOI is needed.
Let the pilot signal dependent on the SOI (and independent
of the background) be denoted by o, and let the joint pdf of
s and o be p(s, o). Then, the pdf of the observed data is
$$p_{\mathbf{x}}(\{\mathbf{x}_k\}_{k=1}^K) = p(\{\mathbf{w}_k^H\mathbf{x}_k\}_{k=1}^K, o)\cdot\prod_{k=1}^{K}p_{\mathbf{z}_k}(\mathbf{B}_k\mathbf{x}_k)\,|\det\mathbf{W}_k|^2. \qquad (57)$$
Comparing that expression with (11) and taking into account
the fact that o is independent of the mixing model parameters,
we can see that the Block AuxIVE admits a straightforward
modification.
In particular, provided that the model pdf $f(\{\mathbf{w}_k^H\mathbf{x}_k\}_{k=1}^K, o)$ replacing the unknown $p(\cdot)$ meets the conditions for the application of AFO as in Section III-A, the piloted algorithm has exactly the same steps as the non-piloted one, with the sole difference that the non-linearity $\varphi(\cdot)$ also depends on $o$. The equality between the contrast function and the auxiliary function holds if and only if
$$r_t = \sqrt{\sum_{k=1}^{K}\bigl|\mathbf{w}_k^H\mathbf{x}_{k,t}\bigr|^2 + \eta^2|o_t|^2}, \qquad (58)$$
for $t = 1, \dots, T$, where $o_t$ stands for the pilot signal within the $t$th interval, and $\eta$ is a hyperparameter controlling the influence of the pilot signal. Finally, Piloted Block AuxIVE is obtained from Block AuxIVE by replacing the update step (53) with (58).
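In a sketch like the Block AuxIVE example above, the only change is therefore the computation of the auxiliary variable; a hypothetical helper implementing (58) for one block could read:

    import numpy as np

    def piloted_r(S, pilot_block, eta, eps=1e-12):
        """Auxiliary variable (58): pooled norm of the extracted SOI components
        plus the weighted pilot, per frame of one block (sketch).
        S : (K, Nb) current outputs w_k^H x_{k,t}; pilot_block : (Nb,) pilot samples"""
        return np.sqrt(np.sum(np.abs(S) ** 2, axis=0)
                       + (eta ** 2) * np.abs(pilot_block) ** 2) + eps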
Finding a suitable pilot signal poses an application-
dependent problem. For example, outputs of voice activity
detectors were used to pilot the separation of simultaneously
talking persons in [28]. Similarly, a video-based lip-movement
detection was considered in [41]. A video-independent solu-
tion was proposed in [42] using spatial information about the
area in which the speaker is located. All these approaches have
been shown useful although the pilot signals used in them
contain residual noise and interference.
E. Choice of f(·)
In this paper, we choose the model pdf in the same way as
it was proposed in the pioneering IVA paper [9]; namely,
$$f(\mathbf{s}) \propto \exp\{-\|\mathbf{s}\|\}, \qquad (59)$$
for which the $k$th score function is
$$\psi_k(\mathbf{s}) = -\frac{\partial}{\partial s_k}\log f(\mathbf{s}) = \frac{s_k}{\|\mathbf{s}\|}, \qquad (60)$$
and the related nonlinearity in (30), (43) and (51) is $\varphi(\|\mathbf{s}\|) = \|\mathbf{s}\|^{-1}$. This pdf satisfies the conditions for applying AFO
(Theorem 1 in [29]) and is known to be suitable for speech
signals that are typically super-Gaussian. It is also suitable for
Piloted Block AuxIVE when an extended vector component
s̃ = [s; νo] is considered; o denotes the pilot signal, and ν is
a scaling parameter that controls the influence of the pilot.
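As a small illustration, the weighting function implied by (59)-(60), together with its piloted extension using the stacked component s̃ = [s; νo], might be implemented as follows; the numerical floor eps is an implementation assumption used to avoid division by zero.

    import numpy as np

    def varphi(S, pilot=None, nu=1.0, eps=1e-12):
        """Weighting varphi(r) = 1/r for the model f(s) ∝ exp(-||s||), cf. (59)-(60) (sketch).
        S : (K, N) extracted components; pilot : optional (N,) pilot signal"""
        r2 = np.sum(np.abs(S) ** 2, axis=0)
        if pilot is not None:
            r2 = r2 + (nu ** 2) * np.abs(pilot) ** 2    # extended component [s; nu*o]
        return 1.0 / (np.sqrt(r2) + eps)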
It is worth noting here that more accurate modeling of
the source pdf usually leads to improved performance. For
example, advanced statistical models are currently studied for
ILRMA [7], [43]. However, this topic goes beyond the scope
of this work.
IV. EXPERIMENTAL VALIDATION
In this section, we present results of experiments with
simulated as well as real-world recordings of moving speakers.
Our goal is to show the usefulness of the CSV mixing model
and compare the performance characteristics of the proposed
algorithms with other state-of-the-art methods.
A. Simulated room
In this example, we inspect de-mixing filters obtained by the
blind algorithms when extracting a moving speaker in a room
simulated by the image method [44]. The room has dimensions
4×4×2.5 (width×length×height) metres and T60 = 100 ms.
A linear array of five omnidirectional microphones is located
so that its center is at the position (1.8, 2, 1) m, and the array
axis is parallel with the room width. The spacing between
microphones is 5 cm.
The target signal is a 10 s long female utterance from
TIMIT. During that speech, the speaker is moving at a constant
speed on a 38◦ arc at a one-meter distance from the center of
the array; the situation is illustrated in Fig. 2a. The starting
TABLE I: Parameter setup for the tested methods in the simulated room

Method        | # iterations | step size µ | block size Nb
OGIVEw        | 1000         | 0.2         | n/a
Block OGIVEw  | 1000         | 0.2         | 250 frames
AuxIVE        | 100          | n/a         | n/a
Block AuxIVE  | 100          | n/a         | 250 frames
and ending positions are (1.8, 3, 1) m and (1.82, 2.78, 1) m,
respectively. The movement is simulated by 20 equidistantly
spaced RIRs on the path, which correspond to half-second in-
tervals of speech, whose overlap was smoothed by windowing.
Next, a directional source emitting a white Gaussian noise is
located at the position (2.8, 2, 1) m; that is, at a one-meter
distance to the right from the array.
The mixture of speech and noise has been processed by the
methods described in this paper in order to extract the speech
signal. Namely, we compare OGIVEw, Block OGIVEw,
AuxIVE and Block AuxIVE when operating in the STFT
domain with the FFT length of 512 samples and 128 samples
hop-size; the sampling frequency is fs = 16 kHz. Each
method has been initialized by the direction of arrival of the
speaker signal at the beginning of the sequence. The other
parameters of the methods are listed in Table I.
In order to visualize the performance of the extracting filters,
a 2×2 cm-spaced regular grid of positions spanning the whole
room is considered. Microphone responses (images) of the
white noise signal emitted from each position on the grid have
been simulated. The extracting filter of a given algorithm is
applied to the responses, and the output power is measured.
The average ratio between the output power and the power
of the input signals reflects the attenuation of the white noise
signal played from the given position.
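For clarity, the attenuation value for a single grid position could be obtained roughly as in the following sketch; the variable names and the STFT-domain layout are assumptions of the example, not the paper's evaluation code.

    import numpy as np

    def attenuation_db(w, Y):
        """Attenuation of a probe signal at one grid position (sketch).
        w : (K, d) extracting filter, Y : (K, d, N) STFT of the simulated microphone
        responses to white noise emitted from that position."""
        out = np.einsum('kd,kdn->kn', w.conj(), Y)        # filter output per frequency bin
        out_power = np.mean(np.abs(out) ** 2)
        in_power = np.mean(np.abs(Y) ** 2)                # average power of the input channels
        return 10.0 * np.log10(out_power / in_power)      # negative values mean attenuation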
The attenuation maps of the compared methods are shown
in Figures 2b through 2f. Table II shows the attenuation for
specific points in the room. In particular, the first five columns
in the table correspond to the speaker’s positions on the
movement path corresponding to angles 0◦ through 32◦. The
last column corresponds to the position of the interferer.
Fig. 2d shows the map of the initial filter corresponding
to the delay-and-sum (DS) beamformer steered towards the
initial position of the speaker. The beamformer yields a gentle
gain in the initial direction with no attenuation in the direction
of the interferer.
By contrast, all the compared blind methods steer a spatial
null towards the interferer and try to increase the gain of
the target signal. The spatial beam steered by Block AuxIVE
towards the speaker spans the whole angular range where the
speaker has appeared during the movement. Block OGIVEw
performs similarly. However, its performance is poorer, per-
haps due to its slower convergence or proneness to getting
stuck in a local extreme. AuxIVE and OGIVEw tend to focus
on only a narrow angular range (probably the most significant
part of the speech). The nulls steered towards the interferer
are more intense by AuxIVE and Block AuxIVE than by
the gradient methods. In conclusion, these results corroborate
the validity of the CSV mixing model and show the better
convergence properties of AuxIVE and Block AuxIVE.
TABLE II: The attenuation in selected points on the source
path and in the position of the interferer
0◦ 8◦ 16◦ 24◦ 32◦ Interferer
OGIVEw -1.09 -1.36 -2.02 -4.56 -5.08 -15.81
Block OGIVEw -1.20 -2.14 -1.69 -3.12 -3.87 -15.86
AuxIVE -5.85 -3.99 -3.08 -4.39 -5.12 -23.73
Block AuxIVE -3.22 -1.74 -1.27 -2.09 -2.67 -18.51
B. Real-world scenario using the MIRaGe database
The experiment here is designed to provide an exhaustive
test of the compared methods in challenging noisy situations
where the target speaker is performing small movements
within a confined area. Recordings are simulated using real-
world room impulse responses (RIRs) taken from the MIRaGe
database [45].
MIRaGe provides measured RIRs between microphones and
a source whose possible positions form a dense grid within
a 46 × 36 × 32 cm volume. MIRaGe is thus suitable for
our experiment, as it enables us to simulate small speaker
movements in a real environment.
The database setup is situated in an acoustic laboratory
which is a 6 × 6 × 2.4 m rectangular room with variable
reverberation time. Three reverberation levels with T60 equal
to 100, 300, and 600 ms are provided. The speaker’s area
involves 4104 positions which form the cube-shaped grid with
spacings of 2-by-2 cm over the x and y axes and 4 cm over
the z axis. Also, MIRaGe contains a complementary set of
measurements that provide information about the positions
placed around the room perimeter with spacing of ≈1 m, at
a distance of 1 m from the wall. These positions are referred
to as the out-of-grid positions (OOG). All measurements were
recorded by six static linear microphone arrays (5 mics per
array with the inter-microphone spacing of −13, −5, 0, +5
and +13 cm relative to the central microphone); for more
details about the database, see [45].
In the present experiment, we use Array 1, which is at
a distance of 1 m from the center of the grid, and the
T60 settings with 100 and 300 ms, respectively. For each
setting, 3840 noisy observations of a moving speaker were
synthesized as follows: each mixture consists of the moving
SOI, one static interfering speaker, and the noise. The SOI is
moving randomly over the grid positions. The movement is
simulated so that the position is changed every second. The
new position is randomly selected from all positions whose
maximum distance from the current position is 4 in both the
x and y axes. The transition between positions is smoothed
using the Hamming window of a length of fs/16 with one-half
overlaps. The interferer is located in a random OOG position
between 13 through 24, while the noise signal is equal to a
sum of signals that are located in the remaining OOG positions
(out of 13 through 24).
As the SOI and interferer signal, clean utterances of 4 male
and 4 female speakers from CHiME-4 [46] database were
selected; there are 20 different utterances, each having 10 s
in length per speaker. The noise signals correspond to random
parts of the CHiME-4 cafeteria noise recording. The signals
are convolved with the RIRs to match the desired positions,
and the obtained spatial images of the signals on microphones
(a) Setup of the simulated room conditions.
The position of interference is marked by the
red circle, the microphones by black circles
and the path of the source is marked by a
blue line.
(b) Attenuation in dB achieved by Block
AuxIVE
(c) Attenuation in dB achieved by AuxIVE
(d) Attenuation in dB achieved by Delay and
sum Beamformer
(e) Attenuation in dB achieved by Block
OGIVEw
(f) Attenuation in dB achieved by OGIVEw
Fig. 2: Setup of the simulated room and the attenuation in dB achieved by DOA, OGIVEw, Block OGIVEw, AuxIVE and
Block AuxIVE from the experiment in section IV-A
are summed up so that the interferer/noise power ratio, as well
as the power ratio between the SOI and interference plus noise,
is 0 dB.
The following methods are compared: OGIVEw, Block OGIVEw, AuxIVE, Block AuxIVE, Piloted AuxIVE and Piloted Block AuxIVE. The number of iterations for the AuxIVE-based methods is set to 150 and, for the gradient-based methods, to 2,000. The block size for the block methods is set to 350 frames. The gradient step length for OGIVEw and Block OGIVEw is set to µ = 0.2. The separating vector wk is initialized by the DS beamformer pointing in front of the microphone array. In the piloted versions of the methods, the pilot signals are equal to the output of an MPDR beamformer whose steering vector corresponds to the ground-truth DOA of the SOI. All these methods operate in the STFT domain with the FFT length of 512 and a hop-size of 128; the sampling frequency is 16 kHz.
The SOI is blindly extracted from each mixture, and the
result is evaluated through the improvement of the Signal-to-
Interference-and-Noise ratio (iSINR) and Signal-to-Distortion
ratio (iSDR) defined as in [47] (SDR is computed after
compensating for the global delay). The averaged values of the
criteria are summarized in Table III together with the average
time to process one mixture. The averages show small but still
significant differences between the methods. Nevertheless, for
a deeper understanding of the results, we need to analyze the
histograms of iSINR shown in Fig. 3.
Fig. 3a shows the histograms for the entire set of mixtures
in the experiment, while Fig. 3b is evaluated on a subset of
mixtures in which the SOI has not moved away from the
starting position by more than 5 cm; there are 288 mixtures
of this kind. Now, we can observe two phenomena. First, it is seen that the non-block variants of AuxIVE yield more results between 0 and 5 dB in Fig. 3a than in Fig. 3b and, on the contrary, they show a higher percentage of very successful extractions (iSINR > 10 dB) in Fig. 3b than in Fig. 3a. That means that they perform better for the subset of mixtures where the SOI is almost static. The performance of the block-based variants seems to be similar for the full set and the subset. On the other hand, they seem to yield fewer trials where iSINR > 10 dB than the non-block methods. To summarize, the block methods yield a more stable performance than the non-block methods when the SOI is moving. The non-block methods can yield higher iSINR when the SOI is static.
Second, the piloted variants of AuxIVE yield iSINR < −5 dB in a much lower number of trials than the non-piloted methods, as confirmed by the additional criterion in Table III.
This proves that the piloted algorithms have improved global
convergence. Simultaneously, the main peaks in the histograms
of the piloted methods seem to correspond to a lower iSINR
TABLE III: The SINR improvement with standard deviation, SDR improvement with standard deviation and extraction fail percentage for the MIRaGe database experiment. The first three data columns correspond to T60 = 100 ms, the next three to T60 = 300 ms; the last column is the average time per mixture [s].

Method               | mean iSINR [dB] | mean iSDR [dB] | iSINR < -5 dB [%] | mean iSINR [dB] | mean iSDR [dB] | iSINR < -5 dB [%] | time [s]
AuxIVE               | 6.62 ± 9.55     | 3.96 ± 2.14    | 12.71             | 4.27 ± 7.34     | 3.82 ± 2.00    | 13.01             | 8.00
Block AuxIVE         | 6.91 ± 8.83     | 4.02 ± 1.27    | 9.14              | 4.50 ± 6.42     | 3.48 ± 1.17    | 11.61             | 9.14
Piloted AuxIVE       | 6.95 ± 5.64     | 4.16 ± 1.14    | 2.53              | 5.77 ± 4.85     | 4.50 ± 1.53    | 2.32              | 8.02
Piloted Block AuxIVE | 6.34 ± 3.66     | 3.86 ± 1.02    | 0.57              | 5.86 ± 3.46     | 4.03 ± 1.31    | 0.70              | 9.16
Block OGIVEw         | 4.32 ± 5.15     | 3.14 ± 1.56    | 15.32             | 2.28 ± 3.15     | 1.98 ± 1.02    | 22.15             | 86.45
OGIVEw               | 3.85 ± 4.33     | 3.58 ± 1.98    | 22.10             | 1.01 ± 2.17     | 2.14 ± 1.45    | 12.23             | 73.15
than those of the non-piloted versions. We conjecture that the
performance bias is caused by the fact that the pilot signal used
in this experiment does not contain clean SOI and is thus also
slightly dependent on the other signals in the mixture.
C. Speech enhancement/recognition on CHiME-4 datasets
We have verified the proposed methods also in the noisy
speech recognition task defined within the CHiME-4 chal-
lenge considering the six-channel track [46]. This dataset
contains simulated (SIMU) and real-world (REAL, see footnote 8) utterances
of speakers in multi-source noisy environments. The recording
device is a tablet with multiple microphones, which is held
by a speaker. Since some recordings involve microphone
failures, the method from [48] is used to detect these failures.
If detected, the malfunctioning channels are excluded from
further processing of the given recording.
The experiment is evaluated in terms of Word Error Rate
(WER) as follows: The compared methods are used to extract
speech from the noisy recordings. Then, the enhanced signals
are forwarded to the baseline speech recognizer from [46]. The
WER achieved by the proposed methods is compared with the
results obtained on unprocessed input signals (Channel 5) and
with the techniques listed below.
BeamformIt [49] is a front-end algorithm used within the
CHiME-4 baseline system. It is a weighted delay-and-sum
beamformer requiring two passes over the processed recording
in order to optimize its inner parameters. We use the original
implementation of the technique available at [50].
The Generalized Eigenvalue Beamformer (GEV) is a front-
end solution proposed in [51], [52]. It represents the most
successful enhancers for CHiME-4 that rely on deep networks
trained for the CHiME-4 data. In the implementation used
here, a re-trained Voice-Activity-Detector (VAD) is used where
the training procedure was kindly provided by the authors of
[51]. We utilize the feed-forward topology of the VAD and
train the network using the training part of the CHiME-4
data. GEV utilizes the Blind Analytic Normalization (BAN)
postfilter for obtaining its final enhanced output signal.
All systems/algorithms operate in the STFT domain with the
FFT length of 512 and hop-size of 128 using the Hamming
window; the sampling frequency is 16 kHz. Block OGIVEw is
applied with Nb = 170 which corresponds to the block length
of 1.4 s. Block AuxIVE is applied with Nb = 250 ≈ 2 s.
These values have been tuned to optimize the performance
Footnote 8: Microphone 2 is not used in the case of the real-world recordings as, here, it is oriented away from the speaker.
TABLE IV: WERs [%] achieved in the CHiME-4 challenge.

System        | Development REAL | Development SIMU | Test REAL | Test SIMU
Unprocessed   | 9.83             | 8.86             | 19.90     | 10.79
BeamformIt    | 5.77             | 6.76             | 11.52     | 10.91
GEV (VAD)     | 4.61             | 4.65             | 8.10      | 5.99
OGIVEw        | 5.59             | 4.96             | 9.51      | 6.34
Block OGIVEw  | 5.64             | 4.84             | 8.98      | 6.21
AuxIVE        | 5.97             | 5.21             | 10.43     | 6.82
Block AuxIVE  | 5.53             | 4.67             | 9.65      | 6.43
of these methods. All the proposed methods are initialized
by the Relative Transfer Function (RTF) estimator from [53];
Channel 5 of the data is selected as the target one (the spatial
image of the speech signal of this channel is being estimated).
The results shown in Table IV indicate that all methods
are able to improve the WER compared to the unprocessed
case. The GEV beamformer endowed with the pretrained VAD
achieves the best results. Comparable rates are also achieved
by the proposed unsupervised techniques; the WER of Block
AuxIVE is higher by a mere 0− 1.5%.
In general, the block-wise methods achieve lower WER than
their counterparts based on the standard mixing model; the
WER of Block OGIVEw is comparable with Block AuxIVE.
A significant advantage of the latter method is the faster
convergence and, consequently, much lower computational
burden. The total duration of the 5920 files in the CHiME
dataset is 10 hours and 5 minutes. The results presented for
Block OGIVEw have been achieved after 100 iterations on
each file, which translates into 7 hours and 45 minutes (see footnote 9) of
processing for the whole dataset. Block AuxIVE is able to
converge in 5 iterations; the whole enhancement has been
finished in 57 minutes.
An example of the enhancement yielded by the proposed
methods on one of the CHiME-4 recordings is shown in Fig. 4.
Within this particular recording, in the interval 1.75−3 s, the target speaker moved out of the initial position. The AuxIVE algorithm focused on this initial direction only, so the extracted voice vanishes during the movement interval.
Consequently, the automatic transcription is erroneous. In
contrast, Block AuxIVE is able to focus on both positions
of the speaker and recovers the signal of interest correctly.
V. CONCLUSIONS
We have proposed new IVE algorithms for BSE based on
the auxiliary function-based optimization. The algorithms are
Footnote 9: The computations run on a workstation endowed with Intel i7-[email protected] processor with 16GB RAM.
[Fig. 3 consists of two sets of four bar-histogram panels (Block, Non-Block, Block Piloted, Non-Block Piloted); the horizontal axis is iSINR [dB] from −20 to 20 and the vertical axis is Trials [%].]
(a) Percentage histogram of SINR improvement for each variation of AuxIVE method over full dataset.
(b) Percentage histogram of SINR improvement for the variants of AuxIVE over the subset with small movements of the SOI.
Fig. 3: Histograms of SINR improvement achieved by the variants of AuxIVE in the experiment of Section IV-B.
[Fig. 4 shows the time-domain waveforms (Time [s] from 0 to 4) of the signals enhanced by Block AuxIVE and AuxIVE, with the annotation "Speaker out of focused position of AuxIVE" marking the interval where the speaker moved, together with the transcriptions:
REF: IT WOULD TAKE A NEW CHAIRMAN THE EXECUTIVE IS SAID TO HAVE REPLIED
Bl. AuxIVE: IT WOULD TAKE A NEW CHAIRMAN THE EXECUTIVE IS SAID TO HAVE REPLIED
AuxIVE: IT WOULD TAKE A NEW CHAIRMAN THE EXECUTIVE EFFECTIVE REPLIED]
Fig. 4: Comparison of enhanced signals yielded from a recording of a moving speaker by AuxIVE and Block AuxIVE.
shown to be faster in convergence than their gradient-based
counterparts. The block-based algorithms enable us to extract
a moving source by estimating a separating filter that passes
signals from the entire area of the source presence. This
way, the moving source can be extracted efficiently without
tracking in an on-line fashion. The experiments show that
these methods need not necessarily be more accurate (achieve
higher SINR) than standard methods, especially, when the
source is almost static. However, they are particularly robust
with respect to small source movements. For the future, they
provide us with alternatives to the conventional approaches
that adapt to the source movements through application of
static mixing models on short time-intervals.
Furthermore, we have proposed the semi-supervised variants
of (Block) AuxIVE utilizing pilot signals. The experiments
confirm that such algorithms yield stable global convergence
to the SOI even when the pilot signal is only a roughly pre-
extracted SOI containing a considerable residual of noise and
interference.
REFERENCES
[1] P. Comon and C. Jutten, Handbook of Blind Source Separation: Indepen-
dent Component Analysis and Applications, ser. Independent Component
Analysis and Applications Series. Elsevier Science, 2010.
[2] P. Comon, “Independent component analysis, a new concept?” Signal
Processing, vol. 36, pp. 287–314, 1994.
[3] A. Hyvärinen, J. Karhunen, and E. Oja, Independent Component Anal-
ysis. John Wiley & Sons, 2001.
[4] E. Vincent, T. Virtanen, and S. Gannot, Audio Source Separation and
Speech Enhancement, 1st ed. Wiley Publishing, 2018.
[5] H. Sawada, R. Mukai, S. Araki, and S. Makino, “A robust and precise
method for solving the permutation problem of frequency-domain blind
source separation,” IEEE Transactions on Speech and Audio Processing,
vol. 12, no. 5, pp. 530–538, Sep. 2004.
[6] D. Kitamura, N. Ono, H. Sawada, H. Kameoka, and H. Saruwatari,
“Determined blind source separation unifying independent vector anal-
ysis and nonnegative matrix factorization,” IEEE/ACM Transactions on
Audio, Speech, and Language Processing, vol. 24, no. 9, pp. 1626–1641,
2016.
[7] D. Kitamura, S. Mogami, Y. Mitsui, N. Takamune, H. Saruwatari,
N. Ono, Y. Takahashi, and K. Kondo, “Generalized independent low-
rank matrix analysis using heavy-tailed distributions for blind source
separation,” EURASIP Journal on Advances in Signal Processing, vol.
2018, no. 1, p. 28, May 2018.
[8] J. F. Cardoso, “Blind signal separation: statistical principles,” Proceed-
ings of the IEEE, vol. 86, no. 10, pp. 2009–2025, Oct 1998.
[9] T. Kim, H. T. Attias, S.-Y. Lee, and T.-W. Lee, “Blind source separation
exploiting higher-order frequency dependencies,” IEEE Transactions on
Audio, Speech, and Language Processing, pp. 70–79, Jan. 2007.
[10] A. Yeredor, “Blind separation of gaussian sources via second-order
statistics with asymptotically optimal weighting,” IEEE Signal Process-
ing Letters, vol. 7, no. 7, pp. 197–200, July 2000.
[11] D.-T. Pham and J. F. Cardoso, “Blind separation of instantaneous mix-
tures of nonstationary sources,” IEEE Transactions on Signal Processing,
vol. 49, no. 9, pp. 1837–1848, Sep 2001.
[12] P. Tichavský and A. Yeredor, “Fast approximate joint diagonalization in-
corporating weight matrices,” IEEE Transactions on Signal Processing,
vol. 57, no. 3, pp. 878–891, March 2009.
[13] Y. Li, T. Adalı, W. Wang, and V. D. Calhoun, “Joint blind source sep-
aration by multiset canonical correlation analysis,” IEEE Transactions
on Signal Processing, vol. 57, no. 10, pp. 3918–3929, Oct 2009.
[14] A. Yeredor, “Blind separation of gaussian sources with general covari-
ance structures: Bounds and optimal estimation,” IEEE Transactions on
Signal Processing, vol. 58, no. 10, pp. 5057–5068, Oct 2010.
[15] M. Anderson, T. Adalı, and X. Li, “Joint blind source separation with
multivariate gaussian model: Algorithms and performance analysis,”
IEEE Transactions on Signal Processing, vol. 60, no. 4, pp. 1672–1683,
April 2012.
[16] D. Lahat and C. Jutten, “Joint independent subspace analysis us-
ing second-order statistics,” IEEE Transactions on Signal Processing,
vol. 64, no. 18, pp. 4891–4904, Sept 2016.
[17] A. Yeredor, “Tv-sobi: An expansion of sobi for linearly time-varying
mixtures,” in Proceedings of The 4th International Symposium on Inde-
pendent Component Analysis and Blind Source Separation (ICA2003),
April 2003.
[18] T. Weisman and A. Yeredor, “Separation of periodically time-varying
mixtures using second-order statistics,” in Independent Component Anal-
ysis and Blind Signal Separation, J. Rosca, D. Erdogmus, J. C. Príncipe,
and S. Haykin, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg,
2006, pp. 278–285.
[19] Z. Koldovský, J. Málek, and J. Janský, “Extraction of independent
vector component from underdetermined mixtures through block-wise
determined modeling,” in Proceedings of IEEE International Conference
on Audio, Speech and Signal Processing, vol. 7903–7907, May 2019.
[20] T. Taniguchi, N. Ono, A. Kawamura, and S. Sagayama, “An auxiliary-
function approach to online independent vector analysis for real-time
blind source separation,” in 2014 4th Joint Workshop on Hands-free
Speech Communication and Microphone Arrays (HSCMA), May 2014,
pp. 107–111.
[21] A. Hyvärinen, “Fast and robust fixed-point algorithm for independent
component analysis,” IEEE Transactions on Neural Networks, vol. 10,
no. 3, pp. 626–634, 1999.
[22] N. Delfosse and P. Loubaton, “Adaptive blind separation of independent
sources: A deflation approach,” Signal Processing, vol. 45, no. 1, pp.
59 – 83, 1995.
[23] I. Lee, T. Kim, and T.-W. Lee, “Fast fixed-point independent vector
analysis algorithms for convolutive blind source separation,” Signal
Processing, vol. 87, no. 8, pp. 1859–1871, 2007.
[24] Z. Koldovský and P. Tichavský, “Gradient algorithms for complex non-
gaussian independent component/vector extraction, question of conver-
gence,” IEEE Transactions on Signal Processing, vol. 67, no. 4, pp.
1050–1064, Feb 2019.
[25] V. Kautský, Z. Koldovský, and P. Tichavský, “Cramér-Rao-induced
bound for interference-to-signal ratio achievable through non-gaussian
independent component extraction,” in 2017 IEEE International Work-
shop on Computational Advances in Multi-Sensor Adaptive Processing
(CAMSAP), Dec 2017, pp. 94–97.
[26] ——, “Performance bound for blind extraction of non-Gaussian
complex-valued vector component from Gaussian background,” in Pro-
ceedings of IEEE International Conference on Audio, Speech and Signal
Processing, vol. 5287–5291, May 2019.
[27] V. Kautský, Z. Koldovský, P. Tichavský, and V. Zarzoso, “Cramér-
Rao bounds for complex-valued independent component extraction:
Determined and piecewise determined mixing models,” arXiv e-prints,
p. arXiv:1907.08790, Jul 2019.
[28] F. Nesta and Z. Koldovský, “Supervised independent vector analysis
through pilot dependent components,” in Proceedings of IEEE Interna-
tional Conference on Audio, Speech and Signal Processing, March 2017,
pp. 536–540.
[29] N. Ono, “Stable and fast update rules for independent vector analysis
based on auxiliary function technique,” in Proceedings of IEEE Work-
shop on Applications of Signal Processing to Audio and Acoustics, 2011,
pp. 189–192.
[30] K. Matsuoka and S. Nakashima, “Minimal distortion principle for
blind source separation,” in Proceedings of International Conference
on Independent Component Analysis and Signal Separation, Dec. 2001,
pp. 722–727.
[31] Z. Koldovský and F. Nesta, “Performance analysis of source image
estimators in blind source separation,” IEEE Transactions on Signal
Processing, vol. 65, no. 16, pp. 4166–4176, Aug. 2017.
[32] S. Fortunati, F. Gini, M. S. Greco, and C. D. Richmond, “Performance
bounds for parameter estimation under misspecified models: Funda-
mental findings and applications,” IEEE Signal Processing Magazine,
vol. 34, no. 6, pp. 142–157, Nov 2017.
[33] H. L. Van Trees, Optimum Array Processing: Part IV of Detection,
Estimation, and Modulation Theory. John Wiley & Sons, Inc., 2002.
[34] A. Yeredor, “On hybrid exact-approximate joint diagonalization,” in
2009 3rd IEEE International Workshop on Computational Advances in
Multi-Sensor Adaptive Processing (CAMSAP), Dec 2009, pp. 312–315.
[35] N. Ono, “Auxiliary-function-based independent vector analysis with
power of vector-norm type weighting functions,” in Proceedings of The
2012 Asia Pacific Signal and Information Processing Association Annual
Summit and Conference, Dec 2012, pp. 1–4.
[36] R. Scheibler and N. Ono, “Independent vector analysis with more
microphones than sources,” CoRR, vol. abs/1905.07880, 2019. [Online].
Available: http://arxiv.org/abs/1905.07880
[37] L. C. Parra and C. V. Alvino, “Geometric source separation: merg-
ing convolutive source separation with geometric beamforming,” IEEE
Transactions on Speech and Audio Processing, vol. 10, no. 6, pp. 352–
362, Sep. 2002.
[38] A. H. Khan, M. Taseska, and E. A. P. Habets, A Geometrically
Constrained Independent Vector Analysis Algorithm for Online Source
Extraction. Cham: Springer International Publishing, 2015, pp. 396–
403.
[39] A. Brendel, T. Haubner, and W. Kellermann, “Spatially Informed In-
dependent Vector Analysis,” arXiv e-prints, p. arXiv:1907.09972, Jul
2019.
[40] S. Bhinge, R. Mowakeaa, V. D. Calhoun, and T. Adalı, “Extraction of
time-varying spatiotemporal networks using parameter-tuned constrained
IVA,” IEEE Transactions on Medical Imaging, vol. 38, no. 7, pp. 1715–
1725, July 2019.
[41] F. Nesta, S. Mosayyebpour, Z. Koldovský, and K. Paleček, “Audio/video
supervised independent vector analysis through multimodal pilot de-
pendent components,” in Proceedings of European Signal Processing
Conference, Sep. 2017, pp. 1190–1194.
[42] J. Čmejla, T. Kounovský, J. Málek, and Z. Koldovský, “Independent
vector analysis exploiting pre-learned banks of relative transfer functions
for assumed target’s positions,” in Latent Variable Analysis and Signal
Separation, Y. Deville, S. Gannot, R. Mason, M. D. Plumbley, and
D. Ward, Eds. Cham: Springer International Publishing, 2018, pp.
270–279.
[43] S. Mogami, N. Takamune, D. Kitamura, H. Saruwatari, Y. Takahashi,
K. Kondo, and N. Ono, “Independent low-rank matrix analysis based
on time-variant sub-gaussian source model for determined blind source
separation,” IEEE/ACM Transactions on Audio, Speech, and Language
Processing, vol. 28, pp. 503–518, 2020.
[44] J. B. Allen and D. A. Berkley, “Image method for efficiently simulating
small-room acoustics,” The Journal of the Acoustical Society of America,
vol. 65, no. 4, pp. 943–950, 1979.
[45] J. Čmejla, T. Kounovský, S. Gannot, Z. Koldovský, and P. Tandeitnik,
“Mirage: Multichannel database of room impulse responses measured
on high-resolution cube-shaped grid in multiple acoustic conditions,”
2019.
[46] E. Vincent, S. Watanabe, A. A. Nugraha, J. Barker, and R. Marxer, “An
analysis of environment, microphone and data simulation mismatches in
robust speech recognition,” Computer Speech & Language, 2016.
[47] Z. Koldovský, J. Málek, P. Tichavský, and F. Nesta, “Semi-blind noise
extraction using partially known position of the target source,” IEEE
Transactions on Audio, Speech, and Language Processing, vol. 21,
no. 10, pp. 2029–2041, Oct 2013.
[48] J. Málek, Z. Koldovský, and M. Boháč, “Block-online multi-
channel speech enhancement using dnn-supported relative transfer
function estimates,” IET Signal Processing, 2019. [Online].
Available: https://digital-library.theiet.org/content/journals/10.1049/iet-
spr.2019.0304
[49] X. Anguera, C. Wooters, and J. Hernando, “Acoustic beamforming for
speaker diarization of meetings,” IEEE Transactions on Audio, Speech,
and Language Processing, vol. 15, no. 7, pp. 2011–2022, 2007.
[50] The 4th CHiME speech separation and recogni-
tion challenge. Accessed: 2019-12-02. [Online]. Available:
http://spandh.dcs.shef.ac.uk/chime challenge/chime2016/
[51] J. Heymann, L. Drude, and R. Haeb-Umbach, “Neural network based
spectral mask estimation for acoustic beamforming,” in 2016 IEEE
International Conference on Acoustics, Speech and Signal Processing
(ICASSP), March 2016, pp. 196–200.
[52] ——, “Wide residual BLSTM network with discriminative speaker
adaptation for robust speech recognition,” in Proc. of the 4th Intl.
Workshop on Speech Processing in Everyday Environments, CHiME-4,
2016.
[53] S. Gannot, D. Burshtein, and E. Weinstein, “Signal enhancement using
beamforming and nonstationarity with applications to speech,” IEEE
Transactions on Signal Processing, vol. 49, no. 8, pp. 1614–1626, Aug
2001.
| 0non-cybersec
| arXiv |
How are jQuery event handlers queued and executed?. <p>I have an input form, with a submit button. I don't want the user to be able to double click the submit button and double submit the form...</p>
<p>So I have added the following jQuery to my Form:</p>
<pre><code>var prevSubmitTime = new Date('2000-01-01');
function preventFromBeingDoubleSubmitted() {
$('form').each(function () {
$(this).submit(function (e) {
if ($("form").valid()) {
var curSubmitTime = new Date($.now());
// prevent the second submit if it is within 2 seconds of the first submit
if (curSubmitTime - prevSubmitTime < 2000) {
e.preventDefault();
}
prevSubmitTime = new Date($.now());
}
});
});
}
$(document).ready(function () {
preventFromBeingDoubleSubmitted();
});
</code></pre>
<p>The above code stores the submit time and prevents the second submit, if it is too early (less than 2 seconds), I don't want to permanently disable the submit button, in case there is a server side error...</p>
<p>This code does what I want, but when debugging the code, I can never hit a break point on <code>e.preventDefault();</code>... even if I double click the submit button.</p>
<p>It looks like the second submit event is waiting for the first submit event to complete before firing. </p>
<p>But, if I remove <code>preventFromBeingDoubleSubmitted()</code> function, then I would be able to double submit the form, by double clicking the submit button.</p>
<p>Can anyone explain why sometimes the submit events are fired immediately one after the other... and sometimes it is not the case? Does putting the event handler inside <code>.each()</code>, affects their execution behavior?</p>
| 0non-cybersec
| Stackexchange |
Repeating preactions. <p>I'm trying to use multiple similar preactions. Since I would prefer to automatise that I tried to use foreach with preactions. But it appears that's not working.</p>
<p>Let's see an example</p>
<pre><code>\documentclass{minimal}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
\draw node
[preaction={fill,transform canvas={xscale=1.03,yscale=1.03},red}]
[preaction={fill,transform canvas={xscale=1.02,yscale=1.02},green}]
[preaction={fill,transform canvas={xscale=1.01,yscale=1.01},blue}]
[text width=10em, text height=10em,circle,fill=white] {} ;
\end{tikzpicture}
\end{document}
</code></pre>
<p>That will produce a (ugly) node with three coloured borders. Now I know that loops exist, I may want to use ten colours and I'm a lazy guy. So how can I use foreach to avoid copying-pasting ten times the same code? </p>
<p>I tried what seemed the most natural to me:</p>
<pre><code>\draw node
\foreach \scale/\color in {1.03/red, 1.02/green, 1.01/blue} {
[preaction={draw,transform canvas={xscale=\scale,yscale=\scale},\color}]
}
[text width=10em, text height=10em,circle,fill=white] {} ;
</code></pre>
<p>And the compiler complains:</p>
<pre>
ERROR: Package tikz Error: A node must have a (possibly empty) label text.
l.59 \foreach
\scale/\color in {1.03/red, 1.02/green, 1.01/blue} {
</pre>
<p>Any idea would be welcome.</p>
| 0non-cybersec
| Stackexchange |
Python list all submodules imported from module. <p>My code starts with the following:</p>
<pre><code>from module import *
</code></pre>
<p>How can I get a list of all <code>submodules</code> imported from this module.</p>
<p>For example module has:</p>
<pre><code>module.submodule
module.submodule2
module.anothersubmodule
</code></pre>
<p>I want to get after <code>from module import *</code>:</p>
<pre><code>[submodule, submodule2, anothersubmodule]
</code></pre>
<p>(not strings with name of <code>submodules</code>, but a list with all <code>submodules</code> themselves)</p>
<p>UPD: I understood that I asked about XY problem.
So here's what i'm trying to achieve:
I have a folder called <code>modules</code> that will have a bunch of scripts following the same pattern. They will all have a function like <code>main()</code>. In my main script i want to import them all and iterate like that:</p>
<pre><code>for i in modules:
i.main(*some_args)
</code></pre>
| 0non-cybersec
| Stackexchange |
Bootable Linux USB stick for micro-usb only Windows machine?. <p>Is there a way to use a bootable usb-stick for a Windows 64bit device which has only a micro-usb? For example the Lenovo Yoga Book.</p>
| 0non-cybersec
| Stackexchange |
Fittit, it is Sunday. Tell us your Victory this week.. **Welcome to the 184th Victory Sunday Thread**
It is Sunday, 3:30 p.m. in Grand Junction, CO. It's time to ask yourself: What was the one, best thing you did on behalf of your fitness this week? What was your Fitness Victory?
**We want to hear about it!**
**Here are the top five victories from last week**
* **Strikerjones (45 points)** I finally succeeded in loading a [300 pound Atlas stone to 48"](http://www.youtube.com/watch?v=G7L2JSShvHY&sns=em) at a body weight of 173 after missing it repeatedly for the last two months.
* **stayangry (35 points)** I'm on a cut, and I don't drink often, but yesterday I decided I was going to get drunk, so I went and did an hour of cardio to prep for the extra calories. But when I was done, I thought, fuck it, I'll just take this calorie deficit to the bank, and just continued my normal routine for the rest of the day. Not having a hangover this morning feels great.
* **iVirTroll (19 points)** This week marked 1 year on my weight loss journey! 60lbs down and weighed in at 198.8, putting me under 200 for the first time since I was in middle school, I'm now a junior in college. 6'1/20/M.
* **MSJ2 (18 points)**Hit a bench press single of 275 - something I've been aiming for for the past 2 years
* **almuftah (16 points)** Yesterday, my dad bought my younger sisters a bunk bed. It was too heavy for my dad to move it himself so I had to move it. My dad began to build it, but realized the pieces were heavy to move alone. I saw my dad needed some help so I told him 'go relax and ill build it' after me convincing him a few hours later it was complete. Seeing the smiles on my sisters faces and letting my dad relax I thought to myself 'this is why i lift.'
And now it's your turn. **Let's hear your fitness Victory this week! Don't forget to upvote your favorite Victories!**
*****
*The Victory Sunday thread is posted Sundays before 2pm Eastern Time. ^most ^^of ^^the ^^^time ^^^anyway.
If someone wants to volunteer to post next week's that'd be great! I likely won't be close to a keyboard and I ain't doing all this formatting on mobile. | 0non-cybersec
| Reddit |
How to re-install Linux from bootable usb stick?. <p>I’m trying to re-install Linux with a bootable usb stick. After I entered the bios model and saw the device under the submenu of boot, I just couldn’t open it or enter it to start the installation. Very confused.... By the way the creation of the bootable usb from ISOs was done by startup disk creator. I think something went wrong but I can’t figure it out.</p>
<p><a href="https://i.stack.imgur.com/F0Lb7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/F0Lb7.png" alt="Screenshot of bios"></a></p>
| 0non-cybersec
| Stackexchange |
Cartopy: order of rendering layers with scatter data. <p>I am trying to plot position of several points (scatter plot) on a map using Cartopy (see code below). When I try to render the plot, data-points are rendered behind LAND-layer. But I want to plot my scatter-data over LAND-layer... What I am doing wrong?</p>
<p>Cartopy: ver. 0.12.x, Matplotlib: ver.1.4.2</p>
<pre><code>import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy.feature as cfeature
ax = plt.axes(projection=ccrs.PlateCarree())
ax.set_extent([125, 150, 35, 63])
ax.stock_img()
ax.add_feature(cfeature.LAND) #If I comment this => all ok, but I need
ax.add_feature(cfeature.LAKES)
ax.add_feature(cfeature.RIVERS)
ax.coastlines()
ax.scatter(yc,xc,transform=ccrs.PlateCarree()) #yc, xc -- lists or numpy arrays
plt.show()
</code></pre>
<p><img src="https://i.stack.imgur.com/Lac6Y.png" alt="Points shown under the LAND layer"> </p>
<p><img src="https://i.stack.imgur.com/8MiY8.png" alt="Plot without LAND-layer"></p>
| 0non-cybersec
| Stackexchange |
Google the guy who increased the price of a pill to $750, Martin Shkreli : ). | 0non-cybersec
| Reddit |
Print "hello world" every X seconds. <p>Lately I've been using loops with large numbers to print out <code>Hello World</code>:</p>
<pre><code>int counter = 0;
while(true) {
//loop for ~5 seconds
for(int i = 0; i < 2147483647 ; i++) {
//another loop because it's 2012 and PCs have gotten considerably faster :)
for(int j = 0; j < 2147483647 ; j++){ ... }
}
System.out.println(counter + ". Hello World!");
counter++;
}
</code></pre>
<p>I understand that this is a very silly way to do it, but I've never used any timer libraries in Java yet. How would one modify the above to print every say 3 seconds?</p>
| 0non-cybersec
| Stackexchange |
broadFileSystemAccess UWP. <p>I'm trying to use <code>broadFileSystemAccess</code> Capability for UWP apps, But <code>broadFileSystemAccess</code> capability is not listed in my list of capabilites in Package.appxmanifest. </p>
<p>My min and max target version is 1803, build 17134, Please help me with this.</p>
| 0non-cybersec
| Stackexchange |
Wrapping urllib3.HTTPResponse in io.TextIOWrapper. <p>I use AWS <code>boto3</code> library which returns me an instance of <code>urllib3.response.HTTPResponse</code>. That response is a subclass of <code>io.IOBase</code> and hence behaves as a binary file. Its <code>read()</code> method returns <code>bytes</code> instances.</p>
<p>Now, I need to decode <code>csv</code> data from a file received in such a way. I want my code to work on both <code>py2</code> and <code>py3</code> with minimal code overhead, so I use <code>backports.csv</code> which relies on <code>io.IOBase</code> objects as input rather than on py2's <code>file()</code> objects.</p>
<p>The first problem is that <code>HTTPResponse</code> yields <code>bytes</code> data for CSV file, and I have <code>csv.reader</code> which expects <code>str</code> data.</p>
<pre><code>>>> import io
>>> from backports import csv # actually try..catch statement here
>>> from mymodule import get_file
>>> f = get_file() # returns instance of urllib3.HTTPResponse
>>> r = csv.reader(f)
>>> list(r)
Error: iterator should return strings, not bytes (did you open the file in text mode?)
</code></pre>
<p>I tried to wrap <code>HTTPResponse</code> with <code>io.TextIOWrapper</code> and got error <code>'HTTPResponse' object has no attribute 'read1'</code>. This is expected becuase <code>TextIOWrapper</code> is intended to be used with <code>BufferedIOBase</code> objects, not <code>IOBase</code> objects. And it only happens on <code>python2</code>'s implementation of <code>TextIOWrapper</code> because it always expects underlying object to have <code>read1</code> (<a href="https://github.com/python/cpython/blob/2.7/Modules/_io/textio.c#L1429" rel="nofollow noreferrer">source</a>), while <code>python3</code>'s implementation checks for <code>read1</code> existence and falls back to <code>read</code> gracefully (<a href="https://github.com/python/cpython/blob/3.6/Modules/_io/textio.c#L1495" rel="nofollow noreferrer">source</a>).</p>
<pre><code>>>> f = get_file()
>>> tw = io.TextIOWrapper(f)
>>> list(csv.reader(tw))
AttributeError: 'HTTPResponse' object has no attribute 'read1'
</code></pre>
<p>Then I tried to wrap <code>HTTPResponse</code> with <code>io.BufferedReader</code> and then with <code>io.TextIOWrapper</code>. And I got the following error:</p>
<pre><code>>>> f = get_file()
>>> br = io.BufferedReader(f)
>>> tw = io.TextIOWrapper(br)
>>> list(csv.reader(f))
ValueError: I/O operation on closed file.
</code></pre>
<p>After some investigation it turns out that the error only happens when the file doesn't end with <code>\n</code>. If it does end with <code>\n</code> then the problem does not happen and everything works fine.</p>
<p>There is some additional logic for closing underlying object in <code>HTTPResponse</code> (<a href="https://github.com/shazow/urllib3/blob/master/urllib3/response.py#L385" rel="nofollow noreferrer">source</a>) which is seemingly causing the problem.</p>
<p><strong>The question is:</strong> how can I write my code to</p>
<ul>
<li>work on both python2 and python3, preferably with no try..catch or version-dependent branching;</li>
<li>properly handle CSV files represented as <code>HTTPResponse</code> regardless of whether they end with <code>\n</code> or not?</li>
</ul>
<p>One possible solution would be to make a custom wrapper around <code>TextIOWrapper</code> which would make <code>read()</code> return <code>b''</code> when the object is closed instead of raising <code>ValueError</code>. But is there any better solution, without such hacks?</p>
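<p>For what it's worth, here is a minimal sketch of the buffering approach mentioned above: it trades memory for simplicity by reading the whole body before wrapping it, which sidesteps both the missing <code>read1</code> and the premature-close behaviour. <code>get_file</code> and the utf-8 encoding are assumptions carried over from the question.</p>
<pre><code>import io

try:
    from backports import csv   # Python 2
except ImportError:
    import csv                   # Python 3

def read_csv_rows(response, encoding='utf-8'):
    """Buffer the whole HTTP body, then hand a text stream to csv.reader."""
    payload = response.read()                       # bytes on both Python 2 and 3
    text_stream = io.TextIOWrapper(io.BytesIO(payload), encoding=encoding)
    return list(csv.reader(text_stream))

# rows = read_csv_rows(get_file())
</code></pre>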
| 0non-cybersec
| Stackexchange |
Conservative extenstion, identifying objects of different type. <p>In a comment <a href="https://terrytao.wordpress.com/books/analysis-i/" rel="nofollow noreferrer">here</a>, Terence Tao says:</p>
<blockquote>
<p>in practice we often “abuse notation” by identifying objects of one type with another, e.g. identifying the natural number 3 with the integer +3, the rational 3/1, and the real 3.0; this is technically a violation of the usual laws of typed first-order logic, but can be justified by passing to a suitable conservative extension of the original mathematical theory</p>
</blockquote>
<p>How can one construct this conservative extension in which one identifies these objects of different type? Can one formalize Tao's idea?</p>
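<p>Not an answer, just a sketch of the standard construction in the simplest case $\mathbb{N}\hookrightarrow\mathbb{Z}$, to make the question concrete (the notation is mine, not Tao's):</p>
<pre><code>% Work in a two-sorted theory T with sorts N and Z. Form T' by adding a new
% function symbol \iota : \mathbb{N} \to \mathbb{Z} and the axioms
\iota(0_{\mathbb{N}}) = 0_{\mathbb{Z}}, \qquad
\iota(n +_{\mathbb{N}} m) = \iota(n) +_{\mathbb{Z}} \iota(m), \qquad
\iota(n) = \iota(m) \rightarrow n = m.
% Any model of T expands to a model of T' by interpreting \iota as the
% canonical embedding, so T' proves no new sentences of the old language:
% T' is conservative over T, and "the natural number 3 is the integer +3"
% is shorthand for \iota(3) = +3.
</code></pre>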
| 0non-cybersec
| Stackexchange |
Watch Kanye West’s VMAs Speech Recut as a Stand-Up Comedy Act. | 0non-cybersec
| Reddit |
How to check whether a record is deleted or not in Room Database?. <p><strong>Using SQLite DB,</strong></p>
<p><strong>db.delete</strong> returns a long value that is the row ID of the new row, or <strong>-1</strong> if an error occurred,
so you can check whether the delete succeeded like this:</p>
<pre><code>int result = db.delete(TABLE_NAME, COLUMN_ID + " = ?",
new String[]{String.valueOf(id)});
if(result != -1){
// Deleted successful
}
</code></pre>
<p>But , in <strong>ROOM DB</strong></p>
<p>we can delete record by :</p>
<pre><code> @Delete
void delete(Notes notes);
</code></pre>
<p>Is there any option to check whether record deleted or not ?</p>
| 0non-cybersec
| Stackexchange |
Two Classes Befriending Each Other. <p>I'm trying to make two classes friends with each other, but I keep getting a "Use of Undefined type A" error message.</p>
<p>Here is my code:</p>
<p>I've tried to add <code>class A;</code> as shown at the top, but the result is still the same.</p>
<pre><code>#include <iostream>
class A;
class B
{
private:
int bVariable;
public:
B() :bVariable(9){}
void showA(A &myFriendA)
{
std::cout << "A.aVariable: " << myFriendA.aVariable << std::endl;// Since B is friend of A, it can access private members of A
}
friend class A;
};
class A
{
private:
int aVariable;
public:
A() :aVariable(7){}
void showB(B &myFriendB){
std::cout << "B.bVariable: " << myFriendB.bVariable << std::endl;
}
friend class B; // Friend Class
};
int main() {
A a;
B b;
b.showA(a);
a.showB(b);
system("pause");
return 0;
}
</code></pre>
<p>I'm trying to make class A access class B and vice versa via the friendship.</p>
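<p>The usual fix, sketched below: the body of <code>showA</code> uses A's members, so it cannot be compiled while A is still only forward-declared. Declare the member inside B and define it after A is complete.</p>
<pre><code>#include <iostream>

class A;            // forward declaration: A is incomplete here

class B
{
private:
    int bVariable;
public:
    B() : bVariable(9) {}
    void showA(A &myFriendA);          // declaration only
    friend class A;
};

class A
{
private:
    int aVariable;
public:
    A() : aVariable(7) {}
    void showB(B &myFriendB)
    {
        std::cout << "B.bVariable: " << myFriendB.bVariable << std::endl;
    }
    friend class B;
};

// A is a complete type from here on, so its private members may be used.
void B::showA(A &myFriendA)
{
    std::cout << "A.aVariable: " << myFriendA.aVariable << std::endl;
}

int main()
{
    A a;
    B b;
    b.showA(a);
    a.showB(b);
    return 0;
}
</code></pre>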
| 0non-cybersec
| Stackexchange |
Say hi to Smokey. | 0non-cybersec
| Reddit |
Reference to a subfigure. <p>I'm using <code>subfigure</code> for some images. To reference to them in the text, I use the <code>\autoref{fig:test}</code> command. But this doesn't matter the problem is the same with <code>\ref</code> command.</p>
<p>To shorten the name I used: <code>\addto\extrasngerman{\def\figureautorefname{Abb.}}</code> and to have arabic numbers instead of chars: <code>\renewcommand*\thesubfigure{\arabic{subfigure}}</code></p>
<p>Problem (still without the modifications above):
It refers to it with </p>
<blockquote>
<p>11</p>
</blockquote>
<p>, but I want to have it referred like </p>
<blockquote>
<p>1.1</p>
</blockquote>
<p>MWE:</p>
<pre><code>\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[ngerman]{babel}
\usepackage[hidelinks]{hyperref}
\usepackage{graphicx}
\usepackage{caption}
\usepackage{subcaption}
\addto\extrasngerman{\def\figureautorefname{Abb.}}
\renewcommand*\thesubfigure{\arabic{subfigure}}
\begin{document}
\section{Test}
Hello. This is some text. I'm referring to a the test image (\autoref{fig:Test1}). Or to the second image with the ref command (Abb. \ref{fig:Test2}). What I want to have: (Abb. 1.2)
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{bmp0_test_image.png}
\caption{TestCaption1}
\label{fig:Test1}
\end{subfigure}%
~
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{bmp0_test_image.png}
\caption{TestCaption2}
\label{fig:Test2}
\end{subfigure}
\caption{Test-Figure}\label{fig:TestFigure}
\end{figure}
\end{document}
</code></pre>
<p><img src="https://i.stack.imgur.com/6FtOU.png" alt="Output"></p>
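<p>One approach that is often suggested (I have not checked it against every subcaption version, so treat it as a sketch): keep the arabic sub-counter and change only the prefix that is prepended when a subfigure is referenced.</p>
<pre><code>\renewcommand*\thesubfigure{\arabic{subfigure}}% subcaption labels stay 1, 2, ...
\makeatletter
\renewcommand{\p@subfigure}{\thefigure.}% \ref/\autoref now print e.g. 1.2
\makeatother
</code></pre>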
| 0non-cybersec
| Stackexchange |
10 Diana Tips by Arcsecond. | 0non-cybersec
| Reddit |
How to add customized icon to Ubuntu top menu bar. <p>I want to create an icon in the top menu bar which lists the USB devices attached to the system. How can I add an icon to the top menu bar? Where are the config files for that? I have gone through <code>/usr/share/applications/*.desktop</code>, but how do I make it visible in the top menu bar?</p>
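<p>Rather than a config file, the usual route is a small app-indicator program that you run at login. A minimal Python sketch (assumes the gir1.2-appindicator3-0.1 package is installed; the icon name and menu text are placeholders, and the actual USB enumeration is left out):</p>
<pre><code>#!/usr/bin/env python3
import gi
gi.require_version('Gtk', '3.0')
gi.require_version('AppIndicator3', '0.1')
from gi.repository import Gtk, AppIndicator3

indicator = AppIndicator3.Indicator.new(
    'usb-lister',                      # an id of your choosing
    'drive-removable-media',           # any icon name from the current theme
    AppIndicator3.IndicatorCategory.APPLICATION_STATUS)
indicator.set_status(AppIndicator3.IndicatorStatus.ACTIVE)

menu = Gtk.Menu()
menu.append(Gtk.MenuItem(label='No USB devices detected yet'))
menu.show_all()
indicator.set_menu(menu)

Gtk.main()
</code></pre>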
| 0non-cybersec
| Stackexchange |
Shift to automation may prevent Trump from delivering on his jobs promise. | 0non-cybersec
| Reddit |
How to use spot instance with amazon elastic beanstalk?. <p>I have one infrastructure that uses Amazon Elastic Beanstalk to deploy my application.
I need to scale my app by adding some spot instances, which EB does not support.</p>
<p>So I created a second autoscaling group from a launch configuration with spot instances.
The autoscaling group uses the same load balancer created by Beanstalk.</p>
<p>To bring up instances with the latest version of my app, I copied the user data from the original launch configuration (created by Beanstalk) to the launch configuration with spot instances (created by me).</p>
<p>This works fine, but:</p>
<ol>
<li><p>how do I update the spot instances that come up from the second autoscaling group when Beanstalk updates the instances managed by it with a new version of the app?</p>
</li>
<li><p>is there another way, as easy and elegant, to use spot instances and still enjoy the benefits of Beanstalk?</p>
</li>
</ol>
<p><strong>UPDATE</strong></p>
<p>Elastic Beanstalk has supported spot instances since 2019; see:
<a href="https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2019-11-25-spot.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2019-11-25-spot.html</a></p>
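<p>With that newer platform support, enabling spot capacity becomes plain configuration; a sketch of an <code>.ebextensions</code> file using the <code>aws:ec2:instances</code> namespace (the instance types and percentages below are placeholders):</p>
<pre><code># .ebextensions/spot.config
option_settings:
  aws:ec2:instances:
    EnableSpot: true
    InstanceTypes: t3.medium,t3a.medium
    SpotFleetOnDemandBase: 1
    SpotFleetOnDemandAboveBasePercentage: 25
</code></pre>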
| 0non-cybersec
| Stackexchange |
New Apple watch already available on Alibaba. | 0non-cybersec
| Reddit |
Node.js undefined:1 [SyntaxError: Unexpected end of input]. <p>I am getting the following error when I execute the Node.js script. I tried to investigate by adding console.log() calls to trace the error, but could not find a solution. [Note: I have also searched other Stack Overflow solutions, but none of them helped.]</p>
<pre><code>undefined:1
{"ydht":{"status":{"code":200,"message":"OK"},"records":[
^
SyntaxError: Unexpected end of input
at Object.parse (native)
at IncomingMessage.<anonymous> (/tmp/subs_20140130/inc/getData.js:36:24)
at IncomingMessage.EventEmitter.emit (events.js:95:17)
at IncomingMessage.<anonymous> (_stream_readable.js:745:14)
at IncomingMessage.EventEmitter.emit (events.js:92:17)
at emitReadable_ (_stream_readable.js:407:10)
at emitReadable (_stream_readable.js:403:5)
at readableAddChunk (_stream_readable.js:165:9)
at IncomingMessage.Readable.push (_stream_readable.js:127:10)
at HTTPParser.parserOnBody [as onBody] (http.js:142:22)
</code></pre>
<p>Here is my code:</p>
<pre><code>var options = {
host: '<my host>',
port: 3128,
path: 'http://<some host>:4080'+searchQuery,
method: 'GET',
headers: {
'App-Auth': cert
}
};
var req = http.request(options, function(res) {
res.setEncoding('utf8'); //DEBUG
for ( var k in options) { console.log("[LOGGING] options :" + k + " = " + options[k]);} //DEBUG
res.on('data', function (resData) {
var resObj = "";
resObj = JSON.parse(resData);
console.log("[LOGGING] Response:: "+resObj);
if(resObj.ydht.status.code === 200 && resObj.ydht.records[0].key.length > 0) {
console.log("[LOGGING] Email "+em+" Key "+resObj.ydht.records[0].key);
var filePath = basePath + '/setData';
var setd = require(filePath);
setd.setMagData(resObj.ydht.records[0].key, ycacert, is_sub);
} else {
console.log("[LOGGING] Fail to fetch data em "+em+" nl "+nl);
}
});
res.on('end', function() {
console.log("[LOGGING] connection closed");
});
});
req.on('error', function(err) {
console.log("[LOGGING] Fail to fetch data em "+em+" nl "+nl);
});
req.end();
</code></pre>
<p>When I call the api using curl command, I get the below valid json response:</p>
<pre><code>{"ydht":{"status":{"code":200,"message":"OK"},"records":[{"metadata":{"seq_id":"intusnw1-14B3579A577-3","modtime":1422531339,"disk_size":99},"key":"[email protected]","fields":{"em":{"value":"[email protected]"},"is_confirm":{"value":""},"nl":{"value":"offerpop1"}}}],"continuation":{"scan_completed":false,"scan_status":200,"uri_path":"/YDHTWebService/V1/ordered_scan/dts.subs_email?order=asc&start_key=a0"}}}
</code></pre>
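<p>For the record, the usual cause of this error is parsing inside the 'data' handler: the response arrives in chunks, so JSON.parse often sees only a fragment of the JSON. A sketch of the fix, accumulating chunks and parsing once on 'end' (variable names follow the original code):</p>
<pre><code>var req = http.request(options, function (res) {
    res.setEncoding('utf8');
    var body = '';
    res.on('data', function (chunk) {
        body += chunk;                      // each 'data' event is only a fragment
    });
    res.on('end', function () {
        var resObj = JSON.parse(body);      // parse once the full payload has arrived
        console.log("[LOGGING] Response:: " + JSON.stringify(resObj));
        // ... continue with the resObj.ydht.status.code checks as before
    });
});
req.on('error', function (err) {
    console.log("[LOGGING] Request failed: " + err.message);
});
req.end();
</code></pre>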
| 0non-cybersec
| Stackexchange |
Build a Dict of Counts based on Two Dataframe Columns. <p>I have a dataframe that looks like this:</p>
<pre><code> start stop
0 1 2
1 3 4
2 2 1
3 4 3
</code></pre>
<p>I'm trying to build a dictionary with key= (start, stop) pairs from my list of tuples and the value= count of their occurrence, regardless of the order. In other words, (1,2) and (2,1) would both count as an occurrence of the pair (1,2) in the list of tuples.</p>
<p>Desired output: <code>dict_count= {('1','2'):2, ('3','4'):2}</code></p>
<p>Here's my attempt:</p>
<p><code>my_list=[('1','2'),('3','4')]</code></p>
<pre><code>for pair in my_list:
count=0
if ((df[df['start']]==pair[0] and df[df['end']]==pair[1]) or (df[df['start']]==pair[1]) and df[df['end']]==pair[0])::
count+=1
dict_count[pair]=count
</code></pre>
<p>However, this gives me a KeyError:
<code>KeyError: "['1' ...] not in index"</code></p>
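<p>A sketch of one way to get the desired output: normalise each pair so that order does not matter, then let collections.Counter do the counting (column names and the string conversion mirror the question):</p>
<pre><code>from collections import Counter

import pandas as pd

df = pd.DataFrame({'start': [1, 3, 2, 4], 'stop': [2, 4, 1, 3]})

# Sort each (start, stop) pair so that (2, 1) and (1, 2) count as the same key.
pairs = (tuple(sorted(map(str, pair))) for pair in zip(df['start'], df['stop']))
dict_count = dict(Counter(pairs))
print(dict_count)   # {('1', '2'): 2, ('3', '4'): 2}
</code></pre>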
| 0non-cybersec
| Stackexchange |
Nashorn: JavaScript on the JVM FTW. | 0non-cybersec
| Reddit |
Windows XP Task Bar stuck on right side of screen. <p>Due to a misclick/drag, the task bar is on the right side of the screen in Windows XP.</p>
<p>The task bar is not locked, but it will not respond to any drag movement, and is stuck!</p>
<p>Any suggestions or experience with this?</p>
| 0non-cybersec
| Stackexchange |
"Look at my paws, I cleaned them all by myself". | 0non-cybersec
| Reddit |
Brexit minister David Davis accused of 'having no idea what Brexit means' after saying UK wants to stay in single market. | 0non-cybersec
| Reddit |
Element disappears after removing class. <p>I've come across some strange behavior in Chrome 60.0 when removing a class from an element with a very specific configuration.</p>
<p>I removed the <code>fade</code> class from an <code><h1></code> element and it makes it completely disappear. The problem can be reproduced by removing the class in the dev-tools element inspector as well. Can anyone tell me what's going on here?</p>
<p>The element should just go back to full opacity after clicking the button.</p>
<p><img src="https://i.stack.imgur.com/FgJh3.gif" width="300"></p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>var button = document.querySelector('button');
var h1 = document.querySelector('h1');
button.addEventListener('click', function(){
h1.classList.remove('fade');
});</code></pre>
<pre class="snippet-code-css lang-css prettyprint-override"><code>.center {
overflow: hidden;
}
h1 {
float: left;
overflow: hidden;
}
.fade {
opacity: .2;
}</code></pre>
<pre class="snippet-code-html lang-html prettyprint-override"><code><div class="center">
<div>
<h1 class="fade">Watch me disappear</h1>
</div>
</div>
<button>Click</button></code></pre>
</div>
</div>
</p>
| 0non-cybersec
| Stackexchange |
A Different Perspective on the Pokemon Sun and Moon logo. | 0non-cybersec
| Reddit |
Can't find a gtx 1070 pre order anywhere?. Do any of you guys know where i can pre order the 1070? I cannot find it anywhere. | 0non-cybersec
| Reddit |
LPT: If somebody downplays your successes, they are not your friend and you should not spend time with them.. | 0non-cybersec
| Reddit |
This Boy Didn't Cut His Hair For 5 Years. His Mom Gets Choked Up Just Thinking About The Reason.. | 0non-cybersec
| Reddit |
What is "We’ve detected that your app is using an old version of the Google Play developer API" warning in Google Developer Console?. <p>We do not use any Google Play Developer APIs explicitly, yet we are receiving the following warning:</p>
<p><a href="https://i.stack.imgur.com/yAJmZ.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/yAJmZ.jpg" alt="enter image description here"></a></p>
<p>Is this related to <a href="https://developer.android.com/google/play/billing/billing_library_releases_notes" rel="noreferrer">https://developer.android.com/google/play/billing/billing_library_releases_notes</a> ?</p>
<p>We are currently using <strong>Google Play Billing Library 1.2.2 Release (2019-03-07)</strong></p>
<p>We don't plan to migrate to <strong>Google Play Billing Library 2.0.1 Release (2019-06-06)</strong> because it would be a lot of work with little gain.</p>
<blockquote>
<p>Purchases must be acknowledged within three days</p>
</blockquote>
<p>But that is just my wild guess - that the Google Play Billing library is related to the Google Play Developer API. They may or may not be related to each other.</p>
<p>What does it mean by "We’ve detected that your app is using an old version of the Google Play developer API" ?</p>
<p>The following is the full set of our dependencies. Any idea what causes this warning?</p>
<pre><code>dependencies {
implementation fileTree(dir: 'libs', include: ['*.jar'])
implementation 'com.android.billingclient:billing:1.2.2'
implementation 'androidx.multidex:multidex:2.0.1'
def lifecycle_version = '2.0.0-beta01'
// ViewModel and LiveData
implementation "androidx.lifecycle:lifecycle-extensions:$lifecycle_version"
// alternately - if using Java8, use the following instead of compiler
implementation "androidx.lifecycle:lifecycle-common-java8:$lifecycle_version"
def room_version = '2.1.0'
implementation "androidx.room:room-runtime:$room_version"
annotationProcessor "androidx.room:room-compiler:$room_version"
def work_version = "2.1.0"
implementation "androidx.work:work-runtime:$work_version"
// https://github.com/yccheok/SmoothProgressBar
implementation 'com.github.castorflex.smoothprogressbar:library:1.1.0'
// For Google Drive REST API - https://github.com/gsuitedevs/android-samples/blob/master/drive/deprecation/app/build.gradle
implementation('com.google.http-client:google-http-client-gson:1.26.0') {
exclude group: 'org.apache.httpcomponents'
}
implementation('com.google.api-client:google-api-client-android:1.26.0') {
exclude group: 'org.apache.httpcomponents'
}
implementation('com.google.apis:google-api-services-drive:v3-rev136-1.25.0') {
exclude group: 'org.apache.httpcomponents'
}
implementation 'com.google.firebase:firebase-messaging:19.0.1'
implementation 'com.google.android.gms:play-services-auth:17.0.0'
implementation 'androidx.appcompat:appcompat:1.1.0-beta01'
implementation 'androidx.preference:preference:1.1.0-beta01'
implementation 'com.google.android.material:material:1.1.0-alpha07'
implementation 'androidx.exifinterface:exifinterface:1.0.0'
implementation 'androidx.gridlayout:gridlayout:1.0.0'
implementation 'androidx.constraintlayout:constraintlayout:1.1.3'
implementation 'com.google.code.gson:gson:2.8.5'
implementation 'com.github.yccheok:AndroidDraw:0.18'
implementation 'com.github.yccheok:SectionedRecyclerViewAdapter:0.4'
implementation 'com.github.yccheok:CalendarView:1.10'
implementation 'com.andrognito.patternlockview:patternlockview:1.0.0'
implementation 'com.github.bumptech.glide:glide:4.7.1'
annotationProcessor 'com.github.bumptech.glide:compiler:4.7.1'
implementation 'com.github.yccheok:PhotoView:0.1'
implementation 'com.github.yccheok:Matisse:1.6'
implementation 'com.jakewharton.threetenabp:threetenabp:1.1.1'
// https://github.com/romandanylyk/PageIndicatorView
implementation 'com.romandanylyk:pageindicatorview:1.0.2@aar'
implementation 'me.zhanghai.android.materialratingbar:library:1.3.2'
testImplementation 'junit:junit:4.12'
testImplementation "org.robolectric:robolectric:4.2.1"
testImplementation 'org.mockito:mockito-core:2.23.0'
testImplementation 'org.powermock:powermock-core:2.0.0-RC.4'
testImplementation 'org.powermock:powermock-module-junit4:2.0.0-RC.4'
testImplementation 'org.powermock:powermock-api-mockito2:2.0.0-RC.4'
androidTestImplementation 'androidx.test:runner:1.3.0-alpha01'
androidTestImplementation 'androidx.test.espresso:espresso-core:3.3.0-alpha01'
}
</code></pre>
<p>For project level dependencies, it is</p>
<pre><code>dependencies {
classpath 'com.android.tools.build:gradle:3.4.2'
classpath 'com.google.gms:google-services:4.2.0'
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files
}
</code></pre>
| 0non-cybersec
| Stackexchange |
I swear people are crazy. | 0non-cybersec
| Reddit |
How to show $SL_{n}(\mathbb{R})=\bigsqcup_{w\in W}LwU$ where L (or U) are lower (or upper) triangular matrices?. <p>I'd like to ask about a homework problem that has caused me trouble for days. The problem is as follows:</p>
<p>Let W denote the subgroup of permutation matrices in $SL_{n}(\mathbb{R})$. Show the following decomposition.</p>
<p>$SL_{n}(\mathbb{R})=\bigsqcup_{w\in W}LwU$</p>
<p>where L denotes all lower triangular matrices, and U denotes all upper triangular matrices.</p>
<p>I've found some similar statements in <a href="https://math.stackexchange.com/questions/290707/decompose-a-as-a-lpu">Decompose $A$ as $A = LPU$,</a> and <a href="http://en.wikipedia.org/wiki/Bruhat_decomposition" rel="nofollow noreferrer">http://en.wikipedia.org/wiki/Bruhat_decomposition</a>. In particular, the first one is exactly the same as the problem above, except for the group it deals with: in the link, they deal with $GL_{n}(\mathbb{R})$.</p>
<p>I thought the problem has something wrong, because by the argument below :</p>
<p>When considering n=2 case, there exists only one permutation matrix - the identity matrix $I$ (if we regard a "permutation matrix" as the one earned by changing the rows of the identity matrix). Then, the decomposition says all the matrices with determinant 1(pick one of those, say A) can be decomposed as</p>
<p>$A = LU$ where $L = \begin{pmatrix} a_{1} & 0 \\ a_{2} & a_{3} \end{pmatrix}
$ and $U = \begin{pmatrix} b_{1} & b_{2} \\ 0 & b_{3} \end{pmatrix}$ and $a_{1}, a_{3}, b_{1}, b_{3}$ are not zero.</p>
<p>Then, $A = \begin{pmatrix} a_{1}b_{1} & a_{1}b_{2} \\ a_{2}b_{1} & a_{2}b_{2}+a_{3}b_{3} \end{pmatrix}$, where $a_{1}b_{1}$ should not be zero. But there exist elements in $SL_{n}(\mathbb{R})$ whose first row, first column entry is zero. I felt there was something wrong, so I sent an e-mail to my professor. She replied that $S_{n}$ can be embedded into $SL_{n}$ naturally. Hence, following her advice, I examined the case n=2 again. An elementary eigenvalue argument says that $S_{2}$ can be embedded into $SL_{2}(\mathbb{R})$ in only one way, i.e. onto {I, -I}. The only new element is -I, but by the same argument as above, there are elements that cannot be represented by the above decomposition even if we consider -I (it is just a matter of sign).</p>
<p>So I asked the professor again, and she answered that I should consider $PSL_{2}(\mathbb{R})$, not $SL_{2}(\mathbb{R})$, when n=2, and that when n is odd there would be no trouble like this.
She said this makes the problem absolutely correct. I still don't get it, and at this point I have given up.</p>
<p>Is there anybody who can help with this problem? I'm just exhausted.</p>
| 0non-cybersec
| Stackexchange |
I made an LED flagpole beacon for my coachella camp site. | 0non-cybersec
| Reddit |
[USA] [OC] Messy intersection, car turning left lingers too long. | 0non-cybersec
| Reddit |