Characterization of Electro-Optical Devices with Low Jitter Single Photon Detectors -- Towards an Optical Sampling Oscilloscope Beyond 100 GHz (Oct 12 2018): We showcase an optical random sampling scope that exploits single photon counting and apply it to characterize optical transceivers. We study single photon detectors with a jitter down to 40 ps. The method can be extended beyond 100 GHz.
Thin Sequences (Aug 30 2015; Oct 04 2015): We look at thin interpolating sequences and the role they play in uniform algebras, Hardy spaces, and model spaces.
Spectral Characteristics and Stable Ranks for the Sarason Algebra $H^\infty+C$ (Jun 10 2009): We prove a Corona type theorem with bounds for the Sarason algebra $H^\infty+C$ and determine its spectral characteristics. We also determine the Bass, the dense, and the topological stable ranks of $H^\infty+C$.
Corona Solutions Depending Smoothly on Corona Data (Aug 16 2012): In this note we show that if the Corona data depends continuously (smoothly) on a parameter, the solutions of the corresponding Bezout equations can be chosen to have the same smoothness in the parameter.
Neutrino Physics: an Update (Jun 27 2003): We update our recent didactic survey of neutrino physics, including new results from the Sudbury Neutrino Observatory and KamLAND experiments, and recent constraints from WMAP and other cosmological probes.
Neutrino Physics (May 06 1999): The basic concepts of neutrino physics are presented at a level appropriate for integration into elementary courses on quantum mechanics and/or modern physics.
CommonCrawl
I kind of get the idea of singlet and triplet states. But why are they called singlet and triplet (what is the single and what is the triple in these cases)? I feel that I am missing something obvious! The terms arose back in the early days of quantum physics when spectral lines that were expected to be singlets were actually observed to be more complex (doublets, triplets, etc.). A pair of electrons, being fermions, must have an antisymmetric wave function, i.e. if $\psi(\xi_1,\xi_2)$ is a wavefunction describing the system, where $\xi_1$ denotes the position and spin of electron 1 and $\xi_2$ the position and spin of electron 2, then $\psi(\xi_2,\xi_1)=-\psi(\xi_1,\xi_2)$. To a first approximation, the spin degrees of freedom can be separated from the orbital degrees of freedom, so that the wavefunction becomes $\chi(s_1,s_2)\phi(x_1,x_2)$, where $s_i$ is the spin of the $i$th electron and $x_i$ is the position of the $i$th electron. Here $\chi$ is the spin part of the wavefunction and $\phi$ is the orbital part. To preserve the total antisymmetry of the wavefunction, one of $\chi$ and $\phi$ must be symmetric and the other antisymmetric. The spin of a single electron can be up $\uparrow$ or down $\downarrow$, so the simplest options for a two-electron system would be $\uparrow\uparrow$, $\downarrow\downarrow$, $\downarrow\uparrow$ and $\uparrow\downarrow$. But the latter two don't honour the indistinguishability of the electrons. To correctly include indistinguishability, we should take symmetric and antisymmetric linear combinations of these spin states. From here we can see that a symmetric spin part of the wavefunction gives rise to three different states: these are the triplet states. If the spin part of the wavefunction is antisymmetric, there is only one such state: the singlet state. When one makes spectroscopic measurements with not very high resolution, states with different spins but the same orbitals appear to have the same energies, so their spectral lines look identical. But if you put your system in a magnetic field, you'll see the spectral lines split according to spin multiplicity: spin-singlet states remain single lines, while spin-triplets split into three different spectral lines. This is the origin of the naming.
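Written out explicitly (these are the standard textbook combinations, added here for concreteness), the symmetric spin states and the single antisymmetric one are

$$\chi_{\text{triplet}} \in \left\{\; \uparrow\uparrow,\;\; \tfrac{1}{\sqrt{2}}\left(\uparrow\downarrow + \downarrow\uparrow\right),\;\; \downarrow\downarrow \;\right\}, \qquad \chi_{\text{singlet}} = \tfrac{1}{\sqrt{2}}\left(\uparrow\downarrow - \downarrow\uparrow\right).$$

The "triple" is the three symmetric combinations, which together form the total-spin $S=1$ multiplet ($m_S=-1,0,1$); the "single" is the lone antisymmetric combination with $S=0$.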
CommonCrawl
I was wondering if this was a true chaotic map and if it might have any interesting properties. As far as I know, there's no hard-and-fast definition of a "true chaotic map". But there are certainly some that everybody agrees are chaotic - for example, the logistic map, with its famous bifurcation diagram. For your map, I went ahead and generated a bifurcation diagram for the $x$-coordinates - specifically, in this graph, $k$ is along the horizontal axis, ranging from $0$ to $3$. For each value of $k$, I skipped the first $10000$ iterations, then plotted the next $10000$ $x_n$ along the vertical axis. Those solid-black patches are chaos - even after $10000$ iterations, the $x$-coordinates were roughly evenly distributed across the interval $[0,1]$. But see those patches in the middle, where there are only a few points? Those are patches of stability, just like in the logistic map. That sort of behavior I'd call definitely chaotic. One interesting difference from the logistic map is that it looks like it degenerates into chaos very quickly, with none or virtually none of the period-doubling characteristic of the logistic map. Or possibly the period-doubling is happening, but too fast to be seen on this scale.
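For anyone wanting to reproduce this kind of plot: the map from the question isn't reproduced in this excerpt, so the sketch below uses the logistic map as a stand-in; swap in your own iteration step (and carry along the $y$-coordinate too if your map is two-dimensional).

```python
import numpy as np
import matplotlib.pyplot as plt

def bifurcation(step, k_values, n_skip=1000, n_plot=500, x0=0.5):
    """For each parameter k: iterate the map, discard a transient,
    then record the next n_plot iterates (smaller counts than the
    answer's 10000/10000 so the example runs quickly)."""
    ks, xs = [], []
    for k in k_values:
        x = x0
        for _ in range(n_skip):        # let transients die out
            x = step(x, k)
        for _ in range(n_plot):
            x = step(x, k)
            ks.append(k)
            xs.append(x)
    return ks, xs

# Stand-in map: the logistic map x -> k*x*(1-x). Replace with your own recurrence.
logistic = lambda x, k: k * x * (1.0 - x)

ks, xs = bifurcation(logistic, np.linspace(2.5, 4.0, 600))
plt.plot(ks, xs, ',k', alpha=0.25)     # solid bands = chaos, sparse windows = stability
plt.xlabel('k')
plt.ylabel('x')
plt.show()
```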
CommonCrawl
Before talking about why it didn't work, let's talk about how it works. TRIACs are TRiodes for Alternating Current which means that they're electrically controllable switches for AC currents. I've used TRIACs before in my EL wire glasses where they turned EL wire on and off to the beat of an audio track. In that case, I was switching them rather slowly compared to the frequency of the AC signal going to the wire. They acted as an "all or nothing" switch. They either supplied the AC waveform in its full glory, or they supplied nothing. If you're clever though, it's possible to get TRIACs to transmit a modified AC waveform, but to do that, you need to understand a little more about how TRIACs work. It has three terminals. The basic rundown is that when current is pulled out of or pushed into the gate (G), the TRIAC turns on and allows current to flow freely from A to B or from B to A. Given that, you might assume that the TRIAC will turn off as soon as you stop the current flow through G. You'd be wrong. TRIACs are only really useful for AC because given a DC signal, they will not turn off no matter what you do to the gate. A TRIAC will only turn off once the current through it drops to zero (or below some threshold close to zero). While this might seem like an annoying and arbitrary setback, it's actually kind of convenient for attenuating an AC signal. I've drawn the current source on the gate so that we can ignore the particulars of what kind of potential difference is required to generate this current. Remembering that power $$=I^2R$$, the instantaneous power will look sort of like a sine wave that's all positive. If you take the area under this curve, you have units of Power $$\times$$ Time which is Energy. Take the Energy and divide by time and that gives you average power which is represented here in dotted red. This switch is clever because it keeps an eye on the zero crossings of the AC signal and only closes for a brief moment a fixed amount of time after a zero crossing. Keep in mind that the switch is only closed for a very brief moment when you see a green dot. It stays open the rest of the time. So every half cycle, the switch waits for the zero crossing. Once it sees a zero crossing, it starts a short timer, and when that timer runs out, it closes for an instant. The timer can be anywhere between zero and T/2 seconds long. If it's any longer than T/2, it won't have a chance to close before the next zero crossing. So what does our resulting current and power waveform look like? Keep in mind that the TRIAC will not conduct current until it gets a gate pulse from the switch at which point it will conduct until the current through it is zero. With this change, the area under the power waveform is reduced. This reduced area means a lower average power (indicated again in red). If you make the timer on the switch longer, you will get less power delivered to the load and if you make it shorter (or zero), you will get more. So this is a pretty ugly waveform to feed to any kind of sophisticated equipment, but if all you're worried about is power delivery to a purely resistive load (like a tungsten lightbulb), then this works just fine. The other cool thing is that before the switch pulses, the TRIAC looks like an open circuit, and after it pulses it looks like a short. During both of these conditions, you get little or no power dissipation from the TRIAC. This means that you can dim your lights all you want without having to worry about losing efficiency or heat dissipation. 
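To put rough numbers on that argument, here is a quick sketch (assuming an ideal sine supply and a purely resistive load) of how the delivered power falls off as the firing delay after each zero crossing grows:

```python
import numpy as np

def power_fraction(alpha):
    """Fraction of full power delivered to a purely resistive load when the
    TRIAC fires at phase angle alpha (radians) after each zero crossing.
    This is the area under sin^2 over the conducting part of the half cycle,
    normalized by the area under the full half cycle."""
    return (np.pi / 2 - alpha / 2 + np.sin(2 * alpha) / 4) / (np.pi / 2)

T = 1.0 / 1200.0                       # period of the 1200 Hz EL supply used here
for frac in (0.0, 0.25, 0.5, 0.75):    # firing delay as a fraction of the half period T/2
    alpha = np.pi * frac
    print(f"delay = {frac * T / 2 * 1e6:6.1f} us  ->  {100 * power_fraction(alpha):5.1f} % of full power")
```

A delay of half the half-cycle gives half power, and longer delays fall off toward zero, which matches the picture of shaving area off the power waveform.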
Almost all of your power is being delivered to the load. This makes TRIAC dimmers suitable in dimming light switch applications. So I had an idea to try using a TRIAC with a "magic switch" on my 1200Hz, 120V EL power supply to control my EL panel's brightness. This would be a great solution because if it worked, I would have little or no power dissipation in my circuit and a very high efficiency. Unfortunately, it didn't work. This wasn't due to a fault with my circuit design, but rather a result of the peculiar load an EL panel presents to a circuit. With a purely resistive load (like a resistor or light bulb), my implementation works just fine. Because of this (and because I've never written about it before), I think it's worth explaining exactly how my "magic switch" works. This section is fairly straightforward. None of the parts on my circuit are rated for 120V, so this simple 1/101 voltage divider gives me a nice voltage waveform that stays below 2V. Although there really isn't a "positive" or "negative" terminal on an AC source, I included those indicators to make it clear how the AC supply is hooked up. This is the same supply shown again at the end of the circuit powering the load. It's important that the (-) terminal be connected to ground in both cases. The output of the divider is represented by A. The toned down waveform is quickly passed into two comparators. Because the AC waveform is centered around ground, these need to be dual-supply comparators that can handle a negative input voltage. I was lucky enough to have a negative voltage supply on my lab kit. These comparators compare the waveform to 0V. The idea is that their output will be square waves that have edges at every zero crossing. Also, they will be opposites of each other because ground is hooked up to the inverting input of one and the non-inverting input of the other. The output is B. I used an LM311 which requires a pull-up resistor on the output. This is what R3 and R4 are for. Because the signal is split in two during this section, I color coded the two paths in black and blue. My first idea was to simply AND these two signals together so that the output would go high for a split second every zero-crossing (i.e. after one had risen but before the other had time to fall). Unfortunately, the signals were too good and this overlap never occurred, so I had to force it to occur with a delay. Every time there's a rising edge on one of the inputs to the inverters (U2A and U2B), the (inverted) output will fall rapidly as the diodes D1 and D2 quickly pull current out of the capacitors (C1 and C2). When there is a falling edge however, the diodes block the inverter from dumping current into the capacitor. The capacitor gets its current from R5 and R6 which deliver it rather slowly. Because of this, you can see on the C waveforms that the voltage rises very slowly and falls very quickly. This slowly rising signal takes longer to reach the threshold for what is considered a "high" input signal, causing a slight delay on every rising clock edge. The signals are then passed through another set of inverters which buffer and invert the signal, giving you D. There are numerous places in this schematic where buffers could take the place of inverters, but I didn't have any buffers on hand, so here we are. Speaking of part limitations, ideally, C1 and C2 would be smaller. You only want the signals to overlap very slightly. If they overlap too long, you'll have problems triggering the delay timer later on. 
Unfortunately, I didn't have any values smaller than .001uF that weren't ridiculously small, so I had to make do. Now that these two signals overlap slightly, they can be ANDed together. Whenever both signals are high (overlap), no current will flow through the diodes and R7 will raise the input to U5A. When one signal is low however, its diode will pull current through R7 and lower the input voltage to U5A. When this signal is buffered and inverted, you get E and we're back to a single signal. Now we're on to the clock that will determine how much power we deliver to our load. This clock takes the form of a monostable 555 timer that is activated by a low-going clock edge in E. The delay time is adjustable using the potentiometer R8. For more information on setting up a 555 in monostable mode, check out the Wikipedia article. Ideally, you'd set up your timer so that your potentiometer will give you the widest range between 0 and T/2. This requires you to know what your T is ahead of time. A 555 timer in monostable mode will have an output that stays high until the timer runs out where it goes low. Passing it through another inverter makes it shoot high at the critical moment. That's where you get signal F. You might think that you'll be driving the TRIAC with the F signal because it goes high at the right moment to trigger it, but this actually won't work. The problem is that you want a short positive pulse and not a long positive period. This period only ends right at the next zero crossing. This doesn't provide enough time for the TRIAC to shut down. Because some current will still be flowing through its gate after this zero crossing, it will be triggered for the rest of the next half-cycle. This is solved by a simple high-pass filter provided by C4 and R10. This turns the square wave F into a chain of high and low pulses (G). The low pulses don't do anything, but the high pulses turn on Q1 briefly which triggers the TRIAC. An opto-isolator is like an LED and a phototransistor. The idea is that it takes an electrical signal and turns it into an optical one so there is no path for electrons to move from one side to the other and cause damage. This is a pretty good idea, because the gate of the TRIAC can reach some really high voltages. In my explanation above, I had a current source driving the gate to make it simpler. In reality, the gate needs to have a positive or negative potential applied to it with respect to the A terminal (or whichever side the gate is drawn on). A quick and easy source for a different potential is the B side of the TRIAC. That's why the gate is tied to the top of the TRIAC through the opto-isolator. Yay! It looks like it's working! So, I wasn't really thinking this thing was going to work when I set out, but I figured it was worth trying. At the very least, I'd get a cool blog post out of it RIGHT?! The real educational stuff is above. What follows is my speculation. You'll see that current will always lead voltage. I.e. if you start charging a capacitor up from 0V, you will see a huge spike in current (proportional to the rate of change of voltage) followed by the voltage gradually climbing (proportional to the integral of the current). So this causes a problem with my circuit because now the zero voltage and zero current crossings don't happen at the same time. The zero current crossing will happen before the zero voltage crossing. 
This is an issue because I'm turning the TRIAC on using the zero voltage points and it's turning itself off at the zero current points. My suspicion is that it might not be turning off when it's supposed to. See what I mean? The current is leading the voltage by a ton. You get a huge spike of current right at the beginning of each half-cycle. With the TRIAC, no matter what I do to dim it, I am delivering the same amount of power to the EL panel. So maybe TRIACs just aren't made for dimming EL panels this way. The supply that I purchased for this experiment is probably a very simple blocking oscillator that is very poorly regulated (if you can say it's regulated at all). It's simply not the rock-steady 60Hz AC you get out of your wall that TRIAC dimmers are usually designed for. Playing around with the current draw in the middle of a cycle can have very negative impacts on the quality of the supply's output. For instance, I noticed even with the resistive load that changing the timing of the TRIAC actually slightly adjusted the frequency of the supply. I also noticed the shape of the supply's output warp dramatically as I adjusted the TRIAC timer. That's never a good sign. Before I tried all of this, I probably should have known that it wouldn't work very well based on what I learned about EL panels in my previous research. What makes them light up is the rapid reversal of the potential across their leads. This causes the charges inside the panel to move around, and it's this rapid motion that makes them light up. Even if my TRIAC could produce the right kind of waveform, I don't know that it would have worked very well because instead of going directly from positive to negative voltage, I would be stopping at zero for a moment each cycle. So I guess I've failed yet again to dim an EL panel. It's not a huge deal. I still got a great refresher on TRIAC dimmers which actually might come in handy in a future project I've got planned. I do feel a little bit like Edison though: discovering 10,000 ways not to make a lightbulb. woo-hoo! I beat that captcha! Twice! You're the second person to complain about the captcha. I used an easier one before, but I found that some spambots were able to get through it. Maybe I'll try a different one. You could try cycle swallowing. Your high voltage AC waveform has 1200 cycles per second. Mentally partition its waveform into "squads" where each squad is a sequence of 64 cycles. Pass along to the EL wire just one cycle of the squad. That gives you a brightness level of (1/64), while at the same time presenting the EL wire with exactly the waveshape it expects. You could implement this with a pair of 6 bit digital counters and some glue logic. You'd use positive edge triggered clocking for the counters & logic, where the digital clock's rising edge is aligned with the high voltage AC waveform's positive-going zero crossing. The "magic synchronized AND gate" is merely an opto isolated triac-driver WITH ZERO CROSSING TRIGGER. Like the MOC3063, made by many vendors and sold for $0.67 at Digikey. Check out its datasheet, it is just what you want. Your digital logic says "Send the AC waveform to the EL wire beginning on the next cycle". That logic signal drives the input (LED) end of the optoisolator. The zero crossing doodad inside the optoisolator then waits for the next zero crossing, and fires your existing triac. 
Voila, perfectly synchronized with the AC waveform. You get full and complete cycles of the AC waveform, from zero crossing to zero crossing. 6 bit Binary Rate Multiplier. See the TTL chip "SN7497" and/or the CMOS chip "CD4089" for more details. BRMs have the advantage of less flicker at medium and low brightness settings. Dang! You spoiled it! I'm planning on trying exactly that, but I thought it would be worth it to write up my current approach before proceeding. Since I'm still trying to make a sound-reactive t-shirt, I'll be controlling the whole thing with a micro-controller anyway, so I can forgo all the glue logic and just do it in software. I'm not certain what kind of effect this will have on the power supply's output though. I know that I'll at least need to throw in some kind of dummy load along with the EL panel because the supply shuts down when you stop drawing power out of it. I actually didn't know about zero-crossing TRIAC triggers before. That's awesome! That's going to make things a lot easier. Thanks for the writeup. I'm designing a variable-setting resistance heater for a certain type of kiln. I've been exploring TRIACs... rather than chopping the AC wave, I was going to do this but at 50/60Hz. 12.5% duty cycle would be one wave on, 7 off (or one half on, 3.5 waves off). And so on. I expect it will buzz, but it's quieter than the traditional mechanical relays turning on and off every minute. Best of luck! Were you ever able to borrow a Class-D amp from Bernie while under previous employ? Given all the steps you've taken thus far, would you have been able to garner any knowledge sooner? Haha good times! Didn't realize you were a reader. I actually wasn't able to borrow a Class-D, but I've got a new idea that will definitely work i.e. I blatantly copied it off a design I found online. Probably should have tried that first. The best engineer is a lazy one. A diac. Yes. They exist. On occasion. Something passed by my desk yesterday that reminded me of you and decided to see how you've been doing since you moved. You have "the knack" and I want to see where it takes you.
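Sketching the comment thread's cycle-swallowing idea in software (a rough illustration only, assuming a zero-crossing-triggered driver such as the MOC3063 so that only whole cycles ever reach the panel; the accumulator spreads the "on" cycles evenly across each group, which is roughly what the binary rate multiplier suggestion buys you):

```python
def cycle_pattern(brightness, n_cycles=64):
    """Decide which whole AC cycles to pass to the EL panel for a given
    brightness in [0, 1], spreading the 'on' cycles evenly across each
    group of n_cycles (less flicker than passing them back to back).
    Returns a list of booleans, one per cycle: True = gate the TRIAC on
    for that full cycle (zero crossing to zero crossing)."""
    pattern, acc = [], 0.0
    for _ in range(n_cycles):
        acc += brightness
        if acc >= 1.0:
            acc -= 1.0
            pattern.append(True)
        else:
            pattern.append(False)
    return pattern

# Example: quarter brightness -> 16 of every 64 cycles pass.
pattern = cycle_pattern(0.25)
print(sum(pattern), "of", len(pattern), "cycles on")
```

At brightness 0.25 on the 1200 Hz supply this passes 16 of every 64 cycles, i.e. 300 full cycles per second, and every cycle that does get through has the unmodified waveshape the EL panel expects.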
CommonCrawl
Clicking in an area with the fill (bucket) tool selected fills that area with the selected color. Minesweeper uses the same idea: when a blank cell is discovered, the algorithm reveals the neighboring cells, and this step is repeated recursively till cells having numbers are discovered. The flood fill algorithm can be modeled as a graph traversal problem, representing the given area as a matrix and considering every cell of that matrix as a vertex that is connected to the points above it, below it, to the right of it, and to the left of it, and in the case of 8-connections, to the points at both diagonals also. For example, consider the image given below. It clearly shows how the cell in the middle is connected to the cells around it. For instance, there are 8-connections like there are in Minesweeper (clicking on any cell that turns out to be blank reveals the 8 cells around it, which contain a number or are blank). The cell $$(1, 1)$$ is connected to $$(0, 0),$$ $$(0, 1),$$ $$(0, 2),$$ $$(1, 0),$$ $$(1, 2),$$ $$(2, 0),$$ $$(2, 1),$$ $$(2, 2)$$. In general any cell $$(x, y)$$ is connected to $$(x-1, y-1),$$ $$(x-1, y),$$ $$(x-1, y+1),$$ $$(x, y-1),$$ $$(x, y+1),$$ $$(x+1, y-1),$$ $$(x+1, y),$$ $$(x+1, y+1)$$. Of course, the boundary conditions are to be kept in mind. Now that the given area has been modeled as a graph, a DFS or BFS can be applied to traverse that graph; a sketch of the code is given at the end of this section. The flood fill routine visits each and every cell of a matrix of size $$n \times m$$ starting from some source cell, so the time complexity of the algorithm is $$O(n \times m)$$. Another use of the flood fill algorithm is solving a maze: given a matrix, a source cell, a destination cell, some cells which cannot be visited, and some valid moves, check if the destination cell can be reached from the source cell. The matrix given in the image below shows one such problem. The source is cell $$(0,0)$$ and the destination is cell $$(3,4)$$. Cells containing $$X$$ cannot be visited. Let's assume there are $$4$$ valid moves - move up, move down, move left and move right. The second routine in the sketch below solves this problem. It is the same as the flood fill routine, with slight changes: it takes extra parameters, namely the given matrix (to check whether the current cell is marked $$X$$) and the coordinates of the destination cell $$(dest_x, dest_y)$$. If the current cell is equal to the destination cell it returns True, and consequently all the previous calls in the stack return True, because there is no use in visiting any further cells once it has been discovered that there is a path between the source and destination cells. So for the matrix given in the image above the code returns True. If, in the given matrix, the cell $$(1, 2)$$ were also marked $$X$$, then the code would return False, as there would be no path from $$S$$ to $$D$$ in that case.
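Here is the sketch referred to above: a minimal Python rendering of the two routines (an illustration of the approach, since the original pseudocode is not reproduced; the flood fill uses BFS, and the maze check uses recursive DFS to match the call-stack description).

```python
from collections import deque

def flood_fill(grid, sx, sy, visit, diagonals=False):
    """BFS from (sx, sy), calling `visit` once on every reachable cell.
    With diagonals=True the grid is 8-connected (as in Minesweeper),
    otherwise 4-connected. Runs in O(n*m): each cell is queued at most once."""
    n, m = len(grid), len(grid[0])
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if diagonals:
        moves += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    seen = [[False] * m for _ in range(n)]
    seen[sx][sy] = True
    queue = deque([(sx, sy)])
    while queue:
        x, y = queue.popleft()
        visit(x, y)
        for dx, dy in moves:
            nx, ny = x + dx, y + dy
            if 0 <= nx < n and 0 <= ny < m and not seen[nx][ny]:
                seen[nx][ny] = True
                queue.append((nx, ny))

def reachable(grid, x, y, dest_x, dest_y, seen=None):
    """Recursive DFS: True if (dest_x, dest_y) can be reached from (x, y)
    moving up/down/left/right without stepping on cells marked 'X'.
    (For large grids, raise the recursion limit or rewrite iteratively.)"""
    n, m = len(grid), len(grid[0])
    if seen is None:
        seen = [[False] * m for _ in range(n)]
    if (x, y) == (dest_x, dest_y):
        return True                    # propagates True back up the call stack
    seen[x][y] = True
    for dx, dy in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nx, ny = x + dx, y + dy
        if (0 <= nx < n and 0 <= ny < m and not seen[nx][ny]
                and grid[nx][ny] != 'X'
                and reachable(grid, nx, ny, dest_x, dest_y, seen)):
            return True
    return False

# Usage on a 4 x 5 maze ('.' free, 'X' blocked; the X positions here are
# made up, since the original image is not available).
grid = [".....", ".XX..", "..X..", "....."]
print(reachable(grid, 0, 0, 3, 4))   # True
```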
CommonCrawl
There are $n$ heaps of sticks and two players who move alternately. On each move, a player chooses a non-empty heap and removes $1$, $2$, or $3$ sticks. The player who removes the last stick wins the game. Your task is to find out who wins if both players play optimally. The first line contains an integer $n$: the number of heaps. The next line has $n$ integers $x_1,x_2,\ldots,x_n$: the number of sticks in each heap. For each test case, print "first" if the first player wins the game and "second" if the second player wins the game.
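One standard way to solve this task, sketched below: by the Sprague-Grundy theorem, a single heap of $x$ sticks with moves $\{1,2,3\}$ has Grundy value $x \bmod 4$, and the first player wins exactly when the XOR of all the heaps' Grundy values is nonzero. (The input format above describes a single game; if the judge actually supplies several test cases, wrap the last two lines of main in a loop.)

```python
import sys

def winner(heaps):
    """Grundy value of a heap of x sticks with moves {1,2,3} is x % 4;
    the first player wins iff the XOR over all heaps is nonzero."""
    xor = 0
    for x in heaps:
        xor ^= x % 4
    return "first" if xor != 0 else "second"

def main():
    data = sys.stdin.read().split()
    n = int(data[0])
    heaps = list(map(int, data[1:1 + n]))
    print(winner(heaps))

if __name__ == "__main__":
    main()
```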
CommonCrawl
My question is: why, in the discrete-time Fourier series representation, does the sum run only from $0$ to $N - 1$ instead of from $-\infty$ to $\infty$? Do the slides above give a wrong formula?
CommonCrawl
Abstract: In this paper we introduce an algebra embedding $\iota:K\langle X\rangle\to S$ from the free associative algebra $K\langle X\rangle$ generated by a finite or countable set $X$ into the skew monoid ring $S = P * \Sigma$ defined by the commutative polynomial ring $P = K[X\times N^*]$ and by the monoid $\Sigma = \langle \sigma \rangle$ generated by a suitable endomorphism $\sigma:P\to P$. If $P = K[X]$ is any ring of polynomials in a countable set of commuting variables, we present also a general Gröbner bases theory for graded two-sided ideals of the graded algebra $S = \bigoplus_i S_i$ with $S_i = P \sigma^i$ and $\sigma:P \to P$ an abstract endomorphism satisfying compatibility conditions with ordering and divisibility of the monomials of $P$. Moreover, using a suitable grading for the algebra $P$ compatible with the action of $\Sigma$, we obtain a bijective correspondence, preserving Gröbner bases, between graded $\Sigma$-invariant ideals of $P$ and a class of graded two-sided ideals of $S$. By means of the embedding $\iota$ this results in the unification, in the graded case, of the Gröbner bases theories for commutative and non-commutative polynomial rings. Finally, since the ring of ordinary difference polynomials $P = K[X\times N]$ fits the proposed theory one obtains that, with respect to a suitable grading, the Gröbner bases of finitely generated graded ordinary difference ideals can be computed also in the operators ring $S$ and in a finite number of steps up to some fixed degree.
CommonCrawl
A system of polynomial equations $f=(f_1(x_1,\ldots, x_m),\ldots, f_n(x_1,\ldots,x_m))$ is called overdetermined if it has more equations than variables; i.e., when $n>m$. HomotopyContinuation.jl can solve overdetermined systems. Here is a simple example. This system has 4 equations in 3 variables. One might expect that it has no solution, but actually it has solutions, as is explained here.
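The concrete example itself is not reproduced in this excerpt; as an illustrative stand-in (a made-up system, not the one from the package documentation), one can take

$$f = \bigl(x^2+y^2+z^2-1,\;\; x-y,\;\; y-z,\;\; x+y+z-\sqrt{3}\bigr),$$

which has $n=4$ equations in the $m=3$ variables $x,y,z$ and yet vanishes at $x=y=z=1/\sqrt{3}$.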
CommonCrawl
Abstract: This research project undertakes a comprehensive analysis of RF beamforming techniques for design, simulation, fabrication, and measurement of Butler Matrix and Rotman Lens beamforming networks. It is aimed to develop novel and well-established designs for steerable antenna systems that can be used in vehicular telematics and automotive communication systems based on microwave and millimeter-wave techniques. Abstract: Cryo-electron microscopy nowadays often requires the analysis of hundreds of thousands of 2D images as large as a few hundred pixels in each direction. Here we introduce an algorithm that efficiently and accurately performs principal component analysis (PCA) for a large set of two-dimensional images, and, for each image, the set of its uniform rotations in the plane and their reflections. For a dataset consisting of $n$ images of size $L \times L$ pixels, the computational complexity of our algorithm is $O(nL^3 + L^4)$, while existing algorithms take $O(nL^4)$. The new algorithm computes the expansion coefficients of the images in a Fourier-Bessel basis efficiently using the non-uniform fast Fourier transform. We compare the accuracy and efficiency of the new algorithm with traditional PCA and existing algorithms for steerable PCA. Abstract: Natural (such as lunar) occultations have long been used to study sources on small angular scales, while coronographs have been used to study high contrast sources. We propose launching the Big Occulting Steerable Satellite (BOSS), a large steerable occulting satellite to combine both of these techniques. BOSS will have several advantages over standard occulting bodies. BOSS would block all but about 4e-5 of the light at 1 micron in the region of interest around the star for planet detections. Because the occultation occurs outside the telescope, scattering inside the telescope does not degrade this performance. BOSS could be combined with a space telescope at the Earth-Sun L2 point to yield very long integration times, in excess of 3000 seconds. If placed in Earth orbit, integration times of 160--1600 seconds can be achieved from most major telescope sites for objects in over 90% of the sky. Applications for BOSS include direct imaging of planets around nearby stars. Planets separated by as little as 0.1--0.25 arcseconds from the star they orbit could be seen down to a relative intensity as little as 1e-9 around a magnitude 8 (or brighter) star. Other applications include ultra-high resolution imaging of compound sources, such as microlensed stars and quasars, down to a resolution as little as 0.1 milliarcseconds. Abstract: The SLIT2-ROBO1/2 pathways control diverse biological processes, including growth regulation. To understand the role of SLIT2 and ROBO1/2 in cervical carcinogenesis, firstly their RNA expression profiles were screened in 21 primary uterine cervical carcinoma (CACX) samples and two CACX cell lines. Highly reduced expressions of these genes were evident. Concomitant alterations [deletion/methylation] of the genes were then analyzed in 23 cervical intraepithelial neoplasia (CIN) and 110 CACX samples. In CIN, SLIT2 was deleted in 22% samples compared to 9% for ROBO1 and none for ROBO2, whereas comparable methylation was observed for both SLIT2 (30%) and ROBO1 (22%) followed by ROBO2 (9%). In CACX, alteration of the genes were in the following order: Deletion: ROBO1 (48%) > SLIT2 (35%) > ROBO2 (33%), Methylation: SLIT2 (34%) > ROBO1 (29%) > ROBO2 (26%). 
Overall alterations of SLIT2 and/or ROBO1 (44%) and SLIT2 and/or ROBO2 (39%) were high in CIN followed by significant increase in stage I/II tumors, suggesting deregulation of these interactions in premalignant lesions and early invasive tumors. Immunohistochemical analysis of SLIT2 and ROBO1/2 in CACX also showed reduced expression concordant with molecular alterations. Alteration of all these genes predicted poor patient outcome. Multiparous (≥5) women with altered SLIT2 and ROBO1 along with advanced tumor stage (III/IV) and early sexual debut (<19 years) had worst prognosis. Our data suggests the importance of abrogation of SLIT2-ROBO1 and SLIT2-ROBO2 interactions in the initiation and progression of CACX and also for early diagnosis and prognosis of the disease. Abstract: The Riesz transform is a natural multi-dimensional extension of the Hilbert transform, and it has been the object of study for many years due to its nice mathematical properties. More recently, the Riesz transform and its variants have been used to construct complex wavelets and steerable wavelet frames in higher dimensions. The flip side of this approach, however, is that the Riesz transform of a wavelet often has slow decay. One can nevertheless overcome this problem by requiring the original wavelet to have sufficient smoothness, decay, and vanishing moments. In this paper, we derive necessary conditions in terms of these three properties that guarantee the decay of the Riesz transform and its variants, and as an application, we show how the decay of the popular Simoncelli wavelets can be improved by appropriately modifying their Fourier transforms. By applying the Riesz transform to these new wavelets, we obtain steerable frames with rapid decay. Abstract: Introduction: Whereas a retrograde attempt to insert an indwelling stent is performed in lithotomy position, usually renal access is gained in a prone position. To overcome the time loss of patient repositioning, a renal puncture can be performed in a modified lithotomy position with torqued truncus and slightly elevated flank. There is a two-fold advantage of this position: transurethral and transrenal access can be obtained using a combined approach. In the present study, this simple technique is used to position a floppy guide wire through a modified needle directly through the renal pelvis into the ureter. Materials and methods: The kidney is punctured in the modified lithotomy position under sonographic control using an initial three-part puncture needle. A floppy tip guide-wire is inserted into the collecting system via the needle after retrieving the stylet. The retracted needle is bent at the tip while the guide-wire is secured in the needle and the collecting system. The use of the floppy tip guide-wire helps to insert the curved needle back into the kidney pelvis, which becomes the precise guidance for the now steerable wire. The desired steerable stent is positioned under radiographic control in a retrograde fashion over the endoscopically harbored tip of the guide-wire. Two patient cohorts (newly described method and conventional method) were compared. Results: The presented steering procedure saves 16.5 mean minutes compared to the conventional antegrade stenting and 79.5 euros compared to the control group. Conclusion: The described combined antegrade-retrograde stent placement through a bent three-part puncture needle results in both clinical superiority (OR time, success rate) and financial benefits. 
Abstract: Slit molecules comprise one of the four canonical families of axon guidance cues that steer the growth cone in the developing nervous system. Apart from their role in axon pathfinding, emerging lines of evidence suggest that a wide range of cellular processes are regulated by Slit, ranging from branch formation and fasciculation during neurite outgrowth to tumor progression and to angiogenesis. However, the molecular and cellular mechanisms downstream of Slit remain largely unknown, in part, because of a lack of a readily manipulatable system that produces easily identifiable traits in response to Slit. The present study demonstrates the feasibility of using the cell line CAD as an assay system to dissect the signaling pathways triggered by Slit. Here, we show that CAD cells express receptors for Slit (Robo1 and Robo2) and that CAD cells respond to nanomolar concentrations of Slit2 by markedly decelerating the rate of process extension. Using this system, we reveal that Slit2 inactivates GSK3β and that inhibition of GSK3β is required for Slit2 to inhibit process outgrowth. Furthermore, we show that Slit2 induces GSK3β phosphorylation and inhibits neurite outgrowth in adult dorsal root ganglion neurons, validating Slit2 signaling in primary neurons. Given that CAD cells can be conveniently manipulated using standard molecular biological methods and that the process extension phenotype regulated by Slit2 can be readily traced and quantified, the use of a cell line CAD will facilitate the identification of downstream effectors and elucidation of signaling cascade triggered by Slit. Bernardes M.C., Adorno B.V., Poignet P., Zemiti N. Abstract: This paper presents an adaptive approach for 2D path planning of steerable needles. It combines duty-cycled rotation of the needle with the classic Rapidly-Exploring Random Tree (RRT) algorithm and it is used intraoperatively to compensate for system uncertainties and perturbations. Simulation results demonstrate the performance of the proposed motion planner on a workspace based in ultrasound images.
CommonCrawl
Lendek Zs., Lauber J., Guerra T-M. 2013. Periodic Lyapunov functions for periodic TS systems. Systems & Control Letters. 62:303–310. Guerra T-M, Estrada-Manzo V., Lendek Zs.. 2015. Observer design for Takagi-Sugeno descriptor models: an LMI approach. Automatica. 52:154–159. Estrada-Manzo V., Guerra T-M, Lendek Zs.. 2016. Generalized observer design for discrete-time T-S descriptor models. Neurocomputing. 182:210-220. Lendek Zs., Raica P., Guerra T-M, Lauber J.. 2016. Finding a stabilizing switching law for switching TS models. International Journal of Systems Science. 47:2772. Lendek Zs., Sala A, García P, Sanchis R. 2013. Experimental application of Takagi-Sugeno observers and controllers in a nonlinear electromechanical system. Journal of Control Engineering and Applied Informatics. 15:3–14. Stano P, den Dekker A, Lendek Zs., Babuska R. 2014. Convex saturated particle filter. Automatica. 50:2494–2503. Lendek Zs., Guerra T-M, Lauber J.. 2015. Controller design for TS models using non-quadratic Lyapunov functions. IEEE Transactions on Cybernetics. 45:453–464. Estrada-Manzo V., Lendek Zs., Guerra T-M, Pudlo P. 2015. Controller design for discrete-time descriptor models: a systematic LMI approach. IEEE Transactions on Fuzzy Systems. 23:1608-1621. Lendek Zs., Raica P., De Schutter B, Babuska R. 2013. Analysis and design for continuous-time string-connected Takagi-Sugeno systems. Journal of the Franklin Institute. 351:3577–3592. Nagy Z, Páll E, Lendek Zs.. 2017. Unknown input observer for a robot arm using TS fuzzy descriptor model. Proceedings of the 2017 IEEE Conference on Control Technology and Applications. :939-944. Estrada-Manzo V., Lendek Zs., Guerra T-M. 2015. Unknown input estimation of nonlinear descriptor systems via LMIs and Takagi-Sugeno models. 54th IEEE Conference on Decision and Control. Laurain T., Lendek Zs., Lauber J., Palhares R.M. 2017. Transformer les retards de transport variables en retards fixes: Une application au probléme du convoyeur. Proceedings of the 26èmes Rencontre sur la Logique Floue et ses Applications. :1-8. Beyhan S, Sarabi FEghbal, Lendek Zs., Babuska R. 2017. Takagi-Sugeno Fuzzy Payload Estimation and Adaptive Control. Preprints of the 20th IFAC World Congress. :867-872. Beyhan S, Lendek Zs., Alcı M, Babuska R. 2013. Takagi-Sugeno Fuzzy Observer and Extended Kalman Filter for Adaptive Payload Estimation. Proceedings of the 2013 Asian Control Conference. :1–6. Estrada-Manzo V., Guerra T-M, Lendek Zs.. 2015. Static output feedback control for continuous-time TS descriptor models: decoupling the Lyapunov function. Proceedings of the 2015 IEEE International Conference on Fuzzy Systems. :1–6. Lendek Zs., Lauber J., Guerra T-M, Raica P.. 2013. On stabilization of discrete-time periodic TS systems. Proceedings of the 2013 IEEE International Conference on Fuzzy systems. :1–7. Lendek Zs., Lauber J., Guerra T-M, Raica P.. 2013. Stability analysis of switching TS models using $\alpha$-samples approach. Proceedings of the 3rd IFAC International Conference on Intelligent Control and Automation Science. :207–211. Nagy Z, Lendek Zs.. 2017. Quadcopter modeling and control.. Proceedings of the Journees Francophones sur la Planification, la Decision et l'Apprentissage pour la conduite de systemes. :1-2. Estrada-Manzo V., Lendek Zs., Guerra T-M. 2014. Output feedback control for T-S discrete-time nonlinear descriptor models. Proceedings of the 2014 IEEE 53rd Annual Conference on Decision and Control. :860–865. Lendek Zs., Raica P., Lauber J., Guerra T-M. 2014. 
Observer Design for Switching Nonlinear Systems. Proceedings of the 2014 IEEE World Congress on Computational Intelligence, IEEE International Conference on Fuzzy Systems. :1–6. Lendek Zs., Raica P., Lauber J., Guerra T-M. 2014. Nonquadratic stabilization of switching TS systems. Preprints of the 2014 IFAC World Congress. :7970–7975. Laurain T., Lendek Zs., Lauber J., Palhares R.M. 2017. A new air-fuel ratio model fixing the transport delay: validation and control. Proceedings of the 2017 IEEE Conference on Control Technology and Applications. :1904-1909. Hodasz N, Bradila V, Nascu I, Lendek Zs.. 2016. Modeling and Parameter Estimation for an Activated Sludge Wastewater Treatment Process. Proceedings of the 2016 IEEE International Conference on Automation, Quality and Testing, Robotics. :1–6. Lendek Zs., Lauber J.. 2016. Local stability of discrete-time TS fuzzy systems. 4th IFAC International Conference on Intelligent Control and Automation Sciences. :13-18. Lendek Zs., Lauber J.. 2016. Local quadratic and nonquadratic stabilization of discrete-time TS fuzzy systems. Proceedings of the 2016 IEEE World Congress on Computational Intelligence, IEEE International Conference on Fuzzy Systems. :1–6.
CommonCrawl
Mary Anderson might seem like your ordinary neighbor, except for one thing. She has a mind-bending number of cats. Don't ask us why, or how - she just does. There's a cat show coming up and the cash rewards are huge. So Mary plans to bring as many cats as possible. She knows her short-haired cats are better looking than the long-haired ones, so she decides to bring 7 times more short-haired cats than long-haired ones. Now here's the thing: Mary can only spend a certain amount on cat grooming, and the local stylist isn't cheap. He charges 18 dollars for cats with short hair and 36 dollars for cats with long hair. Her cat styling budget is $1,944. She needs to figure out how many long- and short-haired cats to bring. Let's help her by solving systems of equations. Let the variable x denote the number of short-haired cats she brings, and let y be the number of long-haired cats. We'll write two equations to help her. Grooming a short-haired cat costs $18, so 18x represents the cost to groom all the short-haired cats she wants to bring. To groom a long-haired cat costs $36, so we can write this as 36y. Added together, these terms give the total cost of grooming cats for the show. She has a budget of $1,944. The second equation comes from the fact that she wants to bring 7 times more short-haired cats than long-haired ones. That means that x, the number of short-haired cats, equals 7 times the number of long-haired cats. Now take a close look at this second equation: it tells us that x has exactly the same value as 7y. This means we can substitute 7y for x in the first equation. Why would we want to do that? Well, now that we only have one variable, we can solve for y. We've got two like terms on the left, so let's combine them. First, 18 times 7y is 126y. 126y plus 36y is 162y, so we have 162y = 1944. Dividing both sides by 162, we see that y = 12. To find x, substitute 12 into our second equation: x = 7y. We see x equals 7 times 12, or 84. Therefore, Mary can bring 12 long-haired cats and 84 short-haired cats. Let's check our work. If these numbers are correct, then they must satisfy both of the equations. That means if we substitute 84 and 12 for x and y, respectively, then simplify, we should get the same values on either side of the equal signs. So let's check. After multiplying, we have 1512 + 432, giving us 1,944. That's equal to the right-hand side, so these values of x and y satisfy the equation. In the second equation, we know 84 = 7 * 12, so we're good here, too, which means all our work checks out. Mary can definitely bring 12 long-haired cats and 84 short-haired ones. Altogether, that's 96 cats to groom. She brings them to the groomers for same-day service. He basically just blow dries them all at once. She picks them up and finally arrives at the show. Systems of equations, also called simultaneous equations, are problems with two or more equations having the same variables. To determine the solution to the system, or the point where the equations intersect, there are several methods: graphing, substitution, and elimination. This video investigates how to use substitution to solve systems of linear equations. To solve using substitution for a system with two equations, you must find the value of one of the variables in terms of the other and substitute it into the other equation, allowing you to solve for one of the variables. Next, substitute that value into either of the original equations and solve for the second variable. 
After you know the solution for both variables, plug them back into the system to verify they work. There are a lot of steps to solving this type of problem, and you know what they say: the more the steps, the greater the chance of making a silly mistake. But keep in mind, if you get a funny answer, it could be that there is no solution, or the entire line could be the solution. If this process of solving systems of equations with substitution seems confusing, then you had better watch this video, so you can see an example worked out and have a great time while you do. Would you like to apply what you have learned? With the exercises for the video Solving Systems of Equations by Substitution you can review and practice it. Explain how to solve a system of equations by substitution. You can substitute $x=y$ into the first equation to get $y+y=4$. This is equivalent to $2y=4$. Dividing by $2$ leads to $y=2$. First, establish a system of equations by assigning variables to the unknown values and write the given information into equations. Remember: You can solve systems of equations with two variables - as long as you have at least two different equations. Next, we are looking for a variable that is already isolated on one side. We plug in the other side of that equation for the variable in the second equation. Now, we only have one equation with one variable left, which we can solve by isolating the variable. But there is still another variable! To solve for the second variable, we plug in the calculated value for the first variable into the second equation. You can check your solutions by substituting them into both equations. Determine the two equations that are needed to correctly describe Miss Anderson's problem. If the number of long-haired cats is $3$, the number of short-haired cats is $7\times 3=21$. Which equation reflects this information? First, we write a system of equations using the given information. We'll use $x$ for the number of short-haired cats and $y$ for the number of long-haired cats. Miss Anderson has a budget of $\$1944$. The styling cost for short-haired cats is $\$18$ and $\$36$ for long-haired cats. The equation to describe this situation is $18x+36y=1944$. We also know that Miss Anderson has seven times more short-haired cats than long-haired cats. This information, written mathematically, is $x=7y$. Determine the number of long-haired and short-haired cats by substitution. First, establish the system of equations. Look for an equation with an isolated variable and substitute the other side of the equation for the variable in the second equation. Solve the equation that has only one variable left. Substitute the solution of the first variable into the equation with two variables and solve for the one you still don't know. If Miss Anderson has $4$ long-haired cats, and if she has seven times more short-haired cats, she has $7\times4=28$ short-haired cats. Let's use $x$ for the number of short-haired cats and $y$ for the number of long-haired cats. We still need another equation. Miss Anderson has seven times more short-haired cats than long-haired cats, $x=7y$. Now we have a system of equations. Substitute $x=7y$ into the equation $18x+36y=1944$. We have just one variable left, $y$. Finally, we divide by $162$ to get $y=12$. We substitute $y=12$ into the equation $x=7y=7\times 12=84$. Now we know that Miss Anderson can take $12$ long-haired and $84$ short-haired cats to the cat stylist. Divide the second equation by $2$ to isolate $y$. 
Substitute $2x$ for $y$ in the first equation. To be able to substitute one variable into the other equation, we need to have the variable isolated on one side of the equation. Let's take a look at the second equation $4x=2y$. Dividing by $2$ gives us $2x=y$. Finally, we divide by $16$ to get $x=12$. We substitute $12$ for $x$ into the modified equation $y=2x=2\times 12=24$. Decide how many dogs Miss Lovingdogs can bring to the dog stylist. You have to multiply the cost per dog by the number of dogs. The budget is the sum of the styling costs for the poodles and the dachshunds. We'll use $y$ to represent the number of poodles and $x$ for the number of dachshunds. Because the cost to style one dachshund is $\$20$, and the cost to style one poodle is $\$35$, we get the expression $20x+35y$. Let's have a look at the other equation: Miss Lovingdogs has three times more dachshunds than poodles, which gives us $x=3y$. Now we have one equation with only one variable, $y$. We know now that Miss Lovingdogs can take $5$ poodles and $15$ dachshunds to the dog stylist. Write a system of equations for each situation and solve them. You can check each pair of solutions. Here we have different examples of similar word problems. To set up each of our equations, we multiply the costs of our choices (cookies or lollipops) by the corresponding variables and set the sum of the products equal to the given budget. One more lollipop than cookies. They can buy $10$ cookies and $20$ lollipops. They can buy $8$ cookies and $9$ lollipops. Same number of cookies as lollipops. They can buy $12$ cookies and $12$ lollipops. Three times as many lollipops as cookies. They can buy $4$ cookies and $12$ lollipops.
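All of these budget problems follow the same pattern, so they are easy to check mechanically. Here is a small sketch (the function and variable names are made up for illustration) that verifies the cat-grooming example by substitution:

```python
def solve_by_substitution(cost_x, cost_y, budget, ratio):
    """Solve cost_x*x + cost_y*y = budget together with x = ratio*y
    by substituting the second equation into the first."""
    y = budget / (cost_x * ratio + cost_y)   # e.g. 1944 / (18*7 + 36) = 12
    x = ratio * y
    return x, y

x, y = solve_by_substitution(18, 36, 1944, 7)   # x short-haired, y long-haired
assert 18 * x + 36 * y == 1944 and x == 7 * y   # both equations are satisfied
print(x, y)                                     # 84.0 12.0
```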
CommonCrawl
Abstract: Based on Markowitz's portfolio theory we construct the multicriteria Boolean problem with Wald's maximin efficiency criteria and the Pareto-optimality principle. We obtained lower and upper attainable bounds for the stability radius of the problem in the cases of linear metric $l_1$ in the portfolio and the market state spaces and of the Chebyshev metric $l_\infty$ in the criteria space. Keywords and phrases: multicriteria optimization, investment portfolio, Wald's maximin efficient criteria, Pareto-optimal portfolio, stability radius.
CommonCrawl
In the kingdom of Boolistan, every inhabitant is either a Knight, Knave or Normal. Knights can only make true statements, Knaves can only lie, and Normals must either tell the truth or lie. Warmup: The local tavern only allows Normals (no one can relax around Knights and Knaves). What can a Normal say to prove their identity? Challenge: Only knights can dine at King Arthur's Round Table. What can a Knight say to prove their identity? Remarks: In conventional logic, where every statement is either true or false, the challenge is impossible (since Normals can say anything). To make this doable, we allow circular self-referential statements, like the famous example, "this statement is false". Formally, a circular self-referential statement is an equation of the form $$ s = f(x_1,\dots,x_n,s) $$ where $x_1,\dots,x_n$ are grounded logical propositions (like "I am a Knave"), $f$ is a Boolean function, and $s$ is a Boolean variable. We say that such a statement is True if setting $s=$ True makes the equation hold, and similarly say it is False if $s=$ False is a solution. This means some such statements are both True and False, while others are neither. For example, "this statement is false" would be the equation $s=\neg s$, which has no solutions, so is neither True nor False. On the other hand, "this statement is true" would be $s=s$, which is both True and False. We then allow knights to say any True statement, Knaves to say any False statement, while Normals can say a statement as long as it is True or False or both. "If I am not a knight, this is a lie" This statement can only work iff the speaker is a knight, as otherwise it will lead to a logical paradox, which is neither true nor false. For knights this is false, and for knaves this is true, so only Normals can say it. "I am a Normal." followed by "I am not a Normal." Or any other pair of one truth and one falsehood. Normals are the only ones who can both lie and tell the truth. "If I am not a Knight, this is false." He can say "(At least) Sometimes I lie." - A knight can not say this, because he never lies. And a knave can not say it, because it would be the truth for him. He can say "My next statement will not be a lie" Since the knight will know for sure he can never lie. But the Normal cannot 100% know if his next statement might be a lie. He can try, but there could be any thinkable scenario where his next statement could be a lie. Since there is a non-zero chance for the Normal to lie or tell the truth on his next statement, he cannot make the claim, since it is neither true nor false, but a vague guess. And per the rules they can only state truth or lie not something unknown. A knight could say: "That knight (points at known knight) can confirm I am a knight." This should work as long as King Arthur is a knight (only tells the truth) who started allowing/accepting other knights in at his table. Also that all the knights know all the other knights. Any normal or knave that tried to enter that used this line would be declined by the pointed at knight. "I am a knave you know...sometimes, I just like to say a lie, and just see what happens. Like this one time last week, I lied to this knight, and let me tell you..." What can a Knight say to prove their identity? Then the King has to do what the Knight asked for. Then, obviously, if he's a Knight, he'll do it, and it will be true. If he isn't, he won't say anything, because he had been given a direct order to remain silent. "knights tell part of the truth." 
Since knaves always lie, they can only say that knights tell part of a lie, and knights cannot determine whether a part of a truth is a whole truth, and are thus unable to answer. Of course, a knave could really be saying "Knights tell part of a lie," in which case, they really say "Knights tell part of a truth." The Normal can say so, because they can lie and tell the truth: Knights tell part of the truth and part of a lie. The only surefire way is assuming that Normals can tell lies and truths in the same sentence. In this case, the Normal says, "I Lie and I tell truths." A knight cannot lie and thus cannot admit so, a Knave cannot lie about telling the truth yet tell the truth about lying or vice versa, but a Normal can lie and tell the truth at once: They can lie about lying and tell the truth about being truthful, or lie about telling the truth and tell the truth about lying. Assuming that at least one knight knows the person trying to enter, the person trying to answer can ask said Knight, "Can I lie?" If a normal or knave, the other Knight will say "Yes", otherwise "No". We could also assume so, since the dinner is for Knights only, any one there is a Knight, which helps solidify this answer. However, assuming that no one knows the person trying to enter, they can prove so in a two-step process: First, the guard asks if they can say the following sentence, which is written on a scroll: "I can tell lies and truths in the same sentence." A Knight cannot say a lie in the same sentence, and will answer "No." A Knave cannot say a truth in a sentence, but will lie and say "yes" (note that it asks whether they can do BOTH, hence the knave isn't telling the truth about saying a lie and forming a contradiction). A Normal can either lie or tell the truth, and will answer either "Yes" or "No." If they answered "No", then they are either a Knight or a Normal. From there, the guard hands them the following scroll: "I tell part-truths and part-lies." Under pain of death, they are told to read the scroll aloud. A knight cannot say a lie, even if only a part of a statement, and thus will answer "I cannot." A Normal, however, under the threat of Death, will read the scroll to try and save his life, thus revealing his deceit. If 1+1=2 then I am a Normal. This works because if a Knight says this statement, it ends in a contradiction, same goes for the case when a Knave says it. The statement is only true and valid when a Normal says it. If 1+1=2 then I am a Knight. Again the antecedent is necessarily true, so for the statement to be true and not a contradiction, then only a Knight could say this. Hope all the formatting worked out properly! I haven't ever eaten ham before. It could be carried by an African swallow. A guard answering another guard that coconuts cannot be carried by a European swallow, but maybe by an African swallow. In the end, we see that only Sir Bedevere and Sir Lancelot would pass. We see Sir Bedevere testing that early in the film; and Sir Lancelot probably doesn't need a pass since he is 'carried away easily'. I assume normals would have no idea about the subject. A Normal can just say something paradoxical like "I'm a liar/Knave". "I am a knave." The knave saying this would be telling the truth. A knight would be lying to say this. A normal could make this claim in a lie. I have an answer that I believe solves it without need of a paradox. We know from the warm-up that we can identify a normal with certainty. 
Our Knight picks out a Normal and points, then proclaims, "If she is being as truthful as I am, she would agree I'm a knight." A knave's corresponding Normal would indeed agree they are a knight (because she would be lying in this case), but the knave cannot say this because it would be the truth. A knight's corresponding Normal would be telling the truth, and so she would of course agree that they are a knight. If a Normal were to lie, their corresponding Normal would also lie, agreeing that they are a knight; they cannot say this because it would be the truth. If a Normal were to tell the truth, their corresponding Normal could not agree they are a knight because she must also tell the truth, thus they still cannot say this.

"Sometimes I am older than a day before, but sometimes I am younger."

The watchman can ask a Knight about a thing he doesn't know, but in such a way that would make Normals think it is common knowledge for Knights. The real Knight would answer "I don't know", while a Normal would try to lie. For example, the question from the guard could be "How much does beer in the tavern cost?" Because Knights do not enter taverns, they don't know - but a Normal knows, and because he pretends to be a Knight, he would answer truthfully. However, this requires cooperation between the guard and a knight. It will also fail once Normals find out that "I don't know" is the right answer.
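Revisiting the three-valued semantics defined in the question, here is a minimal sketch of how one might check, by machine, which inhabitants may utter a given self-referential statement. The helper names and statement encodings below are my own illustrative assumptions, not part of the original thread:

```python
# Minimal sketch: evaluate self-referential statements s = f(facts, s)
# under the question's semantics (True if s=True solves, False if s=False solves).

def solutions(statement, speaker):
    """Set of boolean values of s that satisfy s == f(speaker, s)."""
    return {s for s in (True, False) if statement(speaker, s) == s}

def who_can_say(statement):
    """A Knight needs True to be a solution, a Knave needs False,
    and a Normal needs at least one solution (no pure paradoxes)."""
    allowed = []
    for speaker in ("Knight", "Knave", "Normal"):
        sols = solutions(statement, speaker)
        ok = (True in sols if speaker == "Knight"
              else False in sols if speaker == "Knave"
              else bool(sols))
        if ok:
            allowed.append(speaker)
    return allowed

# "If I am not a knight, this statement is a lie":  s = (not knight) -> (not s)
challenge = lambda speaker, s: (speaker == "Knight") or (not s)

# "This statement is false":  s = not s  (a paradox for everyone)
liar = lambda speaker, s: not s

print(who_can_say(challenge))  # ['Knight']
print(who_can_say(liar))       # []
```

Under this encoding, the proposed challenge statement is indeed sayable only by a Knight, matching the first answer above.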
CommonCrawl
In these lecture notes at pages 15 and 16 I am looking at the definition of a diffusion process and the three conditions which are stated at the top of page 16. These can be difficult to read mathematically. How would you explain what those conditions are and what their implications are? For instance, let's look at a simple stock price process with a deterministic volatility function: $$dS_t/S_t=a(t)dt+b(t)dW_t$$ What do $a$ and $b$ need to satisfy in order for the stock process to be a diffusion process?

Regarding the conditions on page 16, each one of them points to a different property of the SDE solution. Continuity of the process: notice that the integral represents the probability of ending at a distance larger than $\epsilon$ after $t-s$ units of time have passed. Drift of increments: in this case, the integral represents the expected movement from the starting point. In particular, since the term is normalized by $t-s$, it accounts for the rate of movement per unit of time. Variance of the increments: as mark leeds pointed out, this integral is computing the variance of the movement.

(1) Among all Markov processes, the diffusion processes have certain smoothness properties as described on page 16. The Brownian motion is a classic example. There are also Markov processes whose statistical properties are not smooth, i.e. do not satisfy the page 16 properties; a major category are the jump processes, of which the classic example is the Poisson counting process. (2) Stochastic differential equations (SDEs) are widely used to generate and study specific examples of diffusions, which allows us to look at many other types of diffusion beyond Brownian motion. If the SDE has a solution, then the solution is always a diffusion. In more advanced books like Oksendal (page 66) there are specific conditions on $a(X,t)$ and $b(X,t)$ that are required for the solution to exist and be unique; roughly speaking these require that $a$ and $b$ do not increase too fast as $X$ increases, or else the stochastic process is going to diverge to $\pm \infty$. But to repeat: if the solution of the SDE exists, it is a diffusion.

The conditions just define a diffusion process. You know a Markov process has jumps, drift, and a random component. A diffusion process is a Markov process that has continuous paths, drift and diffusion (no jumps), and is completely specified by its first two moments. So the first condition just states continuity, and the other two conditions specify its first two moments (drift and diffusion coefficients).

Not the answer you're looking for? Browse other questions tagged stochastic-processes stochastic-calculus self-study or ask your own question. Why is the value of an adaptive stochastic process known at time t?
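To make the example SDE concrete, here is a small Euler-Maruyama simulation sketch. It is my own illustration, not part of the thread; the particular choices of $a(t)$ and $b(t)$ are arbitrary assumptions, standing in for any sufficiently regular deterministic coefficients:

```python
# Euler-Maruyama simulation of dS_t / S_t = a(t) dt + b(t) dW_t
# with assumed deterministic drift a(t) and volatility b(t).
import numpy as np

def a(t):            # assumed drift function
    return 0.05

def b(t):            # assumed deterministic volatility function
    return 0.2 + 0.1 * np.sin(t)

def simulate(S0=100.0, T=1.0, n_steps=1000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.empty(n_steps + 1)
    S[0] = S0
    for i in range(n_steps):
        t = i * dt
        dW = rng.normal(scale=np.sqrt(dt))     # Brownian increment
        S[i + 1] = S[i] + S[i] * (a(t) * dt + b(t) * dW)
    return S

print(simulate()[-1])
```

Because $a$ and $b$ here are bounded and deterministic, the growth conditions mentioned in the answers are satisfied, so the simulated process is (a discretization of) a diffusion.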
CommonCrawl
Abstract: A novel proposal is outlined to determine scattering amplitudes from finite-volume spectral functions. The method requires extracting smeared spectral functions from finite-volume Euclidean correlation functions, with a particular complex smearing kernel of width $\epsilon$ which implements the standard $i\epsilon$-prescription. In the $L \to \infty$ limit these smeared spectral functions are therefore equivalent to Minkowskian correlators with a specific time ordering to which a modified LSZ reduction formalism can be applied. The approach is presented for general $m \to n$ scattering amplitudes (above arbitrary inelastic thresholds) for a single-species real scalar field, although generalization to arbitrary spins and multiple coupled channels is likely straightforward. Processes mediated by the single insertion of an external current are also considered. Numerical determination of the finite-volume smeared spectral function is discussed briefly and the interplay between the finite volume, Euclidean signature, and time-ordered $i\epsilon$-prescription is illustrated perturbatively in a toy example.
CommonCrawl
I am reading a beautiful paper called A self-contained, brief and complete formulation of Voevodsky's Univalence Axiom by Martín Hötzel Escardó, and it really takes time to understand some of the formulas. In fact, it takes so much time that I've spent an hour on the airplane from Frankfurt to Kyiv gazing at just the first three of them. The intention of this article is not to create any new knowledge, but to extend the available explanations with specific examples that helped me understand these formulas better, when they finally "clicked" this morning.

Quick reminder on $\Pi$-types and $\Sigma$-types for programmers. where $\llbracket rest\rrbracket$ can mention x in its type.

So, the first question I had was: what is that isSingleton exactly? Is that a type? A function? A theorem? Singleton can be understood easily in terms of its implementation in code. I'll use cubicaltt as the implementation language, which is a very minimal but powerful Haskell-like language and compiler. I'll make a type which has only one constructor and derive $isSingleton$ for it. As can be seen, the implementation consists of making a tuple of a specific element c and a function proving its equality to any given $x$. Nice!

IsSingletonOne : IsSingleton One = ?

Perfect! So, we've got our first answers: 1) $IsSingleton$ is sort of a type-level function, or type alias! You use it like $IsSingleton$ YourType and get a type signature; 2) It's also a theorem that you can prove by constructing a value, once you specialize it to some type, like $One$. Alright! We've implemented a value of $IsSingleton One$; in other words, we've proven that $One$ is a singleton type. Full code can be seen at singleton.ctt.

fiber.ctt. Implementation of specific fibers in cubicaltt is left to the reader as an exercise :) These are two fibers, both legit! But what is equivalence, now? But now, instead of imagining $X$ to be some specific type, we need to imagine it to be a Fiber! So, what would it look like to prove that this fiber is a singleton? Now, think about the two fibers in question ($x_0 \rightarrow y_0$ and $x_1 \rightarrow y_0$), and you will understand why this function will not have the Equivalence property: the function f is called an equivalence if its fibers are all singletons.

Thank you for your time. Please send your feedback in Issues or PRs in here.
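For reference, the three notions the post walks through can be written out as formulas. This restatement is added here for convenience and follows the definitions in Escardó's paper rather than the post's cubicaltt files:

$$\mathrm{isSingleton}(X) \;:=\; \Sigma\,(c:X),\ \Pi\,(x:X),\ c = x$$

$$\mathrm{fib}_f(y) \;:=\; \Sigma\,(x:X),\ f(x) = y \qquad \text{for } f : X \to Y,\ y : Y$$

$$\mathrm{isEquiv}(f) \;:=\; \Pi\,(y:Y),\ \mathrm{isSingleton}\big(\mathrm{fib}_f(y)\big)$$

Reading the last line against the two-fibers example above: if two distinct points $x_0$ and $x_1$ both map to $y_0$, then $\mathrm{fib}_f(y_0)$ has (at least) two inhabitants that are not forced to be equal, so it is not a singleton and $f$ fails to be an equivalence.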
CommonCrawl
Let $\mathcal U$ be a cover for $S$. A subcover of $\mathcal U$ for $S$ is a set $\mathcal V \subseteq \mathcal U$ such that $\mathcal V$ is also a cover for $S$. A finite subcover of $\mathcal U$ for $S$ is a subcover $\mathcal V \subseteq \mathcal U$ which is finite. A countable subcover of $\mathcal U$ for $S$ is a subcover $\mathcal V \subseteq \mathcal U$ which is countable.
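As a concrete illustration of these notions (an example added here, not part of the definition above), take $S = \mathbb R$ and

$$\mathcal U = \{\,(x-1,\,x+1) : x \in \mathbb R\,\}, \qquad \mathcal V = \{\,(n-1,\,n+1) : n \in \mathbb Z\,\} \subseteq \mathcal U.$$

Then $\mathcal V$ is a countable subcover of $\mathcal U$ for $S$, while no finite subcover of $\mathcal U$ for $S$ exists, since any finite union of these intervals is bounded.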
CommonCrawl
Does it matter if you use big $L$ or little $l$ when talking about $L$-norms? I was reading a post on Quora regarding the application of "$l_1$", "$l_2$" norms for convex linear programming when I became very confused at which $L$-norm the posters are actually referring to. To me it makes a huge difference which L-norm you are referring to. But on Quora as well as on mse (and another instance here on physics.se) I see people "seemingly" mixing up the little $l$ and big $L$ norms frequently, to the point I have no idea which $L$-norm people are referring to. For example, I can say that "a system is BIBO stable if the L1 norm is bounded". Which L-norm do you think I am referring to if you had no idea what BIBO stability is? But does this actually make that big of a difference? Since the intuition of the norms (energy, stability, etc.) is preserved regardless of dimension. What are some reasons why the difference between the two norms should or should not be enforced?

The $L_p$ norm is more general, but you need to specify a measure space for it to make sense. You're integrating over $\mathbb X$ after all! The $l_p$ norm can be seen as a particular case of the above, as @Stephen Montgomery-Smith noted, with the counting measure on positive integers. So I don't think there really is any source of ambiguity: either I specify which measure space I'm in, and then I'm clearly talking about $L_p$ on that measure space, or I don't (and this usually means that it's clear from context what I mean).

Not the answer you're looking for? Browse other questions tagged soft-question terminology definition lebesgue-integral norm or ask your own question. What is the reason norm properties are defined the way they are? Why is there an "absolute value" and a norm in the Schwarz Inequality? What is a good reference for learning about induced norms? What is Convex about Locally Convex Spaces? What is the most general setting in which the Einstein convention is relevant?
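For concreteness, the two families of norms being conflated are usually written as follows (standard definitions, added here since the answer refers to the integral without displaying it):

$$\|x\|_{\ell_p} = \Big(\sum_{n\ge 1} |x_n|^p\Big)^{1/p}, \qquad \|f\|_{L_p(\mathbb X,\mu)} = \Big(\int_{\mathbb X} |f|^p \, d\mu\Big)^{1/p},$$

and $\ell_p$ is recovered as $L_p$ of the positive integers equipped with the counting measure, which is exactly the identification made in the answer above.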
CommonCrawl
What is "Real coordinate space"? What is the Real Coordinate Space in the discussion of vectors? How does it relate to Cartesian Coordinate System and Euclidean Space? P.S. Please, use naive terms. In my experience the expression "real coordinate space" emphasizes that we are not working over the complex numbers, i.e. the space is $\mathbb R^n$, not $\mathbb C^n$. You can use Cartesian coordinates (and a whole bunch of other coordinate systems) on these spaces. The spaces $\mathbb R^n$ are called Euclidean spaces, so they are the same as real coordinate spaces. What is (fundamentally) a coordinate system ? How can vector functions define coordinate systems? The precise definition of Cartesian coordinate and Euclidean space? Why do vectors change their position relative to axes under coordinate transformations? Is the cartesian coordinate system 'special'? Example to construct a vector space adding further structure and constraints from coordinate space? Coordinates vs components of a vector and Coordinate system vs basis of vector space? What is the Euclidean inner product intuition?
CommonCrawl
Abstract: We study the detailed evolution of the fine-structure constant $\alpha$ in the string-inspired runaway dilaton class of models of Damour, Piazza and Veneziano. We provide constraints on this scenario using the most recent $\alpha$ measurements and discuss ways to distinguish it from alternative models for varying $\alpha$. For model parameters which saturate bounds from current observations, the redshift drift signal can differ considerably from that of the canonical $\Lambda$CDM paradigm at high redshifts. Measurements of this signal by the forthcoming European Extremely Large Telescope (E-ELT), together with more sensitive $\alpha$ measurements, will thus dramatically constrain these scenarios.
CommonCrawl
Plot Latent Dirichlet Allocation output using t-SNE? I found this blog where the author trains a Latent Dirichlet Allocation (LDA) model on 20 Newsgroups. The output is then an $N\times K$ matrix where $N$ is the number of articles (row wise) and $K$ is the number of topics (column wise), i.e. each row is a discrete topic distribution. The author then uses t-SNE to reduce the dimensionality of the matrix from $K$ dimensions to 2 dimensions to be able to visualize the document groupings by topic. The document groupings of the t-SNE output even seem to make sense. My question is, is it reasonable to do this? LDA outputs a discrete distribution over topics for every document. t-SNE reduces the dimensionality of vectors / points in a high dimensional space to visualize local structure. As the output of LDA is a distribution, I thought it would be somehow incorrect to do this? I understand that the distribution, being discrete, can be thought of as a point in the $K$ dimensional space. But using t-SNE to visualize a discrete output somehow seems incorrect. Am I missing something here?

EDIT: The metric the author uses in t-SNE is Euclidean distance - that is why I am confused, because the author is using the Euclidean distance to compare distributions.

I think the approach described in the blog post is reasonable. The goal of t-SNE is to find a representation of the input in low dimensional space such that similar points in the original space are also similar in the representation space. In the blog post the inputs are topic probabilities for each document. So documents with low Euclidean distance between topic probabilities should have similar t-SNE representations. So what does Euclidean distance between topic probabilities measure? Let's say we have topic probabilities of two documents $$ p = (p_1, ..., p_K),$$ $$ q = (q_1, ..., q_K).$$ If the distance between $p$ and $q$ is $0$ then the documents have exactly the same topic distributions. If the distance between $p$ and $q$ increases, the topic distributions become more separated. The extreme case is when $p$ and $q$ are of the form $(0,...,0,1,0,...,0)$, with the $1$ occurring on different coordinates, so the documents have completely different topics. Then the distance is maximal and equal to $\sqrt{2}$. So the distance between t-SNE coordinates should represent how similar the subjects of two documents are. There are other measures of distance between discrete distributions (e.g. Jensen-Shannon). However Euclidean distance is simple and it worked in that particular case.

Not the answer you're looking for? Browse other questions tagged dimensionality-reduction tsne latent-dirichlet-alloc or ask your own question. How does the word distribution of a document relate to its topic distribution in LDA? Given an LDA model, how can I calculate p(word|topic,party), where each document belongs to a party? From LDA output to W2V to K-means? Which dimensionality reduction technique preserves the k nearest neighbors (euclidean space)? Generalized Additive Model for timely trends of topics generated via Latent Dirichlet Allocation?
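A minimal sketch of the pipeline being discussed, using scikit-learn; this is my own illustration, and the vocabulary size, number of topics, and other parameter values are arbitrary assumptions rather than the blog author's settings:

```python
# Document-topic matrix from LDA, then a 2-D t-SNE embedding of its rows.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.manifold import TSNE

docs = fetch_20newsgroups(remove=("headers", "footers", "quotes")).data

counts = CountVectorizer(max_features=5000, stop_words="english").fit_transform(docs)
doc_topic = LatentDirichletAllocation(n_components=20, random_state=0).fit_transform(counts)

# Each row of doc_topic is a discrete distribution over the 20 topics;
# t-SNE with the Euclidean metric embeds these rows in 2-D for plotting.
embedding = TSNE(n_components=2, metric="euclidean", random_state=0).fit_transform(doc_topic)
print(embedding.shape)  # (n_documents, 2)
```

The key point, matching the answer above, is that t-SNE only needs a meaningful distance between rows; the rows happening to be probability vectors does not invalidate the Euclidean choice, it just bounds the pairwise distances by $\sqrt{2}$.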
CommonCrawl
We recently saw The Jacobi Iteration Method for solving a system of linear equations $Ax = b$ where $A$ is an $n \times n$ matrix. We will now look at another method known as the Gauss-Seidel Iteration Method that is somewhat of an improvement of the Jacobi Iteration Method. For the Gauss-Seidel Method, we once again isolate the variable $x_i$ from equation $E(i)$ for $i = 1, 2, ..., n$.
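To make the update concrete, here is a small sketch of one possible implementation. It is an illustration of the general method rather than code from this page, and the tolerance, iteration cap, and test matrix are arbitrary choices:

```python
# Gauss-Seidel iteration for A x = b: newly updated components of x are
# used immediately within the same sweep, unlike in the Jacobi method.
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]        # isolate x_i from equation E(i)
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

A = np.array([[4.0, 1.0], [2.0, 3.0]])         # diagonally dominant example
b = np.array([1.0, 2.0])
print(gauss_seidel(A, b), np.linalg.solve(A, b))
```

The inner loop mirrors the description above: $x_i$ is isolated from equation $E(i)$, and the freshly computed $x_1,\dots,x_{i-1}$ are already used when updating $x_i$.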
CommonCrawl
We consider generalized linear models (GLMs) where an unknown $n$-dimensional signal vector is observed through the application of a random matrix and a non-linear (possibly probabilistic) componentwise output function. We study the models in the high-dimensional limit, where the observation consists of $m$ points, and $m/n \to \alpha > 0$ as $n \to \infty$. This situation is ubiquitous in applications ranging from supervised machine learning to signal processing. We will analyze the model-case when the observation matrix has i.i.d.\ elements and the components of the ground-truth signal are taken independently from some known distribution. We will compute the limit of the mutual information between the signal and the observations in the large system limit. This quantity is particularly interesting because it is related to the free energy (i.e. the logarithm of the partition function) of the posterior distribution of the signal given the observations. Therefore, the study of the asymptotic mutual information allows to deduce the limit of important quantities such as the minimum mean squared error for the estimation of the signal. We will observe some phase transition phenomena. Depending on the noise level, the distribution of the signal and the non-linear function of the GLM we may encounter various scenarios where it may be impossible / hard (only with exponential-time algorithms) / easy (with polynomial-time algorithms) to recover the signal. This is joint work with Jean Barbier, Florent Krzakala, Nicolas Macris and Lenka Zdeborova.
CommonCrawl
Abstract : Amyloids are ordered protein aggregates, found in all kingdoms of life, and are involved in aggregation diseases as well as in physiological activities. In microbes, functional amyloids are often key virulence determinants, yet the structural basis for their activity remains elusive. We determined the fibril structure and function of the highly toxic, 22-residue phenol-soluble modulin $\alpha$3 (PSM$\alpha$3) peptide secreted by $Staphylococcus\ aureus$. PSM$\alpha$3 formed elongated fibrils that shared the morphological and tinctorial characteristics of canonical cross-β eukaryotic amyloids. However, the crystal structure of full-length PSM$\alpha$3, solved de novo at 1.45 angstrom resolution, revealed a distinctive "cross-$\alpha$" amyloid-like architecture, in which amphipathic α helices stacked perpendicular to the fibril axis into tight self-associating sheets. The cross-$\alpha$ fibrillation of PSM$\alpha$3 facilitated cytotoxicity, suggesting that this assembly mode underlies function in S. aureus.
CommonCrawl
Jessica is volunteering at the local animal shelter. Today, she has to buy some food and medication for the dogs and cats at the shelter. Jessica needs to figure out how much she can spend on supplies for the dogs and cats. This is an example of solving word problems using systems of equations.

There are 16 cats and 24 dogs in the shelter. Jessica has to buy supplies for the animals at the shelter. We'll let 'c' equal the cost of supplies for each cat and 'd' equal the cost of supplies for each dog, where dog supplies cost twice as much as cat supplies. If Jessica has $640 to spend, how much can she spend on cat and dog supplies, respectively?

Let's see what we have. 16 cats times the cost of cat supplies, 'c', plus 24 dogs times the cost of dog supplies, 'd', equals the total cost, $640. So our first equation is 16c + 24d = 640. So, we have two variables in one equation. The second piece of information given is that dog supplies cost twice as much as cat supplies. The second equation we have to use is d = 2c.

We can use substitution to solve this system of equations. Since the second equation is already solved for the variable 'd', we can substitute 'd' with 2c in the first equation. The result is one equation with one variable, which we can solve now. Multiplying 24 and 2c gives us 48c. We can add 16c and 48c on the left side of the equation, since these are like terms. This leaves us with 64c. The last step is to solve the equation by dividing both sides of the equation by 64. We now know Jessica can spend $10 per cat. We still don't know how much to spend on dog supplies. But we do know how much Jessica can spend per cat. Let's plug in 10 for 'c' into the second equation. It doesn't matter if we use the first or second equation for this step, but the second is much easier, don't you agree? Now we can solve this equation for 'd' by multiplying. We've determined that Jessica can spend $10 per cat and $20 per dog.

Like a good math student, we should check our work. The solution to a system of equations must satisfy both equations. We have to substitute 10 for 'c' and 20 for 'd' in both equations. Let's start with the first equation. On the left side of the equation, the first step is to multiply, left to right, according to PEMDAS. The next step is to add, and we can see each side of the equation equals 640, letting us know that our solution is correct. Let's check the second equation to see if it's also correct. On the right side of the equation, we just have to multiply, and we can see that each side of the equation equals 20. The solution is also true for the second equation. But Jessica still thinks the animal supplies cost a pretty penny.

Looking for a better deal, Jessica finds a new shop offering discount dog supplies. She can buy all of the supplies she needs for 12 dogs at $210. She wonders, if she has to spend less on dog supplies, for how many additional cats will there be money for supplies? Let's review the information we started with. We have an unknown number of cats, let this number equal 'x', with the cost of $10 for supplies for each cat. So we have 10x. We also have 24 dogs, and $640 to spend. This time we should use a new, second equation that reflects the store's special offer. Our second equation is 12 dogs times 'd', the cost of supplies for each dog, plus 0 cost for the number of cats times x, the number of cats, equals $210. We can use elimination to help solve this system of equations.
Notice what happens when the second equation is multiplied by -2: we get -24d = -420. Now we can eliminate the 24d in the first equation by adding the two equations together. Let's see what happens when we do that. We're left with 10x = 220. Now all we have to do is divide both sides of the remaining equation by 10. Look at that!? Jessica can provide supplies for 22 cats with her budget. Since the shelter has 16 cats at the moment, she can provide supplies for 6 more cats. Let's find out how much she spends with the special offer on the dog supplies. Either substitute 22 for 'x', or simply use the second equation since there is no 'x' anyway. The equation can be solved by dividing by the number in front of 'd' in the equation. We know that Jessica spends $17.50 per dog at this store. Remember to check your work by substituting 22 for 'x' and 17.5 for 'd' into both equations. What a deal!

In her pocket, Jessica finds $15 of her own. Jessica wants to buy some nice toys for her two favorite cats and her favorite dog with the money she saved. One doggy toy costs three times as much as one cat toy. Let's do something a little different and solve this system of equations graphically. The first equation uses the information for 2 cats, 1 dog, and 15 dollars. The equation is 2x + y = 15. The second equation represents the fact that doggy toys are 3 times more expensive than cat toys. The equation is 3x = y. Each equation needs to be written in slope-intercept form to graph properly. For the first equation, subtracting 2x from both sides results in y = -2x + 15. The second equation is already in slope-intercept form. Let's graph these lines on the coordinate plane. The solution to the system is the point of intersection for both lines. As we can see, the lines intersect at (3, 9), or x = 3 and y = 9. This means the cat toy is $3 and the dog toy is $9. Remember to do the check by substituting in 3 for 'x' and 9 for 'y' into both equations. If only Jessica had known that the cats prefer the dog's toy . . .

We often run into problems where we have multiple answers to find, and where the answers depend on each other. For instance, if we want to figure out what temperature it feels like outside, or the heat index, we would need to figure out both the air temperature and how humid it is outside. Such problems, when expressed mathematically, end up giving a system of equations with multiple variables to solve for. Let's look at a particular example: say your younger cousins worked part-time jobs over the summer to save money for a car. The job paid different rates for working weekdays and weekends. During the first week, one cousin worked 22 hours during the week and one hour on the weekend, earning $232. Her sister earned $270 by working 15 hours during the week and 10 hours on the weekend. What was the hourly pay rate for working weekdays and weekends?

When solving word problems, it is important to understand what you are asked to find. This problem is asking us to find the hourly pay rate for weekdays and the hourly pay rate for weekends. We will need to use different variables to represent each unknown. Let's use x for the weekday rate and y for the weekend rate. Now we can use the other information from the word problem to create equations that we will use to solve for the variables. One cousin worked 22 weekday hours and one weekend hour for a total of $232. We can represent this with the equation 22x + y = 232. Using the pay for the other cousin, we get the equation 15x + 10y = 270.
Since there are two equations with the same two variables, we can set them up as a system of equations and solve them simultaneously. Here, we use the substitution method to solve the system. The first equation can be solved for y by subtracting 22x from both sides of the equation. This leaves us with y = 232 - 22x. Since y is equal to the expression 232 - 22x, we can substitute the expression for y in the second equation. The equation becomes 15x + 10(232 - 22x) = 270. We now have one equation with one variable to solve. Distributing the 10 gives 15x + 2320 - 220x = 270, which simplifies to 2320 - 205x = 270, so -205x = -2050 and x = 10. Then y = 232 - 22(10) = 12. Now that the values of both variables are known, the word problem is solved. The hourly pay rate is $10 for weekdays and $12 for weekends.

Would you like to apply what you have learned? With the exercises for the video Systems of Equations – Word Problems you can review and practice it.

Determine the costs of supplies for cats as well as dogs. If one scoop of ice cream costs 1.5 dollars, then three scoops cost $3\times 1.5=4.5$ dollars. So $x$ scoops of ice cream cost $1.5x$ dollars. Jessica wants to know the cost of supplies for cats and for dogs, given that she only has $640$ dollars to spend. Let $c$ stand for the amount of money she can spend per cat and $d$ for the amount of money she can spend per dog. As there are $16$ cats and $24$ dogs in the shelter, the total cost of supplies is then $16c+24d$. Taking into account the amount of money Jessica has at her disposal, we can see that the amount of money she can spend per cat $c$ and the amount of money she can spend per dog $d$ must satisfy the equation $16c+24d=640$. As supplies for a dog are twice as expensive as for a cat, we have that $d=2c$. Now we can substitute $2c$ for $d$ in the equation corresponding to the total cost to get $16c+48c=640$. We can combine like terms to get $64c=640$. Lastly, we divide by $64$ to get $c=10$. Plugging this into the equation $d=2c$ gives us $d=20$.

Explain how to solve systems of equations. In the system of equations given in the first hint, $x$ equals $2y$. So plug in $2y$ for $x$ in the first equation to get $2(2y)+3y=7$. Keep in mind that you have to find solutions which satisfy all equations. A system of equations is not only one equation with one variable, but two equations with two variables, or more equations with more variables! Remember that solutions to a system of equations must satisfy every equation in the system, not just one equation in the system. How can we solve a system of equations? Substituting $2y$ for $x$ in the first equation gives $4y+3y=7y=7$. Dividing by $7$ gives us $y=1$, and thus $x=2\times 1=2$. We can also get the solution by graphing: we draw the lines corresponding to the equations in a coordinate system. The point of intersection gives us the solution, as pictured above!

Solve for the number of cats Jessica could buy supplies for. The number of cats for which supplies can be purchased, multiplied by $10$, gives the amount of money left over after buying supplies for dogs. To get the amount of money left over after purchasing dog supplies, subtract the cost of dog supplies from the amount of money Jessica has to spend, i.e. $640$ dollars. Jessica has $640$ dollars at her disposal. Supplies for one cat cost $10$ dollars. She has to buy supplies for $24$ dogs. The special offer is $210$ dollars for supplies for $12$ dogs. Let $d$ be the price of supplies for one dog and $x$ the unknown number of cats Jessica can buy supplies for if she takes the special offer for dogs. $10x+24d=640$, for the total costs.
First we calculate the cost of supplies for one dog by dividing the second equation by $12$ to get $d=17.5$ dollars. Because we know that supplies for $12$ dogs cost $210$ dollars, we can conclude that the price for $24$ dogs is $420$ dollars. Subtracting $420$ from $640$, we get that the amount of money left over for cat supplies is $640-420=220$ dollars. We already know that the price of supplies for one cat is $10$ dollars. So lastly we divide $220$ by $10$ to get $220\div10=22$, the number of cats Jessica has money to buy supplies for after she buys supplies for 24 dogs for the special offer price.

Decide which graph belongs to the system of equations. You can check the intersection of the lines each time. The intersection point must satisfy both equations. You can write an equation like $2x+y=3$ in slope-intercept form by subtracting $2x$ from both sides: $y=-2x+3$. First draw the $y$-intercept and then the slope; remember, it's rise over run! Lines corresponding to $y=mx$ pass through the origin. The second equation is already given in slope-intercept form. Dividing both sides by $x$ in this equation leads to $y=-x+6$. So we have to look for two lines, one with a $y$-intercept of zero and the other with a $y$-intercept of $6$. The solution is given by $x=2$ and $y=4$. The first equation is equivalent to $y=x$ and the second to $y=\frac13x+2$. The $y$-intercepts are zero and $2$. The solution is given by $x=3$ and $y=3$. The first equation can be written in slope-intercept form: $y=-\frac23x+4$. The second as well can be written in slope-intercept form: $y=-\frac53x+7$. The $y$-intercepts are $4$ and $7$ and the solution is $x=3$ and $y=2$. Rewriting both equations in slope-intercept form gives us $y=-2x+6$ and $y=x+3$, with $y$-intercepts $6$ and $3$. The solution is given by $x=1$ and $y=4$.

Describe how to solve a system of equations by graphing. Slope-intercept form is given by $y=mx+b$, where $m$ is the slope and $b$ is the $y$-intercept. Establish the equation describing the total cost. Keep in mind that Jessica wants to buy two cat trees. This gives us $2y$. The intersection of the lines corresponding to the equations in the system is the solution. Let $x$ be the price of one dog basket. Let $y$ be the price of one cat tree. Next we establish the equations corresponding to the given information. Jessica has $48$ dollars in total. She wants to buy two cat trees and one dog basket. This gives us the equation $2y+x=48$. One cat tree $y$ is one and a half times as expensive as one dog basket $x$. So we have $y=1.5x$; this equation is in slope-intercept form. You can see the corresponding lines above. The intersection is given by $x=12$ and $y=18$.

Determine the price of dog as well as cat supplies. By eliminating one variable, you get one equation in one variable; use opposite operations to solve this equation. Check your solution: both equations must be satisfied.
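If you want to double-check systems like these by machine, here is a tiny sketch (my own addition, not part of the lesson or its exercises) using the sympy library:

```python
# Quick machine check of the three systems solved in the lesson.
from sympy import symbols, Eq, solve

c, d, x, y = symbols("c d x y")

print(solve([Eq(16*c + 24*d, 640), Eq(d, 2*c)], [c, d]))          # {c: 10, d: 20}
print(solve([Eq(22*x + y, 232), Eq(15*x + 10*y, 270)], [x, y]))   # {x: 10, y: 12}
print(solve([Eq(2*x + y, 15), Eq(y, 3*x)], [x, y]))               # {x: 3, y: 9}
```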
CommonCrawl
Changes of Amylases and Carbohydrates in Sweetpotatoes During Storage and Their Effects on Viscosity of Sweetpotato Puree. A critical problem associated with the production of sweetpotato puree is the inconsistency of final product. Two possible factors, amylase activity and carbohydrate content in sweetpotatoes during storage, were investigated. It was found that $\alpha$- and $\beta$-amylase activities do not significantly change during storage, and have no significant effects on viscosity of sweetpotato puree. The inconsistent products in sweetpotato puree processing are mostly due to the change of alcohol insoluble solids (AIS) in sweetpotatoes during storage. The decrease of AIS is partially due to respiration that converts starch into CO$_2$ and H$_2$O. A new bio-processing method was proposed to improve the consistency in sweetpotato puree products based on the results obtained in this study. In addition, two methods for specific determination of $\alpha$- and $\beta$-amylase (using blocked p-nitrophenyl maltoheptaoside and p-nitrophenyl maltopentaoside as substrates respectively) were adapted for amylase assays in sweetpotatoes. Both methods have major advantages of simplicity, speed, high sensitivity, and specificity. The thermal stability of native $\alpha$- and $\beta$-amylase in sweetpotatoes, and the interaction between $\alpha$- and $\beta$-amylase on starch hydrolysis were also studied. The $\alpha$-amylase was very heat labile, and lost most activity in just 30 seconds of heating at 75$^\circ$C. $\beta$-Amylase had a higher thermal stability. The synergistic hydrolysis of starch could occur when $\alpha$-amylase is combined with $\beta$-amylases, but it is not always true, depending on the concentrations of amylases. Liu, Xiangyong, "Changes of Amylases and Carbohydrates in Sweetpotatoes During Storage and Their Effects on Viscosity of Sweetpotato Puree." (1995). LSU Historical Dissertations and Theses. 6072.
CommonCrawl
Jason and I had a rather interesting pair programming session last week where we tackled a problem that I found on Topcoder. It is in fact a slight twist (easier) on the actual Topcoder problem as I didn't have it to hand when we were pairing. If that's not clear enough, this diagram should help articulate what the problem is. Here we can see that you can create 9 rectangles from the initial 2x2 rectangle.

Tackling this in a TDD way we first had to decide on a rough roadmap of test cases. The simplest test case seemed to be an invalid rectangle, 0 rectangles high or wide, or both. We'd then go on to a 1x1 rectangle, 1x2, 2x1, 3x1, 1x3, 2x2 and 3x2. Jason stressed that we were just sketching out a path we might take; it may not have been necessary to use all the test cases, or maybe we'd need more, but the list seemed like it would probably be sufficient for a full implementation. The next step was to figure out how many subrectangles there were in each composite rectangle; this involved a bit of drawing and tallying. In the end I think I only miscounted one test case, which we noticed fairly quickly when coding.

Having written our roadmap it was now time to start tackling this problem. We used Java, and JUnit with Eclipse. The implementation up to 3x1 was fairly simple, however it started to get a little more tricky after that stage: it wasn't immediately obvious what formula we should have been using. $xy + (x-1)y + (y-1)x$ seemed like a promising start, passing all the previous tests up until that point, and the addition of another term sounded like it might hold the solution; however it was difficult to think of what it could be, and we didn't come up with anything in the end. Maybe not as simple as a single factorial, but we were definitely dealing with sequences and/or combinations. The problem with combinations is that they calculate all possible combinations including non-contiguous options (think first and last subrectangle of a 3x1). We didn't discuss this at the time because a simpler solution presented itself.

Being test-driven in design, we looked at our current test case, a 3x1 rectangle (I've illustrated a 1x3, imagine it's just rotated!). After a bit of drawing and colouring it was evident that the number of subrectangles present in a column decreased by 1 for each unit increase in subrectangle length (refer to diagram). In the general case you have $n + (n-1) + (n-2) + \ldots + 1$ subrectangles per column where $n$ is the length of the column. We could iterate over each column and add either the general result of this formula ($\sum_{r=0}^{n} r$), or generate it on the fly with another loop. Initially it was simplest to just loop over columns and rows separately, instead of trying to take the larger step and implement the actual solution.

The next test case, 2x2, introduced a new class of rectangles, 2D; the others had all been 1D in either the $x$ or $y$ axis. The previous implementation yields 5 subrectangles instead of 9. We looked at which ones it was counting: just the first row and first column – time to loop over the whole thing. Enclosing each loop with an outer loop iterating over either column or row depending on the inner loop and removing duplicates was a possibility, a very messy one, probably unlikely to succeed in being the general solution either. The algorithm needed to loop over both columns and rows (a nested loop?) and then calculate the number of possibilities, hopefully without counting duplicates. Looping over the first column we get 3 subrectangles, so far so good.
Through the second column, the subrectangles found in i = 1 are found again in this next column; however, we also need to take into account the rectangles that are formed along the rows, of which there are 3, giving us a total of 6 additional subrectangles. We had a solution!

I like maths so I've found the algebraic solution from the code. Reasoning about loops of loops in mathematics is particularly aesthetically pleasing to me. I'll be using $z$ as the numberOfSubrectangles; it makes writing the maths easier. You can literally change the Java code to return $z$, how pleasing.

After tackling the problem with Jason, I had a go solving it mathematically a few days after, and by then I'd forgotten the solution; I didn't have much success at all. In this case I think developing in this way helped us to reach the insight that we should add i*j inside our nested loops. I don't have the problem solving skills to incrementally build a solution mathematically, although I'm starting to acquire them in the domain of programming (very slowly); perhaps if I try and apply a TDD approach to maths I may get further with hard mathematical problems?
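For reference, here is a worked version of the algebra the post alludes to, under the assumption that the loops run $i = 1 \ldots x$ and $j = 1 \ldots y$ and add $i \cdot j$ at each step:

$$z \;=\; \sum_{i=1}^{x}\sum_{j=1}^{y} i\,j \;=\; \Big(\sum_{i=1}^{x} i\Big)\Big(\sum_{j=1}^{y} j\Big) \;=\; \frac{x(x+1)}{2}\cdot\frac{y(y+1)}{2},$$

which gives $3 \cdot 3 = 9$ for the 2x2 case, matching the tally above.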
CommonCrawl
A famous result of Borel says that the cohomology of $\mathcal A_g$ stabilizes. This was generalized to the Satake compactification by Charney and Lee. In this talk we will discuss whether the result can also be extended to toroidal compactifications. As we shall see this cannot be expected for the second Voronoi compactification, but we shall show that the cohomology of the perfect cone compactification does stabilize. We shall also discuss partial compactifications, in particular the matroidal locus. This is joint work with Sam Grushevsky and Orsola Tommasi.
CommonCrawl
We describe a nice way to do it, unfortunately in words. It really needs a picture. Put down your cardboard rectangle, one corner at the origin, the long side along the positive $x$-axis. So the corners of your cardboard rectangle are at $(0,0)$, $(45,0)$, $(45,20)$, and $(0,20)$. Draw a $30\times 30$ square, with corners $(0,0)$, $(30,0)$, $(30,30)$, and $(0,30)$. Draw the line that joins $(0,30)$ to $(45,0)$. This line will meet the top side of your cardboard rectangle at $P=(15,20)$, and will meet the right side of the square at $Q=(30,10)$. Let $R=(30,0)$ and $S=(45,0)$. All set up!

Use a razor knife to cut along the line $PS$. That will slice a substantial triangle from the cardboard. Leave it in place for now. Use the razor knife to cut straight down along $QR$. This slices off a smallish triangle from the cardboard. Slide the big triangle upward until its top side agrees with the top line of the square. It will. Slide the little triangle way up so that it fills in the top left corner of the square. It will. It is a very pretty construction, and it works uniformly for all rectangles that are not too skinny. If the rectangle is very skinny, a not too hard adjustment can be made. You will have to prove that this works. Straight coordinate or similar triangle geometry.

Remark: This construction is one of the steps in the proof of the Bolyai-Gerwien Theorem (which, as is so often the case, was proved a number of years earlier by at least two other people). The result is that if $A$ and $B$ are any polygonal regions with the same area, then $A$ can be cut into a finite number of polygonal pieces that can be reassembled to make $B$.

This week's challenge was suggested by Jack Dieckmann, a math teaching expert at Stanford University. Jack is a core member of the popular YouCubed, a Stanford-based online resource for K-12 math teachers founded by math teaching superstar Jo Boaler and her colleague Cathy Williams. YouCubed's mission is to help ignite students' natural enthusiasm for math, which may have been crushed by worksheets, drills and homework. Homework! That's something I thought was a good thing. At least a little homework. How else can you really learn? But studies are mixed on the value of doing schoolwork at home, with opponents claiming it produces little, if any, gains in learning at the cost of family time. (For more from this perspective, check out The Homework Myth (Ch. 2 here) or Race to Nowhere.)

Here are 10 straight lines and 17 squares. Here are 9 straight lines and 20 squares. Find the smallest number of lines needed to make exactly 100 squares. Once you've done this, investigate further. How many different ways can you make a particular number of squares?
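As a quick check of the coordinates used above (a verification added here, not part of the original answer), the cut line through $(0,30)$ and $(45,0)$ is

$$y = 30 - \tfrac{2}{3}x,$$

so it meets the top edge $y=20$ of the cardboard at $x=15$, giving $P=(15,20)$, and meets the right edge $x=30$ of the square at $y=10$, giving $Q=(30,10)$, exactly as claimed. Note also that the areas agree: $45 \times 20 = 30 \times 30 = 900$.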
CommonCrawl
Abstract: If binary black holes form following the successive core collapses of sufficiently massive binary stars, precessional dynamics may align their spins $\mathbf S_1$ and $\mathbf S_2$ and the orbital angular momentum $\mathbf L$ into a plane in which they jointly precess about the total angular momentum $\mathbf J$. These spin orientations are known as spin-orbit resonances since $\mathbf S_1$, $\mathbf S_2$, and $\mathbf L$ all precess at the same frequency to maintain their planar configuration. Two families of such spin-orbit resonances exist, alike in dignity but differentiated by whether the components of the two spins in the orbital plane are either aligned or antialigned. The fraction of binary black holes in each family is determined by the stellar evolution of their progenitors, so if gravitational-wave detectors could measure this fraction they could provide important insights into astrophysical formation scenarios for binary black holes. In this paper, we show that even under the conservative assumption that binary black holes are observed along the direction of $\mathbf J$ (where precession-induced modulations to the gravitational waveforms are minimized), the waveforms of many members of each resonant family can be distinguished from all members of the opposing family in events with signal-to-noise ratios $\rho \simeq 10$, typical of those expected for the first detections with Advanced LIGO/Virgo. We hope that our preliminary findings inspire a greater appreciation of the capability of gravitational-wave detectors to constrain stellar astrophysics and stimulate further studies of the distinguishability of spin-orbit resonant families in more expanded regions of binary black-hole parameter space.
CommonCrawl
The Fields Institute, Toronto, Canada, invited speaker.
Kyoto Univ., Kyoto, Japan, plenary speaker.
Title: "Control of vortex dominated flows through vortex dynamics", KIAS, Seoul, Korea, invited speaker, Dec. 26-28.
NCTS, Nat. Taiwan University, Taipei, plenary speaker, Nov. 24--26.
Title: "Mathematical research on Micro-Pressure Waves", Technical Research Institute (RTRI), July 18, invited speaker.
Title: "Dissipative vortex collapse and time's arrow", University of Macau, Macau, China, June 20--22, invited speaker.
Title: "Mathematical modeling and topological characterizations for 2D incompressible flows across the Reynolds number", Beihang University, Beijing, China, Mar. 23--25, invited speaker.
2016 Feb. 24, Kyoto University Winter School 2016 "From Materials to Life: Multidisciplinary Challenges", Title: "Across Reynolds numbers: mathematical modeling and topological pattern characterization for 2D incompressible flows", Kyoto University, Kyoto, Japan, Feb. 14--26, invited lecturer.
Title: "Mathematical models of anomalous enstrophy dissipation and enstrophy cascade in 2D turbulence", The Fields Institute, Toronto, Canada, Jan. 25--29, invited speaker.
Title: "Mathematical modeling of incompressible flows; Recent developments in applied and computational complex analysis", Xiamen, China, Nov. 26--29, invited speaker.
2019 July 11, SIAM Conference on Applied Algebraic Geometry, Minisymposium "From algebraic geometry to geometric topology: crossroads on applications", Univ. of Bern, Bern, Switzerland.
The Fields Institute in Toronto, Canada, invited speaker.
Univ. of Michigan, Ann Arbor, US.
"One-dimensional Hydrodynamic model for turbulence with cascade and singular solutions", The Univ. of Manchester, Manchester, UK.
"One-dimensional Hydrodynamic PDE model for a cascading turbulent phenomenon", Inst. of Math. Academia Sinica, NCTS, Taiwan.
NCTS, Nat. Taiwan University, Taipei.
Title: "Point vortex dynamics on a toroidal surface", Hahndorf, Australia, Feb. 5--9.
Title: "Turbulence and topology: recent explorations in mathematical fluid mechanics", Queensland University of Technology, Brisbane, Australia.
Title: "Point vortex dynamics on a toroidal surface", Imperial College London, London, United Kingdom.
Title: "Point vortex dynamics on the surface of a torus", Technical University Dresden, Dresden, Germany.
Title: "Recent topics in mathematical fluid mechanics from turbulence model to topological fluid dynamics", Peking University, Beijing, China.
Title: "Mathematical research on Micro-Pressure Waves", Technical Research Institute (RTRI), July 18, invited speaker.
Title: "One-dimensional hydrodynamics model for turbulence with cascades and singular solutions", HKUST, Hong Kong, China, Aug. 25.
Title: "Stability of barotropic vortex strip on a rotating sphere", Qingdao, China, Oct. 13-15.
Title: "One-dimensional hydrodynamic equation generating turbulent scaling laws and self-similar singular solutions", Denver, USA, Nov. 19-22.
Title: "Point vortex dynamics on a toroidal surface", Denver, USA, Nov. 19-22.
Title: "Hydrodynamic one dimensional model for enstrophy-cascade turbulence", Hotel Ad-In Naruto, Naruto, Japan, Jan. 15--17.
Title: "Mathematical models of anomalous enstrophy dissipation and enstrophy cascade in 2D turbulence", The Fields Institute, Toronto, Canada, Jan. 25--29, invited speaker.
2016 Feb. 12, Colloquium of Mathematics at McMaster Univ., Title: "Towards a mathematical theory of turbulence: a survey and a model study", McMaster University, Hamilton, Canada.
2016 Feb. 24, Kyoto University Winter School 2016 "From Materials to Life: Multidisciplinary Challenges", Title: "Across Reynolds numbers: mathematical modeling and topological pattern characterization for 2D incompressible flows", Kyoto University, Kyoto, Japan, Feb. 14--26.
2016 Mar. 16, Applied Mathematics Seminar at Stanford Univ., Title: "Dissipative weak solutions and vortex dynamics: a survey and model study", Stanford University, Palo Alto, USA.
Title: "Mathematical modeling and topological characterizations for 2D incompressible flows across the Reynolds number", Beihang University, Beijing, China, Mar. 23--25.
Title: "Dissipative vortex collapse and time's arrow", University of Macau, Macau, China, June 20--22, invited speaker.
2016 July, AIMS Conference, Mini-Symposium "Vortex Dynamics and Geometry: Analysis, Computations and Applications", Title: "Words and Trees: Symbolic Classifications of Streamline Topologies for 2D Incompressible Vortex Flows", Hyatt Regency Orlando, Orlando, USA, July 1--5.
2016 August 26, Seminars at Nonlinear PDE Center, Chung-Ang Univ., Title: "One dimensional hydrodynamic model generating turbulent cascade", Chung-Ang University, Korea.
Title: "Entrapment of force enhancing vortex equilibria in the vicinity of a Kasper Wing", Banff International Research Station, Banff, Canada, Jan. 12--16.
Title: "On the generalized non-viscous/viscous DeGregorio equation - singular solutions and statistical properties", Beijing, China, Aug. 10-15.
Title: "Mathematical theory of potential flows in multiply connected domains", Kyoto, Japan, Jan. 21--24.
Title: "Dissipative weak solutions to Euler's equations and vortex collapse", Nagoya, Japan, Mar. 10--12.
Title: "Mathematical and Computational Theory for Inviscid and Incompressible Flows in Multiply Connected Domains", Kyoto, Japan, April 5--6.
Title: "Word representations of structurally stable Hamiltonian flows in multiply connected domains and its applications", Chicago, USA, July 7--11.
Title: "Anomalous enstrophy dissipation and 2D vortex collapse", Tokyo, Japan, Sep. 1--5.
Title: "On point vortex-$\alpha$ system and vortex collapse with enstrophy dissipation", Tokyo, Japan, Nov. 11-14.
Title: "Topics in mathematical and theoretical fluid dynamics", Jeju, Korea, Nov. 21-13.
Title: "Streamline topologies for structurally stable vortex flows in multiply connected domains and their word representations", Fukuoka, Japan, Mar. 10-14.
Title: "Encoding of streamline topologies for incompressible vortex flows in 2D multiply connected domains", Sapporo, Japan, July 1-2.
Title: "Mathematical and numerical studies of vortex-boundary interactions -- theory and applications", Chengdu, China, Sep. 23-25.
Title: "Point Vortex Dynamics in Multiply Connected Domains -- Theory and Applications", Ohtsu, Japan, Aug. 25-28.
Title: "Towards a mathematical study of offshore tsunami dynamics with vortex-boundary interactions", Tohoku Univ., Sendai, Japan, Sept. 10-12.
Title: "Classification of Streamline Topologies for Structurally Stable Vortex Flows in Multiply Connected Domains", Sendai, Japan, Sep. 19-21.
Title: "Point vortex dynamics in multiply connected domains", Kyoto University, Kyoto, Japan, Jan. 7th.
Title: "Point vortex dynamics in multiply connected domains and its applications", Zhejiang Normal University, Jinhua, China, Sept. 23-27.
Title: "Enstrophy dissipation in Euler-alpha point vortices via triple collapse", Nankai University, Tianjin, China, Dec. 5-9.
Title: "Force-enhancing vortex equilibria for two plates in uniform flow", NIMS, Daejeon, South Korea, Dec. 12-14.
Chung-Ang University, Seoul, Korea, June 24-26th.
Technical University of Denmark, Copenhagen, Denmark, Oct. 12-16th.
Title: "Topological structure of periodic orbits in the integrable four-vortex motion on sphere", Academia Sinica, Taipei, Feb. 2-4th.
Title: "Motion of N-Vortex Points in Ring Formation on a Rotating Sphere", Salt Lake City, Utah, USA, May 27-June 1st.
Title: "Chaotic motion of ring configuration of N-vortex points on sphere."
Title: "Motion of integrable four-vortex points on sphere with zero moment of vorticity"
Title: "A fast tree-code algorithm for point-vortex interactions on a sphere", Exeter, United Kingdom, Sep. 24-27th.
Title: "A chaotic motion in the N-vortex problem on sphere - Hamiltonian systems with saddle-center equilibria"
Title: "Invariant dynamical systems embedded in the N-vortex problem on a sphere with pole vortices", Kyoto Univ., Kyoto, Japan, Jan. 6-10th.
Dynamics, Topology and Computations, Bedlewo, Poland, June 4-10, 2006.
Title: "Long time evolution of vortex sheet on sphere with a background flow", Osaka, Japan, Jan. 31 - Feb. 3.
Title: "Motion of unstable N-ring vortex points on a sphere with a background flow"
Title: "Efficient topological chaos embedded in the blinking vortex system", Snowbird, Utah, USA, May 22-26th.
Title: "Motion of unstable polygonal ring of vortex points on sphere", Chicago, Illinois, USA, Nov. 20-22nd.
Shonan International Village, Japan, Mar. 8-12.
Miyoshi Memorial Auditorium, Yokohama Institute of Earth Sciences, Japan, July 20-23rd.
Zhangjiajie National Forest Park, Hunan Province, China, August 16-20th.
"Elementary Vortices and Coherent Structures--Significance of Turbulent Dynamics", Tsukuba University, Japan, Aug. 5-9.
Title: "A motion of two-dimensional vortex sheet and a model spiral", Istanbul Tech. University, Turkey, Sep. 26-28th.
Title: "Numerical computations of a three-dimensional vortex sheet with a swirl flow", Edinburgh University, Scotland, July 5-10th.
Title: "Numerical computations of two-dimensional vortex sheet on beta plane", Chiba University, Japan, Oct. 25-29th.
Kobe University, Japan, Nov. 4-6th.
CommonCrawl
Abstract: Mathematical problems of digital terrain analysis include interpolation of digital elevation models (DEMs), DEM generalization and denoising, and computation of morphometric variables by calculation of partial derivatives of elevation. Traditionally, these procedures are based on numerical treatments of two-variable discrete functions of elevation. We developed a spectral analytical method and algorithm based on high-order orthogonal expansions using the Chebyshev polynomials of the first kind with the subsequent Fejer summation. The method and algorithm are intended for DEM analytical treatment, such as, DEM global approximation, denoising, and generalization as well as computation of morphometric variables by analytical calculation of partial derivatives. To test the method and algorithm, we used a DEM of the Northern Andes including 230,880 points (the elevation matrix 480 $\times$ 481). DEMs were reconstructed with 480, 240, 120, 60, and 30 expansion coefficients. The first and second partial derivatives of elevation were analytically calculated from the reconstructed DEMs. Models of horizontal curvature ($k_h$) were then computed with the derivatives. A set of elevation and $k_h$ maps related to different number of expansion coefficients well illustrates data generalization effects, denoising, and removal of artifacts contained in the original DEM. The test results demonstrated a good performance of the developed method and algorithm. They can be utilized as a universal tool for analytical treatment in digital terrain modeling.
CommonCrawl
When Google DeepMind's AlphaGo shockingly defeated legendary Go player Lee Sedol in 2016, the terms artificial intelligence (AI), machine learning and deep learning were propelled into the technological mainstream. AI is generally defined as the capacity for a computer or machine to exhibit or simulate intelligent behaviour, such as Tesla's self-driving car and Apple's digital assistant Siri. It is a thriving field and the focus of much research and investment. Machine learning is the ability of an AI system to extract information from raw data and learn to make predictions from new data. Deep learning combines artificial intelligence with machine learning.

The game of Go played between a DeepMind computer program and a human champion created an existential crisis of sorts for Marcus du Sautoy, a mathematician and professor at Oxford University. "I've always compared doing mathematics to playing the game of Go," he says, and Go is not supposed to be a game that a computer can easily play because it requires intuition and creativity. So when du Sautoy saw DeepMind's AlphaGo beat Lee Sedol, he thought that there had been a sea change in artificial intelligence that would impact other creative realms. He set out to investigate the role that AI can play in helping us understand creativity, and ended up writing The Creativity Code: Art and Innovation in the Age of AI (Harvard University Press). The Verge spoke to du Sautoy about different types of creativity, AI helping humans become more creative (instead of replacing them), and the creative fields where artificial intelligence struggles most.

Since AlphaGo and AlphaGo Zero have achieved breakthrough successes in the game of Go, the programs have been generalized to solve other tasks. Subsequently, AlphaZero was developed to play Go, Chess and Shogi. In the literature, the algorithms are explained well. However, AlphaZero contains many parameters, and for neither AlphaGo, AlphaGo Zero nor AlphaZero is there sufficient discussion about how to set parameter values in these algorithms. Therefore, in this paper, we choose 12 parameters in AlphaZero and evaluate how these parameters contribute to training. We focus on three objectives (training loss, time cost and playing strength). For each parameter, we train 3 models using 3 different values (minimum value, default value, maximum value). We use the game of 6$\times$6 Othello, on the AlphaZeroGeneral open source re-implementation of AlphaZero. Overall, experimental results show that different values can lead to different training results, proving the importance of such a parameter sweep. We categorize these 12 parameters into time-sensitive parameters and time-friendly parameters. Moreover, through multi-objective analysis, this paper provides an insightful basis for further hyper-parameter optimization.

IMAGINE having to solve a jigsaw puzzle with 1 million pieces, without knowing what the final picture is supposed to look like. It is a challenge that computer designers and logistics planners grapple with every day. Now a version of DeepMind's game-playing artificial intelligence can come up with a more efficient solution. The method might have applications in networking problems including routing traffic through cities, couriering deliveries across a country and designing more efficient computer chips.

Marcus is the Simonyi Professor for the Public Understanding of Science at Oxford University, quite a mouthful.
We argue that actor-critic algorithms are currently limited by their need for an on-policy critic, which severely constrains how the critic is learned. We propose Bootstrapped Dual Policy Iteration (BDPI), a novel model-free actor-critic reinforcement-learning algorithm for continuous states and discrete actions, with off-policy critics. Off-policy critics are compatible with experience replay, ensuring high sample-efficiency, without the need for off-policy corrections. The actor, by slowly imitating the average greedy policy of the critics, leads to high-quality and state-specific exploration, which we show approximates Thompson sampling. Because the actor and critics are fully decoupled, BDPI is remarkably stable and, contrary to other state-of-the-art algorithms, unusually forgiving for poorly-configured hyper-parameters.

PGQL (O'Donoghue et al., 2017) allows for an off-policy V function, but requires it to be combined with on-policy advantage values. Notable examples of algorithms without an on-policy critic are AlphaGo Zero (Silver et al., 2017), that replaces the critic with a slow-moving target policy learned with tree search, and the Actor-Mimic (Parisotto et al., 2016), that minimizes the cross-entropy between an actor and the Softmax policies of critics (see Section 4.2). The need of most actor-critic algorithms for an on-policy critic makes them incompatible with state-of-the-art value-based algorithms of the Q-Learning family (Arjona-Medina et al., 2018; Hessel et al., 2017), that are all highly sample-efficient but off-policy. In a discrete-actions setting, where off-policy value-based methods can be used, this raises two questions: 1. Can we use off-policy value-based algorithms in an actor-critic setting?

A startup called CogitAI has developed a platform that lets companies use reinforcement learning, the technique that gave AlphaGo mastery of the board game Go. Gaining experience: AlphaGo, an AI program developed by DeepMind, taught itself to play Go by practicing. It's practically impossible for a programmer to manually code in the best strategies for winning. Instead, reinforcement learning let the program figure out how to defeat the world's best human players on its own. Drug delivery: Reinforcement learning is still an experimental technology, but it is gaining a foothold in industry.

Recently we've been seeing computers playing games against humans, either as bots in multiplayer games or as opponents in one-on-one games like Dota 2, PUBG, and Mario. DeepMind (a research company) made history when news broke that its AlphaGo program had defeated the South Korean Go world champion in 2016. If you're an avid gamer, you have probably heard about the Dota 2 OpenAI Five match, where machines played against humans and defeated some of the world's top Dota 2 players in a few matches (if you are interested, a complete analysis of the algorithm and the games played by the machine is available). So here's the central question: why do we need reinforcement learning? Is it only used for games?
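To make the on-policy/off-policy distinction concrete, here is a deliberately simplified, tabular sketch of the idea discussed above: several critics trained off-policy from a shared experience replay buffer, plus an actor that slowly imitates their average greedy policy. This is an illustration under stated assumptions (toy state/action counts, arbitrary learning rates, no environment loop), not the BDPI paper's actual algorithm or code.

```python
import random
import numpy as np

n_states, n_actions, n_critics = 16, 4, 4
gamma, alpha_q, alpha_actor = 0.99, 0.2, 0.05

critics = [np.zeros((n_states, n_actions)) for _ in range(n_critics)]
actor = np.full((n_states, n_actions), 1.0 / n_actions)  # per-state action distribution
replay = []  # shared experience replay buffer of (s, a, r, s2) tuples

def update_critics(batch_size=32):
    """Off-policy Q-learning updates on transitions sampled from the replay buffer."""
    if len(replay) < batch_size:
        return
    for Q in critics:
        for s, a, r, s2 in random.sample(replay, batch_size):
            target = r + gamma * Q[s2].max()        # max over actions: off-policy target
            Q[s, a] += alpha_q * (target - Q[s, a])

def update_actor():
    """Move the actor a small step toward the critics' average greedy policy."""
    greedy = np.zeros_like(actor)
    for Q in critics:
        greedy[np.arange(n_states), Q.argmax(axis=1)] += 1.0 / n_critics
    actor[:] = (1.0 - alpha_actor) * actor + alpha_actor * greedy
```

In a full agent, an environment loop would append transitions gathered by sampling actions from `actor` into `replay` and call both update functions at every step; the point of the sketch is only that the critics' targets never reference the actor's own action probabilities, which is what makes them off-policy and replay-compatible.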
The "symmetry" approach notices that $f(x)=x(6-x)$, and that replacing $x\leftrightarrow6-x$ does not change $f(x)$ which means it doesn't change the maximum. Then the solution says that the only value of $x$ that is unchanged by $x\leftrightarrow 6-x $ is $x=3$. So that's the location of the maximum. This is all the solution says. I am under the impression that if $f(x)$ is unchanged then no number should be affected unchanged. How did they figure out that $x=3$ is the only thing unchanged? Since $f(x)=x(6-x)$, it is $f(6-x)=(6-x)x=f(x)$. Now let $x_0$ be a maximum of $f$. We have $f(x_0)=f(6-x_0)$, therefore since this maximum is attained only one time (e.g. the graph is a parabola), it must be $6-x_0=x_0$. and then $f(x)$ itself is symmetric with respect to $x=3$. HINT: Because it is a polynomial of second degree, its extremum lies on the symmetry line of the interval with ends at its zeroes. Not the answer you're looking for? Browse other questions tagged symmetry or ask your own question. what did he mean ? 8! elements in the symmetry group of the cube?
We consider the Cauchy-Dirichlet problem for the heat equation in the exterior domain of a ball, and study the movement of the hot spots $H(t)$ as $t\to\infty$. In particular, we give a rate at which the hot spots run away from the boundary of the domain as $t\to\infty$. Furthermore, we give a sufficient condition for the hot spots to consist of only one point after a finite time. Adv. Differential Equations, Volume 12, Number 10 (2007), 1135-1166.
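For context (a standard definition added here, not quoted from the paper), the hot spots at time $t$ are usually taken to be the set of points where the solution attains its spatial maximum,
$$H(t)=\Bigl\{x\in\Omega \,:\, u(x,t)=\max_{y\in\Omega}u(y,t)\Bigr\},$$
where $u$ solves the heat equation in the exterior domain $\Omega$ of the ball with the prescribed Cauchy-Dirichlet data.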
According to our database1, Yang Li authored at least 795 papers between 1988 and 2019. Divided DQ Small-Signal Model: A New Perspective for the Stability Analysis of Three-Phase Grid-Tied Inverters. An Automatic Impedance Matching Method Based on the Feedforward-Backpropagation Neural Network for a WPT System. Modified Modular Multilevel Converter to Reduce Submodule Capacitor Voltage Ripples Without Common-Mode Voltage Injected. Cooperative Hybrid Semi-Supervised Learning for Text Sentiment Classification. Estimation of brain functional connectivity from hypercapnia BOLD MRI data: Validation in a lifespan cohort of 170 subjects. Open-view human action recognition based on linear discriminant analysis. Automatic seizure detection based on kernel robust probabilistic collaborative representation. Short-term QT interval variability in patients with coronary artery disease and congestive heart failure: a comparison with healthy control subjects. Learning binary codes with neural collaborative filtering for efficient recommendation systems. Epileptic seizure detection in EEG signals using sparse multiscale radial basis function networks and the Fisher vector approach. Quantum Algorithm Design: Techniques and Applications. Empirical likelihood-based inference in generalized random coefficient autoregressive model with conditional moment restrictions. Disentangled Variational Auto-Encoder for semi-supervised learning. MOLI: Smart Conversation Agent for Mobile Customer Service. Islanding fault detection based on data-driven approach with active developed reactive power variation. Topical Co-Attention Networks for hashtag recommendation on microblogs. A Game-Theoretic Analysis for Distributed Honeypots. Generation of a tree-like support structure for fused deposition modelling based on the L-system and an octree. Generative-Adversarial-Network-Based Wireless Channel Modeling: Challenges and Opportunities. Spraying strategy optimization with genetic algorithm for autonomous air-assisted sprayer in Chinese heliogreenhouses. Chemical reaction optimization for virtual machine placement in cloud computing. A Complementary Method of PCC for the Construction of Scalp Resting-State EEG Connectome: Maximum Information Coefficient. A Unified Semantic Model for Cross-Media Events Analysis in Online Social Networks. Centralized Wavelet Multiresolution for Exact Translation Invariant Processing of ECG Signals. Synchronous automatic training for wearable sensors via knowledge distillation: poster abstract. A Scalable Priority-Aware Approach to Managing Data Center Server Power. Cross-Channel Integration and Customer Retention in Omnichannel Retailing: The Role of Retailer Image and Alternative Attractiveness. The Best of Both Worlds: Combining Hand-Tuned and Word-Embedding-Based Similarity Measures for Entity Resolution. Research on the Perception of Roughness Based on Vibration. Collaborative ensemble learning under differential privacy. Parallel Channel Sounder for MIMO Channel Measurements. Natural Timestamps in Powerline Electromagnetic Radiation. 400-MHz/2.4-GHz Combo WPAN Transceiver IC for Simultaneous Dual-Band Communication With One Single Antenna. Fabrication and Performance of a Miniaturized and Integrated Endoscope Ultrasound Convex Array for Digestive Tract Imaging. A Simple Method for Measuring the Bilateral Symmetry of Leaves. Multiobjective optimization of the production process for ground granulated blast furnace slags. 
Robust Automatic Target Recognition via HRRP Sequence Based on Scatterer Matching. Important institutions of interinstitutional scientific collaboration networks in materials science. Robust orientation estimate via inertial guided visual sample consensus. Age-related changes in cerebrovascular reactivity and their relationship to cognition: A four-year longitudinal study. Moving target parameter estimation for MIMO radar based on the improved particle filter. Efficient Ignorance: Information Heterogeneity in a Queue. A dynamic model slicing approach for system comprehension during software evolution. Learning multi-grained aspect target sequence for Chinese sentiment analysis. Study on Vibration Characteristics and Human Riding Comfort of a Special Equipment Cab. The role of segments and prosody in the identification of a speaker's dialect. Intuitionistic Mechanism for weak components identification method of complex electromechanical system. Research on TDOA location error elimination of hazardous chemicals storage based on improved wavelet. Topic network: topic model with deep learning for image classification. LLMP: Exploiting LLDP for Latency Measurement in Software-Defined Data Center Networks. Passive control of three-phase PWM rectifier and its damping characteristics of CSR. Fluoride anion sensing mechanism of a BODIPY-linked hydrogen-bonding probe. An anonymous data reporting strategy with ensuring incentives for mobile crowd-sensing. Joint Optimization for Residual Energy Maximization in Wireless Powered Mobile-Edge Computing Systems. A Generative Model for category text generation. Performance Analysis of Honeypot with Petri Nets. On Evaluating Web-Scale Extracted Knowledge Bases in a Comparative Way. An Optimization Method for Train Seat Inventory Control. Weakly supervised semantic segmentation based on EM algorithm with localization clues. Epileptic Seizure Detection Based on Time-Frequency Images of EEG Signals Using Gaussian Mixture Model and Gray Level Co-Occurrence Matrix Features. GO-CP-ABE: group-oriented ciphertext-policy attribute-based encryption. Development of aircraft path planning scheme through automatic dependent surveillance broadcast. Global asymptotic stability of neural networks with uncertain parameters and time-varying delay. RoughPSO: rough set-based particle swarm optimisation. Capsule Antenna Design Based on Transmission Factor through the Human Body. Cryptanalysis and Improvement of a Chaotic Image Encryption by First-Order Time-Delay System. Customer's reaction to cross-channel integration in omnichannel retailing: The mediating roles of retailer uncertainty, identity attractiveness, and switching costs. Balanced estimation for high-dimensional measurement error models. Effect of Carbon Concentration on the Sputtering of Carbon-Rich SiC Bombarded by Helium Ions. The improvement strategy on the management status of the old residence community in Chinese cities: An empirical research based on social cognitive perspective. Effects of throughput and operating parameters on cleaning performance in air-and-screen cleaning unit: A computational and experimental study. Simultaneous optimization and heat integration of the coal-to-SNG process with a branched heat recovery steam cycle. Investigation of brain networks in children with attention deficit/hyperactivity disorder using a graph theoretical approach. AllerGAtlas 1.0: a human allergy-related genes database. An algorithmic perspective of de novo cis-regulatory motif finding based on ChIP-seq data. 
DECCO: Deep-Learning Enabled Coverage and Capacity Optimization for Massive MIMO Systems. Method for Quantitative Estimation of the Risk Propagation Threshold in Electric Power CPS Based on Seepage Probability. Selection Based on Colony Fitness for Differential Evolution. A CFAR Detector Based on a Robust Combined Method With Spatial Information and Sparsity Regularization in Non-Homogeneous Weibull Clutter. Time-Varying Nonlinear Causality Detection Using Regularized Orthogonal Least Squares and Multi-Wavelets With Applications to EEG. Adaptive Function Expansion 3-D Diagonal-Structure Bilinear Filter for Active Noise Control of Saturation Nonlinearity. Crowdsourcing based large-scale network anomaly detection. A ResNet-DNN based Channel Estimation and Equalization Scheme in FBMC/OQAM Systems. Extracting features from requirements: Achieving accuracy and automation with neural networks. Reverse engineering variability from requirement documents based on probabilistic relevance and word embedding. Feature and variability extraction from natural language software requirements specifications. ZigZag: Supporting Similarity Queries on Vector Space Models. A Passive Connection Mechanism for On-orbit Assembly of Large-Scale Antenna Structure. Modeling Topic Propagation on Heterogeneous Online Social Networks. Discrete Manifold-Regularized Collaborative Filtering for Large-Scale Recommender Systems. Question Answering for Technical Customer Support. GroupReduce: Block-Wise Low-Rank Approximation for Neural Language Model Shrinking. Joint Mobility Pattern Mining with Urban Region Partitions. Simultaneous Localization and Mapping with Power Network Electromagnetic Field. Look Deeper See Richer: Depth-aware Image Paragraph Captioning. A New TV World for Kids - When ZUI Meets Deep Learning. Unpaired Deep Cross-Modality Synthesis with Fast Training. OBMA: Minimizing Bitmap Data Structure with Fast and Uninterrupted Update Processing. CIGenotyper: A Machine Learning Approach for Genotyping Complex Indel Calls. Expressway Traffic Volume Prediction Using Floating Car Trajectory Data. A New Epileptic Seizure Detection Method Based on Fusion Feature of Weighted Complex Network. Feedback-Based Content Poisoning Mitigation in Named Data Networking. Deeply-Supervised CNN Model for Action Recognition with Trainable Feature Aggregation. Development of Road Functional Classification in China: An Overview and Critical Remarks. DIM Moving Target Detection using Spatio-Temporal Anomaly Detection for Hyperspectral Image Sequences. Sub-Nyquist Sampling for Target Detection in Clutter. A Brief Review on Robotic Floor-Tiling. Nonlinear Model Predictive Control of Photovoltaic-Battery System for Short-Term Power Dispatch. High Quality Depth Estimation from Monocular Images Based on Depth Prediction and Enhancement Sub-Networks. Comparative Analysis of Surface Electromyography Features on Bilateral Upper Limbs for Stroke Evaluation: A Preliminary Study. Inferring event geolocation based on Twitter. Epileptic Seizure Detection Based on Time Domain Features and Weighted Complex Network. A Collective Computing Architecture Supporting Heterogeneous Tasks and Computing Devices. MIMO Radar Target Detection Using Low-Complexity Receiver. Online Informative Path Planning for Autonomous Underwater Vehicles with Cross Entropy Optimization. Blockchain Technology in Business Organizations: A Scoping Review. A Multi-task Decomposition and Reorganization Scheme for Collective Computing Using Extended Task-Tree. 
Rate Adaptation of D2D Underlaying Downlink Massive MIMO Networks with Reinforcement Learning. An MEC-Based DoS Attack Detection Mechanism for C-V2X Networks. Towards Web-based Delta Synchronization for Cloud Storage Services. dCat: dynamic cache management for efficient, performance-sensitive infrastructure-as-a-service. Service fabric: a distributed platform for building microservices in the cloud. Emotional Knowledge Corpus Construction for Deep Understanding of Text. Analysis and Modeling of Grid Performance on Touchscreen Mobile Devices. Doppio: Tracking UI Flows and Code Changes for App Development. A Joint Model of Entity Linking and Predicate Recognition for Knowledge Base Question Answering. The Strategic Decision on Mobile Payment: A Study on Merchants' Adoption. Guess Me if You Can: Acronym Disambiguation for Enterprises. Artistic stylization of face photos based on a single exemplar. Building NVRAM-Aware Swapping Through Code Migration in Mobile Devices. Improv: An Input Framework for Improvising Cross-Device Interaction by Demonstration. Intravascular Ultrasound Imaging With Virtual Source Synthetic Aperture Focusing and Coherence Factor Weighting. Personalized Microtopic Recommendation on Microblogs. From the Lab to the Real World: Re-identification in an Airport Camera Network. Collaborative Filtering-Based Recommendation of Online Social Voting. Non-Volatile Memory Based Page Swapping for Building High-Performance Mobile Devices. Gland Instance Segmentation Using Deep Multichannel Neural Networks. Observer Design for Tracking Consensus in Second-Order Multi-Agent Systems: Fractional Order Less Than Two. Delay Robustness of an $\mathcal L1 Adaptive Controller for a Class of Systems With Unknown Matched Nonlinearities. Research on gateway deployment of WMN based on maximum coupling subgraph and PSO algorithm. Histogram shifting in encrypted images with public key cryptosystem for reversible data hiding. A Novel Monopulse Technique for Adaptive Phased Array Radar. Knowledge Verification for LongTail Verticals. CLIC, a tool for expanding biological pathways based on co-expression across thousands of datasets. Multiparametric imaging of brain hemodynamics and function using gas-inhalation MRI. Cerebrovascular reactivity mapping without gas challenges. Robust Energy Scheduling in Vehicle-to-Grid Networks. Multi-step modified Newton-HSS methods for systems of nonlinear equations with positive definite Jacobian matrices. A corner-point-grid-based voxelization method for the complex geological structure model with folds. Full Dimension MIMO (FD-MIMO): Demonstrating Commercial Feasibility. An energy-efficient encryption mechanism for NVM-based main memory in mobile systems. Two-step based feature selection method for filtering redundant information. A multiscale mixed finite element method with oversampling for modeling flow in fractured reservoirs using discrete fracture model. Person re-identification with block sparse recovery. Deployment optimization of multi-hop wireless networks based on substitution graph. Recognizing activities from partially observed streams using posterior regularized conditional random fields. Special issue on dynamic depth field data driven learning, recognition and computation. Cloud service reliability modelling and optimal task scheduling. Fast Montgomery Modular Multiplication and Squaring on Embedded Processors. Inferring User Consumption Preferences from Social Media. Impact of adjacent transistors on the SEU sensitivity of DICE flip-flop. 
Fog-Based Evaluation Approach for Trustworthy Communication in Sensor-Cloud System. An SP-Tree-Based Web Service Matching Algorithm Considering Data Provenance. Learning Word Representations for Sentiment Analysis. A novel cloud scheduling algorithm optimization for energy consumption of data centres based on user QoS priori knowledge under the background of WSN and mobile communication. Provably secure cloud storage for mobile networks with less computation and smaller overhead. Steady-state optimization of biochemical systems by bi-level programming. Finite element model predicts the biomechanical performance of cervical disc replacement and fusion hybrid surgery with various geometry of ball-and-socket artificial disc. Structure-based optimization of salt-bridge network across the complex interface of PTPN4 PDZ domain with its peptide ligands in neuroglioma. Block-based characterization of protease specificity from substrate sequence profile. An Efficient Sixth-Order Newton-Type Method for Solving Nonlinear Systems. Fair Downlink Traffic Management for Hybrid LAA-LTE/Wi-Fi Networks. Millimeter-wave channel estimation with interference cancellation and DOA estimation in hybrid massive MIMO systems. An improved algorithm for Doppler ambiguity resolution using multiple pulse repetition frequencies. Multi-Camera Action Dataset for Cross-Camera Action Recognition Benchmarking. ZIPT: Zero-Integration Performance Testing of Mobile App Designs. Rico: A Mobile App Dataset for Building Data-Driven Design Applications. HiNextApp: A Context-Aware and Adaptive Framework for App Prediction in Mobile Systems. Reverse Engineering Variability from Natural Language Documents: A Systematic Literature Review. Topic Enhanced Word Vectors for Documents Representation. Price Recommendation on Vacation Rental Websites. A novel method to improve transfer learning based on mahalanobis distance. A microfluidic three dimensional immunoassay biosensor for rapid detection of C-reaction protein. A rapid and sensitive BOD biosensor based on ultramicroelectrode array and carboxyl graphene. Second-Order Average Consensus with Buffer Design in Multi-agent System with Time-Varying Delay. Riemann Tensor Polynomial Canonicalization by Graph Algebra Extension. An improved CFAR algorithm for target detection. Hearing Loss Detection in Medical Multimedia Data by Discrete Wavelet Packet Entropy and Single-Hidden Layer Neural Network Trained by Adaptive Learning-Rate Back Propagation. Attack pattern mining algorithm based on security log. Natural timestamping using electrical power grid: demo abstract. Natural timestamping using powerline electromagnetic radiation. Fitness with diversity information for selection of evolutionary algorithms. Research of the Ear Reconstruction Based on the Poisson Image Blending. Application of DBSCAN Algorithm in Precision Fertilization Decision of Maize. Modeling of a class of UAV helicopters using component buildup method. Cross-Media Retrieval of Tourism Big Data Based on Deep Features and Topic Semantics. Multiresolution process neural network and its learning algorithm. An Effective Martin Kernel for Time Series Classification. A Deep Model Combining Structural Features and Context Cues for Action Recognition in Static Images. Ankle Active Rehabilitation Strategies Analysis Based on the Characteristics of Human and Robotic Integrated Biomechanics Simulation. A novel adaptive kernel correlation filter tracker with multiple feature integration. 
Target detection based on a rotary table-mounted synthetic aperture radar system. SNFS: Small Writes Optimization for Log-Structured File System Based-on Non-Volatile Main Memory. Urban Travel Time Prediction using a Small Number of GPS Floating Cars. Rotate Vector Reducer Crankshaft Fault Diagnosis Using Acoustic Emission Techniques. Increased beat-to-beat variation in diastolic phase percentages in patients with congestive heart failure. A Method of Constructing the Mapping Knowledge Domains in Chinese Based on the MOOCs. PTree: Direct Lookup with Page Table Tree for NVM File Systems. File System for Non-volatile Main Memories: Performance Testing and Analysis. DPETs: A Differentially Private ExtraTrees. Destination-aware Task Assignment in Spatial Crowdsourcing. Collaborative Filtering Recommendation Model based on Convolutional Denoising Auto Encoder. Illuminator of opportunity selection for passive radar. C-arm based image-guided percutaneous puncture of minimally invasive spine surgery. Using Eye-Tracking to Help Design HUD-Based Safety Indicators for Lane Changes. A 20MHz CTIA ROIC for InGaAs focal plane array. Convolutional recurrent neural network-based channel equalization: An experimental study. Unified tracking and regulation visual servoing of wheeled mobile robots with euclidean reconstruction. Formulation of Cognitive Skills: A theoretical model based on psychological and neurosciences studies. Exact and approximate flexible aggregate similarity search. An Integrated System for Superharmonic Contrast-Enhanced Ultrasound Imaging: Design and Intravascular Phantom Imaging Study. Using Audio Cues to Support Motion Gesture Interaction on Mobile Devices. Co-evolution-based immune clonal algorithm for clustering. Analysis of Extracting Prior BRDF from MODIS BRDF Data. Automatic power line extraction from high resolution remote sensing imagery based on an improved Radon transform. Hierarchical and multi-featured fusion for effective gait recognition under variable scenarios. RNAex: an RNA secondary structure prediction server enhanced by high-throughput structure-probing data. Local optimized and scalable frame-to-model SLAM. Video saliency detection using 3D shearlet transform. Power-Aware Resource Reconfiguration Using Genetic Algorithm in Cloud Computing. The Graphene/l-Cysteine/Gold-Modified Electrode for the Differential Pulse Stripping Voltammetry Detection of Trace Levels of Cadmium. A Simple Heuristic for Joint Inventory and Pricing Models with Lead Time and Backorders. A simpler spatial-sign-based two-sample test for high-dimensional data. Efficient k-edge connected component detection through an early merging and splitting strategy. Multi-depot vehicle routing problem with time windows under shared depot resources. Towards Energy-Efficient Caching in Content-Centric Networking. Identification of nonlinear time-varying systems using an online sliding-window and common model structure selection (CMSS) approach with applications to EEG. High-resolution time-frequency analysis of EEG signals using multiscale radial basis functions. Efficient authentication and access control of message dissemination over vehicular ad hoc network. A Surface Normal On-Machine Measuring Method Using Eddy-Current (EC) Sensor Array. Learning a Similarity Constrained Discriminative Kernel Dictionary from Concatenated Low-Rank Features for Action Recognition. A visualization tool for the kernel-driven model with improved ability in data analysis and kernel assessment. 
Grouped Variable Selection Using Area under the ROC with Imbalanced Data. Modeling nonstationary covariance function with convolution on sphere. Changes in moisture effective diffusivity and glass transition temperature of paddy during drying. Identifying micro-inversions using high-throughput sequencing reads. Entity Disambiguation with Linkless Knowledge Bases. Bootstrapping User-Defined Body Tapping Recognition with Offline-Learned Probabilistic Representation. A Quantitative Approach for Memory Fragmentation in Mobile Systems. Full Dimension MIMO (FD-MIMO) - Reduced Complexity System Design and Real-Time Implementation. Allele-Specific Quantification of Structural Variations in Cancer Genomes. Sequential Data Classification in the Space of Liquid State Machines. Achieving secure spectrum sensing in presence of malicious attacks utilizing unsupervised machine learning. Gland Instance Segmentation by Deep Multichannel Side Supervision. Gesture morpher: video-based retargeting of multi-touch interactions. Tube caching: An effective caching scheme in Content-Centric Networking. Evaluation of Forward Collision Avoidance system using driver's hazard perception. An Efficient Algorithm for Feature-Based 3D Point Cloud Correspondence Search. Mogeste: A Mobile Tool for In-Situ Motion Gesture Design. CASE: Cache-assisted stretchable estimator for high speed per-flow measurement. Analysis of anisotropy variance between the kernel-driven model and the PROSAIL model. A method for kernel-driven model to correct the blended hemispherical diffuse irradiance in multi-angle measurements. Application of hilbert transform in vehicle dynamics analysis. Fast location and segmentation of character in annular region based on normalized cross-correlation. Research on direvt torque predictive control system of induction motor based on three-level inverter. Bandwidth-Greedy Hashing for Massive-Scale Concurrent Flows. A Novel Time-Frequency Analysis in Nonstationary Signals Based Multiscale Radial Basis Functions and Forward Orthogonal Regression. Research on new algorithm of dealing with distance ambiguity for high frequency PD radar. Mogeste: mobile tool for in-situ motion gesture design. Knowledge-based trajectory completion from sparse GPS samples. Artificial Multi-Bee-Colony Algorithm for k-Nearest-Neighbor Fields Search. From Queriability to Informativity, Assessing "Quality in Use" of DBpedia and YAGO. A Common Property and Special Property Entity Summarization Approach Based on Statistical Distribution. Identification of time-varying neural dynamics from spiking activities using Chebyshev polynomials. Automatic Lumbar Vertebrae Detection Based on Feature Fusion Deep Learning for Partial Occluded C-arm X-ray Images. End-user development of cross-device user interfaces. Hashtag Recommendation with Topical Attention-Based LSTM. The Constitution of a Fine-Grained Opinion Annotated Corpus on Weibo. Preliminary Research of Secure Integrated Computing in Future Avionics. Heterogeneous Computing Platform Based on CPU+FPGA and Working Modes. Research Status of Artificial Neural Network and Its Application Assumption in Aviation. Enhancing Cross-Device Interaction Scripting with Interactive Illustrations. Phishing sites detection based on Url Correlation. An attack pattern mining algorithm based on fuzzy logic and sequence pattern. A novel multi-antenna iterative spectrum sensing algorithm based on the SUMPLE scheme. Radar HRRP target recognition based on Gradient Boosting Decision Tree. 
ELM-based classification of ADHD patients using a novel local feature extraction method. A method of removing Ocular Artifacts from EEG using Discrete Wavelet Transform and Kalman Filtering. Efficient Data Collection in Sensor-Cloud System with Multiple Mobile Sinks. Augmenting bag-of-words: a robust contextual representation of spatiotemporal interest points for action recognition. Energy-Efficient Coordinated Beamforming Under Minimal Data Rate Constraint of Each User. Domain-Alternated Optimization for Passive Macromodeling. A 65 nm Cryptographic Processor for High Speed Pairing Computation. QuGu: A Quality Guaranteed Video Dissemination Protocol Over Urban Vehicular Ad Hoc Networks. Development of Efficient Nonlinear Benchmark Bicycle Dynamics for Control Applications. Backtracking Integration for Fast Attitude Determination-Based Initial Alignment. Temperature-Aware Data Allocation for Embedded Systems with Cache and Scratchpad Memory. A Online NIR Sensor for the Pilot-Scale Extraction Process in Fructus Aurantii Coupled with Single and Ensemble Methods. An ASIFT-Based Local Registration Method for Satellite Imagery. Viewpoint Invariant Human Re-Identification in Camera Networks Using Pose Priors and Subject-Discriminative Features. A saliency detection model using shearlet transform. Very compact differential transformer-type bandpass filter with mixed coupled topology using integrated passive device technology. A chip-on-board packaged bandpass filter using cross-coupled topological optimised hairpin resonators for X-band radar application. Self-Organized Fission Control for Flocking System. SA-PSO based optimizing reader deployment in large-scale RFID Systems. A general framework for co-training and its applications. Master-slave synchronisation criteria of chaotic neural networks systems with time delay feedback control. High-quality initial codebook design method of vector quantisation using grouping strategy. Improved signal-to-noise ratio estimation algorithm for asymmetric pulse-shaped signals. Statistics on Temporal Changes of Sparse Coding Coefficients in Spatial Pyramids for Human Action Recognition. Orbit-disjoint regular (n, 3, 1)-CDPs and their applications to multilength OOCs. Adaptive segmentation based on multi-classification model for dermoscopy images. Knowledge Engineering with Big Data. Beamforming design with proactive interference cancelation in MISO interference channels. Some series of optimal multilength OOCs of weight four. Some constructions for t pairwise orthogonal diagonal Latin squares based on difference matrices. An exchanged folded hypercube-based topology structure for interconnection networks. Detection of Dendritic Spines Using Wavelet-Based Conditional Symmetric Analysis and Regularized Morphological Shared-Weight Neural Networks. A Hybrid Method for Image Segmentation Based on Artificial Fish Swarm Algorithm and Fuzzy c-Means Clustering. No-reference hair occlusion assessment for dermoscopy images based on distribution feature. Leveraging Pattern Semantics for Extracting Entities in Enterprises. Automatic framework for semi-supervised hyperspectral image classification using self-training with data editing. Multi-shot Re-identification with Random-Projection-Based Random Forests. Personalized Microtopic Recommendation with Rich Information. Inertial Guided Visual Sample Consensus based wearable orientation estimation for body motion tracking. MobiLock: an energy-aware encryption mechanism for NVRAM-based mobile devices. 
An L1/2-norm based efficient block level rate estimation model for HEVC. Emotional Tone-Based Audio Continuous Emotion Recognition. A novel automatically initialized level set approach based on region correlation for lumbar vertebrae CT image segmentation. Evaluating and Comparing Web-Scale Extracted Knowledge Bases in Chinese and English. Achieving Memory Access Equalization Via Round-Trip Routing Latency Prediction in 3D Many-Core NoCs. Spectrum of sizes for perfect burst deletion-correcting codes. An energy-efficient heterogeneous dual-core processor for Internet of Things. Preliminary validation and application of the angle products of MODIS AFX based on kernel-driven model. Research about the bidirectional NDVI based on kernel-driven models. A new method for automatic fine registration of multi-spectral remote sensing images. Tone sandhi and tonal coarticulation in Fuzhou Min. Mechanical fault diagnosis of rolling bearing based on locality-constrained sparse coding. A Method for Tracking Vehicles Under Occlusion Problem. Pattern Classification for Dermoscopic Images Based on Structure Textons and Bag-of-Features Model. A New Representation Method of H1N1 Influenza Virus and Its Application. Learning with Video: The Digital Knowledge Representation and Digital Reading. Person Re-Identification with Discriminatively Trained Viewpoint Invariant Dictionaries. Reduced space channel feedback for FD-MIMO. Joint spectrum-sharing and base-station-sleep model for improving energy efficiency of HetNet. Hypergraph-based spectral clustering for categorical data. A hybrid-architecture retrieval system based on Web Services. Answering Elementary Science Questions by Constructing Coherent Scenes using Background Knowledge. Sparse re-id: Block sparsity for person re-identification. 3D model-based continuous emotion recognition. Linear and cyclic distance-three labellings of trees. Distribution Entropy for short-term QT Interval Variability Analysis: A Comparison between the Heart Failure and Healthy Control Groups. Gesture On: Enabling Always-On Touch Gestures for Fast Mobile Access from the Device Standby Mode. Weave: Scripting Cross-Device Wearable Interaction. Palmprint Feature Extraction Method Based on Rotation-invariance. Grouping of Multiple Overtraced Strokes in Interactive Freehand Sketches. Multi-Shot Human Re-Identification Using Adaptive Fisher Discriminant Analysis. Particle dynamics and multi-channel feature dictionaries for robust visual tracking. Malphite: A convolutional neural network and ensemble learning based protein secondary structure predictor. Design consideration of uni-traveling carrier photodiode: Influence of doping profile and buffer layer. Online optimal regulation and tracking control of nonlinear discrete-time system with control constraints. Regional Planning of Coordinated Multi-Point Transmission with Perfect Feedback. Full dimension MIMO (FD-MIMO): The next evolution of MIMO in LTE systems. FPGA-Based Design of Grid Friendly Appliance Controller. Optimistic Programming of Touch Interaction. Sub-Array Weighting UN-MUSIC: A Unified Framework and Optimal Weighting Strategy. An Electrochemical Microsensor Based on a AuNPs-Modified Microband Array Electrode for Phosphate Determination in Fresh Water Samples. A Novel Approach to ECG Classification Based upon Two-Layered HMMs in Body Sensor Networks. Feature Selection for Support Vector Machine in the Study of Financial Early Warning System. 
HMDD v2.0: a database for experimentally supported human microRNA and disease associations. K-nearest neighborhood based integration of time-of-flight cameras and passive stereo for high-accuracy depth maps. An Improved Community Partition Algorithm Integrating Mutual Information. Agent-Based Simulation of the Search Behavior in China's Resale Housing Market: Evidence from Beijing. On Chebyshev Polynomials, Fibonacci Polynomials, and Their Derivatives. Hybrid intelligent control of BOF oxygen volume and coolant addition. Adaptive neural network control for a class of strict feedback non-linear time delay systems. Sliding mode control for neutral systems with uncertain parameters. Lag synchronisation of chaotic systems using sliding mode control. Cyberphysical Security for Industrial Control Systems Based on Wireless Sensor Networks. Adaptive terminal sliding mode control for finite-time chaos synchronisation with uncertainties and unknown parameters. Precoding Scheme for Distributed Antenna Systems with Non-Kronecker Correlation over Spatially Correlated Channel. Optimal pilots design for frequency offsets and channel estimation in OFDM modulated single frequency networks. Coordinated precoding and proactive interference cancellation in mixed interference scenarios. Video dissemination protocols in urban vehicular ad hoc network: A performance evaluation study. Detecting tapping motion on the side of mobile devices by probabilistically combining hand postures. Reflection: enabling event prediction as an on-device service for mobile interaction. HOBS: head orientation-based selection in physical spaces. Predicting the Popularity of Messages on Micro-blog Services. Preventing the diffusion of negative information based on local influence tree. Analyzing expert behaviors in collaborative networks. Teaching motion gestures via recognizer feedback. The concept and modeling of driving safety field based on driver-vehicle-road interactions. A 3-D deconvolution based particle detection method for wide-field microscopy image. A wireless sensor network for the metallurgical gas monitoring. Observer design for consensus of general fractional-order multi-agent systems. Revised dynamic programming in Azimuth-Range-Doppler data. Real-World Re-Identification in an Airport Camera Network. SphericalMesh: A novel and flexible network topology for 60GHz-based wireless data centers. Harnessing Memory Page Distribution for Network-Efficient Live Migration. Reduced complexity precoding and scheduling algorithms for full-dimension MIMO systems. GestKeyboard: enabling gesture-based interaction on ordinary physical keyboard. InkAnchor: enhancing informal ink-based note taking on touchscreen mobile phones. Gesture script: recognizing gestures and their structure using rendering scripts and interactively trained parts. An MOEA/D with multiple differential evolution mutation operators. Multi-Task Learning for Face Ethnicity and Gender Recognition. Social-Based Multi-label Routing in Delay Tolerant Networks. Gesturemote: interacting with remote displays through touch gestures. A 0.65V 1.2mW 2.4GHz/400MHz dual-mode phase modulator for mobile healthcare applications. 3D channel models for elevation beamforming and FD-MIMO in LTE-A and 5G. Workload Prediction of Virtual Machines for Harnessing Data Center Resources. Energy-balanced cooperative routing in multihop wireless networks. Offline Performance Prediction of PDAF With Bayesian Detection for Tracking in Clutter. 
Mode-ℝ Subspace Projection of a Tensor for Multidimensional Harmonic Parameter Estimations. An Enumerative NonLinear Programming approach to direction finding with a general spatially spread electromagnetic vector sensor array. Measuring Accurate Body Parameters of Dressed Humans with Large-Scale Motion Using a Kinect Sensor. Memory Efficient Minimum Substring Partitioning. Non-linear equity portfolio variance reduction under a mean-variance framework - A delta-gamma approach. Urban Regional Traffic State Analysis Software System Emphasizing Pattern Transition. Power- and Bandwidth-Efficient Euclidean Code with Sparse Generator Matrix. Low-Density Lattice Codes for Inter-Symbol Interference Channels. A generalized indirect adaptive neural networks backstepping control procedure for a class of non-affine nonlinear systems with pure-feedback prototype. Full-dimension MIMO (FD-MIMO) for next generation cellular technology. Hybrid interference alignment and power allocation for multi-user interference MIMO channels. A tracking-by-detection method with radio-frequency tomography network. Methods with low complexity for evaluating cloud service reliability. Dynamic scenes reconstruction based on foreground and background splitting. Performance analysis of cross-layer design with antenna selection in multiuser MIMO system. A touchable virtual screen interaction system with handheld Kinect camera. Research on the cooperative BSs' number of coordinated Multi-Point Transmission with perfect feedback. Open project: a lightweight framework for remote sharing of mobile applications. CrowdLearner: rapidly creating mobile recognizers using crowdsourcing. An Autonomous Fall Detection and Alerting System Based on Mobile and Ubiquitous Computing. Thermal-Aware On-Chip Memory Architecture Exploration. Energy-efficient coordinated beamforming with individual data rate constraints. Finding High Influence Vertices Based on Candidate Set. A nested game-based optimization framework for electricity retailers in the smart grid with residential users and PEVs. A micro electrochemical sensor with porous copper-clusters for total nitrogen determination in freshwaters. Wireless energy transfer system based on high Q flexible planar-Litz MEMS coils. Low-complexity driving event detection from side information of a 3D video encoder. Mining evidences for named entity disambiguation. The Research of Weighted Community Partition based on SimHash. Feature space generalized variable parameter HMMs for noise robust recognition. Automatic Name-Face Alignment to Enable Cross-Media News Retrieval. Image Fusion Technology Based on Bio-inspired Features. Pitch Detection Method for Noisy Speech Signals Based on Wavelet Transform and Autocorrelation Function. Research and Application of Variable Rate Fertilizer Applicator System Based on a DC Motor. TOF Depth Map Super-resolution Using Compressive Sensing. Mixed Kernel Function SVM for Pulmonary Nodule Recognition. A Multi-channel Routing Protocol for Dual Radio Wireless Networks. Zero-forcing receiver in uplink massive MIMO. Large-scale joint map matching of GPS traces. Time-domain segmentation based massively parallel simulation for ADCs. An accurate semi-analytical framework for full-chip TSV-induced stress modeling. Finding High-Influence Leader Based on Local Metrics. A Structural Analysis of SLAs and Dependencies Using Conceptual Modelling Approach. PoWER: prediction of workload for energy efficient relocation of virtual machines. 
A 920MHz quad-core cryptography processor accelerating parallel task processing of public-key algorithms. Gesture studio: authoring multi-touch interactions through demonstration and declaration. FFitts law: modeling finger touch with fitts' law. PCA & HMM Based Arm Gesture Recognition Using Inertial Measurement Unit. Reliability of LT Codes under Dynamic Channel Conditions in Wearable Body Area Network. The removal of ocular artifactsfrom EEG signals: An adaptive modeling technique for portable applications. Design of 13.56MHz power recovery circuit with signal transmission for contactless bank IC card. Intersample ripple resulting from discrete-time feedforward control. Implementation of full-dimensional MIMO (FD-MIMO) in LTE. A Cross-Layer Routing Scheme Using Adaptive Retransmission Strategy for Wireless Mesh Networks. Adaptive mesh subdivision for efficient light baking. On Delay Tomography: Fast Algorithms and Spatially Dependent Models. Fractional Weierstrass Model for Rough Ocean Surface and Analytical Derivation of Its Scattered Field in a Closed Form. A Wireless Magnetic Resonance Energy Transfer System for Micro Implantable Medical Sensors. Gesture Search: Random Access to Smartphone Content. Improved stability criteria for uncertain delayed neural networks. Synchronisation of unified chaotic systems with uncertain parameters in finite time. Improved Double Threshold Detector for Spatially Distributed Target. Full Diversity Full Rate Cyclotomic Orthogonal Space-Time Block Codes for MIMO Wireless Systems. Constructions of covering arrays of strength five. Statistical analysis of bivariate failure time data with Marshall-Olkin Weibull models. Leveraging Sharing in Second Level Translation-Lookaside Buffers for Chip Multiprocessors. Localizing an unknown number of targets with radio tomography networks. TrueSight: Self-training Algorithm for Splice Junction Detection Using RNA-seq. A low complexity fast lattice reduction algorithm for MIMO detection. Finding δ-Closure Property of BitTorrent System and Applications Based on This Property. Research of Security Relationship Based on Social Networks. Structured modeling based on generalized variable parameter HMMs and speaker adaptation. Complex low-density lattice codes designed for ISI channels via spatial coupling. A Decomposition based estimation of distribution algorithm for multiobjective knapsack problems. Energy-balanced cooperative routing in multihop wireless ad hoc networks. Fulfilling the promise of massive MIMO with 2D active antenna array. A non-asymptotic throughput for massive MIMO cellular uplink with pilot reuse. Estimation of vegetation water content based on MODIS: Application on forest fire risk assessment. Agent-Based Simulation and Its Applications to Service Management: Invited Talk. A Novel Method for Target Search and Recognition in an Intricate Environment with UWB through Wall Penetration RADAR. Bootstrapping personal gesture shortcuts with the wisdom of the crowd and handwriting recognition. Gesture coder: a tool for programming multi-touch gestures by demonstration. Adaptive learning evaluation model for evolutionary art. Provably secure identity-based authenticated key agreement protocol and its application. Tap, swipe, or move: attentional demands for distracted smartphone input. Gesture-based interaction: a new dimension for mobile user interfaces. A novel dynamic mobility management scheme in LISP architecture. 
Detection of HF First-Order Sea Clutter and Its Splitting Peaks with Image Feature: Results in Strong Current Shear Environment. Adaptive Kernel Size Selection for Correntropy Based Metric. Improved Probabilistic Multi-Hypothesis Tracker for Multiple Target Tracking With Switching Attribute States. Multichannel Image Registration by Feature-Based Information Fusion. Identification of Time-Varying Systems Using Multi-Wavelet Basis Functions. KINARI-Web: a server for protein rigidity analysis. Pressure vessel state investigation based upon the least squares support vector machine. Adaptive lattice-based light rendering of participating media. On the existence of orthogonal arrays OA(3, 5, 4n+2). Rapid evaluation of the binding energies between peptide amide and DNA base. Rapid evaluation of the binding energies in hydrogen-bonded amide-thymine and amide-uracil dimers in gas phase. Some 20-regular CDP(5, 1;20u) and their applications. Monotone Rank and Separations in Computational Complexity. Tight bounds on the randomized communication complexity of symmetric XOR functions in one-way and SMP models. FusionHunter: identifying fusion transcripts in cancer using paired-end RNA-seq. A construction of optimal sets of FH sequences. Random selection LLL algorithm and its fixed complexity variant for MIMO detection. Simulation of depth from coded aperture cameras with Zemax. Tight Bounds on Communication Complexity of Symmetric XOR Functions in One-Way and SMP Models. Modeling and Classification of sEMG Based on Instrumental Variable Identification. Modeling and Classification of sEMG Based on Blind Identification Theory. sEMG Signal Classification for the Motion Pattern of Intelligent Bionic Artificial Limb. Research and Design of an Agricultural Scientific Instruments Classification and Code Management System. A Sparse Common Spatial Pattern Algorithm for Brain-Computer Interface. Simulated annealing with probabilistic neighborhood for traveling salesman problems. Exploring the Relationship among Dimensions of Flight Comprehensive Capabilities Based on SEM. Simulation on influence mechanism of environmental factors to producers' food security behavior in supply chain. The research of the accuracy with gear measuring center based on finite element analysis. Finite element analysis of temperature field on capillary rheometer. A novel framework for passive macro-modeling. Adaptive Pinning Synchronization of Delayed Complex Dynamical Networks. Robust Model Predictive Control for Nonlinear Systems. The Study of Print Quality Evaluation System Using the Back Propagation Neural Network with Applications to Sheet-Fed Offset. An Effective Adjustment on Improving the Process of Road Detection on Raster Map. Processing Wikipedia Dumps - A Case-study Comparing the XGrid and MapReduce Approaches. User-defined motion gestures for mobile interaction. DoubleFlip: a motion gesture delimiter for mobile interaction. Gesture avatar: a technique for operating mobile user interfaces using gestures. Deep shot: a framework for migrating tasks across devices using mobile phone cameras. Experimental analysis of touch-screen gesture designs in mobile environments. The shortest universal solutions for non-linear dynamic equation. Construction of semantic analysis system for Traditional Chinese Medicine unstructured medical records. Directional coupler design in 3G/LTE Power Amplifier Module (invited paper). A security processor based on MIPS 4KE architecture. 
Analysis of adaptive support-weight based stereo matching for hardware realization. A NoC-based multi-core architecture for IEEE 802.11i CCMP. Social Network Analysis on KAD and Its Application. Measuring "Sybil attacks" in Kademlia-based networks. Multi-target Tracking and Segmentation via Discriminative Appearance Model. Research on Application of ZigBee Technology in Flammable and Explosive Environment. A New Class of Large Neighborhood Path-Following Interior Point Algorithms for Semidefinite Optimization with O(√n log (Tr(X0S0)/ε)) Iteration Complexity. A Self-adaptive Genetic Algorithm Based on the Principle of Searching for Things. Managing Enterprise Service Level Agreement. A new approach of motion compensation for synthetic wideband radar under multitarget environment. A novel global harmony search algorithm for reliability problems. Incorporating gene co-expression network in identification of cancer prognosis markers. Semiparametric prognosis models in genomic studies. A color image watermarking algorithm resistant to print-scan. A real-time multi-cue hand tracking algorithm based on computer vision. DoubleFlip: a motion gesture delimiter for interaction. Gesture search: a tool for fast mobile data access. Modeling and pattern recognition of sEMG for intelligent bionic artificial limb. Analysis of Single-phase APF Overtone and Idle Current Examination. Motion control of an autonomous vehicle based on wheeled inverted pendulum using neural-adaptive implicit control. Aesthetic Evolution of Staged L-systems for Tiling Pattern Design. New condition for improved robust reliable controller design for neutral system with time delay. Research on air-conditioning fault diagnosis method based on SVM. Spread E, F layer ionospheric clutter identification in range-Doppler map for HFSWR. Optimized fuzzy information granulation based machine learning classification. A fuzzy-based commodity cluster analysis for Harbin Central-Red supermarket. Some properties of the sendograph metric by addition and scalar multiplication. Uncertainty Reasoning on Fuzziness and Randomness in Challenged Networks. Robust Hand Posture Recognition Integrating Multi-cue Hand Tracking. FrameWire: a tool for automatically extracting interaction logic from paper prototyping tests. Protractor: a fast and accurate gesture recognizer. A novel method of recognizing short coding sequences of human genes. Fuzzy Granulation Based Forecasting of Time Series. Stochastic Single Machine Scheduling to Minimize the Weighted Number of Tardy Jobs. Spectrum Usage Prediction Based on High-order Markov Model for Cognitive Radio Networks. Applications of colloidal quantum dots. Dimensional precision simulation of the formed picture tube panel. Covering arrays of strength 3 and 4 from holey difference matrices. Beyond Pinch and Flick: Enriching Mobile Gesture Interaction. Building lightweight intrusion detection system using wrapper-based feature selection mechanisms. A P2P based distributed services network for next generation mobile internet communications. Research on Green Effect of Eco-industrial Parks and its Formation Mechanism of Strategic Alliances' Stability. Integration of Routing and Switching in Delay-Disruption Tolerance Network. Multimodal Image Registration by Information Fusion at Feature Level. A Small Power Switching Mode Power Supply Based on TOP Switch. A Novel Video Annotation Framework Based on Video Object. SUEFUL-7: A 7DOF upper-limb exoskeleton robot with muscle-model-oriented EMG-based control. 
The Application of the Ant Colony Pheromones in Intelligent Learning. Exponential Stability of Cellular Neural Networks with Uncertain and Time-Varying Delay. Agent-Based Distributed Component Services in Spatial Modeling. A Multihoming Support Scheme with Localized Shim Protocol in Proxy Mobile IPv6. Contrapositive symmetry of distributive fuzzy implications revisited. Application of Temperature Fuzzy Control System in RF Treatment for Nerve Ache. Intrusion Detection Based on Back-Propagation Neural Network and Feature Selection Mechanism. Range Matching without TCAM Entries Expansion for Packet Classification. An Innovative Education System for International Service Engineering. A Study on Buffer Efficiency and Surround Routing Strategy in Delay Tolerant Network. The Organization and Application of Multimedia information Based on Ontology. An Identifier-Based Control Method in Dynamic Tracking Neuro-Fuzzy Control System. Non-subsampled Contourlet Transform Based Seismic Signal De-noising. An Investigation of the Role of Service Level Agreements in Classified Advertisement Websites. Towards Industry-Strength SLA Optimization Capabilities for Service Chains. Matchmaking Using Natural Language Descriptions: Linking Customers with Enterprise Service Descriptions. A Novel Nerve Ache Radio Frequency Treatment System with Multi-Channel Physiological Signal Monitoring. An End-to-End Loss Discrimination Scheme for Multimedia Transmission over Wireless IP Networks. Optimizing lead time and resource utilization for service enterprises. Doppler-based detection and tracking of humans in indoor environments. An Enhanced Multihoming Support Scheme with Proxy Mobile IPv6 for Convergent Networks. Into the Wild: Low-Cost Ubicomp Prototype Testing. Application of Particle Swarm Optimization Algorithm to Electric Power Line Overhaul Plan. Peak Load Shifting Distribution of Multi-zone Fuzzy Group Decision-Making. Seamless Handover Scheme for Proxy Mobile IPv6. Design of Signal Constellations in the Presence of Phase Noise. Improving Component Container Development Process through Product Line Engineering. The Design and Implementation of Monitoring System for H2S Gas Volume Fraction with Virtual Instrument. Cascadia: a system for specifying, detecting, and managing rfid events. Knowledge Sharing for Supply Chain Management Based on Fuzzy Ontology on the Semantic Web. LDCOM: a Layered degree-constrained overlay multicast for interactive media. The learning algorithm based on multiresolution analysis for neural networks. Algorithm Research of Flexible Graphplan based on Heuristic. The Research of MovementControl Policy on Tiered Mobile Network. The Research of Security Asynchronous Web Services based on SOA Architecture. Study on Automatic Shape Identification of Hatching Eggs Based on an Improved GA Neural Network. SRP Based Natural Interaction between Real and Virtual Worlds in Augmented Reality. Research on Temperature Control and Anti-cracking Simulation for Xiaowan Concrete High Arch Dam. The Numerical Simulation of Security of Groundwater Supply in the Centralized Exploitation Condition of a Large Irrigation District. Message from the BINDIS 2008 Workshop Organizers. Service Productivity Improvement and Software Technology Support. Activity-based prototyping of ubicomp applications for long-lived, everyday human activities. Microbubble Suspensions Prepared via Electrohydrodynamic Jetting Process. Multi-bandwidth analog filter design for SDR. Video transport over multi-hop directional wireless networks. 
Design Challenges and Principles for Wizard of Oz Testing of Location-Enhanced Applications. Using AI and semantic web technologies to attack process complexity in open systems. A hybrid data mining anomaly detection technique in ad hoc networks. Control of spatial discretisation in coastal oil spill modelling. Gestures without libraries, toolkits or training: a $1 recognizer for user interface prototypes. The algorithm simulation research for OSPF network routing based on granular computing method of quotient space. A novel loose coupling interworking scheme between UMTS and WLAN systems for multihomed mobile stations. The Adaptive Hybrid Cursor: A Pressure-Based Target Selection Technique for Pen-Based User Interfaces. A Classification of Flotation Froth Based on Geometry. Sequential Pattern-Based Cache Replacement in Servlet Container. Analysis of a Multi-Stream QoS Game for Multi-Path Routing. Incorporating Primal Sketch Based Learning Into Low Bit-Rate Image Compression. Study on the Appraisal Methods of Hand Fatigue. BrickRoad: a light-weight tool for spontaneous design of location-enhanced applications. Two operators in rough set theory. Model of tip-sample interaction and image reconstruction. Mobile Space-Time Envelopes for Location-Based Services. Frequency-Hopping Pilot Patterns for OFDM Cellular Systems. Networking in Rural Environments: Benefits, Feasibilities, and Requirements. An Applicable GSM Network Model for Networking in Rural Environments. A distributed cross-layer intrusion detection system for ad hoc networks. Informal prototyping of continuous graphical interactions by demonstration. An Event-Driven Adaptive Differentiated Service Web Container Architecture. Video Transport Over Multi Hop Directional Wireless Networks. A Context-Based Error Detection Strategy into H.264/AVC CABAC. A Decision Mechanism for Processing Multimodal Services in Future Generation Network. Agent-Based Soft Computing Society Applied in the Research of Reservoir Sedimentary Facies in Oil Fields. Verb-Noun Directory for Telecommunications Services Look-up. A Game Theoretic Approach to Multi-Stream QoS Routing. Design and experimental analysis of continuous location tracking techniques for Wizard of Oz testing. External representations in ubiquitous computing design and the implications for design tools. DEA efficiency measurement with undesirable outputs: an application to Taiwan's commercial banks. Short Paper: A Distributed Cross-Layer Intrusion Detection System for Ad Hoc Networks. Seamless Network Mobility Management for Realtime Service. Study on Knowledge Reasoning Based on Extended Formulas. Emerging Markets and Benefits of Fixed to Wireless Substitution in Africa. Optimizing Path Expression Queries of XML Data. Kernel-Based Multifactor Analysis for Image Synthesis and Recognition. Simulating a rural wireless network using SCILAB. Asymmetric Inexact Matching of Spatially-Attributed Graphs. Experimental analysis of mode switching techniques in pen-based user interfaces. A multipath video delivery scheme over diffserv wireless LANs. Topiary: a tool for prototyping location-enhanced applications. Remote invoking algorithms of self-healing system. Research on concurrent algorithm of component software architecture. A biological formal architecture of self-healing system. Multi-view face alignment guided by several facial feature points. An improved two-step approach to hallucinating faces. Face Hallucination with Pose Variation. 
Beyond Ontology Construction; Ontology Services as Online Knowledge Sharing Communities. A Framework for ACL Message Translation for Information Agents. Multi-stream video transport over DiffServ wireless LANS. Linear hidden Markov model for music information retrieval based on humming. An Object-oriented Framework for Parallel, Reactive Molecular Dynamics Simulation. Multi-stream video transport over MPLS networks. Word Concept Model for Intelligent Dialogue Agents. Word concept model: a knowledge representation for dialogue agents. Recognizing emotions in speech using short-term and long-term features. On Supervisory control of real-time discrete-event systems.
CommonCrawl
We generalize the results of Bilyk et al. on discrepancy in spaces with bounded mean oscillation and in exponential Orlicz spaces to arbitrary dimension. In particular, we use dyadic harmonic analysis to prove upper bounds of the BMO and exponential Orlicz space norms of the discrepancy function for the so-called order 2 digital nets. Such estimates play an important role as an intermediate step between the well-understood $L_p$ bounds and the still elusive $L_\infty$ asymptotics of the discrepancy function in arbitrary dimensions. Joint work with Dmitriy Bilyk (University of Minnesota, USA).
CommonCrawl
A $2$ km long broadcast LAN has $10^7$ bps bandwidth and uses CSMA/CD. The signal travels along the wire at $2 \times 10^8$ m/s. What is the minimum packet size that can be used on this network? For CSMA/CD to detect collisions, the transmission time (which is determined by the packet size) must be at least twice the propagation delay. (Doubt) Yes, the packet size should be 25 bytes, but the question asks what the minimum size could be. A packet should be equal to or larger than 25 bytes, so can't we say that 50 bytes (option A) also fulfils the question's criteria? 50 B is valid, but it is not the minimum. If, however, the question asks for the minimum among the given options, then 50 B is correct. The answer will be 200 bits, not bytes. Put the data into the formula above and you will get the answer: 200 bits, i.e. 25 bytes.
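Not part of the original question: a quick sketch that makes the standard arithmetic explicit (the variable names are just for illustration).

```python
# CSMA/CD: the transmission time of the smallest frame must cover the round-trip propagation time.
bandwidth = 10**7        # bits per second
distance = 2_000         # metres
speed = 2 * 10**8        # metres per second

prop_delay = distance / speed               # one-way propagation delay = 10 microseconds
min_frame_bits = bandwidth * 2 * prop_delay
print(min_frame_bits, "bits =", min_frame_bits / 8, "bytes")   # 200.0 bits = 25.0 bytes
```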
CommonCrawl
Abstract: We propose a method to define a $d+1$ dimensional geometry from a $d$ dimensional quantum field theory in the $1/N$ expansion. We first construct a $d+1$ dimensional field theory from the $d$ dimensional one via the gradient flow equation, whose flow time $t$ represents the energy scale of the system such that $t\rightarrow 0$ corresponds to the ultra-violet (UV) while $t\rightarrow\infty$ to the infra-red (IR). We then define the induced metric from $d+1$ dimensional field operators. We show that the metric defined in this way becomes classical in the large $N$ limit, in a sense that quantum fluctuations of the metric are suppressed as $1/N$ due to the large $N$ factorization property. As a concrete example, we apply our method to the O(N) non-linear $\sigma$ model in two dimensions. We calculate the three dimensional induced metric, which is shown to describe an AdS space in the massless limit. We finally discuss several open issues in future studies.
CommonCrawl
Are there any primes that are never a factor of a Carmichael number? After a quick glance at some Carmichael number factorizations, such a prime $p$ would have to be greater than or equal to $53$. Suppose $21 = 3 \cdot 7$ divides a Carmichael number $N$. By Korselt's criterion (for every prime $q \mid N$ we have $q-1 \mid N-1$), this gives $2 \mid N-1$ and $2 \cdot 3 = 6 \mid N-1$. But $3 \mid N$ while, on the other hand, we also have $3 \mid N-1$, a contradiction, since no two consecutive integers are both divisible by a number $>$ $1$. More generally, for two primes $p$ and $pk+1$, a Carmichael number is never a multiple of $p(pk+1)$. I hope this helps with some of the understanding of divisors of Carmichael numbers. Not the answer you're looking for? Browse other questions tagged number-theory prime-numbers algebraic-number-theory primality-test pseudoprimes or ask your own question. Can a Mersenne number ever be a Carmichael number? Can every odd prime $p\ne 11$ be the smallest prime factor of a carmichael-number with $3$ prime factors? Is it known whether the possible number of prime factors of a Carmichael-number is bounded? Is there a Carmichael-number divisible by $3\times 5\times 17=255$? What is the fastest way to get the next Carmichael-number?
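Not from the original thread: a small brute-force sketch of Korselt's criterion (the function name is made up) that also illustrates the $21 = 3\cdot 7$ obstruction, since no Carmichael number below $10^5$ is divisible by $21$.

```python
def is_carmichael(n):
    """Korselt's criterion: n is Carmichael iff n is composite, squarefree,
    and p - 1 divides n - 1 for every prime p dividing n."""
    if n < 3 or n % 2 == 0:
        return False
    m, p, factors = n, 2, []
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:      # repeated prime factor: not squarefree
                return False
            factors.append(p)
        else:
            p += 1
    if m > 1:
        factors.append(m)
    if len(factors) < 2:        # exclude primes themselves
        return False
    return all((n - 1) % (p - 1) == 0 for p in factors)

carmichaels = [n for n in range(3, 100_000) if is_carmichael(n)]
print(carmichaels[:5])                        # [561, 1105, 1729, 2465, 2821]
print(any(n % 21 == 0 for n in carmichaels))  # False
```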
CommonCrawl
Abstract: The purpose of this paper is to describe a method for computing homotopy groups of the space of $\alpha$-stable representations of a quiver with fixed dimension vector and stability parameter $\alpha$. The main result is that the homotopy groups of this space are trivial up to a certain dimension, which depends on the quiver, the choice of dimension vector, and the choice of parameter. As a corollary we also compute low-dimensional homotopy groups of the moduli space of $\alpha$-stable representations of the quiver with fixed dimension vector, and apply the theory to the space of non-degenerate polygons in three-dimensional Euclidean space.
CommonCrawl
Soooo use the natural projection mapping and show that the kernel equals $B_1 \oplus \cdots \oplus B_k$? And that it's an epimorphism? Can somebody help me with the details? What are the main differences between this and proving it for rings and ideals? Nothing, I suppose, since ideals are $R$-modules... anyway, yeah, I just wanted to post this here because I figured one of you smarty pants would have something useful to say about it. Thanks! Browse other questions tagged abstract-algebra or ask your own question. A theorem characterizing von Neumann regular endomorphisms. Classification of finitely generated multigraded modules over $K[x_1,\ldots,x_n]$?
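For completeness, here is a sketch of the standard argument. It is not part of the original thread, and it assumes the statement being proved is $(A_1 \oplus \cdots \oplus A_k)/(B_1 \oplus \cdots \oplus B_k) \cong (A_1/B_1) \oplus \cdots \oplus (A_k/B_k)$ for submodules $B_i \subseteq A_i$. Define $\pi : A_1 \oplus \cdots \oplus A_k \to (A_1/B_1) \oplus \cdots \oplus (A_k/B_k)$ by $\pi(a_1,\ldots,a_k) = (a_1 + B_1,\ldots,a_k + B_k)$. Each coordinate of $\pi$ is the canonical quotient map, so $\pi$ is an $R$-module homomorphism, and it is surjective because each quotient map is. Moreover $\pi(a_1,\ldots,a_k) = 0$ exactly when $a_i \in B_i$ for every $i$, so $\ker \pi = B_1 \oplus \cdots \oplus B_k$. The first isomorphism theorem then gives the desired isomorphism, and the argument is essentially word-for-word the one for rings and ideals.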
CommonCrawl
Abstract: We reconsider the calculation of the one-loop effective action for an open Green-Schwarz superstring in the $AdS_5\times S^5$ background for a circular boundary loop. By an explicit computation of the ratio of relevant determinants, describing semi-classical fluctuations about the minimal surface in AdS and flat spaces, we show that it does not depend upon the AdS regularizing parameter $\epsilon$. The only dependence upon $\epsilon$ resides in the reparametrization path integral of the exponential of the classical boundary action. We analyze how the result depends on the choice of the boundary condition imposed on fluctuating fields and show that, despite the fact that the contribution of individual angular modes changes, the product over the modes remains unchanged.
CommonCrawl
In general, we can get a sense of whether or not a structure is rigid if we imagine the squares with braces as metal plates and the squares without braces as pieces of paper. If the structure bends when we hold it, then it is not rigid. We will now define a rigid structure in terms of graphs. Definition: A structure is considered Rigid if and only if the bipartite graph that represents the structure is connected. To construct a bipartite graph of an $m \times n$ sized structure, let $r_1, r_2, ..., r_m$ be the set of vertices corresponding to the $m$ rows in the structure, and let $c_1, c_2, ..., c_n$ be the set of vertices corresponding to the $n$ columns in the structure. We will put an edge between $r_i$ and $c_j$ if the square in row $i$ and column $j$ has a brace. For example, the following structure is rigid, since the associated bipartite graph is connected. If this bipartite graph were not connected, then the structure would not be considered rigid.
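As a quick illustration (not part of the original page; the function name and the example grids below are made up), here is a small sketch that builds the bipartite graph from a list of braced squares and checks connectivity with a breadth-first search.

```python
from collections import deque

def is_rigid(m, n, braces):
    """Decide rigidity of an m x n braced grid.
    braces is an iterable of 1-indexed (i, j) pairs: the square in row i,
    column j carries a brace."""
    # Vertices 0..m-1 stand for rows r_1..r_m, vertices m..m+n-1 for columns c_1..c_n.
    adj = [[] for _ in range(m + n)]
    for i, j in braces:
        adj[i - 1].append(m + j - 1)
        adj[m + j - 1].append(i - 1)
    # Rigid iff the bipartite graph on all m + n vertices is connected.
    seen = [False] * (m + n)
    seen[0] = True
    queue = deque([0])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if not seen[w]:
                seen[w] = True
                queue.append(w)
    return all(seen)

# A 2 x 3 grid braced in squares (1,1), (1,2), (2,2), (2,3) is rigid:
print(is_rigid(2, 3, [(1, 1), (1, 2), (2, 2), (2, 3)]))  # True
# Removing the brace in (2,2) disconnects the graph, so the structure is not rigid:
print(is_rigid(2, 3, [(1, 1), (1, 2), (2, 3)]))          # False
```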
CommonCrawl
Does an adjoint of an internal Hom functor of a prounital closed category define a tensor product? A closed category is a category equipped with internal Hom functors along with a unit object. Now this answer shows that if $C$ is a closed category whose internal Hom functor has a left adjoint, then this left adjoint defines a tensor product on $C$, i.e. it makes $C$ into a monoidal category. But I'm wondering if something more general is true. A prounital closed category is like a closed category, except that the requirements regarding a unit object are dropped. (I don't think this is standard terminology.) My question is, if $C$ is a prounital closed category whose internal Hom functor has a left adjoint, then does this left adjoint define some kind of tensor product on $C$? Now it presumably wouldn't make $C$ monoidal, since you no longer have a unit object. But would it at least make $C$ into a "semigroupal category", i.e. a category with a tensor product which satisfies all the properties of a monoidal category except the requirements regarding a unit object? If not, does anyone know of a counterexample? Browse other questions tagged category-theory examples-counterexamples adjoint-operators monoidal-categories hom-functor or ask your own question. Does an adjoint of the Hom functor make a category monoidal? Tensor products from internal hom? In what sense right dual and braiding structure respect the tensor product structure in a monoidal category? Describe internal Hom and tensor products of $\mathcal O_X$ modules more conceptually? What's the internal hom of linear representations of categories? Symmetric monoidal category which is not closed? Is "monoidal category enriched over itself" the same as "closed monoidal category"?
CommonCrawl
Abstract: We characterize models where electroweak symmetry breaking is driven by two light Higgs doublets arising as pseudo-Nambu-Goldstone bosons of new dynamics above the weak scale. They represent the simplest natural two Higgs doublet alternative to supersymmetry. We construct their low-energy effective Lagrangian making only few specific assumptions about the strong sector. These concern their global symmetries, their patterns of spontaneous breaking and the sources of explicit breaking. In particular we assume that all the explicit breaking is associated with the couplings of the strong sector to the Standard Model fields, that is gauge and (proto)-Yukawa interactions. Under those assumptions the scalar potential is determined at lowest order by very few free parameters associated to the top sector. Another crucial property of our scenarios is the presence of a discrete symmetry, in addition to custodial SO(4), that controls the $T$-parameter. That can either be simple CP or a $Z_2$ that distinguishes the two Higgs doublets. Among various possibilities we study in detail models based on SO(6)/SO(4)$\times$ SO(2), focussing on their predictions for the structure of the scalar spectrum and the deviations of their couplings from those of a generic renormalizable two Higgs doublet model.
CommonCrawl
Abstract : Using LOFAR, we have performed a very-low-frequency (115–155 MHz) radio survey for millisecond pulsars (MSPs). The survey targeted 52 unidentified Fermi γ-ray sources. Employing a combination of coherent and incoherent dedispersion, we have mitigated the dispersive effects of the interstellar medium while maintaining sensitivity to fast-spinning pulsars. Toward 3FGL J1553.1+5437 we have found PSR J1552+5437, the first MSP to be discovered (through its pulsations) at a radio frequency <200 MHz. PSR J1552+5437 is an isolated MSP with a 2.43 ms spin period and a dispersion measure of 22.9 pc cm$^{-3}$. The pulsar has a very steep radio spectral index ($\alpha < -2.8 \pm 0.4$). We obtain a phase-connected timing solution combining the 0.74 years of radio observations with γ-ray photon arrival times covering 7.5 years of Fermi observations. We find that the radio and γ-ray pulse profiles of PSR J1552+5437 appear to be nearly aligned. The very steep spectrum of PSR J1552+5437, along with other recent discoveries, hints at a population of radio MSPs that have been missed in surveys using higher observing frequencies. Detecting such steep spectrum sources is important for mapping the population of MSPs down to the shortest spin periods, understanding their emission in comparison to slow pulsars, and quantifying the prospects for future surveys with low-frequency radio telescopes like SKA-Low and its precursors.
CommonCrawl
Abstract: Results from the study of hadronic jets in hadron-hadron collisions at order $\alpha_s^3$ in perturbation theory are presented. The focus is on various features of the internal structure of jets. The numerical results of the calculation are compared with data where possible and exhibit reasonable agreement.
CommonCrawl
Throughout this discussion, the dynamical system X, the group G, and the group U will be fixed. thus turning the sequence of groups into a chain complex. An n-chain with vanishing boundary is called an n-cycle, while an n-chain which is the boundary of an (n+1)-chain is called an n-boundary; the spaces of n-cycles and n-boundaries are denoted and respectively. Thus for instance is both a 1-cycle and a 1-boundary. However, if g is a non-trivial group element that fixes x and G is abelian, one can show that is a 1-cycle but not a 1-boundary. We define the homology groups for all n. It is a nice exercise to compute these groups in some simple cases, e.g. If G acts transitively on X, then . If G acts freely on X, then is trivial for . However, I don't know of any application of these homology groups to the theory of dynamical systems. An n-cochain is a homomorphism from the space of n-chains to U. Since is a free abelian group generated by the simplices , we can view an n-cochain as a function from to U. (Again, we are ignoring all measure-theoretic or topological considerations here.) The space of all n-cochains is denoted ; this is an abelian group. for 1-cochains , and so forth. Because , we have , and so becomes a cochain complex. n-cochains whose coboundary vanishes are known as n-cocycles, and n-cochains which are the coboundary of an (n-1)-cochain are known as n-coboundaries. The spaces of n-cocycles and n-coboundaries are denoted and respectively, allowing us to define the cohomology group . When n=0, and if the action of G is transitive (in the discrete category), minimal (in the topological category), or ergodic (in the measure-theoretic category), the only 0-cocycles are the constants, and the only 0-coboundary is the zero function, so . When n=1, it is not hard to see that the notion of 1-cocycle and 1-coboundary correspond to the notion of cocycle and coboundary discussed at the beginning of this post. Another oddity is that homology and cohomology, as it is classically defined, requires the space of chains, cochains, etc. to all be abelian groups; but for dynamical systems one can certainly talk about cocycles and coboundaries taking values in a non-abelian group U by modifying the definitions slightly, leading to the concept of a group extension of a dynamical system. (In this context, the first cohomology becomes a quotient space rather than a group; see also my earlier post interpreting these cocycles in the language of gauge theory.) It seems to me that in this case, the dynamical system concept of a cocycle or coboundary cannot be interpreted in terms of classical cohomology theory (but presumably can be handled by non-abelian group cohomology). thus is capable of detecting whether a U-extension of a G-system X can be lifted to a -extension. or in other words, to show that the map is a V-valued 2-coboundary. The same observation (now setting ) shows that the map is a -valued 2-coboundary (indeed, it is the coboundary of ), hence a -valued 2-cocycle, and thus is a V-valued 2-cocycle, and so the map is a map from 1-cocycles to 2-cocycles . Similarly, given two 1-cocycles , we see that differs from by some V-valued 1-cochain, so on taking derivatives we see that differs from by some 2-coboundary, thus is linear modulo 2-coboundaries. Finally, if is a U-valued 1-coboundary, then is the sum of a -valued 1-coboundary and a V-valued 1-cochain, and so on taking derivatives we see that maps 1-coboundaries to 2-coboundaries. 
(Presumably the above arguments are a special case of one of the standard diagram chasing lemmas in homological algebra, but I don't know which one it is. One could also verify these facts from the axioms of B induced from (2) and the abelian group structure on , but this turns out to be remarkably tedious.) Hence it induces a map from to , and then (3) is exact by the preceding discussion. Thanks! This is the first post in a long while that's in a field I could follow. I actually just posted on a similar thing but was giving results about cohomology groups of Galois groups. Actually, the real reason I came down here was to post the typo someone else already caught, then decided to actually comment anyway. The boundary formula you have listed above comes from a slightly more general formula by using specific choices for group actions from the left and the right on the modules considered – thus, with a slightly more general formula, we'd have , and , etc. Would these formulae make sense in the dynamical systems world at all? Or is there nothing to be gained from extending to bimodules? I'm starting my thesis on certain cohomology theories, and it's nice to find applications of the ideas in such diverse fields, especially since right now I'm focusing on the algebraic machinery and sometimes "real life" gets too blurred behind it. As far as applications of higher cohomology groups go, there seems to be a certain regularity in that if a certain cohomology group helps to classify extensions up to conjugation, the next cohomology group helps you find out if a certain candidate for an extension CAN be extended to a full extension. The most common example is when you want to deform a certain geometrical structure: you have a sequence of deformation parameters (each a cochain), and if you have a finite number of cochains you think might be the first part of a series, you can look at an associated cochain in the next cohomology group, pray for it to be zero, and if it is, that means the sequence can be continued one more step (in a possibly non-unique way). The last paragraph might perhaps point in that direction? Mark ____ (50centfunctions on tumblr) came up with a nice real-life dynamical system where the left and right inverses (thought of as undoing earlier-time moves or later-time moves) are noticeably different: touching up a billboard that's been faded by the sun (right inverse), versus painting a billboard so that it will fade into the desired image (left inverse). Thanks for the correction! I suppose the analogue of a bimodule in dynamical systems would be a space X with both a left and right action of a group G, which commute with each other; but I don't know of any interesting example of this other than that when X is basically G itself (or maybe a quotient of G by a normal subgroup), in which case one is really doing group cohomology rather than dynamical systems. 
with the nth boundary defined by . The Hochschild homology is defined as the homology of this complex. To recover the homology defined in the post in the case of a group acting on a space , I think you can take to be the group ring and to be the free abelian group generated by points in . The left module structure is given by the group action and the right module structure is the trivial structure. This is really just using slightly different language to describe the same thing, I suppose, but I guess in math writing problems or definitions in slightly different language is useful sometimes. 
Looking at it this way makes it seem a little bit more likely that it might be useful to look at right actions of other than the trivial one. I don't know much about dynamical systems, so I'm not sure if it could actually be useful or not. I am wondering what all this means for information and coding theory. I had always wanted to build a geometric version of information theory based on a "cohomology" of dynamical systems. This cohomology might be a suitable candidate for that…. PS: The cohomology of information functions is constructed on Hochschild's introduced by Peter in the last two posts and allows a unique characterisation of Shannon entropy. This post was briefly discussed over at our blog. Thanks David! I've taken the liberty of putting one of the comments on that blog posting (which answered one of my questions) in an update to the body of the post here. As Marlowe already predicted, the second cohomology group detects obstructions to a partial extension of a dynamical system lifting to a full extension. V. Nekrashevych develops the notion of permutational bimodule, with many examples and applications, in his book "Self-Similar Groups". Rather a late correction, but in the section "cochains", on the line just before the text "for 1-cochains $\rho : G \times H \rightarrow U$", I think there should be a $\rho(h, x)$ in place of a $\rho(g, x)$. Dear Terry, I'm not a mathematician, and am just trying to make acquaintance with homology and cohomology. Unfortunately, I don't understand your singular cohomology analogy. For example, how do I derive your equation, based on this analogy? As far as I see, corresponds to the simplex, and the boundary of this (as the boundary of a simplicial complex) is . How does your expression follow from this? Would you explain this, please? and so the group boundary equation corresponds to the simplicial boundary equation . Yes, as you guessed, I tried to convert the members in the wrong direction. Your square brackets solved my confusion. Thank you very much, Terry! I think that this correspondence is very similar to the "bar resolution" in Mac Lane's Homology (p. 118, (5.11)); however, the numbering and the usage of square brackets and parentheses are opposite there.
CommonCrawl
You are in charge of designing an advanced centralized traffic management system for smart cars. The goal is to use global information to instruct morning commuters, who must drive downtown from the suburbs, how best to get to the city center while avoiding traffic jams. Unfortunately, since commuters know the city and are selfish, you cannot simply tell them to travel routes that take longer than normal (otherwise they will just ignore your directions). You can only convince them to change to different routes that are equally fast. The city's network of roads consists of intersections that are connected by bidirectional roads of various travel times. Each commuter starts at some intersection, which may vary from commuter to commuter. All commuters end their journeys at the same place, which is downtown at intersection 1. If two commuters attempt to start travelling along the same road in the same direction at the same time, there will be congestion; you must avoid this. However, it is fine if two commuters pass through the same intersection simultaneously or if they take the same road starting at different times. Determine the maximum number of commuters who can drive downtown without congestion, subject to all commuters starting their journeys at exactly the same time and without any of them taking a suboptimal route. Figure 1: Illustration of Sample Input 2. In Figure 1, cars are shown in their original locations. One car is already downtown. Of the cars at intersection 4, one can go along the dotted route through intersection 3, and another along the dashed route through intersection 2. But the remaining two cars cannot reach downtown while avoiding congestion. So a maximum of 3 cars can reach downtown with no congestion. The input consists of a single test case. The first line contains three integers $n$, $m$, and $c$, where $n$ ($1 \le n \le 25\, 000$) is the number of intersections, $m$ ($0 \le m \le 50\, 000$) is the number of roads, and $c$ ($0 \le c \le 1\, 000$) is the number of commuters. Each of the next $m$ lines contains three integers $x_ i$, $y_ i$, and $t_ i$ describing one road, where $x_ i$ and $y_ i$ ($1 \le x_ i, y_ i \le n$) are the distinct intersections the road connects, and $t_ i$ ($1 \le t_ i \le 10\, 000$) is the time it takes to travel along that road in either direction. You may assume that downtown is reachable from every intersection. The last line contains $c$ integers listing the starting intersections of the commuters. Display the maximum number of commuters who can reach downtown without congestion.
CommonCrawl
Picture of the periodic table. A category with one object is a monoid: picture. and Poincare dual picture of rocks on a line. A 2-category with one object is a monoidal category: pictures. of a monoid hold up to isomorphism. Coherence law for associator. can switch rocks past each other. Definition of inverse morphism, isomorphism. to for Set, Vect and Top. in Part III. Subtraction and division are much subtler. Very often mathematical structures arise via decategorification! extra "layer" - the objects. an isomorphism, an endomorphism (N) and an automorphism (Z). is called [n], and it looks like an (n-1)-simplex. to Mon? More fun with of level slips! functor from Cat to Top. transformation between functors between their fundamental groupoids! Natural isomorphism between identity functor on Vect and double dual. Fundamental group versus first homology group of a pointed space. is the free category on a morphism. Theorem: a category is equivalent to any of its skeleta. D and whose morphisms are natural transformations between these! completely recover a group from all its actions (?). Example: category of representations of a group is hom(G, Vect). finite group from this category (with its extra structure). of the monoid on some object of C. Show the two setups are equivalent. example of Cat, in both styles. finite-limit-preserving functors, and other "doctrines". The 2-category generated by a 2-computad. Definition: a strict monoidal category is a 2-categories with one object. A commutative monoid is a strict monoidal category with one object. Point out problems of strictness. introduce the associator, left/right uniters, and their coherence laws. Example: the weak fundamental 2-groupoid of a space. Examples of Vect, RMod (R commutative) or RBiMod with its tensor product. Examples coming from categories with finite products / coproducts. sketch of proof using associahedron. Example: monads in Vect are algebras. Adjunctions in a bicategory. Monads from adjunctions. Adjunctions in Vect are dual vector spaces; they give matrix algebras. Cat gives strict monoidal categories! Adjoint functors in Cat give monads. As much as possible do all this "formally" in a 2-category? Weak versus strict functors between bicategories. Weak versus strict natural transformations. One version: "all diagrams commute". The associahedron. a strict one. Proof via categorified Yoneda??? [weak 2-categories, weak 2-functors, weak natural transformations]? Perhaps be a bit sketchy and save details of proofs for later? The definition of enriched category; examples. If V has finite limits so does VCat? If V is a distributive category so is VCat?? (also a monoid in $(2Cat, \times)$). monoidal category... a notion too strict! categories enriched over $(2\Cat, \tensor)$. 2-category (also a monoid in $(2\Cat, \tensor)$). also note some patterns. The 3d associahedron shows up here. A tricategory with one object is a monoidal bicategory. perhaps just in a semistrict 3-category. Pictures! The concept of weak adjunction. The swallowtail coherence law. How weak adjunctions give weak monads. 3 levels of "internalization" for 2-categories? bicategory to C is precisely a monoid in C. strict versus weak 3-functors... etc. (do 2-braids and 2-tangles in dimensions 2,3,4,5,6?
CommonCrawl
Abstract: An extension of the standard model of electro-weak interactions by an extra abelian gauge boson is given, in which the extra gauge boson and the hypercharge gauge boson both couple to an axionic scalar in a form that leads to a Stueckelberg mass term. The theory leads to a massive Z$'$ whose couplings to fermions are uniquely determined and suppressed by small mixing angles. Such a Z$'$ could have low mass and appear in $e^+e^-$ collisions as a sharp resonance. The branching ratios into $f\bar f$ species, and the forward-backward asymmetry are found to have distinctive features. The model also predicts a new unit of electric charge $e'=Q'e$, where $Q'$ is in general irrational, in the coupling of the photon with hidden matter that is neutral under $SU(2)_L\times U(1)_Y$.
CommonCrawl
Electron capture spectroscopy (ECS) measures the surface electron spin polarization (ESP) by passing a deuteron ion beam very close to and grazing an atomically flat sample. Polarized electrons that are captured polarize the deuterons by the hyperfine interaction. The polarization of the deuterons is measured by the asymmetry of scattered alpha particles resulting from the nuclear reaction D(T,n)$\alpha$. On the proposed, half-metallic, half-semiconducting, ferromagnetic material, NiMnSb, electron capture spectroscopy measures a surface electron spin polarization (ESP) of +13%, establishing unambiguously that long-range, surface ferromagnetic order exists for this ternary alloy. TAYLOR, KELLY JAY. "SURFACE ELECTRON SPIN POLARIZATION OF THE PROPOSED HALF-METALLIC FERROMAGNET NICKEL - MANGANESE - ANTIMONY." (1987) Master's Thesis, Rice University. https://hdl.handle.net/1911/13256.
CommonCrawl
Abstract: We study the effective physics of F-theory at order $\alpha'^3$ in derivative expansion. We show that the ten-dimensional type IIB eight-derivative couplings involving the graviton and the axio-dilaton naturally descend from pure gravity in twelve dimensions. Upon compactification on elliptically fibered Calabi-Yau fourfolds, the non-trivial vacuum profile for the axio-dilaton leads to a new, genuinely N=1, $\alpha'^3$ correction to the four-dimensional effective action.
CommonCrawl
A company has $n$ employees with certain salaries. Your task is to keep track of the salaries and process queries. The first input line contains two integers $n$ and $q$: the number of employees and queries. The employees are numbered $1,2,\ldots,n$. The next line has $n$ integers $p_1,p_2,\ldots,p_n$: each employee's salary. After this, there are $q$ lines describing the queries. Each line is either of the form "! $k$ $x$" (change the salary of employee $k$ to $x$) or "? $a$ $b$" (count the number of employees whose salary is between $a$ and $b$). Print the answer to each ? query.
CommonCrawl
An introduction on using arrays can be found here. Whether taught formally in school or not, the properties that apply to numbers in operations are encountered by children during their learning of mathematics. A sound understanding of these properties provides a good basis for developing operations, including mental calculation. The operation of addition comes easily to most children, but working with multiplication requires more sophisticated thinking and therefore usually needs more support. Modelling number properties involving multiplication using an array of objects not only allows children to represent their thinking with concrete materials, but it can also assist the children to form useful mental pictures to support memory and reasoning. The commutative property of multiplication can be neatly illustrated using an array. For example, the array above could be read as $2$ rows of $6$, or as $6$ columns of $2$. Or the array could be physically turned around to show that $2$ rows of $6$ has the same number as $6$ rows of $2$. Regardless of the way you look at it, there remain $12$ objects. Therefore, the array illustrates that $2 \times6 = 6 \times 2$, which is an example of the commutative property for multiplication. Being able to apply the commutative property means that the number of multiplication facts that have to be memorised is halved. Of the four operations, division is the most troublesome for young students. Full understanding of division tends to lag well behind the other operations. For many children opportunities to explore the concept with concrete materials are curtailed well before they perceive the relationships between division and the other four operations. One such relationship, the inverse relationship between division and multiplication, can be effectively illustrated using arrays. For example; $3 \times5 = 15$ ($3$ rows of $5$ make $15$), can be represented by the following array. Looking at the array differently reveals the inverse, that is; $15 \div 3 = 5 $ ($15$ put into $3$ rows makes $5$ columns - or $5$ in each row). Language clearly plays an important role in being able to express the mathematical relationships and the physical array supports this aspect of understanding by giving the students something concrete to talk about. Placing the mathematics into a real-life context through word problems can facilitate both understanding of the relationship and its expression through words. For example, "The gardener planted $3$ rows of $5$ seeds. How many seeds did she plant?" poses quite a different problem to "The gardener planted $15$ seeds in $3$ equal rows. How many seeds in each row?" yet both these word problems can be modelled using the same array. Further exploration of the array reveals two more ways of expressing inverse relationships: $5 \times3 = 15$ and $15 \div 3 = 5$ . The word problems can be adapted to describe these operations and highlight the similarities and differences between the four expressions modelled by the one array. This rather long title not only names one of the basic properties that govern our number system, it also names a personally invented mental strategy that many people regularly use. This strategy often comes into play when we try to recall one of the handful of multiplication facts that, for various reasons, are difficult to remember. For example, does this kind of thinking seem familiar? "I know $7 \times7$ is $49$. I need two more lots of $7$, which is $14$. So if I add $49$ and $14$... that makes $63$. 
Another way to explain this process is through an array. The smaller array to the left of the line shows $7 \times7$ ($7$ rows of $7$). The small array to the right of the line shows $7 \times2$ ($7$ rows of $2$). It can now be easily seen that $7 \times9$ is the same as $(7 \times 7) + (7 \times2)$, which leads to $49 + 14 = 63$. A slightly different approach to looking at this partitioned array fully illustrates the distributive property by highlighting the first step of splitting the $9$ into $7 + 2$, before the multiplying begins. With the partition line in place, each individual row of the whole array represents $9 = 7 + 2$. Therefore, all $7$ rows represent $7 \times(7 + 2)$, and as can be seen on the array, this is the same as $(7 \times7) + (7 \times2)$.
CommonCrawl
My aim is to relate a certain (equivariant) linear sigma model on a disc (with a non-compact target $\mathbb C$) as constructed in the exciting work of Gerasimov, Lebedev and Oblezin in Archimedean L-factors and Topological Field Theories I, to integrable systems (in the sense of Dubrovin, if you like). More precisely, I'd like to know if it's possible to express "the" correlation function of an (equivariant) linear sigma model (with non-compact target) as in the above reference in terms of a $\tau$-function of an associated integrable system? As far as I've understood from the literature, for a large class of related non-linear sigma models (or models like conformal topological field theories) such a translation can be done by translating the field theory (or at least some parts of it) into some Frobenius manifold (as in Dubrovin's approach, e.g., but other approaches are of course also welcome). Unfortunately, so far, I haven't been able to understand how to make things work in the setting of (equivariant) linear sigma models (with non-compact target). Any help or hints would be highly appreciated! +1: Sorry I can't help you, but this is a refreshingly good question!
CommonCrawl
The goal of this post is to give an overview of Bayesian statistics as well as to correct errors about probability that even mathematically sophisticated people commonly make. Hopefully by the end of this post I will convince you that you don't actually understand probability theory as well as you think, and that probability itself is something worth thinking about. I bolded the section on models because I think it is very important, so I hope that bolding it will make you more likely to read it. Also, I should note that when I say that nobody understands probability, I don't mean it in the sense that most people are bad at combinatorics. Indeed, I expect that most of the readers of this blog are quite proficient at combinatorics, and that many of them even have sophisticated mathematical definitions of probability. Rather I would say that actually using probability theory in practice is non-trivial. This is partially because there are some subtleties (or at least, I have found myself tripped up by certain points, and did not realize this until much later). It is also because whenever you use probability theory in practice, you end up employing various heuristics, and it's not clear which ones are the "right" ones. Suppose that you have never played a sport before, and you play soccer, and enjoy it. Now suppose instead that you have never played a sport before, and play soccer, and hate it. In the first case, you will think yourself more likely to enjoy other sports in the future, relative to in the second case. Why is this? Here and are two events, and means the probability of conditioned on . In other words, if we already know that occurred, what is the probability of ? The above theorem is quite easy to prove, using the fact that , and thus also equals , so that , which implies Bayes' theorem. So, why is it useful, and how do we use it? One example is the following famous problem: A doctor has a test for a disease that is 99% accurate. In other words, it has a 1% chance of telling you that you have a disease even if you don't, and it has a 1% chance of telling you that you don't have a disease even if you do. Now suppose that the disease that this tests for is extremely rare, and only affects 1 in 1,000,000 people. If the doctor performs the test on you, and it comes up positive, how likely are you to have the disease? which comes out to which is quite close to . p(Data | Hypothesis) — the likelihood of seeing the data we saw under our hypothesis; note that this should be quite easy to compute. If it isn't, then we haven't yet fully specified our hypothesis. p(Hypothesis) — the prior weight we give to our hypothesis. This is subjective, but should intuitively be informed by the consideration that "simpler hypotheses are better". p(Data) — how likely we are to see the data in the first place. This is quite hard to compute, as it involves considering all possible hypotheses, how likely each of those hypotheses is to be correct, and how likely the data is to occur under each hypothesis. Let's consider the following toy example. There is a stream of digits going past us, too fast for us to tell what the numbers are. But we are allowed to push a button that will stop the stream and allow us to see a single number (whichever one is currently in front of us). We push this button three times, and see the numbers 3, 5, and 3. How many different numbers would we estimate are in the stream? 
For simplicity, we will make the (somewhat unnatural) assumption that each number between 0 and 9 is selected to be in the stream with probability 0.5, and that each digit in the stream is chosen uniformly from the set of selected numbers. It is worth noting now that making this assumption, rather than some other assumption, will change our final answer. Here means "is proportional to". p(2 numbers | (3,5,3)) / p(3 numbers | (3,5,3)) = 3/8. p(2 numbers | (3,5,3,3,5)) / p(3 numbers | (3,5,3,3,5)) = (1/2)^6 / [9 * (1/3)^6] = 81/64. Now we find it more likely that there are only 2 numbers. This is what tends to happen in general with Bayes' rule — over time, more restrictive hypotheses become exponentially more likely than less restrictive hypotheses, provided that they correctly explain the data. Put another way, hypotheses that concentrate probability density towards the actual observed events will do best in the long run. This is a nice feature of Bayes' rule because it means that, even if the prior you choose is not perfect, you can still arrive at the "correct" hypothesis through enough observations (provided that the hypothesis is among the set of hypotheses you consider). I will use Bayes' rule extensively through the rest of this post and the next few posts, so you should make sure that you understand it. If something is unclear, post a comment and I will try to explain in more detail. An important distinction that I think most people don't think about is the difference between experiments you perform, and experiments you observe. To illustrate what I mean by this, I would point to the difference between biology and particle physics — where scientists set out to test a hypothesis by creating an experiment specifically designed to do so — and astrophysics and economics, where many "experiments" come from seeking out existing phenomena that can help evaluate a hypothesis. To illustrate why one might need to be careful in the latter case, consider empirical estimates of average long-term GDP growth rate. How would one do this? Since it would be inefficient to wait around for the next 10 years and record the data of all currently existing countries, instead we go back and look at countries that kept records allowing us to compute GDP. But in this case we are only sampling from countries that kept such records, which implies a stable government as well as a reasonable degree of economics expertise within that government. So such a study almost certainly overestimates the actual average growth rate. Or as another example, we can argue that a scientist is more likely to try to publish a paper if it doesn't agree with prevalent theories than if it does, so looking merely at the proportion of papers that lend support to or take support away from a theory (even if weighted by the convincingness of each paper) is probably not a good way to determine the validity of a theory. So why are we safer in the case that we forcibly gather our own data? By gathering our own data, we understand much better (although still not perfectly) the way in which it was constructed, and so there is less room for confounding parameters. In general, we would like it to be the case that the likelihood of observing something that we want to observe does not depend on anything else that we care about — or at the very least, we would like it to depend in a well-defined way. Let's consider an example. Suppose that a man comes up to you and says "I have two children. At least one of them is a boy." 
What is the probability that they are both boys? p(Two boys | At least one boy) = p(At least one boy | Two boys) * p(Two boys) / p(At least one boy) = 1 * (1/4) / (1/2+1/4) = 1/3. So the answer should be 1/3 (if you did math contests in high school, this problem should look quite familiar). However, the answer is not, in fact, 1/3. Why is this? We were given that the man had at least one boy, and we just computed the probability that the man had at two boys given that he had at least one boy using Bayes' theorem. So what's up? Is Bayes' theorem wrong? No, the answer comes from an unfortunate namespace collision in the word "given". The man "gave" us the information that he has at least one male child. By this we mean that he asserted the statement "I have at least one male child." Now our issue is when we confuse this with being "given" that the man has at least one male child, in the sense that we should restrict to the set of universes in which the man has at least one male child. This is a very different statement than the previous one. For instance, it rules out universes where the man has two girls, but is lying to us. Even if we decide to ignore the possibility that the man is lying, we should note that most universes where the man has at least one son don't even involve him informing us of this fact, and so it may be the case that proportionally more universes where the man has two boys involve him telling us "I have at least one male child", relative to the proportion of such universes where the man has one boy and one girl. In this case the probability that he has two boys would end up being greater than 1/3. p(X has two boys | X says he has >= 1 boy) = . Now this means that if we want to claim that the probability that the man has two boys is , what we are really claiming is that he is equally likely to inform us that he has at least one boy, in all situations where it is true, independent of the actual gender distribution of his children. I would argue that this is quite unlikely, as if he has a boy and a girl, then he could equally well have told us that he has at least one girl, whereas he couldn't tell us that if he has only boys. So I would personally put closer to 2, which yields an answer of . On the other hand, situations where someone walks up to me and tells me strange facts about the gender distribution of their children are, well, strange. So I would also have to take into account the likely psychology of such a person, which would end up changing my estimate of $\alpha$. The whole point here is that, because we were an observer receiving information, rather than an experimenter acquiring information, there are all sorts of confounding factors that are difficult to estimate, making it difficult to get a good probability estimate (more on that later). That doesn't mean that we should give up and blindly guess , though — it might feel like doing so gets away without making unwarranted assumptions, but it in fact implicitly makes the assumption that , which as discussed above is almost certainly unwarranted. What it does mean, though, is that, as scientists, we should try to avoid situations like the one above where there are lots of confounding factors between what we care about and our observations. In particular, we should avoid uncertainties in the source of our information by collecting the information ourselves. I should note that, even when we construct our own experiments, we should still model the source of our information. But doing so is often much easier. 
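Returning to the two-children example, here is a small numerical sketch of how strongly the answer depends on the assumed reporting model. It is not from the original post: the ratio alpha below (how much more likely a father of two boys is to make this particular statement than a father of a boy and a girl) is one possible way to formalize the discussion above, and the value 2 is just the illustrative guess mentioned there.

```python
def p_two_boys_given_statement(alpha):
    """alpha = p(says 'at least one boy' | two boys) / p(same statement | one boy, one girl)."""
    prior_two_boys = 0.25      # BB among the equally likely BB, BG, GB, GG
    prior_mixed = 0.50         # exactly one boy and one girl
    return prior_two_boys * alpha / (prior_two_boys * alpha + prior_mixed)

for alpha in (1.0, 2.0):
    print(alpha, p_two_boys_given_statement(alpha))
# 1.0 -> 0.333... (the textbook 1/3 answer)
# 2.0 -> 0.5     (if a two-boy father is twice as likely to volunteer this statement)
```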
In fact, if we wanted to be particularly pedantic, we really need to restrict to the set of universes in which our personal consciousness receives a particular set of stimuli, but that set of stimuli has almost perfect correlation with photons hitting our eyes, which has almost perfect correlation with the set of objects in front of us, so going to such lengths is rarely necessary — we can usually stop at the level of our personal observations, as long as we remember where they come from. Now that I've told you that you need to model your information sources, you perhaps care about how to do said modeling. Actually, constructing probabilistic models is an extremely important skill, so even if you ignore the rest of this post, I recommend paying attention to this section. Suppose that you have occasion to observe a coin being flipped (or better yet, you flip it yourself). You do this several times and observe a particular sequence of heads and tails. If you see all heads or all tails, you will probably think the coin is unfair. If you see roughly half heads and half tails, you will probably think the coin is fair. But how do we quantify such a calculation? And what if there are noticeably many more heads than tails, but not so many as to make the coin obviously unfair? We'll solve this problem by building up a model in parts. First, there is the thing we care about, namely whether the coin is fair or unfair. So we will construct a random variable X that can take the values Fair and Unfair. Then p(X = Fair) is the probability we assign to a generic coin being fair, and p(X = Unfair) is the probability we assign to a generic coin being unfair. Now supposing the coin is fair, what do we expect? We expect each flip of the coin to be independent, and have a probability of coming up heads. So if we let F1, F2, …, Fn be the flips of the coin, then p(Fi = Heads | X = Fair) = 0.5. What if the coin is unfair? Let's go ahead and blindly assume that the flips will still be independent, and furthermore that each possible weight of the coin is equally likely (this is unrealistic, as weights near 0 or 1 are much more likely than weights near 0.5). Then we have to have an extra variable , the probability that the unfair coin comes up heads. So we have p(Unfair coin weight = ) = 1. Note that this is a probability density, not an actual probability (as opposed to p(Fi = Heads | X = Fair), which was a probability). Continuing, if F1, F2, …, Fn are the flips of the coin, then p(Fi = Heads | X = Fair, Weight = ) = . p(Fair | F1, …, Fn) / p(Unfair | F1, …, Fn) = p(Fair)/p(Unfair) * . Since there are an equal number of heads and tails, our previous analysis will certainly conclude that the coin is fair, but its behavior does seem rather suspicious. In particular, different flips don't look like they are really independent, so perhaps our previous model is wrong. Maybe the right model is one where the next coin value is usually the same as the previous coin value, but flips with some probability. Now we have a new value of X, which we'll call Weird, and a parameter (basically the same as ) that tells us how likely a weird coin is to have a given probability of switching. We'll again give a uniform distribution over [0,1], so p(Switching probability of weird coin = ) = 1. Now we are ready to evaluate whether the coin we saw was a Weird coin or not. Evaluating that integral gives . So p(X = Weird | Data) = p(X = Weird) / 3960, compared to p(X = Fair | Data), which is p(X = Fair) / 4096. 
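The actual flip sequence is not reproduced above, so as an illustration the sketch below uses a hypothetical 12-flip sequence with six heads, six tails, and eight alternations between consecutive flips; this choice reproduces the 1/4096 and 1/3960 figures.

```python
from math import factorial

flips = "HTTHTHTTHTHH"   # hypothetical sequence: 6 heads, 6 tails, 8 switches in 11 transitions

n = len(flips)
switches = sum(a != b for a, b in zip(flips, flips[1:]))
stays = (n - 1) - switches

# Fair coin: every length-n sequence has probability (1/2)^n.
p_fair = 0.5 ** n

# Weird coin: 1/2 for the first flip, then integrate phi^switches * (1 - phi)^stays
# over phi in [0, 1]; the integral equals switches! * stays! / (switches + stays + 1)!.
p_weird = 0.5 * factorial(switches) * factorial(stays) / factorial(switches + stays + 1)

print(round(1 / p_fair), round(1 / p_weird))   # 4096 3960
```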
In other words, positing a Weird coin only explains the data slightly better than positing a Fair coin, and since the vast majority of coins we encounter are fair, it is quite likely that this one is, as well. Note: I'd like to draw your attention to a particular subtlety here. Note that I referred to, for instance, "Probability that an unfair coin weight is ", as opposed to "Probability that a coin weight is given that it is unfair". This really is an important distinction, because the distribution over really is the probability distribution over the weights of a generic unfair coin, and this distribution doesn't change based on whether our current coin happens to be fair or unfair. Of course, we can still condition on our coin being fair or unfair, but that won't change the probability distribution over one bit. Now let's suppose that we have a bunch of points (for simplicity, we'll say in two-dimensional Euclidean space). We would like to group the points into a collection of clusters. Let's also go ahead and assume that we know in advance that there are clusters. How do we actually find those clusters? p( belongs to cluster ) . You might notice, though, that in this case it is much less straightforward to actually find clusters with high posterior probability (as opposed to in the previous case, where it was quite easy to distinguish between Fair, Unfair, and Weird, and furthermore to figure out the most likely values of and ). One reason why is that, in the previous case, we really only needed to make one-dimensional searches over and to figure out what the most likely values were. In this case, we need to search over all of the , , and simultaneously, which gives us, essentially, a -dimensional search problem, which becomes exponentially hard quite quickly. This brings us to an important point, which is that, even if we write down a model, searching over that model can be difficult. So in addition to the model, I will go over a good algorithm for finding the clusters from this model, known as the EM algorithm. For the version of the EM algorithm described below, I will assume that we have uniform priors over , , and (in the last case, we have to do this by picking a set of un-normalized uniformly over and then normalizing). We'll ignore the problem that it is not clear how to define a uniform distribution over a non-compact space. The way the EM algorithm works is that we start by initializing , and arbitrarily. Then, given these values, we compute the probability that each point belongs to each cluster. Once we have these probabilities, we re-compute the maximum-likelihood values of the (as the expected mean of each cluster given how likely each point is to belong to it). Then we find the maximum-likelihood values of the (as the expected covariance relative to the means we just found). Finally, we find the maximum-likelihood values of the (as the expected portion of points that belong to each cluster). We then repeat this until converging on an answer. For a visualization of how the EM algorithm actually works, and a more detailed description of the two steps, I recommend taking a look at Josh Tenenbaum's lecture notes starting at slide 38. This is perhaps a nitpicky point, but I have found that keeping it in mind has led me to better understanding what I am doing, or at least to ask interesting questions. The point here is that people often intuitively think of probabilities as a fact about the world, when in reality probabilities are a fact about our model of the world. 
For instance, one might say that the probability of a child being male versus female is 0.5. And perhaps this is a good thing to say in a generic case. But we also have a much better model of gender, and we know that it is based on X and Y chromosomes. If we could look at a newly conceived ball of cells in a mother's womb, and read off the chromosomes, then we could say with near certainty whether the child would end up being male or female. You could also argue that I can empirically measure the probability that a person is male or female, by counting up all the people ever, and looking at the proportion of males and females. But this runs into two issues — first of all, the portion of males will be slightly off of 0.5. So how do we justify just randomly rounding off to 0.5? Or do we not? Second of all, you can do this all you want, but it doesn't give me any reason why I should take this information, and use it to form a conjecture about how likely the next person I meet is to be male or female. Once we do that, we are taking into account my model of the world. This final section seeks to look at a result from classical statistics and re-interpret it in a Bayesian framework. In other words, suppose that the probability of drawing data less likely (under our hypothesis) than the data we actually saw is less than . Then the likelihood of our hypothesis is at most . Or actually, this is not quite true. But it is true that there is an algorithm that will only reject correct hypotheses with probability , and this algorithm is to reject a hypothesis when p(p(Data' | Hypothesis) <= p(Data | Hypothesis)) < . I will leave the proof of this to you, as it is quite easy. So this seems like a pretty good test, especially if we choose to be extremely small (e.g., or so). The mere fact that we reject good hypotheses with probability less than is not helpful. What we really want is to also reject bad hypotheses with a reasonably large probability. I think you can get around this by repeating the same experiment many times, though. Of course, Bayesian statistics also can't ever say that a hypothesis is good, but when given two hypotheses it will always say which one is better. On the other hand, Bayesian statistics has the downside that it is extremely aggressive at making inferences. It will always output an answer, even if it really doesn't have enough data to arrive at that answer confidently. In the definition of the Bayes theorem above, there is a very strong assumption in the following phrase "if we already know that B occurred, what is the probability of A". The problem is with the verb "know". Note that we can replace "already know" with "suppose" and Bayes' theorem will still hold. In probabilistic modeling, you rarely "know" whether a certain event occurred or not. Since we are talking about models, "suppose" is more precise. Jeremy, you are right, thanks. I will fix that. Karl, you raise a very good point. Bayes' theorem is always about suppositions. But in practice there are certain events that appear in every single model we ever consider. We still don't "know" that such things are true, but for all practical purposes of making decisions we do. For instance, all humans subscribe intuitively to empiricism, even those who reject it on a higher level, or else we would see no reason to breathe, eat, sleep, etc. In that sense we "know" that the past is a reasonable indicator of the future, even though of course this is still a supposition. 
So while I agree with you, I find it in practice helpful to qualitatively distinguish the suppositions that are strongly ingrained in our models as things we "know". Incidentally, in my next post I intend to discuss how we might model situations where we are actually uncertain about our observed data. Nice touch on the mind projection fallacy. I'm only just getting into probability but thinking of the 2 boys example, and imagining it as a tree, if the man already has one boy and gender of offspring is independent, then the probability of another boy would seem to have to be one half. Am I going wrong? Will be reading through the clustering section, thanks! Can you clarify what sort of tree you are imagining? For instance, what would be another possible path in that tree? The reason why is that not all men who have at least one boy will inform us of this fact. As I discussed, once we do this, we have to start thinking about under what circumstances the man would inform us of this fact. But let's simplify the problem, and, for instance, look at old census data and write down all the men who have two children, at least one of whom is male. If we pick one of these at random, what is the probability that his other child is male? While you might expect the answer to be $1/2$, it is actually $1/3$. The reason why is that a man is twice as likely to have a boy and a girl as he is to have two boys. So of all the men who have at least one boy, only $1/3$ of them would have two boys. If we instead selected all the men whose younger child was male, then the answer would in fact be $1/2$. Intuitively this is because the gender of the older child shouldn't depend at all on the gender of the younger child, although you can also work this out formally by considering the four possibilities (girl girl, girl boy, boy girl, boy boy), how likely each is to occur ($1/4$ in each case), and how likely we are to select each of the four cases to be part of our sample (0, 1, 1, 1 in the first problem, 0, 0, 1, 1 in this problem). Hopefully that helps, but if you still have questions, feel free to ask.
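To make that counting argument concrete, here is a tiny enumeration over the four equally likely (older, younger) combinations. The helper name and the selector functions are mine, invented purely for illustration.

from itertools import product

families = list(product("BG", repeat=2))   # (older, younger), each of the four equally likely

def p_two_boys(selector):
    """Among families passing the selector, the fraction with two boys."""
    selected = [f for f in families if selector(f)]
    return sum(f == ("B", "B") for f in selected) / len(selected)

# "at least one child is a boy"
print(p_two_boys(lambda f: "B" in f))        # 0.333... = 1/3
# "the younger child is a boy"
print(p_two_boys(lambda f: f[1] == "B"))     # 0.5 = 1/2

Both numbers match the analysis above because each of the four family types really is equally likely before any selection is applied.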
CommonCrawl
In my previous post, I noted that the ability to see in color gave me an apparent superpower in quickly analyzing Travis CI and pytest logs. I wondered: how hard is it to use colorblind friendly colors here? I had in the back of my mind the thought of the next time I sit down and pair program with someone who is colorblind (which will definitely happen). Pair programming is largely about sharing experiences and ideas, and color disambiguation shouldn't be a wedge. I decided that loading customized CSS is the way to go. There are different ways to do this, but an easy method for quick replicability is to create a bookmarklet that adds CSS into the page. So, I did that. This is not beautiful, but it works and it's very noticeable. Nonetheless, when the goal is just to be able to quickly recognize if errors are occurring, or to recognize exceptional lines on a quick scroll-by, the black-text-on-white-box wins the standout crown. Again, I would say this is not beautiful, but definitely noticeable. As an aside, I also looked through the variety of colorschemes that I have collected over the years. And it turns out that 100 percent of them are unkind to colorblind users, with the exception of the monotone or monochromatic schemes (which are equal in the Harrison Bergeron sense). This is fun. It's fun seeing other people's workflows. (In these cases, it happened to be that the other person was usually the one at the keyboard and typing, and I was backseat driving). I live in the terminal and subscribe to the Unix-is-my-IDE general philosophy: vim is my text editor; a mixture of makefiles, linters, and fifos with tmux perform automated building, testing, and linting; git for source control; and a medium-sized but consistently growing set of homegrown bash/python/c tools and scripts make it fun and work how I want. I'm distinctly interested in seeing tools other people have made for their own workflows. Those scripts that aren't polished, but get the work done. There is a whole world of git-hooks and aliases that amaze me. At first, I didn't think much of it. I had thought that you might set some colorblind-friendly colorschemes, and otherwise configure your way around it. But as is so often the case with accessibility problems, I underestimated both the number of challenges and the difficulty in solving them (lousy but true aside: most companies almost completely ignore problems with accessibility). With red-green colorblindness, there is essentially no difference in the shades of PASSED and FAILED. That's sort of annoying. We'd make a few changes, and then rerun some tests. Now we were running tests in a terminal, and the test logs were scrolling by. We're chatting about emacs wizardry (or c++ magic, or compiler differences between gcc and clang, or something), and I point out that we can stop the tests since three tests have already failed. He stared at me a bit dumbfounded. It was like I had superpowers. I could recognize failures without paying almost any attention, since flashes of red stand out. I should say that the Travis team has made some accessibility improvements for colorblind users in the past. The build-passing and build-failing icons used to be little circles that were red or green, as shown here. That means the build status was effectively invisible to colorblind users. After an issue was raised and discussed, they moved to the current green-checkmark-circle for passing and red-exed-circle for failing, which is a big improvement.
The colorscheme used for Travis CI's online logs is based on the nord color palette, and there is no colorscheme-switching option. It's a beautiful and well-researched theme, but it doesn't work for everybody. The colors on the page are controllable by CSS, but not in a uniform way that works on many sites. (Or at least, not to my knowledge. I would be interested if someone else knew more about this and knew a generic approach. The people I was pair-programming with didn't have a good solution to this problem). Should you really need to write your own solution to every colorblind accessibility problem? In the next post, I'll give a (lousy but functional) bookmarklet that injects CSS into the page to see Travis CI FAILs immediately. This is a very short post in my collection working through this year's Advent of Code challenges. Unlike the previous ones, this has no mathematical comments, as it was a very short exercise. This notebook is available in its original format on my github. Given a list of strings, determine how many strings have no duplicate words. This is a classic problem, and it's particularly easy to solve in python. Some might use collections.Counter, but I think it's more straightforward to use sets. The key idea is that the set of words in a sentence will not include duplicates. So if taking the set of a sentence reduces its length, then there was a duplicate word. I think this is the first day where I would have had a shot at the leaderboard if I'd been gunning for it. For the second part, a passphrase is also not valid if one of its words is an anagram of another. There are many ways to tackle this as well, but I will handle anagrams by sorting the letters in each word first, and then running the bit from part 1 to identify repeated words. This is the second notebook in my posts on the Advent of Code challenges. This notebook in its original format can be found on my github. You are given a table of integers. Find the difference between the maximum and minimum of each row, and add these differences together. There is not a lot to say about this challenge. The plan is to read the file linewise, compute the difference on each line, and sum them up. In line with the first day's challenge, I'm inclined to ask what we should "expect." But what we should expect is not well-defined in this case. Let us rephrase the problem in a randomized sense. Suppose we are given a table, $n$ lines long, where each line consists of $m$ elements, that are each uniformly randomly chosen integers from $1$ to $10$. We might ask what is the expected value of this operation, of summing the differences between the maxima and minima of each row, on this table. What should we expect? As each line is independent of the others, we are really asking what is the expected value across a single row. So given $m$ integers uniformly randomly chosen from $1$ to $10$, what is the expected value of the maximum, and what is the expected value of the minimum? The maximum is at most $k$ with probability $(k/10)^m$, so a maximum of exactly $k$ has probability $\frac{k^m - (k-1)^m}{10^m}$ of occurring (and the minimum can be handled symmetrically). From this we can compute the expected difference for each list-length $m$. It is good to note that as $m \to \infty$, the expected value is $9$. Does this make sense? Yes, as when there are lots of values we should expect one to be a $10$ and one to be a $1$. It's also pretty straightforward to see how to extend this to values of integers from $1$ to $N$. Looking at the data, it does not appear that the integers were randomly chosen.
Instead, there are very many relatively small integers and some relatively large integers. So we shouldn't expect this toy analysis to accurately model this problem — the distribution is definitely not uniform random. But we can try it out anyway. In my first programming class we learned python. It went fine (I thought), I got the idea, and I moved on (although I do now see that much of what we did was not 'pythonic'). But now that I'm returning to programming (mostly in python), I see that I did much of it all wrong. One of my biggest surprises was how wrong I was about comments. Too many comments are terrible. Redundant comments make code harder to maintain. If the code is too complex to understand without comments, it's probably just bad code. That's not so hard, right? You read some others' code, see their comment conventions, and move on. You sort of get into zen moments where the code becomes very clear and commentless, and there is bliss. But then we were at a puzzling competition, and someone wanted to use a piece of code I'd written. Sure, no problem! And the source was so beautifully clear that it almost didn't even need a single comment to understand. But they didn't want to read the code. They wanted to use the code. Comments are very different than documentation. The realization struck me, and again I had it all wrong. In hindsight, it seems so obvious! I've programmed java, I know about javadoc. But no one had ever actually wanted to use my code before (and wouldn't have been able to easily if they had)! Enter pydoc and sphinx. These are tools that allow HTML API documentation to be generated from docstrings in the code itself. There is a cost – specially formatted docstrings must sit below each method or class. But it's reasonable, I think. This isn't to say that pydoc is bad. But I didn't want to limit myself. Python itself uses sphinx for its documentation, so I'll give it a try too. And thus I (slightly excessively, to get the hang of it) comment on my solutions to Project Euler on my github. The current documentation looks alright, and will hopefully look better as I get the hang of it. Full disclosure – this was originally going to be a post on setting up sphinx-generated documentation on github's pages automatically. I got sidetracked – that will be the next post. I announce the beginning of my (likely intermittent) programming posts. I'm slowly getting better at python and java, though I spend most of my programming time on python. [And in particular, in sage or using matplotlib]. This category will grow with time. But for now, the most important thing I can link to is my github.
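As a concrete (and entirely hypothetical) illustration of the kind of docstring that Sphinx and pydoc can turn into browsable documentation, here is a small function written in that style, loosely themed on the Advent of Code checksum above; the function name and field choices are my own.

def row_checksum(row):
    """Return the difference between the largest and smallest entry of a row.

    :param row: An iterable of integers (one row of the spreadsheet).
    :type row: list(int)
    :returns: ``max(row) - min(row)``.
    :rtype: int

    >>> row_checksum([5, 1, 9, 5])
    8
    """
    return max(row) - min(row)

The point is that a reader who never opens the source can still see the parameters, return value, and a worked example in the generated HTML pages.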
CommonCrawl
Abstract: The most straightforward method to accelerate Stochastic Gradient Descent (SGD) is to distribute the randomly selected batch of inputs over multiple processors. To keep the distributed processors fully utilized requires commensurately growing the batch size; however, large batch training usually leads to poor generalization. Existing solutions for large batch training either significantly degrade accuracy or require massive hyper-parameter tuning. To address this issue, we propose a novel large batch training method which combines recent results in adversarial training and second order information. We extensively evaluate our method on Cifar-10/100, SVHN, TinyImageNet, and ImageNet datasets, using multiple NNs, including residual networks as well as smaller networks such as SqueezeNext. Our new approach exceeds the performance of the existing solutions in terms of both accuracy and the number of SGD iterations (up to 1\% and $5\times$, respectively). We emphasize that this is achieved without any additional hyper-parameter tuning to tailor our proposed method in any of these experiments. With slight hyper-parameter tuning, our method can reduce the number of SGD iterations of ResNet18 on Cifar-10/ImageNet to $44.8\times$ and $28.8\times$, respectively. We have open sourced the method including tools for computing Hessian spectrum.
CommonCrawl
Abstract: This thesis introduces the idea of two-level type theory, an extension of Martin-Löf type theory that adds a notion of strict equality as an internal primitive. A type theory with a strict equality alongside the more conventional form of equality, the latter being of fundamental importance for the recent innovation of homotopy type theory (HoTT), was first proposed by Voevodsky, and is usually referred to as HTS. Here, we generalise and expand this idea, by developing a semantic framework that gives a systematic account of type formers for two-level systems, and proving a conservativity result relating back to a conventional type theory like HoTT. Finally, we show how a two-level theory can be used to provide partial solutions to open problems in HoTT. In particular, we use it to construct semi-simplicial types, and lay out the foundations of an internal theory of $(\infty, 1)$-categories.
CommonCrawl
A minor is the determinant of the square matrix formed by deleting one row and one column from some larger square matrix. Use this online matrix calculator to find the cofactors and minors of matrices. The 4x4 Inverse Matrix Calculator is an online tool programmed to calculate the inverse of given 4x4 matrix input values. 4x4 matrix calculations are used in numerous applications in mathematics, electronic circuit design and other sciences, so learning how to determine an inverse matrix becomes essential. We can calculate an $n\times n$ determinant from $(n-1)\times(n-1)$ determinants, and so on down to the determinant of a $1\times 1$ matrix, which is just the entry itself. This recursive method is also known as expansion by minors. The adjoint matrix is the transpose of the cofactor matrix. The cofactor matrix is the matrix of determinants of the minors $A_{ij}$ multiplied by $(-1)^{i+j}$. The $(i,j)$'th minor of $A$ is the determinant of the matrix $A$ without the $i$'th row or the $j$'th column. Minor of an element: if we take the element of the determinant and delete (remove) the row and column containing that element, the determinant left is called the minor of that element. It is denoted by $M_{ij}$. To compute a cofactor, take the minor $M_{rc}$ of the matrix by omitting the elements in row $r$ and column $c$, and then take the determinant of the resulting matrix. For this to work we need the technical convention that a $0\times 0$ minor is $1$, i.e. that the $0\times 0$ matrix has determinant $1$. Such a convention is consistent with the notion of an empty product being $1$, though it may strike some as counterintuitive.
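To make the definitions above concrete, here is a small sketch (the example matrix is arbitrary) that computes minors and cofactors with NumPy and checks the classical identity $A\,\mathrm{adj}(A) = \det(A)\,I$.

import numpy as np

def minor(a, i, j):
    """Delete row i and column j, then take the determinant."""
    sub = np.delete(np.delete(a, i, axis=0), j, axis=1)
    return np.linalg.det(sub)

def cofactor_matrix(a):
    """Matrix of signed minors: (-1)^(i+j) * M_ij."""
    n = a.shape[0]
    return np.array([[(-1) ** (i + j) * minor(a, i, j) for j in range(n)]
                     for i in range(n)])

a = np.array([[2.0, 1.0, 3.0],
              [0.0, 4.0, 1.0],
              [5.0, 2.0, 2.0]])

adjugate = cofactor_matrix(a).T                       # adjoint = transpose of cofactor matrix
print(np.allclose(a @ adjugate, np.linalg.det(a) * np.eye(3)))   # True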
CommonCrawl
Use this tag for questions related to algorithmic randomness, which is the study of random individual elements in sample spaces. If $k$ balls are thrown into $n$ bins, how many positions $i$ are there such that, bin $i$ and $ i+1$ are empty? Assume the $k$ balls are thrown independently and uniformly at random, into $n$ labeled bins. What is the expected number of positions $i$ such that the bins labeled $i$ and $i+1$ are both empty? Why quasi-random sequences are generated in the interval [0,1]? Is it a normalized sequence generation? How to generate a uniform simple path from a rectangular grid graph? How to determine the scale of a smoothing kernel? If I roll 5 casino dice at the same time, does the order in which I read the results matter? Does "for almost each object" make sense in this example? What is polynomial-time random language? What is polynomial-time random language? I have tried to found the definition by searching artilce, but failed. Any one give reference? How to make unlimited number of fair coin be equivalent to two fair dice that takes the sum of their outcomes? Are there strings with known Kolmogorov complexity? Prove that there exists a bipartite subgraph containing at least half of the edges in the original graph. Are there any Martin-Löf random reals that are computable? For example, Chaitin's constant is both Martin-Löf random and uncomputable. Are there any examples of numbers that are Martin-Löf random but computable? $Pi(n) = Pi(n-2) + x\times [Pi(n-1)]$ for all convergent numerators and denominators. True? Does a random binary sequence almost always have a finite number of prime prefixes?
CommonCrawl
Professor Halfbrain has spent the last weekend with filling the squares of a $99\times99$ chessboard with real numbers from the interval $[-1,+1]$. Whenever four squares form the corners of a rectangle (with sides parallel to the sides of the chessboard), then the four numbers in these squares had to add up to zero. Professor Halfbrain has proved two extremely deep theorems on this. Professor Halfbrain's first theorem: It is possible to fill the chessboard subject to the above rules, so that the sum of all numbers in all the squares is $0$. Professor Halfbrain's second theorem: If we fill the chessboard subject to the above rules, then the sum of all numbers in all the squares is at most $9801$. This puzzle asks you to improve the two theorems of professor Halfbrain and to make them even deeper. Find an integer $x$, so that "the sum in all the squares is $0$" in the first theorem may be replaced by "the sum in all the squares is $x$", and so that "the sum is at most $9801$" in the second theorem may be replaced by "the sum is at most $x$" (again yielding true statements, of course). This is a sketchy proof of what Ivo suspected in the comments. Take an arbitrary selection as below. So, for an arbitrary selection, we know that the diagonal elements are equal. Taking an arbitrary selection with odd sides (e.g. $A+C+I+L = 0$), we know that $A,C,I,L$ must be equal and hence 0. Since this can be done for all squares, they must all be 0. But now, consider a 3x3 square with $+a$ in one (and therefore all) corners. This gives $4a = 0$, so $a = 0$. Every square on the board must be $0$. We know that the sum of these four points is zero. Consider the two additional points $b_1$ and $b_2$. Since our choice of points was arbitrary, we know that any two points that share a side of a rectangle must be the opposite sign. The only way to satisfy these equations is if they are all zero. Thus (again since our choice of points was arbitrary) all the points must be zero.
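The argument above can also be checked numerically: write one linear constraint per axis-aligned rectangle and compute the dimension of the solution space. The small board sizes below stand in for the $99\times99$ board; for any board with at least three rows and three columns the only solution is the all-zero filling, consistent with the answer $x = 0$.

import numpy as np
from itertools import combinations

def rectangle_constraints(n):
    """One constraint row per axis-aligned rectangle: its four corner entries sum to 0."""
    rows = []
    for r1, r2 in combinations(range(n), 2):
        for c1, c2 in combinations(range(n), 2):
            row = np.zeros(n * n)
            for r, c in [(r1, c1), (r1, c2), (r2, c1), (r2, c2)]:
                row[r * n + c] = 1
            rows.append(row)
    return np.array(rows)

# The 2x2 board has a 3-dimensional solution space; from 3x3 upward it is 0-dimensional.
for n in (2, 3, 4, 5):
    A = rectangle_constraints(n)
    nullity = n * n - np.linalg.matrix_rank(A)
    print(f"{n}x{n} board: dimension of solution space = {nullity}")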
CommonCrawl
I am using a DAC to transmit 2 different voltage levels. High and low. At 1Gsps. There is no carrier wave or anything. These are just raw 0 and 1 time samples going across a wire. On my ADC I am also sampling at 1Gsps and I'm able to fully reconstruct the transmitted signal. In other words I can tell which sample is a 0 or 1 level. Does this not violate the Nyquist sampling theorem? I assume if I am transmitting at 1Gsps my receiver would need to sample at 2Gsps. Where is my understanding wrong here? That question is more involved than you think! First of all: you're violating Nyquist. But, you're taking Nyquist's theorem for something that it is not. If we have a band-limited analog signal, we can faithfully represent it digitally (without losing any information) by sampling with a rate that is high enough. So, first of all, from the info you're giving us, your signal is not band-limited. That should become obvious as soon as you realize that you draw the (theoretical) spectrum after the DAC: right, it's periodic with a period of 1 GHz, ie. the sampling rate. Period means it continues ad infinitum, i.e. it's not band-limited. You don't say, so (,considering this is not your first question on this topic, so I presume that this means,) you don't have an anti-imaging analog filter after the DAC. So, it'd be completely impossible to correctly sample this with a finite rate. Then: you don't actually care about the analog signal, do you? You want the bits. Good news: if you ADC samples exactly with the same rate as your DAC, then all the images will just alias back onto one, and everything will be fine. In practice, that doesn't happen. First of all, and that is a very fundamental truth that I have to explain to a lot of people regularly, no two oscillators are exactly the same. Your system needs to have a bit of leeway for frequency deviation. Then, all analog channels are frequency-dependent. That might also mean that they'll let the signal that you send and thus "imprint" on all these spectral repetitions through differently, depending on how that payload signal looks like in spectrum. Not to mention that it's rather rare that you can assume your channel stays like it is forever. Then, you also get the problem of timing recovery. What use is your receiving ADC when it samples exactly in the middle between two transmit symbols? So, you can do that, but you need at least timing recovery. And some DAC filtering (anti-image). And some noise and interference-limiting ADC filtering (anti-alias). And while you're at it, at least a simple equalization might be a good idea, and if it breaks down to an AGC. Baseband BPSK signals (your signal is BPSK + an offset, if you will) aren't new. It's just that you typically avoid very much using them, unless your traces are very well-controlled (ie. you do 1Gb/s easily on a line between a RAM chip and a CPU that is routed on a PCB with proper impedance matching and plenty shielding, but you would avoid it very much on a cable with a plug and unknown length). For example, it's far easier to build Gigabit Ethernet, which transports 1Gb/s, too, using a signal that takes a lot of different states (not just two levels) but runs at 125 MHz, than to implement a 4Gb/s on/off scheme receiver for a fibreoptics cable (which really is just a DAC after a photo diode that either receives light – or doesn't). So: high rate data is either pushed through very well-controlled channels in baseband (e.g. 
PCB traces, fibreoptics networking, twinax cable for SATA and 10GBase), or modulated onto a carrier and transported that way, or reduced in symbol rate by using high-bit-per-symbol mappings to limit the bandwidth and then transported in baseband (e.g. 1 Gb/s ethernet with 125 Mb) or both mapped to higher-order constellations and mixed onto a carrier (Digital TV, microwave links, LTE, basically all the radio you know that isn't low-rate). First of all Nyquist (Shannon) sampling theorem is based on the analog signal's bandwidth, and states that you must sample the signal at a rate twice the bandwidth of it. Now, what's the bandwidth of your signal? You must compute this one first. Given a DAC (digital to analog converter) modeled as D/C block; i.e., an ideal signal reconstructor with an ideal interpolation (image rejection) filter with $h_r(t)$ whose cutoff frequency is at a half of the DAC sampling rate, then the output of the DAC given at a sampling rate of $1$G samples per second (1 GHz) will be bandlimited to $500$ MHz. Now given a continuous-time (analog) bandlimited baseband signal $x(t)$ with a bandwidth of $500$ MHz, the required Nyquist sampling rate will be twice the bandwidth of the signal which is $1$ GHz, or stated as $1$ G samples per second. Assuming an SNR of $90$ dB (for about $16$ bits of an ADC,DAC with LSB noise) this means in principle you should be able to reach a data rate of $C = 500 \times 10^6 \times 15$ bits per second. This is about $7.5$ Gbps. If your ADC and DAC clocks are synchronized, you're sending information within the clock synchronization information channel at a resolution a lot higher than 1 nS. That additional side channel of information is equivalent to sampling at a much higher rate (the reciprocal of the sync jitter). If your ADC clock is not synchronized, then it's possible to either miss or double sample data bits, or get garbage results by sampling right at the middle of the transition between the two DAC output voltage levels. e.g. you can end up with a non-zero error rate, depending on clock misalignment, frequency drift, the data, any encoding, and the channel and DAC/ADC filter responses. Not the answer you're looking for? Browse other questions tagged sampling nyquist or ask your own question. Difference between Nyquist rate and Nyquist frequency? Time Domain Example of Nyquist/Shannon?
CommonCrawl
This is the first article in what will become a set of tutorials on how to carry out natural language document classification, for the purposes of sentiment analysis and, ultimately, automated trade filter or signal generation. This particular article will make use of Support Vector Machines (SVM) to classify text documents into mutually exclusive groups. Since this is the first article written in 2015, I feel it is now time to move on from Python 2.7.x and make use of the latest 3.4.x version. Hence all code in this article will be written with 3.4.x in mind. Determine a set of groups (or labels) that each document will be a member of. Examples include "positive" and "negative" or "bullish" and "bearish" Use the classifier to label new documents, in an automated, ongoing manner. Integrate the classifier into an automated trading system, either by means of filtering other trade signals or generating new ones. In this particular article we will avoid discussion of how to download multiple articles from external sources and make use of a given dataset that already comes with its own provided labels. This will allow us to concentrate on the implementation of the "classification pipeline", rather than spend a substantial amount of time obtaining and tagging documents. In subsequent articles in this series we will make use of Python libraries, such as ScraPy and BeautifulSoup to automatically obtain many web-based articles and effectively extract their text-based data from the HTML. In addition we will not be considering, within this particular article, how to integrate such a classifier into a production-ready algorithmic trading system. However, as I stated clearly in the QuantStart: 2014 in Review article I do want to write subsequent articles that discuss just that. It is extremely important to not only create "toy" examples, as in this article, but also to discuss how to fully integrate a classifier into a system that could be used in production. Hence later articles will consider production implementation. So, under the assumption that we have a document corpus that is pre-labelled (to be outlined below!), we will begin by taking the training corpus and incorporating it into a Python data structure that is suitable for pre-processing and consumption via the classifier. However, before we are able to get into the details of this process we need to briefly discuss the concepts of Supervised Classification and Support Vector Machines. For a deeper overview of how statistical machine learning is carried out, please see this article. Supervised Classifiers are a group of statistical machine learning techniques that attempt to attach a "class", or "label", to a particular set of features, based on prior known labels attached to other similar sets of features. This is clearly quite an abstract definition, so it may help to have an example. Consider a set of text documents. Each document has an associated set of words, which we will call "features". Each of these documents might be associated with a class label that describes what the article is about. For instance, a set of articles from a website discussing pets might have articles that are primarily about dogs, cats or hamsters (say). Certain words, such as "cage" (hamster), "leash" (dog) or "milk" (cat) might be more representative of certain pets than others. 
Supervised classifiers are able to isolate certain words which are representative of certain labels (animals) by "learning" from a set of "training" articles, which are already pre-labelled, often in a manual fashion, by a human. Mathematically, each of the $j$ articles about pets within a training corpus have an associated feature vector $X_j$, with components of this vector representing "strength" of words (we will define "strength" below). Each article also has an associated class label, $y_j$, which in this case would be the name of the pet most associated with the article. The "supervision" of the training procedure occurs when a model is trained or fit to this particular data. In the following example we will use the Support Vector Machine as our model and "train" it on a corpus (a collection of documents) which we will have previously generated. For a deeper and more comprehensive mathematical overview of how Support Vector Machines work, please see this article. Support Vector Machines are a subclass of supervised classifiers that attempt to partition a feature space into two or more groups, which in our case means separating a collection of articles into two or more class labels. SVMs achieve this by finding an optimal means of separating such groups based on their known class labels. In the simpler cases the separation "boundary" is linear, leading to groups that are split up by lines (or planes) in high-dimensional spaces. In more complicated cases (where groups are not nicely separated by lines or planes), SVMs are able to carry out non-linear partitioning. This is achieved by means of a kernel function. Ultimately, this makes them very sophisticated and capable classifiers, but at the usual expense that they can be prone to overfitting. More details can be found here. See the figure below for two examples of non-linear decision boundaries (polynomial kernel and radial kernel respectively) for two class labels (orange and blue), across two features $X_1$ and $X_2$. SVMs are powerful classifiers when used correctly and can provide very promising results. We will now utilise SVMs for the remainder of this article. A famous dataset that is used in machine learning classification design is the Reuters 21578 set. It is one of the most widely used testing datasets for text classification, but it is somewhat out of date these days. However, for the purposes of this article it will more than suffice. The set consists of a collection of news articles (a "corpus") that are tagged with a selection of topics and geographic locations. Thus it comes "ready made" to be used in classification tests, since it is already pre-labelled. We will now download, extract and prepare the dataset. I am carrying this tutorial out on a Ubuntu 14.04 machine, so I have access to the command line. If you are on Linux or Mac OSX you will also be able to follow the commands. On Windows, you will need to download a Tar/GZIP extraction tool to get at the data. You will see that all the files beginning with reut2- are .sgm, which means that they are SGML files. Unfortunately, Python deprecated sgmllib from Python in 2.6 and fully removed it for Python 3. However, all is not lost because we can create our own SGML Parser class that overrides Python's built in HTMLParser. Comissaria Smith said in its weekly review. The dry period means the temporao will be late this year. sold a good part of their cocoa held on consignment. cruzados per arroba of 15 kilos. 1,780 dlrs per tonne to ports to be named. 
4,340, 4,345 and 4,350 dlrs. 2.27 times New York Dec, Comissaria Smith said. New York Dec for Oct/Dec. carnival which ends midday on February 27. While it may be somewhat laborious to parse data in this manner, especially when compared to the actual machine learning, I can fully reassure you that a large part of a data scientist's or quant researcher's day is in actually getting the data into a format usable by the analysis software! This particular activity is often jokingly referred to as "data wrangling". Hence I feel it is worth it for you to get some practice at it! By calling cat all-topics-strings.lc.txt | wc -l we can see that there are 135 separate topics among the articles. This will make for quite a classification challenge! To create this structure we will need to parse all of the Reuters files individually and add them to a grand corpus list. Since the file size of the corpus is rather low, it will easily fit into available RAM on most modern laptops/desktops. However, in production applications it is usually necessary to stream training data into a machine learning system and carry out "partial fitting" on each batch, in an iterative manner. In later articles we will consider this when we study extremely large data sets (particularly tick data). As stated above, our first goal is to actually create the SGML Parser that will achieve this. To do this we will subclass Python's HTMLParser class to handle the specific tags in the Reuters dataset. Upon subclassing HTMLParser we override three methods, handle_starttag, handle_endtag and handle_data, which tell the parser what to do at the beginning of SGML tags, what to do at the closing of SGML tags and how to handle the data in between. We also create two additional methods, _reset and parse, which are used to take care of internal state of the class and to parse the actual data in a chunked fashion, so as not to use up too much memory. The parser keeps track of whether it is currently inside particular tags (hence the in_body, in_topics and in_topic_d boolean members), sets the encoding of the SGML files by default to latin-1, and is written as a generator that yields a single document at a time from the files associated with the Reuters-21578 categorised test collection. Finally, I have created a basic __main__ function to test the parser on the first set of data within the Reuters corpus. In particular, note that instead of having a single topic label associated with a document, we have multiple topics. In order to increase the effectiveness of the classifier, it is necessary to assign only a single class label to each document. However, you'll also note that some of the labels are actually geographic location tags, such as "japan" or "thailand". Since we are concerned solely with topics and not countries we want to remove these before we select our topic. To do this we can make use of the list of legitimate topics provided with the dataset, taking care to strip the trailing "\n" from each word. We are now in a position to pre-process the data for input into the classifier. At this stage we have a large collection of two-tuples, each containing a class label and raw body text from the articles. The obvious question to ask now is how do we convert the raw body text into a data representation that can be used by a (numerical) classifier? The answer lies in a process known as vectorisation.
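Before turning to vectorisation, here is a condensed sketch of what such a parser subclass might look like. This is my own simplified reconstruction for illustration (the article's full parser handles more detail), so treat the exact tag handling as indicative rather than definitive.

from html.parser import HTMLParser

class ReutersParser(HTMLParser):
    """Sketch of a parser for the reut2-XXX.sgm files (simplified)."""

    def __init__(self, encoding="latin-1"):
        super().__init__()
        self.encoding = encoding
        self._reset()

    def _reset(self):
        # Track whether we are currently inside particular tags.
        self.in_body = self.in_topics = self.in_topic_d = False
        self.body, self.topic_d, self.topics, self.docs = "", "", [], []

    def parse(self, fd):
        """Feed the file in chunks and yield (topics, body_text) two-tuples."""
        self.docs = []
        for chunk in fd:
            self.feed(chunk.decode(self.encoding))
            for doc in self.docs:
                yield doc
            self.docs = []

    def handle_starttag(self, tag, attrs):
        if tag == "body":
            self.in_body = True
        elif tag == "topics":
            self.in_topics = True
        elif tag == "d":
            self.in_topic_d = True

    def handle_endtag(self, tag):
        if tag == "reuters":
            self.docs.append((self.topics, self.body.replace("\n", " ")))
            self.body, self.topics = "", []
        elif tag == "body":
            self.in_body = False
        elif tag == "topics":
            self.in_topics = False
        elif tag == "d":
            self.in_topic_d = False
            self.topics.append(self.topic_d)
            self.topic_d = ""

    def handle_data(self, data):
        if self.in_body:
            self.body += data
        elif self.in_topic_d:
            self.topic_d += data

# Hypothetical usage:
# with open("reut2-000.sgm", "rb") as fd:
#     corpus = list(ReutersParser().parse(fd))

With a list of (topics, text) two-tuples in hand, we can move on to vectorisation.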
Vectorisation allows widely-varying lengths of raw text to be converted into a numerical format that can be processed by the classifier. It achieves this by creating tokens from a string. A token is an individual word (or group of words) extracted from a document, using whitespace or punctuation as separators. This can, of course, include numbers from within the string as additional "words". Once this list of tokens has been created they can be assigned an integer identifier, which allows them to be listed. Once the list of tokens has been generated, the number of tokens within a document is counted. Finally, these tokens are normalised to de-emphasise tokens that appear frequently within a document (such as "a", "the"). This process is known as the Bag Of Words. The Bag Of Words representation allows a vector to be associated with each document, each component of which is real-valued and represents the importance of tokens (i.e. "words") appearing within that document. Further, it means that once an entire corpus of documents has been iterated over (and thus all possible tokens have been assessed), the total number of separate tokens is known and hence the length of the token vector, for any document of any length, is also fixed and identical. This means that the classifier now has a set of features via the frequency of token occurrence. In addition the document token-vector represents a sample for the classifier. In essence, the entire corpus can be represented as a large matrix, each row of which represents one of the documents and each column represents token occurrence within that document. This is the process of vectorisation. Note that vectorisation does not take into account the relative positioning of the words within the document, just the frequency of occurrence. More sophisticated machine learning techniques will, however, use this information to enhance classification. One of the major issues with vectorisation, via the Bag Of Words representation, is that there is a lot of "noise" in the form of stop words, such as "a", "the", "he", "she" etc. These words provide little context to the document but their high frequency will mean that they can mask words that do provide document context. This motivates a transformation process, known as Term-Frequency Inverse Document-Frequency (TF-IDF). The TF-IDF value for a token increases proportionally to the frequency of the word in the document but is normalised by the frequency of the word in the corpus. This essentially reduces importance for words that appear a lot generally, as opposed to appearing a lot within a particular document. This is precisely what we need as words such as "a", "the" will have extremely high occurrences within the entire corpus, but the word "cat" may only appear often in a particular document. This would mean that we are giving "cat" a relatively higher strength than "a" or "the", for that document. I won't dwell on the calculation of TF-IDF, but if you are interested then read the Wikipedia article on the subject, which goes into more detail. Hence we wish to combine the process of vectorisation with that of TF-IDF to produce a normalised matrix of document-token occurrences. This will then be used to provide a list of features to the classifier upon which to train. Thankfully, the developers at scikit-learn realised that it would be an extremely common operation to vectorise and transform text files in this manner and so included the TfidfVectorizer class.
We can use this class to take our list of two-tuples representing class labels and raw document text, to produce both a vector of class labels and a sparse matrix, which represents the TF-IDF and Vectorisation procedure applied to the raw text data. At this stage we now have two components to our training data. The first, $X$, is a matrix of document-token occurrences. The second, $y$, is a vector (which matches the ordering of the matrix) that contains the correct class labels for each of the documents. This is all we need to begin training and testing the Support Vector Machine. In order to train the Support Vector Machine it is necessary to provide it with both a set of features (the $X$ matrix) and a set of "supervised" training labels, in this case the $y$ classes. However, we also need a means of evaluating the trained performance of the classifier subsequent to its training phase. One approach is to simply try classifying some of the documents from the corpus used to train it on. Such an evaluation procedure is known as in-sample testing. However, this is not a particularly effective mechanism for assessing the performance of the system. Simply put, the classifier has already "seen" this data and has been told how to act upon it and so it is very likely to correctly classify the document. This will almost certainly overstate the classifier's true out-of-sample testing performance. Hence we need to provide the classifier with data that it has not used for training, as a more realistic means of testing. However, it is not obvious where to obtain this new data from. One approach might be to create a separate corpus from some new data. However, in reality this is likely to be expensive in terms of time and/or business processes. An alternative approach is to partition the training set into two distinct subsets, one of which is used for training and the other for testing. This is known as the training-test split. Such a partition allows us to train the classifier solely on the first partition and then assess its performance on the second partition. This gives us a much better insight into how it will perform in true "out-of-sample" data going forward. One question that arises here is what percentage to retain for training and what to use for testing. Clearly the more that is retained for training, the "better" the classifier will be because it will have seen more data. However, more training data means less testing data and as such a poorer estimate of its true classification capability. In practice, it is common to retain about 70-80% of the data for training and use the remainder for testing. The test_size keyword argument controls the size of the testing set, in this case 20%. The random_state keyword argument controls the random seed for selecting the partition randomly. The next step is to actually create the Support Vector Machine and train it. In this instance we are going to use the SVC (Support Vector Classifier) class from scikit-learn. We give it the parameters $C=1000000.0$, $\gamma=0.0$ and choose a radial kernel. To understand where these parameters come from, please take a look at the article on Support Vector Machines. Now that the SVM has been trained we need to assess its performance on the testing data. The two main performance metrics that we will consider for this supervised classifier are the hit-rate and the confusion matrix.
The former is simply the ratio of correct assignments to total assignments and is usually quoted as a percentage. The confusion matrix goes into more detail and provides output on true-positives, true-negatives, false-positives and false-negatives. In a binary classification system, with a "true" or "false" class labelling, these characterise the rate at which the classifier correctly classifies something as true or false when it is, respectively, true or false, and also incorrectly classifies something as true or false when it is, respectively, false or true. A confusion matrix need not be restricted to a binary classifier situation. For multiple class groups (as in our situation with the Reuters dataset) we will have an $N \times N$ matrix, where $N$ is the number of class labels (or document topics). Scikit-learn has functions for calculating both the hit-rate and the confusion matrix of a supervised classifier. The former is a method on the classifier itself called score. The latter must be imported from the metrics library. Thus we have a 66% classification hit rate, with a confusion matrix that has entries mainly on the diagonal (i.e. the correct assignment of class label). Notice that since we are only using a single file from the Reuters set (number 000), we aren't going to see the entire set of class labels and hence our confusion matrix is smaller in dimension than if we had used the full dataset. In order to make use of the full dataset we can modify the __main__ function to load all 21 Reuters files and train the SVM on the full dataset. We can output the full hit-rate performance. I've neglected to include the confusion matrix output as it becomes large for the total number of class labels within all documents. Note that this will take some time! On my system it takes about 30-45 seconds to run. There are plenty of ways to improve on this figure. In particular we can perform a Grid Search Cross-Validation, which is a means of determining the optimal parameters for the classifier that will achieve the best hit-rate (or other metric of choice). In later articles we will discuss such optimisation procedures and explain how a classifier such as this can be added to a production system in a data science or quantitative finance context.
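As a small taste of what such a grid search might look like with scikit-learn (the parameter ranges below are purely illustrative, X_train, y_train, X_test and y_test are assumed to be the objects produced by the train/test split above, and this uses the current module layout, whereas older scikit-learn versions expose GridSearchCV under sklearn.grid_search):

from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Illustrative grid only -- sensible ranges depend on the data and the metric of choice.
param_grid = {
    "C": [1.0, 100.0, 10000.0, 1000000.0],
    "gamma": [1e-4, 1e-3, 1e-2],
    "kernel": ["rbf"],
}

grid = GridSearchCV(SVC(), param_grid, cv=5, n_jobs=-1)
grid.fit(X_train, y_train)

print("Best parameters:", grid.best_params_)
print("Cross-validated hit-rate:", grid.best_score_)
print("Out-of-sample hit-rate:", grid.score(X_test, y_test))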
CommonCrawl
A binary operation is the special case of an operation where the operation has exactly two operands. If $S = T$, then $\circ$ can be referred to as a binary operation on $S$. Let $\circ: S \times T \to U$ be a binary operation. This convention is called infix notation. For a given operation $\circ$, let $z = x \circ y$. Then $z$ is called the product of $x$ and $y$. This is an extension of the normal definition of product that is encountered in conventional arithmetic. Some authors use the term (binary) composition or law of composition for (binary) operation. Most authors use $\circ$ for composition of relations (which, if you think about it, is itself an operation) as well as for a general operation. To avoid confusion, some authors use $\bullet$ for composition of relations to avoid ambiguity. 1965: Seth Warner: Modern Algebra uses $\bigtriangleup$ and $\bigtriangledown$ for the general binary operation, which has the advantage that they are unlikely to be confused with anything else in this context. The symbol $\intercal$ is called truc ("trook") and is French for "thingummyjig"! The idea it conveys is that what we call our law of composition does not matter, for what we are really interested in are sets of objects and mappings between them. Some authors specify that a binary operation $\circ$ is defined such that the codomain of $\circ$ is the same underlying set as that which forms the domain. and thus gloss over the fact that a binary operation defined in such a way is closed.
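As an informal illustration of the closure remark (none of this is part of the definition itself), one can model a binary operation on a finite set as a two-argument function and check that every product lands back in the set:

from itertools import product as cartesian_product

S = {0, 1, 2, 3}

def op(x, y):
    """A binary operation on S: addition modulo 4."""
    return (x + y) % 4

# z = op(x, y) plays the role of the product of x and y; closure holds if every
# such product is again an element of S.
print(all(op(x, y) in S for x, y in cartesian_product(S, S)))   # True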
CommonCrawl
...up to 100 to come... I'll post them a few at a time. Why is this first one #3: Numbering is as per my heptomino data file and will skip rectifiable and uninteresting heptominoes. Some of them will be posed as no-computer hand-tiling only puzzles. The goal is to tile rectangles as small as possible with the given heptomino, in this case number 3 of the 108 heptominoes. We allow the addition of copies of a rectangle. For each rectangle $a\times b$, find the smallest area larger rectangle that copies of $a\times b$ plus at least one of the given heptomino will tile. Now we don't need to consider $1\times 1$ further as we have found the smallest rectangle tilable with copies of the heptomino plus copies of $1\times 1$. I found 87 more but lots of them can be found by 'expansion rules'. I considered component rectangles of width 1 through 11 and length to 31 but my search was far from complete. Many of them could be tiled by hand fairly easily. Left: if $n-1$ is divisible by 7, size: $(n+1) \times (n+5)$. If $2n-1$ is divisible by 7, size: $(n+1) \times (2n+5)$. Right: if $n$ is divisible by 7, size: $(n+1) \times (n+7)$. Otherwise, the size is $(n+1) \times 8n$.
CommonCrawl
In a classical design of experiments (DoE) you usually choose a set of points according to some rule and perform experiments to be able to, for example, create a response surface. But when the properties of the process you are trying to describe are difficult to understand, and the process can be destroyed if wrong parameters are applied, we have to try something different. One solution could be to build a predictive model each time a new sample has been taken and decide where to take the next sample given information taken from the updated model. I am going to show you how Gaussian Processes (see the introduction) can be used to collect samples efficiently. In short, the algorithm teaches itself how the process works by asking the correct questions based on what is known, slowly expanding its knowledge safely. This kernel is used to control the curvature of the estimated function. What is nice about the expression for the predictive uncertainty is that it is not dependent on any measured values. Given a kernel and a set of hyperparameters you only need to decide where you want to measure to understand what uncertainty you should expect when predicting the function. This fact makes it possible to design a space-filling experiment design for a given assumption of the properties of the model. Notice that the maximum is small between the two points on the left while the kernels are smeared together on the right since they are closer together. This function can be used to describe how safe it is to measure at a given set of parameters. Now we are encouraged to measure in the vicinity of each data point, but not too close and not too far away. Since the standard deviation is lower when points are closer to each other, exploration is often prioritized before refining. We need to have some knowledge about the process to be able to give the algorithm one or several safe points to start from. We are going to start with $x_0 = 0$ and the goal is to obtain a sequence of $p_i = (x_i, f(x_i))$ for which we can predict the function with good precision. To find a new candidate we need to have a set of candidates to choose from. The set of candidates is generated using a space-filling random algorithm, in our case the Sobol sequence. Here is a sequence of 21 samples taken using the method described above. Notice how the algorithm is cautious to start with and then starts expanding to the right and left, occasionally going back to refine the model instead of exploring. It also does not violate the safety condition. Progressive sampling is useful when the process you want to describe is nonlinear and when you need to avoid breaking any constraints. The method scales well to many dimensions and can be automated in actual physical testing environments. We can also handle noisy measurements, which would result in slower propagation since the uncertainty of predictions would be larger. We could add additional constraints which are tailored to the problem at hand, for example scaling the width of the kernel depending on the estimated magnitude of the gradient for each measurement, or adding other functions which control how samples are chosen.
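Here is a compact numerical sketch of this kind of loop. The kernel length scale, the safety cap on the predictive standard deviation, the uniform candidate generator standing in for a Sobol sequence, and the toy target function are all my own choices for illustration; the post's actual safety criterion is built from the kernel values themselves and may differ in detail.

import numpy as np

def rbf(a, b, ell=0.4):
    """Squared-exponential kernel with an assumed length scale ell."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def posterior_std(x_train, x_cand, ell=0.4, noise=1e-6):
    """GP predictive standard deviation; it depends only on where we measured."""
    K = rbf(x_train, x_train, ell) + noise * np.eye(len(x_train))
    k_star = rbf(x_cand, x_train, ell)
    solved = np.linalg.solve(K, k_star.T)
    var = 1.0 - np.sum(k_star * solved.T, axis=1)   # prior variance is 1 for this kernel
    return np.sqrt(np.clip(var, 0.0, None))

def f(x):                                           # stand-in for the expensive process
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(1)
x_train = np.array([0.0])                           # the known safe starting point
y_train = np.array([f(0.0)])

for step in range(20):
    cand = rng.uniform(-3, 3, 256)                  # stand-in for a Sobol candidate set
    std = posterior_std(x_train, cand)
    # "Safety": only candidates whose uncertainty is below a cap, i.e. close enough
    # to what we already know; among those, pick the most informative one.
    safe = std < 0.6
    if not np.any(safe):
        break
    pick = cand[safe][np.argmax(std[safe])]
    x_train = np.append(x_train, pick)
    y_train = np.append(y_train, f(pick))

print(np.sort(x_train.round(2)))

Run over a few iterations, the sampled locations creep outward from the starting point while occasionally filling in gaps, which is the cautious expand-then-refine behaviour described above.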
CommonCrawl
The replicator-mutator equations from evolutionary dynamics serve as a model for the evolution of language, behavioral dynamics in social networks, and decision-making dynamics in networked multi-agent systems. Analysis of the stable equilibria of these dynamics has been a focus in the literature, where symmetry in fitness functions is typically assumed. We explore asymmetry in fitness and show that the replicator-mutator equations exhibit Hopf bifurcations and limit cycles for increasing mutation strength \(\mu\). The first animation shows phase portraits of the dynamics as a function of \(\mu\), illustrating a Hopf bifurcation for a particular choice of fitness matrix. The second animation shows bifurcation plots of the dynamics for a class of circulant fitness matrices. We prove conditions for the existence of stable limit cycles arising from multiple distinct Hopf bifurcations of the dynamics in the case of circulant fitness matrices. In the noncirculant case, stable limit cycles of the dynamics are coupled to embedded directed cycles in the payoff graph as shown in the third plot in this submission. These cycles correspond to oscillations of grammar dominance in language evolution and to oscillations in behavioral preferences in social networks; for decision-making systems, these limit cycles correspond to sustained oscillations in decisions across the group. The left panel shows the payoff graph with two types of edges: strong edges (solid lines) with a weight of $b$ and weak edges (dashed lines) with a weight of \(\epsilon b\); here \(b = 0.7\) and \(\epsilon=0.1\). The center panel shows the resulting trajectories with mutation (Q2) and suitable \(\mu\) (0.2 in (a), 0.25 in (b) and 0.27 in (c)). The right panel highlights the interconnection between nodes corresponding to the dominant components of the limit cycle trajectories. The color of each of the nodes on the payoff graph matches the color of the corresponding trajectory in the center panel. In each case, it is observed that there is a directed cycle between the dominant component nodes. Pais D, Caicedo-Nunez CH, Leonard NE. Hopf Bifurcations and Limit Cycles in Evolutionary Network Dynamics. SIAM J. Appl. Dyn. Sys. 11(4), 1754-1884, 2012. Animation of phase portraits showing limit cycles (.mp4, 3.15 MB) - 3325 download(s) Animation of phase portrait, corresponding to Figure 3.5 in the main text, for the replicator-mutator dynamics with $N=3$ and directed cycle topology. The animation on the left shows, as a function of mutation strength $\mu$, the nullclines (red, green and magenta), the vector field (grey arrows) and the equilibria (filled circles are stable, unfilled circles are unstable). The animation on the right shows, as a function of mutation strength $\mu$, sample trajectories for randomly chosen initial conditions. The color scale indicates the magnitude of the flow (vector field) with hot colors corresponding to fast flow. A limit cycle appears for an intermediate range of values of $\mu$. Animation of bifurcation plots (.mp4, 1.81 MB) - 3342 download(s) Animation of bifurcation plots, corresponding to Figure 4.3 in the main text, as a function of payoff graph parameter $\beta$ for fixed parameter $\alpha$. The blue curves correspond to stable equilibria, red curves to unstable equilibria, and magenta curves to stable limit cycles. This slice of parameter space is illustrated in the top left of Figure 4.3; the Hopf bifurcation disappears at intersections of this slice with the red curves in the figure.
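For readers who want to experiment, here is a minimal numerical sketch of the replicator-mutator equations with uniform mutation and a circulant payoff matrix. The payoff structure and the value of \(\mu\) below are illustrative choices on my part, loosely inspired by the parameters quoted above, so whether a limit cycle actually appears depends on them; the cited paper gives the precise conditions.

import numpy as np
from scipy.integrate import solve_ivp

N = 3
b, eps = 0.7, 0.1
# Circulant payoff: unit self-payoff, a strong edge of weight b to the next
# strategy and a weak edge of weight eps*b to the previous one (illustrative).
A = np.eye(N) + b * np.roll(np.eye(N), 1, axis=1) + eps * b * np.roll(np.eye(N), -1, axis=1)

def mutation_matrix(mu):
    """Uniform mutation: keep your type with prob 1-mu, switch to each other with mu/(N-1)."""
    return (1 - mu) * np.eye(N) + (mu / (N - 1)) * (np.ones((N, N)) - np.eye(N))

def replicator_mutator(t, x, mu):
    f = A @ x                        # fitness of each strategy
    phi = x @ f                      # average fitness
    Q = mutation_matrix(mu)
    return (x * f) @ Q - phi * x     # dx_i/dt = sum_j x_j f_j q_ji - phi x_i

mu = 0.25
x0 = np.array([0.6, 0.3, 0.1])
sol = solve_ivp(replicator_mutator, (0.0, 400.0), x0, args=(mu,), max_step=0.5)

# If the late-time fractions keep oscillating rather than settling down,
# that is the signature of a limit cycle.
print(sol.y[:, -5:].round(3))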
CommonCrawl
In this talk we will study the equivalence relation generated by linked curves in $ \mathbf P^3 $. In particular we will define the Rao module and show that (up to shifts and duals) it determines the equivalence class. Time permitting we will study curves that are cut out by three surfaces.
CommonCrawl
Description: This talk will begin with the material from the earlier talk posted here. After introducing the background we will focus on just the so-called splitting relation and some of its generalizations. Again, it is joint with Juris Steprāns. Definition. A family $\mathcal F$ of infinite subsets of $\mathbb N$ is said to be a splitting family if for every infinite set $B$ there is $A\in\mathcal F$ such that $A$ splits $B$. In this talk, we consider some natural generalizations, namely, $\mathcal F$ is said to be an $n$-splitting family if for every sequence of infinite sets $B_1,\ldots,B_n$ there exists $A\in\mathcal F$ which splits them all. Although each of these relations determines the same cardinal invariant (that is, the smallest $n$-splitting family has the same size as the smallest splitting family), we will show that they are distinct notions. Moreover, we will show that whenever $n<m$, there cannot be a Borel function which carries $n$-splitting families to $m$-splitting families. In fact, we will show that the $n$-splitting relations are properly increasing in the Borel Tukey order. This study was motivated by a question of Blass (solved by Mildenberger), which asked whether the same result holds for reaping families and its generalizations to $n$-reaping families.
CommonCrawl
Abstract: The spectrum of quantum and elastic waveguides in the form of a cranked strip is studied. In the Dirichlet spectral problem for the Laplacian (quantum waveguide), in addition to well-known results on the existence of isolated eigenvalues for any angle $\alpha$ at the corner, a priori lower bounds are established for these eigenvalues. It is explained why methods developed in the scalar case are frequently inapplicable to vector problems. For an elastic isotropic waveguide with a clamped boundary, the discrete spectrum is proved to be nonempty only for small or close-to-$\pi$ angles $\alpha$. The asymptotics of some eigenvalues are constructed. Elastic waveguides of other shapes are discussed. Key words: quantum and elastic waveguides, discrete spectrum, trapped modes, asymptotics of eigenvalues.
CommonCrawl
I am an associate professor in the Mathematical Institute at the University of Wrocław. My research interests include harmonic analysis and its applications to ergodic theory and probability theory.

Member of the Institute for Advanced Study, Princeton, (09.2016-08.2017).
Habilitation in Mathematics at the University of Wrocław, (20.06.2017).
Habilitation in Mathematics at the University of Bonn, (08.06.2016).
HCM Postdoctoral research fellowship at the University of Bonn, (10.2012-08.2016).
Assistant professor in the Mathematical Institute of the University of Wrocław, (10.2011-09.2018). On leave.
PhD in Mathematics from the University of Wrocław, (07.06.2011).
M.Sc. in Mathematics from the University of Wrocław, (05.09.2007).

Selected topics in harmonic analysis I (Wybrane zagadnienia z analizy harmonicznej I).
Selected topics in harmonic analysis II (Wybrane zagadnienia z analizy harmonicznej II).

J. Bourgain, M. Mirek, E.M. Stein and B. Wróbel. On discrete Hardy--Littlewood maximal functions over the balls in $\mathbb Z^d$: dimension-free estimates.
J. Bourgain, M. Mirek, E.M. Stein and B. Wróbel. On the Hardy--Littlewood maximal functions in high dimensions: Continuous and discrete perspective.
M. Mirek, E. M. Stein, P. Zorin-Kranich. Jump inequalities for translation-invariant polynomial averages and singular integrals on $\mathbb Z^d$.
M. Mirek, E. M. Stein, P. Zorin-Kranich. A bootstrapping approach to jump inequalities and their applications.
M. Mirek, E. M. Stein, P. Zorin-Kranich. Jump inequalities via real interpolation.
J. Bourgain, M. Mirek, E.M. Stein and B. Wróbel. Dimension-free estimates for discrete Hardy-Littlewood averaging operators over the cubes in $\mathbb Z^d$.
M. Mirek, E. M. Stein and B. Trojan. $\ell^p(\mathbb Z^d)$-estimates for discrete operators of Radon type: Maximal functions and vector-valued estimates. Accepted for publication in the Journal of Functional Analysis.
J. Bourgain, M. Mirek, E.M. Stein and B. Wróbel. On dimension-free variational inequalities for averaging operators in $\mathbb R^d$. Geometric And Functional Analysis (GAFA) 28, (2018), no. 1, 58-99.
M. Mirek. Square function estimates for discrete Radon transforms. Analysis & PDE 11, (2018), no. 3, 583-608.
B. Krause, M. Mirek and B. Trojan. Two-parameter version of Bourgain's inequality I: Rational frequencies. Advances in Mathematics 323, (2018), 720-744.
M. Mirek, E. M. Stein and B. Trojan. $\ell^p(\mathbb Z^d)$-estimates for discrete operators of Radon type: Variational estimates. Inventiones Mathematicae 209, (2017), no. 3, 665-748.
M. Mirek, B. Trojan and P. Zorin-Kranich. Variational estimates for averages and truncated singular integrals along the prime numbers. Transactions of the American Mathematical Society 369, (2017), no. 8, 5403-5423.
M. Mirek and C. Thiele. A local $T(b)$ theorem for perfect Calderón-Zygmund operators. Proceedings of the London Mathematical Society 114, (2017), no. 3, 35-59.
B. Krause, M. Mirek and B. Trojan. On the Hardy-Littlewood majorant problem for arithmetic sets. Journal of Functional Analysis 271, (2016), no. 1, 164-181.
M. Mirek and B. Trojan. Discrete maximal functions in higher dimensions and applications to ergodic theory. American Journal of Mathematics 138, (2016), no. 6, 1495-1532.
M. Mirek and B. Trojan. Cotlar's ergodic theorem along the prime numbers. Journal of Fourier Analysis and Applications 21, (2015), no. 4, 822-848.
M. Mirek. Weak type $(1,1)$ inequalities for discrete rough maximal functions. Journal d'Analyse Mathematique 127, (2015), no. 1, 247-281.
M. Mirek. Roth's Theorem in the Piatetski-Shapiro primes. Revista Matemática Iberoamericana 31, (2015), no. 2, 617-656.
M. Mirek. $\ell^p(\mathbb Z)$-boundedness of discrete maximal functions along thin subsets of primes and pointwise ergodic theorems. Mathematische Zeitschrift 279, (2015), no. 1-2, 27-59.
M. Mirek. Discrete analogues in harmonic analysis: maximal functions and singular integral operators. Mathematisches Forschungsinstitut Oberwolfach: Real Analysis, Harmonic Analysis and Applications, 20-26 July 2014. DOI: 10.4171/OWR/2014/34, Rep. no. 34, (2014), 1893-1896.
D. Buraczewski, E. Damek, S. Mentemeier and M. Mirek. Heavy tailed solutions of multivariate smoothing transforms. Stochastic Processes and their Applications 123, (2013), 1947-1986.
M. Mirek. On fixed points of a generalized multidimensional affine recursion. Probability Theory and Related Fields (2013), 156, no. 3-4, 665-705.
E. Damek, S. Mentemeier, M. Mirek and J. Zienkiewicz. Convergence to stable laws for multidimensional stochastic recursions: the case of regular matrices. Potential Analysis (2013), 38 no. 3, 683-697.
D. Buraczewski, E. Damek and M. Mirek. Asymptotics of stationary solutions of multivariate stochastic recursions with heavy tailed inputs and related limit theorems. Stochastic Processes and their Applications 122, (2012), 42-67.
M. Mirek. Heavy tail phenomenon and convergence to stable laws for iterated Lipschitz maps. Probability Theory and Related Fields 151, (2011), no. 3, 705-734.
M. Mirek. Convergence to stable laws and a local limit theorem for stochastic recursions. Colloquium Mathematicum 118, (2010), 705-720.
CommonCrawl
Therefore every section $\phi$ of $E$ yields a section $j^\infty(\phi)$ of the jet bundle, given by $\phi$ and all its higher order derivatives. Given a field bundle […], we know what type of quantities the corresponding fields assign to a given spacetime point. Among all consistent such field configurations, some are to qualify as those that "may occur in reality" if we think of the field theory as a means to describe parts of the observable universe. Moreover, if the reality to be described does not exhibit "action at a distance" then admissibility of its field configurations should be determined over arbitrary small spacetime regions, in fact over the infinitesimal neighbourhood of any point. This means equivalently that the realized field configurations should be those that satisfy a specific differential equation, hence an equation between the value of its derivatives at any spacetime point.
CommonCrawl
Question. If you collect all the hot-air that you have breathed in your life, what would the volume be? If you made a hot-air balloon, would it be able to lift you and all your possessions?

To answer, let's start with the first part. How much do I breathe? If I imagine inhaling and then exhaling a deep, big breath, I figure I could inflate a small paper bag, perhaps well over one liter, but probably not as much as two liters. But my passive resting breathing is probably much less than a big deep breath. So let's figure a half liter per ordinary passive breath. How often do I breathe? Well, in the swimming pool, I can hold my breath under water for a minute or even two minutes (in my younger swim-team days); but if I hold my breath right now, I can say that it does start to feel a little unnatural, like I should take my next breath, even after just about five or ten seconds, even though this impulse could be resisted longer. It seems to me that my body wants to take another breath in about that time. If we breathe every five seconds, that would mean 12 breaths per minute, so let's say ten breaths per minute, which would mean a volume of 5 liters per minute. That makes $5\times 60=300$ liters per hour, or $300\times 24=7200$ liters per day. In a year, this would be $7200\times 365$, which is less than 7000 times 400, which is 2,800,000 liters per year. Let's say 2.5 million liters per year of hot air. Times 50 years would make $125$ million liters of hot air in all.

Now, the air I exhale is at body temperature, about 310 kelvin, while the air outside on a chilly day is at about 283 kelvin, and at constant pressure the volume of a gas is proportional to its absolute temperature. So the heat of the hot air caused it to expand in volume by ten percent. The buoyant force of the hot-air balloon is exactly the weight of this displaced air, by the Archimedean principle. Thus, the lifting force of my hot-air balloon will be equal to the mass of air filling ten percent of the volume we computed.

How much does air weigh? I happen to remember from my high school science class that one mole of air at one atmosphere of pressure is about 22 liters (my teacher had a cube of exactly that size sitting on his desk, to help us to visualize it). And I also know that air is mainly nitrogen, which forms the molecule $N_2$, and since nitrogen in the periodic table has an atomic mass of about 14, the molecule $N_2$ has a mass of 28 grams per mole. So air weighs about 28 grams per 22 liters, which is about 1.3 grams per liter. Each cubic meter is one thousand liters, and so 1.3 kilograms per cubic meter (this is much larger than I had expected; air weighs more than one kilogram per cubic meter!).

My hot air in total was 125,000 cubic meters, and we said that because of the temperature difference, the volume expanded by ten percent, or 12,500 cubic meters. This expansion would displace an equal volume of air, which weighs 1.3 kg per cubic meter. Thus, the displaced air weighs $12,500\times 1.3\approx 16,000$ kilograms, or about 16 metric tons. So all my hot air, at body temperature in a giant hot air balloon on a chilly day, would have a lifting force able to lift 16 metric tons.

Would this lift me and all my possessions? Do I own 16 tons of stuff? Well, thankfully, I don't own a car, which would be a ton or more by itself. But I do own a lot of books, a piano, an oven, a dishwasher, some heavy furniture, paintings, and various other items, as well as a collection of large potted plants on my terrace. It seems likely to me that I could fit most if not all my possessions within 16 tons. To gain a little confidence in this, let's estimate the mass of my books.
My wife and I have about twelve large shelves filled with books, each about 2 meters, and then I have about 3 more such shelves filled with books in my offices at the university. If we count half of the home books as mine, plus my office books, that makes 9 shelves times two meters, for about 20 meters of books. If the books are about 25 cm tall on average, and 15 cm deep, that makes $20\times .25\times .15=.75$ cubic meters of books. Let's round up to one cubic meter of books.

How much does a cubic meter of paper weigh? Well, one ream of copy paper weighs about 2 kilograms, and that is a volume of 8.5 by 11 by 2 inches. One meter is about 39 inches, and so we could fill one cubic meter with a pile of reams of copy paper about 3 by 4 by 19, which would be roughly 230 reams, or about 450 kilograms. So not even a half ton of books! So I can definitely lift all my most important possessions within the 16 tons.

Final answer: Yes, if we filled a giant balloon with all the hot-air I have breathed in my life, at body temperature, then it would lift me and all my possessions.

This entry was posted in Exposition, Math for Kids and tagged kids, Number sense by Joel David Hamkins. Bookmark the permalink.
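As an addendum (not part of the original post), the entire estimate fits in a few lines of Python; every input below is just the rough guess made above, so the output is only as good as those guesses.

```python
# Back-of-the-envelope check of the hot-air balloon estimate; every input is a rough guess.
breath_volume_l = 0.5                 # liters per ordinary passive breath
breaths_per_minute = 10
years = 50

liters_per_year = breath_volume_l * breaths_per_minute * 60 * 24 * 365
total_m3 = liters_per_year * years / 1000.0
print(f"total exhaled air: {total_m3:,.0f} cubic meters")      # roughly 130,000 m^3

T_body, T_chilly = 310.0, 283.0       # kelvin: body temperature vs a chilly day
expansion = T_body / T_chilly - 1.0   # ideal gas at constant pressure: about 10%

air_density = 1.3                     # kg per cubic meter (28 g per 22 liters)
lift_tons = total_m3 * expansion * air_density / 1000.0
print(f"lifting capacity: {lift_tons:,.1f} metric tons")        # about 16 tons
```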
CommonCrawl
"You're making an error: How can you compare two orthogonal values, one it's zero ?" This seems trivial, but has very, very deep consequences, for example in SET THEORY, or when $\infty$ is, for example the number of the digits of an Irrational value (like $\pi$ ). -if you try to push vertically a vagon on the rails, it never moves ahead, indipendently by the force you push on. I very like this because you can prove some theorem just saying "that since the two thinks you're talking of, are one at 90° degree to the other, there is no interaction between them". you prove that also the other have the "same" smooth behaviour. it's lost time to try to square the circle, for littlest and littlest measure unit (or tessel) you keep, there is no way to let you find a finite result. This told us that we have to accept that the right definition of an area is: "An Area is the result of an integral (so a limit of a Sum)" Last edited by complicatemodulus; January 14th, 2017 at 12:44 AM.
CommonCrawl
Preview box slightly wider than answer box and Preview button disappearance issue. "Preview" button slows down server response even after unticked?

When I was halfway through composing the post Some more questions on Haag's theorem, I ticked the "Preview" button to check whether the LaTeX was written correctly. After seeing everything was fine I un-ticked the "Preview" button, but the editor/server response was still very slow, in fact as slow as when the "Live preview" button is ticked (just for the sake of comparison, I never ticked "live preview" when composing that post). Then, when I submitted the post, the server response became infinitely slow and I had to close the tab and compose my post all over again (I forgot to categorize my question so it should have returned me a notification; this may or may not add to the slowness....). In fact I'm not sure if it's the fault of "Preview" or just a server glitch which happened to occur when I ticked and unticked the button.

I was unable to reproduce the exact bug, but I found that there's an oscillating MathJax output that says "Processing output: 100%" and "Typesetting output: 100%" intermittently. Do you observe this too? And can you still reproduce the bug? If so, could you see if the bug is reproduced even when MathJax is not involved? As I am actually revisiting the editor to implement a new feature, I will check this code in detail again. Please tell me should you observe this behavior again.

@dimension10 @polarkernel, thanks for the feedback.

@JiaYiyang I see... that's quite bad. I guess the autosave feature polarkernel is developing might help with this.

@RonMaimon I am able to reproduce what you describe. After some investigations I have found that MathJax gets triggered by the math-plugin of the editor, which I have not written myself. It has nothing to do with the preview features. I will try to change it. Maybe it could be configured to wait x milliseconds for another key before triggering the typesetting, so that it only gets involved when you stop writing for a moment. As I am actually writing a "save/autosave draft" feature for the editor, I will try to include a solution there.

@polarkernel: I didn't observe a slowdown, because the editing is ordinarily always fast for me even with preview, but I did observe what I interpreted as the thing processing TeX like crazy while editing an answer (from little announcements that said "Processing math" and "Typesetting math" over and over). It is happening even as I am editing this comment with a bit of gratuitous TeX thrown in: $\alpha + \omega$, even though I never clicked preview before it happened. You should definitely be able to reproduce it: just put an equation somewhere, and you can see the announcements of typesetting/processing appear again and again with every keystroke. For me, this is fast, but I assume it is unnecessary processing caused by preview always being enabled for some reason.

In case you encounter the issue again, rather than closing the tab and rewriting everything, you should be able to quickly ctrl+a ctrl+c the entire page content, and then select the relevant parts when you go about rewriting.

When I submitted the post the page went completely blank and stuck, so that wouldn't have worked; I should've done so before submitting.
CommonCrawl
If we consider $x_t$ an Ornstein-Uhlenbeck process, say $dx_t = -\theta x_t\,dt + \sigma\,dW_t$ (with $W_t$ the Wiener process), does anyone know what would be the variance of the convolution of $x_t$ with a given filter $A$, i.e. $V(x_t \star A)$? But I can't go anywhere from the last equation. Even taking $A$ as a Dirac delta, it's not easy... any help would be greatly appreciated!

Welcome to MO! This is not quite a research level question, but here is the answer anyway. Represent the Ornstein-Uhlenbeck process as white noise passed through a low-pass filter (this is really the representation in the equation for $x_t$ you wrote). Call the transfer function of the low-pass filter $H$. Then your process is white noise passing through the filter $AH$. Now the variance you ask about is simply the square of the $L^2$ norm of $AH$, i.e. $\int |AH(\omega)|^2 d\omega$.
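As a numerical aside (not part of the original exchange): in the time domain the quantity in question is $\operatorname{Var}(x\star A)=\iint A(s)A(u)\,C(s-u)\,ds\,du$, where $C(\tau)=\frac{\sigma^2}{2\theta}e^{-\theta|\tau|}$ is the stationary autocovariance of the OU process, and this double integral is easy to evaluate on a grid. The parameters and the Gaussian filter below are arbitrary choices for illustration.

```python
import numpy as np

theta, sigma = 1.5, 0.8            # OU parameters in dx = -theta*x dt + sigma dW (arbitrary)
C = lambda tau: sigma**2 / (2 * theta) * np.exp(-theta * np.abs(tau))   # stationary autocovariance

dt = 0.01
t = np.arange(-5.0, 5.0 + dt, dt)
A = np.exp(-t**2 / (2 * 0.5**2))   # a Gaussian filter (arbitrary choice)
A /= A.sum() * dt                  # normalize to unit integral

# Var(x * A) = sum_{i,j} A_i A_j C(t_i - t_j) dt^2  (discretized double integral)
S, U = np.meshgrid(t, t, indexing="ij")
var_filtered = (A[:, None] * A[None, :] * C(S - U)).sum() * dt**2

print("Var(x)     =", sigma**2 / (2 * theta))
print("Var(x * A) =", var_filtered)   # smaller than Var(x): smoothing suppresses variance
```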
CommonCrawl
Let $X$ be a compact connected Riemann surface and $E_P$ a holomorphic principal $P$--bundle over $X$, where $P$ is a parabolic subgroup of a complex reductive affine algebraic group $G$. If the Levi bundle associated to $E_P$ admits a holomorphic connection, and the reduction $E_P \subset E_P\times^P G$ is rigid, we prove that $E_P$ admits a holomorphic connection. As an immediate consequence, we obtain a sufficient condition for a filtered holomorphic vector bundle over $X$ to admit a filtration preserving holomorphic connection. Moreover, we state a weaker sufficient condition in the special case of a filtration of length two. Full text: dvi.gz 16 k, dvi 50 k, ps.gz 207 k, pdf 110 k.
CommonCrawl
Moore's Law states that the number of transistors on a chip will double every two years. Amazingly, this law has held true for over half a century. Whenever current technology no longer allowed more growth, researchers have come up with new manufacturing technologies to pack circuits even denser. In the near future, this might mean that chips are constructed in three dimensions instead of two. But for this problem, two dimensions will be enough.

A problem common to all two-dimensional hardware design (for example chips, graphics cards, motherboards, and so on) is wire placement. Whenever wires are routed on the hardware, it is problematic if they have to cross each other. When a crossing occurs, special gadgets have to be used to allow two electrical wires to pass over each other, and this makes manufacturing more expensive.

Our problem is the following: you are given a hardware design with several wires already in place (all of them straight line segments). You are also given the start and end points for a new wire connection to be added. You will have to determine the minimum number of existing wires that have to be crossed in order to connect the start and end points. This connection need not be a straight line. The only requirement is that it cannot cross at a point where two or more wires already meet or intersect.

Figure 1 shows the first sample input. Eight existing wires form five squares. The start and end points of the new connection are in the leftmost and rightmost squares, respectively. The black dashed line shows that a direct connection would cross four wires, whereas the optimal solution crosses only two wires (the curved blue line).

The input consists of a single test case. The first line contains five integers $m, x_0, y_0, x_1, y_1$, which are the number of pre-existing wires ($m \le 100$) and the start and end points that need to be connected. This is followed by $m$ lines, each containing four integers $x_a, y_a, x_b, y_b$ describing an existing wire of non-zero length from $(x_a, y_a)$ to $(x_b, y_b)$. The absolute value of each input coordinate is less than $10^5$. Each pair of wires has at most one point in common, that is, wires do not overlap. The start and end points for the new wire do not lie on a pre-existing wire.

Display the minimum number of wires that have to be crossed to connect the start and end points.
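The basic geometric primitive needed here is a segment-segment intersection test. The sketch below (an illustration, not a full solution) only counts how many wires the straight start-end connection would touch (the dashed line in Figure 1), which gives an upper bound on the answer; finding the true minimum additionally requires searching over detours, for example a shortest path over the faces of the arrangement of wires, which is not shown.

```python
def sign(v):
    return (v > 0) - (v < 0)

def orient(p, q, r):
    """Sign of the cross product (q - p) x (r - p)."""
    return sign((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]))

def on_segment(p, q, r):
    """True if the collinear point r lies on the closed segment pq."""
    return (min(p[0], q[0]) <= r[0] <= max(p[0], q[0])
            and min(p[1], q[1]) <= r[1] <= max(p[1], q[1]))

def segments_touch(a, b, c, d):
    """True if the closed segments ab and cd share at least one point."""
    o1, o2 = orient(a, b, c), orient(a, b, d)
    o3, o4 = orient(c, d, a), orient(c, d, b)
    if o1 != o2 and o3 != o4:
        return True
    return ((o1 == 0 and on_segment(a, b, c)) or (o2 == 0 and on_segment(a, b, d))
            or (o3 == 0 and on_segment(c, d, a)) or (o4 == 0 and on_segment(c, d, b)))

def straight_connection_crossings(start, end, wires):
    """How many existing wires the straight start-end connection would touch."""
    return sum(segments_touch(start, end, (xa, ya), (xb, yb)) for xa, ya, xb, yb in wires)
```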
CommonCrawl
Farmer John's $N$ cows ($5 \leq N \leq 50,000$) are all located at distinct positions in his two-dimensional field. FJ wants to enclose all of the cows with a rectangular fence whose sides are parallel to the x and y axes, and he wants this fence to be as small as possible so that it contains every cow (cows on the boundary are allowed). FJ is unfortunately on a tight budget due to low milk production last quarter. He would therefore like to build an even smaller fenced enclosure if possible, and he is willing to sell up to three cows from his herd to make this possible. Please help FJ compute the smallest possible area he can enclose with his fence after removing up to three cows from his herd (and thereafter building the tightest enclosing fence for the remaining cows). For this problem, please treat cows as points and the fence as a collection of four line segments (i.e., don't think of the cows as "unit squares"). Note that the answer can be zero, for example if all remaining cows end up standing in a common vertical or horizontal line. The first line of input contains $N$. The next $N$ lines each contain two integers specifying the location of a cow. Cow locations are positive integers in the range $1 \ldots 40,000$. Write a single integer specifying the minimum area FJ can enclose with his fence after removing up to three carefully-chosen cows from his herd.
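One way to attack this (a sketch of a possible approach, not necessarily the intended reference solution): with at most three removals, a removed cow can only affect the bounding box if it is among the four most extreme cows in some axis direction, so it suffices to brute-force over subsets of at most three cows drawn from those at most 16 candidates.

```python
from itertools import combinations

def min_area_after_removals(cows, k=3):
    """cows: list of (x, y) tuples at distinct positions; remove up to k cows
    so that the axis-aligned bounding box of the rest has minimum area."""
    by_x = sorted(cows, key=lambda c: c[0])
    by_y = sorted(cows, key=lambda c: c[1])
    # only the k+1 most extreme cows in each of the four directions can affect the box
    candidates = list(set(by_x[:k + 1] + by_x[-(k + 1):] + by_y[:k + 1] + by_y[-(k + 1):]))

    def area(removed):
        xs = [x for x, y in cows if (x, y) not in removed]
        ys = [y for x, y in cows if (x, y) not in removed]
        return (max(xs) - min(xs)) * (max(ys) - min(ys))

    best = area(frozenset())
    for r in range(1, k + 1):
        for combo in combinations(candidates, r):
            best = min(best, area(frozenset(combo)))
    return best
```

The rescan inside `area` makes this O(number of subsets times N); if that turns out to be too slow, the surviving extremes can instead be read off the sorted lists in O(k) per subset.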
CommonCrawl
I am a theoretical physicist with $\hbar \neq 0, c \neq \infty, G \neq 0$, or rather $\hbar=c=G=1$.

20 If you're given an hour, is it bad to finish a job talk in half an hour?
15 Is it possible to reconstruct the Hamiltonian from knowledge of its ground state wave function?
10 Given three reference letters in a postdoc application, how to decide which to list as the second letter and which to list as the third?
CommonCrawl
Let $M$ be a 3-manifold with boundary. If $M$ has an orientable finite cover that is a Seifert fiber space, then is $M$ also a Seifert fiber space?

This is certainly true, though one needs to be sufficiently careful about one's definition of Seifert fibred---it's important to allow fibres with a neighbourhood that looks like a fibred solid Klein bottle. See p. 429 of P. Scott, 'The geometries of 3-manifolds', Bull. LMS 15(5), 1983, pp. 401--487. I don't think I know a reference in the literature, but one can prove it using the following result (see Theorem 3.9 of Scott's paper).

Theorem: Let $M$ be a compact Seifert fibre space and let $f: M \to N$ be a homeomorphism. Then $f$ is homotopic to a fibre-preserving homeomorphism (and hence an isomorphism of Seifert bundles) unless one of the following occurs: $M$ is $S^1 \times D^2$, or an I-bundle over the torus, or an I-bundle over the Klein bottle.

It follows that, except in the above three cases, the Seifert structure on the orientable double cover is invariant under the covering transformation, and hence descends. All closed 3-manifolds with finite fundamental group are orientable (by work of Epstein), so there are only finitely many manifolds left to check. I'll leave them as an exercise (which I confess I've never done myself).

You need geometrisation to prove this fact. See Corollary 12.9.5 here for a reference. You can't prove this without Perelman, at least with our present knowledge. For instance, if the orientable cover is $S^3$, then you must ensure that $M$ be elliptic, and that's precisely the space form conjecture, which is "one third" of geometrisation. But even when the finite cover is some other Seifert space, I don't see an easy argument to conclude without using geometrisation.

Edit. I overlooked the "with boundary" hypothesis. In that case Thurston's proof of geometrisation suffices.
CommonCrawl
The first subject concerns Coulomb branches of 3-dimensional N=4 supersymmetric gauge theories and their quantizations, and the study of representations of the resulting non-commutative algebras. The second subject has to do with representation theory of p-adic groups (in particular, p-adic interpretation of Lusztig's asymptotic Hecke algebra and its generalizations).

arXiv:1807.09038 Coulomb branches of 3-dimensional gauge theories and related structures. Alexander Braverman, Michael Finkelberg.
arXiv:1805.11826 Line bundles over Coulomb branches. Alexander Braverman, Michael Finkelberg, Hiraku Nakajima.
arXiv:1804.00336 Schwartz space of parabolic basic affine space and asymptotic Hecke algebras. Alexander Braverman, David Kazhdan.
arXiv:1706.02112 Ring objects in the equivariant derived Satake category arising from Coulomb branches (with an appendix by Gus Lonergan). Alexander Braverman, Michael Finkelberg, Hiraku Nakajima.
arXiv:1611.10216 Cyclotomic double affine Hecke algebras (with an appendix by Hiraku Nakajima and Daisuke Yamakawa). Alexander Braverman, Pavel Etingof, Michael Finkelberg.
arXiv:1604.03625 Coulomb branches of $3d$ $\mathcal N=4$ quiver gauge theories and slices in the affine Grassmannian (with appendices by Alexander Braverman, Michael Finkelberg, Joel Kamnitzer, Ryosuke Kodera, Hiraku Nakajima, Ben Webster, and Alex Weekes). Alexander Braverman, Michael Finkelberg, Hiraku Nakajima.
CommonCrawl
I've been writing a little about some results on graph theory, and I want some nice examples of applying the results to some interesting finite connected graphs to show how the results might be useful. I'm therefore looking for a reference which has a collection of interesting graphs, ideally along with a picture if possible (but not the end of the world if there are no pictures) and some of the basic properties of the graph; for example, connectedness, diameter etc, though again if the reference only has the names I'm sure I can track down their properties online so that isn't too big a problem either. I have been using Wikipedia's gallery of named graphs which has been quite helpful, but unfortunately a lot of those turn out to be trivial cases for the results I'm proving, and Alain Matthes had an excellent list too (altermundus.fr/downloads/documents/NamedGraphs.pdf). Some of the examples I am interested in are nontrivial graphs with moderately large girth (say 8 or above, and nontrivial meaning not a path, tree, cycle etc), Cayley graphs, and any interesting constructions of sequences of graphs of increasing order $n \to \infty$. Other than such sequences I would prefer to look at examples of 'small to medium' order (so probably 60 or less), and on the less dense end of the scale; essentially graphs which you can feasibly work with "by hand" because they are not overly enormous, not graphs like the Higman-Sims. I have tried to explain the sort of graphs I am looking for just in case it's relevant, but if there is an all-encompassing handbook or compendium of some sort then I am obviously very happy to sift through them myself to locate some useful examples. Any responses would be appreciated, be they books, websites, papers, or just individual suggestions of interesting graphs which weren't in the Wikipedia gallery. Thank you!

If you have access to the software Mathematica, the function GraphData has quite a number of example graphs you can peruse. The documentation notes that the graphs (and their properties) that are implemented come from a wide range of sources, like this one. You might also want to look at MathWorld's compendium.
CommonCrawl
Abstract: A non-parametric statistical test that allows the detection of anomalies given a set of (possibly high dimensional) sample points drawn from a nominal probability distribution is presented. Its test statistic is based on the distance between a query point mapped into a feature space and its projection on the eigen-structure of the kernel matrix computed on the sample points. The statistical test is shown to be uniformly most powerful for a given false alarm level $\alpha$ when the alternative density is uniform over the support of the null distribution. The computational performance of the procedure is assessed, as the algorithm can be computed in $O(n^3 + n^2)$ and testing a query point only involves matrix-vector products. Our method is tested on both artificial and benchmark real data sets and demonstrates good performance regarding both type-I and type-II errors w.r.t. competing methods.
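To make the idea concrete, here is a generic uncentered kernel-PCA projection-distance score with an RBF kernel. It follows the spirit of the statistic described in the abstract, but the kernel, bandwidth, number of retained eigenvectors and any threshold calibration are placeholder choices rather than the paper's.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit(X, gamma, n_components):
    """Eigen-structure of the (uncentered) kernel matrix of the nominal sample."""
    K = rbf_kernel(X, X, gamma)
    lam, U = np.linalg.eigh(K)                     # ascending eigenvalues
    lam, U = lam[::-1][:n_components], U[:, ::-1][:, :n_components]
    keep = lam > 1e-12                             # drop numerically null directions
    return X, gamma, lam[keep], U[:, keep]

def score(z, model):
    """Squared feature-space distance between phi(z) and its projection onto the
    span of the leading kernel eigenvectors (uncentered kernel-PCA residual)."""
    X, gamma, lam, U = model
    kz = rbf_kernel(z[None, :], X, gamma).ravel()  # k(x_j, z) for all training points
    return 1.0 - ((U.T @ kz) ** 2 / lam).sum()     # k(z, z) = 1 for the RBF kernel

rng = np.random.default_rng(0)
nominal = rng.normal(size=(200, 2))
model = fit(nominal, gamma=0.5, n_components=20)
print(score(np.array([0.1, -0.2]), model))         # small: looks nominal
print(score(np.array([6.0, 6.0]), model))          # close to 1: flagged as anomalous
```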
CommonCrawl
How to solve the following Frobenius norm-minimization problem? But what about the extended version? Is there any analytical solution to this? Or do we have to use a gradient-descent-like method?

The challenge is to compute $\nabla f_2 (\mathrm X)$. Since $f_2$ is quartic in $\mathrm X$, the gradient $\nabla f_2$ should be cubic.
CommonCrawl
In a previous post about Dirichlet Series, I referenced some properties of a few arithmetic functions like $\tau$, the divisor-counting function, $\sigma$, the divisor sum function, $\varphi$, the Euler totient function, $\mu$, the Mobius mu function, and $\sigma_a$, the generalized divisor sum function. Many of these statements went without proof in that post, so in this post I will prove them, along with some other properties of each of these functions.

Let's start with $\sigma_a$, the generalized divisor sum function, as well as $\tau=\sigma_0$ and $\sigma=\sigma_1$. This is defined as $$\sigma_a(n)=\sum_{d\mid n} d^a$$ where the sum is taken over all of the divisors $d$ of $n$, the input. For example, if $p,q$ are distinct primes, then $$\sigma_a(pq)=1+p^a+q^a+(pq)^a=(1+p^a)(1+q^a)$$

An essential property of these functions is that they are multiplicative; that is, that if $m,n$ are two coprime natural numbers, then $\sigma_a(mn)=\sigma_a(m)\sigma_a(n)$. This is pretty easy to prove, using the following property of sums: $$\Big(\sum_i x_i\Big)\Big(\sum_j y_j\Big)=\sum_{i,j}x_i y_j$$ If $m,n$ are coprime, then each divisor of $mn$ can be written uniquely as the product of a divisor of $m$ and a divisor of $n$. Thus, using the product of summations formula, we have $$\sigma_a(m)\sigma_a(n)=\Big(\sum_{d_1\mid m} d_1^a\Big)\Big(\sum_{d_2\mid n} d_2^a\Big)=\sum_{d_1\mid m,\ d_2\mid n}(d_1 d_2)^a=\sum_{d\mid mn} d^a=\sigma_a(mn)$$

Because $\sigma_a$ is multiplicative, we may obtain a formula for $\sigma_a$ as a finite product using the prime factorization of its input. Note that if $p$ is a prime, then $$\sigma_a(p^k)=1+p^a+p^{2a}+\cdots+p^{ka}$$ Thus, if $n$ has the prime factorization $$n=\prod_{p\mid n}p^{k}$$ then we have that $$\sigma_a(n)=\prod_{p\mid n}\left(1+p^a+p^{2a}+\cdots+p^{ka}\right)$$ where the product is taken over all primes $p$ dividing $n$, and their respective multiplicities $k$.

It's easy to show that $\mu$ is multiplicative as well. (Recall that $\mu(n)=0$ if $n$ is divisible by a perfect square greater than $1$, and $\mu(n)=(-1)^k$ if $n$ is a product of $k$ distinct primes.) If $m,n$ are positive integers and one of $m,n$ is divisible by a perfect square other than $1$, then their product clearly is as well; thus, if $\mu(m)\mu(n)=0$ then $\mu(mn)=0$ and $\mu(mn)=\mu(m)\mu(n)$. If neither $\mu(m)$ nor $\mu(n)$ is $0$, then we must do a bit of casework. If they are coprime, their sets of prime factors are disjoint, so if the parities of their cardinalities are distinct, then the parity of the cardinality of their union is odd; otherwise, the parity of the cardinality of their union is even. These correspond to the identities $1\cdot 1=1$, $(-1)\cdot (-1)=1$, and $1\cdot (-1)=-1$, establishing the multiplicativity of $\mu$. Many of the interesting identities of $\mu$ are related to arithmetic sums, but this will be the topic of the next blog post.

Finally, consider the function $\varphi$, or the Euler Totient Function. $\varphi(n)$ is defined as the number of positive integers less than and relatively prime to $n$. Proving the multiplicativity of $\varphi$ is the most difficult, and it requires some previous knowledge about $\mathbb Z_m$, the additive group of integers modulo $m$. Write the integers from $1$ to $mn$ in a grid with $n$ columns, so that the number in the $b$th row and $c$th column is $bn+c$, where $0\le b\le m-1$ and $1\le c\le n$. The numbers in the $c$th column are in the form $bn+c$, so if $c$ is not coprime to $n$, since all common divisors of $c,n$ must also divide $bn+c$, we have that all numbers in the $c$th column are not coprime to $n$, and are thus not coprime to $mn$. So let us consider only the columns $c$ for which $c$ is coprime to $n$, noticing that the number in the $b$th row and $c$th column is $bn+c$. Because all of these numbers are coprime to $n$ as long as $c$ is coprime to $n$, and because $mn$ is coprime to any number as long as that number is coprime to both $m$ and $n$, we must figure out how many of the numbers in each column are coprime to $m$.
Now I shall use the fact that if $a$, an integer, is coprime to $m$, then multiplying each element of $\mathbb Z_m$ by $a$ and taking the remainder modulo $m$ still results in the set $\mathbb Z_m$; similarly, adding any number to all elements of $\mathbb Z_m$ and taking modulo $m$ results in $\mathbb Z_m$. This means that, modulo $m$, the elements in any two columns are exactly the same up to rearrangement, since the elements in any column consist of the elements of $\mathbb Z_m$ multiplied by $n$ (which is coprime to $m$), plus a constant. Since a number is coprime to $m$ if and only if its residue modulo $m$ is coprime to $m$, we have that the number of elements of any column of the grid coprime to $m$ is equal to the number of elements of $\mathbb Z_m$ that are coprime to $m$. This is precisely $\varphi (m)$, by definition. Thus, we have that there are $\varphi(m)$ numbers coprime to $m$ in each column, and all numbers in each of $\varphi(n)$ columns are coprime to $n$; thus, since a number is coprime to $mn$ if and only if it is coprime to $m$ and $n$, we have that there are $\varphi(m)\varphi(n)$ numbers coprime to $mn$ in the grid, and so $\varphi(mn)=\varphi(m)\varphi(n)$ as desired.

We may find an interesting product representation for $\varphi(n)$ using its prime factorization, similarly to how we did for $\sigma_a$, using the trivial fact that $$\varphi(p^k)=p^k-p^{k-1}$$ for $k\gt 0$. We have that if the prime factorization of $n$ is $$n=\prod_{p\mid n}p^{k}$$ then $$\varphi(n)=\prod_{p\mid n}\left(p^k-p^{k-1}\right)=n\prod_{p\mid n}\left(1-\frac{1}{p}\right)$$ where the product is taken over all primes $p$ dividing $n$.

That concludes this short post! The next post will hopefully be much longer and more interesting, and it will be about the evaluation of certain finite arithmetic sums involving these functions.
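As a footnote to the post above, the product formulas translate directly into code; the sketch below computes $\tau$, $\sigma$, $\varphi$ and $\mu$ from the prime factorization and spot-checks multiplicativity on a few coprime pairs.

```python
from math import gcd

def factorize(n):
    """Prime factorization by trial division: returns {p: k} with n = prod p**k."""
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def sigma_a(n, a):
    """Sum of d**a over the divisors d of n, via the product over prime powers."""
    result = 1
    for p, k in factorize(n).items():
        result *= sum(p ** (a * i) for i in range(k + 1))
    return result

def phi(n):
    result = 1
    for p, k in factorize(n).items():
        result *= p ** (k - 1) * (p - 1)
    return result

def mu(n):
    f = factorize(n)
    return 0 if any(k > 1 for k in f.values()) else (-1) ** len(f)

tau = lambda n: sigma_a(n, 0)
sigma = lambda n: sigma_a(n, 1)

for m, n in [(8, 15), (9, 10), (12, 35)]:           # coprime pairs
    assert gcd(m, n) == 1
    for f in (tau, sigma, phi, mu):
        assert f(m * n) == f(m) * f(n)
print(tau(12), sigma(12), phi(12), mu(12), mu(30))   # 6 28 4 0 -1
```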
CommonCrawl
HPGMG-FE solves constant- and variable-coefficient elliptic problems on deformed meshes using FMG. It uses $Q_2$ elements and evaluates all operators matrix-free, exploiting tensor-product structure. This structure is similar to that used in spectral element methods (multiple Gordon Bell prizes), but is still beneficial for $Q_2$ elements. The method is third-order accurate in $L^2$, as demonstrated by the FMG convergence. (For linear problems, the discretization is 4th-order superconvergent at nodes, but we do not exploit this property.) Interpolation and restriction are done using the natural embedding of finite-element spaces, with tensor-product optimization. On coarse levels, the active process set is restricted using Z-order so that interpolation and restriction mostly involve processes that will be "nearby" on a computer. Chebyshev polynomials are used for smoothing, preconditioned by the diagonal. FMG convergence is observed with a V(3,1) cycle, thus convergence is reached in a total of 5 fine-grid operator applications. The present design of HPGMG-FE was influenced by a number of considerations about performance and application workloads. The finite-element (FE) method is not the simplest approach to discretizing a PDE on a regular grid. However, it is popular due to its flexibility and suitability for libraries and frameworks. Its distinguishing feature relative to standard finite-difference (FD) and finite-volume (FV) methods is that work is shared between degrees of freedom (dofs). Since FD and FV are so similar for simple elliptic equations on structured grids, we describe the effect in terms of FD methods, for which dofs are stored at vertices of the mesh. For FD, the residual at each vertex is straightforward to compute directly based on the state at vertices in a neighborhood. Most or all of the computation for the residual at an adjacent vertex shares no common intermediate results, thus parallelization via a simple vertex-partition (dof-partition) has little or no compute overhead relative to the best possible sequential implementation. Incidentally, this is not the case for more advanced FD and FV discretizations, such as slope- or flux-limited upwind methods and those with complicated material models. For example, MUSCL, a TVD FV method, would use neighbors to reconstruct a gradient, then use the result to evaluate at faces, solve a Riemann problem at the faces, and sum the result back into cells. Parallelization via a simple dof-partition would require redundant computation of reconstructed gradients and redundant Riemann solves in the overlap. In strong-scaling scenarios, the subdomains are small enough that such redundant computation can more than double the cost. Finite-element methods possess this attribute because computation is associated with cells while dofs are associated with vertices. Parallelization via dof-partition would involve redundant computation over elements in the overlap. For example, consider an $8\times 8\times 8$-element subdomain, which for $Q_2$ elements would imply ownership of 4096 dofs for a scalar problem. If computation is arranged for a non-overlapping dof-partition, the cost is $9^3/8^3 = 1.42$ times that of a non-overlapping element partition. This is several times larger than the optimal subdomain size for coarse levels (i.e., operator application is still strong-scaling, albeit at reduced efficiency), so dof-partition overhead is even higher on coarse levels. 
Computing with a non-overlapping element partition (instead of dof-partition) involves overlapping writes to the vertex-based residuals. There are many ways to coordinate these overlapping writes, each with storage/compute/latency tradeoffs. We believe it is valuable to encourage implementations to use these techniques rather than a simple dof-partition. Spectral-element applications such as Nek5000, SPECFEM, and HOMME are presently used in machine procurements, have won performance prizes, and represent a significant fraction of HPC computation. On the other hand, low-order methods are overwhelmingly chosen for structural mechanics and heterogeneous materials such as porous media, also representing a significant fraction of HPC computation for research and industry. High-order methods vectorize readily and are more compute-intensive (flops and instruction issue) while low-order methods are more dependent on memory bandwidth and on less regular memory access. $Q_2$ elements strike a balance between classical spectral-element efficiency (demonstrated at $Q_3$ and higher) and low-order methods. Additionally, $Q_2$ elements have a larger penalty for redundant computation than $Q_1$ elements, thus further emphasizing thread synchronization.

For $Q_3$ and higher order elements, it is currently possible to achieve high performance by vectorizing within each element. This is no longer true for $Q_2$, where it becomes necessary to vectorize across elements. Vectorizing across elements is simpler code to write since it needs no cross-lane instructions, but increases the working set size, which should fit within L1 cache for maximum efficiency. If vendors increase the length of vector registers or add more hardware threads without increasing cache, performance on $Q_2$ elements will drop unless they can effectively vectorize within elements (i.e., number of elements in the working set smaller than the length of a vector register). That would require more cross-lane operations and is more complicated to implement. Few of today's applications can effectively utilize vector registers, so the benchmark should not reward them for hardware that goes nearly unused outside of dense linear algebra.

FAS is a technique primarily used for solving nonlinear problems. It involves somewhat more vector work and one more coarse-grid operator application relative to the conventional correction scheme. Although this benchmark only solves linear problems, FAS enables an inexpensive check for consistency between levels (potentially useful for soft error detection) and forms the basis for advanced multigrid techniques (such as "frozen $\tau$" and segmental refinement). We also believe that nonlinear multigrid methods will become more important as computer architectures continue to evolve. Finally, the modest increase in vector work is representative of acceleration techniques such as Krylov, time integrators such as Runge-Kutta, and various analysis techniques. This increases the demand for memory bandwidth, somewhat balancing our compute-intensive operators.

A benchmark should have similar computational characteristics at all scales. Nonlinearities exhibit different behavior at different scales, so convergence rate would depend on resolution for any strongly nonlinear operator. In order to maintain a strong theoretical foundation for convergence independent of resolution, we have chosen to focus on linear problems. This choice could be revisited.
Evaluating nonlinearities often involves special functions, polynomial or rational fits, table lookups, and iteration (e.g., implicit plasticity model, implicit equation of state, some Riemann solvers). The first two are compute-limited, table lookups are more data-intensive, and the last creates a load-balancing challenge because the iteration may converge at different rates. Since the characteristic of table lookups depends so strongly on the table size and interpolation scheme, we are reluctant to utilize a table lookup in the benchmark. We would love to include the irregularity imposed by iteration, but have not found a clean way to include it without compromising other values of the benchmark. Please write the mailing list if you know of a way.

Mapped coordinates account for a large fraction of data motion in the scalar solver. Assuming affine coordinates would reduce this data dependency and simplify the kernels somewhat. We like mapped coordinates because many real applications have mapped coordinates (e.g., for curved boundary layers or interface-tracking) and/or stored coefficients, and because evaluating the metric terms increases the computational depth of each element. Some vendors have "cut corners" by supporting only a small number of prefetch streams or having high-latency instructions with very small caches. These cost-cutting measures are okay for some benchmarks, but impact performance for many applications. Therefore, including coordinate transformations in the benchmark rewards vendors for creating balanced, versatile architectures.

HPGMG-FE samples across a range of problem sizes, from the largest that can fit in memory to the limit of strong scalability. This is done in order to quantify "dynamic range", the ratio of the largest problem size to the smallest that can be solved "efficiently" (we set this threshold at 50% efficiency relative to the largest size). Every HPC center has some users that are focused on throughput and some that are focused on turn-around time. For almost any reasonable architecture and scalable algorithm, filling memory will provide enough on-node work to amortize communication costs. A benchmark that is run only in the full-memory configuration will be representative for the throughput-oriented users, but says nothing for those interested in turn-around time. When combined with peak performance, the dynamic range quantifies the ability to appease both user bases. Note that since some applications have different phases with each characteristic, it is not sufficient to build different machines for each type of workload. A benchmark that rewards dynamic range will encourage versatile machines rather than those configured with just enough memory to balance communication latency for the benchmark (e.g., replace expensive network with a cheap network and add an extra DIMM on each socket to achieve the same benchmark score).

Computation of the diagonal has not been optimized and is not a timed part of the benchmark. Although embarrassingly parallel, its cost is similar to that of the entire FMG solve. Optimizing this operation is relatively uninteresting, therefore we pre-compute it. While this may be justified for linear problems, especially those that must be solved multiple times, the cost is very meaningful for nonlinear multigrid solvers based on finite-element methods.

The computational characteristics of HPGMG-FE can be adjusted significantly while retaining the same overall structure.
Variable coefficients with anisotropy can be defined via coefficients stored at quadrature points. In general, this will be a $3\times 3$ symmetric tensor (6 unique entries), but structure such as $\alpha\mathbf 1 + \mathbf w \otimes \mathbf w$, where $\mathbf w$ is a 3-component vector, can arise from Newton-linearization of nonlinear constitutive relations (a common source of such linear problems). Since the coefficients are defined at quadrature points, they have no opportunity for reuse. Some balance of reusable memory streams and non-reusable memory streams is common in applications, especially those using assembled sparse matrices. Some architectures have used simplified cache eviction policies (e.g., FIFO on BG/L and BG/P) which perform poorly because the reusable cache lines are flushed out by cache lines that will not be reused. A memory access pattern containing both reusable and non-reusable streams will favor more versatile cache and prefetch architectures.

For the scalar problem, the metric terms can be absorbed into a symmetric $3\times 3$ tensor at each quadrature point, removing the need to compute the transformations on the fly. If general coefficients are already being used, the metric terms simply transform the existing coefficient. If not, the asymptotic memory per element increases from $3\cdot 8 = 24$ values to $6\cdot 27=162$ values.

The scalar problem in current implementations can be replaced by a vector-valued problem such as elasticity. If mapped grids are used for both, this does not significantly change the arithmetic intensity, but does increase cache utilization and reduces the relative cost of coordinate transformations. General anisotropy for elasticity involves a rank-4 tensor that contains 21 unique entries (due to symmetry). If the tensor coefficient arises from Newton-linearization of a strain-hardening material, the tensor can be represented using 7 unique entries. Trading on-the-fly computation of the coordinate transformations for stored coefficients is thus significantly higher overhead unless the linear problem already had general anisotropy. Since much of the work in all implementations is in small tensor contractions, only one kernel (operating on three fields) must be optimized. However, we believe that the current implementation is well-optimized despite using the same source code for both scalar and 3-field evaluation. Elasticity may impact convergence rate relative to Poisson, though it can be maintained for closely-related vector-valued problems, so we don't see this as a major concern.

The current implementation uses Gauss-Legendre quadrature, which is 5th-order accurate. It can be replaced with Gauss-Legendre-Lobatto (GLL) quadrature, which is only 3rd-order accurate, but which requires one third the work for tensor contraction (quadrature points are collocated with dofs). Spectral element methods use the less accurate GLL quadrature as a form of lumping to produce a diagonal mass matrix and to reduce the necessary flops. The less accurate quadrature often causes problems with aliasing, especially in the case of variable coefficients or highly-distorted grids. GLL quadrature would reduce the compute requirements (thus increasing stress on memory bandwidth) and reduce the need for intermediate cache during tensor contraction.
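To illustrate the tensor-product (sum-factorization) structure referred to throughout: interpolating the $3\times3\times3$ nodal values of a $Q_2$ element to a $3\times3\times3$ quadrature grid can be done one direction at a time with three small contractions instead of one dense $27\times27$ application. The sketch below uses random placeholder values for the 1D basis matrix and the element data purely to show the mechanics; it is not code extracted from HPGMG-FE.

```python
import numpy as np

nq, nb = 3, 3                    # quadrature points and Q2 basis functions per direction
B = np.random.rand(nq, nb)       # placeholder 1D basis matrix, B[q, i] = phi_i evaluated at point q
u = np.random.rand(nb, nb, nb)   # nodal values on one element as a 3x3x3 tensor

# sum factorization: one small contraction per direction, O(3 * nq * nb^3) operations
v = np.einsum('qi,ijk->qjk', B, u)
v = np.einsum('rj,qjk->qrk', B, v)
v = np.einsum('sk,qrk->qrs', B, v)

# reference: the equivalent dense operator (B kron B kron B) applied at once, O(nq^3 * nb^3) operations
dense = np.kron(np.kron(B, B), B) @ u.ravel()
assert np.allclose(v.ravel(), dense)
```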
CommonCrawl
Abstract: This paper reviews the Dalitz plot analysis for the decays $B^0(\bar B^0) \to \rho \pi \to \pi^+ \pi^- \pi^0$. We discuss what can be learned about the ten parameters in this analysis from untagged and from tagged time-integrated data. We find that, with the important exception of the interesting CP violating quantity $\alpha$, the parameters can be determined from this data sample -- and, hence, they can be measured at CLEO as well as at the asymmetric $B$ factories. This suggests that the extraction of $\alpha$ from the time-dependent data sample can be accomplished with a smaller data sample (and, therefore, sooner) than would be required if all ten parameters were to be obtained from that time-dependent data sample alone. We also explore bounds on the shift of the true angle $\alpha$ from the angle measured from charged $\rho$ final states alone. These may be obtained prior to measurements of the parameters describing the neutral $\rho$ channel, which are expected to be small.
CommonCrawl
June 2015, 462 pages, hardcover, 16.5 x 23.5 cm. The book gives an elementary and comprehensive introduction to Spin Geometry, with particular emphasis on the Dirac operator which plays a fundamental role in differential geometry and mathematical physics. After a self-contained presentation of the basic algebraic, geometrical, analytical and topological ingredients, a systematic study of the spectral properties of the Dirac operator on compact spin manifolds is carried out. The classical estimates on eigenvalues and their limiting cases are discussed next, highlighting the subtle interplay of spinors and special geometric structures. Several applications of these ideas are presented, including spinorial proofs of the Positive Mass Theorem or the classification of positive Kähler–Einstein contact manifolds. Representation theory is used to explicitly compute the Dirac spectrum of compact symmetric spaces. The special features of the book include a unified treatment of Spin$^\mathrm c$ and conformal spin geometry (with special emphasis on the conformal covariance of the Dirac operator), an overview with proofs of the theory of elliptic differential operators on compact manifolds based on pseudodifferential calculus, a spinorial characterization of special geometries, and a self-contained presentation of the representation-theoretical tools needed in order to apprehend spinors. This book will help advanced graduate students and researchers to get more familiar with this beautiful, though not sufficiently known, domain of mathematics with great relevance to both theoretical physics and geometry.
CommonCrawl
Is there a way to compute the DV01 of a bond future from its underlying cheapest-to-deliver bond's DV01? For example, is this correct: DV01 future = DV01 CTD / conversion factor? Or is there another formula that would give the future's DV01?

Suppose the CTD DV01 is 10 cents. If the CTD yield falls by 1 bp, then its price goes up by 10 cents. The price of the future (if the net basis remains at 0) will increase by: $$\text{DV01}_{\text{future}} = \frac{\text{DV01}_{\text{CTD}} \times (1+\text{repo}\times\text{day count fraction})}{\text{conversion factor}}$$ The repo term is a small adjustment.
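In code, the approximation quoted above is a one-liner; the numbers in the example are made up purely for illustration.

```python
def futures_dv01(ctd_dv01, conversion_factor, repo_rate=0.0, day_count_frac=0.0):
    """DV01 of the future from the CTD DV01, assuming zero net basis:
    DV01_future = DV01_CTD * (1 + repo * day_count_frac) / conversion_factor."""
    return ctd_dv01 * (1.0 + repo_rate * day_count_frac) / conversion_factor

# e.g. CTD DV01 of 10 cents, conversion factor 0.93, 2% repo over a quarter of a year
print(futures_dv01(0.10, 0.93, repo_rate=0.02, day_count_frac=0.25))   # about 0.108
```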
CommonCrawl
The wide channels feature combines two adjacent channels to form a new, wider channel to facilitate high-data-rate transmissions in multiple-input- multiple-output (MIMO)-based IEEE 802.11n networks. Using a wider channel can exacerbate interference effects. Furthermore, contrary to what has been reported by prior studies, we find that wide channels do not always provide benefits in isolation (i.e., one link without interference) and can even degrade performance. We conduct an in-depth, experimental study to understand the implications of wide channels on throughput performance. Based on our measurements, we design an auto-configuration framework called ACORN for enterprise 802.11n WLANs. ACORN integrates the functions of user association and channel allocation since our study reveals that they are tightly coupled when wide channels are used. We show that the channel allocation problem with the constraints of wide channels is NP-complete. Thus, ACORN uses an algorithm that provides a worst-case approximation ratio of $O(1/\Delta+1)$, with $\Delta$ being the maximum node degree in the network. We implement ACORN on our 802.11n testbed. Our evaluations show that ACORN: 1) outperforms previous approaches that are agnostic to wide channels constraints; it provides per-AP throughput gains ranging from 1.5$\times$ to 6$\times$; and 2) in practice, its channel allocation module achieves an approximation ratio much better than the theoretically predicted $O(1/\Delta+1)$. © 1993-2012 IEEE.
CommonCrawl
We present the Green-Schwarz action for Type IIA strings on $AdS_4\times CP^3$. The action is based on a $\mathbb{Z}_4$ automorphism of the coset $OSp(4|6)/(SO(1,3)\times SU(3)\times U(1))$. The equations of motion admit a representation in terms of a Lax connection, showing that the system is classically integrable.
CommonCrawl
A large part of the world economy depends on oil, which is why research into new methods for finding and extracting oil is still active. Profits of oil companies depend in part on how efficiently they can drill for oil. The International Crude Petroleum Consortium (ICPC) hopes that extensive computer simulations will make it easier to determine how to drill oil wells in the best possible way. Drilling oil wells optimally is getting harder each day – the newly discovered oil deposits often do not form a single body, but are split into many parts. The ICPC is currently concerned with stratified deposits, as illustrated in Figure 1. Figure 1: Oil layers buried in the earth. This figure corresponds to Sample Input 1. To simplify its analysis, the ICPC considers only the 2-dimensional case, where oil deposits are modeled as horizontal line segments parallel to the earth's surface. The ICPC wants to know how to place a single oil well to extract the maximum amount of oil. The oil well is drilled from the surface along a straight line and can extract oil from all deposits that it intersects on its way down, even if the intersection is at an endpoint of a deposit. One such well is shown as a dashed line in Figure 1, hitting three deposits. In this simple model the amount of oil contained in a deposit is equal to the width of the deposit. Can you help the ICPC determine the maximum amount of oil that can be extracted by a single well? The first line of input contains a single integer $n$ ($1 \leq n \leq 2\, 000$), which is the number of oil deposits. This is followed by $n$ lines, each describing a single deposit. These lines contain three integers $x_0$, $x_1$, and $y$ giving the deposit's position as the line segment with endpoints $(x_0,y)$ and $(x_1,y)$. These numbers satisfy $|x_0|, |x_1| \leq 10^6$ and $1 \le y \le 10^6$. No two deposits will intersect, not even at a point. Display the maximum amount of oil that can be extracted by a single oil well.
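A basic building block for this problem (shown below as an illustration, not a full solution) is evaluating a single candidate well: given a non-horizontal line through two points, sum the widths of the deposits it meets, using exact rational arithmetic to avoid precision issues. A typical search then tries many candidate wells, for example lines through pairs of deposit endpoints, but that search is not included here.

```python
from fractions import Fraction

def collected_oil(p, q, deposits):
    """Total width of deposits hit by the (non-horizontal) straight well through p and q.
    deposits: list of (x0, x1, y) horizontal segments; endpoint hits count."""
    (px, py), (qx, qy) = p, q
    assert py != qy, "horizontal candidate wells are not handled by this helper"
    total = 0
    for x0, x1, y in deposits:
        # exact x-coordinate where the line through p and q crosses height y
        x = Fraction(px) + Fraction(qx - px, qy - py) * (y - py)
        if min(x0, x1) <= x <= max(x0, x1):
            total += abs(x1 - x0)
    return total

# a well through (1, 1) and (3, 5) hits the first deposit (width 4) but misses the second
print(collected_oil((1, 1), (3, 5), [(0, 4, 3), (10, 12, 4)]))   # 4
```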
CommonCrawl
The main goal of this workshop is to bring together a group of researchers with common interests from the different countries participating in the COST action "Connecting insights in fundamental physics", allowing them to share experiences and to discuss possible collaborations in the field of Higgs and Flavour Physics, including Neutrino Physics. This workshop will also provide an opportunity to present the GAMBIT collaboration and some of the existing modules for different calculations. The afternoon of the 17th of January will be dedicated to the code GAMBIT, including a tutorial. A seminar for the general public will take place late in one of the afternoons. The registration fee is 200 euros (100 euros for students and postdocs) and 70 euros for accompanying persons (for more details on registration see here).

The EW vacuum lifetime is extremely sensitive to unknown (although necessarily present) high energy new physics. The latter can enormously lower the EW vacuum lifetime, posing a serious problem for the stability of our universe. After presenting the general issue of vacuum stability, and the reasons why new physics can be so highly destabilizing, I will discuss symmetries, physical models, as well as model-independent effects, that can provide the stabilization mechanism protecting our universe from decaying.

Scenarios with a dark sector involving a light gauge boson have attracted considerable attention recently. The kinetic mixing of such a "dark photon" with the standard photon has to satisfy very stringent experimental bounds, which are violated in the simplest case where kinetic mixing is generated at the one-loop level. I will present scenarios where a sufficiently small kinetic mixing can be obtained.

After discussing the general form of a Majorana neutrino mass matrix I will introduce a master parametrization for the Yukawa matrices in agreement with neutrino oscillation data. This parametrization extends previous results in the literature and can be used for any model that induces Majorana neutrino masses with the seesaw mechanism (with the only exception of the type-II seesaw). The application of the master parametrization will be illustrated in several example models, with special focus on their lepton flavor violating phenomenology.

Neutrinoless double beta decay can significantly help to shed light on the issue of non-zero neutrino mass, as observation of this lepton number violating process would imply neutrinos are Majorana particles. However, the underlying interaction does not have to be as simple as the standard neutrino mass mechanism. The entire variety of neutrinoless double beta decay mechanisms can be described within an effective framework. In this talk I will focus on a theoretical description of short-range effective contributions to neutrinoless double beta decay, which are equivalent to dimension-9 effective operators, as well as a novel mode with a Majoron-like scalar particle emitted in the decay.

I will first review the kind of constraints and requirements coming from leptogenesis on flavour models and then focus on those I consider more attractive, showing their predictions and how we can test them. In particular I will show the important role played by unknowns in the leptonic mixing matrix in combination with the information from absolute neutrino mass scale experiments: on the sum of neutrino masses from cosmological observations and on the low neutrino mass matrix ee entry from neutrinoless double beta decay experiments.
I discuss how Yukawa alignment in Multi-Higgs models can arise from flavour symmetries. Contrary to common perception, we show that the current Higgs data does not eliminate the possibility of a sequential fourth generation that gets its mass through the same Higgs mechanism as the first three generations. The inability to fix the sign of the bottom-quark Yukawa coupling from the available data plays a crucial role in accommodating a chiral fourth generation which is consistent with the bounds on the Higgs signal strengths. We show that the effects of such a fourth generation can remain completely hidden not only in the production of the Higgs boson through gluon fusion but also in its subsequent decays into two photons and into a Z boson plus a photon. This, however, is feasible only if the scalar sector of the standard model is extended. We also provide a practical example illustrating how our general prescription can be embedded in a realistic model. We investigate the muon anomalous magnetic moment, the μ→eγ branching ratio and the μ→e conversion rate in the nuclei from the point of view of the planned μ→e conversion experiments. In the MSSM these processes are strongly correlated through tanβ enhanced contributions. We demonstrate how in the Minimal R-symmetric Supersymmetric Standard Model the μ→eγ branching ratio and the μ→e conversion rate in the nuclei give distinct bounds on the parameter space. We also consider the supersymmetric contributions to the muon anomalous magnetic moment, generated by a subset of topologies contributing to the LFV observables. We briefly discuss the generic implementation of the aforementioned observables into the FlexibleSUSY spectrum-generator generator. Looking at the current μ→eγ searches, the analysis points to the need for constructing a dedicated μ→e conversion experiment to cover as large a parameter space as possible in the non-minimal supersymmetric models. In a model containing two scalar doublets and a scalar singlet with a specific discrete symmetry, spontaneous symmetry breaking yields Standard Model-like phenomenology, as well as a hidden scalar sector which provides a viable dark matter candidate. CP violation in the scalar sector occurs exclusively in the hidden sector, and possible experimental signatures of this CP violation will be presented. Compactifications of heterotic M-theory are shown to provide solutions to the weak-scale hierarchy problem as a consequence of warped large extra dimensions. They allow a description that is reminiscent of the so-called continuous clockwork mechanism. The models constructed here cover a new region of clockwork parameter space and exhibit unexplored spectra and couplings of Kaluza-Klein modes. Relations to previously proposed models as well as roles played by vector multiplets and the universal hypermultiplet in 5D-supergravity are also discussed. We discuss the appearance of various topological defects in SO(10) grand unification and how some may survive cosmic inflation. Dark matter candidates are briefly discussed. The origin of masses and mixings of the three families of fermions remains one of the main problems of the Standard Model. Flavor symmetries provide a compelling way to explain these otherwise arbitrary parameters in the Yukawa sector. In Supersymmetric extensions of the Standard Model, where the mediation of SUSY breaking occurs at scales larger than the breaking of flavor, this symmetry must be respected not only by the Yukawas of the superpotential, but by the soft-breaking masses and trilinear terms as well.
In this case, even starting with completely flavor-blind soft-breaking in the full theory at high scales, the low-energy sfermion mass matrices and trilinear terms of the effective theory, obtained upon integrating out the heavy mediator fields, are strongly non-universal. We explore the phenomenology of several SUSY flavor models after the latest LHC searches for new physics. I am going to discuss properties of heavy Higgs bosons in the alignment limit of a generic 2HDM. This model constitutes a simple and attractive extension of the SM that is consistent with the observation of the SM-like Higgs boson and precision electroweak observables, while providing potential new sources of CP-violation. The Inert Doublet Model is an intriguing extension of the Standard Model that provides a dark matter candidate and is yet only marginally constrained by current collider data. I will discuss prospects for investigating this model at the LHC as well as at future e+e- colliders, and present the most recent constraints on the model's parameter space. We derive the mass exclusion limits for the hypothetical vector resonances of a strongly interacting extension of the Standard Model using the most recent upper bounds on the cross sections for various resonance production processes. The SU(2)_{L+R} triplet of the vector resonances under consideration is embedded into the effective Lagrangian based on the non-linear sigma model with the 125-GeV SU(2)_{L+R} scalar singlet. The Standard Model fields can interact through non-renormalizable operators, the simplest of which is the one mentioned by Weinberg, with dimension 5. The list of all such operators up to dimension 6 is known and, for the last few years, so is the number of all effective interactions up to dimension 15. However, counting operators and listing them explicitly are different things. In this presentation I will talk about the challenges associated with writing down the non-renormalizable interactions of Standard Model fields beyond dimension 6. A viable Two Higgs Doublet Model with CP violation of spontaneous origin is presented. In this model, based on a generalised Branco-Grimus-Lavoura model with a flavoured $Z_2$ symmetry, the Lagrangian respects CP invariance, while the vacuum has a CP violating phase, which is able to generate a complex CKM matrix. Scalar mediated flavour changing neutral couplings are analysed, stressing the connection between the generation of a complex CKM matrix and the unavoidable presence of scalar FCNC. The scalar sector is also presented in detail, showing that the new scalars are necessarily lighter than 1 TeV. Finally, a phenomenological analysis of the model including the most relevant constraints is discussed, exploring, in particular, definite implications for the observation of New Physics signals. In view of several hints of lepton non-universality, observed in B meson decays, we find that with a minimal modification to the SM in terms of an effective theory, the charged as well as neutral current anomalies can well be explained by introducing just two new parameters. This class of operators predicts some interesting signatures both in the context of B decays as well as high-energy collisions. In the context of the minimal type-I seesaw model, we study the implications of considering maximally-restricted texture-zero patterns in the lepton Yukawa coupling and mass matrices.
All possible patterns are analysed in the light of the most recent neutrino oscillation data and, in case of compatibility, predictions for leptonic CP violation, the effective mass relevant for neutrinoless double-beta decay, and the baryon asymmetry of the Universe are obtained. A minimal extension of the Standard Model with three positive chirality neutrinos is devised, under the Seesaw Type I framework. A novel parametrization is exploited, which makes it possible to control all deviations from unitarity through a single $3 \times 3$ matrix, denoted by $X$, that also connects the mixing of the light and heavy neutrinos in the context of the type I seesaw. This parametrization is adequate for a general and exact treatment, independent of the scale of the right handed neutrino mass term. Examples with sizeable deviations from unitarity and heavy neutrinos with not very large masses are presented. The problem of possibly large one-loop mass corrections to the light neutrino masses is taken into account. The recent intriguing hints for new physics in semi-leptonic B decays point towards lepton flavor universality violating extensions of the SM. Prime candidates for such new particles are leptoquarks which can provide the desired effects. After reviewing the current experimental and theoretical situation, I discuss the phenomenology of the SU(2)-singlet vector leptoquark which was proposed already a long time ago in the context of the famous Pati-Salam model. A summary of the present status of oscillation parameters, current improvements and a discussion of future plans will be presented. Some details about measurement methodology and experimental limitations will also be given. We have calculated the W-loop SM contribution to the amplitude of the decay H → Z + γ and also for H → γ + γ in the Rξ-gauge using dimensional regularization (DimReg) and in the unitary gauge through the dispersion method. We show that the results of the calculations with DimReg and the dispersion method, adopting the boundary condition at the limit MW → 0 defined by the Goldstone boson equivalence theorem (GBET), completely coincide. This implies that DimReg is compatible with the dispersion method obeying the GBET. Thus, our results also agree with the "classical" ones. The advantage of the applied dispersion method is that we work with finite quantities and no regularization is required. The second data-taking period of the LHC has just ended, having delivered about 150/fb worth of pp collisions to ATLAS/CMS at the record energy of 13 TeV, as well as collisions involving heavy ions. This exceeds by a factor of 5 the amount of data recorded during the first data-taking period. With only part of these data having been analyzed, a good number of results have already been extracted, which resulted in considerable jumps in sensitivity. I will present a selection of recent results obtained with CMS. We consider an anomaly-free extension of the Standard Model gauge group G_SM by an abelian group to G_SM ⊗ U(1)_Z. The condition of anomaly cancellation is known to fix the Z-charges of all the particles but two. We fix one remaining charge by allowing for all possible Yukawa interactions between the known left-handed neutrinos and new right-handed ones that obtain their masses through interaction with a new scalar field whose vacuum expectation value breaks the symmetry spontaneously. We discuss some of the possible consequences of the model and ways of constraining the parameter space. With the first Higgs doublet established, a second doublet is rather likely.
We give several arguments why the usual Z2 symmetry that removes extra Yukawa couplings should be discarded. We then show that this provides a rather robust mechanism for electroweak baryogenesis, by the combined presence of lambda_t ~ 1 and Im(rho_tt) ~ 1, where rho_tt is the extra top Yukawa, while rho_tc could provide a backup mechanism. We show that the prerequisite of Higgs quartic couplings, eta_i, also of O(1), can relatively easily give rise to the observed approximate alignment, namely that the observed h(125) appears so close to the SM Higgs. As a most likely next New Physics, extra Yukawas, whether flavor changing or conserving, are numerous, but they are quite well hidden by flavor hierarchies, alignment, and heavy Higgs bosons at 500 GeV or higher. The remainder of the talk discusses where and how to unveil these couplings. I discuss the production of a single top quark in the t-channel and its subsequent decay, studied at NLO accuracy in QCD and augmented with the relevant dimension-6 effective operators from the Standard Model Effective Theory. I show results for various kinematic and angular distributions for the LHC at 13 TeV, in order to assess the sensitivity to these operators, both with and without the top quark narrow width approximation. I also show the sensitivity to a possible extra source of CP violation due to the weak dipole operator. We address the B-physics anomalies within a two scalar leptoquark model. The low-energy flavor structure of our set-up originates from two SU(5) operators that relate Yukawa couplings of the two leptoquarks. The proposed scenario has a UV completion, can accommodate all measured lepton flavor universality ratios in B-meson decays, is consistent with related flavor observables, and is compatible with direct searches at the LHC. We provide prospects for future discoveries of the two light leptoquarks at the LHC and predict several yet-to-be-measured flavor observables. The current flavour anomalies seem to indicate lepton flavour non-universality. To explain all the anomalies simultaneously, it is believed that exotic new physics beyond the Standard Model is needed, such as the existence of leptoquarks. I shall show that a simultaneous explanation of all current flavour anomalies can be obtained in three-Higgs-doublet models. The prediction of this scenario is the existence of GeV-scale right-handed neutrinos. Thus the current anomalies might be connected to extra Higgses and to low-scale leptogenesis. The most powerful approach for assessing the level of agreement between a new theory and experimental results is to perform a "global fit" -- a comprehensive and statistically rigorous comparison of theory predictions against all the available data. In this talk I will give an introduction to BSM global fits and the software tool GAMBIT, an open-source package for performing large-scale global fits. The presentation will be followed by a demonstration and tutorial on how to use GAMBIT. The occurrence of flavour-violating decays of hadrons and leptons into light axion-like particles is a generic consequence of spontaneously-broken global U(1) symmetries with flavour non-universal charges, and a powerful probe of such scenarios. A well-motivated example is the flavour-violating QCD axion arising in the context of a Froggatt-Nielsen model of fermion masses and mixing. I will discuss both the latter specific case and the more generic setup with a focus on their phenomenology at flavour experiments.
I review scenarios in which the particles that account for the dark matter in the universe interact only through their couplings with the Higgs sector of the theory, the so-called Higgs-portal models. I summarize and update the present constraints and future prospects from the collider physics perspective and compare them to what can be obtained from the cosmological relic abundance as well as from direct and indirect dark matter detection in astroparticle physics experiments.
Suppose $K$ is an infinite field of characteristic $0$. Let $C$ be a $K$-irreducible affine algebraic $K$-curve and suppose that $P_1,\ldots, P_n$ are non-singular $K$-points of $C$. Can one always find a plane curve $D$ and a birational map (defined over $K$) $f:C\dashrightarrow D$ such that for all $1\leq i \leq n$, $f(P_i)$ is defined and is a non-singular $K$-point of $D$?
Online services are increasingly intelligent. They evolve intelligently through A/B testing and experimentation, employ artificial intelligence in their core functionality using machine learning, and seamlessly engage human intelligence by connecting people in a low-friction manner. All of this has resulted in incredibly engaging experiences -- but not particularly productive ones. As more and more of people's most important tasks move online we need to think carefully about the underlying influence online services have on people's ability to attend to what matters to them. There is an opportunity to use intelligence for this to do more than just not distract people and actually start helping people attend to what matters even better than they would otherwise. This presentation explores the ways we might make it as compelling and easy to start an important task as it is to check social media. The goals of learning from user data and preserving user privacy are often considered to be in conflict. This presentation will demonstrate that there are contexts when provable privacy guarantees can be an enabler for better web search and data mining (WSDM), and can empower researchers hoping to change the world by mining sensitive user data. The presentation starts by motivating the rigorous statistical data privacy definition that is particularly suitable for today's world of big data, differential privacy. It will then demonstrate how to achieve differential privacy for WSDM tasks when the data collector is trusted by the users. Using Chrome's deployment of RAPPOR as a case study, it will be shown that achieving differential privacy while preserving utility is feasible even when the data collector is not trusted. The presentation concludes with open problems and challenges for the WSDM community. Interactive systems such as search engines or recommender systems are increasingly moving away from single-turn exchanges with users. Instead, series of exchanges between the user and the system are becoming mainstream, especially when users have complex needs or when the system struggles to understand the user's intent. Standard machine learning has helped us a lot in the single-turn paradigm, where we use it to predict: intent, relevance, user satisfaction, etc. When we think of search or recommendation as a series of exchanges, we need to turn to bandit algorithms to determine which action the system should take next, or to reinforcement learning to determine not just the next action but also to plan future actions and estimate their potential pay-off. The use of reinforcement learning for search and recommendations comes with a number of challenges, because of the very large action spaces, the large number of potential contexts, and noisy feedback signals characteristic for this domain. This presentation will survey some recent success stories of reinforcement learning for search, recommendation, and conversations; and will identify promising future research directions for reinforcement learning for search and recommendation. Recent studies show that by combining network topology and node attributes, we can better understand community structures in complex networks. However, existing algorithms do not explore "contextually" similar node attribute values, and therefore may miss communities defined with abstract concepts. 
We propose a community detection and characterization algorithm that incorporates the contextual information of node attributes described by multiple domain-specific hierarchical concept graphs. The core problem is to find the context that can best summarize the nodes in communities, while also discovering communities aligned with the context summarizing communities. We formulate the two intertwined problems, optimal community-context computation, and community discovery, with a coordinate-ascent based algorithm that iteratively updates the nodes' community label assignment with a community-context and computes the best context summarizing nodes of each community. Our unique contributions include (1) a composite metric on Informativeness and Purity criteria in searching for the best context summarizing nodes of a community; (2) a node similarity measure that incorporates the context-level similarity on multiple node attributes; and (3) an integrated algorithm that drives community structure discovery by appropriately weighing edges. Experimental results on public datasets show nearly 20 percent improvement on F-measure and Jaccard for discovering underlying community structure over the current state-of-the-art of community detection methods. Community structure characterization was also accurate to find appropriate community types for four datasets. Representation learning models map data instances into a low-dimensional vector space, thus facilitating the deployment of subsequent models such as classification and clustering models, or the implementation of downstream applications such as recommendation and anomaly detection. However, the outcome of representation learning is difficult to be directly understood by users, since each dimension of the latent space may not have any specific meaning. Understanding representation learning could be beneficial to many applications. For example, in recommender systems, knowing why a user instance is mapped to a certain position in the latent space may unveil the user's interests and profile. In this paper, we propose an interpretation framework to understand and describe how representation vectors distribute in the latent space. Specifically, we design a coding scheme to transform representation instances into spatial codes to indicate their locations in the latent space. Following that, a multimodal autoencoder is built for generating the description of a representation instance given its spatial codes. The coding scheme enables indication of position with different granularity. The incorporation of autoencoder makes the framework capable of dealing with different types of data. Several metrics are designed to evaluate interpretation results. Experiments under various application scenarios and different representation learning models are conducted to demonstrate the flexibility and effectiveness of the proposed framework. Identifying and recommending potential new customers for local businesses are crucial to the survival and success of local businesses. A key component to identifying the right customers is to understand the decision-making process of choosing a business over the others. However, modeling this process is an extremely challenging task as a decision is influenced by multiple factors. These factors include but are not limited to an individual's taste or preference, the location accessibility of a business, and the reputation of a business from social media. 
Most of the recommender systems lack the power to integrate multiple factors together and are hardly extensible to accommodate new incoming factors. In this paper, we introduce a unified framework, CORALS, which considers the personal preferences of different customers, the geographical influence, and the reputation of local businesses in the customer recommendation task. To evaluate the proposed model, we conduct a series of experiments to extensively compare with 12 state-of-the-art methods using two real-world datasets. The results demonstrate that CORALS outperforms all these baselines by a significant margin in most scenarios. In addition to identifying potential new customers, we also break down the analysis for different types of businesses to evaluate the impact of various factors that may affect customers' decisions. This information, in return, provides a great resource for local businesses to adjust their advertising strategies and business services to attract more prospective customers. Knowledge graph embedding aims to learn distributed representations for entities and relations, and is proven to be effective in many applications. Crossover interactions -- bi-directional effects between entities and relations --- help select related information when predicting a new triple, but haven't been formally discussed before. In this paper, we propose CrossE, a novel knowledge graph embedding which explicitly simulates crossover interactions. It not only learns one general embedding for each entity and relation as most previous methods do, but also generates multiple triple specific embeddings for both of them, named interaction embeddings. We evaluate embeddings on typical link prediction tasks and find that CrossE achieves state-of-the-art results on complex and more challenging datasets. Furthermore, we evaluate embeddings from a new perspective -- giving explanations for predicted triples, which is important for real applications. In this work, an explanation for a triple is regarded as a reliable closed-path between the head and the tail entity. Compared to other baselines, we show experimentally that CrossE, benefiting from interaction embeddings, is more capable of generating reliable explanations to support its predictions. Question answering over knowledge graph (QA-KG) aims to use facts in the knowledge graph (KG) to answer natural language questions. It helps end users more efficiently and more easily access the substantial and valuable knowledge in the KG, without knowing its data structures. QA-KG is a nontrivial problem since capturing the semantic meaning of natural language is difficult for a machine. Meanwhile, many knowledge graph embedding methods have been proposed. The key idea is to represent each predicate/entity as a low-dimensional vector, such that the relation information in the KG could be preserved. The learned vectors could benefit various applications such as KG completion and recommender systems. In this paper, we explore to use them to handle the QA-KG problem. However, this remains a challenging task since a predicate could be expressed in different ways in natural language questions. Also, the ambiguity of entity names and partial names makes the number of possible answers large. To bridge the gap, we propose an effective Knowledge Embedding based Question Answering (KEQA) framework. 
We focus on answering the most common types of questions, i.e., simple questions, in which each question could be answered by the machine straightforwardly if its single head entity and single predicate are correctly identified. To answer a simple question, instead of inferring its head entity and predicate directly, KEQA targets at jointly recovering the question's head entity, predicate, and tail entity representations in the KG embedding spaces. Based on a carefully-designed joint distance metric, the three learned vectors' closest fact in the KG is returned as the answer. Experiments on a widely-adopted benchmark demonstrate that the proposed KEQA outperforms the state-of-the-art QA-KG methods. Mobile notifications have become a major communication channel for social networking services to keep users informed and engaged. As more mobile applications push notifications to users, they constantly face decisions on what to send, when and how. A lack of research and methodology commonly leads to heuristic decision making. Many notifications arrive at an inappropriate moment or introduce too many interruptions, failing to provide value to users and spurring users' complaints. In this paper we explore unique features of interactions between mobile notifications and user engagement. We propose a state transition framework to quantitatively evaluate the effectiveness of notifications. Within this framework, we develop a survival model for badging notifications assuming a log-linear structure and a Weibull distribution. Our results show that this model achieves more flexibility for applications and superior prediction accuracy than a logistic regression model. In particular, we provide an online use case on notification delivery time optimization to show how we make better decisions, drive more user engagement, and provide more value to users. Random walks can provide a powerful tool for harvesting the rich network of interactions captured within item-based models for top-n recommendation. They can exploit indirect relations between the items, mitigate the effects of sparsity, ensure wider itemspace coverage, as well as increase the diversity of recommendation lists. Their potential however, is hindered by the tendency of the walks to rapidly concentrate towards the central nodes of the graph, thereby significantly restricting the range of K-step distributions that can be exploited for personalized recommendations. In this work we introduce RecWalk; a novel random walk-based method that leverages the spectral properties of nearly uncoupled Markov chains to provably lift this limitation and prolong the influence of users' past preferences on the successive steps of the walk--allowing the walker to explore the underlying network more fruitfully. A comprehensive set of experiments on real-world datasets verify the theoretically predicted properties of the proposed approach and indicate that they are directly linked to significant improvements in top-n recommendation accuracy. They also highlight RecWalk's potential in providing a framework for boosting the performance of item-based models. RecWalk achieves state-of-the-art top-n recommendation quality outperforming several competing approaches, including recently proposed methods that rely on deep neural networks. Newsworthy events are broadcast through multiple mediums and prompt the crowds to produce comments on social media. 
In this paper, we propose to leverage on this behavioral dynamics to estimate the most relevant time periods for an event (i.e., query). Recent advances have shown how to improve the estimation of the temporal relevance of such topics. In this approach, we build on two major novelties. First, we mine temporal evidences from hundreds of external sources into topic-based external collections to improve the robustness of the detection of relevant time periods. Second, we propose a formal retrieval model that generalizes the use of the temporal dimension across different aspects of the retrieval process. In particular, we show that temporal evidence of external collections can be used to (i) infer a topic's temporal relevance, (ii) select the query expansion terms, and (iii) re-rank the final results for improved precision. Experiments with TREC Microblog collections show that the proposed time-aware retrieval model makes an effective and extensive use of the temporal dimension to improve search results over the most recent temporal models. Interestingly, we observe a strong correlation between precision and the temporal distribution of retrieved and relevant documents. Social connections are known to be helpful for modeling users' potential preferences and improving the performance of recommender systems. However, in social-aware recommendations, there are two issues which influence the inference of users' preferences, and haven't been well-studied in most existing methods: First, the preferences of a user may only partially match that of his friends in certain aspects, especially when considering a user with diverse interests. Second, for an individual, the influence strength of his friends might be different, as not all friends are equally helpful for modeling his preferences in the system. To address the above issues, in this paper, we propose a novel Social Attentional Memory Network (SAMN) for social-aware recommendation. Specifically, we first design an attention-based memory module to learn user-friend relation vectors, which can capture the varying aspect attentions that a user share with his different friends. Then we build a friend-level attention component to adaptively select informative friends for user modeling. The two components are fused together to mutually enhance each other and lead to a finer extended model. Experimental results on three publicly available datasets show that the proposed SAMN model consistently and significantly outperforms the state-of-the-art recommendation methods. Furthermore, qualitative studies have been made to explore what the proposed attention-based memory module and friend-level attention have learnt, which provide insights into the model's learning process. The overturning of the Internet Privacy Rules by the Federal Communications Commissions (FCC) in late March 2017 allows Internet Service Providers (ISPs) to collect, share and sell their customers' Web browsing data without their consent. With third-party trackers embedded on Web pages, this new rule has put user privacy under more risk. The need arises for users on their own to protect their Web browsing history from any potential adversaries. Although some available solutions such as Tor, VPN, and HTTPS can help users conceal their online activities, their use can also significantly hamper personalized online services, i.e., degraded utility. 
In this paper, we design an effective Web browsing history anonymization scheme, PBooster, aiming to protect users' privacy while retaining the utility of their Web browsing history. The proposed model pollutes users' Web browsing history by automatically inferring how many and what links should be added to the history while addressing the utility-privacy trade-off challenge. We conduct experiments to validate the quality of the manipulated Web browsing history and examine the robustness of the proposed approach for user privacy protection. The increasing role of recommender systems in many aspects of society makes it essential to consider how such systems may impact social good. Various modifications to recommendation algorithms have been proposed to improve their performance for specific socially relevant measures. However, previous proposals are often not easily adapted to different measures, and they generally require the ability to modify either existing system inputs, the system's algorithm, or the system's outputs. As an alternative, in this paper we introduce the idea of improving the social desirability of recommender system outputs by adding more data to the input, an approach we view as providing 'antidote' data to the system. We formalize the antidote data problem, and develop optimization-based solutions. We take as our model system the matrix factorization approach to recommendation, and we propose a set of measures to capture the polarization or fairness of recommendations. We then show how to generate antidote data for each measure, pointing out a number of computational efficiencies, and discuss the impact on overall system accuracy. Our experiments show that a modest budget for antidote data can lead to significant improvements in the polarization or fairness of recommendations. Users increasingly rely on social media feeds for consuming daily information. The items in a feed, such as news, questions, songs, etc., usually result from the complex interplay of a user's social contacts, her interests and her actions on the platform. The relationship between the user's own behavior and the received feed is often puzzling, and many users would like to have a clear explanation of why certain items were shown to them. Transparency and explainability are key concerns in the modern world of cognitive overload, filter bubbles, user tracking, and privacy risks. This paper presents FAIRY, a framework that systematically discovers, ranks, and explains relationships between users' actions and items in their social media feeds. We model the user's local neighborhood on the platform as an interaction graph, a form of heterogeneous information network constructed solely from information that is easily accessible to the concerned user. We posit that paths in this interaction graph connecting the user and her feed items can act as pertinent explanations for the user. These paths are scored with a learning-to-rank model that captures relevance and surprisal. User studies on two social platforms demonstrate the practical viability and user benefits of the FAIRY method. Online Learning to Rank is a powerful paradigm that allows one to train ranking models using only online feedback from their users. In this work, we consider the Federated Online Learning to Rank setup (FOLtR), where on-mobile ranking models are trained in a way that respects the users' privacy.
We require that the user data, such as queries, results, and their feature representations are never communicated for the purpose of the ranker's training. We believe this setup is interesting, as it combines unique requirements for the learning algorithm: (a) preserving the user privacy, (b) low communication and computation costs, (c) learning from noisy bandit feedback, and (d) learning with non-continuous ranking quality measures. We propose a learning algorithm FOLtR-ES that satisfies these requirements. A part of FOLtR-ES is a privatization procedure that allows it to provide ε-local differential privacy guarantees, i.e. protecting the clients from an adversary who has access to the communicated messages. This procedure can be applied to any absolute online metric that takes finitely many values or can be discretized to a finite domain. Our experimental study is based on a widely used click simulation approach and publicly available learning to rank datasets MQ2007 and MQ2008. We evaluate FOLtR-ES against offline baselines that are trained using relevance labels, linear regression model and RankingSVM. From our experiments, we observe that FOLtR-ES can optimize a ranking model to perform similarly to the baselines in terms of the optimized online metric, Max Reciprocal Rank. In an increasingly polarized world, demagogues who reduce complexity down to simple arguments based on emotion are gaining in popularity. Are opinions and online discussions falling into demagoguery? In this work, we aim to provide computational tools to investigate this question and, by doing so, explore the nature and complexity of online discussions and their space of opinions, uncovering where each participant lies. More specifically, we present a modeling framework to construct latent representations of opinions in online discussions which are consistent with human judgments, as measured by online voting. If two opinions are close in the resulting latent space of opinions, it is because humans think they are similar. Our framework is theoretically grounded and establishes a surprising connection between opinion and voting models and the sign-rank of matrices. Moreover, it also provides a set of practical algorithms to both estimate the dimensionality of the latent space of opinions and infer where opinions expressed by the participants of an online discussion lie in this space. Experiments on a large dataset from Yahoo! News, Yahoo! Finance, Yahoo! Sports, and the Newsroom app show that many discussions are multisided, reveal a positive correlation between the complexity of a discussion, its linguistic diversity and its level of controversy, and show that our framework may be able to circumvent language nuances such as sarcasm or humor by relying on human judgments instead of textual analysis. We consider context-response matching with multiple types of representations for multi-turn response selection in retrieval-based chatbots. The representations encode semantics of contexts and responses on words, n-grams, and sub-sequences of utterances, and capture both short-term and long-term dependencies among words. With such a number of representations in hand, we study how to fuse them in a deep neural architecture for matching and how each of them contributes to matching. To this end, we propose a multi-representation fusion network where the representations can be fused into matching at an early stage, at an intermediate stage, or at the last stage. 
We empirically compare different representations and fusing strategies on two benchmark data sets. Evaluation results indicate that late fusion is always better than early fusion, and by fusing the representations at the last stage, our model significantly outperforms the existing methods, and achieves new state-of-the-art performance on both data sets. Through a thorough ablation study, we demonstrate the effect of each representation on matching, which sheds light on how to select them in practical systems. Recent years have witnessed the flourishing of podcasts, a unique type of audio medium. Prior work on podcast content modeling focused on analyzing Automatic Speech Recognition outputs, which ignored vocal, musical, and conversational properties (e.g., energy, humor, and creativity) that uniquely characterize this medium. In this paper, we present an Adversarial Learning-based Podcast Representation (ALPR) that captures non-textual aspects of podcasts. Through extensive experiments on a large-scale podcast dataset (88,728 episodes from 18,433 channels), we show that (1) ALPR significantly outperforms the state-of-the-art features developed for music and speech in predicting the seriousness and energy of podcasts, and (2) incorporating ALPR significantly improves the performance of topic-based podcast-popularity prediction. Our experiments also reveal factors that correlate with podcast popularity. We tackle Attitude Detection, which we define as the task of extracting the replier's attitude, i.e., a target-polarity pair, from a given one-round conversation. While previous studies considered Target Extraction and Polarity Classification separately, we regard them as subtasks of Attitude Detection. Our experimental results show that treating the two subtasks independently is not the optimal solution for Attitude Detection, as achieving high performance in each subtask is not sufficient for obtaining correct target-polarity pairs. Our jointly trained model AD-NET substantially outperforms the separately trained models by alleviating the target-polarity mismatch problem. Moreover, we propose a method utilising the attitude detection model to improve retrieval-based chatbots by re-ranking the response candidates with attitude features. Human evaluation indicates that with attitude detection integrated, the new responses to the sampled queries are statistically significantly more consistent, coherent, engaging and informative than the original ones obtained from a commercial chatbot. Pattern counting in graphs is fundamental to several network science tasks, and there is an abundance of scalable methods for estimating counts of small patterns, often called motifs, in large graphs. However, modern graph datasets now contain richer structure, and incorporating temporal information in particular has become a key part of network analysis. Consequently, temporal motifs, which are generalizations of small subgraph patterns that incorporate temporal ordering on edges, are an emerging part of the network analysis toolbox. However, there are no algorithms for fast estimation of temporal motif counts; moreover, we show that even counting simple temporal star motifs is NP-complete. Thus, there is a need for fast and approximate algorithms. Here, we present the first frequency estimation algorithms for counting temporal motifs.
More specifically, we develop a sampling framework that sits as a layer on top of existing exact counting algorithms and enables fast and accurate memory-efficient estimates of temporal motif counts. Our results show that we can achieve one to two orders of magnitude speedups over existing algorithms with minimal and controllable loss in accuracy on a number of datasets. The phenomenon of edge clustering in real-world networks is a fundamental property underlying many ideas and techniques in network science. Clustering is typically quantified by the clustering coefficient, which measures the fraction of pairs of neighbors of a given center node that are connected. However, many common explanations of edge clustering attribute the triadic closure to a head node instead of the center node of a length-2 path; for example, a friend of my friend is also my friend. While such explanations are common in network analysis, there is no measurement for edge clustering that can be attributed to the head node. Here we develop local closure coefficients as a metric quantifying head-node-based edge clustering. We define the local closure coefficient as the fraction of length-2 paths emanating from the head node that induce a triangle. This subtle difference in definition leads to remarkably different properties from traditional clustering coefficients. We analyze correlations with node degree, connect the closure coefficient to community detection, and show that closure coefficients as a feature can improve link prediction. Social media is becoming popular for news consumption due to its fast dissemination, easy access, and low cost. However, it also enables the wide propagation of fake news, i.e., news with intentionally false information. Detecting fake news is an important task, which not only ensures users receive authentic information but also helps maintain a trustworthy news ecosystem. The majority of existing detection algorithms focus on finding clues from news contents, which are generally not effective because fake news is often intentionally written to mislead users by mimicking true news. Therefore, we need to explore auxiliary information to improve detection. The social context during news dissemination process on social media forms the inherent tri-relationship, the relationship among publishers, news pieces, and users, which has the potential to improve fake news detection. For example, partisan-biased publishers are more likely to publish fake news, and low-credible users are more likely to share fake news. In this paper, we study the novel problem of exploiting social context for fake news detection. We propose a tri-relationship embedding framework TriFN, which models publisher-news relations and user-news interactions simultaneously for fake news classification. We conduct experiments on two real-world datasets, which demonstrate that the proposed approach significantly outperforms other baseline methods for fake news detection. Crowdsourcing has become a standard methodology to collect manually annotated data such as relevance judgments at scale. On crowdsourcing platforms like Amazon MTurk or FigureEight, crowd workers select tasks to work on based on different dimensions such as task reward and requester reputation. Requesters then receive the judgments of workers who self-selected into the tasks and completed them successfully. Several crowd workers, however, preview tasks, begin working on them, reaching varying stages of task completion without finally submitting their work. 
Such behavior results in unrewarded effort which remains invisible to requesters. In this paper, we conduct the first investigation into the phenomenon of task abandonment, the act of workers previewing or beginning a task and deciding not to complete it. We follow a three-fold methodology which includes 1) investigating the prevalence and causes of task abandonment by means of a survey over different crowdsourcing platforms, 2) data-driven analyses of logs collected during a large-scale relevance judgment experiment, and 3) controlled experiments measuring the effect of different dimensions on abandonment. Our results show that task abandonment is a widely spread phenomenon. Apart from accounting for a considerable amount of wasted human effort, this bears important implications on the hourly wages of workers as they are not rewarded for tasks that they do not complete. We also show how task abandonment may have strong implications on the use of collected data (for example, on the evaluation of IR systems). Twitter's popularity has fostered the emergence of various illegal user activities - one such activity is to artificially bolster visibility of tweets by gaining large number of retweets within a short time span. The natural way to gain visibility is time-consuming. Therefore, users who want their tweets to get quick visibility try to explore shortcuts - one such shortcut is to approach the blackmarket services, and gain retweets for their own tweets by retweeting other customers' tweets. Thus the users intrinsically become a part of a collusive ecosystem controlled by these services. In this paper, we propose CoReRank, an unsupervised framework to detect collusive users (who are involved in producing artificial retweets), and suspicious tweets (which are submitted to the blackmarket services) simultaneously. CoReRank leverages the retweeting (or quoting) patterns of users, and measures two scores - the 'credibility' of a user and the 'merit' of a tweet. We propose a set of axioms to derive the interdependency between these two scores, and update them in a recursive manner. The formulation is further extended to handle the cold start problem. CoReRank is guaranteed to converge in a finite number of iterations and has linear time complexity. We also propose a semi-supervised version of CoReRank (called CoReRank+) which leverages a partial ground-truth labeling of users and tweets. Extensive experiments are conducted to show the superiority of CoReRank compared to six baselines on a novel dataset we collected and annotated. CoReRank beats the best unsupervised baseline method by 269% (20%) (relative) average precision and 300% (22.22%) (relative) average recall in detecting collusive (genuine) users. CoReRank+ beats the best supervised baseline method by 33.18% AUC. CoReRank also detects suspicious tweets with 0.85 (0.60) average precision (recall). To our knowledge, CoReRank is the first unsupervised method to detect collusive users and suspicious tweets simultaneously with theoretical guarantees. Over the last decade, research has revealed the high prevalence of cyberbullying among youth and raised serious concerns in society. Information on the social media platforms where cyberbullying is most prevalent (e.g., Instagram, Facebook, Twitter) is inherently multi-modal, yet most existing work on cyberbullying identification has focused solely on building generic classification models that rely exclusively on text analysis of online social media sessions (e.g., posts). 
Despite their empirical success, these efforts ignore the multi-modal information manifested in social media data (e.g., image, video, user profile, time, and location), and thus fail to offer a comprehensive understanding of cyberbullying. Conventionally, when information from different modalities is presented together, it often reveals complementary insights about the application domain and facilitates better learning performance. In this paper, we study the novel problem of cyberbullying detection within a multi-modal context by exploiting social media data in a collaborative way. This task, however, is challenging due to the complex combination of both cross-modal correlations among various modalities and structural dependencies between different social media sessions, and the diverse attribute information of different modalities. To address these challenges, we propose XBully, a novel cyberbullying detection framework, that first reformulates multi-modal social media data as a heterogeneous network and then aims to learn node embedding representations upon it. Extensive experimental evaluations on real-world multi-modal social media datasets show that the XBully framework is superior to the state-of-the-art cyberbullying detection models. An overwhelming number of true and false news stories are posted and shared in social networks, and users diffuse the stories based on multiple factors. Diffusion of news stories from one user to another depends not only on the stories' content and the genuineness but also on the alignment of the topical interests between the users. In this paper, we propose a novel Bayesian nonparametric model that incorporates homogeneity of news stories as the key component that regulates the topical similarity between the posting and sharing users' topical interests. Our model extends hierarchical Dirichlet process to model the topics of the news stories and incorporates Bayesian Gaussian process latent variable model to discover the homogeneity values. We train our model on a real-world social network dataset and find homogeneity values of news stories that strongly relate to their labels of genuineness and their contents. Finally, we show that the supervised version of our model predicts the labels of news stories better than the state-of-the-art neural network and Bayesian models. Performing anomaly detection on attributed networks concerns with finding nodes whose patterns or behaviors deviate significantly from the majority of reference nodes. Its success can be easily found in many real-world applications such as network intrusion detection, opinion spam detection and system fault diagnosis, to name a few. Despite their empirical success, a vast majority of existing efforts are overwhelmingly performed in an unsupervised scenario due to the expensive labeling costs of ground truth anomalies. In fact, in many scenarios, a small amount of prior human knowledge of the data is often effortless to obtain, and getting it involved in the learning process has shown to be effective in advancing many important learning tasks. Additionally, since new types of anomalies may constantly arise over time especially in an adversarial environment, the interests of human expert could also change accordingly regarding to the detected anomaly types. It brings further challenges to conventional anomaly detection algorithms as they are often applied in a batch setting and are incapable to interact with the environment. 
To tackle the above issues, in this paper, we investigate the problem of anomaly detection on attributed networks in an interactive setting by allowing the system to proactively communicate with the human expert in making a limited number of queries about ground truth anomalies. Our objective is to maximize the number of true anomalies presented to the human expert after a given budget is used up. Along with this line, we formulate the problem through the principled multi-armed bandit framework and develop a novel collaborative contextual bandit algorithm, named GraphUCB. In particular, our developed algorithm: (1) explicitly models the nodal attributes and node dependencies seamlessly in a joint framework; and (2) handles the exploration-exploitation dilemma when querying anomalies of different types. Extensive experiments on real-world datasets show the improvement of the proposed algorithm over the state-of-the-art algorithms. Core-periphery structure is a common property of complex networks, which is a composition of tightly connected groups of core vertices and sparsely connected periphery vertices. This structure frequently emerges in traffic systems, biology, and social networks via underlying spatial positioning of the vertices. While core-periphery structure is ubiquitous, there have been limited attempts at modeling network data with this structure. Here, we develop a generative, random network model with core-periphery structure that jointly accounts for topological and spatial information by "core scores" of vertices. Our model achieves substantially higher likelihood than existing generative models of core-periphery structure, and we demonstrate how the core scores can be used in downstream data mining tasks, such as predicting airline traffic and classifying fungal networks. We also develop nearly linear time algorithms for learning model parameters and network sampling by using a method akin to the fast multipole method, a technique traditional to computational physics, which allow us to scale to networks with millions of vertices with minor tradeoffs in accuracy. We propose a general view that demonstrates the relationship between network embedding approaches and matrix factorization. Unlike previous works that present the equivalence for the approaches from a skip-gram model perspective, we provide a more fundamental connection from an optimization (objective function) perspective. We demonstrate that matrix factorization is equivalent to optimizing two objectives: one is for bringing together the embeddings of similar nodes; the other is for separating the embeddings of distant nodes. The matrix to be factorized has a general form: S - β. The elements of S indicate pairwise node similarities. They can be based on any user-defined similarity/distance measure or learned from random walks on networks. The shift number β is related to a parameter that balances the two objectives. More importantly, the resulting embeddings are sensitive to β and we can improve the embeddings by tuning β. Experiments show that matrix factorization based on a new proposed similarity measure and β-tuning strategy significantly outperforms existing matrix factorization approaches on a range of benchmark networks. Graph similarity search is among the most important graph-based applications, e.g. finding the chemical compounds that are most similar to a query compound.
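To make the shifted-similarity factorization described just above more concrete, here is a small, hypothetical sketch (not the authors' code): it builds a toy similarity matrix S from one- and two-step random walks, subtracts a scalar shift β, and factorizes the result with a truncated SVD. The similarity measure, the embedding dimension, and the β values swept over are illustrative assumptions, not the paper's actual choices.

```python
import numpy as np

def embed_shifted_similarity(S, beta, dim):
    """Factorize the shifted similarity matrix S - beta into rank-`dim` node
    embeddings via truncated SVD. S is any (n x n) pairwise similarity matrix;
    beta is the scalar shift discussed above (illustrative only)."""
    M = S - beta                                   # shift every pairwise similarity by beta
    U, sigma, Vt = np.linalg.svd(M, full_matrices=False)
    # Keep the top `dim` factors and split the singular values between both sides,
    # so the inner product of two embeddings approximates (S - beta)_{ij}.
    X = U[:, :dim] * np.sqrt(sigma[:dim])
    Y = Vt[:dim, :].T * np.sqrt(sigma[:dim])
    return X, Y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = (rng.random((20, 20)) < 0.2).astype(float)
    A = np.triu(A, 1); A = A + A.T                 # toy random undirected graph
    deg = np.maximum(A.sum(1, keepdims=True), 1)
    P = A / deg                                    # one-step random-walk transition matrix
    S = (P + P @ P) / 2                            # similarity from 1- and 2-step walks
    for beta in [0.0, 0.05, 0.1]:
        X, Y = embed_shifted_similarity(S, beta, dim=4)
        err = np.linalg.norm(S - beta - X @ Y.T)
        print(f"beta={beta:.2f}  rank-4 reconstruction error={err:.3f}")
```

In this view, tuning β simply amounts to re-running the factorization for several shift values and keeping the embeddings that perform best on a downstream task.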
Graph similarity/distance computation, such as Graph Edit Distance (GED) and Maximum Common Subgraph (MCS), is the core operation of graph similarity search and many other applications, but very costly to compute in practice. Inspired by the recent success of neural network approaches to several graph applications, such as node or graph classification, we propose a novel neural network based approach to address this classic yet challenging graph problem, aiming to alleviate the computational burden while preserving a good performance. The proposed approach, called SimGNN, combines two strategies. First, we design a learnable embedding function that maps every graph into an embedding vector, which provides a global summary of a graph. A novel attention mechanism is proposed to emphasize the important nodes with respect to a specific similarity metric. Second, we design a pairwise node comparison method to supplement the graph-level embeddings with fine-grained node-level information. Our model achieves better generalization on unseen graphs, and in the worst case runs in quadratic time with respect to the number of nodes in two graphs. Taking GED computation as an example, experimental results on three real graph datasets demonstrate the effectiveness and efficiency of our approach. Specifically, our model achieves smaller error rate and great time reduction compared against a series of baselines, including several approximation algorithms on GED computation, and many existing graph neural network based models. Our study suggests SimGNN provides a new direction for future research on graph similarity computation and graph similarity search. Existing embedding methods for attributed networks aim at learning low-dimensional vector representations for nodes only but not for both nodes and attributes, resulting in the fact that they cannot capture the affinities between nodes and attributes. However, capturing such affinities is of great importance to the success of many real-world attributed network applications, such as attribute inference and user profiling. Accordingly, in this paper, we introduce a Co-embedding model for Attributed Networks (CAN), which learns low-dimensional representations of both attributes and nodes in the same semantic space such that the affinities between them can be effectively captured and measured. To obtain high-quality embeddings, we propose a variational auto-encoder that embeds each node and attribute with means and variances of Gaussian distributions. Experimental results on real-world networks demonstrate that our model yields excellent performance in a number of applications compared with state-of-the-art techniques. Relevance is the core problem of a search engine, and one of the main challenges is the vocabulary gap between user queries and documents. This problem is more serious in e-commerce, because language in product titles is more professional. Query rewriting and semantic matching are two key techniques to bridge the semantic gap between them to improve relevance. Recently, deep neural networks have been successfully applied to the two tasks and enhanced the relevance performance. However, such approaches suffer from the sparseness of training data in e-commerce scenario. In this study, we investigate the instinctive connection between query rewriting and semantic matching tasks, and propose a co-training framework to address the data sparseness problem when training deep neural networks. 
We first build a huge unlabeled dataset from search logs, on which the two tasks can be considered as two different views of the relevance problem. Then we iteratively co-train them via labeled data generated from this unlabeled set to boost their performance simultaneously. We conduct a series of offline and online experiments on a real-world e-commerce search engine, and the results demonstrate that the proposed method improves relevance significantly. The users often have many product-related questions before they make a purchase decision in E-commerce. However, it is often time-consuming to examine each user review to identify the desired information. In this paper, we propose a novel review-driven framework for answer generation for product-related questions in E-commerce, named RAGE. We develope RAGE on the basis of the multi-layer convolutional architecture to facilitate speed-up of answer generation with the parallel computation. For each question, RAGE first extracts the relevant review snippets from the reviews of the corresponding product. Then, we devise a mechanism to identify the relevant information from the noise-prone review snippets and incorporate this information to guide the answer generation. The experiments on two real-world E-Commerce datasets show that the proposed RAGE significantly outperforms the existing alternatives in producing more accurate and informative answers in natural language. Moreover, RAGE takes much less time for both model training and answer generation than the existing RNN based generation models. Evaluating algorithmic recommendations is an important, but difficult, problem. Evaluations conducted offline using data collected from user interactions with an online system often suffer from biases arising from the user interface or the recommendation engine. Online evaluation (A/B testing) can more easily address problems of bias, but depending on setting can be time-consuming and incur risk of negatively impacting the user experience, not to mention that it is generally more difficult when access to a large user base is not taken as granted. A compromise based on \em counterfactual analysis is to present some subset of online users with recommendation results that have been randomized or otherwise manipulated, log their interactions, and then use those to de-bias offline evaluations on historical data. However, previous work does not offer clear conclusions on how well such methods correlate with and are able to predict the results of online A/B tests. Understanding this is crucial to widespread adoption of new offline evaluation techniques in recommender systems. In this work we present a comparison of offline and online evaluation results for a particular recommendation problem: recommending playlists of tracks to a user looking for music. We describe two different ways to think about de-biasing offline collections for more accurate evaluation. Our results show that, contrary to much of the previous work on this topic, properly-conducted offline experiments do correlate well to A/B test results, and moreover that we can expect an offline evaluation to identify the best candidate systems for online testing with high probability. In e-commerce portals, generating answers for product-related questions has become a crucial task. In this paper, we propose the task of product-aware answer generation, which tends to generate an accurate and complete answer from large-scale unlabeled e-commerce reviews and product attributes. 
Unlike existing question-answering problems, answer generation in e-commerce confronts three main challenges: (1) Reviews are informal and noisy; (2) joint modeling of reviews and key-value product attributes is challenging; (3) traditional methods easily generate meaningless answers. To tackle above challenges, we propose an adversarial learning based model, named PAAG, which is composed of three components: a question-aware review representation module, a key-value memory network encoding attributes, and a recurrent neural network as a sequence generator. Specifically, we employ a convolutional discriminator to distinguish whether our generated answer matches the facts. To extract the salience part of reviews, an attention-based review reader is proposed to capture the most relevant words given the question. Conducted on a large-scale real-world e-commerce dataset, our extensive experiments verify the effectiveness of each module in our proposed model. Moreover, our experiments show that our model achieves the state-of-the-art performance in terms of both automatic metrics and human evaluations. Recommendation in the modern world is not only about capturing the interaction between users and items, but also about understanding the relationship between items. Besides improving the quality of recommendation, it enables the generation of candidate items that can serve as substitutes and supplements of another item. For example, when recommending Xbox, PS4 could be a logical substitute and the supplements could be items such as game controllers, surround system, and travel case. Therefore, given a network of items, our objective is to learn their content features such that they explain the relationship between items in terms of substitutes and supplements. To achieve this, we propose a generative deep learning model that links two variational autoencoders using a connector neural network to create Linked Variational Autoencoder (LVA). LVA learns the latent features of items by conditioning on the observed relationship between items. Using a rigorous series of experiments, we show that LVA significantly outperforms other representative and state-of-the-art baseline methods in terms of prediction accuracy. We then extend LVA by incorporating collaborative filtering (CF) to create CLVA that captures the implicit relationship between users and items. By comparing CLVA with LVA we show that inducing CF-based features greatly improve the recommendation quality of substitutable and supplementary items on a user level. We consider the novel problem of evaluating a recommendation policy offline in environments where the reward signal is non-stationary. Non-stationarity appears in many Information Retrieval (IR) applications such as recommendation and advertising, but its effect on off-policy evaluation has not been studied at all. We are the first to address this issue. First, we analyze standard off-policy estimators in non-stationary environments and show both theoretically and experimentally that their bias grows with time. Then, we propose new off-policy estimators with moving averages and show that their bias is independent of time and can be bounded. Furthermore, we provide a method to trade-off bias and variance in a principled way to get an off-policy estimator that works well in both non-stationary and stationary environments. 
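A minimal sketch of the contrast just described, assuming logged rewards, logging propensities, and target-policy probabilities are available: a standard inverse-propensity-scoring (IPS) estimate averages over all logged interactions with equal weight, while a moving-average variant down-weights old interactions so the estimate can track a drifting reward signal. This is illustrative only, not the paper's exact estimator.

```python
import numpy as np

def ips_estimate(rewards, target_probs, logging_probs):
    """Standard IPS estimate of the target policy's value: every logged
    interaction gets equal weight, which is what breaks down under drift."""
    w = np.asarray(target_probs) / np.asarray(logging_probs)
    return float(np.mean(w * np.asarray(rewards)))

def moving_average_ips(rewards, target_probs, logging_probs, alpha=0.1):
    """Exponentially weighted moving-average variant: recent interactions
    count more, so the estimate follows a non-stationary reward signal."""
    w = np.asarray(target_probs) / np.asarray(logging_probs)
    x = w * np.asarray(rewards)
    est = float(x[0])
    for v in x[1:]:
        est = (1 - alpha) * est + alpha * float(v)
    return est
```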
We experiment on publicly available recommendation datasets and show that our newly proposed moving average estimators accurately capture changes in non-stationary environments, while standard off-policy estimators fail to do so. Industrial recommender systems deal with extremely large action spaces -- many millions of items to recommend. Moreover, they need to serve billions of users, who are unique at any point in time, resulting in a complex user state space. Luckily, huge quantities of logged implicit feedback (e.g., user clicks, dwell time) are available for learning. Learning from the logged feedback is however subject to biases caused by only observing feedback on recommendations selected by the previous versions of the recommender. In this work, we present a general recipe for addressing such biases in a production top-K recommender system at YouTube, built with a policy-gradient-based algorithm, i.e., REINFORCE. The contributions of the paper are: (1) scaling REINFORCE to a production recommender system with an action space on the order of millions; (2) applying off-policy correction to address data biases in learning from logged feedback collected from multiple behavior policies; (3) proposing a novel top-K off-policy correction to account for our policy recommending multiple items at a time; (4) showcasing the value of exploration. We demonstrate the efficacy of our approaches through a series of simulations and multiple live experiments on YouTube. In this paper, we propose an offline counterfactual policy estimation framework called Genie to optimize the Sponsored Search Marketplace. Genie employs an open-box simulation engine with a click calibration model to compute the KPI impact of any modification to the system. From the experimental results on Bing traffic, we show that Genie performs better than existing observational approaches that employ randomized experiments for traffic slices that have frequent policy updates. We also show that Genie can be used to tune completely new policies efficiently without creating risky randomized experiments due to the cold start problem. As of today, Genie hosts more than 10,000 optimization jobs yearly, which run more than 30 million processing node hours of big data jobs for Bing Ads. For the last 3 years, Genie has proven to be one of the major platforms for optimizing the Bing Ads Marketplace due to its reliability under frequent policy changes and its efficiency in minimizing risks in real experiments. Presentation bias is one of the key challenges when learning from implicit feedback in search engines, as it confounds the relevance signal. While it was recently shown how counterfactual learning-to-rank (LTR) approaches [Joachims et al., 2017] can provably overcome presentation bias when observation propensities are known, it remains to show how to effectively estimate these propensities. In this paper, we propose the first method for producing consistent propensity estimates without manual relevance judgments, disruptive interventions, or restrictive relevance modeling assumptions. First, we show how to harvest a specific type of intervention data from historic feedback logs of multiple different ranking functions, and show that this data is sufficient for consistent propensity estimation in the position-based model. Second, we propose a new extremum estimator that makes effective use of this data.
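For context on how such propensity estimates are consumed downstream, the counterfactual LTR approach referenced above typically plugs them into a propensity-weighted training objective under the position-based model. The following is a minimal sketch with a simple logistic loss and a propensity table assumed to be already estimated; it is illustrative and is not the paper's extremum estimator.

```python
import numpy as np

def ips_weighted_rank_loss(scores, clicks, positions, propensity):
    """Propensity-weighted pointwise loss: each clicked document is
    up-weighted by 1 / P(examined at its rank), which de-biases the click
    signal under the position-based model. `propensity` maps a rank to the
    (estimated) examination probability at that rank."""
    loss = 0.0
    for s, c, k in zip(scores, clicks, positions):
        if c:                                   # only clicks contribute here
            w = 1.0 / max(propensity[k], 1e-6)
            loss += w * np.log1p(np.exp(-s))    # logistic loss on the score
    return loss
```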
In an empirical evaluation, we find that the new estimator provides superior propensity estimates in two real-world systems -- Arxiv Full-text Search and Google Drive Search. Beyond these two points, we find that the method is robust to a wide range of settings in simulation studies. We evaluate the impact of probabilistically-constructed digital identity data collected between Sep. 2017 and Dec. 2017, approximately, in the context of Lookalike-targeted campaigns. The backbone of this study is a large set of probabilistically-constructed "identities", represented as small bags of cookies and mobile ad identifiers with associated metadata, that are likely all owned by the same underlying user. The identity data allows us to generate "identity-based", rather than "identifier-based", user models, giving a fuller picture of the interests of the users underlying the identifiers. We employ off-policy evaluation techniques to evaluate the potential of identity-powered lookalike models without incurring the risk of allowing untested models to direct large amounts of ad spend or the large cost of performing A/B tests. We add to historical work on off-policy evaluation by noting a significant type of "finite-sample bias" that occurs for studies combining modestly-sized datasets and evaluation metrics based on ratios involving rare events (e.g., conversions). We illustrate this bias using a simulation study that later informs the handling of inverse propensity weights in our analyses on real data. We demonstrate significant lift in identity-powered lookalikes versus an identity-ignorant baseline: on average ~70% lift in conversion rate, CVR, with a concordant drop in cost-per-acquisition, CPA. This rises to factors of ~(4-32)x for identifiers having little data themselves, but that can be inferred to belong to users with substantial data to aggregate across identifiers. This implies that identity-powered user modeling is especially important in the context of identifiers having very short lifespans (i.e., frequently churned cookies). Our work motivates and informs the use of probabilistically-constructed digital identities in the marketing context. It also deepens the canon of examples in which off-policy learning has been employed to evaluate the complex systems of the internet economy. Online A/B tests play an instrumental role for Internet companies to improve products and technologies in a data-driven manner. An online A/B test, in its most straightforward form, can be treated as a static hypothesis test where traditional statistical tools such as p-values and power analysis might be applied to help decision makers determine which variant performs better. However, a static A/B test presents both a time cost and an opportunity cost for rapid product iterations. In terms of time cost, a fast-paced product evolution pushes its shareholders to consistently monitor results from online A/B experiments, which usually invites peeking and altering experimental designs as data is collected. It is recognized that this flexibility might harm statistical guarantees if not introduced in the right way, especially when online tests are considered as static hypothesis tests. In terms of opportunity cost, a static test usually entails a static allocation of users into different variants, which prevents an immediate roll-out of the better version to a larger audience or risks alienating users who may suffer from a bad experience.
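For concreteness, the "static" analysis referred to above usually amounts to a fixed-horizon two-proportion test run once the planned sample size is reached; a minimal sketch, not tied to any particular platform:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Fixed-horizon A/B analysis: compare conversion rates of variants A
    and B with a pooled two-proportion z-test and a two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value
```

Peeking at this statistic repeatedly, as described above, invalidates its nominal Type-I error guarantee, which is exactly the issue the sequential treatment discussed next is meant to address.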
While some works try to tackle these challenges, no prior method focuses on a holistic solution to both issues. In this paper, we propose a unified framework utilizing sequential analysis and multi-armed bandit to address time cost and the opportunity cost of static online tests simultaneously. In particular, we present an imputed sequential Girshick test that accommodates online data and dynamic allocation of data. The unobserved potential outcomes are treated as missing data and are imputed using empirical averages. Focusing on the binomial model, we demonstrate that the proposed imputed Girshick test achieves Type-I error and power control with both a fixed allocation ratio and an adaptive allocation such as Thompson Sampling through extensive experiments. In addition, we also run experiments on historical Etsy.com A/B tests to show the reduction in opportunity cost when using the proposed method. We have seen a massive growth of online experiments at Internet companies. Although conceptually simple, A/B tests can easily go wrong in the hands of inexperienced users and on an A/B testing platform with little governance. An invalid A/B test hurts the business by leading to non-optimal decisions. Therefore, it is now more important than ever to create an intelligent A/B platform that democratizes A/B testing and allows everyone to make quality decisions through built-in detection and diagnosis of invalid tests. In this paper, we share how we mined through historical A/B tests and identified the most common causes for invalid tests, ranging from biased design, self-selection bias to attempting to generalize A/B test result beyond the experiment population and time frame. Furthermore, we also developed scalable algorithms to automatically detect invalid A/B tests and diagnose the root cause of invalidity. Surfacing up invalidity not only improved decision quality, but also served as a user education and reduced problematic experiment designs in the long run. Online P2PL systems allow lending and borrowing between peers without the need for intermediaries such as banks. Convenience and high rate of returns have made P2PL systems very popular. Recommendation systems have been developed to help lenders make wise investment decisions, lowering the chances of overall default. However, P2PL marketplace suffers from low financial liquidity, i.e., loans of different grades are not always available for investment. Moreover, P2PL investments are long term (usually a few years), hence, incorrect investment cannot be liquidated easily. Overall, the state-of-the-art recommendation systems do not account for the low market liquidity and hence, can lead to unwise investment decisions. In this paper we remedy this shortcoming by building a recommendation framework that builds an investment portfolio, which results in the highest return and the lowest risk along with a statistical measure of the number of days required for the amount to be completely funded. Our recommendation system predicts the grade and number of loans that will appear in the future when constructing the investment portfolio. Experimental results show that our recommendation engine outperforms the current state-of-the-art techniques. Our recommendation system can increase the probability of achieving the highest return with the lowest risk by ~ 69%. The rapid growth of Internet services and mobile devices provides an excellent opportunity to satisfy the strong demand for the personalized item or product recommendation. 
However, with the tremendous increase of users and items, personalized recommender systems still face several challenging problems: (1) the hardness of exploiting sparse implicit feedback; (2) the difficulty of combining heterogeneous data. To cope with these challenges, we propose a gated attentive-autoencoder (GATE) model, which is capable of learning fused hidden representations of items' contents and binary ratings, through a neural gating structure. Based on the fused representations, our model exploits neighboring relations between items to help infer users' preferences. In particular, a word-level and a neighbor-level attention module are integrated with the autoencoder. The word-level attention learns the item hidden representations from items' word sequences, while favoring informative words by assigning larger attention weights. The neighbor-level attention learns the hidden representation of an item's neighborhood by considering its neighbors in a weighted manner. We extensively evaluate our model with several state-of-the-art methods and different validation metrics on four real-world datasets. The experimental results not only demonstrate the effectiveness of our model on top-N recommendation but also provide interpretable results attributed to the attention modules. This paper reformulates the problem of recommending related queries on a search engine as an extreme multi-label learning task. Extreme multi-label learning aims to annotate each data point with the most relevant subset of labels from an extremely large label set. Each of the top 100 million queries on Bing was treated as a separate label in the proposed reformulation and an extreme classifier was learnt which took the user's query as input and predicted the relevant subset of 100 million queries as output. Unfortunately, state-of-the-art extreme classifiers have not been shown to scale beyond 10 million labels and have poor prediction accuracies for queries. This paper therefore develops the Slice algorithm which can be accurately trained on low-dimensional, dense deep learning features popularly used to represent queries and which efficiently scales to 100 million labels and 240 million training points. Slice achieves this by reducing the training and prediction times from linear to logarithmic in the number of labels based on a novel negative sampling technique. This allows the proposed reformulation to address some of the limitations of traditional related searches approaches in terms of coverage, density and quality. Experiments on publicly available extreme classification datasets with low-dimensional dense features as well as related searches datasets mined from the Bing logs revealed that Slice could be more accurate than leading extreme classifiers while also scaling to 100 million labels. Furthermore, Slice was found to improve the accuracy of recommendations by 10% as compared to state-of-the-art related searches techniques. Finally, when added to the ensemble in production in Bing, Slice was found to increase the trigger coverage by 52%, the suggestion density by 33%, the overall success rate by 2.6% and the success rate for tail queries by 12.6%. Slice's source code can be downloaded from . Neural collaborative filtering (NCF) and recurrent recommender systems (RRN) have been successful in modeling relational data (user-item interactions).
However, they are also limited in their assumption of static or sequential modeling of relational data as they do not account for evolving users' preference over time as well as changes in the underlying factors that drive the change in user-item relationship over time. We address these limitations by proposing a Neural network based Tensor Factorization (NTF) model for predictive tasks on dynamic relational data. The NTF model generalizes conventional tensor factorization from two perspectives: First, it leverages the long short-term memory architecture to characterize the multi-dimensional temporal interactions on relational data. Second, it incorporates the multi-layer perceptron structure for learning the non-linearities between different latent factors. Our extensive experiments demonstrate the significant improvement in both the rating prediction and link prediction tasks on various dynamic relational data by our NTF model over both neural network based factorization models and other traditional methods. Recommender systems rely heavily on the predictive accuracy of the learning algorithm. Most work on improving accuracy has focused on the learning algorithm itself. We argue that this algorithmic focus is myopic. In particular, since learning algorithms generally improve with more and better data, we propose shaping the feedback generation process as an alternate and complementary route to improving accuracy. To this effect, we explore how changes to the user interface can impact the quality and quantity of feedback data -- and therefore the learning accuracy. Motivated by information foraging theory, we study how feedback quality and quantity are influenced by interface design choices along two axes: information scent and information access cost. We present a user study of these interface factors for the common task of picking a movie to watch, showing that these factors can effectively shape and improve the implicit feedback data that is generated while maintaining the user experience. Online communities such as Facebook and Twitter are enormously popular and have become an essential part of the daily life of many of their users. Through these platforms, users can discover and create information that others will then consume. In that context, recommending relevant information to users becomes critical for viability. However, recommendation in online communities is a challenging problem: 1) users' interests are dynamic, and 2) users are influenced by their friends. Moreover, the influencers may be context-dependent. That is, different friends may be relied upon for different topics. Modeling both signals is therefore essential for recommendations. We propose a recommender system for online communities based on a dynamic-graph-attention neural network. We model dynamic user behaviors with a recurrent neural network, and context-dependent social influence with a graph-attention neural network, which dynamically infers the influencers based on users' current interests. The whole model can be efficiently fit on large-scale data. Experimental results on several real-world data sets demonstrate the effectiveness of our proposed approach over several competitive baselines including state-of-the-art models. We propose a new time-dependent predictive model of user-item ratings centered around local coherence -- that is, while both users and items are constantly in flux, within a short-term sequence, the neighborhood of a particular user or item is likely to be coherent. 
Three unique characteristics of the framework are: (i) it incorporates both implicit and explicit feedback by extracting the local coherence hidden in the feedback sequences; (ii) it uses parallel recurrent neural networks to capture the evolution of users and items, resulting in a dual factor recommendation model; and (iii) it combines both coherence-enhanced consistent latent factors and dynamic latent factors to balance short-term changes with long-term trends for improved recommendation. Through experiments on Goodreads and Amazon, we find that the proposed model can outperform state-of-the-art models in predicting users' preferences. In this paper, we focus on the task of sequential recommendation using taxonomy data. Existing sequential recommendation methods usually adopt a single vectorized representation for learning the overall sequential characteristics, and have a limited modeling capacity in capturing multi-grained sequential characteristics over context information. Besides, existing methods often directly take the feature vectors derived from context information as auxiliary input, which makes it difficult to fully exploit the structural patterns in context information for learning preference representations. To address the above issues, we propose a novel Taxonomy-aware Multi-hop Reasoning Network, named TMRN, which integrates a basic GRU-based sequential recommender with an elaborately designed memory-based multi-hop reasoning architecture. For enhancing the reasoning capacity, we incorporate taxonomy data as structural knowledge to instruct the learning of our model. We associate the learning of user preference in sequential recommendation with the category hierarchy in the taxonomy. Given a user, for each recommendation, we learn a unique preference representation corresponding to each level in the taxonomy based on her/his overall sequential preference. In this way, the overall, coarse-grained preference representation can be gradually refined in different levels from general to specific, and we are able to capture the evolution and refinement of user preference over the taxonomy, which makes our model highly explainable. Extensive experiments show that our proposed model is superior to state-of-the-art baselines in terms of both effectiveness and interpretability. Convolutional Neural Networks (CNNs) have been recently introduced in the domain of session-based next item recommendation. An ordered collection of past items the user has interacted with in a session (or sequence) is embedded into a 2-dimensional latent matrix, and treated as an image. The convolution and pooling operations are then applied to the mapped item embeddings. In this paper, we first examine the typical session-based CNN recommender and show that both the generative model and network architecture are suboptimal when modeling long-range dependencies in the item sequence. To address the issues, we introduce a simple, but very effective generative model that is capable of learning high-level representations from both short- and long-range item dependencies. The network architecture of the proposed model is formed of a stack of holed convolutional layers, which can efficiently increase the receptive fields without relying on the pooling operation. Another contribution is the effective use of residual block structure in recommender systems, which can ease the optimization for much deeper networks.
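A compact PyTorch sketch of the kind of building block just described, a "holed" (dilated) causal convolution wrapped in a residual connection; the hyper-parameters and normalization choice are our own assumptions, not the authors' released code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedResidualBlock(nn.Module):
    """One dilated, causal 1-D convolution plus a residual connection.
    Stacking such blocks with growing dilation widens the receptive field
    over an item sequence without any pooling."""
    def __init__(self, channels, kernel_size=3, dilation=2):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation        # causal left-padding
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):                              # x: (batch, channels, seq_len)
        h = F.pad(x, (self.pad, 0))                    # pad only on the left (causal)
        h = torch.relu(self.conv(h))
        h = self.norm(h.transpose(1, 2)).transpose(1, 2)
        return x + h                                   # residual connection
```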
The proposed generative model attains state-of-the-art accuracy with less training time in the next item recommendation task. It accordingly can be used as a powerful recommendation baseline to beat in future, especially when there are long sequences of user feedback. In recent years session-based recommendation has emerged as an increasingly applicable type of recommendation. As sessions consist of sequences of events, this type of recommendation is a natural fit for Recurrent Neural Networks (RNNs). Several additions have been proposed for extending such models in order to handle specific problems or data. Two such extensions are 1.) modeling of inter-session relations for catching long term dependencies over user sessions, and 2.) modeling temporal aspects of user-item interactions. The former allows the session-based recommendation to utilize extended session history and inter-session information when providing new recommendations. The latter has been used to both provide state-of-the-art predictions for when the user will return to the service and also for improving recommendations. In this work, we combine these two extensions in a joint model for the tasks of recommendation and return-time prediction. The model consists of a Hierarchical RNN for the inter-session and intra-session items recommendation extended with a Point Process model for the time-gaps between the sessions. The experimental results indicate that the proposed model improves recommendations significantly on two datasets over a strong baseline, while simultaneously improving return-time predictions over a baseline return-time prediction model. Variational autoencoders were proven successful in domains such as computer vision and speech processing. Their adoption for modeling user preferences is still unexplored, although recently it is starting to gain attention in the current literature. In this work, we propose a model which extends variational autoencoders by exploiting the rich information present in the past preference history. We introduce a recurrent version of the VAE, where instead of passing a subset of the whole history regardless of temporal dependencies, we rather pass the consumption sequence subset through a recurrent neural network. At each time-step of the RNN, the sequence is fed through a series of fully-connected layers, the output of which models the probability distribution of the most likely future preferences. We show that handling temporal information is crucial for improving the accuracy of the VAE: In fact, our model beats the current state-of-the-art by valuable margins because of its ability to capture temporal dependencies among the user-consumption sequence using the recurrent encoder still keeping the fundamentals of variational autoencoders intact. A user-generated review document is a product between the item's intrinsic properties and the user's perceived composition of those properties. Without properly modeling and decoupling these two factors, one can hardly obtain any accurate user understanding nor item profiling from such user-generated data. In this paper, we study a new text mining problem that aims at differentiating a user's subjective composition of topical content in his/her review document from the entity's intrinsic properties. Motivated by the Item Response Theory (IRT), we model each review document as a user's detailed response to an item, and assume the response is jointly determined by the individuality of the user and the property of the item. 
We model the text-based response with a generative topic model, in which we characterize the items' properties and users' manifestations of them in a low-dimensional topic space. Via posterior inference, we separate and study these two components over a collection of review documents. Extensive experiments on two large collections of Amazon and Yelp review data verified the effectiveness of the proposed solution: it outperforms the state-of-the-art topic models with better predictive power in unseen documents, which directly translates into improved performance in item recommendation and item summarization tasks. As one of the Web's primary multilingual knowledge sources, Wikipedia is read by millions of people across the globe every day. Despite this global readership, little is known about why users read Wikipedia's various language editions. To bridge this gap, we conduct a comparative study by combining a large-scale survey of Wikipedia readers across 14 language editions with a log-based analysis of user activity. We proceed in three steps. First, we analyze the survey results to compare the prevalence of Wikipedia use cases across languages, discovering commonalities, but also substantial differences, among Wikipedia languages with respect to their usage. Second, we match survey responses to the respondents' traces in Wikipedia's server logs to characterize behavioral patterns associated with specific use cases, finding that distinctive patterns consistently mark certain use cases across language editions. Third, we show that certain Wikipedia use cases are more common in countries with certain socio-economic characteristics; e.g., in-depth reading of Wikipedia articles is substantially more common in countries with a low Human Development Index. These findings advance our understanding of reader motivations and behaviors across Wikipedia languages and have implications for Wikipedia editors and developers of Wikipedia and other Web technologies. Email triage involves going through unhandled emails and deciding what to do with them. This familiar process can become increasingly challenging as the number of unhandled emails grows. During a triage session, users commonly defer handling emails that they cannot immediately deal with until later. These deferred emails are often related to tasks that are postponed until the user has more time or the right information to deal with them. In this paper, through qualitative interviews and a large-scale log analysis, we study when and what enterprise email users tend to defer. We found that users are more likely to defer emails when handling them involves replying, reading carefully, or clicking on links and attachments. We also learned that the decision to defer emails depends on many factors, such as the user's workload and the importance of the sender. Our qualitative results suggested that deferring is very common, and our quantitative log analysis confirms that 12% of triage sessions and 16% of daily active users had at least one deferred email on weekdays. We also discuss several deferral strategies, such as marking emails as unread and flagging them, that are reported by our interviewees, and illustrate how such patterns can also be observed in user logs. Inspired by the characteristics of deferred emails and contextual factors involved in deciding if an email should be deferred, we train a classifier for predicting whether a recently triaged email is actually deferred.
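Such a deferral classifier can be prototyped with standard tooling; a minimal sketch, assuming a feature matrix of hypothetical per-email features of the kind discussed above (e.g., whether a reply is needed, attachment presence, sender importance, current inbox load) and binary deferred/not-deferred labels:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

def train_deferral_classifier(X, y):
    """Train a simple baseline for 'will this triaged email be deferred?'.
    X holds per-email features, y holds 0/1 deferral labels;
    class_weight='balanced' compensates for deferral being the rarer class."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = LogisticRegression(max_iter=1000, class_weight="balanced")
    clf.fit(X_tr, y_tr)
    return clf, f1_score(y_te, clf.predict(X_te))
```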
Our experimental results suggest that deferral can be classified with modest effectiveness. Overall, our work provides novel insights about how users handle their emails and how deferral can be modeled. Estimating how long a task will take to complete (i.e., the task duration) is important for many applications, including calendaring and project management. Population-scale calendar data contains distributional information about time allocated by individuals for tasks that may be useful to build computational models for task duration estimation. This study analyzes anonymized large-scale calendar appointment data from hundreds of thousands of individuals and millions of tasks to understand expected task durations and the longitudinal evolution in these durations. Machine-learned models are trained using the appointment data to estimate task duration. Study findings show that task attributes, including content (anonymized appointment subjects), context, and history, are correlated with time allocated for tasks. We also show that machine-learned models can be trained to estimate task duration, with multiclass classification accuracies of almost 80%. The findings have implications for understanding time estimation in populations, and in the design of support in digital assistants and calendaring applications to find time for tasks and to help people, especially those who are new to a task, block sufficient time for task completion. Understanding search intents behind queries is of vital importance for improving search performance or designing better evaluation metrics. Although there have been many efforts on Web search user intent taxonomies and on investigating how users' interaction behaviors vary with the intent types, only a few of them have been made specifically for the image search scenario. Different from previous works which investigate image search user behavior and task characteristics based on either lab studies or large-scale log analysis, we conducted a field study which lasted one month and involved 2,040 search queries from 555 search tasks. By this means, we collected a relatively large amount of practical search behavior data with extensive first-tier annotation from users. With this data set, we investigate how various image search intents affect users' search behavior, and try to adopt different signals to predict search satisfaction under a given intent. Meanwhile, external assessors were also employed to categorize each search task using four orthogonal intent taxonomies. Based on the hypothesis that behavior is dependent on task type, we analyze user search behavior on the field study data, examining characteristics of the session, click and mouse patterns. We also link the search satisfaction prediction to image search intent, which shows that different types of signals play different roles in satisfaction prediction as intent varies. Our findings indicate the importance of considering search intent in user behavior analysis and satisfaction prediction in image search. Demographics of online users such as age and gender play an important role in personalized web applications. However, it is difficult to directly obtain the demographic information of online users. Luckily, search queries can cover many online users and the search queries from users with different demographics usually have some differences in content and writing style. Thus, search queries can provide useful clues for demographic prediction.
In this paper, we study predicting users' demographics based on their search queries, and propose a neural approach for this task. Since search queries can be very noisy and many of them are not useful, instead of combining all queries together for user representation, in our approach we propose a hierarchical user representation with attention (HURA) model to learn informative user representations from their search queries. Our HURA model first learns representations for search queries from words using a word encoder, which consists of a CNN network and a word-level attention network to select important words. Then we learn representations of users based on the representations of their search queries using a query encoder, which contains a CNN network to capture the local contexts of search queries and a query-level attention network to select informative search queries for demographic prediction. Experiments on two real-world datasets validate that our approach can effectively improve the performance of search query based age and gender prediction and consistently outperform many baseline methods. Understanding user behavior and predicting future behavior on the web is critical for providing seamless user experiences as well as increasing revenue of service providers. Recently, thanks to the remarkable success of recurrent neural networks (RNNs), they have been widely used for modeling sequences of user behaviors. However, although sequential behaviors appear across multiple domains in practice, existing RNN-based approaches still focus on the single-domain scenario, assuming that sequential behaviors come from only a single domain. Hence, in order to analyze sequential behaviors across multiple domains, they need to train multiple RNN models separately, which fails to jointly model the interplay among sequential behaviors across multiple domains. Consequently, they often suffer from a lack of information within each domain. In this paper, we first introduce a practical but overlooked phenomenon in sequential behaviors across multiple domains, i.e., domain switch, where two successive behaviors belong to different domains. Then, we propose a Domain Switch-Aware Holistic Recurrent Neural Network (DS-HRNN) that effectively shares the knowledge extracted from multiple domains by systematically handling domain switch for the multi-domain scenario. DS-HRNN jointly models the multi-domain sequential behaviors and accurately predicts the future behaviors in each domain with only a single RNN model. Our extensive evaluations on two real-world datasets demonstrate that DS-HRNN outperforms existing RNN-based approaches and non-sequential baselines with significant improvements of up to 14.93% in terms of recall for future behavior prediction. People often make commitments to perform future actions. Detecting commitments made in email (e.g., "I'll send the report by end of day'') enables digital assistants to help their users recall promises they have made and assist them in meeting those promises in a timely manner. In this paper, we show that commitments can be reliably extracted from emails when models are trained and evaluated on the same domain (corpus). However, their performance degrades when the evaluation domain differs. This illustrates the domain bias associated with email datasets and a need for more robust and generalizable models for commitment detection.
To learn a domain-independent commitment model, we first characterize the differences between domains (email corpora) and then use this characterization to transfer knowledge between them. We investigate the performance of domain adaptation, namely transfer learning, at different granularities: feature-level adaptation and sample-level adaptation. We extend this further using a neural autoencoder trained to learn a domain-independent representation for training samples. We show that transfer learning can help remove domain bias to obtain models with less domain dependence. Overall, our results show that domain differences can have a significant negative impact on the quality of commitment detection models and that transfer learning has enormous potential to address this issue. Users seek direct answers to complex questions from large open-domain knowledge sources like the Web. Open-domain question answering has become a critical task to be solved for building systems that help address users' complex information needs. Most open-domain question answering systems use a search engine to retrieve a set of candidate documents, select one or a few of them as context, and then apply reading comprehension models to extract answers. Some questions, however, require taking a broader context into account, e.g., by considering low-ranked documents that are not immediately relevant, combining information from multiple documents, and reasoning over multiple facts from these documents to infer the answer. In this paper, we propose a model based on the Transformer architecture that is able to efficiently operate over a larger set of candidate documents by effectively combining the evidence from these documents during multiple steps of reasoning, while it is robust against noise from low-ranked non-relevant documents included in the set. We use our proposed model, called TraCRNet, on two public open-domain question answering datasets, SearchQA and Quasar-T, and achieve results that meet or exceed the state-of-the-art. Representation learning in heterogeneous networks faces challenges due to heterogeneous structural information of multiple types of nodes and relations, and also due to the unstructured attribute or content (e.g., text) associated with some types of nodes. While many recent works have studied homogeneous, heterogeneous, and attributed networks embedding, there are few works that have collectively solved these challenges in heterogeneous networks. In this paper, we address them by developing a Semantic-aware Heterogeneous Network Embedding model (SHNE). SHNE performs joint optimization of heterogeneous SkipGram and deep semantic encoding for capturing both heterogeneous structural closeness and unstructured semantic relations among all nodes, as function of node content, that exist in the network. Extensive experiments demonstrate that SHNE outperforms state-of-the-art baselines in various heterogeneous network mining tasks, such as link prediction, document retrieval, node recommendation, relevance search, and class visualization. Deep text matching approaches have been widely studied for many applications including question answering and information retrieval systems. To deal with a domain that has insufficient labeled data, these approaches can be used in a Transfer Learning (TL) setting to leverage labeled data from a resource-rich source domain. To achieve better performance, source domain data selection is essential in this process to prevent the "negative transfer" problem. 
However, the emerging deep transfer models do not fit well with most existing data selection methods, because the data selection policy and the transfer learning model are not jointly trained, leading to sub-optimal training efficiency. In this paper, we propose a novel reinforced data selector to select high-quality source domain data to help the TL model. Specifically, the data selector "acts" on the source domain data to find a subset for optimization of the TL model, and the performance of the TL model can provide "rewards" in turn to update the selector. We build the reinforced data selector based on the actor-critic framework and integrate it into a DNN-based transfer learning model, resulting in a Reinforced Transfer Learning (RTL) method. We perform a thorough experimental evaluation on two major tasks for text matching, namely, paraphrase identification and natural language inference. Experimental results show the proposed RTL can significantly improve the performance of the TL model. We further investigate different settings of states, rewards, and policy optimization methods to examine the robustness of our method. Lastly, we conduct a case study on the selected data and find that our method is able to select source domain data whose Wasserstein distance to the target domain data is small. This is reasonable and intuitive, as such source domain data can provide more transferability power to the model. We propose a link prediction algorithm that is based on spring-electrical models. The idea to study these models came from the fact that spring-electrical models have been successfully used for network visualization. A good network visualization usually implies that nodes similar in terms of network topology, e.g., connected and/or belonging to one cluster, tend to be visualized close to each other. Therefore, we assumed that the Euclidean distance between nodes in the obtained network layout correlates with the probability of a link between them. We evaluate the proposed method against several popular baselines and demonstrate its flexibility by applying it to undirected, directed and bipartite networks. Session-based recommendation has recently received much attention because user data is often unavailable, e.g., when users are not logged in or tracked. Most session-based methods focus on exploring abundant historical records of anonymous users but ignore the sparsity problem, where historical data are lacking or are insufficient for items in sessions. In fact, as users' behavior is relevant across domains, information from different domains is correlated, e.g., a user tends to watch related movies in a movie domain after listening to some movie-themed songs in a music domain (i.e., cross-domain sessions). Therefore, we can learn a complete item description to solve the sparsity problem using complementary information from related domains. In this paper, we propose an innovative method, called Cross-Domain Item Embedding method based on Co-clustering (CDIE-C), to learn cross-domain comprehensive representations of items by collectively leveraging single-domain and cross-domain sessions within a unified framework. We first extract cluster-level correlations across domains using co-clustering and filter out noise. Then, cross-domain items and clusters are embedded into a unified space by jointly capturing item-level sequence information and cluster-level correlative information.
Besides, CDIE-C enhances information exchange across domains utilizing three types of relations (i.e., item-to-context-item, item-to-context-co-cluster and co-cluster-to-context-item relations). Finally, we train CDIE-C with two efficient training strategies, i.e., joint training and two-stage training. Empirical results show CDIE-C outperforms the state-of-the-art recommendation methods on three cross-domain datasets and can effectively alleviate the sparsity problem. The recent art in relation extraction is distant supervision which generates training data by heuristically aligning a knowledge base with free texts and thus avoids human labelling. However, the concerned relation mentions often use the bag-of-words representation, which ignores inner correlations between features located in different dimensions and makes relation extraction less effective. To capture the complex characteristics of relation expression and tighten the correlated features, we attempt to discover and utilise informative correlations between features by the following four phases: 1) formulating semantic similarities between lexical features using the embedding method; 2) constructing generative relation for lexical features with different sizes of side windows; 3) computing correlation scores between syntactic features through a kernel-based method; and 4) conducting a distillation process for the obtained correlated feature pairs and integrating informative pairs with existing relation extraction models. The extensive experiments demonstrate that our method can effectively discover correlation information and improve the performance of state-of-the-art relation extraction methods. Comparative summarization is an effective strategy to discover important similarities and differences in collections of documents biased to users' interests. A natural method of this task is to find important and corresponding content. In this paper, we propose a novel research task of automatic query-based across-time summarization in news archives as well as we introduce an effective method to solve this task. The proposed model first learns an orthogonal transformation between temporally distant news collections. Then, it generates a set of corresponding sentence pairs based on a concise integer linear programming framework. We experimentally demonstrate the effectiveness of our method on the New York Times Annotated Corpus. There has recently been much interest in extending vector-based word representations to multiple languages, such that words can be compared across languages. In this paper, we shift the focus from words to documents and introduce a method for embedding documents written in any language into a single, language-independent vector space. For training, our approach leverages a multilingual corpus where the same concept is covered in multiple languages (but not necessarily via exact translations), such as Wikipedia. Our method, Cr5 (Crosslingual reduced-rank ridge regression), starts by training a ridge-regression-based classifier that uses language-specific bag-of-word features in order to predict the concept that a given document is about. We show that, when constraining the learned weight matrix to be of low rank, it can be factored to obtain the desired mappings from language-specific bags-of-words to language-independent embeddings. 
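The low-rank factoring idea just described can be illustrated in a few lines, assuming bag-of-words features and one-hot concept labels; this is a rough sketch of reduced-rank ridge regression, not the exact Cr5 training objective.

```python
import numpy as np

def lowrank_doc_embeddings(X, Y, rank=128, lam=1.0):
    """Fit ridge-regression weights from bag-of-words features X (n x d) to
    concept indicators Y (n x c), then keep only the top singular directions
    of the weight matrix so that documents map into a shared low-dimensional
    space. Assumes rank <= min(d, c)."""
    d = X.shape[1]
    W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)   # d x c ridge weights
    U, s, _ = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]                              # low-rank factor
    return X @ U_r                                            # document embeddings
```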
As opposed to most prior methods, which use pretrained monolingual word vectors, postprocess them to make them crosslingual, and finally average word vectors to obtain document vectors, Cr5 is trained end-to-end and is thus natively crosslingual as well as document-level. Moreover, since our algorithm uses the singular value decomposition as its core operation, it is highly scalable. Experiments show that our method achieves state-of-the-art performance on a crosslingual document retrieval task. Finally, although not trained for embedding sentences and words, it also achieves competitive performance on crosslingual sentence and word retrieval tasks. In this paper, we advance the state-of-the-art in topic modeling by means of a new document representation based on pre-trained word embeddings for non-probabilistic matrix factorization. Specifically, our strategy, called CluWords, exploits the nearest words of a given pre-trained word embedding to generate meta-words capable of enhancing the document representation, in terms of both syntactic and semantic information. The novel contributions of our solution include: (i) the introduction of a novel data representation for topic modeling based on syntactic and semantic relationships derived from distances calculated within a pre-trained word embedding space and (ii) the proposal of a new TF-IDF-based strategy, particularly developed to weight the CluWords. In our extensive experimental evaluation, covering 12 datasets and 8 state-of-the-art baselines, we exceed the baselines (with a few ties) in almost all cases, with gains of more than 50% against the best baselines (and up to 80% against some runner-ups). Finally, we show that our method is able to improve document representation for the task of automatic text classification. In recent years, we have witnessed a rapid increase of text content stored in digital archives such as newspaper archives or web archives. With the passage of time, it is, however, difficult to effectively search within such collections due to vocabulary and context change. In this paper, we present a system that helps to find analogical terms across temporal text collections by applying a non-linear transformation. We implement two approaches for analog retrieval, one of which allows users to also input an aspect term specifying a particular perspective of a query. The current prototype system permits temporal analog search across two different time periods based on the New York Times Annotated Corpus. Cross-lingual summarization (CLS) aims to create summaries in a target language from a document or document set given in a different source language. Cross-lingual summarization can play a critical role in enabling cross-lingual information access for millions of people across the globe who do not speak or understand languages having large representation on the web. It can also make documents originally published in local languages quickly accessible to a large audience which does not understand those local languages. Though cross-lingual summarization has gathered some attention in the last decade, there has been no serious effort to publish rigorous software for this task. In this paper, we provide a design for an end-to-end CLS software package called clstk. Besides implementing a number of methods proposed by different CLS researchers over the years, the software integrates multiple components critical for CLS. We hope that this extremely modular tool-kit will help CLS researchers to contribute more effectively to the area.
Retrieval models in information retrieval are used to rank documents for typically under-specified queries. Today, machine learning is used to learn retrieval models from click logs and/or relevance judgments that maximize an objective correlated with user satisfaction. As these models become increasingly powerful and sophisticated, they also become harder to understand. Consequently, it is hard to identify artifacts in training, data-specific biases, and intents from a complex trained model such as a neural ranker, even if it is trained purely on text features. EXS is a search system designed specifically to provide its users with insight into the following questions: "What is the intent of the query according to the ranker?", "Why is this document ranked higher than another?" and "Why is this document relevant to the query?". EXS uses a version of a popular posthoc explanation method for classifiers -- LIME, adapted specifically to answer these questions. We show how such a system can effectively help a user understand the results of neural rankers and highlight areas of improvement. Designing a desirable and aesthetic manifestation of web graphical user interfaces (GUIs) is a challenging task for web developers. After determining a web page's content, developers usually refer to existing pages, and adapt the styles from desired pages into the target one. However, it is not only difficult to find appropriate pages to exhibit the target page's content, but also tedious to incorporate styles from different pages harmoniously in the target page. To tackle these two issues, we propose FaceOff, a data-driven automation system that assists the manifestation design of web GUIs. FaceOff constructs a repository of web GUI templates based on 15,491 web pages from popular websites and professional design examples. Given a web page for designing manifestation, FaceOff first segments it into multiple blocks, and retrieves GUI templates in the repository for each block. Subsequently, FaceOff recommends multiple combinations of templates according to a Convolutional Neural Network (CNN) based style-embedding model, which makes the recommended style combinations diverse and accordant. We demonstrate that FaceOff can retrieve suitable GUI templates with well-designed and harmonious style, and thus alleviate developer effort. In this paper, we would like to demonstrate an intelligent traffic analytics system called T4, which enables intelligent analytics over real-time and historical trajectories from vehicles. At the front end, we visualize the current traffic flow and the result trajectories of different types of queries, as well as the histograms of traffic flow and traffic lights. At the back end, T4 is able to support multiple types of common queries over trajectories, with compact storage, efficient indexing and fast pruning algorithms. The output of those queries can be used for further monitoring and analytics purposes. Moreover, we train deep models for traffic flow prediction and traffic light control to reduce traffic congestion. A preliminary version of T4 is available at https://sites.google.com/site/shengwangcs/torch. Understanding urban areas of interest (AOIs) is essential to decision making in various urban planning and exploration tasks. Such AOIs can be computed based on the geographic points that satisfy the user query. In this demo, we present an interactive visualization system of urban AOIs, supported by a parameter-free and efficient footprint method called AOI-shapes.
Compared to state-of-the-art footprint methods, the proposed AOI-shapes (i) is parameter-free, (ii) is able to recognize multiple regions/outliers, (iii) can detect inner holes, and (iv) supports incremental computation. We demonstrate the effectiveness and efficiency of the proposed AOI-shapes based on a real-world real estate dataset in Australia. A preliminary version of the online demo can be accessed at http://aoishapes.com/. For a tourist who wishes to stroll in an unknown city, it is useful to have a recommendation of not just the shortest routes but also routes that are pleasant. This paper demonstrates a system that provides pleasant route recommendation. Currently, we focus on routes that have plenty of greenery and bright views. The system measures pleasure scores by extracting colors or objects from Google Street View panorama images and re-ranks shortest paths in the order of the computed pleasure scores. The current prototype provides route recommendation for city areas in Tokyo, Kyoto and San Francisco. In this paper, we develop a neural attentive interpretable recommendation system, named NAIRS. A self-attention network, as a key component of the system, is designed to assign attention weights to interacted items of a user. This attention mechanism can distinguish the importance of the various interacted items in contributing to a user profile. Based on the user profiles obtained by the self-attention network, NAIRS offers personalized, high-quality recommendations. Moreover, it develops visual cues to interpret recommendations. This demo application with the implementation of NAIRS enables users to interact with a recommendation system, and it persistently collects training data to improve the system. The demonstration and experimental results show the effectiveness of NAIRS. In this work, we demonstrate the structured search capabilities of the GYANI indexing infrastructure. GYANI allows linguists, journalists, and scholars in the humanities to search large semantically annotated document collections in a structured manner by supporting queries with regular expressions between word sequences and annotations. In addition to this, we provide support for attaching semantics to words via annotations in the form of part-of-speech tags, named entities, temporal expressions, and numerical quantities. We demonstrate that by enabling such structured search capabilities we can quickly gather annotated text regions for various knowledge-centric tasks such as information extraction and question answering. The recent introduction of entity-centric implicit network representations of unstructured text offers novel ways for exploring entity relations in document collections and streams efficiently and interactively. Here, we present TopExNet as a tool for exploring entity-centric network topics in streams of news articles. The application is available as a web service at https://topexnet.ifi.uni-heidelberg.de. We propose KGdiff, a new interactive visualization tool for social media content focusing on entities and relationships. The core component is a layout algorithm that highlights the differences between two graphs. We apply this algorithm to knowledge graphs consisting of named entities and their relations extracted from text streams over different time periods. The visualization system provides additional information such as the volume and frequency ranking of entities and allows users to select which parts of the graph to visualize interactively.
On Twitter and news article collections, KGdiff allows users to compare different data subsets. Results of such comparisons often reveal topical or geographical changes in a discussion. More broadly, graph differences are useful for a wide range of relational data comparison tasks, such as comparing social interaction graphs, identifying changes in user behavior, or discovering differences in graphs from distinct sources, geographies, or political stances. Understanding and predicting the popularity of online items is an important open problem in social media analysis. Most of the recent work on popularity prediction is either based on learning a variety of features from full network data or on using generative processes to model the event time data. We identify two gaps in the current state-of-the-art prediction models. The first is the unexplored connection and comparison between the two aforementioned approaches. In our work, we bridge the gap between feature-driven and generative models by modelling social cascades with a marked Hawkes self-exciting point process. We then learn a predictive layer on top for popularity prediction using a collection of cascade histories. Secondly, the existing methods typically focus on a single source of external influence, whereas for many types of online content such as YouTube videos or news articles, attention is driven by multiple heterogeneous sources simultaneously, e.g., microblogs or traditional media coverage. We propose a recurrent neural network based model for asynchronous streams that connects multiple streams of different granularity via joint inference. We further design two new measures, one to explain the viral potential of videos, the other to uncover latent influences including seasonal trends. This work provides accurate and explainable popularity predictions, as well as computational tools for content producers and marketers to allocate resources for promotion campaigns. The goal of this thesis is to develop techniques for comparative summarisation of multimodal document collections. Comparative summarisation is extractive summarisation in comparative settings, where documents form two or more groups, e.g. articles on the same topic but from different sources. Comparative summarisation involves not only selecting representative and diverse samples within groups, but also samples that highlight commonalities and differences between the groups. We posit that comparative summarisation is a fruitful problem for diverse use cases, such as comparing content over time, authors, or distinct viewpoints. We formulate the problem of comparative summarisation by reducing it to a binary classification problem and define objectives to incorporate representativeness, diversity and comparativeness. We design new automatic and crowd-sourced evaluation protocols for summarisation evaluation that scale much better than evaluations requiring manually created ground-truth summaries. We show the efficacy of the approach on newly curated datasets of controversial news topics. We plan to develop new collection comparison methods for multimodal document collections. Our understanding of the web has been evolving from a large database of information to a Socio-Cognitive Space, where humans are not just using the web but participating in it. The World Wide Web has evolved into the largest source of information in history, and it continues to grow without any known agenda.
The web needs to be observed and studied to understand its various impacts on society (both positive and negative) and to shape the future of the web and society. This gave rise to the global grid of Web Observatories, which focus on and observe various aspects of the web. Web Observatories aim to share data sets, analysis tools, and applications and to collaborate with other web observatories across the world. We plan to design and develop a Web Observatory called to observe and understand online social cognition. We propose that social media on the web is acting as a Marketplace of Opinions where multiple users with differing interests exchange opinions. For a given trending topic on social media, we propose a model to identify the Signature of the trending topic, which characterizes the discourse around the topic. The share of videos in Internet traffic has been growing, e.g., people are now spending a billion hours watching YouTube videos every day. Therefore, understanding how videos capture attention on a global scale is also of growing importance for both research and practice. In online platforms, people can interact with videos in different ways -- there are behaviors of active participation (watching, commenting, and sharing) and those of passive consumption (viewing). In this paper, we take a data-driven approach to studying how human attention is allocated in online videos with respect to both active and passive behaviors. We first investigate the active interaction behaviors by proposing a novel metric to represent the aggregate user engagement on YouTube videos. We show this metric is correlated with video quality, stable over a video's lifetime, and predictable before the video's upload. Next, we extend the line of work on modelling video view counts by disentangling the effects of two dominant traffic sources -- related videos and YouTube search. Findings from this work can help content producers to create engaging videos and hosting platforms to optimize advertising strategies, recommender systems, and many more applications. Epidemic models and Hawkes point process models are two common model classes for information diffusion. Recent work has revealed the equivalence between the two for information diffusion modeling. This allows tools created for one class of models to be applied to the other. However, epidemic models and Hawkes point processes can be connected in more ways. This thesis aims to develop a rich set of mathematical equivalences and extensions, and use them to ask and answer questions in social media and beyond. Specifically, we present our plan to generalize the equivalence of the two model classes by extending it to Hawkes point process models with arbitrary memory kernels. We then outline a rich set of quantities describing diffusion, including diffusion size and extinction probability, introduced in the fields where the models were originally designed. Lastly, we discuss some novel applications of these quantities in a range of problems such as popularity prediction and popularity intervention. As heterogeneous verticals account for more and more of search engine results, users' preference for search results is largely affected by their presentation. Apart from texts, multimedia information such as images and videos has been widely adopted as it makes the search engine result pages (SERPs) more informative and attractive. It is more appropriate to regard the SERP as an information union rather than as separate search results, because the results interact with each other.
Considering these changes in search engines, we plan to better exploit the contents of search results displayed on SERPs through deep neural networks and to formulate the pagewise optimization of SERPs as a reinforcement learning problem. Networks can be extracted from a wide range of real systems, such as online social networks, communication networks and biological systems. Detection of cohesive groups in these graphs, primarily based on link information, is the goal of community detection. Community structures emerge when the nodes of a group are more likely to be linked to each other than to the rest of the network. The modules found can be disjoint or overlapping. Another relevant feature of networks is that they may evolve over time. Furthermore, nodes can carry valuable information that can improve the community detection process. Hence, in this work we propose to design a soft overlapping community detection method for static and dynamic social networks with node attributes. Preliminary results on a toy network are promising. Traditionally, recommenders have been based on a single-shot model built from past user actions. Conversational recommenders allow incremental elicitation of user preferences through user-system dialogue. For example, the system can ask about the user's preference toward a feature associated with the items. In such systems, it is important to design an efficient conversation, which minimizes the number of questions asked while maximizing the preference information obtained. Therefore, this research is intended to explore possible ways to design a conversational recommender with efficient preference elicitation. Specifically, it focuses on the order of questions. An idea is also proposed to suggest answers for each question asked, which can assist users in giving their feedback. Web-based image search engines differ greatly from general Web search engines. The intents or goals behind human interactions with image search engines are different. In image search, users mainly search for images instead of Web pages or online services. It is essential to know why people search for images because user satisfaction may vary as intent varies. Furthermore, image search engines show results differently. For example, grid-based placement is used in image search instead of the linear result list, so that users can browse the result list both vertically and horizontally. Different user intents and system UIs lead to different user behavior. Thus, it is hard to apply standard user behavior models developed for general Web search to image search. To better understand user intent and behavior in image search scenarios, we plan to conduct lab-based user studies, field studies, and commercial search log analysis. We then propose user behavior models based on observations from the data analysis to improve the performance of Web image search engines. As computing systems are more frequently and more actively intervening to improve people's work and daily lives, it is critical to correctly predict and understand the causal effects of these interventions. Conventional machine learning methods, built on pattern recognition and correlational analyses, are insufficient for causal analysis. This tutorial will introduce participants to concepts in causal inference and counterfactual reasoning, drawing from a broad literature in statistics, the social sciences, and machine learning.
We will first motivate the use of causal inference through examples in domains such as recommender systems, social media datasets, health, education and governance. To tackle such questions, we will introduce the key ingredient that causal analysis depends on -- counterfactual reasoning -- and describe the two most popular frameworks based on Bayesian graphical models and potential outcomes. Based on this, we will cover a range of methods suitable for doing causal inference with large-scale online data, including randomized experiments, observational methods like matching and stratification, and natural experiment-based methods such as instrumental variables and regression discontinuity. We will also focus on best practices for evaluation and validation of causal inference techniques, drawing from our own experiences. After attending this tutorial, participants will understand the basics of causal inference, be able to appropriately apply the most common causal inference methods, and be able to recognize situations where more complex methods are required. This hands-on half-day tutorial consists of two sessions. Part I covers the following topics: Preliminaries; Paired and two-sample t-tests, confidence intervals; One-way ANOVA and two-way ANOVA without replication; Familywise error rate. Part II covers the following topics: Tukey's HSD test, simultaneous confidence intervals; Randomisation test and randomised Tukey HSD test; What's wrong with statistical significance tests?; Effect sizes, statistical power; Topic set size design and power analysis; Summary: how to report your results. Participants should have some prior knowledge about the very basics of statistical significance testing and are strongly encouraged to bring a laptop with R already installed. They will learn how to design and conduct statistical significance tests for comparing the mean effectiveness scores of two or more systems appropriately, and to report on the test results in an informative manner. Matching is the key problem in search and recommendation, that is, to measure the relevance of a document to a query or the interest of a user in an item. Previously, machine learning methods that learn a matching function from labeled data, also referred to as "learning to match", have been exploited to address the problem. In recent years, deep learning has been successfully applied to matching and significant progress has been made. Deep semantic matching models for search and neural collaborative filtering models for recommendation are becoming the state-of-the-art technologies. The key to the success of the deep learning approach is its strong ability to learn representations and generalize matching patterns from raw data (e.g., queries, documents, users, and items, particularly in their raw forms). In this tutorial, we aim to give a comprehensive survey of recent progress in deep learning for matching in search and recommendation. Our tutorial is unique in that we try to give a unified view of search and recommendation. In this way, we expect that researchers from the two fields can gain a deep understanding and accurate insight into both spaces, stimulate more ideas and discussions, and promote the development of technologies. The tutorial mainly consists of three parts. Firstly, we introduce the general problem of matching, which is fundamental in both search and recommendation. Secondly, we explain how traditional machine learning techniques are utilized to address the matching problems in search and recommendation.
Lastly, we elaborate on how deep learning can be effectively used to solve the matching problems in both tasks. Researchers and practitioners from different disciplines have highlighted the ethical and legal challenges posed by the use of machine-learned models and data-driven systems, and the potential for such systems to discriminate against certain population groups, due to biases in algorithmic decision-making systems. This tutorial aims to present an overview of algorithmic bias / discrimination issues observed over the last few years and the lessons learned, key regulations and laws, and the evolution of techniques for achieving fairness in machine learning systems. We will motivate the need for adopting a "fairness-first" approach (as opposed to viewing algorithmic bias / fairness considerations as an afterthought) when developing machine learning based models and systems for different consumer and enterprise applications. Then, we will focus on the application of fairness-aware machine learning techniques in practice, by presenting case studies from different technology companies. Based on our experiences in industry, we will identify open problems and research challenges for the data mining / machine learning community. A small sketch of two common group-fairness measures appears below. The explosive growth of fake news and its erosion of democracy, justice, and public trust has increased the demand for fake news detection. As an interdisciplinary topic, the study of fake news encourages a concerted effort of experts in computer and information science, political science, journalism, social science, psychology, and economics. A comprehensive framework to systematically understand and detect fake news is necessary to attract and unite researchers in related areas to conduct research on fake news. This tutorial aims to clearly present (1) fake news research, its challenges, and research directions; (2) a comparison between fake news and other related concepts (e.g., rumors); (3) the fundamental theories developed across various disciplines that facilitate interdisciplinary research; (4) various detection strategies unified under a comprehensive framework for fake news detection; and (5) the state-of-the-art datasets, patterns, and models. We present fake news detection from various perspectives, which involve news content and information in social networks, and broadly adopt techniques in data mining, machine learning, natural language processing, information retrieval and social search. In light of the upcoming 2020 U.S. presidential election, the tutorial also clarifies the challenges for automatic, effective, and efficient fake news detection. The HS2019 tutorial will cover topics from an area of information retrieval (IR) with significant societal impact -- health search. Whether it is searching patient records, helping medical professionals find best-practice evidence, or helping the public locate reliable and readable health information online, health search is a challenging area for IR research with an actively growing community and many open problems. This tutorial will provide attendees with a full stack of knowledge on health search, from understanding users and their problems to practical, hands-on sessions on current tools and techniques, current campaigns and evaluation resources, as well as important open questions and future directions.
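As a concrete companion to the fairness tutorial summarized above, the short sketch below computes two widely used group-fairness diagnostics, the demographic parity gap and the equal opportunity gap. The toy labels, predictions, and binary protected attribute are invented purely for illustration and are not drawn from any case study in the tutorial.

```python
import numpy as np

# Hypothetical predictions, labels, and a binary protected attribute; the
# values are made up for illustration and carry no real-world meaning.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 1, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between the groups."""
    rates = [y_pred[group == g].mean() for g in (0, 1)]
    return abs(rates[0] - rates[1])

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates between the groups."""
    tprs = [y_pred[(group == g) & (y_true == 1)].mean() for g in (0, 1)]
    return abs(tprs[0] - tprs[1])

print(demographic_parity_gap(y_pred, group))         # 0.0 on this toy data
print(equal_opportunity_gap(y_true, y_pred, group))  # ~0.33 on this toy data
```

A "fairness-first" workflow would track such gaps alongside accuracy throughout model development rather than auditing them only after deployment.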
Preserving the privacy of users is a key requirement of web-scale data mining applications and systems such as web search, recommender systems, crowdsourced platforms, and analytics applications, and has witnessed a renewed focus in light of recent data breaches and new regulations such as GDPR. In this tutorial, we will first present an overview of privacy breaches over the last two decades and the lessons learned, key regulations and laws, and the evolution of privacy techniques leading to the definition and techniques of differential privacy. Then, we will focus on the application of privacy-preserving data mining techniques in practice, by presenting case studies such as Apple's differential privacy deployment for iOS / macOS, Google's RAPPOR, LinkedIn Salary, and Microsoft's differential privacy deployment for collecting Windows telemetry. We will conclude with open problems and challenges for the data mining / machine learning community, based on our experiences in industry. A minimal sketch of the randomized-response idea behind such deployments appears below. With the explosive growth of online service platforms, an increasing number of people and enterprises are doing everything online. In order for organizations, governments, and individuals to understand their users and promote their products or services, it is necessary for them to analyse big data and recommend media or online services in real time. Effective recommendation of items of interest to consumers has become critical for enterprises in domains such as retail, e-commerce, and online media. Driven by these business successes, academic research in this field has also been active for many years. Though many scientific breakthroughs have been achieved, there are still tremendous challenges in developing effective and scalable recommendation systems for real-world industrial applications. Existing solutions focus on recommending items based on pre-set contexts, such as time, location, and weather. The big data sizes and complex contextual information add further challenges to the deployment of advanced recommender systems. This workshop aims to bring together researchers with wide-ranging backgrounds to identify important research questions, to exchange ideas from different research disciplines, and, more generally, to facilitate discussion and innovation in the area of context-aware recommender systems and big data analytics. The first workshop on Interactive Data Mining is held in Melbourne, Australia, on February 15, 2019 and is co-located with the 12th ACM International Conference on Web Search and Data Mining (WSDM 2019). The goal of this workshop is to share and discuss research and projects that focus on interaction with and interactivity of data mining systems. The program includes an invited speaker, presentations of research papers, and a discussion session. The task intelligence workshop at the 2019 ACM Web Search and Data Mining (WSDM) conference comprised a mixture of research paper presentations, reports from data challenge participants, invited keynote(s) on broad topics related to tasks, and a workshop-wide discussion about task intelligence and its implications for system development.
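The local differential privacy deployments cited in the privacy tutorial above, such as Google's RAPPOR and Microsoft's telemetry collection, build on the randomized-response idea sketched below. The epsilon value, the single-bit report per user, and the simulated population are illustrative assumptions, not a description of any production system.

```python
import numpy as np

def randomize(bits, epsilon, rng):
    """Report each user's private bit truthfully with probability
    e^eps / (e^eps + 1), otherwise flip it (eps-local differential privacy)."""
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    keep = rng.random(bits.shape) < p_keep
    return np.where(keep, bits, 1 - bits)

def estimate_rate(reports, epsilon):
    """Debias the noisy reports to estimate the true fraction of 1-bits."""
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return (reports.mean() - (1.0 - p_keep)) / (2.0 * p_keep - 1.0)

rng = np.random.default_rng(42)
true_bits = (rng.random(100_000) < 0.3).astype(int)  # 30% of users hold a 1
reports = randomize(true_bits, epsilon=1.0, rng=rng)
print(true_bits.mean(), round(estimate_rate(reports, epsilon=1.0), 3))
```

With epsilon = 1, any individual report is plausibly deniable, yet the debiased aggregate stays close to the true rate because the injected noise averages out over many users.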