Columns: text (string, lengths 3 to 1.74M characters), label (class label, 2 classes), source (string, 3 distinct values).
How to average values in list of lists that contain same parameter. <p>I want to average the 2nd values in all lists that contain the same 1st value, and convert these averaged lists into a new list of lists. </p> <p>For example i want to convert this:</p> <pre><code>[['foo', 13], ['foo', 15], ['bar', 14], ['bar', 16], ['bar', 5]] </code></pre> <p>to this: </p> <pre><code>[['foo', avg(13+15)], ['bar', avg(14+16+5)]] </code></pre> <p>Any ideas of a simple way to do this?</p>
0non-cybersec
Stackexchange
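A minimal Python sketch of one way to do the averaging asked about in the Stack Exchange entry above (the grouping approach and names are illustrative, not taken from the post):

```python
from collections import defaultdict

def average_pairs(pairs):
    """Group [key, value] pairs by key and average each group's values."""
    groups = defaultdict(list)          # key -> list of values
    for key, value in pairs:
        groups[key].append(value)
    # dicts keep insertion order in Python 3.7+, so 'foo' stays before 'bar'
    return [[key, sum(vals) / len(vals)] for key, vals in groups.items()]

data = [['foo', 13], ['foo', 15], ['bar', 14], ['bar', 16], ['bar', 5]]
print(average_pairs(data))  # [['foo', 14.0], ['bar', 11.666666666666666]]
```

Grouping first and averaging afterwards keeps it to one pass over the data plus one pass over the groups.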
I feel like I have to hide sometimes dating someone with kids is this right?. I'm in a relationship with a girl with 2 kids girl 7 and boy 4. About 5 months now. They love me to death we feel like a perfect fit we have a blast doing nothing they say all the time they wish I was their stepdad they love me kiss me on the cheek and beg me to stay over at night but. The dad is sorry they barely know him sometimes they cry when they have to go visit him. There's things I can't do yet like stay over or babysit or put pictures up on Facebook because she's scared they might tell him or he might see them and try to take her to court for full custody or her mom will tell her she shouldnt do stuff with us all because it might make him mad. he has said he doesn't want them to be around me or he will try to go to court which he probably won't he can't even afford child support but it sucks having to hide and hold back because of other people's opinions and what they might do. He has no legitimate reason for not wanting them to be around me besides just pure jealousy or not wanting the mom to be happy hes like that. What is the best way to handle this situation.
0non-cybersec
Reddit
TIFU: By "meatspinning" my mom.. My girlfriend was over for a few days, and unfortunately I live with my parents while I attend University. We have a goofy relationship..and more often than not I am doing something awkward or crazy to get a laugh out of her. She left the room without saying much and I assumed she walked into the bathroom next to my bedroom, as shortly after she left, I hear the door close. I pulled off my pants and slowly unlocked to door to avoid making a sound... Got my dick in a full helicopter then burst into the room. My mom is sitting there screaming at the horror at the tornado of cock headed towards her. (cocknado) I am so embarrassed.. we haven't made eye contact yet. Will keep you up to date when the awkward silence ends Edit: Mom: Don't worry about it. Your dad does that all the time... Me: TO BATTLE!!!!..JK -She just started laughing while making the salad for dinner and said "what the fuck Michael" Edit:Turned out OK. We had a good chuckle at dinner and she knows I was trying to bug my Gf but now she thinks I am a weirdo haha.
0non-cybersec
Reddit
What are &quot;nwnode:&quot; URLs, and why did Lion break them?. <p>After installing OS X 10.7, <a href="http://db.tt/dVppKR6" rel="nofollow noreferrer">Dropbox</a> presented the following error upon loading: "URLs with the type "nwnode:" are not supported". I also got this error from <a href="http://cocoatech.com/" rel="nofollow noreferrer">Path Finder</a> (although I don't know what I did to cause it).</p> <p>Both applications only gave me the error once and have not done it since.</p> <p>What are nwnode: URLs? What purpose did they serve? And why did Lion stop supporting them?</p> <p><img src="https://dl.dropbox.com/u/2044/nwnode_error.png" alt="URLs with the type 'nwnode:' are not supported" /></p>
0non-cybersec
Stackexchange
Straight-line drawing of regular polyhedra. <blockquote> <p>Find the minimum number of straight lines needed to cover a crossing-free straight-line drawing of the icosahedron <span class="math-container">$(13\dots 15)$</span> and of the dodecahedron <span class="math-container">$(9\dots 10)$</span> (in the plane). </p> </blockquote> <p>For example, the cube can be covered by 7 lines:</p> <p><a href="https://i.stack.imgur.com/l8OqJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/l8OqJ.png" alt="enter image description here"></a></p> <p>(The problem was posed 13.10.2017 by Alexander Wolff and Alexander Ravsky on page <a href="http://www.math.lviv.ua/szkocka/viewpage.php?page=78" rel="nofollow noreferrer">78</a> of <a href="http://www.math.lviv.ua/szkocka/viewbook.php?vol=1" rel="nofollow noreferrer">Volume 1</a> of the <a href="http://www.math.lviv.ua/szkocka" rel="nofollow noreferrer">Lviv Scottish Book</a>. </p> <p>The prize for solution: <em>A bottle of Franconia wine!</em>).</p>
0non-cybersec
Stackexchange
Explanation of POCO. <p>I'm wondering if anyone can give a solid explanation (with example) of POCO (Plain Old CLR Object). I found a <a href="http://en.wikipedia.org/wiki/Plain_Old_CLR_Object" rel="noreferrer">brief explanation on Wikipedia</a> but it really doesn't give a solid explanation.</p>
0non-cybersec
Stackexchange
It's a free car.
0non-cybersec
Reddit
The computing system that won 'Jeopardy!' is helping doctors fight cancer.
0non-cybersec
Reddit
How to use spot instance with amazon elastic beanstalk?. <p>I have one infra that use amazon elastic beanstalk to deploy my application. I need to scale my app adding some spot instances that EB do not support.</p> <p>So I create a second autoscaling from a launch configuration with spot instances. The autoscaling use the same load balancer created by beanstalk.</p> <p>To up instances with the last version of my app, I copy the user data from the original launch configuration (created with beanstalk) to the launch configuration with spot instances (created by me).</p> <p>This work fine, but:</p> <ol> <li><p>how to update spot instances that have come up from the second autoscaling when the beanstalk update instances managed by him with a new version of the app?</p> </li> <li><p>is there another way so easy as, and elegant, to use spot instances and enjoy the benefits of beanstalk?</p> </li> </ol> <p><strong>UPDATE</strong></p> <p>Elastic Beanstalk add support to spot instance since 2019... see: <a href="https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2019-11-25-spot.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2019-11-25-spot.html</a></p>
0non-cybersec
Stackexchange
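For the Elastic Beanstalk question above, a hedged sketch of the "copy the user data across" step using boto3. The launch-configuration names and the spot price are assumptions for illustration only, and (as the post's update notes) native spot support in Beanstalk since 2019 makes this workaround largely unnecessary:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical names: the config Beanstalk created and the spot config we manage.
EB_LAUNCH_CONFIG = "awseb-my-env-launchconfig"      # assumption, not a real name
SPOT_LAUNCH_CONFIG = "my-app-spot-launchconfig-v2"  # assumption

# Read the user data (and AMI/instance type) from the Beanstalk-managed config...
eb_config = autoscaling.describe_launch_configurations(
    LaunchConfigurationNames=[EB_LAUNCH_CONFIG]
)["LaunchConfigurations"][0]

# ...and create a spot launch configuration that reuses it, so spot instances
# boot with the same application version as the Beanstalk-managed instances.
# Note: DescribeLaunchConfigurations returns UserData base64-encoded; whether it
# needs decoding/re-encoding here is an assumption to verify against the SDK docs.
autoscaling.create_launch_configuration(
    LaunchConfigurationName=SPOT_LAUNCH_CONFIG,
    ImageId=eb_config["ImageId"],
    InstanceType=eb_config["InstanceType"],
    UserData=eb_config["UserData"],
    SpotPrice="0.05",  # illustrative bid, not a recommendation
)
```

This only addresses creating the spot configuration; keeping it in sync after each Beanstalk deployment would still require re-running a step like this.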
Our focus will be to get past their total and look for some reverse-swing and spin in the last two days of the match..
0non-cybersec
Reddit
I've wanted to build a PC for nearly 10 years, and after finishing school, generating enough income, and accumulating parts over the past few months, I've finally built my first PC!. Ever since my dad used to put together computers in the old Athlon days (for work use, not gaming), I've always fantasized about putting together my own, but school always got in the way. Well now, I'm no longer a student, and after working for a few months, I deliberated for a while on when would be a good time to build. And then that [infamous Amazon pricing error](https://www.reddit.com/r/buildapcsales/comments/9o122l/gpu_evga_gtx_1070_ti_8gb_sc_gaming_black_edition/) happened for those EVGA 1070ti's, and that became the first part I bought lol (yes, I was one of the lucky few that got one!) Thanks to /r/buildapcsales and a lot of patiently waiting, I finally got all my parts and put it together. My build is listed below along with the prices I paid. I'd say I got some pretty good deals and I'm really happy with the final result. Nothing too flashy with crazy RGB, but still looks awesome! Obligatory pics: [https://imgur.com/a/U4s5yGe](https://imgur.com/a/U4s5yGe) [PCPartPicker part list](https://pcpartpicker.com/list/XN8Kmq) / [Price breakdown by merchant](https://pcpartpicker.com/list/XN8Kmq/by_merchant/) |Type|Item|Price| |:-|:-|:-| |**CPU**|[Intel - Core i7-8700K 3.7 GHz 6-Core Processor](https://pcpartpicker.com/product/sxDzK8/intel-core-i7-8700k-37ghz-6-core-processor-bx80684i78700k)|Purchased For $319.33| |**CPU Cooler**|[Scythe - Mugen 5 Rev. B 51.17 CFM CPU Cooler](https://pcpartpicker.com/product/8GBrxr/scythe-mugen-5-rev-b-512-cfm-cpu-cooler-scmg-5100)|Purchased For $38.92| |**Motherboard**|[Gigabyte - Z390 AORUS ULTRA ATX LGA1151 Motherboard](https://pcpartpicker.com/product/n6gzK8/gigabyte-z390-aorus-ultra-atx-lga1151-motherboard-z390-aorus-ultra)|Purchased For $205.66| |**Memory**|[G.Skill - Ripjaws V Series 16 GB (2 x 8 GB) DDR4-3000 Memory](https://pcpartpicker.com/product/tMvZxr/gskill-memory-f43000c15d16gvgb)|Purchased For $109.99| |**Storage**|[HP - EX920 1 TB M.2-2280 Solid State Drive](https://pcpartpicker.com/product/88bwrH/hp-ex920-1tb-m2-2280-solid-state-drive-2yy47aaabc)|Purchased For $179.99| |**Video Card**|[EVGA - GeForce GTX 1070 Ti 8 GB SC GAMING ACX 3.0 Black Edition Video Card](https://pcpartpicker.com/product/jvwqqs/evga-geforce-gtx-1070-ti-8gb-sc-gaming-acx-30-black-edition-video-card-08g-p4-5671-kr)|Purchased For $294.43| |**Case**|[Fractal Design - Design Define R6 USB-C - TG ATX Mid Tower Case](https://pcpartpicker.com/product/z3kj4D/fractal-design-design-define-r6-usb-c-tg-atx-mid-tower-case-fd-ca-def-r6c-bk-tgl)|Purchased For $119.99| |**Power Supply**|[EVGA - SuperNOVA G1+ 750 W 80+ Gold Certified Fully-Modular ATX Power Supply](https://pcpartpicker.com/product/xTMwrH/evga-supernova-g1-750w-80-gold-certified-fully-modular-atx-power-supply-120-gp-0750-x1)|Purchased For $56.99| |**Total**||**$1325.30**| &#x200B;
0non-cybersec
Reddit
How similar are other mammals taste buds to ours?. Also, if other mammals only eat to survive, why do my dogs drool over steak and things like that when they've already been fed that night?
0non-cybersec
Reddit
Debt Collection Company Was Using Fake Sheriffs, Judges and Courtroom To Scare Debtors.
0non-cybersec
Reddit
Best Tooth extraction Ever.
0non-cybersec
Reddit
One of the most important guides.
0non-cybersec
Reddit
Colony of ants trying to survive a flooded river.
0non-cybersec
Reddit
Got my first Direct Deposit at a new job. It is significantly less than predicted.. I know that this sounds like something that everyone says, but I think that there must be a real problem. I recently got a new job and it is my first "real" job at a local outdoors store. I get paid $10 an hour and have worked for 24 hours (4-4 hour shifts and 1-8 hour shift) I know that that does not mean that I will get $240 because of taxes and etc, but i was expecting more than the $119.27 that was deposited into my account. I did not claim myself when filling out the wp-40. I am wondering if maybe this direct deposit did not cover all the hours that I have worked and that they may appear on my next paycheck? I did not get a physical paycheck, just the direct deposit. Any advice or information would be greatly appreciated! thanks
0non-cybersec
Reddit
What should you eat when you have headaches? | Diễn Đàn Học Tập - Kênh Tài Liệu Học Hành - Gia Sư.
0non-cybersec
Reddit
JAX-RS and EJB exception handling. <p>I'm having trouble handling exceptions in my RESTful service:</p> <pre><code>@Path("/blah") @Stateless public class BlahResource { @EJB BlahService blahService; @GET public Response getBlah() { try { Blah blah = blahService.getBlah(); SomeUtil.doSomething(); return blah; } catch (Exception e) { throw new RestException(e.getMessage(), "unknown reason", Response.Status.INTERNAL_SERVER_ERROR); } } } </code></pre> <p>RestException is a mapped exception:</p> <pre><code>public class RestException extends RuntimeException { private static final long serialVersionUID = 1L; private String reason; private Status status; public RestException(String message, String reason, Status status) { super(message); this.reason = reason; this.status = status; } } </code></pre> <p>And here is the exception mapper for RestException:</p> <pre><code>@Provider public class RestExceptionMapper implements ExceptionMapper&lt;RestException&gt; { public Response toResponse(RestException e) { return Response.status(e.getStatus()) .entity(getExceptionString(e.getMessage(), e.getReason())) .type("application/json") .build(); } public String getExceptionString(String message, String reason) { JSONObject json = new JSONObject(); try { json.put("error", message); json.put("reason", reason); } catch (JSONException je) {} return json.toString(); } } </code></pre> <p>Now, it is important for me to provide both a response code AND some response text to the end user. However, when a RestException is thrown, this causes an EJBException (with message "EJB threw an unexpected (non-declared) exception...") to be thrown as well, and the servlet only returns the response code to the client (and not the response text that I set in RestException).</p> <p>This works flawlessly when my RESTful resource isn't an EJB... any ideas? I've been working on this for hours and I'm all out of ideas.</p> <p>Thanks!</p>
0non-cybersec
Stackexchange
How can I keep track of a changing file?. <p>It will be changing name and content. And eventually inode (after a backup).</p> <p>Is there a way of keeping a fixed reference ID to a file? At least, if I have a list of inodes, how can I connect it with the new inodes when I backup or transfer the file to a new partition?</p>
0non-cybersec
Stackexchange
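For the file-tracking question above, one illustrative approach (my own sketch, not a standard tool) is to mint your own stable ID and keep a small sidecar index mapping that ID to the file's current path, recording the inode only as a cross-check, since inodes change across backups and partitions:

```python
import json, os, uuid

INDEX = "file_index.json"  # hypothetical sidecar mapping: stable ID -> path

def load_index():
    return json.load(open(INDEX)) if os.path.exists(INDEX) else {}

def register(path):
    """Give a file a stable ID that survives renames, edits and backups."""
    index = load_index()
    file_id = str(uuid.uuid4())
    index[file_id] = {"path": path, "inode": os.stat(path).st_ino}
    json.dump(index, open(INDEX, "w"), indent=2)
    return file_id

def update_path(file_id, new_path):
    """Call after moving/renaming the file, or after restoring it elsewhere."""
    index = load_index()
    index[file_id] = {"path": new_path, "inode": os.stat(new_path).st_ino}
    json.dump(index, open(INDEX, "w"), indent=2)
```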
The latest iteration of my setup..
0non-cybersec
Reddit
Defcon: Practical Cell Phone Spying.
1cybersec
Reddit
My lil bro did this chiefs logo. He has self esteem problems so please say something nice to him in the comments. U don’t have to upvote..
0non-cybersec
Reddit
I took pics of (almost) every outfit I wore to work the past two weeks. Feedback?.
0non-cybersec
Reddit
Bi polar.
0non-cybersec
Reddit
The Eastern Sierras at Dusk - Taken from the White Mountains in California.
0non-cybersec
Reddit
Why are those ducks swimming in grass? (x-post /r/aww).
0non-cybersec
Reddit
Protests and Patriotism | the philosophy behind Colin Kaepernick, his protests and society's response.
0non-cybersec
Reddit
ar X iv :p hy si cs /0 60 80 71 v1 [ ph ys ic s. ed -p h] 7 A ug 2 00 6 Study of the Damped Pendulum. Akhil Arora, Rahul Rawat, Sampreet Kaur & P. Arun Department of Physics & Electronics, S.G.T.B. Khalsa College, University of Delhi, Delhi - 110 007, India∗ Abstract Experiments on the oscillatory motion of a suspended bar magnet throws light on the damping effects acting on the pendulum. The viscous drag offered by air was found the be the main contributor for slowing the pendulum down. The nature and magnitude of the damping effects were shown to be strongly dependent on the amplitude. 1 http://arXiv.org/abs/physics/0608071v1 I. INTRODUCTION The simple pendulum is pedagogically a very important experiment. As a result of it’s simplicity, it is introduced at high school level. The equation of motion (EOM) of a pendulum undergoing simple harmonic motion (SHM) is given as d2y dt2 = − g L y = −ω2oy whose solution is easily derivable and can be taught in a class which has been introduced to calculus. The EOM can be modified to account for damping as seen in a real pendulum and yet the equation and it’s solution remains trivial as1 d2y dt2 + ( b L ) dy dt + ω2oy = 0 y(t) = e−βt(Acosω′t + Bsinω′t) (1) where β = (b/2mL) and ω′ = √ ω2o − β2. However, this approach taken by textbooks over- simplifies the complex motion of the pendulum and implies that only the pendulum’s amplitude attenuates with time. On the contrary along with the amplitude even the oscillation’s time period varies2, a feature overlooked in classroom physics and carried forward for a long time by students. The difficulty in measuring these variations also does not encourage routine experimentation in high schools/ undergraduate laboratories. However, with the advent of micro-computers such measurements can now be made easily. Most of the experiments reported have measured the change in amplitude2,3,4 while examples of measuring variation in time period is rare5. Since, both amplitude and the time period varies with successive oscillation, one can expect the pendulum’s velocity to vary with time at a given position. While Gregory1 used knowledge of the oscillation time period to extract information on the pendulum’s velocity, Avinash Singh et al6 used a novel method to estimate the pendulum’s velocity. A bar magnet was attached to a rigid semicircular aluminum frame of radius ’L’ which pivoted about the center of the circle such that the bar magnet oscillates through a coil kept at the mean position. As the magnet periodically passed through the coil, it generated a series of emf pulses. The arrangement with proper circuitry determined the peak emf. Avinash et al6 approximated the 2 peak emf (ξmax) as ξmax ≈ ( dφ dt ) max ωmax where ωmax is the maximum velocity as the bar magnet passed through the mean position. This method has it’s advantage when one proposes to study the damping effects in a pendu- lum. Most of the works studying the variation in oscillation amplitude4,7,8 with time have the pendulum’s suspension connected to a variable resistance (potentiometer) which introduces a sliding friction in the pendulum’s motion. Complex mathematics with assumption that all damping contributors act independently is then used to filter out information of each con- tribution. Wang et al9 used a novel but costly method using Doppler effect to monitor the position of the pendulum to study it’s damping. Thus, Avinash et al6 provides a interesting yet cheap method to study the damped pendulum. 
While they rightly pointed out that several parameters of the experiment such as velocity and strength of the magnet and the number of turns in the coil can be varied, they did not explicitly discuss them theoretically or study these factors experimentally. Hence, in this manuscript, we have furthered the study made in ref 6 and have tried to address these issues. II. EXPERIMENTAL SETUP Our pendula was made by suspending a standard bar magnet by a cotton thread. The tread was fastened to a small hook drilled into one pole of the bar magnet. The length of the bar magnet (2l) was 7cm and the cotton thread (Ls) used was 53cm long. A coil of 1000 turns was kept near the pendula’s mean position at a distance ’d’ from the magnet’s lower pole (see fig 1). The magnetic field at point ’A’ is evaluated by B = µom 4π [ 1 BA2 − 1 CA2 ] (2) where m is the dipole moment. ’AC’ and ’BA’ can be written in terms of the pendulum’s position (angle Θ) using the cosine law. That is, BA2 = OB2 + OA2 − OB.OAcosΘ AC2 = OC2 + OA2 − OC.OAcosΘ 3 where Θ L 2l d A C B O D E FIG. 1: Pendulum with the mass being replaced by a bar magnet. The detecting coil is at ’A’. OC = Ls + 2l OB = Ls OA = Ls + 2l + d hence, BA2 = L2s + (Ls + 2l + d) 2 − Ls.(Ls + 2l + d)cosΘ AC2 = (Ls + 2l) 2 + (Ls + 2l + d) 2 − (Ls + 2l).(Ls + 2l + d)cosΘ Based on the assumption that 2l and d are relatively small compared to Ls, the higher powers of 2l and d can be neglected. Hence, eqn(2) can be written as B ≈ µoM 4π [ 2l L2s(Ls + 4l + d)(2 − cosΘ) ] The induced emf is proportional to the rate of change in the number of magnetic lines cutting the coil. dB dt ≈ − µoM 4π [ 2l L2s(Ls + 4l + d) ] [ sinΘ (2 − cosΘ)2 ] dΘ dt 4 Based on this, the respective induced emf can be written as ξ = −N dB dt ≈ µoMN 4π [ 2l L2s(Ls + 4l + d) ] [ sinΘ (2 − cosΘ)2 ] dΘ dt (3) where N is the number of turnings in the coil. Eqn(3) can be written in a compact form ξ = ξo [ sinΘ (2 − cosΘ)2 ]( dΘ dt ) (4) where ξo = µoMN 4π [ 2l L2s(Ls + 4l + d) ] (5) Thus, as the distance between the magnet and detecting coil is increased, the induced emf decreases. Infact the induced emf is quite weak and is amplified by an op amp circuit. The high input impedance of the IC741 opamp ensures that a true measurement of the emf is made. To digitalise this analog signal (see fig 1, ref 6) using an Analog to Digital Convertor ADC-0809 (see fig 2a), the amplified output is rectified and the peak value is held by charging a capacitor. The capacitor is discharged via a large resistance so that it retains the peak value till the next peak value arrives. We require the ADC to start conversion once the peak value is attained by the capacitor. This implies a synchronization between the input emf pulses and the ADC’s start of conversion (SOC) pulses. To achieve this synchronization it is best to generate the required SOC pulse by wave-shaping the input itself. The amplified input after rectification is fed to a comparator which compares to +1v. This is to avoid spurious/accidental triggering due to noise. The infinite gain results in pulses with sharp edges. The width of these pulses are approximately To/4 (for our pendula ≈ 390ms). This would be too large for serving as a SOC and hence is reduced to a 5µs pulse using a monostable timer made with IC55510. The sequencing and synchronization can be understood from the various waveforms shown in fig 2b. 
The designed circuit digitalises the analog emf and on completion sends an EOC to the computer or microprocessor kit (in case of a microprocessor this is done through a programmable I/O IC8155 chip, details of which can be found in the book by Goankar11) which then reads the eight bit data and stores it for retrivial. This project was done using an 8085 microprocessor kit. The programme and flowchart used is detailed in the Appendix. 5 1v + 2 3 555 SOC ALE 26 6 22 7 EOC 0809 clk 10 + − + − (i) (i) (iv) data from coil (a) (ii) (iii) (v) from coil (i) (ii) (iii) (iv) (v) t (b) +1volts FIG. 2: The (a) schematic diagram of the circuit used and the (b) important waveforms at the points marked in the circuit. The reliability of our circuit can be tested by measuring the maximum emf induced in the coil for varying distances ’d’. Eq(4) shows that the measured maximum emf would be directly proportional to ξo which inturn is inversely proportional to ’d’ (see eqn 5). Fig 3 6 1.5 2 2.5 3 3.5 4 1.2 1.4 1.6 1.8 2 2.2 2.4 2.6 2.8 3 ξ m a x d (in cm) FIG. 3: Variation in the maximum induced emf with increasing distance between the coil and the magnet. shows the variation in the experimentally determined ξmax with ’d’. While the inverse nature is evident, the value of (Ls + 4l) as returned by curve fitting eqn(5) on our data is substantially off mark from the actual lengths. This is expected since eqn(4) and (5) are very simplified approximations. III. VARIATION OF INDUCED EMF WITH INITIAL DISPLACEMENT A. While undergoing undamped oscillation The velocity of an undamped pendulum undergoing SHM is given as ( dΘ dt ) = ωo √ Θ2m − Θ2 where ωo = √ g/(Ls + 2l) is the frequency of oscillation and Θm is the initial displacement given to the pendulum. Therefore, the emf induced by our pendulum undergoing undamped SHM 7 would be given as (using eq 4) ξ ≈ ωoξo   sinΘ √ Θ2m − Θ2 (2 − cosΘ)2   (6) The variation in induced emf with time of an undamped pendulum undergoing SHM is as cal- culated using eqn(6) is shown in fig(4). The maximum angular displacement used to generate the graph using eqn(6) was 5o. The emf pulse shown in fig 4 is only for half a cycle starting from one extreme position to the opposite extreme. As the magnet approaches the coil, the flux increases and as it crosses the mean position, the emf is negative since the magnet is receding from the coil. Eventhough the velocity (dΘ/dt) is maximum at the mean position, since the variation in flux (dφ/dt) is zero, the induced emf is zero as the pendulum passes the mean position. -4-2 02 4 -0.1 -0.05 0 0.05 0.1 �dBdt �0 �10�3" � (in radian) ! � sin(x)[2� os(x)℄2p�2m � x2 FIG. 4: A measure of the induced emf with oscillating angle. The graph was generated using eqn(6) with Θm = 5 o. The position of the pendulum when the maximum emf is generated (between 0 < Θ < Θm) can be found as a problem of maxima and minima dξ dΘ = ξo    (2 − cosΘ)2 [ sinΘ d dΘ ( dΘ dt ) + cosΘ ( dΘ dt )] − (2 − cosΘ)sin2Θ ( dΘ dt ) (2 − cosΘ)3    = ξo    (2 − cosΘ) [ sinΘ d dΘ ( dΘ dt ) + cosΘ ( dΘ dt )] − sin2Θ ( dΘ dt ) (2 − cosΘ)2    8 For cases of small angle oscillations eqn(??) reduces to dξ dΘ = ξo [ Θ d dΘ ( dΘ dt ) + (1 − Θ2) ( dΘ dt )] (7) dξ dΘ = ωoξo   −2Θ2 2 √ Θ2m − Θ2 + (1 − Θ2) √ Θ2m − Θ2   Θ2 = (1 − Θ2)(Θ2m − Θ 2) Solving the quadratic equation Θ4 − (2 + Θ2m)Θ 2 + Θ2m = 0 we have Θpeak = ± Θm√ 2 (8) Since eqn(7) was used to determine position of extrema, the above condition is only valid for undamped small angle oscillations. 
The maxima as per this condition for magnet oscillating through Θm = 5 o occurs at ±0.0617 radians (or ±3.53o). The maximum emf that is induced, hence is (use eqn 6) ξmax = ωoξo Θm√ 2 × sinΘm√ 2 ( 2 − cosΘm√ 2 )2 Since, these equations and conditions are essentially valid for small angles, ξmax = ( ωoξo 2 ) Θ2m (9) However, a physical pendulum is prone to damping and hence in the next section we investigate as to how the maximum induced emf varies with initial displacement for a damped pendulum. 9 -0.02 -0.015 -0.01 -0.005 0 0.005 0.01 0.015 0.02 0 0.5 1 1.5 2 2.5 3 ξ t (in sec) β=0.0 β=0.4 FIG. 5: Variation of induced emf when oscillation is damped (β = 0.45s−1) is compared with the case of no damping (i.e. β = 0.0s−1). -0.02-0.015-0.01 -0.00500.005 0.010.0150.02 -0.1 -0.08 -0.06 -0.04 -0.02 0 0.02 0.04 0.06 0.08 0.1 � � (in radian) ��*HHY ? HHY (a)(b) FIG. 6: The variation of induced emf of fig 3 is plotted w.r.t. angular position of the pendulum (i.e. Θ) for the cases (a) β = 0.0s−1 and (b) β = 0.45s−1. B. While undergoing damped oscillation We have already stated in our introduction that the damped motion described by eqn(1) exhibits how the pendulum’s oscillation amplitude decreases exponentially with time. The 10 EOM whose solution is given by eqn(1) describes a linear system. The solution can be further trivialized without losing any generality as Θ = Θme −βtsin(ω′t) (10) from which the velocity can be calculated as dΘ dt = ω′Θme −βtcos(ω′t) − βΘme−βtsin(ω′t) = Θme −βt[ω′cos(ω′t) − βsin(ω′t)] (11) Substituting the above expression in eqn(4) we obtain the relation showing the variation of induced emf with time. This variation is shown in fig 5. It is also clear from the figure that the peaks in the induced emf occurs at ωt = (2n + 1)π 4 . Hence, the angles at which maxima occur in general is written as Θpeak = ± Θm√ 2 e − (2n+1)π 4tanφ (12) where tanφ = ω′/β. Our circuit is designed only to measure peak emfs at n=0,2,4,6....., where only the positive solutions of eqn(12) would contribute. Using our condition on eqn(11) and eqn(4) we have ( dΘ dt ) peak = (ω′ − β) Θm√ 2 e − (2n+1)π 4tanφ ξpeak = (ω ′ − β)ξo    sinΘm√ 2 e − (2n+1)π 4tanφ (2 − cosΘm√ 2 e − (2n+1)π 4tanφ )2    Θm√ 2 e − (2n+1)π 4tanφ (13) For small angle oscillations eqn(13) reduces to ξpeak = (ω′ − β)ξo 2 Θ2me − (2n+1)π 2tanφ (14) The variation in emf (seen w.r.t time in fig 5) when viewed w.r.t oscillating angle Θ shows how the peak position decreases (eq 12) as also the amplitude of the maximum induced emf decreases (eqn 14) with each half cycle. Eqn(9) and eqn(14) shows that the maximum emf induced for damped pendulums undergoing SHM is directly proportional to the square of maximum angular displacement given to the pendulum. We have recorded the first maxima 11 1 2 3 4 5 6 7 8 0 200 400 600 800 1000 1200 1400 1600 ξ m a x Θ2 (in degrees2) FIG. 7: The variation of induced emf with the initial angular displacement of the pendulum. It shows the expected parabolic dependence (i.e. Θ2m). reading (i.e. n=0) of the induced emf for various angles upto 40o. The linear relation between ξmax and Θ 2 m is evident. Before commenting further, it must be recollected that eq(1) is valid for small angle oscillations, i.e. for Θm < 5 o, yet a good linearity is obtained till Θ = 40o. Experimental data for Θm ≥ 45o deviate markedly from this linear trend. Remember eqn(12) was obtained with the assumption that the pendula’s motion is described by eqn(10). 
This equation describes the motion of a pendulum oscillating in a viscous medium with small velocity. It would be shown in the next section that the pendulum’s velocity is quite apprecia- ble for Θm ≥ 45o and hence it’s motion is not described as in eqn(10), explaining the departure for linearity. IV. RESULTS AND OBSERVATIONS Our prelimary measurements are in good correspondence with commonly known notions and hence we proceed to investigate further the nature of damping in our pendula. It should be noted that the amount of damping and it’s nature are strongly pendula dependent and all results reported here are specific to our experiment and can not be taken as general. We have 12 0.4 0.5 0.6 0.7 0.8 0.9 1 0 20 40 60 80 100 120 140 160 ξ (N o rm a liz e d ) n (a) (b) (c) (d) FIG. 8: The variation in peak induced emf measured with each oscillation is shown for initial dis- placements (Θm) (a) 5 o, (b) 30o, (c) 55o and (d) 65o. recorded the maxima in induced emfs for 80 oscillations for various initial displacements. Since for each oscillations, we get two positive maxima in induced emf, figure 8 shows the variation in maxima reading of induced emf for 160 peaks. A general expression ae−bn was fitted to the data of ξmax w.r.t n using a standard and freely available curve fitting software called ”Curxpt v3.0”. Good fits were obtained for oscillations set by initial displacements upto 40o. The exponential fall in emf is indicative of the rate of loss of energy from the oscillating system. It indicates the loss to follow the relation dE dt ∝ −E or dE dt = −bE This indicates that the velocity is low and hence the damping/resistive force acting on the pendulum is proportional to the velocity12. Figure 9 shows the variation of the decay constant ’b’ (of ae−bn) with respect to the maximum displacement (Θm) given to the pendulum. The graph indicates that as Θm increases, the velocity with which the pendulum moves increases with which the damping constant increases. 13 0.001 0.0015 0.002 0.0025 0.003 0.0035 0.004 0.0045 0.005 0 10 20 30 40 50 60 70 D a m p in g C o n st a n t θm FIG. 9: The variation decay constant (b of general equation ae−bn) of the exponentially decaying region. The continuous line is the parabolic fit for the data points (0.00056 √ Θm). 0 0.05 0.1 0.15 0.2 0.25 45 50 55 60 65 70 D e ca y C o n st a n t θm FIG. 10: The variation decay in ’b’, a measure of decay in the early oscillations when pendulum was set in motion with displacements > 45o. For displacement angles beyond 45o (Θm ≥ 45o), eventhough visual examination of the curves in fig 8 suggests an exponential damping, the data points do not fit to an exponential fall relation. A detailed examination suggests a more complex process is taking place with initial damping being sharper. Infact the initial 25-30 data points fit to a/nb. The data 14 beyond this fit to the exponential fall equation. The a/nb fit corresponds to the damping force being proportional to higher powers of velocity (vγ , where γ > 1) and in turn the rate of energy loss also being proportional to higher terms of energy. That is, the rate of energy loss for our pendulum set into oscillations with a displacement angle > 45o is given as dE dt = −αE 1+b b Figure 10 plots ’b’ versus Θm. The power term (b) is being treated as a measure of damping and is consistent with the results of fig 9, i.e. as initial displacement increases the damping becomes large with a proportionality to the pendulum’s velocity. 
The resistive force being proportional to higher powers of velcoity has been reported earlier also3,4,13,14,15,16. A system is reported to have a constant friction (γ = 0) or a linear dependence of velocity (γ = 1) or a quadratic depenence of velocity (γ = 2). Corresponding to which the pendulum’s amplitude decays linearly, exponentially and inverse power decay respectively with time. It hence maybe concluded that for our pendulum sent into motion by initial displacements Θ ≥ 45o, the damping force is proportional to vγ where γ > 1. V. CONCLUSION A simple experiment of setting a suspended bar magnet into oscillations, is a rich source of information. Not only does it give exposure to Faraday’s induction law and a basic understand- ing of induced emf’s dependence on angle of oscillation, it enables us to study the damping effects on the pendulum. This method is better than previously used methods since the mea- suring technique does not introduce additional contributions to damping. When the oscillation imparted to the pendulum is very large, the damping effect is also strong with the damping force being proportional to vγ , where γ > 1 and ’v’ is the pendulum’s velocity. This brings down the oscillation amplitude of the pendulum and it’s velocity. As the velocity becomes low, the resistive force acting on the pendulum changes it’s nature and becomes proportional to ’v’. Considering the rich information obtained from the experiment and the simplicity of the experiment, it allows the method to be easily implemented as a routine experiment in undergraduate laboratories. 15 Acknowledgements The authors would like to express their gratitude to the lab technicians of the Department of Physics and Electronics, SGTB Khalsa College, for the help rendered in carrying out the experiment. ∗ Electronic address: [email protected] 1 Gregory M. Quist, ”The PET and pendulum: An application of microcomputers to undergraduate laboratory”, Am. J. Phys., 51, 145-148 (1983). 2 M. F. Mclnerney, ”Computer-aided experiments with the damped harmonic oscillator”, Am. J. Phys., 53, 991-996 (1985). 3 A. R. Ricchiuto and A. Tozzi, ”Motion of a harmonic oscillator with sliding and viscous friction”, Am. J. Phys., 50, 176-179 (1982). 4 Patrick T. Squire, ”Pendulum Damping”, Am. J. Phys., 54, 984-991 (1986). 5 Neha Agarwal, Nitin Verma and P. Arun, ”Simple Pendulum revisited”, European. J. Phys., 26, 517-523 (2005). 6 Avinash Singh, Y. N. Mohapatra and Satyendra Kumar, ”Electromagnetic induction and damping: Quantitative experiments using a PC interface”, Am. J. Phys., 70, 424-427 (2002). 7 L.F.C. Zonetti, A.S.S. Camargo, J. Sartori, D.V de Sousa and L.A.O. Nunes, ”A demonstration of dry and viscous damping of an oscillating pendulum”, Eur. J. Phys., 20, 85-88 (1999). 8 John C. Simbach and Joseph Priest, ”Another look at a damped physical pendulum”, Am. J. Phys., 73, 1079-1080 (2005). 9 Xiao-jun Wang, Chris Schmitt and Marvin Payne, ”Oscillation with three damping effects”, Eur. J. Phys., 23, 155-164 (2002). 10 Ramakant A. Gayakwad, ”Opamps and Linear Integrated Circuits”, Prentice-Hall India, Delhi (1999). 11 Ramesh S. Gaonkar, ”Microprocessor Architecture, Programming and applications with the 8085/8080A”, Wiley Eastern Ltd. Delhi (1986). 16 mailto:[email protected] 12 Avinash Singh, arXiv:physics/0206086. 13 B. J. Miller, ”More Realistic Treatment of the Simple Pendulum without Difficult Mathematics”, Am. J. Phys., 42, 298-303 (1974). 14 F. S. Crawford, ”Damping of a simple pendulum”, Am. J. Phys. 43, 276-277 (1975). 
15 N. F. Pederson and O. H. Soerensen, ”The compound pendulum in intermediate laboratories and demonstrations”, Am. J. Phys. 45, 994-998 (1977). 16 R. A. Nelson and M. C. Olsson, ”The pendulum: Rich physics from a simple system”, Am. J. Phys. 54, 112-121 (1986). 17 http://arXiv.org/abs/physics/0206086 Appendix Table 1. Program used to collect data. Address Mnemonics Hex Code Address Mnemonics Hex Code C400 LXI SP 31 C40A 01H 01 C401 00H 00 C40B JZ CA C402 C3H C3 C40C 07H 07 C403 MVI A 3E C40D C4H C4 C404 00H 00 C40E IN DB C405 OUT D3 C40F 09H 09 C406 08H 08 C410 PUSH PSW F5 C407 IN DB C411 JMP C3 C408 OBH OB C412 07H 07 C409 ANI E6 C413 C4H C4 18 Set Control word making portsPort A and Port C input Initialise Stack Pointer Start Read port C, (contains EOC) AND content with 01H Is EOC high Read Data at Port A Push PSW to save data Unconditional Jump No Yes FIG. 11: Flowchart. 19 Introduction Experimental Setup Variation of Induced Emf with Initial Displacement While undergoing undamped oscillation While undergoing damped oscillation Results and Observations Conclusion Acknowledgements References Appendix
0non-cybersec
arXiv
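The exponential fit of peak emf versus peak number described in the pendulum paper above (their a·e^(−bn) fit, done there with "Curxpt v3.0") can be reproduced with standard tools; the sketch below uses SciPy on synthetic stand-in data, since the authors' measurements are not reproduced here:

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(n, a, b):
    """Model for the peak induced emf at the n-th peak: a * exp(-b * n)."""
    return a * np.exp(-b * n)

# Illustrative stand-in for the measured peak emfs (volts) vs. peak index.
n = np.arange(160)
measured = 0.02 * np.exp(-0.003 * n) + np.random.normal(0, 2e-4, n.size)

(a_fit, b_fit), _ = curve_fit(decay, n, measured, p0=(0.02, 0.01))
print(f"a = {a_fit:.4f} V, damping constant b = {b_fit:.5f} per peak")
```

A poor fit of this model to the early peaks (as the paper reports for large initial displacements) would suggest damping that is not simply proportional to velocity.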
A different time. My brother-in-law in 1989..
0non-cybersec
Reddit
Does 7zip for Linux support AES-NI?. <p>I am looking to encrypt a set of fairly large files using well supported, ideally cross-platform software. I understand that 7zip uses 256 bit AES encryption.</p> <p>Does anyone know if standard Linux builds of 7z can make use of AES-NI (ie CPU acceleration for AES)?</p>
0non-cybersec
Stackexchange
ITAP of my pup on Bondcliff Summit.
0non-cybersec
Reddit
Florida Quietly Shortened Yellow Traffic Light Lengths Below Federal Standards, Resulting in More Red Light Camera Tickets and Millions in Additional Revenue.
0non-cybersec
Reddit
IDA PRO + Windows 10 + WinDbg. <p>I didn't know That we've got a Reverse Engineering community around here :D I am very glad on that....</p> <p>anyway... I haven't used IDA Pro for quite some time, upgraded to win 10 in the mean time.</p> <p>I am unable to launch debugging directly from IDA Pro. WinDbg is setup correctly, windbg attaches a process just fine on itself. WinDbg has been added to the PATH variable.</p> <p>When i try to launch debugging from ida PRO,or attach I get the error:</p> <p>"Could not initialize WinDgbEngine (..) %1 is not a valid Win32 application"</p> <p>ideas? seems like something is wrong with parameters passing?</p>
0non-cybersec
Stackexchange
Second Derivative of log. <p>Let: $\log(s)=z$ </p> <p>I understand that </p> <p>$$\frac{\partial}{\partial s}=\frac{\partial}{\partial z} \frac{\partial z}{\partial s} = e^{-z}\frac{\partial}{\partial z}$$</p> <p>What is the second derivative, ie $\frac{\partial^2}{\partial s^2}$</p> <p>Applying the Product rule I reach the following: </p> <p>$$\frac{\partial^2}{\partial s^2} = \frac{\partial^2}{\partial z^2} \frac{\partial z}{\partial s} + \frac{\partial^2}{\partial z^2} \frac{\partial z}{\partial s} = e^{-z}\left(\frac{\partial^2}{\partial z^2} - \frac{\partial}{\partial z} \right ) $$</p> <p>Is this correct or should we reach to: </p> <p>$$\frac{\partial^2}{\partial s^2} =e^{-2z}\left(\frac{\partial^2}{\partial z^2} - \frac{\partial}{\partial z} \right )$$</p>
0non-cybersec
Stackexchange
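As a worked check for the derivative question above (standard chain- and product-rule algebra, not taken from the post): with $z = \log s$, so that $s = e^{z}$ and $\frac{\partial}{\partial s} = e^{-z}\frac{\partial}{\partial z}$,

$$
\frac{\partial^2}{\partial s^2}
= e^{-z}\frac{\partial}{\partial z}\!\left(e^{-z}\frac{\partial}{\partial z}\right)
= e^{-z}\left(-e^{-z}\frac{\partial}{\partial z} + e^{-z}\frac{\partial^2}{\partial z^2}\right)
= e^{-2z}\left(\frac{\partial^2}{\partial z^2} - \frac{\partial}{\partial z}\right),
$$

so the second expression, with the factor $e^{-2z}$, is the consistent one.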
I planted sweet pea seeds in mid December. They are now in the ground and have yet to flower. I'm in New Orleans. How long til they bloom?. I'm worried that they won't start to flower until the heat is too much. I have them partially shaded. Any advice? [Here's what I planted](http://www.rareseeds.com/princess-elizabeth-sweet-pea/). The site doesn't give any info on days to bloom or anything.
0non-cybersec
Reddit
What's one command or trick your dog does that you love?. Something s/he does that just gets you excited...
0non-cybersec
Reddit
Using [self method] or @selector(method)?. <p>Can anyone enlighten me as to the differences between the two statements below. </p> <pre><code>[self playButtonSound]; </code></pre> <p>AND:</p> <pre><code>[self performSelector:@selector(playButtonSound)]; </code></pre> <p>I am just asking as I had some old code that used <code>@selector</code>, now with a little more knowledge I can't think why I did not use <code>[self playButtonSound]</code> instead, they both seem to do the same as written here.</p> <p>gary</p>
0non-cybersec
Stackexchange
Comment out an include statement inside an HTML file using Jekyll. <p>Is there a way to comment out an include statement inside an HTML file using Jekyll?</p> <p>For example I have this inside one of my HTML files that I'd like to temporarily comment out. Standard HTML comment doesn't seem to work.</p> <pre><code>{% include navbar.html %} </code></pre>
0non-cybersec
Stackexchange
NFL Checklist for Week 1 of the Season.
0non-cybersec
Reddit
Wedding Cutlery.
0non-cybersec
Reddit
Negative look ahead python regex. <p>I would like to regex match a sequence of bytes when the string '02 d0' does not occur at a specific position in the string. The position where this string of two bytes cannot occur are byte positions 6 and 7 starting with the 0th byte on the right hand side. </p> <p>This is what I have been using for testing:</p> <pre><code>#!/usr/bin/python import re p0 = re.compile('^24 [\da-f]{2} 03 (01|03) [\da-f]{2} [\da-f]{2} [\da-f]{2} (([^0])| (0[^2])|(02 [^d])|(02 d[^0])) 01 c2 [\da-f]{2} [\da-f]{2} [\da-f]{2} 23') p1 = re.compile('^24 [\da-f]{2} 03 (01|03) [\da-f]{2} [\da-f]{2} [\da-f]{2} (([^0])|(0[^2])|(02 [^d])|(02 d[^0])) 01') p2 = re.compile('^24 [\da-f]{2} 03 (01|03) [\da-f]{2} [\da-f]{2} [\da-f]{2} (([^0])|(0[^2])|(02 [^d])|(02 d[^0]))') p3 = re.compile('^24 [\da-f]{2} 03 (01|03) [\da-f]{2} [\da-f]{2} [\da-f]{2} (?!02 d0) 01') p4 = re.compile('^24 [\da-f]{2} 03 (01|03) [\da-f]{2} [\da-f]{2} [\da-f]{2} (?!02 d0)') yes = '24 0f 03 01 42 ff 00 04 a2 01 c2 00 c5 e5 23' no = '24 0f 03 01 42 ff 00 02 d0 01 c2 00 c5 e5 23' print p0.match(yes) # fail print p0.match(no) # fail print '\n' print p1.match(yes) # fail print p1.match(no) # fail print '\n' print p2.match(yes) # PASS print p2.match(no) # fail print '\n' print p3.match(yes) # fail print p3.match(no) # fail print '\n' print p4.match(yes) # PASS print p4.match(no) # fail </code></pre> <p>I looked at <a href="https://stackoverflow.com/questions/9843338/regex-negative-look-ahead-between-two-matches">this example</a>, but that method is less restrictive than I need. Could someone explain why I can only match properly when the negative look ahead is at the end of the string? What do I need to do to match when '02 d0' does not occur in this specific bit position?</p>
0non-cybersec
Stackexchange
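For the lookahead question above, a minimal illustration of the key point: `(?!...)` is zero-width, so the two bytes it inspects still have to be matched (consumed) by something after the assertion. The pattern below is a simplified stand-in for the one in the post, not a drop-in replacement:

```python
import re

yes = '24 0f 03 01 42 ff 00 04 a2 01 c2 00 c5 e5 23'
no  = '24 0f 03 01 42 ff 00 02 d0 01 c2 00 c5 e5 23'

# (?!02 d0) asserts the next two bytes are not "02 d0" without consuming them,
# so the pattern must still match those two bytes explicitly afterwards.
p = re.compile(
    r'^24 [\da-f]{2} 03 (01|03) [\da-f]{2} [\da-f]{2} [\da-f]{2} '
    r'(?!02 d0)[\da-f]{2} [\da-f]{2} 01 c2'
)

print(bool(p.match(yes)))  # True  - "04 a2" is allowed at that position
print(bool(p.match(no)))   # False - "02 d0" is rejected by the lookahead
```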
Kristen Cavallari: Can everyone shut up now?.
0non-cybersec
Reddit
Extending another TypeScript function with additional arguments. <p>Is it possible in TypeScript to define a function type and extend its argument list in another type (overloading function type?)?</p> <p>Let's say I have this type: <code> type BaseFunc = (a: string) =&gt; Promise&lt;string&gt; </code></p> <p>I want to define another type with one additional argument (b: number) and the same return value.</p> <p>If at some point in the future <code>BaseType</code> adds or changes arguments this should also be reflected in my overloaded function type.</p> <p>Is this possible?</p>
0non-cybersec
Stackexchange
moving changed files to another branch for check-in. <p>This often happens to me: I write some code, go to check in my changes, and then realize I'm not in the proper branch to check in those changes. However I can't switch to another branch without my changes reverting. Is there a way to move changes to another branch to be checked in there?</p>
0non-cybersec
Stackexchange
Why your first PR for a large feature should be its feature flag.
0non-cybersec
Reddit
jQuery .serializeObject is not a function - only in Firefox. <p>I'm using jQuery, and specifically this function</p> <p><code>$("#postStatus").serializeObject();</code></p> <p>It works absolutely fine in Chrome and Safari, but when I do it in Firefox it doesn't work. I used Firebug to see what error it was giving, and i'm getting this</p> <p><code>$("#postStatus").serializeObject is not a function</code> </p> <p>Why doesn't this function work in Firefox?</p> <p>UPDATE...</p> <p>Oh yes, I completely forgot that it's not a core function. I remember that I searched a way to serialize a form and found this solution;</p> <pre><code>$.fn.serializeObject = function() { var o = {}; var a = this.serializeArray(); $.each(a, function() { if (o[this.name]) { if (!o[this.name].push) { o[this.name] = [o[this.name]]; } o[this.name].push(this.value || ''); } else { o[this.name] = this.value || ''; } }); return o; }; </code></pre> <p>I've managed to fix this issue by placing the function above at the top of the JS file. Thanks for your help guys.</p>
0non-cybersec
Stackexchange
UWSGI can&#39;t import module &#39;mysite&#39; with nginx and flask. <p>I'm new to using uwsgi and nginx and I haven't been able to figure out why I am getting this error from uwsgi: </p> <pre><code>ImportError: No module named mysite unable to load app 0 (mountpoint='my_ipaddr|') (callable not found or import error) </code></pre> <p>Here is my nginx config file:</p> <pre><code>server { listen 80; server_name my_ipaddr; location /static { alias /var/www/mysite/static; } location / { include uwsgi_params; uwsgi_pass unix:/tmp/mysite.sock; uwsgi_param UWSGI_PYHOME /var/www/mysite/venv; uwsgi_param UWSGI_CHDIR /var/www/mysite; uwsgi_param UWSGI_MODULE app; uwsgi_param UWSGI_CALLABLE app; } </code></pre> <p>Here is my mysite.ini for uwsgi:</p> <pre><code>[uwsgi] vhost=true socket=/tmp/mysite.sock venv = /var/www/mysite/venv </code></pre> <p>Here is my app.py:</p> <pre><code>from flaskext.markdown import Markdown from views import app Markdown(app) def main(): app.run() if __name__ == '__main__': main() </code></pre> <p>I am able to run the app with uwsgi when launching it from the command line but I haven't been able to get it working with nginx using the above setup.</p>
0non-cybersec
Stackexchange
Exit terminal after running a bash script. <p>I am trying to write a <code>bash</code> script to open certain files (mostly pdf files) using the <code>gnome-open</code> command. I also want the terminal to exit once it opens the pdf file.</p> <p>I have tried adding <code>exit</code> to the end of my script however that does not close the terminal. I did try to search online for an answer to my question but I couldn't find any proper one, I would really appreciate it if you guys could help.</p> <p>I need an answer that only kills the terminal from which I run the command not all the terminals would this be possible? The previous answer which I accepted kills all the terminal windows that are open. I did not realize this was the case until today.</p>
0non-cybersec
Stackexchange
Kölsch - 1983 [House/Techno] (2015) | Now streaming on Spotify.
0non-cybersec
Reddit
Tabular with overbrackets connecting elements. <p>I want to make a vertical tree with brackets like the one bellow.</p> <p><a href="https://i.stack.imgur.com/kvRJ1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kvRJ1.png" alt="enter image description here"></a></p> <p>I tried <code>\overbrace</code> but I can't connect the 2nd row nodes (<code>\Omega_{r}</code> and <code>\Omega_{m}</code>) with the first.</p> <pre><code>\begin{tabular}{l|c c c} Total matter&amp;&amp; $\Omega$ &amp;\\ Different equations of state &amp;$\Omega_{m}$&amp;$\Omega_{r}$&amp;$\Omega_{\Lambda}$\\ Difefrent species &amp;$\overbrace{\Omega_{b}\quad\Omega_{c}}^{}$&amp; $\overbrace{\Omega_{\gamma}\quad\Omega_{\nu}}^{}$&amp; \end{tabular} </code></pre> <p>I get</p> <p><a href="https://i.stack.imgur.com/EAVF2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EAVF2.png" alt="enter image description here"></a></p>
0non-cybersec
Stackexchange
She deserves every bit.
0non-cybersec
Reddit
I have a contract at a well known computer company. I do about two hours work per week and bill for 40 hours. [Remorse]: I've been there over a year and up to now they've paid me $228,000. I don't have to go into the office. Nobody emails me unless I email them. Nobody sets up a meeting with me unless I set it up. Sometimes, I go into the office just to see what's happening. This morning I was there at 9:15, got a coffee, read the company website, added two pictures to a Sharepoint site. By 10 am I was bored. I hung out until 11:00, then picked up a sandwich from the cafeteria and went to the golf driving range. I was home around 1pm, where I've been on Reddit. I'm just waiting for happy hour so I can start drinking. On other days I might do my own side business, but still charge for my main contract as well. I feel terrible about it, but I can't turn down the money. I keep getting praise from the client and they keep extending my contract, even though nobody seems to know or care what I do. Managers are just happy if I keep out of their way.
0non-cybersec
Reddit
Any good way to calculate $\frac {\alpha ^ n - 1 } {\alpha - 1} \pmod{c}$. <p>I tried by multiplying modular inverse of denominator to the numerator and then taking modulo <span class="math-container">$c$</span>, but there are problems when the inverse does not exist. </p> <p>So is there a good way to solve this problem.</p> <p>Constraints <span class="math-container">$$ 1 \le \alpha \le 1e9 $$</span> <span class="math-container">$c$</span> is a prime <span class="math-container">$$ 1 \le n \le 1e9 $$</span></p>
0non-cybersec
Stackexchange
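One standard way around the missing inverse for the question above (a well-known divide-and-conquer identity, sketched here rather than quoted from the post) is to compute the geometric series S(n) = 1 + α + ... + α^(n-1) mod c directly, using S(2k) = S(k)·(1 + α^k) and S(n) = S(n-1) + α^(n-1) for odd n:

```python
def geom_sum(alpha, n, c):
    """Return (1 + alpha + ... + alpha**(n-1)) % c, i.e. (alpha**n - 1)/(alpha - 1) mod c,
    without needing a modular inverse of (alpha - 1)."""
    if n == 0:
        return 0
    if n % 2 == 1:
        # peel off the highest term alpha**(n-1)
        return (geom_sum(alpha, n - 1, c) + pow(alpha, n - 1, c)) % c
    half = geom_sum(alpha, n // 2, c)
    return half * (1 + pow(alpha, n // 2, c)) % c

# quick check against the closed form when the inverse happens to exist
alpha, n, c = 7, 20, 1_000_000_007
assert geom_sum(alpha, n, c) == (pow(alpha, n, c) - 1) * pow(alpha - 1, c - 2, c) % c

# and it still works when alpha ≡ 1 (mod c), where the inverse does not exist
assert geom_sum(1, 12345, 97) == 12345 % 97
```

The recursion depth is O(log n), so n up to 1e9 as in the stated constraints is unproblematic.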
Bounds for associated Legendre polynomials. <p>I am trying to analyze the behaviour of the Associated Legendre polynomials <span class="math-container">$P_{n}^{m}$</span> on <span class="math-container">$[0,1]$</span>. More specifically, I am trying to get upper bounds for <span class="math-container">$P_{n}^{m}$</span> on <span class="math-container">$[0,1]$</span>. Bernstein's inequality for the Legendre polynomial is a classical result which states the following <span class="math-container">\begin{align*} |P_{n}(x)| \leq \sqrt{\frac{2}{\pi n}}\frac{1}{(1-x^2)^{1/4}}. \end{align*}</span> I am interested in bounds of the above type for <span class="math-container">$P_{n}^{m}$</span>'s. For a fixed <span class="math-container">$n\in \mathbb{N}$</span>, normalizing <span class="math-container">$P_{n}^{m}$</span> appropriately as below, for every <span class="math-container">$|m|\leq n$</span> we have <span class="math-container">\begin{align*} \frac{(n-m)!}{(n+m)!} \int_{0}^{1} P_{n}^{m}(x)^2 dx = \frac{C}{2n+1}. \end{align*}</span><br /> So specifically, I am interested in upper bounds on <span class="math-container">$[0,1]$</span> for the following functions, for a fixed <span class="math-container">$n$</span>, as <span class="math-container">$m \in \mathbb{Z}$</span> varies in <span class="math-container">$[-n,n]$</span>. <span class="math-container">\begin{align*} L_{n}^{m}(x) := \sqrt{\frac{(n-m)!}{(n+m)!}} ~P_{n}^{m} \end{align*}</span></p> <p>I did a bit of searching and found that bounds for the above collection of normalized Associated Legendre functions are available in <a href="https://www.sciencedirect.com/science/article/pii/S0021904598932075" rel="nofollow noreferrer">this</a> and <a href="https://www.sciencedirect.com/science/article/pii/002190459190077N" rel="nofollow noreferrer">this</a> and I state them below.</p> <p><span class="math-container">\begin{align} \sqrt{\frac{(n-m)!}{(n+m)!}}~ |P_{n}^{m} (x)| &amp; \leq \frac{1}{2^m m!} \sqrt{\frac{(n+m)!}{(n-m)!}} (1-x^2)^{m/2} := A_{n}^{m}(x) , \tag{1}\label{1}\\ \sqrt{\frac{(n-m)!}{(n+m)!}}~ |P_{n}^{m} (x)| &amp; \leq \frac{1}{n^{1/4}} \frac{1}{(1-x^2)^{1/8}} =: f_{n}(x).\tag{2} \label{2} \end{align}</span></p> <p>I tried to see how good these bounds are. It seemed that <span class="math-container">$A_{n}^{m}$</span> is a good approximate for <span class="math-container">$L_{n}^{m}$</span> near 1 (which is expected as it captures the vanishing of <span class="math-container">$L_{n}^{m}$</span> at 1), but elsewhere it is not good.</p> <p>And the bound <span class="math-container">$f_n$</span> appears to be just an upper bound which does not necessarily capture any feature of <span class="math-container">$L_{n}^{m}$</span>.</p> <p>Hence my question is: Are some other better bounds (than the ones in \ref{1} and \ref{2}) known for <span class="math-container">$L_{n}^{m}$</span>'s?</p> <p>Thanks!</p>
0non-cybersec
Stackexchange
How do you cope with loneliness?.
0non-cybersec
Reddit
No &#39;Access-Control-Allow-Origin&#39; - Node / Apache Port Issue. <p>i've created a small API using Node/Express and trying to pull data using Angularjs but as my html page is running under apache on localhost:8888 and node API is listen on port 3000, i am getting the No 'Access-Control-Allow-Origin'. I tried using <code>node-http-proxy</code> and Vhosts Apache but not having much succes, please see full error and code below. </p> <blockquote> <p>XMLHttpRequest cannot load localhost:3000. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'localhost:8888' is therefore not allowed access." </p> </blockquote> <pre><code>// Api Using Node/Express var express = require('express'); var app = express(); var contractors = [ { "id": "1", "name": "Joe Blogg", "Weeks": 3, "Photo": "1.png" } ]; app.use(express.bodyParser()); app.get('/', function(req, res) { res.json(contractors); }); app.listen(process.env.PORT || 3000); console.log('Server is running on Port 3000') </code></pre> <h2>Angular code</h2> <pre><code>angular.module('contractorsApp', []) .controller('ContractorsCtrl', function($scope, $http,$routeParams) { $http.get('localhost:3000').then(function(response) { var data = response.data; $scope.contractors = data; }) </code></pre> <h2>HTML</h2> <pre><code>&lt;body ng-app="contractorsApp"&gt; &lt;div ng-controller="ContractorsCtrl"&gt; &lt;ul&gt; &lt;li ng-repeat="person in contractors"&gt;{{person.name}}&lt;/li&gt; &lt;/ul&gt; &lt;/div&gt; &lt;/body&gt; </code></pre>
0non-cybersec
Stackexchange
Is sending plain passwords over SSL as part of a password update process bad?. <p>The Web application I'm working on is 100% SSL secured (or rather TLS as it is called today...). The application recently has been audited by a security company. I mostly agree with their results but there was one thing that led to great debates:</p> <p>As part of the password change process for users, the user has to provide the old password as well as the new one two times—nothing unusual. In addition to that the new password has to conform to a password policy (minimum length, yadda yadda).</p> <p>The application is realized with Vaadin which uses small AJAX messages to update the UI. The whole logic of the application lives on the server. This means that all validation of the password change form happens on the server. <strong>In order to validate the form, both the old password as well as the two new passwords (which should match of course) have to be sent to the server. If there is anything wrong (old password is wrong, new passwords don't match, new password doesn't conform to the password policy), the user gets an error. Unfortunately as part of the syncing process Vaadin sends all form data back to the client again—including old and new passwords.</strong></p> <p>Since all this happens over SSL I never thought twice about it but the security company saw this as a security risk of the highest severity. Note that the issue in the eyes of the security company was not that the data is sent to the server but <strong>that the server included the data in its response in case validation failed</strong>. So our current solution is to empty all fields if validation fails. This leads to poor user experience as the user has to fill in three text fields again and again if for example the passwords repeatedly don't match the password policy.</p> <p>Am I being naïve in thinking this is way over the top? I mean, if an attacker breaks the encryption, they have access to the whole traffic anyway.</p> <hr> <p><em>edit</em>: Regarding shoulder surfing I want to make clear that <strong>no password is ever echoed back to the user</strong>. All input fields are proper password fields that only show placeholders but no actual characters.</p>
0non-cybersec
Stackexchange
c++11 std::unique_ptr error cmake 3.11.3 bootstrap. <p>I am trying to bootstrap cmake 3.11.3 on Ubuntu 16.04.4 LTS xenial.</p> <p>I have upgrade my gnu g++ compiler as follows:</p> <pre><code>&gt; $ g++ --version g++ (Ubuntu 8.1.0-5ubuntu1~16.04) 8.1.0 Copyright (C) 2018 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. </code></pre> <p>And manually re-pointed the symbolic links:</p> <pre><code>$ ll /usr/bin/*g++* lrwxrwxrwx 1 root root 5 Jun 8 16:57 /usr/bin/g++ -&gt; g++-8* -rwxr-xr-x 1 root root 919832 Apr 24 15:02 /usr/bin/g++-5* lrwxrwxrwx 1 root root 22 Jun 6 04:26 /usr/bin/g++-8 -&gt; x86_64-linux-gnu-g++-8* lrwxrwxrwx 1 root root 22 Jun 8 16:58 /usr/bin/x86_64-linux-gnu-g++ -&gt; x86_64-linux-gnu-g++-8* lrwxrwxrwx 1 root root 5 Apr 24 15:02 /usr/bin/x86_64-linux-gnu-g++-5 -&gt; g++-5* -rwxr-xr-x 1 root root 1071984 Jun 6 04:26 /usr/bin/x86_64-linux-gnu-g++-8* </code></pre> <p>However, I get the following error in the configuration of cmake:</p> <pre><code>$ sudo ./bootstrap --------------------------------------------- CMake 3.11.3, Copyright 2000-2018 Kitware, Inc. and Contributors Found GNU toolchain C compiler on this system is: gcc C++ compiler on this system is: g++ Makefile processor on this system is: make g++ has setenv g++ has unsetenv g++ does not have environ in stdlib.h g++ has stl wstring g++ has &lt;ext/stdio_filebuf.h&gt; --------------------------------------------- make: Warning: File 'Makefile' has modification time 2.3 s in the future make: 'cmake' is up to date. make: warning: Clock skew detected. Your build may be incomplete. loading initial cache file /mnt/ganymede/user/gpeytavi/srv_admin/software/cmake-3.11.3/Bootstrap.cmk/InitialCacheFlags.cmake CMake Error at CMakeLists.txt:92 (message): The C++ compiler does not support C++11 (e.g. std::unique_ptr). -- Configuring incomplete, errors occurred! See also "/mnt/ganymede/user/gpeytavi/srv_admin/software/cmake-3.11.3/CMakeFiles/CMakeOutput.log". See also "/mnt/ganymede/user/gpeytavi/srv_admin/software/cmake-3.11.3/CMakeFiles/CMakeError.log". --------------------------------------------- Error when bootstrapping CMake: Problem while running initial CMake --------------------------------------------- </code></pre> <p>Any idea why I get a c++11 <code>std::unique_ptr</code> non-compliant error?</p>
0non-cybersec
Stackexchange
How can I &quot;cache&quot; a mongoDB/Mongoose result to be used in my Express.js views and routes. <p>What I'm trying to achieve is some sort of way to <strong>cache results</strong> of a <strong>mongoDB/Mongoose</strong> query that I can use in my views and routes. I'd need to be able to update this cache whenever a new document is added to the collection. I'm not sure if this is possible and if it is then how to do it, due to how the functions are asynchronous</p> <p>This is currently what I have for storing the galleries, however this is executed with every request.</p> <pre><code>app.use(function(req, res, next) { Gallery.find(function(err, galleries) { if (err) throw err; res.locals.navGalleries = galleries; next(); }); }); </code></pre> <p>This is used to get gallery names, which are then displayed in the navigation bar from a dynamically generated gallery. The gallery model is setup with just a <strong>name</strong> of the gallery and a <strong>slug</strong></p> <p>and this is part of my <strong>EJS</strong> view inside of my navigation which stores the values in a dropdown menu.</p> <pre><code>&lt;% navGalleries.forEach(function(gallery) { %&gt; &lt;li&gt; &lt;a href='/media/&lt;%= gallery.slug %&gt;'&gt;&lt;%= gallery.name %&gt;&lt;/a&gt; &lt;/li&gt; &lt;% }) %&gt; </code></pre> <p>The website I'm working on is expected to get hundreds of thousands of concurrent users, so I don't want to have to query the database for every single request if not needed, and just update it whenever a new gallery is created.</p>
0non-cybersec
Stackexchange
How to decrypt encrypted script files for a game?. <p>I want to mod Tony hawks underground 2. </p> <p>I noticed that the files in a folder <code>scripts</code> (down the game's directory tree), are all encrypted. </p> <p>Where do I start if I want to decrypt them?</p>
0non-cybersec
Stackexchange
How to cover holes with disks of a fixed radius?. <p>So you have a sheet / area of a given dimension, and within this area are holes (their center point (x,y) and radius are given). The problem is that you need to cover these holes with patches. These circular patches have a fixed radius (i.e. a radius of 5) and are not allowed to overlap with each other (but they can touch). You're allowed to use as many as you like; the goal is not to find the optimal number, but to see if it's possible to cover every single hole.</p> <p>I've solved a similar problem with a KD tree, but due to the three-dimensional nature of the holes in this problem (x, y, and radius), I'm unsure how to approach it. Just looking for a pointer in the right direction, not a coded solution :) </p>
0non-cybersec
Stackexchange
[Homemade] burgers with beer cheese, caramelized onions on a pretzel bun.
0non-cybersec
Reddit
Jacques-Chaban-Delmas Bridge in Bordeaux, Gironde, France.
0non-cybersec
Reddit
Regarding booting Ubuntu 16.04 onto Windows 10. <p>I've run into multiple issues in trying to dual boot Ubuntu 16.04 onto Windows 10. My laptop is an Acer Aspire E15. I've tried manually partitioning space and installing Ubuntu as well as doing the easier method of just choosing "Install alongside Windows." I've had Windows on my laptop before installing Ubuntu. But after installation, no boot loader comes up that asks if I want either Windows or Ubuntu. </p> <p>I'm posting this because I've read multiple articles and forums about this same issue but none of what I've read have fixed the issue. One such fix is by going to cmd and typing in the command "bcdedit /set {bootmgr} path \EFI\ubuntu\grubx64.efi" But this didn't work. I notice that "Ubuntu" doesn't even show up as a bootable option after installing ubuntu. Is there any recommendations for this? i.e. should I try an older version of Ubuntu? Would that even work? </p> <p>Thanks! -Jacob Hempel</p>
0non-cybersec
Stackexchange
The pressure required to crush this lego vehicle.
0non-cybersec
Reddit
The way of the lady.
0non-cybersec
Reddit
How to use spot instance with amazon elastic beanstalk?. <p>I have an infrastructure that uses Amazon Elastic Beanstalk to deploy my application. I need to scale my app by adding some spot instances, which EB does not support.</p> <p>So I created a second auto-scaling group from a launch configuration with spot instances. The auto-scaling group uses the same load balancer created by Beanstalk.</p> <p>To bring up instances with the latest version of my app, I copy the user data from the original launch configuration (created by Beanstalk) to the launch configuration with spot instances (created by me).</p> <p>This works fine, but:</p> <ol> <li><p>how do I update the spot instances started by the second auto-scaling group when Beanstalk updates the instances it manages with a new version of the app?</p> </li> <li><p>is there another way, as easy and elegant, to use spot instances and still enjoy the benefits of Beanstalk?</p> </li> </ol> <p><strong>UPDATE</strong></p> <p>Elastic Beanstalk has supported spot instances since 2019; see: <a href="https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2019-11-25-spot.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2019-11-25-spot.html</a></p>
0non-cybersec
Stackexchange
Connection Reset when port forwarding with Vagrant. <p>I have Vagrant/VirtualBox running an Ubuntu 12.04 LTS OS. I have configured Vagrant to forward the guest port 8000 to my host port 8888.</p> <pre><code>[default] Preparing network interfaces based on configuration... [default] Forwarding ports... [default] -- 22 =&gt; 2222 (adapter 1) [default] -- 8000 =&gt; 8888 (adapter 1) [default] Booting VM... [default] Waiting for VM to boot. This can take a few minutes. [default] VM booted and ready for use! </code></pre> <p>When the virtual machine starts up, I start a Django dev server on port 8000.</p> <pre><code>Development server is running at http://127.0.0.1:8000/ Quit the server with CONTROL-C. </code></pre> <p>Okay great, I can put it in the background and I can even <code>curl localhost:8000</code> and get some output from the server </p> <pre><code>&lt;div id="explanation"&gt; &lt;p&gt; You're seeing this message because you have &lt;code&gt;DEBUG = True&lt;/code&gt; in your Django settings file and you haven't configured any URLs. Get to work! &lt;/p&gt; &lt;/div&gt; </code></pre> <p>But when I try to hit the server from my host machine with a Firefox/Chrome/Telnet I'm getting Connection Reset/Connection Lost/ERR_CONNECTION_RESET etc.</p> <p>First I thought it may be some iptables thing, but it turns out Ubuntu has default allow everything. I also turned off the firewall on my host machine. How can I get to the bottom of this?</p>
0non-cybersec
Stackexchange
What blizzard REALLY thinks of Goblins (screen cap from Goblin cinematic).
0non-cybersec
Reddit
Status on Naoki Katoh&#39;s &quot;Rectangle Wiring Problem&quot; (minimum length tree to cover a partitioned rectangle)?. <p>I have found this interesting problem in graph theory and geometry which is allegedly an open problem but latest status seems to be from 01/25/02. I can't seem to find any more information about it, not even other papers describing it.</p> <p><a href="https://i.stack.imgur.com/jT7Ub.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jT7Ub.png" alt="Problem"></a></p>
0non-cybersec
Stackexchange
How can I replace a window&#39;s URL hash with another response?. <p>I am trying to change a hashed URL (document.location.hash) with the replace method, but it doesn't work.</p> <pre><code>$(function(){ var anchor = document.location.hash; //this returns me a string value like '#categories' $('span').click(function(){ $(window).attr('url').replace(anchor,'#food'); //try to change current url.hash '#categories' //with another string, I stucked here. }); }); </code></pre> <p>I don't want to change/refresh the page; I just want to replace the URL hash without triggering any page load.</p> <p>Note: I don't want to solve this with a href="#food" solution.</p>
0non-cybersec
Stackexchange
At the Movies: Why Nothing Said About Hoover Could Be Too Bad .
0non-cybersec
Reddit
Found this in my front yard. Has a pearly sheen, is very slightly magnetic, and one side is sort of flat like it was broken..
0non-cybersec
Reddit
Gotta do it before she came undone too.
0non-cybersec
Reddit
So my friend had some pretty nice senior pictures taken.
0non-cybersec
Reddit
ggplot2 + aes_string inside a function via formula interface. <p>Interactively, this example works fine:</p> <pre><code>p &lt;- ggplot(mtcars, aes(mpg, wt)) + geom_point() p + facet_grid(. ~ vs) </code></pre> <p>Now, make a function with a formula interface and use <code>aes_string</code> to do this same thing, and it doesn't work (error is: <code>Error in layout_base(data, cols, drop = drop) : At least one layer must contain all variables used for facetting</code>):</p> <pre><code>tf &lt;- function(formula, data) { res &lt;- as.character(formula[[2]]) fac2 &lt;- as.character(formula[[3]][3]) fac1 &lt;- as.character(formula[[3]][2]) # p &lt;- ggplot(aes_string(x = fac1, y = res), data = data) # p &lt;- p + geom_point() # original attempt p &lt;- ggplot() # This is Joran's trick, but it doesn't work here p &lt;- p + geom_point(aes_string(x = fac1, y = res), data = data) p &lt;- p + facet_grid(.~fac2) # comment this out, and it works but # of course is not faceted } p &lt;- tf(formula = wt ~ am*vs, data = mtcars) </code></pre> <p>By Joran's trick I refer to <a href="https://stackoverflow.com/questions/14348781/ggplot2-inside-function-with-a-2nd-aesthetic-scoping-issue">here</a>, which is a similar question I posted recently. In this case <code>ggplot2</code>doesn't see my faceting request. Making it <code>facet_grid(".~fac2")</code> had no effect. Suggestions? I'm continually out-witted by these things. Thanks!</p>
0non-cybersec
Stackexchange
Safely Pulling Over in Traffic.
0non-cybersec
Reddit
Printer problem with UFW. <p>My printer is physically connected to my computer. Ubuntu has found the printer, but printing isn't working. Does localhost need to be opened in UFW for it to work?</p>
0non-cybersec
Stackexchange
American children under the age of about 7 or 8 don't know what it's like to have a white man as president..
0non-cybersec
Reddit
Leveling my first holy pally, I can't stop seeing <3.
0non-cybersec
Reddit
Security challenges of administrative share in Windows 7. <p>I am taking the Network+ class, and today my teacher said that whenever you configure a network you should disable the hidden shares of hard drives in Windows. He said that if they are enabled, you will be hacked very easily. Is that correct, given that anyone who wants to access the shared drives must have administrator permission?</p> <p>Also, from what I have read, to disable this configuration permanently I have to disable the administrative share completely. What problems may happen if this feature is disabled?</p> <p>Thanks.</p>
0non-cybersec
Stackexchange
Extract text and put into table. <p>As a result of the checkresiduals() function from the forecast package and the rbind() function, I got this matrix (ETS_RESIDUALS):</p> <pre><code>#Result of checkresiduals() function [,1] [1,] "Q* = 161.83, df = 18.8, p-value &lt; 2.2e-16" [2,] "Q* = 125.46, df = 18.8, p-value &lt; 2.2e-16" [3,] "Q* = 263.65, df = 18.8, p-value &lt; 2.2e-16" [4,] "Q* = 81.503, df = 18.8, p-value = 8.763e-10" [5,] "Q* = 36.616, df = 18.8, p-value = 0.008178" str(ETS_RESIDUALS) #chr [1:5, 1] "Q* = 161.83, df = 18.8, p-value &lt; 2.2e-16" "Q* = 125.46, df = 18.8, p-value &lt; 2.2e-16" "Q* = 263.65, df = 18.8, p-value &lt; 2.2e-16" ... class(ETS_RESIDUALS) #[1] "matrix" </code></pre> <p>Now, my intention is to split these lines of text with grep() or other functions into a data.frame (with four columns: TEST, Q*, df, p-value), like in the example below:</p> <pre><code>TEST Q* df p-value -------------------------------------------- TEST_1 161.83 18.8 2.2e-16 TEST_2 125.46 18.8 2.2e-16 TEST_3 263.65 18.8 2.2e-16 TEST_4 81.503 18.8 8.763e-10 TEST_5 36.616 18.8 0.008178 </code></pre> <p>I tried with these lines of code but the results are not good.</p> <pre><code>ETS_RESIDUALS %&gt;% stringr::str_replace_all("(\\S+) =", "`\\1` =") %&gt;% paste0("data.frame(", ., ", check.names = FALSE)") </code></pre> <p>Can anyone help me with this code?</p>
0non-cybersec
Stackexchange
MySQL Access Denied, tried a few things, pulling hair. <p>I'm trying to get to know Django (my first attempts at a framework, or any backend work for that matter), and I'm seriously stumped by MySQL, and SQL in general.</p> <p>I'm trying to create a new database and I get:</p> <pre><code>ERROR 1044 (42000): Access denied for user ''@'localhost' to database 'dbname' </code></pre> <p>So I've tried the advice here: <a href="https://stackoverflow.com/questions/8838777/error-1044-42000-access-denied-for-user-localhost-to-database-db">Access denied for user localhost</a></p> <p>Which might work, but using: <code>mysql -uroot -p</code> I can't seem to remember my password. I don't recall setting one, but leaving it blank doesn't work either.</p> <p>Also, I have to run mysql using:</p> <pre><code>/Applications/MAMP/Library/bin/mysql </code></pre> <p>Because:</p> <pre><code>mysql </code></pre> <p>results in:</p> <pre><code>ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2) </code></pre> <p>I'm guessing that's related to the error above, right?</p> <p>I'm in a right mess here, SQL and the command line intimidate the heck out of me and it seems so easy to break stuff. If anyone could offer some pointers that would be great. </p>
0non-cybersec
Stackexchange
how can I list-initialize my own class?. <p>I want my own class can be list-initialized like vector: </p> <pre><code>myClass a = {1, 2, 3}; </code></pre> <p>How can I do that using C++11 capabilities?</p>
0non-cybersec
Stackexchange
I mean, I've seen worse. But for a bank...?.
0non-cybersec
Reddit
Disable Months On Month/DatePicker. <p>Please look at my <a href="http://jsfiddle.net/RWY2X/31/" rel="noreferrer">fiddle</a>.</p> <p>I have a monthpicker which only allows users to select up to a year in advance, but what I want is for past months to be disabled, and also for any months more than a year in advance to be disabled, and I can't figure out how to get this to work.</p> <p>Example scenario: the current month is October, so for year 2015 the months Jan to Sept should be disabled, and for year 2016 the months Nov to Dec should be disabled.</p> <p>I have tried using minDate: "0" and maxDate: "1y" but they don't work.</p> <p><strong>HTML</strong></p> <pre><code>&lt;div class="input-group date" style="width: 200px"&gt; &lt;input type="text" id="example1" class="form-control" style="cursor: pointer"/&gt; &lt;span class="input-group-addon"&gt; &lt;i class="glyphicon glyphicon-calendar"&gt;&lt;/i&gt; &lt;/span&gt; &lt;/div&gt; </code></pre> <p><strong>JQuery</strong></p> <pre><code>$('#example1').datepicker ({ format: "MM yyyy", minViewMode: 1, autoclose: true, startDate: new Date(new Date().getFullYear(), '0', '01'), endDate: new Date(new Date().getFullYear()+1, '11', '31') }); </code></pre>
0non-cybersec
Stackexchange
If $X$ is an affine variety, is $X$ one component of a complete intersection with two?. <p>This is an idle question, but I give the example that motivated me below.</p> <p>Say $X \subseteq {\mathbb A}^n_k$ is irreducible and $k$ is infinite. Then by picking a regular point of $X$ and picking equations from $X$'s ideal that cut out $T_x X$, we get a scheme containing $X$ as a component.</p> <blockquote> <p>If we pick those equations generically, can we ensure that that scheme is a complete intersection with at most one extra component beyond $X$?</p> </blockquote> <p>The example that got me wondering this is where $X = ${$(A,B) : AB = BA$} is the space of pairs of commuting matrices. Then one case of the above construction is $Y = ${$(A,B) : AB-BA$ is diagonal}, which is a reduced complete intersection with two components. I thought this was interesting but now I'm guessing it's the expected behavior.</p>
0non-cybersec
Stackexchange
Finding the rightmost non-zero digit in $770^{3520}$. <p>$770^{3520}$</p> <p>I am trying to find the rightmost non-zero digit of the expression above.</p> <p>I factor the expression as $77^{3520} \cdot 10^{3520}$</p> <p>and don't know what to do next. Please help.</p>
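<p>One way to continue from this factorization (a short worked sketch): since $77$ is coprime to $10$, $77^{3520}$ does not end in $0$, so the rightmost non-zero digit of $770^{3520}$ is simply the last digit of $77^{3520}$, i.e. of $7^{3520}$. The last digits of $7^n$ cycle as $7, 9, 3, 1$ with period $4$, and $3520 \equiv 0 \pmod 4$, so that last digit is $1$: the rightmost non-zero digit is $1$.</p>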
0non-cybersec
Stackexchange
Raccoon eating grapes using his stupid little hands. Very satisfying to watch..
0non-cybersec
Reddit
C++ return value without return statement. <p>When I ran this program:</p> <pre><code>#include &lt;iostream&gt; int sqr(int&amp;); int main() { int a=5; std::cout&lt;&lt;"Square of (5) is: "&lt;&lt; sqr(a) &lt;&lt;std::endl; std::cout&lt;&lt;"After pass, (a) is: "&lt;&lt; a &lt;&lt;std::endl; return 0; } int sqr(int &amp;x) { x= x*x; } </code></pre> <p>I got the following output:</p> <pre><code>Square of (5) is: 2280716 After pass, (a) is: 25 </code></pre> <p>What is <code>2280716</code>? And, how can I get a value returned to <code>sqr(a)</code> while there is no <code>return</code> statement in the function <code>int sqr(int &amp;x)</code>?</p> <p>Thanks.</p>
0non-cybersec
Stackexchange
GCC 4.8 with GNU STL produces bad code for std::string constructor?. <p>So a bit of C++ code:</p> <pre><code>void func( const std::string&amp; theString ) { std::string theString( theString ); theString += " more string"; std::cout &lt;&lt; theString; } </code></pre> <p>which compiles fine with <code>GCC 4.8</code> and <code>VS 2013</code>. From my C++ knowledge, the code is okay with a local variable <code>theString</code> being brought into scope which then hides <code>theString</code> from the function argument. At the point of <code>theString</code> construction, the only <code>theString</code> in scope is the function argument which is passed to the <code>std::string</code> constructor. The constructed <code>std::string</code> is then named <code>theString</code> which comes into scope and is <code>theString</code> used later in the code. Phew!</p> <p>However, <code>GCC</code> seems to act like <code>theString</code> passed to the <code>std::string</code> constructor is the local <code>theString</code> (which hasn't been constructed yet) causing the compiled program to crash. With VS 2013 the code compiles and runs fine.</p> <p>So,</p> <ol> <li>Is my code correct? Or am I doing something outside spec which means the GCC behaviour is undefined.</li> <li>Is this a bug in GCC?</li> </ol>
0non-cybersec
Stackexchange
Root user cannot read /root. <p>I added a user called 'kma' to root group. Then I changed permission of /root/ to 770. Which should give the user read, write and execute access since I added him to the root group. But it still gives permission denied error. What am I doing wrong here?</p> <p>Commands I executed:</p> <pre><code>sudo adduser kma root sudo chmod 770 /root/ cd /root/ &lt;------- Gives permission error </code></pre>
0non-cybersec
Stackexchange
How should I talk to women?. I'm a fairly good conversationalist and flirt well too but I can't seem to translate that into action. I understand that women are people too but I just don't have the confidence to make a move. I can't dance worth a damn and I'm very introverted though I've worked through that to talk to people I don't know. I'm also a huge nerd. So, how do I move beyond just talking? Edit: I'm not a bad looking dude or overweight.
0non-cybersec
Reddit
How can I format my Google calendar like the &quot;Days of the Year&quot; calendar?. <p>Is there any way to format my calendar entries like the "Days of the Year" calendar does? (Shown here, the "40" in the top corner.)</p> <p><img src="https://i.stack.imgur.com/GhY9p.png" alt="enter image description here"></p> <p>I am adding events to the calendar using the Calendar API using <code>myCalendar.createAllDayEvent('Title', myDate)</code>.</p> <p>I can't find any information in the API documentation that seems to relate to this.</p>
0non-cybersec
Stackexchange
ar X iv :1 31 0. 45 39 v1 [ q- fi n. T R ] 1 6 O ct 2 01 3 Modeling the coupled return-spread high frequency dynamics of large tick assets Gianbiagio Curato Scuola Normale Superiore di Pisa, Italy Fabrizio Lillo Scuola Normale Superiore di Pisa, Dipartimento di Fisica e Chimica, Università di Palermo, Italy October 17, 2018 Abstract Large tick assets, i.e. assets where one tick movement is a significant fraction of the price and bid-ask spread is almost always equal to one tick, display a dynamics in which price changes and spread are strongly coupled. We introduce a Markov-switching modeling approach for price change, where the latent Markov process is the transition between spreads. We then use a finite Markov mixture of logit regressions on past squared returns to describe the dependence of the probability of price changes. The model can thus be seen as a Double Chain Markov Model. We show that the model describes the shape of return distribution at different time aggregations, volatility clustering, and the anomalous decrease of kurtosis of returns. We calibrate our models on Nasdaq stocks and we show that this model reproduces remarkably well the statistical properties of real data. Keywords: Large tick assets, bid-ask spread dynamics, returns-spread coupling, Double chain Markov model, Markov chain Montecarlo. 1 http://arxiv.org/abs/1310.4539v1 1 Introduction In financial markets, the price of an order cannot assume arbitrary values but it can be placed on a grid of values fixed by the exchange. The tick size is the smallest interval between two prices, i.e. the grid step, and it is measured in the currency of the asset. It is institutionally mandated and sets a limit on how finely prices may be specified. The grid is evenly spaced for a given asset, and the tick size depends on the price. In the recent years there has been a growing interest toward the role of tick size in determining the statistical properties of returns, spread, limit order book, etc. [14, 13, 8, 23, 15, 20, 11, 22, 43]. The absolute tick size is not the best indicator for understanding and describing the high frequency dynamics of prices. Consider, for example, two highly liquid NASDAQ stocks, namely Apple (AAPL) and Microsoft (MSFT). For both stocks the tick size is one cent. However, in the period we investigated in this paper (July and August 2009), the average price of AAPL was 157$ while the average price of MSFT was 24$. Thus a one cent price movement for AAPL corresponds to 0.6 bp, while for MSFT it is 4.2 bp. Therefore we can expect that the high frequency dynamics of AAPL will be significantly different from the one of MSFT. Recent literature has introduced the notion of an effective tick size to account and quantify the different behavior of returns and spread processes of assets for a given value of tick size. Qualitatively we say that an asset has a large tick size when the price is averse to variations of the order of a single tick and when the bid-ask spread is almost always equal to one tick. Conversely an asset is small tick size when the price is only weakly averse to variations of the order of a single tick and the bid-ask spread can assume a wide range of values, e.g. from one to ten or more ticks [20, 22]. Several papers in empirical and theoretical market microstructure have emphasized that large and small tick size assets belong to different “classes” [19, 23, 24]. Order book models designed for small tick assets do not describe correctly the dynamics of large tick assets [24]. 
Moreover the ultra high frequency statistical regularities of prices and of the order book are quite different in the two classes. In this paper we are interested in modeling the dynamics of large tick assets at ultra high frequency and taking expliciteply into account the discreteness of prices. More specifically, we introduce a class of models describing the coupled dynamics of returns and spread for large tick assets in transaction time1. In our models, returns are defined as mid-price changes2 and are measured in units of half tick, which is the minimum amount the mid-price can change. Therefore, these models are defined in a discrete state space [5, 10] and the time evolution is described in discrete time. Our purpose is to model price dynamics in order to reproduce statistical properties of mid-price dynamics at different time scales and stylized facts like volatility clustering. Notice that, rather than considering a non observable efficient price and describing the data as the effect of the round off error due to tick size, we directly model the observable quantities, such as spread and mid-price, by using a time series approach. The motivation of our work comes from two interesting empirical observa- 1Hereafter we define the transaction time as an integer counter of events defined by the execution of a market order. Note that if a market order is executed against several limit orders, our clock advances only by one unit. 2With a little abuse of language we use returns and mid-price changes interchangeably . 2 -2 -1 0 1 2 price change (half ticks) 0 0.2 0.4 0.6 0.8 pr ob ab il it y k even k odd -25 -20 -15 -10 -5 0 5 10 15 20 25 price change (half ticks) 0 0.05 0.1 pr ob ab il it y k even k odd Figure 1: The left panel shows the tick by tick mid-price change distri- bution, r (t,∆t = 1) = pm (t+ 1) − pm (t), while the right panel shows the mid-price change distribution aggregated at 128 transactions, r (t,∆t = 128) = pm (t+ 128)− pm (t). The investigated stock is Microsoft. 10 0 10 1 10 2 10 3 lag (number of trades) 10 -2 A C F o f sq ua re d m id -p ri ce c ha ng es 0 20 40 60 -0.03 -0.02 -0.01 0 0.01 0.02 0.03 Figure 2: Sample autocorrelation function of tick by tick squared mid-price changes for Microsoft. The plot is in log-log scale and the red dashed line is a best fit of the autocorrelation function in the considered region. The estimated exponent is γ = 0.301. The inset shows the behavior for small values of the lag. tions. Let us consider first the unconditional distribution of mid price change at different time scales. In the left panel of Fig. 1 we show the histogram of mid- price change of MSFT at the finest time scale, i.e. between two transactions. It is clear that most of the times the price does not change, while sometimes it changes by one or two half ticks. When we aggregate the returns on a longer time scale, for example 128 transactions (see right panel of Fig. 1), a non triv- ial distribution emerges, namely a distribution where odd values of returns are systematically less populated than even values. It is important to notice that if we assume that returns of individual trades are independent and identically distributed3, we would never be able to reproduce an histogram like the one shown in the right panel of Fig. 1. In fact in this case the histogram would be, as expected, bell shaped. The second observation concerns the properties of volatility of tick by tick returns. 
Figure 2 shows the autocorrelation function of squared returns of 3For example, if we randomize our sample of tick by tick mid-price changes 3 MSFT in transaction time. Square returns can be seen here as a simple proxy of volatility. First of all notice that the autocorrelation is negative for small lags. It then reaches a maximum around 10 trades and then it decays very slowly to zero. We observe that between 10 and more than 500 trades, the decay of the autocorrelation function is well described by a power law function, corr ( r2 (t) , r2 (t+ τ) ) ∼ τ−γ , and the estimated exponent γ ≃ 0.3 is similar to the one observed at lower frequency and by sampling returns in real time rather than transaction time4. We conclude therefore that very persisitent volatility clustering and possibly long range volatility is observed also at tick by tick level. The purpose of this paper is to develop a discrete time series model that is able to explain and reproduce simultaneously these two empirical observations, namely the change of the distribution of price changes at different time scales and the shape of the volatility autocorrelation. As a modeling approach, we note that the observation of Fig. 1 suggests that the return process can be characterized by different regimes which are defined by some variable, observable or not, in the order-book dynamics. The key intuition behind our modeling approach is that for large tick assets the dynamics of mid-price and of spread are intimately related and that the process of returns is conditioned to the spread process. The conditioning rule describes the connection between the stochastic motion of mid-price and spread on the grid. For large tick assets the spread typically assumes only few values. For ex- ample, for MSFT spread size is observed to be 1 or 2 ticks almost always. The discreteness of mid-price dynamics can be connected to the spread dynamics if we observe that, when the spread is constant in time, returns can assume only even values. Instead when the spread changes, returns can display only odd values. Figure 3 shows the mechanical relation between the two processes. The dynamics of returns is thus linked to dynamics of spread transitions. This rela- tion leads us to design models in which the return process depends on the transi- tion between two subsequent spread states, distinguishing the case in which the spread remains constant and the case when it changes. From a methodological point of view we obtain this by defining a variable of state that describes the spread transition. We use a Hidden Markov, or Markov Switching, Model [2, 9] for returns, in which the spread transition is described by a Markov chain that defines different regimes for the return process. The Markov Switching approach is able to describe the change in shape of the distribution of price change (Fig. 1), but not the persistence of volatility. To this end, we propose a more sophisticated model by allowing the returns process to be an regressive process in which regressors are the past value of squared returns [1, 25, 26, 27]. We show how to calibrate the models on real data and we tested them on the large tick assets MSFT and CSCO, traded at NASDAQ market in the period July-August 2009. We show that the full model reproduces very well the empirical data. The paper is organized as follows. In Section 2 we review the main applica- tions of Markov-switching modeling in the econometrics field. In Section 3 we present our modeling approach. 
In Section 4 we present our data for the MSFT 4It is worth noticing that in general the round-off error severely reduces the correlation properties of a stochastic process, even if the Hurst exponent of a long memory process is pre- served [16]. Therefore the autocorrelation function shown in Fig. 2 is a strong underestimation of the tick by tick volatility clustering of the unobservable efficient price. 4 t t+1 t+1 t+1 tick time 2400 2401 2402 2403 2404 2405 P ri ce ( ce nt s) mid-price(t) mid-price(t+1) Constant spread t t+1 t+1 tick time 2400 2401 2402 2403 2404 2405 Variable spread Figure 3: Coupling of spread and returns for large tick assets. On the left we show the three possible transitions when s(t) = s(t + 1) = 1. In this case the possible price changes are r(t) ∈ (−2, 0, 2) (measured in 1/2 tick size). On the right we show the two possible transitions when s(t) = 1 and s(t + 1) = 2. In this case the possible values of price changes are r(t) ∈ (−1, 1). stock and we describe the observed stylized facts of price dynamics. In Section 5 we describe the calibration of the models on real data and we discuss how well the different models reproduce the stylized facts. Finally, in Section 6 we draw some conclusions and we discuss future works. 2 Review of Markov switching models in econo- metrics Markov switching models (MS models) have become increasingly popular in econometric studies of industrial production, interest rates, stock prices and unemployment rates [9, 30]. They are also known as hidden Markov models (HMM) [2, 33, 34], used for example in speech recognition and DNA analysis. In these models the distribution that generates an observation depends on the states of an underlying and unobserved Markov process. They are flexible gen- eral purpose models for univariate and multivariate time series, especially for discrete-valued series, including categorical variables and series of counts [31]. Markov switching models belong to a general class of mixture distributions [30]. Econometricians’ initial interest in this class of distributions was based on their ability to flexibly approximate general classes of density functions and gener- ate a wider range of values for the skewness and kurtosis than is obtainable by using a single distribution. Along these lines Granger and Orr [37] and Clark [38] considered time-independent mixtures of normal distributions as a means of modeling non-normally distributed data. These models, however, did not cap- ture the time dependence in the conditional variance found in many economic time series, as evidenced by the vast literature on ARCH models that started 5 with Engle [9]. By allowing the mixing probabilities to display time dependence, Markov switching models can be seen as a natural generalization of the origi- nal time-independent mixture of normals model. Timmermann [32] has shown that the mixing property enables them to generate a wide range of coefficients of skewness, kurtosis and serial correlation even when based on a very small number of underlying states. Regime switches in economic time series can be parsimoniously represented by Markov switching models by letting the mean, variance, and possibly the dynamics of the series depend on the realization of a finite number of discrete states. The basic MS model is: y (t) = µS(t) + σS(t)ǫ (t) , (1) where S (t) = 1, 2, · · · , k denotes the unobserved state indicator which follows an ergodic k-state Markov process and ǫ (t) is a zero-mean random variable which is i.i.d. over time [39]. 
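As a rough illustration of this basic MS specification (Eq. 1), the following minimal Python sketch simulates y(t) = mu_{S(t)} + sigma_{S(t)} eps(t) for a k-state chain; Gaussian innovations and all function and parameter names are our own illustrative assumptions, not code from the cited references.

import numpy as np

def simulate_basic_ms(T, mu, sigma, P, rng=None):
    # mu, sigma: arrays of length k (state-dependent mean and volatility)
    # P: k x k row-stochastic transition matrix of the state indicator S(t)
    rng = np.random.default_rng() if rng is None else rng
    k = len(mu)
    S = np.empty(T, dtype=int)
    S[0] = rng.integers(k)                       # arbitrary initial state
    for t in range(1, T):
        S[t] = rng.choice(k, p=P[S[t - 1]])      # Markov transition of the hidden state
    eps = rng.standard_normal(T)                 # i.i.d. innovations
    return mu[S] + sigma[S] * eps, S

# Example: two states with the same mean but different volatility.
y, S = simulate_basic_ms(10_000, np.array([0.0, 0.0]), np.array([0.5, 2.0]),
                         np.array([[0.99, 0.01], [0.02, 0.98]]))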
Another relevant model is the Markov switching autoregressive model (MSAR(q)) of order q that allows for state-independent autoregressive dynamics: y (t) = µS(t) + q ∑ j=1 φj ( y (t− j)− µS(t−j) ) + σS(t)ǫ (t) . (2) It became popular in econometrics for analyzing economic time series such as the GDP data through the work of Hamilton [40]. In its most general form the MSAR model allows that the autoregressive coefficients are also affected by S (t) [32]: y (t) = µS(t) + q ∑ j=1 φj,S(t−j) ( y (t− j)− µS(t−j) ) + σS(t)ǫ (t) . (3) There is a key difference with respect to ARCH models, which is another type of time-dependent mixture processes. While Markov switching models mix a finite number of states with different mean and volatility parameters based on an exogenous state process, ARCH models mix distributions with volatility parameters drawn from an infinite set of states driven by lagged innovations to the series. We can make use of the above models when we want to model a continuous state random variable y (t). In our case we want a model for a discrete variable, i.e. the observed integer price differences, in a microstructure market environ- ment. Therefore the models for continuous variables presented above cannot be used in our problem. We propose to model the coupled dynamics of spreads and price differences in the setting defined by the Double Chain Markov Models (DCMM) [25, 26]. This is the natural extension of HMM models in order to allow the hidden Markov process to select one of a finite number of Markov chains to drive the observed process at each time point. If a time series can be decomposed into a finite mixture of Markov chains, then the DCMM can be applied to describe the switching process between these chains. In turn DCMM belongs to the family of Markov chains in random environments [28, 29]. In discrete time, DCCM describes the joint dynamics of two random vari- ables: x (t), whose state at time t is unknown for an observer external to the 6 process, and y (t), which is observable. The model is described by the following elements: • A set of hidden states, S (x) = {1, · · · , Nx}. • A set of possible outputs, S (y) = {1, · · · , Ny}. • The probability distribution of the first hidden state, π0 = {π0,1, · · · , π0,Nx}. • A transition matrix between hidden states, M = {mij} , i, j ∈ S (x). • A set of transition matrices between successive outputs of y (t) given a particular state of x (t), Vx(t)=k,ij , i, j ∈ S (y), k ∈ S (x). There are three different estimation problems: the estimation of the probability of a sequence of observations y(0), · · · , y(T ) given a model; the estimation of parameters π0, M, Vk given a sequence of observations; the estimation of the optimal sequence of hidden states given a model and a sequence of outputs. Our data, i.e. limit order book data, instead allow us to see directly the process that defines the hidden Markov process, i.e. the spread process. In this way we can estimate directly the matrices M and Vk by a simple maximum likelihood approach without using the Expectation Maximization (EM) algo- rithm and the Viterbi algorithm, that are usually used when the hidden process is not observable [25, 26]. We use the stationary probability distribution for the process x (t) as initial probability distribution π0 in order to perform our calculations and simulations. We use the DCMM model as a mathematical framework for spread and price differences processes without treating spread process as an hidden process. 
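To make the structure of the DCMM concrete, a minimal simulation sketch is given below: the hidden chain x(t) selects which output transition matrix drives the observed chain y(t). The convention that y(t) is driven by the matrix of the current hidden state, and all names, are our illustrative assumptions rather than code from the references.

import numpy as np

def simulate_dcmm(T, pi0, M, V, y0=0, rng=None):
    # pi0: initial distribution of the hidden state x(t)
    # M:   transition matrix of the hidden chain x(t)
    # V:   list of output transition matrices, one per hidden state
    rng = np.random.default_rng() if rng is None else rng
    nx, ny = len(pi0), V[0].shape[0]
    x = np.empty(T, dtype=int)
    y = np.empty(T, dtype=int)
    x[0], y[0] = rng.choice(nx, p=pi0), y0
    for t in range(1, T):
        x[t] = rng.choice(nx, p=M[x[t - 1]])         # hidden transition
        y[t] = rng.choice(ny, p=V[x[t]][y[t - 1]])   # output transition selected by x(t)
    return x, y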
Among the few financial applications of the DCMM model we mention Ref.s [35, 36]. In the former paper, authors studied the credit rating dynamics of a portfolio of financial companies, where the unobserved hidden process is the state of the broader economy. In Eisenkopf [36] instead the author considered a problem in which a credit rating process is influenced by unobserved hidden risk situations. To the best of our knowledge our paper is the first application of DCMM to the field of market microstructure and high frequency financial data. 3 Models for the coupled dynamics of spread and returns In this section we present the models describing the process of returns r (t,∆t) = pm (t+∆t)−pm (t) at time scale ∆t, where we define the mid-price as pm (t) = (pASK (t) + pBID (t)) /2 and we choose to measure r in units of half tick size. In our models, return process follows different time series processes conditioned on the dynamics of transitions of the spread s (t) = pASK (t)− pBID (t). Hereafter we will use the notation r (t) = r (t,∆t = 1). The spread variable s is measured in units of 1 tick size, so we have r (t,∆t) ∈ Z and s (t) ∈ N. The time variable t ∈ N is the transaction time. 7 3.1 Markov-Switching models Spread process. It is well known that spread process is autocorrelated in time [42, 4, 7, 18]. We model the spread s (t) as a stationary Markov(1) [41] process5: P (s (t) = j|s (t− 1) = i, s (t− 2) = k, · · · ) = P (s (t) = j|s (t− 1) = i) = pij , (4) where i, j ∈ N are spread values. As mentioned, we limit the set of spread values to s ∈ {1, 2}, because we want to describe the case of large tick assets. We also assume that the process s (t) is not affected by the return process r (t). The spread process is described by the transition matrix: B = ( p11 p12 p21 p22 ) where the normalization is given by ∑2 j=1 pij = 1. The vector of stationary probabilities is the eigenvector π of B′ relative to eigenvalue 1, which is B′π = π, π = ( (1− p22) / (2− p11 − p22) (1− p11) / (2− p11 − p22) ) (5) where B′ denotes the transpose of the matrix B. This vector represents the unconditional probabilities of s (t), so πk = P (s (t) = k) with k = 1, 2. Starting from the s (t) process, it is useful to define a new stationaryMarkov(1) process x (t) that describes the stochastic dynamics of transitions between states s (t) and s (t+ 1) as x (t) = 1 if s (t+ 1) = 1, s (t) = 1, x (t) = 2 if s (t+ 1) = 2, s (t) = 1, x (t) = 3 if s (t+ 1) = 1, s (t) = 2, x (t) = 4 if s (t+ 1) = 2, s (t) = 2. (6) This process is characterized by a new transition matrix M =     m11 m12 m13 m14 m21 m22 m23 m24 m31 m32 m33 m34 m41 m42 m43 m44     =     p11 p12 0 0 0 0 p21 p22 p11 p12 0 0 0 0 p21 p22     in which the stationary vector is given by M ′λ = λ, λ =     (p21p11) / (1− p11 + p21) p21 (1− p11) / (1− p11 + p21) p21 (1− p11) / (1− p11 + p21) (1− p21) (1− p11) / (1− p11 + p21)     . (7) A limiting case is when the spread process s (t) is described by a Bernoulli process. In this case we set P (s (t) = 1) = p. Although s (t) is an i.i.d. process, 5We have tried other specifications of the spread process, such as for example a long memory process, but this does not change significantly our results. 8 the spread transition process xB (t) is a Markov process defined by: MB =     p (1− p) 0 0 0 0 p (1− p) p (1− p) 0 0 0 0 p (1− p)     , λB =     p2 p (1− p) p (1− p) (1− p)2     . 
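The construction above is easy to check numerically. The following sketch (our code, not the authors') builds M from the two spread parameters p11 and p21 and recovers the stationary vector lambda of Eq. (7) both by an eigenvector computation and from the closed form.

import numpy as np

def spread_transition_chain(p11, p21):
    # Transition matrix of the spread-transition process x(t) of Eq. (6).
    p12, p22 = 1.0 - p11, 1.0 - p21
    M = np.array([[p11, p12, 0.0, 0.0],
                  [0.0, 0.0, p21, p22],
                  [p11, p12, 0.0, 0.0],
                  [0.0, 0.0, p21, p22]])
    # Stationary vector: left eigenvector of M for eigenvalue 1, normalised.
    w, v = np.linalg.eig(M.T)
    lam = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    lam /= lam.sum()
    # Closed form of Eq. (7), for comparison.
    z = 1.0 - p11 + p21
    lam_closed = np.array([p11 * p21, p21 * (1.0 - p11),
                           p21 * (1.0 - p11), (1.0 - p21) * (1.0 - p11)]) / z
    return M, lam, lam_closed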
In the general case, the process x (t) is defined by two parameters p11, p21 (which are reduced to p in Bernoulli case) that we can estimate from spread data. Mid-price process. We can now define a Markov-switching process for returns r (t) which is conditioned to the process x (t), i.e. to the spread transitions. Returns are measured in half ticks and we limit the set of possible values to r (t) ∈ {−2,−1, 0, 1, 2}, as observed in our sample. The discreteness of the price grid imposes the mechanical constraints x (t) = 1 −→ r (t) ∈ {−2, 0, 2} , x (t) = 2 −→ r (t) ∈ {−1, 1} , x (t) = 3 −→ r (t) ∈ {−1, 1} , x (t) = 4 −→ r (t) ∈ {−2, 0, 2} . (8) The mapping between transitions x (t) and allowed values of the mid-price changes r (t) has been done by using the cases shown in Fig. 3. This assumption is grounded on the empirical observation that mid-price changes |r (t)| > 2 are extremely rare for large tick assets (see Section 4). In the simplest model, we assume that the probability distribution of re- turns between two transactions depends only on the spread transition between them. We can therefore define the following conditional probabilities defining the process of returns: P (r (t) = ±2|x (t) = 1; θ) = θ1, P (r (t) = 0|x (t) = 1; θ) = 1− 2θ1, P (r (t) = ±1|x (t) = 2; θ) = 1/2, P (r (t) = ±1|x (t) = 3; θ) = 1/2, P (r (t) = ±2|x (t) = 4; θ) = θ4, P (r (t) = 0|x (t) = 4; θ) = 1− 2θ4. (9) Notice that we have assumed symmetric distributions for returns between pos- itive and negative values and θ = (θ1, θ4) ′ is the parameter vector that we can estimate from data. The parameter θ1 (θ4) describes the probability that mid-price changes when the spread remains constant at one (two) ticks. The coupled model of spread and return described here will be termed the MS model. When we consider the special case of spread described by a Bernoulli process we will refer to it as the MSB model. Properties of price returns. Here we derive the moments and the autocorrelation functions corr (r (t) , r (t+ τ)) ≡ ζ (τ) and corr ( r2 (t) , r2 (t+ τ) ) ≡ ρ (τ) under the MS model. The quantity ζ (τ) is useful to study the statistical efficency of price, while ρ (τ) describes volatility clustering in transaction time. 9 We compute first the vectors of conditional first, second and fourth moments E [r (t) |x (t) = k] = m1,k, E [ r2 (t) |x (t) = k ] = m2,k, E [ r4 (t) |x (t) = k ] = m4,k. (10) where mj,k indicates the k−th component of the vector mj . We have m1 = 0, m2 = (8θ1, 1, 1, 8θ4) ′ and m4 = (32θ1, 1, 1, 32θ4) ′ . Then we compute uncondi- tional moments by using the stationary vector λ as E [r (t)] = 4 ∑ k=1 E [r (t) |x (t) = k]P [x (t) = k] =m′1λ, E [ r2 (t) ] = 4 ∑ k=1 E [ r2 (t) |x (t) = k ] P [x (t) = k] =m′2λ, E [ r4 (t) ] = 4 ∑ k=1 E [ r4 (t) |x (t) = k ] P [x (t) = k] =m′4λ, V ar [r (t)] = m′2λ− (m′1λ) 2 , V ar [ r2 (t) ] = m′4λ− (m′2λ) 2 , (11) In order to compute the linear autocorrelation function ζ(τ) we need to com- pute E [r (t) r (t+ τ)], by using conditional independence of r (t) with respect to x (t). We obtain: E [r (t) r (t+ τ)] = = 4 ∑ i=1 4 ∑ j=1 E [r (t) r (t+ τ) |x (t) = i, x (t+ τ) = j]P [x (t) = i, x (t+ τ) = j] = 4 ∑ i=1 4 ∑ j=1 E [r (t) |x (t) = i]E [r (t+∆t) |x (t+ τ) = j]P [x (t) = i, x (t+ τ) = j] = 4 ∑ i=1 4 ∑ j=1 m1,im1,jλiM τ ij = λ ′ΛM τm1, (12) where we define the matrix Λ = diag (m1,1,m1,2,m1,3,m1,4). The autocorrela- tion function of returns is given by: ζ (τ) = λ ′ΛM τm1 − (m′1λ) 2 m′2λ− (m′1λ) 2 , (13) in our specific case ζ (τ) = 0 because symmetry leads to m1 = 0. 
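For illustration, a minimal sketch (our code) that draws tick-by-tick returns from the constraints of Eq. (8) and the conditional probabilities of Eq. (9), given a path of spread transitions x(t) in {1,2,3,4} and the parameters theta1 and theta4:

import numpy as np

def simulate_ms_returns(x, theta1, theta4, rng=None):
    # r(t) in {-2,-1,0,1,2} (half ticks), drawn conditionally on x(t) as in Eq. (9).
    rng = np.random.default_rng() if rng is None else rng
    r = np.empty(len(x), dtype=int)
    for t, k in enumerate(x):
        if k in (1, 4):      # spread unchanged: only even price changes
            th = theta1 if k == 1 else theta4
            r[t] = rng.choice([-2, 0, 2], p=[th, 1.0 - 2.0 * th, th])
        else:                # spread changes: only odd price changes
            r[t] = rng.choice([-1, 1])
    return r

Driving this with a transition path simulated from the matrix M of the previous sketch should qualitatively reproduce the even/odd structure of the aggregated return distribution discussed above.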
We also compute the autocorrelation function of squared returns ρ(τ) which is equal to ρ (τ) = λ ′ΣM τm2 − (m′2λ) 2 m′4λ− (m′2λ) 2 , (14) where we define the matrix Σ = diag (m2,1,m2,2,m2,3,m2,4). 10 -2 -1 0 1 2 price change (half ticks) 0 0.2 0.4 0.6 0.8 pr ob ab il it y k even k odd -25 -20 -15 -10 -5 0 5 10 15 20 25 price change (half ticks) 0 0.05 0.1 pr ob ab il it y k even k odd Figure 4: Unconditional distributions of mid-price changes for the simulation of MS model calibrated on MSFT. The left panel shows r (t,∆t = 1) = pm (t+ 1)− pm (t), whereas the right panel shows r (t,∆t = 128) = pm (t+ 128)− pm (t). As expected, both correlation functions depends on powers of the transition probability matrix M . For a Markov process, M is diagonalizable and we can write M τ = CM τDC −1, where: M τD =     0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 (p11 − p21)τ     , C =     1 0 1 1 p11 (p11−1) 0 1 p21 (p11−1) 0 1 1 1 0 p21 (p21−1) 1 p21 (p11−1)     . In the limit case in which the spread is described by a Bernoulli process, the matrix MB is not diagonalizable but has all eigenvalues in R, i.e. sp (MB) = (0, 0, 0, 1), and we can compute its Jordan canonical form JB. Thus we can rewrite the lag dependence as M τB = EJ τ BE −1, where: JB =     0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0     , E =     ( p− p2 ) ( 1− p2 ) p2 0 −p2 −p2 p2 0 ( p− p2 ) −p2 p2 p−1 p −p2 −p2 p2 1     . The structure of block diagonal matrix JB implies that J τ B = J 2 B = 0, ∀τ ≥ 2 and that ρ (τ) is a constant function for τ ≥ 2. Discussion. The qualitative comparison of real data and model shows that the MS model is able to reproduce the distribution of returns quite well. This can be seen by comparing Fig.1 with Fig.4. It is worth noting that, at least qualitatively, also the Bernoulli model MSB is able to reproduce the underes- timation of odd values of returns with respect to the even values, as observed in real data. Therefore it is the coupling of spread and return, rather than the memory properties of spread, which is responsible of the behavior of the aggregated return distribution of Fig. 1. It is also possible to show that the model has linearly uncorrelated returns, as observed in real data, at least for lags larger than few transactions. However the model fails to describe the volatility clustering. In fact, we can prove that ρ (τ) is an exponential function, exp(−aτ , with a = − ln(p11 − p21), i.e. the model describes an exponentially decaying volatility clustering. As the 11 data calibration shows (see Section 5 and Figure 5), the predicted behavior of ρ (τ) under the MS model is much smaller than the one observed in real data. Therefore this model is unable to reproduce the volatility clustering as well as any long memory property. This observation motivates us to develop a model that, preserving the structure of the coupling between spread and returns discussed so far, is able to describe non exponential volatility clustering. This model is developed in the next section. 3.2 A double chain Markov model with logit regression The Markov switching model is not able to explain the empirically observed correlation of squared returns shown in Fig. 2. Therefore in the second class of models we consider an autoregressive switching model for returns [17, 30] in order to study correlation of squared returns. The idea is to use logit regressions on past values of variables, i.e. returns and squared returns in order to reduce the number of parameters that one would have with an higher order Markov process. 
The model is thus defined by the following conditional probabilities [6]: P (r (t) |x (t) = k,Ω (t− 1) ; θk) , k ∈ {1, 2, 3, 4} Ω ′ (t− 1) = ( r2 (t− 1) , ..., r2 (t− p) , r (t− 1) , ..., r (t− e) ) = ( Ω ′ r2 ,Ω ′ r ) θ ′ k = ( αk,β ′ k,γ ′ k ) , (15) where we define an informative (p+e)-dimensional vector of regressorsΩ, made of the past e returns and p squared returns. Each parameter vector θk is com- posed by the scalar αk, the p-dimensional vector βk which describes the regres- sion on past values of squared returns, and the e-dimensional vector γk which describes the regression on past returns. In order to handle the discreteness of returns we make use of a logit re- gression. To this end we first convert the returns series in a binary series b (t) ∈ {0, 1}. When the spread remains constant between t and t + 1 (i.e. x(t) = 1 or x(t) = 4), we set r (t) = ±2 −→ b (t) = 1 r (t) = 0 −→ b (t) = 0 (16) while when the spread changes, (i.e. x(t) = 2 or x(t) = 3) we set r (t) = 1 −→ b (t) = 1 r (t) = −1 −→ b (t) = 0 (17) Then by denoting by ηk (t) the conditional probability of having b (t) = 1, the logit regression is P (b (t) |x (t) = k,Ω (t− 1) ; θk) = exp { b (t) log ( ηk (t) 1− ηk (t) ) + log (1− ηk (t)) } ηk (t) = exp ( αk +Ω ′ r2 (t− 1)βk +Ω′r (t− 1)γk ) 1 + exp ( αk +Ω ′ r2 (t− 1)βk +Ω ′ r (t− 1)γk ) (18) 12 and we finally obtain the process for r (t) by: { P (r (t) = ±2|x (t) = 1,Ω (t− 1) ; θ1) = η1 (t) /2, P (r (t) = 0|x (t) = 1,Ω (t− 1) ; θ1) = 1− η1 (t) { P (r (t) = 1|x (t) = 2,Ω (t− 1) ; θ2) = η2 (t) , P (r (t) = −1|x (t) = 2,Ω (t− 1) ; θ2) = 1− η2 (t) , { P (r (t) = 1|x (t) = 3,Ω (t− 1) ; θ3) = η3 (t) , P (r (t) = −1|x (t) = 3,Ω (t− 1) ; θ3) = 1− η3 (t) . { P (r (t) = ±2|x (t) = 4,Ω (t− 1) ; θ4) = η4 (t) /2, P (r (t) = 0|x (t) = 4,Ω (t− 1) ; θ4) = 1− η4 (t) , (19) These equations define the general DCMM(e, p) model. In the rest of the paper we will consider the case e = 0 and for the sake of simplicity we will denote DCMM(p)=DCMM(0, p). In our case the independent latent Markov process is represented by the transition process x (t) and the dependent Markov process is represented by the r (t) processes. The form of stochastic dependence is defined by the logit rules in Eq. (19). For the sake of clarity, here we consider the case p = 1, while its extension to a general value for p is considered in Appendix A. The definition of the process for r (t) ∈ {−2,−1, 0, 1, 2}, and i, j ∈ {1, 2, 3, 4, 5}, in the case of p = 1 (DCMM(1)) is the following: P (r (t) = (3− j) |x (t) = k, r (t− 1) = (3− i) ; θk) = Ak,ij . (20) We have four possible transition matrices Ax(t)=k for k ∈ {1, 2, 3, 4}, determined by the latent process x (t): Ax(t)=1 =       η1 ( r2 (t− 1) = 4 ) /2 0 1− η1 ( r2 = 4 ) 0 η1 ( r2 = 4 ) /2 η1 ( r2 (t− 1) = 1 ) /2 0 1− η1 ( r2 = 1 ) 0 η1 ( r2 = 1 ) /2 η1 ( r2 (t− 1) = 0 ) /2 0 1− η1 ( r2 = 0 ) 0 η1 ( r2 = 0 ) /2 η1 ( r2 (t− 1) = 1 ) /2 0 1− η1 ( r2 = 1 ) 0 η1 ( r2 = 1 ) /2 η1 ( r2 (t− 1) = 4 ) /2 0 1− η1 ( r2 = 4 ) 0 η1 ( r2 = 4 ) /2       Ax(t)=2 =       0 η2 ( r2 (t− 1) = 4 ) 0 1− η2 ( r2 = 4 ) 0 0 η2 ( r2 (t− 1) = 1 ) 0 1− η2 ( r2 = 1 ) 0 0 η2 ( r2 (t− 1) = 0 ) 0 1− η2 ( r2 = 0 ) 0 0 η2 ( r2 (t− 1) = 1 ) 0 1− η2 ( r2 = 1 ) 0 0 η2 ( r2 (t− 1) = 4 ) 0 1− η2 ( r2 = 4 ) 0       where we have specified the temporal dependence in regressors only in the first column. The others two matrices have same definitions: A4 = A1 (η1 → η4) and A3 = A2 (η2 → η3). In this way, assuming that the latent process has reached the stationary distribution defined by Eq. 
7, we can define an overall Markov chain by the transition matrix N that describes the r (t) process: N = 4 ∑ k=1 λkAk. (21) The matrix N is defined by 6 + 4p parameters: p11, p21, αk,β ′ k. 13 The probabilities of the process for r2 (t) ∈ {0, 1, 4}, and i, j ∈ {1, 2, 3}, in the case of p = 1 (DCMM(1)) is P ( r2 (t) = (3− j)2 |x (t) = k, r2 (t− 1) = (3− i)2 ; θk ) = Vx(t),ij . (22) which can be calculated from the knowledge of the matrix A. In particular, we have four possible transition matrices Vx(t)=k for k ∈ {1, 2, 3, 4}, determined by the latent process x (t): Vx(t)=1 =   η1 ( r2 (t− 1) = 4 ) 0 1− η1 ( r2 = 4 ) η1 ( r2 (t− 1) = 1 ) 0 1− η1 ( r2 = 1 ) η1 ( r2 (t− 1) = 0 ) 0 1− η1 ( r2 = 0 )   , Vx(t)=2 =   0 1 0 0 1 0 0 1 0   . We can define an overall Markov process for r2 (t) described by a transition matrix S, assuming that the transition process x (t) has reached the stationary distribution: S = 4 ∑ k=1 λkVk. (23) The matrix S is defined by 4+2p parameters: p11, p21, αk,β ′ k, where k ∈ {1, 4}. The function corr ( r2 (t) , r2 (t+ τ) ) = ρ (τ) for the DCMM(1) process is the correlation of the Markov(1) process defined by S. We solve the eigenvalue equation for S relative to the eigenvalue 1 in order to determine the stationary probability vector ψ: S′ψ = ψ, (24) the entire spectrum is given by sp (S) = (0, 1, e3), where the last eigenvalue is: e3 = − [(η4 (0)− η4 (4)) (1− p11 − p21 + p11p21) + (η1 (0)− η1 (4)) p11p21] p21 − p11 + 1 . (25) If we define the vectors δ, δ2 and ξ, where δi = (3− i)2, δ2,i = (3− i)4 and ξ = δ ⊙ψ , the moments are given by: E [ r2 (t) ] = δ′ψ, E [ r4 (t) ] = δ′2ψ, E [ r2 (t) r2 (t+ τ) ] = ξ′Sτδ. (26) Finally, we have the expression for ρ (τ) in the case p = 1: ρ (τ) = ξ ′Sτδ − ( δ ′ ψ )2 δ ′ 2ψ − ( δ ′ ψ )2 . (27) The generalization of the calculation of ρ (t) to any value of the order p is reported in the appendix A. In order to estimate the parameter vector θ′ = ( θ ′ 1, θ ′ 2, θ ′ 3, θ ′ 4 ) we maximize the partial-loglikelihood, L (θ) = T ∑ t=p+1 log [ 4 ∑ k=1 P (x (t) = k|Ω (t− 1) ; θk)P (b (t) |x (t) = k,Ω (t− 1) ; θk) ] , (28) 14 1 10 lag τ (number of transactions) -0.03 -0.02 -0.01 0 0.01 0.02 0.03 ρ (τ ) Corr(r 2 (t),r 2 (t+τ)) Corr MS B Corr MS(HMM) Corr DCMM(1) Corr DCMM(3) Figure 5: Autocorrelation function of squared returns, ρ (τ). The black circles are the real data of MSFT asset. The red squares are the result of the MSB model, the green diamonds refer to the MS model, the blu up triangles refer to the DCMM(1) model and the pink down triangles refer to DCMM(3) model, all calibrated on the MSFT asset. where T is the length of sample, and we assume that parameters p11 and p21 are known. Since the dynamics of spread transitions is independent from the past informative set, i.e. P (x (t) = k|Ω (t− 1) ; θk) = P (x (t) = k), we have L (θ) = T ∑ t=p+1 log [ 4 ∑ k=1 P (x (t) = k)P (b (t) |x (t) = k,Ω (t− 1) ; θk) ] , (29) In the case of large tick assets, it is λ1 ≈ 1 and we can use the approximation L (θ) ≈ T ∑ t=p+1 log ( P (b (t) |x (t) = 1,Ω (t− 1) ; θ1) ) . (30) For example for MSFT we have λ1 ≈ 0.9. With this approximation we estimate only the vector θ1 and the parameter θ4 of Eq.9, that are enough in order to define matrices Vk. Moreover we can approximate Vx(t)=4 ≈   2θ4 0 1− 2θ4 2θ4 0 1− 2θ4 2θ4 0 1− 2θ4.   In this way we neglect the contribution of regressors Ω (t− 1) (weighted by β4) and make use of the simpler expression in Eq. 9 when x (t) = 4. As before, this approximation holds if the weight of Vx(t)=4 is negligible, i.e. 
λ_4 ≈ 0, i.e. when there is a small number of spread transitions s(t) = 2 → s(t+1) = 2. This is the case for large tick assets, where we have almost always s(t) = 1. For the MSFT asset, for example, we have λ_4 ≃ 0.04.

We have performed the calculation of the autocorrelation ρ(τ) of the squared returns for p = 1, 3 and the result is reported in Fig. 5. We have calibrated the parameters on the MSFT asset (see the next Sections for details). We note that the MS and MSB models underestimate ρ(τ) very strongly. Note that for the MS model, ρ(τ) calibrated on real data is very small but not zero as predicted by the theory. The DCMM(p) model, on the other hand, is able to fit ρ(τ) very well up to lag τ = p. Remarkably, the model also captures very well the negative correlation at very short lags. This observation indicates that a higher-order DCMM(p) model might be able to fit the real data better. In the next Sections we will show that this is indeed the case.

4 Data

We have investigated two stocks, namely Microsoft (MSFT) and Cisco (CSCO), both traded at the NASDAQ market in the period July-August 2009, corresponding to 42 trading days. The data contain time stamps corresponding to order executions, prices, sizes of trading volume and directions of trading. The time resolution is one millisecond. In this article we report mostly the results for the MSFT asset, which are very similar to those for CSCO.

Non-stationarities can be very important when investigating intraday financial data. For this reason, and in order to restrict our empirical analysis to roughly stationary time intervals, we first compute the intensity of trading activity at time t conditional to a specific value k of the mid-price change, i.e. p(t | r(t) = k). As we can see from Figure 6, the unconditional trading intensity p(t) is not stationary during the day [21]. As usual, trading activity is very high at the beginning and at the end of the day. For this reason, we discard transaction data in the first and last six minutes of the trading day. Moreover, the figure shows that the relative frequencies of the three values of returns change during the day, except for returns larger than two ticks, which are very rare throughout the day. Most importantly, in the first part of the day one tick or two tick returns are more frequent than zero returns, while after approximately 10:30 the opposite is true. For this reason we split our time series into two subsamples. The first sample, corresponding to a period of high trading intensity, covers the time sets t ∈ (9:36, 10:30) ∪ (15:45, 15:54), where time is measured in hours. The second sample, corresponding to low trading intensity, covers the time set t ∈ [10:30, 15:45]. Table 1 reports summary statistics of the two subsamples.

asset            activity   # trades   mean (ticks/2)    σ (ticks/2)   ex. kurt   π̂1
MSFT (42 days)   high       184,542    -2.82 * 10^-4     0.652         5.13       0.92
                 low        348,253     8.96 * 10^-4     0.514         9.89       0.95
CSCO (42 days)   high       145,084    -1.32 * 10^-3     0.673         4.73       0.92
                 low        275,879     1.44 * 10^-3     0.551         8.46       0.95

Table 1: Summary statistics for the assets MSFT and CSCO in the two subsamples of high and low trading activity. σ is the standard deviation and ex. kurt the excess kurtosis of tick-by-tick returns, and π̂1 is the fraction of time the spread is equal to one tick.
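As an illustration, the construction of the two subsamples and the statistics reported in Table 1 can be obtained along the following lines. This is only a minimal sketch, not the code used for the analysis (which was written in R, see the Acknowledgements); the trades table and its column names 'time', 'mid' and 'spread' are hypothetical.

```python
import numpy as np
import pandas as pd
from scipy.stats import kurtosis

def split_by_activity(trades: pd.DataFrame) -> dict:
    """Split trades into the high and low activity windows used in the text."""
    t = trades["time"]                                    # time of day, in hours
    # discard the first and last six minutes of the trading day (9:30-16:00)
    trades = trades[(t >= 9.6) & (t <= 15.9)]
    t = trades["time"]
    high = ((t > 9.6) & (t < 10.5)) | ((t > 15.75) & (t < 15.9))
    low = (t >= 10.5) & (t <= 15.75)
    return {"high": trades[high], "low": trades[low]}

def subsample_stats(trades: pd.DataFrame, tick: float = 0.01) -> pd.Series:
    """Summary statistics of tick-by-tick mid-price returns, as in Table 1."""
    r = trades["mid"].diff().dropna() / (tick / 2.0)      # returns in units of half ticks
    return pd.Series({
        "# trades": len(trades),
        "mean (ticks/2)": r.mean(),
        "sigma (ticks/2)": r.std(),
        "ex. kurt": kurtosis(r, fisher=True),             # excess kurtosis
        "pi_1": (trades["spread"] == 1).mean(),           # fraction of time with a one tick spread
    })

# usage on a hypothetical DataFrame msft_trades covering the 42 trading days:
# parts = split_by_activity(msft_trades)
# table1 = pd.DataFrame({name: subsample_stats(part) for name, part in parts.items()})
```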
Figure 6: Unconditional and conditional probability distributions of the time of the day at which a transaction occurs, p(t), p(t | r(t) = 0), p(t | |r(t)| = 1), p(t | |r(t)| = 2) and p(t | |r(t)| > 2), together with the time limits used to define the subsamples. We bin data into 6 minute intervals.

We then analyze the empirical autocorrelation function of squared returns corr(r²(t), r²(t+τ)) = ρ(τ) for these two series. As we can see from Fig. 7, for τ > 5 both time series display a significant positive and slowly decaying autocorrelation, which is a quantitative manifestation of volatility clustering. The series corresponding to low trading activity displays smaller, yet very persistent, volatility clustering.

5 Estimation of the models and comparison with real data

We have estimated the models described in Secs. 3.1 and 3.2 and we have used Monte Carlo simulations to generate artificial time series calibrated on real data. The properties of these time series have been compared with those of real data. More specifically, we have considered three models: (i) the MSB model, where the spread is described by a Bernoulli process and there are no logit regressors; (ii) the MS model, where the spread is a Markov(1) process and there are no logit regressors; (iii) the DCMM(p) model, where the spread is a Markov(1) process and the set of logit regressors includes only the past p values of squared returns. Notice therefore that in this last model we set e = 0. Finally, we have estimated the models separately for the high and low activity regimes.

Figure 7: Sample autocorrelation function of squared returns, ρ(τ), for MSFT as a function of the lag τ (number of transactions). Black circles refer to the high trading activity series and red squares to the low trading activity series. The dashed lines indicate 2σ confidence intervals under the hypothesis of an i.i.d. time series.

activity   π̂1             p̂11            p̂21            θ̂1             θ̂4
high       9.17 * 10^-1    9.53 * 10^-1    5.22 * 10^-1    4.81 * 10^-2    1.51 * 10^-3
low        9.52 * 10^-1    9.72 * 10^-1    5.50 * 10^-1    2.85 * 10^-2    2.65 * 10^-4

Table 2: Estimated parameters for the MSFT asset.

5.1 Estimation of the models

From spread and return data we computed the estimators π̂1, p̂11, p̂21, θ̂1, θ̂4 of the parameters defined in Sec. 3.1. They are given by

\hat{\pi}_1 = \frac{n_1}{N_s} , \qquad \hat{p}_{ij} = \frac{n_{ij}}{\sum_{j=1}^{2} n_{ij}} , \qquad \hat{\theta}_k = \frac{1}{2} \left( 1 - \frac{n_{0k}}{N_k} \right) ,   (31)

where n_1 is the number of times s(t) = 1, N_s is the length of the spread time series, n_{ij} is the number of times the value of spread i is followed by the value j, n_{0k} is the number of times returns are zero in the regime x(t) = k, and N_k is the length of the subseries of returns in the same regime. For the last estimator θ̂_k we count only zero returns because we assumed that the returns are distributed symmetrically on the set (−2, 0, 2). We have checked that this assumption is a good approximation for our data sets. The estimated parameters for the MSFT asset are shown in Table 2.
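The plug-in estimators of Eq. 31 translate directly into code. The following is a sketch under the same conventions as the text (spread s(t) ∈ {1, 2} in ticks, returns r(t) in half ticks, and regime labels x(t) ∈ {1, 2, 3, 4} identifying the four spread transitions); the array names are illustrative.

```python
import numpy as np

def estimate_spread_chain(s: np.ndarray):
    """Plug-in estimators of Eq. 31 for the spread chain: pi_1 and the matrix p_ij."""
    pi1_hat = np.mean(s == 1)                              # fraction of time the spread is one tick
    counts = np.zeros((2, 2))
    for a, b in zip(s[:-1], s[1:]):                        # n_ij: spread i followed by spread j
        counts[a - 1, b - 1] += 1
    p_hat = counts / counts.sum(axis=1, keepdims=True)     # p_hat[i-1, j-1] = n_ij / sum_j n_ij
    return pi1_hat, p_hat

def estimate_theta(r: np.ndarray, x: np.ndarray, k: int) -> float:
    """theta_k = (1 - n_0k / N_k) / 2, counting zero returns within the regime x(t) = k."""
    r_k = r[x == k]
    return 0.5 * (1.0 - np.mean(r_k == 0))

# with the spread, return and regime series s, r, x at hand:
# pi1_hat, p_hat = estimate_spread_chain(s)
# theta1_hat, theta4_hat = estimate_theta(r, x, 1), estimate_theta(r, x, 4)
```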
In order to estimate the DCMM(p) model we need to estimate the vector θ. For both regimes we use the approximated log-likelihood of Eq. 30, because for the low volatility series P(x(t) = 1) ≈ 0.92 and for the high volatility series P(x(t) = 1) ≈ 0.87. Thus we need to estimate only the vector θ_1 = (α_1, β'_1) by a standard generalized linear regression, and we use an iteratively reweighted least squares technique [6]. In this way we generate the return series in the regime x(t) = 1; for the other regimes the generator follows the rules in Eq. 9, i.e. we use the estimator θ̂_4. The order of the model is fixed to p = 50 in order to investigate the impact of past squared returns on the return process. For simplicity we report here only the results for the high activity time series. We find α_1 = −2.921(0.019) and we report the first 25 values of β_{1i} in Table 3. The estimates of β_{1i} are significantly positive for i > 2 up to i = 50, with the exception of i = 36, 37. Moreover, they display a maximum at i = 6. We perform a power law fit on these parameters, β̂_{1i} ∝ i^{−α}, and we find a significant exponent α = 0.626(0.068). We hypothesize that this functional dependence of β_{1i} on i could be connected to the slow decay of the autocorrelation function of squared returns, but we have not investigated this aspect further.

i     β̂_{1i}           st. error       z-value
1     -1.56 * 10^-1    9 * 10^-3       -18.4 ***
2     -4.03 * 10^-2    7.4 * 10^-3     -5.45 ***
3      2.18 * 10^-2    7.0 * 10^-3      3.12 **
4      4.58 * 10^-2    6.9 * 10^-3      6.61 ***
5      7.13 * 10^-2    6.8 * 10^-3     10.5 ***
6      7.59 * 10^-2    6.8 * 10^-3     11.2 ***
7      5.94 * 10^-2    6.9 * 10^-3      8.57 ***
8      6.06 * 10^-2    6.9 * 10^-3      8.76 ***
9      5.94 * 10^-2    6.9 * 10^-3      8.55 ***
10     5.58 * 10^-2    7.0 * 10^-3      8.01 ***
11     5.69 * 10^-2    6.9 * 10^-3      8.20 ***
12     4.14 * 10^-2    7.1 * 10^-3      5.86 ***
13     5.79 * 10^-2    6.9 * 10^-3      8.36 ***
14     5.17 * 10^-2    7.0 * 10^-3      7.40 ***
15     4.18 * 10^-2    7.1 * 10^-3      5.93 ***
16     3.76 * 10^-2    7.1 * 10^-3      5.30 ***
17     4.86 * 10^-2    7.0 * 10^-3      6.92 ***
18     5.11 * 10^-2    7.0 * 10^-3      7.31 ***
19     3.52 * 10^-2    7.1 * 10^-3      4.95 ***
20     2.96 * 10^-2    7.2 * 10^-3      4.14 ***
21     3.92 * 10^-2    7.1 * 10^-3      5.54 ***
22     2.51 * 10^-2    7.2 * 10^-3      3.49 ***
23     2.70 * 10^-2    7.2 * 10^-3      3.76 ***
24     3.50 * 10^-2    7.1 * 10^-3      4.93 ***
25     2.32 * 10^-2    7.2 * 10^-3      3.23 **

Table 3: Estimated parameters β_{1i} for the MSFT asset in the high activity regime. Stars indicate significance levels: *** (0.001), ** (0.01), * (0.05), . (0.1), (1).

5.2 Comparison with real data

After having estimated the three models on the real data, we have generated for each model 25 data samples of length 10^6 observations. In this way we are able to determine an empirical statistical error on the quantities that we measure on these artificial samples.
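The Monte Carlo generation step just described can be sketched as follows. This is a simplified illustration, not the original (R) code: it uses the approximation of Eq. 30, so returns are drawn from the logit model only when the spread stays at one tick and from the constant-probability rule of Eq. 9 with parameter θ_4 when it stays at two ticks; the pre-sample squared returns are initialized to zero, and the numerical values in the usage line are those of Table 2 and Sec. 5.1 for the high activity series.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_dcmm(T, p11, p21, alpha1, beta1, theta4, p=50):
    """Generate T returns from the calibrated model (simplified sketch).

    The spread s(t) in {1, 2} follows the Markov chain (p11, p21); the regime
    x(t) labels the transition (s(t-1), s(t)).  When the spread stays at one
    tick, the probability of a +/-2 move is the logit function of the past p
    squared returns; when it stays at two ticks it is the constant 2*theta4;
    when the spread changes, the return is +/-1 with equal probability.
    """
    s = np.empty(T + 1, dtype=int)
    s[0] = 1
    r = np.zeros(T + p)                                    # first p entries: zero-initialized past
    for t in range(T):
        to_one = p11 if s[t] == 1 else p21                 # P(1->1) = p11, P(2->1) = p21
        s[t + 1] = 1 if rng.random() < to_one else 2
        i = t + p
        if s[t] != s[t + 1]:                               # regimes 2 and 3: half-tick move
            r[i] = rng.choice([-1.0, 1.0])
            continue
        if s[t] == 1:                                      # regime 1: logit probability of a move
            past_sq = r[i - p:i][::-1] ** 2                # r^2(t-1), ..., r^2(t-p)
            eta = 1.0 / (1.0 + np.exp(-(alpha1 + beta1 @ past_sq)))
        else:                                              # regime 4: constant probability
            eta = 2.0 * theta4
        r[i] = rng.choice([-2.0, 0.0, 2.0], p=[eta / 2, 1.0 - eta, eta / 2])
    return r[p:]

# 25 independent samples of 10**6 returns, from which the error bars are computed:
# samples = [simulate_dcmm(10**6, 0.953, 0.522, -2.921, beta1_hat, 1.51e-3) for _ in range(25)]
```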
We have considered three quantities to be compared with real data. Besides the autocorrelation of squared returns, in order to analyze the return distribution at different transaction time scales Δt we have measured the empirical standard deviation and excess kurtosis

\sigma(\Delta t) = \left( E\left[ \left( (p_m(t+\Delta t) - p_m(t)) - E[p_m(t+\Delta t) - p_m(t)] \right)^2 \right] \right)^{1/2}   (32)

\kappa(\Delta t) = \frac{ E\left[ \left( (p_m(t+\Delta t) - p_m(t)) - E[p_m(t+\Delta t) - p_m(t)] \right)^4 \right] }{ \sigma^4(\Delta t) } - 3   (33)

The normalized standard deviation σ_N(Δt) = σ(Δt)/√Δt gives information on the diffusive character of the price process, because σ_N(Δt) is constant for a diffusion. The behavior of κ(Δt) as a function of Δt describes the convergence of the distribution of returns toward the Gaussian distribution [4].

We first investigate the autocorrelation properties of squared returns, ρ(τ). This function is compatible with zero for the MSB and MS models, except for the first lag, where we have measured a significant positive value ρ(τ = 1) ≈ 0.01. The model with regressors, DCMM(p = 50), instead, is able to reproduce remarkably well the values of ρ(τ) up to τ = 50, as we can see from Fig. 8, both for MSFT and for CSCO. The behavior of ρ(τ) around τ ≃ 0 is also very well reproduced by the model. The model underestimates the autocorrelation of the real process for τ > 50, but it generates values that are still significantly positive. We have performed a power law fit on real and DCMM(p = 50) simulated data for lags τ ∈ [6, 50]. For real data we found α = 0.298(0.023) and for simulated data α = 0.300(0.028). Since α < 1, the model is able to reproduce the long memory shape of the correlation ρ(τ) for a number of lags τ equal to the order p of the model.

Figure 8: Empirical autocorrelation functions corr(r²(t), r²(t+τ)) for real (black) and simulated (red) data according to the DCMM(50) model, as a function of the lag τ (number of transactions). The red squares are a power law fit on the real data. The left panel refers to MSFT and the right panel to CSCO.

We then analyzed the distributional properties, i.e. the normalized standard deviation σ_N(Δt) and the excess kurtosis κ(Δt). For each value of Δt and for each model we calculate the average and standard deviation over the 25 simulations and we compare the simulation results with real data (see Fig. 9).

Figure 9: Left: rescaled volatility σ_N(Δt) of aggregated returns on time scale Δt (number of transactions) for the MSB (red line), MS (green line) and DCMM(p = 50) (blue line) models, compared with the same quantity for real data of the high volatility series (black line). Right: excess kurtosis κ(Δt) of aggregated returns on time scale Δt for the same models and data. In both panels error bars are the standard deviation obtained from 25 Monte Carlo simulations of the corresponding models.

The three models are clearly diffusive. Moreover, the MS and DCMM(p = 50) models reproduce the empirical values of σ_N better than the MSB model. The difference between the MS and DCMM(p = 50) models is appreciable only for Δt > 128, i.e. this quantity is almost the same for these two models.
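For completeness, the two aggregated-return statistics of Eqs. 32 and 33 can be computed on a grid of transaction time scales roughly as follows (a sketch; mid is a hypothetical array of mid-prices sampled trade by trade, and the same function is applied to the real series and to each simulated sample).

```python
import numpy as np
from scipy.stats import kurtosis

def aggregated_stats(mid: np.ndarray, scales=(1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024)):
    """Normalized volatility sigma_N and excess kurtosis kappa of Eqs. 32-33."""
    out = {}
    for dt in scales:
        agg = mid[dt:] - mid[:-dt]                    # returns aggregated over dt transactions
        sigma = agg.std()                             # Eq. 32
        kappa = kurtosis(agg, fisher=True)            # Eq. 33 (excess kurtosis)
        out[dt] = (sigma / np.sqrt(dt), kappa)        # sigma_N(dt) is constant for a diffusion
    return out

# stats_real = aggregated_stats(mid_real)
# stats_sim = [aggregated_stats(np.cumsum(r)) for r in samples]   # error bars from the 25 samples
```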
The behavior of the excess kurtosis, instead, differs between the models (see the right panel of Fig. 9). The excess kurtosis for the MSB and MS models is well fit by a power law κ(Δt) ∼ Δt^{−α} with α = 0.901(0.027) (MSB) and α = 0.997(0.052) (MS). These values are consistent with a short range correlation of the volatility. In fact, it can be shown [4] that stochastic volatility models with short range autocorrelated volatility are characterized by α = 1. On the contrary, stochastic volatility models with long range autocorrelated volatility display a slower decay. This is exactly what is observed for real data and for the DCMM(p = 50) model. In both cases we observe an anomalous scaling of the kurtosis, which is more compatible with a stochastic volatility model in which the volatility is a long memory process.

6 Conclusion

We have developed Markov-switching models for describing the coupled dynamics of spread and returns of large tick assets in transaction time. The underlying Markov process is the process of transitions between consecutive spread values. In this way returns are described by different processes depending on whether the spread is constant or not in time. We have shown that this mechanism is needed in order to model the different shapes of the distribution of mid-price changes at different aggregations in the number of trades. In order to model the persistent volatility clustering, we have introduced a Markov model with logit regressors represented by past values of returns and squared returns. We have calibrated the model on the stocks Microsoft and Cisco and, by using Monte Carlo simulations, we have found that the model reproduces the empirical stylized facts remarkably well and in a quantitative way. In particular we are able to reproduce the shape of the distribution at different aggregations, uncorrelated returns, diffusivity, the slowly decaying autocorrelation function of squared returns, and the anomalous decay of the kurtosis on different time scales, i.e. the convergence to the Gaussian.

As a possible extension, we observe that, if we want to reproduce more precisely the autocorrelation function of squared returns up to a certain number of lags, we need to estimate a number of parameters (i.e. the order of the model) at least equal to this value. We find that these parameters scale as a power law function of the parameter index, i.e. of the number of past lags at which the regressors are defined. A possible improvement of the model could therefore be to estimate directly a parametric function with a small number of parameters (for example a power law) describing how these coefficients scale for a given order of the model. Finally, we note that we have developed this model for large tick assets, but this limitation comes only from the choice of a limited set of values for the spread and return variables. In principle the extension to any kind of asset only requires a model in which the spread can take several values, not just 1 or 2, and the returns a broader set of values.

Acknowledgements. We would like to thank Alessandro Profeti and Andrea Carlo Giuseppe Mennucci for the development and support of the computer facility HAF922.sns used to perform the data analysis and Monte Carlo simulations, written in the R language, reported in this article. The authors acknowledge partial support by the grant SNS11LILLB "Price formation, agents heterogeneity, and market efficiency".

A Correlation of squared returns for the DCMM(p) model

The definition of the process for r²(t) ∈ {0, 1, 4} in the case of a general value of the order p of the DCMM model is reported in Eq. 19.
This stochastic process is a stationary Markov process of order p for each value of k [31]:

P\left( r^2(t) = (3-i_{p+1})^2 \,\middle|\, x(t) = k;\; r^2(t-1) = (3-i_p)^2, \dots, r^2(t-p) = (3-i_1)^2 ; \theta_k \right) = V_{x(t); i_1 i_2 \dots i_{p+1}} ,   (34)

where k ∈ {1, 2, 3, 4} and î = (i_1, i_2, ..., i_{p+1}) is a (p+1)-dimensional vector of indices, each of which can assume the values i_l ∈ {1, 2, 3} for l ∈ {1, 2, ..., p+1}. We stress that the index i_{p+1} defines the present value of the squared return r²(t), while the indices i_1, ..., i_p define the past history of the process of squared returns, i.e. i_1 defines the oldest value of r² = r²(t−p). The transition probabilities are given by

V_{x(t)=k \in \{1,4\};\, i_1 i_2 \dots i_{p+1}=1} = \eta_k(i_1, \dots, i_p) = \frac{ \exp\left[ \alpha_k + \sum_{l=1}^{p} \beta_{k,l} (3-i_{p-l+1})^2 \right] }{ 1 + \exp\left[ \alpha_k + \sum_{l=1}^{p} \beta_{k,l} (3-i_{p-l+1})^2 \right] } ,
V_{x(t)=k \in \{1,4\};\, i_1 i_2 \dots i_{p+1}=2} = 0 ,
V_{x(t)=k \in \{1,4\};\, i_1 i_2 \dots i_{p+1}=3} = \frac{1}{ 1 + \exp\left[ \alpha_k + \sum_{l=1}^{p} \beta_{k,l} (3-i_{p-l+1})^2 \right] } ,
V_{x(t)=k \in \{2,3\};\, i_1 i_2 \dots i_{p+1}=1} = 0 ,
V_{x(t)=k \in \{2,3\};\, i_1 i_2 \dots i_{p+1}=2} = 1 ,
V_{x(t)=k \in \{2,3\};\, i_1 i_2 \dots i_{p+1}=3} = 0 ,   (35)

for each value of the p-dimensional vector i = (i_1, ..., i_p). We have 3^{p+1} values for the transition probabilities, with the normalization

\forall k; \ \forall i_1, \dots, i_p : \quad \sum_{i_{p+1}=1}^{3} V_{x(t)=k;\, i_1 i_2 \dots i_{p+1}} = 1 .   (36)

We can recover an equivalent Markov(1) process defined on the vector states Y(t). We define a p-dimensional vector of squared returns

Y(t)[i] = \left( r^2(t-p+1) = (3-i_1)^2, \dots, r^2(t) = (3-i_p)^2 \right) .   (37)

In this case the index i_p defines the present state of the squared return r²(t). The vector process Y(t) is a first order Markov chain on the state space {0, 1, 4}^p, i.e. Y(t) can assume 3^p different values. We define four transition matrices U_{x(t)=k} ∈ M_{3^p,3^p}(R) in order to represent the equivalent Markov process for each possible value of x(t). These matrices describe the transition Y(t) → Y(t+1), which we can also represent as a transition between vectors of indices: (i_1, ..., i_p) → (i_2, ..., i_{p+1}). We have to map the transition probabilities V_{x(t)=k; i_1 i_2 ... i_{p+1}} to the elements of the matrix U_{k;m,n}, where m, n ∈ {1, ..., 3^p}. We can obtain this by the simple rule

(i_1, \dots, i_{p+1}) \to (m, n) ,
m(i_1, \dots, i_p) = \left[ \sum_{l=1}^{p-1} 3^{p-l} (3-i_l) \right] + 4 - i_p ,
n(i_2, \dots, i_{p+1}) = \left[ \sum_{l=1}^{p-1} 3^{p-l} (3-i_{l+1}) \right] + 4 - i_{p+1} ,
U_{x(t)=k;\, m,n} = V_{x(t)=k;\, i_1 i_2 \dots i_{p+1}} .   (38)

These rules are not enough to fill the entire matrix U_{k;m,n}, because the Markov process for Y(t) has many forbidden transitions; the elements of the matrix that are not reached by the above rules are equal to 0. For the case p = 2 the shape of U_k is

U_1 = \begin{pmatrix}
1-\eta_1(0,0) & 0 & \eta_1(0,0) & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1-\eta_1(0,1) & 0 & \eta_1(0,1) & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1-\eta_1(0,4) & 0 & \eta_1(0,4) \\
1-\eta_1(1,0) & 0 & \eta_1(1,0) & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1-\eta_1(1,1) & 0 & \eta_1(1,1) & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1-\eta_1(1,4) & 0 & \eta_1(1,4) \\
1-\eta_1(4,0) & 0 & \eta_1(4,0) & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1-\eta_1(4,1) & 0 & \eta_1(4,1) & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1-\eta_1(4,4) & 0 & \eta_1(4,4)
\end{pmatrix} ,

U_2 = \begin{pmatrix}
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0
\end{pmatrix} ,

U_3 = U_2 , \qquad U_4 = U_1 (\eta_1 \to \eta_4) .

In U_1 we have η_1(i_1, i_2) = η_1( r²(t−2) = (3−i_1)², r²(t−1) = (3−i_2)² ). Finally, we define an overall Markov process for Y(t), defined by 4 + 2p parameters p_{11}, p_{21}, α_k, β'_k, where k ∈ {1, 4}:

S = \sum_{k=1}^{4} \lambda_k U_k ,   (39)

where the λ_k are given by Eq. 7.
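The index mapping of Eq. 38 and the assembly of S in Eq. 39 translate almost literally into code. The following sketch builds the matrices U_k for a generic order p; the helper eta_k, implementing the logit probabilities of Eq. 35, is assumed to be available, and the resulting S can then be used for the stationary distribution and for ρ(τ) as derived below.

```python
import numpy as np
from itertools import product

def state_index(idx):
    """m(i_1, ..., i_p) of Eq. 38: index vector with values in {1,2,3} -> state in 1..3^p."""
    p = len(idx)
    return sum(3 ** (p - l) * (3 - idx[l - 1]) for l in range(1, p)) + 4 - idx[-1]

def build_U(k, p, eta_k):
    """Transition matrix U_k on the 3^p vector states Y(t), following Eqs. 35 and 38.

    eta_k(past_sq) is the logit probability of a +/-2 move given the past squared
    returns (oldest first); it is only used for the regimes k = 1 and k = 4.
    """
    U = np.zeros((3 ** p, 3 ** p))
    for idx in product((1, 2, 3), repeat=p + 1):       # all index vectors (i_1, ..., i_{p+1})
        m = state_index(idx[:-1])                      # current state Y(t)
        n = state_index(idx[1:])                       # next state Y(t+1)
        if k in (1, 4):                                # spread unchanged: return is 0 or +/-2
            eta = eta_k([(3 - i) ** 2 for i in idx[:-1]])
            prob = {1: eta, 2: 0.0, 3: 1.0 - eta}[idx[-1]]
        else:                                          # spread changed: |return| = 1
            prob = 1.0 if idx[-1] == 2 else 0.0
        U[m - 1, n - 1] = prob                         # transitions not reached here stay 0
    return U

# overall chain of Eq. 39, with lam = (lambda_1, ..., lambda_4) from Eq. 7 and a
# dictionary eta = {1: eta_1, 4: eta_4} of fitted logit functions:
# S = sum(lam[k - 1] * build_U(k, p, eta.get(k)) for k in (1, 2, 3, 4))
```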
Now our goal is to calculate the moments of the variable r²(t) from the process defined by Eq. 39. First of all we have to solve the eigenvalue equation for S relative to the eigenvalue 1 in order to determine the stationary probability vector for Y(t):

S' \Psi = \Psi .   (40)

The 3^p-dimensional vector Ψ represents all possible values of the stationary 3^p-variate distribution of the variable Y(t):

P(Y(t)[i_1, \dots, i_p]) = \Psi_{m(i_1, \dots, i_p)} .   (41)

From the 3^p-dimensional vector Ψ we compute the stationary 3-dimensional probability vector ψ' = (ψ_1, ψ_2, ψ_3) for the process r²(t), i.e. for each index i_p ∈ {1, 2, 3}:

\psi_{i_p} = P\left[ r^2(t) = (3-i_p)^2 \right] = \sum_{i_1=1}^{3} \dots \sum_{i_{p-1}=1}^{3} \Psi_{m(i_1, \dots, i_p)} ,   (42)

where i_p defines the present value of r²(t) and we use the mappings defined in Eq. 38. The stationary probability of a fixed value of r² at time t depends on all possible values of r² during the past p−1 lags. In order to determine the present probabilities we have to sum the probabilities corresponding to all possible past trajectories defined by the past p−1 lags.

We compute corr(r²(t), r²(t+τ)) = ρ(τ) by means of the transition probabilities P(r²(t) = (3−a)², r²(t+τ) = (3−b)²), with a, b ∈ {1, 2, 3}, of the p-order Markov process, expressed in terms of the matrix S:

P\left( r^2(t) = (3-a)^2, \; r^2(t+\tau) = (3-b)^2 \right) = P(i(a), j(b)) ,   (43)

where i(a) = (i_1, ..., i_p = a) and j(b) = (j_1, ..., j_p = b) are the p-dimensional vectors of indices describing the past p−1 lags with respect to times t and t+τ. We have to sum the probabilities corresponding to each of the possible values of i_1, ..., i_{p−1} and j_1, ..., j_{p−1}, i.e. over i_l, j_l ∈ {1, 2, 3} for all l ∈ {1, ..., p−1}:

P(i(a), j(b)) = \sum_{(i_1, \dots, i_{p-1}, i_p=a)} \sum_{(j_1, \dots, j_{p-1}, j_p=b)} P\left( Y(t)[i(a)], \, Y(t+\tau)[j(b)] \right)
= \sum_{(i_1, \dots, i_{p-1}, i_p=a)} \sum_{(j_1, \dots, j_{p-1}, j_p=b)} (S^{\tau})_{m(i(a)), n(j(b))} \, \Psi_{m(i(a))} ,   (44)

where we use the mappings defined in Eq. 38 and the matrix power S^τ, because we sum over all possible transitions Y(t) → Y(t+τ) holding fixed the values of the indices i_p = a and j_p = b. At this point we can compute the moments of interest:

E[r^2(t)] = \sum_{i=1}^{3} (3-i)^2 \psi_i = 4\psi_1 + \psi_2 ,
E[r^4(t)] = \sum_{i=1}^{3} (3-i)^4 \psi_i = 16\psi_1 + \psi_2 ,
E[r^2(t)\, r^2(t+\tau)] = \sum_{a=1}^{3} \sum_{b=1}^{3} (3-a)^2 (3-b)^2 \, P(i(a), j(b)) ,   (45)

from which we can determine the function ρ(τ). We have determined the function ρ(τ) for p = 3 making the following approximation for the matrix V_4:

V_{x(t)=4;\, i_1 i_2 \dots i_{p+1}=1} = 2\theta_4 ,
V_{x(t)=4;\, i_1 i_2 \dots i_{p+1}=2} = 0 ,
V_{x(t)=4;\, i_1 i_2 \dots i_{p+1}=3} = 1 - 2\theta_4 .   (46)

This approximation is justified only in the case λ_1 ≈ 1, i.e. it is the same approximation that leads to Eq. 30. In this way we have obtained the results reported in Fig. 5 for the DCMM(p = 3) model.

References

[1] Wallach, H.M., 2004. Conditional Random Fields: An Introduction. University of Pennsylvania CIS Technical Report MS-CIS-04-21.

[2] Rabiner, L.R., 1989. A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. Proceedings of the IEEE 77 (2), 257-286.

[3] Hamilton, J.D., 1994. Time Series Analysis. Princeton University Press, Princeton, New Jersey.

[4] Bouchaud, J.-P., Potters, M., 2003. Theory of Financial Risks: From Statistical Physics to Risk Management. Cambridge University Press, New York.

[5] McKenzie, E., 2000. Discrete Variates Time Series. University of Strathclyde.

[6] Kedem, B., Fokianos, K., 2002. Regression Models for Time Series Analysis. Wiley-Interscience, Hoboken, New Jersey.

[7] Groß-Klußmann, A., Hautsch, N., 2011.
Predicting Bid-Ask Spreads using long memory autoregressive conditional Poisson models. Working Paper, Humboldt-Universität zu Berlin.

[8] Gillemot, L., Farmer, J.D., Lillo, F., 2006. There's more to volatility than volume. Quantitative Finance 6 (5), 371-384.

[9] Hamilton, J.D., 2005. Regime-Switching Models. Palgrave Dictionary of Economics.

[10] Liesenfeld, R., Nolte, I., Pohlmeier, W., 2003. Modeling financial transaction price movements: a dynamic integer count data model. Empirical Economics 30, 795-825.

[11] Al Dayri, K.A., 2011. Market microstructure and modeling of the trading flow. Thèse de doctorat, École Polytechnique.

[12] Clauset, A., Shalizi, C.R., Newman, M.E.J., 2009. Power-law distributions in empirical data. SIAM Review 51 (4), 661-703.

[13] Münnix, M.C., Schäfer, R., Guhr, T., 2010. Impact of the tick-size on financial returns and correlations. Physica A 389 (21), 4828-4843.

[14] Onnela, J.-P., Töyli, J., Kaski, K., 2009. Tick size and stock returns. Physica A 388, 441-454.

[15] La Spada, G., Farmer, J.D., Lillo, F., 2011. Tick size and price diffusion. In: Econophysics of Order-Driven Markets. Springer, 173-187.

[16] La Spada, G., Lillo, F., 2013. The effect of round-off error on long memory processes. Studies in Nonlinear Dynamics and Econometrics (in press).

[17] Ferland, R., Latour, A., Oraichi, D., 2004. Integer-valued GARCH process. Journal of Time Series Analysis 27 (6), 923-942.

[18] Ponzi, A., Lillo, F., Mantegna, R.N., 2009. Market reaction to a bid-ask spread change: A power-law relaxation dynamics. Physical Review E 80 (1), 016112.

[19] Wyart, M., Bouchaud, J.-P., Kockelkoren, J., Potters, M., Vettorazzo, M., 2008. Relation between bid-ask spread, impact and volatility in order-driven markets. Quantitative Finance 8 (1), 41-57.

[20] Robert, C.Y., Rosenbaum, M., 2011. A new approach for the dynamics of ultra high frequency data: the model with uncertainty zones. Journal of Financial Econometrics 9, 344-366.

[21] Andersen, T.G., Bollerslev, T., 1997. Intraday periodicity and volatility persistence in financial markets. Journal of Empirical Finance 4 (2), 115-158.

[22] Dayri, K., Rosenbaum, M., 2012. Large tick assets: implicit spread and optimal tick size. arXiv:1207.6325.

[23] Eisler, Z., Bouchaud, J.-P., Kockelkoren, J., 2012. The price impact of order book events: market orders, limit orders and cancellations. Quantitative Finance 12 (9).

[24] Mike, S., Farmer, J.D., 2008. An empirical behavioral model of liquidity and volatility. Journal of Economic Dynamics and Control 32, 200-234.

[25] Berchtold, A., 1999. The double chain Markov model. Communications in Statistics - Theory and Methods 28 (11), 2569-2589.

[26] Berchtold, A., 2002. High-order extensions of the Double Chain Markov Model. Stochastic Models 18 (2), 193-227.

[27] Berchtold, A., Raftery, A.E., 2002. The mixture transition distribution model for high-order Markov chains and non-Gaussian time series. Statistical Science 17 (3), 328-356.

[28] Cogburn, R., 1984. The ergodic theory of Markov chains in random environments. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 66 (1), 109-128.

[29] Cogburn, R., 1990. On direct convergence and periodicity for transition probabilities of Markov chains in random environments. The Annals of Probability 18 (2), 642-654.

[30] Frühwirth-Schnatter, S., 2006. Finite Mixture and Markov Switching Models. Springer Series in Statistics.

[31] Zucchini, W., MacDonald, I.L., 2009.
Hidden Markov Models for Time Series: An Introduction Using R. Chapman & Hall/CRC, Taylor & Francis Group.

[32] Timmermann, A., 2000. Moments of Markov switching models. Journal of Econometrics 96, 75-111.

[33] Rydén, T., Teräsvirta, T., Åsbrink, S., 1998. Stylized facts of daily return series and the hidden Markov model. Journal of Applied Econometrics 13 (3), 217-244.

[34] Bulla, J., Bulla, I., 2006. Stylized facts of financial time series and hidden semi-Markov models. Computational Statistics & Data Analysis 51, 2192-2209.

[35] Fitzpatrick, M., Marchev, D., 2012. Efficient Bayesian estimation of the multivariate Double Chain Markov Model. Statistics and Computing, Springer.

[36] Eisenkopf, A., 2008. The real nature of credit transitions. Working paper, URL: http://ssrn.com/abstract=968311.

[37] Granger, C.W.J., 1972. Infinite variance and research strategy in time series analysis. Journal of the American Statistical Association 67, 275-285.

[38] Clark, P.C., 1973. A subordinated stochastic process model with finite variance for speculative prices. Econometrica 41, 135-155.

[39] Engel, C., Hamilton, J.D., 1990. Long swings in the dollar: are they in the data and do markets know it? American Economic Review 89, 689-713.

[40] Hamilton, J.D., 1989. A new approach to the economic analysis of nonstationary time series and the business cycle. Econometrica 57, 357-384.

[41] Guilbaud, F., Pham, H., 2011. Optimal high frequency trading with limit and market orders. arXiv:1106.5040v1.

[42] Plerou, V., Gopikrishnan, P., Stanley, H.E., 2005. Quantifying fluctuations in market liquidity: Analysis of the bid-ask spread. Physical Review E 71, 046131.

[43] Gareche, A., Disdier, G., Kockelkoren, J., Bouchaud, J.-P., 2013. A Fokker-Planck description for the queue dynamics of large tick stocks. Preprint, arXiv:1304.6819.